Resolution Enhancement of Hyperion Hyperspectral Data using Ikonos Multispectral Data
2007-09-01
spatial-resolution hyperspectral image to produce a sharpened product. The result is a product that has the spectral properties of the ...multispectral sensors. In this work, we examine the benefits of combining data from high-spatial-resolution, low-spectral-resolution spectral imaging...sensors with data obtained from high-spectral-resolution, low-spatial-resolution spectral imaging sensors.
Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface.
Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun
2016-06-01
Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long sought-after goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging using a spatio-temporal dispersive metasurface and an image reconstruction algorithm. The spatio-temporal dispersive property of the metasurface makes the single-shot, single-sensor imager feasible for super- and high-resolution imaging, since it efficiently converts the detailed spatial information of the probed object into a one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the achievable imaging resolution is related to the distance between the object and the metasurface. When the object is placed in the vicinity of the metasurface, super-resolution imaging can be realized. The proposed imaging methodology provides a unique means to perform real-time data acquisition and high-/super-resolution imaging without employing expensive hardware (e.g. a mechanical scanner or antenna array). We expect that this methodology could enable breakthroughs in microwave, terahertz, optical, and even ultrasound imaging.
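The single-sensor scheme described above can be viewed as a frequency-diverse linear measurement: each frequency samples the scene through a different metasurface radiation pattern, and the scene is recovered computationally from the one-dimensional frequency sweep. A minimal sketch under that assumption (the random measurement matrix, dimensions, and least-squares recovery are illustrative stand-ins for the paper's actual radiation patterns and feature-enhanced reconstruction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frequency-diverse measurement model: at each of n_freq frequencies the
# single far-field sensor sees the n_pix-pixel scene through a different
# (here random) metasurface pattern, stacked as rows of H.
n_pix, n_freq = 16, 32
H = rng.standard_normal((n_freq, n_pix))

scene = np.zeros(n_pix)
scene[3] = 1.0            # a strong point scatterer
scene[11] = 0.5           # and a weaker one

y = H @ scene             # single-shot, single-sensor frequency sweep

# Computational reconstruction: least-squares inversion of the
# measurement operator recovers the scene from the 1-D signal.
recon, *_ = np.linalg.lstsq(H, y, rcond=None)
```

Because the sweep is overdetermined (32 measurements for 16 pixels) and noise-free here, the least-squares solution recovers the scene exactly; a real system would add regularization against noise.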
NASA Technical Reports Server (NTRS)
1987-01-01
The high-resolution imaging spectrometer (HIRIS) is an Earth Observing System (EOS) sensor developed for high spatial and spectral resolution. It can acquire more information in the 0.4 to 2.5 micrometer spectral region than any other sensor yet envisioned. Its capability for critical sampling at high spatial resolution makes it an ideal complement to the MODIS (moderate-resolution imaging spectrometer) and HMMR (high-resolution multifrequency microwave radiometer), lower resolution sensors designed for repetitive coverage. With HIRIS it is possible to observe transient processes in a multistage remote sensing strategy for Earth observations on a global scale. The objectives, science requirements, and current sensor design of the HIRIS are discussed along with the synergism of the sensor with other EOS instruments and data handling and processing requirements.
Single sensor processing to obtain high resolution color component signals
NASA Technical Reports Server (NTRS)
Glenn, William E. (Inventor)
2010-01-01
A method for generating color video signals representative of color images of a scene includes the following steps: focusing light from the scene on an electronic image sensor via a filter having a tri-color filter pattern; producing, from outputs of the sensor, first and second relatively low resolution luminance signals; producing, from outputs of the sensor, a relatively high resolution luminance signal; producing, from a ratio of the relatively high resolution luminance signal to the first relatively low resolution luminance signal, a high band luminance component signal; producing, from outputs of the sensor, relatively low resolution color component signals; and combining each of the relatively low resolution color component signals with the high band luminance component signal to obtain relatively high resolution color component signals.
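The ratio-based combination in this patent abstract can be sketched as follows. This is a simplified illustration, not the patented circuit: array names and the toy data are assumptions, and the low-resolution signals are taken to be already upsampled to the high-resolution grid.

```python
import numpy as np

def sharpen_color(low_res_color, y_low, y_high, eps=1e-6):
    """Combine low-resolution color components with a high-band
    luminance ratio to obtain high-resolution color components.

    low_res_color: (H, W, 3) color components, upsampled to the
                   high-resolution grid.
    y_low:  (H, W) low-resolution luminance, same grid.
    y_high: (H, W) high-resolution luminance from the full sensor.
    """
    # High-band luminance component: ratio of high- to low-resolution
    # luminance (eps avoids division by zero in dark regions).
    high_band = y_high / (y_low + eps)
    # Modulate each color plane by the high-band detail.
    return low_res_color * high_band[..., None]

# Toy example: a flat gray scene with one detail only visible at
# high resolution.
y_low = np.full((4, 4), 0.5)
y_high = y_low.copy()
y_high[2, 2] = 1.0
color = np.full((4, 4, 3), 0.5)
sharp = sharpen_color(color, y_low, y_high)
```

The bright high-resolution detail is transferred into every color plane while flat regions keep their original values.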
Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach
NASA Astrophysics Data System (ADS)
Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai
2006-01-01
With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But because of power and radiation limitations, there will always be a trade-off between spatial and spectral resolution in the images captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than either of the original images and retain sensor-specific image information. Simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.
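The wavelet half of such a fusion scheme can be illustrated with a single-level Haar transform: keep the approximation (spectral) subband of the spectral band and inject the panchromatic image's detail subbands. This is a generic sketch under that assumption, not the paper's exact algorithm (which also incorporates false color mapping and a specific wavelet basis):

```python
import numpy as np

def haar2(img):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def wavelet_fuse(spectral_band, pan):
    """Keep the spectral (approximation) subband of the spectral band
    and inject the pan image's spatial-detail subbands."""
    ll_ms, _, _, _ = haar2(spectral_band)
    _, lh_p, hl_p, hh_p = haar2(pan)
    return ihaar2(ll_ms, lh_p, hl_p, hh_p)
```

Fusing an image with itself reconstructs it exactly, which verifies that the transform pair is lossless.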
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore, the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors presented the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery acquired with satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, panchromatic bands were simulated from the RGB data as a linear combination of the spectral channels. Next, the Gram-Schmidt pansharpening method was applied to the simulated bands and the multispectral satellite images. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
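The pan-band simulation step, a linear combination of the RGB channels, can be sketched as below. The luminance-style weights are illustrative assumptions; the paper derives its own combination for matching the satellite bands before Gram-Schmidt pansharpening is applied.

```python
import numpy as np

def simulate_pan(rgb, weights=(0.299, 0.587, 0.114)):
    """Simulate a panchromatic band as a linear combination of the UAV
    RGB channels. The default luminance-style weights are illustrative,
    not the coefficients fitted in the paper."""
    rgb = np.asarray(rgb, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalise to preserve the radiometric scale
    return rgb @ w           # (H, W, 3) @ (3,) -> (H, W)
```

A spectrally flat pixel maps to a pan value equal to its channel value, confirming that the normalisation keeps the radiometric scale.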
NASA Astrophysics Data System (ADS)
Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.
2017-09-01
To meet the requirements of high-accuracy and high-speed processing of wide-swath high-resolution optical satellite imagery under emergency situations, in both ground and on-board processing systems, this paper proposes an ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated based on the coordinate mapping relationship, which is established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that registration between the panchromatic and multispectral images is well achieved and that image distortion caused by satellite jitter is also corrected efficiently.
Peng, Mingzeng; Li, Zhou; Liu, Caihong; Zheng, Qiang; Shi, Xieqing; Song, Ming; Zhang, Yang; Du, Shiyu; Zhai, Junyi; Wang, Zhong Lin
2015-03-24
A high-resolution dynamic tactile/pressure display is indispensable to the comprehensive perception of force/mechanical stimulations such as electronic skin, biomechanical imaging/analysis, or personalized signatures. Here, we present a dynamic pressure sensor array based on pressure/strain tuned photoluminescence imaging without the need for electricity. Each sensor is a nanopillar that consists of InGaN/GaN multiple quantum wells. Its photoluminescence intensity can be modulated dramatically and linearly by small strain (0-0.15%) owing to the piezo-phototronic effect. The sensor array has a high pixel density of 6350 dpi and exceptional small standard deviation of photoluminescence. High-quality tactile/pressure sensing distribution can be real-time recorded by parallel photoluminescence imaging without any cross-talk. The sensor array can be inexpensively fabricated over large areas by semiconductor product lines. The proposed dynamic all-optical pressure imaging with excellent resolution, high sensitivity, good uniformity, and ultrafast response time offers a suitable way for smart sensing, micro/nano-opto-electromechanical systems.
Architecture and applications of a high resolution gated SPAD image sensor
Burri, Samuel; Maruyama, Yuki; Michalet, Xavier; Regazzoni, Francesco; Bruschini, Claudio; Charbon, Edoardo
2014-01-01
We present the architecture and three applications of the highest-resolution image sensor based on single-photon avalanche diodes (SPADs) published to date. The sensor, fabricated in a high-voltage CMOS process, has a resolution of 512 × 128 pixels and a pitch of 24 μm. The fill factor of 5% can be increased to 30% with the use of microlenses. For precise control of the exposure and for time-resolved imaging, we use fast global gating signals to define exposure windows as small as 4 ns. The uniformity of the gate edge locations is ∼140 ps (FWHM) over the whole array, while in-pixel digital counting enables frame rates as high as 156 kfps. Currently, our camera is used as a highly sensitive sensor with high temporal resolution, for applications ranging from fluorescence lifetime measurements to fluorescence correlation spectroscopy and the generation of true random numbers. PMID:25090572
NASA Astrophysics Data System (ADS)
Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.
2018-01-01
An evaluation of the use of median filters in the reduction of dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. The large number of photosites provides an image sensor with very high sensitivity but also makes it prone to noise effects such as hot pixels. Similar to earlier research with older smartphone models, no appreciable temperature effects were observed in the overall average pixel values for images taken in ambient temperatures between 5 °C and 25 °C. In this research, hot pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median filters of increasing size. A training set of 124 dark images with uniform statistics was employed, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the uniformity across temperatures masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot pixels were also reduced by decreasing image resolution. This research provides a methodology to characterise the dark-noise behaviour of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
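The hot-pixel clean-up described above (threshold at 9 DN, then a 7 × 7 median filter) can be sketched in NumPy as follows; the reflect padding at the image borders is an assumption, not a detail given in the abstract:

```python
import numpy as np

def remove_hot_pixels(dark, size=7, threshold=9):
    """Replace hot pixels (dark-frame values above `threshold` DN, the
    value derived from the paper's training images) with the median of
    a size x size neighbourhood. Pure-NumPy sketch."""
    pad = size // 2
    padded = np.pad(dark, pad, mode='reflect')
    # All size x size windows, one centred on each original pixel.
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    medians = np.median(windows, axis=(-2, -1))
    out = dark.copy()
    hot = dark > threshold
    out[hot] = medians[hot]
    return out
```

An isolated hot pixel in an otherwise uniform dark frame is replaced by the local median, while normal pixels are left untouched.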
Coded aperture detector: an image sensor with sub 20-nm pixel resolution.
Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick
2014-08-11
We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
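The reconstruction-by-convolution idea can be demonstrated in one dimension with a pseudo-noise mask standing in for the 2-D URA: a maximal-length (m-) sequence shares the URA's key property that correlating the measurements with a matched decoding array collapses to a delta function. A sketch with toy values (the scene and the length-7 mask are illustrative, not from the paper):

```python
import numpy as np

# Length-7 m-sequence as the coded aperture: open (1) / opaque (0).
mask = np.array([1, 1, 1, 0, 1, 0, 0], dtype=float)
decoder = 2 * mask - 1          # {0,1} -> {-1,+1} decoding array
N = mask.size

scene = np.zeros(N)
scene[2] = 3.0                  # a bright point source
scene[5] = 1.0                  # and a weaker one

# Scanning measurement: for each lateral shift s of the mask, the single
# photodiode sums the light passing through the open elements.
y = np.array([np.dot(scene, np.roll(mask, -s)) for s in range(N)])

# Digital reconstruction: circular correlation with the decoder. For an
# m-sequence, mask (*) decoder = ((N+1)/2) * delta, so this recovers the
# scene up to a known scale factor.
recon = np.array([np.dot(y, np.roll(decoder, -t)) for t in range(N)])
recon /= (N + 1) / 2
```

The delta-like correlation of the mask/decoder pair is exactly what makes the sensor's simple convolution reconstruction exact rather than approximate.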
NASA Astrophysics Data System (ADS)
Sankey, T.; Donald, J.; McVay, J.
2015-12-01
High resolution remote sensing images and datasets are typically acquired at large cost, which poses a big challenge for many scientists. Northern Arizona University recently acquired a custom-engineered, cutting-edge UAV, and we can now generate our own images with the instrument. The UAV has a unique capability to carry a large payload, including a hyperspectral sensor, which images the Earth's surface in over 350 spectral bands at 5 cm resolution, and a lidar scanner, which images the land surface and vegetation in three dimensions. Both sensors represent the newest available technology with very high resolution, precision, and accuracy. Using the UAV sensors, we are monitoring the effects of regional forest restoration treatment efforts. Individual tree canopy width and height are measured in the field and via the UAV sensors. The high-resolution UAV images are then used to segment individual tree canopies and to derive 3-dimensional estimates. The UAV image-derived variables are then correlated to the field-based measurements and scaled to satellite-derived tree canopy measurements. The relationships between the field-based and UAV-derived estimates are then extrapolated to a larger area to scale the tree canopy dimensions and to estimate tree density within restored and control forest sites.
Testing and evaluation of tactical electro-optical sensors
NASA Astrophysics Data System (ADS)
Middlebrook, Christopher T.; Smith, John G.
2002-07-01
As integrated electro-optical sensor payloads (multi-sensors) comprising infrared imagers, visible imagers, and lasers advance in performance, the tests and testing methods must also advance in order to fully evaluate them. Future operational requirements will require integrated sensor payloads to perform missions at longer ranges and with increased targeting accuracy. To meet these requirements, sensors will require advanced imaging algorithms, advanced tracking capability, high-powered lasers, and high-resolution imagers. To meet the U.S. Navy's testing requirements for such multi-sensors, the test and evaluation group in the Night Vision and Chemical Biological Warfare Department at NAVSEA Crane is developing automated testing methods and improved tests to evaluate imaging algorithms, and is procuring advanced testing hardware to measure high-resolution imagers and the line-of-sight stabilization of targeting systems. This paper addresses: descriptions of the multi-sensor payloads tested, the testing methods used and under development, and the different types of testing hardware and specific payload tests being developed and used at NAVSEA Crane.
Yield variability prediction by remote sensing sensors with different spatial resolution
NASA Astrophysics Data System (ADS)
Kumhálová, Jitka; Matějková, Štěpánka
2017-04-01
Currently, remote sensing sensors are very popular for crop monitoring and yield prediction. This paper describes how satellite images with moderate (Landsat) and very high (QuickBird and WorldView-2) spatial resolution, together with a GreenSeeker handheld crop sensor, can be used to estimate yield and crop growth variability. Winter barley (2007 and 2015) and winter wheat (2009 and 2011) were chosen because cloud-free data were available for the experimental field in the same time period from both Landsat and QuickBird or WorldView-2. The very high spatial resolution images were resampled to a coarser spatial resolution. The normalised difference vegetation index was derived from each satellite image data set, and for 2015 it was also measured with the GreenSeeker handheld crop sensor. Results showed that each satellite image data set can be used to estimate yield and plant variability. Nevertheless, better agreement with crop yield was obtained for images acquired in later phenological phases, e.g. in 2007 (BBCH 59, average correlation coefficient 0.856) and in 2011 (BBCH 59, 0.784). The GreenSeeker handheld crop sensor was not suitable for yield estimation due to its different measuring method.
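The vegetation index underlying this comparison is computed per pixel from the red and near-infrared bands; a minimal sketch (array names are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised difference vegetation index, (NIR - red) / (NIR + red),
    computed per pixel; eps guards against division by zero."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Example: healthy vegetation reflects strongly in NIR and weakly in red.
v = ndvi(np.array([0.5]), np.array([0.1]))   # -> about 0.667
```

Correlating such NDVI maps with measured yields (e.g. via `np.corrcoef`) yields the correlation coefficients reported above.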
High-speed uncooled MWIR hostile fire indication sensor
NASA Astrophysics Data System (ADS)
Zhang, L.; Pantuso, F. P.; Jin, G.; Mazurenko, A.; Erdtmann, M.; Radhakrishnan, S.; Salerno, J.
2011-06-01
Hostile fire indication (HFI) systems require high-resolution sensor operation at extremely high speeds to capture hostile fire events, including rocket-propelled grenades, anti-aircraft artillery, heavy machine guns, anti-tank guided missiles and small arms. HFI must also be conducted in a waveband with large available signal and low background clutter, in particular the mid-wavelength infrared (MWIR). The shortcoming of current HFI sensors in the MWIR is that the sensor bandwidth is not sufficient to achieve the required frame rate at high sensor resolution. Furthermore, current HFI sensors require cryogenic cooling that contributes to size, weight, and power (SWAP) in aircraft-mounted applications where these factors are at a premium. Based on its uncooled photomechanical infrared imaging technology, Agiltron has developed a low-SWAP, high-speed MWIR HFI sensor that breaks the bandwidth bottleneck typical of current infrared sensors. This accomplishment is made possible by using a commercial-off-the-shelf, high-performance visible imager as the readout integrated circuit and physically separating this visible imager from the MWIR-optimized photomechanical sensor chip. With this approach, we have achieved high-resolution operation of our MWIR HFI sensor at 1000 fps, which is unprecedented for an uncooled infrared sensor. We have field tested our MWIR HFI sensor for detecting all hostile fire events mentioned above at several test ranges under a wide range of environmental conditions. The field testing results will be presented.
HPT: A High Spatial Resolution Multispectral Sensor for Microsatellite Remote Sensing
Takahashi, Yukihiro; Sakamoto, Yuji; Kuwahara, Toshinori
2018-01-01
Although nano/microsatellites have great potential as remote sensing platforms, the spatial and spectral resolutions of an optical payload instrument are limited. In this study, a high spatial resolution multispectral sensor, the High-Precision Telescope (HPT), was developed for the RISING-2 microsatellite. The HPT has four image sensors: three in the visible region of the spectrum used for the composition of true color images, and a fourth in the near-infrared region, which employs liquid crystal tunable filter (LCTF) technology for wavelength scanning. Band-to-band image registration methods have also been developed for the HPT and implemented in the image processing procedure. The processed images were compared with other satellite images, and proven to be useful in various remote sensing applications. Thus, LCTF technology can be considered an innovative tool that is suitable for future multi/hyperspectral remote sensing by nano/microsatellites. PMID:29463022
Ultra-high resolution coded wavefront sensor.
Wang, Congli; Dun, Xiong; Fu, Qiang; Heidrich, Wolfgang
2017-06-12
Wavefront sensors and more general phase retrieval methods have recently attracted a lot of attention in a host of application domains, ranging from astronomy to scientific imaging and microscopy. In this paper, we introduce a new class of sensor, the Coded Wavefront Sensor, which provides high spatio-temporal resolution using a simple masked sensor under white light illumination. Specifically, we demonstrate megapixel spatial resolution and phase accuracy better than 0.1 wavelengths at reconstruction rates of 50 Hz or more, thus opening up many new applications from high-resolution adaptive optics to real-time phase retrieval in microscopy.
High-Speed Binary-Output Image Sensor
NASA Technical Reports Server (NTRS)
Fossum, Eric; Panicacci, Roger A.; Kemeny, Sabrina E.; Jones, Peter D.
1996-01-01
Photodetector outputs digitized by circuitry on same integrated-circuit chip. Developmental special-purpose binary-output image sensor designed to capture up to 1,000 images per second, with resolution greater than 10 to the 6th power pixels per image. Lower-resolution but higher-frame-rate prototype of sensor contains 128 x 128 array of photodiodes on complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. In application for which it is being developed, sensor used to examine helicopter oil to determine whether amount of metal and sand in oil sufficient to warrant replacement.
Gyrocopter-Based Remote Sensing Platform
NASA Astrophysics Data System (ADS)
Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.
2015-04-01
In this paper the development of a lightweight and highly modularized airborne sensor platform for remote sensing applications utilizing a gyrocopter as a carrier platform is described. The current sensor configuration consists of a high resolution DSLR camera for VIS-RGB recordings. As a second sensor modality, a snapshot hyperspectral camera was integrated in the aircraft. Moreover a custom-developed thermal imaging system composed of a VIS-PAN camera and a LWIR-camera is used for aerial recordings in the thermal infrared range. Furthermore another custom-developed highly flexible imaging system for high resolution multispectral image acquisition with up to six spectral bands in the VIS-NIR range is presented. The performance of the overall system was tested during several flights with all sensor modalities and the precalculated demands with respect to spatial resolution and reliability were validated. The collected data sets were georeferenced, georectified, orthorectified and then stitched to mosaics.
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for Earth remote sensors, while vibration of the remote sensing platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes to utilize soft-sensor technology for image-motion prediction and focuses on algorithm optimization in image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a back-propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on soft-sensor technology is below 5%, and the training and computing speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
Multisensor data fusion across time and space
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.
2014-06-01
Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming when the sample data grids of the different sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies on a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower-frame-rate imagery. Optical flow field vectors are first derived from high-frame-rate, high-resolution imagery and then used as the basis for temporal upsampling of the slower-frame-rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, allowing for more extreme object motion: imagery is preprocessed to varying resolution scales, and each new flow estimate is initialized from that of the previous coarser-resolution image. Overall performance of the processing chain is demonstrated using sample data involving complex object motion observed by multiple sensors mounted to the same base, including a high-speed visible camera and a coarser-resolution LWIR camera.
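The temporal-upsampling step can be sketched as a warp of a slow-sensor frame halfway along the per-pixel flow vectors estimated from the fast sensor. This nearest-neighbour backward warp is a simplified illustration of the idea, not the report's full multi-scale pyramid implementation:

```python
import numpy as np

def warp_half_step(frame, flow):
    """Synthesize a frame at time t + 0.5 by warping `frame` (time t)
    halfway along per-pixel flow vectors.

    frame: (H, W) image at time t from the slow sensor.
    flow:  (H, W, 2) displacement (dy, dx) from t to t+1, assumed to be
           estimated from the high-frame-rate sensor.
    """
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Backward warp: sample the source frame at p - 0.5 * flow(p),
    # rounded to the nearest pixel and clipped at the borders.
    sy = np.clip(np.round(yy - 0.5 * flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(xx - 0.5 * flow[..., 1]).astype(int), 0, w - 1)
    return frame[sy, sx]

# Example: a horizontal ramp moving 2 px/frame appears shifted by 1 px
# at the interpolated half-step.
frame = np.tile(np.arange(5.0), (5, 1))
flow = np.zeros((5, 5, 2))
flow[..., 1] = 2.0
mid = warp_half_step(frame, flow)
```

A production version would use bilinear sampling and occlusion handling, but the half-step warp is the core of flow-based temporal interpolation.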
Chander, G.; Scaramuzza, P.L.
2006-01-01
Increasingly, data from multiple sensors are used to gain a more complete understanding of land surface processes at a variety of scales. The Landsat suite of satellites has collected the longest continuous archive of multispectral data. The ResourceSat-1 satellite (also called IRS-P6) was launched into a polar sun-synchronous orbit on Oct 17, 2003. It carries three remote sensing sensors: the High Resolution Linear Imaging Self-Scanner (LISS-IV), the Medium Resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide Field Sensor (AWiFS). These three sensors are used together to provide images with different resolution and coverage. To assess the absolute radiometric calibration accuracy of the IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to the Landsat-5 TM and Landsat-7 ETM+ sensors. The approach involved cross-calibration based on image statistics from areas observed nearly simultaneously by the two sensors.
NASA Astrophysics Data System (ADS)
Fong de Los Santos, Luis E.
I developed a scanning superconducting quantum interference device (SQUID) microscope system with interchangeable sensor configurations for imaging the magnetic fields of room-temperature (RT) samples with sub-millimeter resolution. The low-critical-temperature (Tc) niobium-based monolithic SQUID sensor is mounted in the tip of a sapphire rod and thermally anchored to the cryostat helium reservoir. A 25 μm sapphire window separates the vacuum space from the RT sample. A positioning mechanism allows the sample-to-sensor spacing to be adjusted from the top of the Dewar. I achieved a sensor-to-sample spacing of 100 μm, which could be maintained for periods of up to 4 weeks. Different SQUID sensor configurations are necessary to achieve the best combination of spatial resolution and field sensitivity for a given magnetic source. For imaging thin sections of geological samples, I used a custom-designed monolithic low-Tc niobium bare SQUID sensor with an effective diameter of 80 μm, and achieved a field sensitivity of 1.5 pT/Hz^1/2 and a magnetic moment sensitivity of 5.4 × 10^-18 Am^2/Hz^1/2 at a sensor-to-sample spacing of 100 μm in the white-noise region for frequencies above 100 Hz. Imaging action currents in cardiac tissue requires higher field sensitivity, which can only be achieved by compromising spatial resolution. I developed a monolithic low-Tc niobium multiloop SQUID sensor, with sensor sizes ranging from 250 μm to 1 mm, which achieved sensitivities of 480 to 180 fT/Hz^1/2, respectively, in the white-noise region for frequencies above 100 Hz. For all sensor configurations, the spatial resolution was comparable to the effective diameter and limited by the sensor-to-sample spacing.
Spatial registration allowed us to compare high-resolution images of magnetic fields associated with action currents and optical recordings of transmembrane potentials to study the bidomain nature of cardiac tissue or to match petrography to magnetic field maps in thin sections of geological samples.
Advanced x-ray imaging spectrometer
NASA Technical Reports Server (NTRS)
Callas, John L. (Inventor); Soli, George A. (Inventor)
1998-01-01
An x-ray spectrometer that also provides images of an x-ray source. Coded aperture imaging techniques are used to provide high resolution images. Imaging position-sensitive x-ray sensors with good energy resolution are utilized to provide excellent spectroscopic performance. The system produces high resolution spectral images of the x-ray source which can be viewed in any one of a number of specific energy bands.
Lensless high-resolution photoacoustic imaging scanner for in vivo skin imaging
NASA Astrophysics Data System (ADS)
Ida, Taiichiro; Iwazaki, Hideaki; Omuro, Toshiyuki; Kawaguchi, Yasushi; Tsunoi, Yasuyuki; Kawauchi, Satoko; Sato, Shunichi
2018-02-01
We previously launched a high-resolution photoacoustic (PA) imaging scanner based on a unique lensless design for in vivo skin imaging. The design, imaging algorithm and characteristics of the system are described in this paper. Neither an optical lens nor an acoustic lens is used in the system. In the imaging head, four sensor elements are arranged quadrilaterally, and by checking the phase differences for PA waves detected with these four sensors, a set of PA signals originating only from a chromophore located on the sensor center axis is extracted for constructing an image. A phantom study using a carbon fiber showed a depth-independent horizontal resolution of 84.0 ± 3.5 µm, and the scan direction-dependent variation of PA signals was about ± 20%. We then performed imaging of vasculature phantoms: patterns of red ink lines with widths of 100 or 200 μm formed in an acrylic block co-polymer. The patterns were visualized with high contrast, showing the capability for imaging arterioles and venules in the skin. Vasculatures in rat burn models and healthy human skin were also clearly visualized in vivo.
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is the limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off against each other under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by a factor of three horizontally and vertically. Our method produces clearer images compared to the original sub-aperture images and the case without depth refinement.
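The coupling between registration (depth estimation) and super-resolution described above can be illustrated with a toy 1-D shift-and-add model; this is a sketch of the general idea under simplifying assumptions (one global integer shift per view), not the paper's algorithm, and all function names are hypothetical.

```python
import numpy as np

def shift_and_add_sr(lr_views, shifts, factor):
    """Toy 1-D super-resolution: place each low-resolution view onto a
    high-resolution grid according to its (estimated) shift."""
    n = len(lr_views[0]) * factor
    acc, cnt = np.zeros(n), np.zeros(n)
    for view, s in zip(lr_views, shifts):
        idx = np.arange(len(view)) * factor + s
        acc[idx] += view
        cnt[idx] += 1
    cnt[cnt == 0] = 1          # leave unsampled grid points at zero
    return acc / cnt

def estimate_shift(view, hr, factor, max_shift):
    """Registration step: pick the shift that best explains this view
    given the current high-resolution estimate."""
    errs = [np.sum((hr[s::factor][:len(view)] - view) ** 2)
            for s in range(max_shift)]
    return int(np.argmin(errs))
```

In the actual method these two steps alternate over 2-D sub-aperture images with a per-pixel disparity (depth) map rather than a single global shift per view.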
Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images
Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor
2012-01-01
Electro-optic (EO) image sensors exhibit the properties of high resolution and low noise level at daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperature. Therefore, we propose a novel framework of IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, based on the same steps, simulation results show a blended IR image of better quality when only the original IR image is available. PMID:23112602
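A minimal version of steps (2) and (4), edge extraction from the EO image and blending onto the already-registered IR image, might look like the NumPy sketch below; the Sobel kernel, blending weight, and function names are illustrative assumptions, not the authors' implementation (which also applies the PSF-based inverse filtering of step (1) first).

```python
import numpy as np

def sobel_edges(img):
    """Sobel edge magnitude, computed without external dependencies."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

def blend_edges(ir, eo, alpha=0.3):
    """Blend the normalized EO edge map into the registered IR image."""
    edges = sobel_edges(eo)
    edges /= edges.max() + 1e-12
    out = (1 - alpha) * ir.astype(float) + alpha * 255.0 * edges
    return np.clip(out, 0, 255).astype(np.uint8)
```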
NASA Astrophysics Data System (ADS)
Seo, Hokuto; Aihara, Satoshi; Namba, Masakazu; Watabe, Toshihisa; Ohtake, Hiroshi; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Nitta, Hiroshi; Hirao, Takashi
2010-01-01
Our group has been developing a new type of image sensor overlaid with three organic photoconductive films, each sensitive to only one of the primary color components (blue (B), green (G), or red (R) light), with the aim of developing a compact, high resolution color camera without any color separation optical systems. In this paper, we first describe the unique characteristics of organic photoconductive films. The photoconductive properties of a film, in particular its wavelength selectivity, can be tuned simply through the choice of organic materials, well enough to divide the incident light into the three primary colors. Color separation with vertically stacked organic films is also shown. In addition, a resolution of the organic photoconductive films sufficient for high-definition television (HDTV) was confirmed in a shooting experiment using a camera tube. Secondly, as a step toward our goal, we fabricated a stacked organic image sensor with G- and R-sensitive organic photoconductive films, each of which had a zinc oxide (ZnO) thin film transistor (TFT) readout circuit, and demonstrated image pickup at a TV frame rate. A color image with a resolution corresponding to the pixel number of the ZnO TFT readout circuit was obtained from the stacked image sensor. These results show the potential for the development of high-resolution prism-less color cameras with stacked organic photoconductive films.
High-Resolution Spin-on-Patterning of Perovskite Thin Films for a Multiplexed Image Sensor Array.
Lee, Woongchan; Lee, Jongha; Yun, Huiwon; Kim, Joonsoo; Park, Jinhong; Choi, Changsoon; Kim, Dong Chan; Seo, Hyunseon; Lee, Hakyong; Yu, Ji Woong; Lee, Won Bo; Kim, Dae-Hyeong
2017-10-01
Inorganic-organic hybrid perovskite thin films have attracted significant attention as an alternative to silicon in photon-absorbing devices mainly because of their superb optoelectronic properties. However, high-definition patterning of perovskite thin films, which is important for fabrication of the image sensor array, is hardly accomplished owing to their extreme instability in general photolithographic solvents. Here, a novel patterning process for perovskite thin films is described: the high-resolution spin-on-patterning (SoP) process. This fast and facile process is compatible with a variety of spin-coated perovskite materials and perovskite deposition techniques. The SoP process is successfully applied to develop a high-performance, ultrathin, and deformable perovskite-on-silicon multiplexed image sensor array, paving the road toward next-generation image sensor arrays.
Multi-image acquisition-based distance sensor using agile laser spot beam.
Riza, Nabeel A; Amin, M Junaid
2014-09-01
We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture laser spot images on a target, with spot sizes deliberately different from the minimum spot size possible at that target distance. By exploiting the unique relationship of the target-located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution via the Rayleigh resolution criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.
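The core idea, inferring distance from how the on-target spot size varies as the ECVFL focal length sweeps, can be sketched with a crude geometric-optics model (which ignores the diffraction effects the authors actually exploit); the beam diameter, grid search, and function names below are assumptions for illustration only.

```python
import numpy as np

def spot_size(f, d, beam_diam=5.0):
    """Geometric-optics spot diameter (mm) on a target at distance d (cm)
    for a collimated beam of diameter beam_diam focused by focal length f (cm)."""
    return beam_diam * abs(1.0 - d / f)

def estimate_distance(focals, spots, candidates):
    """Pick the candidate distance whose predicted spot-size-vs-focal-length
    curve best matches the measured spots (least squares over the sweep)."""
    errs = [sum((spot_size(f, d) - s) ** 2 for f, s in zip(focals, spots))
            for d in candidates]
    return candidates[int(np.argmin(errs))]
```

The measured curve is sharpest near the true distance, which is why sampling several focal lengths per target position (five spot images in the paper) pins the distance down more finely than any single spot measurement could.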
Holographic imaging with a Shack-Hartmann wavefront sensor.
Gong, Hai; Soloviev, Oleg; Wilding, Dean; Pozzi, Paolo; Verhaegen, Michel; Vdovin, Gleb
2016-06-27
A high-resolution Shack-Hartmann wavefront sensor has been used for coherent holographic imaging, by computer reconstruction and propagation of the complex field in a lensless imaging setup. The resolution of the images obtained with the experimental data is in good agreement with diffraction theory. Although a proper calibration with a reference beam improves the image quality, the method has potential for reference-less holographic imaging with spatially coherent monochromatic and narrowband polychromatic sources in microscopy and imaging through turbulence.
Higher resolution satellite remote sensing and the impact on image mapping
Watkins, Allen H.; Thormodsgard, June M.
1987-01-01
Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. Availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, presents new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.
Swap intensified WDR CMOS module for I2/LWIR fusion
NASA Astrophysics Data System (ADS)
Ni, Yang; Noguier, Vincent
2015-05-01
The combination of a high resolution visible-near-infrared low light sensor and a moderate resolution uncooled thermal sensor provides an efficient way for multi-task night vision. Tremendous progress has been made on uncooled thermal sensors (a-Si, VOx, etc.); it is now possible to make a miniature uncooled thermal camera module in a tiny 1 cm3 cube with <1 W power consumption. Silicon-based solid-state low light CCD/CMOS sensors have also seen constant progress in terms of readout noise, dark current, resolution and frame rate. In contrast to thermal sensing, which is intrinsically day-and-night operational, silicon-based solid-state sensors are not yet capable of the night vision performance required by defense and critical surveillance applications. Readout noise and dark current are the two major obstacles. The low dynamic range of silicon sensors in high sensitivity mode is also an important limiting factor, which leads to recognition failure due to local or global saturation and blooming. In this context, the image intensifier based solution is still attractive for the following reasons: 1) high gain and ultra-low dark current; 2) wide dynamic range; and 3) ultra-low power consumption. With the high electron gain and ultra-low dark current of an image intensifier, the only requirements on the silicon image pickup device are resolution, dynamic range and power consumption. In this paper, we present a SWAP intensified Wide Dynamic Range CMOS module for night vision applications, especially for I2/LWIR fusion. This module is based on a dedicated CMOS image sensor using a solar-cell mode photodiode logarithmic pixel design that covers a huge dynamic range (>140 dB) without saturation or blooming. The ultra-wide dynamic range image from this new generation logarithmic sensor can be used directly without any image processing and provides instant light accommodation. The complete module is slightly bigger than a simple ANVIS format I2 tube with <500 mW power consumption.
Evaluation of Sun Glint Correction Algorithms for High-Spatial Resolution Hyperspectral Imagery
2012-09-01
ACRONYMS AND ABBREVIATIONS AISA Airborne Imaging Spectrometer for Applications AVIRIS Airborne Visible/Infrared Imaging Spectrometer BIL Band...sensor bracket mount combining Airborne Imaging Spectrometer for Applications (AISA) Eagle and Hawk sensors into a single imaging system (SpecTIR 2011...The AISA Eagle is a VNIR sensor with a wavelength range of approximately 400-970 nm and the AISA Hawk sensor is a SWIR sensor with a wavelength
Toroidal sensor arrays for real-time photoacoustic imaging
NASA Astrophysics Data System (ADS)
Bychkov, Anton S.; Cherepetskaya, Elena B.; Karabutov, Alexander A.; Makarov, Vladimir A.
2017-07-01
This article addresses theoretical and numerical investigation of image formation in photoacoustic (PA) imaging with complex-shaped concave sensor arrays. The spatial resolution and the size of sensitivity region of PA and laser ultrasonic (LU) imaging systems are assessed using sensitivity maps and spatial resolution maps in the image plane. This paper also discusses the relationship between the size of high-sensitivity regions and the spatial resolution of real-time imaging systems utilizing toroidal arrays. It is shown that the use of arrays with toroidal geometry significantly improves the diagnostic capabilities of PA and LU imaging to investigate biological objects, rocks, and composite materials.
NASA Technical Reports Server (NTRS)
Holekamp, Kara; Aaron, David; Thome, Kurtis
2006-01-01
Radiometric calibration of commercial imaging satellite products is required to ensure that science and application communities can better understand their properties. Inaccurate radiometric calibrations can lead to erroneous decisions and invalid conclusions and can limit intercomparisons with other systems. To address this calibration need, satellite at-sensor radiance values were compared to those estimated by each independent team member to determine the sensor's radiometric accuracy. The combined results of this evaluation provide the user community with an independent assessment of these commercially available high spatial resolution sensors' absolute calibration values.
High-speed imaging using CMOS image sensor with quasi pixel-wise exposure
NASA Astrophysics Data System (ADS)
Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.
2017-02-01
Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher frame rate video that were produced by simulation experiments or using an optically simulated random sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by columns and fix the exposure duration by rows for each 8×8 pixel block. This CMOS sensor is not fully controllable at the pixel level and has line-dependent controls, but it offers flexibility when compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method to realize pseudo-random sampling for high-speed video acquisition that uses the flexibility of the CMOS sensor. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.
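The line-dependent exposure constraint described above (exposure start chosen per column, exposure duration per row, within each 8×8 block) can be sketched as a space-time mask generator; this is an illustrative model of the constraint with hypothetical names, not the sensor's actual control logic.

```python
import numpy as np

def block_exposure_mask(n_frames=8, block=8, seed=0):
    """Binary exposure pattern for one 8x8 block over n_frames time slots:
    exposure start is drawn per column, exposure duration per row,
    mimicking the line-dependent control of the prototype sensor."""
    rng = np.random.default_rng(seed)
    start = rng.integers(0, n_frames, size=block)       # one start per column
    length = rng.integers(1, n_frames + 1, size=block)  # one duration per row
    mask = np.zeros((n_frames, block, block), dtype=bool)
    for r in range(block):
        for c in range(block):
            s = start[c]
            e = min(n_frames, s + length[r])
            mask[s:e, r, c] = True                      # pixel integrates here
    return mask
```

A fully random per-pixel mask would be ideal for compressive reconstruction; this structured mask shows how much randomness survives when only column-wise and row-wise controls are available.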
Fusion of radar and ultrasound sensors for concealed weapons detection
NASA Astrophysics Data System (ADS)
Felber, Franklin S.; Davis, Herbert T., III; Mallon, Charles E.; Wild, Norbert C.
1996-06-01
An integrated radar and ultrasound sensor, capable of remotely detecting and imaging concealed weapons, is being developed. A modified frequency-agile, mine-detection radar is intended to specify with high probability of detection at ranges of 1 to 10 m which individuals in a moving crowd may be concealing metallic or nonmetallic weapons. Within about 1 to 5 m, the active ultrasound sensor is intended to enable a user to identify a concealed weapon on a moving person with low false-detection rate, achieved through a real-time centimeter-resolution image of the weapon. The goal for sensor fusion is to have the radar acquire concealed weapons at long ranges and seamlessly hand over tracking data to the ultrasound sensor for high-resolution imaging on a video monitor. We have demonstrated centimeter-resolution ultrasound images of metallic and non-metallic weapons concealed on a human at ranges over 1 m. Processing of the ultrasound images includes filters for noise, frequency, brightness, and contrast. A frequency-agile radar has been developed by JAYCOR under the U.S. Army Advanced Mine Detection Radar Program. The signature of an armed person, detected by this radar, differs appreciably from that of the same person unarmed.
Image-receptor performance: a comparison of Trophy RVG UI sensor and Kodak Ektaspeed Plus film.
Ludlow, J; Mol, A
2001-01-01
Objective. This study compares the physical characteristics of the RVG UI sensor (RVG) with Ektaspeed Plus film. Dose-response curves were generated for film and for each of 6 available RVG modes. An aluminum step-wedge was used to evaluate exposure latitude. Spatial resolution was assessed by using a line-pair test tool. Latitude and resolution were assessed by observers for both modalities. The RVG was further characterized by its modulation transfer function. Exposure latitude was equal for film and RVG in the periodontal mode. Other gray scale modes demonstrated much lower latitude. The average maximum resolution was 15.3 line-pairs per millimeter (lp/mm) for RVG in high-resolution mode, 10.5 lp/mm for RVG in low-resolution mode, and 20 lp/mm for film (P <.0001). Modulation transfer function measurements supported the subjective assessments. In periodontal mode, the RVG UI sensor demonstrates exposure latitude similar to that of Ektaspeed Plus film. Film images exhibit significantly higher spatial resolution than the RVG images acquired in high-resolution mode.
High-resolution, continuous field-of-view (FOV), non-rotating imaging system
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)
2010-01-01
A high resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition, and a control apparatus inclusive of distributed processors and software for device-control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes automated or manually controlled display system and software for providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or automated ATR system to select regions for higher resolution inspection.
Commercial CMOS image sensors as X-ray imagers and particle beam monitors
NASA Astrophysics Data System (ADS)
Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G. V.; Carraresi, L.
2015-01-01
CMOS image sensors are widely used in several applications such as mobile handsets, webcams and digital cameras, among others. Furthermore, they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap beam monitors and high resolution image sensors for scientific applications, we exploited the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9V034, featuring 752 × 480 pixels with 6 μm × 6 μm pixel size, has been mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1-6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread-function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, an MT9M032, featuring 1472 × 1096 pixels with 2.2 μm × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify the data transfer and the image acquisition, the system is controlled by a dedicated micro-processor board (DM3730 1 GHz SoC ARM Cortex-A8) on which a modified LINUX kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.
NASA Astrophysics Data System (ADS)
Sedano, Fernando; Kempeneers, Pieter; Strobl, Peter; Kucera, Jan; Vogt, Peter; Seebach, Lucia; San-Miguel-Ayanz, Jesús
2011-09-01
This study presents a novel cloud masking approach for high resolution remote sensing images in the context of land cover mapping. As an advantage over traditional methods, the approach does not rely on thermal bands and is applicable to images from most high resolution earth observation remote sensing sensors. The methodology couples pixel-based seed identification and object-based region growing. The seed identification stage relies on pixel value comparison between the high resolution images and cloud free composites at lower spatial resolution acquired at almost simultaneous dates. The methodology was tested taking SPOT4-HRVIR, SPOT5-HRG and IRS-LISS III as high resolution images and cloud free MODIS composites as reference images. The selected scenes included a wide range of cloud types and surface features. The resulting cloud masks were evaluated through visual comparison. They were also compared with ad-hoc independently generated cloud masks and with the automatic cloud cover assessment algorithm (ACCA). In general the results showed an agreement in detected clouds higher than 95% for clouds larger than 50 ha. The approach produced consistent results, identifying and mapping clouds of different types and sizes over various land surfaces including natural vegetation, agricultural land, built-up areas, water bodies and snow.
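The two-stage logic of seed identification followed by region growing can be sketched as follows; the thresholds, 4-connectivity flood fill, and function names are illustrative assumptions, not the study's calibrated parameters, and the real method compares the high resolution image against a resampled lower-resolution composite rather than a same-size reference.

```python
import numpy as np
from collections import deque

def cloud_mask(hr, ref, seed_thresh=80, grow_thresh=40):
    """Seed pixels where the high-resolution image is much brighter than
    the cloud-free reference, then grow regions over moderately bright
    4-connected neighbors (simple flood fill)."""
    diff = hr.astype(float) - ref.astype(float)
    mask = diff > seed_thresh                       # seed identification
    frontier = deque(zip(*np.nonzero(mask)))
    h, w = diff.shape
    while frontier:                                 # region growing
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] \
                    and diff[rr, cc] > grow_thresh:
                mask[rr, cc] = True
                frontier.append((rr, cc))
    return mask
```

The two thresholds split the roles: a strict one guarantees seeds are really cloud, and a looser one lets the region growing pick up the dimmer cloud edges without flagging isolated bright pixels.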
Sensor Webs: Autonomous Rapid Response to Monitor Transient Science Events
NASA Technical Reports Server (NTRS)
Mandl, Dan; Grosvenor, Sandra; Frye, Stu; Sherwood, Robert; Chien, Steve; Davies, Ashley; Cichy, Ben; Ingram, Mary Ann; Langley, John; Miranda, Felix
2005-01-01
To better understand how physical phenomena, such as volcanic eruptions, evolve over time, multiple sensor observations over the duration of the event are required. Using sensor web approaches that integrate original detections by in-situ sensors and global-coverage, lower-resolution, on-orbit assets with automated rapid response observations from high resolution sensors, more observations of significant events can be made with increased temporal, spatial, and spectral resolution. This paper describes experiments using Earth Observing 1 (EO-1) along with other space and ground assets to implement progressive mission autonomy to identify, locate, and image phenomena such as wildfires, volcanoes, floods and ice breakup with high resolution instruments. The software that plans, schedules and controls the various satellite assets is used to form ad hoc constellations which enable collaborative autonomous image collections triggered by transient phenomena. This software is both flight based and ground based, works in concert to run all of the required assets cohesively, and includes model-based artificial intelligence components.
Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data
NASA Astrophysics Data System (ADS)
Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.
2015-04-01
In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic corrections module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for high level product generation. Various parts of the chain were implemented also for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for a full-frame sensor currently under development by SPACE-SI is planned. The present paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy
NASA Astrophysics Data System (ADS)
Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan
2016-03-01
Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel-array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in e.g., mobile-phones. Here we report, for the first time, a wavelength scanning based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor-array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength scanning based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited-settings.
High speed, real-time, camera bandwidth converter
Bower, Dan E; Bloom, David A; Curry, James R
2014-10-21
Image data from a CMOS sensor with 10 bit resolution is reformatted in real time to allow the data to stream through communications equipment that is designed to transport data with 8 bit resolution. By reformatting the image data, the 10 bit data is transmitted through the 8 bit communication equipment in real time, without a frame delay.
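One common way to carry 10-bit samples over an 8-bit transport is to pack four samples into five bytes; the patent abstract does not disclose its exact reformatting scheme, so the layout and function names below are purely illustrative.

```python
def pack10(samples):
    """Pack 10-bit samples (in groups of 4) into 5 bytes per group."""
    assert len(samples) % 4 == 0
    out = bytearray()
    for i in range(0, len(samples), 4):
        a, b, c, d = samples[i:i + 4]
        bits = (a << 30) | (b << 20) | (c << 10) | d   # 40 bits total
        out += bits.to_bytes(5, "big")
    return bytes(out)

def unpack10(data):
    """Inverse of pack10: recover the 10-bit samples from the byte stream."""
    samples = []
    for i in range(0, len(data), 5):
        bits = int.from_bytes(data[i:i + 5], "big")
        samples += [(bits >> s) & 0x3FF for s in (30, 20, 10, 0)]
    return samples
```

Because 4 × 10 bits is exactly 5 × 8 bits, this repacking adds no padding overhead, which is what makes a streaming, frame-delay-free implementation plausible in hardware.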
Comparison of the performance of intraoral X-ray sensors using objective image quality assessment.
Hellén-Halme, Kristina; Johansson, Curt; Nilsson, Mats
2016-05-01
The main aim of this study was to evaluate the performance of 10 individual sensors of the same make, using objective measures of key image quality parameters. A further aim was to compare 8 brands of sensors. Ten new sensors of 8 different models from 6 manufacturers (i.e., 80 sensors) were included in the study. All sensors were exposed in a standardized way using an X-ray tube voltage of 60 kVp and different exposure times. Sensor response, noise, low-contrast resolution, spatial resolution and uniformity were measured. Individual differences between sensors of the same brand were surprisingly large in some cases. There were clear differences in the characteristics of the different brands of sensors. The largest variations were found for individual sensor response for some of the brands studied. Also, noise level and low contrast resolution showed large variations between brands. Sensors, even of the same brand, vary significantly in their quality. It is thus valuable to establish action levels for the acceptance of newly delivered sensors and to use objective image quality control for commissioning purposes and periodic checks to ensure high performance of individual digital sensors.
Micromachined Chip Scale Thermal Sensor for Thermal Imaging.
Shekhawat, Gajendra S; Ramachandran, Srinivasan; Jiryaei Sharahi, Hossein; Sarkar, Souravi; Hujsak, Karl; Li, Yuan; Hagglund, Karl; Kim, Seonghwan; Aden, Gary; Chand, Ami; Dravid, Vinayak P
2018-02-27
The lateral resolution of scanning thermal microscopy (SThM) has hitherto never approached that of mainstream atomic force microscopy, mainly due to poor performance of the thermal sensor. Herein, we report a nanomechanical system-based thermal sensor (thermocouple) that enables the high lateral resolution often required in nanoscale thermal characterization in a wide range of applications. This thermocouple-based probe technology delivers excellent lateral resolution (∼20 nm), extended high-temperature measurements >700 °C without cantilever bending, and high thermal sensitivity (∼0.04 °C). The origin of the significantly improved figures-of-merit lies in the probe design, which consists of a hollow silicon tip integrated with a vertically oriented thermocouple sensor at the apex (low thermal mass) that interacts with the sample through a metallic nanowire (50 nm diameter), thereby achieving high lateral resolution. The efficacy of this approach to SThM is demonstrated by imaging embedded metallic nanostructures in silica core-shell, metal nanostructures coated with polymer films, and metal-polymer interconnect structures. The nanoscale pitch and extremely small thermal mass of the probe promise significant improvements over existing methods and a wide range of applications in several fields including the semiconductor industry, biomedical imaging, and data storage.
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.
Compressive Sensing Image Sensors-Hardware Implementation
Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram
2013-01-01
The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123
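The core CS claim surveyed above, recovering a high-resolution signal from far fewer measurements than samples, can be illustrated with a standard textbook sketch (not code from the review): a sparse signal measured through a random Gaussian matrix and recovered by Orthogonal Matching Pursuit. The sizes and the sensing matrix are arbitrary illustrative choices:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # greedily pick the column most correlated with the residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the currently selected columns
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3            # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                     # m measurements instead of n samples
x_hat = omp(Phi, y, k)
print("reconstruction error:", float(np.linalg.norm(x - x_hat)))
```

In the hardware implementations the review covers, the role of `Phi` is played by optical or electrical modulation at the pixel or column level; the reconstruction runs off-sensor.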
A new omni-directional multi-camera system for high resolution surveillance
NASA Astrophysics Data System (ADS)
Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2014-05-01
Omni-directional high-resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose were based on a parabolic mirror or a fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high-resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next-generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high-resolution depth map estimation, and high-dynamic-range imaging, which are beyond standard stitching and panorama generation methods.
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
Automatic panoramic thermal integrated sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail A.; Tsui, Eddy K.; Gutin, Olga N.
2005-05-01
Historically, the US Army has recognized the advantages of panoramic imagers with high image resolution: increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The novel ViperViewTM high-resolution panoramic thermal imager is the heart of the Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) in support of the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to improve situational awareness (SA) in many defense and offensive operations, as well as serve as a sensor node in tactical Intelligence, Surveillance, Reconnaissance (ISR). The ViperView is an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640×480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS sensor suite include ancillary sensors, advanced power management, and wakeup capability. This paper describes the development status of the APTIS system.
Qu, Bin; Huang, Ying; Wang, Weiyuan; Sharma, Prateek; Kuhls-Gilcrist, Andrew T.; Cartwright, Alexander N.; Titus, Albert H.; Bednarek, Daniel R.; Rudin, Stephen
2011-01-01
Use of an extensible array of Electron Multiplying CCDs (EMCCDs) in medical x-ray imager applications was demonstrated for the first time. The large variable electronic gain (up to 2000) and small pixel size of EMCCDs provide effective suppression of readout noise compared to signal, as well as high resolution, enabling the development of an x-ray detector with far superior performance compared to conventional x-ray image intensifiers and flat panel detectors. We are developing arrays of EMCCDs to overcome their limited field of view (FOV). In this work we report on an array of two EMCCD sensors running simultaneously at a high frame rate and optically focused on a mammogram film showing calcified ducts. The work was conducted on an optical table with a pulsed LED bar used to provide uniform diffuse light onto the film to simulate x-ray projection images. The system can run at up to 17.5 frames per second, or at even higher frame rates with binning. Integration time for the sensors can be adjusted from 1 ms to 1000 ms. Twelve-bit correlated double sampling A/D converters were used to digitize the images, which were acquired by a National Instruments dual-channel Camera Link PC board in real time. A user-friendly interface was programmed using LabVIEW to save and display 2K × 1K pixel matrix digital images. The demonstration tiles a 2 × 1 array to acquire increased-FOV stationary images taken at different gains and fluoroscopic-like videos recorded by scanning the mammogram simultaneously with both sensors. The results show high-resolution and high-dynamic-range images stitched together with minimal adjustments needed. The EMCCD array design allows for expansion to an M×N array for an arbitrarily larger FOV, with high resolution and large dynamic range maintained. PMID:23505330
All-optical endoscopic probe for high resolution 3D photoacoustic tomography
NASA Astrophysics Data System (ADS)
Ansari, R.; Zhang, E.; Desjardins, A. E.; Beard, P. C.
2017-03-01
A novel all-optical forward-viewing photoacoustic probe using a flexible coherent fibre-optic bundle and a Fabry-Perot (FP) ultrasound sensor has been developed. The fibre bundle, along with the FP sensor at its distal end, synthesizes a high-density 2D array of wideband ultrasound detectors. Photoacoustic waves arriving at the sensor are spatially mapped by optically scanning the proximal end face of the bundle in 2D with a CW wavelength-tunable interrogation laser. 3D images are formed from the detected signals using a time-reversal image reconstruction algorithm. The system has been characterized in terms of its PSF, noise-equivalent pressure and field of view. Finally, the high-resolution 3D imaging capability has been demonstrated using arbitrarily shaped phantoms and a duck embryo.
NASA Astrophysics Data System (ADS)
Fong, L. E.; Holzer, J. R.; McBride, K. K.; Lima, E. A.; Baudenbacher, F.; Radparvar, M.
2005-05-01
We have developed a scanning superconducting quantum interference device (SQUID) microscope system with interchangeable sensor configurations for imaging magnetic fields of room-temperature (RT) samples with submillimeter resolution. The low-critical-temperature (Tc) niobium-based monolithic SQUID sensors are mounted on the tip of a sapphire and thermally anchored to the helium reservoir. A 25 μm sapphire window separates the vacuum space from the RT sample. A positioning mechanism allows us to adjust the sample-to-sensor spacing from the top of the Dewar. We achieved a sensor-to-sample spacing of 100 μm, which could be maintained for periods of up to four weeks. Different SQUID sensor designs are necessary to achieve the best combination of spatial resolution and field sensitivity for a given source configuration. For imaging thin sections of geological samples, we used a custom-designed monolithic low-Tc niobium bare SQUID sensor, with an effective diameter of 80 μm, and achieved a field sensitivity of 1.5 pT/√Hz and a magnetic moment sensitivity of 5.4×10⁻¹⁸ A·m²/√Hz at a sensor-to-sample spacing of 100 μm in the white-noise region for frequencies above 100 Hz. Imaging action currents in cardiac tissue requires a higher field sensitivity, which can only be achieved by compromising spatial resolution. We developed a monolithic low-Tc niobium multiloop SQUID sensor, with sensor sizes ranging from 250 μm to 1 mm, and achieved sensitivities of 480 to 180 fT/√Hz, respectively, in the white-noise region for frequencies above 100 Hz. For all sensor configurations, the spatial resolution was comparable to the effective diameter and limited by the sensor-to-sample spacing. Spatial registration allowed us to compare high-resolution images of magnetic fields associated with action currents and optical recordings of transmembrane potentials to study the bidomain nature of cardiac tissue, or to match petrography to magnetic field maps in thin sections of geological samples.
NASA Astrophysics Data System (ADS)
Hall-Brown, Mary
The heterogeneity of Arctic vegetation can make land cover classification very difficult when using medium to small resolution imagery (Schneider et al., 2009; Muller et al., 1999). Using high radiometric and spatial resolution imagery, such as that from the SPOT 5 and IKONOS satellites, has helped arctic land cover classification accuracies rise into the 80 and 90 percentiles (Allard, 2003; Stine et al., 2010; Muller et al., 1999). However, those increases usually come at a high price. High-resolution imagery is very expensive and can often add tens of thousands of dollars to the cost of the research. The EO-1 satellite, launched in 2000, carries two sensors that have high spectral and/or high spatial resolutions and can be an acceptable compromise in the resolution-versus-cost issue. The Hyperion is a hyperspectral sensor capable of collecting 242 spectral bands of information. The Advanced Land Imager (ALI) is an advanced multispectral sensor whose spatial resolution can be sharpened to 10 meters. This dissertation compares the accuracies of arctic land cover classifications produced by the Hyperion and ALI sensors to the classification accuracies produced by the Système Pour l'Observation de la Terre (SPOT), the Landsat Thematic Mapper (TM) and the Landsat Enhanced Thematic Mapper Plus (ETM+) sensors. Hyperion and ALI images from August 2004 were collected over the Upper Kuparuk River Basin, Alaska. Image processing included the stepwise discriminant analysis of pixels that were positively classified from coinciding ground control points, geometric and radiometric correction, and principal component analysis. Finally, stratified random sampling was used to perform accuracy assessments on satellite-derived land cover classifications. Accuracy was estimated from an error matrix (confusion matrix) that provided the overall, producer's and user's accuracies.
This research found that while the Hyperion sensor produced classification accuracies that were equivalent to the TM and ETM+ sensors (approximately 78%), the Hyperion could not match the accuracy of the SPOT 5 HRV sensor. However, the land cover classifications derived from the ALI sensor exceeded most classification accuracies derived from the TM and ETM+ sensors and were even comparable to most SPOT 5 HRV classifications (87%). With the deactivation of the Landsat series satellites, uninterrupted monitoring of remote locations throughout the world, such as the Arctic, is in jeopardy. Utilization of the Hyperion and ALI sensors is a way to keep that endeavor operational. By keeping the ALI sensor active at all times, uninterrupted observation of the entire Earth can be accomplished. Keeping the Hyperion sensor as a "tasked" sensor can provide scientists with additional imagery and options for their studies without overburdening storage.
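The overall, producer's, and user's accuracies derived from an error (confusion) matrix are simple ratios over its rows, columns, and trace. A sketch with a hypothetical 3-class matrix (the counts are made up, not taken from the dissertation):

```python
import numpy as np

# Hypothetical 3-class error matrix: rows = classified map category,
# columns = ground reference category; entries are sampled pixel counts.
error_matrix = np.array([
    [45,  4,  1],
    [ 6, 38,  2],
    [ 2,  3, 49],
])

# Overall accuracy: correctly classified samples over all samples.
overall = np.trace(error_matrix) / error_matrix.sum()
# User's accuracy: diagonal over row totals (commission error, map user's view).
users = np.diag(error_matrix) / error_matrix.sum(axis=1)
# Producer's accuracy: diagonal over column totals (omission error).
producers = np.diag(error_matrix) / error_matrix.sum(axis=0)

print(f"overall = {overall:.3f}")
print("user's     =", np.round(users, 3))
print("producer's =", np.round(producers, 3))
```

For this matrix the overall accuracy is 132/150 = 0.88; per-class values differ because commission and omission errors are not symmetric.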
Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting
Huang, Xiwei; Jiang, Yu; Liu, Xu; Xu, Hang; Han, Zhi; Rong, Hailong; Yang, Haiping; Yan, Mei; Yu, Hao
2016-01-01
A lensless blood cell counting system integrating a microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique to miniaturize the conventional optical-lens-based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution at the system level using super-resolution (SR) processing. Yet, how to improve resolution towards better cell detection and recognition with a low cost in processing resources and without degrading system throughput is still a challenge. In this article, two machine-learning-based single-frame SR processing methods are proposed and compared for lensless blood cell counting, namely Extreme Learning Machine based SR (ELMSR) and Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom-designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. When a captured low-resolution lensless cell image is input, an improved high-resolution cell image is output. The experimental results show that the cell resolution is improved by 4×, and CNNSR has a 9.5% improvement over ELMSR in resolution-enhancing performance. The cell counting results also match well with a commercial flow cytometer. ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems towards POCT applications. PMID:27827837
Chen, Chia-Wei; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung
2017-10-02
Recently, even low-end mobile phones have been equipped with a high-resolution complementary metal-oxide-semiconductor (CMOS) image sensor. This motivates using a CMOS image sensor for visible light communication (VLC). Here we propose and demonstrate an efficient demodulation scheme to synchronize and demodulate the rolling shutter pattern in image-sensor-based VLC. The implementation algorithm is discussed. The bit-error-rate (BER) performance and processing latency are evaluated and compared with other thresholding schemes.
Fast range estimation based on active range-gated imaging for coastal surveillance
NASA Astrophysics Data System (ADS)
Kong, Qingshan; Cao, Yinan; Wang, Xinwei; Tong, Youwan; Zhou, Yan; Liu, Yuliang
2012-11-01
Coastal surveillance is important for search and rescue, detection of illegal immigration, harbor security, and related tasks, and range estimation is critical for precisely detecting a target. A range-gated laser imaging sensor is suitable for high-accuracy ranging, especially at night with no moonlight. Generally, before the target is detected, the delay time must be varied until the target is captured. The range-gated imaging sensor has two operating modes: a passive imaging mode and a gate-viewing mode. First, the sensor operates in passive mode, only capturing scenes with the ICCD; once an object appears in the monitored area, its coarse range can be obtained from the imaging geometry/projective transform. Then, in gate-viewing mode, using microsecond laser pulses and the sensor gate width, the range of targets can be obtained from at least two consecutive images with a trapezoid-shaped range-intensity profile. Based on the first step, the rough range can be calculated and the delay time at which the target is detected can be fixed quickly. This technique overcomes the depth-resolution limitation of 3D active imaging and enables super-resolution depth mapping with reduced image data processing. With these two steps, the distance between the object and the sensor can be obtained quickly.
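The relation behind setting the gate delay is simply the round-trip travel time of light: a target that appears when the gate opens at delay τ lies at range r = cτ/2. A minimal sketch of that conversion (the trapezoid-profile depth refinement from paired gated images is omitted):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_delay_for_range(r_m):
    """Round-trip delay (s) at which to open the gate for a target at range r_m."""
    return 2.0 * r_m / C

def range_from_delay(tau_s):
    """Coarse target range (m) from the gate delay at which the target appears."""
    return C * tau_s / 2.0

# A target imaged when the gate opens about 6.67 microseconds after the
# laser pulse sits roughly 1 km from the sensor.
print(range_from_delay(6.67e-6))
```

In the two-step scheme above, the passive-mode geometric estimate seeds `gate_delay_for_range`, so the gate-viewing mode starts near the correct delay instead of sweeping the whole delay axis.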
Low cost, multiscale and multi-sensor application for flooded area mapping
NASA Astrophysics Data System (ADS)
Giordan, Daniele; Notti, Davide; Villa, Alfredo; Zucca, Francesco; Calò, Fabiana; Pepe, Antonio; Dutto, Furio; Pari, Paolo; Baldo, Marco; Allasia, Paolo
2018-05-01
Flood mapping and estimation of the maximum water depth are essential elements for the first damage evaluation, civil protection intervention planning and detection of areas where remediation is needed. In this work, we present and discuss a methodology for mapping and quantifying flood severity over floodplains. The proposed methodology considers a multiscale and multi-sensor approach using free or low-cost data and sensors. We applied this method to the November 2016 Piedmont (northwestern Italy) flood. We first mapped the flooded areas at the basin scale using free satellite data from low- to medium-high-resolution from both the SAR (Sentinel-1, COSMO-Skymed) and multispectral sensors (MODIS, Sentinel-2). Using very- and ultra-high-resolution images from the low-cost aerial platform and remotely piloted aerial system, we refined the flooded zone and detected the most damaged sector. The presented method considers both urbanised and non-urbanised areas. Nadiral images have several limitations, in particular in urbanised areas, where the use of terrestrial images solved this limitation. Very- and ultra-high-resolution images were processed with structure from motion (SfM) for the realisation of 3-D models. These data, combined with an available digital terrain model, allowed us to obtain maps of the flooded area, maximum high water area and damaged infrastructures.
Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy
2018-04-01
Optical tomography (OPT) is a method to capture a cross-sectional image based on data obtained by sensors distributed around the periphery of the analyzed system. The system is based on measuring the final light attenuation or absorption of radiation after it crosses the measured objects. The number of sensor views affects the results of image reconstruction: a higher number of sensor views per projection gives higher image quality. This research presents an application of a charge-coupled device linear sensor and a laser diode in an OPT system. Experiments in detecting solid and transparent objects in crystal-clear water were conducted. Two numbers of sensor views, 160 and 320, were evaluated for reconstructing the images. The image reconstruction algorithm used was a filtered linear back-projection algorithm. Comparison of the simulated and experimental image results shows that 320 views give a smaller area error than 160 views, suggesting that a higher number of views yields higher-resolution image reconstruction.
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
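The HPF injection step described above is easy to sketch: upsample the low-resolution MS band to the pan grid, then add the high-frequency residual of the pan image. Below, a nearest-neighbour upsampler and a 3×3 box-mean high-pass stand in for whatever filters the paper actually uses; this is an illustration of the principle, not the proposed method:

```python
import numpy as np

def upsample(ms, factor):
    """Nearest-neighbour upsampling of a low-resolution MS band."""
    return np.kron(ms, np.ones((factor, factor)))

def highpass(pan):
    """3x3 box high-pass: each pixel minus its local mean (edge-padded)."""
    p = np.pad(pan, 1, mode="edge")
    local_mean = sum(p[i:i + pan.shape[0], j:j + pan.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    return pan - local_mean

def hpf_fuse(ms_lr, pan, factor):
    """Inject the pan image's high frequencies into the upsampled MS band."""
    return upsample(ms_lr, factor) + highpass(pan)
```

Because only the pan image's high-frequency residual is injected, the low-frequency (spectral) content of the MS band is preserved, which is why this family of methods shows less spectral distortion than IHS-, PCA-, or BT-style substitution.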
Performance study of double SOI image sensors
NASA Astrophysics Data System (ADS)
Miyoshi, T.; Arai, Y.; Fujita, Y.; Hamasaki, R.; Hara, K.; Ikegami, Y.; Kurachi, I.; Nishimura, R.; Ono, S.; Tauchi, K.; Tsuboyama, T.; Yamada, M.
2018-02-01
Double silicon-on-insulator (DSOI) sensors composed of two thin silicon layers and one thick silicon layer have been developed since 2011. The thick substrate consists of high resistivity silicon with p-n junctions while the thin layers are used as SOI-CMOS circuitry and as shielding to reduce the back-gate effect and crosstalk between the sensor and the circuitry. In 2014, a high-resolution integration-type pixel sensor, INTPIX8, was developed based on the DSOI concept. This device is fabricated using a Czochralski p-type (Cz-p) substrate in contrast to a single SOI (SSOI) device having a single thin silicon layer and a Float Zone p-type (FZ-p) substrate. In the present work, X-ray spectra of both DSOI and SSOI sensors were obtained using an Am-241 radiation source at four gain settings. The gain of the DSOI sensor was found to be approximately three times that of the SSOI device because the coupling capacitance is reduced by the DSOI structure. An X-ray imaging demonstration was also performed and high spatial resolution X-ray images were obtained.
Organic-on-silicon complementary metal-oxide-semiconductor colour image sensors.
Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon
2015-01-12
Complementary metal-oxide-semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor.
Advanced radiometric and interferometric millimeter-wave scene simulations
NASA Technical Reports Server (NTRS)
Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.
1993-01-01
Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
Single Photon Counting Large Format Imaging Sensors with High Spatial and Temporal Resolution
NASA Astrophysics Data System (ADS)
Siegmund, O. H. W.; Ertley, C.; Vallerga, J. V.; Cremer, T.; Craven, C. A.; Lyashenko, A.; Minot, M. J.
High time resolution astronomical and remote sensing applications have been addressed with microchannel-plate-based imaging, photon-time-tagging detector sealed-tube schemes. These are being realized with the advent of cross strip readout techniques with high-performance encoding electronics and atomic layer deposited (ALD) microchannel plate technologies. Sealed-tube devices up to 20 cm square have now been successfully implemented with sub-nanosecond timing and imaging. The objective is to provide sensors with large areas (25 cm² to 400 cm²), spatial resolutions of <20 μm FWHM, and timing resolutions of <100 ps for dynamic imaging. New high-efficiency photocathodes for the visible regime are discussed, which also allow response down to below 150 nm for UV sensing. Borosilicate MCPs are providing high performance, and when processed with ALD techniques they provide order-of-magnitude lifetime improvements and enhanced photocathode stability. New developments include UV/visible photocathodes, ALD MCPs, and high-resolution cross strip anodes for 100 mm detectors. Tests with 50 mm format cross strip readouts suitable for Planacon devices show spatial resolutions better than 20 μm FWHM, with good image linearity while using low gain (~10⁶). Current cross strip encoding electronics can accommodate event rates of >5 MHz and event timing accuracy of 100 ps. High-performance ASIC versions of these electronics are in development with improved event rate, power, and mass, suitable for spaceflight instruments.
Enhancing Spatial Resolution of Remotely Sensed Imagery Using Deep Learning
NASA Astrophysics Data System (ADS)
Beck, J. M.; Bridges, S.; Collins, C.; Rushing, J.; Graves, S. J.
2017-12-01
Researchers at the Information Technology and Systems Center at the University of Alabama in Huntsville are using Deep Learning with Convolutional Neural Networks (CNNs) to develop a method for enhancing the spatial resolutions of moderate resolution (10-60m) multispectral satellite imagery. This enhancement will effectively match the resolutions of imagery from multiple sensors to provide increased global temporal-spatial coverage for a variety of Earth science products. Our research is centered on using Deep Learning for automatically generating transformations for increasing the spatial resolution of remotely sensed images with different spatial, spectral, and temporal resolutions. One of the most important steps in using images from multiple sensors is to transform the different image layers into the same spatial resolution, preferably the highest spatial resolution, without compromising the spectral information. Recent advances in Deep Learning have shown that CNNs can be used to effectively and efficiently upscale or enhance the spatial resolution of multispectral images with the use of an auxiliary data source such as a high spatial resolution panchromatic image. In contrast, we are using both the spatial and spectral details inherent in low spatial resolution multispectral images for image enhancement without the use of a panchromatic image. This presentation will discuss how this technology will benefit many Earth Science applications that use remotely sensed images with moderate spatial resolutions.
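A minimal sketch of the enhancement pipeline described above: upsample a low-resolution band, then let convolutional layers predict a sharpening residual. Here a fixed Laplacian kernel stands in for the trained CNN layers, since the abstract does not specify the network weights or architecture.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(img, kernel):
    """'Same' 2-D cross-correlation with zero padding."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    windows = sliding_window_view(padded, (kh, kw))
    return np.einsum('ijkl,kl->ij', windows, kernel)

def enhance(band_lr, scale=2):
    """SRCNN-style flow: upscale, predict a residual, add it back."""
    band_up = np.kron(band_lr, np.ones((scale, scale)))   # naive upsampling
    laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)
    residual = 0.25 * conv2d(band_up, laplacian)          # stand-in for learned layers
    return band_up + residual

lr = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))  # toy 8x8 band
hr = enhance(lr)
print(hr.shape)  # (16, 16)
```

In the trained version, the residual predictor is a stack of convolutions learned from pairs of low- and high-resolution multispectral patches rather than a hand-picked kernel.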
Evolution of miniature detectors and focal plane arrays for infrared sensors
NASA Astrophysics Data System (ADS)
Watts, Louis A.
1993-06-01
Sensors that are sensitive in the infrared spectral region have been under continuous development since the WW2 era. A quest for the military advantage of 'seeing in the dark' has pushed thermal imaging technology toward high spatial and temporal resolution for night vision equipment, fire control, search and track, and seeker 'homing' guidance sensing devices. Similarly, scientific applications have pushed spectral resolution for chemical analysis, remote sensing of earth resources, and astronomical exploration. As a result of these developments, focal plane arrays (FPAs) are now available with sufficient sensitivity for both high spatial and narrow-bandwidth spectral resolution imaging over large fields of view. Such devices, combined with emerging opto-electronic developments in integrated FPA data processing techniques, can yield miniature sensors capable of imaging reflected sunlight in the near IR and emitted thermal energy in the midwave (MWIR) and longwave (LWIR) IR spectral regions. Robotic space sensors equipped with advanced versions of these FPAs will provide high-resolution 'pictures' of their surroundings, perform remote analysis of solid, liquid, and gas matter, or selectively look for 'signatures' of specific objects. Evolutionary trends and projections of future low-power micro-detector FPA developments for day/night operation or use in adverse viewing conditions are presented in the following text.
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
NASA Astrophysics Data System (ADS)
Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.
2013-02-01
We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the background charge to keep the pixel from saturating. The subtraction results are then accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
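A toy numerical model of the scheme (illustrative rates and capacities, not the paper's circuit parameters) shows why splitting the integration preserves the signal:

```python
# The full integration time is split into N sub-integrations. In each one
# the pixel accumulates (signal + background) charge; the background
# estimate is subtracted immediately, so the pixel never reaches its
# full-well limit, and the N subtraction results are summed.
full_well = 1000.0          # saturation level (arbitrary units)
signal_rate = 30.0          # depth-signal charge per unit time
background_rate = 400.0     # strong ambient-light charge per unit time
T = 10.0                    # total integration time

def integrate(n_sub):
    accumulated, saturated = 0.0, False
    dt = T / n_sub
    for _ in range(n_sub):
        charge = (signal_rate + background_rate) * dt
        if charge > full_well:
            saturated = True
        charge = min(charge, full_well)               # pixel clips at full well
        accumulated += charge - background_rate * dt  # subtract background charge
    return accumulated, saturated

print(integrate(1))   # (-3000.0, True): one long exposure clips, signal lost
print(integrate(8))   # (300.0, False): N = 8 recovers signal_rate * T exactly
```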
A CMOS-based large-area high-resolution imaging system for high-energy x-ray applications
NASA Astrophysics Data System (ADS)
Rodricks, Brian; Fowler, Boyd; Liu, Chiao; Lowes, John; Haeffner, Dean; Lienert, Ulrich; Almer, John
2008-08-01
CCDs have been the primary sensors in imaging systems for X-ray diffraction and imaging applications in recent years. CCDs have met the fundamental requirements of low noise, high sensitivity, high dynamic range, and spatial resolution necessary for these scientific applications. State-of-the-art CMOS image sensor (CIS) technology has improved dramatically in recent years, and its performance is rivaling or surpassing that of most CCDs. The advancement of CIS technology proceeds at an ever-accelerating pace and is driven by the multi-billion-dollar consumer market. CIS offers several advantages over traditional CCDs and other solid-state imaging devices, including low power, high-speed operation, system-on-chip integration, and lower manufacturing costs. The combination of superior imaging performance and system advantages makes CIS a good candidate for high-sensitivity imaging system development. This paper describes a 1344 × 1212 CIS imaging system with a 19.5 μm pitch optimized for X-ray scattering studies at high energies. Fundamental metrics of linearity, dynamic range, spatial resolution, conversion gain, and sensitivity are estimated, as is the Detective Quantum Efficiency (DQE). Representative X-ray diffraction images are presented and compared against a CCD-based imaging system.
High-resolution panoramic images with megapixel MWIR FPA
NASA Astrophysics Data System (ADS)
Leboucher, Vincent; Aubry, Gilles
2014-06-01
Continuing its current strategy, HGH has maintained a sustained effort in developing its most recent product family: infrared (IR) panoramic 360-degree surveillance sensors. Over the last two years, HGH optimized its prototype midwave IR (MWIR) panoramic sensor, IR Revolution 360 HD, which gave rise to the Spynel-S product. Various test campaigns proved its excellent image quality. Cyclope, the software associated with Spynel, benefitted from recent image-processing improvements and new functionalities such as target geolocalization, long-range sensor slew to cue, and facilitated forensic analysis. In the frame of the PANORAMIR project supported by the DGA (Délégation Générale de l'Armement), HGH designed a new extra-large-resolution sensor including a MWIR megapixel Focal Plane Array (FPA) detector (1280×1024 pixels). This new sensor is called Spynel-X. It provides 360-degree images of outstanding resolution (more than 100 Mpixels). The mechanical frame of Spynel (-S and -X) was designed in collaboration with an industrial design agency. Spynel received the "Observeur du Design 2013" label.
Dorji, Passang; Fearns, Peter
2017-01-01
The impact of anthropogenic activities on coastal waters is a cause of concern because such activities add to the total suspended sediment (TSS) budget of the coastal waters, which has negative impacts on the coastal ecosystem. Satellite remote sensing provides a powerful tool for monitoring TSS concentration at high spatiotemporal resolution, but coastal managers should be mindful that satellite-derived TSS concentrations depend on the satellite sensor's radiometric properties, the atmospheric correction approach, the spatial resolution, and the limitations of the specific TSS algorithm. In this study, we investigated the impact of different satellite sensor spatial resolutions on the quantification of TSS concentration in coastal waters of northern Western Australia. We quantified the TSS product derived from the MODerate resolution Imaging Spectroradiometer (MODIS)-Aqua, Landsat-8 Operational Land Imager (OLI), and WorldView-2 (WV2) at native spatial resolutions of 250 m, 30 m, and 2 m, respectively, and at coarser spatial resolutions (resampled up to 5 km) to quantify the impact of spatial resolution on the derived TSS product under different turbidity conditions. The results show that in waters of high turbidity and high spatial variability, the high spatial resolution WV2 sensor reported TSS concentrations as high as 160 mg L⁻¹, while the low spatial resolution MODIS-Aqua reported a maximum TSS concentration of 23.6 mg L⁻¹. Degrading the spatial resolution of each satellite sensor for highly spatially variable turbid waters led to variability in the TSS concentrations of 114.46%, 304.68%, and 38.2% for WV2, Landsat-8 OLI, and MODIS-Aqua, respectively. The implications of this work are particularly relevant to compliance monitoring, where operations may be required to restrict TSS concentrations to a pre-defined limit.
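The resolution-degradation experiment can be mimicked with synthetic data (hypothetical TSS values in mg/L; simple block-averaging stands in for sensor-specific resampling). Coarser pixels average over the plume, so the retrievable concentration maximum necessarily drops:

```python
import numpy as np

# Hypothetical high-resolution TSS field with a few strong, localized peaks.
rng = np.random.default_rng(1)
tss_hr = rng.gamma(shape=1.5, scale=10.0, size=(64, 64))

def degrade(field, factor):
    """Aggregate pixels by block-averaging (factor x factor blocks)."""
    h, w = field.shape
    return field.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

for factor in (1, 4, 16):
    coarse = degrade(tss_hr, factor)
    print(factor, round(float(coarse.max()), 1))
# The maximum retrievable TSS drops as resolution coarsens, mirroring the
# WV2 (160 mg/L) vs. MODIS-Aqua (23.6 mg/L) contrast reported in the study.
```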
Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R. S.
2016-01-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulation techniques based on nanophotonics have opened up the possibility of an alternative way to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability, and tunable functionality. In this Review, recent representative achievements in nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors, and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. PMID:27239941
NASA Astrophysics Data System (ADS)
Scaduto, David A.; Lubinsky, Anthony R.; Rowlands, John A.; Kenmotsu, Hidenori; Nishimoto, Norihito; Nishino, Takeshi; Tanioka, Kenkichi; Zhao, Wei
2014-03-01
We have previously proposed SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout), a novel detector concept with potentially superior spatial resolution and low-dose performance compared with existing flat-panel imagers. The detector comprises a scintillator that is optically coupled to an amorphous selenium photoconductor operated with avalanche gain, known as high-gain avalanche rushing photoconductor (HARP). High resolution electron beam readout is achieved using a field emitter array (FEA). This combination of avalanche gain, allowing for very low-dose imaging, and electron emitter readout, providing high spatial resolution, offers potentially superior image quality compared with existing flat-panel imagers, with specific applications to fluoroscopy and breast imaging. Through the present collaboration, a prototype HARP sensor with integrated electrostatic focusing and nano-Spindt FEA readout technology has been fabricated. The integrated electron-optic focusing approach is more suitable for fabricating large-area detectors. We investigate the dependence of spatial resolution on sensor structure and operating conditions, and compare the performance of electrostatic focusing with previous technologies. Our results show a clear dependence of spatial resolution on electrostatic focusing potential, with performance approaching that of the previous design with external mesh-electrode. Further, temporal performance (lag) of the detector is evaluated and the results show that the integrated electrostatic focusing design exhibits comparable or better performance compared with the mesh-electrode design. This study represents the first technical evaluation and characterization of the SAPHIRE concept with integrated electrostatic focusing.
Radiometric and geometric assessment of data from the RapidEye constellation of satellites
Chander, Gyanesh; Haque, Md. Obaidul; Sampath, Aparajithan; Brunn, A.; Trosset, G.; Hoffmann, D.; Roloff, S.; Thiele, M.; Anderson, C.
2013-01-01
To monitor land surface processes over a wide range of temporal and spatial scales, it is critical to have coordinated observations of the Earth's surface using imagery acquired from multiple spaceborne imaging sensors. The RapidEye (RE) satellite constellation acquires high-resolution satellite images covering the entire globe within a very short period of time by sensors identical in construction and cross-calibrated to each other. To evaluate the RE high-resolution Multi-spectral Imager (MSI) sensor capabilities, a cross-comparison between the RE constellation of sensors was performed first using image statistics based on large common areas observed over pseudo-invariant calibration sites (PICS) by the sensors and, second, by comparing the on-orbit radiometric calibration temporal trending over a large number of calibration sites. For any spectral band, the individual responses measured by the five satellites of the RE constellation were found to differ <2–3% from the average constellation response depending on the method used for evaluation. Geometric assessment was also performed to study the positional accuracy and relative band-to-band (B2B) alignment of the image data sets. The position accuracy was assessed by comparing the RE imagery against high-resolution aerial imagery, while the B2B characterization was performed by registering each band against every other band to ensure that the proper band alignment is provided for an image product. The B2B results indicate that the internal alignments of these five RE bands are in agreement, with bands typically registered to within 0.25 pixels of each other or better.
Multiplexed 3D FRET imaging in deep tissue of live embryos
Zhao, Ming; Wan, Xiaoyang; Li, Yu; Zhou, Weibin; Peng, Leilei
2015-01-01
Current deep tissue microscopy techniques are mostly restricted to intensity mapping of fluorophores, which significantly limits their applications in investigating biochemical processes in vivo. We present a deep tissue multiplexed functional imaging method that probes multiple Förster resonance energy transfer (FRET) sensors in live embryos with high spatial resolution. The method simultaneously images fluorescence lifetimes in 3D with multiple excitation lasers. Through quantitative analysis of triple-channel intensity and lifetime images, we demonstrated that Ca²⁺ and cAMP levels of live embryos expressing dual FRET sensors can be monitored simultaneously at microscopic resolution. The method is compatible with a broad range of FRET sensors currently available for probing various cellular biochemical functions. It opens the door to imaging complex cellular circuitries in whole live organisms. PMID:26387920
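The lifetime readout behind such measurements follows the standard FRET relations (textbook formulas, stated here for context rather than taken from the abstract): the donor lifetime shortens in proportion to the transfer efficiency, which in turn tracks the donor-acceptor distance.

```latex
% Standard FRET relations:
%   \tau_{DA}: donor lifetime in the presence of the acceptor
%   \tau_{D} : donor-only lifetime
%   R_{0}    : Foerster radius;  r : donor-acceptor distance
E = 1 - \frac{\tau_{DA}}{\tau_{D}} = \frac{R_{0}^{6}}{R_{0}^{6} + r^{6}}
```

Measuring lifetimes rather than intensities makes the readout largely independent of fluorophore concentration and excitation depth, which is why lifetime imaging suits deep tissue.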
Ooe, Hiroaki; Fujii, Mikihiro; Tomitori, Masahiko; Arai, Toyoko
2016-02-01
High-Q-factor retuned fork (RTF) force sensors made from quartz tuning forks, and the electric circuits for the sensors, were evaluated and optimized to improve the performance of non-contact atomic force microscopy (nc-AFM) performed under ultrahigh vacuum (UHV) conditions. To exploit the high Q factor of the RTF sensor, the oscillation of the RTF sensor was excited at its resonant frequency, using a stray capacitance compensation circuit to cancel the excitation signal leaking through the stray capacitance of the sensor. To improve the signal-to-noise (S/N) ratio of the detected signal, a small capacitor was inserted before the input of an operational (OP) amplifier placed in a UHV chamber, which reduced the output noise from the amplifier. A low-noise, wideband OP amplifier produced a superior S/N ratio compared with a precision OP amplifier. The thermal vibrational density spectra of the RTF sensors were evaluated using the circuit. The RTF sensor with an effective spring constant as low as 1000 N/m provided a lower minimum detection limit for force differentiation. A nc-AFM image of a Si(111)-7 × 7 surface was produced with atomic resolution using the RTF sensor in constant frequency shift mode; tunneling current and energy dissipation images with atomic resolution were also produced simultaneously. The high-Q-factor RTF sensor showed potential for detecting energy dissipation as small as 1 meV/cycle and for high-resolution analysis of non-conservative force interactions.
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both spatial and spectral resolutions of spaceborne sensors are fixed by design and it is not possible to further increase the spatial or spectral resolution, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (hyperspectral sensor) and Advanced Land Imager (multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that the fused images using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, a combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of hyperspectral images becomes eight times less than its corresponding multispectral image. Regardless of what method of fusion is utilized, the main challenge in image fusion is image registration, which is also a very time intensive process. Because the combined regression wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. 
The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulted from the fusion process remarkably matched the ground truth, indicating the possibility of real time onboard fusion processing.
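A minimal single-level Haar fusion sketch conveys the core idea of the dissertation's approach: keep the approximation (spectral) content of the hyperspectral band and inject the detail (spatial) content of the co-registered multispectral band. The actual pipeline uses deeper wavelet decompositions plus the regression step described above; this toy version only swaps detail coefficients.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: approximation + 3 detail subbands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(hyper_band, multi_band):
    a_hs, *_ = haar2d(hyper_band)             # spectral (low-pass) content
    _, h_ms, v_ms, d_ms = haar2d(multi_band)  # spatial (high-pass) detail
    return ihaar2d(a_hs, h_ms, v_ms, d_ms)

rng = np.random.default_rng(2)
hs = rng.random((32, 32))   # stand-in hyperspectral band (co-registered)
ms = rng.random((32, 32))   # stand-in multispectral band
fused = fuse(hs, ms)

# The fused band keeps the hyperspectral band's low-pass radiometry exactly:
a_f, *_ = haar2d(fused)
a_hs, *_ = haar2d(hs)
print(np.allclose(a_f, a_hs))  # True
```

As the abstract notes, the whole scheme presumes accurate co-registration; misaligned inputs corrupt the injected detail rather than sharpening the spectral content.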
Zhang, Wenlu; Chen, Fengyi; Ma, Wenwen; Rong, Qiangzhou; Qiao, Xueguang; Wang, Ruohui
2018-04-16
A fringe-visibility-enhanced fiber-optic Fabry-Perot interferometer-based ultrasonic sensor is proposed and experimentally demonstrated for seismic physical model imaging. The sensor consists of a graded-index multimode fiber collimator and a PTFE (polytetrafluoroethylene) diaphragm that together form a Fabry-Perot interferometer. Owing to the increased spectral sideband slope of the sensor and the small Young's modulus of the PTFE diaphragm, a high response to both continuous and pulsed ultrasound, with a high SNR of 42.92 dB at 300 kHz, is achieved when the spectral sideband filter technique is used to interrogate the sensor. The reconstructed ultrasonic images can clearly differentiate the shapes of the models with high resolution.
Image formation analysis and high resolution image reconstruction for plenoptic imaging systems.
Shroff, Sapna A; Berkner, Kathrin
2013-04-01
Plenoptic imaging systems are often used for applications like refocusing, multimodal imaging, and multiview imaging. However, their resolution is limited to the number of lenslets. In this paper we investigate paraxial, incoherent, plenoptic image formation, and develop a method to recover some of the resolution for the case of a two-dimensional (2D) in-focus object. This enables the recovery of a conventional-resolution, 2D image from the data captured in a plenoptic system. We show simulation results for a plenoptic system with a known response and Gaussian sensor noise.
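Under the stated assumptions of a known response and Gaussian sensor noise, this kind of recovery corresponds to classical linear deconvolution. The sketch below uses a Wiener filter with a hypothetical Gaussian response (not the paper's actual plenoptic response) to show the principle:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
obj = np.zeros((n, n))
obj[24:40, 24:40] = 1.0                      # simple in-focus 2-D object

# Hypothetical system response: a Gaussian blur kernel (sigma = 2 px).
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n / 2)**2 + (yy - n / 2)**2) / (2 * 2.0**2))
psf /= psf.sum()
psf = np.fft.ifftshift(psf)                  # center the kernel at (0, 0)

# Forward model: convolution with the response plus Gaussian sensor noise.
H = np.fft.fft2(psf)
measured = np.real(np.fft.ifft2(np.fft.fft2(obj) * H))
measured += 0.01 * rng.standard_normal((n, n))

# Wiener filter: the classical linear estimator for this noise model.
nsr = 1e-3                                   # assumed noise-to-signal ratio
wiener = np.conj(H) / (np.abs(H)**2 + nsr)
estimate = np.real(np.fft.ifft2(np.fft.fft2(measured) * wiener))

blur_err = np.mean((measured - obj)**2)
rest_err = np.mean((estimate - obj)**2)
print(rest_err < blur_err)   # deconvolution reduces the reconstruction error
```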
Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.
Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K
2014-02-01
Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
Multiframe super resolution reconstruction method based on light field angular images
NASA Astrophysics Data System (ADS)
Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao
2017-12-01
The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.
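The subpixel-shift reconstruction can be illustrated with the classical shift-and-add scheme, a simplified stand-in for the paper's regularized super-resolution method: several low-resolution views with known subpixel shifts (here a 2×2 set of half-pixel offsets, standing in for angular images) are interleaved onto a finer grid.

```python
import numpy as np

def downsample(img, dy, dx, factor=2):
    """Shifted decimation: sample every `factor`-th pixel starting at (dy, dx)."""
    return img[dy::factor, dx::factor]

def shift_and_add(frames, factor=2):
    """Interleave shifted low-resolution frames onto the high-resolution grid."""
    h, w = frames[(0, 0)].shape
    hr = np.zeros((h * factor, w * factor))
    for (dy, dx), frame in frames.items():
        hr[dy::factor, dx::factor] = frame   # place each view on its subgrid
    return hr

truth = np.arange(64.0).reshape(8, 8)        # ground-truth high-resolution image
frames = {(dy, dx): downsample(truth, dy, dx) for dy in (0, 1) for dx in (0, 1)}
recovered = shift_and_add(frames)
print(np.array_equal(recovered, truth))  # True: exact in this noise-free toy case
```

With noise, blur, and non-ideal shifts, the interleaving is replaced by a regularized inverse of the observation model, as in the paper.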
Beyer, Hannes; Wagner, Tino; Stemmer, Andreas
2016-01-01
Frequency-modulation atomic force microscopy has turned into a well-established method to obtain atomic resolution on flat surfaces, but is often limited to ultra-high vacuum conditions and cryogenic temperatures. Measurements under ambient conditions are influenced by variations of the dew point and thin water layers present on practically every surface, complicating stable imaging with high resolution. We demonstrate high-resolution imaging in air using a length-extension resonator operating at small amplitudes. An additional slow feedback compensates for changes in the free resonance frequency, allowing stable imaging over a long period of time with changing environmental conditions.
Land use change detection based on multi-date imagery from different satellite sensor systems
NASA Technical Reports Server (NTRS)
Stow, Douglas A.; Collins, Doretta; Mckinsey, David
1990-01-01
An empirical study is conducted to assess the accuracy of land use change detection using satellite image data acquired ten years apart by sensors with differing spatial resolutions. The primary goals of the investigation were to (1) compare standard change detection methods applied to image data of varying spatial resolution, (2) assess whether to transform the raster grid of the higher resolution image data to that of the lower resolution raster grid or vice versa in the registration process, and (3) determine whether Landsat/Thematic Mapper or SPOT/High Resolution Visible multispectral data provide more accurate detection of land use changes when registered to historical Landsat/MSS data. It is concluded that image ratioing of multisensor, multidate satellite data produced higher change detection accuracies than did principal components analysis, and that it is useful as a land use change enhancement method.
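Image ratioing, the better-performing enhancement method here, can be sketched in a few lines (made-up digital numbers): the band ratio of two co-registered dates stays near 1 where the surface is unchanged and departs from 1 where land use changed.

```python
import numpy as np

# Co-registered brightness values for the same scene on two dates.
date1 = np.array([[100., 100., 100.],
                  [100.,  50., 100.],
                  [100., 100., 100.]])
date2 = np.array([[102.,  98., 101.],
                  [ 99., 150., 100.],
                  [101.,  97., 103.]])

ratio = date2 / (date1 + 1e-6)          # small epsilon avoids division by zero
changed = np.abs(ratio - 1.0) > 0.25    # threshold separates change from noise
print(changed.sum())  # 1: only the centre pixel flags as changed
```

The threshold absorbs small radiometric differences between sensors and dates, which is one reason ratioing works across multisensor pairs.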
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve the image decorrelation and entropy coding performance of FELICS, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
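The Golomb-Rice entropy stage named above can be sketched as follows (encoder only, with an illustrative fixed Rice parameter rather than the adaptive per-context selection used onboard): each prediction residual is mapped to a non-negative integer and split into a unary quotient plus a k-bit binary remainder.

```python
def rice_encode(value, k):
    """Encode a non-negative integer as unary quotient + k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    remainder = format(r, f'0{k}b') if k else ''
    return '1' * q + '0' + remainder

def zigzag(residual):
    """Map signed prediction residuals to non-negative integers."""
    return residual * 2 if residual >= 0 else -residual * 2 - 1

# After interpolation prediction, residuals are small and centred on zero,
# which is exactly where Rice codes are cheap.
residuals = [0, -1, 2, 1, 0, -3]
bitstream = ''.join(rice_encode(zigzag(r), k=1) for r in residuals)
print(len(bitstream))  # 17 bits for six residuals
```

The adaptive variant picks k per context from running residual statistics, so the code length tracks the local entropy.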
Radiation imaging with optically read out GEM-based detectors
NASA Astrophysics Data System (ADS)
Brunbauer, F. M.; Lupberger, M.; Oliveri, E.; Resnati, F.; Ropelewski, L.; Streli, C.; Thuiner, P.; van Stenis, M.
2018-02-01
Modern imaging sensors allow for high-granularity optical readout of radiation detectors such as MicroPattern Gaseous Detectors (MPGDs). Taking advantage of the high signal amplification factors achievable by MPGD technologies such as Gaseous Electron Multipliers (GEMs), highly sensitive detectors can be realised, and by employing gas mixtures with strong scintillation yield in the visible wavelength regime, optical readout of such detectors can provide high-resolution event representations. Applications from X-ray imaging to fluoroscopy and tomography benefit from the good spatial resolution of optical readout and the possibility of obtaining images without the need for extensive reconstruction. Sensitivity to low-energy X-rays and energy resolution permit energy-resolved imaging and material distinction in X-ray fluorescence measurements. Additionally, the low material budget of gaseous detectors and the possibility of coupling scintillation light to imaging sensors via fibres or mirrors make optically read out GEMs an ideal candidate for beam monitoring detectors in high energy physics as well as radiotherapy. We present applications and achievements of optically read out GEM-based detectors, including high spatial resolution imaging and X-ray fluorescence measurements, as an alternative readout approach for MPGDs. A detector concept for low-intensity applications such as X-ray crystallography, which maximises detection efficiency with a thick conversion region but mitigates parallax-induced broadening, is presented, and beam monitoring capabilities of optical readout are explored. By augmenting high-resolution 2D projections of particle tracks obtained with optical readout with timing information from fast photon detectors or transparent anodes for charge readout, 3D reconstruction of particle trajectories can be performed, permitting the realisation of optically read out time projection chambers.
Combining readily available high performance imaging sensors with compatible scintillating gases and the strong signal amplification factors achieved by MPGDs makes optical readout an attractive alternative to the common concept of electronic readout of radiation detectors. Outstanding signal-to-noise ratios and robustness against electronic noise allow unprecedented imaging capabilities for various applications in fields ranging from high energy physics to medical instrumentation.
Generating High-Temporal and Spatial Resolution TIR Image Data
NASA Astrophysics Data System (ADS)
Herrero-Huerta, M.; Lagüela, S.; Alfieri, S. M.; Menenti, M.
2017-09-01
Remote sensing imagery to monitor global biophysical dynamics requires thermal infrared (TIR) data at high temporal and spatial resolution because of the rapid development of crops during the growing season and the fragmentation of most agricultural landscapes. However, no single sensor meets these combined requirements. Data fusion approaches offer an alternative by exploiting observations from multiple sensors, providing data sets with better properties. A novel spatio-temporal data fusion model based on constrained algorithms, denoted the multisensor multiresolution technique (MMT), was developed and applied to generate synthetic TIR image data at high temporal and spatial resolution. First, an adaptive radiance model based on spectral unmixing analysis is applied: TIR radiance data at TOA (top of atmosphere) collected daily by MODIS at 1-km resolution and every 16 days by Landsat TIRS, sampled at 30-m resolution, are used to generate synthetic daily radiance images at TOA at 30-m spatial resolution. The next step consists of unmixing the 30-m (now lower-resolution) images using information about their pixel land-cover composition from co-registered images at higher spatial resolution. In our case study, the synthesized TIR data were unmixed to the 10-m resolution of the Sentinel-2 MSI. The constrained unmixing preserves all the available radiometric information of the 30-m images and involves optimizing the number of land-cover classes and the size of the moving window for spatial unmixing. Results are still being evaluated, with particular attention to the quality of the data streams required to apply our approach.
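The spatial-unmixing step described above rests on a linear mixing assumption: the radiance of each coarse pixel is the fraction-weighted sum of the (unknown) radiances of the land-cover classes inside it. A minimal sketch of that idea, not the authors' constrained MMT implementation, could look like:

```python
import numpy as np

def unmix_radiances(fractions, coarse_radiance):
    """Solve F @ r = y for per-class radiances r by least squares.

    fractions      : (n_pixels, n_classes) land-cover fractions per coarse pixel
    coarse_radiance: (n_pixels,) observed coarse-pixel radiances
    """
    r, *_ = np.linalg.lstsq(fractions, coarse_radiance, rcond=None)
    return r

# Toy example: two classes with true radiances 10.0 and 20.0
F = np.array([[0.5, 0.5], [0.8, 0.2], [0.2, 0.8]])
y = F @ np.array([10.0, 20.0])
print(unmix_radiances(F, y))  # recovers approximately [10. 20.]
```

The real method additionally constrains the solution (e.g., non-negativity, moving windows), which a plain least-squares solve does not.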
Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation
NASA Astrophysics Data System (ADS)
Song, Huihui
Remote sensing provides good measurements for monitoring and analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements for remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that, due to the limitations of hardware technology and budget constraints, they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, and swath width. To increase the spatial resolution of data while retaining these other good properties, one cost-effective solution is to explore data integration methods that fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking as a case study Landsat ETM+ (with a spatial resolution of 30 m and a temporal resolution of 16 days) and MODIS (with a spatial resolution of 250 m to 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of Landsat images with the daily temporal resolution of MODIS images. Motivated by the fact that images from these two sensors are comparable in corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from its MODIS counterpart on the prediction date. To learn the spatial details from the prior images well, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation.
Under the scenario of two prior Landsat-MODIS image pairs, we build the corresponding relationship between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., with only one Landsat-MODIS image pair available, we directly correlate MODIS and ETM+ data through an image degradation model. The fusion stage is then achieved by super-resolving the MODIS image combined with high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather of the region in which Shenzhen is located makes the capture cycle of high-quality satellite images longer than the sensors' normal revisit periods. Spatial-temporal fusion methods can tackle this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection. Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization.
By combining the spectral information from the hyperspectral image, which is characterized by low spatial resolution but high spectral resolution (abbreviated LSHS), and the spatial information from the multispectral image, which features high spatial resolution but low spectral resolution (abbreviated HSLS), this method aims to generate fused data with both high spatial and high spectral resolution. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, this method first extracts the spectral bases of the LSHS and HSLS images by making full use of the rich spectral information in the LSHS data. The spectral bases of these two categories of data then form a dictionary pair, owing to their correspondence in representing the pixel spectra of LSHS data and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of the LSHS data and the representation coefficients of the HSLS data, we finally derive fused data characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data.
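The fusion idea above can be illustrated with a toy sketch: reconstruct a high-spatial, high-spectral image as (spectral bases from the LSHS image) × (abundance coefficients estimated from the HSLS image). Here the endmember spectra are given directly rather than learned by sparse NMF, and `srf` is a hypothetical spectral response mapping hyperspectral bands to multispectral bands; this is a simplification of the method, not the authors' implementation:

```python
import numpy as np

def fuse(spectral_bases, srf, hsls):
    """spectral_bases: (n_bands_hs, n_endmembers) endmember spectra from LSHS data
    srf            : (n_bands_ms, n_bands_hs) spectral response mapping HS -> MS bands
    hsls           : (n_bands_ms, n_pixels) high-spatial-resolution multispectral pixels
    returns        : (n_bands_hs, n_pixels) fused high-spatial hyperspectral image
    """
    ms_bases = srf @ spectral_bases          # endmembers as seen by the MS sensor
    # Abundances per high-resolution pixel (least squares; the paper instead
    # uses sparse non-negative factorization)
    abundances, *_ = np.linalg.lstsq(ms_bases, hsls, rcond=None)
    return spectral_bases @ abundances
```

With consistent synthetic data (hyperspectral pixels that really are mixtures of the given endmembers), this reproduces the high-resolution hyperspectral scene exactly.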
A time-resolved image sensor for tubeless streak cameras
NASA Astrophysics Data System (ADS)
Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji
2014-03-01
This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager, with simple optics, realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11-μm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 μm.
Landsat multispectral sharpening using a sensor system model and panchromatic image
Lemeshewsky, G.P.; ,
2003-01-01
The thematic mapper (TM) sensor aboard Landsats 4 and 5 and the enhanced TM plus (ETM+) on Landsat 7 collect imagery at 30-m sample distance in six spectral bands. New with ETM+ is a 15-m panchromatic (P) band. With image sharpening techniques, this higher-resolution P data, or, as an alternative, the 10-m (or 5-m) P data of the SPOT satellite, can increase the spatial resolution of the multispectral (MS) data. Sharpening requires that the lower-resolution MS image be coregistered and resampled to the P data before high-spatial-frequency information is transferred to the MS data. For visual interpretation and machine classification tasks, it is important that the sharpened data preserve the spectral characteristics of the original low-resolution data. A technique was developed for sharpening (in this case, 3:1 spatial resolution enhancement) visible spectral band data, based on a model of the sensor system point spread function (PSF), in order to maintain spectral fidelity. It combines high-pass (HP) filter sharpening methods with iterative image restoration to reduce degradations caused by sensor-system-induced blurring and resampling. There is also a spectral fidelity requirement: the sharpened MS data, when filtered by the modeled degradations, should reproduce the low-resolution source MS data. Quantitative evaluation of sharpening performance was made using simulated low-resolution data generated from digital color-IR aerial photography. In comparison to the HP-filter-based sharpening method, results for the technique in this paper with simulated data show improved spectral fidelity. Preliminary results with TM 30-m visible band data sharpened with simulated 10-m panchromatic data are promising but require further study.
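The HP-filter baseline against which the paper compares can be sketched in a few lines: high-frequency detail extracted from the panchromatic band is added to the co-registered, upsampled MS band. The 3×3 box blur and the function names here are illustrative choices, not the paper's PSF-based restoration:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter (edges handled by reflection padding)."""
    p = k // 2
    padded = np.pad(img, p, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hp_sharpen(ms_band, pan):
    """Add the pan band's high-pass detail to an (already co-registered,
    upsampled) multispectral band."""
    return ms_band + (pan - box_blur(pan))
```

Note the spectral-fidelity weakness this baseline has: the injected detail is taken unmodified from the pan band, which is exactly what the paper's restoration-based method tries to improve on.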
NASA Astrophysics Data System (ADS)
Ni, Guangming; Liu, Lin; Zhang, Jing; Liu, Juanxiu; Liu, Yong
2018-01-01
With the development of the liquid crystal display (LCD) module industry, LCD modules are becoming more precise and larger in size, which imposes harsh imaging requirements on automated optical inspection (AOI). Here, we report a high-resolution, clearly focused imaging optomechatronic system for precise AOI inspection of LCD module bonding. It achieves high-resolution imaging for LCD module bonding AOI inspection using a line scan camera (LSC) triggered by a linear optical encoder, with self-adaptive focusing over the whole large imaging region using the LSC and a laser displacement sensor, which reduces the requirements on the machining, assembly, and motion control of AOI devices. Results show that this system can directly achieve clearly focused imaging for AOI inspection of large LCD module bonding with 0.8-μm image resolution, 2.65-mm scan imaging width, and theoretically unlimited imaging width. All of these are significant for AOI inspection in the LCD module industry and other fields that require imaging large regions with high resolution.
High resolution multiplexed functional imaging in live embryos (Conference Presentation)
NASA Astrophysics Data System (ADS)
Xu, Dongli; Zhou, Weibin; Peng, Leilei
2017-02-01
Fourier multiplexed fluorescence lifetime imaging (FmFLIM) scanning laser optical tomography (FmFLIM-SLOT) combines FmFLIM and scanning laser optical tomography (SLOT) to perform multiplexed 3D FLIM imaging of live embryos. The system has demonstrated multiplexed functional imaging of zebrafish embryos genetically expressing Förster resonance energy transfer (FRET) sensors. However, the previous system had a 20-micron resolution because the focused Gaussian beam diverges quickly away from the focal plane, making it difficult to achieve high-resolution imaging over a long projection depth. Here, we present a high-resolution FmFLIM-SLOT system with an achromatic Bessel beam, which achieves 3-micron resolution in 3D deep-tissue imaging. In Bessel-FmFLIM-SLOT, multiple laser excitation lines are first intensity-modulated by a Michelson interferometer with a spinning-polygon-mirror optical delay line, which enables Fourier-multiplexed multi-channel lifetime measurements. Then, a spatial light modulator and a prism are used to transform the modulated Gaussian laser beam into an achromatic Bessel beam. The achromatic Bessel beam scans across the whole specimen at equal angular intervals as the sample rotates. After tomographic reconstruction and frequency-domain lifetime analysis, both 3D intensity and lifetime images of multiple excitation-emission channels can be obtained. Using the Bessel-FmFLIM-SLOT system, we performed cellular-resolution FLIM tomography imaging of live zebrafish embryos. Genetically expressed FRET sensors in these embryos will allow non-invasive observation of multiple biochemical processes in vivo.
Image Registration of High-Resolution Uav Data: the New Hypare Algorithm
NASA Astrophysics Data System (ADS)
Bahr, T.; Jin, X.; Lasica, R.; Giessel, D.
2013-08-01
Unmanned aerial vehicles play an important role in present-day civilian and military intelligence. Equipped with a variety of sensors, such as SAR imaging modes and E/O and IR sensor technology, they are, due to their agility, suitable for many applications. Hence the necessity arises to use fusion technologies and to develop them continuously. Here an exact image-to-image registration is essential. It serves as the basis for important image processing operations such as georeferencing, change detection, and data fusion. We therefore developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie point generation and image registration. We demonstrate this approach by registering 39 still images from a high-resolution image stream, acquired with an Aeryon Photo3S™ camera on an Aeryon Scout micro-UAV™.
Propagation phasor approach for holographic image reconstruction
Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan
2016-01-01
To achieve high resolution and a wide field of view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were addressed separately using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between the hologram and sensor planes, the sample-to-sensor distance, the wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in a five- to seven-fold reduction in the number of raw measurements, while still achieving a competitive resolution and space-bandwidth product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears. PMID:26964671
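The phase-recovery challenge mentioned above is classically attacked by alternating projections between known amplitudes in two planes. The following Gerchberg-Saxton-style sketch illustrates that baseline only; it is not the propagation phasor method itself:

```python
import numpy as np

def gerchberg_saxton(obj_amp, fourier_amp, n_iter=200, seed=0):
    """Estimate a phase consistent with known amplitudes in the object plane
    and the Fourier (sensor) plane by alternating projections."""
    rng = np.random.default_rng(seed)
    field = obj_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, obj_amp.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = fourier_amp * np.exp(1j * np.angle(F))      # enforce sensor-plane amplitude
        field = np.fft.ifft2(F)
        field = obj_amp * np.exp(1j * np.angle(field))  # enforce object-plane amplitude
    return field
```

In lensfree holography the "sensor plane" constraint is a free-space propagation of the hologram rather than a plain FFT, and the propagation phasor framework additionally folds pixel super-resolution into the same update.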
NASA Astrophysics Data System (ADS)
Kuroda, R.; Sugawa, S.
2017-02-01
Ultra-high-speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array, for the visualization of UHS phenomena, are overviewed in this paper. The developed UHS CMOS image sensors consist of 400H × 256V pixels and 128 memories/pixel; a readout speed of 1 Tpixel/sec is obtained, enabling 10-Mfps full-resolution video capture of 128 consecutive frames and 20-Mfps half-resolution video capture of 256 consecutive frames. The first development model was employed in a high-speed video camera and put into practical use in 2012. Through the development of dedicated process technologies, photosensitivity improvement and power consumption reduction were achieved simultaneously, and the improved version has been used in the commercialized high-speed video camera since 2015, offering 10 Mfps with ISO 16,000 photosensitivity. Due to the improved photosensitivity, clear images can be captured and analyzed even under low-light conditions, such as under a microscope, as well as in the capture of UHS light-emission phenomena.
FPscope: a field-portable high-resolution microscope using a cellphone lens.
Dong, Siyuan; Guo, Kaikai; Nanda, Pariksheet; Shiradkar, Radhika; Zheng, Guoan
2014-10-01
The large consumer market has made cellphone lens modules available at low cost and in high quality. In a conventional cellphone camera, the lens module is used to demagnify the scene onto the image plane of the camera, where the image sensor is located. In this work, we report a 3D-printed high-resolution Fourier ptychographic microscope, termed FPscope, which uses a cellphone lens in reverse. In our platform, we replace the image sensor with the sample specimen and use the cellphone lens to project the magnified image onto the detector. To supersede the diffraction limit of the lens module, we use an LED array to illuminate the sample from different incident angles and synthesize the acquired images using the Fourier ptychographic algorithm. As a demonstration, we use the reported platform to acquire high-resolution images of a resolution target and biological specimens, with a maximum synthetic numerical aperture (NA) of 0.5. We also show that the depth of focus of the reported platform is about 0.1 mm, orders of magnitude longer than that of a conventional microscope objective with a similar NA. The reported platform may enable healthcare access in low-resource settings. It can also be used to demonstrate the concept of computational optics for educational purposes.
Computational imaging through a fiber-optic bundle
NASA Astrophysics Data System (ADS)
Lodhi, Muhammad A.; Dumas, John Paul; Pierce, Mark C.; Bajwa, Waheed U.
2017-05-01
Compressive sensing (CS) has proven to be a viable method for reconstructing high-resolution signals from low-resolution measurements. Integrating CS principles into an optical system allows for higher-resolution imaging using lower-resolution sensor arrays. In contrast to prior work on CS-based imaging, our focus in this paper is on imaging through fiber-optic bundles, in which manufacturing constraints limit individual fiber spacing to around 2 μm. This limitation essentially renders fiber-optic bundles low-resolution sensors with relatively few resolvable points per unit area. These fiber bundles are often used in minimally invasive medical instruments for viewing tissue at macro and microscopic levels. While the compact nature and flexibility of fiber bundles allow for excellent tissue access in vivo, imaging through fiber bundles does not provide the fine details of tissue features that are demanded in some medical situations. Our hypothesis is that adapting existing CS principles to fiber-bundle-based optical systems will overcome the resolution limitation inherent in fiber-bundle imaging. In a previous paper we examined the practical challenges involved in implementing a highly parallel version of the single-pixel camera while focusing on synthetic objects. This paper extends the same architecture to fiber-bundle imaging under incoherent illumination and addresses some practical issues associated with imaging physical objects. Additionally, we model the optical non-idealities in the system to reduce modelling errors.
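The reconstruction behind such a CS system is typically a sparse-recovery solver. A minimal iterative shrinkage-thresholding (ISTA) sketch, assuming a generic measurement matrix `A` rather than the paper's fiber-bundle optics, could be:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative shrinkage-thresholding: recover a sparse x from y = A @ x
    by minimizing 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L                          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x
```

In an imaging context `A` would encode the optical measurement process (here, the fiber bundle's mapping from scene to sensor) and `x` the scene in a sparsifying basis; FISTA or ADMM would normally replace plain ISTA for speed.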
USDA-ARS?s Scientific Manuscript database
Many societal applications of soil moisture data products require high spatial resolution and numerical accuracy. Current thermal geostationary satellite sensors (GOES Imager and GOES-R ABI) could produce 2-16km resolution soil moisture proxy data. Passive microwave satellite radiometers (e.g. AMSR...
Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S
2016-09-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulation techniques based on nanophotonics have opened up the possibility of an alternative way to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability, and tunable functionality. In this Review, recent representative achievements in nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors, and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
High-resolution CCD imaging alternatives
NASA Astrophysics Data System (ADS)
Brown, D. L.; Acker, D. E.
1992-08-01
High-resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time high-definition television (HDTV) systems are now being used, or considered for use, in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost- and time-saving advantages over the use of film in such applications. Further, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners: a CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers an array of roughly 750 x 580 picture elements (pixels), or a total of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interlined CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.
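The pixel-shift idea can be illustrated as follows: if four exposures are captured at half-pixel offsets, their samples interleave onto a grid of twice the density in each axis. This sketch assumes ideal, noise-free shifts, which real sensor-shifting hardware only approximates:

```python
import numpy as np

def interleave_shifts(imgs):
    """Combine four images captured at (0,0), (0,1/2), (1/2,0), (1/2,1/2)
    pixel offsets into one image with twice the sampling density per axis."""
    i00, i01, i10, i11 = imgs
    h, w = i00.shape
    out = np.empty((2 * h, 2 * w), dtype=i00.dtype)
    out[0::2, 0::2] = i00   # un-shifted exposure fills the even grid sites
    out[0::2, 1::2] = i01   # horizontally shifted exposure
    out[1::2, 0::2] = i10   # vertically shifted exposure
    out[1::2, 1::2] = i11   # diagonally shifted exposure
    return out
```

In practice sub-pixel registration error and the finite fill factor of each photosite blur this ideal interleaving, so real systems follow it with a restoration step.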
Time stamping of single optical photons with 10 ns resolution
NASA Astrophysics Data System (ADS)
Chakaberia, Irakli; Cotlet, Mircea; Fisher-Levine, Merlin; Hodges, Diedra R.; Nguyen, Jayke; Nomerotski, Andrei
2017-05-01
High spatial and temporal resolution are key features for many modern applications, e.g., mass spectrometry, probing the structure of materials via neutron scattering, and studying molecular structure. Fast imaging also provides the capability of coincidence detection, and the further addition of sensitivity to single optical photons, with the capability of timestamping them, broadens the field of potential applications further. Photon counting is already widely used in X-ray imaging, where the high energy of the photons makes their detection easier. TimepixCam is a novel optical imager which achieves high spatial resolution using an array of 256×256 pixels of 55 μm × 55 μm with individually controlled functionality. It is based on a thin-entrance-window silicon sensor bump-bonded to a Timepix ASIC. TimepixCam provides high quantum efficiency in the optical wavelength range (400-1000 nm). We perform the timestamping of single photons with a time resolution of 20 ns by coupling TimepixCam to a fast image intensifier with a P47 phosphor screen. The fast emission time of the P47 allows us to preserve good time resolution while maintaining the capability to focus the optical output of the intensifier onto the 256×256-pixel Timepix sensor area. We demonstrate the capability of the TimepixCam + image-intensifier setup to provide high-resolution single-photon timestamping with an effective frame rate of 50 MHz.
NASA Astrophysics Data System (ADS)
Liebel, L.; Körner, M.
2016-06-01
In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super-resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations, e.g., segmentation or feature extraction, can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super-resolution of conventional photographs, making use of deep learning techniques such as convolutional neural networks (CNNs), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable with conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
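A CNN for single-image super-resolution in the spirit described above (e.g., the classic three-layer SRCNN design: patch extraction, non-linear mapping, reconstruction) can be sketched with plain NumPy; the layer sizes and random weights here are illustrative, not the authors' trained network:

```python
import numpy as np

def conv2d(x, w, b):
    """'Valid' 2-D cross-correlation, as used in CNN layers.
    x: (C_in, H, W), w: (C_out, C_in, k, k), b: (C_out,)."""
    c_out, c_in, k, _ = w.shape
    h, wd = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(k):
                for dx in range(k):
                    out[o] += w[o, i, dy, dx] * x[i, dy:dy + h, dx:dx + wd]
        out[o] += b[o]
    return out

def srcnn_forward(y, params):
    """Three-layer SRCNN-style mapping applied to a bicubic-upsampled input y."""
    (w1, b1), (w2, b2), (w3, b3) = params
    f = np.maximum(conv2d(y, w1, b1), 0)   # patch extraction + ReLU
    f = np.maximum(conv2d(f, w2, b2), 0)   # non-linear mapping + ReLU
    return conv2d(f, w3, b3)               # reconstruction
```

A real implementation would use a deep-learning framework with learned weights and same-padding; the point here is only the architecture of the mapping from an upsampled low-resolution band to a detail-enhanced one.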
Low-Light Image Enhancement Using Adaptive Digital Pixel Binning
Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki
2015-01-01
This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
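The binning idea above can be sketched as a brightness-dependent blend between the full-resolution pixel and its 2×2-binned neighborhood mean; the threshold and blending rule below are hypothetical simplifications of the proposed algorithm, which also weighs context, noise level, and anti-saturation:

```python
import numpy as np

def adaptive_binning(img, dark_thresh=64.0):
    """Blend each pixel with its 2x2-binned value: dark regions get more
    binning (noise reduction), bright regions keep full resolution.
    Assumes even image dimensions."""
    h, w = img.shape
    binned = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    binned_up = np.repeat(np.repeat(binned, 2, axis=0), 2, axis=1)
    alpha = np.clip(binned_up / dark_thresh, 0.0, 1.0)  # 0 = fully binned
    return alpha * img + (1.0 - alpha) * binned_up
```

Because the blend weight depends only on local values, this keeps the line-memory-friendly character the abstract emphasizes: no iteration and no full-frame buffer are needed.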
Narrow-Band Organic Photodiodes for High-Resolution Imaging.
Han, Moon Gyu; Park, Kyung-Bae; Bulliard, Xavier; Lee, Gae Hwang; Yun, Sungyoung; Leem, Dong-Seok; Heo, Chul-Joon; Yagi, Tadao; Sakurai, Rie; Ro, Takkyun; Lim, Seon-Jeong; Sul, Sangchul; Na, Kyoungwon; Ahn, Jungchak; Jin, Yong Wan; Lee, Sangyoon
2016-10-05
There are growing opportunities and demands for image sensors that produce higher-resolution images, even in low-light conditions. Increasing the light input areas through 3D architecture within the same pixel size can be an effective solution to address this issue. Organic photodiodes (OPDs) that possess wavelength selectivity can allow for advancements in this regard. Here, we report on novel push-pull D-π-A dyes specially designed for Gaussian-shaped, narrow-band absorption and the high photoelectric conversion. These p-type organic dyes work both as a color filter and as a source of photocurrents with linear and fast light responses, high sensitivity, and excellent stability, when combined with C60 to form bulk heterojunctions (BHJs). The effectiveness of the OPD composed of the active color filter was demonstrated by obtaining a full-color image using a camera that contained an organic/Si hybrid complementary metal-oxide-semiconductor (CMOS) color image sensor.
A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors
Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner
2014-01-01
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
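The ICP fitting step mentioned above repeatedly solves for the rigid transform that best aligns matched points. Its core least-squares (Kabsch/SVD) step, with correspondences assumed known, can be sketched as:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t
    (the closed-form step used inside each ICP iteration).
    src, dst: (n_points, 3) arrays of corresponding points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Full ICP alternates this solve with nearest-neighbor correspondence search; the surface-feature-histogram matching in the paper supplies the initial object-to-object assignment before ICP refines the pose.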
NASA Technical Reports Server (NTRS)
2002-01-01
Roughly a dozen fires (red pixels) dotted the landscape on the main Philippine island of Luzon on April 1, 2002. This true-color image was acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra spacecraft. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of this scene at the sensor's fullest resolution, visit the MODIS Rapidfire site.
Fearns, Peter
2017-01-01
The impact of anthropogenic activities on coastal waters is a cause of concern because such activities add to the total suspended sediment (TSS) budget of coastal waters, which has negative impacts on the coastal ecosystem. Satellite remote sensing provides a powerful tool for monitoring TSS concentration at high spatiotemporal resolution, but coastal managers should be mindful that satellite-derived TSS concentrations depend on the satellite sensor's radiometric properties, the atmospheric correction approach, the spatial resolution and the limitations of specific TSS algorithms. In this study, we investigated the impact of different satellite sensor spatial resolutions on the quantification of TSS concentration in coastal waters of northern Western Australia. We quantified the TSS product derived from the MODerate resolution Imaging Spectroradiometer (MODIS)-Aqua, Landsat-8 Operational Land Imager (OLI), and WorldView-2 (WV2) at native spatial resolutions of 250 m, 30 m and 2 m, respectively, and at coarser spatial resolutions (resampled up to 5 km) to quantify the impact of spatial resolution on the derived TSS product under different turbidity conditions. The results show that in waters of high turbidity and high spatial variability, the high spatial resolution WV2 sensor reported TSS concentrations as high as 160 mg L-1 while the low spatial resolution MODIS-Aqua reported a maximum TSS concentration of 23.6 mg L-1. Degrading the spatial resolution of each satellite sensor for highly spatially variable turbid waters led to variability in the TSS concentrations of 114.46%, 304.68% and 38.2% for WV2, Landsat-8 OLI and MODIS-Aqua, respectively. The implications of this work are particularly relevant for compliance monitoring, where operations may be required to restrict TSS concentrations to a pre-defined limit. PMID:28380059
Lensless transport-of-intensity phase microscopy and tomography with a color LED matrix
NASA Astrophysics Data System (ADS)
Zuo, Chao; Sun, Jiasong; Zhang, Jialin; Hu, Yan; Chen, Qian
2015-07-01
We demonstrate lens-less quantitative phase microscopy and diffraction tomography on a compact on-chip platform, using only a CMOS image sensor and a programmable color LED array. Based on multi-wavelength transport-of-intensity phase retrieval and multi-angle illumination diffraction tomography, this platform offers high-quality, depth-resolved images with a lateral resolution of ˜3.7 μm and an axial resolution of ˜5 μm, over a large imaging FOV of 24 mm2. The resolution and FOV can be straightforwardly improved further by using a larger image sensor with smaller pixels. This compact, low-cost, robust, portable platform with decent imaging performance may offer a cost-effective tool for telemedicine, or for reducing health-care costs for point-of-care diagnostics in resource-limited environments.
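The transport-of-intensity retrieval underlying such platforms can be illustrated with a minimal FFT-based solver. Under the simplifying assumption of uniform intensity I0, the TIE, k dI/dz = -div(I grad(phi)), reduces to a Poisson equation for the phase; the function `tie_phase` below is an illustrative sketch under that assumption, not the authors' multi-wavelength algorithm.

```python
import numpy as np

def tie_phase(dIdz, I0, k, dx):
    """Recover phase from the axial intensity derivative via the
    transport-of-intensity equation, assuming uniform intensity I0:
    laplacian(phi) = -(k / I0) * dI/dz, solved with an FFT Poisson solver."""
    ny, nx = dIdz.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    lap = -4.0 * np.pi ** 2 * (FX ** 2 + FY ** 2)  # Fourier symbol of the Laplacian
    lap[0, 0] = 1.0                                # DC phase offset is arbitrary
    rhs = -k * dIdz / I0
    phi_hat = np.fft.fft2(rhs) / lap
    phi_hat[0, 0] = 0.0                            # fix the arbitrary offset at zero
    return np.real(np.fft.ifft2(phi_hat))
```

Real implementations handle non-uniform intensity and regularize the low frequencies, where this inversion is ill-conditioned.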
2008-01-01
Distributed network-based battle management High performance computing supporting uniform and nonuniform memory access with single and multithreaded...pallet Airborne EO/IR and radar sensors VNIR through SWIR hyperspectral systems VNIR, MWIR, and LWIR high-resolution systems Wideband SAR systems...meteorological sensors Hyperspectral sensor systems (PHILLS) Mid-wave infrared (MWIR) Indium Antimonide (InSb) imaging system Long-wave infrared (LWIR
Wavefront sensor-driven variable-geometry pupil for ground-based aperture synthesis imaging
NASA Astrophysics Data System (ADS)
Tyler, David W.
2000-07-01
I describe a variable-geometry pupil (VGP) to increase image resolution for ground-based near-IR and optical imaging. In this scheme, a curvature-type wavefront sensor provides an estimate of the wavefront curvature to the controller of a high-resolution spatial light modulator (SLM) or micro-electromechanical (MEM) mirror, positioned at an image of the telescope pupil. This optical element, the VGP, passes or reflects the incident beam only where the wavefront phase is sufficiently smooth, viz., where the curvature is sufficiently low. Using a computer simulation, I show the VGP can sharpen and smooth the long-exposure PSF and increase the OTF SNR for tilt-only and low-order AO systems, allowing higher resolution and more stable deconvolution with dimmer AO guidestars.
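The VGP's pass/block decision can be sketched as thresholding a curvature estimate of the measured phase screen. A minimal illustration, where `curvature` (a discrete Laplacian with periodic boundaries) and `vgp_mask` are hypothetical helper names:

```python
import numpy as np

def curvature(phase, dx):
    """Discrete Laplacian of the phase screen as a curvature estimate."""
    return (np.roll(phase, 1, 0) + np.roll(phase, -1, 0)
          + np.roll(phase, 1, 1) + np.roll(phase, -1, 1)
          - 4.0 * phase) / dx ** 2

def vgp_mask(curv, thresh):
    """Binary variable-geometry pupil: transmit only where the wavefront
    curvature magnitude is below the threshold."""
    return (np.abs(curv) < thresh).astype(float)
```

A real system would apply the mask at the SLM/MEM conjugate pupil plane each wavefront-sensor frame.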
Fires and Heavy Smoke in Alaska
NASA Technical Reports Server (NTRS)
2002-01-01
On May 28, 2002, the Moderate Resolution Imaging Spectroradiometer (MODIS) captured this image of fires that continue to burn in central Alaska. Alaska is very dry and warm for this time of year, and has experienced over 230 wildfires so far this season. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of the scene at the sensor's fullest resolution, visit the MODIS Rapid Response Image Gallery.
Building Change Detection in Very High Resolution Satellite Stereo Image Time Series
NASA Astrophysics Data System (ADS)
Tian, J.; Qin, R.; Cerra, D.; Reinartz, P.
2016-06-01
There is an increasing demand for robust methods for urban sprawl monitoring. The steadily increasing number of high resolution and multi-view sensors allows producing datasets with high temporal and spatial resolution; however, less effort has been dedicated to employing very high resolution (VHR) satellite image time series (SITS) to monitor building changes with higher accuracy. In addition, these VHR data are often acquired from different sensors. The objective of this research is to propose a robust time-series data analysis method for VHR stereo imagery. Firstly, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from them are combined, and building probability maps (BPM) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed based on the derivative features of the BPM sets. The change consistency between the object level and the pixel level is checked to remove outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results prove the efficiency of the proposed method.
Fiber-Laser-Based Ultrasound Sensor for Photoacoustic Imaging
Liang, Yizhi; Jin, Long; Wang, Lidai; Bai, Xue; Cheng, Linghao; Guan, Bai-Ou
2017-01-01
Photoacoustic imaging, especially for intravascular and endoscopic applications, requires ultrasound probes with miniature size and high sensitivity. In this paper, we present a new photoacoustic sensor based on a small-sized fiber laser. Incident ultrasound waves exert pressure on the optical fiber laser and induce harmonic vibrations of the fiber, which are detected via the frequency shift of the beat signal between the two orthogonal polarization modes of the fiber laser. This ultrasound sensor achieves a noise-equivalent pressure of 40 Pa over a 50-MHz bandwidth. We demonstrate the new ultrasound sensor in an optical-resolution photoacoustic microscope. The axial and lateral resolutions are 48 μm and 3.3 μm, respectively. The field of view is up to 1.57 mm2. The sensor exhibits strong resistance to environmental perturbations, such as temperature changes, due to common-mode cancellation between the two orthogonal modes. The present fiber laser ultrasound sensor offers a new tool for all-optical photoacoustic imaging. PMID:28098201
A Multi-Sensor Aerogeophysical Study of Afghanistan
2007-01-01
magnetometer coupled with an Applied Physics 539 3-axis fluxgate magnetometer for compensation of the aircraft field; • an Applanix DSS 301 digital...survey. DATA COLLECTION AND PROCESSING Photogrammetry More than 65,000 high-resolution photogrammetric images were collected using an Applanix Digital...HSI L-Band Polarimetric Imaging Radar KGPS Dual Gravity Meters Common Sensor Bomb-bay Pallet Applanix DSS Camera Sensor Suite • Magnetometer • Gravity
Turbulent Mixing and Combustion for High-Speed Air-Breathing Propulsion Application
2007-08-12
deficit (the velocity of the wake relative to the free-stream velocity), decays rapidly with downstream distance, so that the streamwise velocity is...switched laser with double-pulse option) and a new imaging system (high-resolution: 4008x2672 pix2, low-noise (cooled) Cooke PCO-4000 CCD camera). The...was designed in-house for high-speed low-noise image acquisition. The KFS CCD image sensor was designed by Mark Wadsworth of JPL and has a resolution
High-resolution streaming video integrated with UGS systems
NASA Astrophysics Data System (ADS)
Rohrer, Matthew
2010-04-01
Imagery has proven to be a valuable complement to Unattended Ground Sensor (UGS) systems. It provides ultimate verification of the nature of detected targets. However, due to the power, bandwidth, and technological limitations inherent to UGS, sacrifices have been made to the imagery portion of such systems. The result is that these systems produce lower resolution images in small quantities. Currently, a high resolution, wireless imaging system is being developed to bring megapixel, streaming video to remote locations to operate in concert with UGS. This paper will provide an overview of how using Wifi radios, new image based Digital Signal Processors (DSP) running advanced target detection algorithms, and high resolution cameras gives the user an opportunity to take high-powered video imagers to areas where power conservation is a necessity.
NASA Astrophysics Data System (ADS)
Jerram, P. A.; Fryer, M.; Pratlong, J.; Pike, A.; Walker, A.; Dierickx, B.; Dupont, B.; Defernez, A.
2017-11-01
CCDs have been used for many years in hyperspectral imaging missions and have been extremely successful. These include the Medium Resolution Imaging Spectrometer (MERIS) [1] on Envisat, the Compact High Resolution Imaging Spectrometer (CHRIS) on Proba and the Ozone Monitoring Instrument operating in the UV spectral region. ESA is also planning a number of further missions that are likely to use CCD technology (Sentinel 3, 4 and 5). However, CMOS sensors have a number of advantages which mean that they will probably be used for hyperspectral applications in the longer term. There are two main advantages of CMOS sensors. First, a hyperspectral image consists of spectral lines with a large difference in intensity; in a frame-transfer CCD the faint spectral lines have to be transferred through the part of the imager illuminated by intense lines. This can lead to cross-talk, and whilst the problem can be reduced by the use of split frame transfer and faster line rates, CMOS sensors do not require a frame transfer and hence inherently do not suffer from it. Second, with a CMOS sensor the intense spectral lines can be read multiple times within a frame to give a significant increase in dynamic range. We describe the design and initial tests of a CMOS sensor for use in hyperspectral applications. This device has been designed to give as high a dynamic range as possible with minimum cross-talk. The sensor has been manufactured on high-resistivity epitaxial silicon wafers and has been back-thinned and left relatively thick in order to obtain the maximum quantum efficiency across the entire spectral range.
Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation
NASA Astrophysics Data System (ADS)
Nakata, Robert
Remote sensing has many applications, including surveying and mapping, geophysics exploration, military surveillance, search and rescue and counter-terrorism operations. Remote sensor systems typically use visible-image, infrared or radar sensors. Camera-based image sensors can provide high spatial resolution but are limited to line-of-sight capture during daylight. Infrared sensors have lower resolution but can operate during darkness. Radar sensors can provide high resolution motion measurements, even when obscured by weather, clouds and smoke, and can penetrate walls and collapsed structures constructed of non-metallic materials up to 1 m to 2 m in depth, depending on the wavelength and transmitter power level. However, any platform motion will degrade the target signal of interest. In this dissertation, we investigate alternative methodologies to capture platform motion, including a Body Area Network (BAN) that doesn't require external fixed-location sensors, allowing full mobility of the user. We also investigated platform stabilization and motion compensation techniques to reduce and remove the signal distortion introduced by platform motion. We evaluated secondary ultrasonic and radar sensors to stabilize the platform, resulting in an average 5 dB improvement in Signal to Interference Ratio (SIR). We also implemented a Digital Signal Processing (DSP) motion compensation algorithm that improved the SIR by 18 dB on average. These techniques could be deployed on a quadcopter platform and enable the detection of respiratory motion using an onboard radar sensor.
Polar research from satellites
NASA Technical Reports Server (NTRS)
Thomas, Robert H.
1991-01-01
In the polar regions and climate change section, the topics of ocean/atmosphere heat transfer, trace gases, surface albedo, and response to climate warming are discussed. The satellite instruments section is divided into three parts. Part one is about basic principles and covers choice of frequencies, algorithms, orbits, and remote sensing techniques. Part two is about passive sensors and covers microwave radiometers, medium-resolution visible and infrared sensors, advanced very high resolution radiometers, optical line scanners, the earth radiation budget experiment, the coastal zone color scanner, high-resolution imagers, and atmospheric sounding. Part three is about active sensors and covers synthetic aperture radar, radar altimeters, scatterometers, and lidar. There is also a next-decade section, followed by a summary and recommendations section.
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
The resolution of cameras has improved drastically in response to the demand for high-quality digital images. For example, digital still cameras now offer several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera; thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
Development of High Resolution Eddy Current Imaging Using an Electro-Mechanical Sensor (Preprint)
2011-11-01
The Fluxgate Magnetometer,” J. Phys. E: Sci. Instrum., Vol. 12: 241-253. 13. A. Abedi, J. J. Fellenstein, A. J. Lucas, and J. P. Wikswo, Jr., “A...206 (2006). 11. Ripka, P., 1992, Review of Fluxgate Sensors, Sensors and Actuators, A. 33, Elsevier Sequoia: 129-141. 12. Primdahl, F., 1979...superconducting quantum interference device magnetometer system for quantitative analysis and imaging of hidden corrosion activity in aircraft aluminum
NASA Astrophysics Data System (ADS)
Cha, B. K.; Kim, J. Y.; Kim, Y. J.; Yun, S.; Cho, G.; Kim, H. K.; Seo, C.-W.; Jeon, S.; Huh, Y.
2012-04-01
In digital X-ray imaging systems, X-ray imaging detectors based on scintillating screens coupled with electronic devices such as charge-coupled devices (CCDs), thin-film transistors (TFTs) and complementary metal oxide semiconductor (CMOS) flat panel imagers have been introduced for general radiography, dental, mammography and non-destructive testing (NDT) applications. Recently, large-area CMOS active-pixel sensors (APS) in combination with scintillation films have been widely used in a variety of digital X-ray imaging applications. We employed a scintillator-based CMOS APS image sensor for high-resolution mammography. In this work, both powder-type Gd2O2S:Tb and columnar-structured CsI:Tl scintillation screens of various thicknesses were fabricated and used as materials to convert X-rays into visible light. These scintillating screens were directly coupled to a CMOS flat panel imager with a 25 × 50 mm2 active area and a 48 μm pixel pitch for high-spatial-resolution acquisition. We used a W/Al mammographic X-ray source at 30 kVp. The imaging characteristics of the X-ray detector were measured and analyzed in terms of linearity with incident X-ray dose, modulation transfer function (MTF), noise power spectrum (NPS) and detective quantum efficiency (DQE).
Along-Track Reef Imaging System (ATRIS)
Brock, John; Zawada, Dave
2006-01-01
"Along-Track Reef Imaging System (ATRIS)" describes the U.S. Geological Survey's Along-Track Reef Imaging System, a boat-based sensor package for rapidly mapping shallow water benthic environments. ATRIS acquires high resolution, color digital images that are accurately geo-located in real-time.
A Flexible Spatiotemporal Method for Fusing Satellite Images with Different Resolutions
USDA-ARS?s Scientific Manuscript database
Studies of land surface dynamics in heterogeneous landscapes often require remote sensing data with high acquisition frequency and high spatial resolution. However, no single sensor meets this requirement. This study presents a new spatiotemporal data fusion method, the Flexible Spatiotemporal DAta ...
Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.
Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca
2015-08-12
Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled-images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based-image-analysis (OBIA) implemented for the early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images at an altitude of 30 m and the RS-image data at altitudes of 60 and 100 m, was able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights. PMID:26274960
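Resampling imagery to a coarser grid, as used here and in the TSS study above, is often done by block averaging; a minimal sketch (`resample_down` is an illustrative helper, not the authors' code):

```python
import numpy as np

def resample_down(img, f):
    """Downsample a 2-D image by an integer factor f with block averaging,
    emulating a coarser ground sample distance."""
    h = (img.shape[0] // f) * f   # crop to a multiple of the factor
    w = (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
```

For example, downsampling a 4x4 image by a factor of 2 yields a 2x2 image where each output pixel is the mean of a 2x2 block.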
Super-resolved refocusing with a plenoptic camera
NASA Astrophysics Data System (ADS)
Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu
2011-03-01
This paper presents an approach to enhance the resolution of refocused images by super-resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which defines the super-resolving ability, is mathematically derived by considering the plenoptic camera as an equivalent camera array. We implement a simulation to demonstrate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using maximum a posteriori (MAP) super-resolution algorithms. Without other degradation effects in simulation, the super-resolved image achieves a resolution as high as predicted by the proposed model. We also build an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution. In contrast, we implement the super-resolved refocusing method and recover an image with more spatial detail. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image quality metrics such as the peak signal-to-noise ratio (PSNR).
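The PSNR metric used for this evaluation has a standard definition that can be sketched directly; `psnr` below is a generic implementation, not code from the paper:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; identical images give +inf."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR indicates a reconstruction closer to the reference; values above roughly 30 dB are commonly taken as good quality for 8-bit imagery.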
Sharpening advanced land imager multispectral data using a sensor model
Lemeshewsky, G.P.; ,
2005-01-01
The Advanced Land Imager (ALI) instrument on NASA's Earth Observing One (EO-1) satellite provides nine spectral bands at 30 m ground sample distance (GSD) and a 10 m GSD panchromatic band. This report describes an image sharpening technique in which the higher spatial resolution information of the panchromatic band is used to increase the spatial resolution of ALI multispectral (MS) data. To preserve the spectral characteristics, the technique combines reported deconvolution deblurring methods for the MS data with highpass-filter-based fusion methods for the Pan data. The deblurring process uses the point spread function (PSF) model of the ALI sensor; the PSF was calculated from pre-launch calibration data. Performance was evaluated using simulated ALI MS data generated by degrading the spatial resolution of high-resolution IKONOS satellite MS data. A quantitative measure of performance was the error between the sharpened MS data and the high-resolution reference. This report also compares performance with that of a previously reported method that includes PSF information. Preliminary results indicate improved sharpening with the method reported here.
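The highpass-filter fusion idea, injecting the panchromatic detail that a low-pass filter removes into each upsampled MS band, can be sketched as follows. This is a generic HPF pan-sharpening sketch with a simple boxcar low-pass, not the ALI-specific method with PSF deblurring; `hpf_sharpen` and its parameters are illustrative.

```python
import numpy as np

def hpf_sharpen(ms_up, pan, ksize=5):
    """Highpass-filter fusion: add the panchromatic detail removed by a
    boxcar low-pass to every band of the upsampled MS cube (H, W, B)."""
    pad = ksize // 2
    p = np.pad(pan.astype(float), pad, mode='edge')
    low = np.zeros_like(pan, dtype=float)
    h, w = pan.shape
    for i in range(ksize):            # boxcar low-pass by shifted sums
        for j in range(ksize):
            low += p[i:i + h, j:j + w]
    low /= ksize ** 2
    detail = pan - low                # high-frequency Pan detail
    return ms_up + detail[..., None]  # inject the detail into each MS band
```

A constant panchromatic image contributes no detail, so the MS data pass through unchanged, which is the property that preserves low-frequency spectral content.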
Noise and spectroscopic performance of DEPMOSFET matrix devices for XEUS
NASA Astrophysics Data System (ADS)
Treis, J.; Fischer, P.; Hälker, O.; Herrmann, S.; Kohrs, R.; Krüger, H.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Strüder, L.; Trimpl, M.; Wermes, N.; Wölfel, S.
2005-08-01
DEPMOSFET-based Active Pixel Sensor (APS) matrix devices, originally developed to cope with the challenging requirements of the XEUS Wide Field Imager, have proven to be a promising new imager concept for a variety of future X-ray imaging and spectroscopy missions such as Simbol-X. The devices combine excellent energy resolution, high-speed readout and low power consumption with the attractive feature of random accessibility of pixels. A production of sensor prototypes with 64 × 64 pixels, each 75 μm × 75 μm in size, has recently been finished at the MPI semiconductor laboratory in Munich. The devices are built for row-wise readout and require dedicated control and signal processing electronics of the CAMEX type, which is integrated together with the sensor onto a readout hybrid. A number of hybrids incorporating the most promising sensor design variants have been built, and their performance has been studied in detail. A spectroscopic resolution of 131 eV has been measured, and the readout noise is as low as 3.5 e- ENC. Here, the dependence of readout noise and spectroscopic resolution on the device temperature is presented.
Assessment and Prediction of Natural Hazards from Satellite Imagery
Gillespie, Thomas W.; Chu, Jasmine; Frankenberg, Elizabeth; Thomas, Duncan
2013-01-01
Since 2000, there have been a number of spaceborne satellites that have changed the way we assess and predict natural hazards. These satellites are able to quantify physical geographic phenomena associated with the movements of the earth’s surface (earthquakes, mass movements), water (floods, tsunamis, storms), and fire (wildfires). Most of these satellites contain active or passive sensors that can be utilized by the scientific community for the remote sensing of natural hazards over a number of spatial and temporal scales. The most useful satellite imagery for the assessment of earthquake damage comes from high-resolution (0.6 m to 1 m pixel size) passive sensors and moderate resolution active sensors that can quantify the vertical and horizontal movement of the earth’s surface. High-resolution passive sensors have been used to successfully assess flood damage while predictive maps of flood vulnerability areas are possible based on physical variables collected from passive and active sensors. Recent moderate resolution sensors are able to provide near real time data on fires and provide quantitative data used in fire behavior models. Limitations currently exist due to atmospheric interference, pixel resolution, and revisit times. However, a number of new microsatellites and constellations of satellites will be launched in the next five years that contain increased resolution (0.5 m to 1 m pixel resolution for active sensors) and revisit times (daily ≤ 2.5 m resolution images from passive sensors) that will significantly improve our ability to assess and predict natural hazards from space. PMID:25170186
Light-Addressable Potentiometric Sensors for Quantitative Spatial Imaging of Chemical Species.
Yoshinobu, Tatsuo; Miyamoto, Ko-Ichiro; Werner, Carl Frederik; Poghossian, Arshak; Wagner, Torsten; Schöning, Michael J
2017-06-12
A light-addressable potentiometric sensor (LAPS) is a semiconductor-based chemical sensor, in which a measurement site on the sensing surface is defined by illumination. This light addressability can be applied to visualize the spatial distribution of pH or the concentration of a specific chemical species, with potential applications in the fields of chemistry, materials science, biology, and medicine. In this review, the features of this chemical imaging sensor technology are compared with those of other technologies. Instrumentation, principles of operation, and various measurement modes of chemical imaging sensor systems are described. The review discusses and summarizes state-of-the-art technologies, especially with regard to the spatial resolution and measurement speed; for example, a high spatial resolution in a submicron range and a readout speed in the range of several tens of thousands of pixels per second have been achieved with the LAPS. The possibility of combining this technology with microfluidic devices and other potential future developments are discussed.
Indium antimonide large-format detector arrays
NASA Astrophysics Data System (ADS)
Davis, Mike; Greiner, Mark
2011-06-01
Large-format infrared imaging sensors are required to achieve simultaneously high resolution and wide field-of-view image data. Infrared sensors generally must be cooled from room temperature to cryogenic temperatures in less than 10 min, thousands of times during their lifetime. The challenge is to remove mechanical stress, which arises from different materials with different coefficients of thermal expansion over a very wide temperature range, and at the same time provide high-sensitivity, high-resolution image data. These challenges are met by developing a hybrid in which the indium antimonide detector elements (pixels) are unconnected islands that essentially float on a silicon substrate and form a near-perfect match to the silicon read-out circuit. Since the pixels are unconnected and isolated from each other, the array is reticulated. This paper shows that the front-side-illuminated, reticulated-element indium antimonide focal planes developed at L-3 Cincinnati Electronics are robust, approach the background-limited sensitivity limit, and provide the resolution expected of a reticulated pixel array.
New optical sensor systems for high-resolution satellite, airborne and terrestrial imaging systems
NASA Astrophysics Data System (ADS)
Eckardt, Andreas; Börner, Anko; Lehmann, Frank
2007-10-01
The department of Optical Information Systems (OS) at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) has more than 25 years of experience with high-resolution imaging technology. Technology changes in the development of detectors, together with significant improvements in manufacturing accuracy and ongoing engineering research, define the next generation of spaceborne sensor systems for Earth observation and remote sensing. The combination of large TDI lines, intelligent synchronization control, fast-readable sensors and new focal-plane concepts opens the door to new remote-sensing instruments. This class of instruments is feasible for high-resolution sensor systems in terms of geometry and radiometry and for their data products, such as 3D virtual reality. Systemic approaches are essential for the design of such complex sensor systems for dedicated tasks. System theory applied to the instrument inside a simulated environment is the beginning of the optimization process for the optical, mechanical and electrical designs. Single modules and the entire system have to be calibrated and verified, and suitable procedures must be defined at the component, module and system levels for the assembly, test and verification process. This kind of development strategy allows hardware-in-the-loop design. The paper gives an overview of current activities at DLR in the field of innovative sensor systems for photogrammetric and remote sensing purposes.
Comparison of SeaWinds Backscatter Imaging Algorithms
Long, David G.
2017-01-01
This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs in conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including considering both linear and dB computation. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
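Conventional 'drop in the bucket' gridding simply averages all measurements whose centres fall in each output cell; a minimal sketch (the function `dib_grid` and its argument names are illustrative, not the paper's code):

```python
import numpy as np

def dib_grid(lat, lon, sigma0, lat_edges, lon_edges):
    """'Drop in the bucket' gridding: average every backscatter measurement
    falling inside each lat/lon cell; empty cells become NaN."""
    iy = np.digitize(lat, lat_edges) - 1
    ix = np.digitize(lon, lon_edges) - 1
    ny, nx = len(lat_edges) - 1, len(lon_edges) - 1
    acc = np.zeros((ny, nx))
    cnt = np.zeros((ny, nx))
    ok = (iy >= 0) & (iy < ny) & (ix >= 0) & (ix < nx)
    np.add.at(acc, (iy[ok], ix[ok]), sigma0[ok])   # accumulate per cell
    np.add.at(cnt, (iy[ok], ix[ok]), 1)            # count per cell
    return acc / np.where(cnt == 0, np.nan, cnt)
```

Reconstruction methods such as SIR instead exploit the overlapping measurement response functions to estimate backscatter on a grid finer than the intrinsic sensor resolution.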
Chromatic Modulator for High Resolution CCD or APS Devices
NASA Technical Reports Server (NTRS)
Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)
2003-01-01
A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter (or similar) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are able to detect separate color-separated images, which are then combined into a color-accurate frame that is recorded as the representation of the image. High pixel resolution is maintained. Registration between the color strip filter and the underlying imaging array is obtained through the use of electrostatic comb drives in conjunction with a spring suspension system.
NASA Astrophysics Data System (ADS)
Bryant, Kyle R.
2016-05-01
Foveated imaging can deliver two different resolutions on a single focal plane, which might inexpensively allow more capability for military systems. The following design study results provide starting examples, lessons learned, and helpful setup equations and pointers to aid the lens designer in any foveated lens design effort. Our goal is to put a robust sensor in a small package with no moving parts that can still perform some of the functions of a sensor in a moving gimbal. All of the elegant solutions have been ruled out (for various reasons). This study is an attempt to see whether lens designs can solve this problem and realize gains in performance versus cost for airborne sensors. We determined a series of design concepts to simultaneously deliver wide field of view and high foveal resolution without scanning or gimbals. Separate sensors for each field of view are easy and relatively inexpensive, but lead to bulky detectors and electronics. Folding and beam-combining of separate optical channels reduces sensor footprint, but induces image inversions and reduced transmission. Entirely common optics provide good resolution, but cannot provide a significant magnification increase in the foveal region. Offsetting the foveal region from the wide field center may not be physically realizable, but may be required for some applications. The design study revealed good general guidance for foveated optics designs with a cold stop. Key lessons learned involve managing distortion, telecentric imagers, matching image inversions and numerical apertures between channels, reimaging lenses, and creating clean resolution-zone splits near internal focal planes.
High spatial resolution passive microwave sounding systems
NASA Technical Reports Server (NTRS)
Staelin, D. H.; Rosenkranz, P. W.; Bonanni, P. G.; Gasiewski, A. W.
1986-01-01
Two extensive series of flights aboard the ER-2 aircraft were conducted with the MIT 118 GHz imaging spectrometer together with a 53.6 GHz nadir channel and a TV camera record of the mission. Other microwave sensors, including a 183 GHz imaging spectrometer, were flown simultaneously by other research groups. Work also continued on evaluating the impact of high-resolution passive microwave soundings upon numerical weather prediction models.
Image Stability Requirements For a Geostationary Imaging Fourier Transform Spectrometer (GIFTS)
NASA Technical Reports Server (NTRS)
Bingham, G. E.; Cantwell, G.; Robinson, R. C.; Revercomb, H. E.; Smith, W. L.
2001-01-01
A Geostationary Imaging Fourier Transform Spectrometer (GIFTS) has been selected for the NASA New Millennium Program (NMP) Earth Observing-3 (EO-3) mission. Our paper will discuss one of the key GIFTS measurement requirements, Field of View (FOV) stability, and its impact on required system performance. The GIFTS NMP mission is designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS payload is a versatile imaging FTS with programmable spectral resolution and spatial scene selection that allows radiometric accuracy and atmospheric sounding precision to be traded in near real time for area coverage. The GIFTS sensor combines high sensitivity with a massively parallel spatial data collection scheme to allow high spatial resolution measurement of the Earth's atmosphere and rapid broad area coverage. An objective of the GIFTS mission is to demonstrate the advantages of high spatial resolution (4 km ground sample distance, GSD) on temperature and water vapor retrieval by allowing sampling in broken cloud regions. This small GSD, combined with the relatively long scan time required (approximately 10 s) to collect high resolution spectra from geostationary (GEO) orbit, may require extremely good pointing control. This paper discusses the analysis of this requirement.
Radiometric characterization of hyperspectral imagers using multispectral sensors
NASA Astrophysics Data System (ADS)
McCorkel, Joel; Thome, Kurt; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-08-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these tests sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
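The band-averaging step described in the record above can be sketched as follows. This is an illustrative simplification assuming a roughly uniform hyperspectral wavelength grid, not RSG's actual processing code; the function and argument names are hypothetical.

```python
import numpy as np

def band_average(wl_hyp, radiance_hyp, wl_rsr, rsr):
    """Simulate a broad multispectral band (e.g. a MODIS band) from
    hyperspectral (e.g. Hyperion) data by weighting the hyperspectral
    radiance with the band's relative spectral response (RSR)."""
    # Interpolate the RSR onto the hyperspectral wavelength grid,
    # with zero response outside the tabulated band.
    w = np.interp(wl_hyp, wl_rsr, rsr, left=0.0, right=0.0)
    # RSR-weighted mean radiance approximates the broad-band measurement.
    return np.sum(w * radiance_hyp) / np.sum(w)
```

Comparing this simulated band radiance against the reference sensor's measured radiance over the test site yields the cross-calibration ratio.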
Radiometric Characterization of Hyperspectral Imagers using Multispectral Sensors
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Kurt, Thome; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-01-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
Compact and portable X-ray imager system using Medipix3RX
NASA Astrophysics Data System (ADS)
Garcia-Nathan, T. B.; Kachatkou, A.; Jiang, C.; Omar, D.; Marchal, J.; Changani, H.; Tartoni, N.; van Silfhout, R. G.
2017-10-01
In this paper the design and implementation of a novel portable X-ray imager system is presented. The design features a direct X-ray detection scheme by making use of a hybrid detector (Medipix3RX). Taking advantage of the capabilities of the Medipix3RX, such as high resolution, zero dead time, single-photon detection and a charge-sharing mode, the imager achieves better resolution and higher sensitivity than traditional indirect detection schemes. A detailed description of the system is presented, which consists of a vacuum chamber containing the sensor, an electronic board for temperature management, conditioning and readout of the sensor, and a data processing unit which also handles the network connection and allows communication with clients by acting as a server. A field programmable gate array (FPGA) device is used to implement the readout protocol for the Medipix3RX; apart from the readout, the FPGA can perform complex image processing functions such as feature extraction, histogramming, profiling and image compression at high speed. The temperature of the sensor is monitored and controlled through a PID algorithm making use of a Peltier cooler, improving the energy resolution and response stability of the sensor. Without implementing data compression techniques, the system is capable of transferring 680 profiles/s or 240 images/s in continuous mode. Implementation of equalization procedures and tests in colour mode are presented in this paper. For the experimental measurements the Medipix3RX sensor was used with a silicon sensor layer. One of the tested applications of the system is as an X-ray beam position monitor (XBPM) for synchrotron applications. The XBPM allows non-destructive real-time measurement of the beam position, size and intensity. A Kapton foil is placed in the beam path, scattering radiation towards a pinhole camera setup that allows the sensor to obtain an image of the beam. By using profiles of the synchrotron X-ray beam, high-frequency movement of the beam position, up to 340 Hz, can be studied. The system is also capable of an independent energy measurement of the beam by using the Medipix3RX variable energy threshold feature.
Electro-optical imaging systems integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wight, R.
1987-01-01
Since the advent of high resolution, high data rate electronic sensors for military aircraft, the demands on their counterpart, the image generator hard copy output system, have increased dramatically. This has included support of direct overflight and standoff reconnaissance systems and often has required operation within a military shelter or van. The Tactical Laser Beam Recorder (TLBR) design has met the challenge each time. A third-generation TLBR was designed and two units delivered to rapidly produce high quality wet-process imagery on 5-inch film from a 5-sensor digital image signal input. A modular, in-line wet film processor is included in the total TLBR (W) system. The system features a rugged optical and transport package that requires virtually no alignment or maintenance. It has a "Scan FIX" capability which corrects for scanner fault errors and a "Scan LOC" system which provides complete phase synchronism isolation between scanner and digital image data input via strobed, 2-line digital buffers. Electronic gamma adjustment automatically compensates for variable film processing time as the film speed changes to track the sensor. This paper describes the fourth meeting of that challenge, the High Resolution Laser Beam Recorder (HRLBR) for reconnaissance/tactical applications.
NASA Astrophysics Data System (ADS)
Ha, W.; Gowda, P. H.; Oommen, T.; Howell, T. A.; Hernandez, J. E.
2010-12-01
High spatial resolution Land Surface Temperature (LST) images are required to estimate evapotranspiration (ET) at a field scale for irrigation scheduling purposes. Satellite sensors such as Landsat 5 Thematic Mapper (TM) and Moderate Resolution Imaging Spectroradiometer (MODIS) can offer images at several spectral bandwidths including visible, near-infrared (NIR), shortwave-infrared, and thermal-infrared (TIR). The TIR images usually have coarser spatial resolutions than those from non-thermal infrared bands. Due to this technical constraint of the satellite sensors on these platforms, image downscaling has been proposed in the field of ET remote sensing. This paper explores the potential of the Support Vector Machines (SVM) to perform downscaling of LST images derived from aircraft (4 m spatial resolution), TM (120 m), and MODIS (1000 m) using normalized difference vegetation index images derived from simultaneously acquired high resolution visible and NIR data (1 m for aircraft, 30 m for TM, and 250 m for MODIS). The SVM is a new generation machine learning algorithm that has found a wide application in the field of pattern recognition and time series analysis. The SVM would be ideally suited for downscaling problems due to its generalization ability in capturing non-linear regression relationship between the predictand and the multiple predictors. Remote sensing data acquired over the Texas High Plains during the 2008 summer growing season will be used in this study. Accuracy assessment of the downscaled 1, 30, and 250 m LST images will be made by comparing them with LST data measured with infrared thermometers at a small spatial scale, upscaled 30 m aircraft-based LST images, and upscaled 250 m TM-based LST images, respectively.
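The downscaling approach in the record above can be sketched as follows, with an ordinary linear LST~NDVI fit standing in for the SVM regression the study actually proposes; the function name, the residual-redistribution step, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def downscale_lst(ndvi_coarse, lst_coarse, ndvi_fine, scale):
    """Sharpen a coarse LST image using a fine-resolution NDVI image.
    A linear fit stands in here for the SVM regression; 'scale' is the
    resolution ratio between the coarse and fine grids (e.g. 4)."""
    # 1. Fit the LST-NDVI relationship at the coarse scale.
    a, b = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    # 2. Predict LST on the fine grid from the fine NDVI.
    lst_pred = a * ndvi_fine + b
    # 3. Redistribute the coarse-scale model residual so the sharpened
    #    image stays consistent with the original coarse LST.
    residual = lst_coarse - (a * ndvi_coarse + b)
    return lst_pred + np.kron(residual, np.ones((scale, scale)))
```

An SVM (or any nonlinear regressor) would replace step 1 to capture the non-linear LST-NDVI relationship the abstract highlights.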
Development of High Resolution Eddy Current Imaging Using an Electro-Mechanical Sensor (Postprint)
2011-08-01
DUSTER: demonstration of an integrated LWIR-VNIR-SAR imaging system
NASA Astrophysics Data System (ADS)
Wilson, Michael L.; Linne von Berg, Dale; Kruer, Melvin; Holt, Niel; Anderson, Scott A.; Long, David G.; Margulis, Yuly
2008-04-01
The Naval Research Laboratory (NRL) and Space Dynamics Laboratory (SDL) are executing a joint effort, DUSTER (Deployable Unmanned System for Targeting, Exploitation, and Reconnaissance), to develop and test a new tactical sensor system specifically designed for Tier II UAVs. The system is composed of two coupled near-real-time sensors: EyePod (VNIR/LWIR ball gimbal) and NuSAR (L-band synthetic aperture radar). EyePod consists of a jitter-stabilized LWIR sensor coupled with a dual focal-length optical system and a bore-sighted high-resolution VNIR sensor. The dual focal-length design coupled with precision pointing and step-stare capabilities enables EyePod to conduct wide-area survey and high-resolution inspection missions from a single flight pass. NuSAR is being developed with partners Brigham Young University (BYU) and Artemis, Inc., and consists of a wideband L-band SAR capable of large-area survey and embedded real-time image formation. Both sensors employ standard Ethernet interfaces and provide geo-registered NITFS output imagery. In the fall of 2007, field tests were conducted with both sensors, results of which will be presented.
A flexible spatiotemporal method for fusing satellite images with different resolutions
Xiaolin Zhu; Eileen H. Helmer; Feng Gao; Desheng Liu; Jin Chen; Michael A. Lefsky
2016-01-01
Studies of land surface dynamics in heterogeneous landscapes often require remote sensing data with high acquisition frequency and high spatial resolution. However, no single sensor meets this requirement. This study presents a new spatiotemporal data fusion method, the Flexible Spatiotemporal DAta Fusion (FSDAF) method, to generate synthesized frequent high spatial...
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
Low-frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low-frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low-frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to achieve datum unification and high-precision attitude output. Finally, we construct the low-frequency error model and optimally estimate its parameters based on the DEM/DOM of a geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a representative satellite is used. Test results demonstrate that the calibration model in this paper describes the law of low-frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is clearly improved after the step-wise calibration.
Generic Sensor Modeling Using Pulse Method
NASA Technical Reports Server (NTRS)
Helder, Dennis L.; Choi, Taeyoung
2005-01-01
Recent development of high spatial resolution satellites such as IKONOS, QuickBird and OrbView enables observation of the Earth's surface with sub-meter resolution. Compared to the 30-meter resolution of Landsat 5 TM, the amount of information in the output image is dramatically increased. In this era of high spatial resolution, the estimation of the spatial quality of images is gaining attention. Historically, the Modulation Transfer Function (MTF) concept has been used to estimate an imaging system's spatial quality. Sometimes classified by target shape, various methods were developed in laboratory environments utilizing sinusoidal inputs, periodic bar patterns and narrow slits. On-orbit sensor MTF estimation was performed on 30-meter GSD Landsat 4 Thematic Mapper (TM) data using a bridge as a pulse-input target. Because of a high-resolution sensor's small ground sampling distance (GSD), reasonably sized man-made edge, pulse, and impulse targets can be deployed on a uniform grassy area with accurate control of the ground targets using tarps and convex mirrors. All the previous work cited calculated MTF without testing the MTF estimator's performance. In a previous report, a numerical generic sensor model was developed to simulate and improve the performance of on-orbit MTF estimation techniques. Results from that sensor modeling report that have been incorporated into standard MTF estimation work include Fermi edge detection and the newly developed 4th-order modified Savitzky-Golay (MSG) interpolation technique. Noise sensitivity was studied by performing simulations with known noise sources and a sensor model. Extensive investigation was done to characterize multi-resolution ground noise. Finally, angle simulation was tested by using synthetic pulse targets with angles from 2 to 15 degrees, several brightness levels, and different noise levels from both the ground targets and the imaging system.
As a continuing research activity using the developed sensor model, this report is dedicated to characterizing MTF estimation via the pulse-input method using Fermi edge detection and the 4th-order MSG interpolation method. The relationship between pulse width and the MTF value at Nyquist was studied, including error detection and correction schemes. Pulse-target angle sensitivity was studied by using synthetic targets angled from 2 to 12 degrees. From the ground and system noise simulation, a minimum SNR value is suggested for a stable MTF value at Nyquist for the pulse method. A target-width error detection and adjustment technique based on a smooth transition of the MTF profile is presented, which is specifically applicable only to the pulse method with 3-pixel-wide targets.
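The core of the pulse method in the two records above is dividing the spectrum of the measured pulse profile by that of the ideal rectangular input, then reading off the ratio at the Nyquist frequency. A minimal numpy sketch, assuming an oversampled profile and skipping the Fermi edge detection and MSG interpolation refinements the reports describe (names and the clipping floor are illustrative):

```python
import numpy as np

def mtf_from_pulse(profile, pulse_width_px, dx):
    """Estimate system MTF from an oversampled pulse-target profile.
    profile: measured brightness samples spaced dx pixels apart;
    pulse_width_px: known width of the ideal rectangular input (pixels)."""
    n = len(profile)
    freqs = np.fft.rfftfreq(n, d=dx)          # cycles per pixel
    out_spec = np.abs(np.fft.rfft(profile))
    out_spec /= out_spec[0]                   # normalize to DC = 1
    # Spectrum of the ideal rectangular input: |sinc(f * width)|
    in_spec = np.abs(np.sinc(freqs * pulse_width_px))
    mtf = out_spec / np.clip(in_spec, 1e-6, None)
    # Read off the MTF at the sensor Nyquist frequency (0.5 cycles/pixel)
    nyq = np.argmin(np.abs(freqs - 0.5))
    return freqs, mtf, mtf[nyq]
```

For an undegraded ideal pulse the estimate stays near 1 at recoverable frequencies; a real system's blur pulls the Nyquist value down, and the pulse width must avoid the sinc zeros, which is why the reports study pulse-width error detection.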
The progress of sub-pixel imaging methods
NASA Astrophysics Data System (ADS)
Wang, Hu; Wen, Desheng
2014-02-01
This paper reviews the principles and characteristics of sub-pixel imaging technology, its current state of development in China and abroad, and the latest research progress. Sub-pixel imaging combines the high resolution of an optical remote sensor with flexible operating modes and miniaturization with no moving parts, making the imaging system well suited to space remote sensor applications. Its application prospects are extensive, and it is a likely research and development direction for future space optical remote sensing technology.
Cheetah: A high frame rate, high resolution SWIR image camera
NASA Astrophysics Data System (ADS)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high resolution, high frame rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
An Evaluation of ALOS Data in Disaster Applications
NASA Astrophysics Data System (ADS)
Igarashi, Tamotsu; Furuta, Ryoich; Ono, Makoto
ALOS is the Advanced Land Observing Satellite, providing image data from three onboard sensors: PRISM, AVNIR-2 and PALSAR. PRISM is a high-resolution panchromatic stereo three-line scanner for characterizing the Earth's surface. Its image positional accuracy and Digital Surface Model (DSM) height accuracy are high, which improves geographic information extraction for disaster applications by providing images of the disaster area. In particular, pan-sharpened 3D imagery composed of PRISM and four-band visible/near-infrared radiometer (AVNIR-2) data is expected to provide information for understanding geographic and topographic features. PALSAR is an advanced multi-functional L-band synthetic aperture radar (SAR), appropriate for land-surface characterization. PALSAR offers many improvements over JERS-1/SAR, such as higher sensitivity, higher resolution, and polarimetric and ScanSAR observation modes. PALSAR is also applicable for SAR interferometry processing. This paper describes the evaluation of ALOS data characteristics from the viewpoint of disaster applications, through several application exercises.
Performance of RVGui sensor and Kodak Ektaspeed Plus film for proximal caries detection.
Abreu, M; Mol, A; Ludlow, J B
2001-03-01
A high-resolution charge-coupled device was used to compare the diagnostic performance obtained with Trophy's new RVGui sensor and Kodak Ektaspeed Plus film with respect to caries detection. Three acquisition modes of the Trophy RVGui sensor were compared with Kodak Ektaspeed Plus film. Images of the proximal surfaces of 40 extracted posterior teeth were evaluated by 6 observers. The presence or absence of caries was scored by means of a 5-point confidence scale. The actual caries status of each surface was determined through ground-section histology. Responses were evaluated by means of receiver operating characteristic analysis. Areas under receiver operating characteristic curves (A(Z)) were assessed through analysis of variance. The mean A(Z) scores were 0.85 for film, 0.84 for the high-resolution caries mode, and 0.82 for both the low-resolution caries mode and the high-resolution periodontal mode. These differences were not statistically significant (P =.70). The differences among observers also were not statistically significant (P =.23). The performance of the RVGui sensor in high- and low-resolution modes for proximal caries detection is comparable to that of Ektaspeed Plus film.
NASA Astrophysics Data System (ADS)
Han, Chao; Yao, Lei; Xu, Di; Xie, Xianchuan; Zhang, Chaosheng
2016-05-01
A new dual-lumophore optical sensor combined with a robust RGB referencing method was developed for two-dimensional (2D) pH imaging in alkaline sediments and water. The pH sensor film consisted of a proton-permeable polymer (PVC) in which two dyes with different pH sensitivities and emission colors were entrapped: (1) chloro phenyl imino propenyl aniline (CPIPA) and (2) the coumarin dye Macrolex® fluorescence yellow 10 GN (MFY-10 GN). Calibration experiments revealed the typical sigmoid function and temperature dependencies. The sensor featured high sensitivity and fast response over the alkaline working range from pH 7.5 to pH 10.5. Cross-sensitivity towards ionic strength (IS) was found to be negligible for freshwater when IS <0.1 M. The sensor had a spatial resolution of approximately 22 μm and a response time of <120 s when going from pH 7.0 to 9.0. The feasibility of the sensor was verified against pH microelectrode measurements. An example of a pH image obtained in natural freshwater sediment and water, associated with the photosynthesis of a Vallisneria species, is also presented, suggesting that the sensor holds great promise for field applications.
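Sigmoid calibrations of this kind typically follow a Boltzmann-type function. The sketch below maps a referenced intensity ratio to pH and back; the parameter names and values are hypothetical illustrations, not the paper's fitted constants.

```python
import numpy as np

def ratio_from_ph(ph, r_acid, r_base, pka, slope):
    """Boltzmann sigmoid calibration: referenced intensity ratio as a
    function of pH. r_acid/r_base are the plateau ratios at the acidic
    and basic extremes; pka sets the inflection point."""
    return r_base + (r_acid - r_base) / (1.0 + 10.0 ** ((ph - pka) / slope))

def ph_from_ratio(r, r_acid, r_base, pka, slope):
    """Invert the sigmoid to map each pixel's referenced ratio to pH."""
    return pka + slope * np.log10((r_acid - r) / (r - r_base))
```

Applying `ph_from_ratio` pixel-by-pixel to the referenced RGB ratio image yields the 2D pH map; outside the plateau-bounded working range the inversion becomes ill-conditioned, consistent with the stated pH 7.5-10.5 range.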
Han, Chao; Yao, Lei; Xu, Di; Xie, Xianchuan; Zhang, Chaosheng
2016-01-01
A new dual-lumophore optical sensor combined with a robust RGB referencing method was developed for two-dimensional (2D) pH imaging in alkaline sediments and water. The pH sensor film consisted of a proton-permeable polymer (PVC) in which two dyes with different pH sensitivities and emission colors were entrapped: (1) chloro phenyl imino propenyl aniline (CPIPA) and (2) the coumarin dye Macrolex® fluorescence yellow 10 GN (MFY-10 GN). Calibration experiments revealed the typical sigmoid function and temperature dependencies. The sensor featured high sensitivity and fast response over the alkaline working range from pH 7.5 to pH 10.5. Cross-sensitivity towards ionic strength (IS) was found to be negligible for freshwater when IS <0.1 M. The sensor had a spatial resolution of approximately 22 μm and a response time of <120 s when going from pH 7.0 to 9.0. The feasibility of the sensor was verified against pH microelectrode measurements. An example of a pH image obtained in natural freshwater sediment and water, associated with the photosynthesis of a Vallisneria species, is also presented, suggesting that the sensor holds great promise for field applications. PMID:27199163
Photon Counting Imaging with an Electron-Bombarded Pixel Image Sensor
Hirvonen, Liisa M.; Suhling, Klaus
2016-01-01
Electron-bombarded pixel image sensors, where a single photoelectron is accelerated directly into a CCD or CMOS sensor, allow wide-field imaging at extremely low light levels as they are sensitive enough to detect single photons. This technology allows the detection of up to hundreds or thousands of photon events per frame, depending on the sensor size, and photon event centroiding can be employed to recover resolution lost in the detection process. Unlike photon events from electron-multiplying sensors, the photon events from electron-bombarded sensors have a narrow, acceleration-voltage-dependent pulse height distribution. Thus a gain voltage sweep during exposure in an electron-bombarded sensor could allow photon arrival time determination from the pulse height with sub-frame exposure time resolution. We give a brief overview of our work with electron-bombarded pixel image sensor technology and recent developments in this field for single photon counting imaging, and examples of some applications. PMID:27136556
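Photon event centroiding, as mentioned in the record above, recovers sub-pixel event positions by computing the center of mass of each detected event. A simple sketch (the threshold and the 3x3 neighbourhood size are illustrative choices, not the authors' parameters):

```python
import numpy as np

def centroid_events(frame, threshold):
    """Locate photon events in a frame and refine each position by
    centre-of-mass centroiding over a 3x3 neighbourhood, recovering
    resolution lost in the detection process."""
    events = []
    h, w = frame.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = frame[y - 1:y + 2, x - 1:x + 2]
            # A local maximum above threshold marks one photon event.
            if frame[y, x] >= threshold and frame[y, x] == patch.max():
                total = patch.sum()
                dy, dx = np.mgrid[-1:2, -1:2]
                events.append((y + (dy * patch).sum() / total,
                               x + (dx * patch).sum() / total))
    return events
```

With the narrow, voltage-dependent pulse height distribution of electron-bombarded sensors, each event's summed intensity could additionally be compared against the expected gain to reject noise or, during a gain sweep, to estimate arrival time.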
Evaluating an image-fusion algorithm with synthetic-image-generation tools
NASA Astrophysics Data System (ADS)
Gross, Harry N.; Schott, John R.
1996-06-01
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG-developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors.
Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
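The partially constrained mixing model described above (fractions summing to one) has a closed-form Lagrange solution. A sketch for a single pixel, with hypothetical array shapes; the fully constrained case (fractions also in [0, 1]) would instead need a quadratic-programming solver.

```python
import numpy as np

def unmix_sum_to_one(E, p):
    """Partially constrained linear spectral unmixing: least-squares
    endmember fractions subject to sum(f) == 1.
    E: (bands x endmembers) endmember spectra, p: (bands,) pixel spectrum."""
    N_inv = np.linalg.inv(E.T @ E)
    f_u = N_inv @ E.T @ p              # unconstrained least-squares fractions
    ones = np.ones(E.shape[1])
    # Lagrange correction that enforces the sum-to-one constraint
    f = f_u + N_inv @ ones * (1.0 - ones @ f_u) / (ones @ N_inv @ ones)
    return f
```

Running this per low-resolution pixel gives the fraction maps that the second, nonlinear-optimization stage of the algorithm then refines to high-resolution pixels.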
NASA Astrophysics Data System (ADS)
Rivenson, Yair; Wu, Chris; Wang, Hongda; Zhang, Yibo; Ozcan, Aydogan
2017-03-01
Microscopic imaging of biological samples such as pathology slides is one of the standard diagnostic methods for screening various diseases, including cancer. These biological samples are usually imaged using traditional optical microscopy tools; however, the high cost, bulkiness and limited imaging throughput of traditional microscopes partially restrict their deployment in resource-limited settings. In order to mitigate this, we previously demonstrated a cost-effective and compact lens-less on-chip microscopy platform with a wide field-of-view of >20-30 mm^2. The lens-less microscopy platform has shown its effectiveness for imaging of highly connected biological samples, such as pathology slides of various tissue samples and smears, among others. This computational holographic microscope requires a set of super-resolved holograms acquired at multiple sample-to-sensor distances, which are used as input to an iterative phase recovery algorithm and holographic reconstruction process, yielding high-resolution images of the samples in phase and amplitude channels. Here we demonstrate that in order to reconstruct clinically relevant images with high resolution and image contrast, we require less than 50% of the previously reported nominal number of holograms acquired at different sample-to-sensor distances. This is achieved by incorporating a loose sparsity constraint as part of the iterative holographic object reconstruction. We demonstrate the success of this sparsity-based computational lens-less microscopy platform by imaging pathology slides of breast cancer tissue and Papanicolaou (Pap) smears.
NASA Astrophysics Data System (ADS)
Belbachir, A. N.; Hofstätter, M.; Litzenberger, M.; Schön, P.
2009-10-01
A synchronous communication interface for neuromorphic temporal contrast vision sensors is described and evaluated in this paper. This interface has been designed for ultra-high-speed synchronous arbitration of a temporal contrast image sensor's pixel data. Enabling high-precision timestamping, the system demonstrates its uniqueness in handling peak data rates while preserving the main advantage of neuromorphic electronic systems, namely high and accurate temporal resolution. Based on a synchronous arbitration concept, the timestamping has a resolution of 100 ns. Both synchronous and (state-of-the-art) asynchronous arbiters have been implemented in a neuromorphic dual-line vision sensor chip in a standard 0.35 µm CMOS process. The performance analysis of both arbiters and the advantages of synchronous arbitration over asynchronous arbitration in capturing high-speed objects are discussed in detail.
Demosaiced pixel super-resolution for multiplexed holographic color imaging
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2016-01-01
To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
Compact and mobile high resolution PET brain imager
Majewski, Stanislaw [Yorktown, VA]; Proffitt, James [Newport News, VA]
2011-02-08
A brain imager includes a compact ring-like static PET imager mounted in a helmet-like structure. When attached to a patient's head, the helmet-like brain imager maintains the relative head-to-imager geometry fixed through the whole imaging procedure. The brain imaging helmet contains radiation sensors and minimal front-end electronics. A flexible mechanical suspension/harness system supports the weight of the helmet, thereby allowing the patient limited head movements during imaging scans. The compact ring-like PET imager enables very high resolution imaging of neurological brain functions, cancer, and effects of trauma using a rather simple mobile scanner with limited space needs for use and storage.
NASA Astrophysics Data System (ADS)
Navarro-Cerrillo, Rafael Mª; Trujillo, Jesus; de la Orden, Manuel Sánchez; Hernández-Clemente, Rocío
2014-02-01
A new generation of narrow-band hyperspectral remote sensing data offers an alternative to broad-band multispectral data for the estimation of vegetation chlorophyll content. This paper examines the potential of some of these sensors, comparing red-edge and simple ratio indices, to develop a rapid and cost-effective system for monitoring Mediterranean pine plantations in Spain. Chlorophyll content retrieval was analyzed with the red-edge R750/R710 index and the simple ratio R800/R560 index using the PROSPECT-5 leaf model, the Discrete Anisotropic Radiative Transfer (DART) model, and an experimental approach. Five sensors were used: AHS, CHRIS/Proba, Hyperion, Landsat and QuickBird. The model simulation results obtained with synthetic spectra demonstrated the feasibility of estimating Ca + b content in conifers using the simple ratio R800/R560 index formulated with different full widths at half maximum (FWHM) at the leaf level. This index yielded an r2 = 0.69 for a FWHM of 30 nm and r2 = 0.55 for a FWHM of 70 nm. Experimental results compared the regression coefficients obtained with various multispectral and hyperspectral images of different spatial resolutions at the stand level. The strongest relationships were obtained using high-resolution hyperspectral images acquired with the AHS sensor (r2 = 0.65), while coarser spatial and spectral resolution images yielded lower coefficients of determination (QuickBird r2 = 0.42; Landsat r2 = 0.48; Hyperion r2 = 0.56; CHRIS/Proba r2 = 0.57). This study shows the need to estimate chlorophyll content in forest plantations at the stand level with high spatial and spectral resolution sensors. Nevertheless, these results also show the accuracy attainable with medium-resolution sensors when monitoring physiological processes. Generating biochemical maps at the stand level could play a critical role in the early detection of forest decline processes, enabling their use in precision forestry.
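The two indices compared above are simple band ratios. A minimal sketch of how such an index is computed from a sampled spectrum (the band centres, the reflectance values, and the function name below are invented for illustration):

```python
import numpy as np

def band_ratio(reflectance, wavelengths, num_nm, den_nm):
    """Simple ratio index: reflectance nearest num_nm over reflectance nearest den_nm."""
    wavelengths = np.asarray(wavelengths)
    reflectance = np.asarray(reflectance, dtype=float)
    num = reflectance[np.argmin(np.abs(wavelengths - num_nm))]
    den = reflectance[np.argmin(np.abs(wavelengths - den_nm))]
    return num / den

# Invented leaf reflectance at four band centres (nm)
wl = [560, 710, 750, 800]
refl = [0.12, 0.20, 0.45, 0.50]

red_edge = band_ratio(refl, wl, 750, 710)      # R750/R710, sensitive to chlorophyll
simple_ratio = band_ratio(refl, wl, 800, 560)  # R800/R560
```

A real sensor integrates over its FWHM rather than sampling a single wavelength, which is why the study above reports different r2 values for 30 nm and 70 nm bandwidths.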
Radiometric cross-calibration of the Terra MODIS and Landsat 7 ETM+ using an invariant desert site
Choi, T.; Angal, A.; Chander, G.; Xiong, X.
2008-01-01
A methodology for long-term radiometric cross-calibration between the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors was developed. The approach involves calibration of near-simultaneous surface observations between 2000 and 2007. Fifty-seven cloud-free image pairs were carefully selected over the Libyan desert for this study. The Libyan desert site (+28.55°, +23.39°), located in northern Africa, is a high reflectance site with high spatial, spectral, and temporal uniformity. Because the test site covers about 12 km × 13 km, accurate geometric preprocessing is required to match the footprint size between the two sensors to avoid uncertainties due to residual image misregistration. MODIS Level 1B radiometrically corrected products were reprojected to the corresponding ETM+ image's Universal Transverse Mercator (UTM) grid projection. The 30 m pixels from the ETM+ images were aggregated to match the MODIS spatial resolution (250 m in Bands 1 and 2, or 500 m in Bands 3 to 7). The image data from both sensors were converted to absolute units of at-sensor radiance and top-of-atmosphere (TOA) reflectance for the spectrally matching band pairs. For each band pair, a set of fitted coefficients (slope and offset) is provided to quantify the relationship between the two sensors. This work focuses on long-term stability and correlation of the Terra MODIS and L7 ETM+ sensors using absolute calibration results over the entire mission of the two sensors. Possible uncertainties are also discussed, such as spectral differences in matching band pairs, solar zenith angle change during a collection, and differences in solar irradiance models.
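The aggregation-and-fit procedure described above can be sketched as follows. This is a simplified illustration assuming an integer resampling factor and a plain least-squares fit; the function names and the synthetic data are invented here, and a real cross-calibration would work in calibrated radiance or TOA reflectance per band pair:

```python
import numpy as np

def aggregate(fine, factor):
    """Block-average a fine-resolution image down by an integer factor."""
    h = fine.shape[0] - fine.shape[0] % factor
    w = fine.shape[1] - fine.shape[1] % factor
    img = fine[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def cross_calibrate(fine, coarse, factor):
    """Fit coarse ~= slope * aggregate(fine) + offset over matching pixels."""
    x = aggregate(fine, factor).ravel()
    y = coarse.ravel()
    slope, offset = np.polyfit(x, y, 1)
    return slope, offset

# Synthetic example: the "coarse sensor" reads 5% high with a small bias
rng = np.random.default_rng(0)
fine = rng.uniform(50.0, 200.0, size=(64, 64))   # fine-sensor radiances
coarse = 1.05 * aggregate(fine, 16) + 2.0        # simulated coarse-sensor view
slope, offset = cross_calibrate(fine, coarse, 16)
```

The recovered slope and offset are the per-band-pair coefficients the study reports; residual misregistration would show up here as scatter about the fitted line.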
Display challenges resulting from the use of wide field of view imaging devices
NASA Astrophysics Data System (ADS)
Petty, Gregory J.; Fulton, Jack; Nicholson, Gail; Seals, Ean
2012-06-01
As focal plane array technologies advance and imagers increase in resolution, display technology must outpace the imaging improvements in order to adequately represent the complete data collection. Typical display devices tend to have an aspect ratio similar to 4:3 or 16:9; however, a breed of Wide Field of View (WFOV) imaging devices exists that skews from the norm, with aspect ratios as high as 5:1. This particular quality, when coupled with a high spatial resolution, presents a unique challenge for display devices. Standard display devices must choose between resizing the image data to fit the display and displaying the image data at native resolution, truncating potentially important information. The problem compounds when considering the applications; WFOV high-situational-awareness imagers are sought for space-limited military vehicles. Tradeoffs between these issues are assessed with respect to the image quality of the WFOV sensor.
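The resize-versus-truncate tradeoff can be made concrete with a small aspect-ratio calculation. The frame size, display size, and function below are hypothetical, chosen only to illustrate the 5:1-on-16:9 case described above:

```python
def fit_to_display(img_w, img_h, disp_w, disp_h):
    """Scale an image onto a display without cropping, preserving aspect ratio.
    Returns the displayed size and the fraction of the display left unused."""
    scale = min(disp_w / img_w, disp_h / img_h)
    out_w, out_h = int(img_w * scale), int(img_h * scale)
    unused = 1.0 - (out_w * out_h) / (disp_w * disp_h)
    return out_w, out_h, unused

# A hypothetical 5:1 WFOV frame (3840 x 768) letterboxed onto a 16:9 display
w, h, unused = fit_to_display(3840, 768, 1920, 1080)
```

Fitting the full width halves the native horizontal resolution and leaves most of the display black, while displaying at native resolution would truncate the field of view; neither option preserves the complete data collection.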
CoBOP: Electro-Optic Identification Laser Line Scan Sensors
1998-01-01
The goal of the Electro-Optic Identification Sensors Project [1] is to develop and demonstrate high resolution underwater electro-optic (EO) imaging sensors, and associated image processing/analysis methods, for rapid visual identification of mines and mine-like contacts (MLCs). Identification of MLCs is a pressing Fleet need. During MCM operations, sonar contacts are classified as mine-like if they are sufficiently similar to signatures of mines. Each contact classified as mine-like must be identified as a mine or not a mine. During MCM operations in littoral areas,
Damage extraction of buildings in the 2015 Gorkha, Nepal earthquake from high-resolution SAR data
NASA Astrophysics Data System (ADS)
Yamazaki, Fumio; Bahri, Rendy; Liu, Wen; Sasagawa, Tadashi
2016-05-01
Satellite remote sensing is recognized as one of the effective tools for detecting and monitoring areas affected by natural disasters. Since SAR sensors can capture images not only in daytime but also at nighttime and under cloud cover, they are especially useful during the emergency response period. In this study, multi-temporal high-resolution TerraSAR-X images were used for damage inspection of the Kathmandu area, which was severely affected by the April 25, 2015 Gorkha Earthquake. The SAR images obtained before and after the earthquake were used to calculate the difference and correlation coefficient of backscatter. The affected areas were identified by high values of the absolute difference and low values of the correlation coefficient. Post-event high-resolution optical satellite images were employed as ground truth data to verify our results. Although it was difficult to estimate the damage levels of individual buildings, the high resolution SAR images demonstrated their capability for detecting collapsed buildings at emergency response times.
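The two change metrics used above, absolute backscatter difference and a windowed correlation coefficient, can be sketched as follows. The window size, the toy images, and the sign-flip used to simulate loss of coherence are assumptions for illustration, not the authors' processing chain:

```python
import numpy as np

def change_metrics(pre, post, win=5):
    """Absolute backscatter difference and local correlation coefficient
    computed over a sliding win x win window (edge pixels left at zero)."""
    diff = np.abs(post - pre)
    pad = win // 2
    corr = np.zeros_like(pre, dtype=float)
    for i in range(pad, pre.shape[0] - pad):
        for j in range(pad, pre.shape[1] - pad):
            a = pre[i - pad:i + pad + 1, j - pad:j + pad + 1].ravel()
            b = post[i - pad:i + pad + 1, j - pad:j + pad + 1].ravel()
            corr[i, j] = np.corrcoef(a, b)[0, 1]
    return diff, corr

# Toy scene: an 8 x 8 block is altered to mimic decorrelation after collapse
rng = np.random.default_rng(1)
pre = rng.normal(size=(16, 16))
post = pre.copy()
post[4:12, 4:12] = -pre[4:12, 4:12]
diff, corr = change_metrics(pre, post)
# Affected areas: high absolute difference combined with low correlation
```

Thresholding both maps jointly (high `diff`, low `corr`) flags candidate damaged areas while suppressing speckle-only changes.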
NASA Astrophysics Data System (ADS)
Jacobs, Alan M.; Cox, John D.; Juang, Yi-Shung
1987-01-01
A solid-state digital x-ray detector is described which can replace high resolution film in industrial radiography and has potential for application in some medical imaging. Because of the 10 micron pixel pitch of the sensor, contact magnification radiology is possible and is demonstrated. Methods for increasing frame speed and integrating the sensor into a large format are discussed.
Improved spatial resolution of luminescence images acquired with a silicon line scanning camera
NASA Astrophysics Data System (ADS)
Teal, Anthony; Mitchell, Bernhard; Juhl, Mattias K.
2018-04-01
Luminescence imaging is currently used to provide spatially resolved defect information in high-volume silicon solar cell production. One option to obtain the high throughput required for on-the-fly detection is the use of silicon line scan cameras. However, when using a silicon-based camera, the spatial resolution is reduced as a result of weakly absorbed light scattering within the camera's chip. This paper addresses this issue by applying deconvolution with a measured point spread function. It extends the methods for determining the point spread function of a silicon area camera to a line scan camera with charge transfer. The improvement in resolution is quantified in the Fourier domain and in the spatial domain on an image of a multicrystalline silicon brick. It is found that light spreading beyond the active sensor area is significant in line scan sensors, but can be corrected for through normalization of the point spread function. The application of this method improves the raw data, allowing effective spatially resolved detection of defects in manufacturing.
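One common way to deconvolve an image with a measured point spread function is a Wiener filter in the Fourier domain. The sketch below is a generic illustration under a circular-convolution assumption, not the paper's exact method; the noise-to-signal parameter and the toy point-source example are invented here:

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """Deconvolve 'image' with a measured PSF using a Wiener filter.
    nsr is an assumed noise-to-signal power ratio (regularisation term)."""
    psf_pad = np.zeros_like(image, dtype=float)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()  # normalise the PSF
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(image)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))

# Example: blur a point source with a 3 x 3 box PSF, then restore it
img = np.zeros((32, 32))
img[16, 16] = 1.0
psf = np.ones((3, 3))
psf_pad = np.zeros_like(img)
psf_pad[:3, :3] = psf / psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf_pad)))
restored = wiener_deconvolve(blurred, psf, nsr=1e-9)  # noise-free toy, tiny nsr
```

Normalising the PSF so it sums to one is the step the paper highlights for light that spreads beyond the active sensor area; without it, deconvolution rescales the image intensities.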
Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope
Adams, Jesse K.; Boominathan, Vivek; Avants, Benjamin W.; Vercosa, Daniel G.; Ye, Fan; Baraniuk, Richard G.; Robinson, Jacob T.; Veeraraghavan, Ashok
2017-01-01
Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies. PMID:29226243
Concept and integration of an on-line quasi-operational airborne hyperspectral remote sensing system
NASA Astrophysics Data System (ADS)
Schilling, Hendrik; Lenz, Andreas; Gross, Wolfgang; Perpeet, Dominik; Wuttke, Sebastian; Middelmann, Wolfgang
2013-10-01
Modern mission characteristics require the use of advanced imaging sensors in reconnaissance. In particular, high spatial and high spectral resolution imaging provides promising data for many tasks such as classification and detecting objects of military relevance, such as camouflaged units or improvised explosive devices (IEDs). Especially in asymmetric warfare with highly mobile forces, intelligence, surveillance and reconnaissance (ISR) needs to be available close to real-time. This demands the use of unmanned aerial vehicles (UAVs) in combination with downlink capability. The system described in this contribution is integrated in a wing pod for ease of installation and calibration. It is designed for the real-time acquisition and analysis of hyperspectral data. The main component is a Specim AISA Eagle II hyperspectral sensor, covering the visible and near-infrared (VNIR) spectral range with a spectral resolution up to 1.2 nm and 1024 pixel across track, leading to a ground sampling distance below 1 m at typical altitudes. The push broom characteristic of the hyperspectral sensor demands an inertial navigation system (INS) for rectification and georeferencing of the image data. Additional sensors are a high resolution RGB (HR-RGB) frame camera and a thermal imaging camera. For on-line application, the data is preselected, compressed and transmitted to the ground control station (GCS) by an existing system in a second wing pod. The final result after data processing in the GCS is a hyperspectral orthorectified GeoTIFF, which is filed in the ERDAS APOLLO geographical information system. APOLLO allows remote access to the data and offers web-based analysis tools. The system is quasi-operational and was successfully tested in May 2013 in Bremerhaven, Germany.
NASA Astrophysics Data System (ADS)
Vitucci, G.; Minniti, T.; Tremsin, A. S.; Kockelmann, W.; Gorini, G.
2018-04-01
The MCP-based neutron counting detector is a novel device that allows high spatial resolution and time-resolved neutron radiography and tomography with epithermal, thermal and cold neutrons. Time resolution is made possible by high readout speeds of ~1200 frames/sec, allowing high resolution event counting at relatively high rates without spatial resolution degradation due to event overlaps. The electronic readout is based on a Timepix sensor, a CMOS pixel readout chip developed at CERN. Currently, a quad Timepix detector geometry is used with an active format of 28 × 28 mm2, limited by the size of the Timepix quad (2 × 2 chips) readout. Measurements of a set of high-precision micrometric test samples have been performed at the Imaging and Materials Science & Engineering (IMAT) beamline operating at the ISIS spallation neutron source (U.K.). The aim of these experiments was the full characterization of the chip misalignment and of the gaps between each pad in the quad Timepix sensor. Such misalignment causes distortions of the recorded shape of the analyzed sample. We present in this work a post-processing image procedure that accounts for and corrects these effects. Results of the correction are discussed and the efficacy of this method evaluated.
High resolution, wide field of view, real time 340GHz 3D imaging radar for security screening
NASA Astrophysics Data System (ADS)
Robertson, Duncan A.; Macfarlane, David G.; Hunter, Robert I.; Cassidy, Scott L.; Llombart, Nuria; Gandini, Erio; Bryllert, Tomas; Ferndahl, Mattias; Lindström, Hannu; Tenhunen, Jussi; Vasama, Hannu; Huopana, Jouni; Selkälä, Timo; Vuotikka, Antti-Jussi
2017-05-01
The EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) is developing a demonstrator system for next generation airport security screening which will combine passive and active submillimeter wave imaging sensors. We report on the development of the 340 GHz 3D imaging radar which achieves high volumetric resolution over a wide field of view with high dynamic range and a high frame rate. A sparse array of 16 radar transceivers is coupled with high speed mechanical beam scanning to achieve a field of view of 1 x 1 x 1 m3 and a 10 Hz frame rate.
Evaluation of an innovative color sensor for space application
NASA Astrophysics Data System (ADS)
Cessa, Virginie; Beauvivre, Stéphane; Pittet, Jacques; Dougnac, Virgile; Fasano, M.
2017-11-01
We present in this paper an evaluation of an innovative image sensor that provides color information without the need for organic filters. The sensor is a CMOS array with more than 4 million pixels which filters the incident photons into R, G, and B channels, delivering full resolution in color. Such a sensor, combining high performance with low power consumption, is of high interest for future space missions. The paper presents the characteristics of the detector as well as the first results of environmental testing.
Submicrometer fiber-optic chemical sensors: Measuring pH inside single cells
NASA Astrophysics Data System (ADS)
Kopelman, R.
Starting from scratch, we went in two and a half years to 0.04 micron optical microscopy resolution. We have demonstrated the application of near-field scanning optical microscopy to DNA samples and opened the new fields of near-field scanning spectroscopy and submicron opto-chemical sensors. All of these developments have been important steps towards in-situ DNA imaging and characterization on the nanoscale. Our first goal was to make NSOM (near-field scanning optical microscopy) a working enterprise, capable of 'zooming-in' towards a sample and imaging with a resolution exceeding that of traditional microscopy by a factor of ten. This has been achieved. Not only do we have a resolution of about 40 nm but we can image a 1 x 1 micron object in less than 10 seconds. Furthermore, the NSOM is a practical instrument. The tips survive for days or weeks of scanning and new methods of force feedback will soon protect the most fragile samples. Reproducible images of metal gratings, gold particles, dye balls (for calibration) and of several DNA samples have been made, proving the practicality of our approach. We also give highly resolved Force/NSOM images of human blood cells. Our second goal has been to form molecular optics (e.g., exciton donor) tips with a resolution of 2-10 nm for molecular excitation microscopy (MEM). We have produced such tips, and scanned with them, but only with a resolution comparable to that of our standard NSOM tips. However, we have demonstrated their potential for high resolution imaging capabilities: (1) An energy transfer (tip to sample) based feedback capability. (2) A Kasha (external heavy atom) effect based feedback. In addition, a novel and practical opto-chemical sensor that is a billion times smaller than the best ones available has been developed as well. Finally, we have also performed spatially resolved fluorescence spectroscopy.
NASA Astrophysics Data System (ADS)
Sargent, Garrett C.; Ratliff, Bradley M.; Asari, Vijayan K.
2017-08-01
The advantage of division of focal plane imaging polarimeters is their ability to obtain temporally synchronized intensity measurements across a scene; however, they sacrifice spatial resolution in doing so due to their spatially modulated arrangement of the pixel-to-pixel polarizers and often result in aliased imagery. Here, we propose a super-resolution method based upon two previously trained extreme learning machines (ELM) that attempt to recover missing high frequency and low frequency content beyond the spatial resolution of the sensor. This method yields a computationally fast and simple way of recovering lost high and low frequency content from demosaicing raw microgrid polarimetric imagery. The proposed method outperforms other state-of-the-art single-image super-resolution algorithms in terms of structural similarity and peak signal-to-noise ratio.
Fiber Optic Distributed Sensors for High-resolution Temperature Field Mapping.
Lomperski, Stephen; Gerardi, Craig; Lisowski, Darius
2016-11-07
The reliability of computational fluid dynamics (CFD) codes is checked by comparing simulations with experimental data. A typical data set consists chiefly of velocity and temperature readings, both ideally having high spatial and temporal resolution to facilitate rigorous code validation. While high resolution velocity data is readily obtained through optical measurement techniques such as particle image velocimetry, it has proven difficult to obtain temperature data with similar resolution. Traditional sensors such as thermocouples cannot fill this role, but the recent development of distributed sensing based on Rayleigh scattering and swept-wave interferometry offers resolution suitable for CFD code validation work. Thousands of temperature measurements can be generated along a single thin optical fiber at hundreds of Hertz. Sensors function over large temperature ranges and within opaque fluids where optical techniques are unsuitable. But this type of sensor is sensitive to strain and humidity as well as temperature and so accuracy is affected by handling, vibration, and shifts in relative humidity. Such behavior is quite unlike traditional sensors and so unconventional installation and operating procedures are necessary to ensure accurate measurements. This paper demonstrates implementation of a Rayleigh scattering-type distributed temperature sensor in a thermal mixing experiment involving two air jets at 25 and 45 °C. We present criteria to guide selection of optical fiber for the sensor and describe installation setup for a jet mixing experiment. We illustrate sensor baselining, which links readings to an absolute temperature standard, and discuss practical issues such as errors due to flow-induced vibration. This material can aid those interested in temperature measurements having high data density and bandwidth for fluid dynamics experiments and similar applications. 
We highlight pitfalls specific to these sensors for consideration in experiment design and operation.
NASA Astrophysics Data System (ADS)
Materne, A.; Bardoux, A.; Geoffray, H.; Tournier, T.; Kubik, P.; Morris, D.; Wallace, I.; Renard, C.
2017-11-01
The PLEIADES-HR Earth observing satellites, under CNES development, combine a 0.7 m resolution panchromatic channel and a multispectral channel providing 2.8 m resolution in 4 spectral bands. The 2 satellites will be placed on a sun-synchronous orbit at an altitude of 695 km. The camera operates in push broom mode, providing images across a 20 km swath. This paper focuses on the specifications, design and performance of the TDI detectors developed by e2v technologies under CNES contract for the panchromatic channel. Design drivers derived from the mission and satellite requirements, the architecture of the sensor, and measurement results for key performances of the first prototypes are presented.
Real-time biochemical sensor based on Raman scattering with CMOS contact imaging.
Muyun Cao; Yuhua Li; Yadid-Pecht, Orly
2015-08-01
This work presents a biochemical sensor based on Raman scattering with complementary metal-oxide-semiconductor (CMOS) contact imaging. This biochemical optical sensor is designed for detecting the concentration of solutions. The system is built with a laser diode, an optical filter, a sample holder and a commercial CMOS sensor. The output of the system is analyzed by an image processing program. The system provides instant measurements with a resolution of 0.2 to 0.4 Mol. This low-cost and easy-to-operate small-scale system is useful in chemical, biomedical and environmental labs for quantitative biochemical concentration detection, with results reported to be comparable to a high-cost commercial spectrometer.
Autonomous collection of dynamically-cued multi-sensor imagery
NASA Astrophysics Data System (ADS)
Daniel, Brian; Wilson, Michael L.; Edelberg, Jason; Jensen, Mark; Johnson, Troy; Anderson, Scott
2011-05-01
The availability of imagery simultaneously collected from sensors of disparate modalities enhances an image analyst's situational awareness and expands the overall detection capability to a larger array of target classes. Dynamic cooperation between sensors is increasingly important for the collection of coincident data from multiple sensors either on the same or on different platforms suitable for UAV deployment. Of particular interest is autonomous collaboration between wide area survey detection, high-resolution inspection, and RF sensors that span large segments of the electromagnetic spectrum. The Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) is building sensors with such networked communications capability and is conducting field tests to demonstrate the feasibility of collaborative sensor data collection and exploitation. Example survey / detection sensors include: NuSAR (NRL Unmanned SAR), a UAV compatible synthetic aperture radar system; microHSI, an NRL developed lightweight hyper-spectral imager; RASAR (Real-time Autonomous SAR), a lightweight podded synthetic aperture radar; and N-WAPSS-16 (Nighttime Wide-Area Persistent Surveillance Sensor-16Mpix), a MWIR large array gimbaled system. From these sensors, detected target cues are automatically sent to the NRL/SDL developed EyePod, a high-resolution, narrow FOV EO/IR sensor, for target inspection. In addition to this cooperative data collection, EyePod's real-time, autonomous target tracking capabilities will be demonstrated. Preliminary results and target analysis will be presented.
Comparison of NDVI fields obtained from different remote sensors
NASA Astrophysics Data System (ADS)
Escribano Rodriguez, Juan; Alonso, Carmelo; Tarquis, Ana Maria; Benito, Rosa Maria; Hernandez Díaz-Ambrona, Carlos
2013-04-01
Satellite image data have become an important source of information for monitoring vegetation and mapping land cover at several scales. Besides this, the distribution and phenology of vegetation are largely associated with climate, terrain characteristics and human activity. Various vegetation indices have been developed for qualitative and quantitative assessment of vegetation using remote spectral measurements. In particular, sensors with spectral bands in the red (RED) and near-infrared (NIR) lend themselves well to vegetation monitoring, and based on them the Normalized Difference Vegetation Index (NDVI) [(NIR - RED) / (NIR + RED)] has been widely used. Given that the characteristics of spectral bands in RED and NIR vary distinctly from sensor to sensor, NDVI values based on data from different instruments will not be directly comparable. The spatial resolution also varies significantly between sensors, as well as within a given scene in the case of wide-angle and oblique sensors. As a result, NDVI values will vary according to combinations of the heterogeneity and scale of terrestrial surfaces and pixel footprint sizes. Therefore, the question arises as to the impact of differences in spectral and spatial resolutions on vegetation indices like the NDVI and their interpretation as a drought index. During 2012, three locations (at Salamanca, Granada and Córdoba) were selected, and periodic pasture monitoring and botanic composition surveys were carried out. Daily precipitation, temperature and monthly soil water content were measured, as well as fresh and dry pasture weight. At the same time, remote sensing images of the chosen places were captured by DEIMOS-1 and MODIS. DEIMOS-1 is based on the Microsat-100 concept from Surrey.
It is conceived for obtaining Earth images with sufficient resolution to study terrestrial vegetation cover (20 × 20 m), together with a wide field of view (600 km), in order to obtain those images with high temporal resolution and at a reduced cost. By contrast, MODIS images have a much lower spatial resolution (500 × 500 m). The aim of this study is to compare the NDVI values of these two sensors at different spatial resolutions. Acknowledgements. This work was partially supported by ENESA under project P10 0220C-823. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. MTM2009-14621 and i-MATH No. CSD2006-00032 is greatly appreciated.
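The NDVI formula quoted above is straightforward to compute; the short sketch below also illustrates one reason values from sensors of different spatial resolution are not directly comparable: averaging reflectances before computing NDVI does not give the same result as averaging per-pixel NDVI. All reflectance values here are invented for illustration:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Invented reflectances: healthy canopy vs. bare soil
dense = ndvi(0.50, 0.08)   # high NIR, low red -> NDVI near 0.7
bare = ndvi(0.30, 0.25)    # similar NIR and red -> NDVI near 0.1

# NDVI does not commute with spatial averaging: a coarse pixel covering both
# surfaces differs from the mean of the two fine-pixel NDVI values
nir = np.array([0.50, 0.30])
red = np.array([0.08, 0.25])
per_pixel = float(ndvi(nir, red).mean())
aggregated = float(ndvi(nir.mean(), red.mean()))
```

The gap between `per_pixel` and `aggregated` grows with surface heterogeneity, which is exactly the footprint-size effect discussed above for the 20 m DEIMOS-1 and 500 m MODIS pixels.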
A two-step A/D conversion and column self-calibration technique for low noise CMOS image sensors.
Bae, Jaeyoung; Kim, Daeyun; Ham, Seokheon; Chae, Youngcheol; Song, Minkyu
2014-07-04
In this paper, a 120 frames per second (fps) low-noise CMOS Image Sensor (CIS) based on a Two-Step Single-Slope ADC (TS SS ADC) and a column self-calibration technique is proposed. The TS SS ADC is suitable for high-speed video systems because its conversion speed is more than 10 times faster than that of the Single-Slope ADC (SS ADC). However, mismatch errors arise between the coarse block and the fine block due to the two-step operation of the TS SS ADC; in general, this makes it difficult to implement the TS SS ADC beyond 10-bit resolution. To reduce such errors, a new 4-input comparator is discussed and a high-resolution TS SS ADC is proposed. Further, a feedback circuit that enables column self-calibration to reduce Fixed Pattern Noise (FPN) is also described. The proposed chip has been fabricated in 0.13 μm Samsung CIS technology and satisfies VGA resolution. The pixel is based on the 4-TR Active Pixel Sensor (APS). The high frame rate of 120 fps is achieved at VGA resolution. The measured FPN is 0.38 LSB, and the measured dynamic range is about 64.6 dB.
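The speed advantage of the two-step single-slope scheme can be illustrated with a simple behavioral model (a hedged sketch with hypothetical parameters, not the paper's circuit): a coarse ramp resolves the upper half of the bits and a fine ramp the lower half, so a 10-bit conversion takes on the order of 2 x 2^5 = 64 clock cycles instead of the 2^10 = 1024 cycles of a plain single-slope ADC.

```python
def single_slope_cycles(bits):
    """A single-slope ADC compares the input against one full ramp:
    worst case 2**bits clock cycles per conversion."""
    return 2 ** bits

def two_step_ss_convert(vin, bits=10, vref=1.0):
    """Behavioral model of a two-step single-slope conversion (hypothetical
    parameters, not the paper's circuit): a coarse ramp resolves the upper
    half of the bits and a fine ramp resolves the lower half."""
    half = bits // 2
    coarse_lsb = vref / 2 ** half
    coarse = min(int(vin / coarse_lsb), 2 ** half - 1)   # coarse ramp result
    residue = vin - coarse * coarse_lsb
    fine_lsb = coarse_lsb / 2 ** half
    fine = min(int(residue / fine_lsb), 2 ** half - 1)   # fine ramp result
    code = (coarse << half) | fine
    cycles = 2 ** half + 2 ** half                       # two short ramps
    return code, cycles

code, cycles = two_step_ss_convert(0.5)
print(code, cycles, single_slope_cycles(10))  # 512 64 1024: ~16x fewer cycles
```

The model also makes the paper's difficulty concrete: any gain or offset mismatch between the coarse and fine ramps corrupts the stitching of `coarse` and `fine`, which is what the 4-input comparator and self-calibration address.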
Synthetic Foveal Imaging Technology
NASA Technical Reports Server (NTRS)
Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh
2009-01-01
Synthetic Foveal Imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data-processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view imaging with high resolution for detecting and tracking sparse objects or events within narrow subfields of view. Without the dynamic adaptation afforded by SyFT, identifying and tracking such objects or events would require post-processing an image-data space of terabytes. Such post-processing would be time-consuming; as a consequence, significant events could be missed entirely because of their time evolution, or could not be observed at the required fidelity without real-time adaptations such as adjusting focal-plane operating conditions or aiming the focal plane in different directions to track the events. The basic concept of foveal imaging is straightforward: in imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. This basic concept is not new in itself: image sensors based on it have been described in several previous NASA Tech Briefs articles. Active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand are one such example.
What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.
Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V
2015-02-01
Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with sensor-based adaptive optics designs. This technique replaces the Hartmann-Shack wavefront sensor used to measure aberrations with a depth-resolved, image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper were essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system, as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation.
NASA Astrophysics Data System (ADS)
Pi, Shiqiang; Liu, Wenzhong; Jiang, Tao
2018-03-01
The magnetic transparency of biological tissue allows the magnetic nanoparticle (MNP) to serve as a promising functional sensor and contrast agent. The complex susceptibility of MNPs, strongly influenced by particle concentration, the excitation magnetic field and the surrounding microenvironment, has significant implications for biomedical applications. Therefore, magnetic susceptibility imaging of high spatial resolution will give more detailed information during MNP-aided diagnosis and therapy. In this study, we present a novel spatial magnetic susceptibility extraction method for MNPs under a gradient magnetic field, a low-frequency drive magnetic field, and a weak high-frequency magnetic field. Based on this method, magnetic particle susceptibility imaging (MPSI) with millimeter-level spatial resolution (<3 mm) was achieved using our homemade imaging system. As corroborated by the experimental results, MPSI offers real-time acquisition (1 s per frame), quantitative capability, and isotropic high resolution.
Resolution Properties of a Calcium Tungstate (CaWO4) Screen Coupled to a CMOS Imaging Detector
NASA Astrophysics Data System (ADS)
Koukou, Vaia; Martini, Niki; Valais, Ioannis; Bakas, Athanasios; Kalyvas, Nektarios; Lavdas, Eleftherios; Fountos, George; Kandarakis, Ioannis; Michail, Christos
2017-11-01
The aim of the current work was to assess the resolution properties of a calcium tungstate (CaWO4) screen (screen coating thickness: 50.09 mg/cm2; actual thickness: 167.2 μm) coupled to a high-resolution complementary metal-oxide-semiconductor (CMOS) digital imaging sensor. A 2.7x3.6 cm2 CaWO4 sample was extracted from an Agfa Curix universal screen and coupled directly to the active area of the active pixel sensor (APS) CMOS sensor. Experiments were performed following the new IEC 62220-1-1:2015 International Standard, using an RQA-5 beam quality. Resolution was assessed in terms of the Modulation Transfer Function (MTF), using the slanted-edge method. The CaWO4/CMOS detector configuration was found to have a linear response in the exposure range under investigation. The final MTF was obtained by averaging the oversampled edge spread function (ESF), using custom-made software developed by our team according to IEC 62220-1-1:2015. Considering the renewed interest in calcium tungstate for various applications, along with the resolution results of this work, CaWO4 could also be considered for use in X-ray imaging devices such as charge-coupled devices (CCD) and CMOS.
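The ESF-to-MTF step mentioned above can be sketched in a few lines. This is a minimal illustration of the slanted-edge chain (differentiate the oversampled ESF to get the line spread function, window it, Fourier-transform, normalize) applied to a synthetic Gaussian-blurred edge; the full IEC 62220-1-1 procedure adds edge-angle estimation, binning, projection and averaging steps that are omitted here.

```python
import numpy as np
from math import erf, sqrt

def mtf_from_esf(esf):
    """MTF estimate from an oversampled edge spread function: a minimal
    sketch of the slanted-edge computation, not the full IEC procedure."""
    lsf = np.diff(esf)                    # line spread function = d(ESF)/dx
    lsf = lsf * np.hanning(lsf.size)      # taper to limit spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                   # normalize to 1 at zero frequency

# Synthetic oversampled edge: an ideal step blurred by a Gaussian.
x = np.linspace(-5.0, 5.0, 512)
sigma = 0.5
esf = np.array([0.5 * (1.0 + erf(xi / (sigma * sqrt(2.0)))) for xi in x])
mtf = mtf_from_esf(esf)                   # falls off toward high frequency
```

For a Gaussian blur the resulting MTF is itself Gaussian-shaped, decaying from 1 at zero frequency, which is the qualitative behavior measured for real screen/sensor combinations.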
Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors
López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena
2013-01-01
This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition are burdensome and time-consuming processes, so to determine the most suitable implementation platform, an analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of the two processes, the image generation and the comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng
2013-01-01
Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm2, showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels successfully operated at a high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887
NASA Astrophysics Data System (ADS)
Chambion, Bertrand; Gaschet, Christophe; Behaghel, Thibault; Vandeneynde, Aurélie; Caplet, Stéphane; Gétin, Stéphane; Henry, David; Hugot, Emmanuel; Jahn, Wilfried; Lombardo, Simona; Ferrari, Marc
2018-02-01
Over recent years, huge interest has grown in curved electronics, particularly for opto-electronic systems. Curved sensors help correct off-axis aberrations, such as Petzval field curvature and astigmatism, and bring significant optical and size benefits to imaging systems. In this paper, we first describe the advantages of curved sensors and the associated packaging process, applied to a 1/1.8'' format 1.3 Mpx global-shutter CMOS sensor (Teledyne EV76C560) in its standard ceramic package, with spherical radii of curvature Rc = 65 mm and 55 mm. The mechanical limits of the die are discussed (finite element modelling and experimental), and electro-optical performances are investigated. Then, based on the monocentric optical architecture, we propose a new compact, high-resolution design developed specifically for a curved image sensor, including optical optimization, tolerancing, assembly and optical tests. Finally, a functional prototype is presented through a benchmark approach and compared to an existing standard optical system with the same performance and a 2.5x reduction in length. This work culminated in a functional prototype demonstration by CEA-LETI during the Photonics West 2018 conference. All these experiments and optical results demonstrate the feasibility and high performance of systems with curved sensors.
Highly curved image sensors: a practical approach for improved optical performance
NASA Astrophysics Data System (ADS)
Guenter, Brian; Joshi, Neel; Stoakley, Richard; Keefe, Andrew; Geary, Kevin; Freeman, Ryan; Hundley, Jake; Patterson, Pamela; Hammon, David; Herrera, Guillermo; Sherman, Elena; Nowak, Andrew; Schubert, Randall; Brewer, Peter; Yang, Louis; Mott, Russell; McKnight, Geoff
2017-06-01
The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied, yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure-membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in spherical curvature over prior approaches, which have mechanical constraints that resist deformation and create a high-stress, stretch-dominated state. Our process creates a bridge between the high-precision, low-cost but planar CMOS process and ideal non-planar component shapes, such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring an exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3" format image sensor in this report, we also show that this process is generally compatible with many state-of-the-art image sensor formats. As an example, we report photogrammetry test data for an APS-C-sized silicon die formed to a 30° subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high-performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.
Planar implantable sensor for in vivo measurement of cellular oxygen metabolism in brain tissue.
Tsytsarev, Vassiliy; Akkentli, Fatih; Pumbo, Elena; Tang, Qinggong; Chen, Yu; Erzurumlu, Reha S; Papkovsky, Dmitri B
2017-04-01
Brain imaging methods are continually improving. Imaging of the cerebral cortex is widely used both in animal experiments and in charting human brain function in health and disease. Among animal models, the rodent cerebral cortex has been widely used because of the patterned neural representation of the whiskers on the snout and the relative ease of activating cortical tissue with whisker stimulation. We tested a new planar solid-state oxygen sensor, comprising a polymeric film with a phosphorescent oxygen-sensitive coating on the working side, to monitor the dynamics of oxygen metabolism in the cerebral cortex following sensory stimulation. Sensory stimulation led to changes in oxygenation and deoxygenation of activated areas in the barrel cortex. We demonstrate the possibility of dynamic mapping of relative changes in oxygenation in live mouse brain tissue with such a sensor. Oxygenation-based functional magnetic resonance imaging (fMRI) is a very effective method for functional brain mapping but has high costs and limited spatial resolution. Optical imaging of intrinsic signals (IOS) does not provide the required sensitivity, and voltage-sensitive dye imaging (VSDi) has limited applicability due to the significant toxicity of the voltage-sensitive dye. Our planar solid-state oxygen sensor imaging approach circumvents these limitations, providing a simple optical contrast agent with low toxicity and rapid application. The planar solid-state oxygen sensor described here can be used as a tool for visualization and real-time analysis of sensory-evoked neural activity in vivo. Further, this approach allows visualization of local neural activity with high temporal and spatial resolution.
Landsat 7 thermal-IR image sharpening using an artificial neural network and sensor model
Lemeshewsky, G.P.; Schowengerdt, R.A.; ,
2001-01-01
The enhanced thematic mapper (plus) (ETM+) instrument on Landsat 7 shares the same basic design as the TM sensors on Landsats 4 and 5, with some significant improvements. In common are six multispectral bands with a 30-m ground-projected instantaneous field of view (GIFOV). However, the thermal-IR (TIR) band now has a 60-m GIFOV, instead of 120-m, and a 15-m panchromatic band has been added. The artificial neural network (NN) image sharpening method described here uses data from the higher-spatial-resolution ETM+ bands to enhance (sharpen) the spatial resolution of the TIR imagery. It is based on an assumed correlation, over multiple scales of resolution, between image edge-contrast patterns in the TIR band and several other spectral bands. A multilayer, feedforward NN is trained to approximate TIR data at 60 m, given degraded (from 30-m to 60-m) spatial resolution input from spectral bands 7, 5, and 2. After training, the NN output for full-resolution input generates an approximation of a TIR image at 30-m resolution. Two methods are used to degrade the spatial resolution of the imagery used for NN training, and the corresponding sharpening results are compared. One degradation method uses a published sensor transfer function (TF) for Landsat 5 to simulate coarser-resolution sensor imagery from higher-resolution imagery. For comparison, the second degradation method is simply Gaussian low-pass filtering and subsampling, wherein the Gaussian filter approximates the full-width-at-half-maximum amplitude characteristics of the TF-based spatial filter. Two fixed-size NNs (that is, with the same number of weights and processing elements) were trained separately with the degraded-resolution data, and the sharpening results compared. The comparison evaluates the relative influence of the degradation technique employed and whether or not it is desirable to incorporate a sensor TF model. Preliminary results indicate some improvements for the sensor-model-based technique. Further evaluation using a higher-resolution reference image and strict application of the sensor model to the data is recommended.
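The second degradation method described above, Gaussian low-pass filtering followed by subsampling, can be sketched directly (a toy implementation with an arbitrary sigma and kernel radius, not the filter matched to the Landsat 5 transfer function's FWHM):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def degrade(img, sigma=1.0, factor=2):
    """Degrade spatial resolution by separable Gaussian low-pass filtering
    and subsampling (sigma here is an arbitrary stand-in for one matched
    to the sensor transfer function's FWHM)."""
    k = gaussian_kernel(sigma, radius=3)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred[::factor, ::factor]

# Toy 30-m "band": an 8x8 ramp degraded to a simulated 60-m grid for NN training.
band = np.add.outer(np.arange(8.0), np.arange(8.0))
low = degrade(band)
print(low.shape)  # (4, 4)
```

Pairs of (degraded, original) patches produced this way are what the feedforward NN would be trained on before being applied to full-resolution input.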
Method of orthogonally splitting imaging pose measurement
NASA Astrophysics Data System (ADS)
Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong
2018-01-01
In order to meet aviation's and machinery manufacturing's need for pose measurement with high precision, fast speed and wide measurement range, and to resolve the contradiction between the measurement range and resolution of a vision sensor, this paper proposes an orthogonally-splitting-imaging pose measurement method. We design and realize an orthogonally splitting imaging vision sensor and establish a pose measurement system. The vision sensor consists of one imaging lens, a beam-splitter prism, cylindrical lenses and dual linear CCDs. The dual linear CCDs each acquire one-dimensional image coordinate data of the target point, and the two sets of data restore the two-dimensional image coordinates of the target point. According to the characteristics of the imaging system, we establish a nonlinear distortion model to correct distortion. Based on cross-ratio invariability, a polynomial equation is established and solved by least-squares fitting. After distortion correction, we establish the measurement mathematical model of the vision sensor and determine the intrinsic parameters for calibration. An array of feature points for calibration is built by placing a planar target in several different positions. An iterative optimization method is presented to solve the parameters of the model. The experimental results show that the field angle is 52°, the focal distance is 27.40 mm, the image resolution is 5185×5117 pixels, the displacement measurement error is less than 0.1 mm, and the rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy pose measurement requirements of high precision, fast speed and wide measurement range.
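The distortion-correction step, fitting a polynomial mapping from distorted to ideal coordinates by least squares, can be sketched as follows. This is a 1-D toy with a made-up cubic distortion and `np.polyfit` standing in for the paper's cross-ratio-based polynomial equations:

```python
import numpy as np

# Minimal sketch of the distortion-correction idea: fit a polynomial mapping
# from distorted to ideal image coordinates by least squares. The synthetic
# cubic distortion below is illustrative, not the sensor's actual model.
ideal = np.linspace(-1.0, 1.0, 21)            # ideal 1-D image coordinates
distorted = ideal + 0.05 * ideal**3           # synthetic lens distortion
coeffs = np.polyfit(distorted, ideal, deg=3)  # least-squares polynomial fit
corrected = np.polyval(coeffs, distorted)
print(np.abs(corrected - ideal).max())        # residual after correction
```

The residual after the fit is far smaller than the raw distortion, which is the point of applying the correction before solving the pose measurement model.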
NASA Astrophysics Data System (ADS)
Gao, M.; Li, J.
2018-04-01
Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on manual selection of ground control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration. This method involves several steps and parameters, and for the multi-spectral sensor GF4 PMS it is necessary to identify the best combination of parameters and steps. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficient (RPC) correction before automatic registration, the choice of base band for the automatic registration, and the configuration of the GF4 PMS spatial resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirtley, John R., E-mail: jkirtley@stanford.edu; Rosenberg, Aaron J.; Palmstrom, Johanna C.
Superconducting QUantum Interference Device (SQUID) microscopy has excellent magnetic field sensitivity, but suffers from modest spatial resolution when compared with other scanning probes. This spatial resolution is determined by both the size of the field-sensitive area and the spacing between this area and the sample surface. In this paper we describe scanning SQUID susceptometers that achieve sub-micron spatial resolution while retaining a white-noise-floor flux sensitivity of ≈2 μΦ0/√Hz. This high spatial resolution is accomplished by deep sub-micron feature sizes, well-shielded pickup loops fabricated using a planarized process, and a deep etch step that minimizes the spacing between the sample surface and the SQUID pickup loop. We describe the design, modeling, fabrication, and testing of these sensors. Although sub-micron spatial resolution has been achieved previously in scanning SQUID sensors, our sensors not only achieve high spatial resolution but also have integrated modulation coils for flux feedback, integrated field coils for susceptibility measurements, and batch processing. They are therefore a generally applicable tool for imaging sample magnetization, currents, and susceptibilities with higher spatial resolution than previous susceptometers.
Compressive light field imaging
NASA Astrophysics Data System (ADS)
Ashok, Amit; Neifeld, Mark A.
2010-04-01
Light field imagers such as the plenoptic and the integral imagers inherently measure projections of the four-dimensional (4D) light field scalar function onto a two-dimensional sensor and therefore suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatio-angular resolution trade-off and allow high-resolution capture of the 4D light field function with multiple measurements, at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time than a conventional light field imager to achieve an equivalent light field reconstruction quality.
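The core idea, that a signal concentrated in a few components of a well-chosen basis can be recovered from far fewer measurements than samples, can be sketched in one dimension. This is a toy linear model (a random orthonormal basis standing in for the principal-component basis, with illustrative coefficients), not the paper's simulation study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "light field" concentrated in a few components of an orthonormal
# basis (a stand-in for the principal-component measurement basis).
n = 64
basis = np.linalg.qr(rng.standard_normal((n, n)))[0]
coeffs = np.zeros(n)
coeffs[:4] = [3.0, -2.0, 1.5, 0.5]          # few significant components
x = basis @ coeffs

# Compressive imager: m << n projections onto the leading basis vectors.
m = 8
phi = basis[:, :m].T                        # measurement matrix
y = phi @ x                                 # m photon-efficient measurements

# Linear reconstruction from the few measurements.
x_hat = basis[:, :m] @ y
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # near-zero residual
```

Because the signal's energy lies in the leading basis vectors, 8 measurements recover the 64-sample signal almost exactly; this is the mechanism behind the 4x reduction in measurements reported above.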
Using CTX Image Features to Predict HiRISE-Equivalent Rock Density
NASA Technical Reports Server (NTRS)
Serrano, Navid; Huertas, Andres; McGuire, Patrick; Mayer, David; Ardvidson, Raymond
2010-01-01
Methods have been developed to quantitatively assess rock hazards at candidate landing sites with the aid of images from the HiRISE camera onboard NASA's Mars Reconnaissance Orbiter. HiRISE is able to resolve rocks as small as 1 m in diameter. Some sites of interest do not have adequate coverage by the highest-resolution sensors, and there is a need to infer relevant information (like site safety or underlying geomorphology). The proposed approach would make it possible to obtain rock density estimates at a level close or equal to those obtained from high-resolution sensors in which individual rocks are discernible.
NASA Technical Reports Server (NTRS)
Cao, Changyong; DeLuccia, Frank J.; Xiong, Xiaoxiong; Wolfe, Robert; Weng, Fuzhong
2014-01-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) is one of the key environmental remote-sensing instruments onboard the Suomi National Polar-orbiting Partnership spacecraft, which was successfully launched on October 28, 2011 from Vandenberg Air Force Base, California. Following a series of spacecraft and sensor activation operations, the VIIRS nadir door was opened on November 21, 2011. The first VIIRS image acquired signifies a new generation of operational moderate-resolution imaging capabilities, following the legacy of the Advanced Very High Resolution Radiometer series on NOAA satellites and the Terra and Aqua Moderate Resolution Imaging Spectroradiometer for NASA's Earth Observing System. VIIRS provides significant enhancements to operational environmental monitoring and numerical weather forecasting, with 22 imaging and radiometric bands covering wavelengths from 0.41 to 12.5 microns, providing the sensor data records for 23 environmental data records including aerosol, cloud properties, fire, albedo, snow and ice, vegetation, sea surface temperature, ocean color, and night-time visible-light-related applications. Preliminary results from the on-orbit verification in the post-launch check-out and intensive calibration and validation have shown that VIIRS is performing well and producing high-quality images. This paper provides an overview of the on-orbit performance of VIIRS and the calibration/validation (cal/val) activities and methodologies used. It presents an assessment of the sensor's initial on-orbit calibration and performance based on the efforts of the VIIRS-SDR team. Known anomalies, issues, and future calibration efforts, including long-term monitoring and intercalibration, are also discussed.
NASA Astrophysics Data System (ADS)
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2017-03-01
Digital holographic on-chip microscopy achieves large space-bandwidth products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them. The data-acquisition efficiency of this sequential illumination process can be improved threefold by using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, together with a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex the three wavelength channels. This demultiplexing step is conventionally combined with interpolation-based Bayer demosaicing. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, the conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through the digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves color imaging performance very similar to conventional sequential R, G, B illumination, with a 3-fold improvement in image acquisition time and data efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited settings and point-of-care offices.
High Spectral Resolution, High Cadence, Imaging X-Ray Microcalorimeters for Solar Physics
NASA Technical Reports Server (NTRS)
Bandler, Simon R.; Bailey, Catherine N.; Bookbinder, Jay A.; DeLuca, Edward E.; Chervenak, Jay A.; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Daniel P.; Kelley, Richard L.; Kilbourne, Caroline A.;
2010-01-01
High-spectral-resolution, high-cadence, imaging X-ray spectroscopy has the potential to revolutionize the study of the solar corona. To that end we have been developing transition-edge-sensor (TES) based X-ray microcalorimeter arrays for future solar physics missions, where imaging and high-energy-resolution spectroscopy will enable previously impossible studies of the dynamics and energetics of the solar corona. The characteristics of these X-ray microcalorimeters are significantly different from those of conventional microcalorimeters developed for astrophysics, because they need to accommodate much higher count rates (300-1000 cps) while maintaining a high energy resolution of less than 4 eV FWHM in the X-ray energy band of 0.2-10 keV. The other main difference is a smaller pixel size (less than 75 x 75 square microns) than is typical for X-ray microcalorimeters, in order to provide angular resolution of less than 1 arcsecond. We have achieved an energy resolution of 2.15 eV at 6 keV in a pixel with a 12 x 12 square micron TES sensor and a 34 x 34 x 9.1 micron gold absorber, and a resolution of 2.30 eV at 6 keV in a pixel with a 35 x 35 micron TES and a 57 x 57 x 9.1 micron gold absorber. This performance has been achieved in pixels that are fabricated directly onto solid substrates, i.e., they are not supported by silicon nitride membranes. We present the results from these detectors, the expected performance at high count rates, and prospects for the use of this technology in future solar missions.
Phytoplankton Bloom Off Portugal
NASA Technical Reports Server (NTRS)
2002-01-01
Turquoise and greenish swirls marked the presence of a large phytoplankton bloom off the coast of Portugal on April 23, 2002. This true-color image was acquired by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra satellite. There are also several fires burning in northwest Spain, near the port city of A Coruna. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of this scene at the sensor's fullest resolution, visit the MODIS Rapidfire site.
Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy.
Zhang, Jialin; Sun, Jiasong; Chen, Qian; Li, Jiaji; Zuo, Chao
2017-09-18
High-resolution wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimum solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to significantly enhance the robustness and SNR of reconstruction. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm^2) and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 µm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate the method's promising potential in biological imaging.
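The quoted resolution-gain factor follows directly from the two numbers in the abstract; a quick sanity check (illustrative arithmetic only, not part of the APLI method):

```python
# Ratio of the pixel-size-limited half-pitch resolution to the
# resolution achieved with APLI, as quoted in the abstract.
pixel_size_um = 1.67            # sensor pixel size (1.67 µm)
achieved_half_pitch_um = 0.770  # achieved half-pitch resolution (770 nm)

gain = pixel_size_um / achieved_half_pitch_um
print(round(gain, 2))  # → 2.17
```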
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, minimum-variance beamformers offer high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer while handling multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulated and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in the spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG data from two healthy subjects presented with visual stimuli were also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
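The beamformer-first step of source-space ICA can be sketched with the standard unit-gain LCMV weight formula, w = C⁻¹l / (lᵀC⁻¹l), where C is the sensor covariance and l the lead field of one voxel. This is a minimal illustration with simulated data and assumed array shapes, not the paper's full pipeline (the subsequent SVD + ICA stage is omitted):

```python
import numpy as np

# Simulated MEG data: 32 sensors, 500 time samples.
rng = np.random.default_rng(0)
n_sensors, n_times = 32, 500
data = rng.standard_normal((n_sensors, n_times))
lead_field = rng.standard_normal(n_sensors)  # lead field of one voxel

C = np.cov(data)                       # sensor covariance (32 x 32)
Ci_l = np.linalg.solve(C, lead_field)  # C^-1 l without an explicit inverse
w = Ci_l / (lead_field @ Ci_l)         # LCMV weights, unit gain at the voxel

source_ts = w @ data                   # reconstructed source time course
print(source_ts.shape)                 # one time course per voxel scanned
```

Scanning such weights over a voxel grid yields the source-space data on which SVD + ICA is then applied.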
NASA Technical Reports Server (NTRS)
Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Bandera, Cesar (Inventor); Xia, Shu (Inventor)
2002-01-01
A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.
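Because peripheral pixels are physically larger than foveal ones, they collect proportionally more photocharge for the same irradiance. A hedged sketch of area-based normalization (the function name, reference pitch, and values below are illustrative, not taken from the patent's actual circuitry):

```python
# Scale a pixel's output to the response of a reference-size pixel,
# so different-size pixels in the foveal array report comparable values.
def normalize_charge(raw_charge, pixel_pitch_um, ref_pitch_um=10.0):
    area_ratio = (pixel_pitch_um / ref_pitch_um) ** 2
    return raw_charge / area_ratio

# A 20 um peripheral pixel sees 4x the photon flux of a 10 um pixel:
print(normalize_charge(400.0, 20.0))  # → 100.0
print(normalize_charge(100.0, 10.0))  # → 100.0
```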
Haffert, S Y
2016-08-22
Current wavefront sensors for high-resolution imaging have either a large dynamic range or a high sensitivity. We have developed a new kind of wavefront sensor that can have both: the Generalised Optical Differentiation wavefront sensor. This new wavefront sensor is based on the principles of optical differentiation by amplitude filters. We have extended the theory behind linear optical differentiation and generalised it to nonlinear filters. We used numerical simulations and laboratory experiments to investigate the properties of the generalised wavefront sensor. With this we created a new filter that can decouple the dynamic range from the sensitivity. These properties make it suitable for adaptive optics systems where a large range of phase aberrations has to be measured with high precision.
Microfabrication of High Resolution X-ray Magnetic Calorimeters
NASA Astrophysics Data System (ADS)
Hsieh, Wen-Ting; Bandler, Simon R.; Kelly, Daniel P.; Porst, Jan P.; Rotzinger, Hannes; Seidel, George M.; Stevenson, Thomas R.
2009-12-01
The metallic magnetic calorimeter (MMC) is one of the most promising x-ray detector technologies for providing the very high energy resolution needed for future astronomical x-ray imaging spectroscopy. For this purpose, we have developed micro-fabricated 5×5 MMC arrays in which each individual pixel has an excellent energy resolution, as good as 3.4 eV at 6 keV. Here we report on the fabrication techniques developed to achieve good resolution and high efficiency. These include: processing of a thin insulation layer for strong magnetic coupling between the AuEr sensor film and the niobium pick-up coil; production of overhanging absorbers for enhanced efficiency of x-ray absorption; and fabrication on SiN membranes to minimize the effects of athermal phonon loss on energy resolution. We have also improved the deposition of the magnetic sensor film such that the film magnetization is nearly identical to that expected from the bulk material of the AuEr sputter target. In addition, we have included a study of a position-sensitive design, the Hydra design, which allows thermal coupling of four absorbers to a common MMC sensor and circuit.
Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei
2016-01-01
High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at both high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution severalfold, or even hundreds of times, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera. PMID:26959023
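The three-element median idea can be illustrated with a minimal per-pixel sketch: the median of three successive samples rejects an outlier with at most three comparisons and no full sort. This is a hedged toy model; the paper's actual quicksort-based variant and its integration into the coded-exposure reconstruction may differ:

```python
def median3(a, b, c):
    """Median of three values with at most three comparisons."""
    if a > b:
        a, b = b, a          # ensure a <= b
    if b > c:
        b = c                # b becomes min(b, c)
        if a > b:
            a, b = b, a      # restore a <= b
    return b

# Three successive frames, two pixels each; fuse per pixel.
frames = [[3, 9], [7, 1], [5, 8]]
fused = [median3(f0, f1, f2) for f0, f1, f2 in zip(*frames)]
print(fused)  # → [5, 8]
```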
Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn
2017-06-25
In this paper, we present a multi-resolution-mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed for the 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) of the CIS, which supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while the monitored scene remains static (i.e., no events occur). A prototype sensor of 176 × 144 pixels has been fabricated in a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), at a frame rate of 14 frames/s.
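The scaled-resolution modes can be modelled in software by averaging non-overlapping 2^k × 2^k pixel blocks, trading spatial resolution for less data (and hence less ADC activity and power). This is a hedged behavioural sketch only; the sensor's on-chip readout works differently:

```python
def downscale(image, k):
    """Average 2^k x 2^k blocks of a 2D list-of-lists image."""
    s = 2 ** k
    h, w = len(image), len(image[0])
    out = []
    for by in range(0, h, s):
        row = []
        for bx in range(0, w, s):
            block = [image[y][x] for y in range(by, by + s)
                                 for x in range(bx, bx + s)]
            row.append(sum(block) / (s * s))
        out.append(row)
    return out

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [2, 2, 4, 4],
       [2, 2, 4, 4]]
print(downscale(img, 1))  # → [[0.0, 8.0], [2.0, 4.0]] (1/2 mode)
```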
A Spatio-Spectral Camera for High Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient: less than 1% of the flying time is used for collecting light. This unused potential can be exploited by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the second-generation, commercially available ButterflEYE camera, which offers an extended spectral range (475-925 nm), and we discuss future work.
Broadband image sensor array based on graphene-CMOS integration
NASA Astrophysics Data System (ADS)
Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank
2017-06-01
Integrated circuits based on complementary metal-oxide-semiconductor (CMOS) technology are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty of combining semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.
Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications
NASA Astrophysics Data System (ADS)
Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.
2015-06-01
We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non-destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. 
We have also fabricated high-flux ASICs with a two-dimensional (2D) array of inputs for readout from the sensors. The sensors are guard-ring free, have a 2D array of pixels, and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: an output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half maximum (FWHM) across the entire dynamic range, and a noise floor of about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors, incurring very little input capacitance at the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, and noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.
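Four-bin photon counting with fixed thresholds, as described above, can be sketched as follows: each detected photon increments the counter of the highest threshold it exceeds, and photons below the lowest threshold (the noise floor) are discarded. The threshold energies here are illustrative values, not the chip's actual settings:

```python
def count_photons(energies_kev, thresholds_kev=(20, 40, 60, 80)):
    """Return per-bin counts for a list of photon energies (keV)."""
    counts = [0] * len(thresholds_kev)
    for e in energies_kev:
        # Find the highest threshold this photon exceeds, if any.
        for i in reversed(range(len(thresholds_kev))):
            if e >= thresholds_kev[i]:
                counts[i] += 1
                break
    return counts

# 15 keV falls below the 20 keV noise floor and is discarded.
print(count_photons([15, 25, 45, 45, 90]))  # → [1, 2, 0, 1]
```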
Tsunami damage in Aceh Province, Sumatra
NASA Technical Reports Server (NTRS)
2004-01-01
The island of Sumatra suffered from both the rumblings of the submarine earthquake and the tsunamis that were generated on December 26, 2004. Within minutes of the quake, the sea surged ashore, bringing destruction to the coasts of northern Sumatra. This pair of natural-color images from Landsat 7's Enhanced Thematic Mapper Plus (ETM+) instrument shows a small area along the Sumatran coast in Aceh province where the tsunami smashed its way ashore. In this region, the wave cut a swath of near-total destruction 1.5 kilometers (roughly one mile) deep in most places, penetrating even farther inland in others. Some of these deeper paths of destruction can be seen especially dramatically in the larger-area ETM+ images linked above. (North is up in these larger images.) ETM+ collects data at roughly 30-meter resolution, complementing sensors like NASA's MODIS (onboard both the Terra and Aqua satellites), which observed this area at 250-meter resolution to give a wide view, and ultra-high-resolution sensors like Space Imaging's IKONOS, which observed the same region at 4-meter resolution to give a detailed, smaller-area view. NASA images created by Jesse Allen, Earth Observatory, using data provided courtesy of the Landsat 7 Science Project Office.
High Resolution Near Real Time Image Processing and Support for MSSS Modernization
NASA Astrophysics Data System (ADS)
Duncan, R. B.; Sabol, C.; Borelli, K.; Spetka, S.; Addison, J.; Mallo, A.; Farnsworth, B.; Viloria, R.
2012-09-01
This paper describes image enhancement software applications engineering development work that has been performed in support of Maui Space Surveillance System (MSSS) Modernization. It also covers R&D and transition activity performed over the past few years with the objective of providing increased space situational awareness (SSA) capabilities, including Air Force Research Laboratory (AFRL) use of an FY10 Dedicated High Performance Investment (DHPI) cluster award and our selection and planned use of an FY12 DHPI award. We provide an introduction to image processing of electro-optical (EO) telescope sensor data, along with an overview of the status of high-resolution image enhancement and near-real-time processing. We then describe recent image enhancement application development in support of MSSS Modernization and results to date, and end with a discussion of desired future development work and conclusions. Significant improvements to image enhancement processing have been realized over the past several years, including a key application that has achieved more than a 10,000-times speedup compared to the original R&D code, and a greater than 72-times speedup over the past few years. The latest version of this code maintains software efficiency for post-mission processing while being optimized for image processing of data from a new EO sensor at MSSS. Additional work has also been performed to develop low-latency, near-real-time processing of data collected by the ground-based sensor during overhead passes of space objects.
Intelligent image processing for vegetation classification using multispectral LANDSAT data
NASA Astrophysics Data System (ADS)
Santos, Stewart R.; Flores, Jorge L.; Garcia-Torales, G.
2015-09-01
We propose an intelligent computational technique for the analysis of vegetation images acquired with a multispectral scanner (MSS) sensor. This work focuses on intelligent and adaptive artificial neural network (ANN) methodologies that allow segmentation and classification of spectral remote sensing (RS) signatures, in order to obtain a high-resolution map in which we can delimit the wooded areas and quantify the amount of combustible materials present in these areas. This could provide important information to prevent fires and deforestation of wooded areas. The spectral RS input data acquired by the MSS sensor are treated as a randomly propagated remotely sensed scene with unknown statistics for each Thematic Mapper (TM) band. By performing high-resolution reconstruction and combining these spectral values with information from neighboring pixels in each TM band, we can include contextual information in the ANN. The biggest challenge for conventional classifiers is how to reduce the number of components in the feature vector while preserving the major information contained in the data, especially when the dimensionality of the feature space is high. Preliminary results show that the Adaptive Modified Neural Network method is a promising and effective spectral method for segmentation and classification of RS images acquired with an MSS sensor.
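The contextual-feature idea described above can be sketched by stacking a pixel's own multispectral values with those of its 4-neighbours, so the ANN classifier sees local spatial context. The band count, neighbourhood choice, and function name are illustrative assumptions, not the paper's exact construction:

```python
def context_feature(bands, y, x):
    """Build a feature vector for pixel (y, x).

    bands: list of 2D arrays (lists of lists), one per spectral band.
    The centre pixel and its 4-neighbours each contribute one value
    per band; (y, x) must not lie on the image border here.
    """
    offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    feat = []
    for dy, dx in offsets:
        for band in bands:
            feat.append(band[y + dy][x + dx])
    return feat

b1 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]        # band 1 (toy values)
b2 = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]  # band 2
print(context_feature([b1, b2], 1, 1))
# → [5, 50, 2, 20, 8, 80, 4, 40, 6, 60]
```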
NASA Astrophysics Data System (ADS)
Mõttus, Matti; Takala, Tuure
2014-12-01
Fertility, or the availability of nutrients and water, controls forest productivity. It affects the forest's carbon sequestration, and thus its effect on climate, as well as its commercial value. Although the availability of nutrients cannot be measured directly using remote sensing methods, fertility alters several vegetation traits detectable from the reflectance spectra of a forest stand, including its pigment content and water stress. However, forest reflectance is also influenced by other factors, such as species composition and stand age. Here, we present a case study demonstrating how data obtained using imaging spectroscopy correlate with site fertility. The study was carried out in Hyytiälä, Finland, in the southern boreal forest zone. We used a database of state-owned forest stands including basic forestry variables and a site fertility index. To test the suitability of imaging spectroscopy with different spatial and spectral resolutions for site fertility mapping, we performed two airborne acquisitions using different sensor configurations. First, the sensor was flown at a high altitude with high spectral resolution, resulting in a pixel size on the order of a tree crown. Next, the same area was flown to provide reflectance data with sub-meter spatial resolution; however, to maintain usable signal-to-noise ratios, several spectral channels inside the sensor were combined, thus reducing spectral resolution. We correlated a number of narrowband vegetation indices (describing canopy biochemical composition, structure, and photosynthetic activity) against site fertility. Overall, site fertility had a significant influence on the vegetation indices, but the strength of the correlation depended on the dominant species. We found that high spatial resolution data calculated from the spectra of sunlit parts of tree crowns had the strongest correlation with site fertility.
Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels, 0.58-0.68 micron and 0.725-1.1 micron, to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing technique when using coarse spatial resolution data for global studies.
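The constrained least-squares step of a linear mixing model can be sketched as follows: endmember fractions f solve R ≈ E f subject to sum(f) = 1, enforced here by appending the constraint as a heavily weighted extra equation. The three-band endmember spectra and fractions below are illustrative values, not the paper's data:

```python
import numpy as np

# Columns of E: endmember spectra (e.g. shade, vegetation, soil);
# rows: the three reflective channels.
E = np.array([[0.05, 0.45, 0.30],
              [0.08, 0.50, 0.25],
              [0.60, 0.40, 0.20]])
true_f = np.array([0.2, 0.5, 0.3])
pixel = E @ true_f                  # synthetic mixed-pixel reflectance

w = 1e3                             # weight on the sum-to-one row
A = np.vstack([E, w * np.ones(3)])  # augmented system
b = np.append(pixel, w * 1.0)
f, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(f, 3))               # recovers the true fractions
```

Repeating this solve per pixel yields one fraction image per endmember.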
Radiometric calibration of the Earth observing system's imaging sensors
NASA Technical Reports Server (NTRS)
Slater, P. N.
1987-01-01
Philosophy, requirements, and methods of calibration of multispectral space sensor systems as applicable to the Earth Observing System (EOS) are discussed. Vicarious methods for the calibration of low spatial resolution systems, with reference to the Advanced Very High Resolution Radiometer (AVHRR), are then summarized. Finally, a theoretical introduction is given to a new vicarious method of calibration using the ratio of diffuse-to-global irradiance at the Earth's surface as the key input. This may provide an additional independent method for in-flight calibration.
Droplet Image Super Resolution Based on Sparse Representation and Kernel Regression
NASA Astrophysics Data System (ADS)
Zou, Zhenzhen; Luo, Xinghong; Yu, Qiang
2018-02-01
Microgravity and containerless conditions, which are produced via electrostatic levitation combined with a drop tube, are important when studying the intrinsic properties of new metastable materials. Generally, temperature and image sensors can be used to measure the changes in sample temperature, morphology and volume, from which the specific heat, surface tension, viscosity changes and sample density can be obtained. Considering that the falling speed of the material sample droplet is approximately 31.3 m/s when it reaches the bottom of a 50-meter-high drop tube, a high-speed camera with a collection rate of up to 10^6 frames/s is required to image the falling droplet. However, in high-speed mode, very few pixels, approximately 48-120, will be obtained in each exposure time, which results in low image quality. Super-resolution image reconstruction is an algorithm that provides finer details than the sampling grid of a given imaging device by increasing the number of pixels per unit area in the image. In this work, we demonstrate the application of single-image super-resolution reconstruction in microgravity and electrostatic levitation for the first time. Here, using an image super-resolution method based on sparse representation, a low-resolution droplet image can be reconstructed. Employing Yang's coupled dictionary model, high- and low-resolution image patches were used jointly in dictionary training, and coupled high- and low-resolution dictionaries were obtained. An online double-sparse dictionary training algorithm was used to learn the coupled dictionaries, overcoming the shortcomings of the traditional training algorithm with small image patches. During the image reconstruction stage, a kernel regression algorithm is added, which effectively mitigates the edge blurring of Yang's method.
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2014-01-01
Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP). This property of OOMP helps produce an HR image which is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes. PMID:24566632
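The sparse-recovery step this abstract builds on can be illustrated with plain orthogonal matching pursuit. This is a sketch of standard OMP, not the authors' optimized OOMP variant (which changes the atom-selection criterion); the dictionary and signal are synthetic.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D, re-fitting
    the coefficients on the selected support at every step."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # orthogonal to chosen atoms
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 128))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [2.0, -1.5, 1.0]               # 3-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, k=3)
```

In the paper's pipeline, codes like `x_hat` would be computed per patch against a K-SVD-trained dictionary.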
Research-grade CMOS image sensors for remote sensing applications
NASA Astrophysics Data System (ADS)
Saint-Pe, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Martin-Gonthier, Philippe; Corbiere, Franck; Belliot, Pierre; Estribeau, Magali
2004-11-01
Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA and ESA). Throughout the 90s, thanks to steadily improving performance, CIS began to be used successfully for more and more demanding space applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this paper will present the existing and foreseen ways to reach high-level electro-optical performance for CIS. The developments and performance of CIS prototypes built using an imaging CMOS process will be presented in the corresponding section.
Ultra-fast high-resolution hybrid and monolithic CMOS imagers in multi-frame radiography
NASA Astrophysics Data System (ADS)
Kwiatkowski, Kris; Douence, Vincent; Bai, Yibin; Nedrow, Paul; Mariam, Fesseha; Merrill, Frank; Morris, Christopher L.; Saunders, Andy
2014-09-01
A new burst-mode, 10-frame, hybrid Si-sensor/CMOS-ROIC FPA chip has been recently fabricated at Teledyne Imaging Sensors. The intended primary use of the sensor is in multi-frame 800 MeV proton radiography at LANL. The basic part of the hybrid is a large (48×49 mm²) stitched CMOS chip of 1100×1100 pixel count, with a minimum shutter speed of 50 ns. The performance parameters of this chip are compared to the first-generation 3-frame 0.5-Mpixel custom hybrid imager. The 3-frame cameras have been in continuous use for many years, in a variety of static and dynamic experiments at LANSCE. The cameras can operate with a per-frame adjustable integration time of ~120 ns to 1 s, and an inter-frame time of 250 ns to 2 s. Given the 80 ms total readout time, the original and the new imagers can be externally synchronized to 0.1-to-5 Hz, 50-ns wide proton beam pulses, and record up to ~1000-frame radiographic movies, typically of 3-to-30 minute duration. The performance of the global electronic shutter is discussed and compared to that of a high-resolution commercial front-illuminated monolithic CMOS imager.
NASA Astrophysics Data System (ADS)
Rengarajan, Rajagopalan; Goodenough, Adam A.; Schott, John R.
2016-10-01
Many remote sensing applications rely on simulated scenes to perform complex interaction and sensitivity studies that are not possible with real-world scenes. These applications include the development and validation of new and existing algorithms, understanding of the sensor's performance prior to launch, and trade studies to determine ideal sensor configurations. The accuracy of these applications is dependent on the realism of the modeled scenes and sensors. The Digital Image and Remote Sensing Image Generation (DIRSIG) tool has been used extensively to model the complex spectral and spatial texture variation expected in large city-scale scenes and natural biomes. In the past, material properties that were used to represent targets in the simulated scenes were often assumed to be Lambertian in the absence of hand-measured directional data. However, this assumption presents a limitation for new algorithms that need to recognize the anisotropic behavior of targets. We have developed a new method to model and simulate large-scale high-resolution terrestrial scenes by combining bi-directional reflectance distribution function (BRDF) products from Moderate Resolution Imaging Spectroradiometer (MODIS) data, high spatial resolution data, and hyperspectral data. The high spatial resolution data is used to separate materials and add textural variations to the scene, and the directional hemispherical reflectance from the hyperspectral data is used to adjust the magnitude of the MODIS BRDF. In this method, the shape of the BRDF is preserved since it changes very slowly, but its magnitude is varied based on the high resolution texture and hyperspectral data. In addition to the MODIS derived BRDF, target/class specific BRDF values or functions can also be applied to features of specific interest. The purpose of this paper is to discuss the techniques and the methodology used to model a forest region at a high resolution. 
The simulated scenes using this method for varying view angles show the expected variations in reflectance due to the BRDF effects of the Harvard forest. The effectiveness of this technique to simulate real sensor data is evaluated by comparing the simulated data with Landsat 8 Operational Land Imager (OLI) data over the Harvard forest. Regions of interest were selected from the simulated and the real data for different targets and their Top-of-Atmosphere (TOA) radiances were compared. After adjusting for a scaling correction due to the difference in atmospheric conditions between the simulated and the real data, the TOA radiance is found to agree within 5% in the NIR band and 10% in the visible bands for forest targets under similar illumination conditions. The technique presented in this paper can be extended to other biomes (e.g. desert regions and agricultural regions) by using the appropriate geographic regions. Since the entire scene is constructed in a simulated environment, parameters such as BRDF or its effects can be analyzed for general or target-specific algorithm improvements. Also, the modeling and simulation techniques can be used as a baseline for the development and comparison of new sensor designs and to investigate the operational and environmental factors that affect sensor constellations such as the Sentinel and Landsat missions.
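The core idea described above (keep the shape of the MODIS BRDF, adjust its magnitude so that its hemispherical average matches the hyperspectral directional-hemispherical reflectance) can be sketched as a simple rescaling; the angular samples and target reflectance below are hypothetical, not values from the paper.

```python
import numpy as np

def scale_brdf(brdf_coarse, rho_target, view_weights=None):
    """Rescale a coarse-resolution BRDF so its (weighted) hemispherical average
    matches a target reflectance, preserving the BRDF's angular shape."""
    if view_weights is None:
        view_weights = np.full_like(brdf_coarse, 1.0 / brdf_coarse.size)
    rho_coarse = np.sum(brdf_coarse * view_weights)   # current hemispherical average
    return brdf_coarse * (rho_target / rho_coarse)    # shape kept, magnitude adjusted

brdf = np.array([0.30, 0.34, 0.40, 0.36, 0.31])       # hypothetical angular samples
scaled = scale_brdf(brdf, rho_target=0.25)            # e.g. DHR from hyperspectral data
```

Per-pixel texture variation would then enter by varying `rho_target` across the high-resolution grid.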
High resolution PET breast imager with improved detection efficiency
Majewski, Stanislaw
2010-06-08
A highly efficient PET breast imager for detecting lesions in the entire breast including those located close to the patient's chest wall. The breast imager includes a ring of imaging modules surrounding the imaged breast. Each imaging module includes a slant imaging light guide inserted between a gamma radiation sensor and a photodetector. The slant light guide permits the gamma radiation sensors to be placed in close proximity to the skin of the chest wall, thereby extending the sensitive region of the imager to the base of the breast. Several types of photodetectors are proposed for use in the detector modules, with silicon photomultipliers as the preferred choice due to their compactness. The geometry of the detector heads and the arrangement of the detector ring significantly reduce dead regions, thereby improving detection efficiency for lesions located close to the chest wall.
Super-resolution for scanning light stimulation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bitzer, L. A.; Neumann, K.; Benson, N., E-mail: niels.benson@uni-due.de
Super-resolution (SR) is a technique used in digital image processing to overcome the resolution limitation of imaging systems. In this process, a single high resolution image is reconstructed from multiple low resolution images. SR is commonly used for CCD and CMOS (Complementary Metal-Oxide-Semiconductor) sensor images, as well as for medical applications, e.g., magnetic resonance imaging. Here, we demonstrate that super-resolution can be applied with scanning light stimulation (LS) systems, which are common to obtain space-resolved electro-optical parameters of a sample. For our purposes, the Projection Onto Convex Sets (POCS) method was chosen and modified to suit the needs of LS systems. To demonstrate the SR adaptation, an Optical Beam Induced Current (OBIC) LS system was used. The POCS algorithm was optimized by means of OBIC short circuit current measurements on a multicrystalline solar cell, resulting in a mean square error reduction of up to 61% and improved image quality.
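A minimal POCS reconstruction can be sketched with a block-average observation model and integer shifts: the high-resolution estimate is repeatedly projected onto the data-consistency set of each low-resolution frame. This is a generic textbook POCS sketch on synthetic data, not the authors' LS-specific modification.

```python
import numpy as np

def downsample(img, f=2):
    """Block-average f x f pixel groups (a simple LR observation model)."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def pocs_sr(lr_frames, shifts, f=2, n_iter=50, delta=0.0):
    """POCS super-resolution: project the HR estimate onto the data-consistency
    set of each shifted LR frame; delta is the tolerated residual per LR pixel."""
    hr = np.kron(lr_frames[0], np.ones((f, f)))      # initial guess: pixel replication
    for _ in range(n_iter):
        for lr, (dy, dx) in zip(lr_frames, shifts):
            sim = downsample(np.roll(hr, (-dy, -dx), axis=(0, 1)), f)
            err = np.where(np.abs(lr - sim) > delta, lr - sim, 0.0)
            # add the error to every contributing HR pixel: for a block-mean
            # model this is the exact projection onto the consistency set
            hr += np.roll(np.kron(err, np.ones((f, f))), (dy, dx), axis=(0, 1))
    return hr

rng = np.random.default_rng(2)
truth = rng.random((8, 8))                           # synthetic HR ground truth
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]            # integer subpixel offsets
frames = [downsample(np.roll(truth, (-dy, -dx), axis=(0, 1))) for dy, dx in shifts]
recon = pocs_sr(frames, shifts)
```

Because the true image lies in every consistency set, each projection can only move the estimate closer to it.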
A high frequency electromagnetic impedance imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, Hung-Wen; Lee, Ki Ha; Becker, Alex
2003-01-15
Non-invasive, high resolution geophysical mapping of the shallow subsurface is necessary for delineation of buried hazardous wastes, detecting unexploded ordnance, verifying and monitoring of containment or moisture contents, and other environmental applications. Electromagnetic (EM) techniques can be used for this purpose since electrical conductivity and dielectric permittivity are representative of the subsurface media. Measurements in the EM frequency band between 1 and 100 MHz are very important for such applications, because the induction number of many targets is small and the ability to determine the subsurface distribution of both electrical properties is required. Earlier workers were successful in developing systems for detecting anomalous areas, but quantitative interpretation of the data was difficult. Accurate measurements are necessary, but difficult to achieve for high-resolution imaging of the subsurface. We are developing a broadband non-invasive method for accurately mapping the electrical conductivity and dielectric permittivity of the shallow subsurface using an EM impedance approach similar to the MT exploration technique. Electric and magnetic sensors were tested to ensure that stray EM scattering is minimized and the quality of the data collected with the high-frequency impedance (HFI) system is good enough to allow high-resolution, multi-dimensional imaging of hidden targets. Additional efforts are being made to modify and further develop existing sensors and transmitters to improve the imaging capability and data acquisition efficiency.
NASA Astrophysics Data System (ADS)
Hojas-Gascon, L.; Belward, A.; Eva, H.; Ceccherini, G.; Hagolle, O.; Garcia, J.; Cerutti, P.
2015-04-01
The forthcoming European Space Agency's Sentinel-2 mission promises to provide high (10 m) resolution optical data at higher temporal frequencies (5 day revisit with two operational satellites) than previously available. CNES, the French national space agency, launched a program in 2013, 'SPOT4 take 5', to simulate such a dataflow using the SPOT HRV sensor, which has similar spectral characteristics to the Sentinel sensor, but lower (20 m) spatial resolution. Such a data flow enables the analysis of the satellite images using temporal analysis, an approach previously restricted to lower spatial resolution sensors. We acquired 23 such images over Tanzania for the period from February to June 2013. The data were analysed with the aim of discriminating between different forest cover percentages for landscape units of 0.5 ha over a site characterised by deciduous intact and degraded forests. The SPOT data were processed by extracting temporal vegetation indices. We assessed the impact of the high acquisition rate with respect to the current rate of one image every 16 days. Validation data, giving the percentage of forest canopy cover in each land unit, were provided by very high resolution satellite data. Results show that using the full temporal series it is possible to discriminate between forest units with differences of 40% or more in tree cover. Classification errors fell exclusively into the adjacent forest canopy cover class of 20% or less. The analyses show that forest mapping and degradation monitoring will be substantially improved with the Sentinel-2 program.
Holographic leaky-wave metasurfaces for dual-sensor imaging.
Li, Yun Bo; Li, Lian Lin; Cai, Ben Geng; Cheng, Qiang; Cui, Tie Jun
2015-12-10
Metasurfaces have huge potential for developing new types of imaging systems due to their ability to control electromagnetic waves. Here, we propose a new method for dual-sensor imaging based on cross-like holographic leaky-wave metasurfaces which are composed of hybrid isotropic and anisotropic surface impedance textures. The holographic leaky-wave radiations are generated by special impedance modulations of surface waves excited by the sensor ports. For one independent sensor, the main leaky-wave radiation beam can be scanned by frequency in one-dimensional space, while the frequency scanning in the orthogonal spatial dimension is accomplished by the other sensor. Thus, for a probed object, the imaging plane can be illuminated adequately to obtain the two-dimensional backward scattered fields by the dual-sensor for reconstructing the object. The correlation of beams at different frequencies is very low thanks to the frequency-scanning beam performance, rather than random beam radiation controlled by frequency, and multiple illuminations with low correlation are very appropriate for a multi-mode imaging method with high resolution and noise robustness. Good reconstruction results are given to validate the proposed imaging method.
A review of potential image fusion methods for remote sensing-based irrigation management: Part II
USDA-ARS?s Scientific Manuscript database
Satellite-based sensors provide data at either greater spectral and coarser spatial resolutions, or lower spectral and finer spatial resolutions due to complementary spectral and spatial characteristics of optical sensor systems. In order to overcome this limitation, image fusion has been suggested ...
Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy
Cha, Jae Won; Ballesta, Jerome; So, Peter T.C.
2010-01-01
The imaging depth of two-photon excitation fluorescence microscopy is partly limited by the inhomogeneity of the refractive index in biological specimens. This inhomogeneity results in a distortion of the wavefront of the excitation light. This wavefront distortion results in image resolution degradation and lower signal level. Using an adaptive optics system consisting of a Shack-Hartmann wavefront sensor and a deformable mirror, wavefront distortion can be measured and corrected. With adaptive optics compensation, we demonstrate that the resolution and signal level can be better preserved at greater imaging depth in a variety of ex-vivo tissue specimens including mouse tongue muscle, heart muscle, and brain. However, for these highly scattering tissues, we find signal degradation due to scattering to be a more dominant factor than aberration. PMID:20799824
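The measurement step in such a system (turning Shack-Hartmann slope measurements back into a wavefront) is commonly done by zonal least-squares reconstruction. The sketch below uses forward finite differences on a synthetic aberrated wavefront; grid size and aberration are illustrative assumptions, not the paper's hardware.

```python
import numpy as np

def slope_matrix(n):
    """Matrix mapping a flattened n x n wavefront to its x and y
    forward-difference slopes (the Shack-Hartmann measurement model)."""
    rows = []
    for i in range(n):
        for j in range(n - 1):                 # x slopes: phi[i, j+1] - phi[i, j]
            r = np.zeros(n * n)
            r[i * n + j + 1], r[i * n + j] = 1.0, -1.0
            rows.append(r)
    for i in range(n - 1):
        for j in range(n):                     # y slopes: phi[i+1, j] - phi[i, j]
            r = np.zeros(n * n)
            r[(i + 1) * n + j], r[i * n + j] = 1.0, -1.0
            rows.append(r)
    return np.array(rows)

def reconstruct(sx, sy):
    """Zonal least-squares wavefront reconstruction; piston (mean) set to zero."""
    n = sx.shape[0]
    s = np.concatenate([sx.ravel(), sy.ravel()])
    phi, *_ = np.linalg.lstsq(slope_matrix(n), s, rcond=None)
    phi = phi.reshape(n, n)
    return phi - phi.mean()

n = 6
x, y = np.meshgrid(np.arange(n), np.arange(n))
truth = 0.1 * x**2 - 0.05 * x * y              # synthetic aberrated wavefront
sx = truth[:, 1:] - truth[:, :-1]
sy = truth[1:, :] - truth[:-1, :]
phi = reconstruct(sx, sy)
```

The recovered wavefront equals the truth up to the unmeasurable piston term, which is why the mean is subtracted.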
Mapping detailed 3D information onto high resolution SAR signatures
NASA Astrophysics Data System (ADS)
Anglberger, H.; Speck, R.
2017-05-01
Due to challenges in the visual interpretation of radar signatures or in the subsequent information extraction, a fusion with other data sources can be beneficial. The most accurate basis for a fusion of any kind of remote sensing data is the mapping of the acquired 2D image space onto the true 3D geometry of the scenery. In the case of radar images this is a challenging task because the coordinate system is based on the measured range, which causes ambiguous regions due to layover effects. This paper describes a method that accurately maps the detailed 3D information of a scene to the slant-range-based coordinate system of imaging radars. Due to this mapping, all the contributing geometrical parts of one resolution cell can be determined in 3D space. The proposed method is highly efficient, because computationally expensive operations can be performed directly on graphics card hardware. The described approach builds a perfect basis for sophisticated methods to extract data from multiple complementary sensors, such as radar and optical images, especially because true 3D information of whole cities will be available in the near future. The performance of the developed methods will be demonstrated with high resolution radar data acquired by the spaceborne SAR sensor TerraSAR-X.
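The geometric core of this mapping can be sketched very simply: each 3D scene point is assigned to a slant-range bin by its distance to the sensor, and points falling into the same bin share one resolution cell (layover). All positions and bin parameters below are hypothetical illustration values.

```python
import numpy as np

def slant_range_bin(points, sensor_pos, r0, dr):
    """Map 3-D scene points into slant-range bins of an imaging radar.
    Points landing in the same bin contribute to the same resolution
    cell (the layover ambiguity). Returns one bin index per point."""
    ranges = np.linalg.norm(points - sensor_pos, axis=1)  # sensor-to-point distance
    return ((ranges - r0) / dr).astype(int)               # r0: near range, dr: bin size

sensor = np.array([0.0, 0.0, 500.0])        # hypothetical sensor position (m)
pts = np.array([[100.0, 0.0, 0.0],          # ground point
                [80.0, 0.0, 30.0],          # hypothetical building roof
                [200.0, 0.0, 0.0]])         # farther ground point
bins = slant_range_bin(pts, sensor, r0=400.0, dr=5.0)
```

Inverting this mapping per bin yields, for each resolution cell, the set of 3D surface patches that contributed to it.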
Tactile surface classification for limbed robots using a pressure sensitive robot skin.
Shill, Jacob J; Collins, Emmanuel G; Coyle, Eric; Clark, Jonathan
2015-02-02
This paper describes an approach to terrain identification based on pressure images generated through direct surface contact using a robot skin constructed around a high-resolution pressure sensing array. Terrain signatures for classification are formulated from the magnitude frequency responses of the pressure images. The initial experimental results for statically obtained images show that the approach yields classification accuracies [Formula: see text]. The methodology is extended to accommodate the dynamic pressure images anticipated when a robot is walking or running. Experiments with a one-legged hopping robot yield similar identification accuracies [Formula: see text]. In addition, the accuracies are independent with respect to changing robot dynamics (i.e., when using different leg gaits). The paper further shows that the high-resolution capabilities of the sensor enable similarly textured surfaces to be distinguished. A correcting filter is developed to accommodate failures or faults that inevitably occur within the sensing array with continued use. Experimental results show that using the correcting filter can extend the effective operational lifespan of a high-resolution sensing array by over 6x in the presence of sensor damage. The results presented suggest this methodology can be extended to autonomous field robots, providing a robot with crucial information about the environment that can be used to aid stable and efficient mobility over rough and varying terrains.
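The signature idea (classify terrains by the magnitude frequency response of a pressure image) can be sketched with a radially averaged 2D FFT magnitude and nearest-neighbour matching. The "smooth" and "rough" surfaces below are synthetic assumptions, and the classifier is deliberately minimal.

```python
import numpy as np

def terrain_signature(pressure_img):
    """Radially averaged FFT magnitude of a pressure image, normalized,
    as a compact frequency-domain terrain signature."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(pressure_img)))
    h, w = mag.shape
    yy, xx = np.indices(mag.shape)
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)     # integer radius per pixel
    counts = np.maximum(np.bincount(r.ravel()), 1)
    feat = np.bincount(r.ravel(), weights=mag.ravel()) / counts
    return feat / np.linalg.norm(feat)

def classify(sig, library):
    """Nearest-neighbour match against stored terrain signatures."""
    return min(library, key=lambda name: np.linalg.norm(library[name] - sig))

rng = np.random.default_rng(3)
smooth = rng.random((16, 16))
smooth = (smooth + np.roll(smooth, 1, 0) + np.roll(smooth, 1, 1)) / 3   # low-pass texture
rough = rng.random((16, 16))                                            # white-noise texture
library = {"smooth": terrain_signature(smooth), "rough": terrain_signature(rough)}
label = classify(terrain_signature(rough + 0.01 * rng.random((16, 16))), library)
```

A real system would train the library from many labeled contact images per terrain and use a stronger classifier.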
Giardino, Claudia; Bresciani, Mariano; Cazzaniga, Ilaria; Schenk, Karin; Rieger, Patrizia; Braga, Federica; Matta, Erica; Brando, Vittorio E
2014-12-15
In this study we evaluate the capabilities of three satellite sensors for assessing water composition and bottom depth in Lake Garda, Italy. A consistent physics-based processing chain was applied to Moderate Resolution Imaging Spectroradiometer (MODIS), Landsat-8 Operational Land Imager (OLI) and RapidEye. Images gathered on 10 June 2014 were corrected for the atmospheric effects with the 6SV code. The computed remote sensing reflectance (Rrs) from MODIS and OLI were converted into water quality parameters by adopting a spectral inversion procedure based on a bio-optical model calibrated with optical properties of the lake. The same spectral inversion procedure was applied to RapidEye and to OLI data to map bottom depth. In situ measurements of Rrs and of concentrations of water quality parameters collected in five locations were used to evaluate the models. The bottom depth maps from OLI and RapidEye showed similar gradients up to 7 m (r = 0.72). The results indicate that: (1) the spatial and radiometric resolutions of OLI enabled mapping water constituents and bottom properties; (2) MODIS was appropriate for assessing water quality in the pelagic areas at a coarser spatial resolution; and (3) RapidEye had the capability to retrieve bottom depth at high spatial resolution. Future work should evaluate the performance of the three sensors in different bio-optical conditions.
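The spectral inversion procedure mentioned above can be sketched as a lookup-table search: a forward bio-optical model is evaluated over a parameter grid and the parameter minimizing the spectral misfit to the observed Rrs is retained. The one-parameter forward model below is entirely hypothetical, not the lake-calibrated model used in the study.

```python
import numpy as np

def invert_rrs(rrs_obs, forward, grid):
    """Lookup-table spectral inversion: pick the parameter whose forward-modelled
    Rrs spectrum best matches the observation (least-squares misfit)."""
    best, best_err = None, np.inf
    for p in grid:
        err = np.linalg.norm(forward(p) - rrs_obs)
        if err < best_err:
            best, best_err = p, err
    return best, best_err

# Hypothetical one-parameter forward model: Rrs shape driven by a
# chlorophyll-like parameter (illustration only).
wavelengths = np.array([443.0, 490.0, 560.0, 665.0])
def forward(chl):
    return 0.01 * np.exp(-chl * (wavelengths - 560.0) ** 2 / 1e5)

rrs_obs = forward(2.0)                       # synthetic "observed" spectrum
grid = np.linspace(0.1, 5.0, 50)
chl_hat, err = invert_rrs(rrs_obs, forward, grid)
```

Real inversions optimize several constituents (chlorophyll, suspended matter, CDOM, bottom depth) at once, typically with a continuous optimizer rather than a grid.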
NASA Astrophysics Data System (ADS)
Yeom, J. M.
2017-12-01
The recently developed Korea Multi-Purpose Satellite-3A (KOMPSAT-3A), a continuation of the KOMPSAT-1, 2 and 3 earth observation satellite (EOS) programs of the Korea Aerospace Research Institute (KARI), was launched on 25 March 2015 on a Dnepr-1 launch vehicle from the Jasny Dombarovsky site in Russia. After launch, KARI performed in-orbit tests (IOT), including radiometric calibration, for 6 months from 14 Apr. to 4 Sep. 2015. KOMPSAT-3A is equipped with two distinctive sensors: one is a high resolution multispectral optical sensor, namely the Advanced Earth Image Sensor System-A (AEISS-A), and the other is the Scanner Infrared Imaging System (SIIS). In this study, we focused on the radiometric calibration of AEISS-A. The multispectral bands of AEISS-A cover three visible regions, blue (450-520 nm), green (520-600 nm) and red (630-690 nm), and one near infrared region (760-900 nm) with a 2.0 m spatial resolution at nadir, whereas the panchromatic imagery (450-900 nm) has a 0.5 m resolution. The spectral response functions are the same as those of the KOMPSAT-3 multispectral and panchromatic bands, but the spatial resolutions are improved. The main mission of KOMPSAT-3A is to provide data for Geographical Information System (GIS) applications in environmental, agricultural, and oceanographic sciences, as well as natural hazard monitoring.
Downscaling of Remotely Sensed Land Surface Temperature with multi-sensor based products
NASA Astrophysics Data System (ADS)
Jeong, J.; Baik, J.; Choi, M.
2016-12-01
Remotely sensed satellite data provide a bird's eye view, which allows us to understand the spatiotemporal behavior of hydrologic variables at global scale. In particular, geostationary satellites continuously observing specific regions are useful for monitoring the fluctuations of hydrologic variables as well as meteorological factors. However, there are still problems regarding spatial resolution, namely whether fine-scale land cover can be represented with the spatial resolution of the satellite sensor, especially in areas of complex topography. To solve these problems, many researchers have been trying to establish relationships among various hydrological factors and to combine images from multiple sensors to downscale land surface products. One geostationary satellite, the Communication, Ocean and Meteorological Satellite (COMS), carries a Meteorological Imager (MI) and a Geostationary Ocean Color Imager (GOCI). MI, performing the meteorological mission, produces Rainfall Intensity (RI), Land Surface Temperature (LST), and many other products every 15 minutes. Even though it has high temporal resolution, the low spatial resolution of MI data is treated as a major research problem in many studies. This study suggests a methodology to downscale 4 km LST datasets derived from MI to a finer resolution (500 m) by using GOCI datasets in Northeast Asia. The Normalized Difference Vegetation Index (NDVI), recognized as a variable with a significant relationship to LST, is chosen to estimate LST at finer resolution. Each pixel of NDVI and LST is separated according to land cover provided by the MODerate resolution Imaging Spectroradiometer (MODIS) to achieve a more accurate relationship. The downscaled LST is compared with LST observed from the Automated Synoptic Observing System (ASOS) to assess its accuracy. The downscaled LST results of this study, coupled with the advantages of geostationary satellites, can be applied to observe hydrologic processes efficiently.
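NDVI-based LST downscaling of the kind described above is often implemented in the spirit of the TsHARP scheme: fit LST against NDVI at the coarse scale, predict with fine-scale NDVI, and add back the coarse-scale residual. This is a generic sketch on synthetic data, not the study's exact per-land-cover regression.

```python
import numpy as np

def downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine):
    """TsHARP-style sharpening: coarse-scale linear fit LST ~ a + b*NDVI,
    prediction with fine NDVI, plus the replicated coarse-scale residual."""
    b, a = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    residual = lst_coarse - (a + b * ndvi_coarse)          # model error, coarse grid
    f = ndvi_fine.shape[0] // ndvi_coarse.shape[0]
    residual_fine = np.kron(residual, np.ones((f, f)))     # replicate to fine grid
    return a + b * ndvi_fine + residual_fine

rng = np.random.default_rng(4)
ndvi_fine = rng.uniform(0.1, 0.8, (8, 8))                  # synthetic 500 m-like NDVI
ndvi_coarse = ndvi_fine.reshape(4, 2, 4, 2).mean(axis=(1, 3))
lst_coarse = 320.0 - 20.0 * ndvi_coarse                    # cooler where vegetation is denser
lst_fine = downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine)
```

Adding the residual back guarantees that re-aggregating the sharpened LST reproduces the original coarse field.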
NASA Astrophysics Data System (ADS)
Kingfield, D.; de Beurs, K.
2014-12-01
It has been demonstrated through various case studies that multispectral satellite imagery can be utilized in the identification of damage caused by a tornado through the change detection process. This process involves differencing the returned surface reflectance between two images and is often summarized through a variety of ratio-based vegetation indices (VIs). Land cover type plays a large contributing role in the change detection process as the reflectance properties of vegetation can vary based on several factors (e.g. species, greenness, density). Consequently, this allows for a variable magnitude of loss, making certain land cover regimes less reliable in the damage identification process. Furthermore, the tradeoff between sensor resolution and orbital return period may also play a role in the ability to detect catastrophic loss. Moderate resolution imagery (e.g. from the Moderate Resolution Imaging Spectroradiometer (MODIS)) provides relatively coarse surface detail with a higher update rate, which could hinder the identification of small regions that underwent a dynamic change. Alternatively, imagery with higher spatial resolution (e.g. Landsat) has a longer temporal return period between successive images, which could result in natural recovery causing an underestimate of the absolute magnitude of damage incurred. This study evaluates the role of land cover type and sensor resolution for four high-end (EF3+) tornado events occurring in four different land cover groups (agriculture, forest, grassland, urban) in the spring season. The closest successive clear images from both Landsat 5 and MODIS are quality controlled for each case. Transects of surface reflectance across a homogeneous land cover type both inside and outside the damage swath are extracted. These metrics are synthesized through the calculation of six different VIs to rank the calculated change metrics by land cover type, sensor resolution and VI.
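The VI-differencing step at the heart of this change detection can be sketched with NDVI on two synthetic dates: pixels whose NDVI drops by more than a threshold are flagged as damaged. Band values and threshold below are illustrative assumptions.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-9)     # small epsilon avoids divide-by-zero

def damage_mask(ndvi_pre, ndvi_post, threshold=0.2):
    """Flag pixels whose NDVI dropped by more than `threshold` between dates."""
    return (ndvi_pre - ndvi_post) > threshold

red_pre = np.full((4, 4), 0.05)
nir_pre = np.full((4, 4), 0.45)                  # uniform healthy canopy
red_post = red_pre.copy()
nir_post = nir_pre.copy()
red_post[1:3, 1:3] = 0.25
nir_post[1:3, 1:3] = 0.30                        # simulated 2x2 damage swath
mask = damage_mask(ndvi(red_pre, nir_pre), ndvi(red_post, nir_post))
```

The study's comparison amounts to computing such masks per VI, land cover class, and sensor, and ranking the resulting change magnitudes.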
Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.
Jung, Hyung-Sup; Park, Sung-Whan
2014-12-18
Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR one has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of PAN image and the temperature information of TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.
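A common way to realize the trade-off this abstract describes is high-pass modulation: upsample the TIR image, then inject the PAN image's high-frequency detail weighted by a scaling factor. This is a generic sketch with naive block upsampling and an arbitrary factor `k`, not the paper's optimal scaling factor.

```python
import numpy as np

def fuse_pan_tir(tir, pan, k=0.3):
    """High-pass modulation fusion: add scaled PAN spatial detail to an
    upsampled TIR image; k trades spatial detail against thermal fidelity."""
    f = pan.shape[0] // tir.shape[0]
    tir_up = np.kron(tir, np.ones((f, f)))                     # naive TIR upsampling
    n0, n1 = tir.shape
    pan_low = np.kron(pan.reshape(n0, f, n1, f).mean(axis=(1, 3)),
                      np.ones((f, f)))                         # PAN smoothed to TIR scale
    return tir_up + k * (pan - pan_low)                        # inject high-pass detail

rng = np.random.default_rng(5)
tir = 300.0 + rng.random((4, 4))       # synthetic coarse brightness temperatures
pan = rng.random((8, 8))               # synthetic fine-scale panchromatic image
fused0 = fuse_pan_tir(tir, pan, k=0.0)
fused = fuse_pan_tir(tir, pan, k=0.5)
```

Because the injected detail has zero mean within each TIR pixel, block-averaging the fused image returns the original temperatures for any `k`.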
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside the protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as those required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8-14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets.
The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve in a wide range of homeland security applications, as well as to serve the Army in tasks of improved situational awareness (SA) in defensive and offensive operations, and as a sensor node in tactical Intelligence, Surveillance, and Reconnaissance (ISR). The novel ViperView™ high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
Thin polymer etalon arrays for high-resolution photoacoustic imaging
Hou, Yang; Huang, Sheng-Wen; Ashkenazi, Shai; Witte, Russell; O’Donnell, Matthew
2009-01-01
Thin polymer etalons are demonstrated as high-frequency ultrasound sensors for three-dimensional (3-D) high-resolution photoacoustic imaging. The etalon, a Fabry-Perot optical resonator, consists of a thin polymer slab sandwiched between two gold layers. It is probed with a scanning continuous-wave (CW) laser for ultrasound array detection. Detection bandwidth of a 20-μm-diam array element exceeds 50 MHz, and the ultrasound sensitivity is comparable to polyvinylidene fluoride (PVDF) equivalents of similar size. In a typical photoacoustic imaging setup, a pulsed laser beam illuminates the imaging target, where optical energy is absorbed and acoustic waves are generated through the thermoelastic effect. An ultrasound detection array is formed by scanning the probing laser beam on the etalon surface in either a 1-D or a 2-D configuration, which produces 2-D or 3-D images, respectively. Axial and lateral resolutions have been demonstrated to be better than 20 μm. Detailed characterizations of the optical and acoustical properties of the etalon, as well as photoacoustic imaging results, suggest that thin polymer etalon arrays can be used as ultrasound detectors for 3-D high-resolution photoacoustic imaging applications. PMID:19123679
NASA Astrophysics Data System (ADS)
Limonova, Elena; Tropin, Daniil; Savelyev, Boris; Mamay, Igor; Nikolaev, Dmitry
2018-04-01
In this paper we describe a stitching protocol that produces high-resolution images of long monochromatic objects with periodic structure. The protocol can be used for long documents or for human-made objects in satellite images of uninhabited regions such as the Arctic. The length of such objects can be considerable, while modern camera sensors have limited resolution and cannot provide a good enough image of the whole object for further processing, e.g., use in an OCR system. The idea of the proposed method is to acquire a video stream containing the full object in high resolution and to stitch the frames together. We expect the scanned object to have straight boundaries and periodic structure, which allows us to regularize the stitching problem and adapt the algorithm to the limited computational power of mobile and embedded CPUs. With the help of the detected boundaries and structure we estimate the homography between frames and use this information to reduce the complexity of stitching. We demonstrate our algorithm on a mobile device and show an image processing speed of 2 fps on a Samsung Exynos 5422 processor.
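The frame-to-frame registration step can be illustrated with phase correlation, a standard FFT-based estimator of the dominant translation between overlapping frames. This is a simplification of the homography estimation described above (under the straight-boundary/periodic-structure assumptions, the inter-frame motion is dominated by translation); the function name is illustrative.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate (dy, dx) such that frame_b ~ np.roll(frame_a, (dy, dx), axis=(0, 1)),
    via phase correlation on the cross-power spectrum."""
    A = np.fft.fft2(frame_a)
    B = np.fft.fft2(frame_b)
    cross = B * np.conj(A)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap large positive indices around to negative shifts
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return int(dy), int(dx)
```

A real stitching pipeline would refine this integer shift to sub-pixel accuracy and fold it into the full homography, but the FFT step is what keeps the cost low enough for embedded CPUs.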
Adaptive optics with pupil tracking for high resolution retinal imaging
Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris
2012-01-01
Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577
Adaptive optics with pupil tracking for high resolution retinal imaging.
Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris
2012-02-01
Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.
Two micron pore size MCP-based image intensifiers
NASA Astrophysics Data System (ADS)
Glesener, John; Estrera, Joseph
2010-02-01
Image intensifiers (I2) have many advantages as detectors. They offer single-photon sensitivity in an imaging format, they are light in weight, and analog I2 systems can operate for hours on a single AA battery. Their light output is matched to the peak color sensitivity of the human eye. Until recent developments in CMOS sensors, they were also among the highest resolution sensors available. The closest all-solid-state solution, the Texas Instruments Impactron chip, comes in a 1-megapixel format. Depending on the level of integration, an Impactron-based system can consume 20 to 40 watts in a system configuration. In further investing in I2 technology, L-3 EOS determined that increasing I2 resolution merited a high priority. Increased I2 resolution offers the system user two desirable options: 1) increased detection and identification ranges while maintaining the field of view (FOV), or 2) an increased FOV while maintaining the original system resolution. One of the areas where an investment in resolution is being made is the microchannel plate (MCP). Incorporating a 2-micron MCP into an image tube has the potential to increase the system resolution of currently fielded systems. Both inverting and non-inverting configurations are being evaluated. Inverting tubes are being characterized in night vision goggles (NVGs) and sights. The non-inverting 2-micron tube is being characterized for high-resolution I2CMOS camera applications. Preliminary measurements show an increase in MTF over a standard 5-micron pore size, 6-micron pitch plate. Current results will be presented.
Development of plenoptic infrared camera using low dimensional material based photodetectors
NASA Astrophysics Data System (ADS)
Chen, Liangliang
Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns and are widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence, and high cost, while nanotechnology based on low-dimensional materials such as the carbon nanotube (CNT) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed, and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for the fundamental understanding of CNT photoresponse processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate isolated the sensor from background noise, and a top parylene packing blocked humidity and other environmental factors. At the same time, the fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized by digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make a nano-sensor IR camera possible. To explore more of the infrared light field, we employ compressive sensing algorithms in light field sampling, 3-D imaging, and compressive video sensing.
The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera, and the temporal information of video streams, is extracted and expressed in a compressive representation. Computational algorithms are then applied to reconstruct images beyond 2-D static information. Super-resolution signal processing is subsequently used to enhance and improve the spatial resolution of the images. The whole camera system provides deeply detailed content for infrared spectrum sensing.
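The compressive-sensing step can be sketched with a generic sparse-recovery routine. Orthogonal matching pursuit below is a standard algorithm and stands in for whatever reconstruction the camera system actually uses; the sensing matrix and sparsity level are illustrative.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedily recover a sparse x from
    compressive measurements y = Phi @ x."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(sparsity):
        # pick the column of Phi most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        sub = Phi[:, support]
        # re-fit all selected coefficients by least squares
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat
```

For a scene that is sparse in some basis, far fewer measurements than pixels suffice, which is what makes single-detector IR light-field sampling attractive.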
Sugimura, Daisuke; Kobayashi, Suguru; Hamamoto, Takayuki
2017-11-01
Light field imaging is an emerging technique that is employed to realize various applications such as multi-viewpoint imaging, focal-point changing, and depth estimation. In this paper, we propose a concept of a dual-resolution light field imaging system to synthesize super-resolved multi-viewpoint images. The key novelty of this study is the use of an organic photoelectric conversion film (OPCF), a device that converts the spectral information of incoming light within a certain wavelength range into an electrical signal (pixel value), for light field imaging. In our imaging system, we place the OPCF, which has green spectral sensitivity, onto the micro-lens array of a conventional light field camera. The OPCF allows us to acquire the green spectral information only at the center viewpoint, at the full resolution of the image sensor. In contrast, the optical system of the light field camera in our imaging system captures the other spectral information (red and blue) at multiple viewpoints (sub-aperture images), but at low resolution. Thus, our dual-resolution light field imaging system enables us to simultaneously capture information about the target scene at a high spatial resolution as well as the direction information of the incoming light. By exploiting these advantages of our imaging system, our proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms previous methods.
Data Processing for the Space-Based Desis Hyperspectral Sensor
NASA Astrophysics Data System (ADS)
Carmona, E.; Avbelj, J.; Alonso, K.; Bachmann, M.; Cerra, D.; Eckardt, A.; Gerasch, B.; Graham, L.; Günther, B.; Heiden, U.; Kerr, G.; Knodt, U.; Krutz, D.; Krawcyk, H.; Makarau, A.; Miller, R.; Müller, R.; Perkins, R.; Walter, I.
2017-05-01
The German Aerospace Center (DLR) and Teledyne Brown Engineering (TBE) have established a collaboration to develop and operate a new space-based hyperspectral sensor, the DLR Earth Sensing Imaging Spectrometer (DESIS). DESIS will provide space-based hyperspectral data in the VNIR with high spectral resolution and near-global coverage. While TBE provides the platform and infrastructure for operation of the DESIS instrument on the International Space Station, DLR is responsible for providing the instrument and the processing software. The DESIS instrument is equipped with characteristics novel for an imaging spectrometer, such as high spectral resolution (2.55 nm), a mirror pointing unit, and a CMOS sensor operated in rolling-shutter mode. We present here an overview of the DESIS instrument and its processing chain, emphasizing the effect of these novel characteristics on the data processing and final data products. Furthermore, we analyse in more detail the effect of the rolling shutter on DESIS data and possible mitigation/correction strategies.
NASA Astrophysics Data System (ADS)
Kendrick, Stephen E.; Harwit, Alex; Kaplan, Michael; Smythe, William D.
2007-09-01
An MWIR TDI (Time Delay and Integration) Imager and Spectrometer (MTIS) instrument for characterizing from orbit the moons of Jupiter and Saturn is proposed. Novel to this instrument is the planned implementation of a digital TDI detector array and an innovative imaging/spectroscopic architecture. Digital TDI enables a higher SNR for high spatial resolution surface mapping of Titan and Enceladus and for improved spectral discrimination and resolution at Europa. The MTIS imaging/spectroscopic architecture combines a high spatial resolution coarse wavelength resolution imaging spectrometer with a hyperspectral sensor to spectrally decompose a portion of the data adjacent to the data sampled in the imaging spectrometer. The MTIS instrument thus maps with high spatial resolution a planetary object while spectrally decomposing enough of the data that identification of the constituent materials is highly likely. Additionally, digital TDI systems have the ability to enable the rejection of radiation induced spikes in high radiation environments (Europa) and the ability to image in low light levels (Titan and Enceladus). The ability to image moving objects that might be missed utilizing a conventional TDI system is an added advantage and is particularly important for characterizing atmospheric effects and separating atmospheric and surface components. This can be accomplished with on-orbit processing or collecting and returning individual non co-added frames.
NASA Astrophysics Data System (ADS)
Goss, Tristan M.
2016-05-01
With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now, with 1280x1024 pixel format IR detector arrays becoming readily available, designers of thermal imager systems face new challenges as pixel sizes shrink and the demand for, and applications of, High Definition (HD) thermal imaging sensors increase. In many instances, upgrading an existing under-sampled SD thermal imaging sensor into a more optimally sampled or oversampled HD sensor is more cost-effective, and faster to market, than designing and developing a completely new sensor. This paper presents the analysis and rationale behind the selection of the best-suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher-performing HD thermal imaging sensor. Several commercially available and soon-to-be commercially available HD small-pixel IR detector options are included in the analysis and considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise, and resolution is analyzed, and the improved range performance is predicted. Furthermore, with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors are operated at higher temperatures than their SD predecessors. Therefore, as an additional constraint and design goal, the feasibility of achieving upgraded performance without any increase in the size, weight, and power consumption of the thermal imager is discussed herein.
NASA Astrophysics Data System (ADS)
Materne, A.; Virmontois, C.; Bardoux, A.; Gimenez, T.; Biffi, J. M.; Laubier, D.; Delvit, J. M.
2014-10-01
This paper describes the activities managed by CNES (the French National Space Agency) for the development of focal planes for the next generation of optical high resolution Earth observation satellites in low sun-synchronous orbit. CNES has launched a new programme named OTOS to increase the technology readiness level (TRL) of several key technologies for high resolution Earth observation satellites. The OTOS programme includes several actions in the field of detection and focal planes: a new generation of CCD and CMOS image sensors, updated analog front-end electronics, and analog-to-digital converters. The main features that must be achieved on focal planes for high resolution Earth observation are readout speed, signal-to-noise ratio at low light level, anti-blooming efficiency, geometric stability, MTF, and line-of-sight stability. The next steps targeted are presented in comparison to the in-flight measured performance of the PLEIADES satellites launched in 2011 and 2012. The high resolution panchromatic channel is still based upon backside-illuminated (BSI) CCDs operated in Time Delay Integration (TDI) mode. For the multispectral channel, the main evolution consists in moving to TDI mode, and the competition is open with the concurrent development of a CCD solution versus a CMOS solution. New CCDs will be based upon several process blocks under evaluation on the e2v 6-inch BSI wafer manufacturing line. The OTOS strategy for CMOS image sensors investigates, on the one hand, custom TDI solutions following an approach similar to CCDs and, on the other hand, ways to take advantage of the existing performance of off-the-shelf 2D-array CMOS image sensors. We present the characterization results obtained from test vehicles designed for custom TDI operation on several CIS technologies, and results obtained before and after irradiation on snapshot 2D arrays from the CMOSIS CMV family.
Full-field acoustomammography using an acousto-optic sensor.
Sandhu, J S; Schmidt, R A; La Rivière, P J
2009-06-01
In this Letter the authors introduce a wide-field transmission ultrasound approach to breast imaging based on the use of a large-area acousto-optic (AO) sensor. Accompanied by a suitable acoustic source, such a detector could be mounted on a traditional mammography system and provide a mammography-like ultrasound projection image of the compressed breast in registration with the x-ray mammogram. The authors call the approach acoustography. The hope is that this additional information could improve the sensitivity and specificity of screening mammography. The AO sensor converts ultrasound directly into a visual image by virtue of the acousto-optic effect of the liquid crystal layer contained in the sensor. The image is captured with a digital video camera for processing, analysis, and storage. In this Letter, the authors perform a geometrical resolution analysis and also present images of a multimodality breast phantom imaged with both mammography and acoustography to demonstrate the feasibility of the approach. The geometric resolution analysis suggests that the technique could readily detect tumors 3 mm in diameter using 8.5 MHz ultrasound, with smaller tumors detectable with higher frequency ultrasound, though depth penetration might then become a limiting factor. The preliminary phantom images show high contrast and compare favorably to digital mammograms of the same phantom. The authors have introduced and established, through phantom imaging, the feasibility of a full-field transmission ultrasound detector for breast imaging based on the use of a large-area AO sensor. Of course, variations in attenuation of connective, glandular, and fatty tissues will lead to images with a more cluttered anatomical background than those of the phantom imaged here. Acoustic coupling to the mammographically compressed breast, particularly at the margins, will also have to be addressed.
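As a rough sanity check on the quoted numbers (the authors' geometric resolution analysis involves more than wavelength alone; the sound speed below is an assumed soft-tissue value), the acoustic wavelength at 8.5 MHz is far smaller than a 3 mm tumor:

```python
c_tissue = 1540.0        # m/s, assumed soft-tissue sound speed
f = 8.5e6                # Hz, ultrasound frequency from the study
wavelength = c_tissue / f   # ~0.18 mm, well below the 3 mm tumor diameter
```

Higher frequencies shrink the wavelength further but increase tissue attenuation, which is the depth-penetration limit the authors note.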
Full-field acoustomammography using an acousto-optic sensor
Sandhu, J. S.; Schmidt, R. A.; La Rivière, P. J.
2009-01-01
In this Letter the authors introduce a wide-field transmission ultrasound approach to breast imaging based on the use of a large-area acousto-optic (AO) sensor. Accompanied by a suitable acoustic source, such a detector could be mounted on a traditional mammography system and provide a mammography-like ultrasound projection image of the compressed breast in registration with the x-ray mammogram. The authors call the approach acoustography. The hope is that this additional information could improve the sensitivity and specificity of screening mammography. The AO sensor converts ultrasound directly into a visual image by virtue of the acousto-optic effect of the liquid crystal layer contained in the sensor. The image is captured with a digital video camera for processing, analysis, and storage. In this Letter, the authors perform a geometrical resolution analysis and also present images of a multimodality breast phantom imaged with both mammography and acoustography to demonstrate the feasibility of the approach. The geometric resolution analysis suggests that the technique could readily detect tumors 3 mm in diameter using 8.5 MHz ultrasound, with smaller tumors detectable with higher frequency ultrasound, though depth penetration might then become a limiting factor. The preliminary phantom images show high contrast and compare favorably to digital mammograms of the same phantom. The authors have introduced and established, through phantom imaging, the feasibility of a full-field transmission ultrasound detector for breast imaging based on the use of a large-area AO sensor. Of course, variations in attenuation of connective, glandular, and fatty tissues will lead to images with a more cluttered anatomical background than those of the phantom imaged here. Acoustic coupling to the mammographically compressed breast, particularly at the margins, will also have to be addressed. PMID:19610321
Piezo-based, high dynamic range, wide bandwidth steering system for optical applications
NASA Astrophysics Data System (ADS)
Karasikov, Nir; Peled, Gal; Yasinov, Roman; Feinstein, Alan
2017-05-01
Piezoelectric motors and actuators are characterized by direct drive, fast response, high positioning resolution, and high mechanical power density. These properties are beneficial for optical devices such as gimbals, optical image stabilizers, and mirror angular positioners. The range of applications includes sensor pointing systems, image stabilization, laser steering, and more. This paper reports on the construction, properties, and operation of three types of piezo-based building blocks for optical steering applications: a small gimbal and a two-axis OIS (Optical Image Stabilization) mechanism, both based on piezoelectric motors, and a flexure-assisted piezoelectric actuator for mirror angular positioning. The gimbal weighs less than 190 grams, has a wide angular span (solid angle of > 2π), and allows for an 80 micro-radian stabilization with a stabilization frequency up to 25 Hz. The OIS is an X-Y, closed-loop platform having a lateral positioning resolution better than 1 μm, a stabilization frequency up to 25 Hz, and a travel of +/-2 mm. It is used for laser steering or positioning of the image sensor, based on signals from a MEMS gyro sensor. The mirror positioner is based on three piezoelectric actuation axes for tip/tilt (each providing a 50 μm motion range), has a positioning resolution of 10 nm, and is capable of a 1000 Hz response. A combination of the gimbal with the mirror positioner or the OIS stage is explored by simulations, indicating a <10 micro-radian stabilization capability under substantial perturbation. Simulations and experimental results are presented for a combined device providing both a wide steering-angle range and high bandwidth.
Diffraction-Limited Plenoptic Imaging with Correlated Light
NASA Astrophysics Data System (ADS)
Pepe, Francesco V.; Di Lena, Francesco; Mazzilli, Aldo; Edrei, Eitan; Garuccio, Augusto; Scarcelli, Giuliano; D'Angelo, Milena
2017-12-01
Traditional optical imaging faces an unavoidable trade-off between resolution and depth of field (DOF). To increase resolution, high numerical apertures (NAs) are needed, but the associated large angular uncertainty results in a limited range of depths that can be put in sharp focus. Plenoptic imaging was introduced a few years ago to remedy this trade-off. To this aim, plenoptic imaging reconstructs the path of light rays from the lens to the sensor. However, the improvement offered by standard plenoptic imaging is practical and not fundamental: The increased DOF leads to a proportional reduction of the resolution well above the diffraction limit imposed by the lens NA. In this Letter, we demonstrate that correlation measurements enable pushing plenoptic imaging to its fundamental limits of both resolution and DOF. Namely, we demonstrate maintaining the imaging resolution at the diffraction limit while increasing the depth of field by a factor of 7. Our results represent the theoretical and experimental basis for the effective development of promising applications of plenoptic imaging.
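The resolution/DOF trade-off can be made concrete with the standard scalar-diffraction estimates (prefactor conventions vary; this is an illustration, not the Letter's derivation): resolution improves linearly with NA while DOF shrinks quadratically.

```python
def diffraction_limits(wavelength, na):
    """Abbe transverse resolution and axial diffraction depth of field
    for numerical aperture `na`. Raising na sharpens resolution linearly
    but shrinks DOF quadratically -- the trade-off plenoptic imaging targets."""
    resolution = wavelength / (2.0 * na)   # Abbe limit
    dof = wavelength / na ** 2             # axial diffraction DOF
    return resolution, dof

res, dof = diffraction_limits(500e-9, 0.5)   # green light, NA 0.5
```

Correlation plenoptic imaging, per the abstract, keeps `res` at this diffraction limit while extending the usable depth range several-fold.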
Diffraction-Limited Plenoptic Imaging with Correlated Light.
Pepe, Francesco V; Di Lena, Francesco; Mazzilli, Aldo; Edrei, Eitan; Garuccio, Augusto; Scarcelli, Giuliano; D'Angelo, Milena
2017-12-15
Traditional optical imaging faces an unavoidable trade-off between resolution and depth of field (DOF). To increase resolution, high numerical apertures (NAs) are needed, but the associated large angular uncertainty results in a limited range of depths that can be put in sharp focus. Plenoptic imaging was introduced a few years ago to remedy this trade-off. To this aim, plenoptic imaging reconstructs the path of light rays from the lens to the sensor. However, the improvement offered by standard plenoptic imaging is practical and not fundamental: The increased DOF leads to a proportional reduction of the resolution well above the diffraction limit imposed by the lens NA. In this Letter, we demonstrate that correlation measurements enable pushing plenoptic imaging to its fundamental limits of both resolution and DOF. Namely, we demonstrate maintaining the imaging resolution at the diffraction limit while increasing the depth of field by a factor of 7. Our results represent the theoretical and experimental basis for the effective development of promising applications of plenoptic imaging.
Performance modeling of terahertz (THz) and millimeter waves (mmW) pupil plane imaging
NASA Astrophysics Data System (ADS)
Mohammadian, Nafiseh; Furxhi, Orges; Zhang, Lei; Offermans, Peter; Ghazi, Galia; Driggers, Ronald
2018-05-01
Terahertz (THz) and millimeter-wave sensors are becoming more important in industrial, security, medical, and defense applications. A major problem in these sensing areas is the resolution, sensitivity, and visual acuity of the imaging systems. Several fundamental design parameters have significant effects on imaging performance. The performance of THz systems can be discussed in terms of two characteristics: sensitivity and spatial resolution. New approaches for the design and manufacture of THz imagers are a vital basis for developing future applications, and photonics solutions have been at the technological forefront in THz-band applications. A single scanning antenna does not provide reasonable resolution, sensitivity, or speed. An effective approach to imaging is placing high-performance antennas in a two-dimensional array to achieve higher radiation efficiency and higher resolution. Here, we present the performance modeling of a pupil plane imaging system to find the resolution and sensitivity efficiency of the imaging system.
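A first-order sizing number for such a pupil-plane imager is the Rayleigh-limited angular resolution of its aperture; the frequency and aperture values below are illustrative, not taken from the paper.

```python
C = 3.0e8  # speed of light, m/s

def angular_resolution(freq_hz, aperture_m):
    """Rayleigh criterion for a filled circular aperture, in radians."""
    return 1.22 * (C / freq_hz) / aperture_m

theta = angular_resolution(300e9, 0.10)   # 300 GHz, 10 cm aperture
```

At 300 GHz the wavelength is 1 mm, so a 10 cm aperture resolves about 12 milliradians, which is why resolution is such a central constraint in THz system design.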
Atomic-Scale Nuclear Spin Imaging Using Quantum-Assisted Sensors in Diamond
NASA Astrophysics Data System (ADS)
Ajoy, A.; Bissbort, U.; Lukin, M. D.; Walsworth, R. L.; Cappellaro, P.
2015-01-01
Nuclear spin imaging at the atomic level is essential for the understanding of fundamental biological phenomena and for applications such as drug discovery. The advent of novel nanoscale sensors promises to achieve the long-standing goal of single-protein, high spatial-resolution structure determination under ambient conditions. In particular, quantum sensors based on the spin-dependent photoluminescence of nitrogen-vacancy (NV) centers in diamond have recently been used to detect nanoscale ensembles of external nuclear spins. While NV sensitivity is approaching single-spin levels, extracting relevant information from a very complex structure is a further challenge since it requires not only the ability to sense the magnetic field of an isolated nuclear spin but also to achieve atomic-scale spatial resolution. Here, we propose a method that, by exploiting the coupling of the NV center to an intrinsic quantum memory associated with the nitrogen nuclear spin, can reach a tenfold improvement in spatial resolution, down to atomic scales. The spatial resolution enhancement is achieved through coherent control of the sensor spin, which creates a dynamic frequency filter selecting only a few nuclear spins at a time. We propose and analyze a protocol that would allow not only sensing individual spins in a complex biomolecule, but also unraveling couplings among them, thus elucidating local characteristics of the molecule structure.
Optimum Image Formation for Spaceborne Microwave Radiometer Products.
Long, David G; Brodzik, Mary J
2016-05-01
This paper considers some of the issues of radiometer brightness image formation and reconstruction for use in the NASA-sponsored Calibrated Passive Microwave Daily Equal-Area Scalable Earth Grid 2.0 Brightness Temperature Earth System Data Record project, which generates a multisensor multidecadal time series of high-resolution radiometer products designed to support climate studies. Two primary reconstruction algorithms are considered: the Backus-Gilbert (BG) approach and the radiometer form of the scatterometer image reconstruction (SIR) algorithm. These are compared with the conventional drop-in-the-bucket (DIB) gridded image formation approach. Tradeoff study results for the various algorithm options are presented to select optimum values for the grid resolution, the number of SIR iterations, and the BG gamma parameter. We find that although both approaches are effective in improving the spatial resolution of the surface brightness temperature estimates compared to DIB, SIR requires significantly less computation. The sensitivity of the reconstruction to the accuracy of the measurement spatial response function (MRF) is explored. The partial reconstruction of the methods can tolerate errors in the description of the sensor measurement response function, which simplifies the processing of data from historic sensors, for which the MRF is not known as accurately as it is for modern sensors. Simulation tradeoff results are confirmed using actual data.
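As a rough illustration of the conventional drop-in-the-bucket (DIB) gridding baseline mentioned in this abstract, a minimal NumPy sketch might look like the following; the function name and grid conventions are illustrative, not the project's actual processing code:

```python
import numpy as np

def drop_in_the_bucket(lats, lons, tb, grid_lat, grid_lon):
    """Grid brightness temperatures by averaging all measurements
    that fall into each cell (the conventional DIB approach)."""
    ny, nx = len(grid_lat) - 1, len(grid_lon) - 1
    acc = np.zeros((ny, nx))
    cnt = np.zeros((ny, nx))
    iy = np.digitize(lats, grid_lat) - 1
    ix = np.digitize(lons, grid_lon) - 1
    for y, x, t in zip(iy, ix, tb):
        if 0 <= y < ny and 0 <= x < nx:
            acc[y, x] += t
            cnt[y, x] += 1
    # cells with no measurements are left as NaN
    return np.divide(acc, cnt, out=np.full_like(acc, np.nan), where=cnt > 0)
```

Reconstruction methods such as BG or SIR instead invert the measurement spatial response function to sharpen the gridded product, at the cost of extra computation.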
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three-dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression; the third case used a three-dimensional extension of the same algorithm.
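The third approach (a full three-dimensional transform) can be sketched with a one-level separable Haar transform followed by hard thresholding; this is a toy stand-in for the wavelet coders studied in the paper, with the function names and the `keep` fraction purely illustrative:

```python
import numpy as np

def haar1d(x, axis):
    """One level of the 1-D Haar transform along a given axis
    (axis length must be even)."""
    x = np.moveaxis(x, axis, 0)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return np.moveaxis(np.concatenate([a, d]), 0, axis)

def compress_cube(cube, keep=0.1):
    """Three-dimensional transform-coding sketch: Haar along the two
    spatial axes and the spectral axis, then retain only roughly the
    largest `keep` fraction of coefficients (crude lossy compression)."""
    c = cube.astype(float)
    for ax in range(3):
        c = haar1d(c, ax)
    thresh = np.quantile(np.abs(c), 1 - keep)
    return np.where(np.abs(c) >= thresh, c, 0.0)
```

A real coder would quantize and entropy-code the surviving coefficients; the spectral Haar pass plays the role of the decorrelating spectral transform described above.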
NASA Astrophysics Data System (ADS)
Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus
2017-11-01
At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) was designed for international missions to the planet Mars. For more than three years, an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D capabilities and high resolution with multispectral data acquisition, and variable resolutions can be generated depending on the camera control settings. A high-end GPS/INS system, in combination with the multi-angle image information, yields precise, high-frequency orientation data for the acquired image lines. To handle these data, a completely automated photogrammetric processing system has been developed that allows the generation of multispectral 3D image products for large areas, with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.
AVIRIS calibration and application in coastal oceanic environments
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1992-01-01
The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) is a test-bed for future spacecraft sensors such as the High-Resolution Imaging Spectrometer and the Moderate-Resolution Imaging Spectrometers planned for the Earth Observing System. To use this sensor for ocean applications, the S/N was increased by spatial averaging of images. Post-flight recalibration was accomplished by using the in situ water-leaving radiance measured at flight time, modeling radiance transmission to the aircraft, and adding modeled atmospheric radiance to that value. The preflight calibration curve was then adjusted until aircraft and modeled total radiance values matched. Water-leaving radiance values from the recalibrated AVIRIS imagery were consistent with in situ data, supporting the validity of the approach. Imagery of the absorption coefficient at 415 nm and the backscattering coefficient at 671 nm was used to depict the dissolved and particulate constituents of an ebb-tidal estuarine plume on the east coast of Florida.
High spatial resolution LWIR hyperspectral sensor
NASA Astrophysics Data System (ADS)
Roberts, Carson B.; Bodkin, Andrew; Daly, James T.; Meola, Joseph
2015-06-01
Presented is a new hyperspectral imager design based on multiple slit scanning. This represents an innovation in the classic trade-off between speed and resolution. This LWIR design has been able to produce data-cubes at 3 times the rate of conventional single slit scan devices. The instrument has a built-in radiometric and spectral calibrator.
The application of Fresnel zone plate based projection in optofluidic microscopy.
Wu, Jigang; Cui, Xiquan; Lee, Lap Man; Yang, Changhuei
2008-09-29
Optofluidic microscopy (OFM) is a novel technique for low-cost, high-resolution on-chip microscopy imaging. In this paper we report the use of Fresnel zone plate (FZP) based projection in OFM as a cost-effective and compact means for projecting the transmission through an OFM's aperture array onto a sensor grid. We demonstrate this approach by employing a FZP (diameter = 255 μm, focal length = 800 μm) that has been patterned onto a glass slide to project the transmission from an array of apertures (diameter = 1 μm, separation = 10 μm) onto a CMOS sensor. We are able to resolve the contributions from 44 apertures on the sensor under the illumination from a HeNe laser (wavelength = 633 nm). The imaging quality of the FZP determines the effective field-of-view (related to the number of resolvable transmissions from apertures) but not the image resolution of such an OFM system, a key distinction from conventional microscope systems. We demonstrate the capability of the integrated system by flowing the protist Euglena gracilis across the aperture array microfluidically and performing OFM imaging of the samples.
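The zone-plate geometry quoted in this abstract can be checked against the standard FZP zone-radius relation r_n = sqrt(n·λ·f + (n·λ/2)²); the sketch below assumes that textbook formula together with the parameters given above (the function names are illustrative):

```python
import math

def fzp_zone_radius(n, wavelength, focal_length):
    """Radius of the n-th Fresnel zone boundary:
    r_n = sqrt(n*lam*f + (n*lam/2)**2)."""
    return math.sqrt(n * wavelength * focal_length + (n * wavelength / 2) ** 2)

def zones_within(radius, wavelength, focal_length):
    """Count the full zones that fit inside a plate of the given radius."""
    n = 0
    while fzp_zone_radius(n + 1, wavelength, focal_length) <= radius:
        n += 1
    return n

# Parameters quoted in the abstract: 255 um diameter FZP, 800 um focal
# length, HeNe illumination at 633 nm.
n_zones = zones_within(255e-6 / 2, 633e-9, 800e-6)
```

The zone count sets the plate's numerical aperture and hence its diffraction-limited focal spot.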
Kang, Wonseok; Yu, Soohwan; Seo, Doochun; Jeong, Jaeheon; Paik, Joonki
2015-09-10
In very high-resolution (VHR) push-broom satellite sensor data, destriping and denoising have been chronic problems that have attracted major research effort in remote sensing. Since estimating the original image from a noisy input is an ill-posed problem, a simple noise removal algorithm cannot preserve the radiometric integrity of satellite data. To solve these problems, we present a novel method to correct VHR data acquired by a push-broom sensor by combining wavelet-Fourier and multiscale non-local means (NLM) filters. After the wavelet-Fourier filter separates the stripe noise from the mixed noise in the wavelet low- and selected high-frequency sub-bands, random noise is removed using the multiscale NLM filter in both low- and high-frequency sub-bands without loss of image detail. The performance of the proposed method is compared to various existing methods on push-broom sensor data acquired by the Korean Multi-Purpose Satellite 3 (KOMPSAT-3) with severe stripe and random noise; the proposed method significantly outperforms existing state-of-the-art methods in both qualitative and quantitative assessments.
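The wavelet-Fourier destriping in this paper is considerably more sophisticated, but the basic idea of removing column-wise stripe offsets from push-broom imagery can be sketched with simple moment matching; this is an illustrative baseline, not the authors' method:

```python
import numpy as np

def destripe_columns(img):
    """Minimal destriping sketch for push-broom stripe noise:
    match each column's mean to a smoothed column-mean profile
    (moment matching, not the paper's wavelet-Fourier filter)."""
    col_means = img.mean(axis=0)
    # smooth the column-mean profile so real horizontal structure
    # in the scene is preserved, and only the stripes are removed
    kernel = np.ones(9) / 9.0
    smooth = np.convolve(col_means, kernel, mode='same')
    return img - (col_means - smooth)[None, :]
```

A transform-domain filter improves on this by confining the correction to the frequency sub-bands where stripe energy actually concentrates.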
Hamann, Elias; Koenig, Thomas; Zuber, Marcus; Cecilia, Angelica; Tyazhev, Anton; Tolbanov, Oleg; Procz, Simon; Fauler, Alex; Baumbach, Tilo; Fiederle, Michael
2015-03-01
High-resistivity gallium arsenide is considered a suitable sensor material for spectroscopic X-ray imaging detectors. These sensors typically have thicknesses between a few hundred μm and 1 mm to ensure high photon detection efficiency. However, for small pixel sizes down to several tens of μm, an effect called charge sharing reduces a detector's spectroscopic performance. The recently developed Medipix3RX readout chip overcomes this limitation by implementing a charge summing circuit, which allows the reconstruction of the full energy information of a photon interaction in a single pixel. In this work, we present the characterization of the first Medipix3RX detector assembly with a 500 μm thick high-resistivity, chromium-compensated gallium arsenide sensor. We analyze its properties and demonstrate the functionality of the charge summing mode by means of energy response functions recorded at a synchrotron. Furthermore, the imaging properties of the detector, in terms of its modulation transfer functions and signal-to-noise ratios, are investigated. After more than a decade of attempts to establish gallium arsenide as a sensor material for photon counting detectors, our results represent a breakthrough in obtaining detector-grade material. The sensor we introduce is therefore suitable for high-resolution X-ray imaging applications.
A High Fidelity Approach to Data Simulation for Space Situational Awareness Missions
NASA Astrophysics Data System (ADS)
Hagerty, S.; Ellis, H., Jr.
2016-09-01
Space Situational Awareness (SSA) is vital to maintaining our Space Superiority. A high fidelity, time-based simulation tool, PROXOR™ (Proximity Operations and Rendering), supports SSA by generating realistic mission scenarios including sensor frame data with corresponding truth. This is a unique and critical tool for supporting mission architecture studies, new capability (algorithm) development, current/future capability performance analysis, and mission performance prediction. PROXOR™ provides a flexible architecture for sensor and resident space object (RSO) orbital motion and attitude control that simulates SSA, rendezvous and proximity operations scenarios. The major elements of interest are based on the ability to accurately simulate all aspects of the RSO model, viewing geometry, imaging optics, sensor detector, and environmental conditions. These capabilities enhance the realism of mission scenario models and generated mission image data. As an input, PROXOR™ uses a library of 3-D satellite models containing 10+ satellites, including low-earth orbit (e.g., DMSP) and geostationary (e.g., Intelsat) spacecraft, where the spacecraft surface properties are those of actual materials and include Phong and Maxwell-Beard bidirectional reflectance distribution function (BRDF) coefficients for accurate radiometric modeling. We calculate the inertial attitude, the changing solar and Earth illumination angles of the satellite, and the viewing angles from the sensor as we propagate the RSO in its orbit. The synthetic satellite image is rendered at high resolution and aggregated to the focal plane resolution resulting in accurate radiometry even when the RSO is a point source. 
The sensor model includes optical effects from the imaging system [point spread function (PSF) includes aberrations, obscurations, support structures, defocus], detector effects (CCD blooming, left/right bias, fixed pattern noise, image persistence, shot noise, read noise, and quantization noise), and environmental effects (radiation hits with selectable angular distributions and 4-layer atmospheric turbulence model for ground based sensors). We have developed an accurate flash Light Detection and Ranging (LIDAR) model that supports reconstruction of 3-dimensional information on the RSO. PROXOR™ contains many important imaging effects such as intra-frame smear, realized by oversampling the image in time and capturing target motion and jitter during the integration time.
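The detector-effects portion of a sensor model like the one described above (shot noise, read noise, quantization) can be sketched in a few lines; the parameter values below are illustrative defaults, not PROXOR's:

```python
import numpy as np

rng = np.random.default_rng(0)

def detect(photon_image, read_noise_e=5.0, full_well=50000, bits=12):
    """Toy focal-plane model: Poisson shot noise on the incoming
    photoelectrons, additive Gaussian read noise, full-well clipping,
    and quantization by an ADC of the given bit depth."""
    electrons = rng.poisson(photon_image).astype(float)
    electrons += rng.normal(0.0, read_noise_e, electrons.shape)
    electrons = np.clip(electrons, 0, full_well)
    adu = np.round(electrons / full_well * (2 ** bits - 1))
    return adu
```

A full simulator would first convolve the scene with the system PSF (aberrations, obscurations, defocus) before applying these detector effects.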
High-Resolution Gamma-Ray Imaging Measurements Using Externally Segmented Germanium Detectors
NASA Technical Reports Server (NTRS)
Callas, J.; Mahoney, W.; Skelton, R.; Varnell, L.; Wheaton, W.
1994-01-01
Fully two-dimensional gamma-ray imaging with simultaneous high-resolution spectroscopy has been demonstrated using an externally segmented germanium sensor. The system employs a single high-purity coaxial detector with its outer electrode segmented into 5 distinct charge collection regions and a lead coded aperture with a uniformly redundant array (URA) pattern. A series of one-dimensional responses was collected around 511 keV while the system was rotated in steps through 180 degrees. A non-negative, linear least-squares algorithm was then employed to reconstruct a 2-dimensional image. Corrections for multiple scattering in the detector, and the finite distance of source and detector are made in the reconstruction process.
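A coded-aperture reconstruction of this kind reduces to non-negative least squares once the system matrix is known. The sketch below uses a random binary mask with circular shifts as a stand-in for the URA pattern, and SciPy's `nnls` solver; everything here is illustrative rather than the instrument's actual response model:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 1-D coded-aperture system: each detector "view" is a
# circular shift of a pseudo-random binary mask (stand-in for a URA).
rng = np.random.default_rng(1)
mask = rng.integers(0, 2, 31).astype(float)
A = np.stack([np.roll(mask, k) for k in range(31)])

source = np.zeros(31)
source[[7, 20]] = [100.0, 40.0]     # two point sources
counts = A @ source                  # noiseless measurement

recon, residual = nnls(A, counts)    # non-negative least squares
```

In the paper, the system matrix additionally encodes multiple scattering in the detector and the finite source-detector distance, which is why those corrections enter the reconstruction step.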
New learning based super-resolution: use of DWT and IGMRF prior.
Gajjar, Prakash P; Joshi, Manjunath V
2010-05-01
In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an Inhomogeneous Gaussian Markov random field (IGMRF) and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function, which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting experiments on grayscale as well as color images. The method is compared with the standard interpolation technique and also with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks and remote surveillance, where the memory, the transmission bandwidth, and the camera cost are the main constraints.
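The regularized MAP formulation can be sketched as gradient descent on a data-fidelity term plus a prior. The sketch below substitutes a homogeneous quadratic smoothness prior for the paper's IGMRF and a fixed 2×2 block-average for the estimated aliasing matrix, so it illustrates the structure of the optimization rather than the authors' exact method:

```python
import numpy as np

def downsample(x):
    """Aliasing model: 2x2 block average followed by decimation."""
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def upsample(y):
    """Adjoint of the block-average decimation."""
    x = np.zeros((y.shape[0] * 2, y.shape[1] * 2))
    x[0::2, 0::2] = x[1::2, 0::2] = x[0::2, 1::2] = x[1::2, 1::2] = 0.25 * y
    return x

def grad_prior(x):
    """Gradient of a simple quadratic smoothness prior at interior
    pixels (a homogeneous stand-in for the paper's IGMRF)."""
    g = np.zeros_like(x)
    g[1:-1, :] += 2 * x[1:-1, :] - x[:-2, :] - x[2:, :]
    g[:, 1:-1] += 2 * x[:, 1:-1] - x[:, :-2] - x[:, 2:]
    return 2 * g

def map_sr(y, lam=0.05, step=0.5, iters=200):
    """MAP super-resolution by gradient descent on
    ||D(x) - y||^2 + lam * smoothness(x)."""
    x = upsample(y) * 4.0            # crude initial HR estimate
    for _ in range(iters):
        data_grad = 2 * upsample(downsample(x) - y)
        x -= step * (data_grad + lam * grad_prior(x))
    return x
```

In the paper, the initial HR estimate instead comes from the DWT-based learning stage, and the aliasing matrix and IGMRF parameters are estimated from that estimate.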
Akerley, John
2010-04-17
Map, image, and data files, and a summary report of a high-resolution aeromagnetic survey of southern Maui, Hawai'i, completed by EDCON-PRJ, Inc. for Ormat Nevada Inc. using a helicopter and a towed sensor array.
MPCM: a hardware coder for super slow motion video sequences
NASA Astrophysics Data System (ADS)
Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.
2013-12-01
In the last decade, the improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization that demand real-time video capturing at extremely high frame rates with high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point access.
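Modulo-PCM itself is conceptually simple: transmit only the k least-significant bits of each sample and let the decoder resolve the ambiguity using the previous sample as a predictor. A minimal sketch (the bit depth and variable names are illustrative; the FPGA design in the paper is of course far more involved):

```python
import numpy as np

def mpcm_encode(samples, k=5):
    """Keep only the k least-significant bits of each integer sample."""
    return samples & ((1 << k) - 1)

def mpcm_decode(residues, k=5, init=0):
    """Reconstruct each sample as the value congruent to the received
    residue (mod 2^k) that lies closest to the previous decoded sample."""
    m = 1 << k
    out, prev = [], init
    for r in residues:
        base = prev - (prev % m) + int(r)
        # candidates one modulus up/down; pick the closest to prev
        cand = [base - m, base, base + m]
        prev = min(cand, key=lambda c: abs(c - prev))
        out.append(prev)
    return np.array(out)
```

Decoding is exact as long as successive samples differ by less than half the modulus, which is why the scheme suits the highly correlated frames of ultrahigh-speed video.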
Imaging mitochondrial flux in single cells with a FRET sensor for pyruvate.
San Martín, Alejandro; Ceballo, Sebastián; Baeza-Lehnert, Felipe; Lerchundi, Rodrigo; Valdebenito, Rocío; Contreras-Baeza, Yasna; Alegría, Karin; Barros, L Felipe
2014-01-01
Mitochondrial flux is currently accessible at low resolution. Here we introduce a genetically-encoded FRET sensor for pyruvate, and methods for quantitative measurement of pyruvate transport, pyruvate production and mitochondrial pyruvate consumption in intact individual cells at high temporal resolution. In HEK293 cells, neurons and astrocytes, mitochondrial pyruvate uptake was saturated at physiological levels, showing that the metabolic rate is determined by intrinsic properties of the organelle and not by substrate availability. The potential of the sensor was further demonstrated in neurons, where mitochondrial flux was found to rise by 300% within seconds of a calcium transient triggered by a short theta burst, while glucose levels remained unaltered. In contrast, astrocytic mitochondria were insensitive to a similar calcium transient elicited by extracellular ATP. We expect the improved resolution provided by the pyruvate sensor will be of practical interest for basic and applied researchers interested in mitochondrial function.
Multi-sensor data processing method for improved satellite retrievals
NASA Astrophysics Data System (ADS)
Fan, Xingwang
2017-04-01
Satellite remote sensing has provided massive data that improve the overall accuracy and extend the time series of environmental studies. In reflective solar bands, satellite data are related to land surface properties via radiative transfer (RT) equations. These equations generally include sensor-related (calibration coefficients), atmosphere-related (aerosol optical thickness) and surface-related (surface reflectance) parameters. It is an ill-posed problem to solve for three parameters from a single RT equation, and even with two RT equations (dual-sensor data) the problem remains underdetermined. However, a robust solution can be obtained when any two parameters are known. If surface and atmosphere are known, sensor intercalibration can be performed. For example, the Advanced Very High Resolution Radiometer (AVHRR) was calibrated to the MODerate-resolution Imaging Spectroradiometer (MODIS) in Fan and Liu (2014) [Fan, X., and Liu, Y. (2014). Quantifying the relationship between intersensor images in solar reflective bands: Implications for intercalibration. IEEE Transactions on Geoscience and Remote Sensing, 52(12), 7727-7737.]. If sensor and surface are known, atmospheric data can be retrieved. For example, aerosol data were retrieved using tandem TERRA and AQUA MODIS images in Fan and Liu (2016a) [Fan, X., and Liu, Y. (2016a). Exploiting TERRA-AQUA MODIS relationship in the reflective solar bands for aerosol retrieval. Remote Sensing, 8(12), 996.]. If sensor and atmosphere are known, data consistency can be obtained. For example, Normalized Difference Vegetation Index (NDVI) data were intercalibrated among coarse-resolution sensors in Fan and Liu (2016b) [Fan, X., and Liu, Y. (2016b). A global study of NDVI difference among moderate-resolution satellite sensors. ISPRS Journal of Photogrammetry and Remote Sensing, 121, 177-191.], and among fine-resolution sensors in Fan and Liu (2017) [Fan, X., and Liu, Y. (2017).
A generalized model for intersensor NDVI calibration and its comparison with regression approaches. IEEE Transactions on Geoscience and Remote Sensing, 55(3), doi: 10.1109/TGRS.2016.2635802.]. These studies demonstrate the success of multi-sensor data and novel methods in the research domain of geoscience. These data will benefit remote sensing of terrestrial parameters in decadal timescales, such as soil salinity content in Fan et al. (2016) [Fan, X., Weng, Y., and Tao, J. (2016). Towards decadal soil salinity mapping using Landsat time series data. International Journal of Applied Earth Observation and Geoinformation, 52, 32-41.].
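The "two parameters known" cases described above often reduce in practice to fitting a gain and offset between matched observations from two sensors. The sketch below illustrates that idea on synthetic reflectances; the numbers are invented for illustration and are not from the cited studies:

```python
import numpy as np

# Sketch of intersensor calibration when surface and atmosphere are
# assumed known: fit a linear gain/offset between matched reflectance
# observations from two sensors (a toy stand-in for the cited methods).
rng = np.random.default_rng(2)
ref_a = rng.uniform(0.05, 0.4, 500)                       # "reference" sensor
ref_b = 0.97 * ref_a + 0.01 + rng.normal(0, 0.002, 500)   # target sensor

gain, offset = np.polyfit(ref_a, ref_b, 1)
calibrated = (ref_b - offset) / gain        # map sensor b onto a's scale
```

Generalized models, as in the 2017 paper, replace this single gain/offset pair with band- and condition-dependent relationships.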
NASA Astrophysics Data System (ADS)
Chu, Hongjun; Qi, Jiaran; Xiao, Shanshan; Qiu, Jinghui
2018-04-01
In this paper, we present a flat transmission-type focusing metasurface for near-field passive millimeter-wave (PMMW) imaging systems. Considering the non-uniform wavefront of the actual feeding horn, the metasurface is configured from unit cells consisting of coaxial annular apertures and is optimized to achieve the broadband, high-spatial-resolution, and polarization-insensitive properties important for PMMW imaging applications in the frequency range from 33 GHz to 37 GHz, with a focal spot as small as 0.43λ0 (at 35 GHz). A prototype of the proposed metasurface was fabricated, and the measurement results agree fairly well with the simulated ones. Furthermore, an experimental single-sensor PMMW imaging system was constructed based on the metasurface and a Ka-band direct-detection radiometer. The experimental results show that the azimuth resolution of the system can reach approximately 4 mm (≈0.47λ0). The proposed metasurface can potentially replace a bulky dielectric lens or reflector antenna, enabling more compact PMMW imaging systems with spatial resolution approaching the diffraction limit.
Investigation of CMOS pixel sensor with 0.18 μm CMOS technology for high-precision tracking detector
NASA Astrophysics Data System (ADS)
Zhang, L.; Fu, M.; Zhang, Y.; Yan, W.; Wang, M.
2017-01-01
The Circular Electron Positron Collider (CEPC) proposed by the Chinese high energy physics community is aiming to measure Higgs particles and their interactions precisely. The tracking detector, including the Silicon Inner Tracker (SIT) and Forward Tracking Disks (FTD), imposes stringent requirements on sensor technologies in terms of spatial resolution, power consumption, and readout speed. The CMOS Pixel Sensor (CPS) is a promising candidate for meeting these requirements. This paper presents preliminary studies on sensor optimization for the tracking detector to achieve high collection efficiency while keeping the necessary spatial resolution. Detailed studies have been performed on charge collection using a 0.18 μm CMOS image sensor process. This process allows a high-resistivity epitaxial layer, leading to a significant improvement in charge collection and therefore improved radiation tolerance. Guided by the simulation results, the first exploratory prototype has been designed and fabricated. The prototype includes 9 different pixel arrays, which vary in pixel pitch, diode size, and geometry. The total area of the prototype amounts to 2 × 7.88 mm².
Research-grade CMOS image sensors for demanding space applications
NASA Astrophysics Data System (ADS)
Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre
2004-06-01
Imaging detectors are key elements of optical instruments and sensors on board space missions dedicated to Earth observation (high-resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...), and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, and thanks to their steadily improving performance, CIS came to be used successfully for increasingly demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk presents the existing and foreseen ways to reach high-level electro-optical performance for CIS. The development of CIS prototypes built using an imaging CMOS process, and of devices based on improved designs, is presented.
Thinking Outside of the Blue Marble: Novel Ocean Applications Using the VIIRS Sensor
NASA Technical Reports Server (NTRS)
Vandermeulen, Ryan A.; Arnone, Robert
2016-01-01
While planning for future space-borne sensors will increase the quality, quantity, and duration of ocean observations in the years to come, efforts to extend the limits of sensors currently in orbit can help shed light on future scientific gains as well as associated uncertainties. Here, we present several applications that are unique to the polar-orbiting Visible Infrared Imaging Radiometer Suite (VIIRS), each of which challenges the threshold capabilities of the sensor and provides lessons for future missions. For instance, while moderate-resolution polar orbiters typically have a one-day revisit time, we are able to obtain multiple looks of the same area by focusing on the extreme zenith angles where orbital views overlap, and pair these observations with those from other sensors to create pseudo-geostationary data sets. Similarly, by exploiting high spatial resolution (imaging) channels and analyzing patterns of synoptic covariance across the visible spectrum, we can obtain higher spatial resolution bio-optical products. Alternatively, non-traditional products can illuminate important biological interactions in the ocean, such as the use of the Day-Night Band to provide some quantification of phototactic behavior of marine life along light-polluted beaches, as well as to track the location of marine fishing vessel fleets along ocean fronts. In this talk, we explore ways to take full advantage of the capabilities of existing sensors in order to maximize insights for future missions.
Image super-resolution via adaptive filtering and regularization
NASA Astrophysics Data System (ADS)
Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming
2014-11-01
Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a low-resolution image, coupled with some prior knowledge as a regularization term. Classic methods regularize the image by total variation (TV) and/or wavelet or some other transform, which can introduce artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key to our model is that the adaptive filter first removes the spatial correlation among pixels, and then only the high-frequency (HF) part, which is sparser in the TV and transform domains, is used as the regularization term. Concretely, by transforming the original model, the SR problem can be solved through two alternating iterative sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image are obtained by solving the first and second sub-problems, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with state-of-the-art methods.
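The data-fidelity-plus-regularization formulation underlying such SR methods can be sketched in a minimal form. The operators below (average-pool downsampling, nearest-neighbour upsampling, a smoothed TV penalty) are illustrative stand-ins for a generic TV-regularized SR, not the paper's adaptive-filter pipeline:

```python
import numpy as np

def downsample(x, f):
    # Average-pool by factor f (a simple stand-in for the sensor model).
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(y, f):
    # Nearest-neighbour upsampling (adjoint-like operator for the sketch).
    return np.repeat(np.repeat(y, f, axis=0), f, axis=1)

def tv_grad(x, eps=1e-6):
    # Gradient of a smoothed anisotropic total-variation penalty.
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    nx = gx / np.sqrt(gx**2 + eps)
    ny = gy / np.sqrt(gy**2 + eps)
    div = (np.diff(nx, axis=1, prepend=nx[:, :1])
           + np.diff(ny, axis=0, prepend=ny[:1, :]))
    return -div

def sr_tv(y, f=2, lam=0.005, step=0.5, n_iter=150):
    # Minimise ||D x - y||^2 + lam * TV(x) by gradient descent,
    # where D is the downsampling operator.
    x = upsample(y, f)
    for _ in range(n_iter):
        residual = downsample(x, f) - y
        x -= step * (upsample(residual, f) / f**2 + lam * tv_grad(x))
    return x
```

The adaptive-filter framework of the paper replaces the raw-image TV term with a TV/transform penalty on the filtered high-frequency residue, which is sparser and therefore better matched to the regularizer.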
3D near-infrared imaging based on a single-photon avalanche diode array sensor
NASA Astrophysics Data System (ADS)
Mata Pavia, Juan; Charbon, Edoardo; Wolf, Martin
2011-07-01
An imager for optical tomography was designed based on a detector with 128×128 single-photon pixels that includes a bank of 32 time-to-digital converters. Owing to the high spatial resolution and the possibility of performing time-resolved measurements, a new contact-less setup was conceived in which scanning of the object is not necessary. This enables high-resolution optical tomography at a much higher acquisition rate, which is fundamental in clinical applications. The setup has a timing resolution of 97 ps and operates with a laser source with an average power of 3 mW. This new imaging system generates a large amount of data that could not be processed by established methods, so new concepts and algorithms were developed to take full advantage of it. Images were generated using a new reconstruction algorithm that combines general inverse-problem methods with Fourier transforms in order to reduce the complexity of the problem. Simulations show that the potential resolution of the new setup is on the order of millimeters. Experiments have been performed to confirm this potential. Images derived from the measurements demonstrate that we have already reached a resolution of 5 mm.
Theoretical performance analysis for CMOS based high resolution detectors.
Jain, Amit; Bednarek, Daniel R; Rudin, Stephen
2013-03-06
High resolution imaging capabilities are essential for accurately guiding successful endovascular interventional procedures. Present x-ray imaging detectors are not always adequate due to their inherent limitations. The newly-developed high-resolution micro-angiographic fluoroscope (MAF-CCD) detector has demonstrated excellent clinical image quality; however, further improvement in performance and physical design may be possible using CMOS sensors. We have thus calculated the theoretical performance of two proposed CMOS detectors which may be used as a successor to the MAF. The proposed detectors have a 300 μm thick HL-type CsI phosphor, a 50 μm-pixel CMOS sensor with and without a variable gain light image intensifier (LII), and are designated MAF-CMOS-LII and MAF-CMOS, respectively. For the performance evaluation, linear cascade modeling was used. The detector imaging chains were divided into individual stages characterized by one of the basic processes (quantum gain, binomial selection, stochastic and deterministic blurring, additive noise). Ranges of readout noise and exposure were used to calculate the detectors' MTF and DQE. The MAF-CMOS showed slightly better MTF than the MAF-CMOS-LII, but the MAF-CMOS-LII showed far better DQE, especially for lower exposures. The proposed detectors can have improved MTF and DQE compared with the present high resolution MAF detector. The performance of the MAF-CMOS is excellent for the angiography exposure range; however it is limited at fluoroscopic levels due to additive instrumentation noise. The MAF-CMOS-LII, having the advantage of the variable LII gain, can overcome the noise limitation and hence may perform exceptionally for the full range of required exposures; however, it is more complex and hence more expensive.
An update of commercial infrared sensing and imaging instruments
NASA Technical Reports Server (NTRS)
Kaplan, Herbert
1989-01-01
A classification of infrared sensing instruments by type and application, listing commercially available instruments, from single point thermal probes to on-line control sensors, to high speed, high resolution imaging systems is given. A review of performance specifications follows, along with a discussion of typical thermographic display approaches utilized by various imager manufacturers. An update report on new instruments, new display techniques and newly introduced features of existing instruments is given.
NASA Technical Reports Server (NTRS)
Schmullius, C.; Nithack, J.
1992-01-01
On July 12, the MAC Europe '91 (Multi-Sensor Airborne Campaign) took place over test site Oberpfaffenhofen. The DLR Institute of Radio-Frequency Technology participated with its C-VV, X-VV, and X-HH Experimental Synthetic Aperture Radar (E-SAR). The high resolution E-SAR images with a pixel size between 1 and 2 m and the polarimetric AIRSAR images were analyzed. Using both sensors in combination is a unique opportunity to evaluate SAR images in a frequency range from P- to X-band and to investigate polarimetric information.
ASTER First Views of Red Sea, Ethiopia - Thermal-Infrared TIR Image monochrome
2000-03-11
ASTER succeeded in acquiring this image at night, which is something the Visible/Near Infrared (VNIR) and Shortwave Infrared (SWIR) sensors cannot do. The scene covers the Red Sea coastline to an inland area of Ethiopia. White pixels represent areas with higher-temperature material on the surface, while dark pixels indicate lower temperatures. This image shows ASTER's ability as a highly sensitive, temperature-discerning instrument and the first spaceborne TIR multi-band sensor in history. Image size: approximately 60 km x 60 km; ground resolution: approximately 90 m x 90 m. http://photojournal.jpl.nasa.gov/catalog/PIA02452
Chemical bond imaging using higher eigenmodes of tuning fork sensors in atomic force microscopy
NASA Astrophysics Data System (ADS)
Ebeling, Daniel; Zhong, Qigang; Ahles, Sebastian; Chi, Lifeng; Wegner, Hermann A.; Schirmeisen, André
2017-05-01
We demonstrate the ability of resolving the chemical structure of single organic molecules using non-contact atomic force microscopy with higher normal eigenmodes of quartz tuning fork sensors. In order to achieve submolecular resolution, CO-functionalized tips at low temperatures are used. The tuning fork sensors are operated in ultrahigh vacuum in the frequency modulation mode by exciting either their first or second eigenmode. Despite the high effective spring constant of the second eigenmode (on the order of several tens of kN/m), the force sensitivity is sufficiently high to achieve atomic resolution above the organic molecules. This is observed for two different tuning fork sensors with different tip geometries (small tip vs. large tip). These results represent an important step towards resolving the chemical structure of single molecules with multifrequency atomic force microscopy techniques where two or more eigenmodes are driven simultaneously.
Smart sensors II; Proceedings of the Seminar, San Diego, CA, July 31, August 1, 1980
NASA Astrophysics Data System (ADS)
Barbe, D. F.
1980-01-01
Topics discussed include technology for smart sensors, smart sensors for tracking and surveillance, and techniques and algorithms for smart sensors. Papers are presented on the application of very large scale integrated circuits to smart sensors, imaging charge-coupled devices for deep-space surveillance, ultra-precise star tracking using charge coupled devices, and automatic target identification of blurred images with super-resolution features. Attention is also given to smart sensors for terminal homing, algorithms for estimating image position, and the computational efficiency of multiple image registration algorithms.
Sun, Chenglu; Li, Wei; Chen, Wei
2017-01-01
For extracting the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat that utilizes a flexible pressure sensor array, printed electrodes, and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. To reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. With the CS-based method, 40% of the sampling time can be saved by acquiring only about one-third of the original sampling points. Several experiments were then carried out to validate the performance of the CS-based method. While fewer than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrate that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
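Compressed sensing recovers a sparse signal from far fewer measurements than unknowns. A minimal numpy sketch using Orthogonal Matching Pursuit — a generic CS solver, not necessarily the reconstruction algorithm used in the paper — looks like this:

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x.
    # A: (m, n) measurement matrix with m << n; y: (m,) measurements.
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

In the mat scenario, y would be the subsampled sensor readings and the sparsity would hold in a transform basis (e.g., DCT) rather than the pixel basis; that change only multiplies A by the basis matrix.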
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
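The second method's control-point regression amounts to a least-squares fit of a spatial transform from matched point pairs. A minimal sketch for an affine model (the abstract does not specify the transform family, so the affine choice here is an assumption) is:

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine transform mapping src control points to dst.
    # src, dst: (N, 2) arrays of matched (x, y) points, N >= 3 non-collinear.
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])   # rows [x, y, 1]
    # Solve A @ M ~= dst for the 3x2 parameter matrix M.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    # Map points through the fitted transform.
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M
```

With more than three point pairs the fit is over-determined, and the least-squares solution averages out control-point selection error, which is why the paper pairs user-selected points with regression analysis.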
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework that combines two or more images of the same scene from one or multiple sensors, improving the resolution of the images and increasing their interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images; traditional approaches involve large amounts of redundant data and ignore the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing data redundancy through different sampling patterns. This work presents a compressive HS and MS image fusion approach that uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed using sparse optimization algorithms. Different fusion scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.
A Method for Imaging Oxygen Distribution and Respiration at a Microscopic Level of Resolution.
Rolletschek, Hardy; Liebsch, Gregor
2017-01-01
Conventional oxygen (micro-)sensors assess oxygen concentration within a particular region or across a transect of tissue, but provide no information regarding its two-dimensional distribution. Here, a novel imaging technology is presented in which an optical sensor foil (i.e., a planar optode) is attached to the surface of the sample. The sensor converts a fluorescent signal into an oxygen value. Since each single image captures an entire area of the sample surface, the system is able to deduce the distribution of oxygen at a resolution of a few micrometers. It can be deployed to dynamically monitor oxygen consumption, thereby providing a detailed respiration map at close to cellular resolution. Here, we demonstrate the application of the imaging tool to developing plant seeds; the protocol is explained step by step and some potential pitfalls are discussed.
EARTHS (Earth Albedo Radiometer for Temporal Hemispheric Sensing)
NASA Astrophysics Data System (ADS)
Ackleson, S. G.; Bowles, J. H.; Mouroulis, P.; Philpot, W. D.
2018-02-01
We propose a concept for measuring the hemispherical Earth albedo in high temporal and spectral resolution using a hyperspectral imaging sensor deployed on a lunar satellite, such as the proposed NASA Deep Space Gateway.
NASA Astrophysics Data System (ADS)
Alonso, C.; Benito, R. M.; Tarquis, A. M.
2012-04-01
Satellite image data have become an important source of information for monitoring vegetation and mapping land cover at several scales. Besides this, the distribution and phenology of vegetation are largely associated with climate, terrain characteristics and human activity. Various vegetation indices have been developed for qualitative and quantitative assessment of vegetation using remote spectral measurements. In particular, sensors with spectral bands in the red (RED) and near-infrared (NIR) lend themselves well to vegetation monitoring, and the Normalized Difference Vegetation Index, NDVI = (NIR - RED) / (NIR + RED), has been widely used. Given that the characteristics of the RED and NIR spectral bands vary distinctly from sensor to sensor, NDVI values based on data from different instruments are not directly comparable. The spatial resolution also varies significantly between sensors, as well as within a given scene in the case of wide-angle and oblique sensors. As a result, NDVI values will vary according to combinations of the heterogeneity and scale of terrestrial surfaces and pixel footprint sizes. Therefore, the question arises as to the impact of differences in spectral and spatial resolution on vegetation indices like the NDVI. The aim of this study is to compare the NDVI values of two different sensors at different spatial resolutions. Scaling analysis and modeling techniques are increasingly understood to be the result of nonlinear dynamic mechanisms repeating scale after scale from large to small scales, leading to non-classical resolution dependencies. In the remote sensing framework, the main characteristic of sensor images is the high local variability in their values. This variability is a consequence of the increase in spatial and radiometric resolution, which implies an increase in complexity that is necessary to characterize.
Fractal and multifractal techniques have proven useful for extracting such complexities from remote sensing images and are applied in this study to examine the scaling behavior of each sensor in terms of generalized fractal dimensions. The study area is located in the provinces of Caceres and Salamanca (west of the Iberian Peninsula) and covers an extent of 32 x 32 km2. The altitude in the area varies from 1,560 to 320 m, comprising natural vegetation in the mountain area (forest and bushes) and agricultural crops in the valleys. Scaling analyses were applied to the NDVI derived from Landsat-5 and MODIS TERRA over the same region, acquired one day apart, on 13 and 12 July 2003, respectively. From these images the area of interest was selected, yielding 1024 x 1024 pixels for the Landsat image and 128 x 128 pixels for the MODIS image. This implies a resolution of 250 x 250 m for MODIS and 30 x 30 m for Landsat. From the reflectance data in the NIR and RED bands, the NDVI was calculated for each image, focusing this study on the 0.2 to 0.5 range of values. Once both NDVI fields were obtained, several fractal dimensions were estimated for each one, segmenting the values into bins 0.20-0.25, 0.25-0.30, and so on up to 0.45-0.50. In all the scaling analyses the scale length was expressed in meters, not pixels, to make the comparison between the two sensors possible. Results are discussed. Acknowledgements: This work has been supported by the Spanish MEC under Projects No. AGL2010-21501/AGR, MTM2009-14621 and i-MATH No. CSD2006-00032
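The two core computations can be sketched briefly: the NDVI from the band reflectances, and a box-counting estimate of fractal dimension on a thresholded NDVI mask. Box counting is the simplest member of the generalized-dimension family the study uses; the exact multifractal estimator is not specified here, so this is only an illustrative stand-in:

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - RED) / (NIR + RED), guarding against zero division.
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / np.maximum(nir + red, 1e-12)

def box_counting_dimension(mask):
    # Estimate the box-counting dimension of a square binary mask
    # whose side length is a power of two.
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Count boxes of side s that contain at least one set pixel.
        blocks = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(blocks.sum())
        s //= 2
    # Slope of log(count) versus log(1/size) gives the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(np.array(counts, dtype=float)), 1)
    return slope
```

In the study's setting, the mask would be the set of pixels whose NDVI falls in one of the bins (e.g., 0.20-0.25), with the box side expressed in meters so the Landsat and MODIS scalings are comparable.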
Image sensor with high dynamic range linear output
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)
2007-01-01
Designs and operational methods to increase the dynamic range of image sensors, and APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, resulting in multiple integration times. The operational methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, analog-to-digital conversion of high speed and high resolution can be implemented.
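One common way to turn two integration times into a single linear high-dynamic-range value is to use the long integration where the pixel is unsaturated and the rescaled short integration elsewhere. The sketch below illustrates that merge rule; the saturation level and the merge logic are generic assumptions, not the patent's circuit:

```python
import numpy as np

FULL_WELL = 4095  # saturation level of a simulated 12-bit pixel (assumed)

def combine_exposures(short, long, ratio):
    # Merge two reads of the same pixel taken with integration times
    # T/ratio (short) and T (long) into one linear HDR value, expressed
    # in units of the long read. Use the long integration where it is
    # not saturated; otherwise scale the short read up by the ratio.
    short = np.asarray(short, dtype=float)
    long = np.asarray(long, dtype=float)
    return np.where(long < FULL_WELL, long, short * ratio)
```

Because the short read is only used for bright pixels, its larger relative noise matters little, and the combined output stays linear across the extended range.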
High-speed three-dimensional measurements with a fringe projection-based optical sensor
NASA Astrophysics Data System (ADS)
Bräuer-Burchardt, Christian; Breitbarth, Andreas; Kühmstedt, Peter; Notni, Gunther
2014-11-01
An optical three-dimensional (3-D) sensor based on a fringe projection technique that acquires the surface geometry of small objects was developed for highly resolved and ultrafast measurements. It achieves a data acquisition rate of up to 60 high-resolution 3-D datasets per second. The high measurement velocity was achieved through rigorous fringe-code reduction and parallel data processing. The reduction of the length of the fringe image sequence was obtained by omitting the Gray-code sequence, using the geometric restrictions of the measurement objects and the geometric constraints of the sensor arrangement. The sensor covers three different measurement fields between 20 mm×20 mm and 40 mm×40 mm with a spatial resolution between 10 and 20 μm, respectively. In order to obtain a robust and fast recalibration of the sensor after a change of the measurement field, a calibration procedure based on single-shot analysis of a special test object was applied, which requires little time and effort. The sensor may be used, e.g., for quality inspection of conductor boards or plugs in real-time industrial applications.
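Fringe projection sensors typically recover a wrapped phase map from a short sequence of phase-shifted fringe images; the phase then maps to height via triangulation. A minimal four-step sketch (the paper's reduced coding scheme is more elaborate than this textbook formula) is:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    # Wrapped phase from four fringe images with 90-degree shifts:
    #   I_k = A + B * cos(phi + k * pi/2),  k = 0..3.
    # Then i3 - i1 = 2B sin(phi) and i0 - i2 = 2B cos(phi),
    # so arctan2 recovers phi in (-pi, pi] independently of A and B.
    return np.arctan2(i3 - i1, i0 - i2)
```

Shortening this sequence, as the paper does by dropping the Gray-code images and exploiting geometric constraints to unwrap the phase, is what pushes the acquisition rate to 60 datasets per second.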
Integrated sensor with frame memory and programmable resolution for light adaptive imaging
NASA Technical Reports Server (NTRS)
Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)
2004-01-01
An image sensor operable to vary the output spatial resolution according to the received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch of adjustable size are added to increase both the image brightness and the signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibrating column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed-pattern noise.
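The brightness/SNR trade behind adaptive patch summation can be sketched as plain f x f binning: summing f^2 pixels multiplies the signal by f^2 while uncorrelated noise grows only by f, so SNR improves by roughly a factor of f at the cost of resolution. This is a generic model of the effect, not the patent's integrator circuit:

```python
import numpy as np

def bin_pixels(frame, f):
    # Sum f x f neighbourhoods of a 2-D frame, cropping any remainder.
    # Signal scales as f^2, uncorrelated noise as f, so SNR gains ~f.
    h, w = frame.shape
    cropped = frame[:h - h % f, :w - w % f]
    return cropped.reshape(h // f, f, w // f, f).sum(axis=(1, 3))
```

A light-adaptive sensor would pick f per scene (or per region) so dim scenes are read out at coarser resolution but acceptable SNR.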
USGS aerial resolution targets.
Salamonowicz, P.H.
1982-01-01
It is necessary to measure the achievable resolution of any airborne sensor that is to be used for metric purposes. Laboratory calibration facilities may be inadequate or inappropriate for determining the resolution of non-photographic sensors such as optical-mechanical scanners, television imaging tubes, and linear arrays. However, large target arrays imaged in the field can be used in testing such systems. The USGS has constructed an array of resolution targets in order to permit field testing of a variety of airborne sensing systems. The target array permits any interested organization with an airborne sensing system to accurately determine the operational resolution of its system. -from Author
Cross delay line sensor characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, Israel J; Remelius, Dennis K; Tiee, Joe J
There exists a wealth of information in the scientific literature on the physical properties and device characterization procedures for complementary metal oxide semiconductor (CMOS), charge coupled device (CCD) and avalanche photodiode (APD) format detectors. Numerous papers and books have also treated photocathode operation in the context of photomultiplier tube (PMT) operation, for either non-imaging applications or limited night-vision capability. However, much less information has been reported in the literature about the characterization procedures and properties of photocathode detectors with novel cross delay line (XDL) anode structures. These allow one to detect single photons and create images by recording space and time coordinate (X, Y & T) information. In this paper, we report on the physical characteristics and performance of a cross delay line anode sensor with an enhanced near-infrared wavelength response photocathode and a high-dynamic-range micro-channel plate (MCP) gain (> 10^6) multiplier stage. Measurement procedures and results are presented, including the device dark event rate (DER), pulse height distribution, quantum and electronic device efficiency (QE & DQE), and spatial resolution per effective pixel region in a 25 mm sensor array. The overall knowledge and information obtained from XDL sensor characterization allow us to optimize device performance and assess capability. These performance properties and capabilities make XDL detectors ideal for remote sensing field applications that require single-photon detection, imaging, sub-nanosecond timing response, high spatial resolution (tens of microns) and large effective image format.
Sensitivity encoded silicon photomultiplier--a new sensor for high-resolution PET-MRI.
Schulz, Volkmar; Berker, Yannick; Berneking, Arne; Omidvari, Negar; Kiessling, Fabian; Gola, Alberto; Piemonte, Claudio
2013-07-21
Detectors for simultaneous positron emission tomography and magnetic resonance imaging in particular with sub-mm spatial resolution are commonly composed of scintillator crystal arrays, readout via arrays of solid state sensors, such as avalanche photo diodes (APDs) or silicon photomultipliers (SiPMs). Usually a light guide between the crystals and the sensor is used to enable the identification of crystals which are smaller than the sensor elements. However, this complicates crystal identification at the gaps and edges of the sensor arrays. A solution is to use as many sensors as crystals with a direct coupling, which unfortunately increases the complexity and power consumption of the readout electronics. Since 1997, position-sensitive APDs have been successfully used to identify sub-mm crystals. Unfortunately, these devices show a limitation in their time resolution and a degradation of spatial resolution when placed in higher magnetic fields. To overcome these limitations, this paper presents a new sensor concept that extends conventional SiPMs by adding position information via the spatial encoding of the channel sensitivity. The concept allows a direct coupling of high-resolution crystal arrays to the sensor with a reduced amount of readout channels. The theory of sensitivity encoding is detailed and linked to compressed sensing to compute unique sparse solutions. Two devices have been designed using one- and two-dimensional linear sensitivity encoding with eight and four readout channels, respectively. Flood histograms of both devices show the capability to precisely identify all 4 × 4 LYSO crystals with dimensions of 0.93 × 0.93 × 10 mm^3. For these crystals, the energy and time resolution (MV ± SD) of the devices with one (two)-dimensional encoding have been measured to be 12.3 · (1 ± 0.047)% (13.7 · (1 ± 0.047)%) around 511 keV with a paired coincidence time resolution (full width at half maximum) of 462 · (1 ± 0.054) ps (452 · (1 ± 0.078) ps).
Calibration of Action Cameras for Photogrammetric Purposes
Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo
2014-01-01
The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898
Using a high spatial resolution tactile sensor for intention detection.
Castellini, Claudio; Koiva, Risto
2013-06-01
Intention detection is the interpretation of biological signals with the aim of automatically, reliably and naturally understanding what a human subject desires to do. Although intention detection is not restricted to disabled people, such methods can be crucial in improving a patient's life, e.g., aiding control of a robotic wheelchair or of a self-powered prosthesis. Traditionally, intention detection is done using, e.g., gaze tracking, surface electromyography and electroencephalography. In this paper we present exciting initial results of an experiment aimed at intention detection using a high-spatial-resolution, high-dynamic-range tactile sensor. The tactile image of the ventral side of the forearm of 9 able-bodied participants was recorded during a variable-force task stimulated at the fingertip. Both the forces at the fingertip and at the forearm were synchronously recorded. We show that a standard dimensionality reduction technique (Principal Component Analysis) plus a Support Vector Machine attain almost perfect detection accuracy of the direction and the intensity of the intended force. This paves the way for high spatial resolution tactile sensors to be used as a means for intention detection.
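The paper's dimensionality-reduction-plus-classifier pipeline can be sketched with numpy alone: PCA implemented via SVD of the centred tactile-image matrix, followed by a classifier on the projected data. To keep the sketch dependency-free, a nearest-centroid classifier stands in for the Support Vector Machine (an assumption; the paper uses an SVM):

```python
import numpy as np

def pca_fit(X, n_components):
    # Fit PCA via SVD of the mean-centred data matrix X of shape (N, D).
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    # Project data onto the leading principal components.
    return (X - mean) @ components.T

def fit_centroids(Z, y):
    # Nearest-centroid classifier: one centroid per intended-force class.
    classes = np.unique(y)
    return classes, np.array([Z[y == c].mean(axis=0) for c in classes])

def predict(Z, classes, centroids):
    # Assign each sample to the class with the closest centroid.
    d = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]
```

In the experiment, X would hold the flattened forearm tactile images and y the intended force direction or intensity class; the SVM would replace the centroid step for better margins on harder data.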
An Overview of the CBERS-2 Satellite and Comparison of the CBERS-2 CCD Data with the L5 TM Data
NASA Technical Reports Server (NTRS)
Chandler, Gyanesh
2007-01-01
The CBERS satellite carries on board a multi-sensor payload with different spatial resolutions and collection frequencies: the HRCCD (High Resolution CCD Camera), the IRMSS (Infrared Multispectral Scanner), and the WFI (Wide-Field Imager). The CCD and WFI cameras operate in the VNIR region, while the IRMSS operates in the SWIR and thermal regions. In addition to the imaging payload, the satellite carries a Data Collection System (DCS) and a Space Environment Monitor (SEM).
SkySat-1: very high-resolution imagery from a small satellite
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk
2014-10-01
This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially-available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved with calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.
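The SNR benefit of combining multiple frames can be illustrated with a toy simulation; the scene, noise level and frame count below are invented, and the sketch omits the frame registration that real "digital TDI" requires.

```python
import numpy as np

# Toy illustration of the frame-combination principle: averaging N aligned
# noisy frames of a static scene raises SNR roughly as sqrt(N). The real
# pipeline also registers frames for platform motion, which is omitted here.
rng = np.random.default_rng(1)
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # synthetic static scene
sigma = 0.2                                           # per-frame noise level

def snr(frames):
    est = frames.mean(axis=0)                         # combined frame
    return scene.std() / (est - scene).std()

frames = scene + rng.normal(0.0, sigma, (32, 64, 64))
print(snr(frames) > snr(frames[:1]))                  # True: 32 frames beat 1
```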
The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.
2017-02-01
Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are also hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current Photo-Detector Array (PDA) technology to counter fundamental limitations of PDA-based imagers while providing sufficiently high imaging spatial resolution and pixel counts. Engineering the CAOS camera platform with, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
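The coded-access idea can be sketched in CDMA style: each pixel's irradiance is modulated in time by an orthogonal ±1 code, a single point detector records the sum, and correlation decodes every pixel. The 16-pixel scene and Walsh-Hadamard codes below are illustrative assumptions; CAOS itself is more general.

```python
import numpy as np

# CDMA-style sketch of coded access: one orthogonal +/-1 code per pixel,
# a single detector integrating the coded sum, correlation for decoding.
def walsh(n):                      # Sylvester-Hadamard construction, n a power of 2
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

npix = 16
irradiance = np.linspace(1.0, 4.0, npix)   # unknown pixel irradiances
H = walsh(npix)                            # one code (row) per pixel
signal = H.T @ irradiance                  # single-detector time signal
decoded = H @ signal / npix                # correlate against each code
print(np.allclose(decoded, irradiance))    # True: H @ H.T = npix * I
```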
2014-10-01
applications of present nano-/bio-technology include advanced health and fitness monitoring, high-resolution imaging, new environmental sensor platforms...other areas where nano-/bio-technology development is needed: • Sensors: Diagnostic and detection kits (gene-chips, protein-chips, lab-on-chips, etc...studies on chemo-bio nano-sensors, ultra-sensitive biochips ("lab-on-a-chip" and "cells-on-chips" devices) have been prepared for routine medical
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a persistent challenge and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
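The IHS fusion step mentioned above can be sketched in its simplest additive form; the paper's full method adds directionally-adaptive regularization and NLM filtering on top of this, and the synthetic bands below are illustrative only.

```python
import numpy as np

# Minimal additive IHS-style fusion: multispectral bands upsampled to the
# high-resolution grid are shifted by the detail (Pan minus mean-band
# intensity), so the fused intensity matches the high-resolution band.
def ihs_fuse(ms_up, pan):
    """ms_up: (H, W, 3) upsampled multispectral; pan: (H, W) high-res band."""
    intensity = ms_up.mean(axis=2)
    return ms_up + (pan - intensity)[..., None]

rng = np.random.default_rng(2)
ms = rng.uniform(0.0, 1.0, (8, 8, 3))
pan = ms.mean(axis=2) + rng.normal(0.0, 0.05, (8, 8))   # pan carries extra detail
fused = ihs_fuse(ms, pan)
print(np.allclose(fused.mean(axis=2), pan))             # True: intensity replaced
```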
A multi-sensor data-driven methodology for all-sky passive microwave inundation retrieval
NASA Astrophysics Data System (ADS)
Takbiri, Zeinab; Ebtehaj, Ardeshir M.; Foufoula-Georgiou, Efi
2017-06-01
We present a multi-sensor Bayesian passive microwave retrieval algorithm for flood inundation mapping at high spatial and temporal resolutions. The algorithm takes advantage of observations from multiple sensors in optical, short-infrared, and microwave bands, thereby allowing for detection and mapping of the sub-pixel fraction of inundated areas under almost all-sky conditions. The method relies on a nearest-neighbor search and a modern sparsity-promoting inversion method that make use of an a priori dataset in the form of two joint dictionaries. These dictionaries contain almost overlapping observations by the Special Sensor Microwave Imager and Sounder (SSMIS) on board the Defense Meteorological Satellite Program (DMSP) F17 satellite and the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Aqua and Terra satellites. Evaluation of the retrieval algorithm over the Mekong Delta shows that it is capable of capturing to a good degree the inundation diurnal variability due to localized convective precipitation. At longer timescales, the results demonstrate consistency with the ground-based water level observations, denoting that the method is properly capturing inundation seasonal patterns in response to regional monsoonal rain. The calculated Euclidean distance, rank-correlation, and also copula quantile analysis demonstrate a good agreement between the outputs of the algorithm and the observed water levels at monthly and daily timescales. The current inundation products are at a resolution of 12.5 km and taken twice per day, but a higher resolution (order of 5 km and every 3 h) can be achieved using the same algorithm with the dictionary populated by the Global Precipitation Mission (GPM) Microwave Imager (GMI) products.
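The joint-dictionary nearest-neighbor idea can be sketched with synthetic data: microwave signatures (dictionary A) are paired one-to-one with optically derived inundation fractions (dictionary B), and a new observation is matched to its k nearest entries in A. The two-channel linear "emission model" below is purely illustrative, not the SSMIS/MODIS relationship.

```python
import numpy as np

# Toy joint-dictionary retrieval: k nearest microwave signatures vote for
# the inundation fraction. Channel gradients and noise level are invented.
rng = np.random.default_rng(3)
n = 500
frac = rng.uniform(0.0, 1.0, n)                        # dictionary B: fractions
dict_mw = np.column_stack([280 - 60 * frac,            # dictionary A: paired
                           260 - 40 * frac])           # brightness temps (K)
dict_mw += rng.normal(0.0, 1.0, (n, 2))                # sensor noise

def retrieve(obs, k=15):
    d = np.linalg.norm(dict_mw - obs, axis=1)
    return frac[np.argsort(d)[:k]].mean()

truth = 0.7
obs = np.array([280 - 60 * truth, 260 - 40 * truth])
print(round(retrieve(obs), 2))                         # close to 0.7
```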
NASA Technical Reports Server (NTRS)
Shenk, W. E.; Adler, R. F.; Chesters, D.; Susskind, J.; Uccellini, L.
1984-01-01
The measurements from current and planned geosynchronous satellites provide quantitative estimates of temperature and moisture profiles, surface temperature, wind, cloud properties, and precipitation. A number of significant observational deficiencies remain, including: (1) temperature and moisture profiles in cloudy areas; (2) high vertical profile resolution; (3) definitive precipitation area mapping and precipitation rate estimates on the convective cloud scale; (4) winds from low-level cloud motions at night; (5) the determination of convective cloud structure; and (6) high-resolution surface temperature determination. Four major new observing capabilities are proposed to overcome these deficiencies: a microwave sounder/imager, a high-resolution visible and infrared imager, a high-spectral-resolution infrared sounder, and a total ozone mapper. It is suggested that the four sensors be flown together and used to support major mesoscale and short-range forecasting field experiments.
Coskun, Ahmet F; Sencan, Ikbal; Su, Ting-Wei; Ozcan, Aydogan
2011-01-06
We demonstrate lensfree on-chip fluorescent imaging of transgenic Caenorhabditis elegans (C. elegans) over an ultra-wide field-of-view (FOV) of e.g., >2-8 cm(2) with a spatial resolution of ∼10 µm. This is the first time that a lensfree on-chip platform has successfully imaged fluorescent C. elegans samples. In our wide-field lensfree imaging platform, the transgenic samples are excited using a prism interface from the side, where the pump light is rejected through total internal reflection occurring at the bottom facet of the substrate. The emitted fluorescent signal from C. elegans samples is then recorded on a large area opto-electronic sensor-array over an FOV of e.g., >2-8 cm(2), without the use of any lenses, thin-film interference filters or mechanical scanners. Because fluorescent emission rapidly diverges, such lensfree fluorescent images recorded on a chip look blurred due to broad point-spread-function of our platform. To combat this resolution challenge, we use a compressive sampling algorithm to uniquely decode the recorded lensfree fluorescent patterns into higher resolution images, demonstrating ∼10 µm resolution. We tested the efficacy of this compressive decoding approach with different types of opto-electronic sensors to achieve a similar resolution level, independent of the imaging chip. We further demonstrate that this wide FOV lensfree fluorescent imaging platform can also perform sequential bright-field imaging of the same samples using partially-coherent lensfree digital in-line holography that is coupled from the top facet of the same prism used in fluorescent excitation. This unique combination permits ultra-wide field dual-mode imaging of C. elegans on a chip which could especially provide a useful tool for high-throughput screening applications in biomedical research.
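The compressive decoding of a broadly blurred pattern into a sparse source distribution can be sketched with a minimal ISTA (iterative soft-thresholding) loop; the 1-D Gaussian PSF, source positions and threshold below are illustrative assumptions, not the paper's algorithm or data.

```python
import numpy as np

# Minimal ISTA sketch of compressive decoding: the blurred measurement is
# modeled as y = A @ x with x a sparse fluorophore distribution and A a
# point-spread-function matrix (here a 1-D Gaussian blur, sigma = 2 pixels).
n = 100
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= np.linalg.norm(A, 2)                 # scale so a unit gradient step is safe

x_true = np.zeros(n)
x_true[[20, 45, 70]] = [1.0, 0.8, 1.2]    # three point sources
y = A @ x_true                            # blurred "lensfree" measurement

x = np.zeros(n)
lam = 1e-3                                # sparsity (soft-threshold) level
for _ in range(500):
    g = x + A.T @ (y - A @ x)             # gradient step on the data term
    x = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)   # soft threshold

print(sorted(np.argsort(x)[-3:]))         # peaks at or adjacent to 20, 45, 70
```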
Zhao, C; Vassiljev, N; Konstantinidis, A C; Speller, R D; Kanicki, J
2017-03-07
High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm -1 ) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
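Two of the detector metrics used in such a cascaded-system analysis can be sketched numerically: the aperture MTF of a square pixel, MTF(f) = |sinc(a·f)|, and the normalized-form DQE, DQE(f) = MTF(f)² / (q · NNPS(f)). The fluence q and the ideal quantum-limited NNPS = 1/q below are illustrative stand-ins for measured data.

```python
import numpy as np

# Aperture MTF of a 50 um pixel and the normalized DQE for an ideal,
# quantum-limited detector (both equal 1 at zero frequency by construction).
a = 0.050                          # 50 um pixel pitch, in mm
f = np.linspace(0.0, 10.0, 101)    # spatial frequency, cycles/mm
mtf = np.abs(np.sinc(a * f))       # numpy's sinc is sin(pi x)/(pi x)

q = 5e4                            # photons per mm^2 (illustrative)
nnps = np.full_like(f, 1.0 / q)    # ideal quantum-limited normalized NPS
dqe = mtf ** 2 / (q * nnps)

nyquist = 1 / (2 * a)              # 10 cycles/mm for a 50 um pitch
print(mtf[0], round(dqe[0], 3))    # 1.0 1.0
```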
Structure-from-motion for MAV image sequence analysis with photogrammetric applications
NASA Astrophysics Data System (ADS)
Schönberger, J. L.; Fraundorfer, F.; Frahm, J.-M.
2014-08-01
MAV systems have found increased attention in the photogrammetric community as an (autonomous) image acquisition platform for accurate 3D reconstruction. For an accurate reconstruction in feasible time, the acquired imagery requires specialized SfM software. Current systems typically use high-resolution sensors in pre-planned flight missions from far distance. We describe and evaluate a new SfM pipeline specifically designed for sequential, close-distance, and low-resolution imagery from mobile cameras with relatively high frame-rate and high overlap. Experiments demonstrate reduced computational complexity by leveraging the temporal consistency, comparable accuracy and point density with respect to state-of-the-art systems.
Single-cell imaging tools for brain energy metabolism: a review
San Martín, Alejandro; Sotelo-Hitschfeld, Tamara; Lerchundi, Rodrigo; Fernández-Moncada, Ignacio; Ceballo, Sebastian; Valdebenito, Rocío; Baeza-Lehnert, Felipe; Alegría, Karin; Contreras-Baeza, Yasna; Garrido-Gerter, Pamela; Romero-Gómez, Ignacio; Barros, L. Felipe
2014-01-01
Abstract. Neurophotonics comes to light at a time in which advances in microscopy and improved calcium reporters are paving the way toward high-resolution functional mapping of the brain. This review relates to a parallel revolution in metabolism. We argue that metabolism needs to be approached both in vitro and in vivo, and that it does not just exist as a low-level platform but is also a relevant player in information processing. In recent years, genetically encoded fluorescent nanosensors have been introduced to measure glucose, glutamate, ATP, NADH, lactate, and pyruvate in mammalian cells. Reporting relative metabolite levels, absolute concentrations, and metabolic fluxes, these sensors are instrumental for the discovery of new molecular mechanisms. Sensors continue to be developed, which together with a continued improvement in protein expression strategies and new imaging technologies, herald an exciting era of high-resolution characterization of metabolism in the brain and other organs. PMID:26157964
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
Event-based Sensing for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Cohen, G.; Afshar, S.; van Schaik, A.; Wabnitz, A.; Bessell, T.; Rutten, M.; Morreale, B.
A revolutionary type of imaging device, known as a silicon retina or event-based sensor, has recently been developed and is gaining in popularity in the field of artificial vision systems. These devices are inspired by a biological retina and operate in a significantly different way to traditional CCD-based imaging sensors. While a CCD produces frames of pixel intensities, an event-based sensor produces a continuous stream of events, each of which is generated when a pixel detects a change in log light intensity. These pixels operate asynchronously and independently, producing an event-based output with high temporal resolution. There are also no fixed exposure times, allowing these devices to offer a very high dynamic range independently for each pixel. Additionally, these devices offer high-speed, low-power operation and a sparse spatiotemporal output. As a consequence, the data from these sensors must be interpreted in a significantly different way to traditional imaging sensors, and this paper explores the advantages this technology provides for space imaging. The applicability and capabilities of event-based sensors for SSA applications are demonstrated through telescope field trials. Trial results have confirmed that the devices are capable of observing resident space objects from LEO through to GEO orbital regimes. Significantly, observations of RSOs were made during both day-time and night-time (terminator) conditions without modification to the camera or optics. The event-based sensor's ability to image stars and satellites during day-time hours offers a dramatic capability increase for terrestrial optical sensors. This paper shows the field testing and validation of two different architectures of event-based imaging sensors. An event-based sensor's asynchronous output has an intrinsically low data-rate.
In addition to low-bandwidth communications requirements, their low weight, low power and high speed make them ideally suited to meeting the demanding challenges of space-based SSA systems. Results from these experiments and the systems developed highlight the applicability of event-based sensors to ground- and space-based SSA tasks.
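The event-stream output described above can be sketched with a toy accumulator: events are (timestamp, x, y, polarity) tuples, and a common first processing step is to integrate a time window of events into a signed frame. The random stream below stands in for real sensor output.

```python
import numpy as np

# Toy event-stream handling: integrate (t, x, y, polarity) events falling
# in a time window into a signed frame for visualization or processing.
rng = np.random.default_rng(5)
w = h = 32
n_events = 1000
events = np.column_stack([
    np.sort(rng.uniform(0.0, 1.0, n_events)),   # timestamps (s)
    rng.integers(0, w, n_events),               # x address
    rng.integers(0, h, n_events),               # y address
    rng.choice([-1, 1], n_events),              # polarity (brighter/darker)
])

def accumulate(events, t0, t1):
    """Sum event polarities falling in [t0, t1) into a frame."""
    frame = np.zeros((h, w))
    for t, x, y, p in events:
        if t0 <= t < t1:
            frame[int(y), int(x)] += p
    return frame

frame = accumulate(events, 0.0, 0.5)
print(frame.shape)   # (32, 32)
```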
Imaging performance of a Timepix detector based on semi-insulating GaAs
NASA Astrophysics Data System (ADS)
Zaťko, B.; Zápražný, Z.; Jakůbek, J.; Šagátová, A.; Boháček, P.; Sekáčová, M.; Korytár, D.; Nečas, V.; Žemlička, J.; Mora, Y.; Pichotka, M.
2018-01-01
This work focused on a Timepix chip [1] coupled with a bulk semi-insulating GaAs sensor. The sensor consisted of a matrix of 256 × 256 pixels with a pitch of 55 μm bump-bonded to a Timepix ASIC. The sensor was processed on a 350 μm-thick SI GaAs wafer. We carried out detector adjustment to optimize its performance. This included threshold equalization with setting up parameters of the Timepix chip, such as Ikrum, Pream, Vfbk, and so on. The energy calibration of the GaAs Timepix detector was realized using a 241Am radioisotope in two Timepix detector modes: time-over-threshold and threshold scan. An energy resolution of 4.4 keV in FWHM (Full Width at Half Maximum) was observed for 59.5 keV γ-photons using threshold scan mode. The X-ray imaging quality of the GaAs Timepix detector was tested using various samples irradiated by an X-ray source with a focal spot size smaller than 8 μm and accelerating voltage up to 80 kV. A 700 μm × 700 μm gold testing object (X-500-200-16Au with Siemens star) fabricated with high precision was used for the spatial resolution testing at different values of X-ray image magnification (up to 45). The measured spatial resolution of our X-ray imaging system was about 4 μm.
Sensors and OBIA synergy for operational monitoring of surface water
NASA Astrophysics Data System (ADS)
Masson, Eric; Thenard, Lucas
2010-05-01
This contribution will focus on combining Object-Based Image Analysis (i.e. OBIA with e-Cognition 8) and recent sensors (i.e. Spot 5 XS, Pan and ALOS Prism, Avnir2, Palsar) to address the technical feasibility of operational monitoring of surface water. Three cases of river meandering (India), flood mapping (Nepal) and seasonal monitoring of dam water levels (Morocco) using recent sensors will present various applications of surface water monitoring. The operational aspect will be demonstrated through sensor properties (i.e. spatial resolution and bandwidth), data acquisition properties (i.e. multi-sensor, return period and near-real-time acquisition) and OBIA algorithms (i.e. fusion of multi-sensor/multi-resolution data and batch processes). In the first case of river meandering (India) we will address multi-sensor and multi-date satellite acquisition to monitor river bed mobility within a floodplain using an ALOS dataset. It will demonstrate the possibility of an operational monitoring system that helps the geomorphologist in the analysis of fluvial dynamics and sediment budgets for high-energy rivers. In the second case of flood mapping (Nepal) we will address near-real-time Palsar data acquisition at high spatial resolution to monitor and map a flood extension. This ALOS sensor benefits from both SAR and L-band properties (i.e. atmospheric transparency, day/night acquisition, low sensitivity to surface wind). It is a real achievement compared to optical imagery or even other high-resolution SAR systems (i.e. acquisition swath, bandwidth and data price). These advantages meet the operational needs set by crisis management of hydrological disasters as well as the implementation of flood risk management plans. The last case of dam surface water monitoring (Morocco) will address an important issue of water resource management in countries affected by water scarcity.
In such countries water users have to cope with over-exploitation, frequent drought periods and now with foreseen climate change impacts. This third case will demonstrate the efficiency of SPOT 5 programming in synergy with the OBIA methodology to assess the evolution of dam surface water over a complete water cycle (i.e. 2008-09). In all three cases, the image segmentation and classification algorithms developed with the e-Cognition 8 software allow an easy-to-use implementation of simple to highly sophisticated OBIA rulesets, fully operational in batch processes. Finally, this contribution foresees the new opportunity of integrating Worldview-2 multispectral imagery (i.e. 8 bands), including its "coastal" band, which will also find application in continental surface water bathymetry. Worldview-2 is a recently launched satellite (October 2009) that began collecting earth observation data in January 2010. It is therefore a promising new remote sensing tool for developing operational hydrology in combination with high-resolution SAR imagery and the OBIA methodology. This contribution will conclude on the strong potential for operationalisation in hydrology and water resources management that recent and future sensors and image analysis methodologies offer to water management and decision makers.
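The water surfaces tracked in such workflows are commonly delineated with a spectral index before (or inside) OBIA rulesets; the sketch below uses NDWI = (Green − NIR)/(Green + NIR), positive over water and negative over land. The reflectance values are synthetic, and this is not the paper's e-Cognition ruleset.

```python
import numpy as np

# Illustrative NDWI water masking on synthetic green/NIR reflectances:
# water is dark in the NIR, so NDWI > 0 flags water pixels.
rng = np.random.default_rng(7)
truth = np.zeros((20, 20), dtype=bool)
truth[5:15, 8:16] = True                                   # synthetic reservoir
green = np.where(truth, 0.06, 0.08) + rng.normal(0.0, 0.005, truth.shape)
nir = np.where(truth, 0.02, 0.30) + rng.normal(0.0, 0.005, truth.shape)
ndwi = (green - nir) / (green + nir)
water = ndwi > 0.0
print((water == truth).mean())                             # fraction agreeing
```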
Radiometric Characterization of the IKONOS, QuickBird, and OrbView-3 Sensors
NASA Technical Reports Server (NTRS)
Holekamp, Kara
2006-01-01
Radiometric calibration of commercial imaging satellite products is required to ensure that science and application communities can better understand their properties. Inaccurate radiometric calibrations can lead to erroneous decisions and invalid conclusions and can limit intercomparisons with other systems. To address this calibration need, satellite at-sensor radiance values were compared to those estimated by each independent team member to determine the sensor's radiometric accuracy. The combined results of this evaluation provide the user community with an independent assessment of these commercially available high spatial resolution sensors' absolute calibration values.
Design Method For Ultra-High Resolution Linear CCD Imagers
NASA Astrophysics Data System (ADS)
Sheu, Larry S.; Truong, Thanh; Yuzuki, Larry; Elhatem, Abdul; Kadekodi, Narayan
1984-11-01
This paper presents the design method to achieve ultra-high resolution linear imagers. This method utilizes advanced design rules and novel staggered bilinear photo sensor arrays with quadrilinear shift registers. Design constraints in the detector arrays and shift registers are analyzed. Imager architecture to achieve ultra-high resolution is presented. The characteristics of MTF, aliasing, speed, transfer efficiency and fine photolithography requirements associated with this architecture are also discussed. A CCD imager with advanced 1.5 µm minimum feature size was fabricated. It is intended as a test vehicle for the next generation small sampling pitch ultra-high resolution CCD imager. Standard double-poly, two-phase shift registers were fabricated at an 8 µm pitch using the advanced design rules. A special process step that blocked the source-drain implant from the shift register area was invented. This guaranteed excellent performance of the shift registers regardless of the small poly overlaps. A charge transfer efficiency of better than 0.99995 and maximum transfer speed of 8 MHz were achieved. The imager showed excellent performance. The dark current was less than 0.2 mV/ms, saturation 250 mV, adjacent photoresponse non-uniformity ±4% and responsivity 0.7 V/(µJ/cm²) for the 8 µm × 6 µm photosensor size. The MTF was 0.6 at 62.5 cycles/mm. These results confirm the feasibility of the next generation ultra-high resolution CCD imagers.
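The quoted figures are mutually consistent, which a short calculation makes explicit: an 8 µm sampling pitch implies a Nyquist frequency of 62.5 cycles/mm, and the aperture MTF of an (assumed) 8 µm-wide photosite alone is sinc(0.5) ≈ 0.64 there, close to the measured 0.6 once other blur sources are folded in.

```python
import math

# Nyquist frequency and pixel-aperture MTF at Nyquist for an 8 um pitch.
pitch_mm = 0.008
nyquist = 1 / (2 * pitch_mm)                          # cycles/mm
aperture_mtf = math.sin(math.pi * 0.5) / (math.pi * 0.5)   # sinc at f = Nyquist
print(nyquist, round(aperture_mtf, 2))                # 62.5 0.64
```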
NASA Astrophysics Data System (ADS)
Vanhellemont, Q.
2016-02-01
Since the launch of Landsat-8 (L8) in 2013, a joint NASA/USGS programme, new applications of high resolution imagery for coastal and inland waters have become apparent. The optical imaging instrument on L8, the Operational Land Imager (OLI), is much improved compared to its predecessors on L5 and L7, especially with regards to SNR and digitization, and is therefore well suited for retrieving water reflectances and derived parameters such as turbidity and suspended sediment concentration. In June 2015, the European Space Agency (ESA) successfully launched a similar instrument, the MultiSpectral Imager (MSI), on board of Sentinel-2A (S2A). Imagery from both L8 and S2A is free of charge and publicly available (S2A starting at the end of 2015). Atmospheric correction schemes and processing software are under development in the EC-FP7 HIGHROC project. The spatial resolution of these instruments (10-60 m) is a great improvement over typical moderate resolution ocean colour sensors such as MODIS and MERIS (0.25 - 1 km). At higher resolution, many more lakes, rivers, ports and estuaries are spatially resolved, and can thus now be studied using satellite data, unlocking potential for mandatory monitoring e.g. under European Directives such as the Marine Strategy Framework Directive and the Water Framework Directive. We present new applications of these high resolution data, such as monitoring of offshore constructions, wind farms, sediment transport, dredging and dumping, shipping and fishing activities. The spatial variability at sub moderate resolution (0.25 - 1 km) scales can be assessed, as well as the impact of sub grid scale variability (including ships and platforms used for validation) on the moderate pixel retrieval.
While the near-daily revisit of the moderate resolution sensors is vastly superior to that of the high resolution satellites (16 and 10 days at the equator for L8 and S2A, respectively), the long revisit intervals can be partially mitigated by combining data streams. Time-series of L8 and S2A imagery are presented to show the power of combining the two satellite missions. With the launch of Sentinel-2B (expected mid-2016), the time-series will be extended with another high resolution sensor. S2B will be on the same orbit as S2A, spaced 180 degrees apart, bringing the S2A+B combined revisit time down to 5 days.
Advances in detection of diffuse seafloor venting using structured light imaging.
NASA Astrophysics Data System (ADS)
Smart, C.; Roman, C.; Carey, S.
2016-12-01
Systematic, remote detection and high resolution mapping of low temperature diffuse hydrothermal venting is inefficient and not currently tractable using traditional remotely operated vehicle (ROV) mounted sensors. Preliminary results for hydrothermal vent detection using a structured light laser sensor were presented in 2011 and published in 2013 (Smart), with continual advancements occurring in the interim. As the structured light laser passes over active venting, the projected laser line effectively blurs due to the associated turbulence and density anomalies in the vent fluid. The degree of laser disturbance is captured by a camera collecting images of the laser line at 20 Hz. Advancements in the detection of the laser and fluid interaction have included extensive normalization of the collected laser data and the implementation of a support vector machine algorithm to develop a classification routine. The image data collected over a hydrothermal vent field are then labeled as seafloor, bacteria, or a location of venting. The results can then be correlated with stereo images, bathymetry and backscatter data. This sensor is a component of an ROV mounted imaging suite which also includes stereo cameras and a multibeam sonar system. Originally developed for bathymetric mapping, the structured light laser sensor and other imaging suite components are capable of creating visual and bathymetric maps with centimeter level resolution. Surveys are completed in a standard mowing-the-lawn pattern, finishing a 30 m × 30 m survey at centimeter level resolution in under an hour. Resulting co-registered data include multibeam and structured light laser bathymetry and backscatter, stereo images, and vent detections. This system allows for efficient exploration of areas with diffuse and small point source hydrothermal venting, increasing the effectiveness of scientific sampling and observation.
Recent vent detection results collected during the 2013-2015 E/V Nautilus seasons will be presented. Reference: Smart, C. J., Roman, C., and Carey, S. N. (2013), Detection of diffuse seafloor venting using structured light imaging, Geochemistry, Geophysics, Geosystems, 14, 4743-4757.
Bakó, Gábor; Tolnai, Márton; Takács, Ádám
2014-01-01
Remote sensing is a method that collects data of the Earth's surface without causing disturbances. Thus, it is worthwhile to use remote sensing methods to survey endangered ecosystems, as the studied species will behave naturally while undisturbed. The latest passive optical remote sensing solutions permit surveys from long distances. State-of-the-art highly sensitive sensor systems allow high spatial resolution image acquisition at high altitudes and at high flying speeds, even in low-visibility conditions. As the aerial imagery captured by an airplane covers the entire study area, all the animals present in that area can be recorded. A population assessment is conducted by visual interpretations of an ortho image map. The basic objective of this study is to determine whether small- and medium-sized bird species are recognizable in the ortho images by using high spatial resolution aerial cameras. The spatial resolution needed for identifying the bird species in the ortho image map was studied. The survey was adjusted to determine the number of birds in a colony at a given time. PMID:25046012
NASA Astrophysics Data System (ADS)
de Vieilleville, F.; Ristorcelli, T.; Delvit, J.-M.
2016-06-01
This paper presents a method for dense DSM reconstruction from a high-resolution, mono-sensor, passive spaceborne panchromatic image sequence. The interest of our approach is four-fold. Firstly, we extend the core of light field approaches using an explicit BRDF model from the image synthesis community which is more realistic than the Lambertian model. The chosen model is the Cook-Torrance BRDF, which enables us to model rough surfaces with specular effects using specific material parameters. Secondly, we extend light field approaches to non-pinhole sensors and non-rectilinear motion by using a proper geometric transformation on the image sequence. Thirdly, we produce a 3D cost volume embodying all the tested possible heights and filter it using simple methods such as volume cost filtering or variational optimization methods. We have tested our method on a Pleiades image sequence over various locations with dense urban buildings and report encouraging results with respect to classic multi-label methods such as MicMac, or more recent pipelines such as S2P. Last but not least, our method also produces maps of material parameters on the estimated points, allowing us to simplify building classification or road extraction.
Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air
2008-03-13
With their significant features, the applications of complementary metal-oxide-semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper, CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field.
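As an illustrative aside (our own sketch, not code from the paper), the simple triangulation relation such an active range finder relies on can be written in a few lines; the baseline, focal length, and spot-offset parameter names are hypothetical:

```python
def triangulation_distance(baseline_m, focal_px, offset_px):
    """Active triangulation: a laser mounted a fixed baseline away from the
    camera projects a spot whose image displacement is inversely
    proportional to the target distance (hypothetical parameter names)."""
    if offset_px <= 0:
        raise ValueError("spot offset must be positive")
    return baseline_m * focal_px / offset_px

# e.g. 0.1 m baseline, 2400 px focal length, 120 px spot offset -> 2 m
distance_m = triangulation_distance(0.1, 2400, 120)
```

The inverse relation between offset and distance is why resolution degrades with range, consistent with the larger passive finder being quoted over a longer measurement range.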
Loughran, Brendan; Swetadri Vasan, S N; Singh, Vivek; Ionita, Ciprian N; Jain, Amit; Bednarek, Daniel R; Titus, Albert; Rudin, Stephen
2013-03-06
The detectors that are used for endovascular image-guided interventions (EIGI), particularly for neurovascular interventions, do not provide clinicians with adequate visualization to ensure the best possible treatment outcomes. Developing an improved x-ray imaging detector requires the determination of estimated clinical x-ray entrance exposures to the detector. The range of exposures to the detector in clinical studies was found for the three modes of operation: fluoroscopic mode, high frame-rate digital angiographic mode (HD fluoroscopic mode), and DSA mode. Using these estimated detector exposure ranges and available CMOS detector technical specifications, design requirements were developed to pursue a quantum limited, high resolution, dynamic x-ray detector based on a CMOS sensor with 50 μm pixel size. For the proposed MAF-CMOS, the estimated charge collected within the full exposure range was found to be within the estimated full well capacity of the pixels. Expected instrumentation noise for the proposed detector was estimated to be 50-1,300 electrons. Adding a gain stage such as a light image intensifier would minimize the effect of the estimated instrumentation noise on total image noise but may not be necessary to ensure quantum limited detector operation at low exposure levels. A recursive temporal filter may decrease the effective total noise by 2 to 3 times, allowing for the improved signal to noise ratios at the lowest estimated exposures despite consequent loss in temporal resolution. This work can serve as a guide for further development of dynamic x-ray imaging prototypes or improvements for existing dynamic x-ray imaging systems.
Oh, Sungjin; Ahn, Jae-Hyun; Lee, Sangmin; Ko, Hyoungho; Seo, Jong Mo; Goo, Yong-Sook; Cho, Dong-il Dan
2015-01-01
Retinal prosthetic devices stimulate retinal nerve cells with electrical signals proportional to the incident light intensities. For a high-resolution retinal prosthesis, it is necessary to reduce the size of the stimulator pixels as much as possible, because the retinal nerve cells are concentrated in a small area of approximately 5 mm × 5 mm. In this paper, a miniaturized biphasic current stimulator integrated circuit is developed for subretinal stimulation and tested in vitro. The stimulator pixel is miniaturized by using a complementary metal-oxide-semiconductor (CMOS) image sensor composed of three transistors. Compared to a pixel that uses a four-transistor CMOS image sensor, this new design reduces the pixel size by 8.3%. The pixel size is further reduced by simplifying the stimulation-current generating circuit, which provides a 43.9% size reduction when compared to the design reported to be the most advanced version to date for subretinal stimulation. The proposed design is fabricated using a 0.35 μm bipolar-CMOS-DMOS process. Each pixel is designed to fit in a 50 μm × 55 μm area, which theoretically allows implementing more than 5000 pixels in the 5 mm × 5 mm area. Experimental results show that a biphasic current in the range of 0 to 300 μA at 12 V can be generated as a function of incident light intensities. Results from in vitro experiments with rd1 mice indicate that the proposed method can be effectively used for retinal prosthesis with a high resolution.
Arrays of Nano Tunnel Junctions as Infrared Image Sensors
NASA Technical Reports Server (NTRS)
Son, Kyung-Ah; Moon, Jeong S.; Prokopuk, Nicholas
2006-01-01
Infrared image sensors based on high density rectangular planar arrays of nano tunnel junctions have been proposed. These sensors would differ fundamentally from prior infrared sensors based, variously, on bolometry or conventional semiconductor photodetection. Infrared image sensors based on conventional semiconductor photodetection must typically be cooled to cryogenic temperatures to reduce noise to acceptably low levels. Some bolometer-type infrared sensors can be operated at room temperature, but they exhibit low detectivities and long response times, which limit their utility. The proposed infrared image sensors could be operated at room temperature without incurring excessive noise, and would exhibit high detectivities and short response times. Other advantages would include low power demand, high resolution, and tailorability of spectral response. Unlike both bolometers and conventional semiconductor photodetectors, the basic detector units as proposed would partly resemble rectennas. Nanometer-scale tunnel junctions would be created by crossing of nanowires with quantum-mechanical-barrier layers in the form of thin layers of electrically insulating material between them (see figure). A microscopic dipole antenna sized and shaped to respond maximally in the infrared wavelength range that one seeks to detect would be formed integrally with the nanowires at each junction. An incident signal in that wavelength range would become coupled into the antenna and, through the antenna, to the junction. At the junction, the flow of electrons between the crossing wires would be dominated by quantum-mechanical tunneling rather than thermionic emission. Relative to thermionic emission, quantum-mechanical tunneling is a fast process.
Optimized computational imaging methods for small-target sensing in lens-free holographic microscopy
NASA Astrophysics Data System (ADS)
Xiong, Zhen; Engle, Isaiah; Garan, Jacob; Melzer, Jeffrey E.; McLeod, Euan
2018-02-01
Lens-free holographic microscopy is a promising diagnostic approach because it is cost-effective, compact, and suitable for point-of-care applications, while providing high resolution together with an ultra-large field-of-view. It has been applied to biomedical sensing, where larger targets like eukaryotic cells, bacteria, or viruses can be directly imaged without labels, and smaller targets like proteins or DNA strands can be detected via scattering labels like micro- or nano-spheres. Automated image processing routines can count objects and infer target concentrations. In these sensing applications, sensitivity and specificity are critically affected by image resolution and signal-to-noise ratio (SNR). Pixel super-resolution approaches have been shown to boost resolution and SNR by synthesizing a high-resolution image from multiple, partially redundant, low-resolution images. However, there are several computational methods that can be used to synthesize the high-resolution image, and previously, it has been unclear which methods work best for the particular case of small-particle sensing. Here, we quantify the SNR achieved in small-particle sensing using a regularized gradient-descent optimization method, where the regularization is based on cardinal-neighbor differences, Bayer-pattern noise reduction, or sparsity in the image. In particular, we find that gradient-descent with sparsity-based regularization works best for small-particle sensing. These computational approaches were evaluated on images acquired using a lens-free microscope that we assembled from an off-the-shelf LED array and color image sensor. Compared to other lens-free imaging systems, our hardware integration, calibration, and sample preparation are particularly simple. We believe our results will help to enable the best performance in lens-free holographic sensing.
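A minimal sketch of the sparsity-regularized gradient-descent idea can be written as ISTA (iterative shrinkage-thresholding) on a toy block-averaging forward model. This is our own simplification, not the authors' pipeline, which works from multiple shifted low-resolution frames; all function names are ours:

```python
import numpy as np

def downsample(x, f):
    """Toy forward model A: average over f x f pixel blocks."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(y, f):
    """Adjoint A^T of block averaging."""
    return np.repeat(np.repeat(y, f, axis=0), f, axis=1) / f**2

def soft(x, t):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_sr(y, f, step=1.0, lam=1e-3, iters=200):
    """ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    x = upsample(y, f) * f**2          # nearest-neighbour initialization
    for _ in range(iters):
        grad = upsample(downsample(x, f) - y, f)   # A^T (A x - y)
        x = soft(x - step * grad, step * lam)      # gradient + shrinkage
    return x
```

Swapping the L1 term for a cardinal-neighbor-difference penalty changes only the proximal/gradient step, which is why the three regularizers the abstract compares fit in one optimization framework.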
NASA Astrophysics Data System (ADS)
Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi
2001-04-01
We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) × 1200 (V) pixels and a wide imaging area of 28.1 × 13.8 mm, while the type II chip has 1776 × 1626 pixels and an active imaging area of 20.4 × 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 × 200 pixel images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.
Performance Evaluation of 98 CZT Sensors for Their Use in Gamma-Ray Imaging
NASA Astrophysics Data System (ADS)
Dedek, Nicolas; Speller, Robert D.; Spendley, Paul; Horrocks, Julie A.
2008-10-01
98 SPEAR sensors from eV Products have been evaluated for their use in a portable Compton camera. The sensors have a 5 mm × 5 mm × 5 mm CdZnTe crystal and are provided together with a preamplifier. The energy resolution was studied in detail for all sensors and was found to be 6% on average at 59.5 keV and 3% on average at 662 keV. The standard deviations of the corresponding energy resolution distributions are remarkably small (0.6% at 59.5 keV, 0.7% at 662 keV) and reflect the uniformity of the sensor characteristics. For possible outdoor use, the temperature dependence of the sensor performance was investigated for temperatures between 15 and 45 °C. A linear shift in calibration with temperature was observed. The energy resolution at low energies (81 keV) was found to deteriorate exponentially with temperature, while it stayed constant at higher energies (356 keV). A Compton camera built of these sensors was simulated. To obtain realistic energy spectra a suitable detector response function was implemented. To investigate the angular resolution of the camera a 137Cs point source was simulated. Reconstructed images of the point source were compared for perfect and realistic energy and position resolutions. The angular resolution of the camera was found to be better than 10°.
Compact SPAD-Based Pixel Architectures for Time-Resolved Image Sensors
Perenzoni, Matteo; Pancheri, Lucio; Stoppa, David
2016-01-01
This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging. The focus of the paper is on pixel architectures featuring small pixel size (<25 μm) and high fill factor (>20%) as a key enabling technology for the successful implementation of high spatial resolution SPAD-based image sensors. A summary of the main CMOS SPAD implementations, their characteristics and integration challenges, is provided from the perspective of targeting large pixel arrays, where one of the key drivers is the spatial uniformity. The main analog techniques aimed at time-gated photon counting and photon timestamping suitable for compact and low-power pixels are critically discussed. The main features of these solutions are the adoption of analog counting techniques and time-to-analog conversion, in NMOS-only pixels. Reliable quantum-limited single-photon counting, self-referenced analog-to-digital conversion, time gating down to 0.75 ns and timestamping with 368 ps jitter are achieved. PMID:27223284
NASA's Earth Science Use of Commercially Available Remote Sensing Datasets: Cover Image
NASA Technical Reports Server (NTRS)
Underwood, Lauren W.; Goward, Samuel N.; Fearon, Matthew G.; Fletcher, Rose; Garvin, Jim; Hurtt, George
2008-01-01
The cover image incorporates high resolution stereo pairs acquired from the DigitalGlobe(R) QuickBird sensor. It shows a digital elevation model of Meteor Crater, Arizona at approximately 1.3 meter point-spacing. Image analysts used the Leica Photogrammetry Suite to produce the DEM. The outside portion was computed from two QuickBird panchromatic scenes acquired October 2006, while an Optech laser scan dataset was used for the crater's interior elevations. The crater's terrain model and image drape were created in a NASA Constellation Program project focused on simulating lunar surface environments for prototyping and testing lunar surface mission analysis and planning tools. This work exemplifies NASA's Scientific Data Purchase legacy and commercial high resolution imagery applications, as scientists use commercial high resolution data to examine lunar analog Earth landscapes for advanced planning and trade studies for future lunar surface activities. Other applications include landscape dynamics related to volcanism, hydrologic events, climate change, and ice movement.
NASA Astrophysics Data System (ADS)
Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.
2014-10-01
Canopy shadowing mediated by topography is an important source of radiometric distortion on remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model significantly improves on correction based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent for fine resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model: the sun-crown-sensor (SCnS) model based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels and form another variety of SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes.
We analyzed these improvements with model comparison on the red and near infrared bands. The advantages of SCnS + C and SCnS + W on both bands are expected to facilitate forest classification and change detection applications.
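For orientation, the classic C-correction that the SCnS + C variant builds on can be sketched per band as follows. This is a simplified, hypothetical implementation of the standard C-correction (regress radiance on the illumination factor, then rescale); it does not reproduce the paper's sun-crown-sensor geometry, radiance partitioning, or weighting factor:

```python
import numpy as np

def c_correction(radiance, cos_i, sun_zenith_deg):
    """Classic C-correction for one band.

    radiance, cos_i : 1-D arrays over the pixels of the band, where cos_i
                      is the terrain-regulated illumination factor.
    Fits L = m*cos_i + b, sets c = b/m, then rescales each pixel so that
    the corrected radiance is what a flat, fully illuminated pixel would
    have seen.
    """
    m, b = np.polyfit(cos_i, radiance, 1)   # band-wise linear regression
    c = b / m
    cos_sz = np.cos(np.radians(sun_zenith_deg))
    return radiance * (cos_sz + c) / (cos_i + c)
```

If the radiance really is linear in the illumination factor, the corrected image is constant across slopes, which is the property the within-class standard deviation comparison above is probing.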
Computer synthesis of high resolution electron micrographs
NASA Technical Reports Server (NTRS)
Nathan, R.
1976-01-01
Specimen damage, spherical aberration, low contrast and noisy sensors combine to prevent direct atomic viewing in a conventional electron microscope. The paper describes two methods for obtaining ultra-high resolution in biological specimens under the electron microscope. The first method assumes the physical limits of the electron objective lens and uses a series of dark field images of biological crystals to obtain direct information on the phases of the Fourier diffraction maxima; this information is used in an appropriate computer to synthesize a large aperture lens for a 1-Å resolution. The second method assumes there is sufficient amplitude scatter from images recorded in focus which can be utilized with a sensitive densitometer and computer contrast stretching to yield fine structure image details. Cancer virus characterization is discussed as an illustrative example. Numerous photographs supplement the text.
NASA Technical Reports Server (NTRS)
Palacios, Sherry L.; Schafer, Chris; Broughton, Jennifer; Guild, Liane S.; Kudela, Raphael M.
2013-01-01
There is a need in the Biological Oceanography community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand energy flow through ecosystems, to track the fate of carbon in the ocean, and to detect and monitor for harmful algal blooms (HABs). The ocean color community has responded to this demand with the development of phytoplankton functional type (PFT) discrimination algorithms. These PFT algorithms fall into one of three categories depending on the science application: size-based, biogeochemical function, and taxonomy. The new PFT algorithm Phytoplankton Detection with Optics (PHYDOTax) is an inversion algorithm that discriminates taxon-specific biomass to differentiate among six taxa found in the California Current System: diatoms, dinoflagellates, haptophytes, chlorophytes, cryptophytes, and cyanophytes. PHYDOTax was developed and validated in Monterey Bay, CA for the high resolution imaging spectrometer, Spectroscopic Aerial Mapping System with On-board Navigation (SAMSON - 3.5 nm resolution). PHYDOTax exploits the high spectral resolution of an imaging spectrometer and the improved spatial resolution that airborne data provides for coastal areas. The objective of this study was to apply PHYDOTax to a relatively lower resolution imaging spectrometer to test the algorithm's sensitivity to atmospheric correction, to evaluate capability with other sensors, and to determine if down-sampling spectral resolution would degrade its ability to discriminate among phytoplankton taxa. This study is a part of the larger Hyperspectral Infrared Imager (HyspIRI) airborne simulation campaign which is collecting Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery aboard NASA's ER-2 aircraft during three seasons in each of two years over terrestrial and marine targets in California. Our aquatic component seeks to develop and test algorithms to retrieve water quality properties (e.g.
HABs and river plumes) in both marine and in-land water bodies. Results presented are from the 10 April 2013 overflight of the Monterey Bay region and focus primarily on the first objective - sensitivity to atmospheric correction. On-going and future work will continue to evaluate if PHYDOTax can be applied to historical (SeaWiFS and MERIS), existing (MODIS, VIIRS, and HICO), and future (PACE, GEO-CAPE, and HyspIRI) satellite sensors. Demonstration of cross-platform continuity may aid in calibration and validation efforts of these sensors.
Automated Verification of Spatial Resolution in Remotely Sensed Imagery
NASA Technical Reports Server (NTRS)
Davis, Bruce; Ryan, Robert; Holekamp, Kara; Vaughn, Ronald
2011-01-01
Image spatial resolution characteristics can vary widely among sources. In the case of aerial-based imaging systems, the image spatial resolution characteristics can even vary between acquisitions. In these systems, aircraft altitude, speed, and sensor look angle all affect image spatial resolution. Image spatial resolution needs to be verified with estimators that include the ground sample distance (GSD), the modulation transfer function (MTF), and the relative edge response (RER), all of which are key components of image quality, along with signal-to-noise ratio (SNR) and dynamic range. Knowledge of spatial resolution parameters is important to determine if features of interest are distinguishable in imagery or associated products, and to develop image restoration algorithms. An automated Spatial Resolution Verification Tool (SRVT) was developed to rapidly determine the spatial resolution characteristics of remotely sensed aerial and satellite imagery. Most current methods for assessing spatial resolution characteristics of imagery rely on pre-deployed engineered targets and are performed only at selected times within preselected scenes. The SRVT addresses these insufficiencies by finding uniform, high-contrast edges from urban scenes and then using these edges to determine standard estimators of spatial resolution, such as the MTF and the RER. The SRVT was developed using the MATLAB programming language and environment. This automated software algorithm assesses every image in an acquired data set, using edges found within each image, and in many cases eliminating the need for dedicated edge targets. The SRVT automatically identifies high-contrast, uniform edges and calculates the MTF and RER of each image, and when possible, within sections of an image, so that the variation of spatial resolution characteristics across the image can be analyzed. 
The automated algorithm is capable of quickly verifying the spatial resolution quality of all images within a data set, enabling the appropriate use of those images in a number of applications.
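The edge-based estimators the SRVT computes can be sketched for the 1-D case under simplifying assumptions (an already-extracted edge-spread function on integer samples; function names are ours, not the tool's):

```python
import numpy as np

def mtf_from_edge(esf):
    """MTF magnitude estimated from a 1-D edge-spread function (ESF):
    differentiate to get the line-spread function, then take the
    Fourier transform magnitude, normalized so MTF(0) = 1."""
    lsf = np.diff(esf)            # ESF derivative -> line-spread function
    lsf = lsf / lsf.sum()         # unit area so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

def relative_edge_response(esf, lo, hi):
    """RER approximated as the normalized ESF rise between the samples
    just below (lo) and just above (hi) the edge location; a production
    tool would fit a sub-pixel edge model instead."""
    e = (esf - esf.min()) / (esf.max() - esf.min())
    return e[hi] - e[lo]
```

An ideal step edge yields a delta-function LSF and hence a flat MTF of 1 at all frequencies; real edges roll off, and the roll-off rate is what distinguishes sharp from blurred acquisitions.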
Visible and infrared imaging radiometers for ocean observations
NASA Technical Reports Server (NTRS)
Barnes, W. L.
1977-01-01
The current status of visible and infrared sensors designed for the remote monitoring of the oceans is reviewed. Emphasis is placed on multichannel scanning radiometers that are either operational or under development. Present design practices and parameter constraints are discussed. Airborne sensor systems examined include the ocean color scanner and the ocean temperature scanner. The coastal zone color scanner and advanced very high resolution radiometer are reviewed with emphasis on design specifications. Recent technological advances and their impact on sensor design are examined.
Informative Feature Selection for Object Recognition via Sparse PCA
2011-04-07
constraint on images collected from low-power camera networks instead of high-end photography is that establishing wide-baseline feature correspondence of...variable selection tool for selecting informative features in the object images captured from low-resolution camera sensor networks. Firstly, we...More examples can be found in Figure 4 later. 3. Identifying Informative Features Classical PCA is a well established tool for the analysis of high
Optical path difference microscopy with a Shack-Hartmann wavefront sensor.
Gong, Hai; Agbana, Temitope E; Pozzi, Paolo; Soloviev, Oleg; Verhaegen, Michel; Vdovin, Gleb
2017-06-01
In this Letter, we show that a Shack-Hartmann wavefront sensor can be used for the quantitative measurement of the specimen optical path difference (OPD) in an ordinary incoherent optical microscope, if the spatial coherence of the illumination light in the plane of the specimen is larger than the microscope resolution. To satisfy this condition, the illumination numerical aperture should be smaller than the numerical aperture of the imaging lens. This principle has been successfully applied to build a high-resolution reference-free instrument for the characterization of the OPD of micro-optical components and microscopic biological samples.
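A minimal 1-D sketch of how an OPD profile could be recovered from Shack-Hartmann slope measurements (zonal integration; our own simplification, not the instrument's 2-D reconstructor):

```python
import numpy as np

def opd_from_slopes(slopes, pitch):
    """1-D zonal reconstruction: integrate the local wavefront slopes
    (spot displacement divided by lenslet focal length) over the lenslet
    pitch. The unknown piston term is fixed by starting the OPD at zero."""
    return np.concatenate(([0.0], np.cumsum(slopes) * pitch))
```

A full 2-D sensor measures x- and y-slopes per lenslet and solves a least-squares problem instead of a plain cumulative sum, but the piston ambiguity and pitch scaling carry over unchanged.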
Lusch, Achim; Liss, Michael A; Greene, Peter; Abdelshehid, Corollos; Menhadji, Ashleigh; Bucur, Philip; Alipanah, Reza; McDougall, Elspeth; Landman, Jaime
2013-12-01
To evaluate performance characteristics and optics of a new generation high-definition distal sensor (HD-DS) flexible cystoscope, a standard-definition distal sensor (SD-DS) cystoscope, and a standard fiberoptic (FO) cystoscope. Three new cystoscopes (HD-DS, SD-DS, and FO) were compared for active deflection, irrigation flow, and optical characteristics. Each cystoscope was evaluated with an empty working channel and with various accessories. Optical characteristics (resolution, grayscale imaging, color representation, depth of field, and image brightness) were measured using United States Air Force (USAF)/Edmund Optics test targets and illumination meter. We digitally recorded a porcine cystoscopy in both clear and blood fields, with subsequent video analysis by 8 surgeons via questionnaire. The HD-DS had a higher resolution than the SD-DS and the FO at both 20 mm (6.35 vs 4.00 vs 2.24 line pairs/mm) and 10 mm (14.3 vs 7.13 vs 4.00 line pairs/mm) evaluations, respectively (P <.001 and P <.001). Color representation and depth of field (P = .001 and P <.001) were better in the HD-DS. When compared to the FO, the HD-DS and SD-DS demonstrated superior deflection up and irrigant flow with and without accessory present in the working channel, whereas image brightness was superior in the FO (P <.001, P = .001, and P <.001, respectively). Observers deemed the HD-DS cystoscope superior in visualization in clear and bloody fields, as well as for illumination. The new HD-DS provided significantly improved visualization in a clear and a bloody field, resolution, color representation, and depth of field compared to SD-DS and FO. Clinical correlation of these findings is pending. Copyright © 2013 Elsevier Inc. All rights reserved.
Hierarchical classification in high dimensional numerous class cases
NASA Technical Reports Server (NTRS)
Kim, Byungyong; Landgrebe, D. A.
1990-01-01
As progress in new sensor technology continues, increasingly high resolution imaging sensors are being developed. These sensors give more detailed and complex data for each picture element and greatly increase the dimensionality of data over past systems. Three methods for designing a decision tree classifier are discussed: a top down approach, a bottom up approach, and a hybrid approach. Three feature extraction techniques are implemented. Canonical and extended canonical techniques are mainly dependent upon the mean difference between two classes. An autocorrelation technique is dependent upon the correlation differences. The mathematical relationship between sample size, dimensionality, and risk value is derived.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki Ha; Becker, Alex; Framgos, William
1999-06-01
Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic measurements at frequencies between 1 and 100 MHz are important for such applications, because the induction number of many targets is small and the ability to determine the dielectric permittivity in addition to electrical conductivity of the subsurface is possible. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high-resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. We are developing a non-invasive method for accurately imaging the electrical conductivity and dielectric permittivity of the shallow subsurface using the plane wave impedance approach. Electric and magnetic sensors are being tested in a known area against theoretical predictions, thereby ensuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki Ha; Becker, Alex
2000-06-01
Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic measurements at frequencies between 1 and 100 MHz are important for such applications, because the induction number of many targets is small and the ability to determine the dielectric permittivity in addition to electrical conductivity of the subsurface is possible. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high-resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. We are developing a non-invasive method for accurately imaging the electrical conductivity and dielectric permittivity of the shallow subsurface using the plane wave impedance approach (Song et al., 1997). Electric and magnetic sensors are being tested in a known area against theoretical predictions, thereby ensuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques.
Super Typhoon Halong off Taiwan
NASA Technical Reports Server (NTRS)
2002-01-01
On July 14, 2002, Super Typhoon Halong was east of Taiwan (left edge) in the western Pacific Ocean. At the time this image was taken the storm was a Category 4 hurricane, with maximum sustained winds of 115 knots (132 miles per hour), but as recently as July 12, winds were at 135 knots (155 miles per hour). Halong has moved northwards and pounded Okinawa, Japan, with heavy rain and high winds, just days after tropical Storm Chataan hit the country, creating flooding and killing several people. The storm is expected to be a continuing threat on Monday and Tuesday. This image was acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite on July 14, 2002. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of the scene at the sensor's fullest resolution, visit the MODIS Rapid Response Image Gallery. Image courtesy Jacques Descloitres, MODIS Land Rapid Response Team at NASA GSFC
NASA Astrophysics Data System (ADS)
Refice, Alberto; Tijani, Khalid; Lovergine, Francesco P.; D'Addabbo, Annarita; Nutricato, Raffaele; Morea, Alberto
2017-04-01
Satellite monitoring of flood events at high spatial and temporal resolution is considered a difficult problem, mainly due to the lack of data with sufficient acquisition frequency and timeliness. The problem is worsened by the typically cloudy weather conditions associated with floods, which obstruct the propagation of e.m. waves in the optical spectral range, preventing acquisitions by optical sensors. This problem is not present at longer wavelengths, so radar imaging sensors are recognized as viable solutions for long-term flood monitoring. In selected cases, however, weather conditions may remain clear for sufficient amounts of time, enabling monitoring of the evolution of flood events through long time series of satellite images, both optical and radar. In this contribution, we present a case study of long-term integrated monitoring of a flood event which affected part of the Strymonas river basin, a transboundary river with its source in Bulgaria, which then flows through Greece to the Aegean Sea. The event, which affected the floodplain close to the river mouth, started at the beginning of April 2015, due to heavy rain, and lasted for several months, with some water pools still present at the beginning of September. Due to the arid climate characterizing the area, weather conditions were cloud-free for most of the period covering the event. We collected one high-resolution X-band COSMO-SkyMed image, 5 C-band Sentinel-1 SAR images, and 11 optical Landsat-8 images of the area. SAR images were calibrated, speckle-filtered and precisely geocoded; optical images were radiometrically corrected to obtain ground reflectance values from which NDVI maps were derived. The images were then thresholded to obtain binary flood maps for each day. Threshold values for microwave and optical data were calibrated by comparing one SAR and one optical image acquired on the same date. Results allow us to draw a multi-temporal map of the flood evolution with high temporal resolution.
The extension of the flooded area can also be tracked in time, allowing us to envisage testing of evapotranspiration/absorption models.
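As an illustrative sketch of the per-pixel thresholding step described above (all threshold values and reflectances here are hypothetical, not the calibrated values of the study):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from two reflectance bands."""
    return (nir - red) / (nir + red)

def flood_mask_optical(nir, red, ndvi_thresh=0.1):
    """Flag low-NDVI pixels as open water (threshold is illustrative)."""
    return ndvi(nir, red) < ndvi_thresh

def flood_mask_sar(sigma0_db, db_thresh=-15.0):
    """Smooth open water backscatters weakly, so low sigma0 flags water
    (threshold is illustrative, not the calibrated value)."""
    return sigma0_db < db_thresh

# Toy 2x2 scene: top row water, bottom row vegetation.
nir = np.array([[0.05, 0.04], [0.40, 0.45]])
red = np.array([[0.06, 0.05], [0.10, 0.08]])
sigma0 = np.array([[-20.0, -18.0], [-8.0, -6.0]])

opt = flood_mask_optical(nir, red)
sar = flood_mask_sar(sigma0)
print(opt.tolist())        # [[True, True], [False, False]]
print((opt == sar).all())  # True: both sensors agree on this toy scene
```

Cross-calibrating the two thresholds on a same-date SAR/optical pair, as the abstract describes, amounts to tuning `db_thresh` so that `sar` best matches `opt` on that date.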
Computational multispectral video imaging [Invited].
Wang, Peng; Menon, Rajesh
2018-01-01
Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
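The "regularization-based linear algebra" inversion can be sketched as a Tikhonov-regularized least-squares solve; the calibration matrix, dimensions, and regularization weight below are hypothetical stand-ins for the calibrated system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration matrix A: each of 16 sensor pixels responds to
# 8 spectral channels (in practice A comes from the calibration step).
A = rng.random((16, 8))
s_true = rng.random(8)          # unknown spectrum at one scene point
b = A @ s_true                  # spatially coded measurement on the pixels

# Tikhonov-regularized inverse: s = (A^T A + lam*I)^-1 A^T b
lam = 1e-6
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b)

print(np.allclose(s_hat, s_true, atol=1e-3))  # True for this noiseless toy case
```

With measurement noise, `lam` trades spectral fidelity against noise amplification, which is the role regularization plays in the reconstruction the abstract describes.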
Quantitative imaging with fluorescent biosensors.
Okumoto, Sakiko; Jones, Alexander; Frommer, Wolf B
2012-01-01
Molecular activities are highly dynamic and can occur locally in subcellular domains or compartments. Neighboring cells in the same tissue can exist in different states. Therefore, quantitative information on the cellular and subcellular dynamics of ions, signaling molecules, and metabolites is critical for functional understanding of organisms. Mass spectrometry is generally used for monitoring ions and metabolites; however, its temporal and spatial resolution are limited. Fluorescent proteins have revolutionized many areas of biology-e.g., fluorescent proteins can report on gene expression or protein localization in real time-yet promoter-based reporters are often slow to report physiologically relevant changes such as calcium oscillations. Therefore, novel tools are required that can be deployed in specific cells and targeted to subcellular compartments in order to quantify target molecule dynamics directly. We require tools that can measure enzyme activities, protein dynamics, and biophysical processes (e.g., membrane potential or molecular tension) with subcellular resolution. Today, we have an extensive suite of tools at our disposal to address these challenges, including translocation sensors, fluorescence-intensity sensors, and Förster resonance energy transfer sensors. This review summarizes sensor design principles, provides a database of sensors for more than 70 different analytes/processes, and gives examples of applications in quantitative live cell imaging.
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity of focusing its attention on the faces of detected pedestrians collecting snapshot frames of face images, by segmenting and tracking them over time at different resolution. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. As well as the client camera, this sensor is calibrated and the position of the object detected on the image plane reference system is translated in its coordinates referred to the same area map. In the map common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' track and to perform face detection and tracking. The novelty and strength of this work reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking and in the automatic collection of biometric data such as a person's face clip for recognition purposes.
NASA Astrophysics Data System (ADS)
Kistler, Marc; Estre, Nicolas; Merle, Elsa
2018-01-01
As part of its R&D activities on high-energy X-ray imaging for non-destructive characterization, the Nuclear Measurement Laboratory has started an upgrade of its imaging system currently implemented at the CEA-Cadarache center. The goals are to achieve a sub-millimeter spatial resolution and the ability to perform tomographies on very large objects (more than 100-cm standard concrete or 40-cm steel). This paper presents results on the detection part of the imaging system. The upgrade of the detection part needs a thorough study of the performance of two detectors: a series of CdTe semiconductor sensors and two arrays of segmented CdWO4 scintillators with different pixel sizes. This study consists of a Quantum Accounting Diagram (QAD) analysis coupled with Monte-Carlo simulations. The scintillator arrays are able to detect millimeter details through 140 cm of concrete, but are limited to 120 cm for smaller ones. CdTe sensors have lower but more stable performance, with a 0.5 mm resolution for 90 cm of concrete. The choice of the detector then depends on the preferred characteristic: the spatial resolution or the use on large volumes. The combination of the features of the source and the studies on the detectors gives the expected performance of the whole equipment, in terms of signal-over-noise ratio (SNR), spatial resolution and acquisition time.
SPIDER: Next Generation Chip Scale Imaging Sensor
NASA Astrophysics Data System (ADS)
Duncan, Alan; Kendrick, Rick; Thurman, Sam; Wuchenich, Danielle; Scott, Ryan P.; Yoo, S. J. B.; Su, Tiehui; Yu, Runxiang; Ogden, Chad; Proiett, Roberto
The LM Advanced Technology Center and UC Davis are developing an Electro-Optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that provides a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger aperture imager in a constrained volume. The SPIDER concept consists of thousands of direct detection white-light interferometers densely packed onto Photonic Integrated Circuits (PICs) to measure the amplitude and phase of the visibility function at spatial frequencies that span the full synthetic aperture. In other words, SPIDER would sample the object being imaged in the Fourier domain (i.e., spatial frequency domain), and then digitally reconstruct an image. The conventional approach for imaging interferometers requires complex mechanical delay lines to form the interference fringes. This results in designs that are not traceable to more than a few simultaneous spatial frequency measurements. SPIDER seeks to achieve this traceability by employing micron-scale optical waveguides and nanophotonic structures fabricated on a PIC with micron-scale packing density to form the necessary interferometers. Prior LM IRAD and DARPA/NASA CRAD-funded SPIDER risk reduction experiments, design trades, and simulations have matured the SPIDER imager concept to a TRL 3 level. Current funding under the DARPA SPIDER Zoom program is maturing the underlying PIC technology for SPIDER to the TRL 4 level. This is done by developing and fabricating a second-generation PIC that is fully traceable to the multiple layers and low-power phase modulators required for higher-dimension waveguide arrays that are needed for higher field-of-view sensors.
Our project also seeks to extend the SPIDER concept to add a zoom capability that would provide simultaneous low-resolution, large field-of-view and steerable high-resolution, narrow field-of-view imaging modes. A proof of concept demo is being designed to validate this capability. Finally, data collected by this project would be used to benchmark and increase the fidelity of our SPIDER image simulations and enhance our ability to predict the performance of existing and future SPIDER sensor design variations. These designs and their associated performance characteristics could then be evaluated as candidates for future mission opportunities to identify specific transition paths. This paper provides an overview of performance data on the first-generation PIC for SPIDER developed under DARPA SeeMe program funding. We provide a design description of the SPIDER Zoom imaging sensor and the second-generation PIC (high- and low-resolution versions) currently under development on the DARPA SPIDER Zoom effort. Results of performance simulations and design trades are presented. Unique low-cost payload applications for future SSA missions are also discussed.
NASA Astrophysics Data System (ADS)
Han, Ling; Miller, Brian W.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.
2017-09-01
iQID is an intensified quantum imaging detector developed in the Center for Gamma-Ray Imaging (CGRI). Originally called BazookaSPECT, iQID was designed for high-resolution gamma-ray imaging and preclinical gamma-ray single-photon emission computed tomography (SPECT). With the use of a columnar scintillator, an image intensifier and modern CCD/CMOS sensors, iQID cameras feature outstanding intrinsic spatial resolution. In recent years, many advances have been achieved that greatly boost the performance of iQID, broadening its applications to cover nuclear and particle imaging for preclinical, clinical and homeland security settings. This paper presents an overview of the recent advances of iQID technology and its applications in preclinical and clinical scintigraphy, preclinical SPECT, particle imaging (alpha, neutron, beta, and fission fragment), and digital autoradiography.
NASA Astrophysics Data System (ADS)
Peterson, E. R.; Stanton, T. P.
2016-12-01
Determining ice concentration in the Arctic is necessary to track significant changes in sea ice edge extent. Sea ice concentrations are also needed to interpret data collected by in-situ instruments like buoys, as the amount of ice versus water in a given area determines local solar heating. Ice concentration products are now routinely derived from satellite radiometers including the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), the Advanced Microwave Scanning Radiometer 2 (AMSR2), the Special Sensor Microwave Imager (SSMI), and the Special Sensor Microwave Imager/Sounder (SSMIS). While these radiometers are viewed as reliable to monitor long-term changes in sea ice extent, their accuracy should be analyzed, and compared to determine which radiometer performs best over smaller features such as melt ponds, and how seasonal conditions affect accuracy. Knowledge of the accuracy of radiometers at high resolution can help future researchers determine which radiometer to use, and be aware of radiometer shortcomings in different ice conditions. This will be especially useful when interpreting data from in-situ instruments which deal with small scale measurements. In order to compare these passive microwave radiometers, selected high-spatial-resolution (one-meter) Medea images, archived at the United States Geological Survey, are used for ground truth comparison. Sea ice concentrations are derived from these images in an interactive process, although estimates are not perfect ground truth due to exposure of images, shadowing and cloud cover. 68 images are retrieved from the USGS website and compared with 9 usable, collocated SSMI, 33 SSMIS, 36 AMSRE, and 14 AMSR2 ice concentrations in the Arctic Ocean. We analyze and compare the accuracy of radiometer instrumentation in differing ice conditions.
Correcting spacecraft jitter in HiRISE images
Sutton, S. S.; Boyd, A.K.; Kirk, Randolph L.; Cook, Debbie; Backer, Jean; Fennema, A.; Heyd, R.; McEwen, A.S.; Mirchandani, S.D.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.
2017-01-01
Mechanical oscillations or vibrations on spacecraft, also called pointing jitter, cause geometric distortions and/or smear in high resolution digital images acquired from orbit. Geometric distortion is especially a problem with pushbroom type sensors, such as the High Resolution Imaging Science Experiment (HiRISE) instrument on board the Mars Reconnaissance Orbiter (MRO). Geometric distortions occur at a range of frequencies that may not be obvious in the image products, but can cause problems with stereo image correlation in the production of digital elevation models, and in measuring surface changes over time in orthorectified images. The HiRISE focal plane comprises a staggered array of fourteen charge-coupled devices (CCDs) with pixel IFOV of 1 microradian. The high spatial resolution of HiRISE makes it both sensitive to, and an excellent recorder of jitter. We present an algorithm using Fourier analysis to resolve the jitter function for a HiRISE image that is then used to update instrument pointing information to remove geometric distortions from the image. Implementation of the jitter analysis and image correction is performed on selected HiRISE images. Resulting corrected images and updated pointing information are made available to the public. Results show marked reduction of geometric distortions. This work has applications to similar cameras operating now, and to the design of future instruments (such as the Europa Imaging System).
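A toy illustration of the Fourier step: detrend the measured pointing offsets, then locate the dominant oscillation in the amplitude spectrum. The sampling rate, jitter frequency, and amplitudes below are invented for the example, not HiRISE values.

```python
import numpy as np

fs = 1000.0                       # line sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)

# Synthetic jitter signal: a 60 Hz oscillation plus a slow drift, in pixels.
jitter = 0.5 * np.sin(2 * np.pi * 60.0 * t) + 0.01 * t

# Remove the linear trend so slow pointing drift does not dominate the FFT,
# then find the dominant jitter frequency.
detrended = jitter - np.polyval(np.polyfit(t, jitter, 1), t)
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)
f_peak = freqs[np.argmax(spectrum)]
print(f_peak)  # 60.0
```

In the actual pipeline, the resolved jitter function (frequency, amplitude, and phase) is fed back into the instrument pointing kernels rather than subtracted from the image directly.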
Precise color images: a high-speed color video camera system with three intensified sensors
NASA Astrophysics Data System (ADS)
Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.
1999-06-01
High speed imaging systems have been used in a large field of science and engineering. Although the high speed camera systems have been improved to high performance, most of their applications are only to get high speed motion pictures. However, in some fields of science and technology, it is useful to get some other information, such as temperature of combustion flame, thermal plasma and molten materials. Recent digital high speed video imaging technology should be able to get such information from those objects. For this purpose, we have already developed a high speed video camera system with three-intensified-sensors and cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels with 256 (8 bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid state memory. In order to get precise color images from this camera system, we need to develop a digital technique, which consists of a computer program and ancillary instruments, to adjust displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.
Snapshot Imaging Spectrometry in the Visible and Long Wave Infrared
NASA Astrophysics Data System (ADS)
Maione, Bryan David
Imaging spectrometry is an optical technique in which the spectral content of an object is measured at each location in space. The main advantage of this modality is that it enables characterization beyond what is possible with a conventional camera, since spectral information is generally related to the chemical composition of the object. Due to this, imaging spectrometers are often capable of detecting targets that are either morphologically inconsistent, or even under resolved. A specific class of imaging spectrometer, known as a snapshot system, seeks to measure all spatial and spectral information simultaneously, thereby rectifying artifacts associated with scanning designs, and enabling the measurement of temporally dynamic scenes. Snapshot designs are the focus of this dissertation. Three designs for snapshot imaging spectrometers are developed, each providing novel contributions to the field of imaging spectrometry. In chapter 2, the first spatially heterodyned snapshot imaging spectrometer is modeled and experimentally validated. Spatial heterodyning is a technique commonly implemented in non-imaging Fourier transform spectrometry. For Fourier transform imaging spectrometers, spatial heterodyning improves the spectral resolution trade space. Additionally, in this chapter a unique neural network based spectral calibration is developed and determined to be an improvement beyond Fourier and linear operator based techniques. Leveraging spatial heterodyning as developed in chapter 2, in chapter 3, a high spectral resolution snapshot Fourier transform imaging spectrometer, based on a Savart plate interferometer, is developed and experimentally validated. The sensor presented in this chapter is the highest spectral resolution sensor in its class. High spectral resolution enables the sensor to discriminate narrowly spaced spectral lines. The capabilities of neural networks in imaging spectrometry are further explored in this chapter. 
Neural networks are used to perform single target detection on raw instrument data, thereby eliminating the need for an explicit spectral calibration step. As an extension of the results in chapter 2, neural networks are once again demonstrated to be an improvement when compared to linear operator based detection. In chapter 4, a non-interferometric design is developed for the long wave infrared (wavelengths spanning 8-12 microns). The imaging spectrometer developed in this chapter is a multi-aperture filtered microbolometer. Since the detector is uncooled, the presented design is ultra-compact and low power. Additionally, cost effective polymer absorption filters are used in lieu of interference filters. Since each measurement of the system is spectrally multiplexed, an SNR advantage is realized. A theoretical model for the filtered design is developed, and the performance of the sensor for detecting liquid contaminants is investigated. Similar to past chapters, neural networks are used and achieve false detection rates of less than 1%. Lastly, this dissertation is concluded with a discussion on future work and potential impact of these devices.
NASA Technical Reports Server (NTRS)
Lampton, M.; Malina, R. F.
1976-01-01
A position-sensitive event-counting electronic readout system for microchannel plates (MCPs) is described that offers the advantages of high spatial resolution and fast time resolution. The technique relies upon a four-quadrant electron-collecting anode located behind the output face of the microchannel plate, so that the electron cloud from each detected event is partly intercepted by each of the four quadrants. The relative amounts of charge collected by each quadrant depend on event position, permitting each event to be localized with two ratio circuits. A prototype quadrant anode system for ion, electron, and extreme ultraviolet imaging is described. The spatial resolution achieved, about 10 microns, allows individual MCP channels to be distinguished.
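The two ratio computations can be sketched as follows; the quadrant labels and sign conventions are assumptions for illustration, and the mapping from normalized ratios to physical units would come from a calibration not shown here.

```python
def quadrant_position(qa, qb, qc, qd):
    """Event position from the charge collected on four quadrants.

    qa..qd are the charges on the upper-left, upper-right, lower-left,
    and lower-right quadrants (hypothetical layout). The outputs are the
    two normalized charge ratios, each in [-1, 1], corresponding to the
    two ratio circuits of the readout.
    """
    total = qa + qb + qc + qd
    x = ((qb + qd) - (qa + qc)) / total   # right minus left
    y = ((qa + qb) - (qc + qd)) / total   # top minus bottom
    return x, y

# Electron cloud centred on the anode: all quadrants collect equally.
print(quadrant_position(1.0, 1.0, 1.0, 1.0))  # (0.0, 0.0)

# Cloud shifted right and up: more charge on the right/upper quadrants.
x, y = quadrant_position(0.9, 1.5, 0.5, 1.1)
print(x > 0 and y > 0)  # True
```

Because both outputs are ratios of collected charge, the position estimate is insensitive to event-to-event gain variations of the microchannel plate.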
The Landsat Image Mosaic of Antarctica
Bindschadler, Robert; Vornberger, P.; Fleming, A.; Fox, A.; Mullins, J.; Binnie, D.; Paulsen, S.J.; Granneman, Brian J.; Gorodetzky, D.
2008-01-01
The Landsat Image Mosaic of Antarctica (LIMA) is the first true-color, high-spatial-resolution image of the seventh continent. It is constructed from nearly 1100 individually selected Landsat-7 ETM+ scenes. Each image was orthorectified and adjusted for geometric, sensor and illumination variations to a standardized, almost seamless surface reflectance product. Mosaicing to avoid clouds produced a high quality, nearly cloud-free benchmark data set of Antarctica for the International Polar Year from images collected primarily during 1999-2003. Multiple color composites and enhancements were generated to illustrate additional characteristics of the multispectral data including: the true appearance of the surface; discrimination between snow and bare ice; reflectance variations within bright snow; recovered reflectance values in regions of sensor saturation; and subtle topographic variations associated with ice flow. LIMA is viewable and individual scenes or user defined portions of the mosaic are downloadable at http://lima.usgs.gov. Educational materials associated with LIMA are available at http://lima.nasa.gov.
2009-03-01
value. While these instruments may be well suited for academic research, they are generally not useful for battlefield measurements. Airborne and...may be too generalized for use with current tactical decision aids in the high-resolution, high- precision environment of the modern battlefield...imager resolutions on the order of less than 1 meter, shadows from small features such as buildings can be used to effectively measure the AOD in the
NASA Astrophysics Data System (ADS)
Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.
2018-02-01
Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can result in long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on using a sCMOS camera and FPI sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal to noise ratio (SNR) was measured. A comparison of the SNR of PA signal detected using 1) a photodiode in a conventional raster scanning detection scheme and 2) a sCMOS camera in parallelised detection scheme is made. The results show that the parallelised interrogation scheme has the potential to provide high speed PA imaging.
NASA Technical Reports Server (NTRS)
Brand, R. R.; Barker, J. L.
1983-01-01
A multistage sampling procedure using image processing, geographical information systems, and analytical photogrammetry is presented which can be used to guide the collection of representative, high-resolution spectra and discrete reflectance targets for future satellite sensors. The procedure is general and can be adapted to characterize areas as small as minor watersheds and as large as multistate regions. Beginning with a user-determined study area, successive reductions in size and spectral variation are performed using image analysis techniques on data from the Multispectral Scanner, orbital and simulated Thematic Mapper, low altitude photography synchronized with the simulator, and associated digital data. An integrated image-based geographical information system supports processing requirements.
Zeng, Youjun; Wang, Lei; Wu, Shu-Yuen; He, Jianan; Qu, Junle; Li, Xuejin; Ho, Ho-Pui; Gu, Dayong; Gao, Bruce Zhi; Shao, Yonghong
2017-01-01
A fast surface plasmon resonance (SPR) imaging biosensor system based on wavelength interrogation using an acousto-optic tunable filter (AOTF) and a white light laser is presented. The system combines the merits of a wide-dynamic detection range and high sensitivity offered by the spectral approach with multiplexed high-throughput data collection and a two-dimensional (2D) biosensor array. The key feature is the use of AOTF to realize wavelength scan from a white laser source and thus to achieve fast tracking of the SPR dip movement caused by target molecules binding to the sensor surface. Experimental results show that the system is capable of completing a SPR dip measurement within 0.35 s. To the best of our knowledge, this is the fastest time ever reported in the literature for imaging spectral interrogation. Based on a spectral window with a width of approximately 100 nm, a dynamic detection range and resolution of 4.63 × 10−2 refractive index unit (RIU) and 1.27 × 10−6 RIU achieved in a 2D-array sensor is reported here. The spectral SPR imaging sensor scheme has the capability of performing fast high-throughput detection of biomolecular interactions from 2D sensor arrays. The design has no mechanical moving parts, thus making the scheme completely solid-state. PMID:28067766
Automatic parquet block sorting using real-time spectral classification
NASA Astrophysics Data System (ADS)
Astrom, Anders; Astrand, Erik; Johansson, Magnus
1999-03-01
This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information on an image sensor, a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features on the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 lines/s. This opens up the possibility of maintaining high production speed while still measuring with good resolution.
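A minimal software sketch of per-pixel classification against linear class models, followed by the per-block defect statistic; the class spectra, band count, and threshold-free nearest-model rule are illustrative, not taken from the patented near-sensor method:

```python
import numpy as np

# Hypothetical class models: mean reflectance spectra over 4 bands for the
# reference (sound) surface and two defect classes (values are invented).
classes = {
    "sound":      np.array([0.6, 0.5, 0.3, 0.2]),
    "blue_stain": np.array([0.2, 0.3, 0.5, 0.6]),
    "red_decay":  np.array([0.5, 0.2, 0.2, 0.4]),
}

def classify(spectrum):
    """Assign the class whose model spectrum is nearest in Euclidean
    distance (a simple linear-model decision rule)."""
    return min(classes, key=lambda c: np.linalg.norm(spectrum - classes[c]))

# One scan line of two pixels: a sound pixel and a blue-stain pixel.
line = [np.array([0.58, 0.49, 0.31, 0.22]),
        np.array([0.22, 0.31, 0.48, 0.58])]
labels = [classify(p) for p in line]
defect_fraction = sum(lab != "sound" for lab in labels) / len(labels)
print(labels)           # ['sound', 'blue_stain']
print(defect_fraction)  # 0.5
```

Accumulating `defect_fraction` over all scan lines of a block yields the pixel defect percentage used to sort each parquet block.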
Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun
2017-01-01
To solve the problem of inaccuracy when estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation from low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality; therefore, analyzing the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image from multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which improves the accuracy of the PSF estimation for LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher-quality reconstructed images than the blind SR method and the bicubic interpolation method. PMID:28208837
Continuous Mapping of Tunnel Walls in a Gnss-Denied Environment
NASA Astrophysics Data System (ADS)
Chapman, Michael A.; Min, Cao; Zhang, Deijin
2016-06-01
The need for reliable systems for capturing precise detail in tunnels has increased as the number of tunnels (e.g., for cars and trucks, trains, subways, mining and other infrastructure) has grown and as these aging structures have deteriorated, introducing structural degradation and eventual failures. Due to the hostile environments encountered in tunnels, mobile mapping systems are plagued by various problems such as loss of GNSS signals, drift of inertial measurement systems, low lighting conditions, dust and poor surface textures for feature identification and extraction. A tunnel mapping system using alternate sensors and algorithms that can deliver precise coordinates and feature attributes from surfaces along the entire tunnel path is presented. This system employs image bridging, or visual odometry, to estimate precise sensor positions and orientations. The fundamental concept is the use of image sequences to geometrically extend the control information in the absence of absolute positioning data sources. This is a non-trivial problem due to changes in scale, perceived resolution, image contrast and lack of salient features. The sensors employed include forward-looking high-resolution digital frame cameras coupled with auxiliary light sources. In addition, a high-frequency lidar system and a thermal imager are included to offer three-dimensional point clouds of the tunnel walls along with thermal images for moisture detection. The mobile mapping system is equipped with an array of 16 cameras and light sources to capture the tunnel walls. Continuous images are produced using a semi-automated mosaicking process. Results of preliminary experimentation are presented to demonstrate the effectiveness of the system for the generation of seamless, precise tunnel maps.
Studies of sounding and imaging measurements from geostationary satellites
NASA Technical Reports Server (NTRS)
Suomi, V. E.
1973-01-01
Sounding and imaging measurements from geostationary satellites are presented. The subjects discussed are: (1) meteorological data processing techniques, (2) sun glitter, (3) a cloud growth rate study and satellite stability characteristics, and (4) high resolution optics. The use of a perturbation technique to obtain the motion of sensors aboard a satellite is described, along with measurement conditions and errors. Several performance evaluation parameters are proposed.
Sampson, David D.; Kennedy, Brendan F.
2017-01-01
High-resolution tactile imaging, superior to the sense of touch, has potential for future biomedical applications such as robotic surgery. In this paper, we propose a tactile imaging method, termed computational optical palpation, based on measuring the change in thickness of a thin, compliant layer with optical coherence tomography and calculating tactile stress using finite-element analysis. We demonstrate our method on test targets and on freshly excised human breast fibroadenoma, achieving a resolution of up to 15–25 µm and a field of view of up to 7 mm. Our method is open source and readily adaptable to other imaging modalities, such as ultrasonography and confocal microscopy. PMID:28250098
Agricultural Land Use mapping by multi-sensor approach for hydrological water quality monitoring
NASA Astrophysics Data System (ADS)
Brodsky, Lukas; Kodesova, Radka; Kodes, Vit
2010-05-01
The main objective of this study is to demonstrate the potential of operational use of high- and medium-resolution remote sensing data for hydrological water quality monitoring by mapping agricultural intensity and crop structures; in particular, the use of remote sensing mapping to optimize pesticide monitoring. The agricultural mapping task is tackled by means of medium-spatial- and high-temporal-resolution ESA Envisat MERIS FR images together with a single high-spatial-resolution IRS AWiFS image covering the whole area of interest (the Czech Republic). High-resolution data (e.g. SPOT, ALOS, Landsat) are often used for agricultural land use classification, but usually only at the regional or local level due to data availability and financial constraints. AWiFS data (nominal spatial resolution 56 m), thanks to the wide satellite swath, seem more suitable for use at the national level. Nevertheless, one of the critical issues for such a classification is to have sufficient image acquisitions over the whole vegetation period to describe crop development in an appropriate way. ESA MERIS medium-resolution data were used in several studies for crop classification. The high temporal and spectral resolution of MERIS data is an indisputable advantage for crop classification. However, the 300 m spatial resolution results in mixed signals within a single pixel. AWiFS-MERIS data synergy brings new perspectives to agricultural land use mapping. Also, the developed methodological procedure is fully compatible with future use of ESA (GMES) Sentinel satellite images. The applied methodology of the hybrid multi-sensor approach consists of these main stages: a/ parcel segmentation and spectral pre-classification of the high-resolution image (AWiFS); b/ ingestion of medium-resolution (MERIS) vegetation spectro-temporal features; c/ vegetation signature unmixing; and d/ semantic object-oriented classification of vegetation classes into the final classification scheme.
These crop groups were selected to be classified: winter crops, spring crops, oilseed rape, legumes, summer and other crops. This study highlights the operational potential of high-temporal full-resolution MERIS images in agricultural land use monitoring. Practical application of this methodology is foreseen, among others, in water quality monitoring. Effective pesticide monitoring also relies on the spatial distribution of applied pesticides, which can be derived from crop - plant protection product relationships. Knowledge of areas with predominant occurrence of a specific crop, based on the remote sensing data described above, can be used to forecast probable plant protection product application, thus enabling cost-effective pesticide monitoring. Remote sensing data used on a continuous basis can also support other long-term water management issues and provide valuable data for decision makers. Acknowledgement: The authors acknowledge the financial support of the Ministry of Education, Youth and Sports of the Czech Republic (grants No. 2B06095 and No. MSM 6046070901). The study was also supported by the ESA CAT-1 (ref. 4358) and SOSI projects (Spatial Observation Services and Infrastructure; ref. GSTP-RTDA-EOPG-SW-08-0004).
x-y curvature wavefront sensor.
Cagigal, Manuel P; Valle, Pedro J
2015-04-15
In this Letter, we propose a new curvature wavefront sensor based on the principles of optical differentiation. The theoretically modeled setup consists of a diffractive optical mask placed at the intermediate plane of a classical two-lens coherent optical processor. The resulting image is composed of a number of local derivatives of the entrance pupil function whose proper combination provides the wavefront curvature. In contrast to common radial curvature sensors, this one provides the x and y wavefront curvature maps simultaneously. The sensor offers additional advantages such as high spatial resolution, an adjustable dynamic range, and insensitivity to misalignment.
Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution
Bishara, Waheb; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan
2010-01-01
We demonstrate lensfree holographic microscopy on a chip to achieve ~0.6 µm spatial resolution corresponding to a numerical aperture of ~0.5 over a large field-of-view of ~24 mm2. By using partially coherent illumination from a large aperture (~50 µm), we acquire lower resolution lensfree in-line holograms of the objects with unit fringe magnification. For each lensfree hologram, the pixel size at the sensor chip limits the spatial resolution of the reconstructed image. To circumvent this limitation, we implement a sub-pixel shifting based super-resolution algorithm to effectively recover much higher resolution digital holograms of the objects, permitting sub-micron spatial resolution to be achieved across the entire sensor chip active area, which is also equivalent to the imaging field-of-view (24 mm2) due to unit magnification. We demonstrate the success of this pixel super-resolution approach by imaging patterned transparent substrates, blood smear samples, as well as Caenorhabditis elegans. PMID:20588977
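The sub-pixel shifting idea behind pixel super-resolution can be sketched as a simple shift-and-add: low-resolution frames with known sub-pixel offsets are placed onto a finer grid and averaged. This is only a minimal illustration of the general technique, not the authors' reconstruction algorithm; the function name and interface are ours.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Naive shift-and-add super-resolution.

    frames: list of (h, w) low-resolution images
    shifts: per-frame (dy, dx) sub-pixel offsets, in low-res pixel units
    factor: integer upsampling factor of the high-res grid
    """
    h, w = frames[0].shape
    H, W = h * factor, w * factor
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for frame, (dy, dx) in zip(frames, shifts):
        # Map every low-res sample to its nearest high-res grid cell.
        ys = np.rint(np.arange(h)[:, None] * factor + dy * factor).astype(int)
        xs = np.rint(np.arange(w)[None, :] * factor + dx * factor).astype(int)
        ys = np.clip(ys, 0, H - 1)
        xs = np.clip(xs, 0, W - 1)
        acc[ys, xs] += frame
        cnt[ys, xs] += 1
    cnt[cnt == 0] = 1  # cells never observed stay zero
    return acc / cnt
```

With an upsampling factor of 2 and four frames shifted by half a pixel in each direction, every high-resolution grid cell receives exactly one sample.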
SRAO: optical design and the dual-knife-edge WFS
NASA Astrophysics Data System (ADS)
Ziegler, Carl; Law, Nicholas M.; Tokovinin, Andrei
2016-07-01
The Southern Robotic Adaptive Optics (SRAO) instrument will bring the proven high-efficiency capabilities of Robo-AO to the Southern Hemisphere, providing the unique capability to obtain high-angular-resolution images of thousands of targets per year across the entire sky. Deployed on the modern 4.1m SOAR telescope located on Cerro Tololo, the NGS AO system will use an innovative dual-knife-edge wavefront sensor, similar to a pyramid sensor, to enable guiding on targets down to V=16 with diffraction limited resolution in the NIR. The dual-knife-edge wavefront sensor can be up to two orders of magnitude less costly than custom glass pyramids, with similar wavefront error sensitivity and minimal chromatic aberrations. SRAO is capable of observing hundreds of targets a night through automation, allowing confirmation and characterization of the large number of exoplanets produced by current and future missions.
Multi-frequency SAR, SSM/I and AVHRR derived geophysical information of the marginal ice zone
NASA Technical Reports Server (NTRS)
Shuchman, R. A.; Onstott, R. G.; Wackerman, C. C.; Russel, C. A.; Sutherland, L. L.; Johannessen, O. M.; Johannessen, J. A.; Sandven, S.; Gloerson, P.
1991-01-01
A description is given of the fusion of synthetic aperture radar (SAR), special sensor microwave/imager (SSM/I), and NOAA Advanced Very High Resolution Radiometer (AVHRR) data to study arctic processes. These data were collected during the SIZEX/CEAREX experiments that occurred in the Greenland Sea in March 1989. Detailed comparisons between the SAR, AVHRR, and SSM/I indicated that: (1) the ice edge positions agreed to within 25 km; (2) the SSM/I and SAR total ice concentrations compared favorably, although the SSM/I significantly underpredicted the multiyear fraction; (3) combining high-resolution SAR with SSM/I can potentially map open water and new ice features in the marginal ice zone (MIZ) that cannot be mapped by the single sensors; and (4) the combination of all three sensors provides accurate ice information as well as sea surface temperature and wind speeds.
NASA Technical Reports Server (NTRS)
Wang, Yu (Inventor)
2006-01-01
A miniature, ultra-high resolution, color scanning microscope using microchannel and solid-state technology that does not require focus adjustment. One embodiment includes: a source of collimated radiant energy for illuminating a sample; a plurality of narrow-angle filters comprising a microchannel structure that passes only unscattered radiant energy, with some portion of the radiant energy entering the microchannels from the sample; and a solid-state sensor array attached to the microchannel structure, the microchannels being aligned with elements of the sensor array so that radiant energy entering the microchannels parallel to the microchannel walls travels to a sensor element, generating an electrical signal from which an image is reconstructed by an external device; and a moving element for movement of the microchannel structure relative to the sample. Also disclosed is a method for scanning samples whereby the sensor array elements trace parallel paths that are arbitrarily close to the parallel paths traced by other elements of the array.
Scene-based Shack-Hartmann wavefront sensor for light-sheet microscopy
NASA Astrophysics Data System (ADS)
Lawrence, Keelan; Liu, Yang; Dale, Savannah; Ball, Rebecca; VanLeuven, Ariel J.; Sornborger, Andrew; Lauderdale, James D.; Kner, Peter
2018-02-01
Light-sheet microscopy is an ideal imaging modality for long-term live imaging in model organisms. However, significant optical aberrations can be present when imaging into an organism that is hundreds of microns or greater in size. To measure and correct optical aberrations, an adaptive optics system must be incorporated into the microscope. Many biological samples lack point sources that can be used as guide stars with conventional Shack-Hartmann wavefront sensors. We have developed a scene-based Shack-Hartmann wavefront sensor for measuring the optical aberrations in a light-sheet microscopy system that does not require a point source and can measure the aberrations for different parts of the image. The sensor has 280 lenslets inside the pupil, creates an image from each lenslet with a 500 micron field of view and a resolution of 8 microns, and has a resolution for the wavefront gradient of 75 milliradians per lenslet. We demonstrate the system on both fluorescent bead samples and zebrafish embryos.
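The scene-based measurement principle, estimating the local image shift between lenslet sub-images rather than locating a guide-star spot, can be illustrated with a basic FFT cross-correlation. This is a generic sketch of shift estimation, not the authors' implementation; the function name is ours and sub-pixel refinement is omitted.

```python
import numpy as np

def subimage_shift(ref, img):
    """Integer-pixel shift of img relative to ref via FFT cross-correlation.

    In a scene-based Shack-Hartmann sensor, such shifts between lenslet
    sub-images are proportional to the local wavefront gradient."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:   # unwrap circular lags to signed offsets
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```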
NASA Technical Reports Server (NTRS)
Blakeslee, R. J.; Christian, H. J.; Mach, D. M.; Buechler, D. E.; Wharton, N. A.; Stewart, M. F.; Ellett, W. T.; Koshak, W. J.; Walker, T. D.; Virts, K.;
2017-01-01
Mission: Fly a flight-spare LIS (Lightning Imaging Sensor) on ISS to take advantage of unique capabilities provided by the ISS (e.g., high inclination, real time data); Integrate LIS as a hosted payload on the DoD Space Test Program-Houston 5 (STP-H5) mission and launch on a Space X rocket for a minimum 2 year mission. Measurement: NASA and its partners developed and demonstrated effectiveness and value of using space-based lightning observations as a remote sensing tool; LIS measures lightning (amount, rate, radiant energy) with storm scale resolution, millisecond timing, and high detection efficiency, with no land-ocean bias. Benefit: LIS on ISS will extend TRMM (Tropical Rainfall Measuring Mission) time series observations, expand latitudinal coverage, provide real time data to operational users, and enable cross-sensor calibration.
NASA Astrophysics Data System (ADS)
Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias
2018-04-01
This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images from different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions are drawn using techniques like hybridization (e.g. local search) and others to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs while reliably finding the optimal solution.
Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air
2008-01-01
With their significant features, the applications of complementary metal-oxide semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, and active/passive range finders. In this paper, CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor and thereby enhance the performance of the triangulation-based range finders was also developed. An extensive series of experiments was conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active and passive range finders are better than 0.6% and 0.25% within measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests of the developed CMOS image sensor-based range finders in the automotive field were also conducted. The experimental results demonstrated that our range finders are well suited for distance measurements in this field. PMID:27879789
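The underlying triangulation reduces to similar triangles: with a light source offset from the camera by a baseline b and a lens of focal length f, the imaged spot's offset x on the sensor gives the target distance d = f·b/x. A minimal sketch under those assumptions (names and parameters are ours, not from the paper):

```python
def triangulate_distance(baseline_m, focal_px, spot_offset_px):
    """Distance from active triangulation: d = f * b / x.

    baseline_m:     camera-to-light-source separation in meters
    focal_px:       lens focal length expressed in pixels
    spot_offset_px: offset of the imaged spot from the optical axis, in pixels
    """
    if spot_offset_px <= 0:
        raise ValueError("spot must be offset from the optical axis")
    return focal_px * baseline_m / spot_offset_px
```

Because d varies inversely with x, a fixed one-pixel error in locating the spot costs more range accuracy at longer distances, which is why such range finders quote resolution as a percentage over a bounded measurement range.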
3D near-infrared imaging based on a single-photon avalanche diode array sensor
NASA Astrophysics Data System (ADS)
Mata Pavia, Juan; Wolf, Martin; Charbon, Edoardo
2012-10-01
Near-infrared light can be used to determine the optical properties (absorption and scattering) of human tissue. Optical tomography uses this principle to image the internal structure of parts of the body by measuring the light scattered in the tissue. An imager for optical tomography was designed based on a detector with 128x128 single-photon pixels that includes a bank of 32 time-to-digital converters. Due to the high spatial resolution and the possibility of performing time-resolved measurements, a new contactless setup has been conceived. The setup has a timing resolution of 97 ps and operates with a laser source with an average power of 3 mW. This new setup generates a large amount of data that could not be processed by established methods; therefore, new concepts and algorithms were developed to take advantage of it. Simulations show that the potential resolution of the new setup is much higher than that of previous designs. Measurements have been performed showing its potential. Images derived from the measurements showed that it is possible to reach a resolution of at least 5 mm.
Sensing, Spectra and Scaling: What's in Store for Land Observations
NASA Technical Reports Server (NTRS)
Goetz, Alexander F. H.
2001-01-01
Bill Pecora's 1960s vision of the future, using spacecraft-based sensors for mapping the environment and exploring for resources, is being implemented today. New technology has produced better sensors in space, such as the Landsat Thematic Mapper (TM) and SPOT, and creative researchers are continuing to find new applications. However, with existing sensors, and those intended for launch in this century, the potential for extracting information from the land surface is far from being exploited. The most recent technology development is imaging spectrometry, the acquisition of images in hundreds of contiguous spectral bands, such that for any pixel a complete reflectance spectrum can be acquired. Experience with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has shown that, with proper attention paid to absolute calibration, it is possible to acquire apparent surface reflectance to 5% accuracy without any ground-based measurement. The data reduction incorporates an educated guess of the aerosol scattering, development of a precipitable water vapor map from the data, and mapping of cirrus clouds in the 1.38 micrometer band. This is not possible with TM. The pixel size in images of the earth plays an important role in the type and quality of information that can be derived. Less understood is the coupling between spatial and spectral resolution in a sensor. Recent work has shown that in processing the data to derive the relative abundance of materials in a pixel, also known as unmixing, the pixel size is an important parameter. A variance in the relative abundance of materials among the pixels is necessary to be able to derive the endmembers, or pure material constituent spectra. In most cases, the 1 km pixel size of the Earth Observing System Moderate Resolution Imaging Spectroradiometer (MODIS) instrument is too large to meet the variance criterion.
A pointable high spatial and spectral resolution imaging spectrometer in orbit will be necessary to take the next major step in our understanding of the solid earth surface and its changing face.
NASA Technical Reports Server (NTRS)
Farrar, Michael R.; Smith, Eric A.
1992-01-01
A method for enhancing the 19, 22, and 37 GHz measurements of the SSM/I (Special Sensor Microwave/Imager) to the spatial resolution and sampling density of the high resolution 85-GHz channel is presented. An objective technique for specifying the tuning parameter, which balances the tradeoff between resolution and noise, is developed in terms of maximizing cross-channel correlations. Various validation procedures are performed to demonstrate the effectiveness of the method, which hopefully will provide researchers with a valuable tool in multispectral applications of satellite radiometer data.
The Focal Plane Assembly for the Athena X-Ray Integral Field Unit Instrument
NASA Technical Reports Server (NTRS)
Jackson, B. D.; Van Weers, H.; van der Kuur, J.; den Hartog, R.; Akamatsu, H.; Argan, A.; Bandler, S. R.; Barbera, M.; Barret, D.; Bruijn, M. P.;
2016-01-01
This paper summarizes a preliminary design concept for the focal plane assembly of the X-ray Integral Field Unit on the Athena spacecraft, an imaging microcalorimeter that will enable high-spectral-resolution imaging and point-source spectroscopy. The instrument's sensor array will be a 3840-pixel transition edge sensor (TES) microcalorimeter array, with a frequency-domain-multiplexed SQUID readout system allowing this large-format sensor array to be operated within the thermal constraints of the instrument's cryogenic system. A second TES detector will be operated in close proximity to the sensor array to detect cosmic rays and secondary particles passing through the sensor array, enabling off-line coincidence detection to identify and reject events caused by the in-orbit high-energy particle background. The detectors, operating at 55 mK or less, will be thermally isolated from the instrument cryostat's 2 K stage, while shielding and filtering within the FPA will protect the instrument's sensitive sensor array from stray light from the cryostat environment, low-energy photons entering through the X-ray aperture, low-frequency magnetic fields, and high-frequency electric fields during both on-ground testing and in-flight operation.
Super-resolution reconstruction of hyperspectral images.
Akgun, Toygar; Altunbasak, Yucel; Mersereau, Russell M
2005-11-01
Hyperspectral images are used for aerial and space imagery applications, including target detection, tracking, agricultural, and natural resource exploration. Unfortunately, atmospheric scattering, secondary illumination, changing viewing angles, and sensor noise degrade the quality of these images. Improving their resolution has a high payoff, but applying super-resolution techniques separately to every spectral band is problematic for two main reasons. First, the number of spectral bands can be in the hundreds, which increases the computational load excessively. Second, considering the bands separately does not make use of the information that is present across them. Furthermore, separate band super-resolution does not make use of the inherent low dimensionality of the spectral data, which can effectively be used to improve the robustness against noise. In this paper, we introduce a novel super-resolution method for hyperspectral images. An integral part of our work is to model the hyperspectral image acquisition process. We propose a model that enables us to represent the hyperspectral observations from different wavelengths as weighted linear combinations of a small number of basis image planes. Then, a method for applying super resolution to hyperspectral images using this model is presented. The method fuses information from multiple observations and spectral bands to improve spatial resolution and reconstruct the spectrum of the observed scene as a combination of a small number of spectral basis functions.
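The basis-plane idea, each spectral band expressed as a weighted combination of a small number of basis images, can be illustrated with a truncated SVD of the pixels-by-bands matrix. This is only a sketch of the low-dimensionality argument, not the paper's acquisition model or fusion method; names are ours.

```python
import numpy as np

def low_rank_bands(cube, k):
    """Approximate an (h, w, B) hyperspectral cube with k basis image planes.

    Returns k basis images, a (k, B) weight matrix, and the rank-k
    reconstruction in which every band is a weighted sum of the bases."""
    h, w, B = cube.shape
    X = cube.reshape(-1, B)                        # pixels x bands
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = (U[:, :k] * s[:k]).reshape(h, w, k)    # k basis image planes
    weights = Vt[:k]                               # per-band mixing weights
    approx = (basis.reshape(-1, k) @ weights).reshape(h, w, B)
    return basis, weights, approx
```

When the spectra truly lie near a k-dimensional subspace, the rank-k reconstruction is nearly exact, which is what lets super-resolution operate on a few basis planes instead of hundreds of bands.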
Image registration of naval IR images
NASA Astrophysics Data System (ADS)
Rodland, Arne J.
1996-06-01
In a real-world application, an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference inertial navigation unit, servo inaccuracies, etc. For a high-resolution imaging sensor, this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high-contrast points distributed over the whole image. As long as moving objects in the scene cover only a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is searched for points whose movement diverges from the estimated stabilization error; these points are then assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged for new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the images so that the output of the algorithm could be compared with the artificially added stabilization errors.
NASA Astrophysics Data System (ADS)
Perez Saavedra, L.-M.; Mercier, G.; Yesou, H.; Liege, F.; Pasero, G.
2016-08-01
The Copernicus program of ESA and the European Commission (six Sentinel missions, among them Sentinel-1 with a Synthetic Aperture Radar sensor and Sentinel-2 with 13-band optical sensors at 10 to 60 meter resolution) offers a new opportunity for Earth Observation with high temporal acquisition capability (12-day repeat cycle, 5 days in some geographic areas of the world) at high spatial resolution. These high temporal and spatial resolutions open new challenges in several fields such as image processing and new algorithms for time series and big data analysis. In addition, these missions will be able to analyze several topics of Earth's temporal evolution such as crop vegetation, water bodies, land use and land cover (LULC), sea and ice information, etc. This is particularly useful for end users and policy makers to detect early signs of damage, vegetation illness, flooded areas, etc. From the state of the art, one can find algorithms and methods that use bi-date comparison for change detection [1-3] or time series analysis. These methods are essentially used for target detection or for abrupt change detection, which requires only 2 observations. A Hölder means-based change detection technique has been proposed in [2,3] for high resolution radar images. This so-called MIMOSA technique has been mainly dedicated to man-made change detection in urban areas and to the CARABAS-II project, using a pair of SAR images. An extension to a multitemporal change detection technique has been investigated, but its application to land use and cover changes still has to be validated. The Hölder mean H_p is a pixel-by-pixel time series feature defined by: H_p[X] = [(1/n) Σ_{i=1}^{n} X_i^p]^(1/p), p ∈ ℝ, where X is the time series (N images x S bands x t dates) and n is the number of images in the time series (n > 2). H_p[X] is continuous and monotonically increasing in p for -∞ < p < ∞.
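The Hölder mean is the per-pixel power mean of the time series, H_p[X] = [(1/n) Σ X_i^p]^(1/p). A minimal numpy sketch of computing it over a stack of co-registered images (our own illustration, not the authors' code):

```python
import numpy as np

def holder_mean(stack, p):
    """Pixel-wise Hölder (power) mean of an (n, h, w) image time series.

    Assumes positive pixel values. p = 1 gives the arithmetic mean,
    p -> 0 the geometric mean; large positive p emphasizes per-pixel
    maxima and large negative p per-pixel minima."""
    stack = np.asarray(stack, dtype=float)
    if p == 0:
        return np.exp(np.log(stack).mean(axis=0))  # limit as p -> 0
    return np.mean(stack ** p, axis=0) ** (1.0 / p)
```

Sweeping p yields a family of per-pixel features from one time series, which is what makes the mean useful as a multitemporal change-detection feature.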
The Transition-Edge-Sensor Array for the Micro-X Sounding Rocket
NASA Technical Reports Server (NTRS)
Eckart, M. E.; Adams, J. S.; Bailey, C. N.; Bandler, S. R.; Busch, Sarah Elizabeth; Chervenak J. A.; Finkbeiner, F. M.; Kelley, R. L.; Kilbourne, C. A.; Porst, J. P.;
2012-01-01
The Micro-X sounding rocket program will fly a 128-element array of transition-edge-sensor microcalorimeters to enable high-resolution X-ray imaging spectroscopy of the Puppis-A supernova remnant. To match the angular resolution of the optics while maximizing the field-of-view and retaining a high energy resolution (< 4 eV at 1 keV), we have designed the pixels using 600 x 600 sq. micron Au/Bi absorbers, which overhang 140 x 140 sq. micron Mo/Au sensors. The data-rate capabilities of the rocket telemetry system require the pulse decay to be approximately 2 ms to allow a significant portion of the data to be telemetered during flight. Here we report experimental results from the flight array, including measurements of energy resolution, uniformity, and absorber thermalization. In addition, we present studies of test devices that have a variety of absorber contact geometries, as well as a variety of membrane-perforation schemes designed to slow the pulse decay time to match the telemetry requirements. Finally, we describe the reduction in pixel-to-pixel crosstalk afforded by an angle-evaporated Cu backside heatsinking layer, which provides Cu coverage on the four sidewalls of the silicon wells beneath each pixel.
Expanding the functionality and applications of nanopore sensors
NASA Astrophysics Data System (ADS)
Venta, Kimberly E.
Nanopore sensors have developed into powerful tools for single-molecule studies since their inception two decades ago. Nanopore sensors function as nanoscale Coulter counters, monitoring ionic current modulations as particles pass through a nanopore. While nanopore sensors can be used to study any nanoscale particle, their most notable application is as a low-cost, fast alternative to current DNA sequencing technologies. In recent years, significant progress has been made toward the goal of nanopore-based DNA sequencing, which requires an ambitious combination of a low-noise, high-bandwidth nanopore measurement system and high spatial resolution. In this dissertation, nanopore sensors in thin membranes are developed to improve dimensional resolution, and these membranes are used in parallel with a high-bandwidth amplifier. Using this nanopore sensor system, the signals of three DNA homopolymers are differentiated for the first time in solid-state nanopores. The nanopore noise is also reduced through the addition of a layer of SU-8, a spin-on polymer, to the supporting chip structure. By increasing the temporal and spatial resolution of nanopore sensors, studies of shorter molecules are now possible. Nanopore sensors are beginning to be used for the study and characterization of nanoparticles. Nanoparticles have found many uses, from biomedical imaging to next-generation solar cells. However, further insights into the formation and characterization of nanoparticles would aid in developing improved synthesis methods, leading to more effective and customizable nanoparticles. This dissertation presents two methods of employing nanopore sensors to benefit nanoparticle characterization and fabrication. Nanopores were used to study the formation of individual nanoparticles and to serve as nanoparticle growth templates that could be exploited to create custom nanoparticle arrays.
Additionally, nanopore sensors were used to characterize the surface charge density of anisotropic nanopores, which previously could not be reliably measured. Current nanopore sensor resolution levels have facilitated innovative research on nanoscale systems, including studies of DNA and nanoparticle characterization. Further nanopore system improvements will enable vastly improved DNA sequencing capabilities and open the door to additional nanopore sensing applications.
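The Coulter-counter principle underlying these sensors, a transient drop in ionic current while a molecule occupies the pore, can be sketched as a simple threshold event detector. This is an illustrative sketch only; the synthetic trace, baseline, and 20% blockade depth are invented for the example, not taken from the dissertation:

```python
import numpy as np

def detect_blockades(current, baseline, threshold_frac=0.1):
    """Flag translocation events as contiguous runs where the ionic
    current drops more than threshold_frac below the open-pore baseline.
    Returns a list of (start, end) sample indices (end exclusive)."""
    blocked = current < baseline * (1.0 - threshold_frac)
    events, start = [], None
    for i, b in enumerate(blocked):
        if b and start is None:
            start = i
        elif not b and start is not None:
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(blocked)))
    return events

# Synthetic trace: open-pore current of 1 with two 20%-deep blockades
trace = np.ones(1000)
trace[200:250] = 0.8
trace[600:640] = 0.8
print(detect_blockades(trace, baseline=1.0))  # [(200, 250), (600, 640)]
```

Real analyses additionally low-pass filter the trace and record event depth and dwell time, which is what distinguishes the homopolymer signals above.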
Material condition assessment with eddy current sensors
NASA Technical Reports Server (NTRS)
Goldfine, Neil J. (Inventor); Washabaugh, Andrew P. (Inventor); Sheiretov, Yanko K. (Inventor); Schlicker, Darrell E. (Inventor); Lyons, Robert J. (Inventor); Windoloski, Mark D. (Inventor); Craven, Christopher A. (Inventor); Tsukernik, Vladimir B. (Inventor); Grundy, David C. (Inventor)
2010-01-01
Eddy current sensors and sensor arrays are used for process quality and material condition assessment of conducting materials. In an embodiment, changes in spatially registered high resolution images taken before and after cold work processing reflect the quality of the process, such as intensity and coverage. These images also permit the suppression or removal of local outlier variations. Anisotropy in a material property, such as magnetic permeability or electrical conductivity, can be intentionally introduced and used to assess material condition resulting from an operation, such as a cold work or heat treatment. The anisotropy is determined by sensors that provide directional property measurements. The sensor directionality arises from constructs that use a linear conducting drive segment to impose the magnetic field in a test material. Maintaining the orientation of this drive segment, and associated sense elements, relative to a material edge provides enhanced sensitivity for crack detection at edges.
Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution
Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin
2016-01-01
The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied to numerous complex visual analyses in wild environments, such as visual surveillance and object recognition. However, the captured images/videos are often low resolution and noisy, so such visual data cannot be delivered directly to advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using the expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior, and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit and implicit priors by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM, and visual perception. PMID:26927114
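The alternating E-step/M-step structure described above can be sketched in a few lines. This is a minimal stand-in, not the actual JPISR method: nearest-neighbour upscaling with a back-projection correction and a box filter replace the learned upscaling mapping and the non-local group-sparsity prior:

```python
import numpy as np

def upscale(img, s=2):
    # stand-in for the learned upscaling mapping (E-step)
    return np.kron(img, np.ones((s, s)))

def downsample(img, s=2):
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def denoise(img):
    # stand-in for the non-local group-sparsity filter (M-step):
    # a plain 3x3 box filter with edge padding
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def em_super_resolve(low_res, iters=3, s=2):
    hi = upscale(low_res, s)
    for _ in range(iters):
        # E-step stand-in: enforce consistency with the low-res observation
        err = low_res - downsample(hi, s)
        hi = hi + upscale(err, s)
        # M-step stand-in: denoise the current estimate
        hi = denoise(hi)
    return hi
```

The point of the sketch is the alternation: each pass refines the upscaled estimate against the observation and then regularises it, which is the role the two learned priors play in JPISR.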
Sadowski, Franklin G.; Covington, Steven J.
1987-01-01
Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear power plant emergency at Chernobyl in the Soviet Ukraine. The images demonstrate the unique interpretive capabilities provided by the numerous spectral bands of the Thematic Mapper and the high spatial resolution of the SPOT HRV sensor.
NASA Technical Reports Server (NTRS)
Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.
1973-01-01
Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.
10000 pixels wide CMOS frame imager for earth observation from a HALE UAV
NASA Astrophysics Data System (ADS)
Delauré, B.; Livens, S.; Everaerts, J.; Kleihorst, R.; Schippers, Gert; de Wit, Yannick; Compiet, John; Banachowicz, Bartosz
2009-09-01
MEDUSA is a lightweight high-resolution camera designed to be operated from a solar-powered Unmanned Aerial Vehicle (UAV) flying at stratospheric altitudes. The instrument is a technology demonstrator within the Pegasus program and targets applications such as crisis management and cartography. A special wide-swath CMOS imager has been developed by Cypress Semiconductor Corporation Belgium to meet the specific sensor requirements of MEDUSA. The CMOS sensor has a stitched design comprising a panchromatic and a color sensor on the same die. Each sensor consists of 10000*1200 square pixels (5.5 μm size, novel 6T architecture) with micro-lenses. The exposure is performed by means of a high-efficiency snapshot shutter. The sensor is able to operate at a rate of 30 fps in full-frame readout. Due to a novel pixel design, the sensor has low dark leakage of the memory elements (PSNL) and low parasitic light sensitivity (PLS), yet maintains a relatively high quantum efficiency (QE) and a fill factor (FF) of over 65%. It features a modulation transfer function (MTF) higher than 60% at the Nyquist frequency in both X and Y directions. The measured optical/electrical crosstalk (expressed as MTF) of this 5.5 μm pixel is state-of-the-art. These properties make it possible to acquire sharp images even in low-light conditions.
Restoring the spatial resolution of refocus images on 4D light field
NASA Astrophysics Data System (ADS)
Lim, JaeGuyn; Park, ByungKwan; Kang, JooYoung; Lee, SeongDeok
2010-01-01
This paper presents a method for generating a refocus image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, can control the depth of field after capturing a single image. Such a camera captures the 4D light field (the angular and spatial information of light) on a limited 2D sensor, which reduces 2D spatial resolution because of the unavoidable 2D angular data. That is why a refocus image has a lower spatial resolution than the 2D sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocus image. We have experimentally verified that the recoverable spatial information differs according to the depth of objects from the camera. So, from the selection of refocused regions (at the corresponding depth), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while other regions remain out of focus. Our experimental results show the effect of the proposed method compared to the existing method.
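The refocusing step that this method builds on can be illustrated with a standard shift-and-sum over sub-aperture views. This is a hedged sketch of plain light-field refocusing, not of the resolution-restoration method itself; the disparity parameter selects the synthetic focal plane:

```python
import numpy as np

def refocus(sub_apertures, disparity):
    """Shift-and-sum refocusing: each sub-aperture view (indexed by its
    angular offset u, v) is shifted by disparity*(u, v) and averaged.
    Scene content at the chosen depth aligns and stays sharp; content
    at other depths is averaged over misaligned positions and blurs.
    sub_apertures: dict mapping (u, v) -> 2-D image."""
    acc = None
    for (u, v), img in sub_apertures.items():
        shifted = np.roll(img, (int(round(disparity * u)),
                                int(round(disparity * v))), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(sub_apertures)
```

The sub-pixel residuals of these shifts are exactly where the extra spatial information discussed in the abstract lives.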
NASA Astrophysics Data System (ADS)
Wang, Shifeng; So, Emily; Smith, Pete
2015-04-01
Estimating the number of refugees and internally displaced persons is important for planning and managing an efficient relief operation following disasters and conflicts. Accurate estimates of refugee numbers can be inferred from the number of tents. Extracting tents from high-resolution satellite imagery has recently been suggested. However, it is still a significant challenge to extract tents automatically and reliably from remote sensing imagery. This paper describes a novel automated method, which is based on mathematical morphology, to generate a camp map to estimate the refugee numbers by counting tents on the camp map. The method is especially useful in detecting objects with a clear shape, size, and significant spectral contrast with their surroundings. Results for two study sites with different satellite sensors and different spatial resolutions demonstrate that the method achieves good performance in detecting tents. The overall accuracy can be up to 81% in this study. Further improvements should be possible if over-identified isolated single pixel objects can be filtered. The performance of the method is impacted by spectral characteristics of satellite sensors and image scenes, such as the extent of area of interest and the spatial arrangement of tents. It is expected that the image scene would have a much higher influence on the performance of the method than the sensor characteristics.
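A morphology-based detector of the kind described (small bright objects with clear shape, size, and spectral contrast, with isolated single-pixel responses filtered out) can be sketched as follows. The structuring-element size, contrast threshold, and minimum object size here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

def detect_tents(band, struct_size=3, contrast=0.2, min_pixels=2):
    """White top-hat isolates objects smaller than the structuring
    element, a contrast threshold binarises them, and detections
    smaller than min_pixels (isolated single pixels) are dropped.
    Returns the kept-object mask and the object count."""
    opened = ndimage.grey_opening(band, size=(struct_size, struct_size))
    tophat = band - opened
    mask = tophat > contrast
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
    return keep, int((sizes >= min_pixels).sum())
```

Counting the labelled objects on such a camp map is then the refugee-number proxy described in the abstract.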
A distributed automatic target recognition system using multiple low resolution sensors
NASA Astrophysics Data System (ADS)
Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj
2008-04-01
In this paper, we propose a multi-agent system which uses swarming techniques to perform high-accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can cooperatively share the information from low-resolution images of different looks and use this information to perform high-accuracy ATR. An advanced approach based on multiple-agent Unmanned Aerial Vehicle (UAV) systems is proposed which integrates the processing capabilities, combines detection reporting with live video exchange, and adds swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene, and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bit shifts and additions, yet achieves a 16X pixel resolution enhancement, and is moreover parallelizable. We develop advanced, adaptive particle-filtering-based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the received images from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. A robust dynamic homography estimation algorithm is proposed to address this, using homography decomposition and ground plane surface estimation.
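The claim that the PWS enhancement needs only bit shifts and additions follows from using power-of-two weights: every multiply in the weighted sum becomes a left shift and the normalisation a right shift. The actual PWS partitions and weights are learned and are not given in the abstract; this sketch only illustrates the arithmetic, with invented weights:

```python
def pws_estimate(samples, shifts, norm_shift):
    """Weighted sum with power-of-two weights: weight 2**s is applied
    as a left shift, and dividing by the total weight 2**norm_shift
    is a right shift, so no multiplier or divider is needed."""
    acc = 0
    for v, s in zip(samples, shifts):
        acc += v << s
    return acc >> norm_shift

# weights (4, 2, 1, 1) sum to 8, so normalisation is a shift by 3
print(pws_estimate([10, 10, 10, 10], [2, 1, 0, 0], 3))  # 10
```

This shift-and-add structure is also what makes the scheme cheap to parallelise across pixels.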
NASA Astrophysics Data System (ADS)
Alshehhi, Rasha; Marpu, Prashanth Reddy
2017-04-01
Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
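The merging stage of such a hierarchical graph-based segmentation can be sketched with a union-find over segment adjacencies. This is an illustrative simplification: a single scalar mean stands in for the color and shape features, and the tolerance is an invented parameter:

```python
def merge_segments(means, edges, tol=10.0):
    """Merge adjacent segments (graph edges) whose mean feature values
    differ by less than tol, processing the most similar pairs first
    as in hierarchical merging. means: per-segment mean feature;
    edges: (i, j) adjacency pairs. Returns a merged label per segment."""
    parent = list(range(len(means)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in sorted(edges, key=lambda e: abs(means[e[0]] - means[e[1]])):
        if abs(means[i] - means[j]) < tol:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(means))]

# three road-like segments (similar brightness) and one building segment
labels = merge_segments([100.0, 104.0, 98.0, 200.0], [(0, 1), (1, 2), (2, 3)])
```

In the paper the similarity combines color and shape cues after Gabor and morphological filtering; the splitting step and post-processing then clean up the merged road regions.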
Pathfinder in flight over Hawaii
1997-08-28
Pathfinder, NASA's solar-powered, remotely-piloted aircraft is shown while it was conducting a series of science flights to highlight the aircraft's science capabilities while collecting imagery of forest and coastal zone ecosystems on Kauai, Hawaii. The flights also tested two new scientific instruments, a high spectral resolution Digital Array Scanned Interferometer (DASI) and a high spatial resolution Airborne Real-Time Imaging System (ARTIS). The remote sensor payloads were designed by NASA's Ames Research Center, Moffett Field, California, to support NASA's Mission to Planet Earth science programs.
Pathfinder over runway in Hawaii
1997-08-28
Pathfinder, NASA's solar-powered, remotely-piloted aircraft is shown while it was conducting a series of science flights to highlight the aircraft's science capabilities while collecting imagery of forest and coastal zone ecosystems on Kauai, Hawaii. The flights also tested two new scientific instruments, a high-spectral-resolution Digital Array Scanned Interferometer (DASI) and a high-spatial-resolution Airborne Real-Time Imaging System (ARTIS). The remote sensor payloads were designed by NASA's Ames Research Center, Moffett Field, California, to support NASA's Mission to Planet Earth science programs.
Proof of principle study of the use of a CMOS active pixel sensor for proton radiography.
Seco, Joao; Depauw, Nicolas
2011-02-01
Proof of principle study of the use of a CMOS active pixel sensor (APS) to produce proton radiographic images using the proton beam at the Massachusetts General Hospital (MGH). A CMOS APS, previously tested for use in x-ray radiation therapy applications, was used for proton-beam radiographic imaging at MGH. Two different setups were used as a proof of principle that CMOS can be used as a proton imaging device: (i) a pen with two metal screws to assess the spatial resolution of the CMOS and (ii) a phantom with lung tissue, bone tissue, and water to assess the tissue contrast of the CMOS. The sensor was then traversed by a double-scattered monoenergetic proton beam at 117 MeV, and the energy deposition inside the detector was recorded to assess its energy response. Conventional x-ray images with a similar setup at 70 kVp and proton images using commercial Gafchromic EBT2 and Kodak X-Omat V films were also taken for comparison purposes. Images were successfully acquired and compared to the x-ray kVp and proton EBT2/X-Omat film images. The spatial resolution of the CMOS detector image is subjectively comparable to the EBT2 and Kodak X-Omat V film images obtained at the same object-detector distance. X-rays have apparently higher spatial resolution than the CMOS. However, further studies with different commercial films using proton-beam irradiation demonstrate that the distance of the detector to the object determines the amount of proton scatter contributing to the proton image. Proton images obtained with films at different distances from the source indicate that proton scatter significantly affects the CMOS image quality. Proton radiographic images were successfully acquired at MGH using a CMOS active pixel sensor detector. The CMOS demonstrated spatial resolution subjectively comparable to films at the same object-detector distance. Further work will be done to establish the spatial and energy resolution of the CMOS detector for protons.
The development and use of CMOS in proton radiography could allow in vivo proton range checks, patient setup QA, and real-time tumor tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steiner, J; Matthews, K; Jia, G
Purpose: To test the feasibility of using a digital endorectal x-ray sensor for improved image resolution of permanent brachytherapy seed implants compared to conventional CT. Methods: Two phantoms simulating the male pelvic region were used to test the capabilities of a digital endorectal x-ray sensor for imaging permanent brachytherapy seed implants. Phantom 1 was constructed from acrylic plastic with cavities milled in the locations of the prostate and the rectum. The prostate cavity was filled with a Styrofoam plug implanted with 10 training seeds. Phantom 2 was constructed from tissue-equivalent gelatins and contained a prostate phantom implanted with 18 strands of training seeds. For both phantoms, an intraoral digital dental x-ray sensor was placed in the rectum within 2 cm of the seed implants. Scout scans were taken of the phantoms over a limited arc angle using a CT scanner (80 kV, 120-200 mA). The dental sensor was then removed from the phantoms, and normal helical CT and scout (0 degree) scans using typical parameters for pelvic CT (120 kV, auto-mA) were collected. A shift-and-add tomosynthesis algorithm was developed to localize the seed plane location normal to the detector face. Results: The endorectal sensor produced images with improved resolution compared to CT scans. Seed clusters and individual seed geometry were more discernible using the endorectal sensor. Seed 3D locations, including seeds that were not located in every projection image, were discernible using the shift-and-add algorithm. Conclusion: This work shows that digital endorectal x-ray sensors are a feasible method for improving imaging of permanent brachytherapy seed implants. Future work will consist of optimizing the tomosynthesis technique to produce higher resolution, lower dose images of (1) permanent brachytherapy seed implants for post-implant dosimetry and (2) fine anatomic details for imaging and managing prostatic disease compared to CT images. Funding: LSU Faculty Start-up Funding.
Disclosure: XDR Radiography has loaned our research group the digital x-ray detector used in this work. CoI: None.
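Shift-and-add tomosynthesis of the kind described above reduces to a simple loop: for each candidate depth plane, every limited-angle projection is shifted by that plane's expected parallax and the results are averaged, so seeds in the plane reinforce while out-of-plane structures blur. The per-depth shift tables here are invented for illustration; in practice they follow from the scan geometry:

```python
import numpy as np

def shift_and_add(projections, shifts_per_depth):
    """For each candidate depth, shift each projection by that depth's
    per-projection displacement (columns here) and average. Returns a
    dict mapping depth label -> reconstructed plane."""
    planes = {}
    for depth, shifts in shifts_per_depth.items():
        acc = np.zeros_like(projections[0], dtype=float)
        for proj, s in zip(projections, shifts):
            acc += np.roll(proj, s, axis=1)
        planes[depth] = acc / len(projections)
    return planes

# A seed whose apparent column moves 1 pixel per projection: the shifts
# for its true depth realign it; the wrong depth smears it out.
p = [np.zeros((1, 7)) for _ in range(3)]
p[0][0, 3] = p[1][0, 4] = p[2][0, 5] = 1.0
planes = shift_and_add(p, {"in_focus": [0, -1, -2], "out_of_focus": [0, 0, 0]})
```

This realign-or-smear behaviour is how the algorithm localises the seed plane normal to the detector face, even for seeds missing from some projections.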
Slant path range gated imaging of static and moving targets
NASA Astrophysics Data System (ADS)
Steinvall, Ove; Elmqvist, Magnus; Karlsson, Kjell; Gustafsson, Ove; Chevalier, Tomas
2012-06-01
This paper reports experiments and analysis of slant-path imaging using 1.5 μm and 0.8 μm gated imaging. The investigation is a follow-up on the measurements reported last year at the laser radar conference at SPIE Orlando. The sensor, a SWIR camera, collected both passive and active images along a 2 km path over an airfield. The sensor was elevated by a lift in steps from 1.6 to 13.5 meters. Targets were resolution charts and human targets. The human target was holding various items and performing certain tasks, some of high relevance in defence and security. One of the main purposes of this investigation was to compare the recognition of these human targets and their activities with the resolution information obtained from conventional resolution charts. The data collection of human targets was also made from our rooftop laboratory at about 13 m above ground. The turbulence was measured along the path with anemometers and scintillometers. We also included the Obzerv camera working at 0.8 μm in some tests. The paper presents images for both passive and active modes obtained at different elevations and discusses the results from both technical and system perspectives.
Quadrilinear CCD sensors for the multispectral channel of spaceborne imagers
NASA Astrophysics Data System (ADS)
Materne, Alex; Gili, Bruno; Laubier, David; Gimenez, Thierry
2001-12-01
The PLEIADES-HR Earth Observation satellites will combine a high-resolution panchromatic channel (0.7 m at nadir) and a multispectral channel allowing a 2.8 m resolution. This paper presents the main specifications, design, and performance of a 52-micron-pitch quadrilinear CCD sensor developed by ATMEL under CNES contract for the multispectral channel of the PLEIADES-HR instrument. The monolithic CCD device includes four lines of 1500 pixels, each line dedicated to a narrow spectral band within the blue to near-infrared spectrum. The design of the photodiodes and CCD registers, larger than those developed up to now for CNES spaceborne imagers, required specific structures to break up the large equipotential areas where charges do not flow properly. Results are presented on the options that were tested to improve sensitivity, maintain transfer efficiency, and reduce power dissipation. The four spectral bands are achieved by four stripe filters made by SAGEM-REOSC PRODUCTS on a glass substrate, to be assembled on the sensor window. Line-to-line spacing on the silicon die takes into account the results of straylight analysis. A mineral layer with high optical absorption is deposited between photosensitive lines to further reduce straylight.
Poyneer, Lisa A; Bauman, Brian J
2015-03-31
Reference-free compensated imaging makes an estimation of the Fourier phase of a series of images of a target. The Fourier magnitude of the series of images is obtained by dividing the power spectral density of the series of images by an estimate of the power spectral density of atmospheric turbulence from a series of scene based wave front sensor (SBWFS) measurements of the target. A high-resolution image of the target is recovered from the Fourier phase and the Fourier magnitude.
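The magnitude-recovery step in this abstract reduces to a deconvolution in the power spectrum: average the PSD over the frame series, divide out the turbulence PSD, and take the square root. A hedged numerical sketch, with the turbulence PSD treated as given (in practice it is estimated from the SBWFS measurements):

```python
import numpy as np

def recover_fourier_magnitude(frames, turbulence_psd, eps=1e-12):
    """Average the power spectral density over the frame series, divide
    out the turbulence PSD estimate, and take the square root to obtain
    the target's Fourier magnitude. eps guards against division by zero
    at frequencies the turbulence estimate does not cover."""
    psd = np.mean([np.abs(np.fft.fft2(f)) ** 2 for f in frames], axis=0)
    return np.sqrt(psd / (turbulence_psd + eps))
```

Combining this magnitude with the separately estimated Fourier phase (via an inverse FFT) is what yields the compensated high-resolution image.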
Novel eye-safe line scanning 3D laser-radar
NASA Astrophysics Data System (ADS)
Eberle, B.; Kern, Tobias; Hammer, Marcus; Schwanke, Ullrich; Nowak, Heinrich
2014-10-01
Today, the civil market provides quite a number of different 3D sensors covering ranges up to 1 km. Typically these sensors are based on single-element detectors, which suffer from limited spatial resolution at larger distances. Tasks demanding reliable object classification at long ranges can be fulfilled only by sensors consisting of detector arrays, which ensure sufficient frame rates and high spatial resolution. Worldwide there are many efforts to develop 3D detectors based on two-dimensional arrays. This paper presents first results on the performance of a recently developed 3D imaging laser radar sensor working in the short-wave infrared (SWIR) at 1.5 μm. It consists of a novel Cadmium Mercury Telluride (CMT) linear-array APD detector with 384x1 elements at a pitch of 25 μm, developed by AIM Infrarot Module GmbH. The APD elements are designed to work in the linear (non-Geiger) mode. Each pixel provides a time-of-flight measurement and, due to the linear detection mode, allows the detection of three successive echoes. The resolution in depth is 15 cm; the maximum repetition rate is 4 kHz. We discuss various sensor concepts regarding possible applications and their dependence on system parameters such as field of view, frame rate, spatial resolution, and range of operation.
Two-step single slope/SAR ADC with error correction for CMOS image sensor.
Tang, Fang; Bermak, Amine; Amira, Abbes; Amor Benammar, Mohieddine; He, Debiao; Zhao, Xiaojin
2014-01-01
Conventional two-step ADCs for CMOS image sensors require full-resolution noise performance in the first-stage single-slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single-slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single-slope ADC generates 3 bits of data and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full-resolution noise performance, the first-stage single-slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single-slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy-efficiency figure of merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply, and the chip-area efficiency is 84 kμm²·cycles/sample.
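The role of the redundant bit can be illustrated with the usual sub-ranging arithmetic: the first-stage code overlaps the second-stage range by one bit, so a coarse decision that is off by one step is absorbed when the codes are summed. The bit widths follow the abstract (3 data bits plus 1 redundant coarse bit, 8 fine bits); the combining rule itself is a textbook sketch, not necessarily the authors' exact algorithm:

```python
def combine_two_step(coarse_code, fine_code):
    """Sub-ranging recombination: the 4-bit coarse code is shifted by
    7 rather than 8, so its LSB overlaps the 8-bit fine code's MSB;
    summing the overlapping codes yields the final 11-bit result."""
    return (coarse_code << 7) + fine_code

# A first-stage decision one step low is corrected by the overlap:
# both cases digitise to the same 11-bit code.
assert combine_two_step(3, 130) == combine_two_step(4, 2) == 514
```

It is this tolerance to coarse-stage errors that relaxes the single-slope stage's noise requirement in the proposed scheme.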
Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras
Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin
2016-01-01
The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731
Lange, Maximilian; Dechant, Benjamin; Rebmann, Corinna; Vohland, Michael; Cuntz, Matthias; Doktor, Daniel
2017-08-11
Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure.
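The index compared across the ground-based, Sentinel-2, and MODIS sensors in this study is computed the same way everywhere, from the near-infrared and red reflectances:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red)/(NIR + Red),
    bounded in [-1, 1]; dense green vegetation pushes it toward 1."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# illustrative reflectances (invented values, not from the study)
print(round(float(ndvi(0.5, 0.08)), 3))  # 0.724
```

Because the formula is a ratio, it partially cancels illumination differences, which is one reason time series from such different platforms can be compared at all.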
Gu, Yingxin; Wylie, Bruce K.
2015-01-01
The satellite-derived growing season time-integrated Normalized Difference Vegetation Index (GSN) has been used as a proxy for vegetation biomass productivity. The 250-m GSN data estimated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have been used for terrestrial ecosystem modeling and monitoring. High temporal resolution and a wide range of wavelengths make the MODIS land surface products robust and reliable. The long-term 30-m Landsat data provide spatially detailed information for characterizing human-scale processes and have been used for land cover and land change studies. The main goal of this study is to combine 250-m MODIS GSN and 30-m Landsat observations to generate a quality-improved high-spatial-resolution (30-m) GSN database. A rule-based piecewise regression GSN model based on MODIS and Landsat data was developed. Results show a strong correlation between predicted GSN and actual GSN (r = 0.97, average error = 0.026). The most important Landsat variables in the GSN model are the Normalized Difference Vegetation Indices (NDVIs) in May and August. The derived MODIS-Landsat-based 30-m GSN map provides biophysical information for moderate-scale ecological features. This multi-sensor study retains the detailed seasonal dynamic information captured by MODIS and leverages the high-resolution information from Landsat, which will be useful for regional ecosystem studies.
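A rule-based piecewise regression predictor of the kind used above can be sketched generically: rules partition the predictor space, and each partition gets its own linear model. The breakpoints and coefficients below are invented toy values, not those of the GSN model:

```python
import numpy as np

def piecewise_predict(x, breakpoints, models):
    """Route each sample to the linear model for its segment and
    evaluate it. breakpoints: sorted segment boundaries; models:
    (slope, intercept) per segment, one more than len(breakpoints)."""
    x = np.asarray(x, dtype=float)
    idx = np.searchsorted(breakpoints, x)
    slopes = np.array([m[0] for m in models])
    intercepts = np.array([m[1] for m in models])
    return slopes[idx] * x + intercepts[idx]

# toy model: two regimes split at a predictor value of 0.5
pred = piecewise_predict([0.2, 0.8], [0.5], [(1.0, 0.0), (2.0, -0.5)])
```

In the study itself the rules and per-segment regressions are induced from the MODIS GSN and Landsat NDVI variables rather than hand-set.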
Image quality testing of assembled IR camera modules
NASA Astrophysics Data System (ADS)
Winters, Daniel; Erichsen, Patrik
2013-10-01
Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like the minimum resolvable temperature difference (MRTD), which gives only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF), for broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, and chief ray angle can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high-resolution sensors. Other important points discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.
NASA Astrophysics Data System (ADS)
Singh, Dharmendra; Kumar, Harish
Earth observation satellites provide data covering different portions of the electromagnetic spectrum at different spatial and spectral resolutions. The increasing availability of information products generated from satellite images is extending our ability to understand the patterns and dynamics of Earth resource systems at all scales of inquiry. One of the most important applications is the generation of land cover classifications from satellite images for understanding the actual status of various land cover classes. The prospect for the use of satellite images in land cover classification is an extremely promising one. The quality of satellite images available for land-use mapping is improving rapidly through the development of advanced sensor technology. Particularly noteworthy in this regard is the improved spatial and spectral resolution of the images captured by new satellite sensors like MODIS, ASTER, Landsat 7, and SPOT 5. For the full exploitation of increasingly sophisticated multisource data, fusion techniques are being developed. Fused images may enhance interpretation capabilities. The images used for fusion have different temporal and spatial resolutions, so the fused image provides a more complete view of the observed objects. One of the main aims of image fusion is to integrate different data in order to obtain more information than can be derived from each single sensor alone. A good example of this is the fusion of images acquired by sensors of different spatial and spectral resolution. Researchers have applied fusion techniques for three decades and have proposed various useful methods. The importance of high-quality synthesis of spectral information is well suited to, and has been implemented for, land cover classification. More recently, an underlying multiresolution analysis employing the discrete wavelet transform has been used in image fusion.
Multisensor image fusion is a tradeoff between the spectral information from a low-resolution multispectral image and the spatial information from a high-resolution image; with wavelet-transform-based fusion methods, this tradeoff is easy to control. A newer transform, the curvelet transform, was introduced in recent years by Starck. The curvelet transform is obtained by applying a ridgelet transform to square blocks of the detail frames of an undecimated wavelet decomposition. Since the ridgelet transform possesses basis functions matching directional straight lines, the curvelet transform can represent piecewise linear contours on multiple scales through a few significant coefficients. This property leads to a better separation between geometric detail and background noise, which may easily be reduced by thresholding curvelet coefficients before they are used for fusion. The Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) instruments provide high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 μm to 14.4 μm, and the data are freely available. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km. In this paper, band 1 (250 m spatial resolution, 620-670 nm bandwidth) and band 2 (250 m spatial resolution, 842-876 nm bandwidth) are considered, as these bands have features well suited to identifying agriculture and other land covers. In January 2006, the Advanced Land Observing Satellite (ALOS) was successfully launched by the Japan Aerospace Exploration Agency (JAXA). The Phased Array type L-band SAR (PALSAR) sensor onboard the satellite acquires SAR imagery at a wavelength of 23.5 cm (frequency 1.27 GHz) with multimode and multipolarization observation capabilities.
PALSAR can operate in several modes: the fine-beam single (FBS) polarization mode (HH), fine-beam dual (FBD) polarization mode (HH/HV or VV/VH), polarimetric (PLR) mode (HH/HV/VH/VV), and ScanSAR (WB) mode (HH/VV) [15]. These capabilities make PALSAR imagery very attractive for a spatially and temporally consistent monitoring system. The key idea of Principal Component Analysis is that most of the information within all the bands can be compressed into a much smaller number of bands with little loss of information: it extracts the low-dimensional subspaces that capture the main linear correlations among the high-dimensional image data. This facilitates viewing the explained variance or signal in the available imagery, allowing both gross and more subtle features to be seen. In this paper we explore fusion for enhancing the land cover classification of low-resolution, freely available satellite data. For this purpose, we fuse the PALSAR principal component data with the MODIS principal component data. First, the principal components of MODIS bands 1 and 2 are computed; similarly, the principal components of the PALSAR HH-, HV-, and VV-polarized data are computed. The PALSAR principal component image is then fused with the MODIS principal component image. The aim of this paper is to analyze the effect of fusing PALSAR data with MODIS data on the classification accuracy of major land cover types such as agriculture, water, and urban areas. The curvelet transform is applied to fuse the two satellite images, and minimum-distance classification is applied to the resulting fused image. It is qualitatively and visually observed that the overall classification accuracy of the MODIS image is enhanced after fusion.
This type of fusion technique may be quite helpful in the near future for using freely available satellite data to develop monitoring systems for different land cover classes on the earth.
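The principal-component step in the pipeline above can be sketched as follows. This is a minimal NumPy illustration, not the authors' processing chain: the toy two-band stack, its size, and the direct eigendecomposition of the band covariance are assumptions for demonstration.

```python
import numpy as np

def first_principal_component(bands):
    """Project a stack of co-registered bands (n_bands, H, W) onto
    their first principal component image."""
    n, h, w = bands.shape
    x = bands.reshape(n, -1).astype(float)
    x -= x.mean(axis=1, keepdims=True)       # centre each band
    cov = x @ x.T / (x.shape[1] - 1)         # n_bands x n_bands covariance
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    return (vecs[:, -1] @ x).reshape(h, w)   # weights of the largest eigenvalue

# toy stack of two strongly correlated "bands"
rng = np.random.default_rng(0)
base = rng.normal(size=(32, 32))
bands = np.stack([base + 0.1 * rng.normal(size=(32, 32)),
                  2.0 * base + 0.1 * rng.normal(size=(32, 32))])
pc1 = first_principal_component(bands)       # shape (32, 32)
```

In the paper's setting this would be run once on the MODIS bands and once on the PALSAR polarizations before the curvelet-domain fusion.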
Change detection of polarimetric SAR images based on the KummerU Distribution
NASA Astrophysics Data System (ADS)
Chen, Quan; Zou, Pengfei; Li, Zhen; Zhang, Ping
2014-11-01
In PolSAR image segmentation, change detection, and classification, the classical Wishart distribution has long been used, but it is especially suited to low-resolution SAR images because, with traditional sensors, only a small number of scatterers are present in each resolution cell. With the improvement of SAR systems in recent years, the classical statistical models must be reconsidered for the high-resolution, polarimetric imagery acquired by these advanced systems. In this study, SAR image segmentation based on the level-set method with distance regularized level-set evolution (DRLSE) is performed using Envisat/ASAR single-polarization data and Radarsat-2 polarimetric images, respectively. For the latter, the KummerU heterogeneous clutter model is used to overcome the homogeneity hypothesis that fails at high-resolution cells, and an enhanced distance regularized level-set evolution (DRLSE-E) is applied to ensure accurate computation and stable level-set evolution. Finally, change detection based on a time series of four polarimetric Radarsat-2 images is carried out for the Genhe area of the Inner Mongolia Autonomous Region, northeastern China, where a heavy flood disaster occurred during the summer of 2013. Results show the recommended segmentation method can detect changes of the watershed effectively.
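For context, the classical Wishart classifier mentioned above assigns a pixel's sample covariance matrix to the class centre minimising a log-likelihood distance. The sketch below illustrates that generic distance only; it is not the paper's KummerU model, which replaces the homogeneous Wishart assumption for heterogeneous high-resolution clutter.

```python
import numpy as np

def wishart_distance(C, Sigma):
    """Wishart-based dissimilarity of a sample polarimetric covariance C
    from a class-centre covariance Sigma: ln|Sigma| + tr(Sigma^-1 C).
    Minimising this over class centres gives the classical Wishart
    classifier."""
    sign, logdet = np.linalg.slogdet(Sigma)
    return float(logdet + np.trace(np.linalg.solve(Sigma, C)).real)

# toy check: a matching class centre scores lower than a mismatched one
d_match = wishart_distance(np.eye(3), np.eye(3))           # = 3.0
d_mismatch = wishart_distance(np.eye(3), 2.0 * np.eye(3))  # = 3*ln(2) + 1.5
```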
High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.
Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S
2018-03-05
A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided, making possible an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that are difficult to achieve in a SPAD array using time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter-resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter-scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter-scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill factor, high frame rate, and large array format of this range-gated CMOS SPAD array.
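The cross-correlation depth estimate from a gate-delay scan can be sketched roughly as below: matched-filter the per-delay photon counts with a pulse model, take the peak delay, and convert the round-trip time to range. The Gaussian pulse shape, delay step, count levels, and 2 m target are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def depth_from_gate_scan(counts, delays_s, sigma_s=100e-12):
    """Estimate range from a gate-delay scan: correlate the photon counts
    with a Gaussian pulse model, take the peak delay, and convert the
    round-trip time to distance (d = c * t / 2)."""
    dt = delays_s[1] - delays_s[0]
    half = int(4 * sigma_s / dt)
    t = np.arange(-half, half + 1) * dt
    kernel = np.exp(-0.5 * (t / sigma_s) ** 2)          # symmetric matched filter
    smoothed = np.convolve(counts - np.median(counts), kernel, mode="same")
    return C * delays_s[np.argmax(smoothed)] / 2.0

# toy scan: 50 ps delay steps, target at 2.0 m (round trip ~13.3 ns)
delays = np.arange(0.0, 20e-9, 50e-12)
t0 = 2 * 2.0 / C
rng = np.random.default_rng(1)
signal = np.exp(-0.5 * ((delays - t0) / 100e-12) ** 2)
counts = rng.poisson(5 + 200 * signal).astype(float)    # Poisson photon counts
est = depth_from_gate_scan(counts, delays)
```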
Wang, Huan; Jing, Miao; Li, Yulong
2018-06-01
Measuring the precise dynamics of specific neurotransmitters and neuromodulators in the brain is essential for understanding how information is transmitted and processed. Thanks to the development and optimization of various genetically encoded sensors, we are approaching the stage in which a few key neurotransmitters/neuromodulators can be imaged with high cell specificity and good signal-to-noise ratio. Here, we summarize recent progress regarding these sensors, focusing on their design principles, properties, potential applications, and current limitations. We also highlight the G protein-coupled receptor (GPCR) scaffold as a promising platform that may enable the scalable development of the next generation of sensors, enabling the rapid, sensitive, and specific detection of a large repertoire of neurotransmitters/neuromodulators in vivo at cellular or even subcellular resolution. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Jeong, Myeong-Jae; Hsu, N. Christina; Kwiatkowska, Ewa J.; Franz, Bryan A.; Meister, Gerhard; Salustro, Clare E.
2012-01-01
The retrieval of aerosol properties from spaceborne sensors requires highly accurate and precise radiometric measurements, thus placing stringent requirements on sensor calibration and characterization. For the Terra/Moderate Resolution Imaging Spectroradiometer (MODIS), the characteristics of the detectors of certain bands, particularly band 8 [(B8); 412 nm], have changed significantly over time, leading to increased calibration uncertainty. In this paper, we explore the possibility of utilizing a cross-calibration method, developed for characterizing the Terra/MODIS detectors in the ocean bands by the National Aeronautics and Space Administration Ocean Biology Processing Group, to improve aerosol retrieval over bright land surfaces. We found that the Terra/MODIS B8 reflectance corrected using the cross-calibration method resulted in significant improvements in the retrieved aerosol optical thickness when compared with that from the Multi-angle Imaging Spectroradiometer, Aqua/MODIS, and the Aerosol Robotic Network. The method reported in this paper is implemented for the operational processing of the Terra/MODIS Deep Blue aerosol products.
Detection of Obstacles in Monocular Image Sequences
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia
1997-01-01
The ability to detect and locate runways/taxiways and obstacles in images captured using on-board sensors is an essential first step in the automation of the low-altitude flight, landing, takeoff, and taxiing phases of aircraft navigation. Automation of these functions under different weather and lighting situations can be facilitated by using sensors of different modalities. An aircraft-based Synthetic Vision System (SVS), with sensors of different modalities mounted on-board, complements current ground-based systems in functions such as detection and prevention of potential runway collisions, airport surface navigation, and landing and takeoff in all weather conditions. In this report, we address the problem of detecting objects in monocular image sequences obtained from two types of sensors, a Passive Millimeter Wave (PMMW) sensor and a video camera, mounted on-board a landing aircraft. Since the sensors differ in their spatial resolution and the quality of the images they produce, different approaches are used for detecting obstacles depending on the sensor type; these approaches are described separately in two parts of this report. The goal of the first part is to develop a method for detecting runways/taxiways and objects on the runway in a sequence of images obtained from a moving PMMW sensor. Since the sensor resolution is low and the image quality is very poor, we propose a model-based approach for detecting runways/taxiways. We use an approximate runway model and the camera position information provided by the Global Positioning System (GPS) to define regions of interest in the image plane in which to search for image features corresponding to the runway markers. Once the runway region is identified, we use histogram-based thresholding to detect obstacles on the runway and in regions outside the runway. This algorithm is tested using image sequences simulated from a single real PMMW image.
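The histogram-based thresholding step can be illustrated with a standard Otsu-style threshold that maximises between-class variance over the intensity histogram. The report does not specify this exact criterion, so treat the sketch as one plausible realisation with made-up intensity distributions.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the intensity threshold maximising between-class variance
    of the image histogram (Otsu's method)."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 probability
    m = np.cumsum(p * centers)              # cumulative mean
    mt = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    var_between[~np.isfinite(var_between)] = 0.0
    return centers[np.argmax(var_between)]

# toy bimodal scene: dark runway surface plus a bright obstacle
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 5000),    # runway pixels
                      rng.normal(0.8, 0.05, 1000)])   # obstacle pixels
t = otsu_threshold(img)
```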
Thermal imaging of Al-CuO thermites
NASA Astrophysics Data System (ADS)
Densmore, John; Sullivan, Kyle; Kuntz, Joshua; Gash, Alex
2013-06-01
We have performed spatial in-situ temperature measurements of aluminum-copper oxide thermite reactions using high-speed color pyrometry. Electrophoretic deposition was used to create thermite microstructures. Tests were performed with micron- and nano-sized particles at different stoichiometries. The color pyrometry was performed using a high-speed color camera, whose color filter array on the image sensor collects light within three spectral bands. Assuming a gray-body emission spectrum, a multi-wavelength ratio analysis allows a temperature to be calculated. An advantage of using a two-dimensional image sensor is that it allows heterogeneous flames to be measured with high spatial resolution. Light from the initial combustion of the Al-CuO can be differentiated from the light created by late-time oxidation with the atmosphere. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
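The ratio analysis rests on the fact that, for a gray body, emissivity cancels in the ratio of intensities at two wavelengths. A minimal per-pixel sketch under the Wien approximation follows; the filter wavelengths, emissivity, and temperature are made-up values, and a real camera would use calibrated band responses rather than single wavelengths.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant h*c/k_B, in m*K

def wien_intensity(lam, T, eps=1.0):
    """Gray-body spectral intensity in the Wien approximation."""
    return eps * lam ** -5.0 * np.exp(-C2 / (lam * T))

def ratio_temperature(i1, i2, lam1, lam2):
    """Invert the two-colour intensity ratio for temperature; the
    (unknown) gray-body emissivity cancels in the ratio."""
    r = i1 / i2
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * np.log(lam2 / lam1) - np.log(r))

# self-consistency check at an assumed flame temperature of 2500 K
i_blue = wien_intensity(450e-9, 2500.0, eps=0.3)   # hypothetical blue channel
i_red = wien_intensity(650e-9, 2500.0, eps=0.3)    # hypothetical red channel
T_est = ratio_temperature(i_blue, i_red, 450e-9, 650e-9)
```

Applying this pixel-by-pixel across the two-dimensional sensor is what yields the spatially resolved temperature maps described above.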
Advances in HgCdTe APDs and LADAR Receivers
NASA Technical Reports Server (NTRS)
Bailey, Steven; McKeag, William; Wang, Jinxue; Jack, Michael; Amzajerdian, Farzin
2010-01-01
Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain (i.e., APDs) with very-low-noise readout integrated circuits. Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns reported with both time and intensity to enable full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths, and linear-mode photon counting has been demonstrated. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In this presentation we review progress in high-resolution scanning, staring, and ultra-high-sensitivity photon-counting LADAR sensors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki Ha; Becker, Alex; Tseng, Hung-Wen
2002-11-20
Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic (EM) measurements at frequencies between 1 and 100 MHz are important for such applications, because the induction number of many targets is small and the ability to determine the dielectric permittivity in addition to electrical conductivity of the subsurface is possible. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high-resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. We are developing a non-invasive method for accurately mapping the electrical conductivity and dielectric permittivity of the shallow subsurface using the EM impedance approach (Frangos, 2001; Lee and Becker, 2001; Song et al., 2002). Electric and magnetic sensors are being tested in a known area against theoretical predictions, thereby ensuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki Ha; Becker, Alex; Tseng, Hung-Wen
2001-06-10
Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic (EM) measurements at frequencies between 1 and 100 MHz are important for such applications, because the induction number of many targets is small and the ability to determine the dielectric permittivity in addition to electrical conductivity of the subsurface is possible. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high-resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. We are developing a non-invasive method for accurately mapping the electrical conductivity and dielectric permittivity of the shallow subsurface using the EM impedance approach (Frangos, 2001; Lee and Becker, 2001). Electric and magnetic sensors are being tested in a known area against theoretical predictions, thereby ensuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques.
Hard-X-Ray/Soft-Gamma-Ray Imaging Sensor Assembly for Astronomy
NASA Technical Reports Server (NTRS)
Myers, Richard A.
2008-01-01
An improved sensor assembly has been developed for astronomical imaging at photon energies ranging from 1 to 100 keV. The assembly includes a thallium-doped cesium iodide scintillator divided into pixels and coupled to an array of high-gain avalanche photodiodes (APDs). Optionally, the array of APDs can be operated without the scintillator to detect photons at energies below 15 keV. The array of APDs is connected to compact electronic readout circuitry that includes, among other things, 64 independent channels for detection of photons in various energy ranges, up to a maximum energy of 100 keV, at a count rate up to 3 kHz. The readout signals are digitized and processed by imaging software that performs "on-the-fly" analysis. The sensor assembly has been integrated into an imaging spectrometer, along with a pair of coded apertures (Fresnel zone plates) that are used in conjunction with the pixel layout to implement a shadow-masking technique to obtain relatively high spatial resolution without having to use extremely small pixels. Angular resolutions of about 20 arc-seconds have been measured. Thus, for example, the imaging spectrometer can be used to (1) determine both the energy spectrum of a distant x-ray source and the angular deviation of the source from the nominal line of sight of an x-ray telescope in which the spectrometer is mounted or (2) study the spatial and temporal development of solar flares, repeating γ-ray bursters, and other phenomena that emit transient radiation in the hard-x-ray/soft-γ-ray region of the electromagnetic spectrum.
Accelerated high-resolution photoacoustic tomography via compressed sensing
NASA Astrophysics Data System (ADS)
Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward
2016-12-01
Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
USDA-ARS?s Scientific Manuscript database
Vegetation monitoring requires frequent remote sensing observations. While imagery from coarse resolution sensors such as MODIS/VIIRS can provide daily observations, they lack spatial detail to capture surface features for vegetation monitoring. The medium spatial resolution (10-100m) sensors are su...
Application of the high resolution return beam vidicon
NASA Technical Reports Server (NTRS)
Cantella, M. J.
1977-01-01
The Return Beam Vidicon (RBV) is a high-performance electronic image sensor and electrical storage component. It can accept continuous or discrete exposures. Information can be read out with a single scan or with many repetitive scans for either signal processing or display. Resolution capability is 10,000 TV lines/height, and at 100 lp/mm, performance matches or exceeds that of film, particularly with low-contrast imagery. Electronic zoom can be employed effectively for image magnification and data compression. The high performance and flexibility of the RBV permit wide application in systems for reconnaissance, scan conversion, information storage and retrieval, and automatic inspection and test. This paper summarizes the characteristics and performance parameters of the RBV and cites examples of feasible applications.
NASA Astrophysics Data System (ADS)
Hayduk, Robert J.; Scott, Walter S.; Walberg, Gerald D.; Butts, James J.; Starr, Richard D.
1997-01-01
The Small Satellite Technology Initiative (SSTI) is a National Aeronautics and Space Administration (NASA) program to demonstrate smaller, high technology satellites constructed rapidly and less expensively. Under SSTI, NASA funded the development of "Clark," a high technology demonstration satellite to provide 3-m resolution panchromatic and 15-m resolution multispectral images, as well as collect atmospheric constituent and cosmic x-ray data. The 690-lb. satellite, to be launched in early 1997, will be in a 476 km, circular, sun-synchronous polar orbit. This paper describes the program objectives, the technical characteristics of the sensors and satellite, image processing, archiving and distribution. Data archiving and distribution will be performed by NASA Stennis Space Center and by the EROS Data Center, Sioux Falls, South Dakota, USA.
NASA Astrophysics Data System (ADS)
Zhu, L.; Radeloff, V.; Ives, A. R.; Barton, B.
2015-12-01
Deriving crop patterns with high accuracy is of great importance for characterizing landscape diversity, which affects the resilience of food webs in agricultural systems in the face of climatic and land cover changes. Landsat sensors were originally designed to monitor agricultural areas, and both their radiometric and spatial resolutions are optimized for monitoring large agricultural fields. Unfortunately, few clear Landsat images per year are available, which has limited the use of Landsat for crop classification, and this situation is worse in cloudy areas of the Earth. Meanwhile, the MODerate Resolution Imaging Spectroradiometer (MODIS) data has better temporal resolution but cannot capture the fine spatial heterogeneity of agricultural systems. Our question was to what extent fusing imagery from both sensors could improve crop classifications. We utilized the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to simulate Landsat-like images at MODIS temporal resolution. Based on the Random Forests (RF) classifier, we tested whether and by what degree crop maps from 2000 to 2014 of the Arlington Agricultural Research Station (Wisconsin, USA) were improved by integrating available clear Landsat images each year with synthetic images. We predicted that the degree to which classification accuracy can be improved by incorporating synthetic imagery depends on the number and acquisition time of clear Landsat images. Moreover, multi-season data are essential for mapping crop types by capturing their phenological dynamics, and STARFM-simulated images can be used to compensate for missing Landsat observations. Our study helps overcome the limitations of Landsat data for mapping crop patterns, and provides a benchmark of accuracy for choosing STARFM-simulated images for crop classification at broader scales.
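At its core, STARFM predicts the fine-resolution image at the target date by adding the coarse-scale temporal change to a fine-scale base image; the full algorithm additionally weights spectrally and spatially similar neighbouring pixels. A bare-bones sketch of that core identity follows, with illustrative array sizes and a uniform reflectance change (not the full weighted algorithm).

```python
import numpy as np

def predict_fine(fine_t1, coarse_t1, coarse_t2):
    """STARFM's core identity: predict the fine image at t2 by adding the
    coarse-scale temporal change to the fine-scale base image. (The full
    algorithm additionally weights similar neighbouring pixels.)"""
    scale = fine_t1.shape[0] // coarse_t1.shape[0]
    delta = np.kron(coarse_t2 - coarse_t1, np.ones((scale, scale)))
    return fine_t1 + delta

fine_t1 = np.arange(64.0).reshape(8, 8)     # Landsat-like base image at t1
coarse_t1 = np.zeros((2, 2))                # MODIS-like image at t1
coarse_t2 = np.full((2, 2), 0.5)            # MODIS-like image at t2
fine_t2 = predict_fine(fine_t1, coarse_t1, coarse_t2)
```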
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor of 200 high optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright-light test target scenes, successfully demonstrated is a proof-of-concept visible-band CAOS smart camera operating in the CDMA-mode using Walsh-design CAOS pixel codes of up to 4096 bits in length with a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright, spectrally diverse targets.
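The CDMA-mode encode/decode cycle can be illustrated with Sylvester-ordered Walsh-Hadamard codes: each CAOS pixel is modulated by one orthogonal code row, the single point detector records the sum over all pixels per code bit, and correlating the detector samples with each code recovers the pixel irradiances. The sketch below is a small numerical illustration (64 pixels rather than 3600, ±1 codes, no noise, and no DMD model).

```python
import numpy as np

def walsh_matrix(n):
    """Sylvester-ordered Hadamard (Walsh) matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n_pix = 64
codes = walsh_matrix(n_pix)               # one +/-1 code row per CAOS pixel
rng = np.random.default_rng(2)
pixels = rng.uniform(0.0, 1.0, n_pix)     # unknown pixel irradiances
detector = codes.T @ pixels               # one detector sample per code bit
recovered = codes @ detector / n_pix      # correlate with each code to decode
```

Because the Hadamard rows are mutually orthogonal (codes @ codes.T equals n_pix times the identity), the correlation decode recovers the pixel values exactly in the noiseless case.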
A Novel Image Compression Algorithm for High Resolution 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2014-06-01
This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes an image into four sub-bands. The LL sub-band is transformed by DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, resulting in LL2, HL2, LH2, and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates with equivalent perceived quality and the ability to reconstruct the 3D models more accurately.
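The first two stages of the pipeline, a single-level 2-D DWT followed by a DCT of the LL band, can be sketched with a Haar wavelet and an orthonormal DCT-II built from its basis matrix. The paper does not state which wavelet family it uses, so the Haar choice (and the toy image) is an assumption for illustration.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns (LL, (HL, LH, HH))."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0    # horizontal averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0    # horizontal details
    LL = (a[0::2, :] + a[1::2, :]) / 2.0
    LH = (a[0::2, :] - a[1::2, :]) / 2.0
    HL = (d[0::2, :] + d[1::2, :]) / 2.0
    HH = (d[0::2, :] - d[1::2, :]) / 2.0
    return LL, (HL, LH, HH)

def dct2(block):
    """Orthonormal 2-D DCT-II via the 1-D DCT basis matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C @ block @ C.T

img = np.outer(np.linspace(0.0, 1.0, 8), np.ones(8))  # smooth 8x8 test ramp
LL, (HL, LH, HH) = haar_dwt2(img)          # LL is the 4x4 low-pass band
coeffs = dct2(LL)                          # energy concentrates near coeffs[0, 0]
```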
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications
Park, Keunyeol; Song, Minkyu
2018-01-01
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixel) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V of supply voltage and 520 frame/s of the maximum frame rates. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. PMID:29495273
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.
Park, Keunyeol; Song, Minkyu; Kim, Soo Youn
2018-02-24
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixel) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V of supply voltage and 520 frame/s of the maximum frame rates. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
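The XOR-based edge detection on single-bit data can be sketched in software: XOR-ing each binarised pixel with its right and lower neighbours marks every 0/1 transition as an edge. The neighbourhood choice below is an assumption for illustration; the sensor implements its XOR logic in hardware rather than as array operations.

```python
import numpy as np

def xor_edges(binary):
    """Single-bit edge map: XOR each pixel with its right and lower
    neighbours; a 1 marks a 0/1 transition in the binarised image."""
    edges = np.zeros_like(binary)
    edges[:, :-1] |= binary[:, :-1] ^ binary[:, 1:]   # horizontal transitions
    edges[:-1, :] |= binary[:-1, :] ^ binary[1:, :]   # vertical transitions
    return edges

binary = np.zeros((8, 8), dtype=np.uint8)
binary[2:6, 2:6] = 1                  # a bright square, e.g. a pupil region
edges = xor_edges(binary)             # 1s outline the square's boundary
```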
A study on rational function model generation for TerraSAR-X imagery.
Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi
2013-09-09
The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate a Rational Polynomial Coefficient (RPC) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy of greater than 10−3 pixel. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction.
Medipix2 based CdTe microprobe for dental imaging
NASA Astrophysics Data System (ADS)
Vykydal, Z.; Fauler, A.; Fiederle, M.; Jakubek, J.; Svestkova, M.; Zwerger, A.
2011-12-01
Medical imaging devices and techniques are required to provide high-resolution, low-dose images of samples or patients. Hybrid semiconductor single-photon-counting devices, together with suitable sensor materials and advanced image reconstruction techniques, fulfil these requirements. In particular cases, such as the direct observation of dental implants, the size of the imaging device itself also plays a critical role. This work presents a comparison of 2D radiographs of a tooth provided by a standard commercial dental imaging system (Gendex 765DC X-ray tube with VisualiX scintillation detector) and two Medipix2 USB Lite detectors, one equipped with a Si sensor (300 μm thick) and one with a CdTe sensor (1 mm thick). The single-photon-counting capability of the Medipix2 device allows a virtually unlimited dynamic range and thus increases the contrast significantly. The dimensions of the whole USB Lite device are only 15 mm × 60 mm, of which 25% is the sensitive area. A detector of this compact size can be used directly inside a patient's mouth.
1920x1080 pixel color camera with progressive scan at 50 to 60 frames per second
NASA Astrophysics Data System (ADS)
Glenn, William E.; Marcinka, John W.
1998-09-01
For over a decade, the broadcast industry, the film industry and the computer industry have had a long-range objective to originate high-definition images with progressive scan. This produces images with better vertical resolution and far fewer artifacts than interlaced scan. Computers almost universally use progressive scan. The broadcast industry has resisted switching from interlace to progressive because no cameras were available in that format with the 1920 × 1080 resolution that had obtained international acceptance for high-definition program production. The camera described in this paper produces an output in that format derived from two 1920 × 1080 CCD sensors produced by Eastman Kodak.
Measuring the performance of super-resolution reconstruction algorithms
NASA Astrophysics Data System (ADS)
Dijk, Judith; Schutte, Klamer; van Eekeren, Adam W. M.; Bijl, Piet
2012-06-01
For many military operations, situational awareness is of great importance. This situational awareness, and related tasks such as target acquisition, can be acquired using cameras, of which the resolution is an important characteristic. Super-resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order to judge these algorithms and the conditions under which they operate best, performance evaluation methods are necessary. This evaluation, however, is not straightforward for several reasons. First, frequency-based evaluation techniques alone will not provide a correct answer, because they are unable to discriminate between structure-related and noise-related effects. Second, most super-resolution packages perform additional image enhancement such as noise reduction and edge enhancement; as these algorithms improve the results, they cannot be evaluated separately. Third, a single high-resolution ground truth is rarely available, so evaluating the differences between the estimated high-resolution image and its ground truth is not straightforward. Fourth, super-resolution reconstruction can introduce artifacts that are not known beforehand and hence are difficult to evaluate. In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms. Some of these evaluation techniques are derived from processing on dedicated (synthetic) imagery. Other evaluation techniques can be applied to both synthetic and natural images (real camera data). The result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution reconstruction algorithms.
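One baseline ingredient of the evaluation problem discussed above is a global fidelity metric such as PSNR against a high-resolution ground truth. The sketch below is only an illustration of that baseline; as the abstract argues, such global metrics must be complemented by structure- and artifact-specific tests on dedicated synthetic imagery. The images are toy values.

```python
# Peak signal-to-noise ratio (PSNR) between a ground-truth high-resolution
# image and a super-resolution estimate: a standard but, on its own,
# insufficient evaluation metric.
import math

def psnr(reference, estimate, peak=255.0):
    """PSNR in dB between two equal-size images (lists of rows)."""
    n = 0
    sq = 0.0
    for r_row, e_row in zip(reference, estimate):
        for r, e in zip(r_row, e_row):
            sq += (r - e) ** 2
            n += 1
    mse = sq / n
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

truth = [[0, 255], [255, 0]]
estimate = [[2, 250], [250, 2]]
print(round(psnr(truth, estimate), 1))  # 36.5
```

Note that PSNR cannot distinguish a small error spread as noise from the same error concentrated on an edge, which is exactly the discrimination problem the paper raises.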
Scanning Microscopes Using X Rays and Microchannels
NASA Technical Reports Server (NTRS)
Wang, Yu
2003-01-01
Scanning microscopes that would be based on microchannel filters and advanced electronic image sensors, and that utilize x-ray illumination, have been proposed. Because the finest resolution attainable in a microscope is determined by the wavelength of the illumination, the x-ray illumination in the proposed microscopes would make it possible, in principle, to achieve resolutions of the order of nanometers, about a thousand times as fine as the resolution of a visible-light microscope. Heretofore, it has been necessary to use scanning electron microscopes to obtain such fine resolution. In comparison with scanning electron microscopes, the proposed microscopes would likely be smaller, less massive, and less expensive. Moreover, unlike in scanning electron microscopes, it would not be necessary to place specimens under vacuum. The proposed microscopes are closely related to the ones described in several prior NASA Tech Briefs articles; namely, Miniature Microscope Without Lenses (NPO-20218), NASA Tech Briefs, Vol. 22, No. 8 (August 1998), page 43; and Reflective Variants of Miniature Microscope Without Lenses (NPO-20610), NASA Tech Briefs, Vol. 26, No. 9 (September 2002), page 6a. In all of these microscopes, the basic principle of design and operation is the same: the focusing optics of a conventional visible-light microscope are replaced by a combination of a microchannel filter and a charge-coupled-device (CCD) image detector. A microchannel plate containing parallel, microscopic-cross-section holes much longer than they are wide is placed between a specimen and an image sensor, which is typically the CCD. The microchannel plate must be made of a material that absorbs the illuminating radiation reflected or scattered from the specimen. The microchannels must be positioned and dimensioned so that each one is registered with a pixel on the image sensor.
Because most of the radiation incident on the microchannel walls becomes absorbed, the radiation that reaches the image sensor consists predominantly of radiation that was launched along the longitudinal direction of the microchannels. Therefore, most of the radiation arriving at each pixel on the sensor must have traveled along a straight line from a corresponding location on the specimen. Thus, there is a one-to-one mapping from a point on a specimen to a pixel in the image sensor, so that the output of the image sensor contains image information equivalent to that from a microscope.
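The one-to-one pixel mapping described above rests on simple collimation geometry: a channel much longer than it is wide passes only rays within a small acceptance half-angle, so light from neighbouring specimen locations is absorbed by the walls. The dimensions below are hypothetical, chosen only to illustrate the aspect-ratio argument.

```python
# Acceptance half-angle of an absorbing microchannel: the largest ray angle
# (from the channel axis) that can traverse the channel without striking a
# wall is set by the width-to-length ratio.
import math

def acceptance_half_angle_deg(width, length):
    """Acceptance half-angle in degrees for a straight absorbing channel."""
    return math.degrees(math.atan2(width, length))

# A 1 um-wide channel 100 um long accepts only rays within ~0.57 degrees,
# which is what suppresses crosstalk between neighbouring sensor pixels.
print(round(acceptance_half_angle_deg(1.0, 100.0), 2))  # 0.57
```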
Objects Grouping for Segmentation of Roads Network in High Resolution Images of Urban Areas
NASA Astrophysics Data System (ADS)
Maboudi, M.; Amini, J.; Hahn, M.
2016-06-01
Updated road databases are required for many purposes such as urban planning, disaster management, car navigation, route planning, traffic management and emergency handling. In the last decade, the improvement in spatial resolution of VHR civilian satellite sensors - the main source for large-scale mapping applications - was so considerable that the GSD has become finer than the size of common urban objects of interest such as buildings, trees and road parts. This technological advancement pushed the development of "Object-Based Image Analysis (OBIA)" as an alternative to pixel-based image analysis methods. Segmentation, as one of the main stages of OBIA, provides the image objects on which most of the following processes are applied. Therefore, the success of an OBIA approach is strongly affected by the segmentation quality. In this paper, we propose a purpose-dependent refinement strategy to group road segments in urban areas using maximal-similarity-based region merging. For our investigations with the proposed method, we use high-resolution images of several urban sites. The promising results suggest that the proposed approach is applicable to grouping road segments in urban areas.
Satellite and airborne IR sensor validation by an airborne interferometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gumley, L.E.; Delst, P.F. van; Moeller, C.C.
1996-11-01
The validation of in-orbit longwave IR radiances from the GOES-8 Sounder and in-flight longwave IR radiances from the MODIS Airborne Simulator (MAS) is described. The reference used is the airborne University of Wisconsin High Resolution Interferometer Sounder (HIS). The calibration of each sensor is described. Data collected during the Ocean Temperature Interferometric Survey (OTIS) experiment in January 1995 are used in the comparison between sensors. Detailed forward calculations of at-sensor radiance are used to account for the difference in GOES-8 and HIS altitude and viewing geometry. MAS radiances and spectrally averaged HIS radiances are compared directly. Differences between GOES-8 and HIS brightness temperatures, and GOES-8 and MAS brightness temperatures, are found to be within 1.0 K for the majority of longwave channels examined. The same validation approach will be used for future sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Atmospheric Infrared Sounder (AIRS). 11 refs., 2 figs., 4 tabs.
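The quantity compared between GOES-8, MAS, and HIS is brightness temperature, obtained from measured spectral radiance by inverting Planck's law. The sketch below uses the standard radiation constants in wavenumber units; the example wavenumber and temperature are illustrative, not values from the experiment.

```python
# Brightness temperature via the inverse Planck function (wavenumber form).
import math

C1 = 1.191042e-5   # 2hc^2 in mW / (m^2 sr cm^-4)
C2 = 1.4387752     # hc/k in K cm

def planck_radiance(T, nu):
    """Blackbody radiance [mW/(m^2 sr cm^-1)] at temperature T [K], wavenumber nu [cm^-1]."""
    return C1 * nu**3 / math.expm1(C2 * nu / T)

def brightness_temperature(L, nu):
    """Equivalent blackbody temperature [K] for radiance L at wavenumber nu."""
    return C2 * nu / math.log1p(C1 * nu**3 / L)

# Round trip at 300 K and 900 cm^-1 (in the longwave IR window):
L = planck_radiance(300.0, 900.0)
print(round(brightness_temperature(L, 900.0), 6))  # 300.0
```

Expressing inter-sensor differences in brightness temperature rather than radiance is what makes the "within 1.0 K" agreement statement meaningful across channels.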
NASA Astrophysics Data System (ADS)
Nurge, Mark A.
2007-05-01
An electrical capacitance volume tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This paper presents a method of reconstructing images of high contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminium structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
Millimeter wave imaging for concealed weapon detection and surveillance at up to 220 GHz
NASA Astrophysics Data System (ADS)
Stanko, S.; Nötel, D.; Huck, J.; Wirtz, S.; Klöppel, F.; Essen, H.
2008-04-01
Sensors used for security purposes have to cover the non-invasive control of people and the direct surroundings of buildings and camps to detect weapons, explosives and chemical or biological threat materials. Those sensors have to cope with different environmental conditions. Ideally, the control of people should be done at longer distances as standoff detection. The work described in this paper concentrates on passive radiometric sensors at 0.1 and 0.2 THz, which are able to detect non-metallic objects like ceramic knives. The identification of objects like mobile phones or PDAs will also be shown. Additionally, standoff surveillance is possible, which is of high importance with regard to suicide bombers. The presentation includes images at both mentioned frequencies, comparing the efficiency in terms of range and resolution. In addition, the concept of the sensor design, a Dicke-type 220 GHz radiometer using new LNAs, and the results along with image enhancement methods are shown.
Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.
Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn
2016-04-20
Nonlinear decomposition schemes constitute an alternative to classical approaches for the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of fusing a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, Worldview-2, Ikonos and Geoeye-1 satellites are employed for the performance assessment, demonstrating the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
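The morphological half-gradients underlying this scheme can be shown on a 1-D step edge. This is a sketch of the operators only, not the paper's full fusion pipeline: with a 3-sample flat structuring element, the external half-gradient (dilation minus signal) responds on the dark side of an edge, and the internal half-gradient (signal minus erosion) on the bright side.

```python
# Morphological half-gradients on a 1-D signal with a 3-sample flat
# structuring element (illustration of the operators used in pansharpening).

def dilate(sig):
    n = len(sig)
    return [max(sig[max(0, i - 1):min(n, i + 2)]) for i in range(n)]

def erode(sig):
    n = len(sig)
    return [min(sig[max(0, i - 1):min(n, i + 2)]) for i in range(n)]

def half_gradients(sig):
    """External (dilation - signal) and internal (signal - erosion) half-gradients."""
    d, e = dilate(sig), erode(sig)
    ext = [di - s for di, s in zip(d, sig)]
    inn = [s - ei for s, ei in zip(sig, e)]
    return ext, inn

step = [0, 0, 0, 10, 10, 10]
ext, inn = half_gradients(step)
print(ext)  # [0, 0, 10, 0, 0, 0]  -> responds just before the edge
print(inn)  # [0, 0, 0, 10, 0, 0]  -> responds just after the edge
```

In a pansharpening context, such operators extract high-frequency detail from the panchromatic band, which is then injected into the upsampled multispectral bands.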
NASA Astrophysics Data System (ADS)
Zarubin, V.; Bychkov, A.; Simonova, V.; Zhigarkov, V.; Karabutov, A.; Cherepetskaya, E.
2018-05-01
In this paper, a technique for reflection mode immersion 2D laser-ultrasound tomography of solid objects with piecewise linear 2D surface profiles is presented. Pulsed laser radiation was used for generation of short ultrasonic probe pulses, providing high spatial resolution. A piezofilm sensor array was used for detection of the waves reflected by the surface and internal inhomogeneities of the object. The original ultrasonic image reconstruction algorithm accounting for refraction of acoustic waves at the liquid-solid interface provided longitudinal resolution better than 100 μm in the polymethyl methacrylate sample object.
NASA Technical Reports Server (NTRS)
2000-01-01
Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.
Laser range profiling for small target recognition
NASA Astrophysics Data System (ADS)
Steinvall, Ove; Tulldahl, Michael
2017-03-01
Long-range identification (ID), or ID at closer range of small targets, has its limitations in imaging due to the demand for very high transverse sensor resolution. This is, therefore, a motivation to look at one-dimensional laser techniques for target ID, including laser vibrometry and laser range profiling. Laser vibrometry can give good results but is not always robust, as it is sensitive to whether certain vibrating parts on the target are in the field of view. Laser range profiling is attractive because the maximum range can be substantial, especially for a small laser beam width. A range profiler can also be used in a scanning mode to detect targets within a certain sector. The same laser can also be used for active imaging when the target comes closer and is angularly resolved. Our laser range profiler is based on a laser with a pulse width of 6 ns (full width at half maximum). This paper shows both experimental and simulated results for laser range profiling of small boats out to a 6- to 7-km range and an unmanned aerial vehicle (UAV) mockup at close range (1.3 km). The naval experiments took place in the Baltic Sea using many other active and passive electro-optical sensors in addition to the profiling system. The UAV experiments showed the need for high range resolution; thus we used a photon counting system in addition to the more conventional profiler used in the naval experiments. This paper shows the influence of target pose and range resolution on the capability of classification. The typical resolution (in our case 0.7 m) obtainable with a conventional range-finder type of sensor can be used for classification of large targets with a depth structure over 5 to 10 m or more, but for smaller targets such as a UAV a high resolution (in our case 7.5 mm) is needed to reveal depth structures and surface shapes. This paper also shows the need for 3-D target information to build libraries for comparison of measured and simulated range profiles.
At closer ranges, full 3-D images should be preferable.
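The link between pulse width (or timing resolution) and range resolution motivates the photon-counting profiler: the two-way resolution is dR = c·t/2. The 6 ns pulse width is from the abstract; the ~50 ps timing value below is our inference from the quoted 7.5 mm resolution, not a figure stated in the paper.

```python
# Two-way range resolution of a pulsed laser profiler: dR = c * t / 2.

C = 299_792_458.0  # speed of light, m/s

def range_resolution(pulse_seconds):
    """Range resolution [m] for an effective pulse/timing width [s]."""
    return C * pulse_seconds / 2

print(round(range_resolution(6e-9), 2))    # 6 ns pulse  -> ~0.9 m
print(round(range_resolution(50e-12), 4))  # ~50 ps timing -> ~7.5 mm
```

This is why a conventional nanosecond profiler suffices for boats with 5-10 m depth structure, while a UAV mockup requires photon-counting timing resolution.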
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Evaluating the capacity of GF-4 satellite data for estimating fractional vegetation cover
NASA Astrophysics Data System (ADS)
Zhang, C.; Qin, Q.; Ren, H.; Zhang, T.; Sun, Y.
2016-12-01
Fractional vegetation cover (FVC) is a crucial parameter for many agricultural, environmental, meteorological and ecological applications, and is of great importance for studies on ecosystem structure and function. The Chinese GaoFen-4 (GF-4) geostationary satellite, designed for environmental and ecological observation, was launched on December 29, 2015, and entered official use on June 13, 2016. Multispectral images with a spatial resolution of 50 m and high temporal resolution can be acquired by the sensor aboard the GF-4 satellite in its 36,000-km-altitude orbit. To take full advantage of the outstanding performance of the GF-4 satellite, this study evaluated the capacity of GF-4 satellite data for monitoring FVC. To the best of our knowledge, this is the first research on estimating FVC from GF-4 satellite images. First, we developed a procedure for preprocessing GF-4 satellite data, including radiometric calibration and atmospheric correction, to acquire surface reflectance. Then a single image and multi-temporal images were used for extracting the endmembers of vegetation and soil, respectively. After that, the dimidiate pixel model and a square model based on vegetation indices were used for estimating FVC. Finally, the estimation results were comparatively analyzed against FVC estimated from other existing sensors. The experimental results showed that satisfactory FVC estimation accuracy could be achieved from GF-4 satellite images using the dimidiate pixel model and the square model based on vegetation indices. Moreover, the multi-temporal images increased the probability of finding pure vegetation and soil endmembers; thus, the high temporal resolution of GF-4 satellite images improved the accuracy of FVC estimation. This study demonstrated the capacity of GF-4 satellite data for monitoring FVC.
The conclusions reached by this study are significant for improving the accuracy and spatial-temporal resolution of existing FVC products, which provides a basis for the studies on ecosystem structure and function using remote sensing data acquired by GF-4 satellite.
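The dimidiate pixel model used in this study treats each pixel's NDVI as a linear mix of a pure-vegetation and a pure-soil endmember. The sketch below shows that relation; the reflectance and endmember values are illustrative assumptions, not GF-4 results.

```python
# Dimidiate (two-component) pixel model for fractional vegetation cover:
# FVC = (NDVI_pixel - NDVI_soil) / (NDVI_veg - NDVI_soil), clipped to [0, 1].

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def fvc_dimidiate(ndvi_pixel, ndvi_soil, ndvi_veg):
    """Fractional vegetation cover under the dimidiate pixel model."""
    f = (ndvi_pixel - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return min(1.0, max(0.0, f))

# Hypothetical pixel: NIR = 0.35, red = 0.12; assumed endmembers 0.05 / 0.85.
print(round(fvc_dimidiate(ndvi(0.35, 0.12), ndvi_soil=0.05, ndvi_veg=0.85), 3))
```

The multi-temporal advantage reported in the abstract enters exactly here: a larger image stack makes it more likely that truly pure soil and vegetation pixels are found for the two endmember NDVIs.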
NASA Technical Reports Server (NTRS)
2002-01-01
This Moderate-resolution Imaging Spectroradiometer (MODIS) image over Argentina was acquired on April 24, 2000, and was produced using a combination of the sensor's 250-m and 500-m resolution 'true color' bands. This image was presented on June 13, 2000 as a gift to Argentinian President Fernando de la Rua by NASA Administrator Dan Goldin. Note the Parana River, which runs due south from the top of the image before turning east to empty into the Atlantic Ocean, and the yellowish sediment from the Parana River mixing with the reddish sediment from the Uruguay River as it empties into the Rio de la Plata. The water level of the Parana seems high, which could explain the high sediment discharge. A variety of land surface features are visible in this image. To the north, the greenish pixels show forest regions, as well as characteristic clusters of rectangular agricultural fields. In the lower left of the image, the lighter green pixels show arable regions where there is grazing and farming. (Image courtesy Jacques Descloitres, MODIS Land Group, NASA GSFC)
BIOME: An Ecosystem Remote Sensor Based on Imaging Interferometry
NASA Technical Reports Server (NTRS)
Peterson, David L.; Hammer, Philip; Smith, William H.; Lawless, James G. (Technical Monitor)
1994-01-01
Until recently, optical remote sensing of ecosystem properties from space has been limited to broad-band multispectral scanners such as Landsat and AVHRR. While these sensor data can be used to derive important information about ecosystem parameters, they are very limited for measuring key biogeochemical cycling parameters such as the chemical content of plant canopies. Such parameters, for example the lignin and nitrogen contents, are potentially amenable to measurement by very high spectral resolution instruments using a spectroscopic approach. Airborne sensors based on grating imaging spectrometers gave the first promise of such potential, but the recent decision not to deploy the space version has left the community without many alternatives. In the past few years, advancements in high-performance deep-well digital sensor arrays, coupled with a patented design for a two-beam interferometer, have produced an entirely new design for acquiring imaging spectroscopic data at the signal-to-noise levels necessary for quantitatively estimating chemical composition (1000:1 at 2 microns). This design has been assembled as a laboratory instrument and the principles demonstrated for acquiring remote scenes. An airborne instrument is in production and spaceborne sensors are being proposed. The instrument is extremely promising because of its low cost, low power requirements, very low weight, simplicity (no moving parts), and high performance. For these reasons, we have called it the first instrument optimized for ecosystem studies as part of a Biological Imaging and Observation Mission to Earth (BIOME).
Evaluation of a high resolution silicon PET insert module
NASA Astrophysics Data System (ADS)
Grkovski, Milan; Brzezinski, Karol; Cindro, Vladimir; Clinthorne, Neal H.; Kagan, Harris; Lacasta, Carlos; Mikuž, Marko; Solaz, Carles; Studen, Andrej; Weilhammer, Peter; Žontar, Dejan
2015-07-01
Conventional PET systems can be augmented with additional detectors placed in close proximity to the region of interest. We developed a high resolution PET insert module to evaluate the added benefit of such a combination. The insert module consists of two back-to-back 1 mm thick silicon sensors, each segmented into 1040 pads of 1 mm² arranged in a 40-by-26 array. A set of 16 VATAGP7.1 ASICs and a custom-assembled data acquisition board were used to read out the signal from the insert module. Data were acquired in slice (2D) geometry with a Jaszczak phantom (rod diameters of 1.2-4.8 mm) filled with 18F-FDG, and the images were reconstructed with the ML-EM method. Both data with full and limited angular coverage from the insert module were considered, and three types of coincidence events were combined. The ratio of high-resolution data that substantially improves the quality of the reconstructed image for the region near the surface of the insert module was estimated to be about 4%. Results from our previous studies suggest that such a ratio could be achieved at a moderate technological expense by using an equivalent of two insert modules (an effective sensor thickness of 4 mm).
Sensor fusion for synthetic vision
NASA Technical Reports Server (NTRS)
Pavel, M.; Larimer, J.; Ahumada, A.
1991-01-01
Display methodologies are explored for fusing images gathered by millimeter-wave sensors with images rendered from an on-board terrain database to facilitate visually guided flight and ground operations in low-visibility conditions. An approach to fusion based on multiresolution image representation and processing is described, which facilitates the fusion of images that differ in resolution, both within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
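The multiresolution fusion idea above can be sketched minimally: split each image into a coarse approximation plus detail, combine the components (here, averaging the coarse parts and keeping the larger-magnitude detail), and rebuild. The 1-D signals and single decomposition level are illustrative assumptions; a real system would use 2-D pyramids with several levels.

```python
# Minimal multiresolution fusion sketch on 1-D signals, one level deep.

def decompose(sig):
    """Split into a 2x-downsampled approximation and the residual detail."""
    approx = [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig), 2)]
    up = [a for a in approx for _ in (0, 1)]          # nearest-neighbour upsample
    detail = [s - u for s, u in zip(sig, up)]
    return approx, detail

def fuse(sig_a, sig_b):
    """Average the coarse content, keep the stronger detail at each sample."""
    a_app, a_det = decompose(sig_a)
    b_app, b_det = decompose(sig_b)
    app = [(x + y) / 2 for x, y in zip(a_app, b_app)]
    det = [x if abs(x) >= abs(y) else y for x, y in zip(a_det, b_det)]
    up = [a for a in app for _ in (0, 1)]
    return [u + d for u, d in zip(up, det)]

sensor = [0, 2, 8, 6]       # e.g. millimeter-wave image with edge detail
database = [1, 1, 1, 1]     # rendered terrain, smooth and low contrast
print(fuse(sensor, database))  # [0.0, 2.0, 5.0, 3.0]
```

The fused signal keeps the sensor's fine detail while the coarse brightness is a compromise of both sources, which is the property that lets images of differing resolution be combined.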
Note: An absolute X-Y-Θ position sensor using a two-dimensional phase-encoded binary scale
NASA Astrophysics Data System (ADS)
Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan
2018-04-01
This Note presents a new absolute X-Y-Θ position sensor for measuring the planar motion of a precision multi-axis stage system. By analyzing the rotated image of a two-dimensional (2D) phase-encoded binary scale, the absolute 2D position values at two separated points were obtained, and the absolute X-Y-Θ position could be calculated by combining these values. The sensor head was constructed using a board-level camera, a light-emitting diode light source, an imaging lens, and a cube beam-splitter. To obtain uniform intensity profiles from the vignetted scale image, we selected the averaging directions deliberately, and higher resolution in the angle measurement could be achieved by increasing the allowable offset size. The performance of a prototype sensor was evaluated with respect to resolution, nonlinearity, and repeatability. The sensor could clearly resolve 25 nm linear and 0.001° angular displacements, and the standard deviations were less than 18 nm when 2D grid positions were measured repeatedly.
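The geometry behind combining two decoded points into an X-Y-Θ pose can be sketched directly: the midpoint of the two absolute positions gives X-Y, and the direction of the chord between them gives the rotation angle. The coordinates below are hypothetical; in the real device the two points are decoded from the phase-encoded binary scale image.

```python
# X-Y-Theta pose from the absolute 2-D positions decoded at two separated
# points of the scale (geometric sketch only).
import math

def xy_theta(p1, p2):
    """Pose (x, y, theta in degrees) from two decoded scale points."""
    x = (p1[0] + p2[0]) / 2
    y = (p1[1] + p2[1]) / 2
    theta = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    return x, y, theta

# Two points nominally aligned with the x-axis, rotated slightly:
print(xy_theta((10.0, 5.0), (20.0, 5.1)))
```

This also shows why a larger offset between the two points improves angular resolution, as the abstract notes: the same decoding uncertainty in each point subtends a smaller angle over a longer baseline.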
Payload Configurations for Efficient Image Acquisition - Indian Perspective
NASA Astrophysics Data System (ADS)
Samudraiah, D. R. M.; Saxena, M.; Paul, S.; Narayanababu, P.; Kuriakose, S.; Kiran Kumar, A. S.
2014-11-01
The world increasingly depends on remotely sensed data. The data is regularly used for monitoring earth resources and also for solving problems of the world like disasters, climate degradation, etc. Remotely sensed data has changed our perspective and understanding of other planets. With innovative approaches in data utilization, the demands on remote sensing data are ever increasing. More and more research and development is taken up for data utilization. Satellite resources are scarce and each launch costs heavily. Each launch is also associated with a large effort for developing the hardware prior to launch, and with a large number of software elements and mathematical algorithms post-launch. The proliferation of low-earth and geostationary satellites has led to increased scarcity of the available orbital slots for newer satellites. The Indian Space Research Organisation has always tried to maximize the utility of satellites. Multiple sensors are flown on each satellite. In each of the satellites, sensors are designed to cater to various spectral bands/frequencies, spatial and temporal resolutions. Bhaskara-1, the first experimental satellite, started with 2 bands in the electro-optical spectrum and 3 bands in the microwave spectrum. The recent Resourcesat-2 incorporates a very efficient image acquisition approach with multi-resolution (3 types of spatial resolution), multi-band (4 spectral bands) electro-optical sensors (LISS-4, LISS-3* and AWiFS). The system has been designed to provide data globally with various data reception stations and onboard data storage capabilities. The Oceansat-2 satellite has a unique sensor combination, with an 8-band electro-optical high-sensitivity ocean colour monitor (catering to ocean and land) along with a Ku-band scatterometer to acquire information on ocean winds. INSAT-3D, launched recently, provides high-resolution 6-band image data in the visible, short-wave, mid-wave and long-wave infrared spectrum.
It also has a 19-band sounder for providing vertical profiles of water vapour, temperature, etc. The same system has data relay transponders for acquiring data from weather stations. The payload configurations have gone through significant changes over the years to increase the data rate per kilogram of payload. Future Indian remote sensing systems are planned with very efficient ways of image acquisition. This paper analyses the strides taken by ISRO (Indian Space Research Organisation) in achieving high efficiency in remote sensing image data acquisition. Parameters related to the efficiency of image data acquisition are defined and a methodology is worked out to compute them. Some of the Indian payloads are analysed with respect to the system/subsystem parameters that decide the configuration of a payload. Based on the analysis, possible configuration approaches that can provide high efficiency are identified. A case study is carried out with an improved configuration, and the results of the efficiency improvements are reported. This methodology may be used for assessing other electro-optical payloads or missions and can be extended to other types of payloads and missions.
High-speed 3D surface measurement with a fringe projection based optical sensor
NASA Astrophysics Data System (ADS)
Bräuer-Burchardt, Christian; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther
2014-05-01
A new optical sensor based on the fringe projection technique for accurate and fast measurement of object surfaces, mainly for industrial inspection tasks, is introduced. High-speed fringe projection and image recording at 180 Hz allow 3D rates of up to 60 Hz. The high measurement velocity was achieved by consistent fringe code reduction and parallel data processing. Reduction of the image sequence length was obtained by omitting the Gray-code sequence, exploiting the geometric restrictions of the measurement objects. The sensor realizes three different measurement fields between 20 × 20 mm² and 40 × 40 mm², with lateral spatial resolutions between 10 μm and 20 μm at the same working distance. The measurable object height extension is between ±0.5 mm and ±2 mm. Height resolution between 1 μm and 5 μm can be achieved, depending on the properties of the measurement objects. The sensor may be used, e.g., for real-time quality inspection of conductor boards or plugs in industrial applications.
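The quoted rates are consistent with a short phase-shifting sequence: recording at 180 Hz while producing 3D frames at up to 60 Hz implies an effective sequence of three fringe images per 3D result. The three-image sequence length is our inference (enabled by the omitted Gray-code sequence), not a figure stated in the abstract.

```python
# Relation between camera frame rate, fringe sequence length, and 3D rate
# for a phase-shifting fringe projection sensor.

def rate_3d(image_rate_hz, images_per_result):
    """3D measurement rate when each result needs a fixed image sequence."""
    return image_rate_hz / images_per_result

print(rate_3d(180, 3))  # 60.0 Hz, matching the quoted maximum 3D rate
```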
Coding Strategies and Implementations of Compressive Sensing
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Han
This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond those of conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by factors of hundreds.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate these abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes and distinguishing mixed conversations from independent sources with a high audio recognition rate.
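The recovery problem common to both parts of the dissertation can be sketched in miniature: a sparse signal is reconstructed from far fewer measurements than its length. Orthogonal matching pursuit is used here as a generic textbook solver, not the author's reconstruction algorithm; the matrix, sizes, and data are all illustrative:

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick the column most correlated
    # with the residual, then least-squares refit on the selected support.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                      # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0 + rng.random(k)
x_hat = omp(A, A @ x_true, k)             # noiseless recovery from 48 of 128 samples
```

The point is that 48 multiplexed measurements suffice to recover a 128-sample signal exactly when it is 4-sparse, which is the bandwidth gain the dissertation exploits in hardware.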
NASA Technical Reports Server (NTRS)
Cecil, Daniel J.; Buechler, Dennis E.; Blakeslee, Richard J.
2015-01-01
The Tropical Rainfall Measuring Mission (TRMM) Lightning Imaging Sensor (LIS) has been collecting observations of total lightning in the global tropics and subtropics (roughly 38 deg S - 38 deg N) since December 1997. A similar instrument, the Optical Transient Detector, operated from 1995 to 2000 on another low-Earth-orbit satellite that also covered higher latitudes. Lightning data from these instruments have been used to create gridded climatologies and time series of lightning flash rate. These include a 0.5 deg resolution global annual climatology and lower resolution products describing the annual cycle and the diurnal cycle. These products are updated annually. Results from the update through 2013 will be shown at the conference. The gridded products are publicly available for download. Descriptions of how each product can be used will be discussed, including strengths, weaknesses, and caveats about the smoothing and sampling used in various products.
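The gridding step behind such climatologies can be sketched as binning point flash observations onto a 0.5 deg grid. The real LIS/OTD products additionally normalize by per-cell view time and apply smoothing; the flash coordinates below are hypothetical:

```python
import numpy as np

res = 0.5  # grid resolution in degrees, as in the annual climatology
lat_edges = np.arange(-38.0, 38.0 + res, res)     # LIS coverage band
lon_edges = np.arange(-180.0, 180.0 + res, res)

# Hypothetical flash locations as (lat, lon) in degrees.
flashes = np.array([[10.2, -65.3], [10.3, -65.1], [-5.0, 23.7]])
counts, _, _ = np.histogram2d(flashes[:, 0], flashes[:, 1],
                              bins=[lat_edges, lon_edges])
# A flash *rate* climatology would divide counts by per-cell observation time.
```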
NASA Astrophysics Data System (ADS)
Thomas, N.; Rueda, X.; Lambin, E.; Mendenhall, C. D.
2012-12-01
Large intact forested regions of the world are known to be critical to maintaining Earth's climate, ecosystem health, and human livelihoods. Remote sensing has been successfully implemented as a tool to monitor forest cover and landscape dynamics over broad regions. Much of this work has been done using coarse resolution sensors such as AVHRR and MODIS in combination with moderate resolution sensors, particularly Landsat. Finer scale analysis of heterogeneous and fragmented landscapes is commonly performed with medium resolution data and has had varying success depending on many factors including the level of fragmentation, variability of land cover types, patch size, and image availability. Fine scale tree cover in mixed agricultural areas can have a major impact on biodiversity and ecosystem sustainability but may often be inadequately captured with the global to regional (coarse resolution and moderate resolution) satellite sensors and processing techniques widely used to detect land use and land cover changes. This study investigates whether advanced remote sensing methods are able to assess and monitor percent tree canopy cover in spatially complex human-dominated agricultural landscapes that prove challenging for traditional mapping techniques. Our study areas are in high altitude, mixed agricultural coffee-growing regions in Costa Rica and the Colombian Andes. We applied Random Forests regression tree analysis to Landsat data along with additional spectral, environmental, and spatial variables to predict percent tree canopy cover at 30m resolution. Image object-based texture, shape, and neighborhood metrics were generated at the Landsat scale using eCognition and included in the variable suite. Training and validation data was generated using high resolution imagery from digital aerial photography at 1m to 2.5 m resolution. 
Our results are promising, with Pearson's correlation coefficients between observed and predicted percent tree canopy cover of 0.86 (Costa Rica) and 0.83 (Colombia). The tree cover mapping developed here supports two distinct projects on sustaining biodiversity and natural and human capital: in Costa Rica, the tree canopy cover map is utilized to predict bird community composition; in Colombia, the mapping is performed for two time periods and used to assess the impact of coffee eco-certification programs on the landscape. This research identifies ways to leverage readily available, high quality, and cost-free Landsat data or other medium resolution satellite data sources in combination with high resolution data, such as that frequently available through Google Earth, to monitor and support sustainability efforts in fragmented and heterogeneous landscapes.
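The regression step can be sketched as follows, using scikit-learn's RandomForestRegressor as a stand-in for the Random Forests regression trees in the study; the predictors and their relation to canopy cover are entirely synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins for the per-pixel predictors (Landsat bands, texture,
# environmental variables) and the percent-canopy-cover target.
rng = np.random.default_rng(1)
X = rng.random((500, 6))            # hypothetical predictor columns
y = 100.0 * X[:, 0]                 # hypothetical cover relation, 0-100 percent

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
r = float(np.corrcoef(model.predict(X), y)[0, 1])   # in-sample correlation
```

In the study the training targets came from 1-2.5 m aerial photography rather than a known formula, and validation used held-out data, which is what the quoted 0.86/0.83 correlations refer to.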
NASA Astrophysics Data System (ADS)
Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao
2018-01-01
Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbance during image acquisition, and sub-optimum solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method that addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of the reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm2 and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate its promising potential for applications in biological imaging.
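The quoted factor follows directly from the pixel pitch: a bare sensor sampling at pitch p cannot resolve a half-pitch finer than p (Nyquist), so the gain is the ratio of pitch to achieved resolution:

```python
# Headline numbers from the abstract: 1.67 um pixel pitch sets the
# Nyquist-limited half-pitch resolution; APLI reaches 770 nm.
pixel_um = 1.67
achieved_um = 0.770
gain = pixel_um / achieved_um   # the factor quoted in the abstract
```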
NASA Astrophysics Data System (ADS)
Zhang, Edward Z.; Laufer, Jan; Beard, Paul
2007-02-01
A 3D photoacoustic imaging instrument for characterising small animal models of human disease processes has been developed. The system comprises an OPO excitation source and a backward-mode planar ultrasound imaging head based upon a Fabry-Perot polymer film sensing interferometer (FPI). The mirrors of the latter are transparent between 590 and 1200 nm but highly reflective between 1500 and 1600 nm. This enables nanosecond excitation laser pulses in the former wavelength range, where biological tissues are relatively transparent, to be transmitted through the sensor head into the tissue. The resulting photoacoustic signals arrive at the sensor, where they modulate the optical thickness of the FPI and therefore its reflectivity. By scanning a CW focused interrogating laser beam at 1550 nm across the surface of the sensor, the spatio-temporal distribution of the photoacoustic signals can be mapped in 2D, enabling a 3D photoacoustic image to be reconstructed. To demonstrate the application of the system to imaging small animals such as mice, 3D images of the vascular anatomy of the mouse brain and the microvasculature in the skin around the abdomen were obtained non-invasively. It is considered that this system provides a practical alternative to photoacoustic scanners based upon piezoelectric detectors for high-resolution non-invasive small animal imaging.
NASA Astrophysics Data System (ADS)
Savin, A.; Novy, F.; Fintova, S.; Steigmann, R.
2017-08-01
The current stage of nondestructive evaluation techniques calls for the development of new electromagnetic (EM) methods offering high spatial resolution and increased sensitivity. In order to achieve high performance, the working frequencies must be either radiofrequencies or microwaves. At these frequencies, plasmon polaritons can appear at the dielectric/conductor interface, propagating between conductive regions as evanescent waves. In order to use the evanescent waves that can appear even when the slit width is much smaller than the wavelength of the incident EM wave, a sensor with a metamaterial (MM) is used. The study of EM field diffraction at the edge of a long thin discontinuity placed under the inspected surface of a conductive plate has been performed using the principles of geometrical optics. A sensor of this type, with the reception coil shielded by a conductive screen with a circular aperture placed in front of it, has been developed; it transports the information needed to obtain a magnified image of the inspected conductive structures. This work presents a sensor using a conical Swiss-roll MM that allows the propagation of evanescent waves, so that the electromagnetic images are magnified. The test method can be successfully applied in a variety of applications of major importance, such as defect/damage detection in materials used in automotive and aviation technologies. Applying this testing method, the spatial resolution can be improved.
NVSIM: UNIX-based thermal imaging system simulator
NASA Astrophysics Data System (ADS)
Horger, John D.
1993-08-01
For several years the Night Vision and Electronic Sensors Directorate (NVESD) has been using an internally developed forward looking infrared (FLIR) simulation program. In response to interest in the simulation part of these projects by other organizations, NVESD has been working on a new version of the simulation, NVSIM, that will be made generally available to the FLIR-using community. NVSIM uses basic FLIR specification data, high resolution thermal input imagery, and spatial domain image processing techniques to produce simulated image outputs from a broad variety of FLIRs. It is built around modular programming techniques to allow simpler addition of more sensor effects. The modularity also allows selective inclusion and exclusion of individual sensor effects at run time. The simulation has been written in the industry standard ANSI C programming language under the widely used UNIX operating system to make it easily portable to a wide variety of computer platforms.
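The modular, spatial-domain design NVSIM describes can be sketched as a chain of sensor-effect functions that can be toggled at run time. The Gaussian blur below stands in for a real FLIR MTF model (which NVSIM derives from specification data), and the pipeline structure is an assumption about the pattern, not the tool's actual code:

```python
import numpy as np

def gaussian_psf(size=9, sigma=1.5):
    # Normalized Gaussian point-spread function (a stand-in for a FLIR MTF).
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def convolve2d(img, kernel):
    # Naive spatial-domain convolution with edge padding.
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(ks):
        for dx in range(ks):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

# Modular pipeline: each sensor effect is a function; including or excluding
# an effect is just adding or removing it from the list.
effects = [lambda im: convolve2d(im, gaussian_psf())]
scene = np.zeros((32, 32)); scene[16, 16] = 1.0   # a point source
out = scene
for effect in effects:
    out = effect(out)
```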
Field-portable lensfree tomographic microscope†
Isikman, Serhan O.; Bishara, Waheb; Sikora, Uzair; Yaglidere, Oguzhan; Yeah, John; Ozcan, Aydogan
2011-01-01
We present a field-portable lensfree tomographic microscope, which can achieve sectional imaging of a large volume (~20 mm3) on a chip with an axial resolution of <7 μm. In this compact tomographic imaging platform (weighing only ~110 grams), 24 light-emitting diodes (LEDs) that are each butt-coupled to a fibre-optic waveguide are controlled through a cost-effective micro-processor to sequentially illuminate the sample from different angles to record lensfree holograms of the sample that is placed on the top of a digital sensor array. In order to generate pixel super-resolved (SR) lensfree holograms and hence digitally improve the achievable lateral resolution, multiple sub-pixel shifted holograms are recorded at each illumination angle by electromagnetically actuating the fibre-optic waveguides using compact coils and magnets. These SR projection holograms obtained over an angular range of ~50° are rapidly reconstructed to yield projection images of the sample, which can then be back-projected to compute tomograms of the objects on the sensor-chip. The performance of this compact and light-weight lensfree tomographic microscope is validated by imaging micro-beads of different dimensions as well as a Hymenolepis nana egg, which is an infectious parasitic flatworm. Achieving a decent three-dimensional spatial resolution, this field-portable on-chip optical tomographic microscope might provide a useful toolset for telemedicine and high-throughput imaging applications in resource-poor settings. PMID:21573311
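The pixel super-resolution step can be illustrated with a toy shift-and-add scheme: low-resolution frames acquired at known sub-pixel shifts are interleaved on a finer grid. The microscope's actual SR reconstruction is more sophisticated; this only shows the principle:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    # Place each low-resolution frame, shifted by (dy, dx) in LR pixels,
    # onto a grid that is `factor` times finer.
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    for frame, (dy, dx) in zip(frames, shifts):
        ys, xs = int(round(dy * factor)), int(round(dx * factor))
        hi[ys::factor, xs::factor] = frame
    return hi

# Synthetic demo: four half-pixel-shifted 2x2 frames sampled from a 4x4 scene
# are recombined into the full 4x4 scene.
gt = np.arange(16.0).reshape(4, 4)
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [gt[int(dy * 2)::2, int(dx * 2)::2] for dy, dx in shifts]
rec = shift_and_add(frames, shifts, factor=2)
```

In the instrument, the sub-pixel shifts come from electromagnetically actuating the fibre-optic waveguides, and the shifts must be estimated rather than known exactly.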
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang
1994-01-01
This research was initiated as a part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. The images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared to those images obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single frame of PMMW image. Subsequent development and evaluation of algorithms was done using video image sequences. These images have better spatial and temporal resolution compared to PMMW images. Algorithms for reliable recognition of runways and accurate estimation of spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.
NASA Astrophysics Data System (ADS)
Caras, Tamir; Hedley, John; Karnieli, Arnon
2017-12-01
Remote sensing offers a potential tool for large scale environmental surveying and monitoring. However, remote observations of coral reefs are difficult, especially due to the spatial and spectral complexity of the target compared to sensor specifications, as well as the environmental implications of the water medium above. The development of sensors is driven by technological advances and the desired products. Currently, spaceborne systems are technologically limited to a choice between high spectral resolution and high spatial resolution, but not both. The current study explores the dilemma of whether future sensor design for marine monitoring should prioritise improving spatial or spectral resolution. To address this question, a spatially and spectrally resampled ground-level hyperspectral image was used to test two classification elements: (1) how the trade-off between spatial and spectral resolutions affects classification; and (2) how noise reduction by a majority filter might improve classification accuracy. The studied reef, in the Gulf of Aqaba (Eilat), Israel, is heterogeneous and complex, so the local substrate patches are generally finer than currently available imagery. Therefore, the tested spatial resolution was broadly divided into four scale categories from five millimeters to one meter. Spectral resolution resampling aimed to mimic currently available and forthcoming spaceborne sensors such as (1) the Environmental Mapping and Analysis Program (EnMAP), characterized by 25 bands of 6.5 nm width; (2) VENμS, with 12 narrow bands; and (3) the WorldView series, with broadband multispectral resolution. Results suggest that spatial resolution should generally be prioritized for coral reef classification because the finer spatial scales tested (pixel size < 0.1 m) may compensate for some low spectral resolution drawbacks.
In this regard, it is shown that post-classification majority filtering substantially improves the accuracy at all pixel sizes, up to the point where the kernel size reaches the average unit size (pixel < 0.25 m). However, careful investigation of the effect of band distribution and choice could improve the sensor's suitability for the marine environment task. With this in mind, while the focus of this study was on the technologically limited spaceborne design, aerial sensors may presently provide an opportunity to implement the suggested setup.
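The post-classification majority filter evaluated in the study can be sketched as a sliding-window mode filter; the window size and class map below are illustrative:

```python
import numpy as np
from collections import Counter

def majority_filter(labels, k=3):
    # Replace each pixel's class by the most common class in its k x k window.
    pad = k // 2
    padded = np.pad(labels, pad, mode="edge")
    out = np.empty_like(labels)
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + k, j:j + k].ravel().tolist()
            out[i, j] = Counter(window).most_common(1)[0][0]
    return out

# An isolated misclassified pixel inside a uniform patch is removed.
cls = np.ones((5, 5), dtype=int)
cls[2, 2] = 3                      # salt-and-pepper classification noise
smoothed = majority_filter(cls)
```

This is why the filter helps only up to the point where the kernel approaches the average substrate unit size: beyond that, it starts erasing real patches rather than noise.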
SPIDER: Next Generation Chip Scale Imaging Sensor Update
NASA Astrophysics Data System (ADS)
Duncan, A.; Kendrick, R.; Ogden, C.; Wuchenich, D.; Thurman, S.; Su, T.; Lai, W.; Chun, J.; Li, S.; Liu, G.; Yoo, S. J. B.
2016-09-01
The Lockheed Martin Advanced Technology Center (LM ATC) and the University of California at Davis (UC Davis) are developing an electro-optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that seeks to provide a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal-plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger-aperture imager in a constrained volume. Our SPIDER imager replaces the traditional optical telescope and digital focal plane detector array with a densely packed interferometer array based on emerging photonic integrated circuit (PIC) technologies that samples the object being imaged in the Fourier domain (i.e., spatial frequency domain), and then reconstructs an image. Our approach replaces the large optics and structures required by a conventional telescope with PICs that are accommodated by standard lithographic fabrication techniques (e.g., complementary metal-oxide-semiconductor (CMOS) fabrication). The standard EO payload integration and test process that involves precision alignment and test of optical components to form a diffraction limited telescope is, therefore, replaced by in-process integration and test as part of the PIC fabrication, which substantially reduces associated schedule and cost. This paper provides an overview of performance data on the second-generation PIC for SPIDER developed under the Defense Advanced Research Projects Agency (DARPA)'s SPIDER Zoom research funding. We also update the design description of the SPIDER Zoom imaging sensor and the second-generation PIC (high- and low-resolution versions).
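The core idea of sampling the scene in the Fourier domain and reconstructing an image can be sketched as follows. Real SPIDER baselines yield sparse, non-uniform spatial-frequency coverage and require more careful reconstruction; this toy samples every frequency and inverts exactly:

```python
import numpy as np

# A synthetic scene: a bright rectangle on a dark background.
scene = np.zeros((16, 16))
scene[4:12, 6:10] = 1.0

# Each interferometer baseline measures one complex spatial-frequency sample
# of the scene (a "visibility"). Here we take all of them at once via an FFT.
visibilities = np.fft.fft2(scene)

# With full Fourier coverage, the image is recovered by an inverse FFT.
recovered = np.fft.ifft2(visibilities).real
```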
Vector sensor for scanning SQUID microscopy
NASA Astrophysics Data System (ADS)
Dang, Vu The; Toji, Masaki; Thanh Huy, Ho; Miyajima, Shigeyuki; Shishido, Hiroaki; Hidaka, Mutsuo; Hayashi, Masahiko; Ishida, Takekazu
2017-07-01
We plan to build a novel three-dimensional (3D) scanning SQUID microscope with high sensitivity and high spatial resolution. In this system, a vector sensor consisting of three SQUID sensors and three pick-up coils is realized on a single chip. The three pick-up coils are configured orthogonally to one another to measure the X, Y, and Z components of the magnetic field vector. We fabricated SQUID chips with one uniaxial pick-up coil or three vector pick-up coils and carried out fundamental measurements to reveal their basic characteristics. The Josephson junctions (JJs) of the sensors are designed to have a critical current density Jc of 320 A/cm2, and the critical current Ic becomes 12.5 μA for a 2.2 μm × 2.2 μm JJ. We carefully positioned the three pick-up coils so as to keep the centers of all three X, Y, and Z coils at the same height; this can be done by arranging them along a single line parallel to the sample surface. With the aid of the multilayer technology of Nb-based fabrication, we attempted to reduce the inner diameter of the pick-up coils to enhance both sensitivity and spatial resolution. The spatial resolution of a local magnetic field image is further improved by employing an XYZ piezo-driven scanner to control the positions of the pick-up coils. Measurements of the fundamental characteristics confirmed the proper operation of our SQUID sensors and showed good agreement with our design parameters.
Engineering the Ideal Gigapixel Image Viewer
NASA Astrophysics Data System (ADS)
Perpeet, D.; Wassenberg, J.
2011-09-01
Despite improvements in automatic processing, analysts are still faced with the task of evaluating gigapixel-scale mosaics or images acquired by telescopes such as Pan-STARRS. Displaying such images in ‘ideal’ form is a major challenge even today, and the amount of data will only increase as sensor resolutions improve. In our opinion, the ideal viewer has several key characteristics. Lossless display - down to individual pixels - ensures all information can be extracted from the image. Support for all relevant pixel formats (integer or floating point) allows displaying data from different sensors. Smooth zooming and panning in the high-resolution data enables rapid screening and navigation in the image. High responsiveness to input commands avoids frustrating delays. Instantaneous image enhancement, e.g. contrast adjustment and image channel selection, helps with analysis tasks. Modest system requirements allow viewing on regular workstation computers or even laptops. To the best of our knowledge, no such software product is currently available. Meeting these goals requires addressing certain realities of current computer architectures. GPU hardware accelerates rendering and allows smooth zooming without high CPU load. Programmable GPU shaders enable instant channel selection and contrast adjustment without any perceptible slowdown or changes to the input data. Relatively low disk transfer speeds suggest the use of compression to decrease the amount of data to transfer. Asynchronous I/O allows decompressing while waiting for previous I/O operations to complete. The slow seek times of magnetic disks motivate optimizing the order of the data on disk. Vectorization and parallelization allow significant increases in computational capacity. Limited memory requires streaming and caching of image regions. We develop a viewer that takes the above issues into account. 
Its awareness of the computer architecture enables previously unattainable features such as smooth zooming and image enhancement within high-resolution data. We describe our implementation, disclosing its novel file format and lossless image codec whose decompression is faster than copying the raw data in memory. Both provide crucial performance boosts compared to conventional approaches. Usability tests demonstrate the suitability of our viewer for rapid analysis of large SAR datasets, multispectral satellite imagery and mosaics.
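The "streaming and caching of image regions" the authors call for can be sketched as an LRU tile cache. This is a generic pattern, not their disclosed implementation; a real viewer would add asynchronous I/O, the custom codec, and GPU upload around it:

```python
from collections import OrderedDict

class TileCache:
    """Tiny least-recently-used cache for decoded image tiles."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._tiles = OrderedDict()   # insertion order tracks recency

    def get(self, key, load):
        if key in self._tiles:
            self._tiles.move_to_end(key)        # hit: mark most recently used
        else:
            self._tiles[key] = load(key)        # miss: decode/fetch the tile
            if len(self._tiles) > self.capacity:
                self._tiles.popitem(last=False) # evict least recently used
        return self._tiles[key]

# Usage: track which tiles actually hit the (slow) loader.
cache = TileCache(capacity=2)
loads = []
fetch = lambda key: loads.append(key) or key    # stand-in for disk decode
cache.get((0, 0), fetch)
cache.get((0, 1), fetch)
cache.get((0, 0), fetch)                        # hit: no new load
cache.get((1, 1), fetch)                        # evicts (0, 1)
```

Keeping recently viewed tiles resident is what makes smooth panning possible despite limited memory and slow disk seeks.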
EKOSAT/DIAMANT - The Earth Observation Programme at OHB- System
NASA Astrophysics Data System (ADS)
Penne, B.; Tobehn, C.; Kassebom, M.; Luebberstedt
This paper covers the EKOSAT/DIAMANT programme, which is heading for superspectral geo-information products. The EKOSAT/DIAMANT programme is based on a commercial strategy, with the first step - the EKOSAT launch in 2004 - close to realization. Further, we give an overview of OHB-System earth observation prime activities, especially for infrared and radar. EKOSAT/DIAMANT is based on the MSRS sensor featuring 12 user-dedicated spectral bands in the VIS/NIR with 5 m spatial resolution and a 26 km swath at an orbit of 670 km. The operational demonstrator mission EKOSAT is a Korean-Israeli-German-Russian initiative that aims at utilizing the existing proto-flight model of the KOMPSAT-1 spacecraft for the MSRS sensor, whose development is finished. The EKOSAT pointing capability will allow a revisit time of 3 days. DIAMANT stands for the future fully operational system based on dedicated small satellites. The basic constellation, relying on 2-3 satellites with about one-day revisit, can be extended on market demand. EKOSAT/DIAMANT is designed to fill the gap between modern high-spatial-resolution multispectral (MS) systems and hyperspectral systems with moderate spatial resolution. At the European level, there is currently no operational remote sensing system with comparable features and capabilities concerning applications, especially in the fields of environmental issues, vegetation, agriculture and water bodies. The Space Segment has been designed to satisfy the user requirements based on a balance between commercial aspects and scientific approaches. For example, eight spectral bands have been identified to cover almost the entire product range for the current market. An additional four bands have been implemented in preparation for future applications, for example improved red-edge detection, which gives better results regarding environmental conditions. The spacecraft design and its subsystems are kept reasonably small in order to keep the mass below 200 kg.
This is an important cost-saving approach that offers higher viability of the system. The Intelligent Infrared Sensor System - FOCUS - aims at the reliable autonomous on-board detection of High Temperature Events (HTE) on the Earth's surface. The key to this task is the simultaneous co-registration of a combination of infrared (IR) and visible (VIS) channels. Furthermore, there are ecology-oriented objectives, mainly related to the sophisticated data fusion of spectrometric & imaging remote inspection and parameter extraction of selected HTEs, and to the assessment of ecological consequences of HTEs, such as aerosol and gas emission. The FOCUS multi-sensor consists of two sensor systems: the Fore Field Sensor (FFS) will perform wide-angle hot spot detection and mapping. For the hot spots detected and selected on board, the Main Sensor (MS) will be targeted with a tiltable mirror and deliver detailed, high-spatial-resolution observations. The MS is composed of an imaging system and a Fourier spectrometer. The SAR-Lupe satellite system - under development by OHB-System - will generate high resolution SAR (Synthetic Aperture Radar) images for military reconnaissance purposes. SAR-Lupe relies on a constellation of small satellites in low earth orbit, plus one control and one user ground segment.
Hain, Christopher R; Anderson, Martha C
2017-10-16
Observations of land surface temperature (LST) are crucial for the monitoring of surface energy fluxes from satellite. Methods that require high-temporal-resolution LST observations (e.g., from geostationary orbit) can be difficult to apply globally because several geostationary sensors are required to attain near-global coverage (60°N to 60°S). While these LST observations are available from polar-orbiting sensors, providing global coverage at higher spatial resolutions, the temporal sampling (twice-daily observations) can pose significant limitations. For example, the Atmosphere Land Exchange Inverse (ALEXI) surface energy balance model, used for monitoring evapotranspiration and drought, requires an observation of the morning change in LST, a quantity not directly observable from polar-orbiting sensors. Therefore, we have developed and evaluated a data-mining approach to estimate the mid-morning rise in LST from a single sensor (2 observations per day), the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua platform. In general, the data-mining approach produced estimates with low relative error (5 to 10%) and statistically significant correlations when compared against geostationary observations. This approach will facilitate global, near-real-time applications of ALEXI at higher spatial and temporal coverage from a single sensor than is achievable with current geostationary datasets.
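The data-mining step can be sketched as a regression from the two daily overpass observations to the mid-morning LST rise. Everything below is synthetic and illustrative: the paper does not publish its exact predictors or model form, and the coefficients here are invented for the demonstration:

```python
import numpy as np

# Synthetic stand-ins for Aqua MODIS LST at the two daily overpasses (K).
rng = np.random.default_rng(2)
lst_night = 280.0 + 10.0 * rng.random(300)           # ~01:30 overpass
lst_day = lst_night + 5.0 + 15.0 * rng.random(300)   # ~13:30 overpass

# Hypothetical "true" mid-morning rise, assumed linear in the day-night
# difference purely so the fit can be checked exactly.
rise_true = 0.4 * (lst_day - lst_night) + 1.0

# Least-squares fit of rise against the day-night LST difference.
X = np.column_stack([np.ones(300), lst_day - lst_night])
coef, *_ = np.linalg.lstsq(X, rise_true, rcond=None)
pred = X @ coef
rel_err = float(np.abs(pred - rise_true).mean() / rise_true.mean())
```

The real approach mines a larger predictor set against geostationary truth data, and its quoted 5-10% relative error reflects real atmospheric variability rather than an exact functional relation.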
A study of CR-39 plastic charged-particle detector replacement by consumer imaging sensors
NASA Astrophysics Data System (ADS)
Plaud-Ramos, K. O.; Freeman, M. S.; Wei, W.; Guardincerri, E.; Bacon, J. D.; Cowan, J.; Durham, J. M.; Huang, D.; Gao, J.; Hoffbauer, M. A.; Morley, D. J.; Morris, C. L.; Poulson, D. C.; Wang, Zhehui
2016-11-01
Consumer imaging sensors (CIS) are examined for real-time charged-particle detection and CR-39 plastic detector replacement. Removing cover glass from CIS is hard if not impossible, in particular for the latest inexpensive webcam models. We show that $10-class CIS are sensitive to MeV and higher energy protons and α-particles by using a 90Sr β-source with its cover glass in place. Indirect, real-time, high-resolution detection is also feasible when combining CIS with a ZnS:Ag phosphor screen and optics. Noise reduction in CIS is nevertheless important for the indirect approach.
Thiel, Florian; Kosch, Olaf; Seifert, Frank
2010-01-01
The specific advantages of ultra-wideband electromagnetic remote sensing (UWB radar) make it a particularly attractive technique for biomedical applications. We partially review our activities in utilizing this novel approach for the benefit of high- and ultra-high-field magnetic resonance imaging (MRI) and other applications, e.g., intensive care medicine and biomedical research. We could show that our approach is beneficial for applications such as motion tracking for high-resolution brain imaging, thanks to the non-contact acquisition of involuntary head motions with high spatial resolution; navigation for cardiac MRI, through our interpretation of the detected physiological mechanical contraction of the heart muscle; and MR safety, since we have investigated the influence of high static magnetic fields on myocardial mechanics. From our findings we conclude that UWB radar can serve as a navigator technique for high- and ultra-high-field MRI and can help preserve the high-resolution capability of this imaging modality. Furthermore, it can potentially support standard ECG analysis with complementary information where ECG analysis alone fails. Further analytical investigations have proven the feasibility of this method for detecting intracranial displacements and for rendering the contrast-agent-based perfusion dynamics of a tumour. Beside these analytical approaches, we have carried out FDTD simulations of a complex arrangement mimicking the illumination of a human torso model that incorporates the geometry of the applied antennas.
A new sampling scheme for tropical forest monitoring using satellite imagery
Frederic Achard; Tim Richards; Javier Gallego
2000-01-01
At the global level, a sampling scheme for tropical forest change assessment using high-resolution satellite images has been defined, with sampling units independent of any particular satellite sensor. For this purpose, a hexagonal tessellation of 3,600 km² has been chosen as the sampling frame.
Remote sensing of vegetation and land-cover change in Arctic Tundra Ecosystems
Stow, Douglas A.; Hope, Allen; McGuire, David; Verbyla, David; Gamon, John A.; Huemmrich, Fred; Houston, Stan; Racine, Charles H.; Sturm, Matthew; Tape, Ken D.; Hinzman, Larry D.; Yoshikawa, Kenji; Tweedie, Craig E.; Noyle, Brian; Silapaswan, Cherie; Douglas, David C.; Griffith, Brad; Jia, Gensuo; Howard E. Epstein,; Walker, Donald A.; Daeschner, Scott; Petersen, Aaron; Zhou, Liming; Myneni, Ranga B.
2004-01-01
The objective of this paper is to review research conducted over the past decade on the application of multi-temporal remote sensing for monitoring changes of Arctic tundra lands. Emphasis is placed on results from the National Science Foundation Land–Air–Ice Interactions (LAII) program and on optical remote sensing techniques. Case studies demonstrate that ground-level sensors on stationary or moving track platforms and wide-swath imaging sensors on polar orbiting satellites are particularly useful for capturing optical remote sensing data at sufficient frequency to study tundra vegetation dynamics and changes for the cloud-prone Arctic. Less frequent imaging with high spatial resolution instruments on aircraft and lower orbiting satellites enables more detailed analyses of land cover change and calibration/validation of coarser-resolution observations. The strongest signals of ecosystem change detected thus far appear to correspond to expansion of tundra shrubs and changes in the amount and extent of thaw lakes and ponds. Changes in shrub cover and extent have been documented by modern repeat imaging that matches archived historical aerial photography. NOAA Advanced Very High Resolution Radiometer (AVHRR) time series provide a 20-year record for determining changes in greenness that relate to photosynthetic activity, net primary production, and growing season length. The strong contrast between land materials and surface waters enables changes in lake and pond extent to be readily measured and monitored.
Camera array based light field microscopy
Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai
2015-01-01
This paper proposes a novel approach for high-resolution light field microscopy imaging using a camera array. In this approach, we apply a two-stage relay system to expand the aperture plane of the microscope to the size of an imaging lens array, and we utilize a sensor array to acquire the different sub-aperture images formed by the corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos of various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490
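Post-capture refocusing of a camera-array light field reduces to shifting each sub-aperture image in proportion to its viewpoint offset and averaging. A minimal NumPy sketch of this shift-and-sum idea (an illustration only, not the authors' pipeline; the view layout and disparity convention are assumptions):

```python
import numpy as np

def refocus(views, disparity):
    """Shift-and-sum refocusing of a grid of sub-aperture images.

    views: dict mapping viewpoint offsets (u, v) to 2-D grayscale arrays.
    disparity: pixel shift per unit viewpoint offset; choosing it selects
    the depth plane that comes into focus after averaging.
    """
    acc = None
    for (u, v), img in views.items():
        # A point on the chosen focal plane appears displaced across views
        # in proportion to (u, v); rolling each view undoes that shift so
        # the in-focus plane aligns before averaging.
        shift = (int(round(u * disparity)), int(round(v * disparity)))
        shifted = np.roll(img, shift, axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)
```

Points off the selected plane stay misaligned across views and average into a blur, which is why even a modest 5 × 5 viewpoint grid supports synthetic refocusing.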
Scaling of surface energy fluxes using remotely sensed data
NASA Astrophysics Data System (ADS)
French, Andrew Nichols
Accurate estimates of evapotranspiration (ET) across multiple terrains would greatly ease challenges faced by hydrologists, climate modelers, and agronomists as they attempt to apply theoretical models to real-world situations. One ET estimation approach uses an energy balance model to interpret a combination of meteorological observations taken at the surface and data captured by remote sensors. However, results of this approach have not been accurate because of poor understanding of the relationship between surface energy flux and land cover heterogeneity, combined with limits in available resolution of remote sensors. The purpose of this study was to determine how land cover and image resolution affect ET estimates. Using remotely sensed data collected over El Reno, Oklahoma, during four days in June and July 1997, scale effects on the estimation of spatially distributed ET were investigated. Instantaneous estimates of latent and sensible heat flux were calculated using a two-source surface energy balance model driven by thermal infrared, visible-near infrared, and meteorological data. The heat flux estimates were verified by comparison to independent eddy-covariance observations. Outcomes of observations taken at coarser resolutions were simulated by aggregating remote sensor data and estimated surface energy balance components from the finest sensor resolution (12 meter) to hypothetical resolutions as coarse as one kilometer. Estimated surface energy flux components were found to be significantly dependent on observation scale. For example, average evaporative fraction varied from 0.79, using 12-m resolution data, to 0.93, using 1-km resolution data. Resolution effects upon flux estimates were related to a measure of landscape heterogeneity known as operational scale, reflecting the size of dominant landscape features. Energy flux estimates based on data at resolutions less than 100 m and much greater than 400 m showed a scale-dependent bias. 
But estimates derived from data taken at about 400-m resolution (the operational scale at El Reno) were susceptible to large error due to mixing of surface types. The El Reno experiments show that accurate instantaneous estimates of ET require precise image alignment and image resolutions finer than landscape operational scale. These findings are valuable for the design of sensors and experiments to quantify spatially-varying hydrologic processes.
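The scale dependence described above stems from the nonlinearity of energy-balance ratios: averaging the fluxes and then forming the evaporative fraction EF = LE/(LE + H) is not the same as averaging per-pixel EF. A toy NumPy illustration with invented flux values (not the El Reno data):

```python
import numpy as np

def block_average(field, factor):
    """Aggregate a 2-D field to a coarser grid by averaging factor x factor blocks."""
    h, w = field.shape
    return field.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical fine-resolution latent (LE) and sensible (H) heat fluxes, W/m^2,
# over a heterogeneous scene: left half well-watered crop, right half dry soil.
LE = np.block([[np.full((4, 4), 400.0), np.full((4, 4), 50.0)],
               [np.full((4, 4), 400.0), np.full((4, 4), 50.0)]])
H  = np.block([[np.full((4, 4), 100.0), np.full((4, 4), 300.0)],
               [np.full((4, 4), 100.0), np.full((4, 4), 300.0)]])

ef_fine = (LE / (LE + H)).mean()        # mean of per-pixel evaporative fraction
LEc, Hc = block_average(LE, 8), block_average(H, 8)
ef_coarse = (LEc / (LEc + Hc)).mean()   # EF formed from aggregated fluxes
```

In this toy scene the coarse-scale EF (≈0.53) exceeds the mean fine-scale EF (≈0.47), the same direction of shift the study reports between 12-m and 1-km resolutions.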
Merging climate and multi-sensor time-series data in real-time drought monitoring across the U.S.A.
Brown, Jesslyn F.; Miura, T.; Wardlow, B.; Gu, Yingxin
2011-01-01
Droughts occur repeatedly in the United States, resulting in billions of dollars of damage. Monitoring and reporting on drought conditions is a necessary function of government agencies at multiple levels. A team of Federal and university partners developed a drought decision-support tool with higher spatial resolution relative to traditional climate-based drought maps. The Vegetation Drought Response Index (VegDRI) indicates general canopy vegetation condition through the assimilation of climate, satellite, and biophysical data via geospatial modeling. In VegDRI, complementary drought-related data are merged to provide a comprehensive, detailed representation of drought stress on vegetation. Time-series data from daily polar-orbiting earth observing systems [Advanced Very High Resolution Radiometer (AVHRR) and Moderate Resolution Imaging Spectroradiometer (MODIS)] providing global measurements of land surface conditions are ingested into VegDRI. Inter-sensor compatibility is required to extend multi-sensor data records; thus, translations were developed using overlapping observations to create consistent, long-term time series.
Evaluation and comparison of the IRS-P6 and the landsat sensors
Chander, G.; Coan, M.J.; Scaramuzza, P.L.
2008-01-01
The Indian Remote Sensing Satellite (IRS-P6), also called ResourceSat-1, was launched in a polar sun-synchronous orbit on October 17, 2003. It carries three sensors: the high-resolution Linear Imaging Self-Scanner (LISS-IV), the medium-resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide-Field Sensor (AWiFS). These three sensors provide images of different resolutions and coverage. To understand the absolute radiometric calibration accuracy of IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to images from the Landsat-5 Thematic Mapper (TM) and Landsat-7 Enhanced TM Plus (ETM+) sensors. The approach involves calibration of surface observations based on image statistics from areas observed nearly simultaneously by the two sensors. This paper also evaluated the viability of data from these next-generation imagers for use in creating three National Land Cover Dataset (NLCD) products: land cover, percent tree canopy, and percent impervious surface. Individual products were consistent with previous studies but had slightly lower overall accuracies as compared to data from the Landsat sensors.
High Resolution Airborne Digital Imagery for Precision Agriculture
NASA Technical Reports Server (NTRS)
Herwitz, Stanley R.
1998-01-01
The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely-piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000 acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management. Few studies have examined the usefulness of high resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 × 10⁶ pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart show a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
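The fractal dimension in this kind of image analysis is commonly estimated by box counting: count the boxes N(s) the pattern occupies at box size s and fit log N(s) against log s, with the slope magnitude giving the dimension D. A minimal sketch for a binary pattern (illustrative only; estimators for grayscale NDVI surfaces, as used in the paper, differ in detail):

```python
import numpy as np

def box_count_dimension(binary, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary 2-D pattern by box counting.

    At each box size s, the image is tiled into s x s blocks and the
    occupied blocks are counted; fitting log N(s) = -D log s + c gives D.
    """
    counts = []
    h, w = binary.shape
    for s in sizes:
        blocks = binary[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A space-filling pattern yields D ≈ 2 and an isolated point yields D ≈ 0; natural land-cover textures fall in between, which is what makes the dimension-versus-pixel-size slope a useful descriptor.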
High-resolution CdTe detectors with application to various fields (Conference Presentation)
NASA Astrophysics Data System (ADS)
Takeda, Shin'ichiro; Orita, Tadashi; Arai, Yasuo; Sugawara, Hirotaka; Tomaru, Ryota; Katsuragawa, Miho; Sato, Goro; Watanabe, Shin; Ikeda, Hirokazu; Takahashi, Tadayuki; Furenlid, Lars R.; Barber, H. Bradford
2016-10-01
High-quality CdTe semiconductor detectors with both fine position resolution and high energy resolution hold great promise for improving measurement in various hard X-ray and gamma-ray imaging fields. ISAS/JAXA has been developing CdTe imaging detectors for over 15 years to meet the scientific demands of the latest celestial observations and the severe environmental constraints (power consumption, vibration, radiation) in space. The energy resolution of imaging detectors with a CdTe Schottky diode of In/CdTe/Pt or Al/CdTe/Pt contacts is a highlight of our development. We can greatly reduce the leakage current of the devices, which allows us to apply a higher bias voltage to collect charge. The 3.2 cm-wide and 0.75 mm-thick CdTe double-sided strip detector with a strip pitch of 250 µm has been successfully established and was mounted on the latest Japanese X-ray satellite. The energy resolution measured in ground tests was 2.1 keV (FWHM) at 59.5 keV. A detector with a much finer resolution of 60 µm is ready and was used in the FOXSI rocket mission to observe hard X-rays from the Sun. In this talk, we focus on our research activities applying space sensor technologies to various imaging fields such as medical imaging. Recent development of CdTe detectors, imaging modules with pinhole and coded-mask collimators, and experimental studies of the response to hard X-rays and gamma-rays are presented. The talk also covers research on the Compton camera, which has a configuration of stacked Si and CdTe imaging detectors.
Gutiérrez, Marco A; Manso, Luis J; Pandya, Harit; Núñez, Pedro
2017-02-11
Object detection and classification have countless applications in human-robot interacting systems. It is a necessary skill for autonomous robots that perform tasks in household scenarios. Despite the great advances in deep learning and computer vision, social robots performing non-trivial tasks usually spend most of their time finding and modeling objects. Working in real scenarios means dealing with constant environment changes and relatively low-quality sensor data due to the distance at which objects are often found. Ambient intelligence systems equipped with different sensors can also benefit from the ability to find objects, enabling them to inform humans about their location. For these applications to succeed, systems need to detect the objects that may potentially contain other objects, working with relatively low-resolution sensor data. A passive learning architecture for sensors has been designed in order to take advantage of multimodal information, obtained using an RGB-D camera and trained semantic language models. The main contribution of the architecture lies in the improvement of the performance of the sensor under conditions of low resolution and high light variations using a combination of image labeling and word semantics. The tests performed on each of the stages of the architecture compare this solution with current research labeling techniques for the application of an autonomous social robot working in an apartment. The results obtained demonstrate that the proposed sensor architecture outperforms state-of-the-art approaches.
Effects of satellite image spatial aggregation and resolution on estimates of forest land area
M.D. Nelson; R.E. McRoberts; G.R. Holden; M.E. Bauer
2009-01-01
Satellite imagery is being used increasingly in association with national forest inventories (NFIs) to produce maps and enhance estimates of forest attributes. We simulated several image spatial resolutions within sparsely and heavily forested study areas to assess resolution effects on estimates of forest land area, independent of other sensor characteristics. We...
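The resolution effect studied here can be mimicked by aggregating a fine forest/non-forest map with a majority-type rule: in sparsely forested areas, scattered patches that never dominate a coarse pixel vanish from the map. A hedged sketch (the function name and threshold rule are assumptions, not the authors' simulation):

```python
import numpy as np

def forest_area_at_resolution(forest_mask, factor, threshold=0.5):
    """Simulate a coarser sensor by block-aggregating a fine forest mask.

    A coarse pixel is labeled forest when the forest fraction within its
    factor x factor block exceeds `threshold`; returns the mapped forest
    area as a fraction of the scene.
    """
    h, w = forest_mask.shape
    frac = forest_mask.reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))
    return (frac > threshold).mean()
```

Lowering the threshold inflates forest area instead of erasing it, so the sign of the aggregation bias depends on both the rule and the landscape's forest density.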
Volumetric Forest Change Detection Through Vhr Satellite Imagery
NASA Astrophysics Data System (ADS)
Akca, Devrim; Stylianidis, Efstratios; Smagas, Konstantinos; Hofer, Martin; Poli, Daniela; Gruen, Armin; Sanchez Martin, Victor; Altan, Orhan; Walli, Andreas; Jimeno, Elisa; Garcia, Alejandro
2016-06-01
Quick and economical ways of detecting planimetric and volumetric changes of forest areas are in high demand. A research platform called FORSAT (A satellite processing platform for high resolution forest assessment) was developed for the extraction of 3D geometric information from VHR (very high resolution) imagery from satellite optical sensors and for automatic change detection. This 3D forest information solution was developed during a Eurostars project. FORSAT includes two main units. The first is dedicated to the geometric and radiometric processing of satellite optical imagery and 2D/3D information extraction. This includes: image radiometric pre-processing, image and ground point measurement, improvement of geometric sensor orientation, quasi-epipolar image generation for stereo measurements, digital surface model (DSM) extraction using a precise and robust image matching approach specially designed for VHR satellite imagery, generation of orthoimages, and 3D measurements in single images using mono-plotting as well as in stereo images and triplets. FORSAT supports most of the VHR optical imagery commonly used for civil applications: IKONOS, OrbView-3, SPOT-5 HRS, SPOT-5 HRG, QuickBird, GeoEye-1, WorldView-1/2, Pléiades 1A/1B, SPOT 6/7, and sensors of similar type expected in the future. The second unit of FORSAT is dedicated to 3D surface comparison for change detection. It allows users to import digital elevation models (DEMs), align them using an advanced 3D surface matching approach, and calculate the 3D differences and volume changes between epochs. To this end our 3D surface matching method, LS3D, is used. FORSAT is a single-source and flexible forest information solution with a very competitive price/quality ratio, allowing expert and non-expert remote sensing users to monitor forests in three and four dimensions from VHR optical imagery for many forest information needs.
The capacity and benefits of FORSAT have been tested in six case studies located in Austria, Cyprus, Spain, Switzerland and Turkey, using optical data from different sensors and with the purpose of monitoring forests with different geometric characteristics. The validation run on the Cyprus dataset is reported and commented on.
Sánchez-Durán, José A; Hidalgo-López, José A; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando
2015-08-19
Tactile sensors suffer from many types of interference and error, such as crosstalk, non-linearity, drift and hysteresis, so calibration should be carried out to compensate for these deviations. However, this procedure is difficult for sensors mounted on artificial hands for robots or prosthetics, for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often because the correction parameters are easily altered by time and surrounding conditions. This intensive and complex calibration could, however, be less critical, or at least simpler. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters such as the moments of the tactile image. These parameters could be less affected by common errors and interferences, or at least their variations could be on the order of those caused by accepted limitations, such as reduced spatial resolution. This paper shows results from experiments that support this idea. The experiments are carried out with a high-performance commercial sensor as well as with a low-cost, error-prone sensor built with a common procedure in robotics.
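The low-order moments in question are cheap to compute and degrade gracefully under reduced spatial resolution, which is the paper's central point. A small NumPy sketch of these descriptors (illustrative only, not the authors' code):

```python
import numpy as np

def image_moments(p):
    """Zeroth- and first-order moments of a 2-D tactile pressure image.

    Returns m00 (a total-force proxy) and the pressure centroid (cx, cy):
    compact descriptors that manipulation algorithms often use instead of
    the full taxel array.
    """
    ys, xs = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    m00 = p.sum()
    cx = (xs * p).sum() / m00   # column (x) coordinate of the centroid
    cy = (ys * p).sum() / m00   # row (y) coordinate of the centroid
    return m00, cx, cy
```

Because the centroid is a pressure-weighted average over all taxels, per-taxel errors such as mild non-linearity or drift tend to shift it far less than they distort individual readings.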
Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan
NASA Astrophysics Data System (ADS)
Pichette, Julien; Charle, Wouter; Lambrechts, Andy
2017-02-01
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor-belt applications, but translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), which exploits internal movement of a linescan sensor to enable fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048 × 3652 × 150 over the spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.