Mapping Capacitive Coupling Among Pixels in a Sensor Array
NASA Technical Reports Server (NTRS)
Seshadri, Suresh; Cole, David M.; Smith, Roger M.
2010-01-01
An improved method of mapping the capacitive contribution to cross-talk among pixels in an imaging array of sensors (typically, an imaging photodetector array) has been devised for use in calibrating and/or characterizing such an array. The method involves a sequence of resets of subarrays of pixels to specified voltages and measurement of the voltage responses of neighboring non-reset pixels.
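The reset-subset/difference procedure described above can be sketched numerically. The sketch below is illustrative only: the array size, reset voltages, and the toy nearest-neighbour coupling coefficient `ALPHA` are assumptions, not values from the report.

```python
import numpy as np

# Toy model: each pixel capacitively couples a small fraction of its
# voltage to its four nearest neighbours (assumed coupling strength).
ALPHA = 0.02

def read_out(reset_map):
    """Simulated read-out: pixel voltage plus capacitive pickup from neighbours."""
    v = reset_map.copy()
    v[1:, :]  += ALPHA * reset_map[:-1, :]
    v[:-1, :] += ALPHA * reset_map[1:, :]
    v[:, 1:]  += ALPHA * reset_map[:, :-1]
    v[:, :-1] += ALPHA * reset_map[:, 1:]
    return v

n = 8
v1, v2 = 0.0, 1.0
# First frame: every pixel reset to v1, then read out.
img1 = read_out(np.full((n, n), v1))
# Second frame: only a sparse subset (every 4th pixel) reset to v2.
subset = np.zeros((n, n))
subset[::4, ::4] = v2
img2 = read_out(subset)
# The difference image isolates the coupled response of non-reset neighbours.
diff = img2 - img1
coupling = diff[1, 0] / (v2 - v1)  # neighbour of the reset pixel at (0, 0)
```

The estimated `coupling` recovers `ALPHA` exactly in this noiseless toy model; in practice the subset spacing must be wide enough that the coupled responses of neighbouring reset pixels do not overlap.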
Characterization of pixel sensor designed in 180 nm SOI CMOS technology
NASA Astrophysics Data System (ADS)
Benka, T.; Havranek, M.; Hejtmanek, M.; Jakovenko, J.; Janoska, Z.; Marcisovska, M.; Marcisovsky, M.; Neue, G.; Tomasek, L.; Vrba, V.
2018-01-01
A new type of X-ray imaging Monolithic Active Pixel Sensor (MAPS), X-CHIP-02, was developed using a 180 nm deep-submicron Silicon On Insulator (SOI) CMOS commercial technology. Two pixel matrices, differing in pixel pitch (50 μm and 100 μm), were integrated into the prototype chip. The X-CHIP-02 contains several test structures, which are useful for characterization of individual blocks. The sensitive part of the pixel, integrated in the handle wafer, is one of the key structures designed for testing. The purpose of this structure is to determine the capacitance of the sensitive part (the diode in the MAPS pixel). The measured capacitance is 2.9 fF for the 50 μm pixel pitch and 4.8 fF for the 100 μm pixel pitch at -100 V (the default operational voltage). This structure was also used to measure the IV characteristics of the sensitive diode. In this work, we report on a circuit designed for precise determination of the sensor capacitance and the IV characteristics of both pixel types with respect to X-ray irradiation. The motivation for measuring the sensor capacitance is its importance for the design of front-end amplifier circuits. The design of the pixel elements, as well as the circuit simulation and laboratory measurement techniques, are described. The experimental results are of great importance for further development of MAPS sensors in this technology.
Can direct electron detectors outperform phosphor-CCD systems for TEM?
NASA Astrophysics Data System (ADS)
Moldovan, G.; Li, X.; Kirkland, A.
2008-08-01
A new generation of imaging detectors is being considered for application in TEM, but which device architectures can provide the best images? Monte Carlo simulations of the electron-sensor interaction are used here to calculate the expected modulation transfer of monolithic active pixel sensors (MAPS), hybrid active pixel sensors (HAPS) and double-sided silicon strip detectors (DSSDs), showing that ideal and nearly ideal transfer can be obtained with DSSD and MAPS sensors. These results strongly favour replacing current phosphor-screen and charge-coupled-device imaging systems with such directly exposed, position-sensitive electron detectors.
A Multi-Modality CMOS Sensor Array for Cell-Based Assay and Drug Screening.
Chi, Taiyun; Park, Jong Seok; Butts, Jessica C; Hookway, Tracy A; Su, Amy; Zhu, Chengjie; Styczynski, Mark P; McDevitt, Todd C; Wang, Hua
2015-12-01
In this paper, we present a fully integrated multi-modality CMOS cellular sensor array with four sensing modalities to characterize different cell physiological responses, including extracellular voltage recording, cellular impedance mapping, optical detection with shadow imaging and bioluminescence sensing, and thermal monitoring. The sensor array consists of nine parallel pixel groups and nine corresponding signal conditioning blocks. Each pixel group comprises one temperature sensor and 16 tri-modality sensor pixels, while each tri-modality sensor pixel can be independently configured for extracellular voltage recording, cellular impedance measurement (voltage excitation/current sensing), and optical detection. This sensor array supports multi-modality cellular sensing at the pixel level, which enables holistic cell characterization and joint-modality physiological monitoring on the same cellular sample with a pixel resolution of 80 μm × 100 μm. Comprehensive biological experiments with different living cell samples demonstrate the functionality and benefit of the proposed multi-modality sensing in cell-based assay and drug screening.
Plenoptic mapping for imaging and retrieval of the complex field amplitude of a laser beam.
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C
2016-12-26
The plenoptic sensor has been developed to sample complicated beam distortions produced by turbulence in the low atmosphere (deep turbulence or strong turbulence) with high-density data samples. In contrast to the conventional Shack-Hartmann wavefront sensor, which utilizes all the pixels under each lenslet of a micro-lens array (MLA) to obtain one data sample indicating sub-aperture phase gradient and photon intensity, the plenoptic sensor uses each illuminated pixel (with significant pixel value) under each MLA lenslet as a data point for local phase gradient and intensity. To characterize the working principle of a plenoptic sensor, we propose the concept of plenoptic mapping and its inverse mapping to describe the imaging and reconstruction processes, respectively. As a result, we show that plenoptic mapping is an efficient method to image and reconstruct the complex field amplitude of an incident beam with just one image. With a proof-of-concept experiment, we show that adaptive optics (AO) phase correction can be achieved instantaneously, without going through a phase reconstruction process, under the concept of plenoptic mapping. The plenoptic mapping technology has high potential for applications in imaging, free-space optical (FSO) communication and directed energy (DE), where atmospheric turbulence distortion needs to be compensated.
Wang, Xiandi; Zhang, Hanlu; Dong, Lin; Han, Xun; Du, Weiming; Zhai, Junyi; Pan, Caofeng; Wang, Zhong Lin
2016-04-20
A triboelectric sensor matrix (TESM) can accurately track and map 2D tactile sensing. A self-powered, high-resolution, pressure-sensitive, flexible and durable TESM with 16 × 16 pixels is fabricated for the fast detection of single-point and multi-point touching. Using cross-locating technology, a cross-type TESM with 32 × 20 pixels is developed for more rapid tactile mapping, which significantly reduces the number of addressing lines from m × n to m + n.
Log polar image sensor in CMOS technology
NASA Astrophysics Data System (ADS)
Scheffer, Danny; Dierickx, Bart; Pardo, Fernando; Vlummens, Jan; Meynants, Guy; Hermans, Lou
1996-08-01
We report on the design, design issues, fabrication and performance of a log-polar CMOS image sensor. The sensor was developed for use in a videophone system for deaf and hearing-impaired people, who cannot communicate through a 'normal' telephone. The system allows 15 detailed images per second to be transmitted over existing telephone lines. This frame rate is sufficient for conversations by means of sign language or lip reading. The pixel array of the sensor consists of 76 concentric circles with (up to) 128 pixels per circle, 8013 pixels in total. The interior pixels have a pitch of 14 micrometers, growing to 250 micrometers at the border. The 8013-pixel image is mapped (log-polar transformation) into an X-Y-addressable 76 by 128 array.
NASA Astrophysics Data System (ADS)
Maczewski, Lukasz
2010-05-01
The International Linear Collider (ILC) is a project of an electron-positron (e+e-) linear collider with a centre-of-mass energy of 200-500 GeV. Monolithic Active Pixel Sensors (MAPS) are one of the proposed silicon pixel detector concepts for the ILC vertex detector (VTX). Basic characteristics of two MAPS pixel matrices, MIMOSA-5 (17 μm pixel pitch) and MIMOSA-18 (10 μm pixel pitch), are studied and compared (pedestals, noise, calibration of the ADC-to-electron conversion gain, detection efficiency and charge collection properties). The e+e- collisions at the ILC will be accompanied by an intense beamstrahlung background of electrons and positrons hitting the inner planes of the vertex detector. Tracks of this origin leave elongated clusters, unlike those of secondary hadrons. Cluster characteristics and orientation with respect to the pixel grid are studied for perpendicular and inclined tracks. The elongation and the precision of determining the cluster orientation as a function of the angle of incidence were measured. A simple model of signal formation (based on charge diffusion) is proposed and tested using the collected data.
MAPS development for the ALICE ITS upgrade
NASA Astrophysics Data System (ADS)
Yang, P.; Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Gao, C.; Hillemanns, H.; Junique, A.; Kofarago, M.; Keil, M.; Kugathasan, T.; Kim, D.; Kim, J.; Lattuca, A.; Marin Tobon, C. A.; Marras, D.; Mager, M.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Siddhanta, S.; Usai, G.; van Hoorne, J. W.; Yi, J.
2015-03-01
Monolithic Active Pixel Sensors (MAPS) offer the possibility to build pixel detectors and tracking layers with high spatial resolution and low material budget in commercial CMOS processes. Significant progress has been made in the field of MAPS in recent years, and they are now considered for the upgrades of the LHC experiments. This contribution will focus on MAPS detectors developed for the ALICE Inner Tracking System (ITS) upgrade and manufactured in the TowerJazz 180 nm CMOS imaging sensor process on wafers with a high resistivity epitaxial layer. Several sensor chip prototypes have been developed and produced to optimise both charge collection and readout circuitry. The chips have been characterised using electrical measurements, radioactive sources and particle beams. The tests indicate that the sensors satisfy the ALICE requirements and first prototypes with the final size of 1.5 × 3 cm2 have been produced in the first half of 2014. This contribution summarises the characterisation measurements and presents first results from the full-scale chips.
A FPGA-based Cluster Finder for CMOS Monolithic Active Pixel Sensors of the MIMOSA-26 Family
NASA Astrophysics Data System (ADS)
Li, Qiyan; Amar-Youcef, S.; Doering, D.; Deveaux, M.; Fröhlich, I.; Koziel, M.; Krebs, E.; Linnik, B.; Michel, J.; Milanovic, B.; Müntz, C.; Stroth, J.; Tischler, T.
2014-06-01
CMOS Monolithic Active Pixel Sensors (MAPS) have demonstrated excellent performance in the field of charged-particle tracking. Among their strong points are a single-point resolution of a few μm and a light material budget of 0.05% X0, in combination with good radiation tolerance and high rate capability. These features make the sensors a valuable technology for the vertex detectors of various experiments in heavy-ion and particle physics. To reduce the load on the event builders and future mass storage systems, we have developed algorithms suited for preprocessing and reducing the data streams generated by the MAPS. This real-time processing employs the remaining free resources of the FPGAs of the detector's readout controllers and complements the on-chip data reduction circuits of the MAPS.
Tests of monolithic active pixel sensors at national synchrotron light source
NASA Astrophysics Data System (ADS)
Deptuch, G.; Besson, A.; Carini, G. A.; Siddons, D. P.; Szelezniak, M.; Winter, M.
2007-01-01
The paper discusses the basic characterization of Monolithic Active Pixel Sensors (MAPS) carried out at the X12A beamline at the National Synchrotron Light Source (NSLS), Upton, NY, USA. The tested device was a MIMOSA V (MV) chip, back-thinned down to the epitaxial layer. This 1-Mpixel device features a pixel size of 17×17 μm2 and was designed in a 0.6 μm CMOS process. The X-ray beam energies used ranged from 5 to 12 keV. Examples of direct X-ray imaging capabilities are presented.
NASA Astrophysics Data System (ADS)
Bisanz, T.; Große-Knetter, J.; Quadt, A.; Rieger, J.; Weingarten, J.
2017-08-01
The upgrade to the High Luminosity Large Hadron Collider will increase the instantaneous luminosity by more than a factor of 5, thus creating significant challenges for the tracking systems of all experiments. Recent advances in active pixel detectors designed in CMOS processes provide attractive alternatives to the well-established hybrid design using passive sensors, since they allow for smaller pixel sizes and cost-effective production. This article presents studies of a high-voltage CMOS active pixel sensor designed for the ATLAS tracker upgrade. The sensor is glued to the read-out chip of the Insertable B-Layer, forming a capacitively coupled pixel detector. The pixel pitch of the device under test is 33 × 125 μm2, while the pixels of the read-out chip have a pitch of 50 × 250 μm2. Three pixels of the CMOS device are connected to one read-out pixel; the information about which of these subpixels was hit is encoded in the amplitude of the output signal (subpixel encoding). Test beam measurements are presented that demonstrate the usability of this subpixel encoding scheme.
Time-of-flight camera via a single-pixel correlation image sensor
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua
2018-04-01
A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth-map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the ‘four bucket principle’ method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement of the reconstructions.
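The 'four bucket principle' mentioned above is the standard correlation ToF recipe: four samples of the correlation waveform at 0/90/180/270 degree phase offsets give the round-trip phase, and hence the depth. A minimal sketch, assuming a cosine correlation model (the modulation frequency, amplitude and ambient offset below are illustrative, not from the paper):

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def four_bucket_depth(c0, c1, c2, c3, f_mod):
    """Depth from four correlation samples at 0/90/180/270 deg phase offsets."""
    phase = math.atan2(c3 - c1, c0 - c2) % (2 * math.pi)
    return C_LIGHT * phase / (4 * math.pi * f_mod)

# Illustrative check: simulate the four buckets for a known depth.
f_mod = 20e6                 # assumed modulation frequency, Hz
true_depth = 2.0             # metres
phi = 4 * math.pi * f_mod * true_depth / C_LIGHT
A, B = 1.0, 0.5              # modulation amplitude and ambient offset
c0, c1, c2, c3 = (A * math.cos(phi + k * math.pi / 2) + B for k in range(4))
depth = four_bucket_depth(c0, c1, c2, c3, f_mod)
```

Note that the constant ambient offset `B` cancels in the two differences, which is why the scheme tolerates (constant) ambient light.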
Mapping Electrical Crosstalk in Pixelated Sensor Arrays
NASA Technical Reports Server (NTRS)
Seshadri, Suresh (Inventor); Cole, David (Inventor); Smith, Roger M. (Inventor); Hancock, Bruce R. (Inventor)
2017-01-01
The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage and reading out a first image, then resetting only a subset of pixels in the array to a second voltage and reading out a second image; the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.
Mapping Electrical Crosstalk in Pixelated Sensor Arrays
NASA Technical Reports Server (NTRS)
Smith, Roger M (Inventor); Hancock, Bruce R. (Inventor); Cole, David (Inventor); Seshadri, Suresh (Inventor)
2013-01-01
The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage and reading out a first image, then resetting only a subset of pixels in the array to a second voltage and reading out a second image; the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.
Noise characterization of a 512×16 SPAD line sensor for time-resolved spectroscopy applications
NASA Astrophysics Data System (ADS)
Finlayson, Neil; Usai, Andrea; Erdogan, Ahmet T.; Henderson, Robert K.
2018-02-01
Time-resolved spectroscopy in the presence of noise is challenging. We have developed a new 512-pixel line sensor with 16 single-photon avalanche diode (SPAD) detectors per pixel and ultrafast in-pixel time-correlated single photon counting (TCSPC) histogramming for such applications. SPADs are near shot-noise-limited detectors, but we are still faced with the problem of high dark count rate (DCR) SPADs. The noisiest SPADs can be switched off to optimise signal-to-noise ratios (SNR), at the expense of longer acquisition/exposure times than would be possible if more SPADs were exploited. Here we present a detailed noise characterization of our array. We build a DCR map for the sensor and demonstrate the effect of switching off the noisiest SPADs in each pixel. 24% of the SPADs in the array are measured to have a DCR in excess of 1 kHz, while the best SPAD selection per pixel reduces the DCR to 53 ± 7 Hz across the entire array. We demonstrate that selection of the lowest-DCR SPAD in each pixel leads to the emergence of sparse spatial sampling noise in the sensor.
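The per-pixel SPAD screening described above is easy to prototype on a DCR map. The sketch below uses a synthetic 512×16 DCR population (the log-normal parameters are assumptions, not the measured chip data) to illustrate the two policies: picking the single lowest-DCR SPAD per pixel versus disabling everything above a 1 kHz cut.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative DCR population: log-normal body with a hot-SPAD tail.
dcr = rng.lognormal(mean=3.5, sigma=1.8, size=(512, 16))  # Hz per SPAD

hot_fraction = float(np.mean(dcr > 1000.0))   # share of SPADs above 1 kHz
best = dcr.min(axis=1)                        # lowest-DCR SPAD per pixel

# Simple screening policy: disable every SPAD above the 1 kHz cut and
# count how many stay active per pixel (throughput vs. noise trade-off).
enabled = dcr <= 1000.0
active_per_pixel = enabled.sum(axis=1)
```

Selecting only the best SPAD minimises noise but samples each pixel at a single sub-pixel location, which is one way to picture the sparse spatial sampling noise the abstract reports.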
Development of radiation tolerant monolithic active pixel sensors with fast column parallel read-out
NASA Astrophysics Data System (ADS)
Koziel, M.; Dorokhov, A.; Fontaine, J.-C.; De Masi, R.; Winter, M.
2010-12-01
Monolithic active pixel sensors (MAPS) [1] (Turchetta et al., 2001) are being developed at IPHC Strasbourg to equip the EUDET telescope [2] (Haas, 2006) and vertex detectors for future high-energy physics experiments, including the STAR upgrade at RHIC [3] (T.S. Collaboration, 2005) and the CBM experiment at FAIR/GSI [4] (Heuser, 2006). High granularity, low material budget and high readout speed are systematically required for most applications, complemented, for some of them, by high radiation tolerance. A specific column-parallel architecture, implemented in the MIMOSA-22 sensor, was developed to achieve fast-readout MAPS. Previous studies of the front-end architecture integrated in this sensor, which includes in-pixel amplification, have shown that the fixed-pattern-noise increase caused by ionizing radiation can be controlled by means of a negative feedback [5] (Hu-Guo et al., 2008). However, an unexpected rise of the temporal noise was observed. A second version of this chip (MIMOSA-22bis) was produced in order to search for possible improvements of the radiation tolerance regarding this type of noise. In this prototype, the feedback transistor was tuned in order to mitigate the sensitivity of the pixel to ionizing radiation. The performance of the pixels after irradiation was investigated for two types of feedback transistors: the enclosed layout transistor (ELT) [6] (Snoeys et al., 2000) and the "standard" transistor, each with either large or small transconductance. The noise performance of all test structures was studied in various conditions (expected in future experiments) regarding temperature, integration time and ionizing radiation dose. Test results are presented in this paper. Based on these observations, ideas for further improvement of the radiation tolerance of column-parallel MAPS are derived.
Sensing, Spectra and Scaling: What's in Store for Land Observations
NASA Technical Reports Server (NTRS)
Goetz, Alexander F. H.
2001-01-01
Bill Pecora's 1960s vision of the future, using spacecraft-based sensors for mapping the environment and exploring for resources, is being implemented today. New technology has produced better sensors in space, such as the Landsat Thematic Mapper (TM) and SPOT, and creative researchers are continuing to find new applications. However, with existing sensors, and those intended for launch in this century, the potential for extracting information from the land surface is far from being exploited. The most recent technology development is imaging spectrometry, the acquisition of images in hundreds of contiguous spectral bands, such that for any pixel a complete reflectance spectrum can be acquired. Experience with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has shown that, with proper attention paid to absolute calibration, it is possible to acquire apparent surface reflectance to 5% accuracy without any ground-based measurement. The data reduction incorporates an educated guess of the aerosol scattering, development of a precipitable water vapor map from the data, and mapping of cirrus clouds in the 1.38 micrometer band. This is not possible with TM. The pixel size in images of the earth plays an important role in the type and quality of information that can be derived. Less understood is the coupling between spatial and spectral resolution in a sensor. Recent work has shown that in processing the data to derive the relative abundance of materials in a pixel, also known as unmixing, the pixel size is an important parameter. A variance in the relative abundance of materials among the pixels is necessary to be able to derive the endmembers, or pure material constituent spectra. In most cases, the 1 km pixel size of the Earth Observing System Moderate Resolution Imaging Spectroradiometer (MODIS) instrument is too large to meet the variance criterion.
A pointable high spatial and spectral resolution imaging spectrometer in orbit will be necessary to make the major next step in our understanding of the solid earth surface and its changing face.
NASA Astrophysics Data System (ADS)
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limited spatial resolution of imaging sensors and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. First, the fraction value of each class is obtained by spectral unmixing. Second, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the linear subpixel features of the remaining mixed pixels are detected by maximum linearization index analysis. The classes of the linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated in experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
Radiation damage caused by cold neutrons in boron doped CMOS active pixel sensors
NASA Astrophysics Data System (ADS)
Linnik, B.; Bus, T.; Deveaux, M.; Doering, D.; Kudejova, P.; Wagner, F. M.; Yazgili, A.; Stroth, J.
2017-05-01
CMOS Monolithic Active Pixel Sensors (MAPS) are considered an emerging technology in the field of charged-particle tracking. They will be used in the vertex detectors of experiments like STAR, CBM and ALICE, and are considered for the ILC and the tracker of ATLAS. In those applications, the sensors are exposed to sizeable radiation doses. While the tolerance of MAPS to ionizing radiation and fast hadrons is well known, the damage caused by low-energy neutrons had not been studied so far. Those slow neutrons may initiate nuclear fission of the 10B dopants found in the B-doped silicon active medium of MAPS. This effect was expected to create an unknown amount of radiation damage beyond the predictions of the NIEL (Non-Ionizing Energy Loss) model for pure silicon. We estimate the impact of this effect by calculating the additional NIEL created by this fission. Moreover, we show the first measured data for CMOS sensors irradiated with cold neutrons. The empirical results contradict the prediction of the updated NIEL model both qualitatively and quantitatively: the sensors irradiated with slow neutrons show an unexpectedly strong acceptor removal, which is not observed in sensors irradiated with MeV neutrons.
Design and characterization of novel monolithic pixel sensors for the ALICE ITS upgrade
NASA Astrophysics Data System (ADS)
Cavicchioli, C.; Chalmet, P. L.; Giubilato, P.; Hillemanns, H.; Junique, A.; Kugathasan, T.; Mager, M.; Marin Tobon, C. A.; Martinengo, P.; Mattiazzo, S.; Mugnier, H.; Musa, L.; Pantano, D.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Van Hoorne, J. W.; Yang, P.
2014-11-01
Within the R&D activities for the upgrade of the ALICE Inner Tracking System (ITS), Monolithic Active Pixel Sensors (MAPS) are being developed and studied, due to their lower material budget (0.3% X0 in total for each inner layer) and higher granularity (20 μm × 20 μm pixels) with respect to the present pixel detector. This paper presents the design and characterization results of the Explorer0 chip, manufactured in the TowerJazz 180 nm CMOS Imaging Sensor process, based on a wafer with a high-resistivity (ρ > 1 kΩ cm) and 18 μm thick epitaxial layer. The chip is organized in two sub-matrices with different pixel pitches (20 μm and 30 μm), each of them containing several pixel designs. The collection electrode size and shape, as well as the distance between the electrode and the surrounding electronics, are varied; the chip also offers the possibility to decouple the charge integration time from the readout time, and to change the sensor bias. The charge collection properties of the different pixel variants implemented in Explorer0 have been studied using a 55Fe X-ray source and 1-5 GeV/c electrons and positrons. The sensor capacitance has been estimated, and the effect of the sensor bias has also been examined in detail. A second version of the Explorer0 chip (called Explorer1) has been submitted for production in March 2013, together with a novel circuit with in-pixel discrimination and a sparsified readout. Results from these submissions are also presented.
A high efficiency readout architecture for a large matrix of pixels.
NASA Astrophysics Data System (ADS)
Gabrielli, A.; Giorgi, F.; Villa, M.
2010-07-01
In this work we present a fast readout architecture for silicon pixel matrix sensors, designed to sustain very high rates, above 1 MHz/mm2, for matrices larger than 80k pixels. This logic can be implemented within MAPS (Monolithic Active Pixel Sensors), a kind of high-resolution sensor that integrates the sensor matrix and the CMOS readout logic on the same bulk, but it can also be exploited with other technologies. The proposed architecture is based on three main concepts. First, the readout of the hits is performed by activating one column at a time; all the fired pixels on the active column are read, sparsified and reset in parallel in one clock cycle. This implies the use of global signals across the sensor matrix. The consequent reduction of metal interconnections improves the active area while maintaining a high granularity (down to a pixel pitch of 40 μm). Second, the activation for readout takes place only for those columns overlapping with a given fired area, thus reducing the sweeping time of the whole matrix and the pixel dead time. Third, the sparsification (x-y address labeling of the hits) is performed at a lower granularity than single pixels, by addressing vertical zones of 8 pixels each. The fine-grain Y resolution is achieved by appending the zone pattern to the zone address of a hit. We then show the benefits of this technique in the presence of clusters. We describe this architecture at the schematic level and then present the efficiency results obtained from VHDL simulations.
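The zone-based sparsification described above (a zone address plus an 8-bit hit pattern per 8-pixel zone, instead of one full address per hit) can be sketched in a few lines. This is a behavioural sketch of the encoding idea only, not the VHDL design; function names and the data layout are assumptions.

```python
ZONE = 8  # pixels per vertical zone, as in the described architecture

def encode_column(fired_rows):
    """Sparsify one active column: one (zone_address, 8-bit pattern) word
    per zone that contains at least one fired pixel."""
    words = {}
    for r in fired_rows:
        z = r // ZONE
        words[z] = words.get(z, 0) | (1 << (r % ZONE))
    return sorted(words.items())

def decode_column(words):
    """Recover the fired row numbers from the sparsified words."""
    rows = []
    for z, pattern in words:
        for bit in range(ZONE):
            if pattern & (1 << bit):
                rows.append(z * ZONE + bit)
    return rows

# A cluster of three adjacent hits plus one isolated hit in the same column
hits = [3, 5, 6, 42]
words = encode_column(hits)   # two words instead of four addresses
```

The benefit in the presence of clusters is visible here: several adjacent hits in one zone collapse into a single word, so the output bandwidth grows with the number of hit zones, not the number of hit pixels.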
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range-map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor-fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information-fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
NASA Astrophysics Data System (ADS)
Yang, P.; Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Gao, C.; Hillemanns, H.; Junique, A.; Kofarago, M.; Keil, M.; Kugathasan, T.; Kim, D.; Kim, J.; Lattuca, A.; Marin Tobon, C. A.; Marras, D.; Mager, M.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Siddhanta, S.; Usai, G.; van Hoorne, J. W.; Yi, J.
2015-06-01
Active Pixel Sensors used in high-energy particle physics require low power consumption to reduce the detector material budget, a low integration time to reduce the possibility of pile-up, and fast readout to improve the detector data capability. To satisfy these requirements, a novel Address-Encoder and Reset-Decoder (AERD) asynchronous circuit for fast readout of a pixel matrix has been developed. The AERD data-driven readout architecture performs the address encoding and reset decoding based on an arbitration tree, and allows us to read out only the hit pixels. Compared to the traditional rolling-shutter readout structure of Monolithic Active Pixel Sensors (MAPS), AERD can achieve a low readout time and a low power consumption, especially for low hit occupancies. The readout is controlled at the chip periphery with a signal synchronous with the clock, which allows a good separation of digital and analogue signals in the matrix and a reduction of the power consumption. The AERD circuit has been implemented in the TowerJazz 180 nm CMOS Imaging Sensor (CIS) process with full complementary CMOS logic in the pixel. It works at 10 MHz with a matrix height of 15 mm. The energy consumed to read out one pixel is around 72 pJ. A scheme to boost the readout speed to 40 MHz is also discussed. The sensor chip equipped with AERD has been produced and characterised. Test results, including electrical beam measurements, are presented.
Pitch dependence of the tolerance of CMOS monolithic active pixel sensors to non-ionizing radiation
NASA Astrophysics Data System (ADS)
Doering, D.; Deveaux, M.; Domachowski, M.; Fröhlich, I.; Koziel, M.; Müntz, C.; Scharrer, P.; Stroth, J.
2013-12-01
CMOS monolithic active pixel sensors (MAPS) have demonstrated excellent performance as tracking detectors for charged particles. They provide an outstanding spatial resolution (a few μm), a detection efficiency of ≳ 99.9%, a very low material budget (0.05% X0) and good radiation tolerance (≳ 1 Mrad, ≳ 10^13 neq/cm2) (Deveaux et al. [1]). This makes them an interesting technology for various applications in heavy-ion and particle physics. Their tolerance to bulk damage was recently improved by using high-resistivity (∼ 1 kΩ cm) epitaxial layers as the sensitive volume (Deveaux et al. [1], Dorokhov et al. [2]). The radiation tolerance of conventional MAPS is known to depend on the pixel pitch. This is because a larger pitch extends the distance that signal electrons have to travel by thermal diffusion before being collected, and longer diffusion paths translate into a higher probability of losing signal charge to recombination. Provided that a similar effect exists in MAPS with a high-resistivity epitaxial layer, it could be used to extend their radiation tolerance further. We addressed this question with MIMOSA-18AHR prototypes, which were provided by the IPHC Strasbourg and irradiated with reactor neutrons. We report the results of this study and provide evidence that MAPS with 10 μm pixel pitch tolerate doses of ≳ 3 × 10^14 neq/cm2.
Rapid Response Flood Water Mapping
NASA Technical Reports Server (NTRS)
Policelli, Fritz; Brakenridge, G. R.; Coplin, A.; Bunnell, M.; Wu, L.; Habib, Shahid; Farah, H.
2010-01-01
Since the beginning of operation of the MODIS instrument on the NASA Terra satellite at the end of 1999, an exceptionally useful sensor and public data stream have been available for many applications, including the rapid and precise characterization of terrestrial surface water changes. One practical application of this capability is the near-real-time mapping of river flood inundation. We have developed a surface water mapping methodology based on using only bands 1 (620-672 nm) and 2 (841-890 nm). These are the two bands provided at 250 m resolution, and using only these bands maximizes the resulting map detail. Most water bodies are strong absorbers of incoming solar radiation at the band 2 wavelength, so band 2 could be used alone, via a thresholding procedure, to separate water (dark, low-radiance or low-reflectance pixels) from land (much brighter pixels) (1, 2). Some previous water mapping procedures have in fact used such single-band data from this and other sensors with similar wavelength channels. Adding the second channel (band 1), however, allows a band-ratio approach that permits sediment-laden water, often relatively bright at band 2 wavelengths, to still be discriminated, and also removes some error by reducing the number of cloud-shadow pixels that would otherwise be misclassified as water.
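The two-band logic above can be sketched as a per-pixel test: dark NIR pixels are water, and a low NIR/red ratio recovers sediment-laden water that is bright in NIR. The thresholds and reflectance values below are invented placeholders, not the authors' calibrated values.

```python
import numpy as np

def water_mask(band1_red, band2_nir, nir_thresh=0.15, ratio_thresh=0.7):
    """Classify pixels as water when NIR is dark OR the NIR/red ratio is low.
    The ratio term keeps turbid water (bright in NIR) classified as water,
    which a single NIR threshold would miss."""
    b1 = np.asarray(band1_red, dtype=float)
    b2 = np.asarray(band2_nir, dtype=float)
    ratio = b2 / np.maximum(b1, 1e-6)     # guard against division by zero
    return (b2 < nir_thresh) | (ratio < ratio_thresh)

# Three illustrative pixels: clear water, vegetated land, sediment-laden water.
refl_red = np.array([0.05, 0.30, 0.28])
refl_nir = np.array([0.02, 0.35, 0.18])
mask = water_mask(refl_red, refl_nir)
```

The turbid pixel fails the dark-NIR test (0.18 > 0.15) but passes the ratio test (0.18/0.28 ≈ 0.64 < 0.7), which is exactly the case the second band is added for.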
NASA Astrophysics Data System (ADS)
Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben
2015-03-01
A system for dynamic mapping of broadband ultrasound fields has been designed with high-frame-rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent-light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compression as low as 10% was also demonstrated.
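The fully sampled case of such a single-pixel measurement can be sketched with a Sylvester-construction Hadamard matrix: each pattern yields one scalar measurement, and the orthogonality of the matrix inverts the set exactly. This is a minimal numerical sketch only; the pattern scrambling and the compressed-sensing path used in the paper are omitted.

```python
import numpy as np

def sylvester_hadamard(n):
    """Hadamard matrix of order n (a power of two) by Sylvester's construction."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 16
H = sylvester_hadamard(n)
rng = np.random.default_rng(0)
field = rng.normal(size=n)        # acoustic field sampled at n sensor points
measurements = H @ field          # one scalar per pattern ("single pixel")
recovered = (H.T @ measurements) / n   # exact inverse, since H.T @ H = n * I
```

With fewer patterns than unknowns (the sub-10% compression regime quoted above), the inverse is no longer exact and a sparsity-promoting reconstruction takes its place.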
Ballin, Jamie Alexander; Crooks, Jamie Phillip; Dauncey, Paul Dominic; Magnan, Anne-Marie; Mikami, Yoshiari; Miller, Owen Daniel; Noy, Matthew; Rajovic, Vladimir; Stanitzki, Marcel; Stefanov, Konstantin; Turchetta, Renato; Tyndel, Mike; Villani, Enrico Giulio; Watson, Nigel Keith; Wilson, John Allan
2008-09-02
In this paper we present a novel quadruple-well process, called INMAPS, developed in a modern 0.18 μm CMOS technology. On top of the standard process, we have added a deep P implant that can be used to form a deep P-well and provide screening of N-wells from the P-doped epitaxial layer. This prevents the collection of radiation-induced charge by unrelated N-wells, typically ones where PMOS transistors are integrated. The design of a sensor specifically tailored to a particle physics experiment is presented, where each 50 μm pixel has over 150 PMOS and NMOS transistors. The sensor has been fabricated in the INMAPS process, and first experimental evidence of the effectiveness of this process on charge collection is presented, showing a significant improvement in efficiency.
Mapping Electrical Crosstalk in Pixelated Sensor Arrays
NASA Technical Reports Server (NTRS)
Seshadri, S.; Cole, D. M.; Hancock, B. R.; Smith, R. M.
2008-01-01
Electronic coupling effects such as Inter-Pixel Capacitance (IPC) affect the quantitative interpretation of image data from CMOS and hybrid visible and infrared imagers alike. Existing methods of characterizing IPC do not provide a map of the spatial variation of IPC over all pixels. We demonstrate a deterministic method that provides a direct quantitative map of the crosstalk across an imager. The approach requires only the ability to reset single pixels to an arbitrary voltage, different from the rest of the imager. No illumination source is required. Mapping IPC independently for each pixel is also made practical by the greater S/N ratio achievable for an electrical stimulus than for an optical stimulus, which is subject to both Poisson statistics and diffusion effects of photo-generated charge. The data we present illustrate a more complex picture of IPC in Teledyne HgCdTe and HyViSi focal plane arrays than is presently understood, including the presence of a newly discovered, long-range IPC in the HyViSi FPA that extends tens of pixels in distance, likely stemming from extended field effects in the fully depleted substrate. The sensitivity of the measurement approach has been shown to be good enough to distinguish spatial structure in IPC of the order of 0.1%.
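The single-pixel-reset measurement reduces to simple arithmetic: step one pixel by a known voltage, difference the frames, and normalize the neighbours' induced responses by the step. The sketch below uses an invented 1% nearest-neighbour coupling on a synthetic frame; it illustrates the extraction, not the authors' measured data.

```python
import numpy as np

def ipc_kernel(frame_before, frame_after, row, col, dv, radius=1):
    """Coupling kernel around a pixel reset by dv: the neighbourhood of the
    frame difference, normalized by the applied voltage step."""
    resp = frame_after - frame_before
    return resp[row - radius:row + radius + 1,
                col - radius:col + radius + 1] / dv

true_coupling = 0.01                        # assumed nearest-neighbour IPC (1%)
before = np.zeros((5, 5))
after = np.zeros((5, 5))
after[2, 2] = 1.0                           # the reset pixel itself (dv = 1 V)
for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
    after[2 + dr, 2 + dc] = true_coupling   # capacitively induced response
kernel = ipc_kernel(before, after, 2, 2, dv=1.0)
```

Repeating this for every pixel (or a stepped subgrid) yields the per-pixel IPC map the abstract describes; the long-range coupling reported there would show up as nonzero entries well outside a 3x3 kernel.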
A MAPS Based Micro-Vertex Detector for the STAR Experiment
Schambach, Joachim; Anderssen, Eric; Contin, Giacomo; ...
2015-06-18
For the 2014 heavy-ion run of RHIC, a new micro-vertex detector called the Heavy Flavor Tracker (HFT) was installed in the STAR experiment. The HFT consists of three detector subsystems employing various silicon technologies, arranged in 4 approximately concentric cylinders close to the STAR interaction point, designed to improve the STAR detector's vertex resolution and extend its measurement capabilities in the heavy flavor domain. The two innermost HFT layers are placed at radii of 2.8 cm and 8 cm from the beam line. These layers are constructed with 400 high-resolution sensors based on CMOS Monolithic Active Pixel Sensor (MAPS) technology, arranged in 10-sensor ladders mounted on 10 thin carbon fiber sectors to cover a total silicon area of 0.16 m². Each sensor of this PiXeL ("PXL") sub-detector combines a pixel array of 928 rows and 960 columns with a 20.7 μm pixel pitch together with front-end electronics and zero-suppression circuitry in one silicon die, providing a sensitive area of ~3.8 cm². This sensor architecture features a 185.6 μs readout time and 170 mW/cm² power dissipation. The low power dissipation allows the PXL detector to be air-cooled, and with the sensors thinned down to 50 μm results in a global material budget of only 0.4% radiation length per layer. A novel mechanical approach to detector insertion allows us to effectively install and integrate the PXL sub-detector within a 12 hour period during an on-going multi-month data taking period. The detector requirements, architecture and design, as well as the performance after installation, are presented in this paper.
NASA Astrophysics Data System (ADS)
Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.
2014-08-01
The paper addresses a promising visualization concept: the combination of sensor and synthetic images to enhance the situational awareness of a pilot during aircraft landing. A real-time algorithm for fusing a sensor image, acquired by an onboard camera, with a synthetic 3D image of the external view, generated in an onboard computer, is proposed. The pixel correspondence between the sensor and synthetic images is obtained by exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in object space (the runway plane) and then calculate intensity projections of edge pixels along different directions of the intensity gradient. Experiments on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.
Photon counting phosphorescence lifetime imaging with TimepixCam
Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; ...
2017-01-12
TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 x 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.
NASA Astrophysics Data System (ADS)
Ponchut, C.; Cotte, M.; Lozinskaya, A.; Zarubin, A.; Tolbanov, O.; Tyazhev, A.
2017-12-01
In order to meet the needs of some ESRF beamlines for highly efficient 2D X-ray detectors in the 20-50 keV range, GaAs:Cr pixel sensors coupled to TIMEPIX readout chips were implemented into a MAXIPIX detector. GaAs:Cr sensor material is intended to overcome the limitations of Si (low absorption) and of CdTe (fluorescence) in this energy range. The GaAs:Cr sensor assemblies were characterised with both laboratory X-ray sources and monochromatic synchrotron X-ray beams. The sensor response as a function of bias voltage was compared to a theoretical model, leading to an estimate of the μτ product of electrons in the GaAs:Cr sensor material of 1.6×10⁻⁴ cm²/V. The spatial homogeneity of X-ray images obtained with the sensors was measured under different irradiation conditions, showing a particular sensitivity to small variations in the incident beam spectrum. 2D-resolved elemental mapping of the sensor surface was carried out to investigate a possible relation between the noise pattern observed in X-ray images and local fluctuations in chemical composition. A scan of the sensor response at subpixel scale revealed that these irregularities can be correlated with a distortion of the effective pixel shapes.
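A common theoretical model for bias-dependent response in ohmic sensors like GaAs:Cr is the single-carrier Hecht relation; whether the authors used exactly this form is an assumption here, but it shows how a μτ product translates into charge-collection efficiency (CCE). The 500 μm thickness and 300 V bias below are illustrative.

```python
import numpy as np

def hecht_cce(bias_v, mu_tau, thickness_cm):
    """Single-carrier Hecht equation under a uniform field E = V/d:
    CCE = (mu*tau*E/d) * (1 - exp(-d / (mu*tau*E)))."""
    e_field = bias_v / thickness_cm
    lam = mu_tau * e_field / thickness_cm   # carrier drift length in units of d
    return lam * (1.0 - np.exp(-1.0 / lam))

d = 0.05                              # 500 um sensor thickness, in cm (assumed)
cce = hecht_cce(300.0, 1.6e-4, d)     # mu*tau value quoted in the abstract
```

With the quoted μτ, the drift length at 300 V is ~19 sensor thicknesses, so the predicted CCE is close to unity; fitting measured CCE-vs-bias curves to this model is how a μτ estimate is typically extracted.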
Rugged: an operational, open-source solution for Sentinel-2 mapping
NASA Astrophysics Data System (ADS)
Maisonobe, Luc; Seyral, Jean; Prat, Guylaine; Guinet, Jonathan; Espesset, Aude
2015-10-01
When you map the entire Earth every 5 days with the aim of generating high-quality time series over land, there is no room for geometrical error: the algorithms have to be stable, reliable, and precise. Rugged, a new open-source library for pixel geolocation, is at the geometrical heart of the operational processing for Sentinel-2. Rugged performs sensor-to-terrain mapping taking into account ground Digital Elevation Models, Earth rotation with all its small irregularities, the individual lines-of-sight of the on-board sensor pixels, spacecraft motion and attitude, and all significant physical effects. It provides direct and inverse location, i.e. it allows the accurate computation of which ground point is viewed from a specific pixel in a spacecraft instrument, and conversely which pixel will view a specified ground point. Direct and inverse location can be used to perform full ortho-rectification of images and correlation between sensors observing the same area. Implemented as an add-on for Orekit (Orbits Extrapolation KIT; a low-level space dynamics library), Rugged also offers the possibility of simulating satellite motion and attitude auxiliary data using Orekit's full orbit propagation capability. This is a considerable advantage for test data generation and mission simulation activities. Together with the Orfeo ToolBox (OTB) image processing library, Rugged provides the algorithmic core of the Sentinel-2 Instrument Processing Facilities. The S2 complex viewing model - with 12 staggered push-broom detectors and 13 spectral bands - is built using Rugged objects, enabling the computation of rectification grids for mapping between cartographic and focal plane coordinates. These grids are passed to the OTB library for further image resampling, thus completing the ortho-rectification chain.
Sentinel-2 stringent operational requirements to process several terabytes of data per week represented a tough challenge, though one that was well met by Rugged in terms of the robustness and performance of the library.
X-ray metrology of an array of active edge pixel sensors for use at synchrotron light sources
NASA Astrophysics Data System (ADS)
Plackett, R.; Arndt, K.; Bortoletto, D.; Horswell, I.; Lockwood, G.; Shipsey, I.; Tartoni, N.; Williams, S.
2018-01-01
We report on the production and testing of an array of active-edge silicon sensors as a prototype of a large array. Four Medipix3RX.1 chips were bump-bonded to four single-chip-sized Advacam active-edge n-on-n sensors. These detectors were then mounted into a 2-by-2 array and tested on beamline B16 at Diamond Light Source with an X-ray beam spot of 2 μm. The results from these tests, compared with optical metrology, demonstrate that this type of sensor is sensitive out to the physical edge of the silicon, with only a modest loss of efficiency in the final two rows of pixels. We present the efficiency maps recorded with the microfocus beam and a sample powder diffraction measurement. These results give confidence that this sensor technology can be used effectively in larger arrays of detectors at synchrotron light sources.
Valderrama-Landeros, L; Flores-de-Santiago, F; Kovacs, J M; Flores-Verdugo, F
2017-12-14
Optimizing the classification accuracy of a mangrove forest is of utmost importance for conservation practitioners. Mangrove forest mapping using satellite-based remote sensing techniques is by far the most common method of classification currently used, given the logistical difficulties of field endeavors in these forested wetlands. However, there is now an abundance of satellite sensors from which to choose, which has led to substantially different estimates of mangrove forest location and extent, with particular concern for degraded systems. The objective of this study was to assess the accuracy of mangrove forest classification using different remotely sensed data sources (i.e., Landsat-8, SPOT-5, Sentinel-2, and WorldView-2) for a system located along the Pacific coast of Mexico. Specifically, we examined a stressed semiarid mangrove forest which offers a variety of conditions such as dead areas, degraded stands, healthy mangroves, and very dense mangrove island formations. The results indicated that Landsat-8 (30 m per pixel) had the lowest overall accuracy at 64% and that WorldView-2 (1.6 m per pixel) had the highest at 93%. Moreover, the SPOT-5 and Sentinel-2 classifications (10 m per pixel) were very similar, with accuracies of 75 and 78%, respectively. In comparison to WorldView-2, the other sensors overestimated the extent of Laguncularia racemosa and underestimated the extent of Rhizophora mangle. For such sensors, higher spatial resolution can be particularly important in mapping the small mangrove islands that often occur in degraded mangrove systems.
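Overall accuracy figures like the 64-93% quoted above are conventionally computed as the trace of a confusion matrix divided by its total. A minimal sketch, using an invented 3-class matrix rather than the study's data:

```python
import numpy as np

def overall_accuracy(confusion):
    """Fraction of reference pixels whose mapped class matches:
    trace (correctly classified) over the matrix total."""
    c = np.asarray(confusion, dtype=float)
    return np.trace(c) / c.sum()

# rows = reference class, cols = mapped class
# (e.g. dead / degraded / healthy mangrove; counts are illustrative)
conf = np.array([[50,  5,  0],
                 [ 4, 40,  6],
                 [ 1,  4, 90]])
acc = overall_accuracy(conf)   # (50 + 40 + 90) / 200 = 0.90
```

Per-class producer's and user's accuracies (row- and column-normalized diagonals) would expose the over/under-estimation of individual species the abstract mentions, which a single overall figure hides.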
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
NASA Astrophysics Data System (ADS)
Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.
2013-02-01
We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time adaptively into N sub-integration times. In each sub-integration time, the sensor captures an image without saturation and subtracts the background charge to prevent pixel saturation. The subtraction results are accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show strong background suppression. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
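The arithmetic of the sub-integration scheme can be sketched numerically: split the exposure into N windows short enough that no window saturates, subtract the per-window background charge, and accumulate the remainder. All rates and the full-well value below are invented for illustration.

```python
def accumulate(signal_rate, background_rate, total_time, n_sub, full_well):
    """Accumulate background-subtracted charge over n_sub sub-integrations,
    checking that each window stays below the full-well capacity."""
    t = total_time / n_sub
    acc = 0.0
    for _ in range(n_sub):
        q = (signal_rate + background_rate) * t
        if q > full_well:
            raise OverflowError("sub-integration still saturates; increase N")
        acc += q - background_rate * t    # per-window background subtraction
    return acc

# A single long exposure would collect (2 + 8) * 1.0 = 10 charge units,
# saturating a full well of 4; with N = 4 each window holds only 2.5,
# and the background-free signal (2.0 units) survives intact.
signal = accumulate(signal_rate=2.0, background_rate=8.0,
                    total_time=1.0, n_sub=4, full_well=4.0)
```

Choosing N adaptively amounts to picking the smallest N for which the per-window check passes, which keeps read-noise accumulation to a minimum.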
Experience from the construction and operation of the STAR PXL detector
NASA Astrophysics Data System (ADS)
Greiner, L.; Anderssen, E. C.; Contin, G.; Schambach, J.; Silber, J.; Stezelberger, T.; Sun, X.; Szelezniak, M.; Vu, C.; Wieman, H. H.; Woodmansee, S.
2015-04-01
A new silicon-based vertex detector called the Heavy Flavor Tracker (HFT) was installed at the Solenoidal Tracker At RHIC (STAR) experiment for the Relativistic Heavy Ion Collider (RHIC) 2014 heavy-ion run to improve the vertex resolution and extend the measurement capabilities of STAR in the heavy flavor domain. The HFT consists of four concentric cylinders around the STAR interaction point composed of three different silicon detector technologies based on strips, pads and, for the first time in an accelerator experiment, CMOS monolithic active pixels (MAPS). The two innermost layers, at radii of 2.8 cm and 8 cm from the beam line, are constructed with 400 high-resolution MAPS sensors arranged in 10-sensor ladders mounted on 10 thin carbon fiber sectors, giving a total silicon area of 0.16 m². Each sensor consists of a pixel array of nearly 1 million pixels with a pitch of 20.7 μm, with column-level discriminators, zero-suppression circuitry and output buffer memory integrated into one silicon die with a sensitive area of ~3.8 cm². The pixel (PXL) detector has a low power dissipation of 170 mW/cm², which allows air cooling. This results in a global material budget of 0.5% radiation length per layer for the detector used in this run. A novel mechanical approach to detector insertion allows for the installation and integration of the pixel sub-detector within a 12 hour period during an on-going STAR run. The detector specifications, experience from the construction and operation, lessons learned and initial measurements of the PXL performance in the 200 GeV Au-Au run will be presented.
Hyperspectral Imaging Sensors and the Marine Coastal Zone
NASA Technical Reports Server (NTRS)
Richardson, Laurie L.
2000-01-01
Hyperspectral imaging sensors greatly expand the potential of remote sensing to assess, map, and monitor marine coastal zones. Each pixel in a hyperspectral image contains an entire spectrum of information. As a result, hyperspectral image data can be processed in two very different ways: by image classification techniques, to produce mapped outputs of features in the image on a regional scale; or by spectral analysis of the data embedded within each pixel of the image. The latter is particularly useful in marine coastal zones because of the spectral complexity of suspended as well as benthic features found in these environments. Spectral-based analysis of hyperspectral (AVIRIS) imagery was carried out to investigate a marine coastal zone of South Florida, USA. Florida Bay is a phytoplankton-rich estuary characterized by taxonomically distinct phytoplankton assemblages and extensive seagrass beds. End-member spectra were extracted from AVIRIS image data corresponding to ground-truth sample stations and well-known field sites. Spectral libraries were constructed from the AVIRIS end-member spectra and used to classify images using the Spectral Angle Mapper (SAM) algorithm, a spectral-based approach that compares the spectrum in each pixel of an image with each spectrum in a spectral library. Using this approach, different phytoplankton assemblages containing diatoms, cyanobacteria, and green microalgae, as well as benthic communities (seagrasses), were mapped.
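The SAM comparison described above is just the angle between a pixel spectrum and each library spectrum, with the smallest angle winning. A minimal sketch with a two-entry library of invented three-band spectra (real AVIRIS spectra have ~224 bands):

```python
import numpy as np

def sam_classify(pixel, library):
    """Assign the library label whose end-member spectrum makes the smallest
    spectral angle arccos(<p, r> / (|p||r|)) with the pixel spectrum."""
    p = np.asarray(pixel, dtype=float)
    angles = []
    for name, ref in library.items():
        r = np.asarray(ref, dtype=float)
        cos = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
        angles.append((np.arccos(np.clip(cos, -1.0, 1.0)), name))
    return min(angles)[1]

# Hypothetical end-member library; values are illustrative, not AVIRIS data.
library = {"diatoms":  [0.10, 0.30, 0.20],
           "seagrass": [0.05, 0.15, 0.40]}
label = sam_classify([0.11, 0.28, 0.22], library)
```

Because the angle ignores vector magnitude, SAM is insensitive to overall brightness differences (e.g. illumination or depth effects), which is one reason it suits optically complex coastal water.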
NASA Technical Reports Server (NTRS)
Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.
2013-01-01
Remote sensing methods used to generate base maps for analyzing the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within each pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process.
The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification steps. Within this chapter, each of the four approaches is described in terms of its scale and accuracy in classifying urban land use and land cover, and in terms of its range of urban applications. We give an overview of the four main classification groups in Figure 1, while Table 1 details the approaches with respect to classification requirements and procedures (e.g., reflectance conversion, steps before training sample selection, training samples, spatial approaches commonly used, classifiers, primary inputs for classification, output structures, number of output layers, and accuracy assessment). The chapter concludes with a brief summary of the methods reviewed and the challenges that remain in developing new classification methods for improving the efficiency and accuracy of mapping urban areas.
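The simplest per-pixel calculation mentioned above, NDVI, can be sketched band-wise in a few lines; the reflectance values below are invented for illustration:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, computed per pixel:
    (NIR - red) / (NIR + red), in [-1, 1]."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

# Two illustrative pixels: vegetated, then impervious urban surface.
red = np.array([0.10, 0.30])
nir = np.array([0.50, 0.32])
index = ndvi(red, nir)
```

Vegetation reflects strongly in NIR relative to red (high NDVI), while concrete and asphalt have nearly flat spectra (NDVI near zero), which is why even this one-line per-pixel index separates green space from built surfaces.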
High dynamic range vision sensor for automotive applications
NASA Astrophysics Data System (ADS)
Grenet, Eric; Gyger, Steve; Heim, Pascal; Heitger, Friedrich; Kaess, Francois; Nussbaum, Pascal; Ruedi, Pierre-Francois
2005-02-01
A 128 x 128 pixel, 120 dB vision sensor that extracts, at the pixel level, the contrast magnitude and direction of local image features is used to implement a lane tracking system. The contrast representation (relative change of illumination) delivered by the sensor is independent of the illumination level. Together with the high dynamic range of the sensor, it ensures a very stable image feature representation even with high spatial and temporal inhomogeneities of the illumination. Image features are dispatched off chip according to contrast magnitude, prioritizing features with high contrast. This drastically reduces the amount of data transmitted out of the chip, and hence the processing power required for subsequent processing stages. To compensate for the low fill factor (9%) of the sensor, micro-lenses have been deposited, which increase the sensitivity by a factor of 5, corresponding to an equivalent of 2000 ASA. An algorithm exploiting the contrast representation output by the vision sensor has been developed to estimate the position of a vehicle relative to the road markings. The algorithm first detects the road markings based on the contrast direction map. It then performs quadratic fits on selected 3-by-3 pixel kernels to achieve sub-pixel accuracy in the estimation of the lane marking positions. The resulting precision of the vehicle lateral position estimate is 1 cm. The algorithm performs efficiently under a wide variety of environmental conditions, including night and rain.
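Quadratic-fit sub-pixel localisation of the kind described above can be sketched in 1-D (the 3x3 kernel case applies the same idea along each axis): fit a parabola through three samples and take its vertex as the refined position. This is a generic textbook formula, assumed here to stand in for the authors' exact fit.

```python
def subpixel_peak(y_minus, y0, y_plus):
    """Vertex offset, in pixels relative to the centre sample, of the parabola
    through (-1, y_minus), (0, y0), (+1, y_plus)."""
    denom = y_minus - 2.0 * y0 + y_plus      # second difference (curvature)
    return 0.5 * (y_minus - y_plus) / denom

# A parabola peaking at x = 0.3, sampled only at integer pixels x = -1, 0, +1:
true_peak = 0.3
samples = [-(x - true_peak) ** 2 for x in (-1, 0, 1)]
offset = subpixel_peak(*samples)             # recovers 0.3 exactly
```

For an exactly quadratic profile the recovery is exact, as here; on real edge responses the residual bias is small as long as the profile is locally well approximated by a parabola, which is what makes centimetre-level lateral precision plausible from coarse pixels.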
NASA Astrophysics Data System (ADS)
Di, K.; Liu, Y.; Liu, B.; Peng, M.
2012-07-01
Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of the landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points differ from the measured points. In order to reduce these inconsistencies and improve precision, we propose two methods to refine the rigorous sensor model: 1) refining the exterior orientation parameters (EOPs) by correcting the attitude angle bias, and 2) refining the interior orientation model by calibrating the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1, and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high-precision DEMs (Digital Elevation Models) and DOMs (Digital Ortho Maps) are automatically generated.
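The residual being minimized above is the pixel distance between a measured image point and the back-projection of the corresponding ground point. As a hedged illustration only, the sketch below uses a generic pinhole camera as a stand-in for the CE-1/CE-2 push-broom model, with invented focal length and coordinates:

```python
import numpy as np

def back_project(point_cam, focal_px, cx, cy):
    """Project a 3-D point in camera coordinates (x, y, z) to pixel
    coordinates under a simple pinhole model (stand-in, not push-broom)."""
    x, y, z = point_cam
    return np.array([cx + focal_px * x / z,
                     cy + focal_px * y / z])

measured = np.array([502.9, 383.7])                  # measured image point (invented)
predicted = back_project((0.012, -0.004, 100.0),     # ground point in camera frame
                         focal_px=8000.0, cx=502.0, cy=384.0)
residual_px = np.linalg.norm(predicted - measured)   # back-projection residual
```

Refining orientation parameters (as in methods 1 and 2 of the abstract) amounts to adjusting the model so that such residuals, averaged over many conjugate points, shrink toward the sub-pixel level.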
NASA Astrophysics Data System (ADS)
Mattiazzo, S.; Aimo, I.; Baudot, J.; Bedda, C.; La Rocca, P.; Perez, A.; Riggi, F.; Spiriti, E.
2015-10-01
The ALICE experiment at CERN will undergo a major upgrade in the second long LHC shutdown in the years 2018-2019; this upgrade includes the full replacement of the Inner Tracking System (ITS), deploying seven layers of Monolithic Active Pixel Sensors (MAPS). For the development of the new ALICE ITS, the Tower-Jazz 0.18 μm CMOS imaging sensor process has been chosen, as it allows full CMOS in the pixel and different silicon wafers (including high-resistivity epitaxial layers). A large test campaign has been carried out on several small prototype chips, designed to optimize the pixel sensor layout and the front-end electronics. Results match the target requirements both in terms of performance and of radiation hardness. Following this development, the first full-scale chips have been designed and submitted, and are currently under test, with promising results. A telescope composed of 4 planes of Mimosa-28 and 2 planes of Mimosa-18 chips is under development at the DAFNE Beam Test Facility (BTF) at the INFN Laboratori Nazionali di Frascati (LNF) in Italy, with the final goal of performing a comparative test of the full-scale prototypes. The telescope has recently been used to test a Mimosa-22THRb chip (a monolithic pixel sensor built in the 0.18 μm Tower-Jazz process), and we plan to perform tests of the full-scale chips for the ALICE ITS upgrade at the beginning of 2015. In this contribution we describe first measurements of the spatial resolution, fake hit rate and detection efficiency of the Mimosa-22THRb chip obtained at the BTF facility in June 2014 with an electron beam of 500 MeV.
An EUDET/AIDA Pixel Beam Telescope for Detector Development
NASA Astrophysics Data System (ADS)
Rubinskiy, I.; EUDET Consortium; AIDA Consortium
A high-resolution (σ < 2 μm) beam telescope based on monolithic active pixel sensors (MAPS) was developed within the EUDET collaboration. EUDET was a coordinated detector R&D programme for the future International Linear Collider, providing test beam infrastructure to detector R&D groups. The telescope consists of six sensor planes with a pixel pitch of either 18.4 μm or 10 μm and can be operated inside a solenoidal magnetic field of up to 1.2 T. General-purpose cooling, positioning, data acquisition (DAQ) and offline data analysis tools are available for the users. The excellent resolution, readout rate and DAQ integration capabilities made the telescope a primary beam test tool also for several CERN-based experiments. In this report the performance of the final telescope is presented. The plans for an even more flexible telescope with three different pixel technologies (ATLAS Pixel, Mimosa, Timepix) within the new European detector infrastructure project AIDA are presented.
Assessment and Prediction of Natural Hazards from Satellite Imagery
Gillespie, Thomas W.; Chu, Jasmine; Frankenberg, Elizabeth; Thomas, Duncan
2013-01-01
Since 2000, there have been a number of spaceborne satellites that have changed the way we assess and predict natural hazards. These satellites are able to quantify physical geographic phenomena associated with the movements of the earth’s surface (earthquakes, mass movements), water (floods, tsunamis, storms), and fire (wildfires). Most of these satellites contain active or passive sensors that can be utilized by the scientific community for the remote sensing of natural hazards over a number of spatial and temporal scales. The most useful satellite imagery for the assessment of earthquake damage comes from high-resolution (0.6 m to 1 m pixel size) passive sensors and moderate resolution active sensors that can quantify the vertical and horizontal movement of the earth’s surface. High-resolution passive sensors have been used to successfully assess flood damage while predictive maps of flood vulnerability areas are possible based on physical variables collected from passive and active sensors. Recent moderate resolution sensors are able to provide near real time data on fires and provide quantitative data used in fire behavior models. Limitations currently exist due to atmospheric interference, pixel resolution, and revisit times. However, a number of new microsatellites and constellations of satellites will be launched in the next five years that contain increased resolution (0.5 m to 1 m pixel resolution for active sensors) and revisit times (daily ≤ 2.5 m resolution images from passive sensors) that will significantly improve our ability to assess and predict natural hazards from space. PMID:25170186
Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping
NASA Astrophysics Data System (ADS)
Rehak, M.; Skaloud, J.
2015-08-01
In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that together with a pre-calibrated camera enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and digital terrain model. We show that while it is possible to achieve pixel (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for block configuration, the precise position and attitude control is required for corridor mapping.
Land cover mapping at sub-pixel scales
NASA Astrophysics Data System (ADS)
Makido, Yasuyo Kato
One of the biggest drawbacks of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial detail depicted in an image. Fine spatial resolution images from satellite sensors such as IKONOS and QuickBird are now available. However, these images are not suitable for large-area studies, since a single image covers only a small area, making such studies costly. Much research has focused on attempting to extract land cover types at sub-pixel scale, but little research has been conducted concerning the spatial allocation of land cover types within a pixel. This study is devoted to the development of new algorithms for predicting land cover distribution from remote sensing imagery at the sub-pixel level. The "pixel-swapping" optimization algorithm, proposed by Atkinson for predicting sub-pixel land cover distribution, is investigated in this study. Two limitations of this method, the arbitrary spatial range value and the arbitrary exponential model of spatial autocorrelation, are assessed. Various weighting functions, as alternatives to the exponential model, are evaluated in order to derive the optimum weighting function. Two different simulation models were employed to develop spatially autocorrelated binary class maps. In all tested models (Gaussian, exponential, and IDW), the pixel-swapping method improved classification accuracy compared with the initial random allocation of sub-pixels. However, the results suggested that equal weights could be used to increase accuracy and sub-pixel spatial autocorrelation instead of these more complex models of spatial structure. New algorithms for modeling the spatial distribution of multiple land cover classes at sub-pixel scales are developed and evaluated. Three methods are examined: sequential categorical swapping, simultaneous categorical swapping, and simulated annealing.
These three methods are applied to classified Landsat ETM+ data that has been resampled to 210 meters. The results suggested that the simultaneous method can be considered the optimum method in terms of accuracy and computation time. The case study employs remote sensing imagery at the following sites: tropical forests in Brazil and a temperate mosaic of multiple land cover types in East China. Sub-areas of both sites are used to examine how the characteristics of the landscape affect the performance of the optimum technique. Three landscape metrics, Moran's I, mean patch size (MPS), and patch size standard deviation (STDEV), are used to characterize the landscape. All results suggested that this technique can increase classification accuracy more than traditional hard classification. The methods developed in this study can benefit researchers who employ coarse remote sensing imagery but are interested in detailed landscape information. In many cases, a satellite sensor that provides large spatial coverage has insufficient spatial detail to identify landscape patterns. Application of the super-resolution technique described in this dissertation could potentially solve this problem by providing detailed land cover predictions from coarse-resolution satellite sensor imagery.
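The single-class pixel-swapping optimisation investigated above can be sketched in a few lines. This is a hedged illustration rather than Atkinson's exact formulation: the Chebyshev neighbourhood radius, the inverse-distance weighting (one of the weighting functions the study evaluates), and the swap-until-converged stopping rule are all assumptions here.

```python
import numpy as np

def neighbour_score(grid, i, j, radius=2):
    """Inverse-distance-weighted count of class-1 neighbours of cell (i, j)."""
    h, w = grid.shape
    score = 0.0
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            ni, nj = i + di, j + dj
            if (di or dj) and 0 <= ni < h and 0 <= nj < w:
                score += grid[ni, nj] / (di * di + dj * dj) ** 0.5
    return score

def pixel_swap(grid, max_iter=100):
    """Swap the least 'attractive' 1 with the most attractive 0 until no swap
    increases clustering; class proportions are preserved by construction."""
    grid = grid.copy()
    if grid.sum() in (0, grid.size):
        return grid  # nothing to swap
    for _ in range(max_iter):
        s = np.array([[neighbour_score(grid, i, j)
                       for j in range(grid.shape[1])] for i in range(grid.shape[0])])
        ones, zeros = np.argwhere(grid == 1), np.argwhere(grid == 0)
        lo = ones[np.argmin(s[tuple(ones.T)])]    # least attractive 1
        hi = zeros[np.argmax(s[tuple(zeros.T)])]  # most attractive 0
        if s[tuple(hi)] <= s[tuple(lo)]:
            break  # converged: no swap raises spatial autocorrelation
        grid[tuple(lo)], grid[tuple(hi)] = 0, 1
    return grid
```

On a toy map, an isolated sub-pixel migrates into the hole of a nearby cluster while the class proportion stays fixed, which is exactly the behaviour the optimisation relies on.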
ALPIDE: the Monolithic Active Pixel Sensor for the ALICE ITS upgrade
NASA Astrophysics Data System (ADS)
Šuljić, M.
2016-11-01
The upgrade of the ALICE vertex detector, the Inner Tracking System (ITS), is scheduled to be installed during the next long shutdown period (2019-2020) of the CERN Large Hadron Collider (LHC). The current ITS will be replaced by seven concentric layers of Monolithic Active Pixel Sensors (MAPS) with a total active surface of ~10 m², making ALICE the first LHC experiment to implement MAPS detector technology on a large scale. The ALPIDE chip, based on the TowerJazz 180 nm CMOS Imaging Process, is being developed for this purpose. A particular process feature, the deep p-well, is exploited so that full CMOS logic can be implemented over the active sensor area without degrading the collection of the deposited charge. ALPIDE is implemented on silicon wafers with a high resistivity epitaxial layer. A single chip measures 15 mm by 30 mm and contains half a million pixels distributed in 512 rows and 1024 columns. In-pixel circuitry features amplification, shaping, discrimination and multi-event buffering. The readout is hit driven, i.e. only addresses of hit pixels are sent to the periphery. The upgrade of the ITS presents two different sets of requirements for sensors of the inner and of the outer layers due to the significantly different track density, radiation level and active detector surface. The ALPIDE chip fulfils the stringent requirements in both cases. The detection efficiency is higher than 99%, the fake-hit probability is orders of magnitude lower than the required 10⁻⁶ and the spatial resolution is within the required 5 μm. This performance is maintained even after a total ionising dose (TID) of 2.7 Mrad and a non-ionising energy loss (NIEL) fluence of 1.7 × 10¹³ 1 MeV neq/cm², which is above what is expected during the detector lifetime. A readout rate of 100 kHz is provided and the power density of ALPIDE is less than 40 mW/cm². This contribution provides a summary of the ALPIDE features and main test results.
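The hit-driven readout described above (only addresses of hit pixels leave the matrix) can be illustrated with a minimal sketch. This is not ALPIDE's actual data format; the row-major address packing and word layout below are assumptions for illustration only.

```python
import numpy as np

def hit_driven_readout(frame):
    """Zero-suppressed readout sketch: ship only the (row, col) addresses of
    hit pixels instead of the full 512 x 1024 binary matrix."""
    rows, cols = np.nonzero(frame)
    return list(zip(rows.tolist(), cols.tolist()))

def pack_addresses(hits, n_cols=1024):
    """Pack each hit into one integer word (row-major); a plausible encoding,
    not ALPIDE's real one. 512 x 1024 pixels fit in 19 bits per hit."""
    return [r * n_cols + c for r, c in hits]
```

For a sparse frame the cost scales with the number of hits rather than the number of pixels, which is the point of hit-driven readout.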
NASA Astrophysics Data System (ADS)
Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith
2017-02-01
The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from that of CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output generally passes through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform across all pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor therefore includes pixel-to-pixel variability in read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination, from dark conditions through multiple low light levels between 20 and 1,000 photons per pixel per frame to higher light conditions. We further show that using pixel variance for flat-field correction leads to errors in cameras with good factory calibration.
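The per-pixel gain and offset maps discussed above are commonly estimated with the photon-transfer relation: for shot-noise-limited data, the temporal variance of a pixel (in DN²) equals the conversion gain times its offset-corrected mean. A minimal sketch, with function names and the simplified noise model (no read noise term) assumed here:

```python
import numpy as np

def per_pixel_gain(frames, offset=0.0):
    """Photon-transfer estimate of per-pixel conversion gain (DN per e-):
    for shot-noise-limited data, temporal variance = gain * (mean - offset)."""
    mean = frames.mean(axis=0) - offset      # offset-corrected mean signal
    var = frames.var(axis=0, ddof=1)         # temporal variance per pixel
    return var / mean

def flat_field(raw, offset_map, gain_map):
    """Per-pixel offset/gain correction; returns the signal in electrons."""
    return (raw - offset_map) / gain_map
```

A real sCMOS calibration would also subtract read-noise variance and use many illumination levels; this sketch only shows why per-pixel (rather than global) maps are needed.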
A 176×144 148dB adaptive tone-mapping imager
NASA Astrophysics Data System (ADS)
Vargas-Sierra, S.; Liñán-Cembrano, G.; Rodríguez-Vázquez, A.
2012-03-01
This paper presents a 176×144 (QCIF) HDR image sensor where visual information is simultaneously captured and adaptively compressed by means of an in-pixel tone mapping scheme. The tone mapping curve (TMC) is calculated from the histogram of a time stamp image captured in the previous frame, which serves as a probability indicator of the distribution of illuminations within the present frame. The chip produces 7-bit/pixel images that can map illuminations from 311 μlux to 55.3 klux in a single frame in such a way that each pixel decides when to stop its photocurrent integration, with extreme values captured at 8 s and 2.34 μs respectively. Pixel size is 33×33 μm², which includes a 3×3 μm² Nwell-Psubstrate photodiode and an autozeroing technique for establishing the reset voltage, which cancels most of the offset contributions created by the analog processing circuitry. Dark signal (10.8 mV/s) effects in the final image are attenuated by automatic programming of the DAC top voltage. Measured characteristics are: sensitivity 5.79 V/(lux·s), FWC 12.2 ke⁻, conversion factor 129 e⁻/DN, and read noise 25 e⁻. The chip has been designed in the 0.35 μm OPTO technology from austriamicrosystems (AMS). Due to the focal plane operation, this architecture is especially well suited to implementation in a 3D (vertical stacking) technology using per-pixel TSVs.
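A histogram-derived tone-mapping curve of the kind described above can be sketched as histogram equalisation: 7-bit output codes are allocated in proportion to how often each illumination range occurred in the previous frame. The bin count and interpolation details below are assumptions, not the chip's actual TMC computation.

```python
import numpy as np

def tone_mapping_curve(prev_frame, n_codes=128, n_bins=1024):
    """Build a 7-bit tone-mapping curve from the previous frame's histogram:
    the cumulative distribution maps each illumination bin to an output code,
    so heavily populated illumination ranges receive more codes."""
    hist, edges = np.histogram(prev_frame, bins=n_bins)
    cdf = np.cumsum(hist) / hist.sum()
    codes = np.round(cdf * (n_codes - 1)).astype(int)

    def apply(frame):
        idx = np.searchsorted(edges, frame, side="right") - 1
        return codes[np.clip(idx, 0, n_bins - 1)]

    return apply
```

Because the curve is rebuilt every frame from the previous histogram, the mapping adapts to the scene, which is the "adaptive" part of the compression scheme.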
New false color mapping for image fusion
NASA Astrophysics Data System (ADS)
Toet, Alexander; Walraven, Jan
1996-03-01
A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image- processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
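The four steps described above map directly to a few array operations. A minimal sketch, with one explicit assumption: the "common component" is taken here as the pixelwise minimum of the two inputs, a common choice that the paper's exact operator may differ from.

```python
import numpy as np

def fuse_false_color(thermal, visual):
    """Toet-style false-colour fusion sketch for float images in [0, 255].
    Assumption: the common component is the pixelwise minimum."""
    common = np.minimum(thermal, visual)          # step 1: common component
    uniq_t = thermal - common                     # step 2: unique components
    uniq_v = visual - common
    red = np.clip(thermal - uniq_v, 0, 255)       # step 3: cross-subtraction
    green = np.clip(visual - uniq_t, 0, 255)      #         enhances modality-specific detail
    blue = np.zeros_like(red)
    return np.stack([red, green, blue], axis=-1)  # step 4: display via R and G channels
```

Where the two modalities agree, red and green are equal and the pixel renders yellow-grey; where one modality dominates, its channel is boosted and the other suppressed, which is the intended sensor-specific enhancement.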
Origami silicon optoelectronics for hemispherical electronic eye systems.
Zhang, Kan; Jung, Yei Hwan; Mikael, Solomon; Seo, Jung-Hun; Kim, Munho; Mi, Hongyi; Zhou, Han; Xia, Zhenyang; Zhou, Weidong; Gong, Shaoqin; Ma, Zhenqiang
2017-11-24
Digital image sensors in hemispherical geometries offer unique imaging advantages over their planar counterparts, such as wide field of view and low aberrations. Deforming miniature semiconductor-based sensors with high-spatial resolution into such format is challenging. Here we report a simple origami approach for fabricating single-crystalline silicon-based focal plane arrays and artificial compound eyes that have hemisphere-like structures. Convex isogonal polyhedral concepts allow certain combinations of polygons to fold into spherical formats. Using each polygon block as a sensor pixel, the silicon-based devices are shaped into maps of truncated icosahedron and fabricated on flexible sheets and further folded either into a concave or convex hemisphere. These two electronic eye prototypes represent simple and low-cost methods as well as flexible optimization parameters in terms of pixel density and design. Results demonstrated in this work combined with miniature size and simplicity of the design establish practical technology for integration with conventional electronic devices.
Where can pixel counting area estimates meet user-defined accuracy requirements?
NASA Astrophysics Data System (ADS)
Waldner, François; Defourny, Pierre
2017-08-01
Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the estimates are below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.
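The bias mechanism described above can be made concrete with a simple error model (assumed here for illustration): a fraction of true-class pixels is omitted while a fraction of the remaining pixels is wrongly committed to the class. The estimator is unbiased only when the two error flows cancel.

```python
def pixel_count_estimate(true_pixels, total_pixels, omission, commission):
    """Expected pixel-counting area estimate under a simple asymmetric error
    model: `omission` is the fraction of true-class pixels missed, and
    `commission` the fraction of other pixels wrongly labelled as the class.
    Returns the expected estimate and its relative bias."""
    est = true_pixels * (1 - omission) + (total_pixels - true_pixels) * commission
    bias = (est - true_pixels) / true_pixels
    return est, bias
```

With equal error rates the estimate is unbiased at 50% cover, but the same rates on a rare class (fragmented landscapes, coarse pixels) inflate the area badly, which is why the framework ties the tolerable classifier bias to fragmentation and resolution.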
Event-Based Tone Mapping for Asynchronous Time-Based Image Sensor
Simon Chane, Camille; Ieng, Sio-Hoi; Posch, Christoph; Benosman, Ryad B.
2016-01-01
The asynchronous time-based neuromorphic image sensor ATIS is an array of autonomously operating pixels able to encode luminance information with an exceptionally high dynamic range (>143 dB). This paper introduces an event-based methodology to display data from this type of event-based imagers, taking into account the large dynamic range and high temporal accuracy that go beyond available mainstream display technologies. We introduce an event-based tone mapping methodology for asynchronously acquired time encoded gray-level data. A global and a local tone mapping operator are proposed. Both are designed to operate on a stream of incoming events rather than on time frame windows. Experimental results on real outdoor scenes are presented to evaluate the performance of the tone mapping operators in terms of quality, temporal stability, adaptation capability, and computational time. PMID:27642275
Huang, Xiwei; Yu, Hao; Liu, Xu; Jiang, Yu; Yan, Mei; Wu, Dongping
2015-09-01
The existing ISFET-based DNA sequencing detects hydrogen ions released during the polymerization of DNA strands on microbeads, which are scattered into a microwell array above the ISFET sensor with unknown distribution. However, false pH detection occurs at empty microwells due to crosstalk from neighboring microbeads. In this paper, a dual-mode CMOS ISFET sensor is proposed for accurate pH detection toward DNA sequencing. Dual-mode sensing, optical and chemical, is realized by integrating a CMOS image sensor (CIS) with the ISFET pH sensor, fabricated in a standard 0.18-μm CIS process. By accurately determining microbead physical locations with the CIS pixels through contact imaging, the dual-mode sensor can correlate the local pH of one DNA slice with one location-determined microbead, improving pH detection accuracy. Moreover, toward high-throughput DNA sequencing, a correlated-double-sampling readout that supports large arrays in both modes is deployed to reduce pixel-to-pixel nonuniformity such as threshold voltage mismatch. The proposed CMOS dual-mode sensor is experimentally shown to produce a well-correlated pH map and optical image for microbeads, with a pH sensitivity of 26.2 mV/pH, a fixed pattern noise (FPN) reduction from 4% to 0.3%, and a readout speed of 1200 frames/s. A dual-mode CMOS ISFET sensor with suppressed FPN for accurate large-arrayed pH sensing is thus demonstrated with state-of-the-art measured results, and has great potential for future personal genome diagnostics with high accuracy and low cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schambach, Joachim; Anderssen, Eric; Contin, Giacomo
For the 2014 heavy ion run of RHIC, a new micro-vertex detector called the Heavy Flavor Tracker (HFT) was installed in the STAR experiment. The HFT consists of three detector subsystems with various silicon technologies arranged in 4 approximately concentric cylinders close to the STAR interaction point, designed to improve the STAR detector's vertex resolution and extend its measurement capabilities in the heavy flavor domain. The two innermost HFT layers are placed at radii of 2.8 cm and 8 cm from the beam line. These layers are constructed with 400 high resolution sensors based on CMOS Monolithic Active Pixel Sensor (MAPS) technology arranged in 10-sensor ladders mounted on 10 thin carbon fiber sectors to cover a total silicon area of 0.16 m². Each sensor of this PiXeL ("PXL") sub-detector combines a pixel array of 928 rows and 960 columns with a 20.7 μm pixel pitch together with front-end electronics and zero-suppression circuitry in one silicon die, providing a sensitive area of ~3.8 cm². This sensor architecture features a 185.6 μs readout time and 170 mW/cm² power dissipation. This low power dissipation allows the PXL detector to be air-cooled, and with the sensors thinned down to 50 μm results in a global material budget of only 0.4% radiation length per layer. A novel mechanical approach to detector insertion allows us to effectively install and integrate the PXL sub-detector within a 12 hour period during an on-going multi-month data taking period. The detector requirements, architecture and design, as well as the performance after installation, are presented in this paper.
Reconstruction of Sky Illumination Domes from Ground-Based Panoramas
NASA Astrophysics Data System (ADS)
Coubard, F.; Lelégard, L.; Brédif, M.; Paparoditis, N.; Briottet, X.
2012-07-01
The knowledge of the sky illumination is important for radiometric corrections and for computer graphics applications such as relighting or augmented reality. We propose an approach to compute environment maps, representing the sky radiance, from a set of ground-based images acquired by a panoramic acquisition system, for instance a mobile-mapping system. These images can be affected by important radiometric artifacts, such as bloom or overexposure. A Perez radiance model is estimated with the blue sky pixels of the images, and used to compute additive corrections in order to reduce these radiometric artifacts. The sky pixels are then aggregated in an environment map, which still suffers from discontinuities on stitching edges. The influence of the quality of estimated sky radiance on the simulated light signal is measured quantitatively on a simple synthetic urban scene; in our case, the maximal error for the total sensor radiance is about 10%.
Sub-pixel Area Calculation Methods for Estimating Irrigated Areas.
Thenkabailc, Prasad S; Biradar, Chandrashekar M; Noojipady, Praveen; Cai, Xueliang; Dheeravath, Venkateswarlu; Li, Yuanjie; Velpuri, Manohar; Gumma, Muralikrishna; Pandey, Suraj
2007-10-31
The goal of this paper was to develop and demonstrate practical methods for computing sub-pixel areas (SPAs) from coarse-resolution satellite sensor data. The methods were tested and verified using: (a) a global irrigated area map (GIAM) at 10-km resolution based primarily on AVHRR data, and (b) an irrigated area map for India at 500-m resolution based primarily on MODIS data. The sub-pixel irrigated areas (SPIAs) from coarse-resolution satellite sensor data were estimated by multiplying the full-pixel irrigated areas (FPIAs) with irrigated area fractions (IAFs). Three methods were presented for IAF computation: (a) Google Earth estimate (IAF-GEE); (b) high-resolution imagery (IAF-HRI); and (c) sub-pixel de-composition technique (IAF-SPDT). The IAF-GEE involved the use of "zoom-in views" of sub-meter to 4-meter very high resolution imagery (VHRI) from Google Earth and helped determine the total area available for irrigation (TAAI), or net irrigated area, which does not consider intensity or seasonality of irrigation. The IAF-HRI is a well-known method that uses finer-resolution data to determine SPAs of the coarser-resolution imagery. The IAF-SPDT is a unique and innovative method wherein SPAs are determined based on the precise location of every pixel of a class in a 2-dimensional brightness-greenness-wetness (BGW) feature-space plot of red-band versus near-infrared-band spectral reflectivity. The SPIAs computed using IAF-SPDT for the GIAM were within 2% of the SPIAs computed using the well-known IAF-HRI, and the fractions from the two methods were significantly correlated. The IAF-HRI and IAF-SPDT help determine annualized or gross irrigated areas (AIA), which do consider intensity or seasonality (e.g., the sum of areas from season 1, season 2, and continuous year-round crops).
The national census-based irrigated areas for the top 40 irrigated nations (which account for about 90% of global irrigation) agreed significantly better, with smaller uncertainties and errors, with SPIAs than with FPIAs derived using the 10-km and 500-m data. The SPIAs were closer to actual areas, whereas FPIAs grossly over-estimate areas. The research clearly demonstrated the value and importance of sub-pixel areas as opposed to full-pixel areas and presented three innovative methods for computing them.
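The core estimator above (SPIA = FPIA × IAF) and the over-estimation of full-pixel counting can be shown on a toy fraction map. The 100 km² pixel area (a 10-km grid, as in the GIAM) is taken from the abstract; the array values are invented for illustration.

```python
import numpy as np

def irrigated_areas(iaf, pixel_area_km2=100.0):
    """Full-pixel vs sub-pixel irrigated area from per-pixel irrigated-area
    fractions (IAF). FPIA counts every pixel containing any irrigation as
    fully irrigated; SPIA weights each pixel by its fraction."""
    fpia = np.count_nonzero(iaf > 0) * pixel_area_km2
    spia = float(iaf.sum()) * pixel_area_km2
    return fpia, spia
```

Any pixel that is only partly irrigated contributes its full area to FPIA but only its fraction to SPIA, which is why FPIA systematically over-estimates whenever fractions are below one.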
Landsat Time-Series Analysis Opens New Approaches for Regional Glacier Mapping
NASA Astrophysics Data System (ADS)
Winsvold, S. H.; Kääb, A.; Nuth, C.; Altena, B.
2016-12-01
The archive of Landsat satellite scenes is important for the mapping of glaciers, as it represents the longest running continuous satellite record of sufficient resolution to track glacier changes over time. Newly launched optical sensors (Landsat 8 and Sentinel-2A) and those upcoming in the near future (Sentinel-2B) will provide very high temporal resolution of optical satellite images, especially in high-latitude regions. Because of the potential that lies within such near-future dense time series, methods for mapping glaciers from space should be revisited. We present application scenarios that utilize and explore dense time series of optical data for automatic mapping of glacier outlines and glacier facies. Throughout the season, glaciers display a temporal sequence of properties in optical reflection as the seasonal snow melts away, and glacier ice appears in the ablation area and firn in the accumulation area. In one application scenario we simulated potential future seasonal resolution using several years of Landsat 5 TM/7 ETM+ data, and found a sinusoidal evolution of the spectral reflectance of on-glacier pixels throughout a year. We believe this is because of the short wave infrared band and its sensitivity to snow grain size. The parameters retrieved from the fitted sinusoid can be used for glacier mapping purposes; we found similar results using e.g. the mean of summer band ratio images. In individual optical mapping scenes, conditions will vary (e.g., snow, ice, and clouds) and will not be equally optimal over the entire scene. Using robust statistics on stacked pixels reveals a potential for synthesizing optimal mapping scenes from a temporal stack, as we present in a further application scenario. The dense time series available from satellite imagery will also promote multi-temporal and multi-sensor based analyses.
The seasonal pattern of snow and ice on a glacier seen in the optical time series can, in the summer season, also be observed in radar backscatter series. Optical sensors reveal the reflective properties of the surface, while radar sensors may penetrate the surface, revealing properties of a certain volume. In an outlook to this contribution, we explore how information from SAR and optical sensor systems can be combined for different purposes.
CMOS Active Pixel Sensors as energy-range detectors for proton Computed Tomography.
Esposito, M; Anaxagoras, T; Evans, P M; Green, S; Manolopoulos, S; Nieto-Camero, J; Parker, D J; Poludniowski, G; Price, T; Waltham, C; Allinson, N M
2015-06-03
Since the first proof of concept in the early 70s, a number of technologies have been proposed to perform proton CT (pCT) as a means of mapping tissue stopping power for accurate treatment planning in proton therapy. Previous prototypes of energy-range detectors for pCT have mainly been based on scintillator calorimeters that measure proton residual energy after passing through the patient. However, such an approach is limited by the need for only a single proton to pass through the energy-range detector in a read-out cycle. A novel approach to this problem is the use of pixelated detectors, where the independent read-out of each pixel allows the residual energies of a number of protons to be measured simultaneously in the same read-out cycle, facilitating a faster and more efficient pCT scan. This paper investigates the suitability of CMOS Active Pixel Sensors (APSs) to track individual protons as they pass through a number of CMOS layers forming an energy-range telescope. Measurements performed at the iThemba Laboratories are presented and analysed in terms of correlation, confirming the proton-tracking capability of CMOS APSs.
1993-10-01
satellite-derived products and to understand in a more quantitative manner the benefits of different sensor systems. While there have been studies of... radiation balance of the surface/atmosphere system in the Arctic (Curry et al., 1990; Curry et al., 1989a; Curry et al., 1989b) and 4) high level cirrus... characteristics of the target or imaging system. In such a context it assumes that a given lead pixel is completely within the FOV of the satellite
Plantar pressure cartography reconstruction from 3 sensors.
Abou Ghaida, Hussein; Mottet, Serge; Goujon, Jean-Marc
2014-01-01
Foot problems are often diagnosed using pressure mapping systems, which are unfortunately confined to laboratories. In the context of e-health and telemedicine for home monitoring of patients with foot problems, our focus is to present a system acceptable for daily use. We developed an ambulatory instrumented insole using 3 pressure sensors to visualize plantar pressure cartographies. We show that a standard insole with fixed sensor positions can be used for different foot sizes. The results show an average error, measured at each pixel, of 0.01 daN, with a standard deviation of 0.005 daN.
Adaptive time-sequential binary sensing for high dynamic range imaging
NASA Astrophysics Data System (ADS)
Hu, Chenhui; Lu, Yue M.
2012-06-01
We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switching from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence closely follows that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the strategy utilizing a constant single-photon threshold considered in previous work, the proposed scheme attains orders of magnitude improvement in sensor dynamic range.
A novel source-drain follower for monolithic active pixel sensors
NASA Astrophysics Data System (ADS)
Gao, C.; Aglieri, G.; Hillemanns, H.; Huang, G.; Junique, A.; Keil, M.; Kim, D.; Kofarago, M.; Kugathasan, T.; Mager, M.; Marin Tobon, C. A.; Martinengo, P.; Mugnier, H.; Musa, L.; Lee, S.; Reidt, F.; Riedler, P.; Rousset, J.; Sielewicz, K. M.; Snoeys, W.; Sun, X.; Van Hoorne, J. W.; Yang, P.
2016-09-01
Monolithic active pixel sensors (MAPS) are attracting interest for tracking applications in high energy physics as they integrate sensor and readout electronics in one silicon die, with potential for lower material budget and cost, and better performance. Source followers (SFs) are widely used for MAPS readout: they increase the charge conversion gain 1/Ceff, or equivalently decrease the effective sensing node capacitance Ceff, because the follower action compensates part of the input capacitance. Charge conversion gain is critical for analog power consumption and therefore for material budget in tracking applications, and also has direct system impact. This paper presents a novel source-drain follower (SDF), where both source and drain follow the gate potential, improving charge conversion gain. For the Inner Tracking System (ITS) upgrade of the ALICE experiment at CERN, low material budget is a primary requirement, and the SDF circuit was studied as part of the effort to optimize the effective capacitance of the sensing node. The collection electrode, input transistor and routing metal all contribute to Ceff. Reverse sensor bias reduces the collection electrode capacitance. The novel SDF circuit eliminates the contribution of the input transistor to Ceff, reduces the routing contribution if additional shielding is introduced, provides a way to estimate the capacitance of the sensor itself, and has a voltage gain closer to unity than the standard SF. The SDF circuit has a somewhat larger area and a somewhat smaller bandwidth, but this is acceptable in most cases. A test chip, manufactured in a 180 nm CMOS image sensor process, implements small prototype pixel matrices in different flavors to compare the standard SF to the novel SDF and to the novel SDF with additional shielding. The effective sensing node capacitance was measured using a 55Fe source. Increasing the reverse substrate bias from -1 V to -6 V reduces Ceff by 38% and the equivalent noise charge (ENC) by 22% for the standard SF.
The SDF provides a further 9% improvement in Ceff and 25% in ENC. The SDF circuit with additional shielding provides an 18% improvement in Ceff, and combined with -6 V reverse bias yields almost a factor of 2 reduction overall.
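The capacitance bookkeeping behind these improvements can be illustrated with a small numeric sketch. All component values and gains below are assumed for illustration, not measurements from the test chip: any node capacitance whose far plate follows the gate is scaled by (1 - gain), i.e. bootstrapped away.

```python
# Effective sensing-node capacitance bookkeeping (values in fF, assumed)
C_electrode = 2.5   # collection-electrode capacitance
C_input     = 1.5   # input-transistor gate capacitance
C_routing   = 1.0   # metal routing capacitance

A_sf = 0.75         # assumed standard-SF voltage gain; the SDF is closer to unity

def c_eff_sf():
    # standard source follower: only the source follows the gate, so the
    # input-transistor capacitance is only partly bootstrapped away
    return C_electrode + (1 - A_sf) * C_input + C_routing

def c_eff_sdf(shielded=False):
    # source-drain follower: source and drain both follow the gate,
    # eliminating the input-transistor contribution; an actively driven
    # shield also bootstraps away the routing contribution
    return C_electrode + (0.0 if shielded else C_routing)

# charge conversion gain scales as 1/Ceff: smaller Ceff means more signal
# voltage per collected electron
print(c_eff_sf(), c_eff_sdf(), c_eff_sdf(shielded=True))  # -> 3.875 3.5 2.5
```

With shielding, only the collection-electrode term remains, which is also why the SDF offers a way to estimate the sensor capacitance itself.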
Selkowitz, David J.; Forster, Richard; Caldwell, Megan K.
2014-01-01
Remote sensing of snow-covered area (SCA) can be binary (indicating the presence/absence of snow cover at each pixel) or fractional (indicating the fraction of each pixel covered by snow). Fractional SCA mapping provides more information than binary SCA, but is more difficult to implement and may not be feasible with all types of remote sensing data. The utility of fractional SCA mapping relative to binary SCA mapping varies with the intended application as well as with spatial resolution, temporal resolution, period of interest, and climate. We quantified the frequency of occurrence of partially snow-covered (mixed) pixels at spatial resolutions between 1 m and 500 m over five dates at two study areas in the western U.S., using 0.5 m binary SCA maps derived from high-spatial-resolution imagery aggregated to fractional SCA at coarser spatial resolutions. In addition, we used in situ monitoring to estimate the frequency of partially snow-covered conditions for the period September 2013–August 2014 at ten 60 m grid-cell footprints at two study areas with continental snow climates. Results from the image analysis indicate that at 40 m, slightly above the nominal spatial resolution of Landsat, mixed pixels accounted for 25%–93% of total pixels, while at 500 m, the nominal spatial resolution of the MODIS bands used for snow cover mapping, mixed pixels accounted for 67%–100% of total pixels. Mixed pixels occurred more commonly at the continental snow climate site than at the maritime snow climate site. The in situ data indicate that some snow cover was present between 186 and 303 days, and partial snow cover conditions occurred on 10%–98% of days with snow cover. Four sites remained partially snow-free throughout most of the winter and spring, while six sites were entirely snow covered throughout most or all of the winter and spring. Within 60 m grid cells, the late spring/summer transition from snow-covered to snow-free conditions lasted 17–56 days and averaged 37 days.
Our results suggest that mixed snow-covered/snow-free pixels are common at the spatial resolutions imaged by both the Landsat and MODIS sensors. This highlights the additional information available from fractional SCA products and suggests fractional SCA can provide a major advantage for hydrological and climatological monitoring and modeling, particularly when accurate representation of the spatial distribution of snow cover is critical.
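The aggregation step used in the image analysis — fine binary snow maps averaged into coarser fractional-SCA pixels, with partially covered coarse pixels counted as mixed — can be sketched as follows. The toy random map is purely illustrative:

```python
import numpy as np

def fractional_sca(binary_sca, factor):
    """Aggregate a fine binary snow map (1 = snow) into coarse pixels by
    block-averaging; coarse values strictly between 0 and 1 are 'mixed'."""
    h, w = binary_sca.shape
    h, w = h - h % factor, w - w % factor          # trim to a block multiple
    blocks = binary_sca[:h, :w].reshape(h // factor, factor,
                                        w // factor, factor)
    frac = blocks.mean(axis=(1, 3))                # fractional SCA per coarse pixel
    mixed = (frac > 0) & (frac < 1)
    return frac, mixed.mean()                      # share of mixed coarse pixels

rng = np.random.default_rng(1)
fine = (rng.random((400, 400)) < 0.5).astype(float)   # toy 0.5 m binary map
_, mixed_share = fractional_sca(fine, factor=80)      # e.g. 0.5 m -> 40 m pixels
print(mixed_share)   # -> 1.0: every coarse pixel of this noisy toy map is mixed
```

On real snow maps the mixed-pixel share depends on how patchy the snow cover is relative to the coarse pixel size, which is exactly the scale dependence the study quantifies.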
Detailed measurements of shower properties in a high granularity digital electromagnetic calorimeter
NASA Astrophysics Data System (ADS)
van der Kolk, N.
2018-03-01
The MAPS (Monolithic Active Pixel Sensors) prototype of the proposed ALICE Forward Calorimeter (FoCal) is the highest-granularity electromagnetic calorimeter, with 39 million pixels of size 30 × 30 μm². Particle showers can be studied in unprecedented detail with this prototype. Electromagnetic showers at energies between 2 GeV and 244 GeV have been studied and compared with GEANT4 simulations. Simulation models can be tested in more detail than ever before, and the differences observed between FoCal data and GEANT4 simulations illustrate that improvements in electromagnetic models are still possible.
Active-Pixel Image Sensor With Analog-To-Digital Converters
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.
1995-01-01
Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at a time; during time allocated to scanning row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.
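The principle behind first-order Sigma-Delta modulation, the A/D scheme named above, can be sketched in a few lines: an accumulator integrates the input, emitting a 1 and subtracting a full-scale reference each time it overflows, so the density of 1s in the bitstream tracks the normalized input level. This idealized accumulator form is a minimal sketch, not the chip's actual circuit:

```python
def sigma_delta(x, n_cycles):
    """First-order sigma-delta modulation of a constant input x in [0, 1]:
    the integrator accumulates the input and the 1-bit feedback subtracts
    the reference; the average of the bitstream approximates x."""
    acc, bits = 0.0, []
    for _ in range(n_cycles):
        acc += x
        if acc >= 1.0:          # comparator fires
            bits.append(1)
            acc -= 1.0          # 1-bit DAC feedback
        else:
            bits.append(0)
    return bits

bits = sigma_delta(0.3, 1000)
print(sum(bits) / len(bits))    # close to the input level 0.3
```

Decimating (averaging) the oversampled bitstream recovers the multi-bit sample, which is why one such modulator per column suffices for the semiparallel row scan.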
Photodiode area effect on performance of X-ray CMOS active pixel sensors
NASA Astrophysics Data System (ADS)
Kim, M. S.; Kim, Y.; Kim, G.; Lim, K. T.; Cho, G.; Kim, D.
2018-02-01
Compared to conventional TFT-based X-ray imaging devices, CMOS-based X-ray imaging sensors are considered next-generation because they can be manufactured with very small pixel pitches and can acquire high-speed images. In addition, CMOS-based sensors have the advantage of integrating various functional circuits within the sensor, and image quality can also be improved by the high fill factor achievable in large pixels. If the imaged subject is small, however, the pixel size must be reduced accordingly, and integrating additional functional circuits within the pixel reduces the fill factor. In this study, 3T-APS (active pixel sensor) devices with photodiodes of four different sizes were fabricated and evaluated. It is well known that a larger photodiode leads to improved overall performance. Nonetheless, once the photodiode area exceeds 1000 μm², the rate at which sensor performance improves with photodiode size diminishes. As a result, considering the fill factor, a pixel pitch larger than 32 μm is not necessary to achieve high-efficiency image quality. In addition, poor image quality is to be expected unless special sensor-design techniques are employed for sensors with a pixel pitch of 25 μm or less.
NASA Astrophysics Data System (ADS)
Fu, Y.; Hu-Guo, C.; Dorokhov, A.; Pham, H.; Hu, Y.
2013-07-01
In order to exploit the ability to integrate a charge-collecting electrode with analog and digital processing circuitry down to the pixel level, a new type of CMOS pixel sensor with full CMOS capability is presented in this paper. The pixel array is read out using a column-parallel read-out architecture, where each pixel incorporates a diode, a preamplifier with double-sampling circuitry and a discriminator, to completely eliminate analog read-out bottlenecks. The sensor, featuring a pixel array of 8 rows and 32 columns with a pixel pitch of 80 μm × 16 μm, was fabricated in a 0.18 μm CMOS process. The behavior of each pixel-level discriminator, isolated from the diode and the preamplifier, was studied. The experimental results indicate that all in-pixel discriminators are fully operational and can provide significant improvements in the read-out speed and the power consumption of CMOS pixel sensors.
NASA Astrophysics Data System (ADS)
Spivey, Alvin J.
Mapping land-cover land-use change (LCLUC) over regional and continental scales, and over long time scales (years and decades), can be accomplished using thematically identified classification maps of a landscape: a LCLU class map. Observations of a landscape's LCLU class map pattern can indicate the most relevant process, such as hydrologic or ecologic function, causing landscape-scale environmental change. Quantified as Landscape Pattern Metrics (LPM), emergent landscape patterns act as Landscape Indicators (LI) when physically interpreted. The common mathematical approach to quantifying observed landscape-scale pattern is to have LPM measure how connected a class is within the landscape, through nonlinear local kernel operations on edges and gradients in class maps. Commonly applied kernel-based LPM that consistently reveal causal processes are Dominance, Contagion, and Fractal Dimension. These kernel-based LPM can be difficult to interpret: the emphasis that gradient operations place on an image pixel's edge, and the dependence of an image pixel's existence on classification accuracy, limit the interpretation of LPM. For example, the Dominance and Contagion kernel-based LPM measure how connected a landscape is in very similar ways; because of this, their reported edge measurements of connected pattern correlate strongly, making their results ambiguous. Additionally, each of these kernel-based LPM is unscalable when comparing class maps from separate imaging system sensor scenarios that change the image pixel's edge position (i.e., changes in landscape extent, pixel size, orientation, etc.), and can only interpret landscape pattern as accurately as the LCLU map classification will allow. This dissertation discusses the reliability of common LPM in light of imaging system effects such as algorithm classification likelihoods, LCLU classification accuracy under random image sensor noise, and image scale.
The focus of this work is an approach to generating well-behaved LPM through a Fourier system analysis of the entire class map, or any subset of the class map (e.g., a watershed). The Fourier approach provides four improvements for LPM. First, it reduces correlation between metrics by developing them within an independent (i.e., orthogonal) Fourier vector space, one that includes relevant physically representative parameters (i.e., between-class Euclidean distance). Second, by accounting for LCLU classification accuracy, both the measurement precision and measurement accuracy of the LPM are reported. Third, the mathematics of this approach makes it possible to compare image data captured at separate pixel resolutions or even from separate landscape scenes. Fourth, Fourier-interpreted landscape pattern measurement can measure the shape of the entire landscape, individual landscape cover change, or exchanges between class map subsets, by operating on the entire class map, a subset of the class map, or separate subsets of class maps, respectively. These LCLUC LPM are examined along the 1991-1992 and 2000-2001 records of National Land Cover Database Landsat data products. Those LPM results are used in a predictive fecal coliform model at the South Carolina watershed level in the context of past (validation study) change. Finally, the ability of the proposed LPM to serve as ecologically relevant environmental indicators is tested by correlating the metrics with other well-known LI that consistently reveal causal processes in the literature.
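A minimal sketch of the Fourier-space idea, using a radially averaged power spectrum as a scale-resolved pattern measure (the dissertation's actual metrics and parameterization may differ): pattern is summarized in an orthogonal frequency basis rather than by kernel edge counts, so coarse blocky landscapes and fine salt-and-pepper landscapes separate cleanly by spatial scale.

```python
import numpy as np

def radial_power_spectrum(class_map):
    """Radially averaged 2D power spectrum of a (binary) class map:
    mean spectral power as a function of spatial-frequency radius."""
    f = np.fft.fftshift(np.fft.fft2(class_map - class_map.mean()))
    power = np.abs(f) ** 2
    h, w = class_map.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)   # frequency radius
    spectrum = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return spectrum / np.maximum(counts, 1)            # mean power per radius

rng = np.random.default_rng(2)
coarse = (rng.random((8, 8)) < 0.5).repeat(16, 0).repeat(16, 1)  # blocky map
fine = (rng.random((128, 128)) < 0.5).astype(float)              # salt-and-pepper
ps_c = radial_power_spectrum(coarse)
ps_f = radial_power_spectrum(fine)
# the blocky landscape concentrates its power at low spatial frequencies
print(ps_c[:8].sum() > ps_f[:8].sum())   # -> True
```

Unlike kernel metrics, this measure does not depend on where individual pixel edges fall, which is the scalability property the dissertation argues for.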
Development of CMOS pixel sensors for the upgrade of the ALICE Inner Tracking System
NASA Astrophysics Data System (ADS)
Molnar, L.
2014-12-01
The ALICE Collaboration is preparing a major upgrade of the current detector, planned for installation during the second long LHC shutdown in the years 2018-19, in order to enhance its low-momentum vertexing and tracking capability, and to exploit the planned increase of the LHC luminosity with Pb beams. One of the cornerstones of the ALICE upgrade strategy is to replace the current Inner Tracking System in its entirety with a new, high-resolution, low-material ITS detector. The new ITS will consist of seven concentric layers equipped with Monolithic Active Pixel Sensors (MAPS) implemented using the 0.18 μm CMOS technology of TowerJazz. In this contribution, the key features of the ITS upgrade will be illustrated with emphasis on the functionality of the pixel chip. The ongoing developments on the readout architectures, which have been implemented in several fabricated prototypes, will be discussed. The operational features of these prototypes as well as the results of the characterisation tests before and after irradiation will also be presented.
A time-resolved image sensor for tubeless streak cameras
NASA Astrophysics Data System (ADS)
Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji
2014-03-01
This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, it requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11 μm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 μm.
NASA Astrophysics Data System (ADS)
Flouzat, C.; Değerli, Y.; Guilloux, F.; Orsini, F.; Venault, P.
2015-05-01
In the framework of the ALICE experiment upgrade at the HL-LHC, a new forward tracking detector, the Muon Forward Tracker (MFT), is foreseen to overcome the intrinsic limitations of the present Muon Spectrometer and will perform new measurements of general interest for the whole ALICE physics programme. To fulfill the new detector requirements, CMOS Monolithic Active Pixel Sensors (MAPS) provide an attractive trade-off between readout speed, spatial resolution, radiation hardness, granularity, power consumption and material budget. This technology has been chosen to equip the Muon Forward Tracker and also the vertex detector: the Inner Tracking System (ITS). In recent years, an intensive R&D program has been carried out on the design of MAPS in the 0.18 μm CMOS Image Sensor (CIS) process. In order to avoid pile-up effects in the experiment, the classical rolling-shutter readout of MAPS has been improved to overcome the readout speed limitation. A zero-suppression algorithm, based on 3 by 3 cluster finding (position and data), has been chosen for the MFT; this algorithm allows adequate data compression for the sensor. This paper presents the large-size prototype PIXAM, which represents 1/3 of the final chip, and focuses especially on the zero-suppression block architecture. This chip has been designed and is under fabrication in the 0.18 μm CIS process. Finally, the readout electronics principle used to send out the compressed data flow is also presented, taking into account the cluster occupancy per MFT plane for a single central Pb-Pb collision.
NASA Astrophysics Data System (ADS)
Caras, Tamir; Hedley, John; Karnieli, Arnon
2017-12-01
Remote sensing offers a potential tool for large-scale environmental surveying and monitoring. However, remote observations of coral reefs are difficult, especially due to the spatial and spectral complexity of the target compared to sensor specifications, as well as the environmental effects of the water medium above. The development of sensors is driven by technological advances and the desired products. Currently, spaceborne systems are technologically limited to a choice between high spectral resolution and high spatial resolution, but not both. The current study explores the dilemma of whether future sensor design for marine monitoring should prioritise improving spatial or spectral resolution. To address this question, a spatially and spectrally resampled ground-level hyperspectral image was used to test two classification elements: (1) how the trade-off between spatial and spectral resolutions affects classification; and (2) how noise reduction by a majority filter might improve classification accuracy. The studied reef, in the Gulf of Aqaba (Eilat), Israel, is heterogeneous and complex, so the local substrate patches are generally finer than currently available imagery. Therefore, the tested spatial resolution was broadly divided into four scale categories from five millimeters to one meter. Spectral resolution resampling aimed to mimic currently available and forthcoming spaceborne sensors such as (1) the Environmental Mapping and Analysis Program (EnMAP), characterized by 25 bands of 6.5 nm width; (2) VENμS, with 12 narrow bands; and (3) the WorldView series, with broadband multispectral resolution. Results suggest that spatial resolution should generally be prioritised for coral reef classification because the finer spatial scale tested (pixel size < 0.1 m) may compensate for some low spectral resolution drawbacks.
In this regard, it is shown that post-classification majority filtering substantially improves the accuracy at all pixel sizes, up to the point where the kernel size reaches the average unit size (pixel < 0.25 m). However, careful investigation of the effect of band distribution and choice could improve the sensor's suitability for marine environment tasks. With this in mind, while the focus of this study was on the technologically limited spaceborne design, aerial sensors may presently provide an opportunity to implement the suggested setup.
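A post-classification majority filter of the kind evaluated here can be sketched as follows. This is a brute-force mode filter; the kernel size, edge padding, and tie handling are implementation choices, not details from the study:

```python
import numpy as np
from collections import Counter

def majority_filter(class_map, k=3):
    """Replace each pixel with the most common class label in its k x k
    neighborhood, suppressing isolated misclassified pixels."""
    pad = k // 2
    padded = np.pad(class_map, pad, mode="edge")   # replicate edges
    out = np.empty_like(class_map)
    h, w = class_map.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + k, j:j + k].ravel()
            out[i, j] = Counter(window).most_common(1)[0][0]
    return out

noisy = np.zeros((9, 9), dtype=int)
noisy[4, 4] = 1                      # single-pixel "speckle" misclassification
print(majority_filter(noisy).sum()) # -> 0: the isolated pixel is removed
```

Once the kernel approaches the size of real substrate patches, the filter starts erasing genuine detail rather than noise, which matches the study's observed limit at the average unit size.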
NASA Astrophysics Data System (ADS)
Esbrand, C.; Royle, G.; Griffiths, J.; Speller, R.
2009-07-01
The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty-first century. The concept of digital imaging introduced during the 1970s has since paved the way for established imaging techniques, of which digital mammography, phase contrast imaging and CT imaging are just a few examples. This paper presents a prototype intelligent digital mammography system designed and developed by a European consortium. The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology promoting on-chip data processing, enabling data processing and image acquisition to be performed simultaneously; consequently, statistical analysis of tissue is achievable in real time for the purpose of x-ray beam modulation via a feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel × 40 pixel CMOS MAPS sensing devices with a 32 μm pixel size, each individually coupled to a 100 μm thick thallium-doped structured CsI scintillator. This paper presents the first intelligent images of real excised breast tissue obtained from the prototype system, where the x-ray exposure was modulated via the statistical information extracted from the breast tissue itself. Conventional images were experimentally acquired and the statistical analysis of the data was done off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation using the statistical information extracted from the breast as a feedback mechanism is beneficial and foreseeable in the near future.
Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach
NASA Astrophysics Data System (ADS)
Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai
2006-01-01
With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. However, owing to limits on received radiant power, there will always be some trade-off between spatial and spectral resolution in the image captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectrometer images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than each of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.
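The wavelet-integrated part of such a fusion scheme can be sketched with a one-level Haar transform: keep the spectral band's low-frequency approximation subband and inject the panchromatic image's high-frequency detail subbands. This single-band sketch omits the false color mapping stage and is not the authors' exact algorithm:

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar transform of an even-sized image."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4    # approximation (low frequencies)
    lh = (a + b - c - d) / 4    # horizontal detail
    hl = (a - b + c - d) / 4    # vertical detail
    hh = (a - b - c + d) / 4    # diagonal detail
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(spectral_ll, pan):
    """Approximation from the spectral band, detail from the pan image."""
    _, lh, hl, hh = haar2(pan)
    return ihaar2(spectral_ll, lh, hl, hh)

pan = np.arange(64, dtype=float).reshape(8, 8)
ll, *_ = haar2(pan)
print(np.allclose(fuse(ll, pan), pan))   # perfect reconstruction -> True
```

Because the spectral band supplies only the approximation subband, its radiometry is preserved at coarse scale while the panchromatic image contributes the spatial detail.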
An All Silicon Feedhorn-Coupled Focal Plane for Cosmic Microwave Background Polarimetry
NASA Technical Reports Server (NTRS)
Hubmayr, J.; Appel, J. W.; Austermann, J. E.; Beall, J. A.; Becker, D.; Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Cho, H. M.;
2011-01-01
Upcoming experiments aim to produce high fidelity polarization maps of the cosmic microwave background. To achieve the required sensitivity, we are developing monolithic, feedhorn-coupled transition edge sensor polarimeter arrays operating at 150 GHz. We describe this focal plane architecture and the current status of this technology, focusing on single-pixel polarimeters being deployed on the Atacama B-mode Search (ABS) and an 84-pixel demonstration feedhorn array backed by four 10-pixel polarimeter arrays. The feedhorn array exhibits symmetric beams, cross-polar response less than -23 dB and excellent uniformity across the array. Monolithic polarimeter arrays, including arrays of silicon feedhorns, will be used in the Atacama Cosmology Telescope Polarimeter (ACTPol) and the South Pole Telescope Polarimeter (SPTpol) and have been proposed for upcoming balloon-borne instruments.
NASA Astrophysics Data System (ADS)
Lin, Shengmin; Lin, Chi-Pin; Wang, Weng-Lyang; Hsiao, Feng-Ke; Sikora, Robert
2009-08-01
A 256 × 512 element digital image sensor has been developed which has a large pixel size, slow scan and low power consumption for Hyper Spectral Imager (HySI) applications. The device is a mixed-mode, system-on-chip (SOC) IC. It combines analog circuitry, digital circuitry and optical sensor circuitry in a single chip. This chip integrates a 256 × 512 active pixel sensor array, a programmable gain amplifier (PGA) for row-wise gain setting, an I2C interface, SRAM, a 12-bit analog-to-digital converter (ADC), a voltage regulator, low-voltage differential signaling (LVDS) and a timing generator. The device can be used for 256 pixels of spatial resolution and 512 bands of spectral resolution ranging from 400 nm to 950 nm in wavelength. In row-wise gain readout mode, one can set a different gain on each row of the photodetector by storing the gain-setting data in the SRAM through the I2C interface. This unique row-wise gain setting can be used to compensate for the non-uniformity of the silicon spectral response, making the device well suited to hyperspectral imager applications. The HySI camera, located on board the Chandrayaan-1 satellite, was successfully launched to the moon on Oct. 22, 2008. The device is currently mapping the moon and sending back excellent images of the lunar surface. The device design and the moon image data will be presented in the paper.
Microlens performance limits in sub-2 μm pixel CMOS image sensors.
Huo, Yijie; Fesenmaier, Christian C; Catrysse, Peter B
2010-03-15
CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 μm, however, diffraction affects the optical performance of the pixel and of its microlens in particular. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 μm. This severely degrades performance for on- and off-axis pixels in the red, green and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 μm pixel node.
Hot pixel generation in active pixel sensors: dosimetric and micro-dosimetric response
NASA Technical Reports Server (NTRS)
Scheick, Leif; Novak, Frank
2003-01-01
The dosimetric response of an active pixel sensor is analyzed. Heavy ions are seen to damage the pixel in much the same way as gamma radiation. The probability of a hot pixel is seen to exhibit behavior not typical of other microdose effects.
Design, optimization and evaluation of a "smart" pixel sensor array for low-dose digital radiography
NASA Astrophysics Data System (ADS)
Wang, Kai; Liu, Xinghui; Ou, Hai; Chen, Jun
2016-04-01
Amorphous silicon (a-Si:H) thin-film transistors (TFTs) have been widely used to build flat-panel X-ray detectors for digital radiography (DR). As the demand for low-dose X-ray imaging grows, detectors with high signal-to-noise-ratio (SNR) pixel architectures are emerging. The "smart" pixel uses a dual-gate photosensitive TFT for sensing, storage, and switching. It differs from a conventional passive pixel sensor (PPS) and active pixel sensor (APS) in that all three functions are combined into one device instead of three separate units in a pixel. Thus, it is expected to have a high fill factor and high spatial resolution. In addition, it utilizes the amplification effect of the dual-gate photosensitive TFT to form a one-transistor APS that leads to a potentially high SNR. This paper addresses the design, optimization and evaluation of the smart pixel sensor and array for low-dose DR. We design and optimize the smart pixel from the scintillator to the TFT level and validate it through optical and electrical simulation and through experiments on a 4 × 4 sensor array.
Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on Mars Reconnaissance Orbiter (MRO)
NASA Astrophysics Data System (ADS)
Murchie, S.; Arvidson, R.; Bedini, P.; Beisser, K.; Bibring, J.-P.; Bishop, J.; Boldt, J.; Cavender, P.; Choo, T.; Clancy, R. T.; Darlington, E. H.; Des Marais, D.; Espiritu, R.; Fort, D.; Green, R.; Guinness, E.; Hayes, J.; Hash, C.; Heffernan, K.; Hemmler, J.; Heyler, G.; Humm, D.; Hutcheson, J.; Izenberg, N.; Lee, R.; Lees, J.; Lohr, D.; Malaret, E.; Martin, T.; McGovern, J. A.; McGuire, P.; Morris, R.; Mustard, J.; Pelkey, S.; Rhodes, E.; Robinson, M.; Roush, T.; Schaefer, E.; Seagrave, G.; Seelos, F.; Silverglate, P.; Slavney, S.; Smith, M.; Shyong, W.-J.; Strohbehn, K.; Taylor, H.; Thompson, P.; Tossman, B.; Wirzburger, M.; Wolff, M.
2007-05-01
The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is a hyperspectral imager on the Mars Reconnaissance Orbiter (MRO) spacecraft. CRISM consists of three subassemblies, a gimbaled Optical Sensor Unit (OSU), a Data Processing Unit (DPU), and the Gimbal Motor Electronics (GME). CRISM's objectives are (1) to map the entire surface using a subset of bands to characterize crustal mineralogy, (2) to map the mineralogy of key areas at high spectral and spatial resolution, and (3) to measure spatial and seasonal variations in the atmosphere. These objectives are addressed using three major types of observations. In multispectral mapping mode, with the OSU pointed at planet nadir, data are collected at a subset of 72 wavelengths covering key mineralogic absorptions and binned to pixel footprints of 100 or 200 m/pixel. Nearly the entire planet can be mapped in this fashion. In targeted mode the OSU is scanned to remove most along-track motion, and a region of interest is mapped at full spatial and spectral resolution (15-19 m/pixel, 362-3920 nm at 6.55 nm/channel). Ten additional abbreviated, spatially binned images are taken before and after the main image, providing an emission phase function (EPF) of the site for atmospheric study and correction of surface spectra for atmospheric effects. In atmospheric mode, only the EPF is acquired. Global grids of the resulting lower data volume observations are taken repeatedly throughout the Martian year to measure seasonal variations in atmospheric properties. Raw, calibrated, and map-projected data are delivered to the community with a spectral library to aid in interpretation.
CMOS Active Pixel Sensor Star Tracker with Regional Electronic Shutter
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly; Pain, Bedabrata; Staller, Craig; Clark, Christopher; Fossum, Eric
1996-01-01
The guidance system in a spacecraft determines spacecraft attitude by matching an observed star field to a star catalog....An APS(active pixel sensor)-based system can reduce mass and power consumption and radiation effects compared to a CCD(charge-coupled device)-based system...This paper reports an APS (active pixel sensor) with locally variable times, achieved through individual pixel reset (IPR).
Report of the sensor readout electronics panel
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Carson, J.; Kleinhans, W.; Kosonocky, W.; Kozlowski, L.; Pecsalski, A.; Silver, A.; Spieler, H.; Woolaway, J.
1991-01-01
The findings of the Sensor Readout Electronics Panel are summarized in regard to technology assessment and recommended development plans. In addition to two specific readout issues, cryogenic readouts and sub-electron noise, the panel considered three advanced technology areas that impact the ability to achieve large format sensor arrays. These are mega-pixel focal plane packaging issues, focal plane to data processing module interfaces, and event driven readout architectures. Development in each of these five areas was judged to have significant impact in enabling the sensor performance desired for the Astrotech 21 mission set. Other readout issues, such as focal plane signal processing or other high volume data acquisition applications important for Eos-type mapping, were determined not to be relevant for astrophysics science goals.
Mapping Irrigated Areas in the Tunisian Semi-Arid Context with Landsat Thermal and VNIR Data Imagery
NASA Astrophysics Data System (ADS)
Rivalland, Vincent; Drissi, Hsan; Simonneaux, Vincent; Tardy, Benjamin; Boulet, Gilles
2016-04-01
Our study area is the Merguellil semi-arid irrigated plain in Tunisia, where water resource management is an important stake for governmental institutions, farmer communities and, more generally, for the environment. Indeed, groundwater abstraction for irrigation is the primary cause of aquifer depletion; moreover, unregistered pumping practices are widespread and very difficult for authorities to survey. Thus, the identification of the areas actually irrigated across the whole plain is of major interest. In order to map the irrigated areas, we tried out a methodology based on the use of Landsat 7 and 8 Land Surface Temperature (LST) data derived from the atmospherically corrected thermal band using the LANDARTs tool, jointly with NDVI vegetation indices obtained from the visible and near-infrared (VNIR) bands. For each Landsat acquisition during the years 2012 to 2014, we computed a probability of irrigation based on the location of each pixel in the NDVI - LST space. Basically, for a given NDVI value, the cooler the pixel, the higher its probability of being irrigated. For each date, pixels were classified into seven bins of irrigation probability ranges. Pixel probabilities for each date were then summed over the study period, resulting in a probability map of irrigation. Comparison with ground data shows a consistent identification of irrigated plots and supports the potential operational interest of the method. However, results were hampered by the low availability of Landsat LST data due to clouds and the inadequate revisit frequency of the sensor.
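The per-date scoring in the NDVI - LST space can be sketched as below; the NDVI binning, the rank-based scoring, and all numbers are illustrative assumptions rather than the study's calibrated procedure. Within each NDVI bin, a pixel's score rises the cooler it is relative to its peers, and per-date scores are summed over the season:

```python
import numpy as np

def irrigation_score(ndvi, lst):
    """Per-date irrigation likelihood sketch: within each NDVI bin, rank
    pixels by LST so the coolest pixel scores 1 and the warmest scores 0."""
    score = np.zeros(ndvi.shape)
    edges = np.linspace(ndvi.min(), ndvi.max() + 1e-9, 6)   # 5 NDVI bins (assumed)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (ndvi >= lo) & (ndvi < hi)
        n = sel.sum()
        if n < 2:
            continue
        ranks = lst[sel].argsort().argsort()    # 0 = coolest pixel in the bin
        score[sel] = 1.0 - ranks / (n - 1)      # coolest -> 1, warmest -> 0
    return score

# summing the per-date scores over the season yields the irrigation map
dates = [(np.full(100, 0.5), np.linspace(290, 320, 100)) for _ in range(3)]
season = sum(irrigation_score(n, t) for n, t in dates)
print(season[0], season[-1])   # coolest pixel totals 3.0, warmest 0.0
```

In the study the per-date values are first discretized into seven probability bins before summation; this sketch keeps the continuous score for brevity.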
NASA Technical Reports Server (NTRS)
Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Bandera, Cesar (Inventor); Xia, Shu (Inventor)
2002-01-01
A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.
Pu, Ruiliang; Gong, Peng; Yu, Qian
2008-01-01
In this study, a comparative analysis of the capabilities of three sensors for mapping forest crown closure (CC) and leaf area index (LAI) was conducted. The three sensors are the Hyperspectral Imager (Hyperion) and Advanced Land Imager (ALI) onboard the EO-1 satellite, and the Landsat-7 Enhanced Thematic Mapper Plus (ETM+). A total of 38 mixed coniferous forest CC and 38 LAI measurements were collected at Blodgett Forest Research Station, University of California at Berkeley, USA. The analysis method consists of (1) extracting spectral vegetation indices (VIs), spectral texture information and maximum noise fractions (MNFs), (2) establishing multivariate prediction models, (3) predicting and mapping pixel-based CC and LAI values, and (4) validating the mapped CC and LAI results against field-validated, photo-interpreted CC and LAI values. The experimental results indicate that the Hyperion data are the most effective for mapping forest CC and LAI (CC mapped accuracy (MA) = 76.0%, LAI MA = 74.7%), followed by ALI data (CC MA = 74.5%, LAI MA = 70.7%), with ETM+ data being least effective (CC MA = 71.1%, LAI MA = 63.4%). This analysis demonstrates that the Hyperion sensor outperforms the other two sensors, ALI and ETM+, because of its high spectral resolution with rich, subtle spectral information; its short-wave infrared bands, only slightly affected by the atmosphere, for constructing optimal VIs; and the larger number of MNFs available for selection when establishing prediction models. Compared to ETM+ data, ALI data are better for mapping forest CC and LAI because ALI has more bands and higher signal-to-noise ratios. PMID: 27879906
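Step (2) of the workflow above, the multivariate prediction model, can be sketched as an ordinary least-squares regression of field measurements on per-pixel features. The linear form and the feature composition (VIs, texture, MNFs) are assumptions for illustration; the paper's exact model specification is not reproduced here.

```python
import numpy as np

def fit_lai_model(features_train, lai_train):
    """Ordinary least-squares fit: LAI (or CC) ~ linear combination of
    per-pixel features (vegetation indices, texture, MNFs) plus intercept."""
    G = np.hstack([features_train, np.ones((len(features_train), 1))])
    coef, *_ = np.linalg.lstsq(G, lai_train, rcond=None)
    return coef

def predict_lai(coef, features):
    """Apply the fitted model to every pixel's feature vector (step 3)."""
    G = np.hstack([features, np.ones((len(features), 1))])
    return G @ coef
```

Mapping the fitted coefficients over all pixels and comparing against the photo-interpreted reference corresponds to steps (3) and (4).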
Method and apparatus of high dynamic range image sensor with individual pixel reset
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Pain, Bedabrata (Inventor); Fossum, Eric R. (Inventor)
2001-01-01
A wide dynamic range image sensor provides individual pixel reset to vary the integration time of individual pixels. The integration time of each pixel is controlled by column and row reset control signals which activate a logical reset transistor only when both signals coincide for a given pixel.
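The coincidence logic can be sketched as a toy model of the patent's scheme (signal names are illustrative): a pixel's reset transistor fires only when its row and column reset lines are both asserted, so the controller can restart integration for any single pixel.

```python
import numpy as np

def pixel_reset_mask(row_reset, col_reset):
    """Return a boolean mask of pixels whose logical reset transistor
    activates: the row AND column reset control signals must coincide
    at that pixel."""
    row = np.asarray(row_reset, bool)[:, None]
    col = np.asarray(col_reset, bool)[None, :]
    return row & col
```

Asserting one row line and one column line resets exactly one pixel, leaving the integration of its neighbours undisturbed.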
Pixel-based flood mapping from SAR imagery: a comparison of approaches
NASA Astrophysics Data System (ADS)
Landuyt, Lisa; Van Wesemael, Alexandra; Van Coillie, Frieke M. B.; Verhoest, Niko E. C.
2017-04-01
Due to their all-weather, day-and-night capabilities, SAR sensors have been shown to be particularly suitable for flood mapping applications. They can provide spatially distributed flood extent data, which are valuable for calibrating, validating and updating flood inundation models. These models are an invaluable tool for water managers to take appropriate measures in times of high water levels. Image analysis approaches to delineate flood extent on SAR imagery are numerous. They can be classified into two categories: pixel-based and object-based approaches. Pixel-based approaches, e.g. thresholding, are abundant and in general computationally inexpensive. However, large discrepancies between these techniques exist, and subjective user intervention is often needed. Object-based approaches require more processing but allow for the integration of additional object characteristics, like contextual information and object geometry, and thus have significant potential to provide an improved classification result. As a benchmark, a selection of pixel-based techniques is applied to an ERS-2 SAR image of the 2006 flood event of the River Dee, United Kingdom. This selection comprises Otsu thresholding, Kittler & Illingworth thresholding, the Fine To Coarse segmentation algorithm and active contour modelling. The different classification results are evaluated and compared by means of several accuracy measures, including binary performance measures.
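Of the pixel-based techniques compared above, Otsu thresholding is the simplest to reproduce. A minimal sketch (the histogram bin count is an arbitrary choice):

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Otsu's method: pick the threshold that maximises the between-class
    variance of the histogram. On SAR backscatter data, pixels below the
    threshold (dark, smooth open water) are candidate flood pixels."""
    hist, edges = np.histogram(np.asarray(values).ravel(), bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)               # weight of the below-threshold class
    mu = np.cumsum(p * centers)     # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(between)]
```

The large discrepancies noted in the abstract stem partly from such choices (binning, pre-filtering, tiling), which differ between implementations.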
Fusion of GEDI, ICESAT2 & NISAR data for above ground biomass mapping in Sonoma County, California
NASA Astrophysics Data System (ADS)
Duncanson, L.; Simard, M.; Thomas, N. M.; Neuenschwander, A. L.; Hancock, S.; Armston, J.; Dubayah, R.; Hofton, M. A.; Huang, W.; Tang, H.; Marselis, S.; Fatoyinbo, T.
2017-12-01
Several upcoming NASA missions will collect data sensitive to forest structure (GEDI, ICESAT-2 & NISAR). The LiDAR and SAR data collected by these missions will be used in coming years to map forest aboveground biomass at various resolutions. This research focuses on developing and testing multi-sensor data fusion approaches in advance of these missions. Here, we present the first case study of a CMS-16 grant with results from Sonoma County, California. We simulate lidar and SAR datasets from GEDI, ICESAT-2 and NISAR using airborne discrete-return lidar and UAVSAR data, respectively. GEDI and ICESAT-2 signals are simulated from high-point-density discrete-return lidar that was acquired over the entire county in 2014 through a previous CMS project (Dubayah & Hurtt, CMS-13). NISAR is simulated from L-band UAVSAR data collected in 2014. These simulations are empirically related to 300 field plots of aboveground biomass as well as a 30 m biomass map produced from the 2014 airborne lidar data. We model biomass independently for each simulated mission dataset and then test two fusion methods for county-wide mapping: (1) a pixel-based approach and (2) an object-oriented approach. In the pixel-based approach, GEDI and ICESAT-2 biomass models are calibrated over field plots and applied in orbital simulations for a 2-year period of the GEDI and ICESAT-2 missions. These simulated samples are then used to calibrate UAVSAR data to produce a 0.25 ha map. In the object-oriented approach, the GEDI and ICESAT-2 data are identical to the pixel-based approach, but calibrate image objects of similar L-band backscatter rather than uniform pixels. The results of this research demonstrate the estimated ability of each of these three missions to independently map biomass in a temperate, high-biomass system, as well as the potential improvement expected from combining mission datasets.
High-voltage pixel sensors for ATLAS upgrade
NASA Astrophysics Data System (ADS)
Perić, I.; Kreidl, C.; Fischer, P.; Bompard, F.; Breugnon, P.; Clemens, J.-C.; Fougeron, D.; Liu, J.; Pangaud, P.; Rozanov, A.; Barbero, M.; Feigl, S.; Capeans, M.; Ferrere, D.; Pernegger, H.; Ristic, B.; Muenstermann, D.; Gonzalez Sevilla, S.; La Rosa, A.; Miucci, A.; Nessi, M.; Iacobucci, G.; Backhaus, M.; Hügging, Fabian; Krüger, H.; Hemperek, T.; Obermann, T.; Wermes, N.; Garcia-Sciveres, M.; Quadt, A.; Weingarten, J.; George, M.; Grosse-Knetter, J.; Rieger, J.; Bates, R.; Blue, A.; Buttar, C.; Hynds, D.
2014-11-01
High-voltage (HV-) CMOS pixel sensors offer several attractive properties: fast charge collection by drift, the possibility to implement relatively complex CMOS in-pixel electronics, and compatibility with commercial processes. The sensor element is a deep n-well diode in a p-type substrate; the n-well contains the CMOS pixel electronics. The main charge collection mechanism is drift in a shallow, high-field region, which leads to fast charge collection and high radiation tolerance. We are currently evaluating the use of high-voltage detectors implemented in 180 nm HV-CMOS technology for the high-luminosity ATLAS upgrade. Our approach is to replace the existing pixel and strip sensors with intelligent CMOS sensors while keeping the presently used readout ASICs. By intelligence we mean the ability of the sensor to recognize a particle hit and generate the address information. In this way we could benefit from the advantages of the HV sensor technology, such as lower cost, lower mass, lower operating voltage, smaller pitch, and smaller clusters at high incidence angles. Additionally, we expect to achieve the radiation hardness necessary for the ATLAS upgrade. In order to test the concept, we have designed two HV-CMOS prototypes that can be read out in two ways: using pixel and strip readout chips. In the case of the pixel readout, the connection between the HV-CMOS sensor and the readout ASIC can be established capacitively.
Development of n-in-p pixel modules for the ATLAS upgrade at HL-LHC
NASA Astrophysics Data System (ADS)
Macchiolo, A.; Nisius, R.; Savic, N.; Terzo, S.
2016-09-01
Thin planar pixel modules are promising candidates to instrument the inner layers of the new ATLAS pixel detector for HL-LHC, thanks to their reduced contribution to the material budget and their high charge collection efficiency after irradiation. Sensors 100-200 μm thick, interconnected to FE-I4 read-out chips, have been characterized with radioactive sources and beam tests at the CERN-SPS and DESY. The results of these measurements are reported for devices before and after irradiation up to a fluence of 14 × 10^15 neq/cm^2. The charge collection and tracking efficiency of the different sensor thicknesses are compared. The outlook for future planar pixel sensor production is discussed, with a focus on sensor designs with the pixel pitches (50×50 and 25×100 μm^2) foreseen for the RD53 Collaboration read-out chip in 65 nm CMOS technology. An optimization of the biasing structures in the pixel cells is required to avoid the hit-efficiency loss presently observed in the punch-through region after irradiation. For this purpose, the performance of different layouts has been compared in FE-I4 compatible sensors at various fluence levels using beam test data. Highly segmented sensors will represent a challenge for tracking in the forward region of the pixel system at HL-LHC. In order to reproduce the performance of 50×50 μm^2 pixels at high pseudo-rapidity values, FE-I4 compatible planar pixel sensors have been studied before and after irradiation in beam tests at a high incidence angle (80°) with respect to the short pixel direction. Results on cluster shapes, charge collection and hit efficiency will be shown.
Spatio-thermal depth correction of RGB-D sensors based on Gaussian processes in real-time
NASA Astrophysics Data System (ADS)
Heindl, Christoph; Pönitz, Thomas; Stübl, Gernot; Pichler, Andreas; Scharinger, Josef
2018-04-01
Commodity RGB-D sensors capture color images along with dense pixel-wise depth information in real-time. Typical RGB-D sensors are provided with a factory calibration and exhibit erratic depth readings due to coarse calibration values, ageing and thermal influence effects. This limits their applicability in computer vision and robotics. We propose a novel method to accurately calibrate depth considering spatial and thermal influences jointly. Our work is based on Gaussian Process Regression in a four dimensional Cartesian and thermal domain. We propose to leverage modern GPUs for dense depth map correction in real-time. For reproducibility we make our dataset and source code publicly available.
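The joint spatio-thermal regression described above can be sketched with a minimal RBF-kernel Gaussian Process. The kernel choice, its hyperparameters, and the 4-D input layout are illustrative assumptions; the paper's actual model and GPU implementation are not reproduced here.

```python
import numpy as np

def gp_fit_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-3):
    """Minimal GP regression with an RBF kernel over 4-D inputs
    (x, y, measured depth, sensor temperature), predicting the depth
    error to subtract from the raw reading."""
    def rbf(A, B):
        # squared Euclidean distance between every pair of rows
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)   # K^-1 y
    return rbf(X_test, X_train) @ alpha   # posterior mean at X_test
```

A real-time variant would precompute `alpha` offline and evaluate only the final matrix-vector product per frame, which is what makes a GPU implementation attractive.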
CMOS Active Pixel Sensor Technology and Reliability Characterization Methodology
NASA Technical Reports Server (NTRS)
Chen, Yuan; Guertin, Steven M.; Pain, Bedabrata; Kayaii, Sammy
2006-01-01
This paper describes the technology, design features and reliability characterization methodology of a CMOS Active Pixel Sensor. Both overall chip reliability and pixel reliability are projected for the imagers.
Improved radiation tolerance of MAPS using a depleted epitaxial layer
NASA Astrophysics Data System (ADS)
Dorokhov, A.; Bertolone, G.; Baudot, J.; Brogna, A. S.; Colledani, C.; Claus, G.; De Masi, R.; Deveaux, M.; Dozière, G.; Dulinski, W.; Fontaine, J.-C.; Goffe, M.; Himmi, A.; Hu-Guo, Ch.; Jaaskelainen, K.; Koziel, M.; Morel, F.; Santos, C.; Specht, M.; Valin, I.; Voutsinas, G.; Wagner, F. M.; Winter, M.
2010-12-01
The tracking performance of Monolithic Active Pixel Sensors (MAPS) developed at IPHC (Turchetta, et al., 2001) [1] has been extensively studied (Winter, et al., 2001; Gornushkin, et al., 2002) [2,3]. Numerous sensor prototypes, called MIMOSA, have been fabricated and tested since 1999 in order to optimise the charge collection efficiency and power dissipation, to minimise the noise and to increase the readout speed. The radiation tolerance was also investigated. The highest tolerable fluence for a 10 μm pitch device was found to be ~10^13 neq/cm^2, while it was only 2 × 10^12 neq/cm^2 for a 20 μm pitch device. The purpose of this paper is to show that the tolerance to non-ionising radiation may be extended up to O(10^14) neq/cm^2. This goal relies on a fabrication process featuring a 15 μm thin, high-resistivity (~1 kΩ·cm) epitaxial layer. A sensor prototype (MIMOSA-25) was fabricated in this process to explore its detection performance. The depletion depth of the epitaxial layer at standard CMOS voltages (<5 V) is similar to the layer thickness. Measurements with minimum ionising particles show that the charge collected in the seed pixel is at least twice as large for the depleted epitaxial layer as for the undepleted one, translating into a signal-to-noise ratio (SNR) of ~50. Tests after irradiation have shown that this excellent performance is maintained up to the highest fluence considered (3 × 10^13 neq/cm^2), demonstrating a significant extension of the radiation tolerance limits of MAPS.
Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.
Han, Youkyung; Oh, Jaehong
2018-05-17
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
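The final step above, fitting a transformation to the extracted corresponding points (CPs), can be sketched with a global affine model; the paper's non-rigid transformation generalises this same least-squares fit to local neighbourhoods.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping CPs src -> dst;
    each row of src and dst is an (x, y) point."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    G = np.hstack([src, np.ones((len(src), 1))])   # homogeneous [x, y, 1]
    A, *_ = np.linalg.lstsq(G, dst, rcond=None)    # A has shape (3, 2)
    return A

def apply_affine(A, pts):
    """Warp points through the fitted transform."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A
```

With many well-distributed CPs, residuals of this global fit reveal the local misalignment that the non-rigid model is then built to absorb.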
A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.
1993-01-01
A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.
A low-noise CMOS pixel direct charge sensor, Topmetal-II-
An, Mangmang; Chen, Chufeng; Gao, Chaosong; ...
2015-12-12
In this paper, we report the design and characterization of a CMOS pixel direct charge sensor, Topmetal-II-, fabricated in a standard 0.35 μm CMOS Integrated Circuit process. The sensor utilizes exposed metal patches on top of each pixel to directly collect charge. Each pixel contains a low-noise charge-sensitive preamplifier to establish the analog signal and a discriminator with tunable threshold to generate hits. The analog signal from each pixel is accessible through time-shared multiplexing over the entire array. Hits are read out digitally through a column-based priority logic structure. Tests show that the sensor achieved an analog noise below 15 e- and a minimum digital-readout threshold of 200 e- per pixel. The sensor is capable of detecting both electrons and ions drifting in gas. These characteristics enable its use as the charge readout device in future Time Projection Chambers without a gaseous gain mechanism, which has unique advantages in low-background and low-rate-density experiments.
Radiation hard pixel sensors using high-resistive wafers in a 150 nm CMOS processing line
NASA Astrophysics Data System (ADS)
Pohl, D.-L.; Hemperek, T.; Caicedo, I.; Gonella, L.; Hügging, F.; Janssen, J.; Krüger, H.; Macchiolo, A.; Owtscharenko, N.; Vigani, L.; Wermes, N.
2017-06-01
Pixel sensors using 8'' CMOS processing technology have been designed and characterized, offering the benefits of industrial sensor fabrication, including large wafers, high throughput and yield, as well as low cost. The pixel sensors are produced using a 150 nm CMOS technology offered by LFoundry in Avezzano. The technology provides multiple metal and polysilicon layers, as well as metal-insulator-metal capacitors that can be employed for AC-coupling and redistribution layers. Several prototypes were fabricated and are characterized with minimum ionizing particles before and after irradiation to fluences up to 1.1 × 10^15 neq/cm^2. The CMOS-fabricated sensors perform as well as standard pixel sensors in terms of noise and hit detection efficiency. AC-coupled sensors even reach 100% hit efficiency in a 3.2 GeV electron beam before irradiation.
CMOS Active-Pixel Image Sensor With Simple Floating Gates
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Nakamura, Junichi; Kemeny, Sabrina E.
1996-01-01
Experimental complementary metal-oxide/semiconductor (CMOS) active-pixel image sensor integrated circuit features simple floating-gate structure, with metal-oxide/semiconductor field-effect transistor (MOSFET) as active circuit element in each pixel. Provides flexibility of readout modes, no kTC noise, and relatively simple structure suitable for high-density arrays. Features desirable for "smart sensor" applications.
A Chip and Pixel Qualification Methodology on Imaging Sensors
NASA Technical Reports Server (NTRS)
Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank
2004-01-01
This paper presents a qualification methodology for imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figures of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.
REPORT ON AN ORBITAL MAPPING SYSTEM.
Colvocoresses, Alden P.; ,
1984-01-01
During June 1984, the International Society for Photogrammetry and Remote Sensing accepted a committee report that defines an Orbital Mapping System (OMS) to follow Landsat and other Earth-sensing systems. The OMS involves the same orbital parameters as Landsats 1, 2, and 3; three wave bands (two in the visible and one in the near infrared); and continuous stereoscopic capability. The sensors involve solid-state linear arrays, with data acquisition (including stereo) designed for one-dimensional data processing. The system has a resolution capability of 10-m pixels and is capable of producing 1:50,000-scale image maps with 20-m contours. In addition to mapping, the system is designed to monitor the works of man as well as nature in a cost-effective manner.
NASA Astrophysics Data System (ADS)
Waigl, C.; Stuefer, M.; Prakash, A.
2013-12-01
Wildfire is the main disturbance regime of the boreal forest ecosystem, a region acutely sensitive to climate change. Large fires impact the carbon cycle, permafrost, and air quality on a regional and even hemispheric scale. Because of their significance as a hazard to human health and economic activity, monitoring wildfires is relevant not only to science but also to government agencies. The goal of this study is to develop pathways towards a near real-time assessment of fire characteristics in the boreal zones of Alaska based on satellite remote sensing data. We map the location of active burn areas and derive fire parameters such as fire temperature, intensity, stage (smoldering or flaming), emission injection points, carbon consumed, and energy released. For monitoring wildfires in the sub-arctic region, we benefit from the high temporal resolution of data (as high as 8 images a day) from MODIS on the Aqua and Terra platforms and VIIRS on NPP/Suomi, downlinked and processed to level 1 by the Geographic Information Network of Alaska at the University of Alaska Fairbanks. To transcend the low spatial resolution of these sensors, a sub-pixel analysis is carried out. By applying techniques from Bayesian inverse modeling to Dozier's two-component approach, uncertainties and sensitivity of the retrieved fire temperatures and fractional pixel areas to background temperature and atmospheric factors are assessed. A set of test cases - large fires from the 2004 to 2013 fire seasons complemented by a selection of smaller burns at the lower end of the MODIS detection threshold - is used to evaluate the methodology. While the VIIRS principal fire detection band M13 (centered at 4.05 μm, similar to MODIS bands 21 and 22 at 3.959 μm) does not usually saturate for Alaskan wildfire areas, the thermal IR band M15 (10.763 μm, comparable to MODIS band 31 at 11.03 μm) indeed saturates for a percentage, though not all, of the fire pixels of intense burns. 
As this limits the application of the classical version of Dozier's model for this particular combination to lower intensity and smaller fires, or smaller fractional fire areas, other VIIRS band combinations are evaluated as well. Furthermore, the higher spatial resolution of the VIIRS sensor compared to MODIS and its constant along-scan resolution DNB (day/night band) dataset provide additional options for fire mapping, detection and quantification. Higher spatial resolution satellite-borne remote sensing data is used to validate the pixel and sub-pixel level analysis and to assess lower detection thresholds. For each sample fire, moderate-resolution imagery is paired with data from the ASTER instrument (simultaneous with MODIS data on the Terra platform) and/or Landsat scenes acquired in close temporal proximity. To complement the satellite-borne imagery, aerial surveys using a FLIR thermal imaging camera with a broadband TIR sensor provide additional ground truthing and a validation of fire location and background temperature.
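Dozier's two-component model referenced above can be sketched as a brute-force inversion of a two-band Planck mixture. The grid ranges, unit emissivity, and the absence of atmospheric correction are simplifications; a real retrieval must also account for atmosphere and sensor spectral response.

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wav_m, T):
    """Blackbody spectral radiance (W m^-2 sr^-1 m^-1) at wavelength
    wav_m (metres) and temperature T (kelvin)."""
    return (2.0 * H * C**2 / wav_m**5) / (np.exp(H * C / (wav_m * KB * T)) - 1.0)

def dozier_invert(L4, L11, T_bg, wav4=4.05e-6, wav11=10.763e-6):
    """Two-component retrieval: the pixel radiance in the ~4 um and ~11 um
    bands is modelled as a fire fraction p at temperature Tf plus background
    at T_bg. Returns the (Tf, p) pair minimising the squared relative
    radiance misfit over a coarse grid."""
    Tf_grid = np.arange(400.0, 1501.0, 1.0)
    p_grid = np.logspace(-5, 0, 400)
    B4f, B11f = planck(wav4, Tf_grid), planck(wav11, Tf_grid)
    B4b, B11b = planck(wav4, T_bg), planck(wav11, T_bg)
    # modelled radiance for every (Tf, p) pair
    m4 = p_grid[None, :] * B4f[:, None] + (1 - p_grid[None, :]) * B4b
    m11 = p_grid[None, :] * B11f[:, None] + (1 - p_grid[None, :]) * B11b
    err = ((m4 - L4) / L4) ** 2 + ((m11 - L11) / L11) ** 2
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return Tf_grid[i], p_grid[j]
```

Saturation of the 11 μm band, as noted for intense Alaskan burns, removes one of the two constraints and is why the classical inversion then fails.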
Image Segmentation Analysis for NASA Earth Science Applications
NASA Technical Reports Server (NTRS)
Tilton, James C.
2010-01-01
NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region-growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region-growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region-growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
NASA Astrophysics Data System (ADS)
Sedano, Fernando; Kempeneers, Pieter; Strobl, Peter; Kucera, Jan; Vogt, Peter; Seebach, Lucia; San-Miguel-Ayanz, Jesús
2011-09-01
This study presents a novel cloud masking approach for high resolution remote sensing images in the context of land cover mapping. In contrast to traditional methods, the approach does not rely on thermal bands, and it is applicable to images from most high resolution Earth observation sensors. The methodology couples pixel-based seed identification with object-based region growing. The seed identification stage relies on pixel-value comparison between the high resolution images and almost simultaneously acquired, cloud-free composites at lower spatial resolution. The methodology was tested taking SPOT4-HRVIR, SPOT5-HRG and IRS-LISS III as high resolution images and cloud-free MODIS composites as reference images. The selected scenes included a wide range of cloud types and surface features. The resulting cloud masks were evaluated through visual comparison. They were also compared with ad hoc, independently generated cloud masks and with the automatic cloud cover assessment algorithm (ACCA). In general, the results showed an agreement in detected clouds higher than 95% for clouds larger than 50 ha. The approach produced consistent results, identifying and mapping clouds of different types and sizes over various land surfaces, including natural vegetation, agricultural land, built-up areas, water bodies and snow.
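The seed-then-grow coupling can be sketched as follows. The 4-connectivity, the pixel-to-pixel similarity criterion, and the tolerance are illustrative choices; the paper's exact growing rule is not reproduced here.

```python
import numpy as np
from collections import deque

def grow_cloud_mask(image, seeds, tol):
    """Grow 4-connected regions outward from seed pixels (e.g. pixels much
    brighter in the high resolution image than in the cloud-free composite),
    adding a neighbour while its value stays within `tol` of the adjacent
    in-region pixel."""
    mask = seeds.astype(bool).copy()
    queue = deque(map(tuple, np.argwhere(mask)))
    rows, cols = image.shape
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols and not mask[rr, cc]
                    and abs(image[rr, cc] - image[r, c]) <= tol):
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask
```

A handful of confident seeds per cloud is enough for the growing stage to delineate the full object, which is what lets the seed test stay conservative.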
Laboratory and testbeam results for thin and epitaxial planar sensors for HL-LHC
Bubna, M.; Bolla, G.; Bortoletto, D.; ...
2015-08-03
The High-Luminosity LHC (HL-LHC) upgrade of the CMS pixel detector will require the development of novel pixel sensors which can withstand the increase in instantaneous luminosity to L = 5 × 10^34 cm^-2 s^-1 and collect ~3000 fb^-1 of data. The innermost layer of the pixel detector will be exposed to doses of about 10^16 neq/cm^2. Hence, new pixel sensors with improved radiation hardness need to be investigated. A variety of silicon materials (Float-zone, Magnetic Czochralski and Epitaxially grown silicon), with thicknesses from 50 μm to 320 μm in p-type and n-type substrates, have been fabricated using single-sided processing. The effect of reducing the sensor active thickness to improve radiation hardness by using various techniques (deep diffusion, wafer thinning, or growing epitaxial silicon on a handle wafer) has been studied. Furthermore, the results for electrical characterization, charge collection efficiency, and position resolution of various n-on-p pixel sensors with different substrates and different pixel geometries (different bias dot gaps and pixel implant sizes) will be presented.
Mapping turbidity in the Charles River, Boston using a high-resolution satellite.
Hellweger, Ferdi L; Miller, Will; Oshodi, Kehinde Sarat
2007-09-01
The usability of high-resolution satellite imagery for estimating spatial water quality patterns in urban water bodies is evaluated using turbidity in the lower Charles River, Boston as a case study. Water turbidity was surveyed using a boat-mounted optical sensor (YSI) at 5 m spatial resolution, resulting in about 4,000 data points. The ground data were collected coincident with a satellite image acquisition (IKONOS), which consists of multispectral (R, G, B) reflectance at 1 m resolution. The original correlation between the raw ground and satellite data was poor (R2 = 0.05). Ground data were processed by removing points affected by contamination (e.g., the sensor encountering a particle floc), which were identified visually. Also, the ground data were corrected for the memory effect introduced by the sensor's protective casing using an analytical model. Satellite data were processed to remove pixels affected by permanent non-water features (e.g., shoreline). In addition, water pixels within a certain buffer distance from permanent non-water features were removed due to contamination by the adjacency effect. To determine the appropriate buffer distance, a procedure that explicitly considers the distance of pixels to the permanent non-water features was applied. Two automatic methods for removing the effect of temporary non-water features (e.g., boats) were investigated, including (1) creating a water-only mask based on an unsupervised classification and (2) removing (filling) all local maxima in reflectance. After the various processing steps, the correlation between the ground and satellite data was significantly better (R2 = 0.70). The correlation was applied to the satellite image to develop a map of turbidity in the lower Charles River, which reveals large-scale patterns in water clarity. However, the adjacency effect prevented the application of this method to near-shore areas, where high-resolution patterns were expected (e.g., outfall plumes).
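The buffer-distance masking and ground/satellite correlation steps can be sketched like this; the buffer size, the use of a distance transform, and the single-band correlation are simplifying assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np
from scipy import ndimage

def masked_r2(reflectance, turbidity, nonwater, buffer_px=3):
    """Correlate satellite reflectance with ground turbidity after
    discarding water pixels near non-water features (adjacency effect).

    All arrays share one grid; `nonwater` is a boolean mask of permanent
    non-water features and `buffer_px` is an illustrative buffer distance.
    """
    # Distance (in pixels) from each pixel to the nearest non-water feature
    dist = ndimage.distance_transform_edt(~nonwater)
    valid = dist > buffer_px                  # keep only well-offshore pixels
    x, y = reflectance[valid], turbidity[valid]
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2, valid
```

In practice the appropriate `buffer_px` would be chosen by examining how the correlation changes as a function of the distance threshold, as the paper describes.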
A 100 Mfps image sensor for biological applications
NASA Astrophysics Data System (ADS)
Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo
2018-02-01
Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications in advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-Illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like petals of a flower. One has five collection gates and one drain gate at the center, and can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also enables in-pixel signal accumulation. This sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulsed-neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that an image sensor designed based on the theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.
Is flat fielding safe for precision CCD astronomy?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumer, Michael; Davis, Christopher P.; Roodman, Aaron
2017-07-06
The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.
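The traditional flat-fielding assumption being tested above, and the linear scaling of residual errors with PRNU, can be illustrated with a small numerical sketch; the gain model, the `**1.1` mismatch used to stand in for an imperfect flat, and the array sizes are all hypothetical choices for illustration only:

```python
import numpy as np

def flat_field_correct(raw, flat):
    """Classical flat fielding: divide by the flat, renormalized so the
    mean signal level is preserved. This is only valid if pixels differ
    in sensitivity rather than in effective area -- the assumption the
    paper examines."""
    return raw * flat.mean() / flat

def residual_rms(prnu):
    """Residual photometric scatter left when the flat slightly
    mismodels the true pixel response (modelled here, arbitrarily, as
    the true gain raised to the power 1.1). Meant only to show that the
    residual scales ~linearly with PRNU."""
    rng = np.random.default_rng(0)              # fixed seed for repeatability
    true_gain = 1 + prnu * rng.standard_normal((64, 64))
    sky = 1000.0 * true_gain                    # uniform scene through pixel gains
    flat = true_gain ** 1.1                     # hypothetically imperfect flat
    corrected = flat_field_correct(sky, flat)
    return corrected.std() / corrected.mean()
```

Doubling the PRNU roughly doubles the residual scatter, mirroring the linear scaling the abstract reports.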
Fully depleted CMOS pixel sensor development and potential applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudot, J.; Kachel, M.; CNRS, UMR7178, 67037 Strasbourg
CMOS pixel sensors are often contrasted with hybrid pixel sensors because of their very different sensitive layers. In standard CMOS imaging processes, a thin (about 20 μm) low-resistivity epitaxial layer acts as the sensitive volume and charge collection is mostly driven by thermal diffusion. In contrast, the so-called hybrid pixel technology exploits a thick (typically 300 μm) high-resistivity silicon sensor that allows depletion of this volume, so charges drift toward the collecting electrodes. But this difference is fading away with the recent availability of CMOS imaging processes based on a relatively thick (about 50 μm) high-resistivity epitaxial layer which allows for full depletion. This evolution extends the range of applications for CMOS pixel sensors where their known assets—high sensitivity and granularity combined with embedded signal treatment—could potentially foster breakthroughs in detection performance for specific scientific instruments. One such domain is X-ray detection at soft energies, typically below 10 keV, where the thin sensitive layer previously severely impeded CMOS sensor usage. Another application becoming realistic for CMOS sensors is detection in environments with a high fluence of non-ionizing radiation, such as hadron colliders. However, for highly demanding applications it remains to be proven that the micro-circuits required to uniformly deplete the sensor at the pixel level do not compromise the required sensitivity and efficiency. Prototype sensors in two different technologies, with resistivity higher than 1 kΩ·cm, sensitive layers between 40 and 50 μm and pixel pitches in the range 25 to 50 μm, have been designed and fabricated. Various biasing architectures were adopted to reach full depletion with only a few volts.
Laboratory investigations with three types of sources (X-rays, β-rays and infrared light) demonstrated the validity of the approach with respect to depletion, while keeping a low noise figure. In particular, an energy resolution of about 400 eV for 5 keV X-rays was obtained for single pixels. The prototypes were then exposed to gradually increasing fluences of neutrons, from 10^13 to 5×10^14 n_eq/cm^2. Laboratory tests again allowed evaluation of the signal-over-noise persistence of the different pixel designs implemented. Currently our development mostly targets the detection of soft X-rays, with the ambition to develop a pixel sensor matching the counting rates affordable with hybrid pixel sensors, but with extended sensitivity to low energies and a finer pixel pitch of about 25 × 25 μm^2. The proposed readout architecture relies on a two-tier chip. The first tier consists of a sensor with modest dynamic range in order to ensure the low-noise performance required for sensitivity. The interconnected second-tier chip enhances the readout speed by introducing massive parallelization. The performance reachable with this strategy, combining counting and integration, is detailed. (authors)
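The quoted energy resolution of about 400 eV (FWHM) at 5 keV can be translated into an equivalent noise charge with a short worked calculation, assuming the standard silicon pair-creation energy of 3.65 eV per electron-hole pair and a noise-dominated Gaussian peak (assumptions not stated in the abstract itself):

```python
# Convert an X-ray energy resolution (FWHM, in eV) into an equivalent
# noise charge in electrons, assuming silicon and a Gaussian peak.
FWHM_EV = 400.0                             # resolution reported at 5 keV
EV_PER_PAIR = 3.65                          # silicon pair-creation energy, eV
FWHM_TO_SIGMA = 2.355                       # FWHM = 2.355 * sigma for a Gaussian

SIGMA_E = FWHM_EV / FWHM_TO_SIGMA / EV_PER_PAIR   # noise, electrons rms (~47 e-)
SIGNAL_E = 5000.0 / EV_PER_PAIR                   # electrons from a 5 keV photon
SNR = SIGNAL_E / SIGMA_E                          # peak signal-to-noise (~29)
```

This order-of-magnitude noise figure (a few tens of electrons) is consistent with the "low noise figure" claimed for the single-pixel measurements.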
Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.
Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca
2015-08-12
Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV-images (the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images taken at an altitude of 30 m, and that the RS-image data for altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights.
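The resampling idea—deriving a coarser image from a real low-altitude acquisition—can be sketched with simple block averaging; equating an integer downsampling factor with a change of flight altitude is an illustrative simplification, since the true ground sample distance depends on the camera model:

```python
import numpy as np

def resample(image, factor):
    """Create an RS-image by block-averaging a UAV-image.

    E.g. factor=2 loosely emulates doubling the flight altitude.
    The image is cropped to a multiple of `factor` before averaging.
    """
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))     # average each factor x factor block
```

Averaging (rather than decimating) mimics how a coarser sensor footprint integrates reflectance over a larger ground area.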
Development of N+ in P pixel sensors for a high-luminosity large hadron collider
NASA Astrophysics Data System (ADS)
Kamada, Shintaro; Yamamura, Kazuhisa; Unno, Yoshinobu; Ikegami, Yoichi
2014-11-01
Hamamatsu Photonics K.K. is developing an n+-in-p planar pixel sensor with high radiation tolerance for the High-Luminosity Large Hadron Collider (HL-LHC). The n+-in-p planar pixel sensor is a candidate for the HL-LHC and offers the advantage of high radiation tolerance at a reasonable price compared with n+-in-n planar sensors, three-dimensional sensors, and diamond sensors. However, the n+-in-p planar pixel sensor still presents some problems that need to be solved, such as its slim edge and the danger of sparks between the sensor and the readout integrated circuit. We are now attempting to solve these problems with wafer-level processes, which is important for mass production. To date, we have obtained a 250-μm edge with an applied bias voltage of 1000 V. To protect against high-voltage sparks from the edge, we suggest some possible designs for the n+ edge.
Preliminary investigations of active pixel sensors in Nuclear Medicine imaging
NASA Astrophysics Data System (ADS)
Ott, Robert; Evans, Noel; Evans, Phil; Osmond, J.; Clark, A.; Turchetta, R.
2009-06-01
Three CMOS active pixel sensors have been investigated for their application to Nuclear Medicine imaging. Startracker, with 525×525 25 μm square pixels, was coupled via a fibre-optic stud to a 2 mm thick segmented CsI(Tl) crystal. Imaging tests were performed using 99mTc sources, which emit 140 keV gamma rays. The system was interfaced to a PC via FPGA-based DAQ and an optical link, enabling imaging rates of 10 frames/s. System noise was measured to be >100 e- and the majority of this noise was shown to be fixed-pattern in nature. The intrinsic spatial resolution was measured to be ~80 μm and the system spatial resolution measured with a slit was ~450 μm. The second sensor, On Pixel Intelligent CMOS (OPIC), had 64×72 40 μm pixels and was used to evaluate noise characteristics and to develop a method of differentiating between fixed-pattern and statistical noise. The third sensor, Vanilla, had 520×520 25 μm pixels and a measured system noise of ~25 e-. This sensor was coupled directly to the segmented phosphor. Imaging results show that even at this lower noise level the signal from 140 keV gamma rays is small, as the light from the phosphor is spread over a large number of pixels. Suggestions for the 'ideal' sensor are made.
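A standard way to differentiate fixed-pattern from statistical (temporal) noise, as the OPIC sensor was used to do, is to compare statistics across a stack of dark frames; this sketch uses the textbook decomposition and is not necessarily the exact method developed in the paper:

```python
import numpy as np

def noise_components(frames):
    """Separate fixed-pattern noise (FPN) from temporal noise in a stack
    of dark frames with shape (n_frames, rows, cols).

    FPN: spatial std of the per-pixel temporal mean (pattern that
         repeats frame to frame).
    Temporal noise: mean of the per-pixel temporal std (varies frame
         to frame).
    """
    per_pixel_mean = frames.mean(axis=0)
    fpn = per_pixel_mean.std()
    temporal = frames.std(axis=0).mean()
    return fpn, temporal
```

Averaging many frames suppresses the temporal term by 1/sqrt(N) but leaves FPN untouched, which is why the two components must be quantified separately.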
New results on diamond pixel sensors using ATLAS frontend electronics
NASA Astrophysics Data System (ADS)
Keil, M.; Adam, W.; Berdermann, E.; Bergonzo, P.; de Boer, W.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; D'Angelo, P.; Dabrowski, W.; Delpierre, P.; Dulinski, W.; Doroshenko, J.; Doucet, M.; van Eijk, B.; Fallou, A.; Fischer, P.; Fizzotti, F.; Kania, D.; Gan, K. K.; Grigoriev, E.; Hallewell, G.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kaplon, J.; Kass, R.; Knöpfle, K. T.; Koeth, T.; Krammer, M.; Logiudice, A.; mac Lynne, L.; Manfredotti, C.; Meier, D.; Menichelli, D.; Meuser, S.; Mishina, M.; Moroni, L.; Noomen, J.; Oh, A.; Pan, L. S.; Pernicka, M.; Perera, L.; Riester, J. L.; Roe, S.; Rudge, A.; Russ, J.; Sala, S.; Sampietro, M.; Schnetzer, S.; Sciortino, S.; Stelzer, H.; Stone, R.; Suter, B.; Trischuk, W.; Tromson, D.; Vittone, E.; Weilhammer, P.; Wermes, N.; Wetstein, M.; Zeuner, W.; Zoeller, M.
2003-03-01
Diamond is a promising sensor material for future collider experiments due to its radiation hardness. Diamond pixel sensors have been bump bonded to an ATLAS pixel readout chip using PbSn solder bumps. Single chip devices have been characterised by lab measurements and in a high-energy pion beam at CERN. Results on charge collection, spatial resolution, efficiency and the charge carrier lifetime are presented.
Film cameras or digital sensors? The challenge ahead for aerial imaging
Light, D.L.
1996-01-01
Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 μm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 μm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems show that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
Small pixel cross-talk MTF and its impact on MWIR sensor performance
NASA Astrophysics Data System (ADS)
Goss, Tristan M.; Willers, Cornelius J.
2017-05-01
As pixel sizes shrink in the development of modern High Definition (HD) Mid-Wave Infrared (MWIR) detectors, inter-pixel cross-talk becomes increasingly difficult to regulate. The diffusion lengths required to achieve the quantum efficiency and sensitivity of MWIR detectors are typically longer than the pixel pitch, and the probability of inter-pixel cross-talk increases as the pixel-pitch/diffusion-length ratio decreases. Inter-pixel cross-talk is most conveniently quantified by the focal plane array sampling Modulation Transfer Function (MTF). Cross-talk MTF will reduce the ideal sinc-squared pixel MTF that is commonly used when modelling sensor performance. However, cross-talk MTF data are not always readily available from detector suppliers, and since the origins of inter-pixel cross-talk are uniquely device- and manufacturing-process-specific, no generic MTF model appears to satisfy the needs of sensor designers and analysts. In this paper, cross-talk MTF data collected from recent publications are used to investigate the development of a generic cross-talk MTF model fitted to the data. The resulting cross-talk MTF model is then included in a MWIR sensor model and the impact on sensor performance is evaluated in terms of the National Imagery Interpretability Rating Scale (NIIRS) General Image Quality Equation (GIQE) metric for a range of f-number/detector-pitch (Fλ/d) configurations and operating environments. By applying non-linear boost transfer functions in the signal processing chain, the contrast losses due to cross-talk may be compensated for. Boost transfer functions, however, also reduce the signal-to-noise ratio of the sensor. In this paper, boost function limits are investigated and included in the sensor performance assessments.
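A minimal placeholder for a cross-talk-degraded detector MTF: an ideal pixel aperture term multiplied by a hypothetical Gaussian roll-off parameterized by an effective diffusion length. The Gaussian form and the parameter values are illustrative assumptions, not the fitted model developed in the paper:

```python
import numpy as np

def pixel_mtf(f, pitch):
    """Ideal detector aperture MTF: |sinc(pitch * f)|, with f in
    cycles per unit length (np.sinc(x) = sin(pi*x)/(pi*x))."""
    return np.abs(np.sinc(pitch * f))

def crosstalk_mtf(f, diffusion_len):
    """Hypothetical generic cross-talk roll-off, modelled as a Gaussian
    in spatial frequency with an effective diffusion-length scale."""
    return np.exp(-2 * (np.pi * diffusion_len * f) ** 2)

def detector_mtf(f, pitch, diffusion_len):
    """Combined detector MTF: aperture term degraded by cross-talk."""
    return pixel_mtf(f, pitch) * crosstalk_mtf(f, diffusion_len)
```

At the Nyquist frequency 1/(2·pitch) the aperture term alone gives 2/π ≈ 0.64; any nonzero diffusion length pulls the combined MTF below that, which is the degradation the paper quantifies via NIIRS/GIQE.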
Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor
NASA Astrophysics Data System (ADS)
Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji
2006-02-01
We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system that we used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the HDTV frequency band.
Luminance compensation for AMOLED displays using integrated MIS sensors
NASA Astrophysics Data System (ADS)
Vygranenko, Yuri; Fernandes, Miguel; Louro, Paula; Vieira, Manuela
2017-05-01
Active-matrix organic light-emitting diode (AMOLED) displays are attractive for future TV applications due to their ability to faithfully reproduce real images. However, pixel luminance can be affected by instability of the driver TFTs and by aging effects in the OLEDs. This paper reports on a pixel driver utilizing a metal-insulator-semiconductor (MIS) sensor for luminance control of the OLED element. In the proposed pixel architecture for bottom-emission AMOLEDs, the embedded MIS sensor shares the same layer stack with back-channel-etched a-Si:H TFTs to maintain fabrication simplicity. The pixel design for a large-area HD display is presented. The external electronics performs image processing to modify the incoming video using correction parameters for each pixel in the backplane, as well as sensor data processing to update the correction parameters. The luminance-adjusting algorithm is based on realistic models of the pixel circuit elements to predict the relation between the programming voltage and OLED luminance. SPICE modeling of the sensing part of the backplane is performed to demonstrate its feasibility. Details of the pixel circuit functionality, including the sensing and programming operations, are also discussed.
Active pixel sensors with substantially planarized color filtering elements
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor)
1999-01-01
A semiconductor imaging system preferably having an active pixel sensor array compatible with a CMOS fabrication process. Color-filtering elements such as polymer filters and wavelength-converting phosphors can be integrated with the image sensor.
NASA Astrophysics Data System (ADS)
Li, Zhuo; Seo, Min-Woong; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2016-04-01
This paper presents the design and implementation of a time-resolved CMOS image sensor with a high-speed lateral electric field modulation (LEFM) gating structure for time-domain fluorescence lifetime measurement. Time-windowed signal charge can be transferred from a pinned photodiode (PPD) to a pinned storage diode (PSD) by turning on a pair of transfer gates situated beside the channel. Unwanted signal charge can be drained from the PPD by turning on another pair of gates. The pixel array contains 512 (V) × 310 (H) pixels with a 5.6 × 5.6 µm^2 pixel size. The imager chip was fabricated using a 0.11 µm CMOS image sensor process technology. The prototype sensor has a time response of 150 ps at 374 nm. The fill factor of the pixels is 5.6%. The usefulness of the prototype sensor is demonstrated for fluorescence lifetime imaging through simulation and measurement results.
High resistivity CMOS pixel sensors and their application to the STAR PXL detector
NASA Astrophysics Data System (ADS)
Dorokhov, A.; Bertolone, G.; Baudot, J.; Colledani, C.; Claus, G.; Degerli, Y.; de Masi, R.; Deveaux, M.; Dozière, G.; Dulinski, W.; Gélin, M.; Goffe, M.; Himmi, A.; Hu-Guo, Ch.; Jaaskelainen, K.; Koziel, M.; Morel, F.; Santos, C.; Specht, M.; Valin, I.; Voutsinas, G.; Winter, M.
2011-09-01
CMOS pixel sensors are foreseen to equip the vertex detector (called PXL) of the upgraded inner tracking system of the STAR experiment at RHIC. The sensors (called ULTIMATE) are being designed and their architecture is being optimized for the PXL specifications, extrapolating from the MIMOSA-26 sensor realized for the EUDET beam telescope.The paper gives an overview of the ULTIMATE sensor specifications and of the adaptation of its forerunner, MIMOSA-26, to the PXL specifications.One of the main changes between MIMOSA-26 and ULTIMATE is the use of a high resistivity epitaxial layer. Recent performance assessments obtained with MIMOSA-26 sensors manufactured on such an epitaxial layer are presented, as well as results of beam tests obtained with a prototype probing improved versions of the MIMOSA-26 pixel design. They show drastic improvements of the pixel signal-to-noise ratio and of the sensor radiation tolerance with respect to the performances achieved with a standard, i.e. low resistivity, layer.
NASA Astrophysics Data System (ADS)
Weatherill, Daniel P.; Stefanov, Konstantin D.; Greig, Thomas A.; Holland, Andrew D.
2014-07-01
Pixellated monolithic silicon detectors operated in a photon-counting regime are useful in spectroscopic imaging applications. Since a high-energy incident photon may produce many excess free carriers upon absorption, both energy and spatial information can be recovered by resolving each interaction event. The performance of these devices in terms of both energy and spatial resolution is largely determined by the amount of diffusion that occurs during collection of the charge cloud by the pixels. Past efforts to predict the X-ray performance of imaging sensors have used either analytical solutions to the diffusion equation or simplified Monte Carlo electron transport models. These methods are computationally attractive and highly useful, but may be complemented by more physically detailed models based on TCAD simulations of the devices. Here we present initial results from a model which employs a full transient numerical solution of the classical semiconductor equations to model charge collection in device pixels under stimulation from initially Gaussian photogenerated charge clouds, using commercial TCAD software. Realistic device geometries and doping are included. By mapping the pixel response to different initial interaction positions and charge cloud sizes, the charge-splitting behaviour of the model sensor under various illuminations and operating conditions is investigated. Experimental validation of the model is presented from an e2v CCD30-11 device under varying substrate bias, illuminated using an Fe-55 source.
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Miko, Joseph; Bradley, Damon; Heinzen, Katherine
2008-01-01
NASA's Hubble Space Telescope (HST) and upcoming cosmology science missions carry instruments with multiple focal planes populated with many large sensor detector arrays. These sensors are passively cooled to low temperatures for low-level light (L3) and near-infrared (NIR) signal detection, and the sensor readout electronics circuitry must perform at extremely low noise levels to enable newly required science measurements. Because we are at the technological edge of enhanced performance for sensors and readout electronics circuitry, as determined by the thermal noise level at a given temperature in the analog domain, we must find new ways of further compensating for the noise in the digital domain. To facilitate this new approach, state-of-the-art sensors are augmented at their array hardware boundaries by non-illuminated reference pixels, which can be used to reduce noise attributed to the sensors. A few methodologies have been proposed for processing, in the digital domain, the information carried by reference pixels, as employed by the Hubble Space Telescope and James Webb Space Telescope projects. These methods use spatial and temporal statistical parameters derived from boundary reference pixel information to enhance the active (non-reference) pixel signals. To move beyond this heritage methodology, we apply the NASA-developed technology known as the Hilbert-Huang Transform Data Processing System (HHT-DPS) to reference pixel information processing and its utilization in reconfigurable hardware on board a spaceflight instrument or in post-processing on the ground. The methodology examines signal processing for a 2-D domain, in which high-variance components of the thermal noise are carried by both active and reference pixels, similar to the processing of low-voltage differential signals and the subtraction of a single analog reference pixel from all active pixels on the sensor.
Heritage methods using the aforementioned statistical parameters in the digital domain (such as statistical averaging of the reference pixels themselves) zero out the high-variance components, and the counterpart components in the active pixels remain uncorrected. This paper describes how the new methodology was demonstrated through analysis of fast-varying noise components using the Hilbert-Huang Transform Data Processing System (HHT-DPS) tool developed at NASA and the high-level programming language MATLAB (trademark of MathWorks Inc.), as well as alternative methods for correcting the high-variance noise component, using HgCdTe sensor data. NASA Hubble Space Telescope data post-processing, as well as on-board instrument data processing from all sensor channels in future deep-space cosmology projects, would benefit from this effort.
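A heritage-style reference-pixel correction of the kind the paper builds on can be sketched as a row-wise common-mode subtraction; the frame layout (reference columns at both edges), the value of `n_ref`, and per-row averaging are illustrative assumptions, not the HHT-DPS method itself:

```python
import numpy as np

def reference_correct(frame, n_ref=4):
    """Row-wise reference-pixel correction (heritage-style sketch).

    The first and last `n_ref` columns are assumed to be non-illuminated
    reference pixels; their per-row mean tracks common-mode drifts and
    is subtracted from the active pixels of that row.
    """
    ref = np.concatenate([frame[:, :n_ref], frame[:, -n_ref:]], axis=1)
    correction = ref.mean(axis=1, keepdims=True)   # one offset per row
    corrected = frame - correction
    return corrected[:, n_ref:-n_ref]              # return active region only
```

Because the correction is a per-row average, fast-varying (high-variance) noise components average toward zero in the reference estimate and survive in the active pixels—precisely the limitation of heritage methods that motivates the HHT-based processing described above.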
MTF evaluation of white pixel sensors
NASA Astrophysics Data System (ADS)
Lindner, Albrecht; Atanassov, Kalin; Luo, Jiafu; Goma, Sergio
2015-01-01
We present a methodology to compare image sensors with traditional Bayer RGB layouts to sensors with alternative layouts containing white pixels. We focused on the sensors' resolving powers, which we measured in the form of a modulation transfer function for variations in both luma and chroma channels. We present the design of the test chart, the acquisition of images, the image analysis, and an interpretation of results. We demonstrate the approach at the example of two sensors that only differ in their color filter arrays. We confirmed that the sensor with white pixels and the corresponding demosaicing result in a higher resolving power in the luma channel, but a lower resolving power in the chroma channels when compared to the traditional Bayer sensor.
Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2011-01-01
A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed by iteratively merging the neighboring regions with the smallest Dissimilarity Criterion (DC) and recomputing class labels for the new regions. The DC is computed by comparing the region mean vectors, class labels, and the number of pixels in the two regions under consideration. The algorithm converges when all pixels have taken part in the region-merging procedure. Experimental results are presented for two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions, when compared to previously proposed classification techniques.
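The iterative merging step can be illustrated with a toy sketch. The size-weighted distance between region mean vectors used below is a simplified stand-in for the paper's Dissimilarity Criterion, which also incorporates class labels; region adjacency is ignored here for brevity.

```python
import numpy as np

def merge_regions(means, sizes, n_final):
    """Toy hierarchical step-wise optimization: repeatedly merge the
    pair of regions with the smallest dissimilarity until n_final
    regions remain.  The criterion is a Ward-like stand-in for the
    paper's DC: small regions with similar mean vectors merge first.
    """
    means = [np.asarray(m, dtype=float) for m in means]
    sizes = list(sizes)
    while len(means) > n_final:
        best, pair = None, None
        for i in range(len(means)):
            for j in range(i + 1, len(means)):
                dc = (sizes[i] * sizes[j]) / (sizes[i] + sizes[j]) \
                     * float(np.sum((means[i] - means[j]) ** 2))
                if best is None or dc < best:
                    best, pair = dc, (i, j)
        i, j = pair
        n = sizes[i] + sizes[j]
        means[i] = (sizes[i] * means[i] + sizes[j] * means[j]) / n
        sizes[i] = n
        del means[j], sizes[j]
    return means, sizes

# Two spectrally similar regions merge before the distant one is touched.
means, sizes = merge_regions([[0.0], [0.1], [5.0]], [10, 12, 8], 2)
```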
Pixel super resolution using wavelength scanning
2016-04-08
Fragmentary full-text snippet: the light-source power is adjusted to ~20 μW; the image sensor is a color CMOS chip with a 1.12 μm pixel pitch manufactured for cellphone cameras; wavelength scanning yields an effective pitch of ~1 μm from the 1.12 μm physical pitch (Figure 3a); lens-free raw holograms were captured by the 1.12 μm CMOS image sensor over a field of view of ≈20.5 mm², with angle changes in several directions for synthetic aperture.
NASA Astrophysics Data System (ADS)
Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.
2018-01-01
An evaluation of the use of median filters to reduce dark noise in smartphone high-resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. The large number of photosites provides an image sensor with very high sensitivity, but also makes it prone to noise effects such as hot pixels. As in earlier research with older smartphone models, no appreciable temperature effects were observed in the overall average pixel values for images taken at ambient temperatures between 5 °C and 25 °C. In this research, hot pixels are defined as pixels with intensities above a specific threshold, determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median filters of increasing size. An image with uniform statistics was derived as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant across multiple resolutions and did not change appreciably even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the uniformity of the temperature effects masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the image. Hot pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot pixels were also reduced by decreasing image resolution. This research provides a methodology to characterise the dark-noise behaviour of high-resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
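A minimal sketch of the hot-pixel cleanup described above, using the paper's reported 9 DN threshold and 7 × 7 window; the brute-force median filter and synthetic dark frame are illustrative, not the authors' implementation.

```python
import numpy as np

def local_median(img, size=7):
    """Brute-force median filter; edges use a reflected border."""
    r = size // 2
    pad = np.pad(img, r, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(pad[y:y + size, x:x + size])
    return out

def remove_hot_pixels(dark, threshold=9.0, size=7):
    """Flag pixels above `threshold` DN (the paper's empirically
    determined cut) as hot and replace them with the local median;
    7x7 was the optimum filter size reported."""
    dark = np.asarray(dark, dtype=float)
    hot = dark > threshold
    clean = np.where(hot, local_median(dark, size), dark)
    return clean, hot

frame = np.full((32, 32), 2.0)   # synthetic dark frame at 2 DN
frame[5, 7] = 250.0              # one simulated hot pixel
clean, hot = remove_hot_pixels(frame)
```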
CMOS Image Sensors: Electronic Camera On A Chip
NASA Technical Reports Server (NTRS)
Fossum, E. R.
1995-01-01
Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On-chip analog-to-digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low-cost applications.
NASA Technical Reports Server (NTRS)
Bateman, M. G.; Mach, D. M.; McCaul, M. G.; Bailey, J. C.; Christian, H. J.
2008-01-01
The Lightning Imaging Sensor (LIS) aboard the TRMM satellite has been collecting optical lightning data since November 1997. A Lightning Mapping Array (LMA) that senses VHF impulses from lightning was installed in North Alabama in the fall of 2001. A dataset has been compiled comparing data from both instruments for all times when the LIS was passing over the domain of our LMA. We have algorithms for both instruments that group pixels or point sources into lightning flashes. This study presents the comparison statistics of the flash data output (flash duration, size, and amplitude) from both algorithms. We present the results of this comparison study and show "point-level" data to explain the differences. As we move closer to realizing the Geostationary Lightning Mapper (GLM) on GOES-R, better understanding and ground truth of each of these instruments and their respective flash algorithms are needed.
CMOS Active-Pixel Image Sensor With Intensity-Driven Readout
NASA Technical Reports Server (NTRS)
Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina
1996-01-01
Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.
Study of a GaAs:Cr-based Timepix detector using synchrotron facility
NASA Astrophysics Data System (ADS)
Smolyanskiy, P.; Kozhevnikov, D.; Bakina, O.; Chelkov, G.; Dedovich, D.; Kuper, K.; Leyva Fabelo, A.; Zhemchugov, A.
2017-11-01
High-resistivity gallium arsenide compensated with chromium, fabricated by Tomsk State University, has demonstrated good suitability as a sensor material for hybrid pixel detectors used in X-ray imaging systems with photon energies up to 60 keV. The material is available in thicknesses up to 1 mm, and its high atomic number provides high absorption efficiency in this energy region. However, the performance of thick GaAs:Cr-based detectors in spectroscopic applications with readout electronics having relatively small pixels is limited by the charge-sharing effect. In this paper, we present an experimental investigation of the charge-sharing contribution in a GaAs:Cr-based Timepix detector. By scanning the detector with a pencil photon beam generated at a synchrotron facility, a geometrical map of pixel sensitivity is obtained, as well as the energy resolution of a single pixel. The experimental results are supported by numerical simulations. The observed limitations of the GaAs:Cr-based Timepix detector for high-flux X-ray imaging are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, Julian; Tate, Mark W.; Shanks, Katherine S.
Pixel Array Detectors (PADs) consist of an X-ray sensor layer bonded pixel-by-pixel to an underlying readout chip. This approach allows both the sensor and the custom pixel electronics to be tailored independently to best match the X-ray imaging requirements. Here we describe the hybridization of CdTe sensors to two different charge-integrating readout chips, the Keck PAD and the Mixed-Mode PAD (MM-PAD), both developed previously in our laboratory. The charge-integrating architecture of each of these PADs extends the instantaneous counting rate by many orders of magnitude beyond that obtainable with photon-counting architectures. The Keck PAD chip consists of rapid, 8-frame, in-pixel storage elements with framing periods <150 ns. The second detector, the MM-PAD, has an extended dynamic range obtained by utilizing an in-pixel overflow counter coupled with charge-removal circuitry activated at each overflow. This allows the recording of signals from the single-photon level to tens of millions of X-rays/pixel/frame while framing at 1 kHz. Both detector chips consist of a 128×128 pixel array with (150 µm)² pixels.
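The overflow-counter scheme reduces to simple arithmetic at readout time, sketched below; the charge removed per overflow (`q_removed`) is an illustrative value, not the MM-PAD's actual calibration.

```python
import numpy as np

def overflow_signal(n_overflows, residual, q_removed=1.0e5):
    """Reconstruct the integrated signal of a charge-removal pixel:

        total = (charge removed per overflow) x (overflow count)
                + residual charge left on the in-pixel integrator

    q_removed is a hypothetical calibration constant in electrons.
    """
    return q_removed * np.asarray(n_overflows, dtype=float) \
           + np.asarray(residual, dtype=float)

# A pixel that overflowed 37 times with 12,345 e- left on the integrator:
total = overflow_signal(37, 12345.0)
```

The design choice this illustrates: dynamic range grows with the counter depth while the analog front end only ever has to hold one overflow's worth of charge.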
NASA Astrophysics Data System (ADS)
Goss, Tristan M.
2016-05-01
With 640×512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now, with 1280×1024 pixel format IR detector arrays becoming readily available, designers of thermal imager systems face new challenges as pixel sizes shrink and the demand for High Definition (HD) thermal imaging sensors increases. In many instances, upgrading an existing under-sampled SD thermal imaging sensor into a more optimally sampled or oversampled HD sensor is a more cost-effective and faster-to-market option than designing and developing a completely new sensor. This paper presents the analysis and rationale behind the selection of the best-suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher-performing HD sensor. Several commercially available and soon-to-be commercially available HD small-pixel IR detector options are included in the analysis and considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise, and resolution is analyzed, and the improved range performance is predicted. Furthermore, with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors can be operated at higher temperatures than their SD predecessors. Therefore, as an additional constraint and design goal, the feasibility of achieving upgraded performance without any increase in the size, weight, and power consumption of the thermal imager is discussed.
ALPIDE, the Monolithic Active Pixel Sensor for the ALICE ITS upgrade
NASA Astrophysics Data System (ADS)
Mager, M.; ALICE Collaboration
2016-07-01
A new 10 m² inner tracking system based on seven concentric layers of Monolithic Active Pixel Sensors will be installed in the ALICE experiment during the second long shutdown of the LHC in 2019-2020. The monolithic pixel sensors will be fabricated in the 180 nm CMOS Imaging Sensor process of TowerJazz. The ALPIDE design takes full advantage of a particular process feature, the deep p-well, which allows full CMOS circuitry within the pixel matrix while retaining full charge-collection efficiency. Together with the small feature size and the availability of six metal layers, this allowed a continuously active low-power front-end to be placed in each pixel and an in-matrix sparsification circuit to be used that sends only the addresses of hit pixels to the periphery. This approach led to a power consumption of less than 40 mW/cm², a spatial resolution of around 5 μm, and a peaking time of around 2 μs, while being radiation hard to some 10¹³ 1 MeV neq/cm², fulfilling or exceeding the ALICE requirements. Over the last years of R&D, several prototype circuits have been used to verify radiation hardness and to optimize the pixel geometry and in-pixel front-end circuitry. The positive results led to the submission of full-scale (3 cm × 1.5 cm) sensor prototypes in 2014. They are being characterized in a comprehensive campaign that also involves several irradiation and beam tests. A summary of the results obtained and prospects towards the final sensor to instrument the ALICE Inner Tracking System are given.
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular in surveillance and robotics applications. The advantages of multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational optimization algorithm to map the optical flow fields computed from the different-wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different-wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variation, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against ground truth.
Spectral characterisation and noise performance of Vanilla—an active pixel sensor
NASA Astrophysics Data System (ADS)
Blue, Andrew; Bates, R.; Bohndiek, S. E.; Clark, A.; Arvanitis, Costas D.; Greenshaw, T.; Laing, A.; Maneuski, D.; Turchetta, R.; O'Shea, V.
2008-06-01
This work reports on the characterisation of a new active pixel sensor, Vanilla. The Vanilla comprises 512×512 (25 μm)² pixels. The sensor has a 12-bit digital output for full-frame mode; it can also be read out in analogue mode, in which a fully programmable region-of-interest (ROI) mode is available. In full frame, the sensor can operate at a readout rate of more than 100 frames per second (fps), while in ROI mode the speed depends on the size, shape, and number of ROIs. For example, an ROI of 6×6 pixels can be read at 20,000 fps in analogue mode. Photon transfer curve (PTC) measurements allowed calculation of the read noise, shot noise, full-well capacity, and camera gain constant of the sensor. Spectral response measurements detailed the quantum efficiency (QE) of the detector through the UV and visible regions. Analysis of the ROI readout mode was also performed. These measurements suggest that the Vanilla APS (active pixel sensor) will be suitable for a wide range of applications, including particle physics and medical imaging.
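The PTC analysis mentioned above amounts to fitting the mean-variance relation of flat-field data. A minimal sketch under the standard shot-noise-limited model, with synthetic data in place of real frames (the fitting range and units the authors used are not given here):

```python
import numpy as np

def ptc_fit(means, variances):
    """Fit the shot-noise-limited part of a photon transfer curve.

    Model (in DN): variance = mean / K + read_noise_DN**2,
    so the slope of a linear fit gives the camera gain K (e-/DN)
    and the intercept the squared read noise.
    """
    slope, intercept = np.polyfit(means, variances, 1)
    gain = 1.0 / slope                 # e- per DN
    read_noise = np.sqrt(max(intercept, 0.0))
    return gain, read_noise

# Synthetic sensor: K = 2 e-/DN, read noise = 3 DN.
mu = np.linspace(100.0, 2000.0, 20)
var = mu / 2.0 + 9.0
gain, rn = ptc_fit(mu, var)
```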
Cavalli, Rosa Maria; Fusilli, Lorenzo; Pascucci, Simone; Pignatti, Stefano; Santini, Federico
2008-01-01
This study aims to compare the capability of different sensors to detect land cover materials within a historical urban center. The main objective is to evaluate the added value of hyperspectral sensors in mapping a complex urban context. In this study we used: (a) ALI and Hyperion satellite data, (b) LANDSAT ETM+ satellite data, (c) MIVIS airborne data, and (d) high spatial resolution IKONOS imagery as reference. The Venice city center shows complex urban land cover and was therefore chosen for testing the spectral and spatial characteristics of different sensors in mapping the urban tissue. For this purpose, an object-oriented approach and several common classification methods were used. Moreover, spectra of the main anthropogenic surfaces (i.e. roofing and paving materials) were collected during field campaigns conducted in the study area. These were exploited for applying band-depth and sub-pixel analyses to subsets of the Hyperion and MIVIS hyperspectral imagery. The results show that satellite data with a 30 m spatial resolution (ALI, LANDSAT ETM+ and Hyperion) are able to identify only the main urban land cover materials. PMID:27879879
NASA Astrophysics Data System (ADS)
Darvishi, Mehdi; Schlögel, Romy; Cuozzo, Giovanni; Callegari, Mattia; Thiebes, Benni; Bruzzone, Lorenzo; Mulas, Marco; Corsini, Alessandro; Mair, Volkmar
2016-04-01
Despite the advantages of Differential Synthetic Aperture Radar Interferometry (DInSAR) methods for quantifying landslide deformation over large areas, some limitations remain. These include geometric distortions, atmospheric artefacts, geometric and temporal decorrelation, data and scale constraints, and the restriction that only one-dimensional line-of-sight (LOS) deformations can be measured. At the local scale, the major limitations are dense vegetation and large displacement rates, which can lead to decorrelation between SAR acquisitions even for high-resolution images and short temporal baselines. Sub-pixel offset tracking has been proposed to overcome some of these limitations. Two of its most important advantages are the mapping of 2-D displacements (azimuth and range directions) and the fact that no complex phase-unwrapping algorithms are needed, which could give wrong results or fail in cases of decorrelation or fast ground deformation. As sub-pixel offset tracking is highly sensitive to the spatial resolution of the data, the latest generations of SAR sensors, such as TerraSAR-X and COSMO-SkyMed, providing high-resolution data (up to 1 m), have great potential to establish the method in the field of ground-deformation monitoring. In this study, sub-pixel offset tracking was applied to COSMO-SkyMed X-band imagery in order to quantify ground displacements and to evaluate the feasibility of offset tracking for landslide movement mapping and monitoring. The study area is the active Corvara landslide in the Italian Alps, described as a slow-moving, deep-seated landslide with annual displacement rates of up to 20 m. Corner reflectors specifically designed for X-band were installed on the landslide and used as reference points for sub-pixel offset tracking. Satellite images covering the period from 2013 to 2015 were analyzed with an amplitude tracking tool to calculate the offsets and extract 2-D displacements.
Sub-pixel offset tracking outputs were integrated with DInSAR results and correlated to differential GPS measurements recorded at the same time as the SAR data acquisitions.
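A generic amplitude-tracking sketch of the sub-pixel offset estimation idea: FFT cross-correlation of two image patches, followed by a three-point parabolic refinement of the correlation peak. This is an illustrative stand-in, not the study's actual tool or parameters.

```python
import numpy as np

def track_offset(ref, cur):
    """Estimate the (row, col) shift of `cur` relative to `ref` via
    circular FFT cross-correlation with parabolic sub-pixel
    refinement of the peak along each axis."""
    xc = np.fft.ifft2(np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))).real
    p = np.unravel_index(np.argmax(xc), xc.shape)
    offset = []
    for axis, n in enumerate(xc.shape):
        idx = list(p)
        y0 = xc[tuple(idx)]
        idx[axis] = (p[axis] - 1) % n
        ym = xc[tuple(idx)]
        idx[axis] = (p[axis] + 1) % n
        yp = xc[tuple(idx)]
        # Fit a parabola through the peak and its two neighbours.
        denom = ym - 2.0 * y0 + yp
        frac = 0.0 if denom == 0 else 0.5 * (ym - yp) / denom
        o = p[axis] + frac
        if o > n / 2:            # unwrap circular shifts to signed offsets
            o -= n
        offset.append(o)
    return tuple(offset)

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
cur = np.roll(ref, (3, -5), axis=(0, 1))   # known shift of the 'scene'
dy, dx = track_offset(ref, cur)
```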
Investigation of HV/HR-CMOS technology for the ATLAS Phase-II Strip Tracker Upgrade
NASA Astrophysics Data System (ADS)
Fadeyev, V.; Galloway, Z.; Grabas, H.; Grillo, A. A.; Liang, Z.; Martinez-Mckinney, F.; Seiden, A.; Volk, J.; Affolder, A.; Buckland, M.; Meng, L.; Arndt, K.; Bortoletto, D.; Huffman, T.; John, J.; McMahon, S.; Nickerson, R.; Phillips, P.; Plackett, R.; Shipsey, I.; Vigani, L.; Bates, R.; Blue, A.; Buttar, C.; Kanisauskas, K.; Maneuski, D.; Benoit, M.; Di Bello, F.; Caragiulo, P.; Dragone, A.; Grenier, P.; Kenney, C.; Rubbo, F.; Segal, J.; Su, D.; Tamma, C.; Das, D.; Dopke, J.; Turchetta, R.; Wilson, F.; Worm, S.; Ehrler, F.; Peric, I.; Gregor, I. M.; Stanitzki, M.; Hoeferkamp, M.; Seidel, S.; Hommels, L. B. A.; Kramberger, G.; Mandić, I.; Mikuž, M.; Muenstermann, D.; Wang, R.; Zhang, J.; Warren, M.; Song, W.; Xiu, Q.; Zhu, H.
2016-09-01
ATLAS has formed a strip CMOS project to study the use of CMOS MAPS devices as silicon strip sensors for the Phase-II Strip Tracker Upgrade. This choice of sensor promises several advantages over the conventional baseline design, such as better resolution, less material in the tracking volume, and faster construction. At the same time, many design features of the sensors are driven by the requirement of minimizing the impact on the rest of the detector. Hence the target devices feature long pixels that are grouped to form a virtual strip with a binary-encoded z position. The key performance aspects are radiation-hardness compatibility with the HL-LHC environment, as well as extraction of the full hit position with a full-reticle readout architecture. To date, several test chips have been submitted using two different CMOS technologies. The AMS 350 nm high-voltage CMOS (HV-CMOS) process features sensor bias of up to 120 V. The TowerJazz 180 nm high-resistivity CMOS (HR-CMOS) process uses a high-resistivity epitaxial layer to provide the depletion region on top of the substrate. We have evaluated passive pixel performance and charge-collection projections. The results strongly support the tolerance of these devices to the radiation dose of the HL-LHC in the strip tracker region. We also describe design features for the next chip submission that are motivated by our technology evaluation.
Spectral Analysis of the Primary Flight Focal Plane Arrays for the Thermal Infrared Sensor
NASA Technical Reports Server (NTRS)
Montanaro, Matthew; Reuter, Dennis C.; Markham, Brian L.; Thome, Kurtis J.; Lunsford, Allen W.; Jhabvala, Murzy D.; Rohrbach, Scott O.; Gerace, Aaron D.
2011-01-01
The Thermal Infrared Sensor (TIRS) is: (1) a new longwave infrared (10-12 micron) sensor for the Landsat Data Continuity Mission, (2) a 185 km ground swath with 100 meter pixel size on the ground, (3) a pushbroom sensor configuration. Calibration issues are: (1) a single detector requires only one calibration, (2) multiple detectors require a unique calibration for each detector, which leads to pixel-to-pixel artifacts. Objectives are: (1) predict the extent of residual striping when viewing a uniform blackbody target through various atmospheres, (2) determine how different spectral shapes affect the derived surface temperature in a realistic synthetic scene.
NASA Astrophysics Data System (ADS)
Noh, M. J.; Howat, I. M.
2017-12-01
Glaciers and ice sheets are changing rapidly. Digital Elevation Models (DEMs) and Velocity Maps (VMs) obtained from repeat satellite imagery provide critical measurements of changes in glacier dynamics and mass balance over large, remote areas. DEMs created from stereopairs obtained during the same satellite pass through sensor re-pointing (i.e. "in-track stereo") have been most commonly used. In-track stereo has the advantage of minimizing the time separation and, thus, surface motion between image acquisitions, so that the ice surface can be assumed motionless when collocating pixels between image pairs. Since the DEM extraction process assumes that all motion between collocated pixels is due to parallax or sensor model error, significant ice motion results in DEM quality loss or failure. In-track stereo, however, puts a greater demand on satellite tasking resources and is therefore much less abundant than single-scan imagery. Thus, if ice surface motion can be mitigated, the ability to extract surface elevation measurements from pairs of repeat single-scan "cross-track" imagery would greatly increase the extent and temporal resolution of ice surface change measurements. Additionally, the ice motion measured by the DEM extraction process would itself provide a useful velocity measurement. We developed a novel algorithm for generating high-quality DEMs and VMs from cross-track image pairs without any prior information, using the Surface Extraction from TIN-based Searchspace Minimization (SETSM) algorithm and its sensor-model bias-correction capabilities. Using a test suite of repeat single-scan imagery from the WorldView and QuickBird sensors collected over fast-moving outlet glaciers, we developed a method by which RPC biases between images are first calculated and removed over ice-free surfaces. Subpixel displacements over the ice are then constrained and used to correct the parallax estimate.
Initial tests yield DEM results with the same quality as in-track stereo for cases where snowfall has not occurred between the two images and when the images have similar ground sample distances. The resulting velocity map also closely matches independent measurements.
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
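A minimal sketch of the idea that spatial-gradient-based error estimates suppress jitter-induced false alarms in a difference frame: pixels on strong gradients get large error bars, so only changes that exceed their local error estimate stand out. The error model and the `jitter`/`noise` values are illustrative assumptions, not the patent's calibrated parameters.

```python
import numpy as np

def change_significance(ref, cur, jitter=0.5, noise=2.0):
    """Normalize a raw difference frame by per-pixel error estimates.

    sigma combines a flat sensor-noise floor (DN) with a spatial term
    proportional to the local intensity gradient: a misregistration of
    `jitter` pixels moves a gradient pixel by roughly jitter*|grad| DN.
    """
    diff = cur.astype(float) - ref.astype(float)
    gy, gx = np.gradient(ref.astype(float))
    grad = np.hypot(gx, gy)
    sigma = np.sqrt(noise ** 2 + (jitter * grad) ** 2)
    return diff / sigma

ref = np.zeros((16, 16)); ref[:, 8:] = 100.0   # scene with a sharp edge
cur = np.zeros((16, 16)); cur[:, 9:] = 100.0   # same edge, shifted by jitter
cur[4, 2] = 50.0                               # a genuine new object
score = change_significance(ref, cur)
```

The jittered edge produces a huge raw difference but only a modest significance score, while the genuine change on flat background dominates the score map.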
Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer
1997-01-01
A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and a multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons, each locally connected with its neighboring neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing-performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The feasibility of VLSI (Very Large Scale Integration) implementation was illustrated by a prototype smart-pixel 5×5 neuroprocessor array chip of active dimensions 1380 μm × 746 μm in a 2-μm CMOS technology.
A CMOS image sensor with programmable pixel-level analog processing.
Massari, Nicola; Gottardi, Massimo; Gonzo, Lorenzo; Stoppa, David; Simoni, Andrea
2005-11-01
A prototype of a 34 × 34 pixel image sensor implementing real-time analog image processing is presented. Edge detection, motion detection, image amplification, and dynamic-range boosting are executed at the pixel level by means of a highly interconnected pixel architecture based on the absolute value of the difference among neighboring pixels. The analog operations are performed over a kernel of 3 × 3 pixels. The square pixel, consisting of 30 transistors, has a pitch of 35 μm with a fill factor of 20%. The chip was fabricated in a 0.35 μm CMOS technology, and its power consumption is 6 mW with a 3.3 V power supply. The device was fully characterized and achieves a dynamic range of 50 dB with a light power density of 150 nW/mm² and a frame rate of 30 frames/s. The measured fixed-pattern noise corresponds to 1.1% of the saturation level. The sensor's dynamic range can be extended up to 96 dB using the double-sampling technique.
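A software analogue of the chip's in-pixel operation, as a hedged sketch: each pixel compares itself with its 3 × 3 neighbourhood via absolute differences and flags an edge when the maximum difference exceeds a threshold. The threshold and test image are illustrative; the chip performs this in the analog domain.

```python
import numpy as np

def edge_map(img, thresh=10.0):
    """Flag pixels whose maximum absolute difference with any of the
    eight 3x3 neighbours exceeds `thresh` (borders are replicated)."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, 1, mode='edge')
    diff = np.zeros_like(img)
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            diff = np.maximum(diff, np.abs(img - shifted))
    return diff > thresh

img = np.zeros((8, 8)); img[:, 4:] = 100.0   # a vertical step edge
edges = edge_map(img)
```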
Photon small-field measurements with a CMOS active pixel sensor.
Spang, F Jiménez; Rosenberg, I; Hedin, E; Royle, G
2015-06-07
In this work, the dosimetric performance of CMOS active pixel sensors for the measurement of small photon beams is presented. The detector used consisted of an array of 520 × 520 pixels on a 25 μm pitch. Dosimetric parameters measured with this sensor were compared with data collected with an ionization chamber, a film detector, and GEANT4 Monte Carlo simulations. The sensor's performance for beam-profile measurements was evaluated for field sizes of 0.5 × 0.5 cm². The high spatial resolution achieved with this sensor allowed accurate measurement of profiles, beam penumbrae, and field size under lateral electronic disequilibrium. Field size and penumbrae agreed with film measurements within 5.4% and 2.2%, respectively. Agreement with ionization chambers better than 1.0% was obtained when measuring tissue-phantom ratios. Output factor measurements were in good agreement with the ionization chamber and Monte Carlo simulation. The data obtained from this imaging sensor can be easily analyzed to extract dosimetric information. These results are promising for the development and implementation of CMOS active pixel sensors for dosimetry applications.
How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 megapixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a "pixel war" in which the driving feature of the cameras was the pixel count: even moderate-cost (~120) DSCs would have 14 megapixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper explores why larger pixels and sensors are key to the future of DSCs.
The effect of split pixel HDR image sensor technology on MTF measurements
NASA Astrophysics Data System (ADS)
Deegan, Brian M.
2014-03-01
Split-pixel HDR sensor technology is particularly advantageous in automotive applications because the images are captured simultaneously rather than sequentially, thereby reducing motion blur. However, split-pixel technology introduces artifacts into MTF measurement. To achieve an HDR image, raw images are captured from both large and small sub-pixels and combined to make the HDR output. In some cases, a large sub-pixel is used for long-exposure captures and a small sub-pixel for short exposures, to extend the dynamic range. The relative size of the photosensitive area of the pixel (fill factor) plays a very significant role in the measured MTF. Given an identical scene, the MTF will differ significantly depending on whether the large or small sub-pixels are used; a smaller fill factor (e.g. in the short-exposure sub-pixel) will result in higher MTF scores but significantly greater aliasing. Simulations of split-pixel sensors revealed that, when raw images from both sub-pixels are combined, there is a significant difference in rising-edge (black-to-white transition) and falling-edge (white-to-black) reproduction. Experimental results showed a difference of ~50% in measured MTF50 between the falling and rising edges of a slanted-edge test chart.
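To make the combination step concrete, here is a hedged sketch of one common split-pixel HDR merge: use the long (large sub-pixel) exposure where it is unsaturated and fall back to the gain-corrected short (small sub-pixel) exposure elsewhere. The exposure ratio and saturation level are hypothetical values, and real pipelines blend rather than hard-switch, which is exactly where the rising/falling-edge asymmetry discussed above originates.

```python
import numpy as np

def merge_split_pixel(long_px, short_px, ratio=16.0, sat=4000.0):
    """Illustrative HDR merge for a split-pixel sensor.

    ratio : assumed exposure ratio between long and short sub-pixels
    sat   : assumed saturation level (DN) of the long sub-pixel
    """
    long_px = np.asarray(long_px, dtype=float)
    short_px = np.asarray(short_px, dtype=float)
    # Hard switch at saturation; each output pixel therefore inherits
    # the MTF (and fill factor) of whichever sub-pixel supplied it.
    return np.where(long_px < sat, long_px, short_px * ratio)

scene = np.array([10.0, 300.0, 8000.0])   # true radiance (a.u.)
long_px = np.minimum(scene, 4000.0)       # long exposure clips highlights
short_px = scene / 16.0                   # short exposure stays linear
hdr = merge_split_pixel(long_px, short_px)
```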
NASA Astrophysics Data System (ADS)
Ban, Yifang; Gong, Peng; Gamba, Paolo; Taubenbock, Hannes; Du, Peijun
2016-08-01
The overall objective of this research is to investigate multi-temporal, multi-scale, multi-sensor satellite data for the analysis of urbanization and environmental/climate impact in China to support sustainable planning. Multi-temporal, multi-scale SAR and optical data have been evaluated for urban information extraction using innovative methods and algorithms, including the KTH-Pavia Urban Extractor, Pavia UEXT, and an "exclusion-inclusion" framework for urban extent extraction, and KTH-SEG, a novel object-based classification method for detailed urban land cover mapping. Various pixel-based and object-based change detection algorithms were also developed to extract urban changes. Several Chinese cities, including Beijing, Shanghai and Guangzhou, were selected as study areas. Spatio-temporal urbanization patterns and environmental impact at the regional, metropolitan, and city-core scales were evaluated through ecosystem services, landscape metrics, spatial indices, and/or their combinations. The relationship between land surface temperature and land-cover classes was also analyzed. The urban extraction results showed that urban areas and small towns could be well extracted using multitemporal SAR data with the KTH-Pavia Urban Extractor and UEXT. The fusion of SAR data at multiple scales from multiple sensors was shown to improve urban extraction. For urban land cover mapping, the results show that the fusion of multitemporal SAR and optical data could produce detailed land cover maps with better accuracy than SAR or optical data alone. The pixel-based and object-based change detection algorithms developed within the project were effective at extracting urban changes.
Comparing the urban land cover results from mulitemporal multisensor data, the environmental impact analysis indicates major losses for food supply, noise reduction, runoff mitigation, waste treatment and global climate regulation services through landscape structural changes in terms of decreases in service area, edge contamination and fragmentation. In terms ofclimate impact, the results indicate that land surface temperature can be related to land use/land cover classes.
User-interactive electronic skin for instantaneous pressure visualization
NASA Astrophysics Data System (ADS)
Wang, Chuan; Hwang, David; Yu, Zhibin; Takei, Kuniharu; Park, Junwoo; Chen, Teresa; Ma, Biwu; Javey, Ali
2013-10-01
Electronic skin (e-skin) presents a network of mechanically flexible sensors that can conformally wrap irregular surfaces and spatially map and quantify various stimuli. Previous works on e-skin have focused on the optimization of pressure sensors interfaced with an electronic readout, whereas user interfaces based on a human-readable output were not explored. Here, we report the first user-interactive e-skin that not only spatially maps the applied pressure but also provides an instantaneous visual response through a built-in active-matrix organic light-emitting diode display with red, green and blue pixels. In this system, organic light-emitting diodes (OLEDs) are turned on locally where the surface is touched, and the intensity of the emitted light quantifies the magnitude of the applied pressure. This work represents a system-on-plastic demonstration where three distinct electronic components—thin-film transistor, pressure sensor and OLED arrays—are monolithically integrated over large areas on a single plastic substrate. The reported e-skin may find a wide range of applications in interactive input/control devices, smart wallpapers, robotics and medical/health monitoring devices.
Wafer-scale pixelated detector system
Fahim, Farah; Deptuch, Grzegorz; Zimmerman, Tom
2017-10-17
A large-area, gapless detection system comprises at least one sensor; an interposer operably connected to the at least one sensor; and at least one application-specific integrated circuit operably connected to the sensor via the interposer, wherein the detection system provides high dynamic range while maintaining small pixel area and low power dissipation. The invention thereby provides methods and systems for wafer-scale, gapless and seamless detector systems with small pixels that have both high dynamic range and low power dissipation.
Photon Counting Imaging with an Electron-Bombarded Pixel Image Sensor
Hirvonen, Liisa M.; Suhling, Klaus
2016-01-01
Electron-bombarded pixel image sensors, where a single photoelectron is accelerated directly into a CCD or CMOS sensor, allow wide-field imaging at extremely low light levels as they are sensitive enough to detect single photons. This technology allows the detection of up to hundreds or thousands of photon events per frame, depending on the sensor size, and photon event centroiding can be employed to recover resolution lost in the detection process. Unlike photon events from electron-multiplying sensors, the photon events from electron-bombarded sensors have a narrow, acceleration-voltage-dependent pulse height distribution. Thus a gain voltage sweep during exposure in an electron-bombarded sensor could allow photon arrival time determination from the pulse height with sub-frame exposure time resolution. We give a brief overview of our work with electron-bombarded pixel image sensor technology and recent developments in this field for single photon counting imaging, and examples of some applications. PMID:27136556
Building Change Detection in Very High Resolution Satellite Stereo Image Time Series
NASA Astrophysics Data System (ADS)
Tian, J.; Qin, R.; Cerra, D.; Reinartz, P.
2016-06-01
There is an increasing demand for robust methods for urban sprawl monitoring. The steadily increasing number of high-resolution and multi-view sensors allows the production of datasets with high temporal and spatial resolution; however, less effort has been dedicated to employing very high resolution (VHR) satellite image time series (SITS) to monitor changes in buildings with higher accuracy. In addition, these VHR data are often acquired from different sensors. The objective of this research is to propose a robust time-series data analysis method for VHR stereo imagery. First, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from them are combined, and building probability maps (BPM) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed based on the derivative features of the BPM sets. The change consistency between the object level and the pixel level is checked to remove outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results prove the efficiency of the proposed method.
Integration of SAR and DEM data: Geometrical considerations
NASA Technical Reports Server (NTRS)
Kropatsch, Walter G.
1991-01-01
General principles for integrating data from different sources are derived from the experience of registering SAR images with digital elevation model (DEM) data. The integration consists of establishing geometrical relations between the data sets that allow us to accumulate information from both data sets for any given object point (e.g., elevation, slope, backscatter of ground cover, etc.). Since the geometries of the two data sets are completely different, they cannot be compared on a pixel-by-pixel basis. The presented approach detects instances of higher-level features in both data sets independently and performs the matching at the high level. Besides the efficiency of this general strategy, it further allows the integration of additional knowledge sources: world knowledge and sensor characteristics are also useful sources of information. The SAR features layover and shadow can be detected easily in SAR images. An analytical method to find such regions in a DEM additionally needs the parameters of the flight path of the SAR sensor and the range projection model. The generation of the SAR layover and shadow maps is summarized and new extensions to this method are proposed.
Optimal design and critical analysis of a high resolution video plenoptic demonstrator
NASA Astrophysics Data System (ADS)
Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne
2011-03-01
A plenoptic camera is a natural multi-view acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single-lens, single-sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a first step, and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered 5 video views of 820x410. The main limitation in our prototype is view crosstalk due to optical aberrations, which reduces the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, and analysis programs that investigate the view mapping and the amount of parallax crosstalk on the sensor on a per-pixel basis. The results of these developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.
Optimal design and critical analysis of a high-resolution video plenoptic demonstrator
NASA Astrophysics Data System (ADS)
Drazic, Valter; Sacré, Jean-Jacques; Schubert, Arno; Bertrand, Jérôme; Blondé, Etienne
2012-01-01
A plenoptic camera is a natural multiview acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture have two downsides: limited resolution and limited depth sensitivity. As a first step and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820 × 410. The main limitation in our prototype is view crosstalk due to optical aberrations that reduce the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern and analysis of programs that investigated the view mapping and amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with a submicrometer precision and to mark the pixels of the sensor where the views do not register properly.
Context dependent prediction and category encoding for DPCM image compression
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.
1989-01-01
Efficient compression of image data requires an understanding of the noise characteristics of sensors as well as the redundancy expected in imagery. Herein, the techniques of Differential Pulse Code Modulation (DPCM) are reviewed and modified for information-preserving data compression. The modifications include: mapping from intensity to an equal-variance space; context-dependent one- and two-dimensional predictors; a rationale for nonlinear DPCM encoding based upon an image quality model; context-dependent variable-length encoding of 2x2 data blocks; and feedback control for constant-output-rate systems. Examples are presented at compression rates between 1.3 and 2.8 bits per pixel. The need for larger block sizes, 2D context-dependent predictors, and the prospect of sub-bit-per-pixel compression that maintains spatial resolution (information preserving) are discussed.
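The core DPCM loop mentioned above, predict each sample, quantize the prediction residual, and feed the reconstruction back into the predictor, can be sketched as follows. This is a generic 1-D illustration with an invented quantizer step size, not the paper's context-dependent predictors or variable-length coder.

```python
import numpy as np

def dpcm_encode(row, step=4):
    """1-D DPCM: predict from the previous *reconstructed* sample
    (so encoder and decoder stay in sync), quantize the residual."""
    recon_prev, codes = 0, []
    for x in row:
        err = int(x) - recon_prev
        q = int(np.round(err / step))      # uniform residual quantizer
        codes.append(q)
        recon_prev = recon_prev + q * step # decoder-side reconstruction
    return codes

def dpcm_decode(codes, step=4):
    recon_prev, out = 0, []
    for q in codes:
        recon_prev = recon_prev + q * step
        out.append(recon_prev)
    return out

row = [10, 12, 15, 20, 26, 30, 31, 29]
rec = dpcm_decode(dpcm_encode(row))
# reconstruction error is bounded by half the quantizer step
assert max(abs(a - b) for a, b in zip(row, rec)) <= 2
```

A context-dependent variant, as in the paper, would switch the predictor (e.g. left neighbor vs. a 2-D average) based on local image structure before computing the residual.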
Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori
2018-01-12
To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low-noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.
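The merge of the high-gain and low-gain readouts into one linear signal can be sketched as below. The knee point, gain ratio, and sample values are invented for illustration; the sensor's actual on-chip linearization is not described at this level of detail in the abstract.

```python
import numpy as np

def merge_dual_gain(high, low, gain_ratio=8.0, knee=0.8, full_scale=1.0):
    """Merge two readouts of the SAME exposure into one linear HDR signal.
    Below the knee the low-noise high-gain sample is used; near saturation
    the low-gain sample, rescaled by the gain ratio, takes over."""
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    use_high = high < knee * full_scale
    return np.where(use_high, high, low * gain_ratio)

# hypothetical normalized samples: the high-gain channel clips at 1.0
high = np.array([0.10, 0.50, 1.00, 1.00])
low  = np.array([0.0125, 0.0625, 0.25, 0.95])
hdr = merge_dual_gain(high, low)   # clipped samples come from low * 8
```

Because both readouts come from a single exposure, the merge cannot produce the motion artifacts that arise when two different exposure times are combined, which is the SEHDR advantage over MEHDR stated above.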
Monolithic Active Pixel Sensors
NASA Astrophysics Data System (ADS)
Lutz, P.
In close collaboration with the group from Strasbourg, Saclay has been developing fast monolithic active pixel sensors for future vertex detectors. This presentation gives some recent results from the MIMOSA series, emphasizing the participation of the group.
Testbeam results of irradiated ams H18 HV-CMOS pixel sensor prototypes
NASA Astrophysics Data System (ADS)
Benoit, M.; Braccini, S.; Casse, G.; Chen, H.; Chen, K.; Di Bello, F. A.; Ferrere, D.; Golling, T.; Gonzalez-Sevilla, S.; Iacobucci, G.; Kiehn, M.; Lanni, F.; Liu, H.; Meng, L.; Merlassino, C.; Miucci, A.; Muenstermann, D.; Nessi, M.; Okawa, H.; Perić, I.; Rimoldi, M.; Ristić, B.; Barrero Pinto, M. Vicente; Vossebeld, J.; Weber, M.; Weston, T.; Wu, W.; Xu, L.; Zaffaroni, E.
2018-02-01
HV-CMOS pixel sensors are a promising option for the tracker upgrade of the ATLAS experiment at the LHC, as well as for other future tracking applications in which large areas are to be instrumented with radiation-tolerant silicon pixel sensors. We present results of testbeam characterisations of the 4th generation of Capacitively Coupled Pixel Detectors (CCPDv4) produced with the ams H18 HV-CMOS process that have been irradiated with different particles (reactor neutrons and 18 MeV protons) to fluences between 1×10¹⁴ and 5×10¹⁵ 1-MeV-neq. The sensors were glued to ATLAS FE-I4 pixel readout chips and measured at the CERN SPS H8 beamline using the FE-I4 beam telescope. Results for all fluences are very encouraging, with all hit efficiencies better than 97% for bias voltages of 85 V. The sample irradiated to a fluence of 1×10¹⁵ neq, a value relevant for a large volume of the upgraded tracker, exhibited 99.7% average hit efficiency. The results give strong evidence for the radiation tolerance of HV-CMOS sensors and their suitability as sensors for the experimental HL-LHC upgrades and future large-area silicon-based tracking detectors in high-radiation environments.
The Multidimensional Integrated Intelligent Imaging project (MI-3)
NASA Astrophysics Data System (ADS)
Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.
2009-06-01
MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.
Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers
NASA Astrophysics Data System (ADS)
Jiang, Chufan; Li, Beiwen; Zhang, Song
2017-04-01
This paper presents a method that can recover absolute phase pixel by pixel without embedding markers in the three phase-shifted fringe patterns, acquiring additional images, or introducing additional hardware components. The proposed three-dimensional (3D) absolute shape measurement technique includes the following major steps: (1) segment the measured object into different regions using rough a priori knowledge of the surface geometry; (2) artificially create phase maps at different z planes using the geometric constraints of the structured light system; (3) unwrap the phase pixel by pixel for each region by properly referring to the artificially created phase map; and (4) merge the unwrapped phases from all regions into a complete absolute phase map for 3D reconstruction. We demonstrate that conventional three-step phase-shifted fringe patterns can be used to create an absolute phase map pixel by pixel even for large-depth-range objects. We have successfully implemented our proposed computational framework to achieve absolute 3D shape measurement at 40 Hz.
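The conventional three-step phase shifting mentioned above recovers the wrapped phase at each pixel from three fringe images shifted by 2π/3, via φ = atan2(√3(I₁ − I₃), 2I₂ − I₁ − I₃). A minimal sketch with synthetic fringes (the background and modulation values are invented; this is the textbook formula, not the authors' full unwrapping framework):

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three fringe images phase-shifted by 2*pi/3."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# simulate one scanline of fringes with known true phase phi
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)
a, b = 128.0, 100.0                       # background and modulation
i1 = a + b * np.cos(phi - 2 * np.pi / 3)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + 2 * np.pi / 3)
recovered = wrapped_phase(i1, i2, i3)
```

The recovered phase is wrapped to (−π, π]; the paper's contribution is the per-pixel unwrapping of exactly this quantity using artificially created reference phase maps.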
Velocity filtering applied to optical flow calculations
NASA Technical Reports Server (NTRS)
Barniv, Yair
1990-01-01
Optical flow is a method by which a stream of two-dimensional images obtained from a forward-looking passive sensor is used to map the three-dimensional volume in front of a moving vehicle. Passive ranging via optical flow is applied here to the helicopter obstacle-avoidance problem. Velocity filtering is used as a field-based method to determine range to all pixels in the initial image. The theoretical understanding and performance analysis of velocity filtering as applied to optical flow is expanded and experimental results are presented.
NASA Astrophysics Data System (ADS)
Li, Linyi; Chen, Yun; Yu, Xin; Liu, Rui; Huang, Chang
2015-03-01
The study of flood inundation is significant to human life and the social economy. Remote sensing technology has provided an effective way to study the spatial and temporal characteristics of inundation. Remotely sensed images with high temporal resolutions are widely used in mapping inundation. However, mixed pixels do exist due to their relatively low spatial resolutions. One of the most popular approaches to resolve this issue is sub-pixel mapping. In this paper, a novel discrete particle swarm optimization (DPSO) based sub-pixel flood inundation mapping (DPSO-SFIM) method is proposed to achieve improved accuracy in mapping inundation at a sub-pixel scale. The evaluation criterion for sub-pixel inundation mapping is formulated. The DPSO-SFIM algorithm is developed, including particle discrete encoding, fitness function design and the swarm search strategy. The accuracy of DPSO-SFIM in mapping inundation at a sub-pixel scale was evaluated using Landsat ETM+ images from study areas in Australia and China. The results show that DPSO-SFIM consistently outperformed the four traditional SFIM methods in these study areas. A sensitivity analysis of DPSO-SFIM was also carried out to evaluate its performance. It is hoped that the results of this study will enhance the application of medium-low spatial resolution images in inundation detection and mapping, and thereby support the ecological and environmental studies of river basins.
A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors
NASA Astrophysics Data System (ADS)
Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.
2018-04-01
The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.
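The pixel-wise mapping between gamma and depth images described above ultimately rests on a perspective (pinhole) camera model. The sketch below back-projects one depth pixel to a 3-D point and reprojects it into the γ-camera frame; the intrinsic matrices, identity rotation, and 5 cm baseline are invented for illustration and are not the paper's calibration values.

```python
import numpy as np

def backproject(u, v, z, K):
    """Depth pixel (u, v) at depth z -> 3-D point in the depth-camera frame."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def project(p, K, R, t):
    """3-D point -> pixel in the gamma-camera frame (pinhole model)."""
    q = R @ p + t                  # rigid transform between the two cameras
    return K @ q / q[2]            # perspective division

# invented intrinsics and extrinsics (NOT from the paper's calibration)
K_depth = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
K_gamma = np.array([[400., 0., 160.], [0., 400., 120.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0.05, 0.0, 0.0])   # cameras 5 cm apart

p = backproject(320, 240, 2.0, K_depth)        # principal-ray pixel at 2 m
uv = project(p, K_gamma, R, t)[:2]             # lands at pixel (170, 120)
```

With calibrated models for both imagers, every depth pixel can be mapped this way, which is how the system attaches a distance to each radioactive hotspot in the γ-image.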
CMOS active pixel sensors response to low energy light ions
NASA Astrophysics Data System (ADS)
Spiriti, E.; Finck, Ch.; Baudot, J.; Divay, C.; Juliani, D.; Labalme, M.; Rousseau, M.; Salvador, S.; Vanstalle, M.; Agodi, C.; Cuttone, G.; De Napoli, M.; Romano, F.
2017-12-01
Recently, CMOS active pixel sensors have been used in hadrontherapy ion-fragmentation cross-section measurements. Their main goal is to reconstruct the tracks generated by the non-interacting primary ions or by the produced fragments. In this framework the sensors, unexpectedly, also demonstrated the possibility of obtaining information that could contribute to ion-type identification. The present analysis shows a clear dependence of the cluster charge and the number of pixels per cluster (pixels with a collected amount of charge above a given threshold) on both the fragment atomic number Z and the energy loss in the sensor. This information has been used in the overall particle identification algorithm of the FIRST (Fragmentation of Ions Relevant for Space and Therapy) experiment. The aim of this paper is to present the data analysis and the obtained results. An empirical model that reproduces the cluster size as a function of the energy deposited in the sensor was also developed in this paper.
High-speed imaging using CMOS image sensor with quasi pixel-wise exposure
NASA Astrophysics Data System (ADS)
Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.
2017-02-01
Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher-frame-rate video that were produced by simulation experiments or by an optically simulated random-sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by columns and fix the exposure duration by rows for each 8×8 pixel block. This CMOS sensor is not fully controllable pixel by pixel and has line-dependent controls, but it offers flexibility compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method to realize pseudo-random sampling for high-speed video acquisition that uses the flexibility of the CMOS sensor. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.
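The line-dependent constraint (exposure start chosen per column, exposure length per row within a block) can be illustrated with a toy space-time mask generator. The block size, frame count, and random draws here are assumptions for the sketch and do not reproduce the prototype's actual timing control.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_exposure_mask(frames=8, block=8):
    """Quasi pixel-wise exposure for one block: the exposure start is
    drawn per column, the exposure length per row (line-dependent
    control), giving a pseudo-random space-time sampling pattern."""
    start = rng.integers(0, frames, size=block)        # one start per column
    length = rng.integers(1, frames + 1, size=block)   # one length per row
    mask = np.zeros((frames, block, block), dtype=bool)
    for r in range(block):
        for c in range(block):
            s = int(start[c])
            e = min(frames, s + int(length[r]))
            mask[s:e, r, c] = True                     # pixel integrates here
    return mask

m = block_exposure_mask()          # shape (time, rows, cols) = (8, 8, 8)
```

A mask like this, tiled over the sensor, is the sampling operator that the over-complete-dictionary reconstruction would then invert to recover the high-speed sequence.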
Characterization techniques for incorporating backgrounds into DIRSIG
NASA Astrophysics Data System (ADS)
Brown, Scott D.; Schott, John R.
2000-07-01
The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms require well-characterized field collection campaigns that can be time and cost prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement to reduce the scope of expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes, and that provide the large-area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well-characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real-world scene pixels, and applying target-building techniques to them is time and resource prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide-area coverage.
The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual sensors, temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods, to demonstrate target embedding in real-world imagery.
Waldner, François; Hansen, Matthew C; Potapov, Peter V; Löw, Fabian; Newby, Terence; Ferreira, Stefanus; Defourny, Pierre
2017-01-01
The lack of sufficient ground truth data has always constrained supervised learning, thereby hindering the generation of up-to-date satellite-derived thematic maps. This is all the more true for applications requiring frequent updates over large areas, such as cropland mapping. Therefore, we present a method enabling the automated production of spatially consistent cropland maps at the national scale, based on spectral-temporal features and outdated land cover information. Following an unsupervised approach, this method extracts reliable calibration pixels based on their labels in the outdated map and their spectral signatures. To ensure spatial consistency and coherence in the map, we first propose to generate seamless input images by normalizing the time series and deriving spectral-temporal features that target salient cropland characteristics. Second, we reduce the spatial variability of the class signatures by stratifying the country and classifying each stratum independently. Finally, we remove speckle with a weighted majority filter accounting for per-pixel classification confidence. Capitalizing on a wall-to-wall validation data set, the method was tested in South Africa using a 16-year-old land cover map and multi-sensor Landsat time series. The overall accuracy of the resulting cropland map reached 92%. A spatially explicit validation revealed large variations across the country and suggested that intensive grain-growing areas were better characterized than smallholder farming systems. Informative features in the classification process vary from one stratum to another, but features targeting the minimum of vegetation as well as short-wave infrared features were consistently important throughout the country. Overall, the approach showed potential for routinely delivering consistent cropland maps over large areas as required for operational crop monitoring.
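The confidence-weighted majority filter used for speckle removal can be sketched as below: each pixel takes the label with the largest summed classification confidence in its neighborhood. This is a naive windowed loop with invented label and confidence maps, not the authors' implementation.

```python
import numpy as np

def weighted_majority_filter(labels, conf, size=3):
    """Per-pixel label smoothing: each pixel adopts the label with the
    largest summed per-pixel confidence inside a size x size window."""
    h, w = labels.shape
    r = size // 2
    out = labels.copy()
    for y in range(h):
        for x in range(w):
            votes = {}
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        lab = int(labels[yy, xx])
                        votes[lab] = votes.get(lab, 0.0) + conf[yy, xx]
            out[y, x] = max(votes, key=votes.get)
    return out

labels = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])
conf = np.full((3, 3), 0.9)
# the isolated 0 pixel is outvoted by its equally confident neighbours
smoothed = weighted_majority_filter(labels, conf)
```

Weighting the votes by confidence, rather than counting them equally, lets a few high-confidence pixels survive smoothing even when outnumbered by low-confidence neighbors.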
Seo, Min-Woong; Kawahito, Shoji
2017-12-01
A lock-in pixel CMOS image sensor (CIS) with two in-pixel storage diodes (SDs), offering a large full well capacity (FWC) for a wide signal detection range and low temporal random noise for high sensitivity, has been developed and is presented in this paper. For fast charge transfer from the photodiode to the SDs, the developed lock-in pixel uses a lateral electric field charge modulator (LEFM). As a result, the time-resolved CIS achieves a very large SD-FWC of approximately 7 ke⁻, low temporal random noise of 1.2 e⁻ rms at 20 fps with true correlated double sampling operation, and a fast intrinsic response of less than 500 ps at 635 nm. The proposed imager has an effective pixel array of and a pixel size of . The sensor chip is fabricated in a Dongbu HiTek 1P4M 0.11 μm CIS process.
Compact SPAD-Based Pixel Architectures for Time-Resolved Image Sensors
Perenzoni, Matteo; Pancheri, Lucio; Stoppa, David
2016-01-01
This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging. The focus of the paper is on pixel architectures featuring small pixel size (<25 μm) and high fill factor (>20%) as a key enabling technology for the successful implementation of high spatial resolution SPAD-based image sensors. A summary of the main CMOS SPAD implementations, their characteristics and integration challenges, is provided from the perspective of targeting large pixel arrays, where one of the key drivers is the spatial uniformity. The main analog techniques aimed at time-gated photon counting and photon timestamping suitable for compact and low-power pixels are critically discussed. The main features of these solutions are the adoption of analog counting techniques and time-to-analog conversion, in NMOS-only pixels. Reliable quantum-limited single-photon counting, self-referenced analog-to-digital conversion, time gating down to 0.75 ns and timestamping with 368 ps jitter are achieved.
NASA Astrophysics Data System (ADS)
Unno, Y.; Kamada, S.; Yamamura, K.; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Takashima, R.; Tojo, J.; Kono, T.; Hanagaki, K.; Yajima, K.; Yamauchi, Y.; Hirose, M.; Homma, Y.; Jinnouchi, O.; Kimura, K.; Motohashi, K.; Sato, S.; Sawai, H.; Todome, K.; Yamaguchi, D.; Hara, K.; Sato, Kz.; Sato, Kj.; Hagihara, M.; Iwabuchi, S.
2016-09-01
We have developed n+-in-p pixel sensors to obtain highly radiation-tolerant sensors for extremely high radiation environments such as those found at the high-luminosity LHC. We have designed novel pixel structures to eliminate the sources of efficiency loss under the bias rails after irradiation by moving the bias rail out of the boundary region and routing the bias resistors inside the area of the pixel electrodes. After irradiation by protons to a fluence of approximately 3 × 10¹⁵ n_eq/cm², the pixel structure with the polysilicon bias resistor and the bias rails moved far away from the boundary shows an efficiency loss of <0.5% per pixel at the boundary region, which is as efficient as the pixel structure without a biasing structure. The pixel structure with the bias rails at the boundary and widened p-stops underneath the bias rail also exhibits an improved loss of approximately 1% per pixel at the boundary region. We have elucidated the physical mechanisms behind the efficiency loss under the bias rail with TCAD simulations. The efficiency loss is due to the interplay of the bias rail acting as a charge-collecting electrode with a region of low electric field in the silicon near the surface at the boundary. The region acts as a "shield" for the electrode. After irradiation, the strong applied electric field nearly eliminates the region. The TCAD simulations have shown that a wide p-stop and a large Si-SiO2 interface charge (an inversion layer, specifically) act to shield the weighting potential. The pixel sensor of the old design irradiated by γ-rays to 2.4 MGy is confirmed to exhibit only a slight efficiency loss at the boundary.
NASA Astrophysics Data System (ADS)
Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.
2014-10-01
Canopy shadowing mediated by topography is an important source of radiometric distortion on remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model significantly improves on correction based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent for fine-resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model: the sun-crown-sensor (SCnS) model, based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels, forming another variety of the SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes.
We analyzed these improvements with model comparison on the red and near infrared bands. The advantages of SCnS + C and SCnS + W on both bands are expected to facilitate forest classification and change detection applications.
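As a reference point for the SCnS variants, the baseline SCS+C correction they extend can be written compactly. This is a sketch under stated assumptions: the function names are illustrative, and C is estimated by the standard least-squares fit of radiance against the illumination factor.

```python
import numpy as np

def c_parameter(radiance, cos_i):
    """Empirical C-correction parameter: fit L = m*cos(i) + b and return C = b/m."""
    m, b = np.polyfit(cos_i, radiance, 1)  # highest-degree coefficient first
    return b / m

def scs_c_correct(radiance, cos_i, cos_slope, cos_sza, c):
    """SCS+C correction: L_corr = L * (cos(slope)*cos(sza) + C) / (cos(i) + C),
    where cos(i) is the per-pixel illumination factor and sza the solar zenith."""
    return radiance * (cos_slope * cos_sza + c) / (cos_i + c)
```

On flat terrain the illumination factor reduces to cos(sza) and the correction is a no-op, which is a quick sanity check on any implementation.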
Development of pixellated Ir-TESs
NASA Astrophysics Data System (ADS)
Zen, Nobuyuki; Takahashi, Hiroyuki; Kunieda, Yuichi; Damayanthi, Rathnayaka M. T.; Mori, Fumiakira; Fujita, Kaoru; Nakazawa, Masaharu; Fukuda, Daiji; Ohkubo, Masataka
2006-04-01
We have been developing Ir-based pixellated superconducting transition edge sensors (TESs). In the area of material or astronomical applications, a sensor with an energy resolution of a few eV and an imaging capability of over 1000 pixels is desired. In order to achieve this goal, we have been analyzing signals from pixellated TESs. In the case of a 20-pixel array of Ir-TESs with 45 μm × 45 μm pixels, the incident X-ray signals have been classified into 16 groups, and we have applied numerical signal analysis. Without correction, the energy resolution of our pixellated TES is strongly degraded; using pulse shape analysis, however, we can dramatically improve it. We therefore consider that pulse shape analysis will allow this device to be used as a practical TES that identifies the incident position of each photon.
NASA Astrophysics Data System (ADS)
Kim, D.; Aglieri Rinella, G.; Cavicchioli, C.; Chanlek, N.; Collu, A.; Degerli, Y.; Dorokhov, A.; Flouzat, C.; Gajanana, D.; Gao, C.; Guilloux, F.; Hillemanns, H.; Hristozkov, S.; Junique, A.; Keil, M.; Kofarago, M.; Kugathasan, T.; Kwon, Y.; Lattuca, A.; Mager, M.; Sielewicz, K. M.; Marin Tobon, C. A.; Marras, D.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Pham, T. H.; Puggioni, C.; Reidt, F.; Riedler, P.; Rousset, J.; Siddhanta, S.; Snoeys, W.; Song, M.; Usai, G.; Van Hoorne, J. W.; Yang, P.
2016-02-01
ALICE plans to replace its Inner Tracking System during the second long shutdown of the LHC in 2019 with a new 10 m² tracker constructed entirely with monolithic active pixel sensors. The TowerJazz 180 nm CMOS imaging sensor process has been selected to produce the sensor, as it offers a deep p-well allowing full CMOS in-pixel circuitry, as well as different starting materials. First full-scale prototypes have been fabricated and tested, and radiation tolerance has also been verified. In this paper, the development of the charge-sensitive front end, and in particular its optimization for uniformity of charge threshold and time response, is presented.
NASA Astrophysics Data System (ADS)
Seo, Sang-Ho; Seo, Min-Woong; Kong, Jae-Sung; Shin, Jang-Kyoo; Choi, Pyung
2008-11-01
In this paper, a pseudo 2-transistor active pixel sensor (APS) has been designed and fabricated using an n-well/gate-tied p-channel metal oxide semiconductor field effect transistor (PMOSFET)-type photodetector with a built-in transfer gate. The proposed sensor has been fabricated in a 0.35 μm 2-poly 4-metal standard complementary metal oxide semiconductor (CMOS) logic process. The pseudo 2-transistor APS consists of two NMOSFETs and one photodetector, which can amplify the generated photocurrent. The area of the pseudo 2-transistor APS is 7.1 × 6.2 μm². The sensitivity of the proposed pixel is 49 lux/(V·s). By using this pixel, a smaller pixel area and a higher level of sensitivity can be realized compared with a conventional 3-transistor APS that uses a pn-junction photodiode.
Bio-Inspired Asynchronous Pixel Event Tricolor Vision Sensor.
Lenero-Bardallo, Juan Antonio; Bryn, D H; Hafliger, Philipp
2014-06-01
This article investigates the potential of the first-ever prototype of a vision sensor that combines tricolor stacked photodiodes with the bio-inspired asynchronous pixel event communication protocol known as Address Event Representation (AER). The stacked photodiodes are implemented in a 22 × 22 pixel array in a standard STM 90 nm CMOS process. The dynamic range is larger than 60 dB and the pixel fill factor is 28%. The pixels employ either simple pulse frequency modulation (PFM) or a Time-to-First-Spike (TFS) mode. A heuristic linear combination of the chip's inherent pseudo-colors serves to approximate an RGB color representation. Furthermore, the sensor outputs can be processed to represent the radiation in the near-infrared (NIR) band without employing external filters, and to color-encode the direction of motion due to an asymmetry in the update rates of the different diode layers.
Transparent Fingerprint Sensor System for Large Flat Panel Display.
Seo, Wonkuk; Pi, Jae-Eun; Cho, Sung Haeung; Kang, Seung-Youl; Ahn, Seong-Deok; Hwang, Chi-Sun; Jeon, Ho-Sik; Kim, Jong-Uk; Lee, Myunghee
2018-01-19
In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and an associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel size is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses the capacitance formed across the cover glass between a human finger and the electrode of each pixel of the sensor array. Three methods are reviewed for estimating the self-capacitance. The measurement results demonstrate that the transparent fingerprint sensor system has the ability to differentiate a human finger's ridges and valleys through the fingerprint sensor array.
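The ridge/valley contrast in such a self-capacitive scheme comes from the extra air gap under a valley. A parallel-plate estimate makes the effect concrete; this is an illustrative model only (not the ROIC's actual estimation method), with assumed cover-glass thickness and permittivity.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_c(area_m2, gap_m, eps_r):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

def pixel_capacitance(area_m2, glass_m, eps_glass, air_gap_m=0.0):
    """Finger-to-pixel capacitance: cover glass alone (ridge touching the
    glass), or glass in series with an air gap (valley)."""
    c_glass = plate_c(area_m2, glass_m, eps_glass)
    if air_gap_m == 0.0:
        return c_glass
    c_air = plate_c(area_m2, air_gap_m, 1.0)      # air: eps_r ~ 1
    return c_glass * c_air / (c_glass + c_air)    # series combination
```

For a single 50 μm × 50 μm pixel under 0.5 mm of glass the result lands in the sub-femtofarad range, which is why the AFE design is the hard part of such a readout.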
Testbeam results of irradiated ams H18 HV-CMOS pixel sensor prototypes
Benoit, M.; Braccini, S.; Casse, G.; ...
2018-02-08
HV-CMOS pixel sensors are a promising option for the tracker upgrade of the ATLAS experiment at the LHC, as well as for other future tracking applications in which large areas are to be instrumented with radiation-tolerant silicon pixel sensors. We present results of testbeam characterisations of the 4th generation of Capacitively Coupled Pixel Detectors (CCPDv4) produced with the ams H18 HV-CMOS process that have been irradiated with different particles (reactor neutrons and 18 MeV protons) to fluences between 1 × 10¹⁴ and 5 × 10¹⁵ 1-MeV n_eq/cm². The sensors were glued to ATLAS FE-I4 pixel readout chips and measured at the CERN SPS H8 beamline using the FE-I4 beam telescope. Results for all fluences are very encouraging, with all hit efficiencies being better than 97% for bias voltages of 85 V. The sample irradiated to a fluence of 1 × 10¹⁵ n_eq/cm², a value relevant for a large volume of the upgraded tracker, exhibited 99.7% average hit efficiency. Furthermore, the results give strong evidence for the radiation tolerance of HV-CMOS sensors and their suitability as sensors for the experimental HL-LHC upgrades and future large-area silicon-based tracking detectors in high-radiation environments.
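Hit efficiencies like the 99.7% quoted above are binomial estimates over telescope tracks; a minimal sketch with illustrative track counts (not the actual testbeam statistics) shows how the value and its statistical error are obtained.

```python
import math

def hit_efficiency(matched_hits, tracks):
    """Hit efficiency and its binomial standard error:
    eff = k/N, sigma = sqrt(eff * (1 - eff) / N)."""
    eff = matched_hits / tracks
    err = math.sqrt(eff * (1.0 - eff) / tracks)
    return eff, err
```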
Commercial CMOS image sensors as X-ray imagers and particle beam monitors
NASA Astrophysics Data System (ADS)
Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G. V.; Carraresi, L.
2015-01-01
CMOS image sensors are widely used in several applications such as mobile handsets, webcams and digital cameras, among others. Furthermore, they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap systems as beam monitors and high-resolution image sensors for scientific applications, we exploited the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9V034, featuring 752 × 480 pixels with 6 μm × 6 μm pixel size, has been mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1-6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread-function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, an MT9M032, featuring 1472 × 1096 pixels with 2.2 μm × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify the data transfer and the image acquisition, the system is controlled by a dedicated microprocessor board (DM3730 1 GHz SoC ARM Cortex-A8) on which a modified Linux kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.
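A per-axis point-spread width like the one-pixel r.m.s. quoted above can be estimated from the intensity-weighted second moment of a single-proton hit cluster. This sketch is illustrative (not the authors' analysis code): it works in pixel units and converts with the 6 μm pitch.

```python
import numpy as np

def cluster_rms_um(image, pitch_um):
    """Intensity-weighted r.m.s. width of a cluster along x, in microns."""
    ys, xs = np.indices(image.shape)
    w = image.astype(float)
    total = w.sum()
    xc = (w * xs).sum() / total                      # centroid along x
    rms_px = np.sqrt((w * (xs - xc) ** 2).sum() / total)
    return rms_px * pitch_um
```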
1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors.
Lu, Guo-Neng; Tournier, Arnaud; Roy, François; Deschamps, Benoît
2009-01-01
We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as both the photo-sensing device and the source-follower transistor, and can be controlled to store and evacuate charges. Our investigation into this 1T pixel structure includes modeling to obtain an analytical description of the conversion gain. Model validation has been done by comparing theoretical predictions and experimental results. In addition, the 1T pixel structure has been implemented in different configurations, including rectangular-gate and ring-gate designs, and variations of oxidation parameters for the fabrication process. The pixel characteristics are presented and discussed.
A 45 nm Stacked CMOS Image Sensor Process Technology for Submicron Pixel.
Takahashi, Seiji; Huang, Yi-Min; Sze, Jhy-Jyi; Wu, Tung-Ting; Guo, Fu-Sheng; Hsu, Wei-Cheng; Tseng, Tung-Hsiung; Liao, King; Kuo, Chin-Chia; Chen, Tzu-Hsiang; Chiang, Wei-Chieh; Chuang, Chun-Hao; Chou, Keng-Yu; Chung, Chi-Hsien; Chou, Kuo-Yu; Tseng, Chien-Hsien; Wang, Chuan-Joung; Yaung, Dun-Nien
2017-12-05
A submicron pixel's light and dark performance were studied by experiment and simulation. An advanced node technology incorporated with a stacked CMOS image sensor (CIS) is promising in that it may enhance performance. In this work, we demonstrated a low dark current of 3.2 e⁻/s at 60 °C, an ultra-low read noise of 0.90 e⁻ rms, a high full well capacity (FWC) of 4100 e⁻, and blooming of 0.5% in 0.9 μm pixels with a pixel supply voltage of 2.8 V. In addition, the simulation study result of 0.8 μm pixels is discussed.
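The FWC and read-noise figures above imply a single-exposure dynamic range; the decibel form below is a standard figure of merit, not a number stated in the abstract.

```python
import math

def dynamic_range_db(full_well_e, read_noise_e_rms):
    """Single-exposure dynamic range: DR = 20 * log10(FWC / read noise)."""
    return 20.0 * math.log10(full_well_e / read_noise_e_rms)
```

With the reported 4100 e⁻ FWC and 0.90 e⁻ rms read noise this works out to roughly 73 dB.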
A neighbor pixel communication filtering structure for Dynamic Vision Sensors
NASA Astrophysics Data System (ADS)
Xu, Yuan; Liu, Shiqi; Lu, Hehui; Zhang, Zilong
2017-02-01
For Dynamic Vision Sensors (DVS), thermal noise and junction leakage current induced Background Activity (BA) is the major cause of the deterioration of image quality. Inspired by the smoothing filtering principle of horizontal cells in the vertebrate retina, a DVS pixel with a Neighbor Pixel Communication (NPC) filtering structure is proposed to solve this issue. The NPC structure judges the validity of a pixel's activity through communication with its four adjacent pixels. The pixel's outputs are suppressed if its activity is determined not to be real. The proposed pixel's area is 23.76 × 24.71 μm² and only 3 ns of output latency is introduced. In order to validate the effectiveness of the structure, a 5 × 5 pixel array has been implemented in the SMIC 0.13 μm CIS process. Three test cases of the array's behavioral model show that the NPC-DVS has the ability to filter the BA.
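In software, the same neighbour-support idea behind the NPC structure is often applied as an event-stream background-activity filter: an event is kept only if a 4-connected neighbour fired recently. The time window and the event-tuple format below are illustrative assumptions, not the circuit itself.

```python
import numpy as np

def filter_background_activity(events, shape, window_us):
    """Keep an event only if one of its 4-connected neighbours fired within
    `window_us` beforehand. `events` is a list of (t, x, y) sorted by t."""
    last = np.full(shape, -np.inf)      # last firing time per pixel
    kept = []
    for t, x, y in events:
        neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
        supported = any(
            0 <= nx < shape[0] and 0 <= ny < shape[1]
            and t - last[nx, ny] <= window_us
            for nx, ny in neighbours)
        if supported:
            kept.append((t, x, y))
        last[x, y] = t                  # record the activity either way
    return kept
```

Isolated noise events have no recent neighbour support and are dropped, while spatially correlated events from real edges pass through.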
Carbon nanotube active-matrix backplanes for conformal electronics and sensors.
Takahashi, Toshitake; Takei, Kuniharu; Gillies, Andrew G; Fearing, Ronald S; Javey, Ali
2011-12-14
In this paper, we report a promising approach for fabricating large-scale flexible and stretchable electronics using a semiconductor-enriched carbon nanotube solution. Uniform semiconducting nanotube networks with superb electrical properties (mobility of ∼20 cm² V⁻¹ s⁻¹ and I_ON/I_OFF of ∼10⁴) are obtained on polyimide substrates. The substrate is made stretchable by laser cutting a honeycomb mesh structure, which combined with nanotube-network transistors enables highly robust conformal electronic devices with minimal device-to-device stochastic variations. The utility of this device concept is demonstrated by fabricating an active-matrix backplane (12 × 8 pixels, physical size of 6 × 4 cm²) for pressure mapping using a pressure-sensitive rubber as the sensor element.
Improving depth estimation from a plenoptic camera by patterned illumination
NASA Astrophysics Data System (ADS)
Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.
2015-05-01
Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device [1], and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, refocus digitally, extend the depth of field, manipulate the aperture synthetically and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration; in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.
Sasagawa, Kiyotaka; Shishido, Sanshiro; Ando, Keisuke; Matsuoka, Hitoshi; Noda, Toshihiko; Tokuda, Takashi; Kakiuchi, Kiyomi; Ohta, Jun
2013-05-06
In this study, we demonstrate a polarization-sensitive pixel for a complementary metal-oxide-semiconductor (CMOS) image sensor based on 65-nm standard CMOS technology. Using such a deep-submicron CMOS technology, it is possible to design fine metal patterns smaller than the wavelengths of visible light by using a metal wire layer. We designed and fabricated a metal wire grid polarizer on a 20 × 20 μm² pixel for an image sensor. An extinction ratio of 19.7 dB was observed at a wavelength of 750 nm.
Characterisation of Vanilla—A novel active pixel sensor for radiation detection
NASA Astrophysics Data System (ADS)
Blue, A.; Bates, R.; Laing, A.; Maneuski, D.; O'Shea, V.; Clark, A.; Prydderch, M.; Turchetta, R.; Arvanitis, C.; Bohndiek, S.
2007-10-01
Novel features of a new monolithic active pixel sensor, Vanilla, with 520 × 520 pixels (25 μm square), have been characterised for the first time. Optimisation of the sensor operation was achieved through variation of frame rates, integration times and on-chip biases and voltages. Features such as flushed reset operation, ROI capturing and readout modes have been fully tested. Stability measurements were performed to test its suitability for long-term applications. These results suggest that the Vanilla sensor is suitable for use in particle physics experiments, as well as in bio-medical and space applications.
3D-FBK Pixel Sensors: Recent Beam Tests Results with Irradiated Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Micelli, A.; /INFN, Trieste /Udine U.; Helle, K.
2012-04-30
The Pixel Detector is the innermost part of the ATLAS experiment tracking device at the Large Hadron Collider, and plays a key role in the reconstruction of the primary vertices from the collisions and secondary vertices produced by short-lived particles. To cope with the high level of radiation produced during collider operation, it is planned to add to the present three layers of silicon pixel sensors which constitute the Pixel Detector an additional layer (Insertable B-Layer, or IBL) of sensors. 3D silicon sensors are one of the technologies under study for the IBL. 3D silicon technology is an innovative combination of very-large-scale integration and Micro-Electro-Mechanical Systems where electrodes are fabricated inside the silicon bulk instead of being implanted on the wafer surfaces. 3D sensors, with electrodes fully or partially penetrating the silicon substrate, are currently fabricated at different processing facilities in Europe and the USA. This paper reports on the June 2010 beam test results for irradiated 3D devices produced at FBK (Trento, Italy). The performance of these devices, all bump-bonded with the ATLAS pixel FE-I3 read-out chip, is compared to that observed before irradiation in a previous beam test.
Low Power Camera-on-a-Chip Using CMOS Active Pixel Sensor Technology
NASA Technical Reports Server (NTRS)
Fossum, E. R.
1995-01-01
A second generation image sensor technology has been developed at the NASA Jet Propulsion Laboratory as a result of the continuing need to miniaturize space science imaging instruments. Implemented using standard CMOS, the active pixel sensor (APS) technology permits the integration of the detector array with on-chip timing, control and signal chain electronics, including analog-to-digital conversion.
A CMOS active pixel sensor for retinal stimulation
NASA Astrophysics Data System (ADS)
Prydderch, Mark L.; French, Marcus J.; Mathieson, Keith; Adams, Christopher; Gunning, Deborah; Laudanski, Jonathan; Morrison, James D.; Moodie, Alan R.; Sinclair, James
2006-02-01
Degenerative photoreceptor diseases, such as age-related macular degeneration and retinitis pigmentosa, are the most common causes of blindness in the western world. A potential cure is to use a microelectronic retinal prosthesis to provide electrical stimulation to the remaining healthy retinal cells. We describe a prototype CMOS Active Pixel Sensor capable of detecting a visual scene and translating it into a train of electrical pulses for stimulation of the retina. The sensor consists of a 10 × 10 array of 100 micron square pixels fabricated on a 0.35 micron CMOS process. Light incident upon each pixel is converted into output current pulse trains with a frequency related to the light intensity. These outputs are connected to a biocompatible microelectrode array for contact to the retinal cells. The flexible design allows experimentation with signal amplitudes and frequencies in order to determine the most appropriate stimulus for the retina. Neural processing in the retina can be studied by using the sensor in conjunction with a Field Programmable Gate Array (FPGA) programmed to behave as a neural network. The sensor has been integrated into a test system designed for studying retinal response. We present the most recent results obtained from this sensor.
Scanning Microscopes Using X Rays and Microchannels
NASA Technical Reports Server (NTRS)
Wang, Yu
2003-01-01
Scanning microscopes that would be based on microchannel filters and advanced electronic image sensors and that utilize x-ray illumination have been proposed. Because the finest resolution attainable in a microscope is determined by the wavelength of the illumination, the x-ray illumination in the proposed microscopes would make it possible, in principle, to achieve resolutions of the order of nanometers, about a thousand times as fine as the resolution of a visible-light microscope. Heretofore, it has been necessary to use scanning electron microscopes to obtain such fine resolution. In comparison with scanning electron microscopes, the proposed microscopes would likely be smaller, less massive, and less expensive. Moreover, unlike in scanning electron microscopes, it would not be necessary to place specimens under vacuum. The proposed microscopes are closely related to the ones described in several prior NASA Tech Briefs articles; namely, Miniature Microscope Without Lenses (NPO-20218), NASA Tech Briefs, Vol. 22, No. 8 (August 1998), page 43; and Reflective Variants of Miniature Microscope Without Lenses (NPO-20610), NASA Tech Briefs, Vol. 26, No. 9 (September 2002), page 6a. In all of these microscopes, the basic principle of design and operation is the same: the focusing optics of a conventional visible-light microscope are replaced by a combination of a microchannel filter and a charge-coupled-device (CCD) image detector. A microchannel plate containing parallel, microscopic-cross-section holes much longer than they are wide is placed between a specimen and an image sensor, which is typically the CCD. The microchannel plate must be made of a material that absorbs the illuminating radiation reflected or scattered from the specimen. The microchannels must be positioned and dimensioned so that each one is registered with a pixel on the image sensor.
Because most of the radiation incident on the microchannel walls becomes absorbed, the radiation that reaches the image sensor consists predominantly of radiation that was launched along the longitudinal direction of the microchannels. Therefore, most of the radiation arriving at each pixel on the sensor must have traveled along a straight line from a corresponding location on the specimen. Thus, there is a one-to-one mapping from a point on a specimen to a pixel in the image sensor, so that the output of the image sensor contains image information equivalent to that from a microscope.
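The collimation described above can be quantified by the channel aspect ratio: a straight channel of diameter d and length L passes, without wall contact, only rays within a half-angle of about arctan(d/L). The dimensions in the usage below are illustrative, not taken from the article.

```python
import math

def acceptance_half_angle_deg(diameter, length):
    """Half-angle (degrees) of rays a straight absorbing-walled channel
    passes without touching the wall: theta = arctan(d / L)."""
    return math.degrees(math.atan2(diameter, length))
```

A 1 μm channel that is 1 mm long accepts only rays within about 0.06° of its axis, which is why each pixel sees essentially one point on the specimen.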
CVD diamond pixel detectors for LHC experiments
NASA Astrophysics Data System (ADS)
Wedenig, R.; Adam, W.; Bauer, C.; Berdermann, E.; Bergonzo, P.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fizzotti, F.; Foulon, F.; Friedl, M.; Gan, K. K.; Gheeraert, E.; Grigoriev, E.; Hallewell, G.; Hall-Wilton, R.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Karl, C.; Kass, R.; Knöpfle, K. T.; Krammer, M.; Logiudice, A.; Lu, R.; Manfredi, P. F.; Manfredotti, C.; Marshall, R. D.; Meier, D.; Mishina, M.; Oh, A.; Pan, L. S.; Palmieri, V. G.; Pernicka, M.; Peitz, A.; Pirollo, S.; Polesello, P.; Pretzl, K.; Procario, M.; Re, V.; Riester, J. L.; Roe, S.; Roff, D.; Rudge, A.; Runolfsson, O.; Russ, J.; Schnetzer, S.; Sciortino, S.; Speziali, V.; Stelzer, H.; Stone, R.; Suter, B.; Tapper, R. J.; Tesarek, R.; Trawick, M.; Trischuk, W.; Vittone, E.; Wagner, A.; Walsh, A. M.; Weilhammer, P.; White, C.; Zeuner, W.; Ziock, H.; Zoeller, M.; Blanquart, L.; Breugnion, P.; Charles, E.; Ciocio, A.; Clemens, J. C.; Dao, K.; Einsweiler, K.; Fasching, D.; Fischer, P.; Joshi, A.; Keil, M.; Klasen, V.; Kleinfelder, S.; Laugier, D.; Meuser, S.; Milgrome, O.; Mouthuy, T.; Richardson, J.; Sinervo, P.; Treis, J.; Wermes, N.; RD42 Collaboration
1999-08-01
This paper reviews the development of CVD diamond pixel detectors. The preparation of the diamond pixel sensors for bump-bonding to the pixel readout electronics for the LHC and the results from beam tests carried out at CERN are described.
Investigation of CMOS pixel sensor with 0.18 μm CMOS technology for high-precision tracking detector
NASA Astrophysics Data System (ADS)
Zhang, L.; Fu, M.; Zhang, Y.; Yan, W.; Wang, M.
2017-01-01
The Circular Electron Positron Collider (CEPC) proposed by the Chinese high energy physics community aims to measure Higgs particles and their interactions precisely. The tracking detector, including the Silicon Inner Tracker (SIT) and Forward Tracking Disks (FTD), imposes stringent requirements on sensor technologies in terms of spatial resolution, power consumption and readout speed. The CMOS Pixel Sensor (CPS) is a promising candidate to meet these requirements. This paper presents preliminary studies on sensor optimization for the tracking detector, aiming at high charge collection efficiency while keeping the necessary spatial resolution. Detailed studies of the charge collection have been performed using a 0.18 μm CMOS image sensor process. This process offers a high-resistivity epitaxial layer, which significantly improves the charge collection and therefore the radiation tolerance. Guided by the simulation results, a first exploratory prototype has been designed and fabricated. The prototype includes 9 different pixel arrays, which vary in pixel pitch, diode size and geometry. The total area of the prototype amounts to 2 × 7.88 mm2.
3D track reconstruction capability of a silicon hybrid active pixel detector
NASA Astrophysics Data System (ADS)
Bergmann, Benedikt; Pichotka, Martin; Pospisil, Stanislav; Vycpalek, Jiri; Burian, Petr; Broulim, Pavel; Jakubek, Jan
2017-06-01
Timepix3 detectors are the latest generation of hybrid active pixel detectors of the Medipix/Timepix family. Such detectors consist of an active sensor layer connected to a readout ASIC (application specific integrated circuit), segmenting the detector into a square matrix of 256 × 256 pixels (pixel pitch 55 μm). Particles interacting in the active sensor material create charge carriers, which drift towards the pixelated electrode, where they are collected. In each pixel, the time of the interaction (time resolution 1.56 ns) and the amount of created charge carriers are measured. Such a device was employed in an experiment in a 120 GeV/c pion beam. It is demonstrated how the drift time information can be used for "4D" particle tracking, with the three spatial dimensions and the energy losses along the particle trajectory (dE/dx). Since the coordinates in the detector plane are given by the pixelation (x, y), the x- and y-resolution is determined by the pixel pitch (55 μm). A z-resolution of 50.4 μm was achieved (for a 500 μm thick silicon sensor at 130 V bias), while the drift-time-model-independent z-resolution was found to be 28.5 μm.
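The drift-time-based reconstruction described above can be sketched as follows. This is an illustrative sketch, not the authors' code: the (x, y) coordinates come from the pixelation, while z is assumed here to follow from the drift time via a constant drift velocity, whose value below is a placeholder.

```python
# Sketch: mapping a Timepix3 pixel hit to (x, y, z).
# Thickness and pitch are taken from the abstract; the constant drift
# velocity is an assumed illustrative value, not a measured one.

THICKNESS_UM = 500.0        # silicon sensor thickness (from the abstract)
DRIFT_VEL_UM_PER_NS = 10.0  # assumed constant drift velocity (illustrative)

def hit_position(col, row, drift_time_ns, pitch_um=55.0):
    """x, y from the pixelation; z from the drift time of the carriers."""
    x = col * pitch_um
    y = row * pitch_um
    # Longer drift time means the charge was created farther from the
    # collecting electrode; clamp to the physical sensor thickness.
    z = min(drift_time_ns * DRIFT_VEL_UM_PER_NS, THICKNESS_UM)
    return x, y, z

x, y, z = hit_position(col=10, row=20, drift_time_ns=25.0)
print(x, y, z)  # 550.0 1100.0 250.0
```

In practice the drift velocity varies with the local electric field, which is why the paper distinguishes the model-dependent and model-independent z-resolutions.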
Sensor-driven area coverage for an autonomous fixed-wing unmanned aerial vehicle.
Paull, Liam; Thibault, Carl; Nagaty, Amr; Seto, Mae; Li, Howard
2014-09-01
Area coverage with an onboard sensor is an important task for an unmanned aerial vehicle (UAV) with many applications. Autonomous fixed-wing UAVs are more appropriate for larger scale area surveying since they can cover ground more quickly. However, their non-holonomic dynamics and susceptibility to disturbances make sensor coverage a challenging task. Most previous approaches to area coverage planning are offline and assume that the UAV can follow the planned trajectory exactly. In this paper, this restriction is removed as the aircraft maintains a coverage map based on its actual pose trajectory and makes control decisions based on that map. The aircraft is able to plan paths in situ based on sensor data and an accurate model of the on-board camera used for coverage. An information theoretic approach is used that selects desired headings that maximize the expected information gain over the coverage map. In addition, the branch entropy concept previously developed for autonomous underwater vehicles is extended to UAVs and ensures that the vehicle is able to achieve its global coverage mission. The coverage map over the workspace uses the projective camera model and compares the expected area of the target on the ground and the actual area covered on the ground by each pixel in the image. The camera is mounted on a two-axis gimbal and can either be stabilized or optimized for maximal coverage. Hardware-in-the-loop simulation results and real hardware implementation on a fixed-wing UAV show the effectiveness of the approach. By including the already developed automatic takeoff and landing capabilities, we now have a fully automated and robust platform for performing aerial imagery surveys.
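The information-theoretic heading selection described above can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' planner: each coverage-map cell stores the probability it has been adequately covered, and the heading whose camera footprint yields the largest expected entropy reduction is chosen. The footprint cell values and detection probability below are hypothetical.

```python
import math

def cell_entropy(p):
    """Binary entropy (bits) of one coverage-map cell."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(cells, p_detect=0.8):
    """cells: coverage probabilities inside the predicted footprint
    for one candidate heading; p_detect is an assumed sensor model."""
    gain = 0.0
    for p in cells:
        p_new = p + (1 - p) * p_detect   # coverage update if we fly this way
        gain += cell_entropy(p) - cell_entropy(p_new)
    return gain

# Hypothetical footprints (coverage probabilities) for three headings.
footprints = {0: [0.5, 0.5, 0.9], 90: [0.5, 0.5, 0.5], 180: [0.9, 0.9, 0.9]}
best = max(footprints, key=lambda h: expected_info_gain(footprints[h]))
print(best)  # 90 -- the least-covered footprint is the most informative
```

The least-certain cells (p near 0.5) contribute the most expected gain, which is what drives the vehicle toward uncovered regions.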
A robust color signal processing with wide dynamic range WRGB CMOS image sensor
NASA Astrophysics Data System (ADS)
Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi
2011-01-01
We have developed a robust color reproduction methodology based on a simple calculation with a new color matrix, using the previously developed wide-dynamic-range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated in a 0.18 μm CMOS technology and has a 45-degree oblique pixel array, a 4.2 μm effective pixel pitch and W pixels. A W pixel was formed by replacing one of the two G pixels in the Bayer RGB color filter. The W pixel has high sensitivity across the visible waveband. An emerald-green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly contains emerald-green and yellow light. These colors are difficult to reproduce accurately with the conventional simple linear matrix because their wavelengths lie in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuations and noise has been achieved.
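The EGY construction described above can be sketched as follows. The EGY signal is the W signal minus the sum of R, G and B, and the corrected color is obtained by a linear matrix applied to the (R, G, B, EGY) vector. The matrix coefficients below are placeholders, not the published calibration.

```python
import numpy as np

# Hypothetical 3x4 EGY-RGB correction matrix (illustrative values only;
# a real matrix would be fitted to the sensor's spectral response).
M = np.array([
    [1.2, -0.1, -0.1, 0.05],
    [-0.1, 1.1, -0.1, 0.10],
    [-0.1, -0.1, 1.3, 0.02],
])

def correct(r, g, b, w):
    """Form the EGY component and apply the linear color matrix."""
    egy = w - (r + g + b)          # emerald-green/yellow component
    return M @ np.array([r, g, b, egy])

rgb_out = correct(r=0.3, g=0.5, b=0.2, w=1.1)
```

The extra EGY column gives the matrix a handle on the spectral region between G and R that a plain 3x3 RGB matrix cannot reach.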
Lutz, Gerhard; Porro, Matteo; Aschauer, Stefan; Wölfel, Stefan; Strüder, Lothar
2016-01-01
Depleted field effect transistors (DEPFET) are used to achieve very low noise signal charge readout with sub-electron measurement precision. This is accomplished by repeatedly reading an identical charge, thereby suppressing not only the white serial noise but also the usually constant 1/f noise. The repetitive non-destructive readout (RNDR) DEPFET is an ideal central element for an active pixel sensor (APS) pixel. The theory has been derived thoroughly and results have been verified on RNDR-DEPFET prototypes. A charge measurement precision of 0.18 electrons has been achieved. The device is well-suited for spectroscopic X-ray imaging and for optical photon counting in pixel sensors, even at high photon numbers in the same cell. PMID:27136549
A Physics-Based Deep Learning Approach to Shadow Invariant Representations of Hyperspectral Images.
Windrim, Lloyd; Ramakrishnan, Rishi; Melkumyan, Arman; Murphy, Richard J
2018-02-01
This paper proposes the Relit Spectral Angle-Stacked Autoencoder, a novel unsupervised feature learning approach for mapping pixel reflectances to illumination invariant encodings. This work extends the Spectral Angle-Stacked Autoencoder so that it can learn a shadow-invariant mapping. The method is inspired by a deep learning technique, Denoising Autoencoders, with the incorporation of a physics-based model for illumination such that the algorithm learns a shadow invariant mapping without the need for any labelled training data, additional sensors, a priori knowledge of the scene or the assumption of Planckian illumination. The method is evaluated using datasets captured from several different cameras, with experiments to demonstrate the illumination invariance of the features and how they can be used practically to improve the performance of high-level perception algorithms that operate on images acquired outdoors.
Timepix Device Efficiency for Pattern Recognition of Tracks Generated by Ionizing Radiation
NASA Astrophysics Data System (ADS)
Leroy, Claude; Asbah, Nedaa; Gagnon, Louis-Guilaume; Larochelle, Jean-Simon; Pospisil, Stanislav; Soueid, Paul
2014-06-01
A hybrid silicon pixelated TIMEPIX detector (256 × 256 pixels with 55 μm pitch) operated in Time Over Threshold (TOT) mode was exposed to radioactive sources (²⁴¹Am, ¹⁰⁶Ru, ¹³⁷Cs) and to protons and alpha-particles Rutherford-backscattered from a thin gold foil, using proton and alpha-particle beams delivered by the Tandem accelerator of Montreal University. Measurements were also performed with different mixed radiation fields of heavy charged particles (protons and alpha-particles), photons and electrons, produced by exposing TIMEPIX simultaneously to the radioactive sources and to the proton beams. All measurements were performed in vacuum. The TOT mode of operation allowed direct measurement of the energy deposited in each pixel. The efficiency of track recognition with this device was tested by comparing the experimental activities of the radioactive sources (determined from the measured number of tracks) with their expected activities. The efficiency of track recognition of incident protons and alpha-particles of different energies was measured as a function of the incidence angle. Operating TIMEPIX in TOT mode allowed a 3D mapping of the charge sharing effect in the whole volume of the silicon sensor. The effect of the bias voltage on charge sharing was investigated, as the level of charge sharing is related to the local profile of the electric field in the sensor. The results of the present measurements demonstrate the capability of TIMEPIX to differentiate between particle species in mixed radiation fields and to measure their energy deposition. Single-track analysis gives a good precision (significantly better than the 55 μm pixel size) on the coordinates of the impact point of protons interacting in the TIMEPIX silicon layer.
Hart, James L; Lang, Andrew C; Leff, Asher C; Longo, Paolo; Trevor, Colin; Twesten, Ray D; Taheri, Mitra L
2017-08-15
In many cases, electron counting with direct detection sensors offers improved resolution, lower noise, and higher pixel density compared to conventional, indirect detection sensors for electron microscopy applications. Direct detection technology has previously been utilized, with great success, for imaging and diffraction, but potential advantages for spectroscopy remain unexplored. Here we compare the performance of a direct detection sensor operated in counting mode and an indirect detection sensor (scintillator/fiber-optic/CCD) for electron energy-loss spectroscopy. Clear improvements in measured detective quantum efficiency and combined energy resolution/energy field-of-view are offered by counting mode direct detection, showing promise for efficient spectrum imaging, low-dose mapping of beam-sensitive specimens, trace element analysis, and time-resolved spectroscopy. Despite the limited counting rate imposed by the readout electronics, we show that both core-loss and low-loss spectral acquisition are practical. These developments will benefit biologists, chemists, physicists, and materials scientists alike.
A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity
Zhang, Fan; Niu, Hanben
2016-01-01
In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/(8.5 × 10⁷) when illuminated by a 405-nm diode laser and 1/(1.4 × 10⁴) when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e− rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena. PMID:27367699
Modeling and analysis of hybrid pixel detector deficiencies for scientific applications
NASA Astrophysics Data System (ADS)
Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.; Mohseni, Hooman
2015-08-01
Semiconductor hybrid pixel detectors often consist of a pixelated sensor layer bump-bonded to a matching pixelated readout integrated circuit (ROIC). The sensor can range from high-resistivity Si to III-V materials, whereas a Si CMOS process is typically used to manufacture the ROIC. Independent device physics and electronic design automation (EDA) tools, with significantly different solvers, are used to determine sensor characteristics and to verify the functional performance of the ROIC, respectively. Some physics solvers provide the capability of transferring data to the EDA tool. However, single-pixel transient simulations are either not feasible due to convergence difficulties or are prohibitively long. A simplified sensor model, consisting of a current pulse in parallel with the detector equivalent capacitor, is often used; even then, SPICE-type top-level (entire array) simulations range from days to weeks. In order to analyze detector deficiencies for a particular scientific application, accurately defined transient behavioral models of all the functional blocks are required. Furthermore, various simulations of the entire array, such as transient, noise, Monte Carlo and inter-pixel effects, need to be performed within a reasonable time frame without trading off accuracy. The sensor and the analog front-end can be modeled using a real-number modeling language; complex mathematical functions or detailed data can be saved to text files for further top-level digital simulations. Parasitic-aware digital timing is extracted in the standard delay format (SDF) from the pixel digital back-end layout as well as from the periphery of the ROIC. For any given input, detector-level worst-case and best-case simulations are performed in a Verilog simulation environment to determine the output. Each top-level transient simulation takes no more than 10-15 minutes.
The impact of changing key parameters, such as sensor Poissonian shot noise, analog front-end bandwidth, and jitter due to clock distribution, can be accurately analyzed to determine ROIC architectural viability and bottlenecks. Hence, the impact of the detector parameters on the scientific application can be studied.
The FoCal prototype—an extremely fine-grained electromagnetic calorimeter using CMOS pixel sensors
NASA Astrophysics Data System (ADS)
de Haas, A. P.; Nooren, G.; Peitzmann, T.; Reicher, M.; Rocco, E.; Röhrich, D.; Ullaland, K.; van den Brink, A.; van Leeuwen, M.; Wang, H.; Yang, S.; Zhang, C.
2018-01-01
A prototype of a Si-W EM calorimeter was built with Monolithic Active Pixel Sensors as the active elements. With a pixel size of 30 μm it allows digital calorimetry, i.e. the particle's energy is determined by counting pixels, not by measuring the energy deposited. Although of modest size, with a width of only four Moliere radii, it has 39 million pixels. In this article the construction and tuning of the prototype is described. Results from beam tests are compared with predictions of GEANT-based Monte Carlo simulations. The shape of showers caused by electrons is shown in unprecedented detail. Results for energy and position resolution are also given.
Integrated optical sensors for 2D spatial chemical mapping (Conference Presentation)
NASA Astrophysics Data System (ADS)
Flores, Raquel; Janeiro, Ricardo; Viegas, Jaime
2017-02-01
Sensors based on optical waveguides for chemical sensing have attracted increasing interest over the last two decades, fueled by potential applications in commercial lab-on-a-chip devices for the medical and food safety industries. Even though the early studies were oriented toward single-point detection, progress in device size reduction and device yield afforded by photonics foundries has opened the opportunity for distributed dynamic chemical sensing at the microscale. This will allow researchers to follow the dynamics of chemical species in the fields of microbiology and microchemistry, with a method complementary to current technologies based on microfluorescence and hyperspectral imaging. The study of chemical dynamics at the surface of photoelectrodes in water splitting cells is a good candidate to benefit from such optochemical sensing devices, which include a photonic integrated circuit (PIC) with multiple sensors for real-time detection and spatial mapping of chemical species. In this project, we present experimental results on a prototype integrated optical system for chemical mapping based on the interaction of cascaded resonant optical devices, spatially covered with chemically sensitive polymers and plasmon-enhanced nanostructured metal/metal-oxide claddings offering chemical selectivity in a pixelated surface. In order to achieve a compact footprint, the prototype is based on a silicon photonics platform. A discussion of the relative merits of a photonic platform based on large-bandgap metal oxides and nitrides, which have higher chemical resistance than silicon, is also presented.
Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications.
Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun
2010-12-29
In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors' architecture on the basis of the type of electric measurement or imaging functionalities.
NASA Astrophysics Data System (ADS)
Adam, W.; Berdermann, E.; Bergonzo, P.; Bertuccio, G.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; D'Angelo, P.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Doroshenko, J.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fizzotti, F.; Foster, J.; Foulon, F.; Friedl, M.; Gan, K. K.; Gheeraert, E.; Gobbi, B.; Grim, G. P.; Hallewell, G.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Kass, R.; Koeth, T.; Krammer, M.; Lander, R.; Logiudice, A.; Lu, R.; mac Lynne, L.; Manfredotti, C.; Meier, D.; Mishina, M.; Moroni, L.; Oh, A.; Pan, L. S.; Pernicka, M.; Perera, L.; Pirollo, S.; Plano, R.; Procario, M.; Riester, J. L.; Roe, S.; Rott, C.; Rousseau, L.; Rudge, A.; Russ, J.; Sala, S.; Sampietro, M.; Schnetzer, S.; Sciortino, S.; Stelzer, H.; Stone, R.; Suter, B.; Tapper, R. J.; Tesarek, R.; Trischuk, W.; Tromson, D.; Vittone, E.; Wedenig, R.; Weilhammer, P.; White, C.; Zeuner, W.; Zoeller, M.
2001-06-01
Diamond based pixel detectors are a promising radiation-hard technology for use at the LHC. We present first results on a CMS diamond pixel sensor. With a threshold setting of 2000 electrons, an average pixel efficiency of 78% was obtained for normally incident minimum ionizing particles.
System and method for generating a deselect mapping for a focal plane array
Bixler, Jay V; Brandt, Timothy G; Conger, James L; Lawson, Janice K
2013-05-21
A method for generating a deselect mapping for a focal plane array according to one embodiment includes gathering a data set for a focal plane array when exposed to light or radiation from a first known target; analyzing the data set for determining which pixels or subpixels of the focal plane array to add to a deselect mapping; adding the pixels or subpixels to the deselect mapping based on the analysis; and storing the deselect mapping. A method for gathering data using a focal plane array according to another embodiment includes deselecting pixels or subpixels based on a deselect mapping; gathering a data set using pixels or subpixels in a focal plane array that are not deselected upon exposure thereof to light or radiation from a target of interest; and outputting the data set.
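The two embodiments above (building the mapping, then gathering data with it) can be sketched as follows. This is an illustrative reading of the claimed steps, not the patented implementation: "analysis" is reduced here to flagging pixels whose flat-field response deviates strongly from the array mean, with the threshold an assumed choice.

```python
import numpy as np

def build_deselect_map(frames, n_sigma=5.0):
    """frames: (n, rows, cols) stack taken while exposed to a known target.
    Flags pixels whose mean response is an outlier vs. the array statistics."""
    mean_per_pixel = frames.mean(axis=0)
    mu, sigma = mean_per_pixel.mean(), mean_per_pixel.std()
    return np.abs(mean_per_pixel - mu) > n_sigma * sigma  # True = deselect

def apply_deselect(frame, deselect):
    """Gather data using only pixels that are not deselected."""
    out = frame.astype(float).copy()
    out[deselect] = np.nan      # deselected pixels contribute no data
    return out
```

A stored boolean array per focal plane is enough to reproduce the mapping on later acquisitions; the patent leaves the analysis criterion open, so the sigma clip here is only one possibility.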
Pixel-based approach for building heights determination by SAR radargrammetry
NASA Astrophysics Data System (ADS)
Dubois, C.; Thiele, A.; Hinz, S.
2013-10-01
Numerous advances have been made recently in photogrammetry, laser scanning, and remote sensing for the creation of 3D city models. More and more cities are interested in obtaining 3D city models, be it for urban planning purposes or for supporting public utility companies. In areas often affected by natural disasters, rapid updating of the 3D information may also be useful for helping rescue forces. The high resolutions achieved by the new spaceborne SAR sensor generation enable the analysis of city areas at building level and make those sensors attractive for the extraction of 3D information. Moreover, they present the advantage of weather and sunlight independence, which makes them more practical than optical data, in particular for tasks where rapid response is required. Furthermore, their short revisit time and the possibility of multi-sensor constellations enable several acquisitions within a few hours. This opens the door to new applications, especially radargrammetric applications, which consider acquisitions taken under different incidence angles. In this paper, we present a new approach for determining building heights, relying only on the radargrammetric analysis of building layover. Taking into account same-side acquisitions, we present the workflow of building height determination. Focus is placed on geometric considerations, a pixel-based approach for disparity map calculation, and analysis of the building layover signature for different configurations in order to determine building height.
Germanium ``hexa'' detector: production and testing
NASA Astrophysics Data System (ADS)
Sarajlić, M.; Pennicard, D.; Smoljanin, S.; Hirsemann, H.; Struth, B.; Fritzsch, T.; Rothermund, M.; Zuvic, M.; Lampert, M. O.; Askar, M.; Graafsma, H.
2017-01-01
Here we present new results on the testing of a germanium sensor for X-ray radiation. The system, called ``hexa'', is made of 3 × 2 Medipix3RX chips bump-bonded to a monolithic sensor. Its dimensions are 45 × 30 mm2 and the sensor thickness is 1.5 mm. The total number of pixels is 393216, in a 768 × 512 matrix with 55 μm pixel pitch. The Medipix3RX read-out chip provides photon-counting read-out with single-photon sensitivity. The sensor is cooled to -126°C, and noise levels together with the flat-field response are measured. For a -200 V bias, the leakage current was 4.4 mA (3.2 μA/mm2). Due to higher leakage, around 2.5% of all pixels remain non-responsive. More than 99% of all pixels are bump-bonded correctly. In this paper we present the experimental set-up, the threshold equalization procedure, image acquisition and the technique for estimating bump-bond quality.
NASA Astrophysics Data System (ADS)
Seo, Hokuto; Aihara, Satoshi; Watabe, Toshihisa; Ohtake, Hiroshi; Sakai, Toshikatsu; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Hirao, Takashi
2011-02-01
A color image was produced by a vertically stacked image sensor with blue (B)-, green (G)-, and red (R)-sensitive organic photoconductive films, each having a thin-film transistor (TFT) array that uses a zinc oxide (ZnO) channel to read out the signal generated in each organic film. The number of pixels of the fabricated image sensor is 128×96 for each color, and the pixel size is 100×100 µm2. The current on/off ratio of the ZnO TFT is over 10⁶, and the B-, G-, and R-sensitive organic photoconductive films show excellent wavelength selectivity. The stacked image sensor can produce a color image at 10 frames per second with a resolution corresponding to the pixel number. This result clearly shows that color separation is achieved without using any conventional color separation optical system such as a color filter array or a prism.
Development of a 750x750 pixels CMOS imager sensor for tracking applications
NASA Astrophysics Data System (ADS)
Larnaudie, Franck; Guardiola, Nicolas; Saint-Pé, Olivier; Vignon, Bruno; Tulet, Michel; Davancens, Robert; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Estribeau, Magali
2017-11-01
Solid-state optical sensors are now commonly used in space applications (navigation cameras, astronomy imagers, tracking sensors...). Although charge-coupled devices are still widely used, the CMOS image sensor (CIS), whose performance is continuously improving, is a strong challenger for Guidance, Navigation and Control (GNC) systems. This paper describes a 750x750 pixels CMOS image sensor that has been specially designed and developed for star tracker and tracking sensor applications. The detector, featuring a smart architecture that enables simple and powerful operation, is built using the AMIS 0.5μm CMOS technology. It contains 750x750 rectangular pixels with 20μm pitch. The geometry of the pixel sensitive zone is optimized for applications based on centroiding measurements. The main feature of this device is the on-chip control and timing function, which makes the device easier to operate by drastically reducing the number of clocks to be applied. This powerful function allows the user to operate the sensor with high flexibility: measurement of dark level from masked lines, direct access to the windows of interest… A temperature probe is also integrated within the CMOS chip, allowing a very precise measurement through the video stream. A complete electro-optical characterization of the sensor has been performed. The major parameters have been evaluated: dark current and its uniformity, read-out noise, conversion gain, Fixed Pattern Noise, Photo Response Non Uniformity, quantum efficiency, Modulation Transfer Function, and intra-pixel scanning. The characterization tests are detailed in the paper. ⁶⁰Co and proton irradiation tests have also been carried out on the image sensor and the results are presented.
The specific features of the 750x750 image sensor, such as a low-power CMOS design (3.3 V, power consumption < 100 mW), natural windowing (which allows efficient and robust tracking algorithms), and simple proximity electronics (thanks to the on-chip control and timing function) enabling a high-flexibility architecture, make this imager a good candidate for high-performance tracking applications.
CMOS image sensors: State-of-the-art
NASA Astrophysics Data System (ADS)
Theuwissen, Albert J. P.
2008-09-01
This paper gives an overview of the state-of-the-art of CMOS image sensors. The main focus is put on the shrinkage of the pixels: what is the effect on the performance characteristics of the imagers and on the various physical parameters of the camera? How is the CMOS pixel architecture optimized to cope with the negative performance effects of the ever-shrinking pixel size? On the other hand, the smaller dimensions in CMOS technology allow further integration at the column level and even at the pixel level. This will make CMOS imagers even smarter than they already are.
NASA Astrophysics Data System (ADS)
Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun
2018-03-01
Synthetic aperture radar (SAR) is an indispensable method for marine monitoring. With the growing number of SAR sensors, high-resolution images can be acquired that contain more target structure information, such as finer spatial details. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets. The whole method is based on the APT-domain value. Firstly, the image is mapped into the new transform domain. Secondly, false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels. The enhanced image is then processed by the Niblack algorithm to obtain a binary wake image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image, as a verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform enhances the target structure and improves the contrast of the image. The algorithm performs well in ship and ship-wake detection.
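The CFAR screening step above can be illustrated with a generic cell-averaging CFAR (not the paper's APT-domain variant): each pixel is compared against a threshold scaled from the mean of a surrounding training ring, with guard cells excluded so the target does not contaminate its own background estimate. Window sizes and the scale factor below are assumed values.

```python
import numpy as np

def ca_cfar(img, guard=2, train=4, scale=3.0):
    """Cell-averaging CFAR: flag pixels exceeding scale x local background.
    guard/train set the half-widths of the guard region and training ring."""
    rows, cols = img.shape
    det = np.zeros_like(img, dtype=bool)
    r = guard + train
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            window = img[i - r:i + r + 1, j - r:j + r + 1].copy()
            # Blank the guard region and the cell under test, so only the
            # outer training ring contributes to the background estimate.
            window[train:-train, train:-train] = np.nan
            det[i, j] = img[i, j] > scale * np.nanmean(window)
    return det
```

A real ship detector would also handle image borders and clutter statistics; the adaptive transform in the paper serves precisely to make this thresholding step more robust.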
Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications
NASA Technical Reports Server (NTRS)
Fossum, E.; Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Zhou, Z.;
1994-01-01
This paper describes ongoing research and development of CMOS active pixel image sensors for low cost commercial applications. A number of sensor designs have been fabricated and tested in both p-well and n-well technologies. Major elements in the development of the sensor include on-chip analog signal processing circuits for the reduction of fixed pattern noise, on-chip timing and control circuits and on-chip analog-to-digital conversion (ADC). Recent results and continuing efforts in these areas will be presented.
A new 9T global shutter pixel with CDS technique
NASA Astrophysics Data System (ADS)
Liu, Yang; Ma, Cheng; Zhou, Quan; Wang, Xinyang
2015-04-01
Because it is free of motion blur, the global shutter pixel is widely used in CMOS image sensors for high-speed applications such as machine vision and scientific inspection. In global shutter sensors, all pixel signal information must first be stored in the pixel and then wait for readout. For higher frame rates, very fast operation of the pixel array is needed. There are basically two ways to store the signal in the pixel. One is in the charge domain, such as the one shown in [1], which requires a complicated process during pixel fabrication. The other is in the voltage domain, one example being [2]; that pixel is based on the 4T PPD technology, and the driving of the highly capacitive transfer gate normally limits the speed of the array operation. In this paper we report a new 9T global shutter pixel based on 3-T partially pinned photodiode (PPPD) technology. It incorporates three in-pixel storage capacitors, allowing for correlated double sampling (CDS) and pipelined operation of the array (pixel exposure during the readout of the array). Only two control pulses are needed for all the pixels at the end of exposure, which allows high-speed exposure control.
Low temperature performance of a commercially available InGaAs image sensor
NASA Astrophysics Data System (ADS)
Nakaya, Hidehiko; Komiyama, Yutaka; Kashikawa, Nobunari; Uchida, Tomohisa; Nagayama, Takahiro; Yoshida, Michitoshi
2016-08-01
We report the evaluation results of a commercially available InGaAs image sensor manufactured by Hamamatsu Photonics K.K., which has sensitivity between 0.95 μm and 1.7 μm at room temperature. The sensor format is 128×128 pixels with a 20 μm pitch. It was tested with our original readout electronics and cooled down to 80 K by a mechanical cooler to minimize the dark current. Although the readout noise and dark current were 200 e- and 20 e-/sec/pixel, respectively, we found no serious problems with linearity, wavelength response, or intra-pixel response.
A 4MP high-dynamic-range, low-noise CMOS image sensor
NASA Astrophysics Data System (ADS)
Ma, Cheng; Liu, Yang; Li, Jing; Zhou, Quan; Chang, Yuchun; Wang, Xinyang
2015-03-01
In this paper we present a 4-Megapixel high-dynamic-range, low-dark-noise and low-dark-current CMOS image sensor, which is ideal for high-end scientific and surveillance applications. The pixel design is based on a 4-T PPD structure. During readout of the pixel array, signals are first amplified and then fed to a low-power column-parallel ADC array presented in [1]. Measurement results show that the sensor achieves a dynamic range of 96 dB and a dark noise of 1.47 e- at 24 fps. The dark current is 0.15 e-/pixel/s at -20°C.
The Si/CdTe semiconductor Compton camera of the ASTRO-H Soft Gamma-ray Detector (SGD)
NASA Astrophysics Data System (ADS)
Watanabe, Shin; Tajima, Hiroyasu; Fukazawa, Yasushi; Ichinohe, Yuto; Takeda, Shin`ichiro; Enoto, Teruaki; Fukuyama, Taro; Furui, Shunya; Genba, Kei; Hagino, Kouichi; Harayama, Atsushi; Kuroda, Yoshikatsu; Matsuura, Daisuke; Nakamura, Ryo; Nakazawa, Kazuhiro; Noda, Hirofumi; Odaka, Hirokazu; Ohta, Masayuki; Onishi, Mitsunobu; Saito, Shinya; Sato, Goro; Sato, Tamotsu; Takahashi, Tadayuki; Tanaka, Takaaki; Togo, Atsushi; Tomizuka, Shinji
2014-11-01
The Soft Gamma-ray Detector (SGD) is one of the instrument payloads onboard ASTRO-H, and will cover a wide energy band (60-600 keV) at a background level 10 times better than instruments currently in orbit. The SGD achieves low background by combining a Compton camera scheme with a narrow field-of-view active shield. The Compton camera in the SGD is realized as a hybrid semiconductor detector system which consists of silicon and cadmium telluride (CdTe) sensors. The design of the SGD Compton camera has been finalized and the final prototype, which has the same configuration as the flight model, has been fabricated for performance evaluation. The Compton camera has overall dimensions of 12 cm×12 cm×12 cm, consisting of 32 layers of Si pixel sensors and 8 layers of CdTe pixel sensors surrounded by 2 layers of CdTe pixel sensors. The detection efficiency of the Compton camera reaches about 15% and 3% for 100 keV and 511 keV gamma rays, respectively. The pixel pitch of the Si and CdTe sensors is 3.2 mm, and the signals from all 13,312 pixels are processed by 208 ASICs developed for the SGD. Good energy resolution is afforded by semiconductor sensors and low noise ASICs, and the obtained energy resolutions with the prototype Si and CdTe pixel sensors are 1.0-2.0 keV (FWHM) at 60 keV and 1.6-2.5 keV (FWHM) at 122 keV, respectively. This results in good background rejection capability due to better constraints on Compton kinematics. Compton camera energy resolutions achieved with the final prototype are 6.3 keV (FWHM) at 356 keV and 10.5 keV (FWHM) at 662 keV, which satisfy the instrument requirements for the SGD Compton camera (better than 2%). Moreover, a low intrinsic background has been confirmed by the background measurement with the final prototype.
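The stated requirement (energy resolution better than 2%) can be checked directly against the FWHM figures quoted for the final prototype; a quick arithmetic sketch:

```python
# FWHM energy resolutions quoted for the final SGD Compton camera prototype:
# photon energy (keV) -> measured FWHM (keV)
resolutions = {356.0: 6.3, 662.0: 10.5}

for energy, fwhm in resolutions.items():
    fractional = 100.0 * fwhm / energy
    print(f"{energy:.0f} keV: {fractional:.2f}% FWHM")
    # Instrument requirement for the SGD Compton camera: better than 2%
    assert fractional < 2.0
```

Both points come in under the 2% requirement (about 1.77% at 356 keV and 1.59% at 662 keV).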
Giga-pixel lensfree holographic microscopy and tomography using color image sensors.
Isikman, Serhan O; Greenbaum, Alon; Luo, Wei; Coskun, Ahmet F; Ozcan, Aydogan
2012-01-01
We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm². This constitutes a digital image with ~0.7 billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ±50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm in x, y and z, respectively, creating an effective voxel size of ~0.03 µm³ across a sample volume of ~5 mm³, which is equivalent to >150 billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.
Tests of UFXC32k chip with CdTe pixel detector
NASA Astrophysics Data System (ADS)
Maj, P.; Taguchi, T.; Nakaye, Y.
2018-02-01
The paper presents the performance of the UFXC32K, a hybrid pixel detector readout chip, working with CdTe detectors. The UFXC32K has a pixel pitch of 75 μm and can handle both input signal polarities, which allows it to operate with widely used silicon sensors collecting holes as well as CdTe sensors collecting electrons. This article describes the chip with a focus on the issues associated with high-Z sensor material, namely high leakage currents, slow charge collection and a thick material resulting in increased charge-sharing effects. The measurements were conducted at higher X-ray energies, including the 17.4 keV molybdenum line. The conclusions demonstrate the UFXC32K's suitability for CdTe sensors in high-energy X-ray applications.
NASA Astrophysics Data System (ADS)
Zhang, X.; Wu, B.; Zhang, M.; Zeng, H.
2017-12-01
Rice is one of the main staple foods in East and Southeast Asia, feeding more than half of the world's population on 11% of its cultivated land. Studies of rice can provide direct or indirect information for food security and water resource management. Remote sensing has proven to be the most effective method for monitoring cropland at large scale using temporal and spectral information. Two main kinds of satellite data have been used to map rice: microwave and optical. The feature that distinguishes rice, the main crop of paddy fields, from other crops is the flooding of fields at the planting stage (Figure 1). Microwave satellites can penetrate clouds and are efficient at monitoring this flooding, while vegetation indices from optical satellites can distinguish rice from other vegetation. Google Earth Engine is a cloud-based platform that provides easy access to high-performance computing resources for processing very large geospatial datasets. Google has collected a large volume of remote sensing data from around the world, enabling researchers to apply multi-source remote sensing over large areas. In this work, we map the rice planting area in south China by integrating Landsat-8 OLI, Sentinel-2, and Sentinel-1 Synthetic Aperture Radar (SAR) images. The workflow is shown in Figure 2. First, thresholds on the VH-polarized backscatter from the SAR sensor and on vegetation indices from the optical sensors, including the normalized difference vegetation index (NDVI) and the enhanced vegetation index (EVI), were used to classify the rice extent. The forest and water surface extent maps provided by Earth Engine were used to mask forest and water.
To overcome the "salt and pepper" effect of pixel-based classification at increased spatial resolution, we segmented the optical image and merged the pixel-based classification results with the object-oriented segmentation to obtain the final rice extent map. Finally, time-series analysis provided a peak count for each rice area to determine cropping intensity. Rice ground points from the GVG crowdsourcing smartphone application and rice area statistics from the National Bureau of Statistics were used to validate and evaluate our result.
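The thresholding step described above can be sketched as follows. The VH and NDVI thresholds and the toy scene are hypothetical placeholders, and in practice the SAR and optical observations come from different dates (flooding at planting, closed canopy later):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from optical bands."""
    return (nir - red) / (nir + red)

def rice_mask(vh_db, nir, red, vh_flood_thresh=-20.0, ndvi_veg_thresh=0.4):
    # Flooded paddies at planting: low VH backscatter over smooth water.
    flooded = vh_db < vh_flood_thresh
    # Later in the season the canopy closes and NDVI rises like other vegetation.
    vegetated = ndvi(nir, red) > ndvi_veg_thresh
    # Rice = flooded early AND vegetated later.
    return flooded & vegetated

# Toy 2x2 scene: only the top-left pixel is both flooded early and vegetated later.
vh = np.array([[-22.0, -15.0], [-22.0, -10.0]])
nir = np.array([[0.5, 0.5], [0.2, 0.6]])
red = np.array([[0.1, 0.1], [0.15, 0.1]])
print(rice_mask(vh, nir, red))
```

A real workflow would apply the forest/water masks before thresholding and tune both thresholds per region and season.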
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
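The core of on-sensor compressed sensing is that each value read out is a linear combination of many pixel values, so far fewer values than pixels ever leave the chip. A minimal sketch of the measurement model (the block size, measurement count, and ±1 sensing matrix below are illustrative assumptions, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64          # pixels in one block of the sensor
m = 16          # compressed measurements read out (4x data reduction)

x = np.zeros(n)                 # sparse scene: only a few active pixels
x[[3, 17, 42]] = [1.0, 0.5, 0.8]

# Random +/-1 sensing matrix: each measurement is a signed sum of pixel
# values, which can in principle be formed on-chip before the ADC.
phi = rng.choice([-1.0, 1.0], size=(m, n))
y = phi @ x                     # the only data leaving the sensor
print(y.shape)                  # (16,) instead of (64,)
```

Reconstruction of x from y (e.g. by an l1-minimization solver) happens off-chip, which is where the decoder-side savings mentioned above come in.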
Organic-on-silicon complementary metal-oxide-semiconductor colour image sensors.
Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon
2015-01-12
Complementary metal-oxide-semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor.
Very-large-area CCD image sensors: concept and cost-effective research
NASA Astrophysics Data System (ADS)
Bogaart, E. W.; Peters, I. M.; Kleimann, A. C.; Manoury, E. J. P.; Klaassens, W.; de Laat, W. T. F. M.; Draijer, C.; Frost, R.; Bosiers, J. T.
2009-01-01
A new-generation full-frame 36×48 mm² 48Mp CCD image sensor with vertical anti-blooming for professional digital still camera applications is developed by means of the so-called building block concept. The 48Mp devices are formed by stitching 1k×1k building blocks with 6.0 µm pixel pitch in a 6×8 (h×v) format. This concept allows us to design four large-area (48Mp) and sixty-two basic (1Mp) devices per 6" wafer. The basic image sensor is kept relatively small in order to obtain data from many devices: evaluation of basic parameters such as the image pixel and on-chip amplifier provides statistical data from a limited number of wafers, whereas the large-area devices are evaluated for aspects typical of large-sensor operation and performance, such as charge transport efficiency. Combined with the use of multi-layer reticles, this makes sensor development cost-effective for prototyping. Optimisation of the sensor design and technology has resulted in a pixel charge capacity of 58 ke- and significantly reduced readout noise (12 electrons at 25 MHz pixel rate, after CDS), giving a dynamic range of 73 dB. Microlens and stack optimisation resulted in an excellent angular response that meets the demands of wide-angle photography.
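The quoted 73 dB follows from the standard dynamic-range definition, DR = 20·log10(full-well capacity / read noise), using the figures given above:

```python
import math

full_well = 58_000      # pixel charge capacity (e-)
read_noise = 12         # readout noise after CDS (e-)

dynamic_range_db = 20 * math.log10(full_well / read_noise)
print(f"{dynamic_range_db:.1f} dB")   # ~73.7 dB, consistent with the quoted 73 dB
```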
CMOS Active Pixel Sensors for Low Power, Highly Miniaturized Imaging Systems
NASA Technical Reports Server (NTRS)
Fossum, Eric R.
1996-01-01
The complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology has been developed over the past three years by NASA at the Jet Propulsion Laboratory, and has reached a level of performance comparable to CCDs with greatly increased functionality but at a very reduced power level.
NASA Astrophysics Data System (ADS)
Zang, A.; Anton, G.; Ballabriga, R.; Bisello, F.; Campbell, M.; Celi, J. C.; Fauler, A.; Fiederle, M.; Jensch, M.; Kochanski, N.; Llopart, X.; Michel, N.; Mollenhauer, U.; Ritter, I.; Tennert, F.; Wölfel, S.; Wong, W.; Michel, T.
2015-04-01
The Dosepix detector is a hybrid photon-counting pixel detector based on ideas of the Medipix and Timepix detector family. 1 mm thick cadmium telluride and 300 μm thick silicon were used as sensor materials. The pixel matrix of the Dosepix consists of 16 × 16 square pixels, with 12 rows of (200 μm)² and 4 rows of (55 μm)² sensitive area for the silicon sensor layer, and 16 rows of pixels with 220 μm pixel pitch for CdTe. Besides digital energy integration and photon-counting modes, a novel concept of energy binning is included in the pixel electronics, allowing energy-resolved measurements in 16 energy bins within one acquisition. The possibilities of this detector concept range from applications in personal dosimetry and energy-resolved imaging to quality assurance of medical X-ray sources by analysis of the emitted photon spectrum. In this contribution, the Dosepix detector, its response to X-rays, and spectrum measurements with Si and CdTe sensor layers are presented. Furthermore, a first evaluation was carried out of using the Dosepix detector as a kVp meter, i.e., determining the applied acceleration voltage from measured X-ray tube spectra.
Toward global crop type mapping using a hybrid machine learning approach and multi-sensor imagery
NASA Astrophysics Data System (ADS)
Wang, S.; Le Bras, S.; Azzari, G.; Lobell, D. B.
2017-12-01
Current global scale datasets on agricultural land use do not have sufficient spatial or temporal resolution to meet the needs of many applications. The recent rapid increase in public availability of fine- to moderate-resolution satellite imagery from Landsat OLI and Copernicus Sentinel-2 provides a unique opportunity to improve agricultural land use datasets. This project leverages these new satellite data streams, existing census data, and a novel training approach to develop global, annual maps that indicate the presence of (i) cropland and (ii) specific crops at a 20m resolution. Our machine learning methodology consists of two steps. The first is a supervised classifier trained with explicitly labelled data to distinguish between crop and non-crop pixels, creating a binary mask. For ground truth, we use labels collected by previous mapping efforts (e.g. IIASA's crowdsourced data (Fritz et al. 2015) and AFSIS's geosurvey data) in combination with new data collected manually. The crop pixels output by the binary mask are input to the second step: a semi-supervised clustering algorithm to resolve different crop types and generate a crop type map. We do not use field-level information on crop type to train the algorithm, making this approach scalable spatially and temporally. We instead incorporate size constraints on clusters based on aggregated agricultural land use statistics and other, more generalizable domain knowledge. We employ field-level data from the U.S., Southern Europe, and Eastern Africa to validate crop-to-cluster assignments.
Assessment of the short-term radiometric stability between Terra MODIS and Landsat 7 ETM+ sensors
Choi, Taeyoung; Xiong, Xiaoxiong; Chander, Gyanesh; Angal, A.
2009-01-01
Short-term radiometric stability was evaluated using continuous ETM+ scenes within a single orbit (contact period) and the corresponding MODIS scenes for the four matching solar reflective visible and near-infrared (VNIR) band pairs between the two sensors. The near-simultaneous earth observations were limited by the smaller swath size of ETM+ (183 km) compared to MODIS (2330 km). Two sets of continuous granules for Terra MODIS and Landsat 7 ETM+ were selected and mosaicked based on pixel geolocation information for noncloudy pixels over the African continent. The matching pixel pairs were resampled from a fine to a coarse pixel resolution, and the at-sensor spectral radiance values for a wide dynamic range of the sensors were compared and analyzed, covering various surface types. The following study focuses on radiometric stability analysis from the VNIR band-pairs of ETM+ and MODIS. The Libya-4 desert target was included in the path of this continuous orbit, which served as a verification point between the short-term and the long-term trending results from previous studies. MODTRAN at-sensor spectral radiance simulation is included for a representative desert surface type to evaluate the consistency of the results.
Budde, M.E.; Tappan, G.; Rowland, James; Lewis, J.; Tieszen, L.L.
2004-01-01
The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices.
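The local variance technique can be sketched as a per-pixel z-score against a moving-window mean and standard deviation; the window size and threshold below are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def anomaly_map(integrated_ndvi, window=5, z_thresh=2.0):
    """Flag pixels whose seasonal integrated NDVI deviates from their local
    neighbourhood: +1 positive anomaly, -1 negative anomaly, 0 normal."""
    local_mean = uniform_filter(integrated_ndvi, size=window)
    local_sq_mean = uniform_filter(integrated_ndvi**2, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean**2, 1e-12))
    z = (integrated_ndvi - local_mean) / local_std
    return np.sign(z) * (np.abs(z) > z_thresh)

# Toy field: uniform NDVI with one locally bright pixel.
field = np.ones((11, 11))
field[5, 5] = 2.0
amap = anomaly_map(field)
print(amap[5, 5])   # flagged as a positive anomaly
```

Running this per year and summing `amap != 0` across years would reproduce the "number of anomalous years" summary described above.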
Perceptual Color Characterization of Cameras
Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo
2014-01-01
Color camera characterization, mapping outputs from the camera sensors to an independent color space, such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures. PMID:25490586
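The least-squares baseline that the paper starts from can be sketched in a few lines; the 3×3 matrix and the training patches below are synthetic stand-ins, and the paper's perceptual variant instead searches candidate matrices via spherical sampling rather than solving this closed-form problem:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data: camera RGB responses and measured XYZ values
# for 24 colour patches (one row per patch), generated from a made-up matrix.
true_M = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.uniform(0.0, 1.0, size=(24, 3))
xyz = rgb @ true_M.T + rng.normal(0, 1e-3, size=(24, 3))

# Classical characterization: the 3x3 matrix minimizing the least-squares
# error of rgb @ M.T ~ xyz.
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = M.T
print(np.round(M, 2))   # recovers approximately true_M
```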
Scanning the pressure-induced distortion of fingerprints.
Mil'shtein, S; Doshi, U
2004-01-01
Fingerprint recognition technology is an important part of criminal investigations; it is the basis of some security systems and an important tool of government operations such as the Immigration and Naturalization Service, registration procedures in the Armed Forces, and so forth. After the tragic events of September 11, 2001, the importance of reliable fingerprint recognition technology became even more obvious. In the current study, pressure-induced changes of the distances between ridges of a fingerprint were measured. Using calibrated silicon pressure sensors, we scanned the distribution of pressure across a finger pixel by pixel and also generated maps of the average pressure distribution during fingerprinting. Emulating the fingerprinting procedure employed with widely used optical scanners, we found that on average the distance between ridges decreases by about 20% when a finger is positioned on a scanner. Controlled loading of a finger demonstrated that it is impossible to reproduce the same distribution of pressure across a given finger during repeated fingerprinting procedures.
NASA Astrophysics Data System (ADS)
Khlopenkov, Konstantin; Duda, David; Thieman, Mandana; Minnis, Patrick; Su, Wenying; Bedka, Kristopher
2017-10-01
The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low earth orbit and geostationary satellite imagers have to be co-located with EPIC pixels to provide scene identification in order to select anisotropic directional models needed to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-km resolution. An aggregated rating is employed to incorporate several factors and to select the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into EPIC-view domain by convolving composite pixels with the EPIC point spread function defined with a half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day with a temporal lag of under 2 hours in over 95% of the globe.
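The final averaging step reduces, per EPIC pixel and per cloud phase, to a weighted mean of the composite pixels falling inside the point spread function; a minimal sketch with made-up radiances and PSF weights:

```python
import numpy as np

def psf_weighted_means(radiance, psf_weight, cloud_phase):
    """PSF-weighted mean radiance per cloud phase for one EPIC pixel.
    `cloud_phase` labels each composite pixel (e.g. 1 = water, 2 = ice)."""
    out = {}
    for phase in np.unique(cloud_phase):
        sel = cloud_phase == phase
        out[phase] = np.average(radiance[sel], weights=psf_weight[sel])
    return out

# Four hypothetical composite pixels inside one EPIC footprint:
radiance = np.array([100.0, 110.0, 200.0, 220.0])
weights = np.array([0.4, 0.6, 0.5, 0.5])       # PSF values at each pixel
phase = np.array([1, 1, 2, 2])
print(psf_weighted_means(radiance, weights, phase))
# phase 1: (100*0.4 + 110*0.6)/1.0 = 106.0 ; phase 2: 210.0
```

Keeping the phases separate, as described above, avoids blending radiances from physically different scene types into one average.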
Liu, Yang; Njuguna, Raphael; Matthews, Thomas; Akers, Walter J.; Sudlow, Gail P.; Mondal, Suman; Tang, Rui
2013-01-01
We have developed a near-infrared (NIR) fluorescence goggle system based on the complementary metal–oxide–semiconductor active pixel sensor imaging and see-through display technologies. The fluorescence goggle system is a compact wearable intraoperative fluorescence imaging and display system that can guide surgery in real time. The goggle is capable of detecting fluorescence of indocyanine green solution in the picomolar range. Aided by NIR quantum dots, we successfully used the fluorescence goggle to guide sentinel lymph node mapping in a rat model. We further demonstrated the feasibility of using the fluorescence goggle in guiding surgical resection of breast cancer metastases in the liver in conjunction with NIR fluorescent probes. These results illustrate the diverse potential use of the goggle system in surgical procedures. PMID:23728180
Park, Jong Seok; Aziz, Moez Karim; Li, Sensen; Chi, Taiyun; Grijalva, Sandra Ivonne; Sung, Jung Hoon; Cho, Hee Cheol; Wang, Hua
2018-02-01
This paper presents a fully integrated CMOS multimodality joint sensor/stimulator array with 1024 pixels for real-time holistic cellular characterization and drug screening. The proposed system consists of four pixel groups and four parallel signal-conditioning blocks. Every pixel group contains 16 × 16 pixels, and each pixel includes one gold-plated electrode, four photodiodes, and in-pixel circuits, within a pixel footprint. Each pixel supports real-time extracellular potential recording, optical detection, charge-balanced biphasic current stimulation, and cellular impedance measurement for the same cellular sample. The proposed system is fabricated in a standard 130-nm CMOS process. Rat cardiomyocytes are successfully cultured on-chip. Measured high-resolution optical opacity images, extracellular potential recordings, biphasic current stimulations, and cellular impedance images demonstrate the unique advantages of the system for holistic cell characterization and drug screening. Furthermore, this paper demonstrates the use of optical detection on the on-chip cultured cardiomyocytes to real-time track their cyclic beating pattern and beating rate.
Forest Biomass Mapping from Prism Triplet, Palsar and Landsat Data
NASA Astrophysics Data System (ADS)
Ranson, J.; Sun, G.; Ni, W.
2014-12-01
The loss of sensitivity at higher biomass levels is a common problem in biomass mapping from optical multi-spectral data or radar backscatter data, due to the lack of information on canopy vertical structure. Studies have shown that adding implicit information on forest vertical structure improves the performance of forest biomass mapping from optical reflectance and radar backscattering data. LiDAR, InSAR and stereo imagers are the data sources for obtaining forest structural information. The potential of stereoscopic imagery to provide information on forest vertical structure has drawn attention recently owing to the availability of high-resolution digital stereo imaging from space and advances in digital stereo image processing software. The Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) onboard the Advanced Land Observing Satellite (ALOS) acquired multiple global coverages from June 2006 to April 2011, providing a good data source for regional and global forest studies. In this study, five PRISM triplets acquired on June 14, 2008, August 19 and September 5, 2009; PALSAR dual-pol images acquired on July 12, 2008 and August 30, 2009; LANDSAT 5 TM images acquired on September 5, 2009; and field plot data collected in 2009 and 2010 were used to map forest biomass at a 50 m pixel size over an area of about 4000 km² in Maine, USA (45.2°N, 68.6°W). PRISM triplets were first used to generate point cloud data at a 2 m pixel size, and the average height of points above the NED (National Elevation Dataset) within each 50 m × 50 m pixel was then calculated. Five images were mosaicked and used as canopy height information in the biomass estimation, along with the PALSAR HH and HV radar backscattering and optical reflectance vegetation indices from the L-5 TM data. A small portion of this region was covered by the Land, Vegetation and Ice Sensor (LVIS) in 2009.
The biomass map from the LVIS data was used to evaluate the results of the combined use of PRISM, PALSAR and LANDSAT data. The results show that the canopy height index from the PRISM stereo images significantly improves biomass mapping accuracy, extends the saturation level of biomass, and yields a biomass map comparable with those generated from LVIS data.
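The aggregation from the 2 m photogrammetric height grid to the 50 m biomass pixels is a block mean (25 × 25 fine pixels per coarse pixel); a sketch, assuming the fine grid already holds heights above the DEM:

```python
import numpy as np

def mean_height_50m(height_2m, block=25):
    """Average 2 m-pixel heights above the DEM into 50 m pixels
    (block x block fine pixels per coarse pixel)."""
    h, w = height_2m.shape
    assert h % block == 0 and w % block == 0
    return height_2m.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

# Toy 100 m x 100 m tile of 2 m pixels (values are placeholders, not heights).
fine = np.arange(50 * 50, dtype=float).reshape(50, 50)
coarse = mean_height_50m(fine)
print(coarse.shape)   # (2, 2)
```

The same reshape-and-reduce pattern works for any integer resolution ratio, which is also how fine-to-coarse resampling is commonly done when matching sensors of different pixel sizes.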
The progress of sub-pixel imaging methods
NASA Astrophysics Data System (ADS)
Wang, Hu; Wen, Desheng
2014-02-01
This paper reviews the principles and characteristics of sub-pixel imaging technology, its current state of development at home and abroad, and the latest research progress. Sub-pixel imaging achieves the high resolution of an optical remote sensor with flexible operating modes, and can be miniaturized with no moving parts, making the imaging system well suited to space remote sensors. Its application prospects are extensive, and it is a likely direction for future research in space optical remote sensing.
NASA Astrophysics Data System (ADS)
Berdalovic, I.; Bates, R.; Buttar, C.; Cardella, R.; Egidos Plaja, N.; Hemperek, T.; Hiti, B.; van Hoorne, J. W.; Kugathasan, T.; Mandic, I.; Maneuski, D.; Marin Tobon, C. A.; Moustakas, K.; Musa, L.; Pernegger, H.; Riedler, P.; Riegel, C.; Schaefer, D.; Schioppa, E. J.; Sharma, A.; Snoeys, W.; Solans Sanchez, C.; Wang, T.; Wermes, N.
2018-01-01
The upgrade of the ATLAS tracking detector (ITk) for the High-Luminosity Large Hadron Collider at CERN requires the development of novel radiation-hard silicon sensor technologies. Latest developments in CMOS sensor processing offer the possibility of combining high-resistivity substrates with on-chip high-voltage biasing to achieve a large depleted active sensor volume. We have characterised depleted monolithic active pixel sensors (DMAPS), which were produced in a novel modified imaging process implemented in the TowerJazz 180 nm CMOS process in the framework of the monolithic sensor development for the ALICE experiment. Sensors fabricated in this modified process feature full depletion of the sensitive layer, a sensor capacitance of only a few fF and radiation tolerance up to 10¹⁵ n_eq/cm². This paper summarises the measurements of charge collection properties in beam tests and in the laboratory using radioactive sources and edge-TCT. The results of these measurements show significantly improved radiation hardness obtained for sensors manufactured using the modified process. This has opened the way to the design of two large-scale demonstrators for the ATLAS ITk. To achieve a design compatible with the requirements of the outer pixel layers of the tracker, a charge-sensitive front-end drawing 500 nA from a 1.8 V supply is combined with a fast digital readout architecture. The low-power front-end with a 25 ns time resolution exploits the low sensor capacitance to reduce noise and analogue power, while the implemented readout architectures minimise power by reducing the digital activity.
Contact CMOS imaging of gaseous oxygen sensor array
Daivasagaya, Daisy S.; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C.; Chodavarapu, Vamsy P.; Bright, Frank V.
2014-01-01
We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol–gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors. PMID:24493909
A 256×256 low-light-level CMOS imaging sensor with digital CDS
NASA Astrophysics Data System (ADS)
Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin
2016-10-01
In order to achieve high sensitivity for low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. As the pixel and the column area are highly constrained, it is difficult to implement analog correlated double sampling (CDS) to remove the noise for low-light-level CIS. A digital CDS is therefore adopted, which realizes the subtraction between the reset signal and the pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can be greatly reduced. A 256 × 256 CIS with a CTIA array and digital CDS is implemented in a 0.35 μm CMOS technology. The chip size is 7.7 mm × 6.75 mm, and the pixel size is 15 μm × 15 μm with a fill factor of 20.6%. The measured pixel noise is 24 LSB (RMS) with digital CDS under dark conditions, a 7.8× reduction compared to the image sensor without digital CDS. Running at 7 fps, this low-light-level CIS can capture recognizable images at illumination levels down to 0.1 lux.
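The off-chip subtraction that digital CDS performs can be sketched numerically as follows; the array size, offsets, and noise levels are hypothetical, and the model ignores read noise for clarity.

```python
import numpy as np

# Sketch of the off-chip digital CDS: the reset level and the integrated
# pixel signal are digitized separately and subtracted, cancelling the
# pixel reset (kTC) noise and column-wise offsets frozen at reset time.
# Array sizes, offsets, and noise levels are hypothetical.
rng = np.random.default_rng(1)
rows = cols = 8
column_fpn = rng.normal(0, 20, cols)           # per-column fixed-pattern offset (LSB)
reset_noise = rng.normal(0, 10, (rows, cols))  # kTC noise, frozen at reset
scene = rng.uniform(100, 800, (rows, cols))    # true integrated signal (LSB)

reset_frame = 512 + column_fpn + reset_noise   # first read: reset level
signal_frame = reset_frame - scene             # second read: CTIA output swings from reset

cds = reset_frame - signal_frame               # off-chip subtraction recovers the scene
print(np.allclose(cds, scene))
```

Because both reads share the same frozen reset level and column offset, the subtraction removes them exactly in this idealized model, which is why the measured pixel noise drops so sharply with digital CDS.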
Bathymetric Lidar Mapping of Seagrass Distribution within Redfish Bay State Scientific Area, Texas
NASA Astrophysics Data System (ADS)
Starek, M. J.; Fernandez-Diaz, J. C.; Singhania, A.; Shrestha, R. L.; Gibeaut, J. C.; Su, L.; Reisinger, A. S.; Lord, A.
2013-05-01
Monitoring seagrass habitat, species growth, and population decline is an important environmental initiative for coastal ecosystem sustainability. However, measuring details of seagrass distribution and canopy structure over large areas via remote sensing has proved challenging. Developments in airborne bathymetric light detection and ranging (lidar) provide great potential in this regard. Traditional bathymetric lidar systems have been limited in their ability to map within the shallow water zone (< 1 m) where seagrass is typically present, due to limitations in receiver response and laser pulse length. Emergent short-pulse-width bathymetric lidar sensors and waveform processing algorithms enable depth measurements in shallow water environments not previously accessible. This 3D information on the benthic layer can be applied to extract metrics about the seagrass canopy. On September 10, 2012, researchers with the National Center for Airborne Laser Mapping (NCALM) at the University of Houston (UH) and the Coastal and Marine Geospatial Sciences Lab (CMGL) of the Harte Research Institute at Texas A&M University-Corpus Christi conducted a coordinated airborne and ground-based survey of the Redfish Bay State Scientific Area as part of a collaborative study to investigate the capabilities of bathymetric lidar and hyperspectral imaging for seagrass mapping (standalone and in fusion). Redfish Bay, located along the middle Texas coast of the Gulf of Mexico, is a state scientific area designated for the purposes of protecting and studying the native seagrasses. For this survey, UH acquired high resolution (2.5 shots/m²) very-shallow-water bathymetry data using their new lidar system, the Optech Aquarius Green (532 nm) system. In a separate flight, UH collected two sets of hyperspectral imaging data (1.2-m pixel resolution and 72 bands, and 0.6-m pixel resolution and 36 bands) with their CASI 1500 hyperspectral sensor.
For this survey the sensors were mounted on a PA-31 Chieftain aircraft. The ground survey was conducted by CMGL. The team used an airboat to collect in-situ radiometer measurements of sky irradiance and surface water reflectance at different locations in the bay. The team also collected water samples, GPS position, and depth. A follow-up survey was conducted to acquire ground-truth data of benthic type at over 80 locations within the bay. In this work, we will present initial results on the seagrass mapping project. Focus will be on the bathymetric lidar data collection component. Details on the resultant data characteristics, accuracy, and its applicability for extracting metrics on seagrass canopy distribution and structure within the shallow bay will be presented.
Analysis of Multipath Pixels in SAR Images
NASA Astrophysics Data System (ADS)
Zhao, J. W.; Wu, J. C.; Ding, X. L.; Zhang, L.; Hu, F. M.
2016-06-01
As the received radar signal is the sum of signal contributions overlaid in a single pixel regardless of the travel path, the multipath effect must be treated seriously: multiple-bounce returns are added to direct scatter echoes, which leads to ghost scatterers. Most existing solutions to multipath attempt to recover the signal propagation path. To facilitate the signal propagation simulation, many inputs that determine the strength of the radar signal backscattered to the SAR sensor, such as sensor parameters, the geometry of the objects (shape, location, orientation, mutual position between adjacent buildings) and the physical parameters of the surface (roughness, correlation length, permittivity), must be specified in advance. However, it is not practical to obtain a highly detailed object model of an unfamiliar area by field survey, as this is laborious and time-consuming. In this paper, SAR imaging simulation based on RaySAR is conducted first, aiming at a basic understanding of multipath effects and at providing a basis for further comparison. Besides the pre-imaging simulation, the post-imaging product, i.e. the radar images, is also taken into consideration. Both COSMO-SkyMed ascending and descending SAR images of Lupu Bridge in Shanghai are used for the experiment. As a result, the reflectivity map and signal distribution map of different bounce levels are simulated and validated against a 3D real model. Statistical indexes such as phase stability, mean amplitude, amplitude dispersion, coherence and mean-sigma ratio in the case of layover are analyzed in combination with the RaySAR output.
Chemiresistive Graphene Sensors for Ammonia Detection.
Mackin, Charles; Schroeder, Vera; Zurutuza, Amaia; Su, Cong; Kong, Jing; Swager, Timothy M; Palacios, Tomás
2018-05-09
The primary objective of this work is to demonstrate a novel sensor system as a convenient vehicle for scaled-up repeatability and kinetic analysis of a pixelated testbed. This work presents a sensor system capable of measuring hundreds of functionalized graphene sensors in a rapid and convenient fashion. The sensor system makes use of a novel array architecture requiring only one sensor per pixel and no selector transistor. The sensor system is employed specifically for the evaluation of Co(tpfpp)ClO4 functionalization of graphene sensors for the detection of ammonia, as an extension of previous work. Co(tpfpp)ClO4-treated graphene sensors were found to provide a 4-fold increase in ammonia sensitivity over pristine graphene sensors. Sensors were also found to exhibit excellent selectivity over interfering compounds such as water and common organic solvents. The ability to monitor a large sensor array with 160 pixels provides insights into performance variations and reproducibility, critical factors in the development of practical sensor systems. All sensors exhibit the same linearly related responses, with variations in response exhibiting Gaussian distributions, a key finding for variation modeling and quality engineering purposes. The mean correlation coefficient between sensor responses was found to be 0.999, indicating highly consistent sensor responses and excellent reproducibility of the Co(tpfpp)ClO4 functionalization. A detailed kinetic model is developed to describe the sensor response profiles. The model consists of two adsorption mechanisms, one reversible and one irreversible, and is shown capable of fitting experimental data with a mean percent error of 0.01%.
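A minimal sketch of a two-mechanism adsorption response of the kind described, assuming a standard reversible Langmuir-type term plus an irreversible filling term; the functional form, amplitudes, and rate constants are illustrative, not the paper's fitted model.

```python
import numpy as np

# Sketch of a two-mechanism adsorption response: a reversible site that
# relaxes exponentially to a Langmuir-type equilibrium, plus an
# irreversible site that only fills. The functional form, amplitudes,
# and rate constants are illustrative, not the paper's fitted model.
def response(t, a_rev, k_on, k_off, a_irr, k_irr):
    theta_rev = k_on / (k_on + k_off) * (1.0 - np.exp(-(k_on + k_off) * t))
    theta_irr = 1.0 - np.exp(-k_irr * t)          # never desorbs
    return a_rev * theta_rev + a_irr * theta_irr

t = np.linspace(0.0, 100.0, 201)
r = response(t, a_rev=2.0, k_on=0.05, k_off=0.05, a_irr=1.0, k_irr=0.01)
print(round(float(r[-1]), 3))
```

The reversible term saturates at its equilibrium coverage while the irreversible term keeps rising slowly, which is the qualitative signature a fit with two mechanisms can capture.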
Resolution-enhanced Mapping Spectrometer
NASA Technical Reports Server (NTRS)
Kumer, J. B.; Aubrun, J. N.; Rosenberg, W. J.; Roche, A. E.
1993-01-01
A familiar mapping spectrometer implementation utilizes two-dimensional detector arrays with spectral dispersion along one direction and spatial information along the other. Spectral images are formed by spatially scanning across the scene (i.e., push-broom scanning). For imaging grating and prism spectrometers, the slit is perpendicular to the spatial scan direction. For spectrometers utilizing linearly variable focal-plane-mounted filters, the spatial scan direction is perpendicular to the direction of spectral variation. These spectrometers share the common limitation that the number of spectral resolution elements is given by the number of pixels along the spectral (or dispersive) direction. Resolution enhancement by first passing the light input to the spectrometer through a scanned etalon or Michelson interferometer is discussed. Thus, while a detector element is scanned through a spatial resolution element of the scene, it is also temporally sampled. The analysis for all the pixels in the dispersive direction is addressed, and several specific examples are discussed. The alternate use of a Michelson interferometer for the same enhancement purpose is also discussed. Hardware systems suitable for weight-constrained deep space missions were developed, including actuators, sensors, and electronics, such that low-resolution etalons with the performance required for implementation would weigh less than one pound.
Influence of pansharpening techniques in obtaining accurate vegetation thematic maps
NASA Astrophysics Data System (ADS)
Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier
2016-10-01
In recent decades, there has been a decline in natural resources, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed to obtain higher-quality information and reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of this study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms, and to evaluate the thematic maps generated in order to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain), Teide National Park, was chosen, and WorldView-2 high resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF-based, Wavelet 'à trous' and Weighted Wavelet 'à trous' through Fractal Dimension Maps) were chosen to improve the data quality with the goal of analyzing the vegetation classes. Different classification algorithms were then applied using pixel-based and object-based approaches, and an accuracy assessment of the resulting thematic maps was performed. The highest classification accuracy was obtained by applying a Support Vector Machine classifier with an object-based approach to the Weighted Wavelet 'à trous' through Fractal Dimension Maps fused image. Finally, we highlight the difficulty of classification in the Teide ecosystem due to its heterogeneity and the small size of the species; it is therefore important to obtain accurate thematic maps for further studies in the management and conservation of natural resources.
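The accuracy assessment step mentioned above typically computes overall accuracy and Cohen's kappa from a confusion matrix of reference versus classified labels; the sketch below shows that computation on an illustrative 3-class matrix (not the study's data).

```python
import numpy as np

# Sketch of the thematic-map accuracy assessment step: overall accuracy
# and Cohen's kappa from a confusion matrix of reference vs. classified
# labels. The 3-class matrix below is illustrative, not the study's data.
cm = np.array([[50,  2,  3],
               [ 4, 40,  6],
               [ 2,  5, 38]])
n = cm.sum()
overall = np.trace(cm) / n                                  # overall accuracy
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
kappa = (overall - expected) / (1 - expected)               # Cohen's kappa
print(f"OA={overall:.3f}, kappa={kappa:.3f}")
```

Kappa discounts the agreement expected by chance from the class marginals, which is why it is commonly reported alongside overall accuracy when comparing thematic maps from different fusion methods.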
Novel Si-Ge-C Superlattices for More than Moore CMOS
2016-03-31
diodes can be entirely formed by epitaxial growth, CMOS Active Pixel Sensors can be made with Fully-Depleted SOI CMOS. One important advantage of... a NMOS Transfer Gate (TG), which could be part of a 4T pixel APS. PPDs are preferred in CMOS image sensors for the ability of the pinning layer to... than Moore" with the creation of active photonic devices monolithically integrated with CMOS. Applications include Multispectral CMOS Image Sensors
The first bump-bonded pixel detectors on CVD diamond
NASA Astrophysics Data System (ADS)
Adam, W.; Bauer, C.; Berdermann, E.; Bergonzo, P.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fizzotti, F.; Foulon, F.; Friedl, M.; Gan, K. K.; Gheeraert, E.; Grigoriev, E.; Hallewell, G.; Hall-Wilton, R.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Karl, C.; Kass, R.; Krammer, M.; Logiudice, A.; Lu, R.; Manfredi, P. F.; Manfredotti, C.; Marshall, R. D.; Meier, D.; Mishina, M.; Oh, A.; Palmieri, V. G.; Pan, L. S.; Peitz, A.; Pernicka, M.; Pirollo, S.; Polesello, P.; Pretzl, K.; Re, V.; Riester, J. L.; Roe, S.; Roff, D.; Rudge, A.; Schnetzer, S.; Sciortino, S.; Speziali, V.; Stelzer, H.; Steuerer, J.; Stone, R.; Tapper, R. J.; Tesarek, R.; Trawick, M.; Trischuk, W.; Turchetta, R.; Vittone, E.; Wagner, A.; Walsh, A. M.; Wedenig, R.; Weilhammer, P.; Zeuner, W.; Ziock, H.; Zoeller, M.; Charles, E.; Ciocio, A.; Dao, K.; Einsweiler, K.; Fasching, D.; Gilchriese, M.; Joshi, A.; Kleinfelder, S.; Milgrome, O.; Palaio, N.; Richardson, J.; Sinervo, P.; Zizka, G.; RD42 Collaboration
1999-11-01
Diamond is a nearly ideal material for detecting ionising radiation. Its outstanding radiation hardness, fast charge collection and low leakage current allow it to be used in high radiation environments. These characteristics make diamond sensors particularly appealing for use in the next generation of pixel detectors. Over the last year, the RD42 collaboration has worked with several groups that have developed pixel readout electronics in order to optimise diamond sensors for bump-bonding. This effort resulted in an operational diamond pixel sensor that was tested in a pion beam. We demonstrate that greater than 98% of the channels were successfully bump-bonded and functioning. The device shows good overall hit efficiency as well as clear spatial hit correlation to tracks measured in a silicon reference telescope. A position resolution of 14.8 μm was observed, consistent with expectations given the detector pitch.
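The quoted position resolution can be sanity-checked against the standard binary-readout expectation of pitch/√12; the 50 μm pitch used below is an assumption for illustration, as the abstract does not state the pixel pitch.

```python
import math

# Sanity check of the quoted resolution against the binary-readout
# expectation of pitch / sqrt(12). The 50 um pitch is an assumption for
# illustration; the abstract does not state the pixel pitch.
pitch_um = 50.0
expected_rms_um = pitch_um / math.sqrt(12)
print(f"expected binary resolution: {expected_rms_um:.1f} um")
```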
An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability.
Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U
2015-03-06
An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-powered operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques, such as reset and select boosting, have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array at a 1 V supply and a 5 fps frame rate. Up to 30 μW of power can be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency, allowing energy-autonomous operation at a 72.5% duty cycle.
A fast image encryption algorithm based on only blocks in cipher text
NASA Astrophysics Data System (ADS)
Wang, Xing-Yuan; Wang, Qian
2014-03-01
In this paper, a fast image encryption algorithm is proposed in which shuffling and diffusion are performed simultaneously. The cipher-text image is divided into blocks of k × k pixels, while the pixels of the plain-text are scanned one by one. Four logistic maps are used to generate the encryption key stream and the new location of each plain-image pixel in the cipher image, including the row and column of the block to which the pixel belongs and the position within the block where the pixel is placed. After each pixel is encrypted, the initial conditions of the logistic maps are changed according to the encrypted pixel's value; after each row of the plain image is encrypted, the initial condition is also changed by the skew tent map. Finally, it is shown that this algorithm has a fast speed, a large key space, and good properties in withstanding differential attacks, statistical analysis, and known-plaintext and chosen-plaintext attacks.
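A minimal sketch of the chaotic primitive the scheme builds on: a logistic-map keystream combined with plaintext bytes by XOR. The map parameter, byte quantization, and XOR step are illustrative simplifications of the paper's shuffle-and-diffuse design, not its exact algorithm.

```python
# Minimal sketch of the chaotic primitive the scheme builds on: a logistic-
# map keystream combined with plaintext bytes by XOR. The map parameter,
# byte quantization, and XOR step are illustrative simplifications of the
# paper's shuffle-and-diffuse design, not its exact algorithm.
def logistic_keystream(x0, r=3.99, n=16):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)            # logistic map iteration
        out.append(int(x * 256) % 256)   # quantize chaotic state to a key byte
    return out

plain = bytes(range(16))
ks = logistic_keystream(0.3141592)
cipher = bytes(p ^ k for p, k in zip(plain, ks))
decrypted = bytes(c ^ k for c, k in zip(cipher, ks))  # XOR is its own inverse
print(decrypted == plain)
```

Because the map is extremely sensitive to the initial condition x0, a tiny key change produces an entirely different keystream, which is the property such schemes rely on for their key space and differential-attack resistance.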
JPL CMOS Active Pixel Sensor Technology
NASA Technical Reports Server (NTRS)
Fossum, E. R.
1995-01-01
This paper will present the JPL-developed complementary metal- oxide-semiconductor (CMOS) active pixel sensor (APS) technology. The CMOS APS has achieved performance comparable to charge coupled devices, yet features ultra low power operation, random access readout, on-chip timing and control, and on-chip analog to digital conversion. Previously published open literature will be reviewed.
1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.
Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi
2015-04-01
Optical flow sensors have been a long running theme in neuromorphic vision sensors which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed towards miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels that have local gain control and adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
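Global motion estimation in the spirit of the image interpolation algorithm (I2A) can be sketched as a least-squares problem: the frame difference is regressed onto the spatial gradients of the reference frame to recover a sub-pixel 2D translation. This is a simplified illustration on a smooth synthetic scene, not the sensor's DSP implementation.

```python
import numpy as np

# Global translation estimation in the spirit of the image interpolation
# algorithm (I2A): the frame difference is regressed onto the spatial
# gradients of the reference frame to recover a sub-pixel 2D shift.
# Simplified sketch on a smooth synthetic scene, not the DSP firmware.
x = np.arange(32)
X, Y = np.meshgrid(x, x)

def scene(dx, dy):
    return np.sin(2 * np.pi * (X - dx) / 32) + np.cos(2 * np.pi * (Y - dy) / 32)

ref = scene(0.0, 0.0)
cur = scene(0.3, -0.2)                 # true shift: +0.3 px in x, -0.2 px in y

gx = (np.roll(ref, -1, axis=1) - np.roll(ref, 1, axis=1)) / 2  # d/dx
gy = (np.roll(ref, -1, axis=0) - np.roll(ref, 1, axis=0)) / 2  # d/dy
A = np.column_stack([gx.ravel(), gy.ravel()])
est, *_ = np.linalg.lstsq(A, (ref - cur).ravel(), rcond=None)
print(est)
```

The recovered shift is accurate to a few hundredths of a pixel here; on a DSP the same normal-equation solve costs only a handful of multiply-accumulates per pixel, consistent with the 12 instructions per pixel the paper reports.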
NASA Astrophysics Data System (ADS)
Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie
2016-10-01
Water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging because urban water bodies are mostly small and spectral confusion is widespread between water and the complex features of the urban environment. The water index is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing the urban environment at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) to develop an automatic technique for extracting land-water mixed pixels by water index; (2) to derive the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive, iterative search for the optimal neighboring land pixel, respectively; and (3) to apply a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatterplot smoothing is first applied to the original histogram curve of the water index (WI) image. The Otsu threshold is then used as a starting point to select land-water pixels, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the mixed land-water pixels for water fraction estimation at the subpixel level.
Under the assumption that the endmember signature of a target pixel should be more similar to adjacent pixels due to spatial dependence, the water and land endmembers are determined from neighboring pure land or pure water pixels within a given distance. To obtain the most representative endmembers for SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. According to the spectral similarity within a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel level and subpixel level were chosen. The results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and that WISMA achieved the best performance in water mapping under a comprehensive analysis of different accuracy evaluation indexes (RMSE and SE).
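For two endmembers, the linear unmixing step reduces to a one-parameter least-squares fit; the sketch below recovers a water fraction from a synthetic mixed spectrum under assumed endmember reflectances (the band values are hypothetical, not from the study).

```python
import numpy as np

# The two-endmember case of the linear unmixing step: a mixed pixel is
# modeled as f * water + (1 - f) * land and f is recovered by least
# squares. The 4-band endmember reflectances are hypothetical.
water = np.array([0.06, 0.04, 0.02, 0.01])  # water endmember
land  = np.array([0.10, 0.15, 0.30, 0.35])  # land endmember

def water_fraction(pixel, water, land):
    d = water - land                   # pixel - land = f * (water - land)
    f = float(np.dot(pixel - land, d) / np.dot(d, d))
    return min(max(f, 0.0), 1.0)       # clip to the physical range [0, 1]

mixed = 0.7 * water + 0.3 * land
print(round(water_fraction(mixed, water, land), 2))
```

The quality of the recovered fraction depends directly on how representative the two endmember spectra are, which is the motivation for the adaptive neighborhood-based endmember selection described above.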
Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng
2013-01-01
Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm², showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels operated successfully, at a high response speed and with relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.
Semiconductor hybrid pixel detectors often consist of a pixelated sensor layer bump bonded to a matching pixelated readout integrated circuit (ROIC). The sensor can range from high-resistivity Si to III-V materials, whereas a Si CMOS process is typically used to manufacture the ROIC. Independent device physics and electronic design automation (EDA) tools, with significantly different solvers, are used to determine sensor characteristics and to verify the functional performance of ROICs, respectively. Some physics solvers provide the capability of transferring data to the EDA tool. However, single-pixel transient simulations are either not feasible due to convergence difficulties or are prohibitively long. A simplified sensor model, which includes a current pulse in parallel with a detector equivalent capacitor, is often used; even then, SPICE-type top-level (entire array) simulations range from days to weeks. In order to analyze detector deficiencies for a particular scientific application, accurately defined transient behavioral models of all the functional blocks are required. Furthermore, various simulations of the entire array, such as transient, noise, Monte Carlo, and inter-pixel effects, need to be performed within a reasonable time frame without trading off accuracy. The sensor and the analog front-end can be modeled using a real-number modeling language as complex mathematical functions, or detailed data can be saved to text files, for further top-level digital simulations. Parasitically aware digital timing is extracted in standard delay format (SDF) from the pixel digital back-end layout as well as the periphery of the ROIC. For any given input, detector-level worst-case and best-case simulations are performed using a Verilog simulation environment to determine the output. Each top-level transient simulation takes no more than 10-15 minutes.
The impact of changing key parameters such as sensor Poissonian shot noise, analog front-end bandwidth, and jitter due to clock distribution can be accurately analyzed to determine ROIC architectural viability and bottlenecks. Hence, the impact of the detector parameters on the scientific application can be studied.
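A real-number behavioral model of the analog front-end, of the kind described above, can be as simple as a closed-form shaping function evaluated instead of a transistor-level simulation; the CR-RC shape, time constant, and charge gain below are hypothetical placeholders, not the paper's circuit values.

```python
import numpy as np

# Sketch of a real-number behavioral model of the analog front-end: the
# collected charge is mapped to a closed-form CR-RC pulse instead of a
# transistor-level simulation. The shape, time constant, and charge gain
# are hypothetical placeholders, not the paper's circuit values.
def crrc_pulse(t, q, tau=250e-9, gain=5e13):
    # peak-normalized CR-RC shape (t/tau) * exp(1 - t/tau), peaking at t = tau
    shape = np.where(t >= 0, (t / tau) * np.exp(1 - t / tau), 0.0)
    return gain * q * shape            # output voltage for charge q (coulombs)

t = np.linspace(0.0, 2e-6, 2001)
v = crrc_pulse(t, q=1.6e-15)           # ~10 ke- of collected charge
print(f"peak {v.max() * 1e3:.1f} mV at {t[np.argmax(v)] * 1e9:.0f} ns")
```

Evaluating such a closed-form pulse is essentially free compared with a SPICE transient, which is how full-array behavioral simulations stay in the minutes rather than days-to-weeks regime.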
Fully 3D-Integrated Pixel Detectors for X-Rays
Deptuch, Grzegorz W.; Carini, Gabriella; Enquist, Paul; ...
2016-01-01
The vertically integrated photon imaging chip (VIPIC1) pixel detector is a stack consisting of a 500-μm-thick silicon sensor, a two-tier 34-μm-thick integrated circuit, and a host printed circuit board (PCB). The integrated circuit tiers were bonded using direct bonding technology with copper, and each tier features 1-μm-diameter through-silicon vias that were used for connections to the sensor on one side and to the host PCB on the other side. The 80-μm-pixel-pitch sensor was bonded to the integrated circuit using direct bonding technology with nickel. The stack was mounted on the board using Sn-Pb balls placed on a 320-μm pitch, yielding an entirely wire-bond-less structure. The analog front-end features a pulse response peaking at below 250 ns, and the power consumption per pixel is 25 μW. We successfully completed the 3-D integration, as reported here. Additionally, all pixels in the 64 × 64 matrix were responding on well-bonded devices. Correct operation of the sparsified readout, allowing a single 153-ns bunch timing resolution, was confirmed in tests on a synchrotron beam of 10-keV X-rays. An equivalent noise charge of 36.2 e⁻ rms and a conversion gain of 69.5 μV/e⁻, with 2.6 e⁻ rms and 2.7 μV/e⁻ rms pixel-to-pixel variations, respectively, were measured.
10000 pixels wide CMOS frame imager for earth observation from a HALE UAV
NASA Astrophysics Data System (ADS)
Delauré, B.; Livens, S.; Everaerts, J.; Kleihorst, R.; Schippers, Gert; de Wit, Yannick; Compiet, John; Banachowicz, Bartosz
2009-09-01
MEDUSA is a lightweight high-resolution camera designed to be operated from a solar-powered Unmanned Aerial Vehicle (UAV) flying at stratospheric altitudes. The instrument is a technology demonstrator within the Pegasus program and targets applications such as crisis management and cartography. A special wide-swath CMOS imager has been developed by Cypress Semiconductor Corporation Belgium to meet the specific sensor requirements of MEDUSA. The CMOS sensor has a stitched design comprising a panchromatic and a color sensor on the same die. Each sensor consists of 10000 × 1200 square pixels (5.5 μm size, novel 6T architecture) with micro-lenses. The exposure is performed by means of a high-efficiency snapshot shutter. The sensor is able to operate at a rate of 30 fps in full-frame readout. Due to the novel pixel design, the sensor has low dark leakage of the memory elements (PSNL) and low parasitic light sensitivity (PLS), while still maintaining a relatively high quantum efficiency (QE) and a fill factor (FF) of over 65%. It features a Modulation Transfer Function (MTF) higher than 60% at the Nyquist frequency in both X and Y directions. The measured optical/electrical crosstalk (expressed as MTF) of this 5.5 μm pixel is state-of-the-art. These properties make it possible to acquire sharp images also in low-light conditions.
NASA Astrophysics Data System (ADS)
Seo, Sang-Ho; Kim, Kyoung-Do; Kong, Jae-Sung; Shin, Jang-Kyoo; Choi, Pyung
2007-02-01
In this paper, a new CMOS image sensor is presented, which uses a PMOSFET-type photodetector with a transfer gate and has a high and variable sensitivity. The proposed CMOS image sensor has been fabricated using a 0.35 μm 2-poly 4-metal standard CMOS technology and is composed of a 256 × 256 array of 7.05 × 7.10 μm pixels. The unit pixel has the configuration of a pseudo 3-transistor active pixel sensor (APS) with the PMOSFET-type photodetector with a transfer gate, which performs the function of a conventional 4-transistor APS. The generated photocurrent is controlled by the transfer gate of the PMOSFET-type photodetector. The maximum responsivity of the photodetector is larger than 1.0 × 10³ A/W without any optical lens. The fabricated 256 × 256 CMOS image sensor exhibits a good response to low-level illumination as low as 5 lux.
NASA Astrophysics Data System (ADS)
Ceresa, D.; Marchioro, A.; Kloukinas, K.; Kaplon, J.; Bialas, W.; Re, V.; Traversi, G.; Gaioni, L.; Ratti, L.
2014-11-01
The CMS tracker at HL-LHC is required to provide prompt information on particles with high transverse momentum to the central Level 1 trigger. For this purpose, the innermost part of the outer tracker is based on a combination of a pixelated sensor with a short strip sensor, the so-called Pixel-Strip module (PS). The readout of these sensors is carried out by distinct ASICs, the Strip Sensor ASIC (SSA), for the strip layer, and the Macro Pixel ASIC (MPA) for the pixel layer. The processing of the data directly on the front-end module represents a design challenge due to the large data volume (30720 pixels and 1920 strips per module) and the limited power budget. This is the reason why several studies have been carried out to find the best compromise between ASICs performance and power consumption. This paper describes the current status of the MPA ASIC development where the logic for generating prompt information on particles with high transverse momentum is implemented. An overview of the readout method is presented with particular attention on the cluster reduction, position encoding and momentum discrimination logic. Concerning the architectural studies, a software test bench capable of reading physics Monte-Carlo generated events has been developed and used to validate the MPA design and to evaluate the MPA performance. The MPA-Light is scheduled to be submitted for fabrication this year and will include the full analog functions and a part of the digital logic of the final version in order to qualify the chosen VLSI technology for the analog front-end, the module assembly and the low voltage digital supply.
Characterisation of capacitively coupled HV/HR-CMOS sensor chips for the CLIC vertex detector
NASA Astrophysics Data System (ADS)
Kremastiotis, I.
2017-12-01
The capacitive coupling between an active sensor and a readout ASIC has been considered in the framework of the CLIC vertex detector study. The CLICpix Capacitively Coupled Pixel Detector (C3PD) is a High-Voltage CMOS sensor chip produced in a commercial 180 nm HV-CMOS process for this purpose. The sensor was designed to be connected to the CLICpix2 readout chip. It therefore matches the dimensions of the readout chip, featuring a matrix of 128 × 128 square pixels with 25 μm pitch. The sensor chip has been produced with the standard value for the substrate resistivity (~20 Ωcm) and it has been characterised in standalone testing mode, before receiving and testing capacitively coupled assemblies. The standalone measurement results show a rise time of ~20 ns for a power consumption of 5 μW/pixel. Production of the C3PD HV-CMOS sensor chip with higher substrate resistivity wafers (~20, 80, 200 and 1000 Ωcm) is foreseen. The expected benefits of the higher substrate resistivity will be studied using future assemblies with the readout chip.
Crosstalk quantification, analysis, and trends in CMOS image sensors.
Blockstein, Lior; Yadid-Pecht, Orly
2010-08-20
Pixel crosstalk (CTK) consists of three components, optical CTK (OCTK), electrical CTK (ECTK), and spectral CTK (SCTK). The CTK has been classified into two groups: pixel-architecture dependent and pixel-architecture independent. The pixel-architecture-dependent CTK (PADC) consists of the sum of two CTK components, i.e., the OCTK and the ECTK. This work presents a short summary of a large variety of methods for PADC reduction. Following that, this work suggests a clear quantifiable definition of PADC. Three complementary metal-oxide-semiconductor (CMOS) image sensors based on different technologies were empirically measured, using a unique scanning technology, the S-cube. The PADC is analyzed, and technology trends are shown.
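One simple way to make a crosstalk figure quantifiable, in the spirit of the PADC definition discussed above, is to measure how much of the signal from a spot centered on one pixel ends up in its neighbors. This is an illustrative metric of my own, not necessarily the definition the paper proposes, and the response map below is hypothetical:

```python
def padc_percent(responses):
    """Percentage of total signal from a centered point illumination that
    leaks into the 8 neighbors of the target pixel (illustrative metric)."""
    total = sum(sum(row) for row in responses)
    centre = responses[1][1]
    return 100.0 * (total - centre) / total

# hypothetical 3x3 response map from scanning a small spot over one pixel
spot = [[1, 3, 1],
        [3, 84, 3],
        [1, 3, 1]]
crosstalk = padc_percent(spot)   # 16.0 for this example map
```

A scanning setup like the S-cube mentioned above produces exactly this kind of per-pixel response map, from which such a ratio follows directly.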
The INFN-FBK pixel R&D program for HL-LHC
NASA Astrophysics Data System (ADS)
Meschini, M.; Dalla Betta, G. F.; Boscardin, M.; Calderini, G.; Darbo, G.; Giacomini, G.; Messineo, A.; Ronchin, S.
2016-09-01
We report on the ATLAS and CMS joint research activity, which is aiming at the development of new, thin silicon pixel detectors for the Large Hadron Collider Phase-2 detector upgrades. This R&D is performed under special agreement between Istituto Nazionale di Fisica Nucleare and FBK foundation (Trento, Italy). New generations of 3D and planar pixel sensors with active edges are being developed in the R&D project, and will be fabricated at FBK. A first planar pixel batch, which was produced by the end of year 2014, will be described in this paper. First clean room measurement results on planar sensors obtained before and after neutron irradiation will be presented.
Evaluating video digitizer errors
NASA Astrophysics Data System (ADS)
Peterson, C.
2016-01-01
Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.
A 45 nm Stacked CMOS Image Sensor Process Technology for Submicron Pixel
Takahashi, Seiji; Huang, Yi-Min; Sze, Jhy-Jyi; Wu, Tung-Ting; Guo, Fu-Sheng; Hsu, Wei-Cheng; Tseng, Tung-Hsiung; Liao, King; Kuo, Chin-Chia; Chen, Tzu-Hsiang; Chiang, Wei-Chieh; Chuang, Chun-Hao; Chou, Keng-Yu; Chung, Chi-Hsien; Chou, Kuo-Yu; Tseng, Chien-Hsien; Wang, Chuan-Joung; Yaung, Dun-Nien
2017-01-01
A submicron pixel’s light and dark performance were studied by experiment and simulation. An advanced node technology incorporated with a stacked CMOS image sensor (CIS) is promising in that it may enhance performance. In this work, we demonstrated a low dark current of 3.2 e−/s at 60 °C, an ultra-low read noise of 0.90 e−·rms, a high full well capacity (FWC) of 4100 e−, and blooming of 0.5% in 0.9 μm pixels with a pixel supply voltage of 2.8 V. In addition, the simulation study result of 0.8 μm pixels is discussed. PMID:29206162
Smart image sensors: an emerging key technology for advanced optical measurement and microsystems
NASA Astrophysics Data System (ADS)
Seitz, Peter
1996-08-01
Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components.
It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.
A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm
You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei
2011-01-01
With the current widespread interest in the development and applications of micro/nanosatellites, there is a need for small, high-accuracy satellite attitude determination systems, because the star trackers widely used in large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A Sun sensor plus magnetometer has proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, highly reliable, high-accuracy sun sensor is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS detector placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features of an APS sun sensor. When sunlight illuminates the sensor, a sun-spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels without increasing weight or power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, whereas the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
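The basic geometry of an aperture-mask sun sensor (spot centroid shift over mask gap gives the sun angle) can be sketched as follows. The centroiding here is a plain intensity-weighted mean, not the FMMS algorithm of the paper, and the pitch and mask-gap values are assumed for illustration:

```python
import math

PIXEL_PITCH_UM = 6.0   # assumed detector pixel pitch [um]
MASK_GAP_UM = 500.0    # assumed mask-to-detector distance [um]

def centroid(img):
    """Intensity-weighted centroid (x, y) of a spot image, in pixels."""
    s = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            s += v
            sx += v * x
            sy += v * y
    return sx / s, sy / s

def sun_angle_deg(cx, cx0):
    """Sun angle from the centroid shift vs. the normal-incidence spot
    position, via tan(angle) = lateral shift / mask gap."""
    dx_um = (cx - cx0) * PIXEL_PITCH_UM
    return math.degrees(math.atan2(dx_um, MASK_GAP_UM))

spot = [[0, 1, 0],
        [1, 4, 1],
        [0, 1, 0]]
cx, cy = centroid(spot)   # (1.0, 1.0) for this symmetric spot
```

With 36 apertures, averaging 36 such centroids is what pushes the effective accuracy toward the 0.01-pixel level and tolerates a few dead apertures.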
Large Format CMOS-based Detectors for Diffraction Studies
NASA Astrophysics Data System (ADS)
Thompson, A. C.; Nix, J. C.; Achterkirchen, T. G.; Westbrook, E. M.
2013-03-01
Complementary Metal Oxide Semiconductor (CMOS) devices are rapidly replacing CCD devices in many commercial and medical applications. Recent developments in CMOS fabrication have improved their radiation hardness, device linearity, readout noise and thermal noise, making them suitable for x-ray crystallography detectors. Large-format (e.g. 10 cm × 15 cm) CMOS devices with a pixel size of 100 μm × 100 μm are now becoming available that can be butted together on three sides, so that very large area detectors can be made with no dead regions. Like CCD systems, our CMOS systems use a Gd2O2S:Tb scintillator plate to convert stopped x-rays into visible light, which is then transferred with a fiber-optic plate to the sensitive surface of the CMOS sensor. The amount of light per x-ray on the sensor is much higher in the CMOS system than in a CCD system, because the fiber-optic plate is only 3 mm thick, while on a CCD system it is highly tapered and much longer. A CMOS sensor is an active pixel matrix in which every pixel is controlled and read out independently of all other pixels. This allows these devices to be read out while the sensor is collecting charge in all the other pixels. For x-ray diffraction detectors this is a major advantage, since image frames can be collected continuously at up to 20 Hz while the crystal is rotated. A complete diffraction dataset can be collected over five times faster than with CCD systems, with lower radiation exposure to the crystal. In addition, since the data are taken in fine-phi-slice mode, the 3D angular position of diffraction peaks is improved. We have developed a cooled 6-sensor CMOS detector with an active area of 28.2 × 29.5 cm, 100 μm × 100 μm pixels and a readout rate of 20 Hz. The detective quantum efficiency exceeds 60% over the range 8-12 keV. One-, two- and twelve-sensor systems are also being developed for a variety of scientific applications.
Since the sensors are buttable on three sides, even larger systems could be built at reasonable cost.
NASA Astrophysics Data System (ADS)
Wang, Junbang; Sun, Wenyi
2014-11-01
Remote sensing is widely applied in the study of terrestrial primary production and the global carbon cycle. Research on the spatial heterogeneity in images from different sensors and resolutions can improve the application of remote sensing. In this study, two sites on alpine meadow grassland in Qinghai, China, which have distinctly different fractional vegetation cover, were used to test and analyze differences between the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI) derived from the Huanjing (HJ) and Landsat Thematic Mapper (TM) sensors. The results showed that: 1) NDVI values estimated from HJ were smaller than the corresponding values from TM at the two sites, whereas EVI values were almost the same for the two sensors. 2) The overall variance represented by the HJ data was consistently about half that of Landsat TM, although the nominal pixel size is approximately 30 m for both sensors. The overall variance from EVI is greater than that from NDVI. The difference in the variogram range between the two sensors is about 6 pixels at 30 m resolution, and the difference in range between the two vegetation indices is about 1 pixel. 3) The sill decreased as pixel size increased from 30 m to 1 km: quickly from 30 m or 90 m up to 250 m, but slowly from 250 m to 500 m. HJ can capture this spatial heterogeneity to some extent, and this study provides a foundation for the use of the sensor for validation of net primary productivity estimates obtained from ecosystem process models.
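The variogram quantities discussed above (sill and range) come from an empirical semivariance curve. A minimal 1-D sketch, whereas the study itself works on 2-D imagery:

```python
def semivariogram(values, max_lag):
    """Empirical semivariance gamma(h) along a 1-D transect of pixel values.
    The sill is the plateau of gamma(h); the range is the lag at which the
    plateau is reached."""
    gammas = []
    n = len(values)
    for h in range(1, max_lag + 1):
        sq = [(values[i + h] - values[i]) ** 2 for i in range(n - h)]
        gammas.append(0.5 * sum(sq) / len(sq))
    return gammas

# a checkerboard transect: all variance at odd lags, none at even lags
g = semivariogram([0, 1, 0, 1, 0, 1, 0, 1], 2)   # [0.5, 0.0]
```

Coarsening the pixel size averages out sub-pixel variability, which is why the sill drops as resolution degrades from 30 m toward 1 km.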
Ah Lee, Seung; Ou, Xiaoze; Lee, J Eugene; Yang, Changhuei
2013-06-01
We demonstrate a silo-filter (SF) complementary metal-oxide semiconductor (CMOS) image sensor for a chip-scale fluorescence microscope. The extruded pixel design with metal walls between neighboring pixels guides fluorescence emission through the thick absorptive filter to the photodiode of a pixel. Our prototype device achieves 13 μm resolution over a wide field of view (4.8 mm × 4.4 mm). We demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.
Neighborhood size of training data influences soil map disaggregation
USDA-ARS?s Scientific Manuscript database
Soil class mapping relies on the ability of sample locations to represent portions of the landscape with similar soil types; however, most digital soil mapping (DSM) approaches intersect sample locations with one raster pixel per covariate layer regardless of pixel size. This approach does not take ...
Smart CMOS image sensor for lightning detection and imaging.
Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor
2013-03-01
We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed within the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing, allowing efficient localization of a faint lightning pulse on the entire large-format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.
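The in-pixel detection principle above (frame-to-frame difference against an adjustable threshold) can be expressed as a short behavioral sketch. This is a software mock-up of the logic, not the chip's circuit:

```python
def detect_events(prev, curr, threshold):
    """Flag pixels whose frame-to-frame brightness increase exceeds an
    adjustable threshold, mimicking the in-pixel comparison."""
    return [(y, x)
            for y, row in enumerate(curr)
            for x, v in enumerate(row)
            if v - prev[y][x] > threshold]

prev = [[10, 10], [10, 10]]
curr = [[12, 10], [10, 200]]   # slow background drift and one bright pulse
events = detect_events(prev, curr, 50)   # only the pulse at (1, 1) fires
```

Running such a comparison at 1 kHz frame rate is what lets a short-lived lightning pulse stand out against the slowly varying cloud background.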
Low noise WDR ROIC for InGaAs SWIR image sensor
NASA Astrophysics Data System (ADS)
Ni, Yang
2017-11-01
Hybridized image sensors are currently the only solution for image sensing beyond the spectral response of silicon devices. By hybridization, we can combine the best sensing material and photo-detector design with high-performance CMOS readout circuitry. In the infrared band, we typically face two configurations: high-background and low-background situations. The performance of high-background sensors is conditioned mainly by the integration capacity in each pixel, which is the case for mid-wave and long-wave infrared detectors. For low-background situations, the detector's performance is mainly limited by the pixel's noise performance, which is conditioned by dark signal and readout noise. In the case of reflection-based imaging conditions, the pixel's dynamic range is also an important parameter. This is the case for SWIR band imaging. We are particularly interested in InGaAs-based SWIR image sensors.
Shamwell, E Jared; Nothwang, William D; Perlis, Donald
2018-05-04
Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
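The correlation-based sub-pixel offset estimation described above can be illustrated in one dimension: find the integer-pixel correlation peak, then refine it with a local fit. The parabolic three-point refinement used here is a common choice for this step, not necessarily the exact interpolator of the GOES-R IVV pipeline, and the Gaussian test signals are synthetic:

```python
import math

def best_shift(ref, img, max_shift):
    """Integer-pixel correlation peak plus parabolic sub-pixel refinement."""
    def corr(s):
        lo, hi = max_shift, len(ref) - max_shift
        return sum(ref[i] * img[i + s] for i in range(lo, hi))
    scores = {s: corr(s) for s in range(-max_shift, max_shift + 1)}
    s0 = max(scores, key=scores.get)
    if -max_shift < s0 < max_shift:
        # fit a parabola through the peak and its two neighbors
        c_m, c_0, c_p = scores[s0 - 1], scores[s0], scores[s0 + 1]
        denom = c_m - 2.0 * c_0 + c_p
        if denom != 0.0:
            return s0 + 0.5 * (c_m - c_p) / denom
    return float(s0)

ref = [math.exp(-((i - 20.0) ** 2) / 8.0) for i in range(40)]
img = [math.exp(-((i - 22.0) ** 2) / 8.0) for i in range(40)]
shift = best_shift(ref, img, 5)   # recovers the 2-pixel displacement
```

The stair-step artifact appears when both inputs share the same pixel grid: the refined estimate is then biased toward integer shifts, which is why correlating against a finer-resolution truth map reduces the bias at the cost of computation.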
Infrared sensors for Earth observation missions
NASA Astrophysics Data System (ADS)
Ashcroft, P.; Thorne, P.; Weller, H.; Baker, I.
2007-10-01
SELEX S&AS is developing a family of infrared sensors for earth observation missions. The spectral bands cover shortwave infrared (SWIR) channels from around 1μm to long-wave infrared (LWIR) channels up to 15μm. Our mercury cadmium telluride (MCT) technology has enabled a sensor array design that can satisfy the requirements of all of the SWIR and medium-wave infrared (MWIR) bands with near-identical arrays. This is made possible by the combination of a set of existing technologies that together enable a high degree of flexibility in the pixel geometry, sensitivity, and photocurrent integration capacity. The solution employs a photodiode array under the control of a readout integrated circuit (ROIC). The ROIC allows flexible geometries and in-pixel redundancy to maximise operability and reliability, by combining the photocurrent from a number of photodiodes into a single pixel. Defective or inoperable diodes (or "sub-pixels") can be deselected with tolerable impact on the overall pixel performance. The arrays will be fabricated using the "loophole" process in MCT grown by liquid-phase epitaxy (LPE). These arrays are inherently robust, offer high quantum efficiencies and have been used in previous space programs. The use of loophole arrays also offers access to SELEX's avalanche photodiode (APD) technology, allowing low-noise, highly uniform gain at the pixel level where photon flux is very low.
Extraction of incident irradiance from LWIR hyperspectral imagery
NASA Astrophysics Data System (ADS)
Lahaie, Pierre
2014-10-01
The atmospheric correction of thermal hyperspectral imagery can be separated into two distinct processes: Atmospheric Compensation (AC) and Temperature and Emissivity Separation (TES). TES requires as input, at each pixel, the ground-leaving radiance and the atmospheric downwelling irradiance, which are the outputs of the AC process. The extraction of the downwelling irradiance from imagery requires assumptions about the nature of some of the pixels, the sensor and the atmosphere. Another difficulty is that the sensor's spectral response is often not well characterized. To deal with this unknown, we defined a spectral mean operator that is used to filter the ground-leaving radiance and a computation of the downwelling irradiance from MODTRAN. A user selects a number of pixels in the image for which the emissivity is assumed to be known. The emissivity of these pixels is assumed to be smooth, so that the downwelling irradiance is the only spectrally fast-varying variable. Using these assumptions, we built an algorithm to estimate the downwelling irradiance. The algorithm is applied to all the selected pixels, and the estimated irradiance is the average over the spectral channels of the resulting computation. The algorithm performs well in simulation, and results are shown for errors in the assumed emissivity and in the atmospheric profiles. The sensor noise mainly influences the required number of pixels.
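The inversion at the heart of this kind of estimator rests on the standard ground-leaving radiance model, L = ε·B(λ,T) + (1 − ε)·E/π, solved for the downwelling irradiance E at pixels of known emissivity. The sketch below does only that single-channel inversion; the paper's estimator additionally applies a spectral mean operator and averages over pixels and channels. Constants are standard physical values:

```python
import math

H_PLANCK = 6.62607e-34   # Planck constant [J s]
C_LIGHT = 2.99792e8      # speed of light [m/s]
K_B = 1.38065e-23        # Boltzmann constant [J/K]

def planck(wl_um, t_k):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr um)."""
    wl = wl_um * 1e-6
    rad = 2.0 * H_PLANCK * C_LIGHT ** 2 / wl ** 5 \
        / (math.exp(H_PLANCK * C_LIGHT / (wl * K_B * t_k)) - 1.0)
    return rad * 1e-6    # per-meter -> per-micrometer

def downwelling(l_ground, eps, wl_um, t_k):
    """Invert L = eps*B + (1 - eps)*E/pi for the downwelling irradiance E."""
    return math.pi * (l_ground - eps * planck(wl_um, t_k)) / (1.0 - eps)

# round trip: synthesize a pixel radiance from a known E, then recover it
eps, wl, t = 0.95, 10.0, 300.0
e_true = 10.0   # assumed downwelling irradiance [W / (m^2 um)]
l_pix = eps * planck(wl, t) + (1.0 - eps) * e_true / math.pi
e_est = downwelling(l_pix, eps, wl, t)
```

The (1 − ε) factor in the denominator shows why low-emissivity pixels are the informative ones: for ε close to 1, the reflected downwelling term is tiny and the inversion amplifies noise.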
Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications
Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun
2010-01-01
In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors’ architecture on the basis of the type of electric measurement or imaging functionalities. PMID:28879978
Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications
NASA Technical Reports Server (NTRS)
Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Staller, C.; Zhou, Z.
1994-01-01
JPL, under sponsorship from the NASA Office of Advanced Concepts and Technology, has been developing a second-generation solid-state image sensor technology. Charge-coupled devices (CCD) are a well-established first generation image sensor technology. For both commercial and NASA applications, CCDs have numerous shortcomings. In response, the active pixel sensor (APS) technology has been under research. The major advantages of APS technology are the ability to integrate on-chip timing, control, signal-processing and analog-to-digital converter functions, reduced sensitivity to radiation effects, low power operation, and random access readout.
NASA Astrophysics Data System (ADS)
Alonso, C.; Benito, R. M.; Tarquis, A. M.
2012-04-01
Satellite image data have become an important source of information for monitoring vegetation and mapping land cover at several scales. Besides this, the distribution and phenology of vegetation is largely associated with climate, terrain characteristics and human activity. Various vegetation indices have been developed for qualitative and quantitative assessment of vegetation using remote spectral measurements. In particular, sensors with spectral bands in the red (RED) and near-infrared (NIR) lend themselves well to vegetation monitoring, and the Normalized Difference Vegetation Index, NDVI = (NIR - RED) / (NIR + RED), based on these bands has been widely used. Given that the characteristics of the spectral bands in RED and NIR vary distinctly from sensor to sensor, NDVI values based on data from different instruments will not be directly comparable. The spatial resolution also varies significantly between sensors, as well as within a given scene in the case of wide-angle and oblique sensors. As a result, NDVI values will vary according to combinations of the heterogeneity and scale of terrestrial surfaces and pixel footprint sizes. Therefore, the question arises as to the impact of differences in spectral and spatial resolutions on vegetation indices like the NDVI. The aim of this study is to establish a comparison between the NDVI values of two different sensors at different spatial resolutions. Scaling analysis and modeling techniques are increasingly understood to be the result of nonlinear dynamic mechanisms repeating scale after scale from large to small scales, leading to non-classical resolution dependencies. In the remote sensing framework, the main characteristic of sensor images is the high local variability in their values. This variability is a consequence of the increase in spatial and radiometric resolution, which implies an increase in complexity that needs to be characterized.
Fractal and multifractal techniques have proven useful for extracting such complexities from remote sensing images and are applied in this study to examine the scaling behavior of each sensor in terms of generalized fractal dimensions. The studied area is located in the provinces of Caceres and Salamanca (western Iberian Peninsula) and covers an extension of 32 × 32 km². The altitude in the area varies from 1,560 to 320 m, comprising natural vegetation in the mountain area (forest and bushes) and agricultural crops in the valleys. Scaling analyses were applied to the normalized difference vegetation index (NDVI) derived from Landsat-5 and MODIS TERRA over the same region with one day of difference, on 13 and 12 July 2003, respectively. From these images the area of interest was selected, giving 1024 × 1024 pixels for the Landsat image and 128 × 128 pixels for the MODIS image; the resolution is thus 250 × 250 m for MODIS and 30 × 30 m for Landsat. From the reflectance data in the NIR and RED bands, NDVI was calculated for each image, focusing this study on the 0.2 to 0.5 range of values. Once both NDVI fields were obtained, several fractal dimensions were estimated in each one, segmenting the values into 0.20-0.25, 0.25-0.30 and so on up to 0.45-0.50. In all the scaling analyses the scale length was expressed in meters, and not in pixels, to make the comparison between both sensors possible. Results are discussed. Acknowledgements: This work has been supported by the Spanish MEC under Projects No. AGL2010-21501/AGR, MTM2009-14621 and i-MATH No. CSD2006-00032
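The NDVI formula and the sensor-resolution comparison above can be sketched together: compute NDVI pixel-wise, then block-average to emulate a coarser sensor. This toy ignores the differing band responses of real instruments, which the study identifies as a separate source of NDVI disagreement:

```python
def ndvi(nir, red):
    """NDVI = (NIR - RED) / (NIR + RED), applied pixel-wise to 2-D bands."""
    return [[(n - r) / (n + r) for n, r in zip(nr, rr)]
            for nr, rr in zip(nir, red)]

def aggregate(grid, k):
    """Average k x k pixel blocks, emulating a coarser-resolution sensor
    (e.g. comparing a 30 m product against a 250 m one)."""
    n = len(grid) // k
    return [[sum(grid[i * k + a][j * k + b]
                 for a in range(k) for b in range(k)) / k ** 2
             for j in range(n)] for i in range(n)]

nir = [[0.5, 0.5], [0.5, 0.5]]
red = [[0.1, 0.1], [0.1, 0.1]]
fine = ndvi(nir, red)         # uniform field: NDVI = 2/3 everywhere
coarse = aggregate(fine, 2)   # one coarse pixel, same value for this field
```

For a heterogeneous scene the two orders of operation diverge: the NDVI of aggregated bands is not the mean of fine-pixel NDVI, because the index is a nonlinear function of the reflectances. That nonlinearity is one reason the scale dependence studied here is non-trivial.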
Design and Optimization of Multi-Pixel Transition-Edge Sensors for X-Ray Astronomy Applications
NASA Technical Reports Server (NTRS)
Smith, Stephen J.; Adams, Joseph S.; Bandler, Simon R.; Chervenak, James A.; Datesman, Aaron Michael; Eckart, Megan E.; Ewin, Audrey J.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.;
2017-01-01
Multi-pixel transition-edge sensors (TESs), commonly referred to as 'hydras', are a type of position-sensitive micro-calorimeter that enables very large format arrays to be designed without a commensurate increase in the number of readout channels and associated wiring. In the hydra design, a single TES is coupled to discrete absorbers via varied thermal links. The links act as low-pass thermal filters that are tuned to give a different characteristic pulse shape for x-ray photons absorbed in each of the hydra sub-pixels. In this contribution we report on experimental results from hydras consisting of up to 20 pixels per TES. We discuss the design trade-offs between energy resolution, position discrimination and number of pixels, and investigate future design optimizations specifically targeted at meeting the readout technology considered for Lynx.
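A toy model of the hydra position-discrimination principle described above: each thermal link acts as a low-pass filter, so the rise time of the measured pulse encodes which absorber caught the photon. All time constants below are illustrative, not the paper's values:

```python
import numpy as np

def hydra_pulse(t, tau_rise, tau_fall=500e-6):
    """Toy two-exponential pulse: the thermal link sets the rise time while
    the common TES/bath coupling sets the fall time (values illustrative)."""
    p = np.exp(-t / tau_fall) - np.exp(-t / tau_rise)
    return p / p.max()

# Four hypothetical sub-pixels with progressively slower thermal links:
t = np.linspace(1e-6, 2e-3, 2000)
rise_times = [10e-6, 30e-6, 60e-6, 120e-6]
pulses = [hydra_pulse(t, tr) for tr in rise_times]

# A slower link delays the pulse peak, so the time-to-peak alone
# identifies which absorber was hit.
peaks = [t[np.argmax(p)] for p in pulses]
print([round(tp * 1e6, 1) for tp in peaks])  # peak times in microseconds
```

In a real hydra the discrimination also trades against energy resolution, since slower links smear the signal over more of the noise bandwidth.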
NASA Technical Reports Server (NTRS)
St.Cyr, O. C.; Malayeri, M. L.; Yashiro, S.; Quernerais, E.; Bertaux, Jean-Loup; Howard, Russ
2003-01-01
We have investigated the possibility that the Solar Wind Anisotropies (SWAN) remote sensing instrument on SOHO may be able to detect coronal mass ejections (CMEs) in neutral hydrogen Lyman-α emission. We have identified CMEs near the Sun in observations by the SOHO LASCO white-light coronagraphs and in extreme ultraviolet emissions using SOHO EIT. There are very few methods of tracking CMEs after they leave the coronagraph's field-of-view, so this is an important topic to study. The primary science goal of the SWAN investigation is the measurement of large-scale structures in the solar wind, and these are obtained by detecting intensity fluctuations in Lyman-α. SWAN consists of a pair of sensors on opposite panels of SOHO. The instantaneous field-of-view of each sensor unit is a 5° x 5° square, divided into 1° pixels. A gimbaled periscope system allows each sensor to map the intensity distribution of Lyman-α, and the entire sky can be scanned in less than one day. This is the typical mode of operation for this instrument.
Uncertainty in cloud optical depth estimates made from satellite radiance measurements
NASA Technical Reports Server (NTRS)
Pincus, Robert; Szczodrak, Malgorzata; Gu, Jiujing; Austin, Philip
1995-01-01
The uncertainty in optical depths retrieved from satellite measurements of visible wavelength radiance at the top of the atmosphere is quantified. Techniques for estimating optical depth from measurements of radiance are briefly reviewed, and it is noted that these estimates are always more uncertain at greater optical depths and larger solar zenith angles. The lack of radiometric calibration for visible wavelength imagers on operational satellites dominates the uncertainty in retrievals of optical depth. This is true for both single-pixel retrievals and for statistics calculated from a population of individual retrievals. For individual estimates or small samples, sensor discretization can also be significant, but the sensitivity of the retrieval to the specification of the model atmosphere is less important. The relative uncertainty in calibration affects the accuracy with which optical depth distributions measured by different sensors may be quantitatively compared, while the absolute calibration uncertainty, acting through the nonlinear mapping of radiance to optical depth, limits the degree to which distributions measured by the same sensor may be distinguished.
Mapping of the Culann-Tohil Region of Io
NASA Technical Reports Server (NTRS)
Turtle, E. P.; Keszthelyi, L. P.; Jaeger, W. L.; Radebaugh, J.; Milazzo, M. P.; McEwen, A. S.; Moore, J. M.; Schenk, P. M.; Lopes, R. M. C.
2003-01-01
The Galileo spacecraft completed its observations of Jupiter's volcanic moon Io in October 2001 with the orbit I32 flyby, during which new local (13-55 m/pixel) and regional (130-400 m/pixel) resolution images and spectroscopic data were returned for the antijovian hemisphere. We have combined an I32 regional mosaic (330 m/pixel) with lower-resolution C21 color data (1.4 km/pixel, Figure 1) to produce a geomorphologic map of the Culann-Tohil area of this hemisphere. Here we present the geologic features, map units, and structures in this region, and give preliminary conclusions about geologic activity for comparison with other regions to better understand Io's geologic evolution.
Sea ice motion measurements from Seasat SAR images
NASA Technical Reports Server (NTRS)
Leberl, F.; Raggam, J.; Elachi, C.; Campbell, W. J.
1983-01-01
Data from the Seasat synthetic aperture radar (SAR) experiment are analyzed in order to determine the accuracy of this information for mapping the distribution of sea ice and its motion. Data from observations of sea ice in the Beaufort Sea from seven sequential orbits of the satellite were selected to study the capabilities and limitations of spaceborne radar application to sea-ice mapping. Results show that there is no difficulty in identifying homologous ice features on sequential radar images, and that the accuracy is entirely controlled by the accuracy of the orbit data and the geometric calibration of the sensor. Conventional radargrammetric methods are found to serve well for satellite radar ice mapping, while ground control points can be used to calibrate the ice location and motion measurements in cases where orbit data and sensor calibration are lacking. The ice motion was determined to be approximately 6.4 ± 0.5 km/day. In addition, the accuracy of pixel location was determined over land areas. The use of one control point per 10,000 sq km produced an accuracy of about ± 150 m, while with a higher density of control points (7 in 1000 sq km) the location accuracy improves to the image resolution of ± 25 m. This is found to be applicable to both optical and digital data.
The National Map - Orthoimagery
Mauck, James; Brown, Kim; Carswell, William J.
2009-01-01
Orthorectified digital aerial photographs and satellite images of 1-meter (m) pixel resolution or finer make up the orthoimagery component of The National Map. The process of orthorectification removes feature displacements and scale variations caused by terrain relief and sensor geometry. The result combines the image characteristics of an aerial photograph or satellite image with the geometric qualities of a map. These attributes allow users to measure distances, calculate areas, determine shapes of features, calculate directions, determine accurate coordinates, determine land cover and use, perform change detection, and update maps. The standard digital orthoimage is a 1-m or finer resolution, natural color or color infrared product. Most are now produced as GeoTIFFs and accompanied by a Federal Geographic Data Committee (FGDC)-compliant metadata file. The primary source for 1-m data is the National Agriculture Imagery Program (NAIP) leaf-on imagery. The U.S. Geological Survey (USGS) utilizes NAIP imagery as the image layer on its 'Digital-Map' - a new generation of USGS topographic maps (http://nationalmap.gov/digital_map). However, many Federal, State, and local governments and organizations require finer resolutions to meet a myriad of needs. Most of these images are leaf-off, natural-color products at resolutions of 1-foot (ft) or finer.
Tokuda, T; Yamada, H; Sasagawa, K; Ohta, J
2009-10-01
This paper proposes and demonstrates a polarization-analyzing CMOS sensor based on image sensor architecture. The sensor was designed targeting applications for chiral analysis in a microchemistry system. The sensor features a monolithically embedded polarizer. Embedded polarizers with different angles were implemented to realize real-time absolute measurement of the incident polarization angle. Although the pixel-level performance was confirmed to be limited, estimation schemes based on the variation of the polarizer angle provided promising performance for real-time polarization measurements. An estimation scheme using 180 pixels in 1° steps provided an estimation accuracy of 0.04°. Polarimetric measurements of chiral solutions were also successfully performed to demonstrate the applicability of the sensor to optical chiral analysis.
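The estimation scheme described above, combining many pixels whose polarizer angles step through 180° in 1° increments, can be sketched with Malus's law. The closed-form harmonic fit below is our assumption of how such an estimator might work, not the authors' published algorithm:

```python
import numpy as np

def estimate_polarization_angle(intensities, angles_deg):
    """Recover the incident polarization angle (degrees, mod 180) from pixels
    whose polarizer angles step through angles_deg, assuming Malus's law
    I(theta) = I0 * cos^2(theta - phi). Since cos^2(x) = (1 + cos 2x)/2, the
    signal lives at the 2*theta harmonic; projecting onto it averages down
    per-pixel noise across the whole pixel set."""
    th = np.deg2rad(angles_deg)
    c = np.sum(intensities * np.cos(2 * th))
    s = np.sum(intensities * np.sin(2 * th))
    return np.rad2deg(0.5 * np.arctan2(s, c)) % 180.0

angles = np.arange(180.0)                 # 180 pixels in 1-degree steps
true_phi = 37.25                          # hypothetical incident angle
rng = np.random.default_rng(0)
meas = np.cos(np.deg2rad(angles - true_phi)) ** 2 + rng.normal(0, 0.02, 180)
print(round(estimate_polarization_angle(meas, angles), 2))
```

Even with 2% per-pixel noise, the 180-pixel average recovers the angle to a small fraction of a degree, which is the qualitative mechanism behind the reported 0.04° accuracy.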
Performance assessment of a compressive sensing single-pixel imaging system
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Preece, Bradley L.
2017-04-01
Conventional sensors measure the light incident at each pixel in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process to recover the image. CS has the potential to acquire imagery with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance between physical considerations, reconstruction accuracy, and reconstruction speed to meet operational requirements. Performance modeling of CS imagers is challenging due to the complexity and nonlinearity of the system and reconstruction algorithm. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of a two-handheld-object target set was collected using a shortwave infrared single-pixel CS camera for various ranges and numbers of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled by mapping the nonlinear degradations to an equivalent linear shift-invariant model. Finally, the limitations of CS modeling techniques are discussed.
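The measure-then-reconstruct cycle described above can be sketched in a toy one-dimensional setting: random ±1 masks stand in for the camera's spatial light modulator, and a generic ISTA solver performs the sparse reconstruction. All parameters are illustrative and not the actual system's:

```python
import numpy as np

def single_pixel_cs(x, m, n_iters=300, lam=0.05, seed=1):
    """Toy single-pixel acquisition + ISTA recovery. Each of the m 'exposures'
    applies a random +/-1 mask to the scene x and records one bucket-detector
    number; ISTA then solves min 0.5*||A z - y||^2 + lam*||z||_1 for sparse z."""
    rng = np.random.default_rng(seed)
    n = x.size
    A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # unit-norm columns
    y = A @ x                                              # m << n measurements
    L = np.linalg.norm(A, 2) ** 2                          # Lipschitz constant
    z = np.zeros(n)
    for _ in range(n_iters):
        g = z - (A.T @ (A @ z - y)) / L                    # gradient step
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return z

# A 100-pixel 'scene' with 5 bright points, recovered from only 40 measurements.
scene = np.zeros(100)
scene[[7, 23, 42, 66, 91]] = 1.0
rec = single_pixel_cs(scene, m=40)
print(sorted(np.argsort(np.abs(rec))[-5:].tolist()))
```

The number of measurements, the sparsity penalty, and the iteration count are exactly the kind of trade-space knobs whose image-quality consequences the paper's perception experiments quantify.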
A dual-polarized broadband planar antenna and channelizing filter bank for millimeter wavelengths
NASA Astrophysics Data System (ADS)
O'Brient, Roger; Ade, Peter; Arnold, Kam; Edwards, Jennifer; Engargiola, Greg; Holzapfel, William L.; Lee, Adrian T.; Myers, Michael J.; Quealy, Erin; Rebeiz, Gabriel; Richards, Paul; Suzuki, Aritoki
2013-02-01
We describe the design, fabrication, and testing of a broadband log-periodic antenna coupled to multiple cryogenic bolometers. This detector architecture, optimized here for astrophysical observations, simultaneously receives two linear polarizations with two octaves of bandwidth at millimeter wavelengths. The broad bandwidth signal received by the antenna is divided into sub-bands with integrated in-line frequency-selective filters. We demonstrate two such filter banks: a diplexer with two sub-bands and a log-periodic channelizer with seven contiguous sub-bands. These detectors have receiver efficiencies of 20%-40% and percent-level polarization isolation. Superconducting transition-edge sensor bolometers detect the power in each sub-band and polarization. We demonstrate circularly symmetric beam patterns, high polarization isolation, accurately positioned bands, and high optical efficiency. The pixel design is applicable to astronomical observations of intensity and polarization at millimeter through sub-millimeter wavelengths. Compared with an imaging array of pixels measuring only one band, simultaneous measurement of multiple bands in each pixel has the potential to yield a higher signal-to-noise measurement while also providing spectral information. This development facilitates compact systems with high mapping speeds for observations that require information in multiple frequency bands.
A Closer Look at the Congo and the Lightning Maximum on Earth
NASA Technical Reports Server (NTRS)
Blakeslee, R. J.; Buechler, D. E.; Lavreau, Johan; Goodman, Steven J.
2008-01-01
The global maps of maximum mean annual flash density derived from a decade of observations from the Lightning Imaging Sensor on the NASA Tropical Rainfall Measuring Mission (TRMM) satellite show that a 0.5 degree x 0.5 degree pixel west of Bukavu, Democratic Republic of Congo (latitude 2S, longitude 28E) has the most frequent lightning activity anywhere on earth with an average value in excess of 157 fl/sq km/yr. This pixel has a flash density that is much greater than even its surrounding neighbors. By contrast the maximum mean annual flash rate for North America located in central Florida is only 33 fl/sq km/yr. Previous studies have shown that monthly-seasonal-annual lightning maxima on earth occur in regions dominated by coastal (land-sea breeze interactions) or topographic influences (elevated heat sources, enhanced convergence). Using TRMM, Landsat Enhanced Thematic Mapper, and Shuttle Imaging Radar imagery we further examine the unique features of this region situated in the deep tropics and dominated by a complex topography having numerous mountain ridges and valleys to better understand why this pixel, unlike any other, has the most active lightning on the planet.
Characterisation of novel thin n-in-p planar pixel modules for the ATLAS Inner Tracker upgrade
NASA Astrophysics Data System (ADS)
Beyer, J.-C.; La Rosa, A.; Macchiolo, A.; Nisius, R.; Savic, N.; Taibah, R.
2018-01-01
In view of the high luminosity phase of the LHC (HL-LHC), due to start operation around 2026, a major upgrade of the tracker system for the ATLAS experiment is in preparation. The expected neutron equivalent fluence of up to 2.4×10^16 1 MeV neq./cm2 at the innermost layer of the pixel detector poses the most severe challenge. Thanks to their low material budget and high charge collection efficiency after irradiation, modules made of thin planar pixel sensors are promising candidates to instrument these layers. To optimise the sensor layout for the decreased pixel cell size of 50×50 μm2, TCAD device simulations are being performed to investigate the charge collection efficiency before and after irradiation. In addition, sensors of 100-150 μm thickness, interconnected to FE-I4 read-out chips featuring the previous generation pixel cell size of 50×250 μm2, are characterised with testbeams at the CERN-SPS and DESY facilities. The performance of sensors with various designs, irradiated up to a fluence of 1×10^16 neq./cm2, is compared in terms of charge collection and hit efficiency. A replacement of the two innermost pixel layers is foreseen during the lifetime of the HL-LHC. The replacement will require several months of intervention, during which the remaining detector modules cannot be cooled. They will be kept at room temperature, thus inducing annealing. The performance of irradiated modules will be investigated with testbeam campaigns and the method of accelerated annealing at higher temperatures.
Fabrication of amorphous InGaZnO thin-film transistor-driven flexible thermal and pressure sensors
NASA Astrophysics Data System (ADS)
Park, Ick-Joon; Jeong, Chan-Yong; Cho, In-Tak; Lee, Jong-Ho; Cho, Eou-Sik; Kwon, Sang Jik; Kim, Bosul; Cheong, Woo-Seok; Song, Sang-Hun; Kwon, Hyuck-In
2012-10-01
In this work, we present results concerning the use of an amorphous indium-gallium-zinc-oxide (a-IGZO) thin-film transistor (TFT) as the driving transistor of flexible thermal and pressure sensors applicable to artificial skin systems. Although the a-IGZO TFT has been attracting much attention as a driving transistor for next-generation flat panel displays, no study has yet examined the application of this new device as the driving transistor of flexible sensors. The proposed thermal sensor pixel is composed of a series-connected a-IGZO TFT and a ZnO-based thermistor fabricated on a polished metal foil; in the pressure sensor pixel, the ZnO-based thermistor is replaced by a pressure-sensitive rubber. In both sensor pixels, the a-IGZO TFT acts as the driving transistor, and the temperature/pressure-dependent resistance of the ZnO-based thermistor/pressure-sensitive rubber mainly determines the magnitude of the output current. The fabricated a-IGZO TFT-driven flexible thermal sensor shows around a sevenfold increase in the output current as the temperature increases from 20 °C to 100 °C, and the a-IGZO TFT-driven flexible pressure sensors also exhibit high sensitivity under various pressure environments.
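The series TFT-plus-thermistor readout described above can be sketched as a resistive divider driving a current. The NTC thermistor model and every component value below are illustrative assumptions, not the fabricated device's parameters:

```python
import math

def sensor_pixel_current(temp_k, v_supply=5.0, r_tft_on=1e4,
                         r0=1e5, b=3000.0, t0=293.15):
    """Toy readout of a series TFT + NTC thermistor pixel. The thermistor
    resistance R(T) = R0 * exp(B * (1/T - 1/T0)) shrinks as T rises, so the
    pixel's output current grows (all values are hypothetical)."""
    r_th = r0 * math.exp(b * (1.0 / temp_k - 1.0 / t0))
    return v_supply / (r_tft_on + r_th)

i_cold = sensor_pixel_current(293.15)   # 20 C
i_hot = sensor_pixel_current(373.15)    # 100 C
print(round(i_hot / i_cold, 1))
```

With these toy numbers the current rises roughly fivefold over the 20-100 °C span, the same qualitative behavior as the roughly sevenfold increase reported for the fabricated sensor.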
Fabrication of a Kilopixel Array of Superconducting Microcalorimeters with Microstripline Wiring
NASA Technical Reports Server (NTRS)
Chervenak, James
2012-01-01
A document describes the fabrication of a two-dimensional microcalorimeter array that uses microstrip wiring and integrated heat sinking to enable use of high-performance pixel designs at kilopixel scales (32 x 32). Each pixel is the high-resolution design employed in small-array test devices, which consist of a Mo/Au TES (transition edge sensor) on a silicon nitride membrane and an electroplated Bi/Au absorber. The pixel pitch within the array is 300 microns, where absorbers 290 microns on a side are cantilevered over a silicon support grid with 100-micron-wide beams. The high-density wiring and heat sinking are both carried by the silicon beams to the edge of the array. All pixels are wired out to the array edge. An ECR (electron cyclotron resonance) oxide underlayer is deposited underneath the sensor layer. The sensor (TES) layer consists of a superconducting underlayer and a normal metal top layer. If the sensor is deposited at high temperature, the ECR oxide can be vacuum annealed to improve film smoothness and etch characteristics. This process is designed to recover high-resolution, single-pixel x-ray microcalorimeter performance within arrays of arbitrarily large format. The critical-current-limiting parts of the circuit are designed to have simple interfaces that can be independently verified. The lead-to-TES interface is entirely determined in a single layer that has multiple points of interface to maximize critical current. The lead rails that overlap the TES sensor element contact both the superconducting underlayer and the TES normal metal.
NASA Astrophysics Data System (ADS)
Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu
2015-04-01
For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needed to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation must be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, a rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
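The sub-pixel interpolation needed to resample the retina-like pixel distribution onto a rectangular display grid can be sketched with standard bilinear interpolation; the paper does not specify its kernel, so this is a generic illustration:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample an image at fractional coordinates (x, y) with bilinear
    interpolation, a standard sub-pixel scheme for resampling a non-rectangular
    (e.g. retina-like / log-polar) layout onto a rectangular grid."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    x1 = min(x0 + 1, img.shape[1] - 1)   # clamp at the image border
    y1 = min(y0 + 1, img.shape[0] - 1)
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x1]
            + (1 - dx) * dy * img[y1, x0] + dx * dy * img[y1, x1])

# On a linear ramp (pixel value equals column index) bilinear sampling is exact:
ramp = np.tile(np.arange(8.0), (8, 1))
print(bilinear_sample(ramp, 2.25, 5.0))  # 2.25
```

In the dual-camera system, each display pixel would map through the retina-like coordinate transform to a fractional source location and be filled by a call like this.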
Design and Performance of a Pinned Photodiode CMOS Image Sensor Using Reverse Substrate Bias.
Stefanov, Konstantin D; Clarke, Andrew S; Ivory, James; Holland, Andrew D
2018-01-03
A new pinned photodiode (PPD) CMOS image sensor with a reverse biased p-type substrate has been developed and characterized. The sensor uses traditional PPDs with one additional deep implantation step to suppress the parasitic reverse currents, and can be fully depleted. The first prototypes have been manufactured on 18 µm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process. Both front-side illuminated (FSI) and back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v. The characterization results from a number of arrays of 10 µm and 5.4 µm PPD pixels, with different shapes, sizes and depths of the new implant, are in good agreement with device simulations. The new pixels could be reverse-biased without parasitic leakage currents well beyond full depletion, and demonstrate nearly identical optical response to the reference non-modified pixels. The observed excessive charge sharing in some pixel variants is shown not to be a limiting factor in operation. This development promises to realize monolithic PPD CIS with large depleted thickness and correspondingly high quantum efficiency at near-infrared and soft X-ray wavelengths.
Design and Performance of a Pinned Photodiode CMOS Image Sensor Using Reverse Substrate Bias †
Clarke, Andrew S.; Ivory, James; Holland, Andrew D.
2018-01-01
A new pinned photodiode (PPD) CMOS image sensor with a reverse biased p-type substrate has been developed and characterized. The sensor uses traditional PPDs with one additional deep implantation step to suppress the parasitic reverse currents, and can be fully depleted. The first prototypes have been manufactured on 18 µm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process. Both front-side illuminated (FSI) and back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v. The characterization results from a number of arrays of 10 µm and 5.4 µm PPD pixels, with different shapes, sizes and depths of the new implant, are in good agreement with device simulations. The new pixels could be reverse-biased without parasitic leakage currents well beyond full depletion, and demonstrate nearly identical optical response to the reference non-modified pixels. The observed excessive charge sharing in some pixel variants is shown not to be a limiting factor in operation. This development promises to realize monolithic PPD CIS with large depleted thickness and correspondingly high quantum efficiency at near-infrared and soft X-ray wavelengths. PMID:29301379
Monolithic active pixel sensor development for the upgrade of the ALICE inner tracking system
NASA Astrophysics Data System (ADS)
Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Giubilato, P.; Hillemanns, H.; Junique, A.; Keil, M.; Kim, D.; Kim, J.; Kugathasan, T.; Lattuca, A.; Mager, M.; Marin Tobon, C. A.; Marras, D.; Martinengo, P.; Mattiazzo, S.; Mazza, G.; Mugnier, H.; Musa, L.; Pantano, D.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Siddhanta, S.; Snoeys, W.; Usai, G.; van Hoorne, J. W.; Yang, P.; Yi, J.
2013-12-01
ALICE plans an upgrade of its Inner Tracking System for 2018. The development of a monolithic active pixel sensor for this upgrade is described. The TowerJazz 180 nm CMOS imaging sensor process has been chosen because its deep p-well option makes it possible to use full CMOS in the pixel, and because different starting materials are available. The ALPIDE development is an alternative to approaches based on a rolling shutter architecture, and aims to reduce power consumption and integration time by an order of magnitude below the ALICE specifications, which would be quite beneficial in terms of material budget and background. The approach is based on an in-pixel binary front-end combined with a hit-driven architecture. Several prototypes have already been designed, submitted for fabrication, and some of them tested with X-ray sources and particle beams. Analog power consumption has been limited by optimizing the Q/C of the sensor using the Explorer chips. Promising but preliminary first results have also been obtained with a prototype ALPIDE. Radiation tolerance up to the ALICE requirements has also been verified.
Self-amplified CMOS image sensor using a current-mode readout circuit
NASA Astrophysics Data System (ADS)
Santos, Patrick M.; de Lima Monteiro, Davies W.; Pittet, Patrick
2014-05-01
The feature size of CMOS processes has decreased during the past few years, and problems such as reduced dynamic range have become more significant in voltage-mode pixels, even though the integration of more functionality inside the pixel has become easier. This work makes a contribution on both fronts: the possibility of a high signal excursion range using current-mode circuits, together with added functionality through signal amplification inside the pixel. The classic 3T pixel architecture was rebuilt with small modifications to integrate a transconductance amplifier providing a current as output. A matrix of these new pixels operates as one large transistor sourcing an amplified current that is used for signal processing. This current is controlled by the intensity of the light received by the matrix, modulated pixel by pixel. The output current can be controlled by the biasing circuits to achieve a very large range of output signal levels. It can also be controlled through the matrix size, which permits a very high degree of freedom in the signal level, subject to the current densities allowed inside the integrated circuit. In addition, the matrix can operate at very small integration times. Its applications would be those in which fast image processing and high signal amplification are required and low resolution is not a major problem, such as UV image sensors. Simulation results are presented to support operation, control, design, signal excursion levels and linearity for a matrix of pixels conceived using this new sensor concept.
Backside illuminated CMOS-TDI line scan sensor for space applications
NASA Astrophysics Data System (ADS)
Cohen, Omer; Ofer, Oren; Abramovich, Gil; Ben-Ari, Nimrod; Gershon, Gal; Brumer, Maya; Shay, Adi; Shamay, Yaron
2018-05-01
A multi-spectral backside illuminated Time Delayed Integration Radiation Hardened line scan sensor utilizing CMOS technology was designed for continuous scanning Low Earth Orbit small satellite applications. The sensor comprises a single silicon chip with 4 independent arrays of pixels where each array is arranged in 2600 columns with 64 TDI levels. A multispectral optical filter whose spectral responses per array are adjustable per system requirement is assembled at the package level. A custom 4T Pixel design provides the required readout speed, low-noise, very low dark current, and high conversion gains. A 2-phase internally controlled exposure mechanism improves the sensor's dynamic MTF. The sensor high level of integration includes on-chip 12 bit per pixel analog to digital converters, on-chip controller, and CMOS compatible voltage levels. Thus, the power consumption and the weight of the supporting electronics are reduced, and a simple electrical interface is provided. An adjustable gain provides a Full Well Capacity ranging from 150,000 electrons up to 500,000 electrons per column and an overall readout noise per column of less than 120 electrons. The imager supports line rates ranging from 50 to 10,000 lines/sec, with power consumption of less than 0.5W per array. Thus, the sensor is characterized by a high pixel rate, a high dynamic range and a very low power. To meet a Latch-up free requirement RadHard architecture and design rules were utilized. In this paper recent electrical and electro-optical measurements of the sensor's Flight Models will be presented for the first time.
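The signal-to-noise benefit of the 64 TDI stages described above can be illustrated with a toy shift-and-add model; perfect stage synchronization and purely Gaussian noise are our simplifying assumptions, not measured sensor behavior:

```python
import numpy as np

def tdi_scan(column_signal, n_stages=64, noise_sigma=1.0, seed=0):
    """Toy Time Delayed Integration: each of n_stages rows sees the same
    scene line (perfect scan synchronization assumed) and contributes an
    independent noisy sample; summing the stages grows the signal by
    n_stages but the noise only by sqrt(n_stages)."""
    rng = np.random.default_rng(seed)
    stages = column_signal[None, :] + rng.normal(
        0, noise_sigma, (n_stages, column_signal.size))
    return stages.sum(axis=0)

line = np.full(2600, 10.0)            # one scene line across 2600 columns
out = tdi_scan(line)
snr_single = 10.0 / 1.0               # per-stage SNR
snr_tdi = out.mean() / out.std()      # SNR after summing 64 stages
print(round(snr_tdi / snr_single, 1))
```

The ratio comes out near sqrt(64) = 8, which is the classic TDI gain that lets a fast line-scan sensor maintain dynamic range at low per-stage exposure.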
Investigation of thin n-in-p planar pixel modules for the ATLAS upgrade
NASA Astrophysics Data System (ADS)
Savic, N.; Beyer, J.; La Rosa, A.; Macchiolo, A.; Nisius, R.
2016-12-01
In view of the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), planned to start around 2023-2025, the ATLAS experiment will undergo a replacement of the Inner Detector. A higher luminosity implies higher irradiation levels and hence demands more radiation hardness, especially in the inner layers of the pixel system. The n-in-p silicon technology is a promising candidate to instrument this region, also thanks to its cost-effectiveness, because it requires only single-sided processing, in contrast to the n-in-n pixel technology presently employed in the LHC experiments. In addition, thin sensors were found to ensure radiation hardness at high fluences. An overview is given of recent results obtained with non-irradiated and irradiated n-in-p planar pixel modules. The focus is on n-in-p planar pixel sensors with an active thickness of 100 and 150 μm recently produced at ADVACAM. To maximize the active area of the sensors, slim and active edges are implemented. The performance of these modules is investigated in beam tests and results on edge efficiency are shown.
NASA Astrophysics Data System (ADS)
Becker, J.; Tate, M. W.; Shanks, K. S.; Philipp, H. T.; Weiss, J. T.; Purohit, P.; Chamberlain, D.; Gruner, S. M.
2018-01-01
We studied the properties of chromium compensated GaAs when coupled to charge integrating ASICs as a function of detector temperature, applied bias and X-ray tube energy. The material is a photoresistor and can be biased to collect either electrons or holes by the pixel circuitry. Both are studied here. Previous studies have shown substantial hole trapping. This trapping and other sensor properties give rise to several non-ideal effects which include an extended point spread function, variations in the effective pixel size, and rate dependent offset shifts. The magnitude of these effects varies with temperature and bias, mandating good temperature uniformity in the sensor and very good temperature stabilization, as well as a carefully selected bias voltage.
Advancements in DEPMOSFET device developments for XEUS
NASA Astrophysics Data System (ADS)
Treis, J.; Bombelli, L.; Eckart, R.; Fiorini, C.; Fischer, P.; Hälker, O.; Herrmann, S.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Schaller, G.; Schopper, F.; Soltau, H.; Strüder, L.; Wölfel, S.
2006-06-01
DEPMOSFET based Active Pixel Sensor (APS) matrices are a new detector concept for X-ray imaging spectroscopy missions. They can cope with the challenging requirements of the XEUS Wide Field Imager and combine excellent energy resolution, high-speed readout and low power consumption with the attractive feature of random accessibility of pixels. From the evaluation of first prototypes, new concepts have been developed to overcome the minor drawbacks and problems encountered with the older devices. The new devices will have a pixel size of 75 μm × 75 μm. Besides 64 × 64 pixel arrays, prototypes with sizes of 256 × 256 pixels and 128 × 512 pixels and an active area of about 3.6 cm2 will be produced, a milestone on the way towards the fully grown XEUS WFI device. The production of these improved devices is currently under way. At the same time, the development of the next generation of front-end electronics has been started, which will permit operating the sensor devices at the readout speed required by XEUS. Here, a summary of the DEPFET capabilities, the concept of the next-generation sensors and the new front-end electronics is given. Additionally, prospects of new device developments using the DEPFET as a sensitive element are shown, e.g. so-called RNDR pixels, which feature repetitive non-destructive readout to lower the readout noise below the 1 e- ENC limit.
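The RNDR principle noted at the end, reading the same signal charge repeatedly without destroying it and averaging the reads, can be sketched as follows (the charge level and noise figures are illustrative, not the device's):

```python
import numpy as np

def rndr_read(true_charge_e, n_reads, read_noise_e=3.0, seed=0):
    """Repetitive non-destructive readout: the same signal charge is read
    n_reads times without being destroyed, so averaging the reads lowers
    the effective noise as read_noise / sqrt(n_reads)."""
    rng = np.random.default_rng(seed)
    reads = true_charge_e + rng.normal(0, read_noise_e, n_reads)
    return reads.mean()

# With 3 e- single-read noise, ~100 reads pushes the effective ENC toward
# the sub-electron regime targeted by RNDR devices.
errs = [abs(rndr_read(50.0, 100, seed=s) - 50.0) for s in range(200)]
print(round(float(np.mean(errs)), 2))
```

The averaged error lands well below one electron, which is how RNDR pixels can break the single-read 1 e- ENC barrier without any change to the analog noise floor.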
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deptuch, Grzegorz W.; Carini, Gabriella; Enquist, Paul
The vertically integrated photon imaging chip (VIPIC1) pixel detector is a stack consisting of a 500-μm-thick silicon sensor, a two-tier 34-μm-thick integrated circuit, and a host printed circuit board (PCB). The integrated circuit tiers were bonded using direct bonding technology with copper, and each tier features 1-μm-diameter through-silicon vias that were used for connections to the sensor on one side and to the host PCB on the other side. The 80-μm-pixel-pitch sensor was bonded to the integrated circuit using direct bonding technology with nickel. The stack was mounted on the board using Sn-Pb balls placed on a 320-μm pitch, yielding an entirely wire-bond-less structure. The analog front-end features a pulse response peaking at below 250 ns, and the power consumption per pixel is 25 μW. We successfully completed the 3-D integration, as reported here. Additionally, all pixels in the matrix of 64 × 64 pixels were responding on well-bonded devices. Correct operation of the sparsified readout, allowing a single 153-ns bunch timing resolution, was confirmed in tests on a synchrotron beam of 10-keV X-rays. An equivalent noise charge of 36.2 e- rms and a conversion gain of 69.5 μV/e- with 2.6 e- rms and 2.7 μV/e- rms pixel-to-pixel variations, respectively, were measured.
Synthesis of a fiber-optic magnetostrictive sensor (FOMS) pixel for RF magnetic field imaging
NASA Astrophysics Data System (ADS)
Rengarajan, Suraj
The principal objective of this dissertation was to synthesize a sensor element with properties specifically optimized for integration into arrays capable of imaging RF magnetic fields. The dissertation problem was motivated by applications in nondestructive eddy current testing, smart skins, etc., requiring sensor elements that non-invasively detect millimeter-scale variations over several square meters in low-level magnetic fields varying at frequencies in the 100 kHz-1 GHz range. The poor spatial and temporal resolution of FOMS elements available prior to this dissertation research precluded their use in non-invasive large-area mapping applications. Prior research had focused on large, discrete devices for detecting extremely low-level magnetic fields varying at a few kHz. These devices are incompatible with array integration and imaging applications. The dissertation research sought to overcome the limitations of current technology by utilizing three new approaches: synthesizing magnetostrictive thin films and optimizing their properties for sensor applications, integrating small sensor elements into an array-compatible fiber-optic interferometer, and devising an RF mixing approach to measure high-frequency magnetic fields using the integrated sensor element. Multilayer thin films were used to optimize the magnetic properties of the magnetostrictive elements. Alternating soft (Ni80Fe20) and hard (Co50Fe50) magnetic alloy layers were selected for the multilayer, and the layer thicknesses were varied to obtain films with a combination of large magnetization, high-frequency permeability and large magnetostrictivity. X-ray data and measurement of the variations in the magnetization, resistivity and magnetostriction with layer thicknesses indicated that an interfacial layer was responsible for enhancing the sensing performance of the multilayers.
A FOMS pixel was patterned directly onto the sensing arm of a fiber-optic interferometer by sputtering a multilayer film with favorable sensor properties. After calibrating the interferometer response with a piezo, the mechanical and magnetic responses of the FOMS element were evaluated for various test fields. High-frequency magnetic fields were detected using a local oscillator field to downconvert the RF signal fields to the lower mechanical resonant frequency of the element. A field sensitivity of 0.3 Oe per cm of sensor element length was demonstrated at 1 MHz. A coherent magnetization rotation model was developed to predict the magnetostrictive response of the element and identify approaches for optimizing its performance. This model predicts that an optimized element could resolve ~1 mm variations in fields varying at frequencies >10 MHz with a sensitivity of ~10^-3 Oe/mm. The results demonstrate the potential utility of integrating this device as a FOMS pixel in RF magnetic field imaging arrays.
The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.
2017-02-01
Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility and, in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while providing high enough imaging spatial resolution and pixel counts. Engineering the CAOS camera platform using, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
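The coded-access idea behind CAOS, time-modulating each pixel with a distinct code, summing the light onto a point detector, and decoding by correlation, can be sketched with orthogonal Walsh codes. A simplified, idealized Python illustration (the bipolar codes, the 4-pixel scene and the noiseless detector are assumptions for the sketch, not the camera's actual coding scheme):

```python
import numpy as np

def walsh_codes(n):
    """n x n Hadamard (Walsh) matrix of +/-1 codes, n a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

# Hypothetical 4-"pixel" scene irradiances spanning a wide dynamic range:
scene = np.array([1e6, 3.0, 450.0, 0.02])
codes = walsh_codes(4)                # one bipolar code per pixel
detector = codes.T @ scene            # single point detector, one sample per time slot
recovered = (codes @ detector) / 4.0  # correlation decoding (H @ H.T = 4 I)
print(recovered)
```

Because decoding is a correlation against known codes, both the very bright and the very dim pixel values are recovered from the same point-detector samples, which is the root of the extreme-dynamic-range claim.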
Spectroscopic remote sensing for material identification, vegetation characterization, and mapping
Kokaly, Raymond F.; Lewis, Paul E.; Shen, Sylvia S.
2012-01-01
Identifying materials by measuring and analyzing their reflectance spectra has been an important procedure in analytical chemistry for decades. Airborne and space-based imaging spectrometers allow materials to be mapped across the landscape. With many existing airborne sensors and new satellite-borne sensors planned for the future, robust methods are needed to fully exploit the information content of hyperspectral remote sensing data. A method of identifying and mapping materials using spectral feature analyses of reflectance data in an expert-system framework called MICA (Material Identification and Characterization Algorithm) is described. MICA is a module of the PRISM (Processing Routines in IDL for Spectroscopic Measurements) software, available to the public from the U.S. Geological Survey (USGS) at http://pubs.usgs.gov/of/2011/1155/. The core concepts of MICA include continuum removal and linear regression to compare key diagnostic absorption features in reference laboratory/field spectra and the spectra being analyzed. The reference spectra, diagnostic features, and threshold constraints are defined within a user-developed MICA command file (MCF). Building on several decades of experience in mineral mapping, a broadly-applicable MCF was developed to detect a set of minerals frequently occurring on the Earth's surface and applied to map minerals in the country-wide coverage of the 2007 Afghanistan HyMap data set. MICA has also been applied to detect sub-pixel oil contamination in marshes impacted by the Deepwater Horizon incident by discriminating the C-H absorption features in oil residues from background vegetation. These two recent examples demonstrate the utility of a spectroscopic approach to remote sensing for identifying and mapping the distributions of materials in imaging spectrometer data.
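The continuum-removal step at the core of MICA can be sketched as dividing the spectrum by a straight line fitted across a feature's shoulders; the band depth then falls out of the continuum-removed spectrum. A toy Python example on a synthetic absorption feature (the wavelengths and feature parameters are invented, and the straight-line continuum is a simplification):

```python
import numpy as np

def continuum_removed(wavelengths, reflectance, left, right):
    """Divide reflectance by a straight-line continuum between two
    shoulder wavelengths, as in feature-based spectral matching."""
    sel = (wavelengths >= left) & (wavelengths <= right)
    w, r = wavelengths[sel], reflectance[sel]
    # straight line through the two endpoints of the feature
    continuum = r[0] + (r[-1] - r[0]) * (w - w[0]) / (w[-1] - w[0])
    return w, r / continuum

# Synthetic spectrum: sloped background with a Gaussian absorption near 2.2 um
wl = np.linspace(2.0, 2.4, 81)
spec = (0.5 + 0.2 * (wl - 2.0)) * (1.0 - 0.3 * np.exp(-((wl - 2.2) / 0.02) ** 2))
w, cr = continuum_removed(wl, spec, 2.1, 2.3)
print(1.0 - cr.min())  # band depth of the removed feature, ~0.3
```

The removed spectra (reference vs. observed) are then compared with linear regression over the diagnostic feature, as the abstract describes.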
Compression of color-mapped images
NASA Technical Reports Server (NTRS)
Hadenfeldt, A. C.; Sayood, Khalid
1992-01-01
In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
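The colormap-sorting idea can be sketched as reordering the palette (here by luminance, a simple stand-in for the orderings studied in the paper) and remapping the index image; the displayed colors are unchanged, but numerically adjacent indices now point to similar colors, restoring index-domain correlation for a predictive coder:

```python
import numpy as np

def sort_colormap(palette):
    """Reorder a palette so numerically adjacent indices map to
    perceptually similar colors. Returns the sorted palette and the
    old-index -> new-index remapping table."""
    luminance = palette @ np.array([0.299, 0.587, 0.114])
    order = np.argsort(luminance)  # new index -> old index
    inverse = np.argsort(order)    # old index -> new index
    return palette[order], inverse

# Indexed image: pixel values are indices into the palette.
palette = np.array([[255, 255, 255], [0, 0, 0], [200, 200, 200], [30, 30, 30]], float)
image = np.array([[0, 2], [1, 3]])
sorted_palette, remap = sort_colormap(palette)
new_image = remap[image]
# The rendered colors are identical after remapping:
assert np.array_equal(sorted_palette[new_image], palette[image])
```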
Optical and electrical characterization of a back-thinned CMOS active pixel sensor
NASA Astrophysics Data System (ADS)
Blue, Andrew; Clark, A.; Houston, S.; Laing, A.; Maneuski, D.; Prydderch, M.; Turchetta, R.; O'Shea, V.
2009-06-01
This work reports the first characterization of a back-thinned Vanilla, a 512×512 active pixel sensor (APS) with 25 μm square pixels. Characterization of the detectors was carried out through the analysis of photon transfer curves to yield measurements of full well capacity, noise levels, gain constants and linearity. Spectral characterization of the sensors was also performed in the visible and UV regions. A full comparison against non-back-thinned, front-illuminated Vanilla sensors is included. These measurements suggest that the Vanilla APS will be suitable for a wide range of applications, including particle physics and biomedical imaging.
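The photon-transfer-curve analysis used here rests on the mean-variance relation: in the shot-noise-limited regime the variance of a flat field (in DN²) grows linearly with its mean (in DN) with slope 1/gain. A simulated Python sketch (all numbers are illustrative, not Vanilla measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

def photon_transfer_gain(exposures_e, gain_e_per_DN, read_noise_DN, rng):
    """Recover conversion gain from the slope of variance vs. mean:
    var_DN = mean_DN / gain + const in the shot-noise-limited regime."""
    means, variances = [], []
    for n_e in exposures_e:  # mean illumination in electrons
        frame = rng.poisson(n_e, size=100_000) / gain_e_per_DN
        frame = frame + rng.normal(0.0, read_noise_DN, size=frame.size)
        means.append(frame.mean())
        variances.append(frame.var())
    slope, _ = np.polyfit(means, variances, 1)
    return 1.0 / slope  # conversion gain in e-/DN

gain = photon_transfer_gain([2000, 5000, 10000, 20000], 4.0, 1.5, rng)
print(gain)  # recovers a value close to the simulated 4.0 e-/DN
```

The full well capacity is then read off the curve as the mean signal where the variance stops following this line and rolls over.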
Kim, Kang-Hyun; Hong, Soon Kyu; Jang, Nam-Su; Ha, Sung-Hun; Lee, Hyung Woo; Kim, Jong-Man
2017-05-24
Wearable pressure sensors are crucial building blocks for potential applications in real-time health monitoring, artificial electronic skins, and human-to-machine interfaces. Here we present a highly sensitive, simple-architectured wearable resistive pressure sensor based on highly compliant yet robust carbon composite conductors made of a vertically aligned carbon nanotube (VACNT) forest embedded in a polydimethylsiloxane (PDMS) matrix with irregular surface morphology. A roughened surface of the VACNT/PDMS composite conductor is simply formed using a sandblasted silicon master in a low-cost and potentially scalable manner and plays an important role in improving the sensitivity of resistive pressure sensor. After assembling two of the roughened composite conductors, our sensor shows considerable pressure sensitivity of ∼0.3 kPa⁻¹ up to 0.7 kPa as well as stable steady-state responses under various pressures, a wide detectable range of up to 5 kPa before saturation, a relatively fast response time of ∼162 ms, and good reproducibility over 5000 cycles of pressure loading/unloading. The fabricated pressure sensor can be used to detect a wide range of human motions ranging from subtle blood pulses to dynamic joint movements, and it can also be used to map spatial pressure distribution in a multipixel platform (in a 4 × 4 pixel array).
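In the linear regime quoted above (sensitivity ∼0.3 kPa⁻¹ below ∼0.7 kPa), a resistance reading maps back to pressure by a one-line inversion. A hypothetical sketch for a 4 × 4 pixel map (the baseline resistances and the purely linear model are assumptions for illustration; the real device saturates at higher loads):

```python
import numpy as np

def pressure_map(R, R0, sensitivity_per_kPa=0.3):
    """Invert resistive pixel readings to pressure, assuming the linear
    regime where S = (dR/R0) / dP. Illustrative model only."""
    dR_rel = np.abs(R - R0) / R0
    return dR_rel / sensitivity_per_kPa

# Hypothetical 4 x 4 baseline and loaded resistances (kilo-ohms):
R0 = np.full((4, 4), 10.0)
R = R0.copy()
R[1, 2] = 8.5  # one pressed pixel: 15% relative resistance change
P = pressure_map(R, R0)
print(P[1, 2])  # 0.15 / 0.3 = 0.5 kPa at the pressed pixel
```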
Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps
Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi
2015-01-01
Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich-texture regions and at object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel’s scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. The model is more robust to luminance variation because it treats information obtained from the depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate of 3.27% of the previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other “fused” algorithms in terms of precision. PMID:26308003
NASA Astrophysics Data System (ADS)
Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao
A multi-agent object attention system is proposed, based on a biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through the compound-eye image sensor TOMBO. Robustness of the multi-agent system to environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed, reducing the enormous computational costs and memory accesses required for depth map processing and the multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which occupies 7,063 slices, 640×512 pixel input images can be processed in real time with three agents at a rate of 9 fps in 48 MHz operation.
NASA Astrophysics Data System (ADS)
Tanguy, Marion; Bernier, Monique; Chokmani, Karem
2015-04-01
When a flood hits an inhabited area, managers and services responsible for public safety need precise, reliable and up-to-date maps of the areas affected by the flood, in order to quickly roll out and coordinate the intervention and assistance plans required to limit the human and material damage caused by the disaster. Synthetic aperture radar (SAR) sensors are now considered one of the best-adapted tools for flood detection and mapping in a context of crisis management. Indeed, due to their capacity to acquire data night and day in almost all meteorological conditions, SAR sensors allow the acquisition of synoptic yet detailed views of the areas affected by the flood, even during the active phases of the event. Moreover, new-generation sensors such as RADARSAT-2, TerraSAR-X and COSMO-SkyMed are providing very high resolution images of the disaster (down to 1 m ground resolution). Further, critical improvements have been made in the temporal repetitivity of acquisitions and in data availability, through the development of satellite constellations (i.e. the four COSMO-SkyMed or the Sentinel-1A and 1B satellites) and thanks to the implementation of the International Charter "Space and Major Disasters", which guarantees high-priority image acquisition and delivery within 4 to 12 hours. While detection of open-water flooded areas is relatively straightforward with SAR imagery, flood detection in built-up areas raises significant issues. Indeed, because of the side-looking geometry of SAR sensors, tall vegetation and structures parallel to the satellite direction of travel may produce shadow and layover effects, leading to significant over- and under-detection of flooded pixels. Besides, the numerous permanent surfaces in built-up environments with water-like radar responses, such as parking lots and roads, may be confused with flooded areas, resulting in substantial inaccuracies in the final flood map.
Despite the many recent efforts to improve the accuracy of processing algorithms for flood detection in urban areas with high-resolution SAR imagery, these algorithms still have difficulty detecting urban flooded pixels with precision. The difficulties do not seem to be ascribable only to the choice of SAR image processing methods, but can also be imputed to the limitations of the SAR imaging technique itself in urban areas. We propose a fully automatic and effective approach for near-real-time delineation of urban and rural flooded areas, which combines the capacity of SAR imagery to detect open-water areas with explicit hydrodynamic characteristics of the region affected by the flood, expressed through flood recurrence interval data. This innovative approach has been tested with RADARSAT-2 Fine and Ultrafine mode images acquired during the 2011 Richelieu River flooding in Canada. It proved successful in accurately delineating flooding in urban and rural areas, with an RMSE below 2 pixels.
NASA Astrophysics Data System (ADS)
Drzewiecki, Wojciech
2017-12-01
We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R2. These measures were also used for the assessment of imperviousness change intensity estimations. The applicability for detection of relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results showed that the Cubist algorithm can be recommended for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detection of subtle changes of imperviousness inside the pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels if the relevant change threshold was set at one or three percent. Also, for the five percent change threshold most of the algorithms did not ensure that the accuracy of the change map was higher than the accuracy of a random classifier. For a relevant change threshold of ten percent, all approaches performed satisfactorily.
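The three accuracy measures used for the single-date assessments are standard; a small self-contained Python implementation, with invented imperviousness values for illustration:

```python
import numpy as np

def regression_scores(y_true, y_pred):
    """RMSE, MAE and R^2, the three measures used to compare the
    sub-pixel imperviousness regressors."""
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return rmse, mae, 1.0 - ss_res / ss_tot

# Hypothetical per-pixel imperviousness (percent), truth vs. estimate:
y_true = np.array([0.0, 10.0, 20.0, 30.0])
y_pred = np.array([1.0, 9.0, 22.0, 29.0])
rmse, mae, r2 = regression_scores(y_true, y_pred)
print(rmse, mae, r2)
```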
Depth map occlusion filling and scene reconstruction using modified exemplar-based inpainting
NASA Astrophysics Data System (ADS)
Voronin, V. V.; Marchuk, V. I.; Fisunov, A. V.; Tokareva, S. V.; Egiazarian, K. O.
2015-03-01
RGB-D sensors are relatively inexpensive and commercially available off-the-shelf. However, owing to their low complexity, several artifacts are encountered in the depth map: holes, misalignment between the depth and color images, and a lack of sharp object boundaries. Depth maps generated by Kinect cameras also contain a significant amount of missing pixels and strong noise, limiting their usability in many computer vision applications. In this paper, we present an efficient hole-filling and damaged-region restoration method that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on a modified exemplar-based inpainting and LPA-ICI filtering, exploiting the correlation between color and depth values in local image neighborhoods. As a result, edges of the objects are sharpened and aligned with the objects in the color image. Several examples considered in this paper show the effectiveness of the proposed approach for large hole removal as well as recovery of small regions on several test depth maps. We perform a comparative study and show that, statistically, the proposed algorithm delivers superior quality results compared to existing algorithms.
Results of the 2015 testbeam of a 180 nm AMS High-Voltage CMOS sensor prototype
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benoit, M.; de Mendizabal, J. Bilbao; Casse, G.
2016-07-21
We investigated active pixel sensors based on High-Voltage CMOS technology as a viable option for the future pixel tracker of the ATLAS experiment at the High-Luminosity LHC. Our paper reports on the testbeam measurements performed at the H8 beamline of the CERN Super Proton Synchrotron on a High-Voltage CMOS sensor prototype produced in 180 nm AMS technology. Results in terms of tracking efficiency and timing performance, for different threshold and bias conditions, are shown.
Spectral Ratio Imaging with Hyperion Satellite Data for Geological Mapping
NASA Technical Reports Server (NTRS)
Vincent, Robert K.; Beck, Richard A.
2005-01-01
Since the advent of LANDSAT I in 1972, many different multispectral satellites have been orbited by the U.S. and other countries. These satellites have varied from 4 spectral bands in LANDSAT I to 14 spectral bands in the ASTER sensor aboard the TERRA space platform. Hyperion is a relatively new hyperspectral sensor with over 220 spectral bands. The huge increase in the number of spectral bands offers a substantial challenge to computers and analysts alike when it comes to the task of mapping features on the basis of chemical composition, especially if little or no ground truth is available beforehand from the area being mapped. One approach is the theoretical approach of the modeler, where all extraneous information (atmospheric attenuation, sensor electronic gain and offset, etc.) is subtracted off and divided out, and laboratory (or field) spectra of materials are used as training sets to map features in the scene of similar composition. This approach is very difficult to keep accurate because of variations in the atmosphere, solar illumination, and sensor electronic gain and offset that are not always perfectly recorded or accounted for. For instance, to apply laboratory or field spectra of materials as data sets from the theoretical approach, the header information of the files must reflect the correct, up-to-date sensor electronic gain and offset and the analyst must pick the exact atmospheric model that is appropriate for the day of data collection in order for classification procedures to accurately match pixels in the scene with the laboratory or field spectrum of a desired target on the basis of the hyperspectral data. The modeling process is so complex that it is difficult to tell when it is operating well or determine how to fix it when it is incorrect. Recently RSI has announced that the latest version of their ENVI software package is not performing atmospheric corrections correctly with the FLAASH atmospheric model. 
It took a long time to determine that it was wrong, and may take an equally long time (or longer) to fix.
Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering
Mars, Kamel; Lioe, De Xing; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro; Hashimoto, Mamoru
2017-01-01
Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a lack of detectors suitable for MHz-modulation-rate parallel detection, capable of detecting multiple small SRS signals while eliminating the extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that obtains the difference of the Stokes-on and Stokes-off signals at a modulation frequency of 20 MHz in the pixel, before readout. The small SRS signal generated is extracted and amplified in a pixel using a high-speed, large-area lateral electric field charge modulator (LEFM) employing two-step ion implantation, together with an in-pixel low-pass filter, sample-and-hold circuit and switched-capacitor integrator using a fully differential amplifier. A prototype chip was fabricated using a 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples were successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system. PMID:29120358
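The in-pixel lock-in operation, taking the Stokes-on minus Stokes-off difference at the modulation frequency so the large direct-laser offset cancels, can be sketched in software as correlation with a zero-mean reference followed by averaging. All numbers below are illustrative, not the sensor's actual signal levels:

```python
import numpy as np

fs = 200e6                           # sample rate (Hz)
f_mod = 20e6                         # Stokes modulation frequency
t = (np.arange(4000) + 0.25) / fs    # sample mid-points avoid zero crossings

# Detector signal: a huge direct-laser offset plus a tiny SRS gain that is
# present only during Stokes-on half-cycles.
stokes_on = (np.sin(2 * np.pi * f_mod * t) > 0).astype(float)
offset, srs = 1000.0, 0.05
signal = offset + srs * stokes_on

# Lock-in detection: correlate with a zero-mean +/-1 reference at f_mod,
# then low-pass (here a plain average). The offset cancels; srs/2 remains.
reference = 2.0 * stokes_on - 1.0
demod = np.mean(signal * reference)
print(demod)  # ~0.025 = srs / 2, despite the 1000-unit offset
```

In the chip this correlate-and-average happens in charge domain (the LEFM steers photo-charge between taps at 20 MHz); the sketch only mirrors the arithmetic.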
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiss, Joel T.; Becker, Julian; Shanks, Katherine S.
There is a compelling need for a high-frame-rate imaging detector with a wide dynamic range, from single x-rays/pixel/pulse to >10^6 x-rays/pixel/pulse, that is capable of operating at both x-ray free electron laser (XFEL) and 3rd generation sources with sustained fluxes of >10^11 x-rays/pixel/s [1, 2, 3]. We propose to meet these requirements with the High Dynamic Range Pixel Array Detector (HDR-PAD) by (a) increasing the speed of charge removal strategies [4], (b) increasing integrator range by implementing adaptive gain [5], and (c) exploiting the extended charge collection times of the electron-hole pair plasma clouds that form when a sufficiently large number of x-rays are absorbed in a detector sensor in a short period of time [6]. We have developed a measurement platform similar to the one used in [6] to study the effects of high electron-hole densities in silicon sensors, using optical lasers to emulate the conditions found at XFELs. Characterization of the employed tunable-wavelength laser with picosecond pulse duration has shown Gaussian focal spot sizes of 6 ± 1 µm rms over the relevant spectrum and a 2 to 3 orders of magnitude increase in available intensity compared to previous measurements presented in [6]. Results from measurements on a typical pixelated silicon diode intended for use with the HDR-PAD (150 µm pixel size, 500 µm thick sensor) are presented.
Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.
He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P
2013-09-18
The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. Obtaining a space-efficient design over 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented at each pixel. The analog signal processing at each pixel is therefore a tailored design for LDBF signals, with balanced optimization of signal-to-noise ratio and silicon area. This custom-made sensor offers key advantages over conventional sensors: the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; a low-resource implementation of the digital processor enables on-chip processing; and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.
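A common processing chain for LDBF signals (the classic flowmetry estimate, not necessarily this chip's exact on-chip algorithm) computes perfusion from the first moment of the AC photocurrent's power spectrum, normalized by the squared DC level: faster flow shifts Doppler power to higher frequencies and raises the moment. A Python sketch with synthetic signals:

```python
import numpy as np

def perfusion_index(signal, fs):
    """First moment of the AC power spectrum normalised by DC^2, the
    classic laser Doppler flowmetry perfusion estimate."""
    dc = signal.mean()
    ac = signal - dc
    spectrum = np.abs(np.fft.rfft(ac)) ** 2
    freqs = np.fft.rfftfreq(len(ac), 1.0 / fs)
    return np.sum(freqs * spectrum) / dc ** 2

fs, n = 40_000, 16_384
t = np.arange(n) / fs
dc = 1.0
# Same fluctuation amplitude, different Doppler content (illustrative):
slow = dc + 0.05 * np.sin(2 * np.pi * 300 * t)    # low Doppler shift
fast = dc + 0.05 * np.sin(2 * np.pi * 3000 * t)   # higher Doppler shift
print(perfusion_index(slow, fs) < perfusion_index(fast, fs))  # True
```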
Toward one Giga frames per second--evolution of in situ storage image sensors.
Etoh, Takeharu G; Son, Dao V T; Yamada, Tetsuo; Charbon, Edoardo
2013-04-08
The ISIS is an ultra-fast image sensor with in-pixel storage. The evolution of the ISIS, past and near-future, is reviewed and forecast. Because the storage area must be covered with a light shield, the conventional frontside-illuminated ISIS has a limited fill factor. To achieve higher sensitivity, a BSI ISIS was developed. To avoid direct intrusion of light and migration of signal electrons into the storage area on the frontside, a cross-sectional sensor structure with thick pnpn layers was developed, named the "Tetratified structure". By folding and looping the in-pixel storage CCDs, an image signal accumulation sensor, the ISAS, is proposed. The ISAS has a new function, in-pixel signal accumulation, in addition to ultra-high-speed imaging. To achieve a much higher frame rate, a multi-collection-gate (MCG) BSI image sensor architecture is proposed. The photoreceptive area forms a honeycomb-like shape. Performance of a hexagonal CCD-type MCG BSI sensor is examined by simulations. The highest frame rate is theoretically more than 1 Gfps. For the near future, a stacked hybrid CCD/CMOS MCG image sensor seems most promising. The associated problems are discussed. A fine TSV process is the key technology to realize the structure.
Merging of multi-temporal SST data at South China Sea
NASA Astrophysics Data System (ADS)
Ng, H. G.; MatJafri, M. Z.; Abdullah, K.; Lim, H. S.
2008-10-01
Sea surface temperature (SST) mapping can be performed over a wide spatial and temporal extent within a reasonable time. The space-borne AVHRR sensor has been widely used for this purpose. However, current SST retrieval techniques for infrared channels are limited to cloud-free areas, because electromagnetic waves at infrared wavelengths cannot penetrate cloud. Therefore, SST availability is low in a single image. To overcome this problem, we produced a three-day composite SST map. Diurnal changes in SST are quite stable over a short period of time provided no abrupt natural disaster occurs. Therefore, the SST data of three consecutive days with nearly coincident daily times were merged to create a three-day composite SST data set. The composite image increases SST availability. In this study, we acquired level 1b AVHRR (Advanced Very High Resolution Radiometer) images from the Malaysia Center of Remote Sensing (MACRES). The images were first preprocessed, and the cloud and land areas were masked. We made some modifications to the technique of obtaining the threshold value for cloud masking. The SST was estimated using the day split MCSST algorithm. The cloud-free water pixel availability was computed and compared. The mean SST for the three-day composite data was calculated and an SST map was generated. The SST data availability was increased by merging the SST data.
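The per-pixel merging step can be sketched as a NaN-aware average over the three daily maps, with cloud and land pixels masked as NaN; availability is then the fraction of pixels left valid. A minimal Python sketch with invented values:

```python
import numpy as np
import warnings

def composite_sst(daily_maps):
    """Merge daily SST maps into one composite: average the cloud-free
    retrievals per pixel (cloud/land pixels are NaN). Availability is
    the fraction of pixels with at least one valid retrieval."""
    stack = np.stack(daily_maps)
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", RuntimeWarning)  # all-NaN pixels
        merged = np.nanmean(stack, axis=0)
    availability = float(np.mean(~np.isnan(merged)))
    return merged, availability

nan = np.nan
day1 = np.array([[28.0, nan], [27.5, nan]])
day2 = np.array([[nan, 29.0], [27.7, nan]])
day3 = np.array([[28.2, nan], [nan, nan]])
merged, avail = composite_sst([day1, day2, day3])
print(merged)  # [[28.1, 29.0], [27.6, nan]]
print(avail)   # 0.75: one pixel was cloudy on all three days
```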
Mapping of Hot/Cold Springs in a Large Lake Using Thermal Remote Sensing and In-situ Measurement
NASA Astrophysics Data System (ADS)
Gurcan, T.; Kurtulus, B.; Avşar
2016-12-01
In this study, in-situ measurements and thermal infrared imagery were used to map hot and cold springs of Köyceğiz Lake in Turkey, which is one of the biggest open coastal lakes in the world. In-situ surface and depth water temperature, climatic data and bathymetry measurements were collected using data loggers. Landsat 8 TIRS Band 10 (Thermal Infrared Sensor) images were compared with the in-situ measurements. Electrical conductivity, pH and salinity measurements were also collected at the bottom of the lake to better understand the groundwater discharge evidence in the lake. The in-situ measurements were interpolated using Empirical Bayesian Kriging (EBK). In-situ measurements and Landsat 8 images were compared pixel by pixel, and appropriate regression equations were calculated according to the best coefficient of determination (R2). The results show that the in-situ surface temperature of Köyceğiz Lake correlates well in several cases (R2 ≥ 0.7) with the Landsat 8 TIR images (Figure 1). The mapping of the in-situ measurements also reveals evidence of several cold springs at the bottom of the lake in its north-east part. Hot spring evidence was located in the south-west part of Köyceğiz Lake near the Sultaniye region. In this regard, we would like to thank the TUBITAK project (112Y137) for its financial support.
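The pixel-by-pixel comparison reduces to fitting a regression line between co-located Landsat and in-situ temperatures and scoring it with R². A Python sketch with hypothetical pixel pairs (the values are invented; the R2 ≥ 0.7 acceptance level is from the abstract):

```python
import numpy as np

def calibrate(landsat_bt, in_situ):
    """Least-squares line mapping Landsat thermal-band values to in-situ
    lake temperature, scored by the coefficient of determination R^2."""
    slope, intercept = np.polyfit(landsat_bt, in_situ, 1)
    pred = slope * landsat_bt + intercept
    ss_res = np.sum((in_situ - pred) ** 2)
    ss_tot = np.sum((in_situ - in_situ.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical co-located pairs: Landsat brightness temp vs. logger (deg C)
bt = np.array([20.1, 21.0, 22.4, 23.2, 24.8, 25.5])
situ = np.array([20.8, 21.5, 23.0, 23.6, 25.4, 26.2])
slope, intercept, r2 = calibrate(bt, situ)
print(r2 > 0.7)  # strongly linear synthetic data passes the threshold
```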
NASA Technical Reports Server (NTRS)
1999-01-01
Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor (CMOS) Active Pixel Sensor, establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, use the same manufacturing platform as most microprocessors and memory chips, and allow on-chip programming of frame size, exposure, and other parameters.
NASA Astrophysics Data System (ADS)
Guss, Paul; Rabin, Michael; Croce, Mark; Hoteling, Nathan; Schwellenbach, David; Kruschwitz, Craig; Mocko, Veronika; Mukhopadhyay, Sanjoy
2017-09-01
We demonstrate very high-resolution photon spectroscopy with a microwave-multiplexed 4-pixel transition edge sensor (TES) array. The readout circuit consists of superconducting microwave resonators coupled to radio-frequency superconducting quantum interference devices (RF-SQUIDs) and transduces changes in input current into changes in the phase of a microwave signal. We used flux-ramp modulation to linearize the response and avoid low-frequency noise. We performed and validated a small-scale demonstration and test of all the components of our concept system, encompassing microcalorimetry, microwave multiplexing, RF-SQUIDs, and software-defined radio (SDR), and we present data acquired in the first simultaneous combination of all of these key innovations in a 4-pixel demonstration. We present the energy spectrum of a gadolinium-153 (153Gd) source measured using our 4-pixel TES array and the RF-SQUID multiplexer. For each pixel, the two photopeaks at 97.4 and 103.2 keV are observed. We measured the 153Gd photon source with an achieved energy resolution of 70 eV full width at half maximum (FWHM) at 100 keV, and an equivalent readout system noise of 90 pA/√Hz at the TES. This demonstration establishes a path for the readout of cryogenic X-ray and gamma-ray sensor arrays with more elements and higher spectral resolving power. We believe this project has improved capabilities and substantively advanced science useful for missions such as nuclear forensics, emergency response, and treaty verification through the explored TES developments.
NASA Astrophysics Data System (ADS)
Hoefflinger, Bernd
Silicon charge-coupled-device (CCD) imagers have been, and remain, a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. Progress towards smaller features and ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities, equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposures and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to one, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSVs enables backside-illuminated and other pixels with a 100% fill factor. 3D vision, which relies on stereo or on time-of-flight high-speed circuitry, will also benefit from scaled-down CMOS technologies, both because of their size and their higher speed.
FITPix COMBO—Timepix detector with integrated analog signal spectrometric readout
NASA Astrophysics Data System (ADS)
Holik, M.; Kraus, V.; Georgiev, V.; Granja, C.
2016-02-01
The hybrid semiconductor pixel detector Timepix has proven a powerful tool for radiation detection and imaging. Energy loss and directional sensitivity, as well as particle-type resolving power, are possible thanks to high-resolution particle tracking and per-pixel energy and quantum-counting capability. The spectrometric resolving power of the detector can be further enhanced by analyzing the analog signal of the detector's common sensor electrode (also called the back-side pulse). In this work we present a new compact readout interface, based on the FITPix readout architecture, extended with integrated analog electronics for the detector's common sensor signal. Integrating simultaneous operation of the digital per-pixel readout with the analog pulse-processing circuitry of the common sensor electrode in one device enhances the detector capabilities and opens new applications. Thanks to noise suppression and built-in electromagnetic interference shielding, the common hardware platform enables parallel analog spectroscopy on the back-side pulse signal with full operation and readout of the pixelated digital part; the noise level is 600 keV, with spectrometric resolution around 100 keV for 5.5 MeV alpha particles. Self-triggering is implemented with a delay of a few tens of ns, making use of an adjustable low-energy threshold on the analog signal amplitude. The full digital pixelated frame can thus be triggered and recorded together with the common sensor analog signal. The waveform, sampled at 100 MHz, can be recorded in an adjustable time window that includes time prior to the trigger. An integrated software tool provides control, on-line display and readout of both analog and digital channels. The pixelated digital record and the analog waveform are synchronized and written out with a common time stamp.
Early Validation of Sentinel-2 L2A Processor and Products
NASA Astrophysics Data System (ADS)
Pflug, Bringfried; Main-Knorn, Magdalena; Bieniarz, Jakub; Debaecker, Vincent; Louis, Jerome
2016-08-01
Sentinel-2 is a constellation of two polar-orbiting satellites, each equipped with an optical imaging sensor, MSI (Multi-Spectral Instrument). Sentinel-2A was launched on June 23, 2015 and Sentinel-2B will follow in 2017. The Level-2A (L2A) processor Sen2Cor implemented for Sentinel-2 data provides a scene classification image, aerosol optical thickness (AOT) and water vapour (WV) maps, and the Bottom-Of-Atmosphere (BOA) corrected reflectance product. First validation results of the Sen2Cor scene classification showed an overall accuracy of 81%. AOT at 550 nm is estimated by Sen2Cor with an uncertainty of 0.035 for cloudless images with dense dark vegetation (DDV) pixels present in the image; aerosol estimation fails if the image contains no DDV pixels. The mean difference between Sen2Cor WV and ground truth is 0.29 cm. An uncertainty of up to 0.04 was found for the BOA-reflectance product.
High speed wide field CMOS camera for Transneptunian Automatic Occultation Survey
NASA Astrophysics Data System (ADS)
Wang, Shiang-Yu; Geary, John C.; Amato, Stephen M.; Hu, Yen-Sang; Ling, Hung-Hsu; Huang, Pin-Jie; Furesz, Gabor; Chen, Hsin-Yo; Chang, Yin-Chang; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy
2014-08-01
The Transneptunian Automated Occultation Survey (TAOS II) is a three-robotic-telescope project to detect stellar occultation events generated by Trans-Neptunian Objects (TNOs). TAOS II aims to monitor about 10,000 stars simultaneously at 20 Hz to achieve a statistically significant event rate. The TAOS II camera is designed to cover the 1.7 degree diameter field of view (FoV) of the 1.3 m telescope with a mosaic of 10 4.5k×2k CMOS sensors. The new CMOS sensor has a back-illuminated thinned structure and high sensitivity, providing performance similar to that of back-illuminated thinned CCDs. The sensor provides two parallel and eight serial decoders, so regions of interest can be addressed and read out separately and efficiently through different output channels. The pixel scale is about 0.6"/pix with the 16 μm pixels. The sensors, mounted on a single Invar plate, are cooled to an operating temperature of about 200 K by a cryogenic cooler. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The deformation of the cold plate is less than 10 μm, ensuring the sensor surface always stays within the ±40 μm focus range. The control electronics consist of an analog part and a Xilinx FPGA-based digital circuit. For each field star, an 8×8 pixel box is read out. The pixel rate for each channel is about 1 Mpix/s, and the total pixel rate for each camera is about 80 Mpix/s. The FPGA module calculates the total flux as well as the centroid coordinates of every field star in each exposure.
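The per-star flux and centroid computation performed by the FPGA can be sketched in software as a background-subtracted, intensity-weighted first moment (a behavioural model only, not the FPGA implementation):

```python
import numpy as np

def flux_and_centroid(box, bg=0.0):
    """Total flux and intensity-weighted centroid of a small stamp,
    e.g. the 8x8 pixel box read out per field star."""
    data = np.asarray(box, float) - bg    # background-subtracted stamp
    flux = data.sum()                     # total flux in the box
    ys, xs = np.indices(data.shape)       # pixel coordinate grids
    cy = (ys * data).sum() / flux         # first moment in y
    cx = (xs * data).sum() / flux         # first moment in x
    return flux, (cy, cx)
```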
Further applications for mosaic pixel FPA technology
NASA Astrophysics Data System (ADS)
Liddiard, Kevin C.
2011-06-01
In previous papers to this SPIE forum the development of novel technology for next generation PIR security sensors has been described. This technology combines the mosaic pixel FPA concept with low cost optics and purpose-designed readout electronics to provide a higher performance and affordable alternative to current PIR sensor technology, including an imaging capability. Progressive development has resulted in increased performance and transition from conventional microbolometer fabrication to manufacture on 8 or 12 inch CMOS/MEMS fabrication lines. A number of spin-off applications have been identified. In this paper two specific applications are highlighted: high performance imaging IRFPA design and forest fire detection. The former involves optional design for small pixel high performance imaging. The latter involves cheap expendable sensors which can detect approaching fire fronts and send alarms with positional data via mobile phone or satellite link. We also introduce to this SPIE forum the application of microbolometer IR sensor technology to IoT, the Internet of Things.
Evaluation of a single-pixel one-transistor active pixel sensor for fingerprint imaging
NASA Astrophysics Data System (ADS)
Xu, Man; Ou, Hai; Chen, Jun; Wang, Kai
2015-08-01
Since it first appeared in the iPhone 5S in 2013, fingerprint identification (ID) has rapidly gained popularity among consumers. Current fingerprint-enabled smartphones uniformly rely on a discrete sensor to perform fingerprint ID. This architecture not only incurs higher material and manufacturing cost, but also provides only static identification and limited authentication. As the demand for thinner, lighter, and more secure handsets grows, we propose a novel pixel architecture in which a photosensitive device embedded in a display pixel detects the light reflected from the finger touch, enabling high-resolution, high-fidelity and dynamic biometrics. To this end, an amorphous silicon (a-Si:H) dual-gate photo TFT working in both a fingerprint-imaging mode and a display-driving mode will be developed.
Development of Kilo-Pixel Arrays of Transition-Edge Sensors for X-Ray Spectroscopy
NASA Technical Reports Server (NTRS)
Adams, J. S.; Bandler, S. R.; Busch, S. E.; Chervenak, J. A.; Chiao, M. P.; Eckart, M. E.; Ewin, A. J.; Finkbeiner, F. M.; Kelley, R. L.; Kelly, D. P.;
2012-01-01
We are developing kilo-pixel arrays of transition-edge sensor (TES) microcalorimeters for future X-ray astronomy observatories and for laboratory astrophysics applications. For example, Athena/XMS (currently under study by the European Space Agency) would require a close-packed 32×32 pixel array on a 250-micron pitch with < 3.0 eV full-width-at-half-maximum energy resolution at 6 keV, at count rates of up to 50 counts/pixel/second. We present characterization of 32×32 arrays. These detectors will be read out using state-of-the-art SQUID-based time-domain multiplexing (TDM). We will also present the latest results on integrating these detectors and the TDM readout technology into a 16-row × N-column field-able instrument.
Design and implementation of non-linear image processing functions for CMOS image sensor
NASA Astrophysics Data System (ADS)
Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel
2012-11-01
Today, solid-state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration, in or near the focal plane, of complex image processing algorithms. Such devices must meet constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology, including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU requires 52 transistors, and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm2. Our tests have shown the validity of the main functions of our new image sensor: fast image acquisition (10K frames per second) and minima/maxima calculation in less than one ms.
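The MMU's operation corresponds, in software terms, to taking the minimum and maximum over each non-overlapping 2×2 neighbourhood (a behavioural sketch of the function, not the analog circuit):

```python
import numpy as np

def minmax_2x2(image):
    """Min and max over each non-overlapping 2x2 pixel neighbourhood,
    mimicking in software what the analog MMU computes in real time."""
    h, w = image.shape
    # group rows and columns into pairs: (h//2, 2, w//2, 2)
    blocks = image[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.min(axis=(1, 3)), blocks.max(axis=(1, 3))
```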
Design of a Low-Light-Level Image Sensor with On-Chip Sigma-Delta Analog-to- Digital Conversion
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.; Fossum, Eric R.
1993-01-01
The design and projected performance of a low-light-level active-pixel-sensor (APS) chip with semi-parallel analog-to-digital (A/D) conversion is presented. The individual elements have been fabricated and tested using MOSIS 2-micrometer CMOS technology, although the integrated system has not yet been fabricated. The imager consists of a 128 x 128 array of active pixels at a 50 micrometer pitch. Each column of pixels shares a 10-bit A/D converter based on first-order oversampled sigma-delta (Sigma-Delta) modulation. The 10-bit outputs of each converter are multiplexed and read out through a single set of outputs. A semi-parallel architecture is chosen to achieve 30 frames/second operation even at low light levels. The sensor is designed for less than 12 e^- rms noise performance.
Neural Network for Image-to-Image Control of Optical Tweezers
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Anderson, Robert C.; Weiland, Kenneth E.; Wrbanek, Susan Y.
2004-01-01
A method is discussed for using neural networks to control optical tweezers. Neural-net outputs are combined with scaling and tiling to generate 480 by 480-pixel control patterns for a spatial light modulator (SLM). The SLM can be combined in various ways with a microscope to create movable tweezers traps with controllable profiles. The neural nets are intended to respond to scattered light from carbon and silicon carbide nanotube sensors. The nanotube sensors are to be held by the traps for manipulation and calibration. Scaling and tiling allow the 100 by 100-pixel maximum resolution of the neural-net software to be applied in stages to exploit the full 480 by 480-pixel resolution of the SLM. One of these stages is intended to create sensitive null detectors for detecting variations in the scattered light from the nanotube sensors.
Text image authenticating algorithm based on MD5-hash function and Henon map
NASA Astrophysics Data System (ADS)
Wei, Jinqiao; Wang, Ying; Ma, Xiaoxue
2017-07-01
In order to meet the evidentiary requirements of text images, this paper proposes a fragile watermarking algorithm based on a hash function and the Henon map. The algorithm divides a text image into blocks, identifies the flippable and non-flippable pixels of each block according to PSD, generates a watermark from the non-flippable pixels with MD5-Hash, encrypts the watermark with the Henon map, and selects the embedded blocks. Simulation results show that the algorithm, with good tamper-localization ability, can be used to authenticate the authenticity and integrity of text images and to support forensic analysis.
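The watermark-generation and encryption steps can be sketched as below (an illustrative reading of the abstract; the binarisation rule and key handling are our assumptions, not the paper's exact scheme):

```python
import hashlib

def henon_sequence(n, x=0.1, y=0.3, a=1.4, b=0.3):
    """Binary keystream from the chaotic Henon map x' = 1 - a*x^2 + y,
    y' = b*x; the initial values (x, y) act as the secret key."""
    bits = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        bits.append(1 if x > 0 else 0)   # simple sign binarisation (assumed)
    return bits

def watermark_bits(nonflippable_pixels):
    """MD5 digest of a block's non-flippable pixels, unpacked to 128 bits."""
    digest = hashlib.md5(bytes(nonflippable_pixels)).digest()
    return [(byte >> i) & 1 for byte in digest for i in range(8)]

def encrypt(watermark, keystream):
    """XOR the watermark with the chaotic keystream (self-inverse)."""
    return [w ^ k for w, k in zip(watermark, keystream)]
```

Because XOR with a keystream is its own inverse, the verifier regenerates the same Henon sequence from the key to recover and check the embedded watermark.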
NASA Astrophysics Data System (ADS)
Steadman, Roger; Herrmann, Christoph; Livne, Amir
2017-08-01
Spectral CT based on energy-resolving photon-counting detectors is expected to deliver additional diagnostic value at a lower dose than current state-of-the-art CT [1]. The capability of simultaneously providing a number of spectrally distinct measurements not only allows distinguishing between photo-electric and Compton interactions, but also discriminating contrast agents that exhibit a K-edge discontinuity in the absorption spectrum, referred to as K-edge imaging [2]. Such detectors are based on direct-converting sensors (e.g. CdTe or CdZnTe) and high-rate photon-counting electronics. To support the development of spectral CT and show the feasibility of obtaining rates exceeding 10 Mcps/pixel (Poissonian observed count rate), the ChromAIX ASIC has previously been reported, showing 13.5 Mcps/pixel (150 Mcps/mm2 incident) [3]. The ChromAIX has been improved to allow large-area detector coverage and increased overall performance. The new ASIC, called ChromAIX2, delivers count rates exceeding 15 Mcps/pixel with an rms noise performance of approximately 260 e-. It has an isotropic pixel pitch of 500 μm in an array of 22×32 pixels and is tileable on three of its sides. The pixel topology consists of a two-stage amplifier (CSA and shaper) and a number of test features that allow the ASIC to be characterized thoroughly without a sensor. A total of 5 independent thresholds is available within each pixel, allowing 5 spectrally distinct measurements to be acquired simultaneously. The ASIC also incorporates a baseline restorer to eliminate excess currents induced by the sensor (e.g. dark current and low-frequency drifts) which would otherwise cause an energy estimation error. In this paper we report on the inherent electrical performance of the ChromAIX2, as well as measurements obtained with CZT (CdZnTe)/CdTe sensors using X-ray and radioactive sources.
Event-based Sensing for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Cohen, G.; Afshar, S.; van Schaik, A.; Wabnitz, A.; Bessell, T.; Rutten, M.; Morreale, B.
A revolutionary type of imaging device, known as a silicon retina or event-based sensor, has recently been developed and is gaining popularity in the field of artificial vision systems. These devices are inspired by the biological retina and operate in a significantly different way to traditional CCD-based imaging sensors. While a CCD produces frames of pixel intensities, an event-based sensor produces a continuous stream of events, each of which is generated when a pixel detects a change in log light intensity. These pixels operate asynchronously and independently, producing an event-based output with high temporal resolution. There are also no fixed exposure times, allowing these devices to offer a very high dynamic range independently for each pixel. Additionally, these devices offer high-speed, low-power operation and a sparse spatiotemporal output. As a consequence, the data from these sensors must be interpreted in a significantly different way to that from traditional imaging sensors, and this paper explores the advantages this technology provides for space imaging. The applicability and capabilities of event-based sensors for SSA applications are demonstrated through telescope field trials. Trial results have confirmed that the devices are capable of observing resident space objects from LEO through to GEO orbital regimes. Significantly, observations of RSOs were made during both day-time and night-time (terminator) conditions without modification to the camera or optics. The event-based sensor's ability to image stars and satellites during day-time hours offers a dramatic capability increase for terrestrial optical sensors. This paper shows the field testing and validation of two different architectures of event-based imaging sensors. An event-based sensor's asynchronous output has an intrinsically low data rate.
In addition to low-bandwidth communications requirements, the low weight, low power and high speed make them ideally suited to meeting the demanding challenges posed by space-based SSA systems. Results from these experiments and the systems developed highlight the applicability of event-based sensors to ground- and space-based SSA tasks.
Color sensitivity of the multi-exposure HDR imaging process
NASA Astrophysics Data System (ADS)
Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.
2013-04-01
Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During this export, white balance settings and image stitching are applied, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
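The irradiance-recovery step can be sketched as a Debevec-style weighted merge, given the inverse response g such that g[z] = ln(E·t) for pixel value z (the hat-shaped weighting is a common choice, assumed here rather than taken from the paper):

```python
import numpy as np

def recover_irradiance(images, exposure_times, g):
    """Merge 8-bit LDR exposures into an irradiance map.

    For each pixel: ln E = sum_j w(Z_j) * (g[Z_j] - ln t_j) / sum_j w(Z_j),
    where w favours mid-tone values over under-/over-exposed ones.
    """
    log_e_num = np.zeros(images[0].shape, float)
    w_sum = np.zeros_like(log_e_num)
    for img, t in zip(images, exposure_times):
        w = np.minimum(img, 255 - img).astype(float)   # hat weights
        log_e_num += w * (g[img] - np.log(t))
        w_sum += w
    return np.exp(log_e_num / np.maximum(w_sum, 1e-9))
```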
NASA Astrophysics Data System (ADS)
Czermak, A.; Zalewska, A.; Dulny, B.; Sowicki, B.; Jastrząb, M.; Nowak, L.
2004-07-01
The need for real-time monitoring of hadrontherapy beam intensity and profile, as well as the requirements of fast dosimetry using Monolithic Active Pixel Sensors (MAPS), led the SUCIMA collaboration to design a dedicated Data Acquisition System (DAQ SUCIMA Imager). The DAQ system has been developed on one of the most advanced Xilinx Field Programmable Gate Array chips, the Virtex-II. A dedicated multifunctional electronic board for capturing the detector's analogue signals, their parallel digital processing and final data compression, as well as transmission through a high-speed USB 2.0 port, has been prototyped and tested.
Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.
2017-05-01
Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information and its non-contact, non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass interference filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, the major cause being Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially and spectrally correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance-to-Digital-Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is further divided into four components: an FPN row component, an FPN column component, a defects component and the effective photo-response signal component. The conversion gain is then corrected by averaging out the FPN row and column components and the defects component, so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be enhanced to 7 times that of the raw image; the larger the image DC value within its dynamic range, the better the enhancement.
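The gain decomposition and model inversion can be sketched as follows (notation follows the abstract's Gi,j and Zi,j; decomposing the gain into row/column means is our reading of the method, not the paper's exact procedure):

```python
import numpy as np

def decompose_gain(g):
    """Split per-pixel conversion gain into a global mean, an FPN row
    component, an FPN column component, and a residual (defects + signal)."""
    mean = g.mean()
    row = g.mean(axis=1, keepdims=True) - mean   # row FPN
    col = g.mean(axis=0, keepdims=True) - mean   # column FPN
    resid = g - mean - row - col                 # remaining structure
    return mean, row, col, resid

def estimate_radiance(dc, gain, dsnu):
    """Invert the pixel-wise linear model DC = G * L + Z to recover L."""
    return (dc - dsnu) / gain
```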
NASA Astrophysics Data System (ADS)
Ghosh, Aniruddha; Fassnacht, Fabian Ewald; Joshi, P. K.; Koch, Barbara
2014-02-01
Knowledge of tree species distribution is important worldwide for sustainable forest management and resource evaluation. The accuracy and information content of species maps produced using remote sensing images vary with scale, sensor (optical, microwave, LiDAR), classification algorithm, verification design and natural conditions such as tree age, forest structure and density. Imaging spectroscopy reduces these inaccuracies by making use of the detailed spectral response. However, the scale effect still has a strong influence and cannot be neglected. This study aims to bridge the knowledge gap in understanding the scale effect in imaging spectroscopy when moving from 4 to 30 m pixel size for tree species mapping, keeping in mind that most current and future hyperspectral satellite-based sensors work with spatial resolutions of around 30 m or more. Two airborne (HyMAP) and one spaceborne (Hyperion) imaging spectroscopy datasets, with pixel sizes of 4, 8 and 30 m respectively, were available to examine the effect of scale over a central European forest. The forest under examination is a typical managed forest with relatively homogeneous stands featuring mostly two canopy layers. A normalized digital surface model (nDSM) derived from LiDAR data was additionally used to examine the effect of height information on tree species mapping. Six different sets of predictor variables (the reflectance values of all bands, selected components of a Minimum Noise Fraction (MNF), Vegetation Indices (VI), and each of these sets combined with LiDAR-derived height) were explored at each scale. Supervised kernel-based (Support Vector Machines) and ensemble-based (Random Forest) machine learning algorithms were applied to the dataset to investigate the effect of the classifier. Iterative bootstrap validation with 100 iterations was performed for classification model building and testing in all trials.
Regarding scale, analysis of overall classification accuracy and kappa values indicated that the 8 m spatial resolution (reaching kappa values of over 0.83) slightly outperformed the results obtained from 4 m for the study area and the five tree species under examination. The 30 m resolution Hyperion image produced sound results (kappa values of over 0.70), which in some areas of the test site were comparable with the higher spatial resolution imagery when qualitatively assessing the map outputs. Considering the input predictor sets, MNF bands performed best at 4 and 8 m resolution, while optical bands were found to be best at 30 m spatial resolution. Classification with MNF as input predictors produced a better visual appearance of tree species patches when compared with reference maps. Based on the analysis, it was concluded that there is no significant effect of height information on tree species classification accuracies for the present framework and study area. Furthermore, in the examined cases there was no single best choice between the two classifiers across scales and predictors. It can be concluded that tree species mapping from imaging spectroscopy, for forest sites comparable to the one under investigation, is possible with reliable accuracies not only from airborne but also from spaceborne imaging spectroscopy datasets.
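The two scores compared across scales, overall accuracy and Cohen's kappa, are standard functions of the classification confusion matrix and can be computed as follows (shown for reference):

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(confusion, float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)
```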
Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass.
Han, Guoliang; Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang
2017-11-14
Animals, such as Savannah sparrows and North American monarch butterflies, are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by the excellent navigation ability of animals, this paper proposes a novel image-based polarized light compass, which has the advantages of small size and light weight. Firstly, the polarized light compass, composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method for the skylight polarization pattern and the orientation method based on a single-scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of the CCD pixels and the installation error of the pixelated polarizer, is established, and a calibration method based on iterative least-squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases as the solar elevation angle decreases, and the standard deviation of the orientation error is 0.15° at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation.
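The polarization measurement underlying the compass can be sketched for one super-pixel of the polarizer array via the linear Stokes parameters; analyzer angles of 0°/45°/90°/135° are an assumption typical of pixelated polarizer arrays, not stated in the abstract:

```python
import numpy as np

def aop_dop(i0, i45, i90, i135):
    """Angle (AoP) and degree (DoP) of linear polarization from the four
    intensities behind analyzers at 0, 45, 90 and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # Stokes Q
    s2 = i45 - i135                      # Stokes U
    aop = 0.5 * np.arctan2(s2, s1)       # angle of polarization (rad)
    dop = np.hypot(s1, s2) / s0          # degree of linear polarization
    return aop, dop
```

The AoP field measured over the sky is then matched against the single-scattering Rayleigh model to recover heading.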
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the current demand for high-quality digital images; for example, digital still cameras offer several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass
Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang
2017-01-01
Animals such as Savannah sparrows and North American monarch butterflies are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by the excellent navigation ability of these animals, this paper proposes a novel image-based polarized light compass, which has the advantages of small size and light weight. Firstly, the polarized light compass, which is composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method for the skylight polarization pattern and the orientation method based on a single-scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of the CCD pixels and the installation error of the pixelated polarizer, is established, and a calibration method based on iterative least squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases as the solar elevation angle decreases, and the standard deviation of the orientation error is 0.15° at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation. PMID:29135927
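The abstract does not give the orientation formulas. As an illustrative sketch, a standard Stokes-parameter estimate of the skylight polarization state from a pixelated polarizer array is shown below, assuming the common 0°/45°/90°/135° micro-polarizer layout (an assumption, not stated in the abstract):

```python
import numpy as np

def angle_of_polarization(i0, i45, i90, i135):
    """Angle of polarization (radians) of each super-pixel from the four
    micro-polarizer intensities, via the linear Stokes parameters."""
    s1 = i0 - i90        # Stokes S1
    s2 = i45 - i135      # Stokes S2
    return 0.5 * np.arctan2(s2, s1)

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Degree of linear polarization (0 = unpolarized, 1 = fully polarized)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1**2 + s2**2) / s0
```

In a Rayleigh-model compass, the angle of polarization measured across the sky is matched against the predicted pattern to recover heading.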
Readout of the upgraded ALICE-ITS
NASA Astrophysics Data System (ADS)
Szczepankiewicz, A.; ALICE Collaboration
2016-07-01
The ALICE experiment will undergo a major upgrade during the second long shutdown of the CERN LHC. As part of this program, the present Inner Tracking System (ITS), which employs different layers of hybrid pixel, silicon drift and strip detectors, will be replaced by a completely new tracker composed of seven layers of monolithic active pixel sensors. The upgraded ITS will have more than twelve billion pixels in total, producing 300 Gbit/s of data when tracking 50 kHz Pb-Pb events. Two families of pixel chips realized with the TowerJazz CMOS imaging process have been developed as candidate sensors: the ALPIDE, which uses a proprietary readout and sparsification mechanism, and the MISTRAL-O, based on a proven rolling shutter architecture. Both chips can operate in continuous mode, with the ALPIDE also supporting triggered operation. As the communication IP blocks are shared between the two chip families, it has been possible to develop common readout electronics. All the sensor components (analog stages, state machines, buffers, FIFOs, etc.) have been modelled in a system-level simulation, which has been extensively used to optimize both the sensor and the whole readout chain design in an iterative process. This contribution covers the progress of the R&D efforts and the overall expected performance of the ALICE-ITS readout system.
Characteristics of Monolithically Integrated InGaAs Active Pixel Imager Array
NASA Technical Reports Server (NTRS)
Kim, Q.; Cunningham, T. J.; Pain, B.; Lange, M. J.; Olsen, G. H.
2000-01-01
Switching and amplifying characteristics of a newly developed monolithic InGaAs Active Pixel Imager Array are presented. The sensor array is fabricated from InGaAs material epitaxially deposited on an InP substrate. It consists of an InGaAs photodiode connected to InP depletion-mode junction field effect transistors (JFETs) for low-leakage, low-power, and fast control of circuit signal amplifying, buffering, selection, and reset. This monolithically integrated active pixel sensor configuration eliminates the need for hybridization with a silicon multiplexer. In addition, the configuration allows the sensor to be front-illuminated, making it sensitive to visible as well as near-infrared radiation. By adapting existing 1.55-micrometer fiber-optic communication technology, this integration will be well suited for optoelectronic dual-band (visible/IR) applications near room temperature, for use in atmospheric gas sensing in space, and for target identification on Earth. In this paper, two different types of small 4 x 1 test arrays are described. The effectiveness of the switching and amplifying circuits is discussed in terms of leakage, operating frequency, and temperature, in preparation for the second-phase demonstration of integrated, two-dimensional monolithic InGaAs active pixel sensor arrays for applications in transportable shipboard surveillance, night vision, and emission spectroscopy.
Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert
2015-12-08
Methods of generating a three dimensional representation of an object in a reference plane from a depth map including distances from a reference point to pixels in an image of the object taken from a reference point. Weights are assigned to respective voxels in a three dimensional grid along rays extending from the reference point through the pixels in the image based on the distances in the depth map from the reference point to the respective pixels, and a height map including an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
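A minimal sketch of the weighted-voxel idea described in this patent abstract, under strong simplifying assumptions (orthographic vertical rays instead of perspective rays from the reference point, a fixed surface band instead of the probabilistic occupancy grid, and invented function and parameter names):

```python
import numpy as np

def height_map_from_depth(depth, z_max, dz=1.0, band=1.0):
    """Toy weighted voxel carving for an orthographic depth map.

    Voxels along each vertical ray closer than the measured depth get a
    negative 'empty' vote, voxels within +/-band of the depth get a positive
    'surface' vote; the height value is taken at the nearest voxel whose
    accumulated weight is positive.  Illustrative only: the patented method
    uses perspective rays and, for n layers, a probabilistic occupancy grid.
    """
    h, w = depth.shape
    nz = int(z_max / dz)
    zs = (np.arange(nz) + 0.5) * dz              # voxel centre distances
    weights = np.zeros((h, w, nz))
    for k, z in enumerate(zs):
        near = np.abs(z - depth) <= band         # near the surface: occupied
        closer = z < depth - band                # ray passes through: empty
        weights[:, :, k] = np.where(near, 1.0, np.where(closer, -1.0, 0.0))
    occupied = weights > 0
    hit = occupied.any(axis=2)
    first_hit = np.argmax(occupied, axis=2)      # nearest occupied voxel
    # height above a reference plane placed at distance z_max
    return np.where(hit, z_max - zs[first_hit], 0.0)
```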
Oh, Sungjin; Ahn, Jae-Hyun; Lee, Sangmin; Ko, Hyoungho; Seo, Jong Mo; Goo, Yong-Sook; Cho, Dong-il Dan
2015-01-01
Retinal prosthetic devices stimulate retinal nerve cells with electrical signals proportional to the incident light intensities. For a high-resolution retinal prosthesis, it is necessary to reduce the size of the stimulator pixels as much as possible, because the retinal nerve cells are concentrated in a small area of approximately 5 mm × 5 mm. In this paper, a miniaturized biphasic current stimulator integrated circuit is developed for subretinal stimulation and tested in vitro. The stimulator pixel is miniaturized by using a complementary metal-oxide-semiconductor (CMOS) image sensor composed of three transistors. Compared to a pixel that uses a four-transistor CMOS image sensor, this new design reduces the pixel size by 8.3%. The pixel size is further reduced by simplifying the stimulation-current generating circuit, which provides a 43.9% size reduction when compared to the design reported to be the most advanced version to date for subretinal stimulation. The proposed design is fabricated using a 0.35 μm bipolar-CMOS-DMOS process. Each pixel is designed to fit in a 50 μm × 55 μm area, which theoretically allows implementing more than 5000 pixels in the 5 mm × 5 mm area. Experimental results show that a biphasic current in the range of 0 to 300 μA at 12 V can be generated as a function of incident light intensities. Results from in vitro experiments with rd1 mice indicate that the proposed method can be effectively used for retinal prosthesis with a high resolution.
NASA Astrophysics Data System (ADS)
Wang, Shifeng; So, Emily; Smith, Pete
2015-04-01
Estimating the number of refugees and internally displaced persons is important for planning and managing an efficient relief operation following disasters and conflicts. Accurate estimates of refugee numbers can be inferred from the number of tents. Extracting tents from high-resolution satellite imagery has recently been suggested. However, it is still a significant challenge to extract tents automatically and reliably from remote sensing imagery. This paper describes a novel automated method, which is based on mathematical morphology, to generate a camp map to estimate the refugee numbers by counting tents on the camp map. The method is especially useful in detecting objects with a clear shape, size, and significant spectral contrast with their surroundings. Results for two study sites with different satellite sensors and different spatial resolutions demonstrate that the method achieves good performance in detecting tents. The overall accuracy can be up to 81% in this study. Further improvements should be possible if over-identified isolated single pixel objects can be filtered. The performance of the method is impacted by spectral characteristics of satellite sensors and image scenes, such as the extent of area of interest and the spatial arrangement of tents. It is expected that the image scene would have a much higher influence on the performance of the method than the sensor characteristics.
Hit efficiency study of CMS prototype forward pixel detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dongwook; /Johns Hopkins U.
2006-01-01
In this paper the author describes the measurement of the hit efficiency of a prototype pixel device for the CMS forward pixel detector. These pixel detectors were FM-type sensors with PSI46V1 chip readout. The data were taken with the 120 GeV proton beam at Fermilab during the period of December 2004 to February 2005. The detectors proved to be highly efficient (99.27 ± 0.02%). The inefficiency was primarily located near the corners of the individual pixels.
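Efficiencies of this kind are typically quoted with a binomial statistical error. A sketch of the standard calculation (the hit and track counts below are illustrative, not from the paper):

```python
import math

def hit_efficiency(hits, tracks):
    """Detection efficiency and its binomial standard error for a pixel
    device, given the number of matched hits and incident beam tracks."""
    eff = hits / tracks
    err = math.sqrt(eff * (1.0 - eff) / tracks)
    return eff, err
```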
Miniature Wide-Angle Lens for Small-Pixel Electronic Camera
NASA Technical Reports Server (NTRS)
Mouroulis, Pantazis; Blazejewski, Edward
2009-01-01
A proposed wide-angle lens would be especially well suited for an electronic camera in which the focal plane is occupied by an image sensor that has small pixels. The design of the lens is intended to satisfy requirements for compactness, high image quality, and reasonably low cost, while addressing issues peculiar to the operation of small-pixel image sensors. Hence, this design is expected to enable the development of a new generation of compact, high-performance electronic cameras. The example lens shown has a 60-degree field of view and a relative aperture (f-number) of 3.2. The main issues affecting the design are also discussed.
Development of X-Ray Microcalorimeter Imaging Spectrometers for the X-Ray Surveyor Mission Concept
NASA Technical Reports Server (NTRS)
Bandler, Simon R.; Adams, Joseph S.; Chervenak, James A.; Datesman, Aaron M.; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Betancourt-Martinez, Gabriele; Miniussi, Antoine R.;
2016-01-01
Four astrophysics missions are currently being studied by NASA as candidate large missions to be chosen in the 2020 astrophysics decadal survey. One of these missions is the X-Ray Surveyor (XRS), and possible configurations of this mission are currently under study by a science and technology definition team (STDT). One of the key instruments under study is an X-ray microcalorimeter, and the requirements for such an instrument are currently under discussion. In this paper we review some different detector options that exist for this instrument, and discuss what array formats might be possible. We have developed one design option that utilizes either transition-edge sensors (TES) or magnetically coupled calorimeters (MCC) in pixel array sizes approaching 100 kilo-pixels. To reduce the number of sensors read out to a plausible scale, we have assumed detector geometries in which a thermal sensor such as a TES or MCC can read out a sub-array of 20-25 individual pixels. In this paper we describe the development status of these detectors, and also discuss the different options that exist for reading out the very large number of pixels.
CMOS Imaging of Temperature Effects on Pin-Printed Xerogel Sensor Microarrays.
Lei Yao; Ka Yi Yung; Chodavarapu, Vamsy P; Bright, Frank V
2011-04-01
In this paper, we study the effect of temperature on the operation and performance of xerogel-based sensor microarrays coupled to a complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC) that images the photoluminescence response from the sensor microarray. The CMOS imager uses a 32 × 32 (1024-element) array of active pixel sensors, and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. A correlated double sampling circuit and pixel address/digital control/signal integration circuits are also implemented on-chip. The CMOS imager data are read out as a serial coded signal. The sensor system uses a light-emitting diode to excite target-analyte-responsive organometallic luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 3 × 3 (9-element) array of oxygen (O2) sensors. Each group of three sensor elements in the array (arranged in a column) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a mix of two O2-sensitive luminophores in each pin-printed xerogel sensor element. The CMOS imager is designed to be low noise and consumes a static power of 320.4 μW and an average dynamic power of 624.6 μW when operating at a 100-Hz sampling frequency and a 1.8-V dc power supply.
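The mixed-luminophore sensitivity tuning can be illustrated with the two-site Stern-Volmer response commonly used to model dual-luminophore oxygen sensors; this is a generic sketch, and the fractions and quenching constants below are illustrative, not taken from the paper:

```python
def two_site_stern_volmer(o2, f1, k1, k2):
    """Relative luminescence intensity I/I0 of a sensor element containing
    two O2-sensitive luminophores with quenching constants k1 and k2.
    f1 is the intensity fraction contributed by luminophore 1; varying
    (f1, k1, k2) per element tunes the element's overall O2 sensitivity."""
    return f1 / (1.0 + k1 * o2) + (1.0 - f1) / (1.0 + k2 * o2)
```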
NASA Astrophysics Data System (ADS)
Moody, D.; Brumby, S. P.; Chartrand, R.; Franco, E.; Keisler, R.; Kelton, T.; Kontgis, C.; Mathis, M.; Raleigh, D.; Rudelis, X.; Skillman, S.; Warren, M. S.; Longbotham, N.
2016-12-01
The recent computing performance revolution has driven improvements in sensor, communication, and storage technology. Historical, multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes per year of high-resolution imagery with daily global coverage. Cloud computing and storage, combined with recent advances in machine learning and open software, are enabling understanding of the world at an unprecedented scale and detail. We have assembled all available satellite imagery from the USGS Landsat, NASA MODIS, and ESA Sentinel programs, as well as commercial PlanetScope and RapidEye imagery, and have analyzed over 2.8 quadrillion multispectral pixels. We leveraged the commercial cloud to generate a tiled, spatio-temporal mosaic of the Earth for fast iteration and development of new algorithms combining analysis techniques from remote sensing, machine learning, and scalable compute infrastructure. Our data platform enables processing at petabytes per day rates using multi-source data to produce calibrated, georeferenced imagery stacks at desired points in time and space that can be used for pixel level or global scale analysis. We demonstrate our data platform capability by using the European Space Agency's (ESA) published 2006 and 2009 GlobCover 20+ category label maps to train and test a Land Cover Land Use (LCLU) classifier, and generate current self-consistent LCLU maps in Brazil. We train a standard classifier on 2006 GlobCover categories using temporal imagery stacks, and we validate our results on co-registered 2009 Globcover LCLU maps and 2009 imagery. We then extend the derived LCLU model to current imagery stacks to generate an updated, in-season label map. Changes in LCLU labels can now be seamlessly monitored for a given location across the years in order to track, for example, cropland expansion, forest growth, and urban developments. 
An example of change monitoring is illustrated in the included figure showing rainfed cropland change in the Mato Grosso region of Brazil between 2006 and 2009.
What is missing? An operational inundation mapping framework by SAR data
NASA Astrophysics Data System (ADS)
Shen, X.; Anagnostou, E. N.; Zeng, Z.; Kettner, A.; Hong, Y.
2017-12-01
Compared to optical sensors, synthetic aperture radar (SAR) works all day and in all weather. In addition, its spatial resolution does not decrease with the height of the platform, and it is thus applicable to a range of important studies. However, existing studies have not addressed the operational demands of real-time inundation mapping; the direct proof is that no water-body product exists for any SAR-based satellite. What, then, is missing between science and products? Automation and quality. What makes it so difficult to develop an operational inundation mapping technique based on SAR data? Spectrum-wise, unlike optical water indices such as MNDWI, AWEI, etc., where a relatively constant threshold may apply across image acquisitions, regions and sensors, the threshold to separate water from non-water pixels has to be chosen individually for each SAR image. The optimization of this threshold is the first obstacle to the automation of SAR algorithms. Morphologically, the quality and reliability of the results are compromised by over-detection caused by smooth surfaces and shadowed areas, by noise-like speckle, and by under-detection caused by strong-scatterer disturbance. In this study, we propose a three-step framework that addresses all the aforementioned issues of operational inundation mapping with SAR data. The framework consists of 1) optimization of Wishart distribution parameters for single/dual/fully-polarized SAR data, 2) morphological removal of over-detection, and 3) machine-learning-based removal of under-detection. The framework utilizes not only the SAR data but also the synergy of a digital elevation model (DEM) and fine-resolution optical sensor-based products, including a water probability map, a land cover classification map (optional), and river width. The framework has been validated in multiple areas in different parts of the world using different satellite SAR data and globally available ancillary data products.
Therefore, it has the potential to contribute as an operational inundation mapping algorithm to any SAR missions, such as SWOT, ALOS, Sentinel, etc. Selected results using ALOS/PALSAR-1 L-band dual polarized data around the Connecticut River is provided in the attached Figure.
Optimizing Floating Guard Ring Designs for FASPAX N-in-P Silicon Sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Kyung-Wook; Bradford, Robert; Lipton, Ronald
2016-10-06
FASPAX (Fermi-Argonne Semiconducting Pixel Array X-ray detector) is being developed as a fast integrating area detector with wide dynamic range for time-resolved applications at the upgraded Advanced Photon Source (APS). A burst-mode detector with an intended 13 MHz image rate, FASPAX will also incorporate a novel integration circuit to achieve wide dynamic range, from single-photon sensitivity to 10^5 x-rays/pixel/pulse. To achieve these ambitious goals, a novel silicon sensor design is required. This paper will detail the early design of the FASPAX sensor. Results from TCAD optimization studies and characterization of prototype sensors will be presented.
Study of cluster shapes in a monolithic active pixel detector
NASA Astrophysics Data System (ADS)
Mączewski, Ł.; Adamus, M.; Ciborowski, J.; Grzelak, G.; Łużniak, P.; Nieżurawski, P.; Żarnecki, A. F.
2009-11-01
Beamstrahlung will constitute an important source of background in a pixel vertex detector at the future International Linear Collider. Electron and positron tracks of this origin impact the pixel planes at angles generally larger than those of secondary hadrons and the corresponding clusters are elongated. We report studies of cluster characteristics using test beam electron tracks incident at various angles on a MIMOSA-5 monolithic active pixel sensor matrix.
Pengra, Bruce; Johnston, C.A.; Loveland, Thomas R.
2007-01-01
Mapping tools are needed to document the location and extent of Phragmites australis, a tall grass that invades coastal marshes throughout North America, displacing native plant species and degrading wetland habitat. Mapping Phragmites is particularly challenging in the freshwater Great Lakes coastal wetlands due to dynamic lake levels and vegetation diversity. We tested the applicability of Hyperion hyperspectral satellite imagery for mapping Phragmites in wetlands of the west coast of Green Bay in Wisconsin, U.S.A. A reference spectrum created using Hyperion data from several pure Phragmites stands within the image was used with a Spectral Correlation Mapper (SCM) algorithm to create a raster map with values ranging from 0 to 1, where 0 represented the greatest similarity between the reference spectrum and the image spectrum and 1 the least similarity. The final two-class thematic classification predicted monodominant Phragmites covering 3.4% of the study area. Most of this was concentrated in long linear features parallel to the Green Bay shoreline, particularly in areas that had been under water only six years earlier when lake levels were 66 cm higher. An error matrix using spring 2005 field validation points (n = 129) showed good overall accuracy of 81.4%. The small size and linear arrangement of Phragmites stands was less than optimal relative to the sensor resolution, and Hyperion's 30 m resolution captured few if any pure pixels. Contemporary Phragmites maps prepared with Hyperion imagery would provide wetland managers with a tool that they currently lack, which could aid attempts to stem the spread of this invasive species.
Color filter array pattern identification using variance of color difference image
NASA Astrophysics Data System (ADS)
Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu
2017-07-01
A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel. Therefore, empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming the color filter array pattern. We present an identification method of the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, the color difference block is constructed to emphasize the difference between the original pixel and the interpolated pixel. The variance measure of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
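A simplified sketch of the underlying idea, not the authors' exact estimator: after removing the local mean, originally measured samples retain more high-frequency energy than demosaiced (interpolated) ones, so the candidate green sub-lattice with the larger residual variance is taken as the one that held true samples. Function and variable names are invented for illustration:

```python
import numpy as np

def identify_bayer_phase(green):
    """Decide which pair of Bayer phases a demosaiced green channel came
    from, by comparing high-pass residual variance on the two candidate
    green sub-lattices of the 2x2 Bayer cell."""
    # remove the local mean (4-neighbour average) to suppress the background
    mean4 = 0.25 * (np.roll(green, 1, 0) + np.roll(green, -1, 0)
                    + np.roll(green, 1, 1) + np.roll(green, -1, 1))
    resid = green - mean4
    # in GRBG/GBRG the green samples sit on the diagonal of the 2x2 cell;
    # in RGGB/BGGR they sit on the anti-diagonal
    diag = resid[0::2, 0::2].var() + resid[1::2, 1::2].var()
    anti = resid[0::2, 1::2].var() + resid[1::2, 0::2].var()
    return "RGGB/BGGR" if anti > diag else "GRBG/GBRG"
```

For example, synthesizing a green channel whose diagonal lattice is bilinearly interpolated from the anti-diagonal samples makes the function report the RGGB/BGGR family.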
Pixel electronic noise as a function of position in an active matrix flat panel imaging array
NASA Astrophysics Data System (ADS)
Yazdandoost, Mohammad Y.; Wu, Dali; Karim, Karim S.
2010-04-01
We present an analysis of output referred pixel electronic noise as a function of position in the active matrix array for both active and passive pixel architectures. Three different noise sources for Active Pixel Sensor (APS) arrays are considered: readout period noise, reset period noise and leakage current noise of the reset TFT during readout. For the state-of-the-art Passive Pixel Sensor (PPS) array, the readout noise of the TFT switch is considered. Measured noise results are obtained by modeling the array connections with RC ladders on a small in-house fabricated prototype. The results indicate that the pixels in the rows located in the middle part of the array have less random electronic noise at the output of the off-panel charge amplifier compared to the ones in rows at the two edges of the array. These results can help optimize for clearer images as well as help define the region-of-interest with the best signal-to-noise ratio in an active matrix digital flat panel imaging array.
Elbakri, I A; McIntosh, B J; Rickey, D W
2009-03-21
We investigated the physical characteristics of two complementary metal oxide semiconductor (CMOS) mammography detectors. The detectors featured 14-bit image acquisition, 50 μm detector element (del) size and an active area of 5 cm x 5 cm. One detector was a passive-pixel sensor (PPS) with signal amplification performed by an array of amplifiers connected to dels via data lines. The other detector was an active-pixel sensor (APS) with signal amplification performed at each del. Passive-pixel designs have higher read noise due to data line capacitance, and the APS represents an attempt to improve the noise performance of this technology. We evaluated the detectors' resolution by measuring the modulation transfer function (MTF) using a tilted edge. We measured the noise power spectra (NPS) and detective quantum efficiencies (DQE) using mammographic beam conditions specified by the IEC 62220-1-2 standard. Our measurements showed the APS to have much higher gain, slightly higher MTF, and higher NPS. The MTF of both sensors approached 10% near the Nyquist limit. DQE values near dc frequency were in the range of 55-67%, with the APS sensor DQE lower than the PPS DQE for all frequencies. Our results show that lower read noise specifications in this case do not translate into gains in the imaging performance of the sensor. We postulate that the lower fill factor of the APS is a possible cause for this result.
Integrated sensor with frame memory and programmable resolution for light adaptive imaging
NASA Technical Reports Server (NTRS)
Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)
2004-01-01
An image sensor operable to vary the output spatial resolution according to a received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch with an adjustable size are added to increase both the image brightness and signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibration column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed pattern noise.
Smartphone Based Platform for Colorimetric Sensing of Dyes
NASA Astrophysics Data System (ADS)
Dutta, Sibasish; Nath, Pabitra
We demonstrate the working of a smartphone-based optical sensor for measuring the absorption band of coloured dyes. By integrating simple laboratory optical components with the camera unit of the smartphone, we have converted it into a visible spectrometer with a pixel resolution of 0.345 nm/pixel. Light from a broadband optical source is allowed to transmit through a specific dye solution, and the transmitted light signal is captured by the camera of the smartphone. The present sensor is inexpensive, portable and lightweight, making it well suited for different on-field sensing applications.
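With the quoted dispersion of 0.345 nm/pixel, wavelength calibration of such a spectrometer reduces to a linear map anchored at a known spectral line; a sketch, with illustrative reference values (the paper does not give its calibration points):

```python
def pixel_to_wavelength(pixel, ref_pixel, ref_wavelength, dispersion=0.345):
    """Map a camera pixel column to wavelength (nm), assuming a linear
    dispersion in nm/pixel and one known reference line at ref_pixel."""
    return ref_wavelength + dispersion * (pixel - ref_pixel)
```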
AVIRIS calibration using the cloud-shadow method
NASA Technical Reports Server (NTRS)
Carder, K. L.; Reinersman, P.; Chen, R. F.
1993-01-01
More than 90 percent of the signal at an ocean-viewing satellite sensor is due to the atmosphere, so a 5 percent sensor-calibration error when viewing a target that contributes only 10 percent of the signal received at the sensor may result in a target-reflectance error of more than 50 percent. Since prelaunch calibration accuracies of 5 percent are typical of space-sensor requirements, recalibration of the sensor using ground-based methods is required for low-signal targets. Known target reflectance or water-leaving radiance spectra and atmospheric correction parameters are required. In this article we describe an atmospheric-correction method that uses cloud-shadowed pixels in combination with pixels in a neighboring region of similar optical properties to remove atmospheric effects from ocean scenes. These neighboring pixels can then be used as known reflectance targets for validation of the sensor calibration and atmospheric correction. The method uses the difference between water-leaving radiance values for these two regions. This allows nearly identical optical contributions to the two signals (e.g., path radiance and Fresnel-reflected skylight) to be removed, leaving mostly solar photons backscattered from beneath the sea to dominate the residual signal. Normalization by the incident solar irradiance reaching the sea surface provides the remote-sensing reflectance of the ocean at the location of the neighbor region.
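In schematic form, the core of the cloud-shadow method is a radiance difference normalized by surface irradiance; the sketch below omits transmittance and geometry factors the full method would need, and the variable names are illustrative:

```python
def remote_sensing_reflectance(l_neighbor, l_shadow, ed):
    """Schematic cloud-shadow estimate of remote-sensing reflectance.

    Path radiance and Fresnel-reflected skylight are nearly identical for
    the shadowed pixels and their sunlit neighbours, so their difference
    isolates sun photons backscattered from beneath the sea; dividing by
    the solar irradiance reaching the surface (ed) gives the reflectance."""
    return (l_neighbor - l_shadow) / ed
```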
Hamann, Elias; Koenig, Thomas; Zuber, Marcus; Cecilia, Angelica; Tyazhev, Anton; Tolbanov, Oleg; Procz, Simon; Fauler, Alex; Baumbach, Tilo; Fiederle, Michael
2015-03-01
High resistivity gallium arsenide is considered a suitable sensor material for spectroscopic X-ray imaging detectors. These sensors typically have thicknesses between a few hundred μm and 1 mm to ensure a high photon detection efficiency. However, for small pixel sizes down to several tens of μm, an effect called charge sharing reduces a detector's spectroscopic performance. The recently developed Medipix3RX readout chip overcomes this limitation by implementing a charge summing circuit, which allows the reconstruction of the full energy information of a photon interaction in a single pixel. In this work, we present the characterization of the first Medipix3RX detector assembly with a 500 μm thick high resistivity, chromium compensated gallium arsenide sensor. We analyze its properties and demonstrate the functionality of the charge summing mode by means of energy response functions recorded at a synchrotron. Furthermore, the imaging properties of the detector, in terms of its modulation transfer functions and signal-to-noise ratios, are investigated. After more than one decade of attempts to establish gallium arsenide as a sensor material for photon counting detectors, our results represent a breakthrough in obtaining detector-grade material. The sensor we introduce is therefore suitable for high resolution X-ray imaging applications.
Ultra-low power high-dynamic range color pixel embedding RGB to r-g chromaticity transformation
NASA Astrophysics Data System (ADS)
Lecca, Michela; Gasparini, Leonardo; Gottardi, Massimo
2014-05-01
This work describes a novel color pixel topology that converts the three chromatic components from the standard RGB space into the normalized r-g chromaticity space. This conversion is implemented with high dynamic range and with no dc power consumption, and the auto-exposure capability of the sensor ensures the capture of a high-quality chromatic signal, even in the presence of very bright illuminants or in darkness. The pixel is intended to become the basic building block of a CMOS color vision sensor targeted at ultra-low-power applications for mobile devices, such as human-machine interfaces, gesture recognition and face detection. The experiments show significant improvements of the proposed pixel with respect to standard cameras in terms of energy saving and data-acquisition accuracy. An application to skin-color-based description is presented.
NASA Astrophysics Data System (ADS)
Guskov, A.; Shelkov, G.; Smolyanskiy, P.; Zhemchugov, A.
2016-02-01
The scientific apparatus GAMMA-400, designed for the study of the electromagnetic and hadron components of cosmic rays, will be launched into an elliptical orbit with an apogee of about 300,000 km and a perigee of about 500 km. Such a configuration of the orbit allows it to periodically cross the radiation belt and the outer part of the magnetosphere. We discuss the possibility of using hybrid pixel detectors based on the Timepix chip and semiconductor sensors on board the GAMMA-400 apparatus. Due to the high granularity of the sensor (the pixel size is 55 μm) and the possibility of independently measuring the energy deposition in each pixel, such a compact and lightweight detector could be a unique instrument for the study of the spatial, energy and time structure of the electron and proton components of the radiation belt.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domengie, F., E-mail: florian.domengie@st.com; Morin, P.; Bauza, D.
We propose a model for dark current induced by metallic contamination in a CMOS image sensor. Based on Shockley-Read-Hall kinetics, the proposed expression for dark current accounts for the electric-field-enhanced emission factor due to Poole-Frenkel barrier lowering and phonon-assisted tunneling mechanisms. To that aim, we considered the distribution of the electric field magnitude and metal atoms in the depth of the pixel. Poisson statistics were used to estimate the random distribution of metal atoms in each pixel for a given contamination dose. Then, we performed a Monte-Carlo-based simulation for each pixel to set the number of metal atoms the pixel contained and the enhancement factor each atom underwent, and obtained a histogram of the number of pixels versus dark current for the full sensor. Excellent agreement with the dark current histogram measured on an ion-implanted gold-contaminated imager has been achieved, in particular for the description of the distribution tails due to the pixel regions in which the contaminant atoms undergo a large electric field. The agreement remains very good when the temperature is increased by 15 °C. We demonstrated that the amplification of the dark current generated at the typical electric fields encountered in CMOS image sensors, which depends on the nature of the metal contaminant, may become very large at high electric field. The electron and hole emissions and the resulting enhancement factor are described as a function of the trap characteristics, electric field, and temperature.
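The simulation flow described above (Poisson-distributed contaminant atoms per pixel, a per-atom field-enhancement draw, then a full-sensor dark current histogram) can be sketched as follows. The log-normal enhancement distribution and all numeric constants are illustrative stand-ins, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 100_000   # pixels in the simulated sensor
MEAN_ATOMS = 0.5     # average contaminant atoms per pixel (dose-dependent)
BASE_DARK = 1.0      # nominal SRH dark current per atom, arbitrary units

# Step 1: Poisson-distributed number of metal atoms in each pixel.
atoms = rng.poisson(MEAN_ATOMS, size=N_PIXELS)

# Step 2: for each atom, draw an illustrative field-enhancement factor.
# A log-normal stands in for the Poole-Frenkel / tunneling enhancement,
# which grows strongly in the high-field pixel regions (the histogram tail).
dark = np.zeros(N_PIXELS)
for i in np.nonzero(atoms)[0]:
    enhancement = rng.lognormal(mean=0.0, sigma=1.5, size=atoms[i])
    dark[i] = BASE_DARK * enhancement.sum()

# Step 3: histogram of pixel counts versus dark current for the full sensor.
counts, edges = np.histogram(dark, bins=50)
print("pixels with zero contamination:", np.sum(atoms == 0))
```

The uncontaminated pixels pile up in the first bin, while the few pixels combining several atoms with large enhancement factors populate the long tail, mirroring the measured distribution's shape.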
Compressive hyperspectral sensor for LWIR gas detection
NASA Astrophysics Data System (ADS)
Russell, Thomas A.; McMackin, Lenore; Bridge, Bob; Baraniuk, Richard
2012-06-01
Focal plane arrays with associated electronics and cooling are a substantial portion of the cost, complexity, size, weight, and power requirements of Long-Wave IR (LWIR) imagers. Hyperspectral LWIR imagers add a significant data volume burden as they collect a high-resolution spectrum at each pixel. We report here on an LWIR hyperspectral sensor that applies Compressive Sensing (CS) in order to achieve benefits in these areas. The sensor applies single-pixel detection technology demonstrated by Rice University. The single-pixel approach uses a Digital Micro-mirror Device (DMD) to reflect and multiplex the light from a random assortment of pixels onto the detector. This is repeated for a number of measurements much less than the total number of scene pixels. We have extended this architecture to hyperspectral LWIR sensing by inserting a Fabry-Perot spectrometer in the optical path. This compressive hyperspectral imager collects all three dimensions on a single detection element, greatly reducing the size, weight, and power requirements of the system relative to traditional approaches, while also reducing data volume. The CS architecture also supports innovative adaptive approaches to sensing, as the DMD device allows control over the selection of spatial scene pixels to be multiplexed on the detector. We are applying this advantage to the detection of plume gases, by adaptively locating and concentrating target energy. A key challenge in this system is the diffraction loss produced by the DMD in the LWIR. We report the results of testing DMD operation in the LWIR, as well as system spatial and spectral performance.
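The single-pixel measurement model (random DMD patterns multiplexing scene pixels onto one detector, with far fewer measurements than pixels) can be sketched as below. The pseudo-inverse recovery is a placeholder for the sparsity-promoting solvers actually used in compressive sensing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scene: 8x8 = 64 pixels, sparse (a few bright spots on dark background).
n = 64
scene = np.zeros(n)
scene[[5, 20, 41]] = [1.0, 0.5, 0.8]

# Each DMD pattern reflects a random subset of the pixels onto the detector.
m = 32                                   # measurements << scene pixels
masks = rng.integers(0, 2, size=(m, n))  # rows are binary DMD patterns
y = masks @ scene                        # one scalar detector reading per pattern

# Minimum-l2-norm estimate via the pseudo-inverse; a real system would use a
# sparsity-based solver (e.g. l1 minimization) for far better recovery.
estimate = np.linalg.pinv(masks) @ y
print("largest recovered pixels:", np.argsort(estimate)[-3:])
```

The key point is that only m = 32 scalar detector readings are stored for a 64-pixel scene; adding a Fabry-Perot spectral dimension, as in the paper, multiplies the savings.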
Precise color images: a high-speed color video camera system with three intensified sensors
NASA Astrophysics Data System (ADS)
Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.
1999-06-01
High-speed imaging systems have been used across a wide range of fields in science and engineering. Although high-speed camera systems have been improved to high performance, most of their applications only capture high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas, and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 × 64 pixels and 4,500 pps at 256 × 256 pixels, with 256 (8-bit) intensity levels for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was adjusted to within 0.2 pixels by this method.
Computation of dark frames in digital imagers
NASA Astrophysics Data System (ADS)
Widenhorn, Ralf; Rest, Armin; Blouke, Morley M.; Berry, Richard L.; Bodegom, Erik
2007-02-01
Dark current is caused by electrons that are thermally excited into the conduction band. These electrons are collected by the well of the CCD and add a false signal to the chip. We present an algorithm that automatically corrects for dark current. It uses a calibration protocol to characterize the image sensor at different temperatures. For a given exposure time, the dark current of every pixel is characteristic of a specific temperature; the dark current of every pixel can therefore be used as an indicator of the temperature. Hot pixels have the highest signal-to-noise ratio and are the best temperature sensors. We use the dark current of several hundred hot pixels to sense the chip temperature and predict the dark current of all pixels on the chip. Dark current computation is not a new concept, but our approach is unique. Advantages of our method include applicability to poorly temperature-controlled camera systems and the possibility of ex post facto dark current correction.
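A toy version of the hot-pixel thermometry idea, assuming a single-activation-energy Arrhenius model for every pixel's dark current (an assumption of this sketch, not necessarily the authors' calibration protocol):

```python
import numpy as np

rng = np.random.default_rng(2)
KB = 8.617e-5        # Boltzmann constant, eV/K
E_ACT = 0.6          # assumed single activation energy, eV

n_pixels = 10_000
# Calibration: per-pixel prefactor fitted at a known reference temperature.
prefactor = rng.lognormal(mean=0.0, sigma=1.0, size=n_pixels)

def dark_frame(temp_k):
    """Dark current of every pixel at temperature temp_k (Arrhenius model)."""
    return prefactor * np.exp(-E_ACT / (KB * temp_k))

# An exposure taken at an unknown chip temperature, with 2% multiplicative noise.
true_temp = 293.0
observed = dark_frame(true_temp) * rng.normal(1.0, 0.02, size=n_pixels)

# Hot pixels (largest calibrated prefactors) have the best signal-to-noise;
# invert the Arrhenius law on each and take the median temperature estimate.
hot = np.argsort(prefactor)[-300:]
temp_est = np.median(-E_ACT / (KB * np.log(observed[hot] / prefactor[hot])))

# Predict the full dark frame at the estimated temperature for subtraction.
predicted = dark_frame(temp_est)
print(f"estimated chip temperature: {temp_est:.1f} K")
```

The calibrated prefactors encode each pixel's individual dark current behavior, so once the temperature is pinned down from a few hundred hot pixels, the dark frame of the whole chip follows without a temperature sensor.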
Mattioli Della Rocca, Francescopaolo
2018-01-01
This paper examines methods to best exploit the high dynamic range (HDR) of the single-photon avalanche diode (SPAD) in a high-fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with in-pixel temporal oversampling. We present a silicon demonstration IC with a 96 × 40 array of 8.25 µm pitch, 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame, providing 3.75× data compression; hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
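The multi-exposure photon-counting arithmetic can be illustrated as follows, assuming each of the 15 binary fields records whether at least one photon arrived in its sub-exposure (the inversion formula and exposure values are illustrative, not taken from the paper):

```python
import math

FIELDS = 15  # binary field images summed in-pixel per frame

def spad_rate_estimate(count, exposure_s):
    """Invert binary (one-or-more photon) counting to a photon rate.

    count: number of the FIELDS sub-exposures in which >=1 photon arrived.
    Each sub-exposure of length exposure_s is a Bernoulli sample with
    p = 1 - exp(-rate * exposure_s), so rate = -ln(1 - p) / exposure_s.
    """
    p = count / FIELDS
    if p >= 1.0:        # all fields fired: this exposure is saturated
        return math.inf
    return -math.log(1.0 - p) / exposure_s

def hdr_rate(counts, exposures):
    """Multi-exposure HDR: use the longest exposure that did not saturate."""
    for count, exp_s in sorted(zip(counts, exposures), key=lambda ce: -ce[1]):
        rate = spad_rate_estimate(count, exp_s)
        if math.isfinite(rate):
            return rate
    return math.inf     # even the shortest exposure saturated

# Bright pixel: long and mid exposures saturate, the short one still informs.
print(hdr_rate(counts=(15, 15, 9), exposures=(1e-3, 1e-4, 1e-5)))
```

Dim pixels are served by the long exposure and bright pixels by the short one, which is how three back-to-back exposures stretch the pixel past 100 dB of dynamic range.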
NASA Astrophysics Data System (ADS)
Kremastiotis, I.; Ballabriga, R.; Campbell, M.; Dannheim, D.; Fiergolski, A.; Hynds, D.; Kulis, S.; Peric, I.
2017-09-01
The concept of capacitive coupling between sensors and readout chips is under study for the vertex detector at the proposed high-energy CLIC electron positron collider. The CLICpix Capacitively Coupled Pixel Detector (C3PD) is an active High-Voltage CMOS sensor, designed to be capacitively coupled to the CLICpix2 readout chip. The chip is implemented in a commercial 180 nm HV-CMOS process and contains a matrix of 128×128 square pixels with 25 μm pitch. First prototypes have been produced with a standard resistivity of ~20 Ωcm for the substrate and tested in standalone mode. The results show a rise time of ~20 ns, a charge gain of 190 mV/ke⁻, and ~40 e⁻ RMS noise for a power consumption of 4.8 μW/pixel. The main design aspects, as well as standalone measurement results, are presented.
Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2016-02-22
In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
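The focal-plane coded-shutter capture can be modeled per pixel as a short random binary code applied over time. This sketch compresses 32 sub-frames into 15 stored measurements and recovers them with a least-squares placeholder for the paper's reconstruction algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

T = 32        # time-resolved frames to reconstruct
M = 15        # compressed frames actually stored per pixel
n_pix = 64    # pixels in a toy 8x8 patch

# Per-pixel random binary shutter codes: which of the T sub-frames each
# of the M stored measurements integrates (focal-plane coded exposure).
codes = rng.integers(0, 2, size=(M, T))

# Toy time-varying scene and its compressed capture.
scene = rng.random((T, n_pix))
measured = codes @ scene                 # shape (M, n_pix)

# Minimum-norm least-squares reconstruction; the actual pipeline uses a
# sparsity-based solver plus disparity and temporal-response corrections.
recon, *_ = np.linalg.lstsq(codes, measured, rcond=None)
print(recon.shape)
```

Because the modulation happens pixel-by-pixel at the focal plane, only the M coded sums ever cross the charge-transfer bottleneck, which is why the effective frame rate can exceed that of conventional ultra-high-speed readout.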
Status of HVCMOS developments for ATLAS
NASA Astrophysics Data System (ADS)
Perić, I.; Blanco, R.; Casanova Mohr, R.; Ehrler, F.; Guezzi Messaoud, F.; Krämer, C.; Leys, R.; Prathapan, M.; Schimassek, R.; Schöning, A.; Vilella Figueras, E.; Weber, A.; Zhang, H.
2017-02-01
This paper describes the status of the developments made by the ATLAS HVCMOS and HVMAPS collaborations. We have proposed two HVCMOS sensor concepts for ATLAS pixels—the capacitively coupled pixel detector (CCPD) and the monolithic detector. The sensors have been implemented in three semiconductor processes: AMS H18, AMS H35, and LFoundry LFA15. An efficiency of 99.7% after neutron irradiation to 10¹⁵ neq/cm² has been measured with the small-area CCPD prototype in AMS H18 technology. About 84% of the particles are detected with a time resolution better than 25 ns. The sensor was implemented on a low resistivity substrate. The large-area demonstrator sensor in the AMS H35 process has been designed, produced, and successfully tested. The sensor has been produced on different high resistivity substrates ranging from 80 Ωcm to more than 1 kΩcm. Both monolithic and hybrid readout are possible. In August 2016, six different monolithic pixel matrices for ATLAS with a total area of 1 cm² were submitted in the LFoundry LFA15 process. The matrices implement column-drain and triggered readout as well as waveform sampling capability at the pixel level. Design details will be presented.
Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa
2016-08-08
We developed multiocular 1/3-inch, 2.75-μm-pixel-size, 2.1M-pixel image sensors by co-design of an on-chip beam splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field, with the on-chip beam splitter horizontally dividing rays according to incident angle and the inner meta-micro-lens collecting the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light field images, a selective baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Multiple-Event, Single-Photon Counting Imaging Sensor
NASA Technical Reports Server (NTRS)
Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.
2011-01-01
The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for photon count registration. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time can't be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency will substantially negate any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register one million or more photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting from ultra-low-light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.
Characterization of Kilopixel TES detector arrays for PIPER
NASA Astrophysics Data System (ADS)
Datta, Rahul; Ade, Peter; Benford, Dominic; Bennett, Charles; Chuss, David; Costen, Nicholas; Coughlin, Kevin; Dotson, Jessie; Eimer, Joseph; Fixsen, Dale; Gandilo, Natalie; Halpern, Mark; Essinger-Hileman, Thomas; Hilton, Gene; Hinshaw, Gary; Irwin, Kent; Jhabvala, Christine; Kimball, Mark; Kogut, Al; Lazear, Justin; Lowe, Luke; Manos, George; McMahon, Jeff; Miller, Timothy; Mirel, Paul; Moseley, Samuel Harvey; Pawlyk, Samuel; Rodriguez, Samelys; Sharp, Elmer; Shirron, Peter; Staguhn, Johannes G.; Sullivan, Dan; Switzer, Eric; Taraschi, Peter; Tucker, Carole; Walts, Alexander; Wollack, Edward
2018-01-01
The Primordial Inflation Polarization ExploreR (PIPER) is a balloon-borne instrument optimized to measure the polarization of the Cosmic Microwave Background (CMB) at large angular scales. It will map 85% of the sky in four frequency bands centered at 200, 270, 350, and 600 GHz to characterize dust foregrounds and constrain the tensor-to-scalar ratio, r. The sky is imaged onto 32×40 pixel arrays of time-domain multiplexed Transition-Edge Sensor (TES) bolometers operating at a bath temperature of 100 mK to achieve background-limited sensitivity. Each kilopixel array is indium-bump-bonded to a 2D superconducting quantum interference device (SQUID) time-domain multiplexer (MUX) chip and read out by warm electronics. Each pixel measures the total incident power over a frequency band defined by bandpass filters in front of the array, while polarization sensitivity is provided by the upstream Variable-delay Polarization Modulators (VPMs) and analyzer grids. We present measurements of the detector parameters from the laboratory characterization of the first kilopixel science array for PIPER, including transition temperature, saturation power, thermal conductivity, time constant, and noise performance. We also describe the testing of the 2D MUX chips, the optimization of the integrated readout parameters, and the overall pixel yield of the array. The first PIPER science flight is planned for June 2018 from Palestine, Texas.
Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling
Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min
2016-01-01
RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and errors in depth measurement that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the detail of models generated from depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the two cameras is also calibrated. Second, to ensure the accuracy of RGB image poses, a refined method for rejecting false feature matches is introduced, combining the depth information with the initial camera poses between frames of the RGB-D sensor. A global optimization model is then used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. The proposed method is then examined in tests with two datasets collected in outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028
Utilizing soil polypedons to improve model performance for digital soil mapping
USDA-ARS?s Scientific Manuscript database
Most digital soil mapping approaches that use point data to develop relationships with covariate data intersect sample locations with one raster pixel regardless of pixel size. Resulting models are subject to spurious values in covariate data which may limit model performance. An alternative approac...
Validation of ET maps derived from MODIS imagery
NASA Astrophysics Data System (ADS)
Hong, S.; Hendrickx, J. M.; Borchers, B.
2005-12-01
In previous work we used the New Mexico Tech implementation of the Surface Energy Balance Algorithm for Land (SEBAL-NMT) to generate ET maps from Landsat imagery. Comparison of these SEBAL ET estimates against ground ET measurements using eddy covariance showed satisfactory agreement between the two methods in the heterogeneous arid landscape of the Middle Rio Grande Basin. The objective of this study is to validate SEBAL ET estimates obtained from MODIS imagery. The use of MODIS imagery is attractive since MODIS images are available at a much higher frequency than Landsat images, at no cost to the user. MODIS images have a pixel size in the thermal band of 1000 × 1000 m, much coarser than the 60 × 60 m pixel size of Landsat 7. This large pixel size precludes the use of eddy covariance measurements for validating ET maps derived from MODIS imagery, since an eddy covariance measurement is not representative of a 1000 × 1000 m MODIS pixel. In our experience, a typical footprint of an ET rate measured by eddy covariance on a clear day in New Mexico around 11 am is less than ten thousand square meters, or two orders of magnitude smaller than a MODIS thermal pixel. Therefore, we have validated ET maps derived from MODIS imagery by comparison with up-scaled ET maps derived from Landsat imagery. The results of our study demonstrate: (1) there is good agreement between ET maps derived from Landsat and MODIS images; (2) up-scaling of Landsat ET maps over the Middle Rio Grande Basin produces ET maps that are very similar to ET maps directly derived from MODIS images; (3) ET maps derived from free MODIS imagery using SEBAL-NMT can provide reliable regional ET information for water resource managers.
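The up-scaling step (aggregating fine Landsat-derived ET pixels to a coarse MODIS-like grid) reduces to block averaging when the resolution ratio is an integer, as in this sketch:

```python
import numpy as np

def upscale_mean(fine, factor):
    """Aggregate a fine-resolution map to coarse pixels by block averaging.

    fine: 2D array whose dimensions are multiples of `factor`.
    Returns the coarse map of block means, the standard way to compare a
    fine-grid ET map against a coarser thermal-band pixel grid.
    """
    h, w = fine.shape
    assert h % factor == 0 and w % factor == 0, "pad or crop first"
    blocks = fine.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# 16 fine pixels per coarse pixel in this toy case; the true Landsat-to-MODIS
# ratio (60 m -> 1000 m) is not an integer, so real pipelines reproject both
# maps to a common grid instead of simple block averaging.
fine_et = np.arange(64, dtype=float).reshape(8, 8)   # synthetic ET map
coarse = upscale_mean(fine_et, 4)
print(coarse.shape)
```

Averaging preserves the mean ET flux over each coarse cell, which is the quantity being compared between the two sensors.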
Towards global Landsat burned area mapping: revisit time and availability of cloud free observations
NASA Astrophysics Data System (ADS)
Melchiorre, A.; Boschetti, L.
2016-12-01
Global, daily coarse-resolution satellite data have been extensively used for systematic burned area mapping (Giglio et al. 2013; Mouillot et al. 2014). The adoption of similar approaches for producing moderate-resolution (10–30 m) global burned area products would lead to very significant improvements for the wide variety of fire information users. It would meet the demand for accurate burned area perimeters needed for fire management, post-fire assessment, and environmental restoration, and would lead to more accurate and precise atmospheric emission estimates, especially over heterogeneous areas (Mouillot et al. 2014; Randerson et al. 2012; van der Werf et al. 2010). The increased spatial resolution clearly benefits mapping accuracy: the reduction of mixed pixels directly translates into increased spectral separation compared to coarse-resolution data. As a tradeoff, the lower temporal resolution (e.g., 16 days for Landsat) could potentially cause large omission errors in ecosystems with fast post-fire recovery. The spectral signal due to fire effects is non-permanent and can be detected for a period ranging from a few weeks in savannas and grasslands to over a year in forest ecosystems (Roy et al. 2010). Additionally, clouds, smoke, and other optically thick aerosols limit the number of available observations (Roy et al. 2008; Smith and Wooster 2005), exacerbating the issues related to mapping burned areas globally with moderate-resolution sensors. This study presents a global analysis of the effect of cloud cover on Landsat data availability over burned areas, by analyzing the MODIS data record of burned area (MCD45) and cloud detections (MOD35) and combining it with the Landsat acquisition calendar and viewing geometry.
For each pixel classified as burned in the MCD45 product, the MOD35 data are used to determine how many cloud free observations would have been available on Landsat overpass days, within the period of observability of the burned area spectral signal in the specific ecosystem. If a burned area pixel is covered by clouds on all the post-fire Landsat overpass days, we assume that it would not be detected in a hypothetical Landsat global burned area product. The resulting maps of expected omission errors are combined for the full 15-year MODIS dataset, and summarized by ecoregion and landcover class.
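The per-pixel omission test described above can be sketched as follows; the overpass calendar, cloud list, and persistence window are illustrative inputs, not the MCD45/MOD35 processing chain:

```python
LANDSAT_REVISIT = 16   # days between successive Landsat overpasses

def would_be_detected(burn_day, overpass_days, cloudy_days, persistence_days):
    """Would a Landsat burned-area product see this MODIS-detected burn?

    A burn is detectable only on overpass days that fall within the
    ecosystem-specific persistence window after the fire AND that are
    cloud free; otherwise it counts as an expected omission error.
    """
    cloudy = set(cloudy_days)
    for day in overpass_days:
        if burn_day <= day <= burn_day + persistence_days and day not in cloudy:
            return True
    return False

# Savanna-like case: the burn signal persists ~30 days; the only two
# overpasses inside the window are both cloudy -> expected omission.
overpasses = list(range(0, 365, LANDSAT_REVISIT))
print(would_be_detected(100, overpasses, cloudy_days={112, 128},
                        persistence_days=30))   # -> False
```

Summing such per-pixel outcomes over the 15-year MODIS record, by ecoregion and landcover class, yields the expected-omission maps the study describes.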
Lava flow risk maps at Mount Cameroon volcano
NASA Astrophysics Data System (ADS)
Favalli, M.; Fornaciai, A.; Papale, P.; Tarquini, S.
2009-04-01
Mount Cameroon, in southwest Cameroon, is one of the most active volcanoes in Africa. Rising 4095 m asl, it has erupted nine times since the beginning of the past century, most recently in 1999 and 2000. Documented Mount Cameroon eruptions are moderate explosive and effusive eruptions that occurred from both summit and flank vents. A 1922 SW-flank eruption produced a lava flow that reached the Atlantic coast near the village of Biboundi, and a lava flow from a 1999 south-flank eruption stopped only 200 m from the sea, threatening the villages of Bakingili and Dibunscha. More than 450,000 people live or work around the volcano, making the risk from lava flow invasion a great concern. In this work we propose both conventional hazard and risk maps and novel quantitative risk maps which relate vent locations to the expected total damage to existing buildings. These maps are based on lava flow simulations starting from 70,000 different vent locations, a probability distribution of vent opening, a law for the maximum length of lava flows, and a database of buildings. The simulations were run over the SRTM Digital Elevation Model (DEM) using DOWNFLOW, a fast DEM-driven model that is able to compute detailed invasion areas of lava flows from each vent. We present three different types of risk maps (90-m pixel) for buildings around Mount Cameroon volcano: (1) a conventional risk map that assigns a probability of devastation by lava flows to each pixel representing buildings; (2) a reversed risk map where each pixel expresses the total damage expected as a consequence of vent opening in that pixel (the damage is expressed as the total surface of urbanized areas invaded); (3) maps of the lava catchments of the main towns around the volcano, in which, within every catchment, the pixels are classified according to the expected impact they might produce on the relative town in the case of a vent opening in that pixel.
Maps of type (1) and (3) are useful for long term planning. Maps of type (2) and (3) are useful at the onset of a new eruption, when a vent forms. The combined use of these maps provides an efficient tool for lava flow risk assessment at Mount Cameroon.
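A minimal sketch of how a type-(1) conventional risk map could be assembled from per-vent flow simulations and a vent-opening probability distribution (the data structures here are hypothetical, not DOWNFLOW's):

```python
import numpy as np

def conventional_risk_map(shape, flow_footprints, vent_probs):
    """Type-(1) map: probability that each pixel is invaded by lava.

    Sums the vent-opening probability over every simulated flow (one per
    candidate vent) whose footprint covers the pixel, assuming the flow
    path is deterministic once the vent location is fixed.
    """
    risk = np.zeros(shape)
    for footprint, p in zip(flow_footprints, vent_probs):
        risk[footprint] += p     # footprint: boolean mask of invaded pixels
    return risk

# Two hypothetical vents on a 3x3 grid; their simulated flows overlap
# in the corner pixel, which accumulates both opening probabilities.
flow_a = np.zeros((3, 3), dtype=bool); flow_a[0, :] = True
flow_b = np.zeros((3, 3), dtype=bool); flow_b[:, 0] = True
risk = conventional_risk_map((3, 3), [flow_a, flow_b], [0.6, 0.4])
print(risk)
```

The reversed (type-2) map inverts this bookkeeping: instead of accumulating probability on the invaded pixels, each vent pixel would store the total urbanized surface its own simulated flow invades.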
High-resolution CASSINI-VIMS mosaics of Titan and the icy Saturnian satellites
Jaumann, R.; Stephan, K.; Brown, R.H.; Buratti, B.J.; Clark, R.N.; McCord, T.B.; Coradini, A.; Capaccioni, F.; Filacchione, G.; Cerroni, P.; Baines, K.H.; Bellucci, G.; Bibring, J.-P.; Combes, M.; Cruikshank, D.P.; Drossart, P.; Formisano, V.; Langevin, Y.; Matson, D.L.; Nelson, R.M.; Nicholson, P.D.; Sicardy, B.; Sotin, Christophe; Soderbloom, L.A.; Griffith, C.; Matz, K.-D.; Roatsch, Th.; Scholten, F.; Porco, C.C.
2006-01-01
The Visual Infrared Mapping Spectrometer (VIMS) onboard the CASSINI spacecraft obtained new spectral data of the icy satellites of Saturn after its arrival at Saturn in June 2004. VIMS operates in a spectral range from 0.35 to 5.2 μm, generating image cubes in which each pixel represents a spectrum consisting of 352 contiguous wavebands. As an imaging spectrometer VIMS combines the characteristics of both a spectrometer and an imaging instrument. This makes it possible to analyze the spectrum of each pixel separately and to map the spectral characteristics spatially, which is important to study the relationships between spectral information and geological and geomorphologic surface features. The spatial analysis of the spectral data requires the determination of the exact geographic position of each pixel on the specific surface and that all 352 spectral elements of each pixel show the same region of the target. We developed a method to reproject each pixel geometrically and to convert the spectral data into map projected image cubes. This method can also be applied to mosaic different VIMS observations. Based on these mosaics, maps of the spectral properties for each Saturnian satellite can be derived and attributed to geographic positions as well as to geological and geomorphologic surface features. These map-projected mosaics are the basis for all further investigations. © 2006 Elsevier Ltd. All rights reserved.
CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.
Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V
2010-12-01
We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520 elements) array of active pixel sensors, and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data is read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target analyte responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16 elements) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at a 1 kHz sampling frequency driven at 5 V. The developed prototype system demonstrates a low-cost and miniaturized luminescence multisensor system.
Fusion: ultra-high-speed and IR image sensors
NASA Astrophysics Data System (ADS)
Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.
2015-08-01
Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an air bag at a car accident and a tire under a sudden brake, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after an image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps [1]. However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, achieving a 100% fill factor and a very high quantum efficiency [2]. The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater wiring freedom on the front side [3]. The BSI structure has the further advantage that attaching an additional layer on the backside, such as a scintillator, presents fewer difficulties. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nanotechnologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses issues in the integration.
High dynamic range pixel architecture for advanced diagnostic medical x-ray imaging applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izadi, Mohammad Hadi; Karim, Karim S.
2006-05-15
The most widely used architecture in large-area amorphous silicon (a-Si) flat panel imagers is a passive pixel sensor (PPS), which consists of a detector and a readout switch. While the PPS has the advantage of being compact and amenable toward high-resolution imaging, small PPS output signals are swamped by external column charge amplifier and data line thermal noise, which reduce the minimum readable sensor input signal. In contrast to PPS circuits, on-pixel amplifiers in a-Si technology reduce readout noise to levels that can meet even the stringent requirements for low noise digital x-ray fluoroscopy (<1000 noise electrons). However, larger voltages at the pixel input cause the output of the amplified pixel to become nonlinear thus reducing the dynamic range. We reported a hybrid amplified pixel architecture based on a combination of PPS and amplified pixel designs that, in addition to low noise performance, also resulted in large-signal linearity and consequently higher dynamic range [K. S. Karim et al., Proc. SPIE 5368, 657 (2004)]. The additional benefit in large-signal linearity, however, came at the cost of an additional pixel transistor. We present an amplified pixel design that achieves the goals of low noise performance and large-signal linearity without the need for an additional pixel transistor. Theoretical calculations and simulation results for noise indicate the applicability of the amplified a-Si pixel architecture for high dynamic range, medical x-ray imaging applications that require switching between low exposure, real-time fluoroscopy and high-exposure radiography.
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.
Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2018-03-05
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
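The classical photometric-stereo step that the abstract builds on can be sketched as a per-pixel least-squares problem (a generic textbook formulation, not the authors' implementation; the lighting directions and intensities below are synthetic):

```python
import numpy as np

# With k >= 3 images under known unit lighting directions L (k,3), a Lambertian
# pixel's intensities I (k,) satisfy I = L @ (rho * n), so the albedo-scaled
# normal is the least-squares solution g; its norm is the albedo rho.

def estimate_normal(L, I):
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * normal
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Synthetic check: a Lambertian pixel with known normal and albedo.
n_true = np.array([0.0, 0.6, 0.8])
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
I = 0.9 * L @ n_true            # rho = 0.9, no shadowing
n, rho = estimate_normal(L, I)
print(np.round(n, 3), round(rho, 3))   # recovers [0. 0.6 0.8] and 0.9
```

In the multi-tap sensor described above, the k exposures come from the different taps of a single pixel, so the three lighting conditions are captured at almost identical times and the same formulation applies per frame.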
A multi-sensor data-driven methodology for all-sky passive microwave inundation retrieval
NASA Astrophysics Data System (ADS)
Takbiri, Zeinab; Ebtehaj, Ardeshir M.; Foufoula-Georgiou, Efi
2017-06-01
We present a multi-sensor Bayesian passive microwave retrieval algorithm for flood inundation mapping at high spatial and temporal resolutions. The algorithm takes advantage of observations from multiple sensors in optical, short-infrared, and microwave bands, thereby allowing for detection and mapping of the sub-pixel fraction of inundated areas under almost all-sky conditions. The method relies on a nearest-neighbor search and a modern sparsity-promoting inversion method that make use of an a priori dataset in the form of two joint dictionaries. These dictionaries contain almost overlapping observations by the Special Sensor Microwave Imager and Sounder (SSMIS) on board the Defense Meteorological Satellite Program (DMSP) F17 satellite and the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the Aqua and Terra satellites. Evaluation of the retrieval algorithm over the Mekong Delta shows that it captures to a good degree the diurnal variability of inundation due to localized convective precipitation. At longer timescales, the results are consistent with ground-based water level observations, indicating that the method properly captures seasonal inundation patterns in response to regional monsoonal rain. The calculated Euclidean distance, rank-correlation, and also copula quantile analysis demonstrate a good agreement between the outputs of the algorithm and the observed water levels at monthly and daily timescales. The current inundation products are at a resolution of 12.5 km and taken twice per day, but a higher resolution (order of 5 km and every 3 h) can be achieved using the same algorithm with the dictionary populated by the Global Precipitation Mission (GPM) Microwave Imager (GMI) products.
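The nearest-neighbor search at the core of such a dictionary-based retrieval can be sketched schematically (a toy illustration with synthetic data; the actual algorithm couples the neighbor search with a sparsity-promoting inversion over the joint SSMIS/MODIS dictionaries):

```python
import numpy as np

# Dictionary entries pair microwave brightness-temperature (Tb) vectors with
# optically derived sub-pixel inundation fractions. A new Tb observation is
# matched against the dictionary and the fraction is estimated as an
# inverse-distance-weighted average of the k closest entries.

def knn_inundation(tb_obs, tb_dict, frac_dict, k=3):
    d = np.linalg.norm(tb_dict - tb_obs, axis=1)   # Euclidean distances
    idx = np.argsort(d)[:k]                        # k closest dictionary entries
    w = 1.0 / (d[idx] + 1e-9)                      # inverse-distance weights
    return float(np.sum(w * frac_dict[idx]) / np.sum(w))

rng = np.random.default_rng(0)
tb_dict = rng.uniform(200.0, 290.0, size=(500, 4))  # 4 synthetic channels, K
frac_dict = rng.uniform(0.0, 1.0, size=500)         # paired inundation fractions

# A slightly perturbed copy of entry 10 should retrieve a nearby fraction:
f = knn_inundation(tb_dict[10] + 0.1, tb_dict, frac_dict)
```

The real algorithm's a priori dictionaries are built from near-coincident SSMIS and MODIS overpasses rather than random numbers, but the retrieval geometry is the same.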
SCUBA-2: The next generation wide-field imager for the James Clerk Maxwell Telescope
NASA Astrophysics Data System (ADS)
Holland, W. S.; Duncan, W. D.; Kelly, B. D.; Peacocke, T.; Robson, E. I.; Irwin, K. D.; Hilton, G.; Rinehart, S.; Ade, P. A. R.; Griffin, M. J.
2000-12-01
We describe SCUBA-2 - the next generation continuum imaging camera for the James Clerk Maxwell Telescope. The instrument will capitalise on the success of the current SCUBA camera, by having a much larger field-of-view and improved sensitivity. SCUBA-2 will be able to map the submillimetre sky several hundred times faster than SCUBA to the same noise level. Many areas of astronomy are expected to benefit - from large scale cosmological surveys to probe galaxy formation and evolution to studies of the earliest stages of star formation in our own Galaxy. Perhaps the most exciting prospect that SCUBA-2 will offer is in the statistical significance of wide-field surveys. The key science requirements of the new camera are the ability to make very deep images - reaching background confusion levels in only a couple of hours; to generate high fidelity images at two wavelengths simultaneously; to map large areas of sky (tens of degrees) to a reasonable depth in only a few hours; and to carry out photometry of known-position point sources to a high accuracy. The technical design of SCUBA-2 will incorporate new-technology transition-edge sensors as the detecting element, with signals being read out using multiplexed SQUID amplifiers. As in SCUBA there will be two arrays operating at 450 and 850 microns simultaneously. Fully sampling a field-of-view of 8 arcminutes square will require 25,600 and 6,400 pixels at 450 and 850 microns respectively (cf. 91 and 37 pixels with SCUBA!). Each pixel will have diffraction-limited resolution on the sky and a sensitivity dominated by the background photon noise. SCUBA-2 is a collaboration between a number of institutions. We anticipate delivery of the final instrument to the telescope before the end of 2005.
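The quoted pixel counts can be sanity-checked against Nyquist sampling of the diffraction limit of the 15 m JCMT dish over the 8 arcminute square field (a back-of-envelope check on our part, not a calculation stated in the abstract):

```python
# Diffraction-limited beam ~ lambda/D; sampling at roughly half the beam
# (Nyquist) over a 480 arcsec field reproduces the quoted array sizes to
# within a few percent. The 15 m dish diameter is the JCMT's.

def pixels_per_side(wavelength_m, dish_m=15.0, fov_arcsec=480.0):
    beam_arcsec = (wavelength_m / dish_m) * 206265.0  # lambda/D in arcsec
    pixel_arcsec = beam_arcsec / 2.0                  # ~Nyquist sampling
    return fov_arcsec / pixel_arcsec

for wavelength, quoted in ((450e-6, 25600), (850e-6, 6400)):
    n = pixels_per_side(wavelength) ** 2
    print(f"{wavelength * 1e6:.0f} um: ~{n:.0f} pixels (quoted: {quoted})")
```

Both estimates land within about 6% of the 25,600- and 6,400-pixel figures, consistent with the abstract's statement that each pixel is diffraction-limited.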
Learning to merge: a new tool for interactive mapping
NASA Astrophysics Data System (ADS)
Porter, Reid B.; Lundquist, Sheng; Ruggiero, Christy
2013-05-01
The task of turning raw imagery into semantically meaningful maps and overlays is a key area of remote sensing activity. Image analysts, in applications ranging from environmental monitoring to intelligence, use imagery to generate and update maps of terrain, vegetation, road networks, buildings and other relevant features. Often these tasks can be cast as a pixel labeling problem, and several interactive pixel labeling tools have been developed. These tools exploit training data, which is generated by analysts using simple and intuitive paint-program annotation tools, in order to tailor the labeling algorithm for the particular dataset and task. In other cases, the task is best cast as a pixel segmentation problem. Interactive pixel segmentation tools have also been developed, but these tools typically do not learn from training data like the pixel labeling tools do. In this paper we investigate tools for interactive pixel segmentation that also learn from user input. The input has the form of segment merging (or grouping). Merging examples are 1) easily obtained from analysts using vector annotation tools, and 2) more challenging to exploit than traditional labels. We outline the key issues in developing these interactive merging tools, and describe their application to remote sensing.
Image sensor with high dynamic range linear output
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)
2007-01-01
Designs and operational methods to increase the dynamic range of image sensors, and APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, thus resulting in multiple integration times. The operational methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, high-speed, high-resolution analog-to-digital conversion can be implemented.
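The multiple-integration-time idea can be sketched as follows (a minimal illustration of the readout principle, not the patented circuit; the saturation level is an assumed 12-bit value):

```python
# For each pixel, keep the longest integration that did not saturate and
# rescale it to a common time base. Bright pixels fall back to short
# integrations while the output stays linear across the extended range.

FULL_WELL = 4095            # assumed 12-bit saturation level

def hdr_value(samples, t_ints):
    """samples[i]: raw count after integration time t_ints[i] (ascending)."""
    t_ref = max(t_ints)
    best = samples[0] * t_ref / t_ints[0]  # shortest exposure is always usable
    for s, t in zip(samples, t_ints):
        if s < FULL_WELL:                  # longest unsaturated sample wins:
            best = s * (t_ref / t)         # lowest noise, rescaled linearly
    return best

# A bright pixel saturates the long integration but stays linear in the short one:
print(hdr_value([400, 4095], t_ints=[1.0, 10.0]))   # -> 4000.0
```

The linear rescaling by the ratio of integration times is what preserves a linear overall transfer function while extending dynamic range by that same ratio.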
Bonding techniques for hybrid active pixel sensors (HAPS)
NASA Astrophysics Data System (ADS)
Bigas, M.; Cabruja, E.; Lozano, M.
2007-05-01
A hybrid active pixel sensor (HAPS) consists of an array of sensing elements which is connected to an electronic read-out unit. The most common way to connect these two different devices is bump bonding. This interconnection technique is very suitable for these systems because it allows a very fine pitch and a high number of I/Os. However, other interconnection techniques, such as direct bonding, are also available. This paper, as a continuation of a review [M. Lozano, E. Cabruja, A. Collado, J. Santander, M. Ullan, Nucl. Instr. and Meth. A 473 (1-2) (2001) 95-101] published in 2001, presents an update of the different advanced bonding techniques available for manufacturing a hybrid active pixel detector.
Noise and spectroscopic performance of DEPMOSFET matrix devices for XEUS
NASA Astrophysics Data System (ADS)
Treis, J.; Fischer, P.; Hälker, O.; Herrmann, S.; Kohrs, R.; Krüger, H.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Strüder, L.; Trimpl, M.; Wermes, N.; Wölfel, S.
2005-08-01
DEPMOSFET-based Active Pixel Sensor (APS) matrix devices, originally developed to cope with the challenging requirements of the XEUS Wide Field Imager, have proven to be a promising new imager concept for a variety of future X-ray imaging and spectroscopy missions like Simbol-X. The devices combine excellent energy resolution, high speed readout and low power consumption with the attractive feature of random accessibility of pixels. A production run of sensor prototypes with 64 × 64 pixels, each 75 μm × 75 μm in size, has recently been completed at the MPI semiconductor laboratory in Munich. The devices are built for row-wise readout and require dedicated control and signal processing electronics of the CAMEX type, which is integrated together with the sensor onto a readout hybrid. A number of hybrids incorporating the most promising sensor design variants have been built, and their performance has been studied in detail. A spectroscopic resolution of 131 eV has been measured, and the readout noise is as low as 3.5 e⁻ ENC. Here, the dependence of readout noise and spectroscopic resolution on the device temperature is presented.
Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn
2017-06-25
In this paper, we present a multi-resolution-mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed for the 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) of the CIS, which supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while the scene remains static (no events). A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), at a frame rate of 14 frames/s.
Radiation hard analog circuits for ALICE ITS upgrade
NASA Astrophysics Data System (ADS)
Gajanana, D.; Gromov, V.; Kuijer, P.; Kugathasan, T.; Snoeys, W.
2016-03-01
The ALICE experiment is planning to upgrade the ITS (Inner Tracking System) [1] detector during the LS2 shutdown. The present ITS will be fully replaced with a new one entirely based on CMOS monolithic pixel sensor chips fabricated in the TowerJazz 0.18 μm CMOS imaging technology. The large (3 cm × 1.5 cm = 4.5 cm²) ALPIDE (ALICE PIxel DEtector) sensor chip contains about 500 kpixels and will be used to cover a 10 m² area with 12.5 Gpixels distributed over seven cylindrical layers. The ALPOSE chip was designed as a test chip for the various building blocks foreseen in the ALPIDE [2] pixel chip from CERN. The building blocks include a bandgap reference and a temperature sensor in four different flavours, and LDOs for powering schemes. One flavour of the bandgap and temperature sensor will be included in the ALPIDE chip. Power consumption figures have dropped very significantly, making the use of LDOs less interesting, but in this paper all blocks are presented, including measurement results before and after irradiation with neutrons to characterize robustness against displacement damage.
NASA Astrophysics Data System (ADS)
Snoeys, W.; Aglieri Rinella, G.; Hillemanns, H.; Kugathasan, T.; Mager, M.; Musa, L.; Riedler, P.; Reidt, F.; Van Hoorne, J.; Fenigstein, A.; Leitner, T.
2017-11-01
For the upgrade of its Inner Tracking System, the ALICE experiment plans to install a new tracker fully constructed with monolithic active pixel sensors implemented in a standard 180 nm CMOS imaging sensor process, with a deep pwell allowing full CMOS within the pixel. Reverse substrate bias increases the tolerance to non-ionizing energy loss (NIEL) well beyond 10¹³ 1 MeV neq/cm², but does not allow full depletion of the sensitive layer and hence full charge collection by drift, mandatory for more extreme radiation tolerance. This paper describes a process modification to fully deplete the epitaxial layer even with a small charge collection electrode. It uses a low-dose blanket deep high-energy n-type implant in the pixel array and does not require significant circuit or layout changes, so that the same design can be fabricated both in the standard and modified process. When exposed to a ⁵⁵Fe source at a reverse substrate bias of -6 V, pixels implemented in the standard process and in the modified process with a low- and high-dose variant of the deep n-type implant, respectively, yield signals of about 115 mV, 110 mV and 90 mV at the output of a follower circuit. Signal rise times, heavily affected by the speed of this circuit, are 27.8 ± 5 ns, 23.2 ± 4.2 ns, and 22.2 ± 3.7 ns rms, respectively. In a different setup, the single-pixel signal from a ⁹⁰Sr source degrades by less than 20% for the modified process after a 10¹⁵ 1 MeV neq/cm² irradiation, while the signal rise time only degrades from about 16 ± 2 ns to 19 ± 2.8 ns rms. From sensors implemented in the standard process, no useful signal could be extracted after the same exposure. These first results indicate that the process modification maintains low sensor capacitance, improves timing performance and increases NIEL tolerance by at least an order of magnitude.
Absolute Thermal SST Measurements over the Deepwater Horizon Oil Spill
NASA Astrophysics Data System (ADS)
Good, W. S.; Warden, R.; Kaptchen, P. F.; Finch, T.; Emery, W. J.
2010-12-01
Climate monitoring and natural disaster rapid assessment require baseline measurements that can be tracked over time to distinguish anthropogenic versus natural changes to the Earth system. Disasters like the Deepwater Horizon Oil Spill require constant monitoring to assess the potential environmental and economic impacts. Absolute calibration and validation of Earth-observing sensors is needed to allow for comparison of temporally separated data sets and provide accurate information to policy makers. The Ball Experimental Sea Surface Temperature (BESST) radiometer was designed and built by Ball Aerospace to provide a well calibrated measure of sea surface temperature (SST) from an unmanned aerial system (UAS). Currently, emissive skin SST observed by satellite infrared radiometers is validated by shipborne instruments that are expensive to deploy and can only take a few data samples along the ship track to overlap within a single satellite pixel. Implementation on a UAS will allow BESST to map the full footprint of a satellite pixel and perform averaging to remove any local variability due to the difference in footprint size of the instruments. It also enables the capability to study this sub-pixel variability to determine if smaller scale effects need to be accounted for in models to improve forecasting of ocean events. In addition to satellite sensor validation, BESST can distinguish meter-scale variations in SST, which could be used to remotely monitor and assess thermal pollution in rivers and coastal areas as well as study diurnal and seasonal changes to bodies of water that impact the ocean ecosystem. BESST was recently deployed on a conventional Twin Otter airplane for measurements over the Gulf of Mexico to assess the thermal properties of the ocean surface affected by the oil spill.
Results of these measurements will be presented along with ancillary sensor data used to eliminate false signals including UV and Synthetic Aperture Radar (SAR) information. Spatial variations and day-to-day changes in the visible oil concentration on the surface of the water were observed in performing these measurements. An assessment of the thermal imagery variation will be made based on the absolute calibration of the sensor to determine if the visible variation was due to properties of the reflected light or of the actual oil composition. Comparisons with satellite data (both SAR and thermal infrared images) and buoy data will also be included.
Image mosaic and topographic map of the moon
Hare, Trent M.; Hayward, Rosalyn K.; Blue, Jennifer S.; Archinal, Brent A.
2015-01-01
Sheet 2: This map is based on data from the Lunar Orbiter Laser Altimeter (LOLA; Smith and others, 2010), an instrument on the National Aeronautics and Space Administration (NASA) Lunar Reconnaissance Orbiter (LRO) spacecraft (Tooley and others, 2010). The image used for the base of this map represents more than 6.5 billion measurements gathered between July 2009 and July 2013, adjusted for consistency in the coordinate system described below, and then converted to lunar radii (Mazarico and others, 2012). For the Mercator portion, these measurements were converted into a digital elevation model (DEM) with a resolution of 0.015625 degrees per pixel, or 64 pixels per degree. In projection, the pixels are 473.8 m in size at the equator. For the polar portion, the LOLA elevation points were used to create a DEM at 240 meters per pixel. A shaded relief map was generated from each DEM with a sun angle of 45° from horizontal, and a sun azimuth of 270°, as measured clockwise from north with no vertical exaggeration. The DEM values were then mapped to a global color look-up table, with each color representing a range of 1 km of elevation. For this map sheet, only larger feature names are shown. For references listed above, please open the full PDF.
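The quoted equatorial pixel size follows directly from the map resolution (using the IAU mean lunar radius of 1737.4 km, which is an assumption on our part rather than a value stated on the sheet):

```python
import math

# At 64 pixels per degree, one equatorial pixel spans one 64th of a degree of
# the lunar circumference: 2*pi*R / (360 * 64).

LUNAR_RADIUS_M = 1_737_400.0   # IAU mean lunar radius (assumed)

pixel_m = 2.0 * math.pi * LUNAR_RADIUS_M / (360.0 * 64.0)
print(f"{pixel_m:.1f} m per pixel")   # -> 473.8 m per pixel
```

This matches the 473.8 m figure given for the Mercator portion of the map.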
Characterisation of a novel reverse-biased PPD CMOS image sensor
NASA Astrophysics Data System (ADS)
Stefanov, K. D.; Clarke, A. S.; Ivory, J.; Holland, A. D.
2017-11-01
A new pinned photodiode (PPD) CMOS image sensor (CIS) has been developed and characterised. The sensor can be fully depleted by means of reverse bias applied to the substrate, and the principle of operation is applicable to very thick sensitive volumes. Additional n-type implants under the pixel p-wells, called Deep Depletion Extension (DDE), have been added in order to eliminate the large parasitic substrate current that would otherwise be present in a normal device. The first prototype has been manufactured on 18 μm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process at TowerJazz Semiconductor. The chip contains arrays of 10 μm and 5.4 μm pixels, with variations of the shape, size and depth of the DDE implant. Back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v, and characterised together with the front-side illuminated (FSI) variants. The presented results show that the devices could be reverse-biased without parasitic leakage currents, in good agreement with simulations. The new 10 μm pixels in both BSI and FSI variants exhibit nearly identical photo response to the reference non-modified pixels, as characterised with the photon transfer curve. Different techniques were used to measure the depletion depth in FSI and BSI chips, and the results are consistent with the expected full depletion.
Active pixel image sensor with a winner-take-all mode of operation
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Mead, Carver (Inventor); Fossum, Eric R. (Inventor)
2003-01-01
An integrated CMOS semiconductor imaging device having two modes of operation that can be performed simultaneously to produce an output image and provide information on the brightest or darkest pixel in the image.
Active Pixel Sensors: Are CCD's Dinosaurs?
NASA Technical Reports Server (NTRS)
Fossum, Eric R.
1993-01-01
Charge-coupled devices (CCD's) are presently the technology of choice for most imaging applications. In the 23 years since their invention in 1970, they have evolved to a sophisticated level of performance. However, as with all technologies, we can be certain that they will be supplanted someday. In this paper, the Active Pixel Sensor (APS) technology is explored as a possible successor to the CCD. An active pixel is defined as a detector array technology that has at least one active transistor within the pixel unit cell. The APS eliminates the need for nearly perfect charge transfer -- the Achilles' heel of CCDs. The requirement for nearly perfect charge transfer makes CCD's radiation 'soft,' difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response.
Heavy Ion Transient Characterization of a Photobit Hardened-by-Design Active Pixel Sensor Array
NASA Technical Reports Server (NTRS)
Marshall, Paul W.; Byers, Wheaton B.; Conger, Christopher; Eid, El-Sayed; Gee, George; Jones, Michael R.; Marshall, Cheryl J.; Reed, Robert; Pickel, Jim; Kniffin, Scott
2002-01-01
This paper presents heavy ion data on the single event transient (SET) response of a Photobit active pixel sensor (APS) four quadrant test chip with different radiation tolerant designs in a standard 0.35 micron CMOS process. The physical design techniques of enclosed geometry and P-channel guard rings are used to design the four N-type active photodiode pixels as described in a previous paper. Argon transient measurements on the 256 x 256 chip array as a function of incident angle show a significant variation in the amount of charge collected as well as the charge spreading dependent on the pixel type. The results are correlated with processing and design information provided by Photobit. In addition, there is a large degree of statistical variability between individual ion strikes. No latch-up is observed up to an LET of 106 MeV·cm²/mg.
A study on rational function model generation for TerraSAR-X imagery.
Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi
2013-09-09
The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate a Rational Polynomial Coefficient (RPC) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy better than 10⁻³ pixel. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction.
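The affine RPC-refinement step tested in the second method can be sketched as a small least-squares fit mapping RPC-projected image coordinates onto GCP-measured ones (synthetic coordinates below; not the authors' code):

```python
import numpy as np

# Fit a 2D affine correction dst ~ [x, y, 1] @ A from n >= 3 control points,
# where src holds RPC-projected pixel coordinates and dst the GCP-measured ones.

def fit_affine(src, dst):
    X = np.hstack([src, np.ones((src.shape[0], 1))])   # (n, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A                                           # (3, 2) affine parameters

# Synthetic GCPs at the four image corners with a known bias plus slight drift:
src = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], dtype=float)
true_A = np.array([[1.0002, 0.0001],
                   [-0.0001, 0.9998],
                   [2.5, -1.8]])
dst = np.hstack([src, np.ones((4, 1))]) @ true_A
A = fit_affine(src, dst)
resid = np.hstack([src, np.ones((4, 1))]) @ A - dst
print(np.max(np.abs(resid)))   # residuals are numerically zero here
```

With four corner GCPs and six affine parameters the fit absorbs the systematic bias and drift of the independent RPCs, which is why sub-pixel accuracies of the order reported above become achievable.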
Homogeneity study of a GaAs:Cr pixelated sensor by means of X-rays
NASA Astrophysics Data System (ADS)
Billoud, T.; Leroy, C.; Papadatos, C.; Pichotka, M.; Pospisil, S.; Roux, J. S.
2018-04-01
Direct conversion semiconductor detectors have become an indispensable tool in radiation detection. In order to obtain a high detection efficiency, especially when detecting X or γ rays, high-Z semiconductor sensors are necessary. Like other compound semiconductors, GaAs compensated by chromium (GaAs:Cr) suffers from a number of defects that affect the charge collection efficiency and homogeneity of the material. Precise knowledge of this problem is important to predict the performance of such detectors and eventually correct their response in specific applications. In this study we analyse the homogeneity and mobility-lifetime products (μeτe) of a 500 μm thick GaAs:Cr pixelated sensor connected to a Timepix chip. The detector is irradiated by 23 keV X-rays, each pixel recording the number of photon interactions and the charge they induce on its electrode. The μeτe products are extracted on a per-pixel basis, using the Hecht equation corrected for the small pixel effect. The detector shows good time stability under the experimental conditions. Significant inhomogeneities are observed in photon counting and charge collection efficiencies. An average μeτe of 1.0 × 10⁻⁴ cm²V⁻¹ is found and compared with values obtained by other methods for the same material. Solutions to improve the response are discussed.
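The per-pixel μeτe extraction rests on the Hecht relation; a minimal single-carrier version (without the small-pixel correction the paper applies, and with an assumed operating bias) is:

```python
import math

# Single-carrier Hecht relation: the charge-collection efficiency is
# CCE = (lam/d) * (1 - exp(-d/lam)), with electron drift length lam = mu*tau*E
# and a uniform field E = V/d assumed across the sensor thickness d.

def hecht_cce(mu_tau_cm2_per_V, bias_V, thickness_cm):
    E = bias_V / thickness_cm          # field in V/cm (uniform-field assumption)
    lam = mu_tau_cm2_per_V * E         # electron drift length in cm
    return (lam / thickness_cm) * (1.0 - math.exp(-thickness_cm / lam))

# Reported average mu*tau ~ 1e-4 cm^2/V for the 500 um (0.05 cm) sensor;
# the 300 V bias is an assumed operating point for illustration.
cce = hecht_cce(1e-4, bias_V=300.0, thickness_cm=0.05)
print(f"CCE = {cce:.3f}")
```

In practice the fit runs the other way: the measured per-pixel CCE versus bias is fitted to this relation (with the small-pixel correction) to extract μeτe for each pixel.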
Panoramic thermal imaging: challenges and tradeoffs
NASA Astrophysics Data System (ADS)
Aburmad, Shimon
2014-06-01
Over the past decade, we have witnessed a growing demand for electro-optical systems that can provide continuous 360° coverage. Perimeter security, autonomous vehicles, and military warning systems are a few of the most common applications for panoramic imaging. There are several different technological approaches for achieving panoramic imaging. Solutions based on rotating elements do not provide continuous coverage, as there is a time lag between updates. Continuous panoramic solutions either use "stitched" images from multiple adjacent sensors, or sophisticated optical designs which warp a panoramic view onto a single sensor. When dealing with panoramic imaging in the visible spectrum, high volume production and advancement of semiconductor technology has enabled the use of CMOS/CCD image sensors with a huge number of pixels, small pixel dimensions, and low cost devices. However, in the infrared spectrum, the growth of detector pixel counts, pixel size reduction, and cost reduction is taking place at a slower rate due to the complexity of the technology and limitations caused by the laws of physics. In this work, we will explore the challenges involved in achieving 360° panoramic thermal imaging, and will analyze aspects such as spatial resolution, FOV, data complexity, FPA utilization, system complexity, coverage and cost of the different solutions. We will provide illustrations, calculations, and tradeoffs between three solutions evaluated by Opgal: a unique 360° lens design using an LWIR XGA detector, stitching of three adjacent LWIR sensors equipped with low-distortion 120° lenses, and a fisheye lens with an HFOV of 180° and an XGA sensor.
The Design of Optical Sensor for the Pinhole/Occulter Facility
NASA Technical Reports Server (NTRS)
Greene, Michael E.
1990-01-01
Three optical sight sensor systems were designed, built, and tested. Two optical line-of-sight sensor systems are capable of measuring the absolute pointing angle to the sun. The system is for use with the Pinhole/Occulter Facility (P/OF), a solar hard X-ray experiment to be flown from the Space Shuttle or Space Station. The sensor consists of a pinhole camera with two pairs of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the pinhole, track-and-hold circuitry for data reduction, an analog-to-digital converter, and a microcomputer. The deflection of the image center is calculated from these data using an approximation for the solar image. A second system consists of a pinhole camera with a pair of perpendicularly mounted linear photodiode arrays, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image is calculated by knowing the position of each pixel of the photodiode array and simply counting pixels until the threshold is surpassed. A third optical sensor system is capable of measuring the internal vibration of the P/OF between the mask and base. The system consists of a white light source, a mirror, and a pair of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the image produced by the mirror, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image, and hence the vibration of the structure, is calculated in the same way: from the known pixel positions and a count of pixels up to the threshold crossing.
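The threshold-crossing readout of the second system ("counting the pixel numbers until threshold is surpassed") can be illustrated with a minimal sketch; the intensity profile and threshold value are invented for the example.

```python
# Scan the linear photodiode array from each end, counting pixels until
# the threshold is surpassed, and take the midpoint of the two crossings
# as the image deflection along that axis.
def image_center(profile, threshold):
    left = next(i for i, v in enumerate(profile) if v > threshold)
    right = next(i for i in range(len(profile) - 1, -1, -1)
                 if profile[i] > threshold)
    return (left + right) / 2.0

profile = [0.1] * 10 + [5.0] * 11 + [0.1] * 9   # bright image on pixels 10..20
print(image_center(profile, 1.0))               # -> 15.0
```

Two perpendicular arrays processed this way give the two-axis deflection, which is why the hardware needs only threshold detectors and a counter rather than a full frame readout.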
The Transition-Edge-Sensor Array for the Micro-X Sounding Rocket
NASA Technical Reports Server (NTRS)
Eckart, M. E.; Adams, J. S.; Bailey, C. N.; Bandler, S. R.; Busch, Sarah Elizabeth; Chervenak J. A.; Finkbeiner, F. M.; Kelley, R. L.; Kilbourne, C. A.; Porst, J. P.;
2012-01-01
The Micro-X sounding rocket program will fly a 128-element array of transition-edge-sensor microcalorimeters to enable high-resolution X-ray imaging spectroscopy of the Puppis-A supernova remnant. To match the angular resolution of the optics while maximizing the field-of-view and retaining a high energy resolution (< 4 eV at 1 keV), we have designed the pixels using 600 x 600 sq. micron Au/Bi absorbers, which overhang 140 x 140 sq. micron Mo/Au sensors. The data-rate capabilities of the rocket telemetry system require the pulse decay to be approximately 2 ms to allow a significant portion of the data to be telemetered during flight. Here we report experimental results from the flight array, including measurements of energy resolution, uniformity, and absorber thermalization. In addition, we present studies of test devices that have a variety of absorber contact geometries, as well as a variety of membrane-perforation schemes designed to slow the pulse decay time to match the telemetry requirements. Finally, we describe the reduction in pixel-to-pixel crosstalk afforded by an angle-evaporated Cu backside heatsinking layer, which provides Cu coverage on the four sidewalls of the silicon wells beneath each pixel.
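The role of membrane perforation in slowing the pulse decay follows from the first-order thermal model of a microcalorimeter pixel, sketched below; both numbers are invented assumptions, not values from the flight array.

```python
# The natural decay time of a microcalorimeter pulse is roughly
# tau = C / G (absorber heat capacity over link thermal conductance), so
# perforating the membrane (lowering G) slows the decay toward the
# ~2 ms telemetry target quoted in the abstract.
heat_capacity = 1.0e-12   # J/K at ~0.1 K (assumed)
conductance = 0.5e-9      # W/K after membrane perforation (assumed)
tau_ms = 1e3 * heat_capacity / conductance
print(tau_ms)             # approximately 2 ms
```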
Development of n+-in-p planar pixel quadsensor flip-chipped with FE-I4 readout ASICs
NASA Astrophysics Data System (ADS)
Unno, Y.; Kamada, S.; Yamamura, K.; Yamamoto, H.; Hanagaki, K.; Hori, R.; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Takashima, R.; Tojo, J.; Kono, T.; Nagai, R.; Saito, S.; Sugibayashi, K.; Hirose, M.; Jinnouchi, O.; Sato, S.; Sawai, H.; Hara, K.; Sato, Kz.; Sato, Kj.; Iwabuchi, S.; Suzuki, J.
2017-01-01
We have developed flip-chip modules applicable to the pixel detector for the HL-LHC. New radiation-tolerant n+-in-p planar pixel sensors, each the size of four FE-I4 application-specific integrated circuits (ASICs), are laid out in a 6-in wafer. Variations in the readout connections for the pixels at the boundaries of ASICs are implemented in the design of the quadsensors. Bump bonding technology was developed for connecting four ASICs to one quadsensor. Both sensors and ASICs are thinned to 150 μm before bump bonding and are held flat with vacuum chucks. Using lead-free SnAg solder bumps, we encountered a deficiency: large areas of disconnected bumps after thermal stress treatment, including irradiation. Surface oxidation of the solder bumps was identified as a critical source of this deficiency after bump bonding trials using SnAg bumps with solder flux, indium bumps, and SnAg bumps with a newly-introduced hydrogen-reflow process. With hydrogen reflow, we established a flux-less bump bonding technology with SnAg bumps, appropriate for mass production of flip-chip modules with thin sensors and thin ASICs.
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
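The approximately-linear FPN calibration that the monotonic response makes possible might look like the following sketch; the logarithmic response model, the gain/offset mismatch values, and the least-squares helper are illustrative assumptions, not the paper's exact procedure.

```python
import math

# Each pixel's monotonic response is mapped onto a reference response
# with a low-degree (here degree-1) polynomial fitted by least squares,
# so FPN correction afterwards needs only arithmetic per pixel.
def fit_linear(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

stimuli = [1, 2, 4, 8, 16, 32]
reference = [math.log(s) for s in stimuli]      # nominal log response
pixel = [1.05 * r + 0.3 for r in reference]     # gain/offset mismatch (FPN)

a, b = fit_linear(pixel, reference)             # per-pixel calibration
corrected = [a * p + b for p in pixel]          # arithmetic-only correction
```

Because gain and offset mismatch dominate the FPN, the linear map removes it even though the underlying response is highly nonlinear in the stimulus.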
MT3250BA: a 320×256-50µm snapshot microbolometer ROIC for high-resistance detector arrays
NASA Astrophysics Data System (ADS)
Eminoglu, Selim; Akin, Tayfun
2013-06-01
This paper reports the development of a new microbolometer readout integrated circuit (MT3250BA) designed for high-resistance detector arrays. MT3250BA is the first microbolometer readout integrated circuit (ROIC) product from Mikro-Tasarim Ltd., which is a fabless IC design house specialized in the development of monolithic CMOS imaging sensors and ROICs for hybrid photonic imaging sensors and microbolometers. MT3250BA has a format of 320 × 256 and a pixel pitch of 50 µm, developed with a system-on-chip architecture in mind, where all the timing and biasing for this ROIC are generated on-chip without requiring any external inputs. MT3250BA is a highly configurable ROIC, where many of its features can be programmed through a 3-wire serial interface allowing on-the-fly configuration of many ROIC features. MT3250BA has 2 analog video outputs and 1 analog reference output for pseudo-differential operation, and the ROIC can be programmed to operate in the 1 or 2-output modes. A unique feature of MT3250BA is that it performs snapshot readout operation; therefore, the image quality will only be limited by the thermal time constant of the detector pixels, but not by the scanning speed of the ROIC, as commonly found in the conventional microbolometer ROICs performing line-by-line (rolling-line) readout operation. The signal integration is performed at the pixel level in parallel for the whole array, and signal integration time can be programmed from 0.1 µs up to 100 ms in steps of 0.1 µs. The ROIC is designed to work with high-resistance detector arrays with pixel resistance values higher than 250 kΩ. The detector bias voltage can be programmed on-chip over a 2 V range with a resolution of 1 mV. The ROIC has a measured input referred noise of 260 µV rms at 300 K. 
The ROIC can be used to build a microbolometer infrared sensor with an NETD value below 100 mK using a microbolometer detector array fabrication technology with a high detector resistance value (≥ 250 kΩ), a high TCR value (≥ 2.5 %/K), and a sufficiently low pixel thermal conductance (Gth ≤ 20 nW/K). The ROIC uses a single 3.3 V supply voltage and dissipates less than 75 mW in the 1-output mode at 60 fps. MT3250BA is fabricated using a mixed-signal CMOS process on 200 mm CMOS wafers, and tested wafers are available with test data and wafer maps. USB-based compact test electronics and software are available for quick evaluation of this new microbolometer ROIC.
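A first-order reading of how those detector requirements combine into a sub-100 mK NETD can be sketched as below. The noise, TCR, and thermal conductance figures come from the abstract; the scene-to-detector coupling and bias point are invented assumptions, and real NETD models also include noise-bandwidth and optics terms the abstract does not give.

```python
# First-order NETD sketch: NETD = v_noise / responsivity, chaining the
# scene-to-detector thermal gain through the bolometer TCR and bias.
v_noise = 260e-6     # V rms, measured input-referred noise (from abstract)
tcr = 0.025          # 1/K, temperature coefficient of resistance (2.5 %/K)
g_th = 20e-9         # W/K, pixel thermal conductance (upper bound quoted)
d_pdt = 1.2e-9       # W/K, absorbed power per scene kelvin (assumed)
v_bias = 2.0         # V, detector bias (assumed within the 2 V range)

dT_det_per_K_scene = d_pdt / g_th                  # K_detector per K_scene
responsivity = v_bias * tcr * dT_det_per_K_scene   # V per scene kelvin
netd_mK = 1e3 * v_noise / responsivity
print(round(netd_mK, 1))
```

Under these assumptions the estimate lands below the 100 mK target, showing why the abstract conditions the NETD claim on high TCR and low Gth.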
Reduced signal crosstalk multi neurotransmitter image sensor by microhole array structure
NASA Astrophysics Data System (ADS)
Ogaeri, Yuta; Lee, You-Na; Mitsudome, Masato; Iwata, Tatsuya; Takahashi, Kazuhiro; Sawada, Kazuaki
2018-06-01
A microhole array structure combined with an enzyme immobilization method using magnetic beads can enhance the target discernment capability of a multi-neurotransmitter image sensor. Here we report the fabrication and evaluation of the H+-diffusion-preventing capability of a sensor with this array structure. The structure, made of SU-8 photoresist, has holes with a size of 24.5 × 31.6 µm2. Sensors were prepared with array structures of three different heights: 0, 15, and 60 µm. With the 60 µm high structure, the output voltage measured at a H+-sensitive null pixel located 75 µm from the acetylcholinesterase (AChE)-immobilized pixel, the starting point of H+ diffusion, is reduced by 48%. The suppressed H+ migration is shown in a two-dimensional (2D) image in real time. The sensor parameters, such as the height of the array structure and the measuring time, are optimized experimentally. The sensor is expected to effectively distinguish various neurotransmitters in biological samples.
NASA Astrophysics Data System (ADS)
Burri, Samuel; Powolny, François; Bruschini, Claudio E.; Michalet, Xavier; Regazzoni, Francesco; Charbon, Edoardo
2014-05-01
This paper presents our work on a 65k-pixel single-photon avalanche diode (SPAD) based imaging sensor realized in a 0.35 μm standard CMOS process. At a resolution of 512 by 128 pixels, the sensor is read out in 6.4 μs to deliver over 150k monochrome frames per second. Each pixel has a size of 24 μm2 and contains the SPAD with 12T quenching and gating circuitry along with a memory element. The gating signals are distributed across the chip through a balanced tree to minimize signal skew between pixels. The array of pixels is row-addressable, and data is sent out of the chip on 128 lines in parallel at a frequency of 80 MHz. The system is controlled by an FPGA, which generates the gating and readout signals and can be used for arbitrary real-time computation on the frames from the sensor. The communication protocol between the camera and a conventional PC is USB2. The active area of the chip is 5% and can be significantly improved with the application of a micro-lens array; such an array, for use with collimated light, has been designed and its performance is reviewed in the paper. The gating circuitry, capable of generating illumination periods shorter than 5 ns, can be used for Fluorescence Lifetime Imaging (FLIM) among other high-speed applications. In order to measure the lifetime of fluorophores excited by a picosecond laser, the sensor's illumination period is synchronized with the excitation laser pulses. A histogram of the photon arrival times relative to the excitation is then constructed by counting the photons arriving during the sensitive time for several positions of the illumination window. The histogram for each pixel is afterwards transferred to a computer, where software routines extract the lifetime at each location with an accuracy better than 100 ps. We show results for fluorescence lifetime measurements using different fluorophores with lifetimes ranging from 150 ps to 5 ns.
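The per-pixel lifetime extraction from the arrival-time histograms can be sketched as a log-linear fit. The 100 ps bin width, count level, and noiseless mono-exponential model below are assumptions for illustration; real routines must also handle noise, background, and the instrument response.

```python
import math

# Build a synthetic arrival-time histogram for a mono-exponential decay,
# then recover the lifetime from the slope of log(counts) vs time.
tau_true = 2.0e-9                         # 2 ns fluorophore (assumed)
bins = [i * 0.1e-9 for i in range(1, 50)] # 100 ps bins, as per gating windows
counts = [1.0e5 * math.exp(-t / tau_true) for t in bins]

ys = [math.log(c) for c in counts]
n = len(bins)
sx, sy = sum(bins), sum(ys)
sxx = sum(x * x for x in bins)
sxy = sum(x * y for x, y in zip(bins, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope = -1/tau
tau_fit = -1.0 / slope
print(tau_fit)
```

On noiseless data the fit returns the assumed 2 ns lifetime; with the camera's real histograms the same idea is applied per pixel to map lifetimes across the field.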
Geometric correction methods for Timepix based large area detectors
NASA Astrophysics Data System (ADS)
Zemlicka, J.; Dudak, J.; Karch, J.; Krejci, F.
2017-01-01
X-ray micro radiography with hybrid pixel detectors provides a versatile tool for object inspection in various fields of science. It has proven especially suitable for samples with low intrinsic attenuation contrast (e.g. soft tissue in biology, plastics in material sciences, thin paint layers in cultural heritage, etc.). The limited size of a single Medipix-type detector (1.96 cm2) was recently overcome by the construction of the large-area WidePIX detectors, assembled from Timepix chips equipped with edgeless silicon sensors. The largest device built so far consists of 100 chips and provides a fully sensitive area of 14.3 × 14.3 cm2 without any physical gaps between sensors. The pixel resolution of this device is 2560 × 2560 pixels (6.5 Mpix). The unique modular detector layout requires special processing of the acquired data to avoid image distortions. It is necessary to apply several geometric compensations after the standard correction methods typical for this type of pixel detector (i.e. flat-field and beam-hardening corrections). The proposed geometric compensations cover both design features and assembly misalignment of the individual chip rows of large-area detectors based on Timepix assemblies. The former deal with the larger border pixels of individual edgeless sensors and their behaviour, while the latter grapple with shifts, tilts, and steps between detector rows. The real position of every pixel is defined in a Cartesian coordinate system and, together with a non-binary reliability mask, is used for the final image interpolation. The results of the geometric corrections for test wire phantoms and paleobotanical material are presented in this article.
NASA Astrophysics Data System (ADS)
Li, Long; Solana, Carmen; Canters, Frank; Kervyn, Matthieu
2017-10-01
Mapping lava flows using satellite images is an important application of remote sensing in volcanology. Several volcanoes have been mapped through remote sensing using a wide range of data, from optical to thermal infrared and radar images, using techniques such as manual mapping, supervised/unsupervised classification, and elevation subtraction. So far, spectral-based mapping applications have mainly focused on traditional pixel-based classifiers, without much investigation into the added value of object-based approaches or the advantages of machine learning algorithms. In this study, Nyamuragira, characterized by a series of > 20 overlapping lava flows erupted over the last century, was used as a case study. The random forest classifier was tested to map lava flows based on pixels and objects. Image classification was conducted for the 20 individual flows and for 8 groups of flows of similar age using a Landsat 8 image and a DEM of the volcano, both at 30-meter spatial resolution. Results show that object-based classification produces maps with continuous and homogeneous lava surfaces, in agreement with the physical characteristics of lava flows, while lava flows mapped through pixel-based classification are heterogeneous and fragmented, with much "salt and pepper" noise. In terms of accuracy, both pixel-based and object-based classification perform well, but the former results in higher accuracies than the latter, except for mapping lava flow age groups without using topographic features. It is concluded that despite their spectral similarity, lava flows of contrasting age can be well discriminated and mapped by means of image classification. The classification approach demonstrated in this study requires only easily accessible image data and can be applied to other volcanoes as well, provided there is sufficient information to calibrate the mapping.
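Why object-based output looks homogeneous can be illustrated with a toy majority vote over segments. This is a stand-in sketch, not the study's actual random-forest-on-objects pipeline, and all labels below are invented.

```python
from collections import Counter

# Assigning each segment the majority label of its member pixels removes
# the per-pixel "salt and pepper" noise described in the abstract.
def object_labels(pixel_labels, object_ids):
    votes = {}
    for lab, oid in zip(pixel_labels, object_ids):
        votes.setdefault(oid, Counter())[lab] += 1
    majority = {oid: c.most_common(1)[0][0] for oid, c in votes.items()}
    return [majority[oid] for oid in object_ids]

# 8 pixels in two segments; segment 0 contains one misclassified pixel
pixels = ["flowA", "flowA", "flowB", "flowA", "flowB", "flowB", "flowB", "flowB"]
segments = [0, 0, 0, 0, 1, 1, 1, 1]
print(object_labels(pixels, segments))
```

After the vote, every pixel in a segment carries the same flow label, which is what produces the continuous lava surfaces seen in the object-based maps.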
TimepixCam: a fast optical imager with time-stamping
NASA Astrophysics Data System (ADS)
Fisher-Levine, M.; Nomerotski, A.
2016-03-01
We describe a novel fast optical imager, TimepixCam, based on an optimized silicon pixel sensor with a thin entrance window, read out by a Timepix ASIC. TimepixCam is able to record and time-stamp light flashes in excess of 1,000 photons with high quantum efficiency in the 400-1000nm wavelength range with 20ns timing resolution, corresponding to an effective rate of 50 Megaframes per second. The camera was used for imaging ions impinging on a microchannel plate followed by a phosphor screen. Possible applications include spatial and velocity map imaging of ions in time-of-flight mass spectroscopy; coincidence imaging of ions and electrons, and other time-resolved types of imaging spectroscopy.
NASA GES DISC Aerosol analysis and visualization services
NASA Astrophysics Data System (ADS)
Wei, J. C.; Ichoku, C. M.; Petrenko, M.; Yang, W.; Albayrak, A.; Zhao, P.; Johnson, J. E.; Kempler, S.
2015-12-01
Among the known atmospheric constituents, aerosols represent the greatest uncertainty in climate research. Satellite data products are important for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits can best be achieved if the satellite data are well utilized and interpreted. Unfortunately, this is not always the case, despite the abundance and relative maturity of the numerous satellite-borne sensors that routinely measure aerosols. There is often disagreement between similar aerosol parameters retrieved from different sensors, leaving users confused as to which sensors to trust for answering important science questions about the distribution, properties, and impacts of aerosols. Such misunderstanding may be avoided by providing satellite data with accurate pixel-level (Level 2) information, including pixel coverage area delineation and science-team-recommended quality screening for individual geophysical parameters. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has developed multiple MAPSS applications as part of the Giovanni (Geospatial Interactive Online Visualization and Analysis Interface) data visualization and analysis tool - Giovanni-MAPSS and Giovanni-MAPSS_Explorer - since 2007. The MAPSS database provides spatio-temporal statistics for multiple spaceborne Level 2 aerosol products (MODIS Terra, MODIS Aqua, MISR, POLDER, OMI, CALIOP, SeaWiFS Deep Blue, and VIIRS) sampled over AERONET ground stations. In this presentation, I will demonstrate the improved features of Giovanni-MAPSS and introduce a new visualization service (Giovanni VizMAP) supporting various visualization and data access capabilities for satellite Level 2 data (non-aggregated and un-gridded) at high spatial resolution.
Functionality will include selecting data sources (e.g., multiple parameters under the same measurement), defining area-of-interest and temporal extents, zooming, panning, overlaying, sliding, and data subsetting and reformatting.
Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data
NASA Astrophysics Data System (ADS)
Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.
2015-04-01
In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic corrections module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for the generation of high-level products. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for the full-frame sensor currently under development by SPACE-SI is planned. The present paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy on very high resolution optical satellite data as well.
Color encryption scheme based on adapted quantum logistic map
NASA Astrophysics Data System (ADS)
Zaghloul, Alaa; Zhang, Tiejun; Amin, Mohamed; Abd El-Latif, Ahmed A.
2014-04-01
This paper presents a new color image encryption scheme based on a quantum chaotic system. In this scheme, encryption is accomplished by generating an intermediate chaotic key stream with the help of a quantum chaotic logistic map. Each pixel is then encrypted using the cipher value of the previous pixel and the adapted quantum logistic map. The results show that the proposed scheme provides adequate security for the confidentiality of color images.
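A minimal sketch of the chaining idea follows: each pixel's cipher value depends on both the keystream and the previous cipher value, so identical plaintext pixels encrypt differently. Note that the classical logistic map stands in here for the paper's adapted quantum logistic map, and the key parameters (x0, r) are arbitrary.

```python
# Generate a byte keystream from the (classical) logistic map.
def logistic_keystream(x0, r, n):
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)
    return ks

def encrypt(pixels, x0=0.3141, r=3.99):
    ks = logistic_keystream(x0, r, len(pixels))
    out, prev = [], 0
    for p, k in zip(pixels, ks):
        prev = p ^ k ^ prev          # chain to the previous cipher value
        out.append(prev)
    return out

def decrypt(cipher, x0=0.3141, r=3.99):
    ks = logistic_keystream(x0, r, len(cipher))
    out, prev = [], 0
    for c, k in zip(cipher, ks):
        out.append(c ^ k ^ prev)     # undo keystream and chaining
        prev = c
    return out

img = [12, 255, 0, 97, 180]          # one channel of a toy color image
restored = decrypt(encrypt(img))
```

Decryption reverses the chain because each plaintext pixel is recoverable from its cipher value, the keystream byte, and the previous cipher value alone.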
NASA Astrophysics Data System (ADS)
Butts, Robert R.
1997-08-01
A low-noise, high-resolution Shack-Hartmann wavefront sensor was included in the ABLE-ACE instrument suite to obtain direct high-resolution phase measurements of the 0.53 micrometer pulsed laser beam propagated through high-altitude atmospheric turbulence. The wavefront sensor employed a Fried geometry using a lenslet array which provided approximately 17 sub-apertures across the pupil. The lenslets focused the light in each sub-aperture onto a 21 by 21 array of pixels in the camera focal plane, with 8 pixels across the central lobe of the diffraction-limited spot. The goal of the experiment was to measure the effects of turbulence in the free atmosphere on propagation, but the wavefront sensor also detected the aberrations induced by the aircraft boundary layer and the receiver aircraft's internal beam path. Data analysis methods used to extract the desired atmospheric contribution to the phase measurements from data corrupted by non-atmospheric aberrations are described. The approaches used included a reconstruction of the phase as a linear combination of Zernike polynomials coupled with optimal estimators, and computation of structure functions of the sub-aperture slopes. The theoretical basis for the data analysis techniques is presented. Results are described, and comparisons with theory and simulations are shown. Estimates of average turbulence strength along the propagation path from the wavefront sensor showed good agreement with other sensors. The Zernike spectra calculated from the wavefront sensor data were consistent with the standard Kolmogorov model of turbulence.
How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2011-01-01
In the early 1980s, the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems, then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the equivalent number of pixels and bits per pixel, and how many pixels would be required to create an equivalent-quality image from a digital camera. It was found that 35-mm frames, for ISO 100 color negative film, contained equivalent pixels of 12 microns for a total of 18 million pixels per frame (6 million pixels per layer) with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased the value to 8 bits per pixel. Higher ISO speed films had larger equivalent pixels and fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high-quality 3.5" x 5.25" print could be obtained from a three-layer system containing 1300 x 1950 pixels per layer, or about 7.6 million pixels in all. In short, it became clear that once a digital camera contained about 6 million pixels (in a single layer using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems for the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging amongst digital camera makers. The question arises of just how many pixels are required, and whether all pixels are equal. This paper will provide a practical look at how many pixels are needed for a good print based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million 5.7-micron pixels or 6 million 7.8-micron pixels? How does intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size?
A systematic review of these issues will be provided within the context of image quality and ISO speed models developed over the last 15 years.
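As a point of reference for the question in the title, the common 300 dpi rule of thumb for photographic-quality prints (an assumption here, not a figure taken from the paper) gives the following pixel budget for a 4"×6" print:

```python
# Pixel budget for a print at a given dots-per-inch target. At ~300 dpi
# the eye can no longer resolve individual pixels at normal viewing
# distance, which is the usual "good print" criterion.
def print_pixels(width_in, height_in, dpi=300):
    return int(width_in * dpi) * int(height_in * dpi)

total = print_pixels(4, 6)                  # 1200 x 1800 pixels
print(total, "pixels =", total / 1e6, "megapixels")
```

The ~2.2 megapixel answer is far below modern sensor counts, which is exactly why the paper argues that pixel size and lens MTF, not raw pixel count, dominate print quality.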
Development of Position-Sensitive Magnetic Calorimeters for X-Ray Astronomy
NASA Technical Reports Server (NTRS)
Bandler, SImon; Stevenson, Thomas; Hsieh, Wen-Ting
2011-01-01
Metallic magnetic calorimeters (MMC) are among the most promising devices to provide the very high energy resolution needed for future astronomical X-ray spectroscopy. MMC detectors can be built into large detector arrays having thousands of pixels. Position-sensitive magnetic (PoSM) microcalorimeters consist of multiple absorbers thermally coupled to one magnetic microcalorimeter. Each absorber element has a different thermal coupling to the MMC, resulting in a distribution of different pulse shapes and enabling position discrimination between the absorber elements. PoSMs therefore achieve a large focal plane area with a smaller number of readout channels without compromising spatial sampling. Excellent performance of PoSMs was achieved by optimizing the designs of key parameters such as the thermal conductance among the absorbers, magnetic sensor, and heat sink, as well as the absorber heat capacities. Microfabrication techniques were developed to construct four-absorber PoSMs, in which each absorber consists of a two-layer composite of bismuth and gold. The energy resolution (full width at half maximum, FWHM) was measured to be better than 5 eV at 6 keV for all four absorbers. Position determination was demonstrated with pulse-shape discrimination, as well as with pulse rise time. X-ray microcalorimeters are usually designed to thermalize as quickly as possible to avoid degradation in energy resolution from position dependence of the pulse shapes. Each pixel consists of an absorber and a temperature sensor, both decoupled from the cold bath through a weak thermal link. Each pixel requires a separate readout channel, for instance with a SQUID (superconducting quantum interference device). For future astronomy missions where thousands to millions of resolution elements are required, having an individual SQUID readout channel for each pixel becomes difficult.
One route to attaining these goals is a position-sensitive detector in which a large continuous or pixelated array of x-ray absorbers shares a smaller number of temperature sensors. A means of discriminating the signals from different absorber positions, however, needs to be built into the device for each sensor. The design concept is such that the shape of the temperature pulse with time depends on the location of the absorber. This inherent position sensitivity of the signal is then analyzed to determine the location of the event precisely, effectively yielding one device with many sub-pixels. With such devices, the total number of electronic channels required to read out a given number of pixels is significantly reduced. PoSMs were developed that consist of four discrete absorbers connected to a single magnetic sensor. The design concept can be extended to more than four absorbers per sensor. The thermal conductance between the sensor and each absorber is different by design; consequently, the pulse shapes differ depending upon the absorber in which the x-rays are received, allowing position discrimination. A magnetic sensor was used in which a paramagnetic Au:Er temperature-sensitive material is located in a weak magnetic field. Deposition of energy from an x-ray photon causes an increase in temperature, which leads to a change of magnetization of the paramagnetic sensor, which is subsequently read out using a low-noise dc-SQUID. The PoSM microcalorimeters are fully microfabricated: the Au:Er sensor is located above the meander, with a thin insulation gap in between. For this position-sensitive device, four electroplated absorbers are thermally linked to the sensor via heat links of different thermal conductance. One pixel is identical to that of a single-pixel design, consisting of an overhanging absorber fabricated directly on top of the sensor; it is therefore very strongly thermally coupled to it.
The three other absorbers are supported directly on a silicon-nitride membrane. These absorbers are thermally coupled to the sensor via Ti (5 nm)/Au (250 nm) metal links. The strength of the links is parameterized by the number of gold squares making up the link. Different pulse shapes were demonstrated experimentally with 6 keV x-rays, clearly showing different rise times for different absorber positions. For the energy resolution measurement, the PoSM was operated at 32 mK with an applied field generated by a persistent current of 50 mA. Over the four pixels, energy resolutions ranging from 4.4 to 4.7 eV were demonstrated.
Pixel-based dust-extinction mapping in nearby galaxies: A new approach to lifting the veil of dust
NASA Astrophysics Data System (ADS)
Tamura, Kazuyuki
In the first part of this dissertation, I explore a new approach to mapping dust extinction in galaxies, using the observed and estimated dust-free flux ratios of optical V-band and mid-IR 3.6 μm emission. The inferred missing V-band flux is then converted into an estimate of dust extinction. While dust features are not clearly evident in the observed ground-based images of NGC 0959, the target of my pilot study, the dust map created with this method clearly traces the distribution of dust seen in higher resolution Hubble images. Stellar populations are then analyzed through various pixel Color-Magnitude Diagrams and pixel Color-Color Diagrams (pCCDs), both before and after extinction correction. The (B − 3.6 μm) versus (far-UV − U) pCCD proves particularly powerful in distinguishing pixels that are dominated by different types or mixtures of stellar populations. Mapping these pixel-groups onto a pixel-coordinate map shows that they are not distributed randomly, but follow genuine galactic structures, such as a previously unrecognized bar. I show that selecting pixel-groups is not meaningful when using uncorrected colors, and that pixel-based extinction correction is crucial to reveal the true spatial variations in stellar populations. This method is then applied to a sample of late-type galaxies to study the distribution of dust and stellar populations as a function of morphological type and absolute magnitude. In each galaxy, I find that dust extinction does not simply decrease radially, but is concentrated in localized clumps throughout the galaxy. I also find some cases where star-formation regions are not associated with dust. In the second part, I describe the application of astronomical image analysis tools for medical purposes. In particular, Source Extractor is used to detect nerve fibers in basement membrane images of human skin biopsies of obese subjects.
While more development and testing is necessary for this kind of work, I show that computerized detection methods significantly increase the repeatability and reliability of the results. A patent on this work is pending.
Status and Construction of the Belle II DEPFET pixel system
NASA Astrophysics Data System (ADS)
Lütticke, Florian
2014-06-01
DEpleted P-channel Field Effect Transistor (DEPFET) active pixel detectors combine detection with a first amplification stage in a fully depleted detector, resulting in a superb signal-to-noise ratio even for thin sensors. Two layers of thin (75 μm) silicon DEPFET pixels will be used as the innermost vertex system, very close to the beam pipe in the Belle II detector at the SuperKEKB facility. The status of the 8-million-pixel DEPFET detector, the latest developments and current system tests will be discussed.
Backside illuminated CMOS-TDI line scanner for space applications
NASA Astrophysics Data System (ADS)
Cohen, O.; Ben-Ari, N.; Nevo, I.; Shiloah, N.; Zohar, G.; Kahanov, E.; Brumer, M.; Gershon, G.; Ofer, O.
2017-09-01
A new multi-spectral line-scanner CMOS image sensor is reported. The backside-illuminated (BSI) image sensor was designed for continuous-scanning Low Earth Orbit (LEO) space applications and includes custom high-quality CMOS active pixels, a Time Delayed Integration (TDI) mechanism that increases the SNR, a 2-phase exposure mechanism that increases the dynamic Modulation Transfer Function (MTF), very low power internal Analog-to-Digital Converters (ADCs) with a resolution of 12 bits per pixel, and an on-chip controller. The sensor has 4 independent arrays of pixels, each arranged in 2600 TDI columns with controllable TDI depth from 8 up to 64 TDI levels. A multispectral optical filter with a specific spectral response per array is assembled at the package level. In this paper we briefly describe the sensor design and present recent electrical and electro-optical measurements of the first prototypes, including high Quantum Efficiency (QE), high MTF, wide-range selectable Full Well Capacity (FWC), excellent linearity of approximately 1.3% in a signal range of 5-85% and approximately 1.75% in a signal range of 2-95% of the signal span, readout noise of approximately 95 electrons with 64 TDI levels, negligible dark current, and total power consumption of less than 1.5 W for the 4-band sensor under all operating conditions.
Williams, David A.; Keszthelyi, Laszlo P.; Crown, David A.; Yff, Jessica A.; Jaeger, Windy L.; Schenk, Paul M.; Geissler, Paul E.; Becker, Tammy L.
2011-01-01
Io, discovered by Galileo Galilei on January 7–13, 1610, is the innermost of the four Galilean satellites of the planet Jupiter (Galilei, 1610). It is the most volcanically active object in the Solar System, as recognized by observations from six National Aeronautics and Space Administration (NASA) spacecraft: Voyager 1 (March 1979), Voyager 2 (July 1979), Hubble Space Telescope (1990–present), Galileo (1996–2001), Cassini (December 2000), and New Horizons (February 2007). The lack of impact craters on Io in any spacecraft images at any resolution attests to the high resurfacing rate (1 cm/yr) and the dominant role of active volcanism in shaping its surface. High-temperature hot spots detected by the Galileo Solid-State Imager (SSI), Near-Infrared Mapping Spectrometer (NIMS), and Photopolarimeter-Radiometer (PPR) usually correlate with the darkest materials on the surface, suggesting active volcanism. The Voyager flybys obtained complete coverage of Io's subjovian hemisphere at 500 m/pixel to 2 km/pixel, and most of the rest of the satellite at 5–20 km/pixel. Repeated Galileo flybys obtained complementary coverage of Io's antijovian hemisphere at 5 m/pixel to 1.4 km/pixel. Thus, the Voyager and Galileo data sets were merged to enable the characterization of the whole surface of the satellite at a consistent resolution. The United States Geological Survey (USGS) produced a set of four global mosaics of Io in visible wavelengths at a spatial resolution of 1 km/pixel, released in February 2006, which we have used as base maps for this new global geologic map. Much has been learned about Io's volcanism, tectonics, degradation, and interior since the Voyager flybys, primarily during and following the Galileo Mission at Jupiter (December 1995–September 2003), and the results have been summarized in books published after the end of the Galileo Mission.
Our mapping incorporates this new understanding to assist in map unit definition and to provide a global synthesis of Io's geology.
Estimating Forest Species Composition Using a Multi-Sensor Approach
NASA Astrophysics Data System (ADS)
Wolter, P. T.
2009-12-01
The magnitude, duration, and frequency of forest disturbance caused by the spruce budworm and forest tent caterpillar have increased over the last century due to a shift in forest species composition linked to historical fire suppression, forest management, and pesticide application that has fostered the increasing dominance of host tree species. Modeling approaches are currently being used to understand and forecast potential management effects on changing insect disturbance trends. However, the detailed forest composition data needed for these efforts are often lacking. Here, we used partial least squares (PLS) regression to integrate satellite sensor data from Landsat, Radarsat-1, and PALSAR, as well as pixel-wise forest structure information derived from SPOT-5 sensor data (Wolter et al. 2009), to estimate species-level forest composition for the 12 species required for modeling efforts. C-band Radarsat-1 data and L-band PALSAR data were frequently among the strongest predictors of forest composition. Pixel-level forest structure data were more important for estimating conifer than hardwood forest composition. The coefficients of determination for species relative basal area (RBA) ranged from 0.57 (white cedar) to 0.94 (maple), with RMSEs of 8.88 and 6.44% RBA, respectively. Receiver operating characteristic (ROC) curves were used to determine the effective lower limits of usefulness of the species RBA estimates, which ranged from 5.94% (jack pine) to 39.41% (black ash). These estimates were then used to produce a dominant forest species map for the study region with an overall accuracy of 78%. Most notably, this approach facilitated discrimination of aspen from birch, as well as spruce and fir from other conifer species, which is crucial for the study of forest tent caterpillar and spruce budworm dynamics, respectively, in the Upper Midwest.
Thus, use of PLS regression as a data fusion strategy has proven to be an effective tool for regional characterization of forest composition within spatially heterogeneous forests using large-format satellite sensor data.
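The PLS fusion strategy described above can be sketched generically. The following is a minimal SVD-based PLS2 fit in numpy on synthetic data, not the study's implementation; the array names (stacked sensor bands as `X`, per-species relative basal area as `Y`) are placeholders:

```python
import numpy as np

def pls_regression(X, Y, n_components):
    """Fit a PLS2 regression: X (n_samples, n_features) are stacked
    multi-sensor predictors, Y (n_samples, n_targets) the responses."""
    x_mean, y_mean = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - x_mean, Y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        # Weight vector: dominant covariance direction between X and Y.
        u, _, _ = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
        w = u[:, 0]
        t = Xc @ w                        # score vector
        tt = t @ t
        p = Xc.T @ t / tt                 # X loading
        q = Yc.T @ t / tt                 # Y loading
        Xc = Xc - np.outer(t, p)          # deflate before the next component
        Yc = Yc - np.outer(t, q)
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q).T
    B = W @ np.linalg.inv(P.T @ W) @ Q.T  # regression coefficients
    return B, x_mean, y_mean

def pls_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean
```

With as many components as predictors, the fit reduces to ordinary least squares; fewer components give the regularized estimates that make PLS attractive for collinear multi-sensor bands.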
Ground-truthing AVIRIS mineral mapping at Cuprite, Nevada
NASA Technical Reports Server (NTRS)
Swayze, Gregg; Clark, Roger N.; Kruse, Fred; Sutley, Steve; Gallagher, Andrea
1992-01-01
Mineral abundance maps of 18 minerals were made of the Cuprite Mining District using 1990 AVIRIS data and the Multiple Spectral Feature Mapping Algorithm (MSFMA) as discussed in Clark et al. This technique uses least-squares fitting between a scaled laboratory reference spectrum and ground calibrated AVIRIS data for each pixel. Multiple spectral features can be fitted for each mineral and an unlimited number of minerals can be mapped simultaneously. Quality of fit and depth from continuum numbers for each mineral are calculated for each pixel and the results displayed as a multicolor image.
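The per-pixel least-squares fit at the heart of such feature mapping can be illustrated with a toy version. This is not the MSFMA implementation; the function name, the fit-quality measure, and the simple band-depth definition are assumptions for illustration:

```python
import numpy as np

def scaled_feature_fit(obs, ref):
    """Fit a scaled laboratory reference spectrum to one pixel's spectrum:
    solve obs ~ gain * ref + offset by least squares, then report a
    quality-of-fit number and a scaled feature depth."""
    A = np.column_stack([ref, np.ones_like(ref)])
    (gain, offset), *_ = np.linalg.lstsq(A, obs, rcond=None)
    resid = obs - (gain * ref + offset)
    fit_quality = 1.0 - resid.var() / obs.var()   # 1.0 = perfect fit
    band_depth = gain * (ref.max() - ref.min())   # depth scaled by the fit
    return fit_quality, band_depth
```

Repeating this fit for every pixel and every mineral's reference spectrum yields per-pixel fit and depth maps of the kind displayed as the multicolor image.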
NASA Astrophysics Data System (ADS)
Bormann, K.; Rittger, K.; Painter, T. H.
2016-12-01
The continuation of large-scale snow cover records into the future is crucial for monitoring the impacts of global pressures such as climate change and weather variability on the cryosphere. With daily MODIS records since 2000 from a now-ageing MODIS constellation (Terra & Aqua) and daily VIIRS records since 2012 from the Suomi-NPP platform, the consistency of information between the two optical sensors must be understood. First, we evaluated snow cover maps derived from both MODIS and VIIRS retrievals against coincident cloud-free Landsat 8 OLI maps across a range of locations. We found that both MODIS and VIIRS snow cover maps show similar errors when evaluated with Landsat OLI retrievals. Preliminary results also show a general agreement in regional snowline between the two sensors that is maintained during the spring snowline retreat, when the proportion of mixed pixels increases. The agreement between sensors supports the future use of VIIRS snow cover maps to continue the long-term record beyond the lifetime of MODIS. Second, we use snowline elevation to quantify large-scale snow cover variability and to monitor potential changes in the rain/snow transition zone, where climate change pressures may be enhanced. Despite the large inter-annual variability that is often observed in snow metrics, we expect that over the 16-year time series we will see a rise in the seasonal elevation of the snowline and consequently a rising rain/snow transition boundary in mountain environments. These results form the basis for global snowline-elevation monitoring using optical remote sensing data and highlight regional differences in snowline-elevation dynamics. The long-term variability in observed snowline elevation provides a recent climatology of mountain snowpack across several regions that will likely be of interest to those studying climate change impacts in mountain environments.
This work will also be of interest to existing users of MODSCAG and VIIRSCAG snow cover products and those working in remote sensing of the mountain snowpack.
Object-based landslide mapping on satellite images from different sensors
NASA Astrophysics Data System (ADS)
Hölbling, Daniel; Friedl, Barbara; Eisank, Clemens; Blaschke, Thomas
2015-04-01
Several studies have proven that object-based image analysis (OBIA) is a suitable approach for landslide mapping using remote sensing data. Mostly, optical satellite images are utilized in combination with digital elevation models (DEMs) for semi-automated mapping. The ability of considering spectral, spatial, morphometric and contextual features in OBIA constitutes a significant advantage over pixel-based methods, especially when analysing non-uniform natural phenomena such as landslides. However, many of the existing knowledge-based OBIA approaches for landslide mapping are rather complex and are tailored to specific data sets. These restraints lead to a lack of transferability of OBIA mapping routines. The objective of this study is to develop an object-based approach for landslide mapping that is robust against changing input data with different resolutions, i.e. optical satellite imagery from various sensors. Two study sites in Taiwan were selected for developing and testing the landslide mapping approach. One site is located around the Baolai village in the Huaguoshan catchment in the southern-central part of the island, the other one is a sub-area of the Taimali watershed in Taitung County near the south-eastern Pacific coast. Both areas are regularly affected by severe landslides and debris flows. A range of very high resolution (VHR) optical satellite images was used for the object-based mapping of landslides and for testing the transferability across different sensors and resolutions: (I) SPOT-5, (II) Formosat-2, (III) QuickBird, and (IV) WorldView-2. Additionally, a digital elevation model (DEM) with 5 m spatial resolution and its derived products (e.g. slope, plan curvature) were used for supporting the semi-automated mapping, particularly for differentiating source areas and accumulation areas according to their morphometric characteristics. A focus was put on the identification of comparatively stable parameters (e.g. 
relative indices), which could be transferred to the different satellite images. The presence of bare ground was assumed to be evidence of the occurrence of landslides. For separating vegetated from non-vegetated areas, the Normalized Difference Vegetation Index (NDVI) was primarily used. Each image was divided into two respective parts based on an automatically calculated NDVI threshold value in eCognition (Trimble) software, combining the homogeneity criterion of multiresolution segmentation with histogram-based methods so that heterogeneity is increased to a maximum. Expert knowledge models, which depict the features and thresholds usually used by experts for digital landslide mapping, were considered for refining the classification. The results were compared to the respective results from visual image interpretation (i.e. manually digitized reference polygons for each image), which were produced by an independent local expert. In this way, the spatial overlaps as well as under- and over-estimated areas were identified, and the performance of the approach in relation to each sensor was evaluated. The presented method can complement traditional manual mapping efforts. Moreover, it contributes to current developments for increasing the transferability of semi-automated OBIA approaches and for improving the efficiency of change detection approaches across multi-sensor imagery.
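The NDVI split that separates vegetated from bare (candidate landslide) areas can be sketched in a few lines. Note the paper derives the threshold automatically in eCognition by combining segmentation homogeneity with histogram methods; here the threshold is simply passed in as a parameter:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index, computed per pixel;
    # the small constant guards against division by zero.
    return (nir - red) / (nir + red + 1e-12)

def split_vegetation(nir, red, threshold):
    # Divide the image into vegetated and non-vegetated parts; pixels below
    # the NDVI threshold are candidates for bare-ground landslide scars.
    v = ndvi(nir, red)
    return v >= threshold, v < threshold
```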
A CMOS pixel sensor prototype for the outer layers of linear collider vertex detector
NASA Astrophysics Data System (ADS)
Zhang, L.; Morel, F.; Hu-Guo, C.; Himmi, A.; Dorokhov, A.; Hu, Y.
2015-01-01
The International Linear Collider (ILC) imposes stringent requirements on its high-precision vertex detector (VXD). CMOS pixel sensors (CPS) have been considered as an option for the VXD of the International Large Detector (ILD), one of the detector concepts proposed for the ILC. MIMOSA-31, developed at IPHC-Strasbourg, is the first CPS integrated with a 4-bit column-level ADC for the outer layers of the VXD, adapted to an original concept that minimizes power consumption. It is composed of a matrix of 64 rows and 48 columns. The pixel concept combines in-pixel amplification with a correlated double sampling (CDS) operation in order to reduce the temporal noise and fixed pattern noise (FPN). At the bottom of the pixel array, each column is terminated with a self-triggered analog-to-digital converter (ADC). The ADC design was optimized for power saving at a sampling frequency of 6.25 MS/s. The prototype chip was fabricated in a 0.35 μm CMOS technology. This paper presents the details of the prototype chip and its test results.
Pixel parallel localized driver design for a 128 x 256 pixel array 3D 1Gfps image sensor
NASA Astrophysics Data System (ADS)
Zhang, C.; Dao, V. T. S.; Etoh, T. G.; Charbon, E.
2017-02-01
In this paper, a 3D 1 Gfps BSI image sensor is proposed, with 128 × 256 pixels located in the top-tier chip and a 32 × 32 localized driver array in the bottom-tier chip. The pixels are designed with Multiple Collection Gates (MCG), which collect photons selectively, with different collection gates active at intervals of 1 ns to achieve 1 Gfps. For the drivers, a global PLL is designed, consisting of a ring oscillator with 6-stage current-starved differential inverters and achieving a wide frequency tuning range from 40 MHz to 360 MHz (20 ps rms jitter). The drivers are replicas of the ring oscillator that operates within the PLL. Together with level shifters and XNOR gates, continuous 3.3 V pulses are generated with the desired pulse width, which is 1/12 of the PLL clock period. The driver array is activated by a START signal, which propagates through a highly balanced clock tree, activating all the pixels at the same time with virtually negligible skew.
Resolution studies with the DATURA beam telescope
NASA Astrophysics Data System (ADS)
Jansen, H.
2016-12-01
Detailed studies of the resolution of an EUDET-type beam telescope are carried out using the DATURA beam telescope as an example. EUDET-type beam telescopes make use of CMOS MIMOSA 26 pixel detectors for particle tracking, allowing for precise characterisation of particle-sensing devices. A profound understanding of the performance of the beam telescope as a whole is obtained through a detailed characterisation of the sensors themselves. The differential intrinsic resolution measured in a MIMOSA 26 sensor is extracted using an iterative pull method, and various quantities that depend on the size of the cluster produced by a traversing charged particle are discussed: the residual distribution, the intra-pixel residual-width distribution and the intra-pixel density distribution of track incident positions.
The ATLAS Diamond Beam Monitor: Luminosity detector at the LHC
NASA Astrophysics Data System (ADS)
Schaefer, D. M.; ATLAS Collaboration
2016-07-01
After the first three years of LHC running, the ATLAS experiment extracted its pixel detector system to refurbish and re-position the optical readout drivers and to install a new barrel layer of pixels. The experiment also took advantage of this access to install a set of beam-monitoring telescopes with pixel sensors, four each in the forward and backward regions. These telescopes are based on chemical-vapor-deposited (CVD) diamond sensors, which survive this high-radiation environment without needing extensive cooling. This paper describes the lessons learned in the construction and commissioning of the ATLAS Diamond Beam Monitor (DBM). We show results from the construction quality-assurance tests and commissioning performance, including results from cosmic-ray running in early 2015.
NASA Astrophysics Data System (ADS)
Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay
2018-03-01
Differential binary pixel technology is a threshold-based timing, readout, and image-reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) CMOS image-sensor pixel to achieve high dynamic range video with stop motion. This technology improves the low-light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using a 1-megapixel test-chip array fabricated in Taiwan Semiconductor Manufacturing Company's 65 nm, 1.1 μm-pixel technology, and is compared with a traditional 4× oversampling technique using full charge transfer to show the low-light SNR superiority of the presented technology.
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light-staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light-contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable-gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
High responsivity CMOS imager pixel implemented in SOI technology
NASA Technical Reports Server (NTRS)
Zheng, X.; Wrigley, C.; Yang, G.; Pain, B.
2000-01-01
The availability of mature sub-micron CMOS technology and the advent of the new low-noise active pixel sensor (APS) concept enabled the development of low-power, miniature, single-chip CMOS digital imagers during the 1990s.
Improved charge injection device and a focal plane interface electronics board for stellar tracking
NASA Technical Reports Server (NTRS)
Michon, G. J.; Burke, H. K.
1984-01-01
An improved Charge Injection Device (CID) stellar tracking sensor and a control/readout electronics board for operating the sensor were developed. The sensor consists of a shift-register-scanned, 256x256 CID array organized for readout of 4x4 subarrays. The 4x4 subarrays can be positioned anywhere within the 256x256 array with 2-pixel resolution. This allows continuous tracking of a number of stars simultaneously, since nine pixels (3x3) centered on any star can always be read out. The organization and operation of this sensor and the improvements in design and semiconductor processing are described. A hermetic package incorporating an internal thermoelectric cooler assembled using low-temperature solders was developed. The electronics board, which contains the sensor drivers, amplifiers, sample-hold circuits, multiplexer, analog-to-digital converter, and the sensor temperature control circuits, is also described. Packaged sensors were evaluated for readout efficiency, spectral quantum efficiency, temporal noise, fixed pattern noise, and dark current. Eight sensors along with two tracker electronics boards were completed, evaluated, and delivered.
1995-07-01
designated pixel. OTF analysis will be similar to the analysis discussed previously. Any nonuniformity in the response of the chosen pixel to the...not seen by the trace. Nonuniformity of the pixel response must also be taken into account. Background measurements of the maximum and minimum...to the background field of regard. To incorporate and support interactive CLDWSG operation and to accommodate simulation of nonuniform anisoplanatic
Super-resolved refocusing with a plenoptic camera
NASA Astrophysics Data System (ADS)
Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu
2011-03-01
This paper presents an approach to enhancing the resolution of refocused images using super-resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which defines the super-resolving ability, is mathematically derived by considering the plenoptic camera as equivalent to camera arrays. We simulate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using maximum a posteriori (MAP) super-resolution algorithms. Without other degradation effects in simulation, the super-resolved image achieves a resolution as high as predicted by the proposed model. We also built an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution. In contrast, we implement the super-resolved refocusing methods and recover an image with more spatial details. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image-quality metrics such as peak signal-to-noise ratio (PSNR).
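The decomposition of the raw plenoptic image into shifted low-resolution angular views can be sketched with simple strided slicing; this is an illustrative reading of the decomposition described above, not the paper's code, and it assumes an idealized sensor with exactly n × n pixels under each microlens:

```python
import numpy as np

def subaperture_views(raw, n):
    # Gather pixel (u, v) under every n-by-n microlens into one angular view;
    # adjacent views see the scene with the sub-pixel shift that a MAP
    # super-resolution step can exploit.
    return [raw[u::n, v::n] for u in range(n) for v in range(n)]
```

Each view has 1/n the linear resolution of the raw image; the MAP reconstruction then fuses the n*n shifted views back into one high-resolution image.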
Distributed Antenna-Coupled TES for FIR Detectors Arrays
NASA Technical Reports Server (NTRS)
Day, Peter K.; Leduc, Henry G.; Dowell, C. Darren; Lee, Richard A.; Zmuidzinas, Jonas
2007-01-01
We describe a new architecture for a superconducting detector for the submillimeter and far-infrared. This detector uses a distributed hot-electron transition edge sensor (TES) to collect the power from a focal-plane-filling slot antenna array. The sensors lie directly across the slots of the antenna and match the antenna impedance of about 30 ohms. Each pixel contains many sensors that are wired in parallel as a single distributed TES, which results in a low impedance that readily matches to a multiplexed SQUID readout. These detectors are inherently polarization sensitive, with very low cross-polarization response, but can also be configured to sum both polarizations. The dual-polarization design can have a bandwidth of 50%. The use of electron-phonon decoupling eliminates the need for micro-machining, making the focal plane much easier to fabricate than with absorber-coupled, mechanically isolated pixels. We discuss applications of these detectors and a hybridization scheme compatible with arrays of tens of thousands of pixels.
4K x 2K pixel color video pickup system
NASA Astrophysics Data System (ADS)
Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou
1998-12-01
This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and output data rate for super-high-definition images. The present study is an attempt to fill the gap in this respect. The authors solve the problem with a new imaging method in which four HDTV sensors are attached to a new color-separation optics so that their pixel sample patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no image sensors exist for such images.
Capacitively coupled hybrid pixel assemblies for the CLIC vertex detector
NASA Astrophysics Data System (ADS)
Tehrani, N. Alipour; Arfaoui, S.; Benoit, M.; Dannheim, D.; Dette, K.; Hynds, D.; Kulis, S.; Perić, I.; Petrič, M.; Redford, S.; Sicking, E.; Valerio, P.
2016-07-01
The vertex detector at the proposed CLIC multi-TeV linear e+e- collider must have minimal material content and high spatial resolution, combined with accurate time-stamping to cope with the expected high rate of beam-induced backgrounds. One of the options being considered is the use of active sensors implemented in a commercial high-voltage CMOS process, capacitively coupled to hybrid pixel ASICs. A prototype of such an assembly, using two custom-designed chips (a CCPDv3 active sensor glued to a CLICpix readout chip), has been characterised both in the lab and in beam tests at the CERN SPS using 120 GeV/c positively charged hadrons. Results of these characterisation studies are presented for both single and dual amplification stages in the active sensor, where efficiencies greater than 99% have been achieved at -60 V substrate bias, with a single-hit resolution of 6.1 μm. Pixel cross-coupling results are also presented, showing the sensitivity to placement precision and planarity of the glue layer.
Satellite-Sensor Calibration Verification Using the Cloud-Shadow Method
NASA Technical Reports Server (NTRS)
Reinersman, P.; Carder, K. L.; Chen, F. R.
1995-01-01
An atmospheric-correction method is described that uses cloud-shaded pixels together with pixels in a neighboring region of similar optical properties. This cloud-shadow method takes the difference between the total radiance values observed at the sensor for the two regions, thus removing the nearly identical atmospheric radiance contributions to the two signals (e.g. path radiance and Fresnel-reflected skylight); what remains is dominated by solar photons backscattered from beneath the sea. Normalization by the direct solar irradiance reaching the sea surface and correction for some second-order effects provide the remote-sensing reflectance of the ocean at the location of the neighbor region, providing a known 'ground target' spectrum for use in testing the calibration of the sensor. A similar approach may be useful for land targets if horizontal homogeneity of scene reflectance exists about the shadow. Monte Carlo calculations have been used to correct for adjacency effects and to estimate the differences in the skylight reaching the shadowed and neighbor pixels.
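The first-order arithmetic of the cloud-shadow method reduces to a difference and a normalization; the sketch below omits the second-order and adjacency corrections the abstract mentions, and the function name is an illustrative assumption:

```python
def cloud_shadow_reflectance(L_neighbor, L_shadow, Ed_direct):
    # Differencing the at-sensor radiance of a sunlit neighbor pixel and a
    # cloud-shaded pixel cancels the nearly identical path radiance and
    # Fresnel-reflected skylight; dividing by the direct solar irradiance
    # reaching the surface normalizes what remains into a first-order
    # remote-sensing reflectance estimate.
    return (L_neighbor - L_shadow) / Ed_direct
```

Applied per spectral band, this yields the 'ground target' reflectance spectrum used to check the sensor calibration.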
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoidn, Oliver R.; Seidler, Gerald T., E-mail: seidler@uw.edu
We have integrated mass-produced commercial complementary metal-oxide-semiconductor (CMOS) image sensors and off-the-shelf single-board computers into an x-ray camera platform optimized for acquisition of x-ray spectra and radiographs at energies of 2–6 keV. The CMOS sensor and single-board computer are complemented by custom mounting and interface hardware that can be easily acquired from rapid prototyping services. For single-pixel detection events, i.e., events where the deposited energy from one photon is substantially localized in a single pixel, we establish ∼20% quantum efficiency at 2.6 keV with ∼190 eV resolution and a 100 kHz maximum detection rate. The detector platform's useful intrinsic energy resolution, 5-μm pixel size, ease of use, and obvious potential for parallelization make it a promising candidate for many applications at synchrotron facilities, in laser-heating plasma physics studies, and in laboratory-based x-ray spectrometry.
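The single-pixel event selection described above can be illustrated with a toy frame scan; the specific acceptance criterion below (pixel above threshold, negligible charge in its 3 × 3 neighborhood) is an assumption for illustration, not the authors' exact cut:

```python
import numpy as np

def single_pixel_events(frame, threshold):
    """Scan a dark-subtracted frame for single-pixel photon events: a pixel
    above threshold whose 8 neighbors together hold negligible charge counts
    as one event carrying that pixel's deposited energy."""
    events = []
    h, w = frame.shape
    for i in range(1, h - 1):          # skip the border for simplicity
        for j in range(1, w - 1):
            v = frame[i, j]
            if v < threshold:
                continue
            patch = frame[i - 1:i + 2, j - 1:j + 2]
            if patch.sum() - v < threshold:   # no significant charge sharing
                events.append((i, j, int(v)))
    return events
```

Multi-pixel (charge-shared) hits are rejected here; a fuller analysis would instead sum the cluster charge to recover those photons.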
NASA Astrophysics Data System (ADS)
Martín-Luis, Antonio; Arbelo, Manuel; Hernández-Leal, Pedro; Arbelo-Bayó, Manuel
2016-10-01
Reliable and updated maps of vegetation in protected natural areas are essential for proper management and conservation, and remote sensing is a valid tool for this purpose. In this study, a methodology based on a WorldView-2 (WV-2) satellite image and in situ spectral-signature measurements was applied to map the Canarian Monteverde ecosystem in the north of Tenerife Island (Canary Islands, Spain). Due to the high spectral similarity of vegetation species in the study zone, a Multiple Endmember Spectral Mixture Analysis (MESMA) was performed. MESMA determines the fractional cover of different components within one pixel and allows for pixel-by-pixel variation of endmembers. Two libraries of endmembers were collected for the most abundant species in the test area. The first library was collected from in situ spectral signatures measured with an ASD spectroradiometer during a field campaign in June 2015. The second library was obtained from pure pixels identified in the satellite image for the same species. The accuracy of the mapping process was assessed from a set of independent validation plots. The overall accuracy for the ASD-based method was 60.51%, compared to the 86.67% reached for the WV-2-based mapping. The results suggest the possibility of using WV-2 images for monitoring and regularly updating the maps of the Monteverde forest on the island of Tenerife.
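The core MESMA idea, varying the endmember set per pixel and keeping the model with the smallest residual, can be sketched as a toy two-endmember search. Real MESMA additionally enforces fraction constraints and shade endmembers, so treat this purely as an illustration:

```python
import numpy as np
from itertools import combinations

def mesma_unmix(pixel, library):
    """Try every pair of library endmember spectra, solve the linear mixture
    by least squares, and keep the pair with the smallest residual."""
    best = None
    for i, j in combinations(range(len(library)), 2):
        E = np.column_stack([library[i], library[j]])
        f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
        r = np.linalg.norm(pixel - E @ f)
        if best is None or r < best[0]:
            best = (r, (i, j), f)
    return best  # (residual, endmember indices, fractional cover)
```

Run per pixel, the winning endmember pair labels the pixel and the fractions give its sub-pixel cover.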
Comparison of Sub-Pixel Classification Approaches for Crop-Specific Mapping
This paper examined two non-linear models, Multilayer Perceptron (MLP) regression and Regression Tree (RT), for estimating sub-pixel crop proportions using time-series MODIS-NDVI data. The sub-pixel proportions were estimated for three major crop types including corn, soybean, a...
Kieper, Douglas Arthur [Seattle, WA; Majewski, Stanislaw [Morgantown, WV; Welch, Benjamin L [Hampton, VA
2012-07-03
An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.
Kieper, Douglas Arthur [Newport News, VA; Majewski, Stanislaw [Yorktown, VA; Welch, Benjamin L [Hampton, VA
2008-10-28
An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization-enhanced two-dimensional contrast map. This enhanced contrast map improves the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially for small lesions.
Data Processing for a High Resolution Preclinical PET Detector Based on Philips DPC Digital SiPMs
NASA Astrophysics Data System (ADS)
Schug, David; Wehner, Jakob; Goldschmidt, Benjamin; Lerche, Christoph; Dueppenbecker, Peter Michael; Hallen, Patrick; Weissler, Bjoern; Gebhardt, Pierre; Kiessling, Fabian; Schulz, Volkmar
2015-06-01
In positron emission tomography (PET) systems, light sharing techniques are commonly used to read out scintillator arrays whose scintillation elements are smaller than the optical sensors. The scintillating element is then identified by evaluating the signal heights in the readout channels using statistical algorithms, the center of gravity (COG) algorithm being the simplest and most widely used. We propose a COG algorithm with a fixed number of input channels in order to guarantee a stable calculation of the position. The algorithm is implemented and tested on raw detector data obtained with the Hyperion-II D preclinical PET insert, which uses Philips Digital Photon Counting's (PDPC) digital SiPMs. The gamma detectors use LYSO scintillator arrays with 30 × 30 crystals of 1 × 1 × 12 mm³ in size coupled to 4 × 4 PDPC DPC 3200-22 sensors (DPC) via a 2-mm-thick light guide. These self-triggering sensors are made up of 2 × 2 pixels, resulting in a total of 64 readout channels. We restrict the COG calculation to a main pixel, which captures most of the scintillation light from a crystal, and its (direct and diagonal) neighboring pixels, and reject single events in which this data is not fully available. This results in stable COG positions for a crystal element and enables high spatial image resolution. Due to the sensor layout, for some crystals it is very likely that a single diagonal neighbor pixel is missing as a result of the low light level on the corresponding DPC. This leads to a loss of sensitivity if these events are rejected. An enhancement of the COG algorithm is proposed which handles the potentially missing pixel separately, both for the crystal identification and the energy calculation. Using this enhancement, we show that the sensitivity of the Hyperion-II D insert with the described scintillator configuration can be improved by 20-100% for practically useful readout thresholds of a single DPC pixel, ranging from 17 to 52 photons.
Furthermore, we show that the energy resolution of the scanner is superior by 0-1.6% (relative difference), for all readout thresholds, when singles with a single missing pixel are accepted and correctly handled, compared to the COG method that accepts only singles with all neighbors present. The presented methods are not limited to gamma detectors employing DPC sensors, but can be generalized to other similarly structured, self-triggering detectors that use light sharing techniques.
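The baseline fixed-neighborhood COG with event rejection can be sketched as follows; the 3 × 3 channel layout and signal values are illustrative, not the Hyperion-II D geometry:

```python
import numpy as np

def cog_position(signals, positions, required):
    """Center of gravity over a fixed set of channels (main pixel plus
    its direct and diagonal neighbors). Events with any required
    channel missing are rejected (returns None)."""
    if any(signals.get(ch) is None for ch in required):
        return None
    w = np.array([signals[ch] for ch in required], dtype=float)
    xy = np.array([positions[ch] for ch in required], dtype=float)
    return (w[:, None] * xy).sum(axis=0) / w.sum()

# Illustrative 3x3 neighborhood centered on the main pixel (channel 4).
positions = {ch: ((ch % 3) - 1.0, (ch // 3) - 1.0) for ch in range(9)}
signals = {ch: 2.0 for ch in range(9)}
signals[4] = 10.0                     # main pixel collects most light
pos = cog_position(signals, positions, list(range(9)))   # symmetric → (0, 0)
signals[0] = None                     # missing diagonal neighbor
rejected = cog_position(signals, positions, list(range(9)))  # → None
```

The paper's enhancement would, instead of returning None, treat the one potentially missing diagonal channel separately in both the position and energy sums, recovering such events.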
Method and apparatus for determining the coordinates of an object
Pedersen, Paul S; Sebring, Robert
2003-01-01
A method and apparatus is described for determining the coordinates on the surface of an object which is illuminated by a beam having pixels which have been modulated according to predetermined mathematical relationships with pixel position within the modulator. The reflected illumination is registered by an image sensor at a known location which registers the intensity of the pixels as received. Computations on the intensity, which relate the pixel intensities received to the pixel intensities transmitted at the modulator, yield the proportional loss of intensity and planar position of the originating pixels. The proportional loss and position information can then be utilized within triangulation equations to resolve the coordinates of associated surface locations on the object.
NASA Astrophysics Data System (ADS)
Zhang, Ying; Zhu, Hongbo; Zhang, Liang; Fu, Min
2016-09-01
The proposed Circular Electron Positron Collider (CEPC) is aimed primarily at precision measurements of the discovered Higgs boson. Its innermost vertex detector, which will play a critical role in heavy-flavor tagging, must be constructed with fine-pitched silicon pixel sensors with low power consumption and fast readout. The CMOS pixel sensor (CPS), one of the most promising candidate technologies, has already demonstrated excellent performance in several high energy physics experiments and has therefore been considered for R&D for the CEPC vertex detector. In this paper, we present preliminary studies to improve the ratio of collected signal charge to equivalent input capacitance (Q/C), which is crucial for reducing the analog power consumption. We have performed detailed 3D device simulations and evaluated the potential impacts of diode geometry, epitaxial layer properties and non-ionizing radiation damage. We have proposed a new approach to improve the treatment of the boundary conditions in simulation. Along with the TCAD simulation, we have designed an exploratory prototype in the TowerJazz 0.18 μm CMOS imaging sensor process and will verify the simulation results with future measurements.
Reduction of Radiometric Miscalibration—Applications to Pushbroom Sensors
Rogaß, Christian; Spengler, Daniel; Bochow, Mathias; Segl, Karl; Lausch, Angela; Doktor, Daniel; Roessner, Sigrid; Behling, Robert; Wetzel, Hans-Ulrich; Kaufmann, Hermann
2011-01-01
The analysis of hyperspectral images is an important task in remote sensing. A preceding radiometric calibration assigns incident electromagnetic radiation to digital numbers and reduces the striping caused by slightly different responses of the pixel detectors. However, due to uncertainties in the calibration, some striping remains. This publication presents a new reduction framework that efficiently reduces linear and nonlinear miscalibrations by an image-driven radiometric recalibration and rescaling. The proposed framework—Reduction Of Miscalibration Effects (ROME)—considering spectral and spatial probability distributions, is constrained by specific minimisation and maximisation principles and incorporates image processing techniques such as Minkowski metrics and convolution. To objectively evaluate the performance of the new approach, the technique was applied to a variety of commonly used image examples and to one simulated and miscalibrated EnMAP (Environmental Mapping and Analysis Program) scene. Other examples consist of miscalibrated AISA/Eagle VNIR (Visible and Near Infrared) and Hawk SWIR (Short Wave Infrared) scenes of rural areas of the Fichtwald region in Germany and Hyperion scenes of the Jalal-Abad district in Southern Kyrgyzstan. Recovery rates of approximately 97% for linear and approximately 94% for nonlinear miscalibrated data were achieved, clearly demonstrating the benefits of the new approach and its potential for broad applicability to miscalibrated pushbroom sensor data. PMID:22163960
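ROME itself combines spectral and spatial probability distributions with minimisation/maximisation constraints; as a much simpler point of reference, classical per-column moment matching illustrates the basic image-driven destriping idea for a pushbroom detector array (this sketch is not the ROME algorithm):

```python
import numpy as np

def destripe_columns(img):
    """Moment matching: rescale each detector column so its mean and
    standard deviation match the image-wide statistics."""
    mu, sd = img.mean(), img.std()
    col_mu = img.mean(axis=0)
    col_sd = img.std(axis=0)
    col_sd = np.where(col_sd == 0, 1.0, col_sd)  # guard flat columns
    return (img - col_mu) / col_sd * sd + mu

# Synthetic scene: identical columns distorted by per-detector gains.
scene = np.tile(np.linspace(1.0, 2.0, 50)[:, None], (1, 8))
gains = np.linspace(0.8, 1.2, 8)       # illustrative miscalibration
striped = scene * gains
clean = destripe_columns(striped)
# All column statistics are equalized, removing the stripes.
```

Moment matching handles only linear (gain/offset) miscalibration; the nonlinear cases quoted in the abstract are exactly where a richer framework such as ROME is needed.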
Vanderhoof, Melanie; Fairaux, Nicole; Beal, Yen-Ju G.; Hawbaker, Todd J.
2017-01-01
The Landsat Burned Area Essential Climate Variable (BAECV), developed by the U.S. Geological Survey (USGS), capitalizes on the long temporal availability of Landsat imagery to identify burned areas across the conterminous United States (CONUS) (1984–2015). Adequate validation of such products is critical for their proper usage and interpretation. Validation of coarse-resolution products often relies on independent data derived from moderate-resolution sensors (e.g., Landsat). Validation of Landsat products, in turn, is challenging because there is no corresponding source of high-resolution, multispectral imagery that has been systematically collected in space and time over the entire temporal extent of the Landsat archive. Because of this, comparison between high-resolution images and Landsat science products can help increase users' confidence in the Landsat science products, but may not, alone, be adequate. In this paper, we demonstrate an approach to systematically validate the Landsat-derived BAECV product. Burned area extent was mapped for Landsat image pairs using a manually trained, semi-automated algorithm whose output was then manually edited, across 28 path/rows and five different years (1988, 1993, 1998, 2003, 2008). Three datasets were independently developed by three analysts and integrated on a pixel-by-pixel basis, requiring that at least one, two, or all three analysts agree a pixel was burned. We found that errors within our Landsat reference dataset could be minimized by using the rendition of the dataset in which pixels were mapped as burned if at least two of the three analysts agreed. BAECV errors of omission and commission for the detection of burned pixels averaged 42% and 33%, respectively, for CONUS across all five validation years.
Errors of omission and commission were lowest across the western CONUS, for example in the shrub and scrublands of the Arid West (31% and 24%, respectively), and highest in the grasslands and agricultural lands of the Great Plains in central CONUS (62% and 57%, respectively). The BAECV product detected most (> 65%) fire events > 10 ha across the western CONUS (Arid and Mountain West ecoregions). Our approach and results demonstrate that a thorough validation of Landsat science products can be completed with independent Landsat-derived reference data, but could be strengthened by the use of complementary sources of high-resolution data.
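The analyst-agreement integration and the omission/commission error computation can be sketched as follows, on toy boolean maps rather than the BAECV data:

```python
import numpy as np

def integrate_analysts(maps, min_agree=2):
    """Label a pixel burned when at least `min_agree` analysts mapped it."""
    return np.sum(maps, axis=0) >= min_agree

def omission_commission(pred, ref):
    """Omission: fraction of reference burned pixels missed.
    Commission: fraction of predicted burned pixels absent from the
    reference."""
    omission = np.sum(ref & ~pred) / ref.sum()
    commission = np.sum(pred & ~ref) / pred.sum()
    return omission, commission

# Toy burned/unburned maps from three hypothetical analysts.
a = np.array([1, 1, 0, 0, 1], dtype=bool)
b = np.array([1, 0, 1, 0, 1], dtype=bool)
c = np.array([1, 1, 1, 0, 0], dtype=bool)
reference = integrate_analysts(np.stack([a, b, c]))  # 2-of-3 agreement
detected = np.array([1, 1, 0, 1, 1], dtype=bool)     # hypothetical product
om, com = omission_commission(detected, reference)   # → (0.25, 0.25)
```

Varying `min_agree` from 1 to 3 reproduces the "at least one to all three analysts" renditions compared in the study.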
Relation of MODIS EVI and LAI across time, vegetation types and hydrological regimes
NASA Astrophysics Data System (ADS)
Alexandridis, Thomas; Ovakoglou, George
2015-04-01
Estimation of the Leaf Area Index (LAI) of a landscape is important for describing ecosystem activity and is used as a key input parameter in hydrological and biogeochemical models related to the water and carbon cycles, desertification risk, etc. Measuring LAI in the field is a laborious and costly process and is mainly done by indirect methods, such as hemispherical photographs processed by specialized software. For this reason there have been several attempts to estimate LAI from multispectral satellite images, using theoretical biomass development models or empirical equations based on vegetation indices and land cover maps. The aim of this work is to study the relation of MODIS EVI and LAI across time, vegetation type, and hydrological regime. This was achieved by studying 120 maps of EVI and LAI covering a hydrological year and five hydrologically diverse areas: the river Nestos in Greece, the Queimados catchment in Brazil, the Rijnland catchment in The Netherlands, the river Tamega in Portugal, and the river Umbeluzi in Mozambique. The following Terra MODIS composite datasets were downloaded for the hydrological year 2012-2013: MOD13A2 "Vegetation Indices" and MCD15A2 "LAI and FPAR", as well as the equivalent quality information layers (QA). All pixels falling in a vegetation land cover class (according to the MERIS GLOBCOVER map) were sampled for the analysis, except those at the border between two vegetation or other land cover categories, to avoid the influence of mixed pixels. Using linear regression analysis, the relationship between EVI and LAI was identified per date, vegetation type and study area. Results show that vegetation type has the highest influence on the variation of the relationship between EVI and LAI in each study area. The coefficient of determination (R2) is high and statistically significant (ranging from 0.41 to 0.83 in 90% of the cases).
When plotting the EVI factor from the regression equation across time, there is an evident temporal change in all test sites. The sensitivity of EVI to LAI is smaller in periods of high biomass production. The range of fluctuation differs across sites and is related to biomass quantity and type. Higher fluctuation is noted in the winter season in Tamega, possibly due to cloud-contaminated pixels that the QA and compositing algorithms did not successfully detect. Finally, there was no significant difference in the R2 and EVI factor when pixels flagged as "low and marginal quality" by the QA layers were included in the analyses, suggesting that the use of low-quality pixels can be justified when good-quality pixels are not numerous enough. Future work will study the transferability of these relations across scales and sensors. This study is supported by the Research Committee of Aristotle University of Thessaloniki project "Improvement of the estimation of Leaf Area Index (LAI) at basin scale using satellite images". MODIS data are provided by USGS.
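The per-date, per-vegetation-type linear regression between EVI and LAI can be sketched as follows, on synthetic values rather than the MOD13A2/MCD15A2 samples used in the study:

```python
import numpy as np

def fit_evi_lai(evi, lai):
    """Least-squares line LAI = a*EVI + b and the coefficient of
    determination R^2."""
    a, b = np.polyfit(evi, lai, 1)
    residuals = lai - (a * evi + b)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((lai - lai.mean())**2)
    return a, b, r2

# Synthetic sample: LAI roughly linear in EVI with small noise.
rng = np.random.default_rng(42)
evi = np.linspace(0.1, 0.6, 40)
lai = 5.0 * evi + 0.2 + rng.normal(0.0, 0.1, evi.size)
slope, intercept, r2 = fit_evi_lai(evi, lai)
```

Repeating the fit per date and vegetation type and plotting `slope` (the EVI factor) over the hydrological year gives the temporal-sensitivity curves discussed above.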
Coarse Scale In Situ Albedo Observations over Heterogeneous Land Surfaces and Validation Strategy
NASA Astrophysics Data System (ADS)
Xiao, Q.; Wu, X.; Wen, J.; BAI, J., Sr.
2017-12-01
To evaluate and improve the quality of coarse-pixel land surface albedo products, validation against ground measurements of albedo is crucial over spatially and temporally heterogeneous land surfaces. The performance of albedo validation depends on the quality of ground-based albedo measurements at the corresponding coarse-pixel scale, which can be conceptualized as the "truth" value of albedo at that scale. Wireless sensor network (WSN) technology provides continuous observations at the large-pixel scale. Taking albedo products as an example, this paper is dedicated to the validation of coarse-scale albedo products over heterogeneous surfaces based on WSN-observed data, aiming to narrow down the uncertainty caused by the spatial scaling mismatch between satellite and ground measurements. The reference value of albedo at the coarse-pixel scale can be obtained through an upscaling transform function based on all of the observations for that pixel. In future work we will further improve this approach and develop new methods that better account for the spatio-temporal characteristics of surface albedo. Additionally, how to use the widely distributed single-site measurements over heterogeneous surfaces remains an open question. Keywords: Remote sensing; Albedo; Validation; Wireless sensor network (WSN); Upscaling; Heterogeneous land surface; Albedo truth at coarse-pixel scale
NASA Astrophysics Data System (ADS)
Brumby, S. P.; Warren, M. S.; Keisler, R.; Chartrand, R.; Skillman, S.; Franco, E.; Kontgis, C.; Moody, D.; Kelton, T.; Mathis, M.
2016-12-01
Cloud computing, combined with recent advances in machine learning for computer vision, is enabling understanding of the world at a scale and at a level of space and time granularity never before feasible. Multi-decadal Earth remote sensing datasets at the petabyte scale (8×10^15 bits) are now available in commercial cloud, and new satellite constellations will generate daily global coverage at a few meters per pixel. Public and commercial satellite observations now provide a wide range of sensor modalities, from traditional visible/infrared to dual-polarity synthetic aperture radar (SAR). This provides the opportunity to build a continuously updated map of the world supporting the academic community and decision-makers in government, finance and industry. We report on work demonstrating country-scale agricultural forecasting, and global-scale land cover/land use mapping using a range of public and commercial satellite imagery. We describe processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work combining this imagery with time-series SAR collected by ESA Sentinel 1. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning. We apply remote sensing science and machine learning algorithms to detect and classify agricultural crops and then estimate crop yields and detect threats to food security (e.g., flooding, drought).
The software platform and analysis methodology also support monitoring water resources, forests and other general indicators of environmental health, and can detect growth and changes in cities that are displacing historical agricultural zones.
Wang, Guizhou; Liu, Jianbo; He, Guojin
2013-01-01
This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
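The spatial mapping step with the area dominant principle (step 3), including flagging low-dominance regions for later spectral reclassification, can be sketched as follows on a toy label array:

```python
import numpy as np

UNCLASSIFIED = 0

def map_to_segments(pixel_classes, segments, threshold=0.5):
    """Assign each watershed segment its dominant pixel class when that
    class's area proportion exceeds `threshold`; otherwise mark the
    segment unclassified for later spectral reclassification."""
    out = np.empty_like(pixel_classes)
    for seg in np.unique(segments):
        sel = segments == seg
        classes, counts = np.unique(pixel_classes[sel], return_counts=True)
        i = counts.argmax()
        if counts[i] / counts.sum() > threshold:
            out[sel] = classes[i]
        else:
            out[sel] = UNCLASSIFIED
    return out

# Toy example: two watershed segments over six pixels.
segments = np.array([[1, 1, 1, 2, 2, 2]])
classes  = np.array([[7, 7, 5, 4, 5, 6]])   # per-pixel SVM labels
mapped = map_to_segments(classes, segments)  # → [[7, 7, 7, 0, 0, 0]]
```

Segment 1 is dominated by class 7 (2/3 of its area) and inherits it; segment 2 has no class above the threshold and is left unclassified, to be resolved by the minimum-distance-to-mean step.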
Karbasi, Salman; Arianpour, Ashkan; Motamedi, Nojan; Mellette, William M; Ford, Joseph E
2015-06-10
Imaging fiber bundles can map the curved image surface formed by some high-performance lenses onto flat focal plane detectors. The relative alignment between the focal plane array pixels and the quasi-periodic fiber-bundle cores can impose an undesirable space variant moiré pattern, but this effect may be greatly reduced by flat-field calibration, provided that the local responsivity is known. Here we demonstrate a stable metric for spatial analysis of the moiré pattern strength, and use it to quantify the effect of relative sensor and fiber-bundle pitch, and that of the Bayer color filter. We measure the thermal dependence of the moiré pattern, and the achievable improvement by flat-field calibration at different operating temperatures. We show that a flat-field calibration image at a desired operating temperature can be generated using linear interpolation between white images at several fixed temperatures, comparing the final image quality with an experimentally acquired image at the same temperature.
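The temperature-interpolated flat field described above can be sketched as per-pixel linear interpolation between white images acquired at fixed temperatures; the 2 × 2 images and temperatures here are toy values:

```python
import numpy as np

def white_image_at(temp, temps, whites):
    """Per-pixel linear interpolation of flat-field (white) images
    acquired at a few fixed calibration temperatures."""
    order = np.argsort(temps)
    t = np.asarray(temps, dtype=float)[order]
    stack = np.stack(whites).astype(float)[order]
    flat = stack.reshape(len(t), -1)
    interp = np.array([np.interp(temp, t, flat[:, k])
                       for k in range(flat.shape[1])])
    return interp.reshape(stack.shape[1:])

w10 = np.full((2, 2), 1.0)   # white image at 10 °C (toy values)
w30 = np.full((2, 2), 3.0)   # white image at 30 °C
w20 = white_image_at(20.0, [10.0, 30.0], [w10, w30])  # → all pixels 2.0
```

The flat-field correction itself then divides each raw frame, pixel by pixel, by the interpolated white image for the current operating temperature.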
NASA Tech Briefs, January 2010
NASA Technical Reports Server (NTRS)
2010-01-01
Topics covered include: Cryogenic Flow Sensor; Multi-Sensor Mud Detection; Gas Flow Detection System; Mapping Capacitive Coupling Among Pixels in a Sensor Array; Fiber-Based Laser Transmitter for Oxygen A-Band Spectroscopy and Remote Sensing; Low-Profile, Dual-Wavelength, Dual-Polarized Antenna; Time-Separating Heating and Sensor Functions of Thermistors in Precision Thermal Control Applications; Cellular Reflectarray Antenna; A One-Dimensional Synthetic-Aperture Microwave Radiometer; Electrical Switching of Perovskite Thin-Film Resistors; Two-Dimensional Synthetic-Aperture Radiometer; Ethernet-Enabled Power and Communication Module for Embedded Processors; Electrically Variable Resistive Memory Devices; Improved Attachment in a Hybrid Inflatable Pressure Vessel; Electrostatic Separator for Beneficiation of Lunar Soil; Amorphous Rover; Space-Frame Antenna; Gear-Driven Turnbuckle Actuator; In-Situ Focusing Inside a Thermal Vacuum Chamber; Space-Frame Lunar Lander; Wider-Opening Dewar Flasks for Cryogenic Storage; Silicon Oxycarbide Aerogels for High-Temperature Thermal Insulation; Supercapacitor Electrolyte Solvents with Liquid Range Below -80 C; Designs and Materials for Better Coronagraph Occulting Masks; Fuel-Cell-Powered Vehicle with Hybrid Power Management; Fine-Water-Mist Multiple-Orientation-Discharge Fire Extinguisher; Fuel-Cell Water Separator; Turbulence and the Stabilization Principle; Improved Cloud Condensation Nucleus Spectrometer; Better Modeling of Electrostatic Discharge in an Insulator; Sub-Aperture Interferometers; Terahertz Mapping of Microstructure and Thickness Variations; Multiparallel Three-Dimensional Optical Microscopy; Stabilization of Phase of a Sinusoidal Signal Transmitted Over Optical Fiber; Vacuum-Compatible Wideband White Light and Laser Combiner Source System; Optical Tapers as White-Light WGM Resonators; EPR Imaging at a Few Megahertz Using SQUID Detectors; Reducing Field Distortion in Magnetic Resonance Imaging; Fluorogenic 
Cell-Based Biosensors for Monitoring Microbes; A Constant-Force Resistive Exercise Unit; GUI to Facilitate Research on Biological Damage from Radiation; On-Demand Urine Analyzer; More-Realistic Digital Modeling of a Human Body; and Advanced Liquid-Cooling Garment Using Highly Thermally Conductive Sheets.
Determining the 3-D structure and motion of objects using a scanning laser range sensor
NASA Technical Reports Server (NTRS)
Nandhakumar, N.; Smith, Philip W.
1993-01-01
In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.
Visual mining geo-related data using pixel bar charts
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Keim, Daniel A.; Dayal, Umeshwar; Wright, Peter; Schneidewind, Joern
2005-03-01
A common approach to analyzing geo-related data is to use bar charts or x-y plots. They are intuitive and easy to use, but important information often gets lost. In this paper, we introduce a new interactive visualization technique called Geo Pixel Bar Charts, which combines the advantages of Pixel Bar Charts and interactive maps. This technique allows analysts to visualize large amounts of spatial data without aggregation while showing the geographical regions corresponding to the spatial data attributes at the same time. We apply Geo Pixel Bar Charts to visually mine sales transactions and Internet usage from different locations. Our experimental results show the effectiveness of this technique at revealing data distributions and exceptions on the map.
NASA Technical Reports Server (NTRS)
2004-01-01
Topics: Optoelectronic Sensor System for Guidance in Docking; Hybrid Piezoelectric/Fiber-Optic Sensor Sheets; Multisensor Arrays for Greater Reliability and Accuracy; Integrated-Optic Oxygen Sensors; Ka-Band Autonomous Formation Flying Sensor; CMOS VLSI Active-Pixel Sensor for Tracking; Lightweight, Self-Deploying Foam Antenna Structures; Electrically Small Microstrip Quarter-Wave Monopole Antennas; A 2-to-28-MHz Phase-Locked Loop; Portable Electromyograph; Open-Source Software for Modeling of Nanoelectronic Devices; Software for Generating Strip Maps from SAR Data; Calibration Software for use with Jurassicprok; Software for Probabilistic Risk Reduction; Software Processes SAR Motion-Measurement Data; Improved Method of Purifying Carbon Nanotubes; Patterned Growth of Carbon Nanotubes or Nanofibers; Lightweight, Rack-Mountable Composite Cold Plate/Shelves; SiC-Based Miniature High-Temperature Cantilever Anemometer; Inlet Housing for a Partial-Admission Turbine; Lightweight Thermoformed Structural Components and Optics; Growing High-Quality InAs Quantum Dots for Infrared Lasers; Selected Papers on Protoplanetary Disks; Module for Oxygenating Water without Generating Bubbles; Coastal Research Imaging Spectrometer; Rapid Switching and Modulation by use of Coupled VCSELs; Laser-Induced-Fluorescence Photogrammetry and Videogrammetry; Laboratory Apparatus Generates Dual-Species Cold Atomic Beam; Laser Ablation of Materials for Propulsion of Spacecraft; Small Active Radiation Monitor; Hybrid Image-Plane/Stereo Manipulation; Partitioning a Gridded Rectangle into Smaller Rectangles; Digital Radar-Signal Processors Implemented in FPGAs; Part 1 of a Computational Study of a Drop-Laden Mixing Layer; and Some Improvements in Signal-Conditioning Circuits.
NASA Astrophysics Data System (ADS)
Evans, Aaron H.
Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique which calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires the aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. The disadvantages of these calibrations are that 1) the user must manually identify extremely dry and wet pixels in the image and 2) each calibration is applicable only over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques which automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using 1) only dry pixels and 2) also including wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: 1) Big Cypress 2) Disney Wilderness 3) Everglades 4) near Gainesville, FL 5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all sites but Big Cypress and Gainesville.
Evaporative fraction is not very sensitive to instantaneous available energy but it is sensitive to temperature when wet pixels are included because temperature is required for estimating wet pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. Eddy covariance comparison and temporal interpolation produced acceptable bias error for most cases suggesting automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.
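The SEBAL/METRIC-style calibration described above (anchoring the sensible-heat/temperature relation at a dry and a wet pixel) can be sketched as follows; the constant aerodynamic resistance, the air heat-capacity value and all pixel values are assumptions for illustration, not the dissertation's configuration:

```python
import numpy as np

RHO_CP = 1200.0   # J m-3 K-1, assumed volumetric heat capacity of air
R_AH = 50.0       # s m-1, assumed constant aerodynamic resistance

def evaporative_fraction(ts, avail_energy, hot, cold):
    """Calibrate dT = a + b*Ts so the hot (dry) pixel has H = Rn - G
    (all available energy sensible) and the cold (wet) pixel has H = 0,
    then return EF = 1 - H / (Rn - G) for every pixel."""
    dT_hot = avail_energy[hot] * R_AH / RHO_CP
    b = dT_hot / (ts[hot] - ts[cold])
    a = -b * ts[cold]
    h = RHO_CP * (a + b * ts) / R_AH
    return 1.0 - h / avail_energy

ts = np.array([300.0, 310.0, 305.0])      # K, retrieved surface temperature
avail = np.array([500.0, 400.0, 450.0])   # W m-2, available energy Rn - G
ef = evaporative_fraction(ts, avail, hot=1, cold=0)
# ef → [1.0, 0.0, ~0.56]: the wet pixel evaporates at the energy limit,
# the dry pixel not at all, and intermediate pixels scale in between.
```

Because the calibration is anchored to retrieved temperatures at the two extremes, systematic temperature retrieval errors largely cancel, which is the motivation for this class of methods.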
Snow cover retrieval over Rhone and Po river basins from MODIS optical satellite data (2000-2009).
NASA Astrophysics Data System (ADS)
Dedieu, Jean-Pierre, ,, Dr.; Boos, Alain; Kiage, Wiliam; Pellegrini, Matteo
2010-05-01
Estimation of the Snow Covered Area (SCA) is an important issue for meteorological applications and hydrological modeling of runoff. With spectral bands in the visible, near and middle infrared, the MODIS optical satellite sensor can be used to detect snow cover because of large differences between the reflectance of snow-covered and snow-free surfaces. At the same time, it allows separation between snow and clouds. Moreover, the sensor provides daily coverage of large areas (2,500 km range). However, as the pixel size is 500 m x 500 m, a MODIS pixel may be only partially covered by snow, particularly in Alpine areas, where snow may not be present in valleys lying at lower altitudes. Also, variations of reflectance due to differential illumination as a function of slope and aspect, as well as bidirectional effects, may be present in the images. Nevertheless, it is possible to estimate snow cover at the sub-pixel level with relatively good accuracy, and with very good results if the sub-pixel estimates are integrated over a few pixels relative to an entire watershed. Integrated into the EU-FP7 ACQWA Project (www.acqwa.ch), this approach was first applied over the Alpine area of the Rhone river basin upstream of Lake Geneva: Canton du Valais, Switzerland (5,375 km²), and in a second step over the Alpine, rolling-hill and plain areas of the Po catchment in the Val d'Aosta and Piemonte regions, Italy (37,190 km²). Watershed boundaries were provided by the GRID (CH) and ARPA (IT) partners, respectively. The complete satellite image database was extracted from the U.S. MODIS/NASA website (http://modis.gsfc.nasa.gov/) for MOD09_B1 reflectance images, and from the MODIS/NSIDC website (http://nsidc.org/index.html) for MOD10_A2 snow cover images. Only the Terra platform was used, because its images are acquired in the morning and are therefore better correlated with dry snow surfaces, avoiding the cloud coverage of the afternoon (Aqua platform).
The MOD09 reflectance images and MOD10_A2 products were analyzed to retrieve (i) fractional snow cover at the sub-pixel scale and (ii) maximum snow cover, respectively. All products were retrieved at 8-day intervals over a complete time period of 10 years (2000-2009), giving 500 images for each river basin. The Digital Elevation Model was taken from the NASA/SRTM database at 90-m resolution and used for (i) illumination versus topography correction of the snow cover and (ii) geometric rectification of the images. The geographic projection is WGS84, UTM 32. Fractional snow cover mapping was derived from the NDSI linear regression method (Salomonson et al., 2004). The cloud mask was given by the MODIS-NASA library (radiometric threshold) and completed by an inverse slope regression to avoid lowland fog being confused with thin snow cover (Po river basin). Maximum snow cover mapping was retrieved from the NSIDC database classification (Hall et al., 2001). The validation step compared the MODIS snow map outputs with meteorological data provided by a network of 87 meteorological stations: temperature, precipitation and snow depth measurements. A correlation of 0.92 was observed for snow/non-snow cover, which can be considered quite satisfactory given the radiometric problems encountered in mountainous areas, particularly in the snowmelt season. The 10-year results indicate a main difference between (i) the regular snow accumulation and depletion in the Rhone basin and (ii) the high temporal and spatial variability of snow cover in the Po basin. A high sensitivity to small variations of air temperature, often close to 1 °C, was also observed, particularly at the beginning and end of the winter season. The regional snow cover depletion is influenced both by positive thermal anomalies (e.g. 2000 and 2006) and by the general trend of rising atmospheric temperatures since the late 1980s, particularly for the Po river basin. Results will be combined with two hydrological models: Topkapi and Fest.
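The NDSI linear-regression step for fractional snow cover can be sketched as follows; the gain and offset are approximately the coefficients published for MODIS by Salomonson et al., and the reflectance values are illustrative:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green and shortwave
    infrared reflectance (MODIS bands 4 and 6)."""
    return (green - swir) / (green + swir)

def fractional_snow_cover(green, swir, gain=1.45, offset=-0.01):
    """Sub-pixel snow fraction from the NDSI linear regression,
    clipped to the physical range [0, 1]."""
    return np.clip(gain * ndsi(green, swir) + offset, 0.0, 1.0)

# Illustrative reflectances: full snow, snow-free, and a mixed pixel.
green = np.array([0.80, 0.20, 0.50])
swir  = np.array([0.10, 0.25, 0.30])
fsc = fractional_snow_cover(green, swir)  # → [1.0, 0.0, ~0.35]
```

Snow is bright in the green band but dark in the SWIR, so a high NDSI maps to a high snow fraction; clouds remain bright in the SWIR, which is what allows the snow/cloud separation mentioned above.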
Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps
NASA Astrophysics Data System (ADS)
Zlinszky, A.; Kania, A.
2016-06-01
Instead of assigning every map pixel to a single class, fuzzy classification records not only the class assigned to each pixel but also the certainty of that class and the alternative possible classes, based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations, and the rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating class-wise prediction certainty with "dominance profiles", which bin pixels according to the probability of the dominant class while also showing the probability of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map, both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard-boundary evaluation procedures, support active-learning-based iterative classification, and can be applied for operational use.
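The "dominance profile" idea can be sketched as follows, assuming the ensemble classifier outputs a per-class probability stack; the function name and the uniform binning scheme are our own illustration, not the authors' implementation:

```python
import numpy as np

def dominance_profile(probs, n_bins=10):
    """Bin pixels by the probability of their dominant class.

    probs: array of shape (n_classes, n_pixels); each column sums to 1.
    Returns (counts, mean_probs): for each bin, the number of pixels whose
    dominant-class probability falls in that bin, and the mean probability
    of every class among those pixels.
    """
    p_dom = probs.max(axis=0)                        # certainty of dominant class
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p_dom, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    mean_probs = np.zeros((n_bins, probs.shape[0]))
    for b in range(n_bins):
        if counts[b]:
            mean_probs[b] = probs[:, idx == b].mean(axis=1)
    return counts, mean_probs
```

Plotting `counts` against the bin edges, with `mean_probs` stacked per bin, reproduces the kind of visualization the abstract describes: how many pixels are classified with a given certainty, and which alternative classes compete in the uncertain bins.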
The Statistical Analysis of Global Oxygen ENAs Sky Maps from IBEX-Lo: Implication on the ENA sources
NASA Astrophysics Data System (ADS)
Park, J.; Kucharek, H.; Moebius, E.; Bochsler, P. A.
2013-12-01
Energetic Neutral Atoms (ENAs) created in the interstellar medium and at the heliospheric interface have been observed by the Interstellar Boundary Explorer (IBEX), orbiting the Earth on a highly elliptical trajectory since 2008. The science payload of this small spacecraft consists of two highly sensitive single-pixel ENA cameras: the IBEX-Lo sensor, covering the energy range from 0.01 to 2 keV, and the IBEX-Hi sensor, covering the energy range from 0.3 to 6 keV. In order to measure the incident ENAs, the IBEX-Lo sensor uses a conversion surface to convert neutrals to negative ions. After passing an electrostatic analyzer, they are separated by species (H and heavier species) in a time-of-flight mass spectrometer. All-sky H ENA maps over three years were completed and show two significant features: the interstellar H and He neutral flow appears at the low energies (0.01 to 0.11 keV), and the ribbon appears at the higher energies (0.21 to 1.35 keV). As in the hydrogen sky maps, the interstellar O+Ne neutral flow appears in the all-sky O ENA maps at energies from 0.21 to 0.87 keV. The distributed heliospheric oxygen ENA signal over the entire energy range must be determined from very low counting statistics. In this study, we therefore apply Cash's C statistic (Cash, 1979) and determine the upper and lower confidence limits (Gehrels, 1986) for the statistical significance of all events in the all-sky O ENA maps. The newly created sky maps clearly show the distributed heliospheric O ENA flux surrounding the interstellar O+Ne neutral flow. This enhanced distributed ENA flux will provide new insights into the ion population creating the ENA emission. There appears to be no signature of the ribbon in the all-sky O ENA maps. If one assumes that the generation mechanism of the ribbon is the same for hydrogen and oxygen, the location of the source ion population may be closer to the heliosheath.
In this poster we will discuss all the results of this study and their implications for the source regions and populations in detail.
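The low-count statistics cited above (Cash, 1979; Gehrels, 1986) can be sketched directly from the published formulas; this is a generic implementation of those formulas, not the authors' analysis pipeline:

```python
import numpy as np

def cash_C(observed, model):
    """Cash (1979) C statistic for Poisson-distributed counts.

    C = 2 * sum(m - n + n*ln(n/m)); bins with n = 0 contribute 2*m.
    """
    observed = np.asarray(observed, float)
    model = np.asarray(model, float)
    term = np.where(observed > 0,
                    model - observed
                    + observed * np.log(observed / np.maximum(model, 1e-300)),
                    model)
    return 2.0 * term.sum()

def gehrels_limits(n, s=1.0):
    """Approximate Poisson confidence limits for n counts (Gehrels 1986).

    s is the equivalent Gaussian significance (s = 1 for ~84% limits).
    """
    upper = n + s * np.sqrt(n + 0.75) + (s * s + 3.0) / 4.0
    lower = n * (1.0 - 1.0 / (9.0 * n) - s / (3.0 * np.sqrt(n))) ** 3 if n > 0 else 0.0
    return lower, upper
```

For zero observed counts the lower limit is zero and the upper limit is about 1.87 at the 1-sigma level, which is why sparse oxygen ENA maps need such limits rather than sqrt(N) error bars.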
Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging.
Esposito, M; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Evans, P M; Allinson, N M; Wells, K
2014-07-07
Recently, CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, although wafer scale CMOS APSs are monolithic, sources of non-uniformity of response and regional variations can persist, representing a significant challenge for wafer scale sensor response. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation in amplification, variation between readout components, wafer defects and process variations across the wafer. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer scale stitched CMOS APS. For the first time, a per-pixel analysis of the electro-optical performance of a wafer scale CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer scale sensors. A complete model of the signal generation in the pixel array is provided and proves capable of accounting for noise and gain variations across the pixel array. This analysis allows readout noise and conversion gain to be evaluated at pixel level, at stitching block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance was further investigated in a typical x-ray application, i.e. mammography, showing a contrast-to-noise ratio (CNR) uniformity among the highest of mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently in use (i.e. FPIs), a theoretical evaluation of the detective quantum efficiency (DQE) at zero frequency was performed, resulting in a higher DQE for this detector compared to FPIs. Optical characterization, x-ray contrast measurements and theoretical DQE evaluation suggest that a trade-off can be found between the need for a large imaging area and the requirement of uniform imaging performance, making the DynAMITe large area CMOS APS suitable for a range of bio-medical applications.
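The evaluation of conversion gain and readout noise mentioned above is commonly done with a mean-variance (photon transfer) analysis. The sketch below illustrates that standard technique under the assumption of shot-noise-limited flat-field illumination; it is not the paper's exact per-pixel signal model:

```python
import numpy as np

def photon_transfer(frames_per_level):
    """Estimate conversion gain (e-/DN) and read noise from flat-field stacks.

    frames_per_level: list of arrays (n_frames, H, W), one stack per
    illumination level. For shot-noise-limited light,
    variance_DN = mean_DN / gain + read_variance, so a straight-line fit
    of temporal variance vs. mean signal yields the gain (1/slope) and
    the read noise (sqrt of the intercept).
    """
    means, variances = [], []
    for stack in frames_per_level:
        means.append(stack.mean())
        variances.append(stack.var(axis=0).mean())  # mean temporal variance
    slope, intercept = np.polyfit(means, variances, 1)
    gain_e_per_dn = 1.0 / slope
    read_noise_dn = np.sqrt(max(intercept, 0.0))
    return gain_e_per_dn, read_noise_dn
```

Applying the same fit per pixel or per stitching block, instead of to the full-frame averages shown here, gives the regional statistics (and coefficient of variation) the abstract reports.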
An asynchronous data-driven readout prototype for CEPC vertex detector
NASA Astrophysics Data System (ADS)
Yang, Ping; Sun, Xiangming; Huang, Guangming; Xiao, Le; Gao, Chaosong; Huang, Xing; Zhou, Wei; Ren, Weiping; Li, Yashu; Liu, Jianchao; You, Bihui; Zhang, Li
2017-12-01
The Circular Electron Positron Collider (CEPC) is proposed as a Higgs boson and/or Z boson factory for high-precision measurements of the Higgs boson. The precision of the secondary-vertex impact parameter plays an important role in such measurements, which typically rely on flavor tagging. Silicon CMOS Pixel Sensors (CPS), which can feature high position resolution, low power consumption and fast readout simultaneously, are thus the most promising technology candidate for the CEPC vertex detector. For the R&D of the CEPC vertex detector, we have developed a prototype, MIC4, in the TowerJazz 180 nm CMOS Image Sensor (CIS) process. We have proposed and implemented a new architecture of asynchronous zero-suppressed data-driven readout inside the matrix, combined with a binary front-end inside the pixel. The matrix contains 128 rows and 64 columns with a small pixel pitch of 25 μm. The readout architecture combines the traditional OR-gate chain inside a super pixel with a priority arbiter tree between the super pixels, reading out only hit pixels. The MIC4 architecture is introduced in more detail in this paper. The chip will be taped out in May and characterized when it comes back.
Development of monolithic pixel detector with SOI technology for the ILC vertex detector
NASA Astrophysics Data System (ADS)
Yamada, M.; Ono, S.; Tsuboyama, T.; Arai, Y.; Haba, J.; Ikegami, Y.; Kurachi, I.; Togawa, M.; Mori, T.; Aoyagi, W.; Endo, S.; Hara, K.; Honda, S.; Sekigawa, D.
2018-01-01
We have been developing a monolithic pixel sensor for the International Linear Collider (ILC) vertex detector in the 0.2 μm FD-SOI CMOS process of LAPIS Semiconductor Co., Ltd. We aim to achieve the 3 μm single-point resolution required for the ILC with a 20×20 μm² pixel. Beam bunch crossing at the ILC occurs every 554 ns in 1-ms-long bunch trains with an interval of 200 ms. Each pixel must record the charge and time stamp of a hit to identify the collision bunch for event reconstruction. The necessary functions include an amplifier, comparator, shift register, analog memory and time stamp in each pixel, and a column ADC and zero-suppression logic on the chip. We tested the first prototype sensor, SOFIST ver.1, with a 120 GeV proton beam at the Fermilab Test Beam Facility in January 2017. SOFIST ver.1 has a charge sensitive amplifier and two analog memories in each pixel, and an 8-bit Wilkinson-type ADC is implemented for each column on the chip. We measured the residual of the hit position with respect to the reconstructed track. The standard deviation of the residual distribution, fitted by a Gaussian, is better than 3 μm.
Image sensor with motion artifact suppression and anti-blooming
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Wrigley, Chris (Inventor); Yang, Guang (Inventor); Yadid-Pecht, Orly (Inventor)
2006-01-01
An image sensor includes pixels formed on a semiconductor substrate. Each pixel includes a photoactive region in the semiconductor substrate, a sense node, and a power supply node. A first electrode is disposed near a surface of the semiconductor substrate. A bias signal on the first electrode sets a potential in a region of the semiconductor substrate between the photoactive region and the sense node. A second electrode is disposed near the surface of the semiconductor substrate. A bias signal on the second electrode sets a potential in a region of the semiconductor substrate between the photoactive region and the power supply node. The image sensor includes a controller that causes bias signals to be provided to the electrodes so that photocharges generated in the photoactive region are accumulated in the photoactive region during a pixel integration period, the accumulated photocharges are transferred to the sense node during a charge transfer period, and photocharges generated in the photoactive region are transferred to the power supply node during a third period without passing through the sense node. The imager can operate at high shutter speeds with simultaneous integration of pixels in the array. High quality images can be produced free from motion artifacts. High quantum efficiency, good blooming control, low dark current, low noise and low image lag can be obtained.
Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors.
Ge, Xiaoliang; Theuwissen, Albert J P
2018-02-27
This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide-Semiconductor (CMOS) image sensors. In order to address the trade-off between low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the readout path. Such a readout topology, however, exhibits non-stationary large-signal behavior, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models and therefore cannot be readily applied to Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristics of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models.
IR sensitivity enhancement of CMOS Image Sensor with diffractive light trapping pixels.
Yokogawa, Sozo; Oshiyama, Itaru; Ikeda, Harumi; Ebiko, Yoshiki; Hirano, Tomoyuki; Saito, Suguru; Oinoue, Takashi; Hagimoto, Yoshiya; Iwamoto, Hayato
2017-06-19
We report on the IR sensitivity enhancement of a back-illuminated CMOS Image Sensor (BI-CIS) with a 2-dimensional diffractive inverted pyramid array structure (IPA) on crystalline silicon (c-Si) and deep trench isolation (DTI). FDTD simulations of semi-infinitely thick c-Si with 2D IPAs of pitch above 400 nm on its surface show more than 30% improvement of light absorption at λ = 850 nm, and a maximum enhancement of 43% at that wavelength is confirmed for the 540 nm pitch. A prototype BI-CIS sample with a pixel size of 1.2 μm square containing 400 nm pitch IPAs shows 80% sensitivity enhancement at λ = 850 nm compared to a reference sample with a flat surface. This is due to diffraction by the IPA and total reflection at the pixel boundary. NIR images taken by a demo camera equipped with a C-mount lens show 75% sensitivity enhancement in the λ = 700-1200 nm wavelength range with negligible spatial resolution degradation. Light-trapping CIS pixel technology promises to improve NIR sensitivity and appears applicable to many different image sensor applications, including security cameras, personal authentication, and range-finding Time-of-Flight cameras with IR illumination.
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-01-01
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected because of mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
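The approximately-linear FPN calibration enabled by monotonic responses can be illustrated as follows. Using the frame median as the reference response, and a degree-one polynomial per pixel, are assumptions of this sketch rather than details taken from the paper:

```python
import numpy as np

def calibrate_fpn(stack):
    """Per-pixel linear FPN calibration for monotonic pixel responses.

    stack: (n_exposures, n_pixels) responses to uniform stimuli.
    Each pixel's response is fit linearly against the frame median
    (a monotonic reference response), giving a per-pixel offset and
    gain, even though the response vs. light is nonlinear.
    """
    ref = np.median(stack, axis=1)          # reference response per exposure
    n_pix = stack.shape[1]
    gains = np.empty(n_pix)
    offsets = np.empty(n_pix)
    for p in range(n_pix):
        gains[p], offsets[p] = np.polyfit(ref, stack[:, p], 1)
    return offsets, gains

def correct_fpn(frame, offsets, gains):
    """Map raw pixel responses back onto the reference response."""
    return (frame - offsets) / gains
```

The correction step uses only a subtraction and a division per pixel, which is the "arithmetic only" property the abstract highlights; a fixed-point version would quantize `offsets` and `gains` to the chosen number of bits.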
Sensor development at the semiconductor laboratory of the Max-Planck-Society
NASA Astrophysics Data System (ADS)
Bähr, A.; Lechner, P.; Ninkovic, J.
2017-12-01
For more than twenty years the semiconductor laboratory of the Max Planck Society (MPG-HLL) has been developing high-performance, specialised scientific silicon sensors, including the integration of amplifying electronics on the sensor chip. This paper summarises the current status of devices such as pnCCDs and DePFET Active Pixel Sensors and their applications.
NASA Astrophysics Data System (ADS)
Tamura, K.; Jansen, R. A.; Eskridge, P. B.; Cohen, S. H.; Windhorst, R. A.
2010-06-01
We present the results of a study of the late-type spiral galaxy NGC 0959, before and after application of the pixel-based dust extinction correction described in Tamura et al. (Paper I). Galaxy Evolution Explorer far-UV and near-UV, ground-based Vatican Advanced Technology Telescope UBVR, and Spitzer/Infrared Array Camera 3.6, 4.5, 5.8, and 8.0 μm images are studied through pixel color-magnitude diagrams and pixel color-color diagrams (pCCDs). We define groups of pixels based on their distribution in a pCCD of (B - 3.6 μm) versus (FUV - U) colors after extinction correction, and in the same pCCD trace their locations before the extinction correction was applied. This shows that selecting pixel groups is not meaningful when using colors uncorrected for dust. We also trace the distribution of the pixel groups on a pixel coordinate map of the galaxy. We find that the pixel-based (two-dimensional) extinction correction is crucial for revealing the spatial variations in the dominant stellar population, averaged over each resolution element. Different types and mixtures of stellar populations, and galaxy structures such as a previously unrecognized bar, become readily discernible in the extinction-corrected pCCD and as coherent spatial structures in the pixel coordinate map.
Sub-pixel mineral mapping using EO-1 Hyperion hyperspectral data
NASA Astrophysics Data System (ADS)
Kumar, C.; Shetty, A.; Raval, S.; Champatiray, P. K.; Sharma, R.
2014-11-01
This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using the Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in the hostile mountainous terrain of the Rajsamand district of Rajasthan, which hosts economic mineralization of lead, zinc and copper. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) computation, and extraction from the reflectance image of endmembers of surface minerals such as illite, montmorillonite, phlogopite, dolomite and chlorite. These endmembers were then checked against the USGS mineral spectral library and laboratory spectra of rock samples collected in the field. Subsequently, the MTTCIMF algorithm was applied to the processed image to obtain a distribution map of each detected mineral. A virtual verification method, which uses image information directly to evaluate the result, gave an overall accuracy of 68% and a kappa coefficient of 0.6. Sub-pixel mineral information with reasonable accuracy could be a valuable guide for the geological and exploration community before committing to expensive ground and/or laboratory experiments to discover economic deposits. The study thus demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping with the MTTCIMF algorithm in a cost- and time-effective manner.
A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications.
Moeys, Diederik Paul; Corradi, Federico; Li, Chenghan; Bamford, Simeon A; Longinotti, Luca; Voigt, Fabian F; Berry, Stewart; Taverni, Gemma; Helmchen, Fritjof; Delbruck, Tobi
2018-02-01
Applications requiring detection of small visual contrast demand high sensitivity. Event cameras can provide higher dynamic range (DR) and reduce data rate and latency, but most existing event cameras have limited sensitivity. This paper presents the results of a vision sensor called SDAVIS192, fabricated in a 180-nm TowerJazz CIS process. It outputs temporal contrast dynamic vision sensor (DVS) events and conventional active pixel sensor frames. The SDAVIS192 improves on previous DAVIS sensors with higher sensitivity for temporal contrast. The temporal contrast thresholds can be set down to 1% for negative changes in logarithmic intensity (OFF events) and down to 3.5% for positive changes (ON events). This achievement is made possible by the adoption of an in-pixel preamplification stage. The preamplifier reduces the effective intrascene DR of the sensor (70 dB for OFF and 50 dB for ON events), but an automated operating-region control allows up to at least 110 dB DR for OFF events. A second contribution of this paper is a characterization methodology for measuring DVS event detection thresholds that incorporates a measure of signal-to-noise ratio (SNR). At an average SNR of 30 dB, the DVS temporal contrast threshold fixed pattern noise is measured to be 0.3%-0.8% temporal contrast. Results comparing monochrome and RGBW color filter array DVS events are presented. The higher sensitivity of SDAVIS192 makes this sensor potentially useful for calcium imaging, as shown in a recording from cultured neurons expressing the calcium-sensitive green fluorescent protein GCaMP6f.
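The event-generation principle behind the thresholds quoted above can be illustrated with a toy model: an event fires whenever the log intensity moves by more than the ON or OFF threshold away from the reference level stored at the last event. The function and its parameter choices are illustrative only, not the SDAVIS192 circuit:

```python
import numpy as np

def dvs_events(log_intensity, theta_on=0.035, theta_off=0.01):
    """Emit (time, polarity) events from a log-intensity time series.

    Thresholds mirror the figures quoted in the abstract (3.5% ON, 1% OFF),
    interpreted as changes in logarithmic intensity for one pixel.
    """
    events = []
    ref = log_intensity[0]                    # reference level at last event
    for t, x in enumerate(log_intensity[1:], start=1):
        while x - ref >= theta_on:            # positive contrast -> ON event
            ref += theta_on
            events.append((t, +1))
        while ref - x >= theta_off:           # negative contrast -> OFF event
            ref -= theta_off
            events.append((t, -1))
    return events
```

A slow brightening of 10% in log intensity produces a couple of ON events and no OFF events; the asymmetric thresholds mean the same change downward would produce many more OFF events, which is the sensitivity asymmetry the abstract reports.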
Contrast computation methods for interferometric measurement of sensor modulation transfer function
NASA Astrophysics Data System (ADS)
Battula, Tharun; Georgiev, Todor; Gille, Jennifer; Goma, Sergio
2018-01-01
Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
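The two contrast definitions compared in the abstract can be sketched for an ideal sinusoidal fringe, for which both reduce to the modulation depth m; the exact Fourier-contrast definition used by the authors may differ from this simple DC-normalized form:

```python
import numpy as np

def michelson_contrast(fringe):
    """Classic (Imax - Imin) / (Imax + Imin) contrast of a fringe."""
    return (fringe.max() - fringe.min()) / (fringe.max() + fringe.min())

def fourier_contrast(fringe):
    """Twice the fundamental's Fourier amplitude over the DC term.

    For a fringe 1 + m*cos(2*pi*f*x) this equals m, matching Michelson
    contrast, but it is far less sensitive to noise and outlier pixels
    because it uses the whole signal rather than two extreme samples.
    """
    spectrum = np.abs(np.fft.rfft(fringe))
    dc = spectrum[0]
    fundamental = spectrum[1:].max()          # assume the fundamental dominates
    return 2.0 * fundamental / dc
```

On real sensor data the Michelson estimate degrades with noise (a single hot pixel shifts Imax), whereas the Fourier estimate averages over all samples, which is one motivation for defining MTF contrast in the frequency domain.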
Spatial optical crosstalk in CMOS image sensors integrated with plasmonic color filters.
Yu, Yan; Chen, Qin; Wen, Long; Hu, Xin; Zhang, Hui-Fang
2015-08-24
The imaging resolution of complementary metal-oxide-semiconductor (CMOS) image sensors (CIS) keeps increasing, to approximately 7k × 4k. As a result, the pixel size shrinks down to sub-2 μm, which greatly increases the spatial optical crosstalk. Recently, plasmonic color filters were proposed as an alternative to conventional colorant-pigmented ones. However, there is little work on their size effect and on the spatial optical crosstalk in a model of a CIS. By numerical simulation, we investigate the size effect of nanocross-array plasmonic color filters and analyze the spatial optical crosstalk of each pixel in a Bayer array of a CIS with a pixel size of 1 μm. It is found that the small pixel size deteriorates the filtering performance of nanocross color filters and induces substantial spatial color crosstalk. By integrating the plasmonic filters in a low metal layer of the standard CMOS process, the crosstalk is reduced significantly, becoming comparable to that of pigmented filters in a state-of-the-art backside-illumination CIS.