Sample records for pixel image sensor

  1. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, it requires a high voltage and a bulky system because of its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, creates and delivers a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11 µm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 µm.

  2. A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems

    NASA Technical Reports Server (NTRS)

    Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.

    1993-01-01

    A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.

  3. Photodiode area effect on performance of X-ray CMOS active pixel sensors

    NASA Astrophysics Data System (ADS)

    Kim, M. S.; Kim, Y.; Kim, G.; Lim, K. T.; Cho, G.; Kim, D.

    2018-02-01

    Compared to conventional TFT-based X-ray imaging devices, CMOS-based X-ray imaging sensors are considered next generation because they can be manufactured with very small pixel pitches and can acquire high-speed images. In addition, CMOS-based sensors have the advantage of integrating various functional circuits within the sensor. Image quality can also be improved by the high fill factor achievable in large pixels. If the size of the subject is small, however, the pixel size must be reduced as a consequence, and the fill factor must also be reduced to accommodate the various functional circuits within the pixel. In this study, 3T active pixel sensors (APSs) with photodiodes of four different sizes were fabricated and evaluated. It is well known that a larger photodiode leads to improved overall performance. Nonetheless, once the photodiode area exceeds 1000 μm², the rate at which sensor performance improves with increasing photodiode size diminishes. As a result, considering the fill factor, a pixel pitch greater than 32 μm is not necessary to achieve efficient, high image quality. In addition, poor image quality is to be expected unless special sensor-design techniques are applied for sensors with a pixel pitch of 25 μm or less.

  4. CMOS foveal image sensor chip

    NASA Technical Reports Server (NTRS)

    Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Bandera, Cesar (Inventor); Xia, Shu (Inventor)

    2002-01-01

    A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.

  5. Microlens performance limits in sub-2 μm pixel CMOS image sensors.

    PubMed

    Huo, Yijie; Fesenmaier, Christian C; Catrysse, Peter B

    2010-03-15

    CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 μm, however, diffraction affects the optical performance of the pixel and its microlens, in particular. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 μm. This severely degrades performance for on- and off-axis pixels in the red, green and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 μm pixel node.

  6. A 100 Mfps image sensor for biological applications

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo

    2018-02-01

    Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications to advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like petals of a flower. One has five collection gates and one drain gate at the center, which can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also makes in-pixel signal accumulation possible. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulse neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that the image sensor designed based on the theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.

  7. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the frequency band for HDTV.

  8. Median filters as a tool to determine dark noise thresholds in high resolution smartphone image sensors for scientific imaging

    NASA Astrophysics Data System (ADS)

    Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.

    2018-01-01

    An evaluation of the use of median filters in the reduction of dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. The large number of photosites provides an image sensor with very high sensitivity, but also makes it prone to noise effects such as hot pixels. Similar to earlier research with older smartphone models, no appreciable temperature effects were observed in the overall average pixel values for images taken at ambient temperatures between 5 °C and 25 °C. In this research, hot pixels are defined as pixels with intensities above a specific threshold. The threshold is determined from the distribution of pixel values of a set of images with uniform statistical properties, obtained by applying median filters of increasing size. An image with uniform statistics was derived as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant across multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the uniformity of the temperature effects masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot-pixel counts were also reduced by decreasing image resolution. This research provides a methodology to characterise the dark-noise behaviour of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
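
    As an illustration of the thresholding idea described above, the following sketch flags and replaces hot pixels whose value exceeds the local median by a fixed number of digital numbers. It is a minimal reconstruction, not the authors' code: it assumes NumPy/SciPy, a 2-D `dark_frame` array, and reuses the 7 × 7 filter size and 9 DN threshold reported in the abstract; all function and variable names are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_hot_pixels(dark_frame, kernel_size=7, threshold_dn=9):
    """Flag and replace hot pixels in a dark frame.

    A pixel is treated as 'hot' when it exceeds the local median
    (median filter of size kernel_size x kernel_size) by more than
    threshold_dn digital numbers; flagged pixels are replaced by the
    local median value. Returns the cleaned frame and the hot-pixel mask.
    """
    background = median_filter(dark_frame, size=kernel_size)
    hot_mask = (dark_frame.astype(np.int64) - background) > threshold_dn
    cleaned = dark_frame.copy()
    cleaned[hot_mask] = background[hot_mask]
    return cleaned, hot_mask
```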

  9. CMOS Active-Pixel Image Sensor With Intensity-Driven Readout

    NASA Technical Reports Server (NTRS)

    Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina

    1996-01-01

    Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.

  10. A Chip and Pixel Qualification Methodology on Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank

    2004-01-01

    This paper presents a qualification methodology for imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figures of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.

  11. Method and apparatus of high dynamic range image sensor with individual pixel reset

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Pain, Bedabrata (Inventor); Fossum, Eric R. (Inventor)

    2001-01-01

    A wide dynamic range image sensor provides individual pixel reset to vary the integration time of individual pixels. The integration time of each pixel is controlled by column and row reset control signals which activate a logical reset transistor only when both signals coincide for a given pixel.
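
    A behavioural sketch of the coincidence idea, purely illustrative (the patent describes pixel-level circuitry, not software): a pixel's integration restarts only when both its row and column reset lines are asserted.

```python
import numpy as np

def apply_individual_reset(integration_time, row_reset, col_reset):
    """Reset integration time only where row and column reset signals coincide.

    integration_time : 2-D array of per-pixel integration times (in frames)
    row_reset        : 1-D boolean array, one entry per row
    col_reset        : 1-D boolean array, one entry per column
    """
    # Logical AND of the row and column reset lines selects individual pixels.
    reset_mask = np.outer(row_reset, col_reset)
    integration_time = integration_time + 1   # all pixels keep integrating
    integration_time[reset_mask] = 0          # selected pixels start over
    return integration_time
```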

  12. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On-chip analog-to-digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low-cost applications.

  13. CMOS image sensor with lateral electric field modulation pixels for fluorescence lifetime imaging with sub-nanosecond time response

    NASA Astrophysics Data System (ADS)

    Li, Zhuo; Seo, Min-Woong; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2016-04-01

    This paper presents the design and implementation of a time-resolved CMOS image sensor with a high-speed lateral electric field modulation (LEFM) gating structure for time domain fluorescence lifetime measurement. Time-windowed signal charge can be transferred from a pinned photodiode (PPD) to a pinned storage diode (PSD) by turning on a pair of transfer gates, which are situated beside the channel. Unwanted signal charge can be drained from the PPD to the drain by turning on another pair of gates. The pixel array contains 512 (V) × 310 (H) pixels with 5.6 × 5.6 µm2 pixel size. The imager chip was fabricated using 0.11 µm CMOS image sensor process technology. The prototype sensor has a time response of 150 ps at 374 nm. The fill factor of the pixels is 5.6%. The usefulness of the prototype sensor is demonstrated for fluorescence lifetime imaging through simulation and measurement results.

  14. Estimating pixel variances in the scenes of staring sensors

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM]; Ma, Tian J [Albuquerque, NM]

    2012-01-24

    A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
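
    The following sketch is one possible software reading of the idea, under stated assumptions rather than the patented algorithm itself: the per-pixel error estimate grows with the local spatial intensity gradient (so jitter-induced differences at edges are discounted), and a pixel is flagged as changed only when its raw difference exceeds a multiple of that estimate. The parameters `jitter_px`, `k` and `noise_floor` are hypothetical.

```python
import numpy as np

def change_mask(reference, current, jitter_px=0.5, k=3.0, noise_floor=2.0):
    """Flag changed pixels while discounting camera jitter.

    The raw difference frame is compared against a per-pixel error estimate
    that grows with the local spatial intensity gradient: a small frame shift
    produces large differences wherever the scene has strong gradients.
    """
    diff = current.astype(np.float64) - reference.astype(np.float64)

    # Central-difference spatial gradients of the reference frame (rows, cols).
    gy, gx = np.gradient(reference.astype(np.float64))
    grad_mag = np.hypot(gx, gy)

    # Error estimate: expected difference caused by sub-pixel jitter plus sensor noise.
    pixel_error = jitter_px * grad_mag + noise_floor

    return np.abs(diff) > k * pixel_error
```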

  15. Log polar image sensor in CMOS technology

    NASA Astrophysics Data System (ADS)

    Scheffer, Danny; Dierickx, Bart; Pardo, Fernando; Vlummens, Jan; Meynants, Guy; Hermans, Lou

    1996-08-01

    We report on the design, design issues, fabrication and performance of a log-polar CMOS image sensor. The sensor is developed for use in a videophone system for deaf and hearing-impaired people, who cannot communicate through a 'normal' telephone. The system allows 15 detailed images per second to be transmitted over existing telephone lines. This frame rate is sufficient for conversations by means of sign language or lip reading. The pixel array of the sensor consists of 76 concentric circles with (up to) 128 pixels per circle, 8013 pixels in total. The interior pixels have a pitch of 14 micrometers, increasing up to 250 micrometers at the border. The 8013-pixel image is mapped (log-polar transformation) into an X-Y addressable 76 by 128 array.
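
    A minimal sketch of a log-polar mapping with the 76-ring by 128-sector geometry mentioned above, assuming an exponential ring-radius progression between illustrative inner and outer radii; the real sensor's exact geometry may differ.

```python
import numpy as np

def logpolar_indices(x, y, r_min=14.0, r_max=250.0, n_rings=76, n_sectors=128):
    """Map Cartesian coordinates (relative to the optical centre) to
    (ring, sector) indices of a log-polar grid.

    Ring radii grow exponentially from r_min to r_max, so equal ring
    increments correspond to equal radius ratios (the log-polar property).
    """
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)                       # -pi .. pi

    ring = n_rings * np.log(np.maximum(r, r_min) / r_min) / np.log(r_max / r_min)
    sector = (theta + np.pi) / (2 * np.pi) * n_sectors

    ring = np.clip(ring.astype(int), 0, n_rings - 1)
    sector = sector.astype(int) % n_sectors
    return ring, sector
```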

  16. The analysis and rationale behind the upgrading of existing standard definition thermal imagers to high definition

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.

    2016-05-01

    With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now, with 1280x1024 pixel format IR detector arrays becoming readily available, designers of thermal imager systems face new challenges as pixel sizes shrink and the demand for and applications of High Definition (HD) thermal imaging sensors increase. In many instances, upgrading an existing under-sampled SD thermal imaging sensor into a more optimally sampled or oversampled HD thermal imaging sensor is a more cost-effective option, with a shorter time to market, than designing and developing a completely new sensor. This paper presents the analysis and rationale behind the selection of the best-suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher-performing HD thermal imaging sensor. Several commercially available and "soon to be" commercially available HD small-pixel IR detector options are included in the analysis and considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore, with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors are operated at higher temperatures than their SD predecessors. Therefore, as an additional constraint and design goal, the feasibility of achieving upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed herein.

  17. Active-Pixel Image Sensor With Analog-To-Digital Converters

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.

    1995-01-01

    Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at a time; during time allocated to scanning row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.
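
    The column A/D scheme is described as first-order Sigma-Delta modulation; the following is a generic behavioural model of that principle (not the chip's circuit), in which the mean of the 1-bit output stream approximates the normalised input.

```python
def sigma_delta_first_order(samples, vref=1.0):
    """Behavioural model of a first-order Sigma-Delta modulator.

    samples : iterable of analog input values in the range [0, vref]
    Returns the 1-bit output stream; after decimation filtering, the mean of
    the stream approximates the input divided by vref.
    """
    integrator = 0.0
    bits = []
    for v in samples:
        # Integrate the difference between the input and the fed-back DAC level.
        feedback = vref if (bits and bits[-1]) else 0.0
        integrator += v - feedback
        bits.append(1 if integrator >= 0.0 else 0)
    return bits

# Example: a constant input of 0.3*vref yields a bitstream whose mean is ~0.3.
stream = sigma_delta_first_order([0.3] * 1000)
print(sum(stream) / len(stream))
```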

  18. CMOS Active-Pixel Image Sensor With Simple Floating Gates

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Nakamura, Junichi; Kemeny, Sabrina E.

    1996-01-01

    Experimental complementary metal-oxide/semiconductor (CMOS) active-pixel image sensor integrated circuit features simple floating-gate structure, with metal-oxide/semiconductor field-effect transistor (MOSFET) as active circuit element in each pixel. Provides flexibility of readout modes, no kTC noise, and relatively simple structure suitable for high-density arrays. Features desirable for "smart sensor" applications.

  19. Photon Counting Imaging with an Electron-Bombarded Pixel Image Sensor

    PubMed Central

    Hirvonen, Liisa M.; Suhling, Klaus

    2016-01-01

    Electron-bombarded pixel image sensors, where a single photoelectron is accelerated directly into a CCD or CMOS sensor, allow wide-field imaging at extremely low light levels as they are sensitive enough to detect single photons. This technology allows the detection of up to hundreds or thousands of photon events per frame, depending on the sensor size, and photon event centroiding can be employed to recover resolution lost in the detection process. Unlike photon events from electron-multiplying sensors, the photon events from electron-bombarded sensors have a narrow, acceleration-voltage-dependent pulse height distribution. Thus a gain voltage sweep during exposure in an electron-bombarded sensor could allow photon arrival time determination from the pulse height with sub-frame exposure time resolution. We give a brief overview of our work with electron-bombarded pixel image sensor technology and recent developments in this field for single photon counting imaging, and examples of some applications. PMID:27136556
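
    Photon event centroiding, as mentioned above, can be sketched in a few lines: threshold the frame, group connected above-threshold pixels into events, and take each event's intensity-weighted centre of mass. A minimal illustration assuming NumPy/SciPy; the threshold value and any weighting refinements used in practice are not specified here.

```python
import numpy as np
from scipy import ndimage

def photon_event_centroids(frame, threshold):
    """Locate photon events in a frame and return sub-pixel centroids.

    Pixels above 'threshold' are grouped into connected events, and each
    event's intensity-weighted centre of mass is computed, recovering
    resolution lost to the spread of the event over several pixels.
    """
    above = frame > threshold
    labels, n_events = ndimage.label(above)
    if n_events == 0:
        return []
    # center_of_mass weights pixel positions by intensity within each label.
    return ndimage.center_of_mass(frame, labels, index=list(range(1, n_events + 1)))
```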

  20. Active pixel sensors with substantially planarized color filtering elements

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor)

    1999-01-01

    A semiconductor imaging system preferably having an active pixel sensor array compatible with a CMOS fabrication process. Color-filtering elements such as polymer filters and wavelength-converting phosphors can be integrated with the image sensor.

  1. Preliminary investigations of active pixel sensors in Nuclear Medicine imaging

    NASA Astrophysics Data System (ADS)

    Ott, Robert; Evans, Noel; Evans, Phil; Osmond, J.; Clark, A.; Turchetta, R.

    2009-06-01

    Three CMOS active pixel sensors have been investigated for their application to Nuclear Medicine imaging. Startracker with 525×525 25 μm square pixels has been coupled via a fibre optic stud to a 2 mm thick segmented CsI(Tl) crystal. Imaging tests were performed using 99mTc sources, which emit 140 keV gamma rays. The system was interfaced to a PC via FPGA-based DAQ and optical link enabling imaging rates of 10 f/s. System noise was measured to be >100e and it was shown that the majority of this noise was fixed pattern in nature. The intrinsic spatial resolution was measured to be ˜80 μm and the system spatial resolution measured with a slit was ˜450 μm. The second sensor, On Pixel Intelligent CMOS (OPIC), had 64×72 40 μm pixels and was used to evaluate noise characteristics and to develop a method of differentiation between fixed pattern and statistical noise. The third sensor, Vanilla, had 520×520 25 μm pixels and a measured system noise of ˜25e. This sensor was coupled directly to the segmented phosphor. Imaging results show that even at this lower level of noise the signal from 140 keV gamma rays is small as the light from the phosphor is spread over a large number of pixels. Suggestions for the 'ideal' sensor are made.

  2. Mapping Capacitive Coupling Among Pixels in a Sensor Array

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh; Cole, David M.; Smith, Roger M.

    2010-01-01

    An improved method of mapping the capacitive contribution to cross-talk among pixels in an imaging array of sensors (typically, an imaging photodetector array) has been devised for use in calibrating and/or characterizing such an array. The method involves a sequence of resets of subarrays of pixels to specified voltages and measurement of the voltage responses of neighboring non-reset pixels.
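
    A conceptual sketch of the measurement step, under assumptions not stated in the abstract (units, calibration and the exact reset pattern are simplified): a sparse subset of pixels is reset by a known voltage step, and the response of the non-reset neighbours divided by that step gives a per-pixel coupling estimate.

```python
import numpy as np

def coupling_estimate(frame_before, frame_after, reset_mask, delta_v):
    """Estimate capacitive coupling from reset pixels to their neighbours.

    frame_before / frame_after : pixel voltages measured before and after a
    subset of pixels (reset_mask == True) is reset by delta_v volts.
    The response of each non-reset neighbour, divided by delta_v, gives the
    fractional coupling from the reset pixel to that neighbour.
    """
    response = frame_after.astype(np.float64) - frame_before.astype(np.float64)
    coupling = response / delta_v
    coupling[reset_mask] = np.nan   # reset pixels themselves carry no coupling info
    return coupling

# Averaging such maps over many shifted sparse reset patterns yields the
# full nearest-neighbour coupling kernel.
```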

  3. MTF evaluation of white pixel sensors

    NASA Astrophysics Data System (ADS)

    Lindner, Albrecht; Atanassov, Kalin; Luo, Jiafu; Goma, Sergio

    2015-01-01

    We present a methodology to compare image sensors with traditional Bayer RGB layouts to sensors with alternative layouts containing white pixels. We focused on the sensors' resolving powers, which we measured in the form of a modulation transfer function for variations in both the luma and chroma channels. We present the design of the test chart, the acquisition of images, the image analysis, and an interpretation of results. We demonstrate the approach using two sensors that differ only in their color filter arrays. We confirmed that the sensor with white pixels and the corresponding demosaicing result in a higher resolving power in the luma channel, but a lower resolving power in the chroma channels, when compared to the traditional Bayer sensor.

  4. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 mega-pixels and produced high-quality images (with significant image processing). By 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a "pixel war" in which the driving feature of the cameras was the pixel count, to the point that even moderate-cost (~$120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.

  5. The effect of split pixel HDR image sensor technology on MTF measurements

    NASA Astrophysics Data System (ADS)

    Deegan, Brian M.

    2014-03-01

    Split-pixel HDR sensor technology is particularly advantageous in automotive applications because the images are captured simultaneously rather than sequentially, thereby reducing motion blur. However, split-pixel technology introduces artifacts in MTF measurement. To achieve an HDR image, raw images are captured from both large and small sub-pixels and combined to make the HDR output. In some cases, a large sub-pixel is used for long-exposure captures and a small sub-pixel for short exposures, to extend the dynamic range. The relative size of the photosensitive area of the pixel (fill factor) plays a very significant role in the measured MTF. Given an identical scene, the MTF will be significantly different depending on whether the large or small sub-pixels are used: a smaller fill factor (e.g. in the short-exposure sub-pixel) will result in higher MTF scores, but significantly greater aliasing. Simulations of split-pixel sensors revealed that, when raw images from both sub-pixels are combined, there is a significant difference in rising-edge (i.e. black-to-white transition) and falling-edge (white-to-black) reproduction. Experimental results showed a difference of ~50% in measured MTF50 between the falling and rising edges of a slanted-edge test chart.
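
    The fill-factor effect can be illustrated with the ideal square-aperture pixel MTF, |sinc(w·f)| with w the active aperture width; this is a textbook model, not the simulation used in the paper. A smaller aperture (smaller fill factor) gives a higher MTF at a given frequency, but leaves more response above the Nyquist frequency set by the pixel pitch, hence more aliasing. The 3 μm pitch in the example is illustrative.

```python
import numpy as np

def pixel_aperture_mtf(freq_cyc_per_mm, pitch_um, fill_factor_linear):
    """Ideal MTF of a square pixel aperture.

    freq_cyc_per_mm    : spatial frequency in cycles/mm
    pitch_um           : pixel pitch in micrometres
    fill_factor_linear : active aperture width as a fraction of the pitch
    """
    width_mm = fill_factor_linear * pitch_um * 1e-3
    # np.sinc(x) = sin(pi*x)/(pi*x), so this is the aperture's |sinc(w*f)|.
    return np.abs(np.sinc(width_mm * freq_cyc_per_mm))

nyquist = 1.0 / (2 * 3.0e-3)                    # 3 um pitch -> ~167 cycles/mm
print(pixel_aperture_mtf(nyquist, 3.0, 1.0))    # full aperture: lower MTF at Nyquist
print(pixel_aperture_mtf(nyquist, 3.0, 0.3))    # small sub-pixel: higher MTF, more aliasing
```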

  6. A CMOS image sensor with programmable pixel-level analog processing.

    PubMed

    Massari, Nicola; Gottardi, Massimo; Gonzo, Lorenzo; Stoppa, David; Simoni, Andrea

    2005-11-01

    A prototype of a 34 x 34 pixel image sensor, implementing real-time analog image processing, is presented. Edge detection, motion detection, image amplification, and dynamic-range boosting are executed at pixel level by means of a highly interconnected pixel architecture based on the absolute value of the difference among neighbor pixels. The analog operations are performed over a kernel of 3 x 3 pixels. The square pixel, consisting of 30 transistors, has a pitch of 35 μm with a fill factor of 20%. The chip was fabricated in a 0.35 μm CMOS technology, and its power consumption is 6 mW with a 3.3 V power supply. The device was fully characterized and achieves a dynamic range of 50 dB with a light power density of 150 nW/mm² and a frame rate of 30 frames/s. The measured fixed pattern noise corresponds to 1.1% of the saturation level. The sensor's dynamic range can be extended up to 96 dB using the double-sampling technique.
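
    One simple software analogue of the neighbour-difference operation described above is shown below; it is only an illustration of the principle (maximum absolute difference within a 3 x 3 neighbourhood as an edge indicator), not the sensor's analog implementation.

```python
import numpy as np

def edge_map_abs_diff(img):
    """Edge indicator based on absolute differences within a 3x3 neighbourhood.

    For each pixel the maximum absolute difference to its eight neighbours is
    taken; large values mark edges.
    """
    img = img.astype(np.float64)
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:1 + dy + img.shape[0],
                             1 + dx:1 + dx + img.shape[1]]
            out = np.maximum(out, np.abs(img - shifted))
    return out
```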

  7. The Multidimensional Integrated Intelligent Imaging project (MI-3)

    NASA Astrophysics Data System (ADS)

    Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.

    2009-06-01

    MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.

  8. Commercial CMOS image sensors as X-ray imagers and particle beam monitors

    NASA Astrophysics Data System (ADS)

    Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G. V.; Carraresi, L.

    2015-01-01

    CMOS image sensors are widely used in several applications such as mobile handsets, webcams and digital cameras, among others. Furthermore, they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap systems to serve as beam monitors and high-resolution image sensors for scientific applications, we exploited the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9V034, featuring 752 × 480 pixels with 6 μm × 6 μm pixel size, has been mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1-6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread-function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, an MT9M032, featuring 1472 × 1096 pixels with 2.2 μm × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify the data transfer and the image acquisition, the system is controlled by a dedicated micro-processor board (DM3730 1 GHz SoC ARM Cortex-A8) on which a modified LINUX kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.

  9. CMOS Active Pixel Sensor Technology and Reliability Characterization Methodology

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Pain, Bedabrata; Kayaii, Sammy

    2006-01-01

    This paper describes the technology, design features and reliability characterization methodology of a CMOS Active Pixel Sensor. Both overall chip reliability and pixel reliability are projected for the imagers.

  10. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    PubMed

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. By introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low-noise readout circuit. By merging two signals, one with high pixel gain and high analog gain and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.

  11. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process †

    PubMed Central

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-01

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. By introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low-noise readout circuit. By merging two signals, one with high pixel gain and high analog gain and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach. PMID:29329210
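
    The dual-gain merge described in the two records above can be sketched generically: where the high-gain path is unsaturated its lower-noise value is kept, elsewhere the low-gain value is rescaled onto the same linear axis. This is a minimal sketch of a single-exposure dual-gain merge, not the chip's on-chip linearization; the gain ratio and saturation level are assumed calibration inputs.

```python
import numpy as np

def merge_dual_gain(high_gain_dn, low_gain_dn, gain_ratio, high_saturation_dn):
    """Merge high-gain and low-gain readings of one exposure into a linear HDR signal.

    high_gain_dn       : digital output of the high-gain path
    low_gain_dn        : digital output of the low-gain path
    gain_ratio         : ratio of high-gain to low-gain conversion gain
    high_saturation_dn : level above which the high-gain path is clipped
    """
    high = high_gain_dn.astype(np.float64)
    low = low_gain_dn.astype(np.float64) * gain_ratio   # rescale onto the high-gain axis
    return np.where(high_gain_dn < high_saturation_dn, high, low)
```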

  12. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    NASA Astrophysics Data System (ADS)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off between spatial resolution and temporal resolution by using random space-time sampling. However, most of these studies showed results for higher-frame-rate video that were produced by simulation experiments or by an optically simulated random sampling camera, because no commercially available image sensors with random exposure or sampling capabilities currently exist. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the exposure duration by row for each 8x8 pixel block. This CMOS sensor is not fully controllable at the pixel level and has line-dependent controls, but it offers flexibility when compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method to realize pseudo-random sampling for high-speed video acquisition that uses the flexibility of the CMOS sensor. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.
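
    The exposure-control constraint described above (exposure start set per column, exposure length set per row, within each 8x8 block) can be emulated when generating sampling masks for simulation. The sketch below is one interpretation of that constraint, with illustrative block size, sub-frame count and random draws; it is not the reconstruction algorithm, which additionally requires an over-complete dictionary and a sparse solver.

```python
import numpy as np

rng = np.random.default_rng(1)

def quasi_pixelwise_exposure_mask(height, width, n_subframes, block=8):
    """Build a quasi pixel-wise space-time sampling mask.

    Within each block x block tile, the exposure start is drawn per column and
    the exposure length per row, mimicking a sensor whose exposures are reset
    column-wise and terminated row-wise rather than fully pixel-wise.
    Returns a boolean array of shape (n_subframes, height, width) marking when
    each pixel integrates during the high-speed frame.
    """
    assert height % block == 0 and width % block == 0, "sketch assumes whole blocks"
    mask = np.zeros((n_subframes, height, width), dtype=bool)
    for by in range(0, height, block):
        for bx in range(0, width, block):
            start = rng.integers(0, n_subframes, size=block)        # per column
            length = rng.integers(1, n_subframes + 1, size=block)   # per row
            for r in range(block):
                for c in range(block):
                    s = start[c]
                    e = min(n_subframes, s + length[r])
                    mask[s:e, by + r, bx + c] = True
    return mask
```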

  13. A 128×96 Pixel Stack-Type Color Image Sensor: Stack of Individual Blue-, Green-, and Red-Sensitive Organic Photoconductive Films Integrated with a ZnO Thin Film Transistor Readout Circuit

    NASA Astrophysics Data System (ADS)

    Seo, Hokuto; Aihara, Satoshi; Watabe, Toshihisa; Ohtake, Hiroshi; Sakai, Toshikatsu; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Hirao, Takashi

    2011-02-01

    A color image was produced by a vertically stacked image sensor with blue (B)-, green (G)-, and red (R)-sensitive organic photoconductive films, each having a thin-film transistor (TFT) array that uses a zinc oxide (ZnO) channel to read out the signal generated in each organic film. The fabricated image sensor has 128×96 pixels for each color, and the pixel size is 100×100 µm². The current on/off ratio of the ZnO TFT is over 10⁶, and the B-, G-, and R-sensitive organic photoconductive films show excellent wavelength selectivity. The stacked image sensor can produce a color image at 10 frames per second with a resolution corresponding to the pixel count. This result clearly shows that color separation is achieved without using any conventional color separation optical system such as a color filter array or a prism.

  14. Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications.

    PubMed

    Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun

    2010-12-29

    In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors' architecture on the basis of the type of electric measurement or imaging functionalities.

  15. An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability.

    PubMed

    Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U

    2015-03-06

    An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle.

  16. An Ultra-Low Power CMOS Image Sensor with On-Chip Energy Harvesting and Power Management Capability

    PubMed Central

    Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U.

    2015-01-01

    An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle. PMID:25756863

  17. CMOS image sensors: State-of-the-art

    NASA Astrophysics Data System (ADS)

    Theuwissen, Albert J. P.

    2008-09-01

    This paper gives an overview of the state of the art of CMOS image sensors. The main focus is on the shrinkage of the pixels: what is the effect on the performance characteristics of the imagers and on the various physical parameters of the camera? How is the CMOS pixel architecture optimized to cope with the negative performance effects of the ever-shrinking pixel size? On the other hand, the smaller dimensions in CMOS technology allow further integration at the column level and even at the pixel level. This will make CMOS imagers even smarter than they already are.

  18. Pixel super resolution using wavelength scanning

    DTIC Science & Technology

    2016-04-08

    Fragmentary record excerpt: the light source power is adjusted to ~20 μW; the image sensor chip is a color CMOS sensor with a 1.12 μm pixel pitch manufactured for cellphone cameras; lens-free raw holograms are captured over a field of view of ≈20.5 mm²; and the illumination angle is changed along two directions for synthetic aperture imaging.

  19. Image acquisition system using on sensor compressed sampling technique

    NASA Astrophysics Data System (ADS)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
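
    At its core, sensor-level compressed sensing replaces per-pixel readout of a block with a smaller set of coded measurements y = Φx. The sketch below only emulates that measurement step in software with a random ±1 matrix; the on-sensor implementation, the ADC/JPEG simplifications and the reconstruction (which needs a sparse solver) described in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def cs_measure_block(block, n_measurements):
    """Compressed-sensing measurement of one image block, y = Phi @ x.

    block          : 2-D pixel block (e.g. 8 x 8), flattened to a vector x
    n_measurements : number of rows of the random +/-1 measurement matrix Phi,
                     chosen smaller than the block size to compress the data.
    Recovering x from y requires a sparse solver (e.g. orthogonal matching
    pursuit) on the decoder side, not shown here.
    """
    x = block.astype(np.float64).ravel()
    phi = rng.choice([-1.0, 1.0], size=(n_measurements, x.size))
    return phi @ x, phi

# Example: compress an 8x8 block (64 values) into 16 measurements (4:1).
block = rng.integers(0, 256, size=(8, 8))
y, phi = cs_measure_block(block, 16)
print(y.shape)   # (16,)
```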

  20. Compact SPAD-Based Pixel Architectures for Time-Resolved Image Sensors

    PubMed Central

    Perenzoni, Matteo; Pancheri, Lucio; Stoppa, David

    2016-01-01

    This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging. The focus of the paper is on pixel architectures featuring small pixel size (<25 μm) and high fill factor (>20%) as a key enabling technology for the successful implementation of high spatial resolution SPAD-based image sensors. A summary of the main CMOS SPAD implementations, their characteristics and integration challenges, is provided from the perspective of targeting large pixel arrays, where one of the key drivers is the spatial uniformity. The main analog techniques aimed at time-gated photon counting and photon timestamping suitable for compact and low-power pixels are critically discussed. The main features of these solutions are the adoption of analog counting techniques and time-to-analog conversion, in NMOS-only pixels. Reliable quantum-limited single-photon counting, self-referenced analog-to-digital conversion, time gating down to 0.75 ns and timestamping with 368 ps jitter are achieved. PMID:27223284

  1. Organic-on-silicon complementary metal-oxide-semiconductor colour image sensors.

    PubMed

    Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon

    2015-01-12

    Complementary metal-oxide-semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor.

  2. Organic-on-silicon complementary metal–oxide–semiconductor colour image sensors

    PubMed Central

    Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon

    2015-01-01

    Complementary metal–oxide–semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor. PMID:25578322

  3. Giga-pixel lensfree holographic microscopy and tomography using color image sensors.

    PubMed

    Isikman, Serhan O; Greenbaum, Alon; Luo, Wei; Coskun, Ahmet F; Ozcan, Aydogan

    2012-01-01

    We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm². This constitutes a digital image with ~0.7 billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ±50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm in x, y and z, respectively, creating an effective voxel size of ~0.03 µm³ across a sample volume of ~5 mm³, which is equivalent to >150 billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.

  4. The progress of sub-pixel imaging methods

    NASA Astrophysics Data System (ADS)

    Wang, Hu; Wen, Desheng

    2014-02-01

    This paper reviews the principles and characteristics of sub-pixel imaging technology, its current development status in China and abroad, and the latest research progress. Because sub-pixel imaging offers the high resolution of an optical remote sensor, flexible operating modes, and a compact design with no moving parts, the imaging system is well suited to spaceborne remote sensing. Its application prospects are broad, and it is a likely development direction for future space optical remote sensing technology.

  5. Smart image sensors: an emerging key technology for advanced optical measurement and microsystems

    NASA Astrophysics Data System (ADS)

    Seitz, Peter

    1996-08-01

    Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, ranging from optical metrology to machine vision on the factory floor and in robotics applications.

  6. Image sensor pixel with on-chip high extinction ratio polarizer based on 65-nm standard CMOS technology.

    PubMed

    Sasagawa, Kiyotaka; Shishido, Sanshiro; Ando, Keisuke; Matsuoka, Hitoshi; Noda, Toshihiko; Tokuda, Takashi; Kakiuchi, Kiyomi; Ohta, Jun

    2013-05-06

    In this study, we demonstrate a polarization-sensitive pixel for a complementary metal-oxide-semiconductor (CMOS) image sensor based on 65-nm standard CMOS technology. Using such a deep-submicron CMOS technology, it is possible to design fine metal patterns smaller than the wavelengths of visible light using a metal wire layer. We designed and fabricated a metal wire grid polarizer on a 20 × 20 μm² pixel for an image sensor. An extinction ratio of 19.7 dB was observed at a wavelength of 750 nm.

  7. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid-state image sensors are used in many applications such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration, in the focal plane (or near the focal plane), of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology and including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors, and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixel sensor is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculations in less than one ms.
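
    The MMU's function, computing minima and maxima over 2×2 pixel neighbourhoods, has a direct software analogue shown below; this is only a functional sketch of the operation, not the 52-transistor analog circuit.

```python
import numpy as np

def minmax_2x2(img):
    """Minimum and maximum over each 2x2 pixel neighbourhood.

    The image is tiled into non-overlapping 2x2 blocks and the min and max of
    each block are returned as half-resolution maps.
    """
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # crop to even size
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.min(axis=(1, 3)), blocks.max(axis=(1, 3))
```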

  8. Development of a 750x750 pixels CMOS imager sensor for tracking applications

    NASA Astrophysics Data System (ADS)

    Larnaudie, Franck; Guardiola, Nicolas; Saint-Pé, Olivier; Vignon, Bruno; Tulet, Michel; Davancens, Robert; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Estribeau, Magali

    2017-11-01

    Solid-state optical sensors are now commonly used in space applications (navigation cameras, astronomy imagers, tracking sensors...). Although charge-coupled devices are still widely used, the CMOS image sensor (CIS), whose performance is continuously improving, is a strong challenger for Guidance, Navigation and Control (GNC) systems. This paper describes a 750x750 pixel CMOS image sensor that has been specially designed and developed for star tracker and tracking sensor applications. The detector, which features a smart architecture enabling very simple and powerful operation, is built using the AMIS 0.5 μm CMOS technology. It contains 750x750 rectangular pixels with a 20 μm pitch. The geometry of the pixel sensitive zone is optimized for applications based on centroiding measurements. The main feature of this device is the on-chip control and timing function, which makes device operation easier by drastically reducing the number of clocks to be applied. This powerful function allows the user to operate the sensor with high flexibility: measurement of the dark level from masked lines, direct access to the windows of interest… A temperature probe is also integrated within the CMOS chip, allowing a very precise measurement through the video stream. A complete electro-optical characterization of the sensor has been performed. The major parameters have been evaluated: dark current and its uniformity, read-out noise, conversion gain, Fixed Pattern Noise, Photo Response Non-Uniformity, quantum efficiency, Modulation Transfer Function, and intra-pixel scanning. The characterization tests are detailed in the paper. Co-60 and proton irradiation tests have also been carried out on the image sensor and the results are presented. The specific features of the 750x750 image sensor, such as its low-power CMOS design (3.3 V, power consumption < 100 mW), natural windowing (which allows efficient and robust tracking algorithms) and simple proximity electronics (thanks to the on-chip control and timing function) enabling a highly flexible architecture, make this imager a good candidate for high-performance tracking applications.

  9. Low noise WDR ROIC for InGaAs SWIR image sensor

    NASA Astrophysics Data System (ADS)

    Ni, Yang

    2017-11-01

    Hybridized image sensors are currently the only solution for image sensing beyond the spectral response of silicon devices. By hybridization, we can combine the best sensing material and photo-detector design with high-performance CMOS readout circuitry. In the infrared band, we typically face two configurations: high-background and low-background situations. The performance of high-background sensors is determined mainly by the integration capacity of each pixel, which is the case for mid-wave and long-wave infrared detectors. In low-background situations, the detector's performance is limited mainly by the pixel's noise performance, which is set by the dark signal and the readout noise. Under reflection-based imaging conditions, the pixel's dynamic range is also an important parameter; this is the case for SWIR-band imaging. We are particularly interested in InGaAs-based SWIR image sensors.

  10. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2017-02-01

    Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are also hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility and, in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while still providing adequate imaging spatial resolution and pixel counts. By engineering the CAOS camera platform with, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD), a paradigm change in advanced imager design is ushered in, particularly for extreme dynamic range applications.

  11. Low Power Camera-on-a-Chip Using CMOS Active Pixel Sensor Technology

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    A second generation image sensor technology has been developed at the NASA Jet Propulsion Laboratory as a result of the continuing need to miniaturize space science imaging instruments. Implemented using standard CMOS, the active pixel sensor (APS) technology permits the integration of the detector array with on-chip timing, control and signal chain electronics, including analog-to-digital conversion.

  12. A 7 ke-SD-FWC 1.2 e-RMS Temporal Random Noise 128×256 Time-Resolved CMOS Image Sensor With Two In-Pixel SDs for Biomedical Applications.

    PubMed

    Seo, Min-Woong; Kawahito, Shoji

    2017-12-01

    A lock-in pixel CMOS image sensor (CIS) with two in-pixel storage diodes (SDs), offering a large full well capacity (FWC) for a wide signal detection range and low temporal random noise for high sensitivity, has been developed and is presented in this paper. For fast charge transfer from the photodiode to the SDs, a lateral electric field charge modulator (LEFM) is used in the developed lock-in pixel. As a result, the time-resolved CIS achieves a very large SD-FWC of approximately 7 ke-, low temporal random noise of 1.2 e- rms at 20 fps with true correlated double sampling operation, and a fast intrinsic response of less than 500 ps at 635 nm. The proposed imager has an effective pixel array of 128×256. The sensor chip is fabricated in a Dongbu HiTek 1P4M 0.11 μm CIS process.

  13. Smart CMOS image sensor for lightning detection and imaging.

    PubMed

    Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor

    2013-03-01

    We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potentiality of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed in the frame of the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing allowing an efficient localization of a faint lightning pulse on the entire large format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.
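
    The detection principle described — an in-pixel frame-to-frame difference compared against an adjustable threshold — has a direct software analogue, sketched below. Array sizes, the Poisson background and the threshold value are illustrative assumptions; the on-chip implementation is analog and runs at 1 kHz.

        import numpy as np

        def detect_pulses(prev_frame, curr_frame, threshold):
            """Boolean map of pixels whose frame-to-frame increase exceeds the
            adjustable threshold (software analogue of the in-pixel comparison)."""
            diff = curr_frame.astype(np.int32) - prev_frame.astype(np.int32)
            return diff > threshold

        prev = np.random.poisson(50, (256, 256))
        curr = prev.copy()
        curr[100, 120] += 400             # synthetic faint lightning pulse
        hits = detect_pulses(prev, curr, threshold=200)
        print(np.argwhere(hits))          # -> [[100 120]]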

  14. Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications

    PubMed Central

    Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun

    2010-01-01

    In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors’ architecture on the basis of the type of electric measurement or imaging functionalities. PMID:28879978

  15. Contact CMOS imaging of gaseous oxygen sensor array

    PubMed Central

    Daivasagaya, Daisy S.; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C.; Chodavarapu, Vamsy P.; Bright, Frank V.

    2014-01-01

    We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol–gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors. PMID:24493909

  16. Contact CMOS imaging of gaseous oxygen sensor array.

    PubMed

    Daivasagaya, Daisy S; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C; Chodavarapu, Vamsy P; Bright, Frank V

    2011-10-01

    We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol-gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors.

  17. Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.

    2017-05-01

    Filtered multispectral imaging might be a useful technique for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially and spectrally correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is further divided into four components: an FPN row component, an FPN column component, a defects component and an effective photo response component. The conversion gain is then corrected for the average FPN column and row components and the defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated from the inverse of the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be improved by a factor of seven compared with the raw image, and the larger the image DC value within its dynamic range, the greater the enhancement.
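
    The pixel-wise linear model described above amounts to DC = G·L + Z per pixel, so correction is an inversion followed by re-projection through a uniform gain. The sketch below assumes the per-pixel gain and offset maps have already been calibrated (for example from flat-field and dark frames) and uses a simple uniform-gain target; names and the calibration route are assumptions, not the authors' exact procedure.

        import numpy as np

        def correct_fpn(dc, gain_map, dsnu_map):
            """Pixel-wise linear FPN correction.

            dc       : raw digital counts, shape (H, W)
            gain_map : calibrated per-pixel conversion gain G_ij (DC per unit radiance)
            dsnu_map : calibrated per-pixel dark offset Z_ij
            Returns digital counts re-rendered with a spatially uniform gain.
            """
            radiance = (dc - dsnu_map) / gain_map     # invert the per-pixel model
            uniform_gain = gain_map.mean()            # illustrative target gain
            return radiance * uniform_gain + dsnu_map.mean()

        # Synthetic demonstration: non-uniform gain/offset viewing a flat scene
        rng = np.random.default_rng(0)
        gain = 1.0 + 0.05 * rng.standard_normal((64, 64))
        dsnu = 10 + 2 * rng.standard_normal((64, 64))
        raw = gain * 500 + dsnu                        # flat radiance of 500
        print(raw.std(), correct_fpn(raw, gain, dsnu).std())   # spread shrinks sharply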

  18. A real-time photogrammetric algorithm for sensor and synthetic image fusion with application to aviation combined vision

    NASA Astrophysics Data System (ADS)

    Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.

    2014-08-01

    The paper addresses a promising visualization concept: the combination of sensor and synthetic images to enhance the situation awareness of a pilot during an aircraft landing. A real-time algorithm is proposed for the fusion of a sensor image, acquired by an onboard camera, with a synthetic 3D image of the external view generated in an onboard computer. The pixel correspondence between the sensor and synthetic images is obtained by exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in object space (the runway plane) and then to calculate intensity projections of edge pixels along different directions of the intensity gradient. Experiments performed on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.

  19. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    PubMed

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
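
    Per pixel, temporally compressive capture of this kind can be modelled as a linear measurement y = S·x, with the rows of S holding the random-coded shutter patterns of the different apertures and x the time-resolved signal to recover. The toy sketch below sets up that linear model and solves it with plain least squares; a real reconstruction pipeline would add regularization and exploit the multi-aperture disparities, so this is only the skeleton of the idea under stated assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        T, M = 32, 15                     # time samples to recover, coded measurements
        S = rng.integers(0, 2, (M, T)).astype(float)   # random binary shutter codes
        x_true = np.exp(-0.5 * ((np.arange(T) - 12) / 3.0) ** 2)   # transient event

        y = S @ x_true                    # compressed measurements for one pixel
        # Least-squares reconstruction; regularized solvers / sparsity priors are
        # what a real pipeline would use, this is only the linear-model skeleton.
        x_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
        print(np.round(np.corrcoef(x_true, x_hat)[0, 1], 3))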

  20. Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering

    PubMed Central

    Mars, Kamel; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro

    2017-01-01

    Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a problem of lack of detectors suitable for MHz modulation rate parallel detection, detecting multiple small SRS signals while eliminating extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of Stokes-on and Stokes-off signal at modulation frequency of 20 MHz in the pixel before reading out. The generated small SRS signal is extracted and amplified in a pixel using a high-speed and large area lateral electric field charge modulator (LEFM) employing two-step ion implantation and an in-pixel pair of low-pass filter, a sample and hold circuit and a switched capacitor integrator using a fully differential amplifier. A prototype chip is fabricated using 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system. PMID:29120358

  1. Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering.

    PubMed

    Mars, Kamel; Lioe, De Xing; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro; Hashimoto, Mamoru

    2017-11-09

    Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a problem of lack of detectors suitable for MHz modulation rate parallel detection, detecting multiple small SRS signals while eliminating extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of Stokes-on and Stokes-off signal at modulation frequency of 20 MHz in the pixel before reading out. The generated small SRS signal is extracted and amplified in a pixel using a high-speed and large area lateral electric field charge modulator (LEFM) employing two-step ion implantation and an in-pixel pair of low-pass filter, a sample and hold circuit and a switched capacitor integrator using a fully differential amplifier. A prototype chip is fabricated using 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system.
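
    The core lock-in operation — accumulating the difference between Stokes-on and Stokes-off half-cycles so that the large unmodulated offset cancels while the small SRS component survives — can be mimicked numerically. The sketch below is an idealized software illustration under assumed signal levels; it does not model the LEFM charge-domain circuit.

        import numpy as np

        fs = 200e6                 # sample rate of the toy simulation (Hz)
        f_mod = 20e6               # Stokes on/off modulation frequency (Hz)
        n = 20000
        t = np.arange(n) / fs

        stokes_on = (np.floor(t * f_mod * 2) % 2 == 0)    # square-wave modulation
        offset = 1.0e6                                     # direct (unmodulated) laser level
        srs = 5.0                                          # tiny stimulated-Raman component
        signal = offset + srs * stokes_on + np.random.normal(0, 50, n)

        # Lock-in style accumulation: mean(on half-cycles) - mean(off half-cycles)
        demod = signal[stokes_on].mean() - signal[~stokes_on].mean()
        print(round(demod, 2))     # recovers roughly the 5.0 SRS component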

  2. Luminance compensation for AMOLED displays using integrated MIS sensors

    NASA Astrophysics Data System (ADS)

    Vygranenko, Yuri; Fernandes, Miguel; Louro, Paula; Vieira, Manuela

    2017-05-01

    Active-matrix organic light-emitting diode (AMOLED) displays are ideal for future TV applications due to their ability to faithfully reproduce real images. However, pixel luminance can be affected by the instability of the driver TFTs and by aging effects in the OLEDs. This paper reports on a pixel driver utilizing a metal-insulator-semiconductor (MIS) sensor for luminance control of the OLED element. In the proposed pixel architecture for bottom-emission AMOLEDs, the embedded MIS sensor shares the same layer stack with back-channel-etched a-Si:H TFTs to maintain fabrication simplicity. A pixel design for a large-area HD display is presented. The external electronics performs image processing to modify the incoming video using correction parameters for each pixel in the backplane, as well as sensor data processing to update the correction parameters. The luminance adjustment algorithm is based on realistic models of the pixel circuit elements to predict the relation between the programming voltage and the OLED luminance. SPICE modeling of the sensing part of the backplane is performed to demonstrate its feasibility. Details of the pixel circuit functionality, including the sensing and programming operations, are also discussed.

  3. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    PubMed

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter horizontally divides rays according to incident angle, and the inner meta-micro-lens collects the divided rays into the pixels with small optical loss. The crosstalk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.

  4. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an airbag during a car accident and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and the heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. The image signals stored in each pixel are read out after the image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater freedom for wiring on the front side 3). The BSI structure has the additional advantage that it presents fewer difficulties in attaching an additional layer, such as scintillators, on the backside. This paper proposes the development of an ultra-high-speed IR image sensor combining advanced nano-technologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, with a discussion of the issues in the integration.

  5. Chip-scale fluorescence microscope based on a silo-filter complementary metal-oxide semiconductor image sensor.

    PubMed

    Ah Lee, Seung; Ou, Xiaoze; Lee, J Eugene; Yang, Changhuei

    2013-06-01

    We demonstrate a silo-filter (SF) complementary metal-oxide semiconductor (CMOS) image sensor for a chip-scale fluorescence microscope. The extruded pixel design with metal walls between neighboring pixels guides fluorescence emission through the thick absorptive filter to the photodiode of a pixel. Our prototype device achieves 13 μm resolution over a wide field of view (4.8 mm × 4.4 mm). We demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.

  6. Color filter array pattern identification using variance of color difference image

    NASA Astrophysics Data System (ADS)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel carries only one color, since the image sensor can measure only one color per pixel. Therefore, the empty color samples are filled in using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming that the color filter array pattern is known. We present a method for identifying the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, a color difference block is constructed to emphasize the difference between original and interpolated pixels. The variance of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
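
    The underlying reasoning — interpolated samples are locally smoother than genuinely sampled ones, so a variance measure computed under each candidate sampling lattice exposes the true pattern — can be shown in a stripped-down form. The sketch below identifies only the green sampling lattice (two candidates rather than the four full Bayer patterns) from a bilinearly demosaiced green channel; it is a loose illustration of the principle, not the paper's estimator.

        import numpy as np

        def bilinear_green(channel, green_parity=0):
            """Simulate demosaicing: keep samples where (row+col) parity matches
            green_parity, replace the rest with their 4-neighbour average."""
            g = channel.astype(float).copy()
            ii, jj = np.indices(g.shape)
            missing = ((ii + jj) % 2) != green_parity
            p = np.pad(g, 1, mode='edge')
            neigh = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
            g[missing] = neigh[missing]
            return g

        def estimate_green_parity(image):
            """Pick the lattice whose 'interpolated' positions show the smallest
            high-pass variance: interpolation smooths them, sampling does not."""
            p = np.pad(image.astype(float), 1, mode='edge')
            residual = image - (p[:-2, 1:-1] + p[2:, 1:-1] +
                                p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
            ii, jj = np.indices(image.shape)
            scores = [residual[((ii + jj) % 2) != parity].var() for parity in (0, 1)]
            return int(np.argmin(scores))

        rng = np.random.default_rng(2)
        true_green = rng.normal(128, 20, (64, 64))           # synthetic green channel
        demosaiced = bilinear_green(true_green, green_parity=0)
        print(estimate_green_parity(demosaiced))             # -> 0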

  7. Active pixel image sensor with a winner-take-all mode of operation

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Mead, Carver (Inventor); Fossum, Eric R. (Inventor)

    2003-01-01

    An integrated CMOS semiconductor imaging device having two modes of operation that can be performed simultaneously to produce an output image and provide information of a brightest or darkest pixel in the image.

  8. Very-large-area CCD image sensors: concept and cost-effective research

    NASA Astrophysics Data System (ADS)

    Bogaart, E. W.; Peters, I. M.; Kleimann, A. C.; Manoury, E. J. P.; Klaassens, W.; de Laat, W. T. F. M.; Draijer, C.; Frost, R.; Bosiers, J. T.

    2009-01-01

    A new-generation full-frame 36x48 mm2 48Mp CCD image sensor with vertical anti-blooming for professional digital still camera applications has been developed by means of the so-called building block concept. The 48Mp devices are formed by stitching 1kx1k building blocks with a 6.0 µm pixel pitch in a 6x8 (hxv) format. This concept allows us to design four large-area (48Mp) and sixty-two basic (1Mp) devices per 6" wafer. The basic image sensor is relatively small in order to obtain data from many devices: evaluation of basic parameters such as the image pixel and the on-chip amplifier provides statistical data from a limited number of wafers, whereas the large-area devices are evaluated for aspects typical of large-sensor operation and performance, such as the charge transport efficiency. Combined with the use of multi-layer reticles, this makes the sensor development cost effective for prototyping. Optimisation of the sensor design and technology has resulted in a pixel charge capacity of 58 ke- and significantly reduced readout noise (12 electrons at 25 MHz pixel rate, after CDS); hence, a dynamic range of 73 dB is obtained. Microlens and stack optimisation resulted in an excellent angular response that meets the demands of wide-angle photography.
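
    The quoted dynamic range follows directly from the charge capacity and the readout noise, DR = 20·log10(FWC/noise); a quick check with the numbers from the abstract:

        import math

        full_well_e, read_noise_e = 58_000, 12          # values from the abstract
        print(round(20 * math.log10(full_well_e / read_noise_e), 1))   # ~73.7 dB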

  9. 1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors.

    PubMed

    Lu, Guo-Neng; Tournier, Arnaud; Roy, François; Deschamps, Benoît

    2009-01-01

    We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as photo-sensing device and source-follower transistor, and can be controlled to store and evacuate charges. Our investigation into this 1T pixel structure includes modeling to obtain analytical description of conversion gain. Model validation has been done by comparing theoretical predictions and experimental results. On the other hand, the 1T pixel structure has been implemented in different configurations, including rectangular-gate and ring-gate designs, and variations of oxidation parameters for the fabrication process. The pixel characteristics are presented and discussed.

  10. A 45 nm Stacked CMOS Image Sensor Process Technology for Submicron Pixel.

    PubMed

    Takahashi, Seiji; Huang, Yi-Min; Sze, Jhy-Jyi; Wu, Tung-Ting; Guo, Fu-Sheng; Hsu, Wei-Cheng; Tseng, Tung-Hsiung; Liao, King; Kuo, Chin-Chia; Chen, Tzu-Hsiang; Chiang, Wei-Chieh; Chuang, Chun-Hao; Chou, Keng-Yu; Chung, Chi-Hsien; Chou, Kuo-Yu; Tseng, Chien-Hsien; Wang, Chuan-Joung; Yaung, Dun-Nien

    2017-12-05

    A submicron pixel's light and dark performance were studied by experiment and simulation. An advanced node technology incorporated with a stacked CMOS image sensor (CIS) is promising in that it may enhance performance. In this work, we demonstrated a low dark current of 3.2 e-/s at 60 °C, an ultra-low read noise of 0.90 e- rms, a high full well capacity (FWC) of 4100 e-, and blooming of 0.5% in 0.9 μm pixels with a pixel supply voltage of 2.8 V. In addition, the simulation study result of 0.8 μm pixels is discussed.

  11. 4K x 2K pixel color video pickup system

    NASA Astrophysics Data System (ADS)

    Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou

    1998-12-01

    This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a high enough output data rate for super-high-definition images. The present study is an attempt to fill this gap. The authors solve the problem with a new imaging method in which four HDTV sensors are attached to new color-separation optics so that their pixel sampling patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no image sensors exist for such images.

  12. A 256×256 low-light-level CMOS imaging sensor with digital CDS

    NASA Astrophysics Data System (ADS)

    Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin

    2016-10-01

    In order to achieve high sensitivity in low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. As the pixel and column areas are highly constrained, it is difficult to implement analog correlated double sampling (CDS) to remove the noise in a low-light-level CIS. Therefore, digital CDS is adopted, which performs the subtraction between the reset signal and the pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can thus be greatly reduced. A 256×256 CIS with a CTIA array and digital CDS is implemented in a 0.35μm CMOS technology. The chip size is 7.7mm×6.75mm, and the pixel size is 15μm×15μm with a fill factor of 20.6%. The measured pixel noise in the dark is 24 LSB RMS with digital CDS, a 7.8× reduction compared to the same image sensor without digital CDS. Running at 7fps, this low-light-level CIS can capture recognizable images at illumination levels down to 0.1lux.
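
    The digital CDS step itself — subtracting the sampled reset level from the sampled signal level for every pixel, off-chip — is simple frame arithmetic. The sketch below illustrates only that subtraction on synthetic frames; offset and noise levels are invented for the demonstration.

        import numpy as np

        def digital_cds(reset_frame, signal_frame):
            """Off-chip correlated double sampling: per-pixel difference between
            the sampled reset level and the sampled signal level, removing the
            pixel reset offset common to both samples."""
            return signal_frame - reset_frame

        rng = np.random.default_rng(3)
        offsets = rng.normal(512, 40, (256, 256))            # per-pixel reset offsets
        scene = rng.integers(0, 200, (256, 256))              # photo-generated signal
        reset = offsets + rng.normal(0, 2, offsets.shape)     # read noise on reset sample
        signal = offsets + scene + rng.normal(0, 2, offsets.shape)
        cds = digital_cds(reset, signal)
        print(np.abs(cds - scene).mean())   # offsets removed; only read noise remains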

  13. Highly sensitive and area-efficient CMOS image sensor using a PMOSFET-type photodetector with a built-in transfer gate

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Ho; Kim, Kyoung-Do; Kong, Jae-Sung; Shin, Jang-Kyoo; Choi, Pyung

    2007-02-01

    In this paper, a new CMOS image sensor is presented that uses a PMOSFET-type photodetector with a transfer gate and has a high, variable sensitivity. The proposed CMOS image sensor has been fabricated using a 0.35 μm 2-poly 4-metal standard CMOS technology and is composed of a 256 × 256 array of 7.05 × 7.10 μm pixels. The unit pixel has the configuration of a pseudo 3-transistor active pixel sensor (APS) built around the PMOSFET-type photodetector with a transfer gate, which provides the function of a conventional 4-transistor APS. The generated photocurrent is controlled by the transfer gate of the PMOSFET-type photodetector. The maximum responsivity of the photodetector is larger than 1.0 × 10³ A/W without any optical lens. The fabricated 256 × 256 CMOS image sensor exhibits a good response to low-level illumination, as low as 5 lux.

  14. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh (Inventor); Cole, David (Inventor); Smith, Roger M. (Inventor); Hancock, Bruce R. (Inventor)

    2017-01-01

    The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage and reading out a first image, then resetting only a subset of pixels in the array to a second voltage and reading out a second image; the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.

  15. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Smith, Roger M (Inventor); Hancock, Bruce R. (Inventor); Cole, David (Inventor); Seshadri, Suresh (Inventor)

    2013-01-01

    The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage and reading out a first image, then resetting only a subset of pixels in the array to a second voltage and reading out a second image; the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.
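
    The measurement described in these two records reduces to frame arithmetic: reset every pixel to a first voltage and read a reference image, reset only a sparse subset to a second voltage and read again, then examine how the difference image spreads into the neighbours of the perturbed pixels. The sketch below is a toy model of that difference step with an assumed 2% nearest-neighbour coupling; names and the coupling model are illustrative.

        import numpy as np

        def ipc_map(image_all_v1, image_subset_v2, perturbed_mask):
            """Difference image normalized by the applied step; values at the
            neighbours of perturbed pixels reveal inter-pixel coupling."""
            diff = image_subset_v2 - image_all_v1
            step = diff[perturbed_mask].mean()      # response at the perturbed pixels
            return diff / step                      # ~1 at perturbed, ~alpha at neighbours

        # Toy model: 2% of the reset step couples into each 4-neighbour
        n, alpha = 64, 0.02
        mask = np.zeros((n, n), dtype=bool)
        mask[8::16, 8::16] = True                   # sparse perturbed subset
        diff = mask.astype(float).copy()
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            diff += alpha * np.roll(mask, shift, axis=(0, 1))
        frame_v1 = np.zeros((n, n))
        print(ipc_map(frame_v1, frame_v1 + diff, mask)[8, 9])   # approx 0.02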

  16. Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications

    NASA Technical Reports Server (NTRS)

    Fossum, E.; Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Zhou, Z.; hide

    1994-01-01

    This paper describes ongoing research and development of CMOS active pixel image sensors for low cost commercial applications. A number of sensor designs have been fabricated and tested in both p-well and n-well technologies. Major elements in the development of the sensor include on-chip analog signal processing circuits for the reduction of fixed pattern noise, on-chip timing and control circuits and on-chip analog-to-digital conversion (ADC). Recent results and continuing efforts in these areas will be presented.

  17. Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications

    NASA Technical Reports Server (NTRS)

    Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Staller, C.; Zhou, Z; hide

    1994-01-01

    JPL, under sponsorship from the NASA Office of Advanced Concepts and Technology, has been developing a second-generation solid-state image sensor technology. Charge-coupled devices (CCD) are a well-established first generation image sensor technology. For both commercial and NASA applications, CCDs have numerous shortcomings. In response, the active pixel sensor (APS) technology has been under research. The major advantages of APS technology are the ability to integrate on-chip timing, control, signal-processing and analog-to-digital converter functions, reduced sensitivity to radiation effects, low power operation, and random access readout.

  18. Can direct electron detectors outperform phosphor-CCD systems for TEM?

    NASA Astrophysics Data System (ADS)

    Moldovan, G.; Li, X.; Kirkland, A.

    2008-08-01

    A new generation of imaging detectors is being considered for application in TEM, but which device architectures can provide the best images? Monte Carlo simulations of the electron-sensor interaction are used here to calculate the expected modulation transfer of monolithic active pixel sensors (MAPS), hybrid active pixel sensors (HAPS) and double-sided silicon strip detectors (DSSD), showing that ideal and nearly ideal transfer can be obtained using DSSD and MAPS sensors. These results strongly favour replacing current phosphor-screen and charge-coupled-device imaging systems with such new, directly exposed, position-sensitive electron detectors.

  19. A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity

    PubMed Central

    Zhang, Fan; Niu, Hanben

    2016-01-01

    In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/8.5 × 10⁷ when illuminated by a 405-nm diode laser and 1/1.4 × 10⁴ when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e− rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena. PMID:27367699

  20. A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity.

    PubMed

    Zhang, Fan; Niu, Hanben

    2016-06-29

    In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/8.5 × 10⁷ when illuminated by a 405-nm diode laser and 1/1.4 × 10⁴ when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e(-) rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena.

  1. Toward one Giga frames per second--evolution of in situ storage image sensors.

    PubMed

    Etoh, Takeharu G; Son, Dao V T; Yamada, Tetsuo; Charbon, Edoardo

    2013-04-08

    The ISIS is an ultra-fast image sensor with in-pixel storage. The evolution of the ISIS in the past and in the near future is reviewed and forecasted. To cover the storage area with a light shield, the conventional frontside illuminated ISIS has a limited fill factor. To achieve higher sensitivity, a BSI ISIS was developed. To avoid direct intrusion of light and migration of signal electrons to the storage area on the frontside, a cross-sectional sensor structure with thick pnpn layers was developed, and named "Tetratified structure". By folding and looping in-pixel storage CCDs, an image signal accumulation sensor, ISAS, is proposed. The ISAS has a new function, the in-pixel signal accumulation, in addition to the ultra-high-speed imaging. To achieve much higher frame rate, a multi-collection-gate (MCG) BSI image sensor architecture is proposed. The photoreceptive area forms a honeycomb-like shape. Performance of a hexagonal CCD-type MCG BSI sensor is examined by simulations. The highest frame rate is theoretically more than 1Gfps. For the near future, a stacked hybrid CCD/CMOS MCG image sensor seems most promising. The associated problems are discussed. A fine TSV process is the key technology to realize the structure.

  2. A robust color signal processing with wide dynamic range WRGB CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2011-01-01

    We have developed a robust color reproduction methodology based on a simple calculation with a new color matrix, using the previously developed wide dynamic range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated in a 0.18 μm CMOS technology and has a 45-degree oblique pixel array, a 4.2 μm effective pixel pitch and W pixels. A W pixel is formed by replacing one of the two G pixels in the Bayer RGB color filter. The W pixel has a high sensitivity throughout the visible waveband. An emerald green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly contains emerald green and yellow light. These colors are difficult to reproduce accurately with a conventional simple linear matrix because their wavelengths fall in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuation and noise has been achieved.
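
    The signal construction described — an EGY component formed as the W signal minus the sum of the R, G and B signals, followed by a linear matrix applied to the (R, G, B, EGY) vector — is compact to express. The matrix coefficients below are placeholders for illustration only; the paper's calibrated matrix is not reproduced here.

        import numpy as np

        def wrgb_to_rgb(r, g, b, w, matrix):
            """Apply a linear colour matrix to the (R, G, B, EGY) vector, where
            EGY = W - (R + G + B) as described in the abstract."""
            egy = w - (r + g + b)
            return matrix @ np.array([r, g, b, egy])

        # Placeholder 3x4 matrix (identity on RGB plus a small EGY contribution)
        M = np.array([[1.0, 0.0, 0.0, 0.2],
                      [0.0, 1.0, 0.0, 0.3],
                      [0.0, 0.0, 1.0, 0.1]])
        print(wrgb_to_rgb(r=0.4, g=0.5, b=0.2, w=1.2, matrix=M))   # [0.42 0.53 0.21]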

  3. Precise color images a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems are used in many fields of science and engineering. Although high-speed camera systems have reached high performance, most of their applications are limited to obtaining high-speed motion pictures. However, in some fields of science and technology it is useful to obtain other information as well, such as the temperature of a combustion flame, a thermal plasma or molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 × 64 pixels and 4,500 pps at 256 × 256 pixels, with 256-level (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of the images taken by the two or three image sensors and to calibrate the relationship between the incident light intensity and the corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was reduced to at most 0.2 pixels by this method.

  4. Novel Si-Ge-C Superlattices for More than Moore CMOS

    DTIC Science & Technology

    2016-03-31

    diodes can be entirely formed by epitaxial growth, CMOS Active Pixel Sensors can be made with Fully-Depleted SOI CMOS. One important advantage of... a NMOS Transfer Gate (TG), which could be part of a 4T pixel APS. PPDs are preferred in CMOS image sensors for the ability of the pinning layer to... than Moore" with the creation of active photonic devices monolithically integrated with CMOS. Applications include Multispectral CMOS Image Sensors

  5. A Multi-Modality CMOS Sensor Array for Cell-Based Assay and Drug Screening.

    PubMed

    Chi, Taiyun; Park, Jong Seok; Butts, Jessica C; Hookway, Tracy A; Su, Amy; Zhu, Chengjie; Styczynski, Mark P; McDevitt, Todd C; Wang, Hua

    2015-12-01

    In this paper, we present a fully integrated multi-modality CMOS cellular sensor array with four sensing modalities to characterize different cell physiological responses, including extracellular voltage recording, cellular impedance mapping, optical detection with shadow imaging and bioluminescence sensing, and thermal monitoring. The sensor array consists of nine parallel pixel groups and nine corresponding signal conditioning blocks. Each pixel group comprises one temperature sensor and 16 tri-modality sensor pixels, while each tri-modality sensor pixel can be independently configured for extracellular voltage recording, cellular impedance measurement (voltage excitation/current sensing), and optical detection. This sensor array supports multi-modality cellular sensing at the pixel level, which enables holistic cell characterization and joint-modality physiological monitoring on the same cellular sample with a pixel resolution of 80 μm × 100 μm. Comprehensive biological experiments with different living cell samples demonstrate the functionality and benefit of the proposed multi-modality sensing in cell-based assay and drug screening.

  6. Low temperature performance of a commercially available InGaAs image sensor

    NASA Astrophysics Data System (ADS)

    Nakaya, Hidehiko; Komiyama, Yutaka; Kashikawa, Nobunari; Uchida, Tomohisa; Nagayama, Takahiro; Yoshida, Michitoshi

    2016-08-01

    We report the evaluation results of a commercially available InGaAs image sensor manufactured by Hamamatsu Photonics K. K., which has sensitivity between 0.95μm and 1.7μm at room temperature. The sensor format is 128×128 pixels with a 20 μm pitch. It was tested with our original readout electronics and cooled down to 80 K by a mechanical cooler to minimize the dark current. Although the readout noise and dark current were 200 e- and 20 e-/sec/pixel, respectively, we found no serious problems with the linearity, wavelength response, or intra-pixel response.

  7. A 4MP high-dynamic-range, low-noise CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Ma, Cheng; Liu, Yang; Li, Jing; Zhou, Quan; Chang, Yuchun; Wang, Xinyang

    2015-03-01

    In this paper we present a 4 Megapixel high-dynamic-range, low-dark-noise and low-dark-current CMOS image sensor, which is ideal for high-end scientific and surveillance applications. The pixel design is based on a 4-T PPD structure. During the readout of the pixel array, signals are first amplified and then fed to a low-power column-parallel ADC array that was presented in [1]. Measurement results show that the sensor achieves a dynamic range of 96 dB and a dark noise of 1.47 e- at 24 fps. The dark current is 0.15 e-/pixel/s at -20 °C.

  8. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  9. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced that consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control for the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At each pixel location only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, reduces computational complexity, and effectively preserves image edges.
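
    The hybrid interpolation strategy — edge-directed interpolation at edge pixels, bilinear elsewhere — can be sketched for the green channel as below. A simple gradient test and threshold are assumed, and the routine is an illustration of the idea rather than the authors' exact algorithm or its DSP implementation.

        import numpy as np

        def interp_green(mosaic, green_parity=0, edge_thresh=20):
            """Fill missing green samples of a Bayer mosaic.

            Pixels with a strong directional gradient (edge pixels) are
            interpolated along the edge; all other missing pixels use the
            plain bilinear average of the four green neighbours.
            """
            g = mosaic.astype(float).copy()
            p = np.pad(g, 1, mode='edge')
            up, down = p[:-2, 1:-1], p[2:, 1:-1]
            left, right = p[1:-1, :-2], p[1:-1, 2:]
            dv, dh = np.abs(up - down), np.abs(left - right)

            vert = (up + down) / 2          # value varies little vertically
            horiz = (left + right) / 2      # value varies little horizontally
            bilin = (up + down + left + right) / 4

            est = np.where(dv + edge_thresh < dh, vert,
                  np.where(dh + edge_thresh < dv, horiz, bilin))

            ii, jj = np.indices(g.shape)
            missing = ((ii + jj) % 2) != green_parity
            g[missing] = est[missing]
            return g

        step = np.zeros((8, 8)); step[:, 4:] = 200   # toy vertical step edge
        print(interp_green(step)[3, 4])              # 200.0 (edge kept; bilinear would blur to 150)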

  10. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    PubMed

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  11. High Dynamic Range Imaging at the Quantum Limit with Single Photon Avalanche Diode-Based Image Sensors †

    PubMed Central

    Mattioli Della Rocca, Francescopaolo

    2018-01-01

    This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
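
    Multi-exposure HDR merging of the general kind used here scales each exposure by its integration time and keeps, per pixel, the longest exposure that has not saturated. The sketch below shows that generic merge for three exposures; the exposure times, saturation level and photon-count statistics are placeholder assumptions, not the parameters of the demonstrated IC.

        import numpy as np

        def combine_hdr(counts, t_exp, full_scale):
            """Merge short/mid/long photon-count frames into one HDR frame.

            counts     : list of frames, ordered short -> long exposure
            t_exp      : matching exposure times
            full_scale : count level treated as saturated
            Per pixel, the longest non-saturated exposure is used, normalized
            to counts per unit time.
            """
            hdr = counts[0] / t_exp[0]                 # short exposure as fallback
            for frame, t in zip(counts[1:], t_exp[1:]):
                ok = frame < full_scale
                hdr = np.where(ok, frame / t, hdr)
            return hdr

        rng = np.random.default_rng(4)
        flux = 10 ** rng.uniform(0, 5, (40, 96))       # photons per unit time
        t_exp = [0.001, 0.03, 1.0]                     # assumed exposure times
        full_scale = 32767                             # assumed saturation level
        counts = [np.minimum(rng.poisson(flux * t), full_scale) for t in t_exp]
        hdr = combine_hdr(counts, t_exp, full_scale)
        print(np.median(np.abs(hdr - flux) / flux))    # median relative error over the scene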

  12. CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.

    PubMed

    Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V

    2010-12-01

    We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520 elements) array of active pixel sensors, and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target-analyte-responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16 elements) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at a 1 kHz sampling frequency driven at 5 V. The developed prototype demonstrates a low-cost and miniaturized luminescence multisensor system.

  13. CMOS Imaging of Temperature Effects on Pin-Printed Xerogel Sensor Microarrays.

    PubMed

    Lei Yao; Ka Yi Yung; Chodavarapu, Vamsy P; Bright, Frank V

    2011-04-01

    In this paper, we study the effect of temperature on the operation and performance of a xerogel-based sensor microarray coupled to a complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC) that images the photoluminescence response from the sensor microarray. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors, and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. A correlated double sampling circuit and pixel address/digital control/signal integration circuits are also implemented on-chip. The CMOS imager data are read out as a serial coded signal. The sensor system uses a light-emitting diode to excite target-analyte-responsive organometallic luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 3 × 3 (9 elements) array of oxygen (O2) sensors. Each group of three sensor elements in the array (arranged in a column) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a mix of two O2-sensitive luminophores in each pin-printed xerogel sensor element. The CMOS imager is designed to be low noise and consumes a static power of 320.4 μW and an average dynamic power of 624.6 μW when operating at a 100-Hz sampling frequency and a 1.8-V dc power supply.

  14. CMOS Image Sensors for High Speed Applications.

    PubMed

    El-Desouki, Munir; Deen, M Jamal; Fang, Qiyin; Liu, Louis; Tse, Frances; Armstrong, David

    2009-01-01

    Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled devices (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions due to being fabricated in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4∼5 μm) due to limitations in the optics, CMOS technology scaling can allow for an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications allowing CMOS imagers to achieve the image quality and global shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout and simulation results of an ultrahigh acquisition rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second (fps).

  15. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
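
    The central idea — even though each logarithmic pixel is nonlinear in the stimulus, its response can be mapped onto a common reference response with a low-degree polynomial, which removes FPN — can be illustrated with a toy calibration. The sketch below fits a degree-1 polynomial per pixel against the array-mean response over a set of calibration exposures and applies it as the correction; the pixel model, the choice of reference and the floating-point arithmetic are simplifying assumptions relative to the paper's fixed-point method.

        import numpy as np

        rng = np.random.default_rng(5)
        n_pix, n_cal = 500, 16
        lux = np.logspace(-1, 4, n_cal)                         # calibration stimuli

        # Toy logarithmic pixels with per-pixel offset and gain mismatch (FPN)
        a = 40 + rng.normal(0, 2, (n_pix, 1))                   # gain spread
        b = 300 + rng.normal(0, 15, (n_pix, 1))                 # offset spread
        response = a * np.log(lux) + b                          # shape (n_pix, n_cal)

        reference = response.mean(axis=0)                       # common target response

        # Degree-1 polynomial per pixel: corrected = c1 * response + c0
        coeffs = np.array([np.polyfit(response[i], reference, 1) for i in range(n_pix)])
        corrected = coeffs[:, :1] * response + coeffs[:, 1:]

        # FPN (pixel-to-pixel spread) before and after correction
        print(response.std(axis=0).max(), corrected.std(axis=0).max())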

  16. Miniature Wide-Angle Lens for Small-Pixel Electronic Camera

    NASA Technical Reports Server (NTRS)

    Mouroulils, Pantazis; Blazejewski, Edward

    2009-01-01

    A proposed wide-angle lens would be especially well suited for an electronic camera in which the focal plane is occupied by an image sensor that has small pixels. The design of the lens is intended to satisfy requirements for compactness, high image quality, and reasonably low cost, while addressing issues peculiar to the operation of small-pixel image sensors. Hence, this design is expected to enable the development of a new generation of compact, high-performance electronic cameras. The lens example shown has a 60° field of view and a relative aperture (f-number) of 3.2. The main issues affecting the design are also discussed.

  17. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor (CMOS) Active Pixel Sensor, establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of their own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  18. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing raw image data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a large amount of computation in a few instruction cycles and therefore satisfy the requirements of low- and middle-level high-speed image processing. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  19. Integrated sensor with frame memory and programmable resolution for light adaptive imaging

    NASA Technical Reports Server (NTRS)

    Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)

    2004-01-01

    An image sensor operable to vary the output spatial resolution according to a received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch with an adjustable size are added to increase both the image brightness and signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibration column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed pattern noise.
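
    A minimal Python sketch of the patch-summation idea described above, assuming square patches and a simple shot-plus-read-noise SNR model; the patch sizes, noise model and selection rule are illustrative assumptions, not the patented circuit behaviour.

      import numpy as np

      def bin_pixels(frame, patch):
          # Sum non-overlapping patch x patch neighbourhoods, trading spatial
          # resolution for brightness and signal-to-noise ratio.
          rows = frame.shape[0] - frame.shape[0] % patch
          cols = frame.shape[1] - frame.shape[1] % patch
          view = frame[:rows, :cols].reshape(rows // patch, patch, cols // patch, patch)
          return view.sum(axis=(1, 3))

      def choose_patch(mean_signal_e, target_snr, read_noise_e, sizes=(1, 2, 4, 8)):
          # Pick the smallest patch whose summed signal meets the target SNR under a
          # shot-noise + read-noise model (an assumption of this sketch).
          for patch in sizes:
              n = patch * patch
              snr = n * mean_signal_e / np.sqrt(n * mean_signal_e + n * read_noise_e ** 2)
              if snr >= target_snr:
                  return patch
          return sizes[-1]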

  20. A 3D image sensor with adaptable charge subtraction scheme for background light suppression

    NASA Astrophysics Data System (ADS)

    Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.

    2013-02-01

    We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high resolution color image and a high quality depth map in each frame. In depth mode, the sensor requires enough integration time for accurate depth acquisition, but saturation will occur under high background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the charge to prevent the pixel from saturating. The subtraction results are then accumulated N times, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
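
    A toy Python model of the sub-integration loop described above; capture_subframe and background are placeholders for the actual in-pixel charge capture and background estimate, which the paper implements in hardware.

      import numpy as np

      def accumulate_depth_signal(capture_subframe, background, n_sub):
          # Split the full integration into n_sub sub-integrations; after each one the
          # background-light charge estimate is subtracted (keeping the pixel out of
          # saturation) and the residual is accumulated into the final result.
          accumulated = np.zeros_like(background, dtype=float)
          for _ in range(n_sub):
              sub = capture_subframe()          # charge collected during one sub-integration
              accumulated += sub - background   # remove background before it can saturate
          return accumulated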

  1. Panoramic thermal imaging: challenges and tradeoffs

    NASA Astrophysics Data System (ADS)

    Aburmad, Shimon

    2014-06-01

    Over the past decade, we have witnessed a growing demand for electro-optical systems that can provide continuous 360° coverage. Applications such as perimeter security, autonomous vehicles, and military warning systems are a few of the most common applications for panoramic imaging. There are several different technological approaches for achieving panoramic imaging. Solutions based on rotating elements do not provide continuous coverage, as there is a time lag between updates. Continuous panoramic solutions either use "stitched" images from multiple adjacent sensors, or sophisticated optical designs which warp a panoramic view onto a single sensor. When dealing with panoramic imaging in the visible spectrum, high volume production and the advancement of semiconductor technology have enabled the use of CMOS/CCD image sensors with a huge number of pixels, small pixel dimensions, and low cost. However, in the infrared spectrum, the growth of detector pixel counts, pixel size reduction, and cost reduction are taking place at a slower rate due to the complexity of the technology and limitations imposed by the laws of physics. In this work, we will explore the challenges involved in achieving 360° panoramic thermal imaging, and will analyze aspects such as spatial resolution, FOV, data complexity, FPA utilization, system complexity, coverage and cost of the different solutions. We will provide illustrations, calculations, and tradeoffs between three solutions evaluated by Opgal: a unique 360° lens design using an LWIR XGA detector, stitching of three adjacent LWIR sensors equipped with a low-distortion 120° lens, and a fisheye lens with an HFOV of 180° and an XGA sensor.

  2. Event-based Sensing for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Cohen, G.; Afshar, S.; van Schaik, A.; Wabnitz, A.; Bessell, T.; Rutten, M.; Morreale, B.

    A revolutionary type of imaging device, known as a silicon retina or event-based sensor, has recently been developed and is gaining in popularity in the field of artificial vision systems. These devices are inspired by the biological retina and operate in a significantly different way to traditional CCD-based imaging sensors. While a CCD produces frames of pixel intensities, an event-based sensor produces a continuous stream of events, each of which is generated when a pixel detects a change in log light intensity. These pixels operate asynchronously and independently, producing an event-based output with high temporal resolution. There are also no fixed exposure times, allowing these devices to offer a very high dynamic range independently for each pixel. Additionally, these devices offer high-speed, low-power operation and a sparse spatiotemporal output. As a consequence, the data from these sensors must be interpreted in a significantly different way to that from traditional imaging sensors, and this paper explores the advantages this technology provides for space imaging. The applicability and capabilities of event-based sensors for SSA applications are demonstrated through telescope field trials. Trial results have confirmed that the devices are capable of observing resident space objects from LEO through to GEO orbital regimes. Significantly, observations of RSOs were made during both day-time and night-time (terminator) conditions without modification to the camera or optics. The event-based sensor's ability to image stars and satellites during day-time hours offers a dramatic capability increase for terrestrial optical sensors. This paper shows the field testing and validation of two different architectures of event-based imaging sensors. An event-based sensor's asynchronous output has an intrinsically low data rate. In addition to low-bandwidth communications requirements, the low weight, low power and high speed make them ideally suited to meeting the demanding challenges of space-based SSA systems. Results from these experiments and the systems developed highlight the applicability of event-based sensors to ground- and space-based SSA tasks.
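
    To make the pixel behaviour concrete, here is a toy per-pixel Python model of event generation on log-intensity changes; real devices do this asynchronously in analog circuitry, and the contrast threshold value here is an assumption.

      import math

      def emit_events(samples, contrast_threshold=0.2):
          # samples: list of (timestamp, light_intensity) for a single pixel.
          # An ON (+1) or OFF (-1) event is emitted whenever the log intensity has
          # changed by more than the contrast threshold since the last event.
          events = []
          last_log = math.log(samples[0][1])
          for t, intensity in samples[1:]:
              delta = math.log(intensity) - last_log
              if abs(delta) >= contrast_threshold:
                  events.append((t, 1 if delta > 0 else -1))
                  last_log = math.log(intensity)
          return events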

  3. Design, optimization and evaluation of a "smart" pixel sensor array for low-dose digital radiography

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Liu, Xinghui; Ou, Hai; Chen, Jun

    2016-04-01

    Amorphous silicon (a-Si:H) thin-film transistors (TFTs) have been widely used to build flat-panel X-ray detectors for digital radiography (DR). As the demand for low-dose X-ray imaging grows, a detector with a high signal-to-noise-ratio (SNR) pixel architecture emerges. The "smart" pixel is intended to use a dual-gate photosensitive TFT for sensing, storage, and switching. It differs from a conventional passive pixel sensor (PPS) and active pixel sensor (APS) in that all three functions are combined into one device instead of three separate units in a pixel. Thus, it is expected to have a high fill factor and high spatial resolution. In addition, it utilizes the amplification effect of the dual-gate photosensitive TFT to form a one-transistor APS that leads to a potentially high SNR. This paper addresses the design, optimization and evaluation of the smart pixel sensor and array for low-dose DR. We will design and optimize the smart pixel from the scintillator level to the TFT level and validate it through optical and electrical simulation and experiments on a 4 × 4 sensor array.

  4. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
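
    A minimal Python sketch of the convolution-based reconstruction step mentioned above, assuming a cyclic (FFT) correlation with a balanced URA decoding array of the same shape as the scan; the exact decoding weights and normalization used by the authors are not reproduced here.

      import numpy as np

      def reconstruct_ura(measured, aperture):
          # measured: 2-D map of photodiode readings as the coded aperture is scanned.
          # aperture: binary URA mask (1 = open, 0 = opaque), same shape as measured.
          decoding = np.where(aperture > 0, 1.0, -1.0)          # balanced decoding array
          spectrum = np.fft.fft2(measured) * np.conj(np.fft.fft2(decoding))
          return np.real(np.fft.ifft2(spectrum)) / aperture.sum()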

  5. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel, and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual-pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions, through multiple low light levels between 20 and 1,000 photons/pixel per frame, to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.

  6. Crosstalk quantification, analysis, and trends in CMOS image sensors.

    PubMed

    Blockstein, Lior; Yadid-Pecht, Orly

    2010-08-20

    Pixel crosstalk (CTK) consists of three components, optical CTK (OCTK), electrical CTK (ECTK), and spectral CTK (SCTK). The CTK has been classified into two groups: pixel-architecture dependent and pixel-architecture independent. The pixel-architecture-dependent CTK (PADC) consists of the sum of two CTK components, i.e., the OCTK and the ECTK. This work presents a short summary of a large variety of methods for PADC reduction. Following that, this work suggests a clear quantifiable definition of PADC. Three complementary metal-oxide-semiconductor (CMOS) image sensors based on different technologies were empirically measured, using a unique scanning technology, the S-cube. The PADC is analyzed, and technology trends are shown.

  7. A 45 nm Stacked CMOS Image Sensor Process Technology for Submicron Pixel †

    PubMed Central

    Takahashi, Seiji; Huang, Yi-Min; Sze, Jhy-Jyi; Wu, Tung-Ting; Guo, Fu-Sheng; Hsu, Wei-Cheng; Tseng, Tung-Hsiung; Liao, King; Kuo, Chin-Chia; Chen, Tzu-Hsiang; Chiang, Wei-Chieh; Chuang, Chun-Hao; Chou, Keng-Yu; Chung, Chi-Hsien; Chou, Kuo-Yu; Tseng, Chien-Hsien; Wang, Chuan-Joung; Yaung, Dun-Nien

    2017-01-01

    A submicron pixel’s light and dark performance were studied by experiment and simulation. An advanced node technology incorporated with a stacked CMOS image sensor (CIS) is promising in that it may enhance performance. In this work, we demonstrated a low dark current of 3.2 e−/s at 60 °C, an ultra-low read noise of 0.90 e−·rms, a high full well capacity (FWC) of 4100 e−, and blooming of 0.5% in 0.9 μm pixels with a pixel supply voltage of 2.8 V. In addition, the simulation study result of 0.8 μm pixels is discussed. PMID:29206162

  8. The Design of Optical Sensor for the Pinhole/Occulter Facility

    NASA Technical Reports Server (NTRS)

    Greene, Michael E.

    1990-01-01

    Three optical sight sensor systems were designed, built and tested. Two of the optical line-of-sight sensor systems are capable of measuring the absolute pointing angle to the Sun. The system is for use with the Pinhole/Occulter Facility (P/OF), a solar hard X-ray experiment to be flown from the Space Shuttle or Space Station. The sensor consists of a pinhole camera with two pairs of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the pinhole, track-and-hold circuitry for data reduction, an analog-to-digital converter, and a microcomputer. The deflection of the image center is calculated from these data using an approximation for the solar image. A second system consists of a pinhole camera with a pair of perpendicularly mounted linear photodiode arrays, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image is calculated by knowing the position of each pixel of the photodiode array and merely counting the pixel numbers until the threshold is surpassed. A third optical sensor system is capable of measuring the internal vibration of the P/OF between the mask and the base. The system consists of a white light source, a mirror and a pair of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the mirror, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image, and hence the vibration of the structure, is calculated by knowing the position of each pixel of the photodiode array and merely counting the pixel numbers until the threshold is surpassed.

  9. Photon counting phosphorescence lifetime imaging with TimepixCam

    DOE PAGES

    Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; ...

    2017-01-12

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  10. Photon counting phosphorescence lifetime imaging with TimepixCam.

    PubMed

    Hirvonen, Liisa M; Fisher-Levine, Merlin; Suhling, Klaus; Nomerotski, Andrei

    2017-01-01

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window and read out by a Timepix Application Specific Integrated Circuit. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting imaging. We have characterised the photon detection capabilities of this detector system and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  11. Photon counting phosphorescence lifetime imaging with TimepixCam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  12. Photon counting phosphorescence lifetime imaging with TimepixCam

    NASA Astrophysics Data System (ADS)

    Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; Nomerotski, Andrei

    2017-01-01

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window and read out by a Timepix Application Specific Integrated Circuit. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting imaging. We have characterised the photon detection capabilities of this detector system and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  13. Methods and apparatuses for detection of radiation with semiconductor image sensors

    DOEpatents

    Cogliati, Joshua Joseph

    2018-04-10

    A semiconductor image sensor is repeatedly exposed to high-energy photons while a visible light obstructer is in place to block visible light from impinging on the sensor to generate a set of images from the exposures. A composite image is generated from the set of images with common noise substantially removed so the composite image includes image information corresponding to radiated pixels that absorbed at least some energy from the high-energy photons. The composite image is processed to determine a set of bright points in the composite image, each bright point being above a first threshold. The set of bright points is processed to identify lines with two or more bright points that include pixels therebetween that are above a second threshold and identify a presence of the high-energy particles responsive to a number of lines.
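
    A simplified Python sketch of the processing chain the patent describes (composite image, first-threshold bright points, second-threshold line test); the choice of maximum-minus-median as the "common noise removed" composite and the line-sampling step count are assumptions made for illustration.

      import numpy as np

      def find_particle_tracks(images, t_bright, t_track):
          stack = np.stack(images).astype(float)
          composite = stack.max(axis=0) - np.median(stack, axis=0)   # crude common-noise removal
          bright = np.argwhere(composite > t_bright)                 # first-threshold bright points
          tracks = []
          for i in range(len(bright)):
              for j in range(i + 1, len(bright)):
                  between = pixels_between(bright[i], bright[j])
                  if all(composite[r, c] > t_track for r, c in between):   # second, lower threshold
                      tracks.append((tuple(bright[i]), tuple(bright[j])))
          return tracks

      def pixels_between(p0, p1, steps=32):
          # Sample the straight line connecting two bright points on the pixel grid.
          rows = np.linspace(p0[0], p1[0], steps).round().astype(int)
          cols = np.linspace(p0[1], p1[1], steps).round().astype(int)
          return list(zip(rows, cols))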

  14. A CMOS image sensor with stacked photodiodes for lensless observation system of digital enzyme-linked immunosorbent assay

    NASA Astrophysics Data System (ADS)

    Takehara, Hironari; Miyazawa, Kazuya; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Kim, Soo Hyeon; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun

    2014-01-01

    A CMOS image sensor with stacked photodiodes was fabricated using 0.18 µm mixed signal CMOS process technology. Two photodiodes were stacked at the same position of each pixel of the CMOS image sensor. The stacked photodiodes consist of shallow high-concentration N-type layer (N+), P-type well (PW), deep N-type well (DNW), and P-type substrate (P-sub). PW and P-sub were shorted to ground. By monitoring the voltage of N+ and DNW individually, we can observe two monochromatic colors simultaneously without using any color filters. The CMOS image sensor is suitable for fluorescence imaging, especially contact imaging such as a lensless observation system of digital enzyme-linked immunosorbent assay (ELISA). Since the fluorescence increases with time in digital ELISA, it is possible to observe fluorescence accurately by calculating the difference from the initial relation between the pixel values for both photodiodes.

  15. 10000 pixels wide CMOS frame imager for earth observation from a HALE UAV

    NASA Astrophysics Data System (ADS)

    Delauré, B.; Livens, S.; Everaerts, J.; Kleihorst, R.; Schippers, Gert; de Wit, Yannick; Compiet, John; Banachowicz, Bartosz

    2009-09-01

    MEDUSA is a lightweight high resolution camera, designed to be operated from a solar-powered Unmanned Aerial Vehicle (UAV) flying at stratospheric altitudes. The instrument is a technology demonstrator within the Pegasus program and targets applications such as crisis management and cartography. A special wide-swath CMOS imager has been developed by Cypress Semiconductor Corporation Belgium to meet the specific sensor requirements of MEDUSA. The CMOS sensor has a stitched design comprising a panchromatic and a color sensor on the same die. Each sensor consists of 10000 × 1200 square pixels (5.5 μm size, novel 6T architecture) with micro-lenses. The exposure is performed by means of a high-efficiency snapshot shutter. The sensor is able to operate at a rate of 30 fps in full-frame readout. Due to the novel pixel design, the sensor has low dark leakage of the memory elements (PSNL) and low parasitic light sensitivity (PLS), while maintaining a relatively high quantum efficiency (QE) and a fill factor (FF) of over 65%. It features a Modulation Transfer Function (MTF) higher than 60% at the Nyquist frequency in both the X and Y directions. The measured optical/electrical crosstalk (expressed as MTF) of this 5.5 μm pixel is state of the art. These properties make it possible to acquire sharp images even in low-light conditions.

  16. Multiple-Event, Single-Photon Counting Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for registering the photon count. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to a very low dynamic range and makes the sensor useful only in very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in the pixels. The resulting low photon collection efficiency will substantially ruin any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register one million or more photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting from ultra-low-light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.

  17. Further applications for mosaic pixel FPA technology

    NASA Astrophysics Data System (ADS)

    Liddiard, Kevin C.

    2011-06-01

    In previous papers to this SPIE forum the development of novel technology for next generation PIR security sensors has been described. This technology combines the mosaic pixel FPA concept with low cost optics and purpose-designed readout electronics to provide a higher performance and affordable alternative to current PIR sensor technology, including an imaging capability. Progressive development has resulted in increased performance and transition from conventional microbolometer fabrication to manufacture on 8 or 12 inch CMOS/MEMS fabrication lines. A number of spin-off applications have been identified. In this paper two specific applications are highlighted: high performance imaging IRFPA design and forest fire detection. The former involves optional design for small pixel high performance imaging. The latter involves cheap expendable sensors which can detect approaching fire fronts and send alarms with positional data via mobile phone or satellite link. We also introduce to this SPIE forum the application of microbolometer IR sensor technology to IoT, the Internet of Things.

  18. A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems.

    PubMed

    Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn

    2017-06-25

    In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed in an 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) for the CIS, which supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while the imaged scene holds steady without events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital) at a frame rate of 14 frames/s.

  19. Design and Performance of a Pinned Photodiode CMOS Image Sensor Using Reverse Substrate Bias.

    PubMed

    Stefanov, Konstantin D; Clarke, Andrew S; Ivory, James; Holland, Andrew D

    2018-01-03

    A new pinned photodiode (PPD) CMOS image sensor with a reverse biased p-type substrate has been developed and characterized. The sensor uses traditional PPDs with one additional deep implantation step to suppress the parasitic reverse currents, and can be fully depleted. The first prototypes have been manufactured on 18 µm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process. Both front-side illuminated (FSI) and back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v. The characterization results from a number of arrays of 10 µm and 5.4 µm PPD pixels, with different shapes, sizes and depths of the new implant, are in good agreement with device simulations. The new pixels could be reverse-biased without parasitic leakage currents well beyond full depletion, and demonstrate a nearly identical optical response to the reference non-modified pixels. The observed excessive charge sharing in some pixel variants is shown not to be a limiting factor in operation. This development promises to realize monolithic PPD CIS with large depleted thickness and correspondingly high quantum efficiency at near-infrared and soft X-ray wavelengths.

  20. Design and Performance of a Pinned Photodiode CMOS Image Sensor Using Reverse Substrate Bias †

    PubMed Central

    Clarke, Andrew S.; Ivory, James; Holland, Andrew D.

    2018-01-01

    A new pinned photodiode (PPD) CMOS image sensor with a reverse biased p-type substrate has been developed and characterized. The sensor uses traditional PPDs with one additional deep implantation step to suppress the parasitic reverse currents, and can be fully depleted. The first prototypes have been manufactured on 18 µm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process. Both front-side illuminated (FSI) and back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v. The characterization results from a number of arrays of 10 µm and 5.4 µm PPD pixels, with different shapes, sizes and depths of the new implant, are in good agreement with device simulations. The new pixels could be reverse-biased without parasitic leakage currents well beyond full depletion, and demonstrate a nearly identical optical response to the reference non-modified pixels. The observed excessive charge sharing in some pixel variants is shown not to be a limiting factor in operation. This development promises to realize monolithic PPD CIS with large depleted thickness and correspondingly high quantum efficiency at near-infrared and soft X-ray wavelengths. PMID:29301379

  1. Image sensor with motion artifact suppression and anti-blooming

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Wrigley, Chris (Inventor); Yang, Guang (Inventor); Yadid-Pecht, Orly (Inventor)

    2006-01-01

    An image sensor includes pixels formed on a semiconductor substrate. Each pixel includes a photoactive region in the semiconductor substrate, a sense node, and a power supply node. A first electrode is disposed near a surface of the semiconductor substrate. A bias signal on the first electrode sets a potential in a region of the semiconductor substrate between the photoactive region and the sense node. A second electrode is disposed near the surface of the semiconductor substrate. A bias signal on the second electrode sets a potential in a region of the semiconductor substrate between the photoactive region and the power supply node. The image sensor includes a controller that causes bias signals to be provided to the electrodes so that photocharges generated in the photoactive region are accumulated in the photoactive region during a pixel integration period, the accumulated photocharges are transferred to the sense node during a charge transfer period, and photocharges generated in the photoactive region are transferred to the power supply node during a third period without passing through the sense node. The imager can operate at high shutter speeds with simultaneous integration of pixels in the array. High quality images can be produced free from motion artifacts. High quantum efficiency, good blooming control, low dark current, low noise and low image lag can be obtained.

  2. Using Polynomials to Simplify Fixed Pattern Noise and Photometric Correction of Logarithmic CMOS Image Sensors

    PubMed Central

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-01-01

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287

  3. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.

  4. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes the two cameras' acquisition and display.

  5. Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy

    NASA Astrophysics Data System (ADS)

    Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan

    2016-03-01

    Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in, e.g., mobile phones. Here we report, for the first time, a wavelength-scanning-based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral-shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength-scanning-based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited settings.

  6. Polarization-Analyzing CMOS Image Sensor With Monolithically Embedded Polarizer for Microchemistry Systems.

    PubMed

    Tokuda, T; Yamada, H; Sasagawa, K; Ohta, J

    2009-10-01

    This paper proposes and demonstrates a polarization-analyzing CMOS sensor based on image sensor architecture. The sensor was designed targeting applications for chiral analysis in a microchemistry system. The sensor features a monolithically embedded polarizer. Embedded polarizers with different angles were implemented to realize a real-time absolute measurement of the incident polarization angle. Although the pixel-level performance was confirmed to be limited, estimation schemes based on the variation of the polarizer angle provided promising performance for real-time polarization measurements. An estimation scheme using 180 pixels in 1° steps provided an estimation accuracy of 0.04°. Polarimetric measurements of chiral solutions were also successfully performed to demonstrate the applicability of the sensor to optical chiral analysis.

  7. Pixel-super-resolved lensfree holography using adaptive relaxation factor and positional error correction

    NASA Astrophysics Data System (ADS)

    Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao

    2018-01-01

    Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimum solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to enhance the robustness and SNR of the reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm² and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate the method's promising potential applications in biological imaging.

  8. CMOS Active Pixel Sensors for Low Power, Highly Miniaturized Imaging Systems

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.

    1996-01-01

    The complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology has been developed over the past three years by NASA at the Jet Propulsion Laboratory, and has reached a level of performance comparable to CCDs with greatly increased functionality but at a very reduced power level.

  9. High responsivity CMOS imager pixel implemented in SOI technology

    NASA Technical Reports Server (NTRS)

    Zheng, X.; Wrigley, C.; Yang, G.; Pain, B.

    2000-01-01

    Availability of mature sub-micron CMOS technology and the advent of the new low noise active pixel sensor (APS) concept have enabled the development of low power, miniature, single-chip, CMOS digital imagers in the decade of the 1990's.

  10. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    NASA Astrophysics Data System (ADS)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic imaging devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. One-chip imaging systems, in which the image sensor has a fully digital interface, make image capture devices part of our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes the captured image more realistic and colorful; in this sense, the color filter makes life more colorful. A color filter transmits only the light whose wavelength matches the filter's own specific wavelength band and transmittance, and blocks the rest of the image light source. The color filter process consists of coating and patterning green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matching pixels of the image sensing array. From the signal captured at each pixel, the image of the scene can be reconstructed. The widespread use of digital electronic cameras and multimedia applications today makes color filters increasingly important. Although it is challenging, developing the color filter process is well worthwhile, offering shorter cycle times, excellent color quality, and high and stable yield. The key issues of an advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that have to be considered are also described in this paper.

  11. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of megapixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.
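
    The sort of back-of-envelope figures such an analysis produces can be sketched as follows; the formulas (full well from an area density, dynamic range and SNR from shot plus read noise) are textbook approximations and not the paper's actual software.

      import math

      def sensor_figures(pixel_pitch_um, full_well_e_per_cm2, read_noise_e, qe, exposure_photons):
          area_cm2 = (pixel_pitch_um * 1e-4) ** 2                    # pixel area in cm^2
          full_well = full_well_e_per_cm2 * area_cm2                 # electrons per pixel
          signal = min(qe * exposure_photons, full_well)             # collected signal, clipped at full well
          dynamic_range_db = 20 * math.log10(full_well / read_noise_e)
          snr_db = 20 * math.log10(signal / math.sqrt(signal + read_noise_e ** 2))
          return {"full_well_e": full_well, "dynamic_range_db": dynamic_range_db, "snr_db": snr_db}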

  12. Sensing, Spectra and Scaling: What's in Store for Land Observations

    NASA Technical Reports Server (NTRS)

    Goetz, Alexander F. H.

    2001-01-01

    Bill Pecora's 1960's vision of the future, using spacecraft-based sensors for mapping the environment and exploring for resources, is being implemented today. New technology has produced better sensors in space, such as the Landsat Thematic Mapper (TM) and SPOT, and creative researchers are continuing to find new applications. However, with existing sensors, and those intended for launch in this century, the potential for extracting information from the land surface is far from being exploited. The most recent technology development is imaging spectrometry, the acquisition of images in hundreds of contiguous spectral bands, such that for any pixel a complete reflectance spectrum can be acquired. Experience with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has shown that, with proper attention paid to absolute calibration, it is possible to acquire apparent surface reflectance to 5% accuracy without any ground-based measurement. The data reduction incorporates an educated guess of the aerosol scattering, development of a precipitable water vapor map from the data, and mapping of cirrus clouds in the 1.38 micrometer band. This is not possible with TM. The pixel size in images of the earth plays an important role in the type and quality of information that can be derived. Less understood is the coupling between spatial and spectral resolution in a sensor. Recent work has shown that in processing the data to derive the relative abundance of materials in a pixel, also known as unmixing, the pixel size is an important parameter. A variance in the relative abundance of materials among the pixels is necessary to be able to derive the endmembers or pure material constituent spectra. In most cases, the 1 km pixel size of the Earth Observing System Moderate Resolution Imaging Spectroradiometer (MODIS) instrument is too large to meet the variance criterion. A pointable high spatial and spectral resolution imaging spectrometer in orbit will be necessary to make the next major step in our understanding of the solid earth surface and its changing face.
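
    As a concrete illustration of the unmixing step mentioned above, a minimal linear-unmixing sketch in Python is given below; real abundance retrieval typically enforces the constraints more carefully, and the clip-and-renormalize step here is a simplification for illustration only.

      import numpy as np

      def unmix(pixel_spectrum, endmembers):
          # endmembers: (n_bands, n_endmembers) matrix of pure-material spectra.
          # Solve pixel_spectrum ~= endmembers @ abundances in the least-squares sense,
          # then clip and renormalize so abundances are non-negative and sum to one.
          abundances, *_ = np.linalg.lstsq(endmembers, pixel_spectrum, rcond=None)
          abundances = np.clip(abundances, 0.0, None)
          total = abundances.sum()
          return abundances / total if total > 0 else abundances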

  13. Performance benefits and limitations of a camera network

    NASA Astrophysics Data System (ADS)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  14. A bio-image sensor for simultaneous detection of multi-neurotransmitters.

    PubMed

    Lee, You-Na; Okumura, Koichi; Horio, Tomoko; Iwata, Tatsuya; Takahashi, Kazuhiro; Hattori, Toshiaki; Sawada, Kazuaki

    2018-03-01

    We report here a new bio-image sensor for simultaneous detection of the spatial and temporal distribution of multiple neurotransmitters. It consists of multiple enzyme-immobilized membranes on a 128 × 128 pixel array with read-out circuitry. Apyrase and acetylcholinesterase (AChE), as selective elements, are used to recognize adenosine 5'-triphosphate (ATP) and acetylcholine (ACh), respectively. To enhance the spatial resolution, hydrogen ion (H+) diffusion barrier layers are deposited on top of the bio-image sensor, and their blocking capability is demonstrated. The results are used to design the spacing among the enzyme-immobilized pixels and the null H+ sensor to minimize undesired signal overlap caused by H+ diffusion. Using this bio-image sensor, we can obtain H+ diffusion-independent imaging of concentration gradients of ATP and ACh in real time. The sensing characteristics, such as sensitivity and limit of detection, are determined experimentally. With the proposed bio-image sensor, the possibility exists for customizable monitoring of the activities of various neurochemicals by using different kinds of proton-consuming or proton-generating enzymes. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. 1024-Pixel CMOS Multimodality Joint Cellular Sensor/Stimulator Array for Real-Time Holistic Cellular Characterization and Cell-Based Drug Screening.

    PubMed

    Park, Jong Seok; Aziz, Moez Karim; Li, Sensen; Chi, Taiyun; Grijalva, Sandra Ivonne; Sung, Jung Hoon; Cho, Hee Cheol; Wang, Hua

    2018-02-01

    This paper presents a fully integrated CMOS multimodality joint sensor/stimulator array with 1024 pixels for real-time holistic cellular characterization and drug screening. The proposed system consists of four pixel groups and four parallel signal-conditioning blocks. Every pixel group contains 16 × 16 pixels, and each pixel includes one gold-plated electrode, four photodiodes, and in-pixel circuits within its pixel footprint. Each pixel supports real-time extracellular potential recording, optical detection, charge-balanced biphasic current stimulation, and cellular impedance measurement for the same cellular sample. The proposed system is fabricated in a standard 130-nm CMOS process. Rat cardiomyocytes are successfully cultured on-chip. Measured high-resolution optical opacity images, extracellular potential recordings, biphasic current stimulations, and cellular impedance images demonstrate the unique advantages of the system for holistic cell characterization and drug screening. Furthermore, this paper demonstrates the use of optical detection on the on-chip cultured cardiomyocytes to track their cyclic beating pattern and beating rate in real time.

  16. Penrose high-dynamic-range imaging

    NASA Astrophysics Data System (ADS)

    Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian

    2016-05-01

    High-dynamic-range (HDR) imaging is becoming increasingly popular and widespread. The most common multishot HDR approach, based on multiple low-dynamic-range images captured with different exposures, has difficulties in handling camera and object movements. The spatially varying exposures (SVE) technology provides a solution to overcome this limitation by obtaining multiple exposures of the scene in only one shot but suffers from a loss in spatial resolution of the captured image. While aperiodic assignment of exposures has been shown to be advantageous during reconstruction in alleviating resolution loss, almost all the existing imaging sensors use the square pixel layout, which is a periodic tiling of square pixels. We propose the Penrose pixel layout, using pixels in aperiodic rhombus Penrose tiling, for HDR imaging. With the SVE technology, Penrose pixel layout has both exposure and pixel aperiodicities. To investigate its performance, we have to reconstruct HDR images in square pixel layout from Penrose raw images with SVE. Since the two pixel layouts are different, the traditional HDR reconstruction methods are not applicable. We develop a reconstruction method for Penrose pixel layout using a Gaussian mixture model for regularization. Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout.

  17. Pixel electronic noise as a function of position in an active matrix flat panel imaging array

    NASA Astrophysics Data System (ADS)

    Yazdandoost, Mohammad Y.; Wu, Dali; Karim, Karim S.

    2010-04-01

    We present an analysis of output-referred pixel electronic noise as a function of position in the active matrix array for both active and passive pixel architectures. Three different noise sources for Active Pixel Sensor (APS) arrays are considered: readout-period noise, reset-period noise, and leakage-current noise of the reset TFT during readout. For the state-of-the-art Passive Pixel Sensor (PPS) array, the readout noise of the TFT switch is considered. Measured noise results are obtained by modeling the array connections with RC ladders on a small in-house fabricated prototype. The results indicate that the pixels in the rows located in the middle part of the array have less random electronic noise at the output of the off-panel charge amplifier than those in the rows at the two edges of the array. These results can help optimize for clearer images as well as help define the region of interest with the best signal-to-noise ratio in an active matrix digital flat-panel imaging array.

  18. Optimization of CMOS image sensor utilizing variable temporal multisampling partial transfer technique to achieve full-frame high dynamic range with superior low light and stop motion capability

    NASA Astrophysics Data System (ADS)

    Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay

    2018-03-01

    Differential binary pixel technology is a threshold-based timing, readout, and image reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) pixel CMOS image sensor to achieve high dynamic range video with stop motion. This technology improves the low-light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using a Taiwan Semiconductor Manufacturing Company 65 nm, 1.1 μm pixel technology, 1-megapixel test chip array and is compared with a traditional 4× oversampling technique using full charge transfer to show the low-light SNR superiority of the presented technology.

  19. Retina-like sensor image coordinates transformation and display

    NASA Astrophysics Data System (ADS)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

    For a new kind of retina-like sensor camera, the image acquisition, coordinate transformation and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, and its relative weights are obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on Visual Studio 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation and interpolation.
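
    The record describes the transform and interpolation only at a high level; the following is a minimal sketch, under an assumed log-polar sampling geometry (rings, sectors, inner radius), of resampling a retina-like image onto a Cartesian grid with bilinear sub-pixel weights. It is illustrative and not the authors' VC++ implementation.

```python
import numpy as np

def logpolar_to_cartesian(polar_img, out_size, r_min=1.0, r_max=None):
    """Hedged sketch: resample a retina-like image stored as (rings x sectors)
    onto a square Cartesian grid using bilinear (sub-pixel) interpolation.
    The log-polar sampling geometry is an assumption, not the paper's."""
    n_rings, n_sectors = polar_img.shape
    if r_max is None:
        r_max = out_size / 2.0
    y, x = np.mgrid[0:out_size, 0:out_size]
    dx, dy = x - out_size / 2.0, y - out_size / 2.0
    r = np.hypot(dx, dy)
    theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)
    # Continuous (ring, sector) coordinates for every output pixel.
    ring = ((np.log(np.clip(r, r_min, r_max)) - np.log(r_min))
            / (np.log(r_max) - np.log(r_min)) * (n_rings - 1))
    sect = theta / (2 * np.pi) * n_sectors
    r0 = np.floor(ring).astype(int)
    s0 = np.floor(sect).astype(int) % n_sectors
    r1, s1 = np.minimum(r0 + 1, n_rings - 1), (s0 + 1) % n_sectors
    fr, fs = ring - np.floor(ring), sect - np.floor(sect)
    # Bilinear weights applied in polar space.
    out = (polar_img[r0, s0] * (1 - fr) * (1 - fs)
           + polar_img[r1, s0] * fr * (1 - fs)
           + polar_img[r0, s1] * (1 - fr) * fs
           + polar_img[r1, s1] * fr * fs)
    out[r < r_min] = 0.0   # central blind spot left empty in this sketch
    return out
```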

  20. A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm

    PubMed Central

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and application of micro/nanosatellites, a small, high accuracy satellite attitude determination system is needed, because the star trackers widely used in large satellites are too large and heavy for installation on micro/nanosatellites. A sun sensor plus magnetometer is a better alternative, but conventional sun sensors have low accuracy and cannot meet the requirements of micro/nanosatellite attitude determination systems, so the development of a small, high accuracy, highly reliable sun sensor is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS detector placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of the single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
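
    For context, the baseline against which refinements such as FMMS are compared is plain intensity-weighted centroiding of each sun spot; a minimal sketch is given below (the threshold and array names are assumptions). Averaging the centroids of many aperture spots is one reason a multi-aperture design can improve accuracy over a single aperture.

```python
import numpy as np

def spot_centroid(img, threshold):
    """Hedged sketch: intensity-weighted centroid of one thresholded sun spot,
    the baseline sub-pixel estimate that mean-shift style methods refine."""
    ys, xs = np.nonzero(img > threshold)
    w = img[ys, xs].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```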

  1. High-resolution CCD imaging alternatives

    NASA Astrophysics Data System (ADS)

    Brown, D. L.; Acker, D. E.

    1992-08-01

    High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. Extending CCD technology beyond broadcast, most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interline CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.

  2. High dynamic range vision sensor for automotive applications

    NASA Astrophysics Data System (ADS)

    Grenet, Eric; Gyger, Steve; Heim, Pascal; Heitger, Friedrich; Kaess, Francois; Nussbaum, Pascal; Ruedi, Pierre-Francois

    2005-02-01

    A 128 x 128 pixel, 120 dB vision sensor extracting at the pixel level the contrast magnitude and direction of local image features is used to implement a lane tracking system. The contrast representation (relative change of illumination) delivered by the sensor is independent of the illumination level. Together with the high dynamic range of the sensor, it ensures a very stable image feature representation even with high spatial and temporal inhomogeneities of the illumination. Image features are dispatched off chip according to their contrast magnitude, prioritizing features with high contrast magnitude. This drastically reduces the amount of data transmitted out of the chip, and hence the processing power required for subsequent processing stages. To compensate for the low fill factor (9%) of the sensor, micro-lenses have been deposited, which increase the sensitivity by a factor of 5, corresponding to an equivalent of 2000 ASA. An algorithm exploiting the contrast representation output by the vision sensor has been developed to estimate the position of a vehicle relative to the road markings. The algorithm first detects the road markings based on the contrast direction map. Then, it performs quadratic fits on selected 3 x 3 pixel kernels to achieve sub-pixel accuracy in the estimation of the lane marking positions. The resulting precision on the estimation of the vehicle's lateral position is 1 cm. The algorithm performs efficiently under a wide variety of environmental conditions, including night and rainy conditions.
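
    The sub-pixel step mentioned above amounts to fitting a quadratic around a local extremum; the one-dimensional version of that refinement is sketched below as an illustration (the paper uses a full 3 x 3 kernel fit, which is not reproduced here).

```python
def subpixel_peak(vm1, v0, vp1):
    """Hedged sketch: 1-D parabolic (quadratic) sub-pixel refinement.
    vm1, v0, vp1 are samples at offsets -1, 0, +1 around a local extremum;
    the return value is the offset of the fitted vertex from the centre sample."""
    denom = vm1 - 2.0 * v0 + vp1
    return 0.0 if denom == 0.0 else 0.5 * (vm1 - vp1) / denom
```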

  3. Characterization of pixel sensor designed in 180 nm SOI CMOS technology

    NASA Astrophysics Data System (ADS)

    Benka, T.; Havranek, M.; Hejtmanek, M.; Jakovenko, J.; Janoska, Z.; Marcisovska, M.; Marcisovsky, M.; Neue, G.; Tomasek, L.; Vrba, V.

    2018-01-01

    A new type of X-ray imaging Monolithic Active Pixel Sensor (MAPS), X-CHIP-02, was developed using a 180 nm deep submicron Silicon On Insulator (SOI) CMOS commercial technology. Two pixel matrices, differing in pixel pitch (50 μm and 100 μm), were integrated into the prototype chip. The X-CHIP-02 contains several test structures, which are useful for characterization of individual blocks. The sensitive part of the pixel, integrated in the handle wafer, is one of the key structures designed for testing. The purpose of this structure is to determine the capacitance of the sensitive part (the diode in the MAPS pixel). The measured capacitance is 2.9 fF for the 50 μm pixel pitch and 4.8 fF for the 100 μm pixel pitch at -100 V (the default operational voltage). This structure was used to measure the IV characteristics of the sensitive diode. In this work, we report on a circuit designed for precise determination of the sensor capacitance and the IV characteristics of both pixel types with respect to X-ray irradiation. The motivation for measuring the sensor capacitance is its importance for the design of front-end amplifier circuits. The design of the pixel elements, as well as circuit simulation and laboratory measurement techniques, are described. The experimental results are of great importance for further development of MAPS sensors in this technology.

  4. In-situ device integration of large-area patterned organic nanowire arrays for high-performance optical sensors

    PubMed Central

    Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng

    2013-01-01

    Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm2, showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels successfully operated at a high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887

  5. High dynamic range pixel architecture for advanced diagnostic medical x-ray imaging applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izadi, Mohammad Hadi; Karim, Karim S.

    2006-05-15

    The most widely used architecture in large-area amorphous silicon (a-Si) flat panel imagers is a passive pixel sensor (PPS), which consists of a detector and a readout switch. While the PPS has the advantage of being compact and amenable toward high-resolution imaging, small PPS output signals are swamped by external column charge amplifier and data line thermal noise, which reduce the minimum readable sensor input signal. In contrast to PPS circuits, on-pixel amplifiers in a-Si technology reduce readout noise to levels that can meet even the stringent requirements for low noise digital x-ray fluoroscopy (<1000 noise electrons). However, larger voltages at the pixel input cause the output of the amplified pixel to become nonlinear, thus reducing the dynamic range. We reported a hybrid amplified pixel architecture based on a combination of PPS and amplified pixel designs that, in addition to low noise performance, also resulted in large-signal linearity and consequently higher dynamic range [K. S. Karim et al., Proc. SPIE 5368, 657 (2004)]. The additional benefit in large-signal linearity, however, came at the cost of an additional pixel transistor. We present an amplified pixel design that achieves the goals of low noise performance and large-signal linearity without the need for an additional pixel transistor. Theoretical calculations and simulation results for noise indicate the applicability of the amplified a-Si pixel architecture for high dynamic range, medical x-ray imaging applications that require switching between low-exposure, real-time fluoroscopy and high-exposure radiography.

  6. Characterization of a hybrid energy-resolving photon-counting detector

    NASA Astrophysics Data System (ADS)

    Zang, A.; Pelzer, G.; Anton, G.; Ballabriga Sune, R.; Bisello, F.; Campbell, M.; Fauler, A.; Fiederle, M.; Llopart Cudie, X.; Ritter, I.; Tennert, F.; Wölfel, S.; Wong, W. S.; Michel, T.

    2014-03-01

    Photon-counting detectors in medical x-ray imaging provide a higher dose efficiency than integrating detectors. Even further possibilities for imaging applications arise if the energy of each photon counted is measured, as for example K-edge imaging or optimizing image quality by applying energy weighting factors. In this contribution, we show results of the characterization of the Dosepix detector. This hybrid photon-counting pixel detector allows energy resolved measurements with a novel concept of energy binning included in the pixel electronics. Based on ideas of the Medipix detector family, it provides three different modes of operation: an integration mode, a photon-counting mode, and an energy-binning mode. In energy-binning mode, it is possible to set 16 energy thresholds in each pixel individually to derive a binned energy spectrum in every pixel in one acquisition. The hybrid setup allows using different sensor materials. For the measurements 300 μm Si and 1 mm CdTe were used. The detector matrix consists of 16 x 16 square pixels for CdTe (16 x 12 for Si) with a pixel pitch of 220 μm. The Dosepix was originally intended for applications in the field of radiation measurement. Therefore it is not optimized towards medical imaging. The detector concept itself still promises potential as an imaging detector. We present spectra measured in one single pixel as well as in the whole pixel matrix in energy-binning mode with a conventional x-ray tube. In addition, results concerning the count rate linearity for the different sensor materials are shown as well as measurements regarding energy resolution.

  7. Self-amplified CMOS image sensor using a current-mode readout circuit

    NASA Astrophysics Data System (ADS)

    Santos, Patrick M.; de Lima Monteiro, Davies W.; Pittet, Patrick

    2014-05-01

    The feature size of CMOS processes has decreased during the past few years, and problems such as reduced dynamic range have become more significant in voltage-mode pixels, even though integrating more functionality inside the pixel has become easier. This work contributes on both sides: a high signal excursion range using current-mode circuits, together with added functionality by performing signal amplification inside the pixel. The classic 3T pixel architecture was rebuilt with small modifications to integrate a transconductance amplifier providing a current as output. A matrix of these new pixels operates as a single large transistor sourcing an amplified current that is used for signal processing. This current is controlled by the intensity of the light received by the matrix, modulated pixel by pixel. The output current can be controlled by the biasing circuits to achieve a very large range of output signal levels. It can also be controlled via the matrix size, which gives a very high degree of freedom on the signal level while observing the current densities inside the integrated circuit. In addition, the matrix can operate at very small integration times. Its applications are those in which fast image processing and high signal amplification are required and low resolution is not a major problem, such as UV image sensors. Simulation results are presented to support operation, control, design, signal excursion levels and linearity for a matrix of pixels conceived using this new sensor concept.

  8. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    PubMed Central

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242

  9. Reduced signal crosstalk multi neurotransmitter image sensor by microhole array structure

    NASA Astrophysics Data System (ADS)

    Ogaeri, Yuta; Lee, You-Na; Mitsudome, Masato; Iwata, Tatsuya; Takahashi, Kazuhiro; Sawada, Kazuaki

    2018-06-01

    A microhole array structure combined with an enzyme immobilization method using magnetic beads can enhance the target discernment capability of a multi-neurotransmitter image sensor. Here we report the fabrication and evaluation of the H+-diffusion-preventing capability of the sensor with the array structure. The structure, made with an SU-8 photoresist, has holes with a size of 24.5 × 31.6 µm2. Sensors were prepared with the array structure at three different heights: 0, 15, and 60 µm. When the sensor has the 60 µm high structure, a 48% reduction in output voltage is measured at an H+-sensitive null pixel located 75 µm from the acetylcholinesterase (AChE)-immobilized pixel, which is the starting point of H+ diffusion. The suppressed H+ migration is shown in a two-dimensional (2D) image in real time. The sensor parameters, such as the height of the array structure and the measuring time, are optimized experimentally. The sensor is expected to effectively distinguish various neurotransmitters in biological samples.

  10. Noise and spectroscopic performance of DEPMOSFET matrix devices for XEUS

    NASA Astrophysics Data System (ADS)

    Treis, J.; Fischer, P.; Hälker, O.; Herrmann, S.; Kohrs, R.; Krüger, H.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Strüder, L.; Trimpl, M.; Wermes, N.; Wölfel, S.

    2005-08-01

    DEPMOSFET based Active Pixel Sensor (APS) matrix devices, originally developed to cope with the challenging requirements of the XEUS Wide Field Imager, have proven to be a promising new imager concept for a variety of future X-ray imaging and spectroscopy missions such as Simbol-X. The devices combine excellent energy resolution, high speed readout and low power consumption with the attractive feature of random accessibility of pixels. A production of sensor prototypes with 64 x 64 pixels, each 75 μm x 75 μm in size, has recently been finished at the MPI semiconductor laboratory in Munich. The devices are built for row-wise readout and require dedicated control and signal processing electronics of the CAMEX type, which is integrated together with the sensor onto a readout hybrid. A number of hybrids incorporating the most promising sensor design variants have been built, and their performance has been studied in detail. A spectroscopic resolution of 131 eV has been measured, and the readout noise is as low as 3.5 e- ENC. Here, the dependence of readout noise and spectroscopic resolution on the device temperature is presented.

  11. Evaluation of a single-pixel one-transistor active pixel sensor for fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Xu, Man; Ou, Hai; Chen, Jun; Wang, Kai

    2015-08-01

    Since it first appeared in the iPhone 5S in 2013, fingerprint identification (ID) has rapidly gained popularity among consumers. Current fingerprint-enabled smartphones universally rely on a discrete sensor to perform fingerprint ID. This architecture not only incurs higher material and manufacturing cost, but also provides only static identification and limited authentication. Hence, as the demand for a thinner, lighter, and more secure handset grows, we propose a novel pixel architecture in which a photosensitive device embedded in a display pixel detects the light reflected from the finger touch, for high-resolution, high-fidelity and dynamic biometrics. To this end, an amorphous silicon (a-Si:H) dual-gate photo TFT working in both a fingerprint-imaging mode and a display-driving mode will be developed.

  12. Fully depleted CMOS pixel sensor development and potential applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudot, J.; Kachel, M.; CNRS, UMR7178, 67037 Strasbourg

    CMOS pixel sensors are often contrasted with hybrid pixel sensors due to their very different sensitive layer. In standard CMOS imaging processes, a thin (about 20 μm) low resistivity epitaxial layer acts as the sensitive volume and charge collection is mostly driven by thermal diffusion. In contrast, the so-called hybrid pixel technology exploits a thick (typically 300 μm) silicon sensor with high resistivity, allowing for the depletion of this volume, so charges drift toward the collecting electrodes. But this difference is fading away with the recent availability of CMOS imaging processes based on a relatively thick (about 50 μm) high resistivity epitaxial layer which allows for full depletion. This evolution extends the range of applications for CMOS pixel sensors where their known assets, high sensitivity and granularity combined with embedded signal treatment, could potentially foster breakthroughs in detection performance for specific scientific instruments. One such domain is X-ray detection at soft energies, typically below 10 keV, where the thin sensitive layer previously severely impeded CMOS sensor usage. Another application becoming realistic for CMOS sensors is detection in environments with a high fluence of non-ionizing radiation, such as hadron colliders. However, for highly demanding applications it remains to be proven that the micro-circuits required to uniformly deplete the sensor at the pixel level do not compromise the required sensitivity and efficiency. Prototype sensors in two different technologies, with resistivity higher than 1 kΩ·cm, a sensitive layer between 40 and 50 μm and pixel pitches in the range 25 to 50 μm, have been designed and fabricated. Various biasing architectures were adopted to reach full depletion with only a few volts. Laboratory investigations with three types of sources (X-rays, β-rays and infrared light) demonstrated the validity of the approach with respect to depletion while keeping a low noise figure. In particular, an energy resolution of about 400 eV for 5 keV X-rays was obtained for single pixels. The prototypes were then exposed to gradually increased neutron fluences, from 10¹³ to 5×10¹⁴ neq/cm². Laboratory tests again allowed the persistence of the signal-over-noise ratio to be evaluated on the different pixels implemented. Currently our development mostly targets the detection of soft X-rays, with the ambition to develop a pixel sensor matching the counting rates affordable with hybrid pixel sensors, but with an extended sensitivity to low energy and a finer pixel of about 25 × 25 μm². The original readout architecture proposed relies on a two-tier chip. The first tier consists of a sensor with a modest dynamic range in order to ensure the low noise performance required for sensitivity. The interconnected second-tier chip enhances the readout speed by introducing massive parallelization. Performances reachable with this strategy combining counting and integration will be detailed. (authors)

  13. Three-dimensional cascaded system analysis of a 50 µm pixel pitch wafer-scale CMOS active pixel sensor x-ray detector for digital breast tomosynthesis.

    PubMed

    Zhao, C; Vassiljev, N; Konstantinidis, A C; Speller, R D; Kanicki, J

    2017-03-07

    High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm⁻¹) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.

  14. Three-dimensional cascaded system analysis of a 50 µm pixel pitch wafer-scale CMOS active pixel sensor x-ray detector for digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zhao, C.; Vassiljev, N.; Konstantinidis, A. C.; Speller, R. D.; Kanicki, J.

    2017-03-01

    High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm⁻¹) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
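
    Both records above evaluate the detector through MTF, NNPS and DQE; for reference, the standard frequency-domain relation that ties these quantities together is DQE(f) = MTF(f)² / (q · NNPS(f)), with q the incident photon fluence. A minimal sketch with assumed array inputs is given below; it is not the authors' 3D cascaded-system code.

```python
import numpy as np

def dqe(mtf, nnps, photon_fluence):
    """Hedged sketch of the standard relation DQE(f) = MTF(f)^2 / (q * NNPS(f)),
    with q the incident photon fluence (photons/mm^2) and NNPS the noise power
    spectrum normalized by the squared mean signal (units of mm^2).
    `mtf` and `nnps` are arrays sampled at the same spatial frequencies."""
    mtf = np.asarray(mtf, dtype=float)
    nnps = np.asarray(nnps, dtype=float)
    return mtf ** 2 / (photon_fluence * nnps)
```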

  15. Plenoptic mapping for imaging and retrieval of the complex field amplitude of a laser beam.

    PubMed

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C

    2016-12-26

    The plenoptic sensor has been developed to sample complicated beam distortions produced by turbulence in the low atmosphere (deep turbulence or strong turbulence) with high density data samples. In contrast with the conventional Shack-Hartmann wavefront sensor, which utilizes all the pixels under each lenslet of a micro-lens array (MLA) to obtain one data sample indicating sub-aperture phase gradient and photon intensity, the plenoptic sensor uses each illuminated pixel (with significant pixel value) under each MLA lenslet as a data point for local phase gradient and intensity. To characterize the working principle of a plenoptic sensor, we propose the concept of plenoptic mapping and its inverse mapping to describe the imaging and reconstruction process respectively. As a result, we show that the plenoptic mapping is an efficient method to image and reconstruct the complex field amplitude of an incident beam with just one image. With a proof of concept experiment, we show that adaptive optics (AO) phase correction can be instantaneously achieved without going through a phase reconstruction process under the concept of plenoptic mapping. The plenoptic mapping technology has high potential for applications in imaging, free space optical (FSO) communication and directed energy (DE) where atmospheric turbulence distortion needs to be compensated.

  16. Performance of a Medipix3RX spectroscopic pixel detector with a high resistivity gallium arsenide sensor.

    PubMed

    Hamann, Elias; Koenig, Thomas; Zuber, Marcus; Cecilia, Angelica; Tyazhev, Anton; Tolbanov, Oleg; Procz, Simon; Fauler, Alex; Baumbach, Tilo; Fiederle, Michael

    2015-03-01

    High resistivity gallium arsenide is considered a suitable sensor material for spectroscopic X-ray imaging detectors. These sensors typically have thicknesses between a few hundred μm and 1 mm to ensure a high photon detection efficiency. However, for small pixel sizes down to several tens of μm, an effect called charge sharing reduces a detector's spectroscopic performance. The recently developed Medipix3RX readout chip overcomes this limitation by implementing a charge summing circuit, which allows the reconstruction of the full energy information of a photon interaction in a single pixel. In this work, we present the characterization of the first Medipix3RX detector assembly with a 500 μm thick high resistivity, chromium compensated gallium arsenide sensor. We analyze its properties and demonstrate the functionality of the charge summing mode by means of energy response functions recorded at a synchrotron. Furthermore, the imaging properties of the detector, in terms of its modulation transfer functions and signal-to-noise ratios, are investigated. After more than one decade of attempts to establish gallium arsenide as a sensor material for photon counting detectors, our results represent a breakthrough in obtaining detector-grade material. The sensor we introduce is therefore suitable for high resolution X-ray imaging applications.

  17. Front end optimization for the monolithic active pixel sensor of the ALICE Inner Tracking System upgrade

    NASA Astrophysics Data System (ADS)

    Kim, D.; Aglieri Rinella, G.; Cavicchioli, C.; Chanlek, N.; Collu, A.; Degerli, Y.; Dorokhov, A.; Flouzat, C.; Gajanana, D.; Gao, C.; Guilloux, F.; Hillemanns, H.; Hristozkov, S.; Junique, A.; Keil, M.; Kofarago, M.; Kugathasan, T.; Kwon, Y.; Lattuca, A.; Mager, M.; Sielewicz, K. M.; Marin Tobon, C. A.; Marras, D.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Pham, T. H.; Puggioni, C.; Reidt, F.; Riedler, P.; Rousset, J.; Siddhanta, S.; Snoeys, W.; Song, M.; Usai, G.; Van Hoorne, J. W.; Yang, P.

    2016-02-01

    ALICE plans to replace its Inner Tracking System during the second long shutdown of the LHC in 2019 with a new 10 m² tracker constructed entirely with monolithic active pixel sensors. The TowerJazz 180 nm CMOS imaging sensor process has been selected to produce the sensor, as it offers a deep p-well allowing full CMOS in-pixel circuitry, as well as different starting materials. First full-scale prototypes have been fabricated and tested. Radiation tolerance has also been verified. In this paper the development of the charge-sensitive front end, and in particular its optimization for uniformity of charge threshold and time response, is presented.

  18. ASTER First Views of Red Sea, Ethiopia - Thermal-Infrared TIR Image monochrome

    NASA Image and Video Library

    2000-03-11

    ASTER succeeded in acquiring this image at night, which is something the Visible/Near Infrared (VNIR) and Shortwave Infrared (SWIR) sensors cannot do. The scene covers the Red Sea coastline to an inland area of Ethiopia. White pixels represent areas with higher temperature material on the surface, while dark pixels indicate lower temperatures. This image shows ASTER's ability as a highly sensitive, temperature-discerning instrument and the first spaceborne TIR multi-band sensor in history. Image size: approximately 60 km x 60 km; ground resolution: approximately 90 m x 90 m. http://photojournal.jpl.nasa.gov/catalog/PIA02452

  19. Design of a Low-Light-Level Image Sensor with On-Chip Sigma-Delta Analog-to- Digital Conversion

    NASA Technical Reports Server (NTRS)

    Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.; Fossum, Eric R.

    1993-01-01

    The design and projected performance of a low-light-level active-pixel-sensor (APS) chip with semi-parallel analog-to-digital (A/D) conversion is presented. The individual elements have been fabricated and tested using MOSIS 2 micrometer CMOS technology, although the integrated system has not yet been fabricated. The imager consists of a 128 x 128 array of active pixels at a 50 micrometer pitch. Each column of pixels shares a 10-bit A/D converter based on first-order oversampled sigma-delta (Sigma-Delta) modulation. The 10-bit outputs of each converter are multiplexed and read out through a single set of outputs. A semi-parallel architecture is chosen to achieve 30 frames/second operation even at low light levels. The sensor is designed for less than 12 e^- rms noise performance.
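
    As an illustration of the conversion principle used by the per-column converters described above, the sketch below implements a plain first-order sigma-delta modulator with the simplest possible decimation (bit counting); the loop parameters are assumptions, not the chip's design values.

```python
def sigma_delta_first_order(x, n_cycles=1024):
    """Hedged sketch: first-order sigma-delta modulation of a constant input
    x in [0, 1), decimated by simple bit counting. Loop parameters are
    illustrative assumptions, not the chip's design values."""
    integrator, feedback, ones = 0.0, 0.0, 0
    for _ in range(n_cycles):
        integrator += x - feedback           # integrate input minus feedback
        bit = 1 if integrator >= 0.5 else 0  # 1-bit quantizer
        feedback = float(bit)                # 1-bit DAC in the feedback path
        ones += bit
    return ones / n_cycles                   # bit density approximates x

# Example: a mid-scale pixel value is recovered from the bit stream.
print(sigma_delta_first_order(0.3))          # approximately 0.3
```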

  20. Neural Network for Image-to-Image Control of Optical Tweezers

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Anderson, Robert C.; Weiland, Kenneth E.; Wrbanek, Susan Y.

    2004-01-01

    A method is discussed for using neural networks to control optical tweezers. Neural-net outputs are combined with scaling and tiling to generate 480 by 480-pixel control patterns for a spatial light modulator (SLM). The SLM can be combined in various ways with a microscope to create movable tweezers traps with controllable profiles. The neural nets are intended to respond to scattered light from carbon and silicon carbide nanotube sensors. The nanotube sensors are to be held by the traps for manipulation and calibration. Scaling and tiling allow the 100 by 100-pixel maximum resolution of the neural-net software to be applied in stages to exploit the full 480 by 480-pixel resolution of the SLM. One of these stages is intended to create sensitive null detectors for detecting variations in the scattered light from the nanotube sensors.

  1. Time-of-flight camera via a single-pixel correlation image sensor

    NASA Astrophysics Data System (ADS)

    Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua

    2018-04-01

    A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the 'four bucket principle' method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement of the reconstructions.
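
    The 'four bucket principle' referred to above is the standard four-phase demodulation used in continuous-wave time-of-flight imaging; a minimal sketch is shown below. The compressed-sensing reconstruction and second-order correlation steps of the paper are not reproduced.

```python
import numpy as np

def four_bucket_depth(c0, c90, c180, c270, mod_freq_hz):
    """Hedged sketch: standard 'four bucket' demodulation for continuous-wave
    time-of-flight. c0..c270 are correlation samples taken at reference phases
    of 0, 90, 180 and 270 degrees (scalars or arrays)."""
    c = 299_792_458.0                                # speed of light, m/s
    phase = np.mod(np.arctan2(c270 - c90, c0 - c180), 2 * np.pi)
    return c * phase / (4 * np.pi * mod_freq_hz)     # distance in metres
```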

  2. IR sensitivity enhancement of CMOS Image Sensor with diffractive light trapping pixels.

    PubMed

    Yokogawa, Sozo; Oshiyama, Itaru; Ikeda, Harumi; Ebiko, Yoshiki; Hirano, Tomoyuki; Saito, Suguru; Oinoue, Takashi; Hagimoto, Yoshiya; Iwamoto, Hayato

    2017-06-19

    We report on the IR sensitivity enhancement of a back-illuminated CMOS Image Sensor (BI-CIS) with a 2-dimensional diffractive inverted pyramid array structure (IPA) on crystalline silicon (c-Si) and deep trench isolation (DTI). FDTD simulations of semi-infinite thick c-Si having 2D IPAs on its surface with pitches over 400 nm show more than 30% improvement of light absorption at λ = 850 nm, and a maximum enhancement of 43% with a 540 nm pitch at that wavelength is confirmed. A prototype BI-CIS sample with a pixel size of 1.2 μm square containing 400 nm pitch IPAs shows 80% sensitivity enhancement at λ = 850 nm compared to the reference sample with a flat surface. This is due to diffraction at the IPA and total reflection at the pixel boundary. The NIR images taken by the demo camera equipped with a C-mount lens show 75% sensitivity enhancement in the λ = 700-1200 nm wavelength range with negligible spatial resolution degradation. Light-trapping CIS pixel technology promises to improve NIR sensitivity and appears to be applicable to many different image sensor applications including security cameras, personal authentication, and range-finding time-of-flight cameras with IR illumination.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, Julian; Tate, Mark W.; Shanks, Katherine S.

    Pixel Array Detectors (PADs) consist of an x-ray sensor layer bonded pixel-by-pixel to an underlying readout chip. This approach allows both the sensor and the custom pixel electronics to be tailored independently to best match the x-ray imaging requirements. Here we describe the hybridization of CdTe sensors to two different charge-integrating readout chips, the Keck PAD and the Mixed-Mode PAD (MM-PAD), both developed previously in our laboratory. The charge-integrating architecture of each of these PADs extends the instantaneous counting rate by many orders of magnitude beyond that obtainable with photon counting architectures. The Keck PAD chip consists of rapid, 8-frame, in-pixel storage elements with framing periods <150 ns. The second detector, the MM-PAD, has an extended dynamic range by utilizing an in-pixel overflow counter coupled with charge removal circuitry activated at each overflow. This allows the recording of signals from the single-photon level to tens of millions of x-rays/pixel/frame while framing at 1 kHz. Both detector chips consist of a 128×128 pixel array with (150 µm)² pixels.

  4. Tests of monolithic active pixel sensors at national synchrotron light source

    NASA Astrophysics Data System (ADS)

    Deptuch, G.; Besson, A.; Carini, G. A.; Siddons, D. P.; Szelezniak, M.; Winter, M.

    2007-01-01

    The paper discusses basic characterization of Monolithic Active Pixel Sensors (MAPS) carried out at the X12A beamline at the National Synchrotron Light Source (NSLS), Upton, NY, USA. The tested device was a MIMOSA V (MV) chip, back-thinned down to the epitaxial layer. This 1-Mpixel device features a pixel size of 17 × 17 μm² and was designed in a 0.6 μm CMOS process. The X-ray beam energies used range from 5 to 12 keV. Examples of direct X-ray imaging capabilities are presented.

  5. Image sensor with high dynamic range linear output

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)

    2007-01-01

    Designs and operational methods to increase the dynamic range of image sensors, and APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, thus resulting in multiple integration times. The operation methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, high-speed, high-resolution analog-to-digital conversion can be implemented.
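
    The patent abstract above describes obtaining several integration times per pixel within one frame; one common way such samples can be combined into a single linear HDR estimate is sketched below. This is a generic illustration with an assumed saturation level, not the patented readout scheme.

```python
import numpy as np

def combine_integrations(samples, t_int, full_well=4095):
    """Hedged sketch: combine several readouts of one frame taken with different
    integration times into a single linear estimate, keeping for each pixel the
    longest unsaturated sample normalized by its integration time.
    samples: array (n_times, H, W); t_int: increasing times in seconds."""
    samples = np.asarray(samples, dtype=float)
    t = np.asarray(t_int, dtype=float)
    unsat = samples < full_well
    # Index of the last (longest) unsaturated sample per pixel; falls back to 0.
    idx = np.where(unsat.any(axis=0), unsat.cumsum(axis=0).argmax(axis=0), 0)
    chosen = np.take_along_axis(samples, idx[None], axis=0)[0]
    return chosen / t[idx]
```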

  6. A high sensitivity 20Mfps CMOS image sensor with readout speed of 1Tpixel/sec for visualization of ultra-high speed phenomena

    NASA Astrophysics Data System (ADS)

    Kuroda, R.; Sugawa, S.

    2017-02-01

    Ultra-high speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array for the visualization of UHS phenomena are overviewed in this paper. The developed UHS CMOS image sensors consist of 400(H) × 256(V) pixels and 128 memories/pixel, and a readout speed of 1 Tpixel/sec is obtained, enabling 10 Mfps full-resolution video capture with 128 consecutive frames, and 20 Mfps half-resolution video capture with 256 consecutive frames. The first development model was employed in a high-speed video camera and put into practical use in 2012. Through the development of dedicated process technologies, photosensitivity improvement and power consumption reduction were achieved simultaneously, and the performance-improved version has been used in a commercial high-speed video camera since 2015, offering 10 Mfps with ISO 16,000 photosensitivity. Due to the improved photosensitivity, clear images can be captured and analyzed even under low light conditions, such as under a microscope, as well as in capturing UHS light emission phenomena.

  7. IR CMOS: near infrared enhanced digital imaging (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Pralle, Martin U.; Carey, James E.; Joy, Thomas; Vineis, Chris J.; Palsule, Chintamani

    2015-08-01

    SiOnyx has demonstrated imaging at light levels below 1 mlux (moonless starlight) at video frame rates with a 720p CMOS image sensor in a compact, low latency camera. Low light imaging is enabled by the combination of enhanced quantum efficiency in the near infrared together with state-of-the-art low noise image sensor design. The quantum efficiency enhancements are achieved by applying Black Silicon, SiOnyx's proprietary ultrafast laser semiconductor processing technology. In the near infrared, silicon's native indirect bandgap results in low absorption coefficients and long absorption lengths. The Black Silicon nanostructured layer fundamentally disrupts this paradigm by enhancing the absorption of light within a thin pixel layer, making 5 microns of silicon equivalent to over 300 microns of standard silicon. This results in a demonstrated 10-fold improvement in near infrared sensitivity over incumbent imaging technology while maintaining complete compatibility with standard CMOS image sensor process flows. Applications include surveillance, night vision, and 1064 nm laser see-spot imaging. Imaging performance metrics will be discussed. Demonstrated performance characteristics: pixel size 5.6 and 10 μm; array size 720p/1.3 Mpix; frame rate 60 Hz; read noise 2 e-/pixel; spectral sensitivity 400 to 1200 nm (with 10x QE at 1064 nm); daytime imaging in color (Bayer pattern); nighttime imaging under moonless starlight conditions; 1064 nm laser imaging in daytime out to 2 km.

  8. Integrated imaging sensor systems with CMOS active pixel sensor technology

    NASA Technical Reports Server (NTRS)

    Yang, G.; Cunningham, T.; Ortiz, M.; Heynssens, J.; Sun, C.; Hancock, B.; Seshadri, S.; Wrigley, C.; McCarty, K.; Pain, B.

    2002-01-01

    This paper discusses common approaches to CMOS APS technology, as well as specific results on the five-wire programmable digital camera-on-a-chip developed at JPL. The paper also reports recent research in the design, operation, and performance of APS imagers for several imager applications.

  9. Demonstration of the CDMA-mode CAOS smart camera.

    PubMed

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, unattenuated true target image of a focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode is successfully demonstrated, using Walsh-design CAOS pixel codes of up to 4096 bits length with a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel of 13.68 μm per side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for unattenuated, unspoiled, bright, spectrally diverse targets.
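
    The CDMA-mode acquisition described above assigns each agile pixel its own Walsh code and recovers pixel irradiances by time-domain correlation; a toy numerical sketch of that encode/decode step is given below. The code length and pixel count are arbitrary, and DMD timing, attenuation and ADC behavior are omitted.

```python
import numpy as np
from scipy.linalg import hadamard

def cdma_encode_decode(pixel_irradiance, code_len=64):
    """Hedged sketch in the spirit of CDMA-mode acquisition: each agile pixel is
    toggled with its own Walsh (Hadamard-row) code, the point detector records
    the summed time series, and the per-pixel irradiances are recovered by
    correlation. DMD timing, attenuation and ADC behaviour are omitted."""
    x = np.asarray(pixel_irradiance, dtype=float)   # N pixel irradiances
    codes = hadamard(code_len)[:len(x)]             # one +/-1 code per pixel
    detector_signal = codes.T @ x                   # time series at the detector
    return (codes @ detector_signal) / code_len     # correlation decoding

print(cdma_encode_decode([3.0, 1.0, 0.5, 2.0]))     # ~[3.0, 1.0, 0.5, 2.0]
```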

  10. Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging.

    PubMed

    Esposito, M; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Evans, P M; Allinson, N M; Wells, K

    2014-07-07

    Recently CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer-scale CMOS APSs being monolithic, sources of non-uniformity of response and regional variations can persist, representing a significant challenge for wafer-scale sensor response. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation of amplification, variation between readout components, wafer defects and process variations across the wafer due to manufacturing processes. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer-scale stitched CMOS APS. For the first time a per-pixel analysis of the electro-optical performance of a wafer-scale CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer-scale sensors. A complete model of the signal generation in the pixel array has been provided and proved capable of accounting for noise and gain variations across the pixel array. This novel analysis leads to readout noise and conversion gain being evaluated at pixel level, at stitching block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance has been further investigated in a typical x-ray application, i.e. mammography, showing a uniformity in terms of CNR among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently used (i.e. FPIs), theoretical evaluation of the detective quantum efficiency (DQE) at zero frequency has been performed, resulting in a higher DQE for this detector compared to FPIs. Optical characterization, x-ray contrast measurements and theoretical DQE evaluation suggest that a trade-off can be found between the need for a large imaging area and the requirement of uniform imaging performance, making the DynAMITe large-area CMOS APS suitable for a range of bio-medical applications.

  11. Characterisation of GaAs:Cr pixel sensors coupled to Timepix chips in view of synchrotron applications

    NASA Astrophysics Data System (ADS)

    Ponchut, C.; Cotte, M.; Lozinskaya, A.; Zarubin, A.; Tolbanov, O.; Tyazhev, A.

    2017-12-01

    In order to meet the needs of some ESRF beamlines for highly efficient 2D X-ray detectors in the 20-50 keV range, GaAs:Cr pixel sensors coupled to TIMEPIX readout chips were implemented into a MAXIPIX detector. Use of the GaAs:Cr sensor material is intended to overcome the limitations of Si (low absorption) and of CdTe (fluorescence) in this energy range. The GaAs:Cr sensor assemblies were characterised with both laboratory X-ray sources and monochromatic synchrotron X-ray beams. The sensor response as a function of bias voltage was compared to a theoretical model, leading to an estimation of the μτ product of electrons in the GaAs:Cr sensor material of 1.6×10⁻⁴ cm²/V. The spatial homogeneity of X-ray images obtained with the sensors was measured in different irradiation conditions, showing a particular sensitivity to small variations in the incident beam spectrum. 2D-resolved elemental mapping of the sensor surface was carried out to investigate a possible relation between the noise pattern observed in X-ray images and local fluctuations in chemical composition. Scanning the sensor response at the subpixel scale revealed that these irregularities can be correlated with a distortion of the effective pixel shapes.

  12. Development of pixellated Ir-TESs

    NASA Astrophysics Data System (ADS)

    Zen, Nobuyuki; Takahashi, Hiroyuki; Kunieda, Yuichi; Damayanthi, Rathnayaka M. T.; Mori, Fumiakira; Fujita, Kaoru; Nakazawa, Masaharu; Fukuda, Daiji; Ohkubo, Masataka

    2006-04-01

    We have been developing Ir-based pixellated superconducting transition edge sensors (TESs). For materials or astronomical applications, a sensor with an energy resolution of a few eV and an imaging capability of over 1000 pixels is desired. In order to achieve this goal, we have been analyzing signals from pixellated TESs. In the case of a 20-pixel array of Ir-TESs with 45 μm × 45 μm pixel sizes, the incident X-ray signals have been classified into 16 groups. We have applied numerical signal analysis. The raw energy resolution of our pixellated TES is strongly degraded; however, using pulse shape analysis, we can dramatically improve the resolution. We therefore consider that pulse signal analysis will allow this device to be used as a practical TES that identifies photon incidence positions.

  13. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to one, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill factor. 3D vision, which relies on stereo or on time-of-flight high-speed circuitry, will also benefit from scaled-down CMOS technologies, both because of their size as well as their higher speed.

  14. A sun-crown-sensor model and adapted C-correction logic for topographic correction of high resolution forest imagery

    NASA Astrophysics Data System (ADS)

    Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.

    2014-10-01

    Canopy shadowing mediated by topography is an important source of radiometric distortion in remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model significantly improves on corrections based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent in fine resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model, the sun-crown-sensor (SCnS) model, based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels, forming another variety of the SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between the terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes. We analyzed these improvements with model comparison on the red and near infrared bands. The advantages of SCnS + C and SCnS + W on both bands are expected to facilitate forest classification and change detection applications.
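
    For reference, the C-correction logic that the SCnS + C model adapts is, in its standard sun-canopy-sensor form (SCS + C), a simple per-pixel rescaling of radiance; a sketch under assumed inputs is given below. The crown-level geometry and the shadow-fraction weighting introduced in the paper are not reproduced.

```python
import numpy as np

def scs_c_correction(radiance, cos_i, slope_deg, solar_zenith_deg, c):
    """Hedged sketch of the standard SCS+C topographic correction that the
    SCnS+C model above adapts:
        L_corr = L * (cos(slope) * cos(theta_s) + C) / (cos(i) + C),
    where cos_i is the cosine of the local illumination angle and C is the
    band-specific empirical parameter from the radiance-vs-cos(i) regression."""
    slope = np.deg2rad(slope_deg)
    theta_s = np.deg2rad(solar_zenith_deg)
    return radiance * (np.cos(slope) * np.cos(theta_s) + c) / (cos_i + c)
```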

  15. Light-controlled biphasic current stimulator IC using CMOS image sensors for high-resolution retinal prosthesis and in vitro experimental results with rd1 mouse.

    PubMed

    Oh, Sungjin; Ahn, Jae-Hyun; Lee, Sangmin; Ko, Hyoungho; Seo, Jong Mo; Goo, Yong-Sook; Cho, Dong-il Dan

    2015-01-01

    Retinal prosthetic devices stimulate retinal nerve cells with electrical signals proportional to the incident light intensities. For a high-resolution retinal prosthesis, it is necessary to reduce the size of the stimulator pixels as much as possible, because the retinal nerve cells are concentrated in a small area of approximately 5 mm × 5 mm. In this paper, a miniaturized biphasic current stimulator integrated circuit is developed for subretinal stimulation and tested in vitro. The stimulator pixel is miniaturized by using a complementary metal-oxide-semiconductor (CMOS) image sensor composed of three transistors. Compared to a pixel that uses a four-transistor CMOS image sensor, this new design reduces the pixel size by 8.3%. The pixel size is further reduced by simplifying the stimulation-current generating circuit, which provides a 43.9% size reduction when compared to the design reported to be the most advanced version to date for subretinal stimulation. The proposed design is fabricated using a 0.35 μm bipolar-CMOS-DMOS process. Each pixel is designed to fit in a 50 μm × 55 μm area, which theoretically allows implementing more than 5000 pixels in the 5 mm × 5 mm area. Experimental results show that a biphasic current in the range of 0 to 300 μA at 12 V can be generated as a function of incident light intensities. Results from in vitro experiments with rd1 mice indicate that the proposed method can be effectively used for retinal prosthesis with a high resolution.

  16. Hyperspectral Image Classification via Kernel Sparse Representation

    DTIC Science & Technology

    2013-01-01

    ...classification algorithms. Moreover, the spatial coherency across neighboring pixels is also incorporated through a kernelized joint sparsity model, where all of the pixels within a small neighborhood are jointly represented in the feature space by selecting a few common training samples. Keywords: hyperspectral imagery, joint sparsity model, kernel methods, sparse representation.

  17. Design and fabrication of vertically-integrated CMOS image sensors.

    PubMed

    Skorka, Orit; Joseph, Dileepan

    2011-01-01

    Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors.

  18. Design and Fabrication of Vertically-Integrated CMOS Image Sensors

    PubMed Central

    Skorka, Orit; Joseph, Dileepan

    2011-01-01

    Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors. PMID:22163860

  19. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
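
    RHSEG itself is considerably more sophisticated (recursive subdivision, region classes, operational data volumes); the toy sketch below only illustrates the elementary best-merge region-growing step it is built around, with a simple mean-difference dissimilarity and hypothetical helper names.

    ```python
    import numpy as np

    def region_grow(image, n_regions):
        """Toy region growing: merge the most similar 4-connected regions
        until only n_regions remain. image is a 2-D array of pixel values."""
        h, w = image.shape
        labels = np.arange(h * w).reshape(h, w)          # start with one region per pixel
        means = {int(l): float(v) for l, v in zip(labels.ravel(), image.ravel())}
        sizes = {int(l): 1 for l in labels.ravel()}

        def adjacent_pairs():
            pairs = set()
            for y in range(h):
                for x in range(w):
                    for dy, dx in ((0, 1), (1, 0)):
                        if y + dy < h and x + dx < w:
                            a, b = labels[y, x], labels[y + dy, x + dx]
                            if a != b:
                                pairs.add((min(a, b), max(a, b)))
            return pairs

        while len(means) > n_regions:
            # pick the spatially adjacent pair with the smallest dissimilarity
            a, b = min(adjacent_pairs(), key=lambda p: abs(means[p[0]] - means[p[1]]))
            # merge region b into region a and update its mean value
            total = means[a] * sizes[a] + means[b] * sizes[b]
            sizes[a] += sizes[b]
            means[a] = total / sizes[a]
            labels[labels == b] = a
            del means[b], sizes[b]
        return labels
    ```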

  20. Room temperature infrared imaging sensors based on highly purified semiconducting carbon nanotubes.

    PubMed

    Liu, Yang; Wei, Nan; Zhao, Qingliang; Zhang, Dehui; Wang, Sheng; Peng, Lian-Mao

    2015-04-21

    High performance infrared (IR) imaging systems usually require expensive cooling systems, which are highly undesirable. Here we report the fabrication and performance characteristics of room temperature carbon nanotube (CNT) IR imaging sensors. The CNT IR imaging sensor is based on aligned semiconducting CNT films with 99% purity, and each pixel or device of the imaging sensor consists of aligned strips of CNT asymmetrically contacted by Sc and Pd. We found that the performance of the device is dependent on the CNT channel length. While short channel devices provide a large photocurrent and a rapid response of about 110 μs, long channel length devices exhibit a low dark current and a high signal-to-noise ratio, which are critical for obtaining high detectivity. In total, 36 CNT IR imagers are constructed on a single chip, each consisting of a 3 × 3 pixel array. The demonstrated advantages of constructing a high performance IR system using purified semiconducting CNT aligned films include, among other things, fast response, excellent stability and uniformity, ideal linear photocurrent response, high imaging polarization sensitivity and low power consumption.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Domengie, F., E-mail: florian.domengie@st.com; Morin, P.; Bauza, D.

    We propose a model for dark current induced by metallic contamination in a CMOS image sensor. Based on Shockley-Read-Hall kinetics, the expression of dark current proposed accounts for the electric field enhanced emission factor due to the Poole-Frenkel barrier lowering and phonon-assisted tunneling mechanisms. To that aim, we considered the distribution of the electric field magnitude and metal atoms in the depth of the pixel. Poisson statistics were used to estimate the random distribution of metal atoms in each pixel for a given contamination dose. Then, we performed a Monte-Carlo-based simulation for each pixel to set the number of metal atoms the pixel contained and the enhancement factor each atom underwent, and obtained a histogram of the number of pixels versus dark current for the full sensor. Excellent agreement with the dark current histogram measured on an ion-implanted gold-contaminated imager has been achieved, in particular, for the description of the distribution tails due to the pixel regions in which the contaminant atoms undergo a large electric field. The agreement remains very good when increasing the temperature by 15 °C. We demonstrated that the amplification of the dark current generated for the typical electric fields encountered in the CMOS image sensors, which depends on the nature of the metal contaminant, may become very large at high electric field. The electron and hole emissions and the resulting enhancement factor are described as a function of the trap characteristics, electric field, and temperature.
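
    The statistical core of such a model can be sketched as follows: a Poisson-distributed number of contaminant atoms per pixel, each contributing a dark-current rate multiplied by a random field-enhancement factor, with the per-pixel totals binned into a histogram. The log-normal enhancement distribution and the numeric constants below are placeholders, not the Shockley-Read-Hall/Poole-Frenkel expressions of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_pixels = 200_000        # pixels in the simulated sensor
    mean_atoms = 0.5          # mean contaminant atoms per pixel (hypothetical dose)
    base_rate = 20.0          # dark current per atom at low field, e-/s (placeholder)

    # Poisson-distributed number of metal atoms in each pixel
    atoms = rng.poisson(mean_atoms, n_pixels)

    # Each atom gets a random field-enhancement factor; a log-normal distribution is
    # used here purely as a stand-in for the field-enhanced emission of the model
    dark = np.zeros(n_pixels)
    hit = atoms > 0
    dark[hit] = [base_rate * rng.lognormal(0.0, 1.0, n).sum() for n in atoms[hit]]

    # Histogram of the number of pixels versus dark current, to compare with measurement
    counts, edges = np.histogram(dark, bins=200)
    ```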

  2. Compressive hyperspectral sensor for LWIR gas detection

    NASA Astrophysics Data System (ADS)

    Russell, Thomas A.; McMackin, Lenore; Bridge, Bob; Baraniuk, Richard

    2012-06-01

    Focal plane arrays with associated electronics and cooling are a substantial portion of the cost, complexity, size, weight, and power requirements of Long-Wave IR (LWIR) imagers. Hyperspectral LWIR imagers add significant data volume burden as they collect a high-resolution spectrum at each pixel. We report here on a LWIR Hyperspectral Sensor that applies Compressive Sensing (CS) in order to achieve benefits in these areas. The sensor applies single-pixel detection technology demonstrated by Rice University. The single-pixel approach uses a Digital Micro-mirror Device (DMD) to reflect and multiplex the light from a random assortment of pixels onto the detector. This is repeated for a number of measurements much less than the total number of scene pixels. We have extended this architecture to hyperspectral LWIR sensing by inserting a Fabry-Perot spectrometer in the optical path. This compressive hyperspectral imager collects all three dimensions on a single detection element, greatly reducing the size, weight and power requirements of the system relative to traditional approaches, while also reducing data volume. The CS architecture also supports innovative adaptive approaches to sensing, as the DMD device allows control over the selection of spatial scene pixels to be multiplexed on the detector. We are applying this advantage to the detection of plume gases, by adaptively locating and concentrating target energy. A key challenge in this system is the diffraction loss produced by the DMD in the LWIR. We report the results of testing DMD operation in the LWIR, as well as system spatial and spectral performance.
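
    A toy sketch of the single-pixel measurement model described above: each DMD pattern directs a random subset of scene pixels onto the detector, and far fewer measurements than scene pixels are recorded. The sparse-recovery reconstruction step is omitted, and the sizes and names are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_pixels = 64 * 64           # scene pixels (flattened)
    n_meas = n_pixels // 4       # compressive: 4x fewer measurements than pixels

    scene = rng.random(n_pixels)                        # stand-in for the LWIR scene
    patterns = rng.integers(0, 2, (n_meas, n_pixels))   # random binary DMD patterns

    # Each measurement is the summed light from the mirrors switched towards the detector
    measurements = patterns @ scene

    # A sparse-recovery solver (l1 minimisation, OMP, ...) would then reconstruct the
    # scene from (patterns, measurements); that step is not shown here.
    ```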

  3. Is flat fielding safe for precision CCD astronomy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumer, Michael; Davis, Christopher P.; Roodman, Aaron

    The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.

  4. Is flat fielding safe for precision CCD astronomy?

    DOE PAGES

    Baumer, Michael; Davis, Christopher P.; Roodman, Aaron

    2017-07-06

    The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.

  5. Photon small-field measurements with a CMOS active pixel sensor.

    PubMed

    Spang, F Jiménez; Rosenberg, I; Hedin, E; Royle, G

    2015-06-07

    In this work the dosimetric performance of CMOS active pixel sensors for the measurement of small photon beams is presented. The detector used consisted of an array of 520 × 520 pixels on a 25 µm pitch. Dosimetric parameters measured with this sensor were compared with data collected with an ionization chamber, a film detector and GEANT4 Monte Carlo simulations. The sensor performance for beam profile measurements was evaluated for field sizes of 0.5 × 0.5 cm². The high spatial resolution achieved with this sensor allowed the accurate measurement of profiles, beam penumbrae and field size under lateral electronic disequilibrium. Field size and penumbrae agreed within 5.4% and 2.2% respectively with film measurements. Agreements with ionization chambers better than 1.0% were obtained when measuring tissue-phantom ratios. Output factor measurements were in good agreement with ionization chamber and Monte Carlo simulation. The data obtained from this imaging sensor can be easily analyzed to extract dosimetric information. The results presented in this work are promising for the development and implementation of CMOS active pixel sensors for dosimetry applications.

  6. Spatial optical crosstalk in CMOS image sensors integrated with plasmonic color filters.

    PubMed

    Yu, Yan; Chen, Qin; Wen, Long; Hu, Xin; Zhang, Hui-Fang

    2015-08-24

    Imaging resolution of complementary metal oxide semiconductor (CMOS) image sensors (CIS) keeps increasing, now reaching approximately 7k × 4k. As a result, the pixel size shrinks down to sub-2 μm, which greatly increases the spatial optical crosstalk. Recently, plasmonic color filters were proposed as an alternative to conventional colorant pigmented ones. However, there is little work on their size effect and the spatial optical crosstalk in a model of a CIS. By numerical simulation, we investigate the size effect of nanocross array plasmonic color filters and analyze the spatial optical crosstalk of each pixel in a Bayer array of a CIS with a pixel size of 1 μm. It is found that the small pixel size deteriorates the filtering performance of nanocross color filters and induces substantial spatial color crosstalk. By integrating the plasmonic filters in the low Metal layer in the standard CMOS process, the crosstalk reduces significantly, becoming comparable to that of pigmented filters in a state-of-the-art backside illumination CIS.

  7. Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution

    PubMed Central

    Bishara, Waheb; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan

    2010-01-01

    We demonstrate lensfree holographic microscopy on a chip to achieve ~0.6 µm spatial resolution corresponding to a numerical aperture of ~0.5 over a large field-of-view of ~24 mm². By using partially coherent illumination from a large aperture (~50 µm), we acquire lower resolution lensfree in-line holograms of the objects with unit fringe magnification. For each lensfree hologram, the pixel size at the sensor chip limits the spatial resolution of the reconstructed image. To circumvent this limitation, we implement a sub-pixel shifting based super-resolution algorithm to effectively recover much higher resolution digital holograms of the objects, permitting sub-micron spatial resolution to be achieved across the entire sensor chip active area, which is also equivalent to the imaging field-of-view (24 mm²) due to unit magnification. We demonstrate the success of this pixel super-resolution approach by imaging patterned transparent substrates, blood smear samples, as well as Caenorhabditis elegans. PMID:20588977
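
    A much-simplified shift-and-add sketch of the sub-pixel-shift idea: low-resolution frames with known sub-pixel offsets are placed onto a finer grid and averaged. The actual holographic pixel super-resolution works iteratively in the hologram domain; the frame and offset names here are hypothetical.

    ```python
    import numpy as np

    def shift_and_add(frames, offsets, factor):
        """frames  : list of low-resolution 2-D arrays
           offsets : list of (dy, dx) sub-pixel shifts in low-res pixel units
           factor  : super-resolution factor (e.g. 4)"""
        h, w = frames[0].shape
        acc = np.zeros((h * factor, w * factor))
        weight = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, offsets):
            # nearest high-resolution grid position of each low-resolution sample
            ys = (np.arange(h)[:, None] * factor + round(dy * factor)) % (h * factor)
            xs = (np.arange(w)[None, :] * factor + round(dx * factor)) % (w * factor)
            acc[ys, xs] += frame
            weight[ys, xs] += 1.0
        # average where samples landed; untouched high-res pixels stay zero
        return acc / np.maximum(weight, 1.0)
    ```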

  8. Characterisation of a novel reverse-biased PPD CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Stefanov, K. D.; Clarke, A. S.; Ivory, J.; Holland, A. D.

    2017-11-01

    A new pinned photodiode (PPD) CMOS image sensor (CIS) has been developed and characterised. The sensor can be fully depleted by means of a reverse bias applied to the substrate, and the principle of operation is applicable to very thick sensitive volumes. Additional n-type implants under the pixel p-wells, called Deep Depletion Extension (DDE), have been added in order to eliminate the large parasitic substrate current that would otherwise be present in a normal device. The first prototype has been manufactured on 18 μm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process at TowerJazz Semiconductor. The chip contains arrays of 10 μm and 5.4 μm pixels, with variations of the shape, size and depth of the DDE implant. Back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v, and characterised together with the front-side illuminated (FSI) variants. The presented results show that the devices could be reverse-biased without parasitic leakage currents, in good agreement with simulations. The new 10 μm pixels in both BSI and FSI variants exhibit nearly identical photo response to the reference non-modified pixels, as characterised with the photon transfer curve. Different techniques were used to measure the depletion depth in FSI and BSI chips, and the results are consistent with the expected full depletion.

  9. Spectral characterisation and noise performance of Vanilla—an active pixel sensor

    NASA Astrophysics Data System (ADS)

    Blue, Andrew; Bates, R.; Bohndiek, S. E.; Clark, A.; Arvanitis, Costas D.; Greenshaw, T.; Laing, A.; Maneuski, D.; Turchetta, R.; O'Shea, V.

    2008-06-01

    This work will report on the characterisation of a new active pixel sensor, Vanilla. The Vanilla comprises 512 × 512 (25 μm²) pixels. The sensor has a 12-bit digital output for full-frame mode, although it can also be read out in analogue mode, in which case it can be read in a fully programmable region-of-interest (ROI) mode. In full frame, the sensor can operate at a readout rate of more than 100 frames per second (fps), while in ROI mode, the speed depends on the size, shape and number of ROIs. For example, an ROI of 6 × 6 pixels can be read at 20,000 fps in analogue mode. Photon transfer curve (PTC) measurements allowed the calculation of the read noise, shot noise, full-well capacity and camera gain constant of the sensor. Spectral response measurements detailed the quantum efficiency (QE) of the detector through the UV and visible region. Analysis of the ROI readout mode was also performed. These measurements suggest that the Vanilla APS (active pixel sensor) will be suitable for a wide range of applications including particle physics and medical imaging.
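
    A minimal sketch of the photon-transfer-curve analysis mentioned above, assuming dark-subtracted flat-field statistics at several illumination levels and a shot-noise-limited linear region; the array names are hypothetical.

    ```python
    import numpy as np

    def photon_transfer(mean_signal, variance):
        """mean_signal, variance : 1-D arrays from flat-field frame pairs at several
        illumination levels (dark-subtracted, in DN).
        Returns camera gain (e-/DN), read noise (e-) and a rough full-well point (DN)."""
        # In the shot-noise-limited region: variance = mean / gain + read_variance
        slope, read_var = np.polyfit(mean_signal, variance, 1)
        gain = 1.0 / slope                                  # e- per DN
        read_noise_e = np.sqrt(max(read_var, 0.0)) * gain   # noise floor in electrons
        full_well_dn = mean_signal[np.argmax(variance)]     # variance roll-off (rough)
        return gain, read_noise_e, full_well_dn
    ```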

  10. A study on rational function model generation for TerraSAR-X imagery.

    PubMed

    Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi

    2013-09-09

    The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate a Rational Polynomial Coefficient (RPC) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy of greater than 10−3 pixel. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction.
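
    A small sketch of the second (affine-refinement) method: an affine transform in image space is fitted by least squares to map RPC-predicted GCP image coordinates onto their measured positions. The function and variable names are hypothetical, and real implementations typically normalize coordinates first.

    ```python
    import numpy as np

    def fit_affine_refinement(predicted_rc, measured_rc):
        """predicted_rc : N x 2 array of (row, col) from the RPC projection of GCPs
           measured_rc  : N x 2 array of (row, col) measured on the image
        Returns a 2 x 3 affine matrix A such that measured ~= A @ [row, col, 1]."""
        n = predicted_rc.shape[0]
        design = np.hstack([predicted_rc, np.ones((n, 1))])           # N x 3
        affine, *_ = np.linalg.lstsq(design, measured_rc, rcond=None) # 3 x 2 solution
        return affine.T                                               # 2 x 3

    def apply_affine(affine, rc):
        """Apply the fitted refinement to N x 2 image coordinates."""
        rc1 = np.hstack([rc, np.ones((rc.shape[0], 1))])
        return rc1 @ affine.T
    ```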

  11. A Study on Rational Function Model Generation for TerraSAR-X Imagery

    PubMed Central

    Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi

    2013-01-01

    The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate a Rational Polynomial Coefficient (RPC) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy of greater than 10−3 pixel. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction. PMID:24021971

  12. A novel digital image sensor with row wise gain compensation for Hyper Spectral Imager (HySI) application

    NASA Astrophysics Data System (ADS)

    Lin, Shengmin; Lin, Chi-Pin; Wang, Weng-Lyang; Hsiao, Feng-Ke; Sikora, Robert

    2009-08-01

    A 256x512 element digital image sensor has been developed which has a large pixel size, slow scan and low power consumption for Hyper Spectral Imager (HySI) applications. The device is a mixed-mode, system-on-chip (SoC) IC. It combines analog circuitry, digital circuitry and optical sensor circuitry on a single chip. This chip integrates a 256x512 active pixel sensor array, a programmable gain amplifier (PGA) for row-wise gain setting, an I2C interface, SRAM, a 12-bit analog-to-digital converter (ADC), a voltage regulator, low-voltage differential signaling (LVDS) and a timing generator. The device can be used for 256 pixels of spatial resolution and 512 bands of spectral resolution ranging from 400 nm to 950 nm in wavelength. In row-wise gain readout mode, one can set a different gain on each row of the photodetector by storing the gain-setting data in the SRAM through the I2C interface. This unique row-wise gain setting can be used to compensate for the silicon spectral response non-uniformity. Due to this unique function, the device is suitable for hyperspectral imager applications. The HySI camera, located on board the Chandrayaan-1 satellite, was successfully launched to the Moon on Oct. 22, 2008. The device is currently mapping the Moon and sending back excellent images of the Moon's surface. The device design and the Moon image data will be presented in the paper.

  13. Vertical waveguides integrated with silicon photodetectors: Towards high efficiency and low cross-talk image sensors

    NASA Astrophysics Data System (ADS)

    Tut, Turgut; Dan, Yaping; Duane, Peter; Yu, Young; Wober, Munib; Crozier, Kenneth B.

    2012-01-01

    We describe the experimental realization of vertical silicon nitride waveguides integrated with silicon photodetectors. The waveguides are embedded in a silicon dioxide layer. Scanning photocurrent microscopy is performed on a device containing a waveguide, and on a device containing the silicon dioxide layer, but without the waveguide. The results confirm the waveguide's ability to guide light onto the photodetector with high efficiency. We anticipate that the use of these structures in image sensors, with one waveguide per pixel, would greatly improve efficiency and significantly reduce inter-pixel crosstalk.

  14. Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy.

    PubMed

    Zhang, Jialin; Sun, Jiasong; Chen, Qian; Li, Jiaji; Zuo, Chao

    2017-09-18

    High-resolution wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimum solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm²) and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 µm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate promising potential applications in biological imaging.

  15. Characterization of modulated time-of-flight range image sensors

    NASA Astrophysics Data System (ADS)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2009-01-01

    A number of full-field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10 and 100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high-frequency modulation response of these image sensors, using a picosecond laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post-processing of the acquired range measurements.
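
    For context, a common way such sensors recover range is the four-sample (four-phase) homodyne calculation sketched below, in which the phase of the modulation envelope is estimated from samples taken 90° apart and converted to distance; this is a generic textbook formulation rather than the characterization method of the paper, and the variable names are illustrative.

    ```python
    import numpy as np

    C = 299_792_458.0   # speed of light, m/s

    def four_phase_range(a0, a90, a180, a270, f_mod):
        """a0..a270 : per-pixel samples of the correlation waveform (arrays)
           f_mod    : modulation frequency in Hz
        Returns (distance_m, amplitude, offset) per pixel."""
        phase = np.arctan2(a90 - a270, a0 - a180)           # phase of the envelope
        phase = np.mod(phase, 2 * np.pi)                     # wrap to [0, 2*pi)
        amplitude = 0.5 * np.hypot(a90 - a270, a0 - a180)
        offset = 0.25 * (a0 + a90 + a180 + a270)
        distance = C * phase / (4 * np.pi * f_mod)           # half the round-trip path
        return distance, amplitude, offset
    ```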

  16. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps; feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWIFS (1000m).

  17. Architecture and applications of a high resolution gated SPAD image sensor

    PubMed Central

    Burri, Samuel; Maruyama, Yuki; Michalet, Xavier; Regazzoni, Francesco; Bruschini, Claudio; Charbon, Edoardo

    2014-01-01

    We present the architecture and three applications of the largest resolution image sensor based on single-photon avalanche diodes (SPADs) published to date. The sensor, fabricated in a high-voltage CMOS process, has a resolution of 512 × 128 pixels and a pitch of 24 μm. The fill-factor of 5% can be increased to 30% with the use of microlenses. For precise control of the exposure and for time-resolved imaging, we use fast global gating signals to define exposure windows as small as 4 ns. The uniformity of the gate edges location is ∼140 ps (FWHM) over the whole array, while in-pixel digital counting enables frame rates as high as 156 kfps. Currently, our camera is used as a highly sensitive sensor with high temporal resolution, for applications ranging from fluorescence lifetime measurements to fluorescence correlation spectroscopy and generation of true random numbers. PMID:25090572

  18. 50 μm pixel pitch wafer-scale CMOS active pixel sensor x-ray detector for digital breast tomosynthesis.

    PubMed

    Zhao, C; Konstantinidis, A C; Zheng, Y; Anaxagoras, T; Speller, R D; Kanicki, J

    2015-12-07

    Wafer-scale CMOS active pixel sensors (APSs) have been developed recently for x-ray imaging applications. The small pixel pitch and low noise are very promising properties for medical imaging applications such as digital breast tomosynthesis (DBT). In this work, we evaluated experimentally and through modeling the imaging properties of a 50 μm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). A modified cascaded system model was developed for CMOS APS x-ray detectors by taking into account the device nonlinear signal and noise properties. The imaging properties such as modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) were extracted from both measurements and the nonlinear cascaded system analysis. The results show that the DynAMITe x-ray detector achieves a high spatial resolution of 10 mm⁻¹ and a DQE of around 0.5 at spatial frequencies < 1 mm⁻¹. In addition, the modeling results were used to calculate the image signal-to-noise ratio (SNRi) of microcalcifications at various mean glandular dose (MGD). For an average breast (5 cm thickness, 50% glandular fraction), 165 μm microcalcifications can be distinguished at a MGD 27% lower than the clinical value (~1.3 mGy). To detect 100 μm microcalcifications, further optimizations of the CMOS APS x-ray detector, image acquisition geometry and image reconstruction techniques should be considered.

  19. The DEPFET Sensor-Amplifier Structure: A Method to Beat 1/f Noise and Reach Sub-Electron Noise in Pixel Detectors

    PubMed Central

    Lutz, Gerhard; Porro, Matteo; Aschauer, Stefan; Wölfel, Stefan; Strüder, Lothar

    2016-01-01

    Depleted field effect transistors (DEPFET) are used to achieve very low noise signal charge readout with sub-electron measurement precision. This is accomplished by repeatedly reading an identical charge, thereby suppressing not only the white serial noise but also the usually constant 1/f noise. The repetitive non-destructive readout (RNDR) DEPFET is an ideal central element for an active pixel sensor (APS) pixel. The theory has been derived thoroughly and results have been verified on RNDR-DEPFET prototypes. A charge measurement precision of 0.18 electrons has been achieved. The device is well-suited for spectroscopic X-ray imaging and for optical photon counting in pixel sensors, even at high photon numbers in the same cell. PMID:27136549

  20. Circuit design for the retina-like image sensor based on space-variant lens array

    NASA Astrophysics Data System (ADS)

    Gao, Hongxun; Hao, Qun; Jin, Xuefeng; Cao, Jie; Liu, Yue; Song, Yong; Fan, Fan

    2013-12-01

    Retina-like image sensor is based on the non-uniformity of the human eyes and the log-polar coordinate theory. It has advantages of high-quality data compression and redundant information elimination. However, retina-like image sensors based on the CMOS craft have drawbacks such as high cost, low sensitivity and signal outputting efficiency and updating inconvenience. Therefore, this paper proposes a retina-like image sensor based on space-variant lens array, focusing on the circuit design to provide circuit support to the whole system. The circuit includes the following parts: (1) A photo-detector array with a lens array to convert optical signals to electrical signals; (2) a strobe circuit for time-gating of the pixels and parallel paths for high-speed transmission of the data; (3) a high-precision digital potentiometer for the I-V conversion, ratio normalization and sensitivity adjustment, a programmable gain amplifier for automatic generation control(AGC), and a A/D converter for the A/D conversion in every path; (4) the digital data is displayed on LCD and stored temporarily in DDR2 SDRAM; (5) a USB port to transfer the data to PC; (6) the whole system is controlled by FPGA. This circuit has advantages as lower cost, larger pixels, updating convenience and higher signal outputting efficiency. Experiments have proved that the grayscale output of every pixel basically matches the target and a non-uniform image of the target is ideally achieved in real time. The circuit can provide adequate technical support to retina-like image sensors based on space-variant lens array.

  1. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.

    PubMed

    Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K

    2016-07-20

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.
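
    A rough simulation of the peak separation and width (PSW) idea: with deep sub-electron read noise, single-photon peaks in the photon counting histogram separate, and the width of a peak (with the peak separation as the gain reference) gives the read noise. The numbers and the crude windowed estimate below are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    gain_dn_per_e = 8.0     # peak separation in DN per photoelectron (placeholder)
    read_noise_e = 0.25     # deep sub-electron read noise to be recovered
    mean_photons = 1.5

    # Simulate many pixel reads: Poisson photons plus Gaussian read noise, in DN
    photons = rng.poisson(mean_photons, 200_000)
    samples = gain_dn_per_e * (photons + rng.normal(0.0, read_noise_e, photons.size))

    # Photon counting histogram (PCH)
    counts, edges = np.histogram(samples, bins=400)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Crude estimate of the 1-photon peak width: take samples near 1 * gain and use
    # their standard deviation, normalized by the peak separation
    sel = np.abs(samples - gain_dn_per_e) < 0.5 * gain_dn_per_e
    read_noise_est = samples[sel].std() / gain_dn_per_e
    print(f"estimated read noise ~ {read_noise_est:.2f} e-")
    ```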

  2. Image Accumulation in Pixel Detector Gated by Late External Trigger Signal and its Application in Imaging Activation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakubek, J.; Cejnarova, A.; Platkevic, M.

    Single quantum counting pixel detectors of Medipix type are starting to be used in various radiographic applications. Compared to standard devices for digital imaging (such as CCDs or CMOS sensors) they present significant advantages: direct conversion of radiation to electric signal, energy sensitivity, noiseless image integration, unlimited dynamic range, absolute linearity. In this article we describe usage of the pixel device TimePix for image accumulation gated by a late trigger signal. Demonstration of the technique is given on imaging coincidence instrumental neutron activation analysis (Imaging CINAA). This method allows one to determine the concentration and distribution of a certain preselected element in an inspected sample.

  3. A high-speed trapezoid image sensor design for continuous traffic monitoring at signalized intersection approaches.

    DOT National Transportation Integrated Search

    2014-10-01

    The goal of this project is to monitor traffic flow continuously with an innovative camera system composed of a custom-designed image sensor integrated circuit (IC) containing a trapezoid pixel array and a camera system that is capable of intelligent...

  4. Computation of dark frames in digital imagers

    NASA Astrophysics Data System (ADS)

    Widenhorn, Ralf; Rest, Armin; Blouke, Morley M.; Berry, Richard L.; Bodegom, Erik

    2007-02-01

    Dark current is caused by electrons that are thermally excited into the conduction band. These electrons are collected by the well of the CCD and add a false signal to the chip. We will present an algorithm that automatically corrects for dark current. It uses a calibration protocol to characterize the image sensor at different temperatures. For a given exposure time, the dark current of every pixel is characteristic of a specific temperature. The dark current of every pixel can therefore be used as an indicator of the temperature. Hot pixels have the highest signal-to-noise ratio and are the best temperature sensors. We use the dark current of several hundred hot pixels to sense the chip temperature and predict the dark current of all pixels on the chip. Dark current computation is not a new concept, but our approach is unique. Some advantages of our method include applicability for poorly temperature-controlled camera systems and the possibility of ex post facto dark current correction.
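
    A deliberately simplified sketch of the idea, assuming (unlike the paper's full per-temperature calibration) that every pixel's dark current scales by one common temperature-dependent factor relative to a reference dark frame; the hot pixels, with their high signal-to-noise ratio, are used to estimate that factor. All names are hypothetical.

    ```python
    import numpy as np

    def predict_dark_frame(image, reference_dark, hot_mask):
        """image          : raw frame containing dark current (same exposure time)
           reference_dark : master dark frame taken at a reference temperature
           hot_mask       : boolean mask of the few hundred hottest pixels
        Assumes every pixel's dark current scales by one common factor."""
        # Hot pixels act as temperature sensors: their ratio to the reference frame
        # tracks the (unknown) chip temperature at capture time
        scale = np.median(image[hot_mask] / reference_dark[hot_mask])
        return scale * reference_dark

    # usage sketch:
    # dark_estimate = predict_dark_frame(raw, master_dark, hot_pixel_mask)
    # corrected = raw - dark_estimate
    ```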

  5. A neighbor pixel communication filtering structure for Dynamic Vision Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Liu, Shiqi; Lu, Hehui; Zhang, Zilong

    2017-02-01

    For Dynamic Vision Sensors (DVS), Background Activity (BA) induced by thermal noise and junction leakage current is the major cause of image quality deterioration. Inspired by the smoothing-filter principle of horizontal cells in the vertebrate retina, a DVS pixel with a Neighbor Pixel Communication (NPC) filtering structure is proposed to solve this issue. The NPC structure judges the validity of a pixel's activity through communication with its 4 adjacent pixels. The pixel's outputs are suppressed if its activity is determined not to be real. The proposed pixel's area is 23.76 × 24.71 μm² and only 3 ns of output latency is introduced. In order to validate the effectiveness of the structure, a 5 × 5 pixel array has been implemented in the SMIC 0.13 μm CIS process. Three test cases of the array's behavioral model show that the NPC-DVS is able to filter out the BA.
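
    The paper implements the filter in pixel-level circuitry; a software analogue of the same neighbour-support rule is sketched below, where an event is kept only if one of its four neighbours fired within a short time window. The event tuple format and the window length are illustrative.

    ```python
    import numpy as np

    def npc_filter(events, height, width, window_us=2000):
        """events : iterable of (t_us, x, y, polarity) tuples in time order.
        Keeps an event only if one of its 4-connected neighbours fired recently."""
        last = np.full((height, width), -np.inf)   # timestamp of last event per pixel
        kept = []
        for t, x, y, p in events:
            neigh = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
            support = any(0 <= nx < width and 0 <= ny < height
                          and t - last[ny, nx] <= window_us for nx, ny in neigh)
            if support:
                kept.append((t, x, y, p))
            last[y, x] = t                          # record the event either way
        return kept
    ```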

  6. Transparent Fingerprint Sensor System for Large Flat Panel Display.

    PubMed

    Seo, Wonkuk; Pi, Jae-Eun; Cho, Sung Haeung; Kang, Seung-Youl; Ahn, Seong-Deok; Hwang, Chi-Sun; Jeon, Ho-Sik; Kim, Jong-Uk; Lee, Myunghee

    2018-01-19

    In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel size is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses a capacitance, which is formed by a cover glass material between a human finger and an electrode of each pixel of the sensor array. Three methods are reviewed for estimating the self-capacitance. The measurement result demonstrates that the transparent fingerprint sensor system has an ability to differentiate a human finger's ridges and valleys through the fingerprint sensor array.

  7. Transparent Fingerprint Sensor System for Large Flat Panel Display

    PubMed Central

    Seo, Wonkuk; Pi, Jae-Eun; Cho, Sung Haeung; Kang, Seung-Youl; Ahn, Seong-Deok; Hwang, Chi-Sun; Jeon, Ho-Sik; Kim, Jong-Uk

    2018-01-01

    In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel size is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses a capacitance, which is formed by a cover glass material between a human finger and an electrode of each pixel of the sensor array. Three methods are reviewed for estimating the self-capacitance. The measurement result demonstrates that the transparent fingerprint sensor system has an ability to differentiate a human finger’s ridges and valleys through the fingerprint sensor array. PMID:29351218

  8. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  9. Methods for gas detection using stationary hyperspectral imaging sensors

    DOEpatents

    Conger, James L [San Ramon, CA; Henderson, John R [Castro Valley, CA

    2012-04-24

    According to one embodiment, a method comprises producing a first hyperspectral imaging (HSI) data cube of a location at a first time using data from a HSI sensor; producing a second HSI data cube of the same location at a second time using data from the HSI sensor; subtracting on a pixel-by-pixel basis the second HSI data cube from the first HSI data cube to produce a raw difference cube; calibrating the raw difference cube to produce a calibrated raw difference cube; selecting at least one desired spectral band based on a gas of interest; producing a detection image based on the at least one selected spectral band and the calibrated raw difference cube; examining the detection image to determine presence of the gas of interest; and outputting a result of the examination. Other methods, systems, and computer program products for detecting the presence of a gas are also described.
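
    A compact sketch of the pixel-by-pixel differencing and band-selection steps described in the claim; the calibration is reduced to a single hypothetical gain/offset pair, and the band indices stand in for a real gas absorption feature.

    ```python
    import numpy as np

    def gas_detection_image(cube_t0, cube_t1, band_indices, gain=1.0, offset=0.0):
        """cube_t0, cube_t1 : HSI data cubes (rows, cols, bands) of the same location
           band_indices     : spectral bands covering the gas feature of interest
        Returns a 2-D detection image."""
        raw_diff = cube_t1.astype(float) - cube_t0.astype(float)   # pixel-by-pixel difference
        calibrated = gain * raw_diff + offset                      # placeholder calibration
        # collapse the selected spectral bands into a single detection image
        detection = calibrated[:, :, band_indices].mean(axis=2)
        return detection
    ```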

  10. High-Speed Binary-Output Image Sensor

    NASA Technical Reports Server (NTRS)

    Fossum, Eric; Panicacci, Roger A.; Kemeny, Sabrina E.; Jones, Peter D.

    1996-01-01

    Photodetector outputs digitized by circuitry on same integrated-circuit chip. Developmental special-purpose binary-output image sensor designed to capture up to 1,000 images per second, with resolution greater than 10⁶ pixels per image. Lower-resolution but higher-frame-rate prototype of sensor contains 128 x 128 array of photodiodes on complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. In application for which it is being developed, sensor used to examine helicopter oil to determine whether amount of metal and sand in oil sufficient to warrant replacement.

  11. Design and characterization of novel monolithic pixel sensors for the ALICE ITS upgrade

    NASA Astrophysics Data System (ADS)

    Cavicchioli, C.; Chalmet, P. L.; Giubilato, P.; Hillemanns, H.; Junique, A.; Kugathasan, T.; Mager, M.; Marin Tobon, C. A.; Martinengo, P.; Mattiazzo, S.; Mugnier, H.; Musa, L.; Pantano, D.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Van Hoorne, J. W.; Yang, P.

    2014-11-01

    Within the R&D activities for the upgrade of the ALICE Inner Tracking System (ITS), Monolithic Active Pixel Sensors (MAPS) are being developed and studied, due to their lower material budget (0.3% X0 in total for each inner layer) and higher granularity (20 μm × 20 μm pixels) with respect to the present pixel detector. This paper presents the design and characterization results of the Explorer0 chip, manufactured in the TowerJazz 180 nm CMOS Imaging Sensor process, based on a wafer with high-resistivity (ρ > 1 kΩ cm) and an 18 μm thick epitaxial layer. The chip is organized in two sub-matrices with different pixel pitches (20 μm and 30 μm), each of them containing several pixel designs. The collection electrode size and shape, as well as the distance between the electrode and the surrounding electronics, are varied; the chip also offers the possibility to decouple the charge integration time from the readout time, and to change the sensor bias. The charge collection properties of the different pixel variants implemented in Explorer0 have been studied using a 55Fe X-ray source and 1-5 GeV/c electrons and positrons. The sensor capacitance has been estimated, and the effect of the sensor bias has also been examined in detail. A second version of the Explorer0 chip (called Explorer1) has been submitted for production in March 2013, together with a novel circuit with in-pixel discrimination and a sparsified readout. Results from these submissions are also presented.

  12. Small pixel cross-talk MTF and its impact on MWIR sensor performance

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Willers, Cornelius J.

    2017-05-01

    As pixel sizes reduce in the development of modern High Definition (HD) Mid-Wave Infrared (MWIR) detectors, the inter-pixel cross-talk becomes increasingly difficult to regulate. The diffusion lengths required to achieve the quantum efficiency and sensitivity of MWIR detectors are typically longer than the pixel pitch dimension, and the probability of inter-pixel cross-talk increases as the pixel pitch/diffusion length fraction decreases. Inter-pixel cross-talk is most conveniently quantified by the focal plane array sampling Modulation Transfer Function (MTF). Cross-talk MTF will reduce the ideal sinc-squared pixel MTF that is commonly used when modelling sensor performance. However, cross-talk MTF data is not always readily available from detector suppliers, and since the origins of inter-pixel cross-talk are uniquely device and manufacturing process specific, no generic MTF models appear to satisfy the needs of sensor designers and analysts. In this paper, cross-talk MTF data has been collected from recent publications and the development of a generic cross-talk MTF model to fit this data is investigated. The resulting cross-talk MTF model is then included in a MWIR sensor model and the impact on sensor performance is evaluated in terms of the National Imagery Interpretability Rating Scale (NIIRS) General Image Quality Equation (GIQE) metric for a range of f-number/detector-pitch (Fλ/d) configurations and operating environments. By applying non-linear boost transfer functions in the signal processing chain, the contrast losses due to cross-talk may be compensated for. Boost transfer functions, however, also reduce the signal-to-noise ratio of the sensor. In this paper, boost function limits are investigated and included in the sensor performance assessments.
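
    A small sketch of how a cross-talk term is commonly folded into an MTF budget: the ideal detector-aperture (sinc) MTF is multiplied by an assumed cross-talk MTF, here a Gaussian stand-in parameterized by a diffusion-like width; this is a placeholder form, not the generic model developed in the paper.

    ```python
    import numpy as np

    def detector_mtf(f_cyc_per_mm, pitch_um, diffusion_um=2.0):
        """f_cyc_per_mm : spatial frequencies in cycles/mm (array)
           pitch_um     : pixel pitch in microns
           diffusion_um : width of the assumed cross-talk spread (placeholder)
        Returns (aperture MTF, cross-talk MTF, combined MTF)."""
        f = np.asarray(f_cyc_per_mm, dtype=float)
        aperture = np.abs(np.sinc(f * pitch_um / 1000.0))                # ideal pixel aperture
        crosstalk = np.exp(-2.0 * (np.pi * f * diffusion_um / 1000.0) ** 2)  # Gaussian stand-in
        return aperture, crosstalk, aperture * crosstalk
    ```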

  13. Binary CMOS image sensor with a gate/body-tied MOSFET-type photodetector for high-speed operation

    NASA Astrophysics Data System (ADS)

    Choi, Byoung-Soo; Jo, Sung-Hyun; Bae, Myunghan; Kim, Sang-Hwan; Shin, Jang-Kyoo

    2016-05-01

    In this paper, a binary complementary metal oxide semiconductor (CMOS) image sensor with a gate/body-tied (GBT) metal oxide semiconductor field effect transistor (MOSFET)-type photodetector is presented. The sensitivity of the GBT MOSFET-type photodetector, which was fabricated using the standard CMOS 0.35-μm process, is higher than the sensitivity of the p-n junction photodiode, because the output signal of the photodetector is amplified by the MOSFET. A binary image sensor becomes more efficient when using this photodetector. Lower power consumption and higher operating speed are possible compared to conventional image sensors using multi-bit analog-to-digital converters (ADCs). The frame rate of the proposed image sensor is over 2000 frames per second, which is higher than that of conventional CMOS image sensors. The output signal of an active pixel sensor is applied to a comparator and compared with a reference level. The 1-bit output data of the binary process is determined by this level. To obtain a video signal, the 1-bit output data is stored in the memory and read out by horizontal scanning. The proposed chip is composed of a GBT pixel array (144 × 100), a binary-process circuit, a vertical scanner, a horizontal scanner, and a readout circuit. The operation mode can be selected between binary mode and multi-bit mode.

  14. Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors.

    PubMed

    Ge, Xiaoliang; Theuwissen, Albert J P

    2018-02-27

    This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS) image sensors. In order to address the trade-off between low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated-double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the read-out path. Such a readout topology, however, exhibits non-stationary large-signal behavior, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models, and therefore cannot be readily applied to Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and the non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristics of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models.

  15. Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors †

    PubMed Central

    Theuwissen, Albert J. P.

    2018-01-01

    This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS) image sensors. In order to address the trade-off between low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated-double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the read-out path. Such a readout topology, however, exhibits non-stationary large-signal behavior, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models, and therefore cannot be readily applied to Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and the non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristics of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models. PMID:29495496

  16. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
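
    As a simple illustration of estimating a fractal dimension from an image, the sketch below applies box counting to a thresholded (binary) image; the estimators typically used for NDVI surfaces in this literature (e.g. the triangular prism method) differ in detail.

    ```python
    import numpy as np

    def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8, 16, 32)):
        """mask : 2-D boolean array (e.g. NDVI > threshold).
        Returns the slope of log(box count) versus log(1 / box size)."""
        counts = []
        for s in box_sizes:
            h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
            blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
            occupied = blocks.any(axis=(1, 3)).sum()    # boxes containing the set
            counts.append(max(int(occupied), 1))
        sizes = np.array(box_sizes, dtype=float)
        slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
        return slope
    ```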

  17. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.

  18. Development of Gentle Slope Light Guide Structure in a 3.4 μm Pixel Pitch Global Shutter CMOS Image Sensor with Multiple Accumulation Shutter Technology.

    PubMed

    Sekine, Hiroshi; Kobayashi, Masahiro; Onuki, Yusuke; Kawabata, Kazunari; Tsuboi, Toshiki; Matsuno, Yasushi; Takahashi, Hidekazu; Inoue, Shunsuke; Ichikawa, Takeshi

    2017-12-09

    CMOS image sensors (CISs) with a global shutter (GS) function are strongly required in order to avoid image degradation. However, CISs with a GS function have generally been inferior to rolling shutter (RS) CISs in performance because they have more components, and this problem is particularly pronounced at small pixel pitches. The newly developed 3.4 µm pitch GS CIS solves this problem by using multiple accumulation shutter technology and the gentle slope light guide structure. As a result, the developed GS pixel achieves 1.8 e⁻ temporal noise and 16,200 e⁻ full well capacity with charge-domain memory in 120 fps operation. The sensitivity and parasitic light sensitivity are 28,000 e⁻/lx·s and -89 dB, respectively. Moreover, the incident-light-angle dependence of the sensitivity and parasitic light sensitivity is improved by the gentle slope light guide structure.

  19. Performance quantification of a millimeter-wavelength imaging system based on inexpensive glow-discharge-detector focal-plane array.

    PubMed

    Shilemay, Moshe; Rozban, Daniel; Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S; Yadid-Pecht, Orly; Abramovich, Amir

    2013-03-01

    Inexpensive millimeter-wavelength (MMW) optical digital imaging raises the challenge of evaluating imaging performance and image quality, because the electromagnetic wavelengths and pixel sensor sizes are 2 to 3 orders of magnitude larger than those of ordinary thermal or visual imaging systems, and because of the noisiness of the inexpensive glow-discharge detectors that compose the focal-plane array. This study quantifies the performance of this MMW imaging system. Its point-spread function and modulation transfer function were investigated. The experimental results and the analysis indicate that the image quality of this MMW imaging system is limited mostly by the noise, and that the blur is dominated by the pixel sensor size. Therefore, the MMW image might be improved by oversampling, given that noise reduction is achieved. A demonstration of MMW image improvement through oversampling is presented.
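    The statement that the blur is dominated by the pixel sensor size can be illustrated by building a one-dimensional PSF as an optics blur convolved with a wide pixel aperture and taking its Fourier transform to obtain the MTF; the widths below are assumed, illustrative values rather than the measured ones.

```python
import numpy as np

dx = 0.1                  # mm per sample (assumed)
x = np.arange(-256, 256) * dx
pixel_width = 3.0         # mm, assumed glow-discharge-detector pixel size
optics_sigma = 0.8        # mm, assumed optics blur

pixel_aperture = (np.abs(x) <= pixel_width / 2).astype(float)
optics_psf = np.exp(-x ** 2 / (2 * optics_sigma ** 2))
psf = np.convolve(pixel_aperture, optics_psf, mode="same")
psf /= psf.sum()

mtf = np.abs(np.fft.rfft(psf))           # MTF is the magnitude of the PSF's Fourier transform
mtf /= mtf[0]
freq = np.fft.rfftfreq(psf.size, d=dx)   # cycles per mm
for f, m in zip(freq[:6], mtf[:6]):
    print(f"{f:.3f} cyc/mm  MTF={m:.3f}")
```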

  20. High speed CMOS imager with motion artifact suppression and anti-blooming

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Wrigley, Chris (Inventor); Yang, Guang (Inventor); Yadid-Pecht, Orly (Inventor)

    2001-01-01

    An image sensor includes pixels formed on a semiconductor substrate. Each pixel includes a photoactive region in the semiconductor substrate, a sense node, and a power supply node. A first electrode is disposed near a surface of the semiconductor substrate. A bias signal on the first electrode sets a potential in a region of the semiconductor substrate between the photoactive region and the sense node. A second electrode is disposed near the surface of the semiconductor substrate. A bias signal on the second electrode sets a potential in a region of the semiconductor substrate between the photoactive region and the power supply node. The image sensor includes a controller that causes bias signals to be provided to the electrodes so that photocharges generated in the photoactive region are accumulated in the photoactive region during a pixel integration period, the accumulated photocharges are transferred to the sense node during a charge transfer period, and photocharges generated in the photoactive region are transferred to the power supply node during a third period without passing through the sense node. The imager can operate at high shutter speeds with simultaneous integration of pixels in the array. High quality images can be produced free from motion artifacts. High quantum efficiency, good blooming control, low dark current, low noise and low image lag can be obtained.

  1. How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2011-01-01

    In the early 1980's the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems, then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the equivalent number of pixels and bits per pixel, and on how many pixels would be required to create an equivalent-quality image from a digital camera. It was found that 35-mm frames of ISO 100 color negative film contained equivalent pixels of 12 microns, for a total of 18 million pixels per frame (6 million pixels per layer) with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased the value to 8 bits per pixel. Higher ISO speed films had larger equivalent pixels and fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high-quality 3.5" x 5.25" print could be obtained from a three-layer system containing 1300 x 1950 pixels per layer, or about 7.6 million pixels in all. In short, it became clear that once a digital camera contained about 6 million pixels (in a single layer using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems in the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging among digital camera makers. The question arises of just how many pixels are required, and whether all pixels are equal. This paper provides a practical look at how many pixels are needed for a good print, based on the form factor (size) of the sensor and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million 5.7-micron pixels or 6 million 7.8-micron pixels? How do intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size? A systematic review of these issues is provided within the context of image quality and ISO speed models developed over the last 15 years.
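    The pixel-count arithmetic behind the question in the title can be sketched directly; the print resolutions and sensor dimensions below are assumed values for illustration, not figures from the paper.

```python
import math

# pixels needed for a 4" x 6" print at assumed print resolutions
for ppi in (200, 300, 360):
    megapixels = 6.0 * ppi * 4.0 * ppi / 1e6
    print(f"{ppi} ppi -> {megapixels:.1f} Mpixel")

# pixel pitch implied by packing a given pixel count onto an assumed APS-C-sized sensor
def pitch_um(megapixels, sensor_w_mm=23.6, sensor_h_mm=15.7):
    area_um2 = sensor_w_mm * sensor_h_mm * 1e6
    return math.sqrt(area_um2 / (megapixels * 1e6))  # square pixels assumed

for mp in (6, 16):
    print(f"{mp} Mpixel -> {pitch_um(mp):.1f} um pitch")
```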

  2. Analysis of simulated image sequences from sensors for restricted-visibility operations

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar

    1991-01-01

    A real time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.

  3. Low-voltage 96 dB snapshot CMOS image sensor with 4.5 nW power dissipation per pixel.

    PubMed

    Spivak, Arthur; Teman, Adam; Belenky, Alexander; Yadid-Pecht, Orly; Fish, Alexander

    2012-01-01

    Modern "smart" CMOS sensors have penetrated into various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, a selective power down, transistors with different threshold voltages, a non-rationed logic, and a low voltage static memory. A combination of all these approaches has enabled the design of the low voltage "smart" image sensor, which is capable of reaching a remarkable dynamic range, while consuming very low power. The proposed power-saving solutions have allowed the maintenance of the standard architecture of the sensor, reducing both the time and the cost of the design. In order to maintain the image quality, a relation between the sensor performance and power has been analyzed and a mathematical model, describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies, is proposed. The described sensor was implemented in a 0.18 um CMOS process and successfully tested in the laboratory. An SNR of 48 dB and DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel.

  4. Low-Voltage 96 dB Snapshot CMOS Image Sensor with 4.5 nW Power Dissipation per Pixel

    PubMed Central

    Spivak, Arthur; Teman, Adam; Belenky, Alexander; Yadid-Pecht, Orly; Fish, Alexander

    2012-01-01

    Modern “smart” CMOS sensors have penetrated into various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, selective power down, transistors with different threshold voltages, non-ratioed logic, and a low voltage static memory. A combination of all these approaches has enabled the design of the low voltage “smart” image sensor, which is capable of reaching a remarkable dynamic range while consuming very low power. The proposed power-saving solutions have allowed the maintenance of the standard architecture of the sensor, reducing both the time and the cost of the design. In order to maintain the image quality, the relation between sensor performance and power has been analyzed, and a mathematical model describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies is proposed. The described sensor was implemented in a 0.18 μm CMOS process and successfully tested in the laboratory. An SNR of 48 dB and DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel. PMID:23112588

  5. Adaptive time-sequential binary sensing for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Hu, Chenhui; Lu, Yue M.

    2012-06-01

    We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence closely follows that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and with the strategy that utilizes a constant single-photon threshold considered in previous work, the proposed scheme attains orders-of-magnitude improvement in sensor dynamic range.
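    A toy simulation of the adaptive one-bit sensing idea, using a simple stochastic-approximation update (raise the threshold after a '1', lower it after a '0') as a hedged stand-in for the paper's Markov-chain threshold-updating rule; this drives the threshold toward the median of the photon count, where a one-bit measurement is most informative.

```python
import numpy as np

rng = np.random.default_rng(1)
true_intensity = 7.0      # mean photons per exposure at this pixel (unknown to the sensor)
threshold = 1             # start at the single-photon threshold
history = []

for _ in range(5000):
    photons = rng.poisson(true_intensity)
    bit = photons >= threshold           # adaptive one-bit quantization
    # stochastic-approximation stand-in for the paper's update rule:
    # a '1' pushes the threshold up, a '0' pushes it down, so the threshold
    # settles where P(output = 1) is about one half (near the count median)
    threshold += 1 if bit else -1
    threshold = max(1, min(threshold, 100))
    history.append(threshold)

print("time-averaged threshold:", round(float(np.mean(history[1000:])), 2),
      "(median of Poisson(7) is 7)")
```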

  6. Advancements in DEPMOSFET device developments for XEUS

    NASA Astrophysics Data System (ADS)

    Treis, J.; Bombelli, L.; Eckart, R.; Fiorini, C.; Fischer, P.; Hälker, O.; Herrmann, S.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Schaller, G.; Schopper, F.; Soltau, H.; Strüder, L.; Wölfel, S.

    2006-06-01

    DEPMOSFET-based Active Pixel Sensor (APS) matrices are a new detector concept for X-ray imaging spectroscopy missions. They can cope with the challenging requirements of the XEUS Wide Field Imager and combine excellent energy resolution, high-speed readout and low power consumption with the attractive feature of random accessibility of pixels. From the evaluation of first prototypes, new concepts have been developed to overcome the minor drawbacks and problems encountered in the older devices. The new devices will have a pixel size of 75 μm × 75 μm. Besides 64 × 64 pixel arrays, prototypes with sizes of 256 × 256 pixels and 128 × 512 pixels and an active area of about 3.6 cm² will be produced, a milestone on the way towards the fully grown XEUS WFI device. The production of these improved devices is currently under way. At the same time, the development of the next generation of front-end electronics has been started, which will permit operation of the sensor devices at the readout speed required by XEUS. Here, a summary of the DEPFET capabilities, the concept of the next-generation sensors and the new front-end electronics is given. Additionally, prospects for new device developments using the DEPFET as a sensitive element are shown, e.g. so-called RNDR pixels, which feature repetitive non-destructive readout to lower the readout noise below the 1 e⁻ ENC limit.

  7. A real-time ultrasonic field mapping system using a Fabry Pérot single pixel camera for 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben

    2015-03-01

    A system for dynamic mapping of broadband ultrasound fields has been designed with high-frame-rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent-light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compressions as low as 10% was also demonstrated.
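    A minimal sketch of single-pixel acquisition with scrambled Hadamard patterns and reconstruction by the scaled transpose, which is what a fast Hadamard transform computes when a full set of patterns is used; the 32x32 "pressure map" is synthetic and the compressed-sensing path is not shown.

```python
import numpy as np
from scipy.linalg import hadamard

n = 32                                  # 32x32 map, as in the abstract
N = n * n
rng = np.random.default_rng(0)

x = np.zeros((n, n))
x[10:22, 12:20] = 1.0                   # toy acoustic-pressure feature
x = x.ravel()

H = hadamard(N)                         # +/-1 Hadamard patterns
Hs = H[:, rng.permutation(N)]           # scrambled Hadamard, one row per measurement

y = Hs @ x                              # single-pixel photodetector readings
x_rec = (Hs.T @ y) / N                  # full set of patterns -> inverse is the scaled transpose
print("max reconstruction error:", float(np.abs(x_rec - x).max()))
```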

  8. Characteristics of Monolithically Integrated InGaAs Active Pixel Imager Array

    NASA Technical Reports Server (NTRS)

    Kim, Q.; Cunningham, T. J.; Pain, B.; Lange, M. J.; Olsen, G. H.

    2000-01-01

    Switching and amplifying characteristics of a newly developed monolithic InGaAs Active Pixel Imager Array are presented. The sensor array is fabricated from InGaAs material epitaxially deposited on an InP substrate. It consists of an InGaAs photodiode connected to InP depletion-mode junction field effect transistors (JFETs) for low-leakage, low-power, and fast control of circuit signal amplifying, buffering, selection, and reset. This monolithically integrated active pixel sensor configuration eliminates the need for hybridization with a silicon multiplexer. In addition, the configuration allows the sensor to be front illuminated, making it sensitive to visible as well as near-infrared radiation. Adapting the existing 1.55 micrometer fiber-optic communication technology, this integration will be ideal for dual-band (visible/IR) optoelectronic applications near room temperature, for atmospheric gas sensing in space, and for target identification on earth. In this paper, two different types of small 4 x 1 test arrays are described. The effectiveness of the switching and amplifying circuits is discussed in terms of circuit effectiveness (leakage, operating frequency, and temperature) in preparation for the second-phase demonstration of integrated, two-dimensional monolithic InGaAs active pixel sensor arrays for applications in transportable shipboard surveillance, night vision, and emission spectroscopy.

  9. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors

    PubMed Central

    Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.

    2016-01-01

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643

  10. The use of multisensor images for Earth Science applications

    NASA Technical Reports Server (NTRS)

    Evans, D.; Stromberg, B.

    1983-01-01

    The use of more than one remote sensing technique is particularly important for Earth Science applications because of the compositional and textural information derivable from the images. The ability to simultaneously analyze images acquired by different sensors requires coregistration of the multisensor image data sets. In order to ensure pixel-to-pixel registration in areas of high relief, images must be rectified to eliminate topographic distortions. Coregistered images can be analyzed using a variety of multidimensional techniques, and the acquired knowledge of topographic effects in the images can be used in photogeologic interpretations.

  11. Fires in Philippines

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Roughly a dozen fires (red pixels) dotted the landscape on the main Philippine island of Luzon on April 1, 2002. This true-color image was acquired by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra spacecraft. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of this scene at the sensor's fullest resolution, visit the MODIS Rapidfire site.

  12. Study of current-mode active pixel sensor circuits using amorphous InSnZnO thin-film transistor for 50-μm pixel-pitch indirect X-ray imagers

    NASA Astrophysics Data System (ADS)

    Cheng, Mao-Hsun; Zhao, Chumin; Kanicki, Jerzy

    2017-05-01

    Current-mode active pixel sensor (C-APS) circuits based on amorphous indium-tin-zinc-oxide thin-film transistors (a-ITZO TFTs) are proposed for indirect X-ray imagers. The proposed C-APS circuits combine a hydrogenated amorphous silicon (a-Si:H) p+-i-n+ photodiode (PD) with a-ITZO TFTs. Source-output (SO) and drain-output (DO) C-APS are investigated and compared. Acceptable signal linearity and high gains are realized for the SO C-APS. APS circuit characteristics, including voltage gain, charge gain, signal linearity, charge-to-current conversion gain, and electron-to-voltage conversion gain, are evaluated. The impact of a-ITZO TFT threshold voltage shifts on the C-APS is also considered. A layout for a pixel pitch of 50 μm and an associated fabrication process are suggested. Data-line loadings for 4k-resolution X-ray imagers are computed and their impact on circuit performance is taken into consideration. Noise analysis is performed, showing a total input-referred noise of 239 e⁻.

  13. True Ortho Generation of Urban Area Using High Resolution Aerial Photos

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Stanley, David; Xin, Yubin

    2016-06-01

    The pros and cons of existing methods for true ortho generation are analyzed through a critical literature review of its two major processing stages: visibility analysis and occlusion compensation. Existing methods process frame and pushbroom images using different algorithms for visibility analysis because of the need for perspective centers in z-buffer (or similar) techniques. For occlusion compensation, the pixel-based approach is likely to produce excessive seamlines in the ortho-rectified images because it uses a quality measure rated pixel by pixel. In this paper, we propose innovative solutions to tackle the aforementioned problems. For visibility analysis, an elevation buffer technique is introduced that employs plain elevations instead of the distances from perspective centers used by the z-buffer, and has the advantage of sensor independence. A segment-oriented strategy is developed that evaluates a plain cost measure per segment for occlusion compensation instead of the tedious quality rating per pixel. The cost measure directly evaluates the imaging-geometry characteristics in ground space and is also sensor independent. Experimental results are demonstrated using aerial photos acquired by an UltraCam camera.

  14. Superresolution with the focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew

    2011-03-01

    Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.

  15. Germanium ``hexa'' detector: production and testing

    NASA Astrophysics Data System (ADS)

    Sarajlić, M.; Pennicard, D.; Smoljanin, S.; Hirsemann, H.; Struth, B.; Fritzsch, T.; Rothermund, M.; Zuvic, M.; Lampert, M. O.; Askar, M.; Graafsma, H.

    2017-01-01

    Here we present new results on the testing of a germanium sensor for X-ray radiation. The system, called "hexa", is made of 3 × 2 Medipix3RX chips bump-bonded to a monolithic sensor. Its dimensions are 45 × 30 mm² and the sensor thickness is 1.5 mm. The total number of pixels is 393,216, arranged in a 768 × 512 matrix with a pixel pitch of 55 μm. The Medipix3RX read-out chip provides photon-counting read-out with single-photon sensitivity. The sensor is cooled to -126°C, and noise levels together with the flat-field response are measured. For a -200 V polarization bias, the leakage current was 4.4 mA (3.2 μA/mm²). Due to higher leakage, around 2.5% of all pixels remain non-responsive. More than 99% of all pixels are bump-bonded correctly. In this paper we present the experimental set-up, the threshold equalization procedure, the image acquisition and the technique for estimating bump-bond quality.

  16. A Dual-Mode Large-Arrayed CMOS ISFET Sensor for Accurate and High-Throughput pH Sensing in Biomedical Diagnosis.

    PubMed

    Huang, Xiwei; Yu, Hao; Liu, Xu; Jiang, Yu; Yan, Mei; Wu, Dongping

    2015-09-01

    Existing ISFET-based DNA sequencing detects hydrogen ions released during the polymerization of DNA strands on microbeads, which are scattered into a microwell array above the ISFET sensor with unknown distribution. However, false pH detection happens at empty microwells due to crosstalk from neighboring microbeads. In this paper, a dual-mode CMOS ISFET sensor is proposed to achieve accurate pH detection for DNA sequencing. Dual-mode sensing, with optical and chemical modes, is realized by integrating a CMOS image sensor (CIS) with an ISFET pH sensor, and is fabricated in a standard 0.18-μm CIS process. With accurate determination of microbead physical locations by contact imaging with the CIS pixels, the dual-mode sensor can correlate the local pH for one DNA slice with one location-determined microbead, which improves pH detection accuracy. Moreover, toward high-throughput DNA sequencing, a correlated-double-sampling readout that supports a large array for both modes is deployed to reduce pixel-to-pixel nonuniformity such as threshold voltage mismatch. The proposed CMOS dual-mode sensor is experimentally shown to produce a well-correlated pH map and optical image for microbeads, with a pH sensitivity of 26.2 mV/pH, a fixed pattern noise (FPN) reduction from 4% to 0.3%, and a readout speed of 1200 frames/s. A dual-mode CMOS ISFET sensor with suppressed FPN for accurate large-arrayed pH sensing is proposed and demonstrated with state-of-the-art measured results toward accurate and high-throughput DNA sequencing. The developed dual-mode CMOS ISFET sensor has great potential for future personal genome diagnostics with high accuracy and low cost.

  17. Demosaiced pixel super-resolution in digital holography for multiplexed computational color imaging on-a-chip (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2017-03-01

    Digital holographic on-chip microscopy achieves large space-bandwidth-products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved by 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves very similar color imaging performance compared to conventional sequential R,G,B illumination, with 3-fold improvement in image acquisition time and data-efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited-settings and point-of-care offices.

  18. Hyperspectral image classification by a variable interval spectral average and spectral curve matching combined algorithm

    NASA Astrophysics Data System (ADS)

    Senthil Kumar, A.; Keerthi, V.; Manjunath, A. S.; Werff, Harald van der; Meer, Freek van der

    2010-08-01

    Classification of hyperspectral images has been receiving considerable attention, with many new applications reported from the commercial and military sectors. Hyperspectral images are composed of a large number of spectral channels and have the potential to deliver a great deal of information about a remotely sensed scene. However, in addition to high dimensionality, hyperspectral image classification is compounded by the coarse ground pixel size of the sensor, a consequence of maintaining an adequate signal-to-noise ratio within a fine spectral passband. This causes multiple ground features to jointly occupy a single pixel. Spectral mixture analysis typically begins with pixel classification using spectral matching techniques, followed by the use of spectral unmixing algorithms for estimating endmember abundance values in the pixel. Spectral matching techniques are analogous to supervised pattern recognition approaches and try to estimate some similarity between the spectral signatures of the pixel and a reference target. In this paper, we propose a spectral matching approach that combines two schemes: the variable interval spectral average (VISA) method and the spectral curve matching (SCM) method. The VISA method helps to detect transient spectral features at different scales of spectral windows, while the SCM method finds a match between these features of the pixel and one of the library spectra by least-squares fitting. We also compare the performance of the combined algorithm with other spectral matching techniques using simulated and AVIRIS hyperspectral data sets. Our results indicate that the proposed combination technique outperforms the other methods in the classification of both pure and mixed-class pixels simultaneously.
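    A simplified sketch of the two combined schemes: block spectral averaging as a stand-in for VISA, and least-squares fitting of a gain/offset against library spectra as a stand-in for SCM. The window size, library spectra, and scoring details are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def visa(spectrum, window=4):
    """Block spectral averages, a simplified stand-in for the VISA features."""
    nb = spectrum.size // window
    return spectrum[:nb * window].reshape(nb, window).mean(axis=1)

def scm_score(pixel_feat, ref_feat):
    """Least-squares fit of gain/offset of the reference to the pixel; lower is better."""
    A = np.column_stack([ref_feat, np.ones_like(ref_feat)])
    coef, *_ = np.linalg.lstsq(A, pixel_feat, rcond=None)
    return float(np.sum((pixel_feat - A @ coef) ** 2))

def classify(pixel, library, window=4):
    p = visa(pixel, window)
    return int(np.argmin([scm_score(p, visa(ref, window)) for ref in library]))

bands = np.linspace(0.4, 2.5, 200)
library = [np.exp(-(bands - 0.9) ** 2 / 0.05),     # toy reference spectrum, class 0
           np.exp(-(bands - 1.6) ** 2 / 0.08)]     # toy reference spectrum, class 1
pixel = 0.8 * library[0] + 0.02 * np.random.default_rng(0).normal(size=bands.size)
print("matched class:", classify(pixel, library))  # expect 0
```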

  19. Investigation of CMOS pixel sensor with 0.18 μm CMOS technology for high-precision tracking detector

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Fu, M.; Zhang, Y.; Yan, W.; Wang, M.

    2017-01-01

    The Circular Electron Positron Collider (CEPC) proposed by the Chinese high energy physics community is aimed at measuring Higgs particles and their interactions precisely. The tracking detector, including the Silicon Inner Tracker (SIT) and Forward Tracking Disks (FTD), has driven stringent requirements on sensor technologies in terms of spatial resolution, power consumption and readout speed. The CMOS Pixel Sensor (CPS) is a promising candidate to meet these requirements. This paper presents preliminary studies on sensor optimization for the tracking detector to achieve high collection efficiency while keeping the necessary spatial resolution. Detailed studies have been performed on the charge collection using a 0.18 μm CMOS image sensor process. This process allows a high-resistivity epitaxial layer, leading to a significant improvement in the charge collection and therefore improving the radiation tolerance. Together with the simulation results, the first exploratory prototype has been designed and fabricated. The prototype includes 9 different pixel arrays, which vary in terms of pixel pitch, diode size and geometry. The total area of the prototype amounts to 2 × 7.88 mm².

  20. FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization

    NASA Astrophysics Data System (ADS)

    Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon

    2008-02-01

    This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors while taking diffraction effects into account. Following the market trend and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and, in the near term, zoom systems. Because of miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe the light propagation inside the sensor, owing to diffraction effects. Thus we adopt a more fundamental description to take these diffraction effects into account: we chose to use Maxwell-Boltzmann based modeling to compute the propagation of light, and to use software with an FDTD-based (Finite-Difference Time-Domain) engine to solve this propagation. We present in this article the complete methodology of this modeling: on the one hand, incoherent plane waves are propagated to approximate a product-use diffuse-like source; on the other hand, we use periodic conditions to limit the size of the simulated model and thus both memory and computation time. After presenting the correlation of the model with measurements, we illustrate its use in the optimization of a 1.75 μm pixel.
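    For readers unfamiliar with FDTD, a minimal one-dimensional Yee-scheme loop is sketched below; the actual simulations in the paper are multi-dimensional, include the pixel's dielectric stack and microlens, and use periodic boundaries and incoherent plane-wave summation, none of which is reproduced here.

```python
import numpy as np

nz, nt = 400, 800
c0, dz = 3e8, 10e-9                 # 10 nm cells (assumed)
dt = dz / (2 * c0)                  # comfortably below the Courant limit
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

Ex = np.zeros(nz)
Hy = np.zeros(nz)
eps_r = np.ones(nz)
eps_r[250:] = 4.0                   # a high-index slab to show reflection/transmission

for n in range(nt):
    # leapfrog updates of the 1D Yee scheme
    Hy[:-1] += (Ex[1:] - Ex[:-1]) * dt / (mu0 * dz)
    Ex[1:] += (Hy[1:] - Hy[:-1]) * dt / (eps0 * eps_r[1:] * dz)
    # soft source: a Gaussian pulse injected at cell 50
    Ex[50] += np.exp(-((n - 80) / 25.0) ** 2)

print("field energy proxy in the slab:", float(np.sum(Ex[250:] ** 2)))
```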

  1. ALPIDE, the Monolithic Active Pixel Sensor for the ALICE ITS upgrade

    NASA Astrophysics Data System (ADS)

    Mager, M.; ALICE Collaboration

    2016-07-01

    A new 10 m² inner tracking system based on seven concentric layers of Monolithic Active Pixel Sensors will be installed in the ALICE experiment during the second long shutdown of the LHC in 2019-2020. The monolithic pixel sensors will be fabricated in the 180 nm CMOS Imaging Sensor process of TowerJazz. The ALPIDE design takes full advantage of a particular process feature, the deep p-well, which allows full CMOS circuitry within the pixel matrix while retaining the full charge collection efficiency. Together with the small feature size and the availability of six metal layers, this allowed a continuously active low-power front-end to be placed in each pixel and an in-matrix sparsification circuit to be used that sends only the addresses of hit pixels to the periphery. This approach led to a power consumption of less than 40 mW/cm², a spatial resolution of around 5 μm, a peaking time of around 2 μs, and radiation hardness to some 10¹³ 1 MeV neq/cm², fulfilling or exceeding the ALICE requirements. Over the last years of R&D, several prototype circuits have been used to verify radiation hardness and to optimize pixel geometry and in-pixel front-end circuitry. The positive results led to the submission of full-scale (3 cm × 1.5 cm) sensor prototypes in 2014. They are being characterized in a comprehensive campaign that also involves several irradiation and beam tests. A summary of the results obtained and prospects towards the final sensor to instrument the ALICE Inner Tracking System are given.

  2. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information about the true scene, leading to a loss of spatial resolution as well as inaccuracy in the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division-of-focal-plane polarimeters.
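    A minimal Gaussian-process interpolation sketch for one polarization channel sampled on every other row and column, using a plain RBF kernel and the standard O(N^3) posterior mean; the paper's contribution is precisely the faster structured solver and the noise modelling, neither of which this illustration includes.

```python
import numpy as np

def rbf_kernel(xa, xb, length=1.5, var=1.0):
    d2 = ((xa[:, None, :] - xb[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:8, 0:8]
coords = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
truth = np.sin(coords[:, 0] / 2.0) + np.cos(coords[:, 1] / 3.0)   # smooth toy intensity

# one polarization orientation is only sampled at every other row and column
observed = (yy.ravel() % 2 == 0) & (xx.ravel() % 2 == 0)
noise_var = 0.01                                                  # assumed known sensor noise
y_obs = truth[observed] + rng.normal(0, np.sqrt(noise_var), int(observed.sum()))

K = rbf_kernel(coords[observed], coords[observed]) + noise_var * np.eye(int(observed.sum()))
k_star = rbf_kernel(coords, coords[observed])
posterior_mean = k_star @ np.linalg.solve(K, y_obs)               # GP interpolation at all pixels
print("rms interpolation error:", float(np.sqrt(np.mean((posterior_mean - truth) ** 2))))
```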

  3. Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2008-01-01

    Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have demonstrated previously, based on measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift-estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC-algorithm-based extended-scene Shack-Hartmann sensors.
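    A hedged sketch of sub-pixel shift estimation by FFT-based cross-correlation with parabolic peak refinement; the actual ACC algorithm is iterative and more accurate, so this only illustrates the underlying idea on a synthetic extended scene.

```python
import numpy as np

def subpixel_shift(ref, img):
    """(dy, dx) of img relative to ref via cross-correlation with parabolic refinement."""
    xc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    xc = np.fft.fftshift(xc)
    py, px = np.unravel_index(np.argmax(xc), xc.shape)
    est = []
    for p, line in ((py, xc[:, px]), (px, xc[py, :])):
        a, b, c = line[p - 1], line[p], line[p + 1]
        denom = a - 2 * b + c
        est.append(p + (0.5 * (a - c) / denom if denom != 0 else 0.0))
    return np.array(est) - np.array(ref.shape) // 2

# synthetic extended scene shifted by a known sub-pixel amount (Fourier shift theorem)
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
ky = np.fft.fftfreq(64)[:, None]
kx = np.fft.fftfreq(64)[None, :]
shifted = np.fft.ifft2(np.fft.fft2(scene) * np.exp(-2j * np.pi * (1.3 * ky - 0.4 * kx))).real
print(subpixel_shift(scene, shifted))   # roughly (1.3, -0.4)
```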

  4. Low-power priority Address-Encoder and Reset-Decoder data-driven readout for Monolithic Active Pixel Sensors for tracker system

    NASA Astrophysics Data System (ADS)

    Yang, P.; Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Gao, C.; Hillemanns, H.; Junique, A.; Kofarago, M.; Keil, M.; Kugathasan, T.; Kim, D.; Kim, J.; Lattuca, A.; Marin Tobon, C. A.; Marras, D.; Mager, M.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Siddhanta, S.; Usai, G.; van Hoorne, J. W.; Yi, J.

    2015-06-01

    Active Pixel Sensors used in high energy particle physics require low power consumption to reduce the detector material budget, low integration time to reduce the possibility of pile-up, and fast readout to improve the detector data capability. To satisfy these requirements, a novel Address-Encoder and Reset-Decoder (AERD) asynchronous circuit for fast readout of a pixel matrix has been developed. The AERD data-driven readout architecture performs address encoding and reset decoding based on an arbitration tree, and allows us to read out only the hit pixels. Compared to the traditional readout structure of the rolling shutter scheme in Monolithic Active Pixel Sensors (MAPS), AERD can achieve a low readout time and a low power consumption, especially for low hit occupancies. The readout is controlled at the chip periphery with a signal synchronous with the clock, which allows good separation of digital and analogue signals in the matrix and a reduction of the power consumption. The AERD circuit has been implemented in the TowerJazz 180 nm CMOS Imaging Sensor (CIS) process with full complementary CMOS logic in the pixel. It works at 10 MHz with a matrix height of 15 mm. The energy consumed to read out one pixel is around 72 pJ. A scheme to boost the readout speed to 40 MHz is also discussed. The sensor chip equipped with AERD has been produced and characterised. Test results, including electrical and beam measurements, are presented.

  5. Amorphous In–Ga–Zn–O thin-film transistor active pixel sensor x-ray imager for digital breast tomosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Chumin; Kanicki, Jerzy, E-mail: kanicki@eecs.umich.edu

    Purpose: The breast cancer detection rate for digital breast tomosynthesis (DBT) is limited by the x-ray image quality. The limiting Nyquist frequency for current DBT systems is around 5 lp/mm, while the fine image details contained in the high spatial frequency region (>5 lp/mm) are lost. Also, today the tomosynthesis patient dose is high (0.67–3.52 mGy). To address current issues, in this paper, for the first time, a high-resolution low-dose organic photodetector/amorphous In–Ga–Zn–O thin-film transistor (a-IGZO TFT) active pixel sensor (APS) x-ray imager is proposed for next generation DBT systems. Methods: The indirect x-ray detector is based on a combination of a novel low-cost organic photodiode (OPD) and a cesium iodide-based (CsI:Tl) scintillator. The proposed APS x-ray imager overcomes the difficulty of weak signal detection, when small pixel size and low exposure conditions are used, by an on-pixel signal amplification with a significant charge gain. The electrical performance of the a-IGZO TFT APS pixel circuit is investigated by SPICE simulation using a modified Rensselaer Polytechnic Institute amorphous silicon (a-Si:H) TFT model. Finally, the noise, detective quantum efficiency (DQE), and resolvability of the complete system are modeled using the cascaded system formalism. Results: The result demonstrates that a large charge gain of 31–122 is achieved for the proposed high-mobility (5–20 cm²/V·s) amorphous metal-oxide TFT APS. The charge gain is sufficient to eliminate the TFT thermal noise and flicker noise as well as the external readout circuit noise. Moreover, the low TFT (<10⁻¹³ A) and OPD (<10⁻⁸ A/cm²) leakage currents can further reduce the APS noise. Cascaded system analysis shows that the proposed APS imager with a 75 μm pixel pitch can effectively resolve the Nyquist frequency of 6.67 lp/mm, which can be further improved to ∼10 lp/mm if the pixel pitch is reduced to 50 μm. Moreover, the detector entrance exposure per projection can be reduced from 1 to 0.3 mR without a significant reduction of DQE. The signal-to-noise ratio of the a-IGZO APS imager under 0.3 mR x-ray exposure is comparable to that of an a-Si:H passive pixel sensor imager under 1 mR, indicating good image quality under low dose. A threefold reduction of the current tomosynthesis dose is expected if the proposed technology is combined with an advanced DBT image reconstruction method. Conclusions: The proposed a-IGZO APS x-ray imager with a pixel pitch < 75 μm is capable of achieving a high spatial frequency (>6.67 lp/mm) and a low dose (<0.4 mGy) in next generation DBT systems.

  6. Amorphous In-Ga-Zn-O thin-film transistor active pixel sensor x-ray imager for digital breast tomosynthesis.

    PubMed

    Zhao, Chumin; Kanicki, Jerzy

    2014-09-01

    The breast cancer detection rate for digital breast tomosynthesis (DBT) is limited by the x-ray image quality. The limiting Nyquist frequency for current DBT systems is around 5 lp/mm, while the fine image details contained in the high spatial frequency region (>5 lp/mm) are lost. Also, today the tomosynthesis patient dose is high (0.67-3.52 mGy). To address current issues, in this paper, for the first time, a high-resolution low-dose organic photodetector/amorphous In-Ga-Zn-O thin-film transistor (a-IGZO TFT) active pixel sensor (APS) x-ray imager is proposed for next generation DBT systems. The indirect x-ray detector is based on a combination of a novel low-cost organic photodiode (OPD) and a cesium iodide-based (CsI:Tl) scintillator. The proposed APS x-ray imager overcomes the difficulty of weak signal detection, when small pixel size and low exposure conditions are used, by an on-pixel signal amplification with a significant charge gain. The electrical performance of the a-IGZO TFT APS pixel circuit is investigated by SPICE simulation using a modified Rensselaer Polytechnic Institute amorphous silicon (a-Si:H) TFT model. Finally, the noise, detective quantum efficiency (DQE), and resolvability of the complete system are modeled using the cascaded system formalism. The result demonstrates that a large charge gain of 31-122 is achieved for the proposed high-mobility (5-20 cm²/V·s) amorphous metal-oxide TFT APS. The charge gain is sufficient to eliminate the TFT thermal noise and flicker noise as well as the external readout circuit noise. Moreover, the low TFT (<10⁻¹³ A) and OPD (<10⁻⁸ A/cm²) leakage currents can further reduce the APS noise. Cascaded system analysis shows that the proposed APS imager with a 75 μm pixel pitch can effectively resolve the Nyquist frequency of 6.67 lp/mm, which can be further improved to ∼10 lp/mm if the pixel pitch is reduced to 50 μm. Moreover, the detector entrance exposure per projection can be reduced from 1 to 0.3 mR without a significant reduction of DQE. The signal-to-noise ratio of the a-IGZO APS imager under 0.3 mR x-ray exposure is comparable to that of an a-Si:H passive pixel sensor imager under 1 mR, indicating good image quality under low dose. A threefold reduction of the current tomosynthesis dose is expected if the proposed technology is combined with an advanced DBT image reconstruction method. The proposed a-IGZO APS x-ray imager with a pixel pitch < 75 μm is capable of achieving a high spatial frequency (>6.67 lp/mm) and a low dose (<0.4 mGy) in next generation DBT systems.

  7. Impact of sensor's point spread function on land cover characterization: Assessment and deconvolution

    USGS Publications Warehouse

    Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.

    2002-01-01

    Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that, although differences between pixels derived with and without PSF effects were small on average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the classification accuracy lost to the residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating the derived proportion images to twice their original pixel size.

  8. Pixel parallel localized driver design for a 128 x 256 pixel array 3D 1Gfps image sensor

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Dao, V. T. S.; Etoh, T. G.; Charbon, E.

    2017-02-01

    In this paper, a 3D 1 Gfps BSI image sensor is proposed, where 128 × 256 pixels are located in the top-tier chip and a 32 × 32 localized driver array in the bottom-tier chip. Pixels are designed with Multiple Collection Gates (MCG), which collect photons selectively, with different collection gates being active at intervals of 1 ns to achieve 1 Gfps. For the drivers, a global PLL is designed, which consists of a ring oscillator with 6-stage current-starved differential inverters, achieving a wide frequency tuning range from 40 MHz to 360 MHz (20 ps rms jitter). The drivers are replicas of the ring oscillator that operates within the PLL. Together with level shifters and XNOR gates, continuous 3.3 V pulses are generated with the desired pulse width, which is 1/12 of the PLL clock period. The driver array is activated by a START signal, which propagates through a highly balanced clock tree, to activate all the pixels at the same time with virtually negligible skew.

  9. Detection of spectral line curvature in imaging spectrometer data

    NASA Astrophysics Data System (ADS)

    Neville, Robert A.; Sun, Lixin; Staenz, Karl

    2003-09-01

    A procedure has been developed to measure the band-centers and bandwidths for imaging spectrometers using data acquired by the sensor in flight. This is done for each across-track pixel, thus allowing the measurement of the instrument's slit curvature or spectral 'smile'. The procedure uses spectral features present in the at-sensor radiance which are common to all pixels in the scene. These are principally atmospheric absorption lines. The band-center and bandwidth determinations are made by correlating the sensor measured radiance with a modelled radiance, the latter calculated using MODTRAN 4.2. Measurements have been made for a number of instruments including Airborne Visible and Infra-Red Imaging Spectrometer (AVIRIS), SWIR Full Spectrum Imager (SFSI), and Hyperion. The measurements on AVIRIS data were performed as a test of the procedure; since AVIRIS is a whisk-broom scanner it is expected to be free of spectral smile. SFSI is an airborne pushbroom instrument with considerable spectral smile. Hyperion is a satellite pushbroom sensor with a relatively small degree of smile. Measurements of Hyperion were made using three different data sets to check for temporal variations.
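    The band-centre estimation idea can be sketched as a brute-force search: resample a high-resolution model spectrum (here a synthetic absorption line standing in for MODTRAN output) through a Gaussian spectral response at trial centre offsets and keep the offset that best reproduces the measured band radiances. All spectra and numbers below are assumptions for illustration.

```python
import numpy as np

wl = np.linspace(740, 780, 4000)                         # high-resolution wavelength grid [nm]
model_hi = 1.0 - 0.8 * np.exp(-(wl - 760.5) ** 2 / 0.8)  # toy absorption line (MODTRAN stand-in)

def band_radiance(centre, fwhm=4.0):
    """Radiance seen by one band with a Gaussian spectral response function."""
    sigma = fwhm / 2.3548
    srf = np.exp(-(wl - centre) ** 2 / (2 * sigma ** 2))
    return np.sum(srf * model_hi) / np.sum(srf)

nominal_centres = np.array([754.0, 758.0, 762.0, 766.0, 770.0])
true_smile = 0.3                                         # nm offset for this detector column
measured = np.array([band_radiance(c + true_smile) for c in nominal_centres])

offsets = np.linspace(-1.0, 1.0, 201)
sse = [np.sum((np.array([band_radiance(c + o) for c in nominal_centres]) - measured) ** 2)
       for o in offsets]
print("estimated smile offset [nm]:", offsets[int(np.argmin(sse))])
```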

  10. New amorphous-silicon image sensor for x-ray diagnostic medical imaging applications

    NASA Astrophysics Data System (ADS)

    Weisfield, Richard L.; Hartney, Mark A.; Street, Robert A.; Apte, Raj B.

    1998-07-01

    This paper introduces new high-resolution amorphous silicon (a-Si) image sensors specifically configured to demonstrate film-quality medical x-ray imaging capabilities. The device utilizes an x-ray phosphor screen coupled to an array of a-Si photodiodes for detecting visible light, and a-Si thin-film transistors (TFTs) for connecting the photodiodes to external readout electronics. We have developed imagers based on a pixel size of 127 μm × 127 μm with an approximately page-size imaging area of 244 mm × 195 mm and an array size of 1,536 data lines by 1,920 gate lines, for a total of 2.95 million pixels. More recently, we have developed a much larger imager based on the same pixel pattern, which covers an area of approximately 406 mm × 293 mm, with 2,304 data lines by 3,200 gate lines, for a total of nearly 7.4 million pixels. This is very likely the largest image sensor array and highest-pixel-count detector fabricated on a single substrate. Both imagers connect to a standard PC and are capable of taking an image in a few seconds. Through design rule optimization we have achieved a light-sensitive area of 57% and optimized the quantum efficiency for x-ray phosphor output in the green part of the spectrum, yielding an average quantum efficiency between 500 and 600 nm of approximately 70%. At the same time, we have managed to reduce extraneous leakage currents on these devices to a few fA per pixel, which allows a very high dynamic range to be achieved. We have characterized leakage currents as a function of photodiode bias, time and temperature to demonstrate high stability over these large arrays. At the electronics level, we have adopted a new generation of low-noise, charge-sensitive amplifiers coupled to 12-bit A/D converters. Considerable attention was given to reducing electronic noise in order to demonstrate a large dynamic range (over 4,000:1) for medical imaging applications. Through a combination of low data-line capacitance, readout amplifier design, optimized timing, and noise cancellation techniques, we achieve 1,000 e⁻ to 2,000 e⁻ of noise for the page-size and large-size arrays, respectively. This allows for true 12-bit performance and quantum-limited images over a wide range of x-ray exposures. Various approaches to reducing line-correlated noise have been implemented and are discussed. Images documenting the improved performance are presented. Avenues for improvement are under development, including higher-resolution 97 μm pixel imagers, further improvements in detective quantum efficiency, and characterization of dynamic behavior.

  11. An improved algorithm for de-striping of ocean colour monitor imageries aided by measured sensor characteristics

    NASA Astrophysics Data System (ADS)

    Dutt, Ashutosh; Mishra, Ashish; Goswami, D. R.; Kumar, A. S. Kiran

    2016-05-01

    Push-broom sensors, in the bands used to study oceans, generally suffer from residual non-uniformity even after radiometric correction. The in-orbit data from OCM-2 show pronounced striping in the lower bands. There have been many attempts and different approaches to solve the problem using the image data itself, and the success, or lack thereof, of each algorithm depends on the quality of the uniform region identified. In this paper, an image-based destriping algorithm is presented with constraints derived from a ground calibration exercise. The basis of the methodology is the determination of pixel-to-pixel non-uniformity from uniform segments identified and collected from a large number of images covering the dynamic range of the sensor. The results show the effectiveness of the algorithm over different targets. The performance is qualitatively evaluated by visual inspection and quantitatively measured by two parameters.
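    A minimal destriping sketch: relative per-column (per-detector) gains are estimated from rows flagged as uniform and divided out. The actual algorithm pools uniform segments from many images and constrains the result with ground-calibration data, which this synthetic example does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.outer(np.linspace(50, 80, 200), np.ones(300))   # smooth "ocean" radiance, 200 x 300
gains = 1.0 + 0.03 * rng.normal(size=300)                  # per-detector (per-column) non-uniformity
image = scene * gains + rng.normal(0.0, 0.2, scene.shape)  # striped image

uniform = image[40:60, :]                                  # rows identified as radiometrically uniform
col_mean = uniform.mean(axis=0)
est_gain = col_mean / col_mean.mean()                      # relative pixel-to-pixel gain estimate
destriped = image / est_gain

print("column-mean std  before:", round(float(image.mean(axis=0).std()), 3),
      " after:", round(float(destriped.mean(axis=0).std()), 3))
```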

  12. Optical and electrical characterization of a back-thinned CMOS active pixel sensor

    NASA Astrophysics Data System (ADS)

    Blue, Andrew; Clark, A.; Houston, S.; Laing, A.; Maneuski, D.; Prydderch, M.; Turchetta, R.; O'Shea, V.

    2009-06-01

    This work reports the first characterization of a back-thinned Vanilla sensor, a 512 × 512 active pixel sensor (APS) with 25 μm square pixels. Characterization of the detectors was carried out through the analysis of photon transfer curves to yield measurements of full well capacity, noise levels, gain constants and linearity. Spectral characterization of the sensors was also performed in the visible and UV regions. A full comparison against non-back-thinned, front-illuminated Vanilla sensors is included. These measurements suggest that the Vanilla APS will be suitable for a wide range of applications, including particle physics and biomedical imaging.
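    The photon-transfer-curve analysis mentioned above can be sketched with synthetic data: for a shot-noise-limited linear sensor, the temporal variance in DN grows linearly with the mean signal, and the conversion gain is the inverse slope. The gain and read-noise values below are assumptions, not Vanilla measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
K_true = 4.0          # conversion gain, e-/DN (assumed)
read_noise_e = 10.0   # read noise, electrons rms (assumed)

means, variances = [], []
for electrons in np.linspace(500, 40000, 15):
    e = rng.poisson(electrons, size=(50, 64, 64)) + rng.normal(0, read_noise_e, (50, 64, 64))
    dn = e / K_true
    means.append(dn.mean())
    variances.append(dn.var(axis=0).mean())   # temporal variance per pixel, averaged over pixels

slope, _ = np.polyfit(means, variances, 1)     # variance = mean/K + constant
print("estimated conversion gain:", round(1.0 / slope, 2), "e-/DN (true 4.0)")
```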

  13. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a Poisson mixture denoising method that removes the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.

  14. Physical characterization and performance comparison of active- and passive-pixel CMOS detectors for mammography.

    PubMed

    Elbakri, I A; McIntosh, B J; Rickey, D W

    2009-03-21

    We investigated the physical characteristics of two complementary metal oxide semiconductor (CMOS) mammography detectors. The detectors featured 14-bit image acquisition, 50 microm detector element (del) size and an active area of 5 cm x 5 cm. One detector was a passive-pixel sensor (PPS) with signal amplification performed by an array of amplifiers connected to dels via data lines. The other detector was an active-pixel sensor (APS) with signal amplification performed at each del. Passive-pixel designs have higher read noise due to data line capacitance, and the APS represents an attempt to improve the noise performance of this technology. We evaluated the detectors' resolution by measuring the modulation transfer function (MTF) using a tilted edge. We measured the noise power spectra (NPS) and detective quantum efficiencies (DQE) using mammographic beam conditions specified by the IEC 62220-1-2 standard. Our measurements showed the APS to have much higher gain, slightly higher MTF, and higher NPS. The MTF of both sensors approached 10% near the Nyquist limit. DQE values near dc frequency were in the range of 55-67%, with the APS sensor DQE lower than the PPS DQE for all frequencies. Our results show that lower read noise specifications in this case do not translate into gains in the imaging performance of the sensor. We postulate that the lower fill factor of the APS is a possible cause for this result.

  15. Nonlinear time dependence of dark current in charge-coupled devices

    NASA Astrophysics Data System (ADS)

    Dunlap, Justin C.; Bodegom, Erik; Widenhorn, Ralf

    2011-03-01

    It is generally assumed that charge-coupled device (CCD) imagers produce a linear response of dark current versus exposure time except near saturation. We found a large number of pixels with a nonlinear dark current response to exposure time in two scientific CCD imagers. These pixels exhibit behavior distinguishable from that of other pixels and can therefore be characterized in groups. Data from two Kodak CCD sensors are presented for exposure times from a few seconds up to two hours. Linear behavior is traditionally taken for granted when carrying out dark current correction; as a result, pixels with nonlinear behavior will be corrected inaccurately.
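
    The sketch below shows one simple way to flag pixels with a nonlinear dark current response, by fitting a per-pixel line of dark signal versus exposure time and thresholding the residuals; it is an illustration, not the authors' characterization procedure:

    ```python
    import numpy as np

    def flag_nonlinear_pixels(dark_stack, exposure_times, rel_tol=0.05):
        """dark_stack: array of shape (n_exposures, rows, cols) of dark frames.
        Returns a boolean map where True marks a nonlinear dark response."""
        t = np.asarray(exposure_times, dtype=float)
        n, h, w = dark_stack.shape
        y = dark_stack.reshape(n, -1).astype(float)
        # Least-squares line y = a*t + b for every pixel simultaneously
        A = np.vstack([t, np.ones_like(t)]).T          # (n, 2) design matrix
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # (2, rows*cols)
        fit = A @ coef
        rel_resid = np.abs(y - fit).max(axis=0) / (np.abs(fit).max(axis=0) + 1e-9)
        return (rel_resid > rel_tol).reshape(h, w)
    ```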

  16. A 65k pixel, 150k frames-per-second camera with global gating and micro-lenses suitable for fluorescence lifetime imaging

    NASA Astrophysics Data System (ADS)

    Burri, Samuel; Powolny, François; Bruschini, Claudio E.; Michalet, Xavier; Regazzoni, Francesco; Charbon, Edoardo

    2014-05-01

    This paper presents our work on a 65k pixel single-photon avalanche diode (SPAD) based imaging sensor realized in a 0.35 μm standard CMOS process. At a resolution of 512 by 128 pixels the sensor is read out in 6.4 μs to deliver over 150k monochrome frames per second. Each pixel has a pitch of 24 μm and contains the SPAD with 12T quenching and gating circuitry along with a memory element. The gating signals are distributed across the chip through a balanced tree to minimize the signal skew between the pixels. The pixel array is row-addressable and data are sent off chip on 128 lines in parallel at a frequency of 80 MHz. The system is controlled by an FPGA which generates the gating and readout signals and can be used for arbitrary real-time computation on the frames from the sensor. The communication protocol between the camera and a conventional PC is USB2. The photosensitive (active) area of the chip is 5% of the total and can be significantly improved with the application of a micro-lens array. A micro-lens array, for use with collimated light, has been designed and its performance is reviewed in the paper. Among other high-speed phenomena, the gating circuitry, capable of generating illumination periods shorter than 5 ns, can be used for Fluorescence Lifetime Imaging (FLIM). In order to measure the lifetime of fluorophores excited by a picosecond laser, the sensor's illumination period is synchronized with the excitation laser pulses. A histogram of the photon arrival times relative to the excitation is then constructed by counting the photons arriving during the sensitive time for several positions of the illumination window. The histogram for each pixel is afterwards transferred to a computer where software routines extract the lifetime at each location with an accuracy better than 100 ps. We show results for fluorescence lifetime measurements using different fluorophores with lifetimes ranging from 150 ps to 5 ns.
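
    As a sketch of the lifetime-extraction step described above, the following estimates a mono-exponential lifetime from a per-pixel histogram of counts versus gate position using a log-linear tail fit; the sensor's actual software routines may differ:

    ```python
    import numpy as np

    def lifetime_from_gated_counts(window_positions_ns, counts):
        """Estimate a mono-exponential lifetime (ns) from photon counts recorded
        at a series of gate (illumination-window) positions relative to the
        excitation pulse, using a log-linear fit of the decaying tail."""
        t = np.asarray(window_positions_ns, dtype=float)
        c = np.asarray(counts, dtype=float)
        # Fit only the decaying tail after the peak, ignoring empty bins
        start = int(np.argmax(c))
        mask = (np.arange(len(c)) > start) & (c > 0)
        slope, _ = np.polyfit(t[mask], np.log(c[mask]), 1)
        return -1.0 / slope
    ```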

  17. Hyperspectral Imaging Sensors and the Marine Coastal Zone

    NASA Technical Reports Server (NTRS)

    Richardson, Laurie L.

    2000-01-01

    Hyperspectral imaging sensors greatly expand the potential of remote sensing to assess, map, and monitor marine coastal zones. Each pixel in a hyperspectral image contains an entire spectrum of information. As a result, hyperspectral image data can be processed in two very different ways: by image classification techniques, to produce mapped outputs of features in the image on a regional scale; and by spectral analysis of the data embedded within each pixel of the image. The latter is particularly useful in marine coastal zones because of the spectral complexity of suspended as well as benthic features found in these environments. Spectral-based analysis of hyperspectral (AVIRIS) imagery was carried out to investigate a marine coastal zone of South Florida, USA. Florida Bay is a phytoplankton-rich estuary characterized by taxonomically distinct phytoplankton assemblages and extensive seagrass beds. End-member spectra were extracted from AVIRIS image data corresponding to ground-truth sample stations and well-known field sites. Spectral libraries were constructed from the AVIRIS end-member spectra and used to classify images with the Spectral Angle Mapper (SAM) algorithm, a spectral-based approach that compares the spectrum in each pixel of an image with each spectrum in a spectral library. Using this approach, different phytoplankton assemblages containing diatoms, cyanobacteria, and green microalgae, as well as benthic communities (seagrasses), were mapped.
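
    A minimal implementation of the Spectral Angle Mapper classification step described above, assuming a hyperspectral cube and a library of end-member spectra; thresholding of unclassified pixels is omitted:

    ```python
    import numpy as np

    def spectral_angle_mapper(cube, library):
        """Classify each pixel of a hyperspectral cube (rows x cols x bands) by
        the smallest spectral angle to a library of end-member spectra
        (classes x bands). Returns a class-index map."""
        h, w, b = cube.shape
        pixels = cube.reshape(-1, b).astype(float)
        lib = np.asarray(library, dtype=float)
        # cos(theta) = (x . r) / (|x||r|) for every pixel/end-member pair
        num = pixels @ lib.T
        denom = np.linalg.norm(pixels, axis=1, keepdims=True) * np.linalg.norm(lib, axis=1)
        angles = np.arccos(np.clip(num / (denom + 1e-12), -1.0, 1.0))
        return np.argmin(angles, axis=1).reshape(h, w)
    ```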

  18. Contrast computation methods for interferometric measurement of sensor modulation transfer function

    NASA Astrophysics Data System (ADS)

    Battula, Tharun; Georgiev, Todor; Gille, Jennifer; Goma, Sergio

    2018-01-01

    Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
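
    As an illustration of the two contrast definitions discussed above, the sketch below computes a Michelson contrast from fringe extrema and a Fourier contrast as the ratio of the fringe-frequency component to the DC term; this is one plausible reading of the definitions, not the authors' exact formulation:

    ```python
    import numpy as np

    def michelson_contrast(fringe_profile):
        """(Imax - Imin) / (Imax + Imin); percentiles reduce noise sensitivity."""
        i_max = np.percentile(fringe_profile, 99)
        i_min = np.percentile(fringe_profile, 1)
        return (i_max - i_min) / (i_max + i_min)

    def fourier_contrast(fringe_profile, fringe_freq, sample_pitch):
        """Ratio of the Fourier magnitude at the fringe frequency to the DC term.
        For I = A + B*cos(2*pi*f*x), this returns B/A (factor 2 accounts for the
        split between positive and negative frequencies)."""
        spectrum = np.abs(np.fft.rfft(np.asarray(fringe_profile, dtype=float)))
        freqs = np.fft.rfftfreq(len(fringe_profile), d=sample_pitch)
        k = np.argmin(np.abs(freqs - fringe_freq))
        return 2.0 * spectrum[k] / spectrum[0]
    ```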

  19. Tritium autoradiography with thinned and back-side illuminated monolithic active pixel sensor device

    NASA Astrophysics Data System (ADS)

    Deptuch, G.

    2005-05-01

    The first autoradiographic results for a tritium (³H) labelled source obtained with monolithic active pixel sensors are presented. The detector is a high-resolution, back-side illuminated imager, developed within the SUCIMA collaboration for low-energy (<30 keV) electron detection. The sensitivity to these energies is obtained by thinning the detector, originally fabricated in the form of a standard VLSI chip, down to the thickness of the epitaxial layer. The detector used is the 1×10⁶ pixel, thinned MIMOSA V chip. The low noise performance and thin (~160 nm) entrance window provide sensitivity to energies as low as ~4 keV. A polymer tritium source was placed directly atop the detector in open-air conditions. A real-time image of the source was obtained.

  20. Geiger-Mode Avalanche Photodiode Arrays Integrated to All-Digital CMOS Circuits

    DTIC Science & Technology

    2016-01-20

    Record excerpts (figure captions and abstract fragments): Figure 7, 4×4 GMAPD array wire-bonded to CMOS timing circuits; Figure 8, low-fill-factor APD design used in lidar sensors. The APD doping ... epitaxial growth, and the pixels are isolated by mesa etch. 128×32 lidar image sensors were built by bump-bonding the APD arrays to a CMOS timing ... passive image sensor with this large a format based on hybridization of a GMAPD array to a CMOS readout. Fig. 14 shows one of the first images taken.

  1. Energy weighted x-ray dark-field imaging.

    PubMed

    Pelzer, Georg; Zang, Andrea; Anton, Gisela; Bayer, Florian; Horn, Florian; Kraus, Manuel; Rieger, Jens; Ritter, Andre; Wandner, Johannes; Weber, Thomas; Fauler, Alex; Fiederle, Michael; Wong, Winnie S; Campbell, Michael; Meiser, Jan; Meyer, Pascal; Mohr, Jürgen; Michel, Thilo

    2014-10-06

    The dark-field image obtained in grating-based x-ray phase-contrast imaging can provide information about the objects' microstructures on a scale smaller than the pixel size even with low geometric magnification. In this publication we demonstrate that the dark-field image quality can be enhanced with an energy-resolving pixel detector. Energy-resolved x-ray dark-field images were acquired with a 16-energy-channel photon-counting pixel detector with a 1 mm thick CdTe sensor in a Talbot-Lau x-ray interferometer. A method for contrast-noise-ratio (CNR) enhancement is proposed and validated experimentally. In measurements, a CNR improvement by a factor of 1.14 was obtained. This is equivalent to a possible radiation dose reduction of 23%.
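
    The quoted dose figure follows from the usual assumption that CNR scales with the square root of dose in a quantum-noise-limited measurement; as a quick check:

    ```latex
    \mathrm{CNR} \propto \sqrt{D}
    \quad\Longrightarrow\quad
    \frac{D_{\mathrm{weighted}}}{D_{\mathrm{unweighted}}}
      = \left(\frac{\mathrm{CNR}_{\mathrm{unweighted}}}{\mathrm{CNR}_{\mathrm{weighted}}}\right)^{2}
      = \frac{1}{1.14^{2}} \approx 0.77 ,
    ```

    which corresponds to roughly a 23% dose reduction at equal CNR.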

  2. CMOS imager for pointing and tracking applications

    NASA Technical Reports Server (NTRS)

    Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)

    2006-01-01

    Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.

  3. Research and implementation of simulation for TDICCD remote sensing in vibration of optical axis

    NASA Astrophysics Data System (ADS)

    Liu, Zhi-hong; Kang, Xiao-jun; Lin, Zhe; Song, Li

    2013-12-01

    During the exposure time of a space-borne TDICCD push-broom camera, the charge transfer speed in the push-broom direction and the line-by-line scanning speed of the sensor must match each other strictly. However, since attitude disturbance of the satellite and vibration of the camera are inevitable, the speed mismatch cannot be eliminated; the signals of different targets then overlay each other and image resolution declines. Simulating the degradation of image quality caused by vibration of the optical axis allows the effects of velocity mismatch to be observed and analyzed visually, which is significant for image quality evaluation and for the design of image restoration algorithms. The first problem to be solved is how to model the process in the time and space domains during the imaging time. Because the vibration information used for simulation is usually given as a continuous curve while the pixels of the original image matrix and the sensor matrix are discrete, the two cannot always be matched exactly, and discrete sampling within the integration time also influences the simulation result. An appropriate discrete modeling and simulation method is therefore essential for simulation accuracy and efficiency. This paper analyzes discretization schemes in the time and space domains and presents a method, based on the TDICCD imaging principle, to simulate the image quality of the optical system under line-of-sight vibration. The gray value of each pixel in the sensor matrix is obtained by weighted averaging, which solves the pixel mismatch problem. Comparison with a hardware test indicates that the simulation system performs well in accuracy and reliability.

  4. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    NASA Astrophysics Data System (ADS)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery are critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.

  5. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    NASA Astrophysics Data System (ADS)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A scientific camera system with high dynamic range, designed and manufactured by Thermo Electron for scientific and medical applications, is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4 Mega-pixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparency pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout of the photon-generated charge (NDRO). Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR algorithm that is designed to extend the effective dynamic range of the camera by several orders of magnitude, up to a 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC that is connected to the camera via Gigabit Ethernet.

  6. A new 9T global shutter pixel with CDS technique

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Ma, Cheng; Zhou, Quan; Wang, Xinyang

    2015-04-01

    Because they are free of motion blur, global shutter pixels are widely used in CMOS image sensors for high-speed applications such as machine vision and scientific inspection. In a global shutter sensor, the signal of every pixel must first be stored in the pixel and then held until readout; for higher frame rates, very fast operation of the pixel array is needed. There are basically two ways to store the signal in the pixel. One is in the charge domain, such as the design in [1], which requires a complicated process during pixel fabrication. The other is in the voltage domain, for example the design in [2]; that pixel is based on 4T PPD technology, and driving the highly capacitive transfer gate normally limits the speed of the array operation. In this paper we report a new 9T global shutter pixel based on 3-T partially pinned photodiode (PPPD) technology. It incorporates three in-pixel storage capacitors allowing correlated double sampling (CDS) and pipelined operation of the array (pixel exposure during readout of the array). Only two control pulses are needed for all the pixels at the end of exposure, which allows high-speed exposure control.

  7. Indium antimonide large-format detector arrays

    NASA Astrophysics Data System (ADS)

    Davis, Mike; Greiner, Mark

    2011-06-01

    Large format infrared imaging sensors are required to achieve simultaneously high resolution and wide field of view image data. Infrared sensors are generally required to be cooled from room temperature to cryogenic temperatures in less than 10 min, thousands of times during their lifetime. The challenge is to remove mechanical stress, caused by materials with different coefficients of expansion, over a very wide temperature range and, at the same time, provide high-sensitivity and high-resolution image data. These challenges are met by developing a hybrid in which the indium antimonide detector elements (pixels) are unconnected islands that essentially float on a silicon substrate and form a near perfect match to the silicon read-out circuit. Since the pixels are unconnected and isolated from each other, the array is reticulated. This paper shows that the front-side illuminated, reticulated-element indium antimonide focal planes developed at L-3 Cincinnati Electronics are robust, approach the background-limited sensitivity, and provide the resolution expected of the reticulated pixel array.

  8. Mk x Nk gated CMOS imager

    NASA Astrophysics Data System (ADS)

    Janesick, James; Elliott, Tom; Andrews, James; Tower, John; Bell, Perry; Teruya, Alan; Kimbrough, Joe; Bishop, Jeanne

    2014-09-01

    Our paper describes a recently designed Mk x Nk x 10 um pixel CMOS gated imager intended to be first employed at the LLNL National Ignition Facility (NIF). Fabrication involves stitching MxN 1024x1024x10 um pixel blocks together into a monolithic imager (where M = 1, 2, ..., 10 and N = 1, 2, ..., 10). The imager has been designed for either NMOS or PMOS pixel fabrication using a base 0.18 um/3.3V CMOS process. Details behind the design are discussed with emphasis on a custom global reset feature which clears unwanted charge from the imager in ~1 us during the fusion ignition process, followed by an exposure to obtain useful data. Performance data generated by prototype imagers designed similarly to the Mk x Nk sensor are presented.

  9. Backside illuminated CMOS-TDI line scanner for space applications

    NASA Astrophysics Data System (ADS)

    Cohen, O.; Ben-Ari, N.; Nevo, I.; Shiloah, N.; Zohar, G.; Kahanov, E.; Brumer, M.; Gershon, G.; Ofer, O.

    2017-09-01

    A new multi-spectral line-scan CMOS image sensor is reported. The backside-illuminated (BSI) image sensor was designed for continuous-scanning Low Earth Orbit (LEO) space applications and includes custom high-quality CMOS active pixels, a Time Delayed Integration (TDI) mechanism that increases the SNR, a 2-phase exposure mechanism that increases the dynamic Modulation Transfer Function (MTF), very low power internal Analog-to-Digital Converters (ADC) with a resolution of 12 bits per pixel, and an on-chip controller. The sensor has 4 independent pixel arrays, each arranged in 2600 TDI columns with controllable TDI depth from 8 up to 64 levels. A multispectral optical filter with a specific spectral response per array is assembled at the package level. In this paper we briefly describe the sensor design and present recent electrical and electro-optical measurements of the first prototypes, including high Quantum Efficiency (QE), high MTF, a wide range of selectable Full Well Capacity (FWC), excellent linearity of approximately 1.3% over a signal range of 5-85% and approximately 1.75% over 2-95% of the signal span, readout noise of approximately 95 electrons with 64 TDI levels, negligible dark current, and total power consumption of less than 1.5 W for the 4-band sensor under all operating conditions.
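
    A rough illustration of why TDI increases SNR, assuming the signal accumulates over the TDI stages while shot and dark noise add in quadrature and read noise is applied once at readout; the numbers are illustrative, not the reported sensor's noise model:

    ```python
    import numpy as np

    def tdi_snr(signal_e_per_stage, read_noise_e, dark_e_per_stage, n_tdi):
        """SNR of a TDI column after n_tdi accumulation stages."""
        signal = n_tdi * signal_e_per_stage
        noise = np.sqrt(n_tdi * (signal_e_per_stage + dark_e_per_stage)
                        + read_noise_e ** 2)
        return signal / noise

    # SNR grows roughly with sqrt(n_tdi) once shot noise dominates
    for n in (8, 16, 32, 64):
        print(n, round(tdi_snr(200, 95, 1, n), 1))
    ```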

  10. Real time three-dimensional space video rate sensors for millimeter waves imaging based very inexpensive plasma LED lamps

    NASA Astrophysics Data System (ADS)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested in developing inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs) in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many applications in fields such as homeland security, medicine, communications, military products and space technology, mainly because this radiation penetrates well through dust storms, fog, heavy rain, dielectric materials, biological tissue and diverse other materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low and the scattering is also low compared to NIR and VIS. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD lamp-based Focal Plane Arrays (FPA). The three cameras differ in the number of detectors, the scanning operation, and the detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively; both use direct detection and are limited to fixed imaging. The latest sensor is a 16×16 GDD FPA with multiplexed readout. It permits real-time video-rate imaging at 30 frames/s and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency modulated continuous wave (FMCW) scheme in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be performed through a user-friendly interface. This FPA sensor is built from 256 commercial GDD lamps (International Light, Inc., Peabody, MA, model 527 Ne indicator lamps, 3 mm diameter) as pixel detectors. All three sensors are fully supported by a software Graphical User Interface (GUI). They were tested and characterized with different kinds of optical systems for imaging applications, super resolution, and calibration methods. The 16×16 sensor can employ a chirp-radar-like method to produce depth and reflectance information in the image, enabling 3-D MMW imaging in real time at video frame rate. In this work we demonstrate different optical imaging systems capable of 3-D imaging at short range and at longer distances of at least 10-20 meters.

  11. Wireless image-data transmission from an implanted image sensor through a living mouse brain by intra body communication

    NASA Astrophysics Data System (ADS)

    Hayami, Hajime; Takehara, Hiroaki; Nagata, Kengo; Haruta, Makito; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Ohta, Jun

    2016-04-01

    Intra-body communication technology allows the fabrication of more compact implantable biomedical sensors than RF wireless technology. In this paper, we report the fabrication of an implantable image sensor of 625 µm width and 830 µm length and the demonstration of wireless image-data transmission through the brain tissue of a living mouse. The sensor was designed to transmit pixel values by pulse width modulation (PWM). The PWM signals transmitted from the sensor through the brain tissue were detected by a receiver electrode. Wireless data transmission of a two-dimensional image was successfully demonstrated in a living mouse brain. The technique reported here is expected to provide useful methods of data transmission for micro-sized implantable biomedical sensors.

  12. Faxed document image restoration method based on local pixel patterns

    NASA Astrophysics Data System (ADS)

    Akiyama, Teruo; Miyamoto, Nobuo; Oguro, Masami; Ogura, Kenji

    1998-04-01

    A method is proposed for restoring degraded faxed document images using the patterns of pixels that make up small areas of a document. The method effectively restores faxed images containing the halftone textures and/or dense salt-and-pepper noise that degrade OCR system performance. In the halftone restoration process, white-centered 3×3 pixel blocks in which black and white pixels alternate are first identified, using the distribution of pixel values, as halftone textures, and the white center pixels are then inverted to black. To remove high-density salt-and-pepper noise, it is assumed that the degradation is caused by ill-balanced bias and inappropriate thresholding of the sensor output, which results in the addition of random noise; the restored image can then be estimated by an approximation that inverts the assumed degradation process. To process degraded faxed images, the algorithms above are combined. An experiment was conducted on 24 especially poor-quality examples selected from data sets that practical fax-based OCR systems cannot handle. The maximum recovery rate in terms of mean square error was 98.8 percent.
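
    A simplified sketch of the halftone-texture step described above, identifying white-centered 3×3 blocks whose pixels alternate in a checkerboard pattern and inverting the white center; the paper's actual pattern test is more elaborate:

    ```python
    import numpy as np

    def remove_halftone_texture(binary_image):
        """binary_image: 2D array with 0 = white, 1 = black.
        Inverts the white center of 3x3 checkerboard (alternating) blocks."""
        src = np.asarray(binary_image)
        img = src.copy()
        checker = np.array([[0, 1, 0],
                            [1, 0, 1],
                            [0, 1, 0]])   # white-centered alternating block
        h, w = src.shape
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                block = src[y - 1:y + 2, x - 1:x + 2]
                if np.array_equal(block, checker):
                    img[y, x] = 1          # invert the white center to black
        return img
    ```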

  13. Detection of Spatially Unresolved (Nominally Sub-Pixel) Submerged and Surface Targets Using Hyperspectral Data

    DTIC Science & Technology

    2012-09-01

    Record excerpts (outline, acronym list and abstract fragments): Feasibility (MT modeling): a. continuum of mixture distributions interpolated; b. mixture infeasibilities calculated for each pixel; c. valid detections ... Visible/Infrared Imaging Spectrometer; BRDF, Bidirectional Reflectance Distribution Function; CASI, Compact Airborne Spectrographic Imager; CCD ... filtering (MTMF), designed by Healey and Slater (1999) to use "a physical model to generate the set of sensor spectra for a target that will be" ...

  14. Modulated CMOS camera for fluorescence lifetime microscopy.

    PubMed

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on the pixel intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-data-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high-speed acquisition. © 2015 Wiley Periodicals, Inc.
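
    As a sketch of the phasor analysis applied to the camera's output stack, the following computes per-pixel phasor coordinates from homodyne images taken at evenly spaced modulation phases; the per-pixel calibration emphasized in the paper is not included:

    ```python
    import numpy as np

    def phasor_coordinates(image_stack, phases=None):
        """image_stack: array of shape (n_phases, rows, cols) of modulated images.
        Returns per-pixel phasor coordinates (g, s); for a single exponential,
        tau = s / (g * omega), with omega the angular modulation frequency."""
        stack = image_stack.astype(float)
        n = stack.shape[0]
        if phases is None:
            phases = 2 * np.pi * np.arange(n) / n
        cos_w = np.cos(phases)[:, None, None]
        sin_w = np.sin(phases)[:, None, None]
        dc = stack.sum(axis=0)                       # per-pixel DC component
        g = 2 * (stack * cos_w).sum(axis=0) / dc     # first-harmonic cosine term
        s = 2 * (stack * sin_w).sum(axis=0) / dc     # first-harmonic sine term
        return g, s
    ```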

  15. Propagation phasor approach for holographic image reconstruction

    PubMed Central

    Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan

    2016-01-01

    To achieve high-resolution and wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth-products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in five- to seven-fold reduced number of raw measurements, while still achieving a competitive resolution and space-bandwidth-product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears. PMID:26964671

  16. NDVI, scale invariance and the modifiable areal unit problem: An assessment of vegetation in the Adelaide Parklands

    USGS Publications Warehouse

    Nouri, Hamideh; Anderson, Sharolyn; Sutton, Paul; Beecham, Simon; Nagler, Pamela L.; Jarchow, Christopher J.; Roberts, Dar A.

    2017-01-01

    This research addresses the question of whether or not the Normalised Difference Vegetation Index (NDVI) is scale invariant (i.e. constant over spatial aggregation) for pure pixels of urban vegetation. It has long been recognized that there are issues related to the modifiable areal unit problem (MAUP) pertaining to indices such as NDVI and images at varying spatial resolutions. These issues are relevant to using NDVI values in spatial analyses. We compare two different methods of calculating a mean NDVI: 1) using pixel values of NDVI within feature/object boundaries and 2) first calculating the mean red and mean near-infrared across all feature pixels and then calculating NDVI. We explore the nature and magnitude of these differences for images taken from two sensors, a 1.24 m resolution WorldView-3 and a 0.1 m resolution digital aerial image. We apply these methods over an urban park located in the Adelaide Parklands of South Australia. We demonstrate that the MAUP is not an issue for calculation of NDVI within a sensor for pure urban vegetation pixels. This may prove useful for future rule-based monitoring of the ecosystem functioning of green infrastructure.
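
    The two ways of computing a mean NDVI compared in the study can be summarized in a few lines; the sketch below assumes reflectance arrays and a boolean mask of pure vegetation pixels:

    ```python
    import numpy as np

    def ndvi_two_ways(red, nir, mask):
        """Compare (1) the mean of per-pixel NDVI values with (2) NDVI computed
        from the mean red and mean NIR over the same feature mask."""
        r = red[mask].astype(float)
        n = nir[mask].astype(float)
        ndvi_per_pixel = (n - r) / (n + r)
        method1 = ndvi_per_pixel.mean()                           # mean of NDVI
        method2 = (n.mean() - r.mean()) / (n.mean() + r.mean())   # NDVI of means
        return method1, method2
    ```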

  17. A novel weighted-direction color interpolation

    NASA Astrophysics Data System (ADS)

    Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng

    2013-08-01

    A digital camera captures images by covering the sensor surface with a color filter array (CFA), so only one color sample is obtained at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution color image. In this paper, a new algorithm based on edge adaptation and different weighting factors is proposed. Our method effectively suppresses undesirable artifacts. Experimental results on Kodak images show that the proposed algorithm obtains higher quality images than other methods, both numerically and visually.
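
    As a generic illustration of edge-adaptive, gradient-weighted interpolation (not the authors' specific algorithm), the following estimates the missing green value at a red or blue site of a Bayer CFA:

    ```python
    def green_at_red_blue(cfa, y, x):
        """cfa: 2D float array holding the Bayer mosaic; (y, x) is a red or blue
        site not on the image border. The horizontal and vertical neighbor
        averages are weighted by the inverse of the local gradient, so the
        smoother direction dominates."""
        gh = abs(cfa[y, x - 1] - cfa[y, x + 1])        # horizontal gradient
        gv = abs(cfa[y - 1, x] - cfa[y + 1, x])        # vertical gradient
        wh, wv = 1.0 / (1.0 + gh), 1.0 / (1.0 + gv)    # direction weights
        est_h = 0.5 * (cfa[y, x - 1] + cfa[y, x + 1])
        est_v = 0.5 * (cfa[y - 1, x] + cfa[y + 1, x])
        return (wh * est_h + wv * est_v) / (wh + wv)
    ```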

  18. Noise Reduction Effect of Multiple-Sampling-Based Signal-Readout Circuits for Ultra-Low Noise CMOS Image Sensors.

    PubMed

    Kawahito, Shoji; Seo, Min-Woong

    2016-11-06

    This paper discusses the noise reduction effect of multiple-sampling-based signal readout circuits for implementing ultra-low-noise image sensors. The correlated multiple sampling (CMS) technique has recently become an important technology for high-gain column readout circuits in low-noise CMOS image sensors (CISs). This paper reveals how the column CMS circuits, together with a pixel having a high-conversion-gain charge detector and low-noise transistor, realize deep sub-electron read noise levels based on the analysis of noise components in the signal readout chain from a pixel to the column analog-to-digital converter (ADC). The noise measurement results of experimental CISs are compared with the noise analysis and the dependence of the noise reduction on the sampling number is discussed at the deep sub-electron level. Images taken with three CMS gains of two, 16, and 128 show a distinct advantage in image contrast for the gain of 128 (noise(median): 0.29 e− rms) when compared with the CMS gain of two (2.4 e− rms) or 16 (1.1 e− rms).
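
    A rough illustration of the scaling behavior discussed above, assuming the thermal (white) noise component averages down as 1/sqrt(M) with the CMS sampling number M while the 1/f component saturates; the values are purely illustrative, and the paper derives the exact component-wise expressions:

    ```python
    import numpy as np

    def cms_read_noise(thermal_noise_e, flicker_floor_e, n_samples):
        """Input-referred read noise [e- rms] versus CMS sampling number."""
        thermal = thermal_noise_e / np.sqrt(n_samples)   # white noise averages down
        return np.sqrt(thermal ** 2 + flicker_floor_e ** 2)

    for m in (2, 16, 128):
        print(m, round(cms_read_noise(3.0, 0.25, m), 2))
    ```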

  19. First evidence of phase-contrast imaging with laboratory sources and active pixel sensors

    NASA Astrophysics Data System (ADS)

    Olivo, A.; Arvanitis, C. D.; Bohndiek, S. E.; Clark, A. T.; Prydderch, M.; Turchetta, R.; Speller, R. D.

    2007-11-01

    The aim of the present work is to achieve a first step towards combining the advantages of an innovative X-ray imaging technique—phase-contrast imaging (XPCi)—with those of a new class of sensors, i.e. CMOS-based active pixel sensors (APSs). The advantages of XPCi are well known and include increased image quality and detection of details invisible to conventional techniques, with potential application fields encompassing the medical, biological, industrial and security areas. Vanilla, one of the APSs developed by the MI-3 collaboration (see http://mi3.shef.ac.uk), was thoroughly characterised and an appropriate scintillator was selected to provide X-ray sensitivity. During this process, a set of phase-contrast images of different biological samples was acquired by means of the well-established free-space propagation XPCi technique. The obtained results are very encouraging and are in excellent agreement with the predictions of a simulation recently developed by some of the authors, further supporting its reliability. This paper presents these preliminary results in detail and discusses in brief both the background to this work and its future developments.

  20. Noise Reduction Effect of Multiple-Sampling-Based Signal-Readout Circuits for Ultra-Low Noise CMOS Image Sensors

    PubMed Central

    Kawahito, Shoji; Seo, Min-Woong

    2016-01-01

    This paper discusses the noise reduction effect of multiple-sampling-based signal readout circuits for implementing ultra-low-noise image sensors. The correlated multiple sampling (CMS) technique has recently become an important technology for high-gain column readout circuits in low-noise CMOS image sensors (CISs). This paper reveals how the column CMS circuits, together with a pixel having a high-conversion-gain charge detector and low-noise transistor, realize deep sub-electron read noise levels based on the analysis of noise components in the signal readout chain from a pixel to the column analog-to-digital converter (ADC). The noise measurement results of experimental CISs are compared with the noise analysis and the dependence of the noise reduction on the sampling number is discussed at the deep sub-electron level. Images taken with three CMS gains of two, 16, and 128 show a distinct advantage in image contrast for the gain of 128 (noise(median): 0.29 e−rms) when compared with the CMS gain of two (2.4 e−rms) or 16 (1.1 e−rms). PMID:27827972

  1. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, for example from low-cost global positioning system and inertial measurement unit sensors.

  2. A Spectralon BRF Data Base for MISR Calibration Application

    NASA Technical Reports Server (NTRS)

    Bruegge, C.; Chrien, N.; Haner, D.

    1999-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) is an Earth observing sensor which will provide global retrievals of aerosols, clouds, and land surface parameters. Instrument specifications require high accuracy absolute calibration, as well as accurate camera-to-camera, band-to-band and pixel-to-pixel relative response determinations.

  3. Rigorous Photogrammetric Processing of CHANG'E-1 and CHANG'E-2 Stereo Imagery for Lunar Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Di, K.; Liu, Y.; Liu, B.; Peng, M.

    2012-07-01

    Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinate of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.

  4. The implementation of CMOS sensors within a real time digital mammography intelligent imaging system: The I-ImaS System

    NASA Astrophysics Data System (ADS)

    Esbrand, C.; Royle, G.; Griffiths, J.; Speller, R.

    2009-07-01

    The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty-first century. The concept of digital imaging introduced during the 1970s has since paved the way for established imaging techniques, of which digital mammography, phase contrast imaging and CT imaging are just a few examples. This paper presents a prototype intelligent digital mammography system designed and developed by a European consortium. The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology with on-chip data processing, enabling data processing and image acquisition to be performed simultaneously; consequently, statistical analysis of tissue is achievable in real time for the purpose of x-ray beam modulation via a feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel × 40 pixel CMOS MAPS devices with a 32 μm pixel size, each individually coupled to a 100 μm thick thallium-doped structured CsI scintillator. This paper presents the first intelligent images of real excised breast tissue obtained with the prototype system, in which the x-ray exposure was modulated via the statistical information extracted from the breast tissue itself. Conventional images were acquired experimentally and the statistical analysis of the data was performed off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation using statistical information extracted from the breast as a feedback mechanism is beneficial and foreseeable in the near future.

  5. Optimizing Floating Guard Ring Designs for FASPAX N-in-P Silicon Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Kyung-Wook; Bradford, Robert; Lipton, Ronald

    2016-10-06

    FASPAX (Fermi-Argonne Semiconducting Pixel Array X-ray detector) is being developed as a fast integrating area detector with wide dynamic range for time-resolved applications at the upgraded Advanced Photon Source (APS). A burst-mode detector with an intended 13 MHz image rate, FASPAX will also incorporate a novel integration circuit to achieve wide dynamic range, from single-photon sensitivity to 10⁵ x-rays/pixel/pulse. To achieve these ambitious goals, a novel silicon sensor design is required. This paper will detail the early design of the FASPAX sensor. Results from TCAD optimization studies and characterization of prototype sensors will be presented.

  6. Integration of nanostructured planar diffractive lenses dedicated to near infrared detection for CMOS image sensors.

    PubMed

    Lopez, Thomas; Massenot, Sébastien; Estribeau, Magali; Magnan, Pierre; Pardo, Fabrice; Pelouard, Jean-Luc

    2016-04-18

    This paper deals with the integration of metallic and dielectric nanostructured planar lenses into a pixel of a silicon-based CMOS image sensor, for a monochromatic application at 1.064 μm. The first is a Plasmonic Lens, based on the phase delay through nanoslits, which has been found to be hardly compatible with current CMOS technology and exhibits notable metallic absorption. The second is a dielectric Phase-Fresnel Lens integrated at the top of a pixel; it exhibits an Optical Efficiency (OE) improved by a few percent and an angle of view of 50°. The third is a metallic diffractive lens integrated inside a pixel, which shows a better OE and an angle of view of 24°. The last two lenses are compatible with a spectral band close to 1.064 μm.

  7. High-speed particle tracking in microscopy using SPAD image sensors

    NASA Astrophysics Data System (ADS)

    Gyongy, Istvan; Davies, Amy; Miguelez Crespo, Allende; Green, Andrew; Dutton, Neale A. W.; Duncan, Rory R.; Rickman, Colin; Henderson, Robert K.; Dalgarno, Paul A.

    2018-02-01

    Single photon avalanche diodes (SPADs) are used in a wide range of applications, from fluorescence lifetime imaging microscopy (FLIM) to time-of-flight (ToF) 3D imaging. SPAD arrays are becoming increasingly established, combining the unique properties of SPADs with widefield camera configurations. Traditionally, the photosensitive area (fill factor) of SPAD arrays has been limited by the in-pixel digital electronics. However, recent designs have demonstrated that by replacing the complex digital pixel logic with simple binary pixels and external frame summation, the fill factor can be increased considerably. A significant advantage of such binary SPAD arrays is the high frame rates offered by the sensors (>100kFPS), which opens up new possibilities for capturing ultra-fast temporal dynamics in, for example, life science cellular imaging. In this work we consider the use of novel binary SPAD arrays in high-speed particle tracking in microscopy. We demonstrate the tracking of fluorescent microspheres undergoing Brownian motion, and in intra-cellular vesicle dynamics, at high frame rates. We thereby show how binary SPAD arrays can offer an important advance in live cell imaging in such fields as intercellular communication, cell trafficking and cell signaling.

  8. Synchrotron based planar imaging and digital tomosynthesis of breast and biopsy phantoms using a CMOS active pixel sensor.

    PubMed

    Szafraniec, Magdalena B; Konstantinidis, Anastasios C; Tromba, Giuliana; Dreossi, Diego; Vecchio, Sara; Rigon, Luigi; Sodini, Nicola; Naday, Steve; Gunn, Spencer; McArthur, Alan; Olivo, Alessandro

    2015-03-01

    The SYRMEP (SYnchrotron Radiation for MEdical Physics) beamline at Elettra is performing the first mammography study on human patients using free-space propagation phase contrast imaging. The stricter spatial resolution requirements of this method currently force the use of conventional films or specialized computed radiography (CR) systems. This also prevents the implementation of three-dimensional (3D) approaches. This paper explores the use of an X-ray detector based on complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology as a possible alternative, for acquisitions both in planar and tomosynthesis geometry. Results indicate higher quality of the images acquired with the synchrotron set-up in both geometries. This improvement can be partly ascribed to the use of parallel, collimated and monochromatic synchrotron radiation (resulting in scatter rejection, no penumbra-induced blurring and optimized X-ray energy), and partly to phase contrast effects. Even though the pixel size of the used detector is still too large - and thus suboptimal - for free-space propagation phase contrast imaging, a degree of phase-induced edge enhancement can clearly be observed in the images. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  9. The Dosepix detector—an energy-resolving photon-counting pixel detector for spectrometric measurements

    NASA Astrophysics Data System (ADS)

    Zang, A.; Anton, G.; Ballabriga, R.; Bisello, F.; Campbell, M.; Celi, J. C.; Fauler, A.; Fiederle, M.; Jensch, M.; Kochanski, N.; Llopart, X.; Michel, N.; Mollenhauer, U.; Ritter, I.; Tennert, F.; Wölfel, S.; Wong, W.; Michel, T.

    2015-04-01

    The Dosepix detector is a hybrid photon-counting pixel detector based on ideas of the Medipix and Timepix detector family. Cadmium telluride 1 mm thick and silicon 300 μm thick were used as sensor materials. The pixel matrix of the Dosepix consists of 16 × 16 square pixels, with 12 rows of (200 μm)² and 4 rows of (55 μm)² sensitive area for the silicon sensor layer and 16 rows of pixels with 220 μm pixel pitch for CdTe. Besides digital energy integration and photon-counting mode, a novel concept of energy binning is included in the pixel electronics, allowing energy-resolved measurements in 16 energy bins within one acquisition. The possibilities of this detector concept range from applications in personal dosimetry and energy-resolved imaging to quality assurance of medical X-ray sources by analysis of the emitted photon spectrum. In this contribution the Dosepix detector, its response to X-rays, and spectrum measurements with Si and CdTe sensor layers are presented. Furthermore, a first evaluation was carried out to use the Dosepix detector as a kVp meter, that is, to determine the applied acceleration voltage from measured X-ray tube spectra.

  10. Broadband Terahertz Computed Tomography Using a 5k-pixel Real-time THz Camera

    NASA Astrophysics Data System (ADS)

    Trichopoulos, Georgios C.; Sertel, Kubilay

    2015-07-01

    We present a novel THz computed tomography system that enables fast 3-dimensional imaging and spectroscopy in the 0.6-1.2 THz band. The system is based on a new real-time broadband THz camera that enables rapid acquisition of multiple cross-sectional images required in computed tomography. Tomographic reconstruction is achieved using digital images from the densely-packed large-format (80×64) focal plane array sensor located behind a hyper-hemispherical silicon lens. Each pixel of the sensor array consists of an 85 μm × 92 μm lithographically fabricated wideband dual-slot antenna, monolithically integrated with an ultra-fast diode tuned to operate in the 0.6-1.2 THz regime. Concurrently, optimum impedance matching was implemented for maximum pixel sensitivity, enabling 5 frames-per-second image acquisition speed. As such, the THz computed tomography system generates diffraction-limited resolution cross-section images as well as the three-dimensional models of various opaque and partially transparent objects. As an example, an over-the-counter vitamin supplement pill is imaged and its material composition is reconstructed. The new THz camera enables, for the first time, a practical application of THz computed tomography for non-destructive evaluation and biomedical imaging.

  11. Room temperature 1040fps, 1 megapixel photon-counting image sensor with 1.1um pixel pitch

    NASA Astrophysics Data System (ADS)

    Masoodian, S.; Ma, J.; Starkey, D.; Wang, T. J.; Yamashita, Y.; Fossum, E. R.

    2017-05-01

    A 1Mjot single-bit quanta image sensor (QIS) implemented in a stacked backside-illuminated (BSI) process is presented. This is the first work to report a megapixel photon-counting CMOS-type image sensor to the best of our knowledge. A QIS with 1.1μm pitch tapered-pump-gate jots is implemented with cluster-parallel readout, where each cluster of jots is associated with its own dedicated readout electronics stacked under the cluster. Power dissipation is reduced with this cluster readout because of the reduced column bus parasitic capacitance, which is important for the development of 1Gjot arrays. The QIS functions at 1040fps with binary readout and dissipates only 17.6mW, including I/O pads. The readout signal chain uses a fully differential charge-transfer amplifier (CTA) gain stage before a 1b-ADC to achieve an energy/bit FOM of 16.1pJ/b and 6.9pJ/b for the whole sensor and gain stage+ADC, respectively. Analog outputs with on-chip gain are implemented for pixel characterization purposes.

  12. Ionizing radiation effects on CMOS imagers manufactured in deep submicron process

    NASA Astrophysics Data System (ADS)

    Goiffon, Vincent; Magnan, Pierre; Bernard, Frédéric; Rolland, Guy; Saint-Pé, Olivier; Huger, Nicolas; Corbière, Franck

    2008-02-01

    We present here a study on both CMOS sensors and elementary structures (photodiodes and in-pixel MOSFETs) manufactured in a deep submicron process dedicated to imaging. We designed a test chip made of one 128×128-3T-pixel array with 10 μm pitch and more than 120 isolated test structures including photodiodes and MOSFETs with various implants and different sizes. All these devices were exposed to ionizing radiation up to 100 krad and their responses were correlated to identify the CMOS sensor weaknesses. Characterizations in darkness and under illumination demonstrated that dark current increase is the major sensor degradation. Shallow trench isolation was identified to be responsible for this degradation as it increases the number of generation centers in photodiode depletion regions. Consequences on hardness assurance and hardening-by-design are discussed.

  13. 1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.

    PubMed

    Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi

    2015-04-01

    Optical flow sensors have been a long running theme in neuromorphic vision sensors which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed towards miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels that have local gain control and adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.

  14. Geometric correction methods for Timepix based large area detectors

    NASA Astrophysics Data System (ADS)

    Zemlicka, J.; Dudak, J.; Karch, J.; Krejci, F.

    2017-01-01

    X-ray micro-radiography with hybrid pixel detectors provides a versatile tool for object inspection in various fields of science. It has proven especially suitable for samples with low intrinsic attenuation contrast (e.g. soft tissue in biology, plastics in materials science, thin paint layers in cultural heritage, etc.). The limited size of a single Medipix-type detector (1.96 cm²) was recently overcome by the construction of the WidePIX large-area detectors, assembled from Timepix chips equipped with edgeless silicon sensors. The largest device built so far consists of 100 chips and provides a fully sensitive area of 14.3 × 14.3 cm² without any physical gaps between sensors. The pixel resolution of this device is 2560 × 2560 pixels (6.5 Mpix). The unique modular detector layout requires special processing of the acquired data to avoid image distortions. Several geometric compensations must be applied after the standard correction methods typical for this type of pixel detector (i.e. flat-field and beam-hardening correction). The proposed geometric compensations cover both concept features and detector assembly misalignment of individual chip rows of large-area detectors based on Timepix assemblies. The former deals with the larger border pixels of individual edgeless sensors and their behaviour, while the latter grapples with shifts, tilts and steps between detector rows. The real position of every pixel is defined in a Cartesian coordinate system and, together with a non-binary reliability mask, is used for the final image interpolation. The results of geometric corrections for test wire phantoms and paleobotanic material are presented in this article.

  15. Scanning Microscopes Using X Rays and Microchannels

    NASA Technical Reports Server (NTRS)

    Wang, Yu

    2003-01-01

    Scanning microscopes that would be based on microchannel filters and advanced electronic image sensors and that utilize x-ray illumination have been proposed. Because the finest resolution attainable in a microscope is determined by the wavelength of the illumination, the x-ray illumination in the proposed microscopes would make it possible, in principle, to achieve resolutions of the order of nanometers, about a thousand times as fine as the resolution of a visible-light microscope. Heretofore, it has been necessary to use scanning electron microscopes to obtain such fine resolution. In comparison with scanning electron microscopes, the proposed microscopes would likely be smaller, less massive, and less expensive. Moreover, unlike in scanning electron microscopes, it would not be necessary to place specimens under vacuum. The proposed microscopes are closely related to the ones described in several prior NASA Tech Briefs articles; namely, Miniature Microscope Without Lenses (NPO-20218), NASA Tech Briefs, Vol. 22, No. 8 (August 1998), page 43; and Reflective Variants of Miniature Microscope Without Lenses (NPO-20610), NASA Tech Briefs, Vol. 26, No. 9 (September 2002), page 6a. In all of these microscopes, the basic principle of design and operation is the same: the focusing optics of a conventional visible-light microscope are replaced by a combination of a microchannel filter and a charge-coupled-device (CCD) image detector. A microchannel plate containing parallel, microscopic-cross-section holes much longer than they are wide is placed between a specimen and an image sensor, which is typically the CCD. The microchannel plate must be made of a material that absorbs the illuminating radiation reflected or scattered from the specimen. The microchannels must be positioned and dimensioned so that each one is registered with a pixel on the image sensor. Because most of the radiation incident on the microchannel walls becomes absorbed, the radiation that reaches the image sensor consists predominantly of radiation that was launched along the longitudinal direction of the microchannels. Therefore, most of the radiation arriving at each pixel on the sensor must have traveled along a straight line from a corresponding location on the specimen. Thus, there is a one-to-one mapping from a point on a specimen to a pixel in the image sensor, so that the output of the image sensor contains image information equivalent to that from a microscope.

  16. Monitoring the long term stability of the IRS-P6 AWiFS sensor using the Sonoran and RVPN sites

    NASA Astrophysics Data System (ADS)

    Chander, Gyanesh; Sampath, Aparajithan; Angal, Amit; Choi, Taeyoung; Xiong, Xiaoxiong

    2010-10-01

    This paper focuses on the radiometric and geometric assessment of the Indian Remote Sensing (IRS-P6) Advanced Wide Field Sensor (AWiFS) using the Sonoran desert and Railroad Valley Playa, Nevada (RVPN) ground sites. Image-to-Image (I2I) accuracy and relative band-to-band (B2B) accuracy were measured. I2I accuracy of the AWiFS imagery was assessed by measuring the imagery against the Landsat Global Land Survey (GLS) 2000. The AWiFS images were typically registered to within one pixel of the GLS 2000 mosaic images. The B2B process used the same concepts as the I2I, except that, instead of a reference image and a search image, the individual bands of a multispectral image are tested against each other. The B2B results showed that all the AWiFS multispectral bands are registered to sub-pixel accuracy. Using the limited number of scenes available over these ground sites, the reflective bands of the AWiFS sensor indicate a long-term drift in the top-of-atmosphere (TOA) reflectance. Because of the limited availability of AWiFS scenes over these ground sites, a comprehensive evaluation of the radiometric stability using these sites is not possible. In order to overcome this limitation, a cross-comparison between AWiFS and Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) was performed using image statistics based on large common areas observed by the two sensors within 30 minutes of each other. Regression curves and coefficients of determination for the TOA trends from these sensors were generated to quantify the uncertainty in these relationships and to provide an assessment of the calibration differences between these sensors.

  17. Increasing Linear Dynamic Range of a CMOS Image Sensor

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2007-01-01

    A generic design and a corresponding operating sequence have been developed for increasing the linear-response dynamic range of a complementary metal oxide/semiconductor (CMOS) image sensor. The design provides for linear calibrated dual-gain pixels that operate at high gain at a low signal level and at low gain at a signal level above a preset threshold. Unlike most prior designs for increasing the dynamic range of an image sensor, this design does not entail any increase in noise (including fixed-pattern noise), decrease in responsivity or linearity, or degradation of photometric calibration. The figure is a simplified schematic diagram showing the circuit of one pixel and pertinent parts of its column readout circuitry. The conventional part of the pixel circuit includes a photodiode having a small capacitance, CD. The unconventional part includes an additional larger capacitance, CL, that can be connected to the photodiode via a transfer gate controlled in part by a latch. In the high-gain mode, the signal labeled TSR in the figure is held low through the latch, which also helps to adapt the gain on a pixel-by-pixel basis. Light must be coupled to the pixel through a microlens or by back illumination in order to obtain a high effective fill factor; this is necessary to ensure high quantum efficiency, a loss of which would minimize the efficacy of the dynamic-range-enhancement scheme. Once the level of illumination of the pixel exceeds the threshold, TSR is turned on, causing the transfer gate to conduct, thereby adding CL to the pixel capacitance. The added capacitance reduces the conversion gain, and increases the pixel electron-handling capacity, thereby providing an extension of the dynamic range. By use of an array of comparators at the bottom of the column, photocharge voltages on sampling capacitors in each column are compared with a reference voltage to determine whether it is necessary to switch from the high-gain to the low-gain mode. Depending upon the built-in offset in each pixel and in each comparator, the point at which the gain change occurs will be different, adding gain-dependent fixed pattern noise in each pixel. The offset, and hence the fixed pattern noise, is eliminated by sampling the pixel readout charge four times by use of four capacitors (instead of two such capacitors as in a conventional design) connected to the bottom of the column via electronic switches SHS1, SHR1, SHS2, and SHR2, respectively, corresponding to high and low values of the signals TSR and RST. The samples are combined in an appropriate fashion to cancel offset-induced errors, and provide spurious-free imaging with extended dynamic range.
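
    The role of the four column samples can be illustrated with a small numerical sketch. The combination rule below (correlated double sampling per gain, then selecting the low-gain branch and rescaling it by the capacitance ratio) is only one plausible reading of the scheme, not the circuit's actual signal chain, and all values are hypothetical.

    ```python
    def reconstruct_dual_gain(shs1, shr1, shs2, shr2, gain_ratio, threshold):
        """Combine the four column samples of a dual-gain pixel into one value.

        shs1, shr1 : signal and reset samples taken in the high-gain mode
        shs2, shr2 : signal and reset samples taken in the low-gain mode
        gain_ratio : (CD + CL) / CD, the conversion-gain ratio between the modes
        threshold  : high-gain signal level above which the low-gain sample is used
        """
        high = shs1 - shr1          # correlated double sampling cancels the offsets
        low = shs2 - shr2
        if high < threshold:        # pixel stayed in the high-gain regime
            return high
        return low * gain_ratio     # rescale the low-gain signal onto the same axis

    # Hypothetical bright pixel read out with an 8x capacitance ratio.
    print(reconstruct_dual_gain(shs1=0.95, shr1=0.05, shs2=0.30, shr2=0.05,
                                gain_ratio=8.0, threshold=0.8))
    ```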

  18. Report on recent results of the PERCIVAL soft X-ray imager

    NASA Astrophysics Data System (ADS)

    Khromova, A.; Cautero, G.; Giuressi, D.; Menk, R.; Pinaroli, G.; Stebel, L.; Correa, J.; Marras, A.; Wunderer, C. B.; Lange, S.; Tennert, M.; Niemann, M.; Hirsemann, H.; Smoljanin, S.; Reza, S.; Graafsma, H.; Göttlicher, P.; Shevyakov, I.; Supra, J.; Xia, Q.; Zimmer, M.; Guerrini, N.; Marsh, B.; Sedgwick, I.; Nicholls, T.; Turchetta, R.; Pedersen, U.; Tartoni, N.; Hyun, H. J.; Kim, K. S.; Rah, S. Y.; Hoenk, M. E.; Jewell, A. D.; Jones, T. J.; Nikzad, S.

    2016-11-01

    The PERCIVAL (Pixelated Energy Resolving CMOS Imager, Versatile And Large) soft X-ray 2D imaging detector is based on stitched, wafer-scale sensors possessing a thick epi-layer, which together with back-thinning and back-side illumination yields elevated quantum efficiency in the photon energy range of 125-1000 eV. The main application fields of PERCIVAL are foreseen in photon science with FELs and synchrotron radiation. This requires a high dynamic range up to 10^5 ph @ 250 eV paired with single photon sensitivity with high confidence at moderate frame rates in the range of 10-120 Hz. These figures imply the availability of dynamic gain switching on a pixel-by-pixel basis and a highly parallel, low noise analog and digital readout, which has been realized in the PERCIVAL sensor layout. Different aspects of the detector performance have been assessed using prototype sensors with different pixel and ADC types. This work will report on the recent test results performed on the newest chip prototypes with the improved pixel and ADC architecture. For the target frame rates in the 10-120 Hz range an average noise floor of 14 e- has been determined, indicating the ability to detect single photons with energies above 250 eV. Owing to the successfully implemented adaptive 3-stage multiple-gain switching, the integrated charge level exceeds 4 · 10^6 e- or 57000 X-ray photons at 250 eV per frame at 120 Hz. For all gains the noise level remains below the Poisson limit also in high-flux conditions. Additionally, a short overview of the updates on the upcoming 2-Mpixel (P2M) detector system (expected at the end of 2016) will be reported.
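
    A back-of-the-envelope check of the single-photon claim, assuming the usual ~3.65 eV per electron-hole pair in silicon (an assumption, not a number taken from the paper):

    ```python
    ev_per_electron = 3.65          # approximate energy per electron-hole pair in Si
    noise_e = 14.0                  # reported average noise floor (e- rms)

    for photon_ev in (250, 500, 1000):
        signal_e = photon_ev / ev_per_electron
        print(f"{photon_ev:5d} eV photon -> {signal_e:5.0f} e-, SNR ~ {signal_e / noise_e:4.1f}")
    ```

    Dividing the quoted full-well of 4 · 10^6 e- by the roughly 69 e- generated per 250 eV photon gives a value consistent with the roughly 57 000 photons per frame stated above.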

  19. International Symposium on Applications of Ferroelectrics

    DTIC Science & Technology

    1993-02-01

    ...neighborhood of the Curie point. A high dielectric constant is also useful in many imaging applications... The technology of producing monolithic IR detectors using a linear array of sensors... Each detector (pixel), or group of them, can thus produce a signal... Work on new infrared (IR) sensors is at present... recorded. The signal beam was expanded to 15 mm to carry images and was then... a certain input image (or a partial one) is illuminated only with the...

  20. The Effects of Radiation on Imagery Sensors in Space

    NASA Technical Reports Server (NTRS)

    Mathis, Dylan

    2007-01-01

    Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex causal and mediating factor mix. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue considering the continued diffusion of high definition video cameras in the marketplace.

  1. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensors: an active trinocular vision system and a passive stereo vision system. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of robot configurations and environments. The performance of the sensor system is discussed in detail.

  2. Detection systems for mass spectrometry imaging: a perspective on novel developments with a focus on active pixel detectors.

    PubMed

    Jungmann, Julia H; Heeren, Ron M A

    2013-01-15

    Instrumental developments for imaging and individual particle detection for biomolecular mass spectrometry (imaging) and fundamental atomic and molecular physics studies are reviewed. Ion-counting detectors, array detection systems and high mass detectors for mass spectrometry (imaging) are treated. State-of-the-art detection systems for multi-dimensional ion, electron and photon detection are highlighted. Their application and performance in three different imaging modes--integrated, selected and spectral image detection--are described. Electro-optical and microchannel-plate-based systems are contrasted. The analytical capabilities of solid-state pixel detectors--both charge coupled device (CCD) and complementary metal oxide semiconductor (CMOS) chips--are introduced. The Medipix/Timepix detector family is described as an example of a CMOS hybrid active pixel sensor. Alternative imaging methods for particle detection and their potential for future applications are investigated. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Classification of Hyperspectral or Trichromatic Measurements of Ocean Color Data into Spectral Classes.

    PubMed

    Prasad, Dilip K; Agarwal, Krishna

    2016-03-22

    We propose a method for classifying radiometric oceanic color data measured by hyperspectral satellite sensors into known spectral classes, irrespective of the downwelling irradiance of the particular day, i.e., the illumination conditions. The focus is not on retrieving the inherent optical properties but to classify the pixels according to the known spectral classes of the reflectances from the ocean. The method compensates for the unknown downwelling irradiance by white balancing the radiometric data at the ocean pixels using the radiometric data of bright pixels (typically from clouds). The white-balanced data is compared with the entries in a pre-calibrated lookup table in which each entry represents the spectral properties of one class. The proposed approach is tested on two datasets of in situ measurements and 26 different daylight illumination spectra for medium resolution imaging spectrometer (MERIS), moderate-resolution imaging spectroradiometer (MODIS), sea-viewing wide field-of-view sensor (SeaWiFS), coastal zone color scanner (CZCS), ocean and land colour instrument (OLCI), and visible infrared imaging radiometer suite (VIIRS) sensors. Results are also shown for CIMEL's SeaPRISM sun photometer sensor used on-board field trips. Accuracy of more than 92% is observed on the validation dataset and more than 86% is observed on the other dataset for all satellite sensors. The potential of applying the algorithms to non-satellite and non-multi-spectral sensors mountable on airborne systems is demonstrated by showing classification results for two consumer cameras. Classification on actual MERIS data is also shown. Additional results comparing the spectra of remote sensing reflectance with level 2 MERIS data and chlorophyll concentration estimates of the data are included.
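
    A minimal sketch of the classification idea described above, assuming the white-balanced spectra are matched to the lookup table by spectral similarity (cosine of the spectral angle); the array shapes, the similarity measure and the variable names are assumptions, and the actual table calibration is described in the paper.

    ```python
    import numpy as np

    def classify_ocean_pixels(radiance, bright_pixel, class_table):
        """Assign each ocean pixel to the closest pre-calibrated spectral class.

        radiance     : (n_pixels, n_bands) radiometric data at the ocean pixels
        bright_pixel : (n_bands,) radiance of a bright (e.g. cloud) pixel used as
                       a proxy for the unknown downwelling irradiance
        class_table  : (n_classes, n_bands) lookup table of class spectra
        """
        # White balance: divide out the illumination estimated from the bright pixel.
        balanced = radiance / np.maximum(bright_pixel, 1e-12)
        # Compare spectral shapes only, so the overall magnitude does not matter.
        balanced = balanced / np.linalg.norm(balanced, axis=1, keepdims=True)
        table = class_table / np.linalg.norm(class_table, axis=1, keepdims=True)
        scores = balanced @ table.T            # cosine similarity to every class
        return np.argmax(scores, axis=1)       # index of the best-matching class
    ```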

  4. ASTER's First Views of Red Sea, Ethiopia - Thermal-Infrared (TIR) Image (monochrome)

    NASA Technical Reports Server (NTRS)

    2000-01-01

    ASTER succeeded in acquiring this image at night, which is something Visible/Near Infrared (VNIR) and Shortwave Infrared (SWIR) sensors cannot do. The scene covers the Red Sea coastline to an inland area of Ethiopia. White pixels represent areas with higher temperature material on the surface, while dark pixels indicate lower temperatures. This image shows ASTER's ability as a highly sensitive, temperature-discerning instrument and the first spaceborne TIR multi-band sensor in history.

    Image size: approximately 60 km x 60 km; ground resolution: approximately 90 m x 90 m.

    The ASTER instrument was built in Japan for the Ministry of International Trade and Industry. A joint United States/Japan Science Team is responsible for instrument design, calibration, and data validation. ASTER is flying on the Terra satellite, which is managed by NASA's Goddard Space Flight Center, Greenbelt, MD.

  5. CMOS active pixel sensor type imaging system on a chip

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)

    2011-01-01

    A single chip camera which includes an integrated image acquisition portion and control portion and which has double sampling/noise reduction capabilities thereon. Part of the integrated structure reduces the noise that is picked up during imaging.

  6. CMOS minimal array

    NASA Astrophysics Data System (ADS)

    Janesick, James; Cheng, John; Bishop, Jeanne; Andrews, James T.; Tower, John; Walker, Jeff; Grygon, Mark; Elliot, Tom

    2006-08-01

    A high performance prototype CMOS imager is introduced. Test data is reviewed for different array formats that utilize 3T photodiode, 5T pinned photodiode and 6T photogate CMOS pixel architectures. The imager allows several readout modes including progressive scan, snap and windowed operation. The new imager is built on different silicon substrates including very high resistivity epitaxial wafers for deep depletion operation. Data products contained in this paper focus on the sensor's read noise, charge capacity, charge transfer efficiency, thermal dark current, RTS dark spikes, QE, pixel cross-talk and on-chip analog circuitry performance.

  7. Development and test of an active pixel sensor detector for heliospheric imager on solar orbiter and solar probe plus

    NASA Astrophysics Data System (ADS)

    Korendyke, Clarence M.; Vourlidas, Angelos; Plunkett, Simon P.; Howard, Russell A.; Wang, Dennis; Marshall, Cheryl J.; Waczynski, Augustyn; Janesick, James J.; Elliott, Thomas; Tun, Samuel; Tower, John; Grygon, Mark; Keller, David; Clifford, Gregory E.

    2013-10-01

    The Naval Research Laboratory is developing next-generation CMOS imaging arrays for the Solar Orbiter and Solar Probe Plus missions. The device development is nearly complete, with flight device delivery scheduled for the summer of 2013. The 4K x 4K mosaic array with 10-micron pixels is well suited to the panoramic imaging required for the Solar Orbiter mission. The devices are robust (<100 krad) and exhibit minimal performance degradation with respect to radiation. The device design and performance are described.

  8. It's not the pixel count, you fool

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2012-01-01

    The first thing a "marketing guy" asks the digital camera engineer is "How many pixels does it have? We need as many megapixels as possible, since the other guys are killing us with their 'umpteen'-megapixel pocket-sized digital cameras." And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is the automatic motion control to stabilize the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase after-sales profits, and don't be surprised if it was done long ago in some basement lab of a photographic company and, of course, before its time.

  9. The Physics of Imaging with Remote Sensors : Photon State Space & Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Davis, Anthony B.

    2012-01-01

    Standard (mono-pixel/steady-source) retrieval methodology is reaching its fundamental limit with access to multi-angle/multi-spectral photo-polarimetry. Two emerging classes of retrieval algorithms are worth nurturing: multi-pixel and time-domain methods, wave-radiometry transition regimes, and cross-fertilization with bio-medical imaging. Physics-based remote sensing raises the questions: What is "photon state space"? What is "radiative transfer"? Is "the end" in sight? These two wide-open frontiers are illustrated with examples and variations.

  10. The Dram As An X-Ray Sensor

    NASA Astrophysics Data System (ADS)

    Jacobs, Alan M.; Cox, John D.; Juang, Yi-Shung

    1987-01-01

    A solid-state digital x-ray detector is described which can replace high resolution film in industrial radiography and has potential for application in some medical imaging. Because of the 10 micron pixel pitch on the sensor, contact magnification radiology is possible and is demonstrated. Methods for frame speed increase and integration of sensor to a large format are discussed.

  11. Proof of principle study of the use of a CMOS active pixel sensor for proton radiography.

    PubMed

    Seco, Joao; Depauw, Nicolas

    2011-02-01

    Proof of principle study of the use of a CMOS active pixel sensor (APS) in producing proton radiographic images using the proton beam at the Massachusetts General Hospital (MGH). A CMOS APS, previously tested for use in x-ray radiation therapy applications, was used for proton beam radiographic imaging at the MGH. Two different setups were used as a proof of principle that CMOS can be used as a proton imaging device: (i) a pen with two metal screws to assess spatial resolution of the CMOS and (ii) a phantom with lung tissue, bone tissue, and water to assess tissue contrast of the CMOS. The sensor was then traversed by a double-scattered monoenergetic proton beam at 117 MeV, and the energy deposition inside the detector was recorded to assess its energy response. Conventional x-ray images with a similar setup at voltages of 70 kVp and proton images using commercial Gafchromic EBT 2 and Kodak X-Omat V films were also taken for comparison purposes. Images were successfully acquired and compared to x-ray kVp and proton EBT2/X-Omat film images. The spatial resolution of the CMOS detector image is subjectively comparable to the EBT2 and Kodak X-Omat V film images obtained at the same object-detector distance. X-rays have an apparently higher spatial resolution than the CMOS. However, further studies with different commercial films using proton beam irradiation demonstrate that the distance of the detector to the object is important to the amount of proton scatter contributing to the proton image. Proton images obtained with films at different distances from the source indicate that proton scatter significantly affects the CMOS image quality. Proton radiographic images were successfully acquired at MGH using a CMOS active pixel sensor detector. The CMOS demonstrated spatial resolution subjectively comparable to films at the same object-detector distance. Further work will be done in order to establish the spatial and energy resolution of the CMOS detector for protons. The development and use of CMOS in proton radiography could allow in vivo proton range checks, patient setup QA, and real-time tumor tracking.

  12. Sparsely-Bonded CMOS Hybrid Imager

    NASA Technical Reports Server (NTRS)

    Sun, Chao (Inventor); Jones, Todd J. (Inventor); Nikzad, Shouleh (Inventor); Newton, Kenneth W. (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce R. (Inventor); Dickie, Matthew R. (Inventor); Hoenk, Michael E. (Inventor); Wrigley, Christopher J. (Inventor); Pain, Bedabrata (Inventor)

    2015-01-01

    A method and device for imaging or detecting electromagnetic radiation is provided. A device structure includes a first chip interconnected with a second chip. The first chip includes a detector array, wherein the detector array comprises a plurality of light sensors and one or more transistors. The second chip includes a Read Out Integrated Circuit (ROIC) that reads out, via the transistors, a signal produced by the light sensors. A number of interconnects between the ROIC and the detector array can be less than one per light sensor or pixel.

  13. New false color mapping for image fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Walraven, Jan

    1996-03-01

    A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
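
    The four steps of the mapping can be written down compactly. In the sketch below the common component is taken as the pixel-wise minimum of the two registered, equally scaled inputs, which is one common choice but an assumption here rather than the authors' exact definition.

    ```python
    import numpy as np

    def fuse_false_color(img_a, img_b):
        """Fuse two registered gray-level images (e.g. thermal and visual)
        into a red/green false-color image following the steps described above."""
        a = img_a.astype(float)
        b = img_b.astype(float)
        common = np.minimum(a, b)            # 1. common component of the two images
        unique_a = a - common                # 2. unique (sensor-specific) components
        unique_b = b - common
        red = a - unique_b                   # 3. subtract the other modality's
        green = b - unique_a                 #    unique component from each image
        rgb = np.zeros(a.shape + (3,))
        rgb[..., 0] = red                    # 4. display through the red channel
        rgb[..., 1] = green                  #    and the green channel
        return np.clip(rgb, 0.0, None)
    ```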

  14. Comparison of a CCD and an APS for soft X-ray diffraction

    NASA Astrophysics Data System (ADS)

    Stewart, Graeme; Bates, R.; Blue, A.; Clark, A.; Dhesi, S. S.; Maneuski, D.; Marchal, J.; Steadman, P.; Tartoni, N.; Turchetta, R.

    2011-12-01

    We compare a new CMOS Active Pixel Sensor (APS) to a Princeton Instruments PIXIS-XO: 2048B Charge Coupled Device (CCD) with soft X-rays tested in a synchrotron beam line at the Diamond Light Source (DLS). Despite CCDs being established in the field of scientific imaging, APS are an innovative technology that offers advantages over CCDs. These include faster readout, higher operational temperature, in-pixel electronics for advanced image processing and reduced manufacturing cost. The APS employed was the Vanilla sensor designed by the MI3 collaboration and funded by an RCUK Basic technology grant. This sensor has 520 x 520 square pixels, of size 25 μm on each side. The sensor can operate at a full frame readout of up to 20 Hz. The sensor had been back-thinned, to the epitaxial layer. This was the first time that a back-thinned APS had been demonstrated at a beam line at DLS. In the synchrotron experiment soft X-rays with an energy of approximately 708 eV were used to produce a diffraction pattern from a permalloy sample. The pattern was imaged at a range of integration times with both sensors. The CCD had to be operated at a temperature of -55°C whereas the Vanilla was operated over a temperature range from 20°C to -10°C. We show that the APS detector can operate with frame rates up to two hundred times faster than the CCD, without excessive degradation of image quality. The signal to noise of the APS is shown to be the same as that of the CCD at identical integration times and the response is shown to be linear, with no charge blooming effects. The experiment has allowed a direct comparison of back thinned APS and CCDs in a real soft x-ray synchrotron experiment.

  15. Origami silicon optoelectronics for hemispherical electronic eye systems.

    PubMed

    Zhang, Kan; Jung, Yei Hwan; Mikael, Solomon; Seo, Jung-Hun; Kim, Munho; Mi, Hongyi; Zhou, Han; Xia, Zhenyang; Zhou, Weidong; Gong, Shaoqin; Ma, Zhenqiang

    2017-11-24

    Digital image sensors in hemispherical geometries offer unique imaging advantages over their planar counterparts, such as wide field of view and low aberrations. Deforming miniature semiconductor-based sensors with high spatial resolution into such a format is challenging. Here we report a simple origami approach for fabricating single-crystalline silicon-based focal plane arrays and artificial compound eyes that have hemisphere-like structures. Convex isogonal polyhedral concepts allow certain combinations of polygons to fold into spherical formats. Using each polygon block as a sensor pixel, the silicon-based devices are shaped into maps of a truncated icosahedron, fabricated on flexible sheets, and further folded into either a concave or a convex hemisphere. These two electronic eye prototypes represent simple and low-cost methods as well as flexible optimization parameters in terms of pixel density and design. The results demonstrated in this work, combined with the miniature size and simplicity of the design, establish a practical technology for integration with conventional electronic devices.

  16. NDVI, scale invariance and the modifiable areal unit problem: An assessment of vegetation in the Adelaide Parklands.

    PubMed

    Nouri, Hamideh; Anderson, Sharolyn; Sutton, Paul; Beecham, Simon; Nagler, Pamela; Jarchow, Christopher J; Roberts, Dar A

    2017-04-15

    This research addresses the question of whether the Normalised Difference Vegetation Index (NDVI) is scale invariant (i.e. constant over spatial aggregation) for pure pixels of urban vegetation. It has long been recognized that there are issues related to the modifiable areal unit problem (MAUP) pertaining to indices such as NDVI and images at varying spatial resolutions. These issues are relevant to using NDVI values in spatial analyses. We compare two different methods of calculation of a mean NDVI: 1) using pixel values of NDVI within feature/object boundaries and 2) first calculating the mean red and mean near-infrared across all feature pixels and then calculating NDVI. We explore the nature and magnitude of these differences for images taken from two sensors: a 1.24 m resolution WorldView-3 image and a 0.1 m resolution digital aerial image. We apply these methods over an urban park located in the Adelaide Parklands of South Australia. We demonstrate that the MAUP is not an issue for calculation of NDVI within a sensor for pure urban vegetation pixels. This may prove useful for future rule-based monitoring of the ecosystem functioning of green infrastructure. Copyright © 2017 Elsevier B.V. All rights reserved.
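
    The two averaging orders compared in the study can be stated directly; a minimal sketch assuming NumPy arrays of red and near-infrared reflectance for the pixels inside one feature boundary.

    ```python
    import numpy as np

    def mean_of_pixel_ndvi(red, nir):
        """Method 1: compute NDVI pixel by pixel, then average over the feature."""
        ndvi = (nir - red) / (nir + red)
        return float(ndvi.mean())

    def ndvi_of_mean_bands(red, nir):
        """Method 2: average the red and NIR bands first, then form NDVI."""
        return float((nir.mean() - red.mean()) / (nir.mean() + red.mean()))
    ```

    For pure vegetation pixels with little within-feature variation the two values coincide, which matches the scale-invariance behaviour reported above; the difference between them grows as the pixels within a feature become more heterogeneous.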

  17. A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications.

    PubMed

    Moeys, Diederik Paul; Corradi, Federico; Li, Chenghan; Bamford, Simeon A; Longinotti, Luca; Voigt, Fabian F; Berry, Stewart; Taverni, Gemma; Helmchen, Fritjof; Delbruck, Tobi

    2018-02-01

    Applications requiring detection of small visual contrast require high sensitivity. Event cameras can provide higher dynamic range (DR) and reduce data rate and latency, but most existing event cameras have limited sensitivity. This paper presents the results of a 180-nm Towerjazz CIS process vision sensor called SDAVIS192. It outputs temporal contrast dynamic vision sensor (DVS) events and conventional active pixel sensor frames. The SDAVIS192 improves on previous DAVIS sensors with higher sensitivity for temporal contrast. The temporal contrast thresholds can be set down to 1% for negative changes in logarithmic intensity (OFF events) and down to 3.5% for positive changes (ON events). The achievement is possible through the adoption of an in-pixel preamplification stage. This preamplifier reduces the effective intrascene DR of the sensor (70 dB for OFF and 50 dB for ON), but an automated operating region control allows up to at least 110-dB DR for OFF events. A second contribution of this paper is the development of a characterization methodology for measuring DVS event detection thresholds by incorporating a measure of signal-to-noise ratio (SNR). At an average SNR of 30 dB, the DVS temporal contrast threshold fixed pattern noise is measured to be 0.3%-0.8% temporal contrast. Results comparing monochrome and RGBW color filter array DVS events are presented. The higher sensitivity of SDAVIS192 makes this sensor potentially useful for calcium imaging, as shown in a recording from cultured neurons expressing the calcium-sensitive green fluorescent protein GCaMP6f.

  18. Monolithic active pixel sensor development for the upgrade of the ALICE inner tracking system

    NASA Astrophysics Data System (ADS)

    Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Giubilato, P.; Hillemanns, H.; Junique, A.; Keil, M.; Kim, D.; Kim, J.; Kugathasan, T.; Lattuca, A.; Mager, M.; Marin Tobon, C. A.; Marras, D.; Martinengo, P.; Mattiazzo, S.; Mazza, G.; Mugnier, H.; Musa, L.; Pantano, D.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Siddhanta, S.; Snoeys, W.; Usai, G.; van Hoorne, J. W.; Yang, P.; Yi, J.

    2013-12-01

    ALICE plans an upgrade of its Inner Tracking System for 2018. The development of a monolithic active pixel sensor for this upgrade is described. The TowerJazz 180 nm CMOS imaging sensor process has been chosen because its deep p-well option makes it possible to use full CMOS in the pixel and because different starting materials can be used. The ALPIDE development is an alternative to approaches based on a rolling shutter architecture, and aims to reduce power consumption and integration time by an order of magnitude below the ALICE specifications, which would be quite beneficial in terms of material budget and background. The approach is based on an in-pixel binary front-end combined with a hit-driven architecture. Several prototypes have already been designed, submitted for fabrication, and some of them tested with X-ray sources and particles in a beam. Analog power consumption has been limited by optimizing the Q/C of the sensor using Explorer chips. Promising but preliminary first results have also been obtained with a prototype ALPIDE. Radiation tolerance up to the ALICE requirements has also been verified.

  19. Extraction of incident irradiance from LWIR hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Lahaie, Pierre

    2014-10-01

    The atmospheric correction of thermal hyperspectral imagery can be separated into two distinct processes: Atmospheric Compensation (AC) and Temperature and Emissivity Separation (TES). TES requires as input, at each pixel, the ground-leaving radiance and the atmospheric downwelling irradiance, which are the outputs of the AC process. The extraction of the downwelling irradiance from imagery requires assumptions about the nature of some of the pixels, the sensor, and the atmosphere. Another difficulty is that the sensor's spectral response is often not well characterized. To deal with this unknown, we defined a spectral mean operator that is used to filter both the ground-leaving radiance and a downwelling irradiance computed with MODTRAN. A user will select a number of pixels in the image for which the emissivity is assumed to be known. The emissivity of these pixels is assumed to be spectrally smooth, so that the only spectrally fast-varying term is the downwelling irradiance. Using these assumptions, we built an algorithm to estimate the downwelling irradiance. The algorithm is used on all the selected pixels. The estimated irradiance is the average of the resulting computation over the spectral channels. The algorithm performs well in simulation, and results are shown for errors in the assumed emissivity and for errors in the atmospheric profiles. Sensor noise mainly influences the required number of pixels.
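
    For a single selected pixel, the inversion step can be illustrated with the radiance model commonly used in temperature-emissivity separation, L_ground = eps * B(T) + (1 - eps) * E_down / pi. This is a standard relation and only a sketch of the idea; the paper's actual algorithm additionally applies the spectral mean operator and averages over the selected pixels and spectral channels.

    ```python
    import numpy as np

    def downwelling_irradiance(l_ground, emissivity, planck_radiance):
        """Invert L_ground = eps * B(T) + (1 - eps) * E_down / pi for E_down.

        l_ground        : ground-leaving spectral radiance at the selected pixel
        emissivity      : assumed (spectrally smooth) emissivity of that pixel
        planck_radiance : blackbody radiance B(T) at the pixel temperature
        """
        return np.pi * (l_ground - emissivity * planck_radiance) / (1.0 - emissivity)
    ```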

  20. Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications

    NASA Astrophysics Data System (ADS)

    Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.

    2015-06-01

    We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non-destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half-maximum (FWHM) across the entire dynamic range, and a noise floor about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.

  1. Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications

    PubMed Central

    Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.

    2014-01-01

    We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non-destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half-maximum (FWHM) across the entire dynamic range, and a noise floor about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications. PMID:25937684

  2. Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications.

    PubMed

    Barber, W C; Wessel, J C; Nygard, E; Iwanczyk, J S

    2015-06-01

    We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non-destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half-maximum (FWHM) across the entire dynamic range, and a noise floor about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.

  3. Monolithic pixel development in TowerJazz 180 nm CMOS for the outer pixel layers in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Berdalovic, I.; Bates, R.; Buttar, C.; Cardella, R.; Egidos Plaja, N.; Hemperek, T.; Hiti, B.; van Hoorne, J. W.; Kugathasan, T.; Mandic, I.; Maneuski, D.; Marin Tobon, C. A.; Moustakas, K.; Musa, L.; Pernegger, H.; Riedler, P.; Riegel, C.; Schaefer, D.; Schioppa, E. J.; Sharma, A.; Snoeys, W.; Solans Sanchez, C.; Wang, T.; Wermes, N.

    2018-01-01

    The upgrade of the ATLAS tracking detector (ITk) for the High-Luminosity Large Hadron Collider at CERN requires the development of novel radiation-hard silicon sensor technologies. The latest developments in CMOS sensor processing offer the possibility of combining high-resistivity substrates with on-chip high-voltage biasing to achieve a large depleted active sensor volume. We have characterised depleted monolithic active pixel sensors (DMAPS), which were produced in a novel modified imaging process implemented in the TowerJazz 180 nm CMOS process in the framework of the monolithic sensor development for the ALICE experiment. Sensors fabricated in this modified process feature full depletion of the sensitive layer, a sensor capacitance of only a few fF and radiation tolerance up to 10^15 n_eq/cm^2. This paper summarises the measurements of charge collection properties in beam tests and in the laboratory using radioactive sources and edge TCT. The results of these measurements show significantly improved radiation hardness obtained for sensors manufactured using the modified process. This has opened the way to the design of two large-scale demonstrators for the ATLAS ITk. To achieve a design compatible with the requirements of the outer pixel layers of the tracker, a charge-sensitive front-end taking 500 nA from a 1.8 V supply is combined with a fast digital readout architecture. The low-power front-end with a 25 ns time resolution exploits the low sensor capacitance to reduce noise and analogue power, while the implemented readout architectures minimise power by reducing the digital activity.

  4. Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.

    2007-09-01

    We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter and also measures modulation transfer functions (MTFs) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After the color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed, then post-processed. For NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than that of the vertical MTFs. This behavior indicates that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to analyze the total noise in terms of spatial and temporal noise by applying subtractions of images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
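
    The MTF measurement described above (display a thin line, Fourier transform the captured profile) can be sketched as follows for a background-subtracted line image; windowing, the color-matrix pre-processing and the exact normalisation used by the authors are omitted, so this is only an illustration.

    ```python
    import numpy as np

    def mtf_from_line_image(line_image, axis=0):
        """Estimate an MTF from an image of a thin displayed line.

        line_image : 2-D array containing a single bright line
        axis       : 0 if the line is vertical (MTF along x), 1 if it is horizontal
        """
        lsf = line_image.mean(axis=axis)       # average along the line: line spread function
        lsf = lsf - lsf.min()                  # crude background removal
        lsf = lsf / lsf.sum()                  # normalise the area to one
        mtf = np.abs(np.fft.rfft(lsf))         # modulus of the Fourier transform
        return mtf / mtf[0]                    # MTF(0) = 1 by convention
    ```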

  5. Technical guidance for the development of a solid state image sensor for human low vision image warping

    NASA Technical Reports Server (NTRS)

    Vanderspiegel, Jan

    1994-01-01

    This report surveys different technologies and approaches to realize sensors for image warping. The goal is to study the feasibility, technical aspects, and limitations of making an electronic camera with special geometries that implement certain transformations for image warping. This work was inspired by the research done by Dr. Juday at NASA Johnson Space Center on image warping. The study has looked into different solid-state technologies to fabricate image sensors. It is found that among the available technologies, CMOS is preferred over CCD technology. CMOS provides more flexibility to design different functions into the sensor, is more widely available, and is a lower cost solution. By using an architecture with row and column decoders, one has the added flexibility of addressing the pixels at random or of reading out only part of the image.

  6. Precise calibration of pupil images in pyramid wavefront sensor.

    PubMed

    Liu, Yong; Mu, Quanquan; Cao, Zhaoliang; Hu, Lifa; Yang, Chengliang; Xuan, Li

    2017-04-20

    The pyramid wavefront sensor (PWFS) is a novel wavefront sensor with several inspiring advantages compared with Shack-Hartmann wavefront sensors. The PWFS uses four pupil images to calculate the local tilt of the incoming wavefront. Pupil images are conjugated with a telescope pupil so that each pixel in the pupil image is diffraction-limited by the telescope pupil diameter, thus the sensing error of the PWFS is much lower than that of the Shack-Hartmann sensor and is related to the extraction and alignment accuracy of pupil images. However, precise extraction of these images is difficult to conduct in practice. Aiming at improving the sensing accuracy, we analyzed the physical model of calibration of a PWFS and put forward an extraction algorithm. The process was verified via a closed-loop correction experiment. The results showed that the sensing accuracy of the PWFS increased after applying the calibration and extraction method.
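
    The local tilt estimate from the four pupil images is commonly written as a normalised difference of facet intensities; the facet labelling and sign convention below are assumptions for illustration and not necessarily the ones used in the paper.

    ```python
    import numpy as np

    def pwfs_slopes(i1, i2, i3, i4, eps=1e-12):
        """Estimate local wavefront tilts from four registered PWFS pupil images."""
        total = np.maximum(i1 + i2 + i3 + i4, eps)   # avoid division by zero
        sx = ((i1 + i2) - (i3 + i4)) / total         # tilt along one pupil axis
        sy = ((i1 + i3) - (i2 + i4)) / total         # tilt along the orthogonal axis
        return sx, sy
    ```

    Because the slopes are ratios of intensities at corresponding pixels, even a one-pixel error in extracting or aligning the four pupil images biases the result, which is why the calibration and extraction accuracy discussed above matter.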

  7. G-Channel Restoration for RWB CFA with Double-Exposed W Channel

    PubMed Central

    Park, Chulhee; Song, Ki Sun; Kang, Moon Gi

    2017-01-01

    In this paper, we propose a green (G)-channel restoration for a red–white–blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red–blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, a RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels. PMID:28165425
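
    Under the simplest assumptions that W responds approximately like R + G + B and that the double-exposed W channel has twice the exposure of the R and B channels, the restoration reduces to an exposure-normalised subtraction. The sketch below is only that simplified reading; the algorithm in the paper treats the energy difference, saturation and noise more carefully.

    ```python
    import numpy as np

    def restore_green(w_double, red, blue, exposure_ratio=2.0):
        """Recover a G channel from a double-exposed W channel and the R, B channels.

        w_double       : W pixel values accumulated over the longer exposure
        red, blue      : R and B pixel values at the normal exposure
        exposure_ratio : W exposure time divided by the R/B exposure time
        """
        w_normalised = w_double / exposure_ratio   # bring W back to the R/B exposure
        green = w_normalised - red - blue          # assumes W ~ R + G + B
        return np.clip(green, 0.0, None)
    ```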

  8. G-Channel Restoration for RWB CFA with Double-Exposed W Channel.

    PubMed

    Park, Chulhee; Song, Ki Sun; Kang, Moon Gi

    2017-02-05

    In this paper, we propose a green (G)-channel restoration for a red-white-blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red-blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, a RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels.

  9. Dual light field and polarization imaging using CMOS diffractive image sensors.

    PubMed

    Jayasuriya, Suren; Sivaramakrishnan, Sriram; Chuang, Ellen; Guruaribam, Debashree; Wang, Albert; Molnar, Alyosha

    2015-05-15

    In this Letter we present, to the best of our knowledge, the first integrated CMOS image sensor that can simultaneously perform light field and polarization imaging without the use of external filters or additional optical elements. Previous work has shown how photodetectors with two stacks of integrated metal gratings above them (called angle sensitive pixels) diffract light in a Talbot pattern to capture four-dimensional light fields. We show, in addition to diffractive imaging, that these gratings polarize incoming light and characterize the response of these sensors to polarization and incidence angle. Finally, we show two applications of polarization imaging: imaging stress-induced birefringence and identifying specular reflections in scenes to improve light field algorithms for these scenes.

  10. Automatic parquet block sorting using real-time spectral classification

    NASA Astrophysics Data System (ADS)

    Astrom, Anders; Astrand, Erik; Johansson, Magnus

    1999-03-01

    This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information onto an image sensor, a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features on the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards, showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 lines/s. This opens up the possibility of maintaining a high production speed and still measuring with good resolution.
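
    Off-sensor, the per-pixel classification against linear spectral models amounts to a few lines; the model matrix and the reference class index are placeholders, and the analogue near-sensor tricks that make the patented method fast are of course not reproduced here.

    ```python
    import numpy as np

    def classify_line(spectra, class_models):
        """Classify every pixel along the imaged line into one of the classes.

        spectra      : (n_pixels, n_bands) spectrum measured for each pixel
        class_models : (n_classes, n_bands) linear model (weight vector) per class
        """
        scores = spectra @ class_models.T        # linear model response per class
        return np.argmax(scores, axis=1)         # label of the best-matching class

    def defect_percentage(labels, reference_class=0):
        """Per-block statistic: fraction of pixels not matching the reference class."""
        return 100.0 * float(np.mean(labels != reference_class))
    ```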

  11. High-speed high-resolution epifluorescence imaging system using CCD sensor and digital storage for neurobiological research

    NASA Astrophysics Data System (ADS)

    Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi

    2001-04-01

    We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) X 1200 (V) pixels and a wide imaging area of 28.1 X 13.8 mm, while the type II chip has 1776 X 1626 pixels and an active imaging area of 20.4 X 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 X 200 pixel-images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.

  12. Variational optical flow estimation for images with spectral and photometric sensor diversity

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-03-01

    Motion estimation of objects in image sequences is an essential computer vision task. To this end, optical flow methods compute pixel-level motion, with the purpose of providing low-level input to higher-level algorithms and applications. Robust flow estimation is crucial for the success of applications, which in turn depends on the quality of the captured image data. This work explores the use of sensor diversity in the image data within a framework for variational optical flow. In particular, a custom image sensor setup intended for vehicle applications is tested. Experimental results demonstrate the improved flow estimation performance when IR sensitivity or flash illumination is added to the system.

  13. High dynamic range CMOS (HDRC) imagers for safety systems

    NASA Astrophysics Data System (ADS)

    Strobel, Markus; Döttling, Dietmar

    2013-04-01

    The first part of this paper describes the high dynamic range CMOS (HDRC®) imager - a special type of CMOS image sensor with logarithmic response. The powerful property of a high dynamic range (HDR) image acquisition is detailed by mathematical definition and measurement of the optoelectronic conversion function (OECF) of two different HDRC imagers. Specific sensor parameters will be discussed including the pixel design for the global shutter readout. The second part will give an outline on the applications and requirements of cameras for industrial safety. Equipped with HDRC global shutter sensors SafetyEYE® is a high-performance stereo camera system for safe three-dimensional zone monitoring enabling new and more flexible solutions compared to existing safety guards.

  14. Fully 3D-Integrated Pixel Detectors for X-Rays

    DOE PAGES

    Deptuch, Grzegorz W.; Gabriella, Carini; Enquist, Paul; ...

    2016-01-01

    The vertically integrated photon imaging chip (VIPIC1) pixel detector is a stack consisting of a 500-μm-thick silicon sensor, a two-tier 34-μm-thick integrated circuit, and a host printed circuit board (PCB). The integrated circuit tiers were bonded using the direct bonding technology with copper, and each tier features 1-μm-diameter through-silicon vias that were used for connections to the sensor on one side, and to the host PCB on the other side. The 80-μm-pixel-pitch sensor was bonded to the integrated circuit using the direct bonding technology with nickel. The stack was mounted on the board using Sn–Pb balls placed on a 320-μm pitch, yielding an entirely wire-bond-less structure. The analog front-end features a pulse response peaking at below 250 ns, and the power consumption per pixel is 25 μW. We successfully completed the 3-D integration and report it here. Additionally, all pixels in the matrix of 64 × 64 pixels were responding on well-bonded devices. Correct operation of the sparsified readout, allowing a single 153-ns bunch timing resolution, was confirmed in the tests on a synchrotron beam of 10-keV X-rays. An equivalent noise charge of 36.2 e- rms and a conversion gain of 69.5 μV/e- with 2.6 e- rms and 2.7 μV/e- rms pixel-to-pixel variations, respectively, were measured.

  15. A Detailed Look at the Performance Characteristics of the Lightning Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Zhang, Daile; Cummins, Kenneth L.; Bitzer, Phillip; Koshak, William J.

    2018-01-01

    The Lightning Imaging Sensor (LIS) on board the Tropical Rainfall Measuring Mission (TRMM) effectively reached its end of life on April 15, 2015 after 17+ years of observation. Given the wealth of information in the archived LIS lightning data, and growing use of optical observations of lightning from space throughout the world, it is still of importance to better understand LIS calibration and performance characteristics. In this work, we continue our efforts to quantify the optical characteristics of the LIS pixel array, and to further characterize the detection efficiency and location accuracy of LIS. The LIS pixel array was partitioned into four quadrants, each having its own signal amplifier and digital conversion hardware. In addition, the sensor optics resulted in a decreasing sensitivity with increasing displacement from the center of the array. These engineering limitations resulted in differences in the optical emissions detected across the pixel array. Our work to date has shown a 20% increase in the count of the lightning events detected in one of the LIS quadrants, because of a lower detection threshold. In this study, we will discuss our work in progress on these limitations, and their potential impact on the group- and flash-level parameters.

  16. A Combined Laser-Communication and Imager for Microspacecraft (ACLAIM)

    NASA Technical Reports Server (NTRS)

    Hemmati, H.; Lesh, J.

    1998-01-01

    ACLAIM is a multi-function instrument consisting of a laser communication terminal and an imaging camera that share a common telescope. A single APS- (Active Pixel Sensor) based focal-plane-array is used to perform both the acquisition and tracking (for laser communication) and science imaging functions.

  17. Adaptive scene-based correction algorithm for removal of residual fixed pattern noise in microgrid image data

    NASA Astrophysics Data System (ADS)

    Ratliff, Bradley M.; LeMaster, Daniel A.

    2012-06-01

    Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.

  18. Method and apparatus for determining the coordinates of an object

    DOEpatents

    Pedersen, Paul S; Sebring, Robert

    2003-01-01

    A method and apparatus is described for determining the coordinates on the surface of an object which is illuminated by a beam having pixels which have been modulated according to predetermined mathematical relationships with pixel position within the modulator. The reflected illumination is registered by an image sensor at a known location which registers the intensity of the pixels as received. Computations on the intensity, which relate the pixel intensities received to the pixel intensities transmitted at the modulator, yield the proportional loss of intensity and planar position of the originating pixels. The proportional loss and position information can then be utilized within triangulation equations to resolve the coordinates of associated surface locations on the object.

  19. Using the Medipix3 detector for direct electron imaging in the range 60 keV to 200 keV in electron microscopy

    NASA Astrophysics Data System (ADS)

    Mir, J. A.; Plackett, R.; Shipsey, I.; dos Santos, J. M. F.

    2017-11-01

    Hybrid pixel sensor technology such as the Medipix3 represents a unique tool for electron imaging. We have investigated its performance as a direct imaging detector using a Transmission Electron Microscope (TEM) which incorporated a Medipix3 detector with a 300 μm thick silicon layer comprising 256×256 pixels at 55 μm pixel pitch. We present results taken with the Medipix3 in Single Pixel Mode (SPM) with electron beam energies in the range 60-200 keV. Measurements of the Modulation Transfer Function (MTF) and the Detective Quantum Efficiency (DQE) were carried out. At a given beam energy, the MTF data was acquired by deploying the established knife-edge technique. Similarly, the experimental data required to determine the DQE was obtained by acquiring a stack of images of a focused beam and of free space (flatfield) to determine the Noise Power Spectrum (NPS).

  20. Vulnerability of CMOS image sensors in Megajoule Class Laser harsh environment.

    PubMed

    Goiffon, V; Girard, S; Chabane, A; Paillet, P; Magnan, P; Cervantes, P; Martin-Gonthier, P; Baggio, J; Estribeau, M; Bourgade, J-L; Darbon, S; Rousseau, A; Glebov, V Yu; Pien, G; Sangster, T C

    2012-08-27

    CMOS image sensors (CIS) are promising candidates as part of optical imagers for the plasma diagnostics devoted to the study of fusion by inertial confinement. However, the harsh radiative environment of Megajoule Class Lasers threatens the performance of these optical sensors. In this paper, the vulnerability of CIS to the transient and mixed pulsed radiation environment associated with such facilities is investigated during an experiment at the OMEGA facility at the Laboratory for Laser Energetics (LLE), Rochester, NY, USA. The transient and permanent effects of the 14 MeV neutron pulse on CIS are presented. The behavior of the tested CIS shows that active pixel sensors (APS) exhibit a better hardness to this harsh environment than a CCD. A first-order extrapolation of the reported results to the higher level of radiation expected for Megajoule Class Laser facilities (Laser Megajoule in France or National Ignition Facility in the USA) shows that temporarily saturated pixels due to transient neutron-induced single event effects will be the major issue for the development of radiation-tolerant plasma diagnostic instruments, whereas the permanent degradation of the CIS related to displacement damage or total ionizing dose effects could be reduced by applying well-known mitigation techniques.

  1. Land cover mapping at sub-pixel scales

    NASA Astrophysics Data System (ADS)

    Makido, Yasuyo Kato

    One of the biggest drawbacks of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial detail depicted in an image. Fine spatial resolution images from satellite sensors such as IKONOS and QuickBird are now available. However, these images are not well suited for large-area studies, since a single image covers only a small area and large-area coverage is therefore costly. Much research has focused on attempting to extract land cover types at sub-pixel scale, and little research has been conducted concerning the spatial allocation of land cover types within a pixel. This study is devoted to the development of new algorithms for predicting land cover distribution using remotely sensed imagery at the sub-pixel level. The "pixel-swapping" optimization algorithm, which was proposed by Atkinson for predicting sub-pixel land cover distribution, is investigated in this study. Two limitations of this method, the arbitrary spatial range value and the arbitrary exponential model of spatial autocorrelation, are assessed. Various weighting functions, as alternatives to the exponential model, are evaluated in order to derive the optimum weighting function. Two different simulation models were employed to develop spatially autocorrelated binary class maps. In all tested models, Gaussian, Exponential, and IDW, the pixel-swapping method improved classification accuracy compared with the initial random allocation of sub-pixels. However, the results suggested that equal weights could be used to increase accuracy and sub-pixel spatial autocorrelation instead of these more complex models of spatial structure. New algorithms for modeling the spatial distribution of multiple land cover classes at sub-pixel scales are developed and evaluated. Three methods are examined: sequential categorical swapping, simultaneous categorical swapping, and simulated annealing. These three methods are applied to classified Landsat ETM+ data that has been resampled to 210 meters. The results suggested that the simultaneous method can be considered the optimum method in terms of accuracy and computation time. The case study employs remote sensing imagery at the following sites: tropical forests in Brazil and a temperate mixed land-cover mosaic in East China. Sub-areas of both sites are used to examine how the characteristics of the landscape affect the ability of the optimum technique. Three types of measurement, Moran's I, mean patch size (MPS), and patch size standard deviation (STDEV), are used to characterize the landscape. All results suggested that this technique could increase classification accuracy more than traditional hard classification. The methods developed in this study can benefit researchers who employ coarse remote sensing imagery but are interested in detailed landscape information. In many cases, the satellite sensor that provides large spatial coverage has insufficient spatial detail to identify landscape patterns. Application of the super-resolution technique described in this dissertation could potentially solve this problem by providing detailed land cover predictions from coarse-resolution satellite sensor imagery.
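
    For readers unfamiliar with the pixel-swapping idea, the sketch below implements one plausible reading of it for a binary class map: sub-pixel '1's that are weakly supported by their neighbourhood are exchanged with strongly supported '0's inside each coarse pixel, so per-pixel class proportions are preserved while spatial clustering increases. The inverse-distance weighting, cell size, iteration count and toroidal edge handling are illustrative assumptions, not the dissertation's exact settings.

```python
import numpy as np

def attractiveness(sub, radius=3):
    """Distance-weighted count of '1' neighbours for every sub-pixel
    (toroidal edges via np.roll, purely for brevity)."""
    att = np.zeros(sub.shape, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            d = np.hypot(dy, dx)
            if d == 0 or d > radius:
                continue
            att += np.roll(np.roll(sub, dy, axis=0), dx, axis=1) / d
    return att

def pixel_swap(sub, cell=7, iterations=20, radius=3):
    """Within each coarse cell, swap the least-attractive '1' with the
    most-attractive '0' whenever that increases clustering; the number of
    '1's per coarse cell (the class proportion) never changes."""
    sub = sub.copy()
    h, w = sub.shape
    for _ in range(iterations):
        att = attractiveness(sub, radius)
        for y0 in range(0, h, cell):
            for x0 in range(0, w, cell):
                block = sub[y0:y0 + cell, x0:x0 + cell]
                a = att[y0:y0 + cell, x0:x0 + cell]
                ones = np.argwhere(block == 1)
                zeros = np.argwhere(block == 0)
                if len(ones) == 0 or len(zeros) == 0:
                    continue
                worst_one = ones[np.argmin(a[tuple(ones.T)])]
                best_zero = zeros[np.argmax(a[tuple(zeros.T)])]
                if a[tuple(best_zero)] > a[tuple(worst_one)]:
                    block[tuple(worst_one)] = 0
                    block[tuple(best_zero)] = 1
    return sub

# Toy usage: random initial allocation of a 50% class inside 7x7 cells.
rng = np.random.default_rng(1)
initial = (rng.random((49, 49)) < 0.5).astype(int)
clustered = pixel_swap(initial)
```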

  2. Hard-X-Ray/Soft-Gamma-Ray Imaging Sensor Assembly for Astronomy

    NASA Technical Reports Server (NTRS)

    Myers, Richard A.

    2008-01-01

    An improved sensor assembly has been developed for astronomical imaging at photon energies ranging from 1 to 100 keV. The assembly includes a thallium-doped cesium iodide scintillator divided into pixels and coupled to an array of high-gain avalanche photodiodes (APDs). Optionally, the array of APDs can be operated without the scintillator to detect photons at energies below 15 keV. The array of APDs is connected to compact electronic readout circuitry that includes, among other things, 64 independent channels for detection of photons in various energy ranges, up to a maximum energy of 100 keV, at a count rate up to 3 kHz. The readout signals are digitized and processed by imaging software that performs "on-the-fly" analysis. The sensor assembly has been integrated into an imaging spectrometer, along with a pair of coded apertures (Fresnel zone plates) that are used in conjunction with the pixel layout to implement a shadow-masking technique to obtain relatively high spatial resolution without having to use extremely small pixels. Angular resolutions of about 20 arc-seconds have been measured. Thus, for example, the imaging spectrometer can be used to (1) determine both the energy spectrum of a distant x-ray source and the angular deviation of the source from the nominal line of sight of an x-ray telescope in which the spectrometer is mounted or (2) study the spatial and temporal development of solar flares, repeating gamma-ray bursters, and other phenomena that emit transient radiation in the hard-x-ray/soft-gamma-ray region of the electromagnetic spectrum.

  3. Development of X-Ray Microcalorimeter Imaging Spectrometers for the X-Ray Surveyor Mission Concept

    NASA Technical Reports Server (NTRS)

    Bandler, Simon R.; Adams, Joseph S.; Chervenak, James A.; Datesman, Aaron M.; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Betncourt-Martinez, Gabriele; Miniussi, Antoine R.; hide

    2016-01-01

    Four astrophysics missions are currently being studied by NASA as candidate large missions to be chosen in the 2020 astrophysics decadal survey. One of these missions is the X-Ray Surveyor (XRS), and possible configurations of this mission are currently under study by a science and technology definition team (STDT). One of the key instruments under study is an X-ray microcalorimeter, and the requirements for such an instrument are currently under discussion. In this paper we review some different detector options that exist for this instrument, and discuss what array formats might be possible. We have developed one design option that utilizes either transition-edge sensors (TES) or magnetically coupled calorimeters (MCC) in pixel array sizes approaching 100 kilo-pixels. To reduce the number of sensors read out to a plausible scale, we have assumed detector geometries in which a thermal sensor such as a TES or MCC can read out a sub-array of 20-25 individual pixels. In this paper we describe the development status of these detectors, and also discuss the different options that exist for reading out the very large number of pixels.

  4. Miniature infrared hyperspectral imaging sensor for airborne applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-05-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to the infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics are made up of an area array of diffractive optical elements where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper will present our optical-mechanical design approach, which results in an infrared hyperspectral imaging system that is small enough for a payload on a mini-UAV or commercial quadcopter. The diffractive optical elements used in the lenslet array are blazed gratings where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using a different number of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the spatial resolution. A 2 x 2 lenslet array will image four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, will give a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel element focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame.

  5. Masking Strategies for Image Manifolds.

    PubMed

    Dadkhahi, Hamid; Duarte, Marco F

    2016-07-07

    We consider the problem of selecting an optimal mask for an image manifold, i.e., choosing a subset of the pixels of the image that preserves the manifold's geometric structure present in the original data. Such masking implements a form of compressive sensing through emerging imaging sensor platforms for which the power expense grows with the number of pixels acquired. Our goal is for the manifold learned from masked images to resemble its full image counterpart as closely as possible. More precisely, we show that one can indeed accurately learn an image manifold without having to consider a large majority of the image pixels. In doing so, we consider two masking methods that preserve the local and global geometric structure of the manifold, respectively. In each case, the process of finding the optimal masking pattern can be cast as a binary integer program, which is computationally expensive but can be approximated by a fast greedy algorithm. Numerical experiments show that the relevant manifold structure is preserved through the data-dependent masking process, even for modest mask sizes.
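
    A greedy selection of this kind can be sketched as follows. The stand-in objective here is how well distances computed from the masked pixels approximate the full-image pairwise distances, which is only a rough proxy for the manifold-preserving criteria of the paper; all names and the toy data are hypothetical.

```python
import numpy as np

def greedy_mask(images, k):
    """Greedily pick k pixel indices whose values best preserve pairwise
    image distances (a simple proxy for preserving manifold geometry)."""
    n, p = images.shape                         # n images, p pixels each
    full_d = np.linalg.norm(images[:, None, :] - images[None, :, :], axis=2)
    chosen, remaining = [], list(range(p))
    partial = np.zeros((n, n))                  # running sum of squared diffs
    for _ in range(k):
        best_pix, best_err = None, np.inf
        for j in remaining:
            diff2 = (images[:, None, j] - images[None, :, j]) ** 2
            scale = np.sqrt(p / (len(chosen) + 1))   # rescale to full length
            err = np.linalg.norm(np.sqrt(partial + diff2) * scale - full_d)
            if err < best_err:
                best_err, best_pix = err, j
        chosen.append(best_pix)
        remaining.remove(best_pix)
        partial += (images[:, None, best_pix] - images[None, :, best_pix]) ** 2
    return np.array(chosen)

# Toy usage on 20 random 64-pixel "images"; keep only 8 of 64 pixels.
rng = np.random.default_rng(2)
mask = greedy_mask(rng.random((20, 64)), k=8)
```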

  6. Fast Fourier single-pixel imaging via binary illumination.

    PubMed

    Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang

    2017-09-20

    Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refreshing rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
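
    The binarization step can be reproduced in a few lines: generate a grayscale Fourier basis pattern, upsample it, and apply Floyd-Steinberg error-diffusion dithering before loading it onto the DMD. The 2x upsampling factor, pattern size and spatial frequencies below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def fourier_pattern(n, fx, fy, phase):
    """Grayscale Fourier basis pattern with values in [0, 1]."""
    y, x = np.mgrid[0:n, 0:n]
    return 0.5 + 0.5 * np.cos(2 * np.pi * (fx * x + fy * y) / n + phase)

def floyd_steinberg(img):
    """Binarize a grayscale pattern with Floyd-Steinberg error diffusion."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - new
            out[y, x] = new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# Upsample a 128x128 grayscale pattern 2x, then dither to a binary DMD frame.
gray = fourier_pattern(128, fx=5, fy=3, phase=0.0)
binary = floyd_steinberg(np.kron(gray, np.ones((2, 2))))   # 256x256, {0, 1}
```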

  7. Active Pixel Sensors: Are CCD's Dinosaurs?

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.

    1993-01-01

    Charge-coupled devices (CCD's) are presently the technology of choice for most imaging applications. In the 23 years since their invention in 1970, they have evolved to a sophisticated level of performance. However, as with all technologies, we can be certain that they will be supplanted someday. In this paper, the Active Pixel Sensor (APS) technology is explored as a possible successor to the CCD. An active pixel is defined as a detector array technology that has at least one active transistor within the pixel unit cell. The APS eliminates the need for nearly perfect charge transfer -- the Achilles' heel of CCDs. This perfect charge transfer makes CCD's radiation 'soft,' difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response.

  8. A safety monitoring system for taxi based on CMOS imager

    NASA Astrophysics Data System (ADS)

    Liu, Zhi

    2005-01-01

    CMOS image sensors have now become increasingly competitive with respect to their CCD counterparts, while adding advantages such as no blooming, simpler driving requirements and the potential of on-chip integration of sensor, analogue circuitry, and digital processing functions. A safety monitoring system for taxis based on a CMOS imager, which can record the scene when an unusual circumstance occurs, is described in this paper. The monitoring system is based on a CMOS imager (OV7120), which can output digital image data through a parallel pixel data port. The system consists of a CMOS image sensor, a large-capacity NAND FLASH ROM, a USB interface chip and a microcontroller (AT90S8515). The structure of the whole system and the test data are discussed and analyzed in detail.

  9. Hybrid UV Imager Containing Face-Up AlGaN/GaN Photodiodes

    NASA Technical Reports Server (NTRS)

    Zheng, Xinyu; Pain, Bedabrata

    2005-01-01

    A proposed hybrid ultraviolet (UV) image sensor would comprise a planar membrane array of face-up AlGaN/GaN photodiodes integrated with a complementary metal oxide/semiconductor (CMOS) readout-circuit chip. Each pixel in the hybrid image sensor would contain a UV photodiode on the AlGaN/GaN membrane, metal oxide/semiconductor field-effect transistor (MOSFET) readout circuitry on the CMOS chip underneath the photodiode, and a metal via connection between the photodiode and the readout circuitry (see figure). The proposed sensor design would offer all the advantages of comparable prior CMOS active-pixel sensors and AlGaN UV detectors while overcoming some of the limitations of prior (AlGaN/sapphire)/CMOS hybrid image sensors that have been designed and fabricated according to the methodology of flip-chip integration. AlGaN is a nearly ideal UV-detector material because its bandgap is wide and adjustable and it offers the potential to attain extremely low dark current. Integration of AlGaN with CMOS is necessary because at present there are no practical means of realizing readout circuitry in the AlGaN/GaN material system, whereas the means of realizing readout circuitry in CMOS are well established. In one variant of the flip-chip approach to integration, an AlGaN chip on a sapphire substrate is inverted (flipped) and then bump-bonded to a CMOS readout circuit chip; this variant results in poor quantum efficiency. In another variant of the flip-chip approach, an AlGaN chip on a crystalline AlN substrate would be bonded to a CMOS readout circuit chip; this variant is expected to result in narrow spectral response, which would be undesirable in many applications. Two other major disadvantages of flip-chip integration are large pixel size (a consequence of the need to devote sufficient area to each bump bond) and severe restriction on the photodetector structure. The membrane array of AlGaN/GaN photodiodes and the CMOS readout circuit for the proposed image sensor would be fabricated separately.

  10. Active pixel sensor pixel having a photodetector whose output is coupled to an output transistor gate

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nakamura, Junichi (Inventor); Kemeny, Sabrina E. (Inventor)

    2005-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. A Simple Floating Gate (SFG) pixel structure could also be employed in the imager to provide a non-destructive readout and smaller pixel sizes.

  11. Toward More Accurate Iris Recognition Using Cross-Spectral Matching.

    PubMed

    Nalla, Pattabhi Ramaiah; Kumar, Ajay

    2017-01-01

    Iris recognition systems are increasingly deployed for large-scale applications such as national ID programs, which continue to acquire millions of iris images to establish identity among billions. However, with the availability of a variety of iris sensors deployed for iris imaging under different illumination/environments, significant performance degradation is expected while matching iris images acquired under two different domains (either sensor-specific or wavelength-specific). This paper develops a domain adaptation framework to address this problem and introduces a new algorithm using a Markov random fields model to significantly improve cross-domain iris recognition. The proposed domain adaptation framework, based on naive Bayes nearest neighbor classification, uses a real-valued feature representation which is capable of learning domain knowledge. Our approach of estimating corresponding visible iris patterns from the synthesis of iris patches in the near-infrared iris images achieves outperforming results for cross-spectral iris recognition. In this paper, a new class of bi-spectral iris recognition system that can simultaneously acquire visible and near-infrared images with pixel-to-pixel correspondences is proposed and evaluated. This paper presents experimental results from three publicly available databases, the PolyU cross-spectral iris image database, IIITD CLI and the UND database, and achieves outperforming results for cross-sensor and cross-spectral iris matching.

  12. Noise Reduction Techniques and Scaling Effects towards Photon Counting CMOS Image Sensors

    PubMed Central

    Boukhayma, Assim; Peizerat, Arnaud; Enz, Christian

    2016-01-01

    This paper presents an overview of the read noise in CMOS image sensors (CISs) based on four-transistor (4T) pixels, column-level amplification and correlated multiple sampling. Starting from the input-referred noise analytical formula, process-level optimizations, device choices and circuit techniques at the pixel and column level of the readout chain are derived and discussed. The noise reduction techniques that can be implemented at the column and pixel level are verified by transient noise simulations, measurement and results from recently published low-noise CIS. We show how recently reported process refinement, leading to the reduction of the sense node capacitance, can be combined with an optimal in-pixel source follower design to reach a sub-0.3 e- rms read noise at room temperature. This paper also discusses the impact of technology scaling on the CIS read noise. It shows how designers can take advantage of scaling and how the Metal-Oxide-Semiconductor (MOS) transistor gate leakage tunneling current appears as a challenging limitation. For this purpose, both simulation results of the gate leakage current and 1/f noise data reported from different foundries and technology nodes are used.

  13. Estimation of saturated pixel values in digital color imaging

    PubMed Central

    Zhang, Xuemei; Brainard, David H.

    2007-01-01

    Pixel saturation, where the incident light at a pixel causes one of the color channels of the camera sensor to respond at its maximum value, can produce undesirable artifacts in digital color images. We present a Bayesian algorithm that estimates what the saturated channel's value would have been in the absence of saturation. The algorithm uses the non-saturated responses from the other color channels, together with a multivariate Normal prior that captures the correlation in response across color channels. The appropriate parameters for the prior may be estimated directly from the image data, since most image pixels are not saturated. Given the prior, the responses of the non-saturated channels, and the fact that the true response of the saturated channel is known to be greater than the saturation level, the algorithm returns the optimal expected mean square estimate for the true response. Extensions of the algorithm to the case where more than one channel is saturated are also discussed. Both simulations and examples with real images are presented to show that the algorithm is effective. PMID:15603065
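
    The core of the estimator can be sketched with scipy: condition the Gaussian prior of the saturated channel on the unsaturated channels, then take the mean of that conditional distribution truncated below at the saturation level. The prior parameters and pixel values below are made up for illustration, and this sketch handles only the single-saturated-channel case discussed first in the paper.

```python
import numpy as np
from scipy.stats import norm

def estimate_saturated(obs, sat_idx, sat_level, mu, cov):
    """Expected value of one saturated channel given the other channels.

    obs      : length-3 RGB response with obs[sat_idx] clipped at sat_level
    mu, cov  : multivariate-normal prior (e.g. fitted to unsaturated pixels)
    """
    idx = [i for i in range(len(obs)) if i != sat_idx]
    s_oo = cov[np.ix_(idx, idx)]
    s_so = cov[sat_idx, idx]
    # Conditional Gaussian of the saturated channel given the others.
    m_cond = mu[sat_idx] + s_so @ np.linalg.solve(s_oo, obs[idx] - mu[idx])
    v_cond = cov[sat_idx, sat_idx] - s_so @ np.linalg.solve(s_oo, s_so)
    sd = np.sqrt(v_cond)
    # Mean of that Gaussian truncated below at the saturation level.
    a = (sat_level - m_cond) / sd
    return m_cond + sd * norm.pdf(a) / norm.sf(a)

# Toy usage: strongly correlated RGB prior, green channel clipped at 255.
mu = np.array([120.0, 130.0, 110.0])
cov = np.array([[900.0, 800.0, 700.0],
                [800.0, 900.0, 750.0],
                [700.0, 750.0, 900.0]])
pixel = np.array([240.0, 255.0, 225.0])
print(estimate_saturated(pixel, sat_idx=1, sat_level=255.0, mu=mu, cov=cov))
```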

  14. Characterization study of an intensified complementary metal-oxide-semiconductor active pixel sensor.

    PubMed

    Griffiths, J A; Chen, D; Turchetta, R; Royle, G J

    2011-03-01

    An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast-light decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10³ to 1.6 at a gain of 3.93 × 10¹. The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10³, dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.

  15. Characterization study of an intensified complementary metal-oxide-semiconductor active pixel sensor

    NASA Astrophysics Data System (ADS)

    Griffiths, J. A.; Chen, D.; Turchetta, R.; Royle, G. J.

    2011-03-01

    An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast-light decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10³ to 1.6 at a gain of 3.93 × 10¹. The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10³, dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.

  16. Range imaging pulsed laser sensor with two-dimensional scanning of transmitted beam and scanless receiver using high-aspect avalanche photodiode array for eye-safe wavelength

    NASA Astrophysics Data System (ADS)

    Tsuji, Hidenobu; Imaki, Masaharu; Kotake, Nobuki; Hirai, Akihito; Nakaji, Masaharu; Kameyama, Shumpei

    2017-03-01

    We demonstrate a range imaging pulsed laser sensor with two-dimensional scanning of a transmitted beam and a scanless receiver using a high-aspect avalanche photodiode (APD) array for the eye-safe wavelength. The system achieves a high frame rate and long-range imaging with a relatively simple sensor configuration. We developed a high-aspect APD array for the wavelength of 1.5 μm, a receiver integrated circuit, and a range and intensity detector. By combining these devices, we realized 160×120 pixels range imaging with a frame rate of 8 Hz at a distance of about 50 m.

  17. A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.

    PubMed

    Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing

    2016-09-23

    In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the rate of imaging. In light of the slight variations in quantization codes among different pixel exposures towards the same object, the pixel array is divided into two groups: one is for coarse quantization of the high bits only, and the other is for fine quantization of the low bits. Then, the complete quantization codes are composed of the results from both the coarse and fine quantizations. The equivalent operation comparably reduces the total required bit number of the quantization. In the 0.18 µm CMOS process, two versions of 16-stage digital-domain CMOS TDI image sensor chains based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumption of slices of the two versions is 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively. Meanwhile, the linearities of the two versions are 99.74% and 99.99%, respectively.
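
    The recombination of the two groups' outputs can be pictured as below; the 5-bit high/low split and the names are illustrative assumptions rather than the sensor's actual bit allocation.

```python
LOW_BITS = 5   # assumed split of the 10-bit code into high and low parts

def combine(coarse_code: int, fine_code: int) -> int:
    """Compose one 10-bit sample from the coarse group's high bits and the
    fine group's low bits."""
    high = coarse_code & ~((1 << LOW_BITS) - 1)   # keep only the high bits
    low = fine_code & ((1 << LOW_BITS) - 1)       # keep only the low bits
    return high | low

print(format(combine(0b1011000000, 0b10110), "010b"))   # -> 1011010110
```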

  18. Comparison of Huanjing and Landsat satellite remote sensing of the spatial heterogeneity of Qinghai-Tibet alpine grassland

    NASA Astrophysics Data System (ADS)

    Wang, Junbang; Sun, Wenyi

    2014-11-01

    Remote sensing is widely applied in the study of terrestrial primary production and the global carbon cycle. Research on the spatial heterogeneity captured by images from different sensors and resolutions would improve the application of remote sensing. In this study, two sites on alpine meadow grassland in Qinghai, China, which have distinct fractal vegetation cover, were used to test and analyze differences between the Normalized Difference Vegetation Index (NDVI) and the enhanced vegetation index (EVI) derived from the Huanjing (HJ) and Landsat Thematic Mapper (TM) sensors. The results showed that: 1) NDVI values estimated from HJ were smaller than the corresponding values from TM at the two sites, whereas EVI values were almost the same for the two sensors. 2) The overall variance represented by HJ data was consistently about half of that of Landsat TM, although the nominal pixel size is approximately 30 m for both sensors. The overall variance from EVI is greater than that from NDVI. The difference in the range between the two sensors is about 6 pixels at 30 m resolution, and the difference in the range between the two vegetation indices is about 1 pixel. 3) The sill decreased when pixel size increased from 30 m to 1 km; it decreased very quickly when pixel size was changed from 30 m or 90 m to 250 m, but slowly when changed from 250 m to 500 m. HJ can capture this spatial heterogeneity to some extent, and this study provides foundations for the use of the sensor for validation of net primary productivity estimates obtained from ecosystem process models.

  19. High-End CMOS Active Pixel Sensors For Space-Borne Imaging Instruments

    DTIC Science & Technology

    2005-07-13

    Approved for public release, distribution unlimited. See also ADM001791, Potentially Disruptive ... Technologies and Their Impact in Space Programs, held in Marseille, France on 4-6 July 2005. The original document contains color images.

  20. Study on polarized optical flow algorithm for imaging bionic polarization navigation micro sensor

    NASA Astrophysics Data System (ADS)

    Guan, Le; Liu, Sheng; Li, Shi-qi; Lin, Wei; Zhai, Li-yuan; Chu, Jin-kui

    2018-05-01

    At present, both point-source and imaging polarization navigation devices can only output angle information, which means that the velocity information of the carrier cannot be extracted directly from the polarization field pattern. Optical flow is an image-based method for calculating the velocity of pixel movement in an image. However, for ordinary optical flow, both the difference in pixel values and the calculation accuracy are reduced in weak light. Polarization imaging technology can improve both the detection accuracy and the recognition probability of the target because it acquires extra multi-dimensional polarization information about target radiation or reflection. In this paper, combining the polarization imaging technique with the traditional optical flow algorithm, a polarized optical flow algorithm is proposed, and it is verified that the polarized optical flow algorithm adapts well to weak light and can extend the application range of polarization navigation sensors. This research lays the foundation for day-and-night, all-weather polarization navigation applications in the future.

  1. The performance analysis of three-dimensional track-before-detect algorithm based on Fisher-Tippett-Gnedenko theorem

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan; Song, Sungchan

    2016-09-01

    Tracking of dim moving targets in infrared image sequences in the presence of high clutter and noise has recently been under intensive investigation. The track-before-detect (TBD) algorithm, which processes the image sequence over a number of frames before deciding on the target track and existence, is known to be especially attractive in very low SNR environments (⩽ 3 dB). In this paper, we briefly present a three-dimensional (3-D) TBD with dynamic programming (TBD-DP) algorithm using multiple IR image sensors. Since the traditional two-dimensional TBD algorithm cannot track and detect targets along the viewing direction, we use 3-D TBD with multiple sensors and also rigorously analyze the detection performance (false alarm and detection probabilities) based on the Fisher-Tippett-Gnedenko theorem. The 3-D TBD-DP algorithm, which does not require a separate image registration step, uses the pixel intensity values jointly read off from multiple image frames to compute the merit function required in the DP process. Therefore, we also establish the relationship between the pixel coordinates of each image frame and the reference coordinates.
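
    The merit-function accumulation at the heart of DP-based TBD can be illustrated in two dimensions. The sketch below is a simplified single-sensor analogue (each pixel's merit is its intensity plus the best reachable merit from the previous frame), not the paper's 3-D multi-sensor formulation, and the toy numbers are arbitrary.

```python
import numpy as np

def tbd_dp(frames, max_shift=1):
    """Dynamic-programming track-before-detect on a 2-D image sequence:
    each pixel's merit is its intensity plus the best merit reachable from
    a (2*max_shift+1)^2 neighbourhood in the previous frame."""
    merit = frames[0].astype(float)
    for frame in frames[1:]:
        best_prev = np.full(merit.shape, -np.inf)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(merit, dy, axis=0), dx, axis=1)
                best_prev = np.maximum(best_prev, shifted)
        merit = frame + best_prev
    return merit               # threshold this map to declare detections

# Toy usage: a dim target drifting one pixel per frame through unit noise.
rng = np.random.default_rng(3)
frames = rng.normal(0.0, 1.0, (8, 32, 32))
for t in range(8):
    frames[t, 10 + t, 12] += 3.0           # per-frame SNR of about 3
score = tbd_dp(frames)
print(np.unravel_index(np.argmax(score), score.shape))  # usually near (17, 12)
```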

  2. Astronomical Polarimetry with the RIT Polarization Imaging Camera

    NASA Astrophysics Data System (ADS)

    Vorobiev, Dmitry V.; Ninkov, Zoran; Brock, Neal

    2018-06-01

    In the last decade, imaging polarimeters based on micropolarizer arrays have been developed for use in terrestrial remote sensing and metrology applications. Micropolarizer-based sensors are dramatically smaller and more mechanically robust than other polarimeters with similar spectral response and snapshot capability. To determine the suitability of these new polarimeters for astronomical applications, we developed the RIT Polarization Imaging Camera to investigate the performance of these devices, with special attention to the low signal-to-noise regime. We characterized the device performance in the lab by determining the relative throughput, efficiency, and orientation of every pixel, as a function of wavelength. Using the resulting pixel response model, we developed demodulation procedures for aperture photometry and imaging polarimetry observing modes. We found that, using the current calibration, RITPIC is capable of detecting polarization signals as small as ∼0.3%. The relative ease of data collection, calibration, and analysis provided by these sensors suggests that they may become an important tool for a number of astronomical targets.
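
    As background, the basic snapshot demodulation of a micropolarizer mosaic can be written in a few lines. The 2x2 layout of [[0°, 45°], [90°, 135°]] assumed here is only one common convention, and a real reduction would use the per-pixel throughput, efficiency and orientation calibration described above rather than the ideal analyzers assumed in this sketch.

```python
import numpy as np

def demodulate_superpixels(raw):
    """Per-2x2-super-pixel Stokes parameters for a micropolarizer array laid
    out as [[0 deg, 45 deg], [90 deg, 135 deg]] (an assumed layout)."""
    i0, i45 = raw[0::2, 0::2], raw[0::2, 1::2]
    i90, i135 = raw[1::2, 0::2], raw[1::2, 1::2]
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)            # radians, in (-pi/2, pi/2]
    return s0, s1, s2, dolp, aop

# Toy usage: synthesize a mosaic from a uniform 30%-polarized scene at 20 deg.
theta, dolp_true, s0_true = np.deg2rad(20.0), 0.3, 1.0
angles = np.deg2rad(np.array([[0.0, 45.0], [90.0, 135.0]]))
tile = 0.5 * s0_true * (1 + dolp_true * np.cos(2 * (angles - theta)))
raw = np.tile(tile, (8, 8))                   # a 16x16 synthetic mosaic
_, _, _, dolp_est, aop_est = demodulate_superpixels(raw)
print(dolp_est[0, 0], np.rad2deg(aop_est[0, 0]))   # ~0.3 and ~20 deg
```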

  3. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  4. UTOFIA: an underwater time-of-flight image acquisition system

    NASA Astrophysics Data System (ADS)

    Driewer, Adrian; Abrosimov, Igor; Alexander, Jonathan; Benger, Marc; O'Farrell, Marion; Haugholt, Karl Henrik; Softley, Chris; Thielemann, Jens T.; Thorstensen, Jostein; Yates, Chris

    2017-10-01

    In this article, the development of a newly designed Time-of-Flight (ToF) image sensor for underwater applications is described. The sensor is developed as part of the project UTOFIA (underwater time-of-flight image acquisition), funded by the EU within the Horizon 2020 framework. This project aims to develop a camera based on range gating that extends the visible range by a factor of 2 to 3 compared to conventional cameras and delivers real-time range information by means of a 3D video stream. The principle of underwater range gating as well as the concept of the image sensor are presented. Based on measurements on a test image sensor, a pixel structure that best suits the requirements has been selected. In an extensive underwater characterization, the capability of distance measurement in turbid environments is demonstrated.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, Joel T.; Becker, Julian; Shanks, Katherine S.

    There is a compelling need for a high frame rate imaging detector with a wide dynamic range, from single x-rays/pixel/pulse to >10⁶ x-rays/pixel/pulse, that is capable of operating at both x-ray free electron laser (XFEL) and 3rd generation sources with sustained fluxes of >10¹¹ x-rays/pixel/s [1, 2, 3]. We propose to meet these requirements with the High Dynamic Range Pixel Array Detector (HDR-PAD) by (a) increasing the speed of charge removal strategies [4], (b) increasing integrator range by implementing adaptive gain [5], and (c) exploiting the extended charge collection times of electron-hole pair plasma clouds that form when a sufficiently large number of x-rays are absorbed in a detector sensor in a short period of time [6]. We have developed a measurement platform similar to the one used in [6] to study the effects of high electron-hole densities in silicon sensors using optical lasers to emulate the conditions found at XFELs. Characterizations of the employed tunable wavelength laser with picosecond pulse duration have shown Gaussian focal spot sizes of 6 ± 1 µm rms over the relevant spectrum and a 2 to 3 orders of magnitude increase in available intensity compared to previous measurements presented in [6]. Results from measurements on a typical pixelated silicon diode intended for use with the HDR-PAD (150 µm pixel size, 500 µm thick sensor) are presented.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deptuch, Grzegorz W.; Gabriella, Carini; Enquist, Paul

    The vertically integrated photon imaging chip (VIPIC1) pixel detector is a stack consisting of a 500-μm-thick silicon sensor, a two-tier 34-μm-thick integrated circuit, and a host printed circuit board (PCB). The integrated circuit tiers were bonded using the direct bonding technology with copper, and each tier features 1-μm-diameter through-silicon vias that were used for connections to the sensor on one side, and to the host PCB on the other side. The 80-μm-pixel-pitch sensor was bonded to the integrated circuit using the direct bonding technology with nickel. The stack was mounted on the board using Sn–Pb balls placed on a 320-μm pitch, yielding an entirely wire-bond-less structure. The analog front-end features a pulse response peaking at below 250 ns, and the power consumption per pixel is 25 μW. We successfully completed the 3-D integration and report it here. Additionally, all pixels in the matrix of 64 × 64 pixels were responding on well-bonded devices. Correct operation of the sparsified readout, allowing a single 153-ns bunch timing resolution, was confirmed in the tests on a synchrotron beam of 10-keV X-rays. An equivalent noise charge of 36.2 e- rms and a conversion gain of 69.5 μV/e- with 2.6 e- rms and 2.7 μV/e- rms pixel-to-pixel variations, respectively, were measured.

  7. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    NASA Astrophysics Data System (ADS)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.

  8. CMOS VLSI Active-Pixel Sensor for Tracking

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Sun, Chao; Yang, Guang; Heynssens, Julie

    2004-01-01

    An architecture for a proposed active-pixel sensor (APS) and a design to implement the architecture in a complementary metal oxide semiconductor (CMOS) very-large-scale integrated (VLSI) circuit provide for some advanced features that are expected to be especially desirable for tracking pointlike features of stars. The architecture would also make this APS suitable for robotic-vision and general pointing and tracking applications. CMOS imagers in general are well suited for pointing and tracking because they can be configured for random access to selected pixels and to provide readout from windows of interest within their fields of view. However, until now, the architectures of CMOS imagers have not supported multiwindow operation or low-noise data collection. Moreover, smearing and motion artifacts in collected images have made prior CMOS imagers unsuitable for tracking applications. The proposed CMOS imager (see figure) would include an array of 1,024 by 1,024 pixels containing high-performance photodiode-based APS circuitry. The pixel pitch would be 9 μm. The operations of the pixel circuits would be sequenced and otherwise controlled by an on-chip timing and control block, which would enable the collection of image data, during a single frame period, from either the full frame (that is, all 1,024 x 1,024 pixels) or from within as many as 8 different arbitrarily placed windows as large as 8 by 8 pixels each. A typical prior CMOS APS operates in a row-at-a-time ("rolling-shutter") readout mode, which gives rise to exposure skew. In contrast, the proposed APS would operate in a sample-first/read-later mode, suppressing rolling-shutter effects. In this mode, the analog readout signals from the pixels corresponding to the windows of interest (which, in the star-tracking application, would presumably contain guide stars) would be sampled rapidly by routing them through a programmable diagonal switch array to an on-chip parallel analog memory array. The diagonal-switch and memory addresses would be generated by the on-chip controller. The memory array would be large enough to hold differential signals acquired from all 8 windows during a frame period. Following the rapid sampling from all the windows, the contents of the memory array would be read out sequentially by use of a capacitive transimpedance amplifier (CTIA) at a maximum data rate of 10 MHz. This data rate is compatible with an update rate of almost 10 Hz, even in full-frame operation.

  9. Comparison of fractal dimensions based on segmented NDVI fields obtained from different remote sensors.

    NASA Astrophysics Data System (ADS)

    Alonso, C.; Benito, R. M.; Tarquis, A. M.

    2012-04-01

    Satellite image data have become an important source of information for monitoring vegetation and mapping land cover at several scales. Besides this, the distribution and phenology of vegetation are largely associated with climate, terrain characteristics and human activity. Various vegetation indices have been developed for qualitative and quantitative assessment of vegetation using remote spectral measurements. In particular, sensors with spectral bands in the red (RED) and near-infrared (NIR) lend themselves well to vegetation monitoring and, based on them, the Normalized Difference Vegetation Index (NDVI) [(NIR - RED) / (NIR + RED)] has been widely used. Given that the characteristics of spectral bands in RED and NIR vary distinctly from sensor to sensor, NDVI values based on data from different instruments will not be directly comparable. The spatial resolution also varies significantly between sensors, as well as within a given scene in the case of wide-angle and oblique sensors. As a result, NDVI values will vary according to combinations of the heterogeneity and scale of terrestrial surfaces and pixel footprint sizes. Therefore, the question arises as to the impact of differences in spectral and spatial resolutions on vegetation indices like the NDVI. The aim of this study is to establish a comparison between two different sensors in their NDVI values at different spatial resolutions. Scaling analysis and modeling techniques are increasingly understood to be the result of nonlinear dynamic mechanisms repeating scale after scale from large to small scales, leading to non-classical resolution dependencies. In the remote sensing framework, the main characteristic of sensor images is the high local variability in their values. This variability is a consequence of the increase in spatial and radiometric resolution, which implies an increase in complexity that is necessary to characterize. Fractal and multifractal techniques have been proven useful for extracting such complexities from remote sensing images and are applied in this study to examine the scaling behavior of each sensor in terms of generalized fractal dimensions. The studied area is located in the provinces of Caceres and Salamanca (east of the Iberian Peninsula) with an extension of 32 x 32 km2. The altitude in the area varies from 1,560 to 320 m, comprising natural vegetation in the mountain area (forest and bushes) and agricultural crops in the valleys. Scaling analyses were applied to the normalized difference vegetation index (NDVI) derived from Landsat-5 and MODIS TERRA over the same region, acquired one day apart on 13 and 12 July 2003, respectively. From these images the area of interest was selected, obtaining 1024 x 1024 pixels for the Landsat image and 128 x 128 pixels for the MODIS image. This implies that the resolution for MODIS is 250 x 250 m and for Landsat is 30 x 30 m. From the reflectance data obtained from the NIR and RED bands, NDVI was calculated for each image, focusing this study on the 0.2 to 0.5 range of values. Once both NDVI fields were obtained, several fractal dimensions were estimated in each one, segmenting the values into 0.20-0.25, 0.25-0.30 and so on up to 0.45-0.50. In all the scaling analyses the scale length was expressed in meters, and not in pixels, to make the comparison between both sensors possible. Results are discussed. Acknowledgements: This work has been supported by the Spanish MEC under Projects No. AGL2010-21501/AGR, MTM2009-14621 and i-MATH No. CSD2006-00032.
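
    The two basic quantities involved, the NDVI itself and a fractal dimension of one segmented NDVI band, can be sketched as follows. The box-counting estimator is only a generic stand-in for the generalized-dimension analysis of the study, the toy reflectances are random, and the box sizes below are in pixels, so a conversion to metres (as done in the study) is needed before comparing sensors of different resolution.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / np.maximum(nir + red, 1e-12)

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Box-counting fractal dimension of a binary mask, e.g. the pixels whose
    NDVI falls in one 0.05-wide band."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]
        # number of s x s boxes containing at least one "on" pixel
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy usage with random reflectances standing in for a 128x128 scene.
rng = np.random.default_rng(4)
red, nir = rng.uniform(0.05, 0.3, (2, 128, 128))
v = ndvi(nir, red)
band = (v >= 0.20) & (v < 0.25)          # one of the segmented NDVI ranges
print(box_counting_dimension(band))
```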

  10. High Resolution Airborne Digital Imagery for Precision Agriculture

    NASA Technical Reports Server (NTRS)

    Herwitz, Stanley R.

    1998-01-01

    The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely-piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000 acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management. Few studies have examined the usefulness of high resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 x 10^6 pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).

  11. Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass.

    PubMed

    Han, Guoliang; Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang

    2017-11-14

    Animals, such as Savannah sparrows and North American monarch butterflies, are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by the excellent navigation ability of animals, this paper proposes a novel image-based polarized light compass, which has the advantages of small size and light weight. Firstly, the polarized light compass, which is composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method of a skylight polarization pattern and the orientation method based on a single scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of CCD pixels and the installation error of the pixelated polarizer, is established. A calibration method based on iterative least squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases with the decrease of the solar elevation angle, and the standard deviation of the orientation error is 0.15° at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation.
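
    The orientation step builds on per-pixel polarization measurements. As a hedged illustration (not the paper's calibrated model, which also corrects the CCD response and polarizer installation errors), the sketch below shows the textbook Stokes-parameter estimate for a pixelated polarizer with 0/45/90/135 degree micro-polarizers.

      # Minimal sketch: angle and degree of linear polarization from one
      # 2x2 superpixel of a 0/45/90/135 degree pixelated polarizer array.
      import numpy as np

      def stokes_from_superpixel(i0, i45, i90, i135):
          s0 = 0.5 * (i0 + i45 + i90 + i135)       # total intensity
          s1 = i0 - i90                            # 0/90 linear component
          s2 = i45 - i135                          # 45/135 linear component
          aop = 0.5 * np.arctan2(s2, s1)           # angle of polarization [rad]
          dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
          return aop, dolp

      aop, dolp = stokes_from_superpixel(120.0, 100.0, 80.0, 100.0)
      print(np.degrees(aop), dolp)                 # -> 0.0 deg, 0.2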

  12. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the current demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera body to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the utility of the camera.

  13. Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass

    PubMed Central

    Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang

    2017-01-01

    Animals, such as Savannah sparrows and North American monarch butterflies, are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by the excellent navigation ability of animals, this paper proposes a novel image-based polarized light compass, which has the advantages of small size and light weight. Firstly, the polarized light compass, which is composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method of a skylight polarization pattern and the orientation method based on a single scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of CCD pixels and the installation error of the pixelated polarizer, is established. A calibration method based on iterative least squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases with the decrease of the solar elevation angle, and the standard deviation of the orientation error is 0.15° at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation. PMID:29135927

  14. Readout of the upgraded ALICE-ITS

    NASA Astrophysics Data System (ADS)

    Szczepankiewicz, A.; ALICE Collaboration

    2016-07-01

    The ALICE experiment will undergo a major upgrade during the second long shutdown of the CERN LHC. As part of this program, the present Inner Tracking System (ITS), which employs different layers of hybrid pixels, silicon drift and strip detectors, will be replaced by a completely new tracker composed of seven layers of monolithic active pixel sensors. The upgraded ITS will have more than twelve billion pixels in total, producing 300 Gbit/s of data when tracking 50 kHz Pb-Pb events. Two families of pixel chips realized with the TowerJazz CMOS imaging process have been developed as candidate sensors: the ALPIDE, which uses a proprietary readout and sparsification mechanism, and the MISTRAL-O, based on a proven rolling-shutter architecture. Both chips can operate in continuous mode, with the ALPIDE also supporting triggered operation. As the communication IP blocks are shared between the two chip families, it has been possible to develop common readout electronics. All the sensor components (analog stages, state machines, buffers, FIFOs, etc.) have been modelled in a system-level simulation, which has been extensively used to optimize both the sensor and the whole readout chain design in an iterative process. This contribution covers the progress of the R&D efforts and the overall expected performance of the ALICE-ITS readout system.

  15. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, S.; Cole, D. M.; Hancock, B. R.; Smith, R. M.

    2008-01-01

    Electronic coupling effects such as Inter-Pixel Capacitance (IPC) affect the quantitative interpretation of image data from CMOS, hybrid visible and infrared imagers alike. Existing methods of characterizing IPC do not provide a map of the spatial variation of IPC over all pixels. We demonstrate a deterministic method that provides a direct quantitative map of the crosstalk across an imager. The approach requires only the ability to reset single pixels to an arbitrary voltage, different from the rest of the imager. No illumination source is required. Mapping IPC independently for each pixel is also made practical by the greater S/N ratio achievable for an electrical stimulus than for an optical stimulus, which is subject to both Poisson statistics and diffusion effects of photo-generated charge. The data we present illustrates a more complex picture of IPC in Teledyne HgCdTe and HyViSi focal plane arrays than is presently understood, including the presence of a newly discovered, long range IPC in the HyViSi FPA that extends tens of pixels in distance, likely stemming from extended field effects in the fully depleted substrate. The sensitivity of the measurement approach has been shown to be good enough to distinguish spatial structure in IPC of the order of 0.1%.
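
    The single-pixel reset idea lends itself to a simple numerical description. The sketch below is a hypothetical illustration (read_frame and reset_pixel stand in for a real detector interface, which the abstract does not specify): offset one pixel by dV, difference against a baseline frame, and normalize the neighbourhood response to the central pixel.

      # Hedged sketch of an inter-pixel capacitance (IPC) map from a
      # single-pixel electrical stimulus; detector I/O functions are assumed.
      import numpy as np

      def ipc_kernel(read_frame, reset_pixel, row, col, dv, half=2):
          """Return a (2*half+1) x (2*half+1) coupling map around the stimulated pixel."""
          baseline = read_frame()              # frame at the uniform reset level
          reset_pixel(row, col, dv)            # offset only the target pixel
          stimulated = read_frame()
          diff = stimulated - baseline
          win = diff[row - half:row + half + 1, col - half:col + half + 1]
          return win / diff[row, col]          # fraction of the signal in each neighbour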

  16. Process techniques of charge transfer time reduction for high speed CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Zhongxiang, Cao; Quanliang, Li; Ye, Han; Qi, Qin; Peng, Feng; Liyuan, Liu; Nanjian, Wu

    2014-11-01

    This paper proposes pixel process techniques to reduce the charge transfer time in high speed CMOS image sensors. These techniques increase the lateral conductivity of the photo-generated carriers in a pinned photodiode (PPD) and the voltage difference between the PPD and the floating diffusion (FD) node by controlling and optimizing the N doping concentration in the PPD and the threshold voltage of the reset transistor, respectively. The techniques effectively shorten the charge transfer time from the PPD to the FD node. The proposed process techniques do not need extra masks and do not degrade the fill factor. A sub-array of 32 × 64 pixels was designed and implemented in the 0.18 μm CIS process with five implantation conditions splitting the N region in the PPD. The simulation and measured results demonstrate that the charge transfer time can be decreased by using the proposed techniques. Comparing the charge transfer time of the pixel under the different implantation conditions of the N region, a charge transfer time of 0.32 μs is achieved and image lag is reduced by 31% by using the proposed process techniques.

  17. Radiation hard analog circuits for ALICE ITS upgrade

    NASA Astrophysics Data System (ADS)

    Gajanana, D.; Gromov, V.; Kuijer, P.; Kugathasan, T.; Snoeys, W.

    2016-03-01

    The ALICE experiment is planning to upgrade the ITS (Inner Tracking System) [1] detector during the LS2 shutdown. The present ITS will be fully replaced with a new one entirely based on CMOS monolithic pixel sensor chips fabricated in the TowerJazz CMOS 0.18 μm imaging technology. The large (3 cm × 1.5 cm = 4.5 cm2) ALPIDE (ALICE PIxel DEtector) sensor chip contains about 500 Kpixels, and will be used to cover a 10 m2 area with 12.5 Gpixels distributed over seven cylindrical layers. The ALPOSE chip was designed as a test chip for the various building blocks foreseen in the ALPIDE [2] pixel chip from CERN. The building blocks include a bandgap reference and a temperature sensor in four different flavours, and LDOs for powering schemes. One flavour of the bandgap and temperature sensor will be included in the ALPIDE chip. Power consumption numbers have dropped very significantly, making the use of LDOs less interesting, but in this paper all blocks are presented, including measurement results before and after irradiation with neutrons to characterize robustness against displacement damage.

  18. Backside illuminated CMOS-TDI line scan sensor for space applications

    NASA Astrophysics Data System (ADS)

    Cohen, Omer; Ofer, Oren; Abramovich, Gil; Ben-Ari, Nimrod; Gershon, Gal; Brumer, Maya; Shay, Adi; Shamay, Yaron

    2018-05-01

    A multi-spectral, backside illuminated, Time Delayed Integration, radiation hardened line scan sensor utilizing CMOS technology was designed for continuous scanning Low Earth Orbit small satellite applications. The sensor comprises a single silicon chip with 4 independent arrays of pixels, where each array is arranged in 2600 columns with 64 TDI levels. A multispectral optical filter, whose spectral responses per array are adjustable per system requirement, is assembled at the package level. A custom 4T pixel design provides the required readout speed, low noise, very low dark current, and high conversion gain. A 2-phase internally controlled exposure mechanism improves the sensor's dynamic MTF. The sensor's high level of integration includes on-chip 12-bit-per-pixel analog-to-digital converters, an on-chip controller, and CMOS compatible voltage levels. Thus, the power consumption and the weight of the supporting electronics are reduced, and a simple electrical interface is provided. An adjustable gain provides a Full Well Capacity ranging from 150,000 electrons up to 500,000 electrons per column and an overall readout noise per column of less than 120 electrons. The imager supports line rates ranging from 50 to 10,000 lines/sec, with power consumption of less than 0.5 W per array. Thus, the sensor is characterized by a high pixel rate, a high dynamic range and very low power consumption. To meet the latch-up-free requirement, rad-hard architecture and design rules were utilized. In this paper, recent electrical and electro-optical measurements of the sensor's Flight Models are presented for the first time.

  19. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving image quality in low-light shooting is one of the users' key needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.

  20. Optical design of microlens array for CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Rongzhu; Lai, Liping

    2016-10-01

    Optical crosstalk between pixel units can degrade the image quality of a CMOS image sensor. At the same time, the fill factor (duty ratio) of CMOS is low because of its pixel structure. These two factors cause the low detection sensitivity of CMOS. In order to reduce the optical crosstalk and improve the effective fill factor of a CMOS image sensor, a microlens array has been designed and integrated with the CMOS. The initial parameters of the microlens array were calculated according to the structure of the CMOS. The parameters were then optimized using ZEMAX, and microlens arrays with different substrate thicknesses were compared. The results show that, in order to obtain the best imaging quality, i.e. when the effect of optical crosstalk on the CMOS is minimal, the best distance between the microlens array and the CMOS is about 19.3 μm. When incident light passes successively through the microlens array and this gap, the minimum spot obtained in the active area is around 0.347 μm. In addition, when the incident angle of the light is between 0° and 22°, the microlens array has an obvious inhibitory effect on the optical crosstalk, and the anti-crosstalk distance between the microlens array and the CMOS ranges from 0 μm to 162 μm.

  1. Active pixel sensors: the sensor of choice for future space applications?

    NASA Astrophysics Data System (ADS)

    Leijtens, Johan; Theuwissen, Albert; Rao, Padmakumar R.; Wang, Xinyang; Xie, Ning

    2007-10-01

    It is generally known that active pixel sensors (APS) have a number of advantages over CCD detectors when it comes to cost for mass production, power consumption and ease of integration. Nevertheless, most space applications still use CCD detectors because they tend to give better performance and have a successful heritage. In this respect, a change may be at hand with the advent of deep sub-micron processed APS imagers (< 0.25-micron feature size). Measurements performed on test structures at the University of Delft have shown that the imagers are very radiation tolerant even if made in a standard process without the use of special design rules. Furthermore, it was shown that the 1/f noise associated with deep sub-micron imagers is reduced compared to previous generations of APS imagers due to the improved quality of the gate oxides. Considering that end-of-life performance will have to be guaranteed, that a limited budget for adding shielding metal will be available for most applications, and that lower power operation is always seen as a positive characteristic in space applications, deep sub-micron APS imagers seem to have a number of advantages over CCDs that will probably cause them to replace CCDs in those applications where radiation tolerance and low power operation are important.

  2. Proof of principle study of the use of a CMOS active pixel sensor for proton radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seco, Joao; Depauw, Nicolas

    2011-02-15

    Purpose: Proof of principle study of the use of a CMOS active pixel sensor (APS) in producing proton radiographic images using the proton beam at the Massachusetts General Hospital (MGH). Methods: A CMOS APS, previously tested for use in x-ray radiation therapy applications, was used for proton beam radiographic imaging at the MGH. Two different setups were used as a proof of principle that CMOS can be used as a proton imaging device: (i) a pen with two metal screws to assess the spatial resolution of the CMOS and (ii) a phantom with lung tissue, bone tissue, and water to assess the tissue contrast of the CMOS. The sensor was then traversed by a double scattered monoenergetic proton beam at 117 MeV, and the energy deposition inside the detector was recorded to assess its energy response. Conventional x-ray images with a similar setup at voltages of 70 kVp and proton images using commercial Gafchromic EBT 2 and Kodak X-Omat V films were also taken for comparison purposes. Results: Images were successfully acquired and compared to x-ray kVp and proton EBT2/X-Omat film images. The spatial resolution of the CMOS detector image is subjectively comparable to the EBT2 and Kodak X-Omat V film images obtained at the same object-detector distance. X-rays have apparently higher spatial resolution than the CMOS. However, further studies with different commercial films using proton beam irradiation demonstrate that the distance of the detector to the object is important to the amount of proton scatter contributing to the proton image. Proton images obtained with films at different distances from the source indicate that proton scatter significantly affects the CMOS image quality. Conclusion: Proton radiographic images were successfully acquired at MGH using a CMOS active pixel sensor detector. The CMOS demonstrated spatial resolution subjectively comparable to films at the same object-detector distance. Further work will be done in order to establish the spatial and energy resolution of the CMOS detector for protons. The development and use of CMOS in proton radiography could allow in vivo proton range checks, patient setup QA, and real-time tumor tracking.

  3. A back-illuminated megapixel CMOS image sensor

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Cunningham, Thomas; Nikzad, Shouleh; Hoenk, Michael; Jones, Todd; Wrigley, Chris; Hancock, Bruce

    2005-01-01

    In this paper, we present the test and characterization results for a back-illuminated megapixel CMOS imager. The imager pixel consists of a standard junction photodiode coupled to a three-transistor-per-pixel switched source-follower readout [1]. The imager also consists of integrated timing and control and bias generation circuits, and provides analog output. The analog column-scan circuits were implemented in such a way that the imager could be configured to run in off-chip correlated double-sampling (CDS) mode. The imager was originally designed for normal front-illuminated operation, and was fabricated in a commercially available 0.5 μm triple-metal CMOS-imager compatible process. For backside illumination, the imager was thinned by etching away the substrate in a post-fabrication processing step.

  4. A digital sedimentator for measuring erythrocyte sedimentation rate using a linear image sensor

    NASA Astrophysics Data System (ADS)

    Yoshikoshi, Akio; Sakanishi, Akio; Toyama, Yoshiharu

    2004-11-01

    A digital apparatus was fabricated to accurately determine the erythrocyte sedimentation rate (ESR) using a linear image sensor. Currently, ESR is utilized in clinical diagnosis and in the laboratory as one of the many rheological properties of blood, reflecting the settling of red blood cells (RBCs). In this work, we aimed to measure ESR automatically using a small amount of sample and without moving parts. The linear image sensor was placed behind a microhematocrit tube containing 36 μl of RBC suspension on a holder plate; the holder plate was fixed on an optical bench together with a tungsten lamp and an opal glass placed in front. RBC suspensions were prepared in autologous plasma with hematocrit H from 25% to 44%. The intensity profiles of transmitted light in 36 μl of RBC suspension were detected using the linear image sensor and sent to a personal computer every minute. ESR was observed at the settling interface between the plasma and RBC suspension in the profile in 1024 pixels (25 μm/pixel) along a microhematocrit tube of 25.6 mm total length for 1 h at a temperature of 37.0±0.1 °C. First, we determined the initial pixel position of the sample at the boundary with air. The boundary and the interface were defined by inflection points in the profile with 25 μm resolution. We obtained sedimentation curves that were determined by the RBC settling distance l(t) at the time t from the difference between pixel locations at the boundary and the interface. The sedimentation curves were well fitted to an empirical equation [Puccini et al., Biorheol. 14, 43 (1977)] from which we calculated the maximum sedimentation velocity smax at the time tmax. We reached tmax within 30 min at any H, and smax was linearly related to the settling distance l(60) at 60 min after the start of sedimentation from 30% to 44% H with the correlation coefficient r=0.993. Thus, we may estimate conventional ESR at 1 h from smax more quickly and accurately with less effort.

  5. Information-efficient spectral imaging sensor

    DOEpatents

    Sweatt, William C.; Gentry, Stephen M.; Boye, Clinton A.; Grotbeck, Carter L.; Stallard, Brian R.; Descour, Michael R.

    2003-01-01

    A programmable optical filter for use in multispectral and hyperspectral imaging. The filter splits the light collected by an optical telescope into two channels for each of the pixels in a row in a scanned image, one channel to handle the positive elements of a spectral basis filter and one for the negative elements of the spectral basis filter. Each channel for each pixel disperses its light into n spectral bins, with the light in each bin being attenuated in accordance with the value of the associated positive or negative element of the spectral basis vector. The spectral basis vector is constructed so that its positive elements emphasize the presence of a target and its negative elements emphasize the presence of the constituents of the background of the imaged scene. The attenuated light in the channels is re-imaged onto separate detectors for each pixel and then the signals from the detectors are combined to give an indication of the presence or not of the target in each pixel of the scanned scene. This system provides for a very efficient optical determination of the presence of the target, as opposed to the very data intensive data manipulations that are required in conventional hyperspectral imaging systems.

  6. Object based image analysis for the classification of the growth stages of Avocado crop, in Michoacán State, Mexico

    NASA Astrophysics Data System (ADS)

    Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.

    2014-11-01

    This paper assesses the suitability of 8-band Worldview-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification with minimum distance (MD) and maximum likelihood (MLC) classifiers and object-based classification with the Random Forest (RF) algorithm for this task. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle stage, and early stage of avocado crops, bare land, two types of natural forests, and water body. To examine the contribution of the four new spectral bands of the WV2 sensor, all the tested classifications were carried out with and without the four new spectral bands. Classification accuracy assessment results show that object-based classification with the RF algorithm obtained a higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs 83.59%; pixel-based MD: 69.37% vs 67.2%; pixel-based MLC: 64.03% vs 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
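
    For readers who want to reproduce the object-based step, the following is a minimal hedged sketch using scikit-learn's RandomForestClassifier; it assumes segments have already been produced and that each image object is described by its mean reflectance in the 8 WV2 bands (the data below are synthetic placeholders, not the study's samples).

      # Hypothetical object-based Random Forest classification sketch.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(1)
      X = rng.random((700, 8))            # one row per image object, 8 WV2 bands
      y = rng.integers(0, 7, 700)         # 7 thematic classes

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))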

  7. Electron imaging with Medipix2 hybrid pixel detector.

    PubMed

    McMullan, G; Cattermole, D M; Chen, S; Henderson, R; Llopart, X; Summerfield, C; Tlustos, L; Faruqi, A R

    2007-01-01

    The electron imaging performance of Medipix2 is described. Medipix2 is a hybrid pixel detector composed of two layers. It has a sensor layer and a layer of readout electronics, in which each 55 μm x 55 μm pixel has upper and lower energy discrimination and MHz rate counting. The sensor layer consists of a 300 μm slab of pixellated monolithic silicon and this is bonded to the readout chip. Experimental measurement of the detective quantum efficiency, DQE(0) at 120 keV shows that it can reach approximately 85% independent of electron exposure, since the detector has zero noise, and the DQE(Nyquist) can reach approximately 35% of that expected for a perfect detector (4/π²). Experimental measurement of the modulation transfer function (MTF) at Nyquist resolution for 120 keV electrons using a 60 keV lower energy threshold, yields a value that is 50% of that expected for a perfect detector (2/π). Finally, Monte Carlo simulations of electron tracks and energy deposited in adjacent pixels have been performed and used to calculate expected values for the MTF and DQE as a function of the threshold energy. The good agreement between theory and experiment allows suggestions for further improvements to be made with confidence. The present detector is already very useful for experiments that require a high DQE at very low doses.

  8. Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-01-01

    With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But because of the power limitation of the received radiation, there will always be some trade-off between spatial and spectral resolution in the image captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution spectral images with panchromatic images to identify materials at high resolution in clutter. A fusion algorithm integrating pixel-based false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than each of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.

  9. Multi-Aperture-Based Probabilistic Noise Reduction of Random Telegraph Signal Noise and Photon Shot Noise in Semi-Photon-Counting Complementary-Metal-Oxide-Semiconductor Image Sensor

    PubMed Central

    Ishida, Haruki; Kagawa, Keiichiro; Komuro, Takashi; Zhang, Bo; Seo, Min-Woong; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    A probabilistic method to remove the random telegraph signal (RTS) noise and to increase the signal level is proposed, and was verified by simulation based on measured real sensor noise. Although semi-photon-counting-level (SPCL) ultra-low noise complementary-metal-oxide-semiconductor (CMOS) image sensors (CISs) with high conversion gain pixels have emerged, they still suffer from huge RTS noise, which is inherent to the CISs. The proposed method utilizes a multi-aperture (MA) camera that is composed of multiple sets of an SPCL CIS and a moderately fast and compact imaging lens to emulate a very fast single lens. Due to the redundancy of the MA camera, the RTS noise is removed by maximum likelihood estimation, where the noise characteristics are modeled by a probability density distribution. In the proposed method, the photon shot noise is also relatively reduced because of the averaging effect, where the pixel values of all the multiple apertures are considered. An extremely low-light condition, in which the maximum number of electrons per aperture was only 2 e−, was simulated. PSNRs of a test image for simple averaging, selective averaging (our previous method), and the proposed method were 11.92 dB, 11.61 dB, and 13.14 dB, respectively. Selective averaging, which can remove RTS noise, was worse than simple averaging because it ignores the pixels affected by RTS noise, so the photon shot noise was less improved. The simulation results showed that the proposed method provided the best noise reduction performance. PMID:29587424
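
    A hedged sketch of the per-pixel maximum-likelihood idea follows: each aperture gives one noisy reading of the same signal, the read noise is modelled as a Gaussian plus an occasional RTS offset, and the signal estimate is the candidate value that maximizes the summed log-likelihood. The mixture parameters and search grid are illustrative assumptions, not the paper's noise model.

      # Illustrative maximum-likelihood fusion across apertures with an
      # assumed Gaussian + RTS-offset noise mixture (parameters are made up).
      import numpy as np

      def noise_pdf(x, s, sigma=0.3, rts_amp=2.0, rts_prob=0.1):
          g = lambda m: np.exp(-0.5 * ((x - m) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
          return (1 - rts_prob) * g(s) + 0.5 * rts_prob * (g(s + rts_amp) + g(s - rts_amp))

      def ml_estimate(readings, candidates=np.linspace(0.0, 4.0, 401)):
          readings = np.asarray(readings)
          loglik = [np.sum(np.log(noise_pdf(readings, s) + 1e-300)) for s in candidates]
          return candidates[int(np.argmax(loglik))]

      # Eight apertures observing a true level of ~2 e-; one reading hit by RTS.
      print(ml_estimate([1.8, 2.2, 2.1, 1.9, 4.1, 2.0, 2.3, 1.7]))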

  10. SEM contour based metrology for microlens process studies in CMOS image sensor technologies

    NASA Astrophysics Data System (ADS)

    Lakcher, Amine; Ostrovsky, Alain; Le-Gratiet, Bertrand; Berthier, Ludovic; Bidault, Laurent; Ducoté, Julien; Jamin-Mornet, Clémence; Mortini, Etienne; Besacier, Maxime

    2018-03-01

    From the first digital cameras, which appeared in the 70s, to the cameras of current smartphones, image sensors have undergone significant technological development over the last decades. The development of CMOS image sensor technologies in the 90s has been the main driver of recent progress. The main component of an image sensor is the pixel. A pixel contains a photodiode connected to transistors, but only the photodiode area is light sensitive. This results in a significant loss of efficiency. To solve this issue, microlenses are used to focus the incident light on the photodiode. A microlens array is made out of a transparent material and has a spherical cap shape. To obtain this spherical shape, a lithography process is performed to generate resist blocks which are then annealed above their glass transition temperature (reflow). Even though the dimensions involved are larger than in advanced IC nodes, microlenses are sensitive to process variability during lithography and reflow. Good control of the microlens dimensions is key to optimizing the process and thus the performance of the final product. The purpose of this paper is to apply SEM contour metrology [1, 2, 3, 4] to microlenses in order to develop a relevant monitoring methodology and to propose new metrics that engineers can use to evaluate their process or optimize the design of the microlens arrays.

  11. Note: A disposable x-ray camera based on mass produced complementary metal-oxide-semiconductor sensors and single-board computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoidn, Oliver R.; Seidler, Gerald T., E-mail: seidler@uw.edu

    We have integrated mass-produced commercial complementary metal-oxide-semiconductor (CMOS) image sensors and off-the-shelf single-board computers into an x-ray camera platform optimized for acquisition of x-ray spectra and radiographs at energies of 2–6 keV. The CMOS sensor and single-board computer are complemented by custom mounting and interface hardware that can be easily acquired from rapid prototyping services. For single-pixel detection events, i.e., events where the deposited energy from one photon is substantially localized in a single pixel, we establish ∼20% quantum efficiency at 2.6 keV with ∼190 eV resolution and a 100 kHz maximum detection rate. The detector platform’s useful intrinsic energy resolution, 5-μm pixel size, ease of use, and obvious potential for parallelization make it a promising candidate for many applications at synchrotron facilities, in laser-heating plasma physics studies, and in laboratory-based x-ray spectrometry.

  12. Pixel pitch and particle energy influence on the dark current distribution of neutron irradiated CMOS image sensors.

    PubMed

    Belloir, Jean-Marc; Goiffon, Vincent; Virmontois, Cédric; Raine, Mélanie; Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Molina, Romain; Magnan, Pierre; Gilard, Olivier

    2016-02-22

    The dark current produced by neutron irradiation in CMOS Image Sensors (CIS) is investigated. Several CIS with different photodiode types and pixel pitches are irradiated with various neutron energies and fluences to study the influence of each of these optical detector and irradiation parameters on the dark current distribution. An empirical model is tested on the experimental data and validated on all the irradiated optical imagers. This model is able to describe all the presented dark current distributions with no parameter variation for neutron energies of 14 MeV or higher, regardless of the optical detector and irradiation characteristics. For energies below 1 MeV, it is shown that a single parameter has to be adjusted because of the lower mean damage energy per nuclear interaction. This model and these conclusions can be transposed to any silicon based solid-state optical imagers such as CIS or Charged Coupled Devices (CCD). This work can also be used when designing an optical imager instrument, to anticipate the dark current increase or to choose a mitigation technique.

  13. Spectral X-Ray Diffraction using a 6 Megapixel Photon Counting Array Detector.

    PubMed

    Muir, Ryan D; Pogranichniy, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J

    2015-03-12

    Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to perform separation of dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.

  14. Spectral x-ray diffraction using a 6 megapixel photon counting array detector

    NASA Astrophysics Data System (ADS)

    Muir, Ryan D.; Pogranichniy, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.

    2015-03-01

    Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to perform separation of dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.

  15. The Transition-Edge-Sensor Array for the Micro-X Sounding Rocket

    NASA Technical Reports Server (NTRS)

    Eckart, M. E.; Adams, J. S.; Bailey, C. N.; Bandler, S. R.; Busch, Sarah Elizabeth; Chervenak, J. A.; Finkbeiner, F. M.; Kelley, R. L.; Kilbourne, C. A.; Porst, J. P.

    2012-01-01

    The Micro-X sounding rocket program will fly a 128-element array of transition-edge-sensor microcalorimeters to enable high-resolution X-ray imaging spectroscopy of the Puppis-A supernova remnant. To match the angular resolution of the optics while maximizing the field-of-view and retaining a high energy resolution (< 4 eV at 1 keV), we have designed the pixels using 600 x 600 sq. micron Au/Bi absorbers, which overhang 140 x 140 sq. micron Mo/Au sensors. The data-rate capabilities of the rocket telemetry system require the pulse decay to be approximately 2 ms to allow a significant portion of the data to be telemetered during flight. Here we report experimental results from the flight array, including measurements of energy resolution, uniformity, and absorber thermalization. In addition, we present studies of test devices that have a variety of absorber contact geometries, as well as a variety of membrane-perforation schemes designed to slow the pulse decay time to match the telemetry requirements. Finally, we describe the reduction in pixel-to-pixel crosstalk afforded by an angle-evaporated Cu backside heatsinking layer, which provides Cu coverage on the four sidewalls of the silicon wells beneath each pixel.

  16. An Inventory of Impact Craters on the Martian South Polar Layered Deposits

    NASA Technical Reports Server (NTRS)

    Plaut, J. J.

    2005-01-01

    The polar layered deposits (PLD) of Mars continue to be a focus of study due to the possibility that these finely layered, volatile-rich deposits hold a record of recent eras in Martian climate history. Recently, the visible sensor on 2001 Mars Odyssey's Thermal Emission Imaging System (THEMIS) has acquired 36 meter/pixel contiguous single-band visible image data sets of both the north and the south polar layered deposits, during the local spring and summer seasons. In addition, significant coverage has been obtained at the THEMIS visible sensor's full resolution of 18 meters/pixel. This paper reports on the use of these data sets to further characterize the population of impact craters on the south polar layered deposits (SPLD), and the implications of the observed population for the age and evolution of the SPLD.

  17. Phonon-mediated superconducting transition-edge sensor X-ray detectors for use in astronomy

    NASA Astrophysics Data System (ADS)

    Leman, Steven W.; Martinez-Galarce, Dennis S.; Brink, Paul L.; Cabrera, Blas; Castle, Joseph P.; Morse, Kathleen; Stern, Robert A.; Tomada, Astrid

    2004-09-01

    Superconducting Transition-Edge Sensors (TESs) are generating a great deal of interest in the areas of x-ray astrophysics and space science, particularly for their development as large-array, imaging x-ray spectrometers. We are developing a novel concept based on position-sensitive macro-pixels, with TESs placed on the backside of a silicon or germanium absorber. Each absorbed x-ray will be resolved in position (X/δX and Y/δY ~ 100) and energy (E/δE ~ 1000) via four distributed TES readouts. In the future, combining such macro-pixels with advances in multiplexing could lead to 30 by 30 arrays of close-packed macro-pixels equivalent to imaging instruments of 10 megapixels or more. We report on our progress to date and discuss its application to a plausible solar satellite mission and plans for future development.

  18. Charge Transfer Inefficiency in Pinned Photodiode CMOS image sensors: Simple Montecarlo modeling and experimental measurement based on a pulsed storage-gate method

    NASA Astrophysics Data System (ADS)

    Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart

    2016-11-01

    The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Montecarlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable to compare transfer efficiency performances for different pixel geometries.
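
    In the same spirit as the paper's model (but much simplified, with potential barriers and 2D geometry ignored and all numbers illustrative, not the paper's parameters), a Monte Carlo sketch of diffusion-limited transfer can be written as a 1D random walk of electrons that are collected once they reach the transfer-gate end of the PPD:

      # Very simplified Monte Carlo sketch of charge transfer by diffusion.
      import numpy as np

      def transfer_times(n_e=2000, length_um=5.0, d_cm2_s=36.0, dt_s=1e-11, seed=0):
          rng = np.random.default_rng(seed)
          step = np.sqrt(2 * d_cm2_s * 1e8 * dt_s)   # rms 1D step in um per time step
          x = rng.uniform(0.0, length_um, n_e)       # initial positions in the PPD
          t = np.zeros(n_e)
          alive = np.ones(n_e, dtype=bool)
          while alive.any():
              x[alive] += rng.normal(0.0, step, alive.sum())
              x[alive] = np.abs(x[alive])            # reflect at the far PPD edge
              t[alive] += dt_s
              alive[alive & (x >= length_um)] = False  # collected at the transfer-gate end
          return t

      times = transfer_times()
      print("99% of the charge transferred within", np.percentile(times, 99), "s")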

  19. Charge transfer efficiency improvement of 4T pixel for high speed CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Jin, Xiangliang; Liu, Weihui; Yang, Hongjiao; Tang, Lizhen; Yang, Jia

    2015-03-01

    A charge transfer efficiency improvement method is proposed that optimizes the electrical potential distribution along the transfer path from the PPD to the FD. In this work, we present a non-uniformly doped transfer transistor channel, with adjustments to the overlap length between the CPIA layer and the transfer gate, and to the overlap length between the SEN layer and the transfer gate. Theoretical analysis and TCAD simulation results show that the density of the residual charge is reduced from 1e11 /cm3 to 1e9 /cm3, the transfer time is reduced from 500 ns to 143 ns, and the charge transfer efficiency is about 77 e-/ns. This optimized design effectively improves the charge transfer efficiency of the 4T pixel and the performance of 4T high-speed CMOS image sensors.

  20. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 μm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 μm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
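
    The pixel-count figure quoted above can be checked with simple arithmetic, assuming the standard 9 in x 9 in (228.6 mm) aerial film frame, which the abstract does not state explicitly:

      # Back-of-envelope check of the 432-Mpixel and 11-um figures (frame size assumed).
      frame_mm = 228.6                                  # 9-inch frame side
      pixel_um = 11.0                                   # scan spot size
      pixels_per_side = frame_mm * 1000.0 / pixel_um
      print(round(pixels_per_side ** 2 / 1e6), "Mpixels")                     # ~432 Mpixels
      print(1000.0 / (2 * 39), "um per resolvable half-cycle at 39 lp/mm")    # ~12.8 um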

  1. A novel ship CFAR detection algorithm based on adaptive parameter enhancement and wake-aided detection in SAR images

    NASA Astrophysics Data System (ADS)

    Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun

    2018-03-01

    Synthetic aperture radar (SAR) is an indispensable and useful method for marine monitoring. With the increase in SAR sensors, high-resolution images can be acquired that contain more target structure information, such as spatial details. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets. The whole method is based on the APT domain value. Firstly, the image is mapped to the new transform domain by the algorithm. Secondly, false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels. Then, the enhanced image is processed by the Niblack algorithm to obtain the wake binary image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image, as a verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform does enhance the target structure and improve the contrast of the image. The algorithm performs well in ship and ship-wake detection.
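
    The APT-domain detector itself is specific to the paper, but the CFAR stage follows the standard cell-averaging pattern. As a hedged, generic illustration (not the paper's algorithm), a plain 2D cell-averaging CFAR compares each pixel against a threshold scaled from the mean of the surrounding training cells, with guard cells excluded:

      # Generic (non-APT) cell-averaging CFAR sketch; window sizes and the
      # scale factor are illustrative.
      import numpy as np

      def ca_cfar_2d(img, guard=2, train=6, scale=3.0):
          det = np.zeros_like(img, dtype=bool)
          h, w = img.shape
          r = guard + train
          for i in range(r, h - r):
              for j in range(r, w - r):
                  win = img[i - r:i + r + 1, j - r:j + r + 1].astype(float).copy()
                  # blank out the cell under test and its guard ring
                  win[train:train + 2 * guard + 1, train:train + 2 * guard + 1] = np.nan
                  det[i, j] = img[i, j] > scale * np.nanmean(win)
          return det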

  2. New SOFRADIR 10μm pixel pitch infrared products

    NASA Astrophysics Data System (ADS)

    Lefoul, X.; Pere-Laperne, N.; Augey, T.; Rubaldo, L.; Aufranc, Sébastien; Decaens, G.; Ricard, N.; Mazaleyrat, E.; Billon-Lanfrey, D.; Gravrand, Olivier; Bisotto, Sylvette

    2014-10-01

    Recent advances in the miniaturization of IR imaging technology have led to a growing market for mini thermal-imaging sensors. In that respect, Sofradir's development of smaller pixel pitches has made much more compact products available to users. When this competitive advantage is combined with smaller coolers, made possible by HOT technology, valuable reductions in the size, weight and power of the overall package are achieved. At the same time, we are moving towards a global offer based on digital interfaces that simplifies the IR system design process for our customers while freeing up more space. This paper discusses recent developments in HOT and small pixel pitch technologies as well as efforts made on a compact packaging solution developed by SOFRADIR in collaboration with CEA-LETI.

  3. Image sensor system with bio-inspired efficient coding and adaptation.

    PubMed

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.

  4. A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.

    PubMed

    Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin

    2015-09-16

    In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated based on the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in the image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in real images captured by the TDI-CIS are eliminated effectively with the proposed method.
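
    A minimal sketch of the row/column compensation idea follows (the sign convention here simply subtracts the estimated offsets, which is equivalent to the paper's add/subtract convention up to the sign of the estimate; the frame stack and shapes are assumptions):

      # Hedged sketch: estimate row/column FPN from flat-field frames, then remove it.
      import numpy as np

      def estimate_fpn(flat_frames):
          """flat_frames: (N, rows, cols) stack captured under uniform illumination."""
          mean_frame = flat_frames.mean(axis=0)
          rfpn = mean_frame.mean(axis=1) - mean_frame.mean()   # per-row offset
          cfpn = mean_frame.mean(axis=0) - mean_frame.mean()   # per-column offset
          return rfpn, cfpn

      def correct(frame, rfpn, cfpn):
          return frame - rfpn[:, None] - cfpn[None, :]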

  5. Large Format CMOS-based Detectors for Diffraction Studies

    NASA Astrophysics Data System (ADS)

    Thompson, A. C.; Nix, J. C.; Achterkirchen, T. G.; Westbrook, E. M.

    2013-03-01

    Complementary Metal Oxide Semiconductor (CMOS) devices are rapidly replacing CCD devices in many commercial and medical applications. Recent developments in CMOS fabrication have improved their radiation hardness, device linearity, readout noise and thermal noise, making them suitable for x-ray crystallography detectors. Large-format (e.g. 10 cm × 15 cm) CMOS devices with a pixel size of 100 μm × 100 μm are now becoming available that can be butted together on three sides so that very large area detectors can be made with no dead regions. Like CCD systems, our CMOS systems use a GdOS:Tb scintillator plate to convert stopping x-rays into visible light which is then transferred with a fiber-optic plate to the sensitive surface of the CMOS sensor. The amount of light per x-ray on the sensor is much higher in the CMOS system than in a CCD system because the fiber optic plate is only 3 mm thick, while on a CCD system it is highly tapered and much longer. A CMOS sensor is an active pixel matrix in which every pixel is controlled and read out independently of all other pixels. This allows these devices to be read out while the sensor is collecting charge in all the other pixels. For x-ray diffraction detectors this is a major advantage since image frames can be collected continuously at up to 20 Hz while the crystal is rotated. A complete diffraction dataset can be collected over five times faster than with CCD systems, with lower radiation exposure to the crystal. In addition, since the data are taken in fine-phi-slice mode, the 3D angular position of diffraction peaks is improved. We have developed a cooled 6-sensor CMOS detector with an active area of 28.2 × 29.5 cm with 100 μm × 100 μm pixels and a readout rate of 20 Hz. The detective quantum efficiency exceeds 60% over the range 8-12 keV. One-, two- and twelve-sensor systems are also being developed for a variety of scientific applications. Since the sensors are buttable on three sides, even larger systems could be built at reasonable cost.

  6. Bayesian model for matching the radiometric measurements of aerospace and field ocean color sensors.

    PubMed

    Salama, Mhd Suhyb; Su, Zhongbo

    2010-01-01

    A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full and reduced resolutions of MERIS data. The model derived the scale difference between synthesized satellite pixels and point measurements with R² > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in a heterogeneous region. The method is generic and applicable to different sensors.

  7. Lagrange constraint neural networks for massive pixel parallel image demixing

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.

    2002-03-01

    We have shown that remote sensing optical imaging for detailed sub-pixel decomposition is a unique application of blind source separation (BSS): the mixing is truly linear for far-away weak signals, instantaneous at the speed of light without delay, and along the line of sight without multiple paths. In earlier papers, we presented a direct application of a statistical mechanical de-mixing method called the Lagrange Constraint Neural Network (LCNN). While the BSAO algorithm (using a posteriori MaxEnt ANN and a neighborhood pixel average) is not acceptable for remote sensing, a mirror-symmetric LCNN approach is acceptable, assuming a priori MaxEnt for unknown sources averaged over the source statistics (not neighborhood pixel data) in a pixel-by-pixel independent fashion. LCNN reduces the computational complexity, saves a great number of memory devices, and cuts the cost of implementation. The Landsat system is designed to measure the radiation to deduce surface conditions and materials. For any given material, the amount of emitted and reflected radiation varies with wavelength. In practice, a single pixel of a Landsat image has seven channels receiving radiation from 0.1 to 12 microns from the ground within a 20x20 meter footprint containing a variety of radiating materials. The a-priori LCNN algorithm captures the spatial-temporal variation of the mixture, which is hardly de-mixable by other a-posteriori BSS or ICA methods. We have already compared Landsat remote sensing using both methods at WCCI 2002 in Hawaii. Unfortunately, an absolute benchmark is not possible because of the lack of ground truth. We will arbitrarily mix two incoherent sampled images as the ground truth. However, since the constant total probability of co-located sources within the pixel footprint is necessary for the remote sensing constraint (on a clear day the total reflected energy is constant across neighboring receiving pixel sensors), we have to normalize the two images pixel-by-pixel as well. Then, the result is indeed as expected.

  8. Amorphous and Polycrystalline Photoconductors for Direct Conversion Flat Panel X-Ray Image Sensors

    PubMed Central

    Kasap, Safa; Frey, Joel B.; Belev, George; Tousignant, Olivier; Mani, Habib; Greenspan, Jonathan; Laperriere, Luc; Bubon, Oleksandr; Reznik, Alla; DeCrescenzo, Giovanni; Karim, Karim S.; Rowlands, John A.

    2011-01-01

    In the last ten to fifteen years there has been much research in using amorphous and polycrystalline semiconductors as x-ray photoconductors in various x-ray image sensor applications, most notably in flat panel x-ray imagers (FPXIs). We first outline the essential requirements for an ideal large area photoconductor for use in a FPXI, and discuss how some of the current amorphous and polycrystalline semiconductors fulfill these requirements. At present, only stabilized amorphous selenium (doped and alloyed a-Se) has been commercialized, and FPXIs based on a-Se are particularly suitable for mammography, operating at the ideal limit of high detective quantum efficiency (DQE). Further, these FPXIs can also be used in real-time, and have already been used in such applications as tomosynthesis. We discuss some of the important attributes of amorphous and polycrystalline x-ray photoconductors such as their large area deposition ability, charge collection efficiency, x-ray sensitivity, DQE, modulation transfer function (MTF) and the importance of the dark current. We show the importance of charge trapping in limiting not only the sensitivity but also the resolution of these detectors. Limitations on the maximum acceptable dark current and the corresponding charge collection efficiency jointly impose a practical constraint that many photoconductors fail to satisfy. We discuss the case of a-Se in which the dark current was brought down by three orders of magnitude by the use of special blocking layers to satisfy the dark current constraint. There are also a number of polycrystalline photoconductors, HgI2 and PbO being good examples, that show potential for commercialization in the same way that multilayer stabilized a-Se x-ray photoconductors were developed for commercial applications. We highlight the unique nature of avalanche multiplication in a-Se and how it has led to the development of the commercial HARP video-tube. An all solid state version of the HARP has been recently demonstrated with excellent avalanche gains; the latter is expected to lead to a number of novel imaging device applications that would be quantum noise limited. While passive pixel sensors use one TFT (thin film transistor) as a switch at the pixel, active pixel sensors (APSs) have two or more transistors and provide gain at the pixel level. The advantages of APS based x-ray imagers are also discussed with examples. PMID:22163893

  9. Chromatic Modulator for a High-Resolution CCD or APS

    NASA Technical Reports Server (NTRS)

    Hartley, Frank; Hull, Anthony

    2008-01-01

    A chromatic modulator has been proposed to enable the separate detection of the red, green, and blue (RGB) color components of the same scene by a single charge-coupled device (CCD), active-pixel sensor (APS), or similar electronic image detector. Traditionally, the RGB color-separation problem in an electronic camera has been solved by use of either (1) fixed color filters over three separate image detectors; (2) a filter wheel that repeatedly imposes a red, then a green, then a blue filter over a single image detector; or (3) different fixed color filters over adjacent pixels. The use of separate image detectors necessitates precise registration of the detectors and the use of complicated optics; filter wheels are expensive and add considerably to the bulk of the camera; and fixed pixelated color filters reduce spatial resolution and introduce color-aliasing effects. The proposed chromatic modulator would not exhibit any of these shortcomings. The proposed chromatic modulator would be an electromechanical device fabricated by micromachining. It would include a filter having a spatially periodic pattern of RGB strips at a pitch equal to that of the pixels of the image detector. The filter would be placed in front of the image detector, supported at its periphery by a spring suspension and electrostatic comb drive. The spring suspension would bias the filter toward a middle position in which each filter strip would be registered with a row of pixels of the image detector. Hard stops would limit the excursion of the spring suspension to precisely one pixel row above and one pixel row below the middle position. In operation, the electrostatic comb drive would be actuated to repeatedly snap the filter to the upper extreme, middle, and lower extreme positions. This action would repeatedly place a succession of the differently colored filter strips in front of each pixel of the image detector. To simplify the processing, it would be desirable to encode information on the color of the filter strip over each row (or at least over some representative rows) of pixels at a given instant of time in synchronism with the pixel output at that instant.

  10. Evaluation of an innovative color sensor for space application

    NASA Astrophysics Data System (ADS)

    Cessa, Virginie; Beauvivre, Stéphane; Pittet, Jacques; Dougnac, Virgile; Fasano, M.

    2017-11-01

    We present in this paper an evaluation of an innovative image sensor that provides color information without the need for organic filters. The sensor is a CMOS array with more than 4 million pixels that filters the incident photons into R, G, and B channels, delivering full resolution in color. Such a sensor, combining high performance with low power consumption, is of high interest for future space missions. The paper presents the characteristics of the detector as well as the first results of environmental testing.

  11. FITPix COMBO—Timepix detector with integrated analog signal spectrometric readout

    NASA Astrophysics Data System (ADS)

    Holik, M.; Kraus, V.; Georgiev, V.; Granja, C.

    2016-02-01

    The hybrid semiconductor pixel detector Timepix has proven to be a powerful tool for radiation detection and imaging. Energy-loss and directional sensitivity, as well as particle-type resolving power, are possible thanks to high-resolution particle tracking and per-pixel energy and quantum-counting capability. The spectrometric resolving power of the detector can be further enhanced by analyzing the analog signal of the detector's common sensor electrode (also called the back-side pulse). In this work we present a new compact readout interface, based on the FITPix readout architecture, extended with integrated analog electronics for the detector's common sensor signal. Integrating simultaneous operation of the digital per-pixel readout with the common-sensor (back-side electrode) analog pulse-processing circuitry in one device enhances the detector capabilities and opens new applications. Thanks to noise suppression and built-in electromagnetic interference shielding, the common hardware platform enables parallel analog spectroscopy on the back-side pulse signal together with full operation and readout of the pixelated digital part; the noise level is 600 keV and the spectrometric resolution is around 100 keV for 5.5 MeV alpha particles. Self-triggering is implemented with a delay of a few tens of ns, making use of an adjustable low-energy threshold on the analog particle-signal amplitude. The digital pixelated full frame can thus be triggered and recorded together with the common-sensor analog signal. The waveform, sampled at a frequency of 100 MHz, can be recorded in an adjustable time window including time prior to the trigger. An integrated software tool provides control, on-line display and readout of both the analog and digital channels. Both the pixelated digital record and the analog waveform are synchronized and written out with a common time stamp.

  12. Studies of prototype DEPFET sensors for the Wide Field Imager of Athena

    NASA Astrophysics Data System (ADS)

    Treberspurg, Wolfgang; Andritschke, Robert; Bähr, Alexander; Behrens, Annika; Hauser, Günter; Lechner, Peter; Meidinger, Norbert; Müller-Seidlitz, Johannes; Treis, Johannes

    2017-08-01

    The Wide Field Imager (WFI) of ESA's next X-ray observatory Athena will combine a high count rate capability with a large field of view, both with state-of-the-art spectroscopic performance. To meet these demands, specific DEPFET active pixel detectors have been developed and operated. Due to the intrinsic amplification of detected signals, they are particularly well suited to achieving high-speed, low-noise performance. Different fabrication technologies and transistor geometries have been implemented on a dedicated prototype production in the course of the development of the DEPFET sensors. The main differences between the sensors concern the shape of the transistor gate, regarding the layout, and the thickness of the gate oxide, regarding the technology. To facilitate the fabrication and testing of the resulting variety of sensors, the presented studies were carried out with 64×64 pixel detectors. The detector comprises a control ASIC (Switcher-A), a readout ASIC (VERITAS-2) and the sensor. In this paper we give an overview of the evaluation of the different prototype sensors. The most important results, which have been decisive for the identification of the optimal fabrication technology and transistor layout for subsequent sensor productions, are summarized. It will be shown that these developments result in excellent performance of spectroscopic X-ray DEPFETs, with typical noise values below 2.5 ENC at 2.5 μs/row.

  13. Beam test results of a monolithic pixel sensor in the 0.18 μm tower-jazz technology with high resistivity epitaxial layer

    NASA Astrophysics Data System (ADS)

    Mattiazzo, S.; Aimo, I.; Baudot, J.; Bedda, C.; La Rocca, P.; Perez, A.; Riggi, F.; Spiriti, E.

    2015-10-01

    The ALICE experiment at CERN will undergo a major upgrade during the second long LHC shutdown in the years 2018-2019; this upgrade includes the full replacement of the Inner Tracking System (ITS), deploying seven layers of Monolithic Active Pixel Sensors (MAPS). For the development of the new ALICE ITS, the Tower-Jazz 0.18 μm CMOS imaging sensor process has been chosen because it allows full CMOS in the pixel and offers different silicon wafers, including high-resistivity epitaxial layers. A large test campaign has been carried out on several small prototype chips designed to optimize the pixel sensor layout and the front-end electronics. Results match the target requirements both in terms of performance and of radiation hardness. Following this development, the first full-scale chips have been designed, submitted and are currently under test, with promising results. A telescope composed of 4 planes of Mimosa-28 and 2 planes of Mimosa-18 chips is under development at the DAFNE Beam Test Facility (BTF) at the INFN Laboratori Nazionali di Frascati (LNF) in Italy, with the final goal of performing a comparative test of the full-scale prototypes. The telescope has recently been used to test a Mimosa-22THRb chip (a monolithic pixel sensor built in the 0.18 μm Tower-Jazz process), and we plan to test the full-scale chips for the ALICE ITS upgrade at the beginning of 2015. In this contribution we describe first measurements of the spatial resolution, fake hit rate and detection efficiency of the Mimosa-22THRb chip obtained at the BTF facility in June 2014 with an electron beam of 500 MeV.

  14. High-resolution dynamic pressure sensor array based on piezo-phototronic effect tuned photoluminescence imaging.

    PubMed

    Peng, Mingzeng; Li, Zhou; Liu, Caihong; Zheng, Qiang; Shi, Xieqing; Song, Ming; Zhang, Yang; Du, Shiyu; Zhai, Junyi; Wang, Zhong Lin

    2015-03-24

    A high-resolution dynamic tactile/pressure display is indispensable to the comprehensive perception of force/mechanical stimulations, for applications such as electronic skin, biomechanical imaging/analysis, or personalized signatures. Here, we present a dynamic pressure sensor array based on pressure/strain-tuned photoluminescence imaging without the need for electricity. Each sensor is a nanopillar that consists of InGaN/GaN multiple quantum wells. Its photoluminescence intensity can be modulated dramatically and linearly by small strains (0-0.15%) owing to the piezo-phototronic effect. The sensor array has a high pixel density of 6350 dpi and an exceptionally small standard deviation of photoluminescence. High-quality tactile/pressure sensing distributions can be recorded in real time by parallel photoluminescence imaging without any cross-talk. The sensor array can be inexpensively fabricated over large areas by semiconductor product lines. The proposed dynamic all-optical pressure imaging, with its excellent resolution, high sensitivity, good uniformity, and ultrafast response time, offers a suitable way toward smart sensing and micro/nano-opto-electromechanical systems.

  15. Computational imaging with a balanced detector.

    PubMed

    Soldevila, F; Clemente, P; Tajahuerce, E; Uribe-Patarroyo, N; Andrés, P; Lancis, J

    2016-06-29

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there still exist several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables information to be acquired even when the power of the parasite signal is higher than that of the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low-numerical-aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media.
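
    The balanced-detection scheme with complementary illumination lends itself to a short simulation: each binary pattern and its complement are projected, and the balanced detector outputs their difference, so ambient (parasite) light common to both measurements cancels. This is a minimal, noiseless sketch assuming Hadamard patterns and a full (non-compressive) measurement set; it is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import hadamard

def balanced_single_pixel(scene, ambient=5.0):
    """Simulate single-pixel imaging with complementary illumination and a
    balanced detector: each DMD pattern P and its complement (1 - P) are
    measured and subtracted, so light common to both arms (the parasite
    signal) cancels. Noiseless, fully sampled sketch with Hadamard patterns."""
    x = scene.reshape(-1)                               # flattened scene (size must be a power of 2)
    H = hadamard(x.size)                                # +/-1 Hadamard measurement basis
    P = (H + 1) / 2                                     # binary DMD patterns
    m = (P @ x + ambient) - ((1 - P) @ x + ambient)     # balanced output equals H @ x
    return (H.T @ m / x.size).reshape(scene.shape)      # exact inverse for Hadamard patterns

scene = np.outer(np.hanning(32), np.hanning(32))
recovered = balanced_single_pixel(scene)                # matches `scene` up to numerical precision
```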

  16. Computational imaging with a balanced detector

    NASA Astrophysics Data System (ADS)

    Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.

    2016-06-01

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there still exist several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables information to be acquired even when the power of the parasite signal is higher than that of the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low-numerical-aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media.

  17. Portable lensless wide-field microscopy imaging platform based on digital inline holography and multi-frame pixel super-resolution

    PubMed Central

    Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan

    2017-01-01

    In this paper, an irregular-displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide semiconductor-based imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the LR images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the globally optimal solution. A numerical method for phase retrieval is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples, including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm2. PMID:29657866
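
    A minimal sketch of the multi-frame pixel super-resolution idea, assuming the sub-pixel displacements are already known from the registration step: each low-resolution frame is placed onto a finer grid according to its displacement and the accumulated contributions are averaged (shift-and-add). The continuous optimization used by the authors for sub-pixel refinement is not reproduced here.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor=4):
    """Place each low-resolution frame onto a grid refined by `factor`
    according to its known sub-pixel displacement (dy, dx, in LR pixels)
    and average the accumulated contributions."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor)) % factor        # offset on the fine grid
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1                                # leave unobserved fine pixels at zero
    return acc / cnt
```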

  18. Computational imaging with a balanced detector

    PubMed Central

    Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.

    2016-01-01

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there still exist several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables information to be acquired even when the power of the parasite signal is higher than that of the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low-numerical-aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media. PMID:27353733

  19. Characterisation of the high dynamic range Large Pixel Detector (LPD) and its use at X-ray free electron laser sources

    NASA Astrophysics Data System (ADS)

    Veale, M. C.; Adkin, P.; Booker, P.; Coughlan, J.; French, M. J.; Hart, M.; Nicholls, T.; Schneider, A.; Seller, P.; Pape, I.; Sawhney, K.; Carini, G. A.; Hart, P. A.

    2017-12-01

    The STFC Rutherford Appleton Laboratory has delivered the Large Pixel Detector (LPD) for MHz frame rate imaging at the European XFEL. The detector system has an active area of 0.5 m × 0.5 m and consists of a million pixels on a 500 μm pitch. Sensors have been produced from 500 μm thick Hamamatsu silicon tiles that have been bump-bonded to the readout ASIC using a silver-epoxy and gold-stud technique. Each pixel of the detector system is capable of measuring 10⁵ 12 keV photons per image, read out at 4.5 MHz. In this paper, results from the testing of these detectors at the Diamond Light Source and the Linac Coherent Light Source (LCLS) are presented. The performance of the detector in terms of linearity, spatial uniformity and the performance of the different ASIC gain stages is characterised.

  20. WE-G-204-03: Photon-Counting Hexagonal Pixel Array CdTe Detector: Optimal Resampling to Square Pixels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, S; Vedantham, S; Karellas, A

    Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65 mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 keV photons. The detector had hexagonal pixels with an apothem of 30 microns, resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function, which accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways: the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in a square pixel size of 53 microns, with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at the Nyquist frequencies alone resulted in 54 micron square pixels. For the photon-counting CdTe detector, after resampling to 53 micron square pixels using quadratic interpolation, the presampling MTF at the Nyquist frequency of 9.434 cycles/mm along the two directions was 0.501 and 0.507. Conclusion: The hexagonal pixel array photon-counting CdTe detector, after resampling to square pixels, provides high-resolution imaging suitable for fluoroscopy.
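
    The second selection criterion, minimizing the mean absolute difference between the square and hexagonal pixel aperture functions up to a band limit, can be sketched numerically. The hexagon rasterization, the zero-padded FFT used to obtain a fine frequency grid, and the assumed band limit below are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

# Rasterize one hexagonal pixel (apothem 30 um, flat sides facing +/-x) on a fine grid.
a, dx = 30.0, 0.1                                     # microns (apothem from the abstract; grid step assumed)
y, x = np.mgrid[-60:60:dx, -60:60:dx]
hexmask = ((np.abs(x) <= a)
           & (np.abs(0.5 * x + np.sqrt(3) / 2 * y) <= a)
           & (np.abs(0.5 * x - np.sqrt(3) / 2 * y) <= a)).astype(float)

def aperture(profile, step, nfft=65536):
    """1-D pixel aperture function: normalized FFT magnitude of the projected
    sensitive area, zero-padded to obtain a fine frequency grid."""
    A = np.abs(np.fft.rfft(profile, n=nfft))
    return np.fft.rfftfreq(nfft, d=step), A / A[0]

f, A_hex = aperture(hexmask.sum(axis=0), dx)          # projection along one orthogonal direction

# Scan candidate square pixel sizes; a square pixel of width s has aperture |sinc(f*s)|.
fmax = 1.0 / (2 * 51.96)                              # assumed band limit: Nyquist of the finer hex pitch
band = f <= fmax
sizes = np.arange(45.0, 60.0, 0.25)
errs = [np.mean(np.abs(np.abs(np.sinc(f[band] * s)) - A_hex[band])) for s in sizes]
best_size = sizes[int(np.argmin(errs))]               # square size minimizing the criterion
```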

  1. Implantable self-reset CMOS image sensor and its application to hemodynamic response detection in living mouse brain

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Takahiro; Takehara, Hiroaki; Sunaga, Yoshinori; Haruta, Makito; Motoyama, Mayumi; Ohta, Yasumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Ohta, Jun

    2016-04-01

    A self-reset pixel of 15 × 15 µm2 with high signal-to-noise ratio (effective peak SNR ≃64 dB) for an implantable image sensor has been developed for intrinsic signal detection arising from hemodynamic responses in a living mouse brain. For detecting local conversion between oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) in brain tissues, an implantable imaging device was fabricated with our newly designed self-reset image sensor and orange light-emitting diodes (LEDs; λ = 605 nm). We demonstrated imaging of hemodynamic responses in the sensory cortical area accompanied by forelimb stimulation of a living mouse. The implantable imaging device for intrinsic signal detection is expected to be a powerful tool to measure brain activities in living animals used in behavioral analysis.

  2. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
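
    The regularization-based inversion of the spectral-to-spatial code can be sketched as a Tikhonov-regularized least-squares solve, assuming a calibration matrix M that maps spectral bands to sensor pixels. The matrix, the sizes, and the regularization weight below are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def tikhonov_demultiplex(M, b, lam=1e-2):
    """Recover the spectral vector s from sensor readings b given a calibration
    matrix M (sensor pixels x spectral bands), by solving
    (M^T M + lam * I) s = M^T b."""
    MtM = M.T @ M
    return np.linalg.solve(MtM + lam * np.eye(MtM.shape[0]), M.T @ b)

# toy usage with an assumed calibration: 256 sensor pixels encoding 30 spectral bands
rng = np.random.default_rng(1)
M = rng.random((256, 30))
true_spectrum = np.exp(-0.5 * ((np.arange(30) - 12) / 4.0) ** 2)
b = M @ true_spectrum + 0.01 * rng.standard_normal(256)
estimate = tikhonov_demultiplex(M, b)
```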

  3. Chromatic Modulator for High Resolution CCD or APS Devices

    NASA Technical Reports Server (NTRS)

    Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)

    2003-01-01

    A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.

  4. Small Pixel Hybrid CMOS X-ray Detectors

    NASA Astrophysics Data System (ADS)

    Hull, Samuel; Bray, Evan; Burrows, David N.; Chattopadhyay, Tanmoy; Falcone, Abraham; Kern, Matthew; McQuaide, Maria; Wages, Mitchell

    2018-01-01

    Concepts for future space-based X-ray observatories call for a large effective area and high angular resolution instrument to enable precision X-ray astronomy at high redshift and low luminosity. Hybrid CMOS detectors are well suited for such high throughput instruments, and the Penn State X-ray detector lab, in collaboration with Teledyne Imaging Sensors, has recently developed new small pixel hybrid CMOS X-ray detectors. These prototype 128x128 pixel devices have 12.5 micron pixel pitch, 200 micron fully depleted depth, and include crosstalk eliminating CTIA amplifiers and in-pixel correlated double sampling (CDS) capability. We report on characteristics of these new detectors, including the best read noise ever measured for an X-ray hybrid CMOS detector, 5.67 e- (RMS).

  5. Dual-gate photo thin-film transistor: a “smart” pixel for high- resolution and low-dose X-ray imaging

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Ou, Hai; Chen, Jun

    2015-06-01

    Since its emergence a decade ago, amorphous silicon flat panel X-ray detector has established itself as a ubiquitous platform for an array of digital radiography modalities. The fundamental building block of a flat panel detector is called a pixel. In all current pixel architectures, sensing, storage, and readout are unanimously kept separate, inevitably compromising resolution by increasing pixel size. To address this issue, we hereby propose a “smart” pixel architecture where the aforementioned three components are combined in a single dual-gate photo thin-film transistor (TFT). In other words, the dual-gate photo TFT itself functions as a sensor, a storage capacitor, and a switch concurrently. Additionally, by harnessing the amplification effect of such a thin-film transistor, we for the first time created a single-transistor active pixel sensor. The proof-of-concept device had a W/L ratio of 250μm/20μm and was fabricated using a simple five-mask photolithography process, where a 130nm transparent ITO was used as the top photo gate, and a 200nm amorphous silicon as the absorbing channel layer. The preliminary results demonstrated that the photocurrent had been increased by four orders of magnitude due to light-induced threshold voltage shift in the sub-threshold region. The device sensitivity could be simply tuned by photo gate bias to specifically target low-level light detection. The dependence of threshold voltage on light illumination indicated that a dynamic range of at least 80dB could be achieved. The "smart" pixel technology holds tremendous promise for developing high-resolution and low-dose X-ray imaging and may potentially lower the cancer risk imposed by radiation, especially among paediatric patients.

  6. Wavelength- or Polarization-Selective Thermal Infrared Detectors for Multi-Color or Polarimetric Imaging Using Plasmonics and Metamaterials

    PubMed Central

    Ogawa, Shinpei; Kimata, Masafumi

    2017-01-01

    Wavelength- or polarization-selective thermal infrared (IR) detectors are promising for various novel applications such as fire detection, gas analysis, multi-color imaging, multi-channel detectors, recognition of artificial objects in a natural environment, and facial recognition. However, these functions require additional filters or polarizers, which leads to high cost and technical difficulties related to integration of many different pixels in an array format. Plasmonic metamaterial absorbers (PMAs) can impart wavelength or polarization selectivity to conventional thermal IR detectors simply by controlling the surface geometry of the absorbers to produce surface plasmon resonances at designed wavelengths or polarizations. This enables integration of many different pixels in an array format without any filters or polarizers. We review our recent advances in wavelength- and polarization-selective thermal IR sensors using PMAs for multi-color or polarimetric imaging. The absorption mechanism defined by the surface structures is discussed for three types of PMAs—periodic crystals, metal-insulator-metal and mushroom-type PMAs—to demonstrate appropriate applications. Our wavelength- or polarization-selective uncooled IR sensors using various PMAs and multi-color image sensors are then described. Finally, high-performance mushroom-type PMAs are investigated. These advanced functional thermal IR detectors with wavelength or polarization selectivity will provide great benefits for a wide range of applications. PMID:28772855

  7. Wavelength- or Polarization-Selective Thermal Infrared Detectors for Multi-Color or Polarimetric Imaging Using Plasmonics and Metamaterials.

    PubMed

    Ogawa, Shinpei; Kimata, Masafumi

    2017-05-04

    Wavelength- or polarization-selective thermal infrared (IR) detectors are promising for various novel applications such as fire detection, gas analysis, multi-color imaging, multi-channel detectors, recognition of artificial objects in a natural environment, and facial recognition. However, these functions require additional filters or polarizers, which leads to high cost and technical difficulties related to integration of many different pixels in an array format. Plasmonic metamaterial absorbers (PMAs) can impart wavelength or polarization selectivity to conventional thermal IR detectors simply by controlling the surface geometry of the absorbers to produce surface plasmon resonances at designed wavelengths or polarizations. This enables integration of many different pixels in an array format without any filters or polarizers. We review our recent advances in wavelength- and polarization-selective thermal IR sensors using PMAs for multi-color or polarimetric imaging. The absorption mechanism defined by the surface structures is discussed for three types of PMAs-periodic crystals, metal-insulator-metal and mushroom-type PMAs-to demonstrate appropriate applications. Our wavelength- or polarization-selective uncooled IR sensors using various PMAs and multi-color image sensors are then described. Finally, high-performance mushroom-type PMAs are investigated. These advanced functional thermal IR detectors with wavelength or polarization selectivity will provide great benefits for a wide range of applications.

  8. Layer by layer: complex analysis with OCT technology

    NASA Astrophysics Data System (ADS)

    Florin, Christian

    2017-03-01

    Standard visualisation systems capture two-dimensional images and require more or less fast image-processing systems. The ASP array (active sensor pixel array) opens a new world in imaging: each pixel is provided with its own lens and its own signal pre-processing. The OCT technology works in real time with very high accuracy. In ASP-array systems, data-acquisition and signal-processing functionality is integrated at the pixel level. The time-of-flight (TOF) principle is used to extract interferometric features. The ASP architecture demodulates the optical signal within a pixel at up to 100 kHz and reconstructs both its amplitude and its phase. The dynamic range of image capture with the ASP array is two orders of magnitude higher than with conventional image sensors. The OCT technology allows topographic imaging in real time with extremely high geometric spatial resolution. The optical path length is swept by an axial movement of the reference mirror; the amplitude-modulated optical signal and its carrier frequency are proportional to the scan rate and carry the depth information. Each maximum of the signal envelope corresponds to a reflection (or scattering event) within the sample. The ASP array simultaneously produces 300 * 300 axial interferograms that adjoin each other on all sides. Unlike standard OCT systems, the signal demodulation used to detect the envelope is not limited by the frame rate of the ASP array. When an optical signal arrives at a pixel of the ASP array, an electrical signal is generated; the background is suppressed to avoid saturating pixels at high light intensity. The sampled signal is continuously integrated after multiplication by a reference signal of the same frequency, in two paths whose phases are shifted by 90 degrees with respect to each other, and each product is averaged. The outputs of the two paths are routed to the PC, where the envelope amplitude and the phase are used to compute a three-dimensional tomographic image. Specially designed ASP arrays with a very high image rate are available for 3D measurement. When ASP arrays are coupled with the OCT method, layer thicknesses can be determined without contact, sealing seams can be inspected, and geometrical shapes can be measured. From a stack of hundreds of single OCT images, the images of interest can be selected and fed to the computer for analysis.
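
    The in-pixel demodulation described above, multiplication by two references 90 degrees apart followed by averaging, is essentially I/Q (lock-in) detection. A minimal sketch, with the carrier frequency, sampling rate and averaging window chosen arbitrarily for illustration:

```python
import numpy as np

def iq_demodulate(signal, fs, f_carrier):
    """Multiply the sampled signal by two references of the carrier frequency,
    90 degrees apart, average each product, and recover envelope amplitude and
    phase from the two outputs."""
    t = np.arange(signal.size) / fs
    i_out = np.mean(signal * np.cos(2 * np.pi * f_carrier * t))
    q_out = np.mean(signal * np.sin(2 * np.pi * f_carrier * t))
    amplitude = 2 * np.hypot(i_out, q_out)        # envelope of the interferogram segment
    phase = np.arctan2(q_out, i_out)
    return amplitude, phase

# usage: a short tone burst standing in for one pixel's interferogram segment
fs, fc = 1.0e6, 50e3
t = np.arange(2048) / fs
burst = 0.8 * np.cos(2 * np.pi * fc * t + 0.3)
amp, ph = iq_demodulate(burst, fs, fc)            # amp ~ 0.8, ph ~ -0.3 (sign set by convention)
```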

  9. Assessment and Prediction of Natural Hazards from Satellite Imagery

    PubMed Central

    Gillespie, Thomas W.; Chu, Jasmine; Frankenberg, Elizabeth; Thomas, Duncan

    2013-01-01

    Since 2000, there have been a number of spaceborne satellites that have changed the way we assess and predict natural hazards. These satellites are able to quantify physical geographic phenomena associated with the movements of the earth’s surface (earthquakes, mass movements), water (floods, tsunamis, storms), and fire (wildfires). Most of these satellites contain active or passive sensors that can be utilized by the scientific community for the remote sensing of natural hazards over a number of spatial and temporal scales. The most useful satellite imagery for the assessment of earthquake damage comes from high-resolution (0.6 m to 1 m pixel size) passive sensors and moderate resolution active sensors that can quantify the vertical and horizontal movement of the earth’s surface. High-resolution passive sensors have been used to successfully assess flood damage while predictive maps of flood vulnerability areas are possible based on physical variables collected from passive and active sensors. Recent moderate resolution sensors are able to provide near real time data on fires and provide quantitative data used in fire behavior models. Limitations currently exist due to atmospheric interference, pixel resolution, and revisit times. However, a number of new microsatellites and constellations of satellites will be launched in the next five years that contain increased resolution (0.5 m to 1 m pixel resolution for active sensors) and revisit times (daily ≤ 2.5 m resolution images from passive sensors) that will significantly improve our ability to assess and predict natural hazards from space. PMID:25170186

  10. Solution processed integrated pixel element for an imaging device

    NASA Astrophysics Data System (ADS)

    Swathi, K.; Narayan, K. S.

    2016-09-01

    We demonstrate the implementation of a solid-state circuit/structure comprising a high-performing polymer field-effect transistor (PFET), which uses an oxide layer in conjunction with a self-assembled monolayer (SAM) as the dielectric, and a bulk-heterostructure-based organic photodiode as a CMOS-like pixel element for an imaging sensor. Practical use of functional organic photon detectors requires on-chip components for image capture and signal transfer, as in the CMOS/CCD architecture, rather than simple photodiode arrays, in order to increase the speed and sensitivity of the sensor. The availability of high-performing PFETs with low operating voltage and photodiodes with high sensitivity provides the necessary prerequisites to implement a CMOS-type image sensing device structure based on organic electronic devices. Solution-processing routes in organic electronics offer relatively facile procedures to integrate these components, combined with the unique features of large area, form factor and multiple optical attributes. We utilize the inherent property of a binary mixture in a blend to phase-separate vertically and create a graded junction for an effective photocurrent response. The implemented design enables photocharge generation along with on-chip charge-to-voltage conversion, with performance parameters comparable to those of traditional counterparts. A charge-integration analysis of the passive pixel element using 2D TCAD simulations is also presented to evaluate the different processes that take place in the monolithic structure.

  11. Bio-inspired multi-mode optic flow sensors for micro air vehicles

    NASA Astrophysics Data System (ADS)

    Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik

    2013-06-01

    Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro air vehicles (MAVs). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide-field-of-view optic flow estimation (μ-Compound Eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ-Compound Eyes and μ-Ocelli applications. The sensor employs a time-stamp-based optic flow algorithm, modified from the conventional EMD (Elementary Motion Detector) algorithm, to give an optimum partitioning of hardware blocks between the analog and digital domains as well as an adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which could require huge hardware resources if implemented in the digital domain, remains in a pixel-level analog processing unit. The rest of the blocks, including feature detection and timestamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing the core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single-slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows with the integrated mixed-mode algorithm core and 2-D optic flows with external timestamp processing, respectively.
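
    The chip-level decoding of time stamps into velocity can be sketched with a reciprocal look-up table: the time-stamp difference between a feature's arrival at two neighbouring pixels indexes the table, and the result is scaled by the pixel pitch. The pitch, LUT depth and time-stamp resolution below are illustrative assumptions, not the sensor's actual parameters.

```python
import numpy as np

def decode_velocity(ts_a, ts_b, pixel_pitch_um=20.0, lut_size=256, dt_lsb_us=1.0):
    """Decode time-stamp differences into velocity: the difference between a
    feature's arrival times at two neighbouring pixels indexes a precomputed
    reciprocal table, and the result is scaled by the pixel pitch, so
    velocity ~ pitch / delta_t using only a LUT, a multiply and a subtraction."""
    lut = np.zeros(lut_size)
    lut[1:] = 1.0 / (np.arange(1, lut_size) * dt_lsb_us)      # precomputed 1/dt in 1/us
    dt = np.clip(np.abs(ts_b - ts_a), 0, lut_size - 1).astype(int)
    return pixel_pitch_um * lut[dt]                            # um/us

# usage: a feature crossing neighbouring pixels 8 time-stamp ticks apart
v = decode_velocity(np.array([100]), np.array([108]))          # -> 20/8 = 2.5 um/us
```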

  12. Design and image-quality performance of high resolution CMOS-based X-ray imaging detectors for digital mammography

    NASA Astrophysics Data System (ADS)

    Cha, B. K.; Kim, J. Y.; Kim, Y. J.; Yun, S.; Cho, G.; Kim, H. K.; Seo, C.-W.; Jeon, S.; Huh, Y.

    2012-04-01

    In digital X-ray imaging systems, X-ray imaging detectors based on scintillating screens with electronic devices such as charge-coupled devices (CCDs), thin-film transistors (TFT), complementary metal oxide semiconductor (CMOS) flat panel imagers have been introduced for general radiography, dental, mammography and non-destructive testing (NDT) applications. Recently, a large-area CMOS active-pixel sensor (APS) in combination with scintillation films has been widely used in a variety of digital X-ray imaging applications. We employed a scintillator-based CMOS APS image sensor for high-resolution mammography. In this work, both powder-type Gd2O2S:Tb and a columnar structured CsI:Tl scintillation screens with various thicknesses were fabricated and used as materials to convert X-ray into visible light. These scintillating screens were directly coupled to a CMOS flat panel imager with a 25 × 50 mm2 active area and a 48 μm pixel pitch for high spatial resolution acquisition. We used a W/Al mammographic X-ray source with a 30 kVp energy condition. The imaging characterization of the X-ray detector was measured and analyzed in terms of linearity in incident X-ray dose, modulation transfer function (MTF), noise-power spectrum (NPS) and detective quantum efficiency (DQE).
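
    The detective quantum efficiency is commonly derived from the measured MTF and NPS through the standard cascaded-systems relation DQE(f) = MTF(f)^2 / (q · NNPS(f)). A minimal sketch of that bookkeeping, assuming consistent units for the noise-power spectrum and the photon fluence:

```python
import numpy as np

def dqe(mtf, nps, mean_signal, photon_fluence):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)), with NNPS = NPS / mean_signal^2 and
    q the incident photon fluence; all inputs must use consistent units."""
    nnps = np.asarray(nps, dtype=float) / float(mean_signal) ** 2
    return np.asarray(mtf, dtype=float) ** 2 / (photon_fluence * nnps)
```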

  13. Fundamental performance differences between CMOS and CCD imagers: Part II

    NASA Astrophysics Data System (ADS)

    Janesick, James; Andrews, James; Tower, John; Grygon, Mark; Elliott, Tom; Cheng, John; Lesser, Michael; Pinter, Jeff

    2007-09-01

    A new class of CMOS imagers that compete with scientific CCDs is presented. The sensors are based on deep depletion backside illuminated technology to achieve high near infrared quantum efficiency and low pixel cross-talk. The imagers deliver very low read noise suitable for single photon counting - Fano-noise limited soft x-ray applications. Digital correlated double sampling signal processing necessary to achieve low read noise performance is analyzed and demonstrated for CMOS use. Detailed experimental data products generated by different pixel architectures (notably 3TPPD, 5TPPD and 6TPG designs) are presented including read noise, charge capacity, dynamic range, quantum efficiency, charge collection and transfer efficiency and dark current generation. Radiation damage data taken for the imagers is also reported.

  14. Theory and Development of Position-Sensitive Quantum Calorimeters. Degree awarded by Stanford Univ.

    NASA Technical Reports Server (NTRS)

    Figueroa-Feliciano, Enectali; White, Nicholas E. (Technical Monitor)

    2001-01-01

    Quantum calorimeters are being developed as imaging spectrometers for future X-ray astrophysics observatories. Much of the science to be done by these instruments could benefit greatly from larger focal-plane coverage of the detector (without increasing pixel size); an order of magnitude more area would greatly increase the science throughput of these future instruments. One of the main deterrents to achieving this goal is the complexity of the readout schemes involved. We have devised a way to increase the number of pixels over current baseline designs by an order of magnitude without increasing the number of channels required for readout. The instrument is a high-energy-resolution, distributed-readout imaging spectrometer called a Position-Sensitive Transition-Edge Sensor (POST). A POST is a quantum calorimeter consisting of two Transition-Edge Sensors (TESs) on the ends of a long absorber, capable of one-dimensional imaging spectroscopy. By comparing rise-time and energy information from the two TESs, the position of the event in the POST is determined; the energy of the event is inferred from the sum of the two pulses. We have developed a generalized theoretical formalism for distributed-readout calorimeters and apply it to our devices. We derive the noise theory and calculate the theoretical energy resolution of a POST. Our calculations show that a 7-pixel POST with 6 keV saturation energy can achieve 2.3 eV resolution, making this a competitive design for future quantum calorimeter instruments. For this thesis we fabricated 7- and 15-pixel POSTs using Mo/Au TESs and gold absorbers, and moved from concept drawings on scraps of napkins to a 7-pixel POST calorimeter with 32 eV energy resolution at 1.5 keV.
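
    The two-pulse readout lends itself to a compact sketch: the event energy follows from the summed pulse integrals, and the position along the absorber from their asymmetry. The amplitude-ratio estimator below stands in for the rise-time comparison actually used, so it is an illustrative simplification rather than the thesis' estimator.

```python
import numpy as np

def post_event(pulse_left, pulse_right, dt):
    """Estimate event energy from the summed pulse integrals of the two TESs
    and position along the absorber from their asymmetry (-1 at the left end,
    +1 at the right end)."""
    e_left = float(np.sum(pulse_left)) * dt
    e_right = float(np.sum(pulse_right)) * dt
    energy = e_left + e_right                            # proportional to deposited energy
    position = (e_right - e_left) / (e_right + e_left)
    return energy, position
```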

  15. Multipurpose active pixel sensor (APS)-based microtracker

    NASA Astrophysics Data System (ADS)

    Eisenman, Allan R.; Liebe, Carl C.; Zhu, David Q.

    1998-12-01

    A new, photon-sensitive, imaging array, the active pixel sensor (APS) has emerged as a competitor to the CCD imager for use in star and target trackers. The Jet Propulsion Laboratory (JPL) has undertaken a program to develop a new generation, highly integrated, APS-based, multipurpose tracker: the Programmable Intelligent Microtracker (PIM). The supporting hardware used in the PIM has been carefully selected to enhance the inherent advantages of the APS. Adequate computation power is included to perform star identification, star tracking, attitude determination, space docking, feature tracking, descent imaging for landing control, and target tracking capabilities. Its first version uses a JPL developed 256 X 256-pixel APS and an advanced 32-bit RISC microcontroller. By taking advantage of the unique features of the APS/microcontroller combination, the microtracker will achieve about an order-of-magnitude reduction in mass and power consumption compared to present state-of-the-art star trackers. It will also add the advantage of programmability to enable it to perform a variety of star, other celestial body, and target tracking tasks. The PIM is already proving the usefulness of its design concept for space applications. It is demonstrating the effectiveness of taking such an integrated approach in building a new generation of high performance, general purpose, tracking instruments to be applied to a large variety of future space missions.

  16. Charge collection and non-ionizing radiation tolerance of CMOS pixel sensors using a 0.18 μm CMOS process

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Zhu, Hongbo; Zhang, Liang; Fu, Min

    2016-09-01

    The proposed Circular Electron Positron Collider (CEPC) will be primarily aimed for precision measurements of the discovered Higgs boson. Its innermost vertex detector, which will play a critical role in heavy-flavor tagging, must be constructed with fine-pitched silicon pixel sensors with low power consumption and fast readout. CMOS pixel sensor (CPS), as one of the most promising candidate technologies, has already demonstrated its excellent performance in several high energy physics experiments. Therefore it has been considered for R&D for the CEPC vertex detector. In this paper, we present the preliminary studies to improve the collected signal charge over the equivalent input capacitance ratio (Q / C), which will be crucial to reduce the analog power consumption. We have performed detailed 3D device simulation and evaluated potential impacts from diode geometry, epitaxial layer properties and non-ionizing radiation damage. We have proposed a new approach to improve the treatment of the boundary conditions in simulation. Along with the TCAD simulation, we have designed the exploratory prototype utilizing the TowerJazz 0.18 μm CMOS imaging sensor process and we will verify the simulation results with future measurements.

  17. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    USDA-ARS?s Scientific Manuscript database

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  18. General fusion approaches for the age determination of latent fingerprint traces: results for 2D and 3D binary pixel feature fusion

    NASA Astrophysics Data System (ADS)

    Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja

    2012-03-01

    Determining the age of latent fingerprint traces found at crime scenes has been an unresolved research issue for decades. Solving this issue would provide criminal investigators with the specific time a fingerprint trace was left on a surface, and would therefore enable them to link potential suspects to the time a crime took place, as well as to reconstruct the sequence of events or eliminate irrelevant fingerprints to ensure privacy constraints. Transferring imaging techniques from different application areas, such as 3D image acquisition, surface measurement and chemical analysis, to the domain of lifting latent biometric fingerprint traces is an upcoming trend in forensics. Such non-destructive sensor devices might help to solve the challenge of determining the age of a latent fingerprint trace, since they provide the opportunity to create time series and process them using pattern recognition techniques and statistical methods on digitized 2D, 3D and chemical data, rather than classical, contact-based capturing techniques, which alter the fingerprint trace and therefore make continuous scans impossible. In prior work, we have suggested using a feature called binary pixel, which is a novel approach in the field of fingerprint age determination. The feature uses a Chromatic White Light (CWL) image sensor to continuously scan a fingerprint trace over time and retrieves a characteristic logarithmic aging tendency for the 2D-intensity as well as the 3D-topographic images from the sensor. In this paper, we propose to combine these two characteristic aging features with other 2D and 3D features from the domains of surface measurement, microscopy, photography and spectroscopy, to achieve an increase in accuracy and reliability of a potential future age determination scheme. Discussing the feasibility of such a variety of sensor devices and possible aging features, we propose a general fusion approach, which might combine promising features into a joint age determination scheme in the future. We furthermore demonstrate the feasibility of the introduced approach by fusing, as an example, the binary pixel features based on 2D-intensity and 3D-topographic images of the mentioned CWL sensor. We conclude that a formula-based age determination approach requires very precise image data, which cannot be achieved at the moment, whereas a machine-learning-based classification approach seems to be feasible if an adequate number of features can be provided.
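
    The characteristic logarithmic aging tendency of the binary pixel feature can be summarized by a simple least-squares fit of the form v(t) = a + b·ln(t). The functional form and the variable names are assumptions made for illustration, not the authors' fitting procedure:

```python
import numpy as np

def fit_log_aging(t_hours, feature_value):
    """Least-squares fit of v(t) = a + b * ln(t) to a binary-pixel time series;
    returns the coefficients (a, b)."""
    A = np.column_stack([np.ones(len(t_hours)), np.log(np.asarray(t_hours, dtype=float))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(feature_value, dtype=float), rcond=None)
    return coeffs
```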

  19. Comparison of JPL-AIRSAR and DLR E-SAR images from the MAC Europe 1991 campaign over testsite Oberpfaffenhofen: Frequency and polarization dependent backscatter variations from agricultural fields

    NASA Technical Reports Server (NTRS)

    Schmullius, C.; Nithack, J.

    1992-01-01

    On July 12, the MAC Europe '91 (Multi-Sensor Airborne Campaign) took place over test site Oberpfaffenhofen. The DLR Institute of Radio-Frequency Technology participated with its C-VV, X-VV, and X-HH Experimental Synthetic Aperture Radar (E-SAR). The high resolution E-SAR images with a pixel size between 1 and 2 m and the polarimetric AIRSAR images were analyzed. Using both sensors in combination is a unique opportunity to evaluate SAR images in a frequency range from P- to X-band and to investigate polarimetric information.

  20. Cameras for digital microscopy.

    PubMed

    Spring, Kenneth R

    2013-01-01

    This chapter reviews the fundamental characteristics of charge-coupled devices (CCDs) and related detectors, outlines the relevant parameters for their use in microscopy, and considers promising recent developments in the technology of detectors. Electronic imaging with a CCD involves three stages: interaction of a photon with the photosensitive surface, storage of the liberated charge, and readout or measurement of the stored charge. The most demanding applications in fluorescence microscopy may require as much as four orders of magnitude greater sensitivity. The image in the present-day light microscope is usually acquired with a CCD camera. The CCD is composed of a large matrix of photosensitive elements (often referred to as "pixels," shorthand for picture elements) which simultaneously capture an image over the entire detector surface. The light-intensity information for each pixel is stored as electronic charge and is converted to an analog voltage by a readout amplifier. This analog voltage is subsequently converted to a numerical value by a digitizer situated on the CCD chip, or very close to it. In complementary metal oxide semiconductor (CMOS) sensors, several (three to six) amplifiers are required for each pixel, and to date, uniform images with a homogeneous background have been a problem because of the inherent difficulties of balancing the gain in all of the amplifiers. CMOS sensors also exhibit relatively high noise associated with the requisite high-speed switching. Both of these deficiencies are being addressed, and sensor performance is nearing that required for scientific imaging. Copyright © 1998 Elsevier Inc. All rights reserved.

  1. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation

    DOE PAGES

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull; ...

    2016-01-28

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz have been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. Lastly, we detail the characteristics, operation, testing and application of the detector.

  2. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation

    PubMed Central

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull; Shanks, Katherine S.; Weiss, Joel T.; Gruner, Sol M.

    2016-01-01

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz have been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. The characteristics, operation, testing and application of the detector are detailed. PMID:26917125

  3. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation.

    PubMed

    Philipp, Hugh T; Tate, Mark W; Purohit, Prafull; Shanks, Katherine S; Weiss, Joel T; Gruner, Sol M

    2016-03-01

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz have been experimentally verified. The pixel design allows for up to 8-12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10-100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. The characteristics, operation, testing and application of the detector are detailed.

  4. Kinematic model for the space-variant image motion of star sensors under dynamical conditions

    NASA Astrophysics Data System (ADS)

    Liu, Chao-Shan; Hu, Lai-Hong; Liu, Guang-Bin; Yang, Bo; Li, Ai-Jun

    2015-06-01

    A kinematic description of a star spot in the focal plane is presented for star sensors under dynamical conditions, involving all necessary parameters such as the image motion, the velocity, and the attitude parameters of the vehicle. Stars at different locations on the focal plane correspond to slightly different orientations and extents of motion blur, which characterize the space-variant point spread function. Finally, the image motion, the energy distribution, and centroid extraction are numerically investigated using the kinematic model under dynamic conditions. A centroid error of less than 0.002 pixel over eight successive iterations is used as the termination criterion for the Richardson-Lucy deconvolution algorithm. The kinematic model of a star sensor is useful for evaluating compensation algorithms for motion-blurred images.
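
    The quoted termination rule for Richardson-Lucy deconvolution can be sketched as follows, interpreting it as the centroid of the estimate moving by less than 0.002 pixel for eight successive iterations. The PSF model and the initialization are illustrative; only the stopping logic mirrors the abstract.

```python
import numpy as np
from scipy.signal import fftconvolve

def centroid(img):
    """Intensity-weighted centroid in pixel coordinates (row, column)."""
    yy, xx = np.indices(img.shape)
    return np.array([(yy * img).sum(), (xx * img).sum()]) / img.sum()

def rl_deconvolve(blurred, psf, max_iter=200, tol=0.002, patience=8):
    """Richardson-Lucy deconvolution that stops once the centroid of the
    estimate has moved by less than `tol` pixels for `patience` successive
    iterations."""
    est = np.full(blurred.shape, float(blurred.mean()))
    psf_flip = psf[::-1, ::-1]
    prev_c, stable = centroid(est), 0
    for _ in range(max_iter):
        ratio = blurred / np.clip(fftconvolve(est, psf, mode="same"), 1e-12, None)
        est = est * fftconvolve(ratio, psf_flip, mode="same")
        c = centroid(est)
        stable = stable + 1 if np.linalg.norm(c - prev_c) < tol else 0
        prev_c = c
        if stable >= patience:
            break
    return est, prev_c
```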

  5. Cloud screening Coastal Zone Color Scanner images using channel 5

    NASA Technical Reports Server (NTRS)

    Eckstein, B. A.; Simpson, J. J.

    1991-01-01

    Clouds are removed from Coastal Zone Color Scanner (CZCS) data using channel 5. Instrumentation problems require pre-processing of channel 5 before an intelligent cloud-screening algorithm can be used. For example, at intervals of about 16 lines, the sensor records anomalously low radiances. Moreover, the calibration equation yields negative radiances when the sensor records zero counts, and pixels corrupted by electronic overshoot must also be excluded. The remaining pixels may then be used in conjunction with the procedure of Simpson and Humphrey to determine the CZCS cloud mask. These results plus in situ observations of phytoplankton pigment concentration show that pre-processing and proper cloud-screening of CZCS data are necessary for accurate satellite-derived pigment concentrations. This is especially true in the coastal margins, where pigment content is high and image distortion associated with electronic overshoot is also present. The pre-processing algorithm is critical to obtaining accurate global estimates of pigment from spacecraft data.
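
    The channel-5 pre-processing steps can be sketched as a sequence of masks. All thresholds below are assumptions, since the paper's specific values are not given here, and the final brightness threshold only stands in for the Simpson-and-Humphrey cloud-screening procedure.

```python
import numpy as np

def preprocess_channel5(radiance, overshoot_mask, min_line_mean=None, cloud_threshold=None):
    """Flag non-positive calibrated radiances (zero counts), scan lines with
    anomalously low mean radiance, and overshoot-corrupted pixels, then apply a
    simple brightness threshold to the remaining pixels as a cloud mask."""
    valid = radiance > 0                                       # drop negative-radiance pixels
    line_means = np.nanmean(np.where(valid, radiance, np.nan), axis=1)
    low = min_line_mean if min_line_mean is not None else np.nanpercentile(line_means, 5)
    valid &= (line_means >= low)[:, None]                      # drop anomalously low lines
    valid &= ~overshoot_mask                                   # drop overshoot-corrupted pixels
    thr = cloud_threshold if cloud_threshold is not None else np.nanpercentile(radiance[valid], 90)
    cloud_mask = valid & (radiance >= thr)                     # bright channel-5 pixels flagged as cloud
    return cloud_mask, valid
```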

  6. High-Speed Scanning Interferometer Using CMOS Image Sensor and FPGA Based on Multifrequency Phase-Tracking Detection

    NASA Technical Reports Server (NTRS)

    Ohara, Tetsuo

    2012-01-01

    A sub-aperture stitching optical interferometer can provide a cost-effective solution for an in situ metrology tool for large optics; however, the currently available technologies are not suitable for high-speed and real-time continuous scan. NanoWave's SPPE (Scanning Probe Position Encoder) has been proven to exhibit excellent stability and sub-nanometer precision with a large dynamic range. This same technology can transform many optical interferometers into real-time subnanometer precision tools with only minor modification. The proposed field-programmable gate array (FPGA) signal processing concept, coupled with a new-generation, high-speed, mega-pixel CMOS (complementary metal-oxide semiconductor) image sensor, enables high speed (>1 m/s) and real-time continuous surface profiling that is insensitive to variation of pixel sensitivity and/or optical transmission/reflection. This is especially useful for large optics surface profiling.

  7. ChromAIX2: A large area, high count-rate energy-resolving photon counting ASIC for a Spectral CT Prototype

    NASA Astrophysics Data System (ADS)

    Steadman, Roger; Herrmann, Christoph; Livne, Amir

    2017-08-01

    Spectral CT based on energy-resolving photon counting detectors is expected to deliver additional diagnostic value at a lower dose than current state-of-the-art CT [1]. The capability of simultaneously providing a number of spectrally distinct measurements not only allows distinguishing between photo-electric and Compton interactions but also discriminating contrast agents that exhibit a K-edge discontinuity in the absorption spectrum, referred to as K-edge Imaging [2]. Such detectors are based on direct converting sensors (e.g. CdTe or CdZnTe) and high-rate photon counting electronics. To support the development of Spectral CT and show the feasibility of obtaining rates exceeding 10 Mcps/pixel (Poissonian observed count-rate), the ChromAIX ASIC has been previously reported showing 13.5 Mcps/pixel (150 Mcps/mm2 incident) [3]. The ChromAIX has been improved to enable large-area detector coverage and increased overall performance. The new ASIC is called ChromAIX2, and delivers count-rates exceeding 15 Mcps/pixel with an rms-noise performance of approximately 260 e-. It has an isotropic pixel pitch of 500 μm in an array of 22×32 pixels and is tile-able on three of its sides. The pixel topology consists of a two-stage amplifier (CSA and Shaper) and a number of test features that allow the ASIC to be thoroughly characterized without a sensor. A total of 5 independent thresholds are also available within each pixel, allowing 5 spectrally distinct measurements to be acquired simultaneously. The ASIC also incorporates a baseline restorer to eliminate excess currents induced by the sensor (e.g. dark current and low frequency drifts) which would otherwise cause an energy estimation error. In this paper we report on the inherent electrical performance of the ChromAIX2 as well as measurements obtained with CZT (CdZnTe)/CdTe sensors and X-rays and radioactive sources.

  8. Infrared hyperspectral imaging miniaturized for UAV applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-02-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on a miniature unmanned aerial vehicle. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new approach to infrared hyperspectral imaging uses micro-optics and is explained in this paper. The micro-optics are made up of an area array of diffractive optical elements where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our opto-mechanical design approach, which results in an infrared hyperspectral imaging system small enough for a payload on a mini-UAV or commercial quadcopter. An example of how this technology can be used to quantify a hydrocarbon gas leak's volume and mass flow rates is also given. The diffractive optical elements used in the lenslet array are blazed gratings where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using a different number of lenslets in the area array. The spatial resolution is determined by the size of the focal plane and the diameter of the lenslet array. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame.
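
    The resolution trade-off described above follows a simple relation: an N x N lenslet array over a P x P focal plane yields N^2 spectral sub-images, each (P/N) x (P/N) pixels. A tiny sketch (the function name is hypothetical) reproduces the two configurations quoted:

      def spectral_image_resolution(fpa_pixels, lenslets_per_side):
          # Spatial resolution of each spectral sub-image for an N x N lenslet
          # array over a square focal plane of fpa_pixels x fpa_pixels.
          side = fpa_pixels // lenslets_per_side
          return side, side

      print(spectral_image_resolution(512, 2))    # (256, 256): 4 spectral images per frame
      print(spectral_image_resolution(1024, 4))   # (256, 256): 16 spectral images per frame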

  9. Low-Light Image Enhancement Using Adaptive Digital Pixel Binning

    PubMed Central

    Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki

    2015-01-01

    This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
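
    A toy NumPy sketch of the general idea, assuming a frame normalised to [0, 1]: blend each pixel with a local binned average, weighting towards the binned value in dark (noisy) regions and clipping to avoid saturation. This is only a simplified illustration; the cited algorithm additionally accounts for local context and is implemented with two line-memories in an ISP, which is not modelled here.

      import numpy as np

      def adaptive_binning(img, gain=4.0, dark_level=0.25, kernel=3):
          # Toy illustration of adaptive digital binning for a low-light frame:
          # dark pixels are pulled towards a local binned (averaged) value before
          # amplification, bright pixels keep their own value, and the output is
          # clipped to avoid saturation.
          img = np.asarray(img, dtype=float)
          pad = kernel // 2
          padded = np.pad(img, pad, mode="edge")

          # Local binned value: mean over a kernel x kernel neighbourhood.
          binned = np.zeros_like(img)
          for dy in range(kernel):
              for dx in range(kernel):
                  binned += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
          binned /= kernel * kernel

          # Weight towards the binned value where the scene is dark (noisy).
          w = np.clip((dark_level - img) / dark_level, 0.0, 1.0)
          blended = w * binned + (1.0 - w) * img
          return np.clip(gain * blended, 0.0, 1.0)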

  10. Pixel decomposition for tracking in low resolution videos

    NASA Astrophysics Data System (ADS)

    Govinda, Vivekanand; Ralph, Jason F.; Spencer, Joseph W.; Goulermas, John Y.; Yang, Lihua; Abbas, Alaa M.

    2008-04-01

    This paper describes a novel set of algorithms that allows indoor activity to be monitored using data from very low resolution imagers and other non-intrusive sensors. The objects are not resolved but activity may still be determined. This allows the use of such technology in sensitive environments where privacy must be maintained. Spectral un-mixing algorithms from remote sensing were adapted for this environment. These algorithms allow the fractional contributions from different colours within each pixel to be estimated and this is used to assist in the detection and monitoring of small objects or sub-pixel motion.
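
    A minimal sketch of the kind of linear un-mixing referred to above: the fractional contribution of each known colour endmember to a pixel is estimated by non-negative least squares. The endmember matrix and pixel values are illustrative, and this is the generic remote-sensing formulation rather than the authors' adapted algorithm.

      import numpy as np
      from scipy.optimize import nnls

      def unmix_pixel(pixel, endmembers):
          # Fractional contribution of each endmember colour to a single pixel.
          # pixel:      length-B vector (e.g. B = 3 for RGB)
          # endmembers: B x M matrix whose columns are the M endmember signatures
          fractions, _ = nnls(endmembers, pixel)
          total = fractions.sum()
          return fractions / total if total > 0 else fractions

      # Example: un-mix an RGB pixel against three illustrative colour endmembers.
      E = np.array([[1.0, 0.0, 0.2],     # red components of the endmembers
                    [0.1, 1.0, 0.2],     # green components
                    [0.0, 0.1, 1.0]])    # blue components
      print(unmix_pixel(np.array([0.6, 0.5, 0.1]), E))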

  11. A CMOS In-Pixel CTIA High Sensitivity Fluorescence Imager.

    PubMed

    Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert

    2011-10-01

    Traditionally, charge coupled device (CCD) based image sensors have held sway over the field of biomedical imaging. Complementary metal oxide semiconductor (CMOS) based imagers so far lack sensitivity, leading to poor low-light imaging. Certain applications, including our work on animal-mountable systems for imaging in awake and unrestrained rodents, require the high sensitivity and image quality of CCDs and the low power consumption, flexibility and compactness of CMOS imagers. We present a 132×124 high sensitivity imager array with a 20.1 μm pixel pitch fabricated in a standard 0.5 μm CMOS process. The chip incorporates n-well/p-sub photodiodes, capacitive transimpedance amplifier (CTIA) based in-pixel amplification, pixel scanners and delta differencing circuits. The 5-transistor all-nMOS pixel interfaces with peripheral pMOS transistors for column-parallel CTIA. At 70 fps, the array has a minimum detectable signal of 4 nW/cm2 at a wavelength of 450 nm while consuming 718 μA from a 3.3 V supply. Peak signal to noise ratio (SNR) was 44 dB at an incident intensity of 1 μW/cm2. Implementing 4×4 binning allowed the frame rate to be increased to 675 fps. Alternatively, sensitivity could be increased to detect about 0.8 nW/cm2 while maintaining 70 fps. The chip was used to image single cell fluorescence at 28 fps with an average SNR of 32 dB. For comparison, a cooled CCD camera imaged the same cell at 20 fps with an average SNR of 33.2 dB under the same illumination while consuming over a watt.

  12. A CMOS In-Pixel CTIA High Sensitivity Fluorescence Imager

    PubMed Central

    Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert

    2012-01-01

    Traditionally, charge coupled device (CCD) based image sensors have held sway over the field of biomedical imaging. Complementary metal oxide semiconductor (CMOS) based imagers so far lack sensitivity, leading to poor low-light imaging. Certain applications, including our work on animal-mountable systems for imaging in awake and unrestrained rodents, require the high sensitivity and image quality of CCDs and the low power consumption, flexibility and compactness of CMOS imagers. We present a 132×124 high sensitivity imager array with a 20.1 μm pixel pitch fabricated in a standard 0.5 μm CMOS process. The chip incorporates n-well/p-sub photodiodes, capacitive transimpedance amplifier (CTIA) based in-pixel amplification, pixel scanners and delta differencing circuits. The 5-transistor all-nMOS pixel interfaces with peripheral pMOS transistors for column-parallel CTIA. At 70 fps, the array has a minimum detectable signal of 4 nW/cm2 at a wavelength of 450 nm while consuming 718 μA from a 3.3 V supply. Peak signal to noise ratio (SNR) was 44 dB at an incident intensity of 1 μW/cm2. Implementing 4×4 binning allowed the frame rate to be increased to 675 fps. Alternatively, sensitivity could be increased to detect about 0.8 nW/cm2 while maintaining 70 fps. The chip was used to image single cell fluorescence at 28 fps with an average SNR of 32 dB. For comparison, a cooled CCD camera imaged the same cell at 20 fps with an average SNR of 33.2 dB under the same illumination while consuming over a watt. PMID:23136624

  13. Towards real-time VMAT verification using a prototype, high-speed CMOS active pixel sensor.

    PubMed

    Zin, Hafiz M; Harris, Emma J; Osmond, John P F; Allinson, Nigel M; Evans, Philip M

    2013-05-21

    This work investigates the feasibility of using a prototype complementary metal oxide semiconductor active pixel sensor (CMOS APS) for real-time verification of volumetric modulated arc therapy (VMAT) treatment. The prototype CMOS APS used region-of-interest readout on the chip to allow fast imaging of up to 403.6 frames per second (f/s). The sensor was made larger (5.4 cm × 5.4 cm) using recent advances in photolithographic technique but retains fast imaging speed with the sensor's regional readout. There is a paradigm shift in radiotherapy treatment verification with the advent of advanced treatment techniques such as VMAT. This work has demonstrated that the APS can track multileaf collimator (MLC) leaves moving at 18 mm s-1 with an automatic edge tracking algorithm at an accuracy better than 1.0 mm, even at the fastest imaging speed. The measured fluence distribution for an example VMAT delivery sampled at 50.4 f/s was shown to agree well with the planned fluence distribution, with an average gamma pass rate of 96% at 3%/3 mm. The MLC leaf motion and linac pulse rate variation throughout the VMAT delivery can also be measured. The results demonstrate the potential of CMOS APS technology as a real-time radiotherapy dosimeter for the delivery of complex treatments such as VMAT.
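
    For reference, a brute-force sketch of how a gamma pass rate such as the 96% at 3%/3 mm quoted above is computed from a planned and a measured fluence map; it is a generic global-gamma implementation (square pixels assumed, exhaustive search), not the authors' evaluation code.

      import numpy as np

      def gamma_pass_rate(reference, measured, pixel_mm, dose_pct=3.0, dta_mm=3.0):
          # Brute-force global gamma analysis of two equally sized 2D dose maps.
          # For each reference pixel, gamma is the minimum over measured pixels of
          # sqrt((distance/dta)^2 + (dose difference / (dose_pct% of max))^2);
          # a pixel passes when gamma <= 1.  Small arrays only (O(N^2) search).
          ref = np.asarray(reference, dtype=float)
          mea = np.asarray(measured, dtype=float)
          dd = dose_pct / 100.0 * ref.max()

          ys, xs = np.indices(ref.shape)
          coords = np.stack([ys.ravel() * pixel_mm, xs.ravel() * pixel_mm], axis=1)
          mvals = mea.ravel()

          gammas = np.empty(ref.size)
          for i, (rc, rv) in enumerate(zip(coords, ref.ravel())):
              dist_term = ((coords - rc) ** 2).sum(axis=1) / dta_mm ** 2
              dose_term = (mvals - rv) ** 2 / dd ** 2
              gammas[i] = np.sqrt((dist_term + dose_term).min())
          return 100.0 * (gammas <= 1.0).mean()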

  14. Improved Space Object Observation Techniques Using CMOS Detectors

    NASA Astrophysics Data System (ADS)

    Schildknecht, T.; Hinze, A.; Schlatter, P.; Silha, J.; Peltonen, J.; Santti, T.; Flohrer, T.

    2013-08-01

    CMOS sensors, or more generally Active Pixel Sensors (APS), are rapidly replacing CCDs in the consumer camera market. Due to significant technological advances during the past years, these devices have started to compete with CCDs also for demanding scientific imaging applications, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages compared to CCDs, due to the structure of their basic pixel cells, which each contain their own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, the feasibility of electronic shuttering and precise epoch registration, and the potential to perform image processing operations on-chip and in real-time. Presently applied and proposed optical observation strategies for space debris surveys and space surveillance applications were analyzed. The major design drivers were identified and potential benefits from using available and future CMOS sensors were assessed. The major challenges and design drivers for ground-based and space-based optical observation strategies have been analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above mentioned observations. Similarly, the desirable on-chip processing functionalities which would further enhance object detection and image segmentation were identified. Finally, the characteristics of a particular CMOS sensor available at the Zimmerwald observatory were analyzed by performing laboratory test measurements.

  15. Performance of CMOS imager as sensing element for a Real-time Active Pixel Dosimeter for Interventional Radiology procedures

    NASA Astrophysics Data System (ADS)

    Magalotti, D.; Bissi, L.; Conti, E.; Paolucci, M.; Placidi, P.; Scorzoni, A.; Servoli, L.

    2014-01-01

    Staff members applying Interventional Radiology procedures are exposed to ionizing radiation, which can induce detrimental effects to the human body, and requires an improvement of radiation protection. This paper is focused on the study of the sensor element for a wireless real-time dosimeter to be worn by the medical staff during the interventional radiology procedures, in the framework of the Real-Time Active PIxel Dosimetry (RAPID) INFN project. We characterize a CMOS imager to be used as detection element for the photons scattered by the patient body. The CMOS imager has been first characterized in laboratory using fluorescence X-ray sources, then a PMMA phantom has been used to diffuse the X-ray photons from an angiography system. Different operating conditions have been used to test the detector response in realistic situations, by varying the X-ray tube parameters (continuous/pulsed mode, tube voltage and current, pulse parameters), the sensor parameters (gain, integration time) and the relative distance between sensor and phantom. The sensor response has been compared with measurements performed using passive dosimeters (TLD) and also with a certified beam, in an accredited calibration centre, in order to obtain an absolute calibration. The results are very encouraging, with dose and dose rate measurement uncertainties below the 10% level even for the most demanding Interventional Radiology protocols.

  16. Electrophoretically mediated microanalysis of a nicotinamide adenine dinucleotide-dependent enzyme and its facile multiplexing using an active pixel sensor UV detector.

    PubMed

    Urban, Pawel L; Goodall, David M; Bergström, Edmund T; Bruce, Neil C

    2007-08-31

    An electrophoretically mediated microanalysis (EMMA) method has been developed for yeast alcohol dehydrogenase and quantification of reactant and product cofactors, NAD and NADH. The enzyme substrate ethanol (1% (v/v)) was added to the buffer (50 mM borate, pH 8.8). Results are presented for parallel capillary electrophoresis with a novel miniature UV area detector, with an active pixel sensor imaging an array of two or six parallel capillaries connected via a manifold to a single output capillary in a commercial CE instrument, allowing conversions with five different yeast alcohol dehydrogenase concentrations to be quantified in a single experiment.

  17. Bayesian Model for Matching the Radiometric Measurements of Aerospace and Field Ocean Color Sensors

    PubMed Central

    Salama, Mhd. Suhyb; Su, Zhongbo

    2010-01-01

    A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full and reduced resolutions of MERIS data. The model derived the scale difference between a synthesized satellite pixel and point measurements with R2 > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in heterogeneous regions. The method is generic and applicable to different sensors. PMID:22163615

  18. Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2018-05-01

    To present and evaluate a straightforward implementation of a marker-less, respiratory motion-tracking process utilizing the Kinect v2 camera as a gating tool during 4DCT or during radiotherapy treatments. Utilizing the depth sensor on the Kinect as well as author-written C# code, respiratory motion of a subject was tracked by recording depth values obtained at user-selected points on the subject, with each point representing one pixel on the depth image. As a patient breathes, specific anatomical points on the chest/abdomen will move slightly within the depth image across pixels. By tracking how depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained based on the changing depth values of the selected pixel. Tracking these values was implemented via a marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare respiratory traces obtained by each using two different subjects. Analysis of the depth information from the Kinect for purposes of phase- and amplitude-based binning correlated well with the RPM and Anzai systems. Interquartile range (IQR) values were obtained comparing times correlated with specific amplitude and phase percentages against each product. The IQR time spans indicated the Kinect would measure specific percentage values within 0.077 s for Subject 1 and 0.164 s for Subject 2 when compared to values obtained with RPM or Anzai. For 4DCT scans, these times correlate to less than 1 mm of couch movement and would create an offset of half an acquired slice. By tracking depth values of user-selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized utilizing the Kinect with results comparable to those of the Varian RPM and Anzai belt.
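
    The core idea described above, recording the depth value of a user-selected pixel frame by frame to obtain a respiratory trace, reduces to a few lines of code. The sketch below uses synthetic frames because the original work reads the Kinect v2 depth stream through author-written C# code; the 424 x 512 frame size matches the Kinect v2 depth image, and everything else is illustrative.

      import numpy as np

      def respiratory_trace(depth_frames, pixels):
          # Build respiratory traces by tracking the depth value of fixed pixels.
          # depth_frames: iterable of 2D depth images (one per acquired frame)
          # pixels:       list of (row, col) coordinates selected by the user
          # Returns an (n_frames, n_pixels) array; each mean-removed column is one
          # surrogate breathing signal.
          trace = np.array([[frame[r, c] for (r, c) in pixels]
                            for frame in depth_frames], dtype=float)
          return trace - trace.mean(axis=0)

      # Illustrative use with synthetic 424x512 frames; a real system would read
      # the depth stream from the Kinect v2 SDK instead.
      rng = np.random.default_rng(0)
      frames = [1000 + 10 * np.sin(2 * np.pi * 0.25 * t / 30.0)
                + rng.normal(0, 0.5, size=(424, 512)) for t in range(300)]
      print(respiratory_trace(frames, [(200, 256)]).shape)   # (300, 1)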

  19. Uncooled Terahertz real-time imaging 2D arrays developed at LETI: present status and perspectives

    NASA Astrophysics Data System (ADS)

    Simoens, François; Meilhan, Jérôme; Dussopt, Laurent; Nicolas, Jean-Alain; Monnier, Nicolas; Sicard, Gilles; Siligaris, Alexandre; Hiberty, Bruno

    2017-05-01

    As for other imaging sensor markets, whatever the technology, the commercial spread of terahertz (THz) cameras has to fulfil simultaneously the criteria of high sensitivity and low cost and SWAP (size, weight and power). Monolithic silicon-based 2D sensors integrated in uncooled THz real-time cameras are good candidates to meet these requirements. Over the past decade, LETI has been studying and developing such arrays with two complementary technological approaches, i.e. antenna-coupled silicon bolometers and CMOS Field Effect Transistors (FET), both being compatible with standard silicon microelectronics processes. LETI has leveraged its know-how in thermal infrared bolometer sensors in developing a proprietary architecture for THz sensing. High technological maturity has been achieved, as illustrated by the demonstration of fast scanning of a large field of view and the recent launch of a commercial camera. In the FET-based THz field, recent work has focused on innovative CMOS read-out integrated circuit designs. The studied architectures take advantage of the large pixel pitch to enhance flexibility and sensitivity: an embedded in-pixel configurable signal processing chain dramatically reduces the noise. Video sequences at 100 frames per second using our 31x31 pixel 2D Focal Plane Arrays (FPA) have been achieved. The authors describe the present status of these developments, and perspectives on performance evolution are discussed. Several experimental imaging tests are also presented in order to illustrate the capabilities of these arrays to address industrial applications such as non-destructive testing (NDT), security or quality control of food.

  20. Performance test and image correction of CMOS image sensor in radiation environment

    NASA Astrophysics Data System (ADS)

    Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang

    2016-09-01

    CMOS image sensors rival CCDs in domains that include strong radiation resistance as well as simple drive signals, so they are widely applied in high-energy radiation environments, such as space optical imaging and video monitoring of nuclear power equipment. However, the silicon material of CMOS image sensors suffers ionizing dose effects under high-energy rays, and indicators of the image sensor such as signal-to-noise ratio (SNR), non-uniformity (NU) and bad pixels (BP) are degraded by the radiation. The radiation environment for the test experiments was generated by a 60Co γ-ray source. A camera module based on the image sensor CMV2000 from CMOSIS Inc. was chosen as the research object. The dose rate used for the experiments was 20 krad/h. In the experiments, the output signals of the pixels of the image sensor were measured at different total doses. The data analysis showed that with the accumulation of irradiation dose, the SNR of the image sensor decreased, the NU increased, and the number of BP increased. Correction of these indicators is necessary, as they are the main factors determining image quality. An image processing algorithm was applied to the experimental data, combining a local threshold method with NU correction based on the non-local means (NLM) method. The results of the image processing showed that the correction can effectively suppress the BP, improve the SNR, and reduce the NU.

  1. ISBDD Model for Classification of Hyperspectral Remote Sensing Imagery

    PubMed Central

    Li, Na; Xu, Zhaopeng; Zhao, Huijie; Huang, Xinchen; Drummond, Jane; Wang, Daming

    2018-01-01

    The diverse density (DD) algorithm was proposed to handle the problem of low classification accuracy when training samples contain interference such as mixed pixels. The DD algorithm can learn a feature vector from training bags, which comprise instances (pixels). However, the feature vector learned by the DD algorithm cannot always effectively represent one type of ground cover. To handle this problem, an instance space-based diverse density (ISBDD) model that employs a novel training strategy is proposed in this paper. In the ISBDD model, DD values of each pixel are computed instead of learning a feature vector, and as a result, the pixel can be classified according to its DD values. Airborne hyperspectral data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and the Push-broom Hyperspectral Imager (PHI) are applied to evaluate the performance of the proposed model. Results show that the overall classification accuracy of ISBDD model on the AVIRIS and PHI images is up to 97.65% and 89.02%, respectively, while the kappa coefficient is up to 0.97 and 0.88, respectively. PMID:29510547

  2. An asynchronous data-driven readout prototype for CEPC vertex detector

    NASA Astrophysics Data System (ADS)

    Yang, Ping; Sun, Xiangming; Huang, Guangming; Xiao, Le; Gao, Chaosong; Huang, Xing; Zhou, Wei; Ren, Weiping; Li, Yashu; Liu, Jianchao; You, Bihui; Zhang, Li

    2017-12-01

    The Circular Electron Positron Collider (CEPC) is proposed as a Higgs boson and/or Z boson factory for high-precision measurements on the Higgs boson. The precision of secondary vertex impact parameter plays an important role in such measurements which typically rely on flavor-tagging. Thus silicon CMOS Pixel Sensors (CPS) are the most promising technology candidate for a CEPC vertex detector, which can most likely feature a high position resolution, a low power consumption and a fast readout simultaneously. For the R&D of the CEPC vertex detector, we have developed a prototype MIC4 in the Towerjazz 180 nm CMOS Image Sensor (CIS) process. We have proposed and implemented a new architecture of asynchronous zero-suppression data-driven readout inside the matrix combined with a binary front-end inside the pixel. The matrix contains 128 rows and 64 columns with a small pixel pitch of 25 μm. The readout architecture has implemented the traditional OR-gate chain inside a super pixel combined with a priority arbiter tree between the super pixels, only reading out relevant pixels. The MIC4 architecture will be introduced in more detail in this paper. It will be taped out in May and will be characterized when the chip comes back.

  3. Restoring the spatial resolution of refocus images on 4D light field

    NASA Astrophysics Data System (ADS)

    Lim, JaeGuyn; Park, ByungKwan; Kang, JooYoung; Lee, SeongDeok

    2010-01-01

    This paper presents a method for generating a refocus image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, allows the depth of field to be controlled after capturing a single image. It is generally known that such a camera captures the 4D light field (angular and spatial information of light) within a limited 2D sensor, which reduces the 2D spatial resolution because of the inevitable 2D angular data. This is why a refocus image has low spatial resolution compared with the 2D sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so that the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocus image. We have experimentally verified that the spatial information differs according to the depth of objects from the camera. So, from the selection of refocused regions (corresponding depths), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while other regions remain out of focus. Our experimental results show the effect of the proposed method compared to the existing method.

  4. Amorphous selenium direct detection CMOS digital x-ray imager with 25 micron pixel pitch

    NASA Astrophysics Data System (ADS)

    Scott, Christopher C.; Abbaszadeh, Shiva; Ghanbarzadeh, Sina; Allan, Gary; Farrier, Michael; Cunningham, Ian A.; Karim, Karim S.

    2014-03-01

    We have developed a high resolution amorphous selenium (a-Se) direct detection imager using a large-area compatible back-end fabrication process on top of a CMOS active pixel sensor having 25 micron pixel pitch. Integration of a-Se with CMOS technology requires overcoming CMOS/a-Se interfacial strain, which initiates nucleation of crystalline selenium and results in high detector dark currents. A CMOS-compatible polyimide buffer layer was used to planarize the backplane and provide a low stress and thermally stable surface for a-Se. The buffer layer inhibits crystallization and provides detector stability that is not only a performance factor but also critical for favorable long term cost-benefit considerations in the application of CMOS digital x-ray imagers in medical practice. The detector structure is comprised of a polyimide (PI) buffer layer, the a-Se layer, and a gold (Au) top electrode. The PI layer is applied by spin-coating and is patterned using dry etching to open the backplane bond pads for wire bonding. Thermal evaporation is used to deposit the a-Se and Au layers, and the detector is operated in hole collection mode (i.e. a positive bias on the Au top electrode). High resolution a-Se diagnostic systems typically use 70 to 100 μm pixel pitch and have a pre-sampling modulation transfer function (MTF) that is significantly limited by the pixel aperture. Our results confirm that, for a densely integrated 25 μm pixel pitch CMOS array, the MTF approaches the fundamental material limit, i.e. where the MTF begins to be limited by the a-Se material properties and not the pixel aperture. Preliminary images demonstrating high spatial resolution have been obtained from a first prototype imager.

  5. The Quanta Image Sensor: Every Photon Counts

    PubMed Central

    Fossum, Eric R.; Ma, Jiaju; Masoodian, Saleh; Anzagira, Leo; Zizza, Rachel

    2016-01-01

    The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture. PMID:27517926
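
    A minimal sketch of the cubicle idea described above: single-bit jot planes are summed over an adjustable (x, y, t) cubicle to form each output image pixel, so resolution can be traded for signal after acquisition. Array sizes and the jot firing probability are illustrative.

      import numpy as np

      def cubicle_sum(bit_planes, kx, ky, kt):
          # Form image pixels from single-bit QIS jot data.
          # bit_planes: array of shape (T, Y, X) holding 0/1 jot values
          # kx, ky, kt: cubicle size in x, y and t
          # Each output pixel is the count of '1' jots in one kt x ky x kx cubicle,
          # so the cubicle size can be re-chosen after capture to trade resolution
          # for signal.
          t, y, x = bit_planes.shape
          trimmed = bit_planes[:t - t % kt, :y - y % ky, :x - x % kx]
          blocks = trimmed.reshape(t // kt, kt, y // ky, ky, x // kx, kx)
          return blocks.sum(axis=(1, 3, 5))

      # 1000 bit planes of 64 x 64 jots -> a single 16 x 16 image frame
      # using 4 x 4 x 1000 cubicles.
      planes = (np.random.default_rng(1).random((1000, 64, 64)) < 0.05).astype(np.uint8)
      print(cubicle_sum(planes, 4, 4, 1000).shape)   # (1, 16, 16)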

  6. I-ImaS: intelligent imaging sensors

    NASA Astrophysics Data System (ADS)

    Griffiths, J.; Royle, G.; Esbrand, C.; Hall, G.; Turchetta, R.; Speller, R.

    2010-08-01

    Conventional x-radiography uniformly irradiates the relevant region of the patient. Across that region, however, there is likely to be significant variation in both the thickness and pathological composition of the tissues present, which means that the x-ray exposure conditions selected, and consequently the image quality achieved, are a compromise. The I-ImaS concept eliminates this compromise by intelligently scanning the patient to identify the important diagnostic features, which are then used to adaptively control the x-ray exposure conditions at each point in the patient. In this way optimal image quality is achieved throughout the region of interest whilst maintaining or reducing the dose. An I-ImaS system has been built under an EU Framework 6 project and has undergone pre-clinical testing. The system is based upon two rows of sensors controlled via an FPGA based DAQ board. Each row consists of a 160 mm × 1 mm linear array of ten scintillator coated 3T CMOS APS devices with 32 μm pixels and a readable array of 520 × 40 pixels. The first sensor row scans the patient using a fraction of the total radiation dose to produce a preview image, which is then interrogated to identify the optimal exposure conditions at each point in the image. A signal is then sent to control a beam filter mechanism to appropriately moderate x-ray beam intensity at the patient as the second row of sensors follows behind. Tests performed on breast tissue sections found that the contrast-to-noise ratio in over 70% of the images was increased by an average of 15% at an average dose reduction of 9%. The same technology is currently also being applied to baggage scanning for airport security.

  7. Design and characterization of high precision in-pixel discriminators for rolling shutter CMOS pixel sensors with full CMOS capability

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Hu-Guo, C.; Dorokhov, A.; Pham, H.; Hu, Y.

    2013-07-01

    In order to exploit the ability to integrate a charge collecting electrode with analog and digital processing circuitry down to the pixel level, a new type of CMOS pixel sensor with full CMOS capability is presented in this paper. The pixel array is read out based on a column-parallel read-out architecture, where each pixel incorporates a diode, a preamplifier with a double sampling circuitry and a discriminator to completely eliminate analog read-out bottlenecks. The sensor, featuring a pixel array of 8 rows and 32 columns with a pixel pitch of 80 μm×16 μm, was fabricated in a 0.18 μm CMOS process. The behavior of each pixel-level discriminator, isolated from the diode and the preamplifier, was studied. The experimental results indicate that all in-pixel discriminators are fully operational and can provide significant improvements in the read-out speed and the power consumption of CMOS pixel sensors.

  8. Wave analysis of a plenoptic system and its applications

    NASA Astrophysics Data System (ADS)

    Shroff, Sapna A.; Berkner, Kathrin

    2013-03-01

    Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.

  9. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.

  10. Rugged: an operational, open-source solution for Sentinel-2 mapping

    NASA Astrophysics Data System (ADS)

    Maisonobe, Luc; Seyral, Jean; Prat, Guylaine; Guinet, Jonathan; Espesset, Aude

    2015-10-01

    When you map the entire Earth every 5 days with the aim of generating high-quality time series over land, there is no room for geometrical error: the algorithms have to be stable, reliable, and precise. Rugged, a new open-source library for pixel geolocation, is at the geometrical heart of the operational processing for Sentinel-2. Rugged performs sensor-to-terrain mapping taking into account ground Digital Elevation Models, Earth rotation with all its small irregularities, on-board sensor pixel individual lines-of-sight, spacecraft motion and attitude, and all significant physical effects. It provides direct and inverse location, i.e. it allows the accurate computation of which ground point is viewed from a specific pixel in a spacecraft instrument, and conversely which pixel will view a specified ground point. Direct and inverse location can be used to perform full ortho-rectification of images and correlation between sensors observing the same area. Implemented as an add-on for Orekit (Orbits Extrapolation KIT; a low-level space dynamics library), Rugged also offers the possibility of simulating satellite motion and attitude auxiliary data using Orekit's full orbit propagation capability. This is a considerable advantage for test data generation and mission simulation activities. Together with the Orfeo ToolBox (OTB) image processing library, Rugged provides the algorithmic core of Sentinel-2 Instrument Processing Facilities. The S2 complex viewing model - with 12 staggered push-broom detectors and 13 spectral bands - is built using Rugged objects, enabling the computation of rectification grids for mapping between cartographic and focal plane coordinates. These grids are passed to the OTB library for further image resampling, thus completing the ortho-rectification chain. Sentinel-2 stringent operational requirements to process several terabytes of data per week represented a tough challenge, though one that was well met by Rugged in terms of the robustness and performance of the library.

  11. Identifying High-Traffic Patterns in the Workplace with Radio Tomographic Imaging in 3D Wireless Sensor Networks

    DTIC Science & Technology

    2014-03-27

    [Abstract text garbled in extraction; only thesis front matter survives. Recoverable content: a thesis by Thea S. Danella (Capt., USAF), advised by Richard K. Martin, on identifying high-traffic patterns in the workplace with radio tomographic imaging in 3D wireless sensor networks; the surviving fragment relates the Fisher Information Matrix (FIM) J to lower bounds on the Normalized Mean Squared Error (NMSE) for each image pixel p.]

  12. CCD imaging sensors

    NASA Technical Reports Server (NTRS)

    Janesick, James R. (Inventor); Elliott, Stythe T. (Inventor)

    1989-01-01

    A method for promoting quantum efficiency (QE) of a CCD imaging sensor for UV, far UV and low energy x-ray wavelengths by overthinning the back side beyond the interface between the substrate and the photosensitive semiconductor material, and flooding the back side with UV prior to using the sensor for imaging. This UV flooding promotes an accumulation layer of positive states in the oxide film over the thinned sensor to greatly increase QE for either frontside or backside illumination. A permanent or semipermanent image (analog information) may be stored in a frontside SiO2 layer over the photosensitive semiconductor material using implanted ions for a permanent storage and intense photon radiation for a semipermanent storage. To read out this stored information, the gate potential of the CCD is biased more negative than that used for normal imaging, and excess charge current thus produced through the oxide is integrated in the pixel wells for subsequent readout by charge transfer from well to well in the usual manner.

  13. Analysis and Enhancement of Low-Light-Level Performance of Photodiode-Type CMOS Active Pixel Imagers Operated with Sub-Threshold Reset

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Yang, Guang; Ortiz, Monico; Wrigley, Christopher; Hancock, Bruce; Cunningham, Thomas

    2000-01-01

    Noise in photodiode-type CMOS active pixel sensors (APS) is primarily due to the reset (kTC) noise at the sense node, since it is difficult to implement in-pixel correlated double sampling for a 2-D array. The signal integrated on the photodiode sense node (SENSE) is calculated by measuring the difference between the voltage on the column bus (COL) before and after the reset (RST) is pulsed. Lower than kTC noise can be achieved with photodiode-type pixels by employing a "soft-reset" technique. Soft reset refers to resetting with both drain and gate of the n-channel reset transistor kept at the same potential, causing the sense node to be reset by the sub-threshold MOSFET current. However, the lowering of noise is achieved only at the expense of higher image lag and low-light-level non-linearity. In this paper, we present an analysis to explain the noise behavior, show evidence of degraded performance at low light levels, and describe new pixels that eliminate non-linearity and lag without compromising noise.
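
    For reference, the standard reset-noise relations behind this discussion (textbook kTC formulas, not taken from the paper; the 10 fF example is illustrative): $v_{n,\mathrm{reset}} = \sqrt{kT/C}$, equivalently $Q_n = \sqrt{kTC}/q$ electrons rms, so a 10 fF sense node at 300 K gives $v_n \approx 0.64\,\mathrm{mV}$, or about 40 electrons rms. Soft reset is commonly reported to approach $\sqrt{kT/2C}$, a factor-of-$\sqrt{2}$ reduction, at the cost of the image lag and low-light non-linearity discussed above.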

  14. Sensor assembly method using silicon interposer with trenches for three-dimensional binocular range sensors

    NASA Astrophysics Data System (ADS)

    Nakajima, Kazuhiro; Yamamoto, Yuji; Arima, Yutaka

    2018-04-01

    To easily assemble a three-dimensional binocular range sensor, we devised an alignment method for two image sensors using a silicon interposer with trenches. The trenches were formed using deep reactive ion etching (RIE) equipment. We produced a three-dimensional (3D) range sensor using the method and experimentally confirmed that sufficient alignment accuracy was realized. It was confirmed that the alignment accuracy of the two image sensors when using the proposed method is more than twice that of the alignment assembly method on a conventional board. In addition, as a result of evaluating the deterioration of the detection performance caused by the alignment accuracy, it was confirmed that the vertical deviation between the corresponding pixels in the two image sensors is substantially proportional to the decrease in detection performance. Therefore, we confirmed that the proposed method can realize more than twice the detection performance of the conventional method. Through these evaluations, the effectiveness of the 3D binocular range sensor aligned by the silicon interposer with the trenches was confirmed.

  15. Proton irradiation of the CIS115 for the JUICE mission

    NASA Astrophysics Data System (ADS)

    Soman, M. R.; Allanwood, E. A. H.; Holland, A. D.; Winstone, G. P.; Gow, J. P. D.; Stefanov, K.; Leese, M.

    2015-09-01

    The CIS115 is one of the latest CMOS Imaging Sensors designed by e2v technologies, with 1504x2000 pixels on a 7 μm pitch. Each pixel in the array is a pinned photodiode with a 4T architecture, achieving an average dark current of 22 electrons pixel-1 s-1 at 21°C measured in a front-faced device. The sensor aims for high optical sensitivity by utilising e2v's back-thinning and processing capabilities, providing a sensitive silicon thickness of approximately 9 μm to 12 μm with a tuned anti-reflective coating. The sensor operates in a rolling shutter mode incorporating reset level subtraction, resulting in a mean pixel readout noise of 4.25 electrons rms. The full well has been measured to be 34000 electrons in a previous study, resulting in a dynamic range of up to 8000. These performance characteristics have led to the CIS115 being chosen for JANUS, the high-resolution and wide-angle optical camera on the JUpiter ICy moon Explorer (JUICE). The three-year science phase of JUICE is in the harsh radiation environment of the Jovian magnetosphere, primarily studying Jupiter and its icy moons. Analysis of the expected radiation environment and shielding levels from the spacecraft and instrument design predicts the End Of Life (EOL) displacement and ionising damage for the CIS115 to be equivalent to 10^10 10 MeV protons cm-2 and 100 krad(Si), respectively. Dark current and image lag characterisation results following initial proton irradiations are presented, detailing the initial phase of space qualification of the CIS115. Results are compared to the pre-irradiation performance and the instrument specifications, and further qualification plans are outlined.
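
    The quoted dynamic range follows directly from the full-well and read-noise figures above: $\mathrm{DR} = 34000\,e^- / 4.25\,e^-_{\mathrm{rms}} \approx 8000$, i.e. about $20\log_{10}(8000) \approx 78\,\mathrm{dB}$.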

  16. Pulse-coupled neural network sensor fusion

    NASA Astrophysics Data System (ADS)

    Johnson, John L.; Schamschula, Marius P.; Inguva, Ramarao; Caulfield, H. John

    1998-03-01

    Perception is assisted by sensed impressions of the outside world but not determined by them. The primary organ of perception is the brain and, in particular, the cortex. With that in mind, we have sought to see how a computer-modeled cortex--the PCNN or Pulse Coupled Neural Network--performs as a sensor fusing element. In essence, the PCNN comprises an array of integrate-and-fire neurons with one neuron for each input pixel. In such a system, the neurons corresponding to bright pixels reach firing threshold faster than the neurons corresponding to duller pixels. Thus, firing rate is proportional to brightness. In PCNNs, when a neuron fires it sends some of the resulting signal to its neighbors. This linking can cause a near-threshold neuron to fire earlier than it would have otherwise. This leads to synchronization of the pulses across large regions of the image. We can simplify the 3D PCNN output by integrating out the time dimension. Over a long enough time interval, the resulting 2D (x,y) pattern IS the input image. The PCNN has taken it apart and put it back together again. The shorter-term time integrals are interesting in themselves and will be commented upon in the paper. The main thrust of this paper is the use of multiple PCNNs mutually coupled in various ways to assemble a single 2D pattern or fused image. Results of experiments on PCNN image fusion and an evaluation of its advantages are our primary objectives.
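
    A minimal discrete-time PCNN sketch in NumPy/SciPy, illustrating the behaviour described above (bright pixels fire first, linking synchronises neighbours, and the time-integrated pulse count reconstructs the input). Parameter values are illustrative, and this toy single-network model is not the authors' mutually coupled fusion architecture.

      import numpy as np
      from scipy.signal import convolve2d

      def pcnn(image, steps=30, beta=0.2, v_theta=20.0, alpha_theta=0.3,
               alpha_l=0.7, v_l=1.0):
          # Minimal pulse-coupled neural network: one neuron per pixel.
          # Brighter pixels reach threshold sooner; the linking term lets firing
          # neurons pull near-threshold neighbours into firing with them, which
          # synchronises pulses over regions.  Returns the time-integrated pulse
          # count per pixel, a quantised reconstruction of the input image.
          img = np.asarray(image, dtype=float)
          kernel = np.array([[0.5, 1.0, 0.5],
                             [1.0, 0.0, 1.0],
                             [0.5, 1.0, 0.5]])     # linking weights to neighbours
          L = np.zeros_like(img)                   # linking input
          theta = np.ones_like(img)                # dynamic threshold
          Y = np.zeros_like(img)                   # pulse output
          fired_total = np.zeros_like(img)
          for _ in range(steps):
              L = np.exp(-alpha_l) * L + v_l * convolve2d(Y, kernel, mode="same")
              U = img * (1.0 + beta * L)           # internal activity
              Y = (U > theta).astype(float)        # fire where activity exceeds threshold
              theta = np.exp(-alpha_theta) * theta + v_theta * Y   # raise threshold after firing
              fired_total += Y
          return fired_total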

  17. Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed by iteratively merging the neighboring regions with the smallest Dissimilarity Criterion (DC) and recomputing class labels for new regions. The DC is computed by comparing region mean vectors, class labels and the number of pixels in the two regions under consideration. The algorithm converges when all pixels have been involved in the region-merging procedure. Experimental results are presented for two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions, when compared to previously proposed classification techniques.
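
    A much-simplified sketch of the merging loop described above: repeatedly merge the pair of neighbouring regions with the smallest dissimilarity and update the region statistics. The size-weighted mean-vector distance used below is a stand-in for the paper's Dissimilarity Criterion, which also uses class labels; the data structures and names are illustrative.

      import numpy as np

      def merge_regions(means, sizes, adjacency, n_final):
          # Greedy hierarchical merging of neighbouring regions.
          # means:     dict region_id -> mean spectral vector (numpy array)
          # sizes:     dict region_id -> number of pixels
          # adjacency: set of frozenset({a, b}) pairs of neighbouring region ids
          # Repeatedly merges the adjacent pair with the smallest size-weighted
          # distance between mean vectors until n_final regions remain.
          def dc(a, b):
              w = sizes[a] * sizes[b] / (sizes[a] + sizes[b])
              return w * np.linalg.norm(means[a] - means[b]) ** 2

          while len(means) > n_final and adjacency:
              a, b = min(adjacency, key=lambda pair: dc(*pair))
              # Merge region b into region a: update mean, size and adjacency.
              na, nb = sizes[a], sizes[b]
              means[a] = (na * means[a] + nb * means[b]) / (na + nb)
              sizes[a] = na + nb
              del means[b], sizes[b]
              adjacency = {frozenset(a if r == b else r for r in pair)
                           for pair in adjacency}
              adjacency = {pair for pair in adjacency if len(pair) == 2}
          return means, sizes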

  18. Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique

    NASA Technical Reports Server (NTRS)

    Wargo, M. J.; Witt, A. F.

    1992-01-01

    A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.

  19. Optical Demonstration of a Medical Imaging System with an EMCCD-Sensor Array for Use in a High Resolution Dynamic X-ray Imager

    PubMed Central

    Qu, Bin; Huang, Ying; Wang, Weiyuan; Sharma, Prateek; Kuhls-Gilcrist, Andrew T.; Cartwright, Alexander N.; Titus, Albert H.; Bednarek, Daniel R.; Rudin, Stephen

    2011-01-01

    Use of an extensible array of Electron Multiplying CCDs (EMCCDs) in medical x-ray imager applications was demonstrated for the first time. The large variable electronic-gain (up to 2000) and small pixel size of EMCCDs provide effective suppression of readout noise compared to signal, as well as high resolution, enabling the development of an x-ray detector with far superior performance compared to conventional x-ray image intensifiers and flat panel detectors. We are developing arrays of EMCCDs to overcome their limited field of view (FOV). In this work we report on an array of two EMCCD sensors running simultaneously at a high frame rate and optically focused on a mammogram film showing calcified ducts. The work was conducted on an optical table with a pulsed LED bar used to provide a uniform diffuse light onto the film to simulate x-ray projection images. The system can be selected to run at up to 17.5 frames per second or even higher frame rate with binning. Integration time for the sensors can be adjusted from 1 ms to 1000 ms. Twelve-bit correlated double sampling AD converters were used to digitize the images, which were acquired by a National Instruments dual-channel Camera Link PC board in real time. A user-friendly interface was programmed using LabVIEW to save and display 2K × 1K pixel matrix digital images. The demonstration tiles a 2 × 1 array to acquire increased-FOV stationary images taken at different gains and fluoroscopic-like videos recorded by scanning the mammogram simultaneously with both sensors. The results show high resolution and high dynamic range images stitched together with minimal adjustments needed. The EMCCD array design allows for expansion to an M×N array for arbitrarily larger FOV, yet with high resolution and large dynamic range maintained. PMID:23505330

  20. Characterisation of the high dynamic range Large Pixel Detector (LPD) and its use at X-ray free electron laser sources

    DOE PAGES

    Veale, M. C.; Adkin, P.; Booker, P.; ...

    2017-12-04

    The STFC Rutherford Appleton Laboratory has delivered the Large Pixel Detector (LPD) for MHz frame rate imaging at the European XFEL. The detector system has an active area of 0.5 m × 0.5 m and consists of a million pixels on a 500 μm pitch. Sensors have been produced from 500 μm thick Hamamatsu silicon tiles that have been bump bonded to the readout ASIC using a silver epoxy and gold stud technique. Each pixel of the detector system is capable of measuring 10^5 12 keV photons per image, read out at 4.5 MHz. In this paper, results from the testing of these detectors at the Diamond Light Source and the Linac Coherent Light Source (LCLS) are presented. The performance of the detector in terms of linearity, spatial uniformity and the performance of the different ASIC gain stages is characterised.

  1. Characterisation of the high dynamic range Large Pixel Detector (LPD) and its use at X-ray free electron laser sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veale, M. C.; Adkin, P.; Booker, P.

    The STFC Rutherford Appleton Laboratory has delivered the Large Pixel Detector (LPD) for MHz frame rate imaging at the European XFEL. The detector system has an active area of 0.5 m × 0.5 m and consists of a million pixels on a 500 μm pitch. Sensors have been produced from 500 μm thick Hamamatsu silicon tiles that have been bump bonded to the readout ASIC using a silver epoxy and gold stud technique. Each pixel of the detector system is capable of measuring 10^5 12 keV photons per image, read out at 4.5 MHz. In this paper, results from the testing of these detectors at the Diamond Light Source and the Linac Coherent Light Source (LCLS) are presented. The performance of the detector in terms of linearity, spatial uniformity and the performance of the different ASIC gain stages is characterised.

  2. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    NASA Astrophysics Data System (ADS)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels, ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  3. Optical and x-ray characterization of two novel CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Bohndiek, Sarah E.; Arvanitis, Costas D.; Venanzi, Cristian; Royle, Gary J.; Clark, Andy T.; Crooks, Jamie P.; Prydderch, Mark L.; Turchetta, Renato; Blue, Andrew; Speller, Robert D.

    2007-02-01

    A UK consortium (MI3) has been founded to develop advanced CMOS pixel designs for scientific applications. Vanilla, a 520x520 array of 25μm pixels benefits from flushed reset circuitry for low noise and random pixel access for region of interest (ROI) readout. OPIC, a 64x72 test structure array of 30μm digital pixels has thresholding capabilities for sparse readout at 3,700fps. Characterization is performed with both optical illumination and x-ray exposure via a scintillator. Vanilla exhibits 34+/-3e - read noise, interactive quantum efficiency of 54% at 500nm and can read a 6x6 ROI at 24,395fps. OPIC has 46+/-3e - read noise and a wide dynamic range of 65dB due to high full well capacity. Based on these characterization studies, Vanilla could be utilized in applications where demands include high spectral response and high speed region of interest readout while OPIC could be used for high speed, high dynamic range imaging.

  4. First tests of Timepix detectors based on semi-insulating GaAs matrix of different pixel size

    NASA Astrophysics Data System (ADS)

    Zaťko, B.; Kubanda, D.; Žemlička, J.; Šagátová, A.; Zápražný, Z.; Boháček, P.; Nečas, V.; Mora, Y.; Pichotka, M.; Dudák, J.

    2018-02-01

    In this work, we have focused on Timepix detectors coupled with semi-insulating GaAs sensors. We used undoped bulk GaAs material with a thickness of 350 μm. We prepared and tested four pixelated detectors with 165 μm and 220 μm pixel sizes, in two versions of the fabrication technology: without and with wet chemically etched trenches around each pixel. We carried out the adjustment of the GaAs Timepix detectors to optimize their performance. The energy calibration of one GaAs Timepix detector in Time-over-Threshold mode was performed using 241Am and 133Ba radioisotopes. We were able to detect γ-photons with energies up to 160 keV. The X-ray imaging quality of the GaAs Timepix detectors was tested with an X-ray source using various samples. After flat-field correction, the tested GaAs Timepix detectors showed very promising imaging performance.
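
    The flat-field correction mentioned above can be illustrated with a minimal sketch; the function name, threshold, and dead-pixel handling are illustrative assumptions rather than the authors' procedure.

    ```python
    # Minimal flat-field correction sketch for a pixelated photon-counting
    # detector; names and the dead-pixel threshold are illustrative.
    import numpy as np

    def flat_field_correct(raw, flat, dead_threshold=0.05):
        """raw: image of the sample; flat: open-beam (no sample) image
        acquired under the same conditions."""
        flat = flat.astype(float)
        norm = flat / flat.mean()                 # per-pixel response map
        dead = norm < dead_threshold              # unresponsive / masked pixels
        corrected = np.where(dead, 0.0, raw / np.maximum(norm, dead_threshold))
        return corrected, dead
    ```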

  5. A 176×144 148dB adaptive tone-mapping imager

    NASA Astrophysics Data System (ADS)

    Vargas-Sierra, S.; Liñán-Cembrano, G.; Rodríguez-Vázquez, A.

    2012-03-01

    This paper presents a 176x144 (QCIF) HDR image sensor where visual information is simultaneously captured and adaptively compressed by means of an in-pixel tone mapping scheme. The tone mapping curve (TMC) is calculated from the histogram of a Time Stamp image captured in the previous frame, which serves as a probability indicator of the distribution of illuminations within the present frame. The chip produces 7-bit/pixel images that can map illuminations from 311μlux to 55.3 klux in a single frame, such that each pixel decides when to stop observing photocurrent integration, with extreme values captured at 8s and 2.34μs respectively. The pixel size is 33x33μm2, which includes a 3x3μm2 Nwell-Psubstrate photodiode and an autozeroing technique for establishing the reset voltage, which cancels most of the offset contributions created by the analog processing circuitry. Dark signal (10.8 mV/s) effects in the final image are attenuated by automatic programming of the DAC top voltage. Measured characteristics are: sensitivity 5.79 V/lux·s, FWC 12.2ke-, conversion factor 129 e-/DN, and read noise 25 e-. The chip has been designed in the 0.35μm OPTO technology from Austriamicrosystems (AMS). Due to its focal-plane operation, this architecture is especially well suited to implementation in a 3D (vertical stacking) technology using per-pixel TSVs.
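
    A generic way to derive a tone-mapping curve from the histogram of a previous frame is a cumulative-histogram mapping onto the 7-bit output range, sketched below; this illustrates the idea only and is not the chip's in-pixel TMC computation.

    ```python
    # Hedged sketch: derive a 7-bit tone-mapping curve from the histogram of a
    # previous-frame image and apply it to the current frame.
    import numpy as np

    def tone_mapping_curve(prev_frame, n_bins=1024, out_levels=128):
        hist, edges = np.histogram(prev_frame.ravel(), bins=n_bins)
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]                              # normalize to [0, 1]
        return edges[1:], np.round(cdf * (out_levels - 1)).astype(int)

    def apply_tmc(frame, edges, codes):
        idx = np.clip(np.searchsorted(edges, frame.ravel()), 0, len(codes) - 1)
        return codes[idx].reshape(frame.shape)
    ```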

  6. MAPS development for the ALICE ITS upgrade

    NASA Astrophysics Data System (ADS)

    Yang, P.; Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Gao, C.; Hillemanns, H.; Junique, A.; Kofarago, M.; Keil, M.; Kugathasan, T.; Kim, D.; Kim, J.; Lattuca, A.; Marin Tobon, C. A.; Marras, D.; Mager, M.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Siddhanta, S.; Usai, G.; van Hoorne, J. W.; Yi, J.

    2015-03-01

    Monolithic Active Pixel Sensors (MAPS) offer the possibility to build pixel detectors and tracking layers with high spatial resolution and low material budget in commercial CMOS processes. Significant progress has been made in the field of MAPS in recent years, and they are now considered for the upgrades of the LHC experiments. This contribution will focus on MAPS detectors developed for the ALICE Inner Tracking System (ITS) upgrade and manufactured in the TowerJazz 180 nm CMOS imaging sensor process on wafers with a high resistivity epitaxial layer. Several sensor chip prototypes have been developed and produced to optimise both charge collection and readout circuitry. The chips have been characterised using electrical measurements, radioactive sources and particle beams. The tests indicate that the sensors satisfy the ALICE requirements and first prototypes with the final size of 1.5 × 3 cm2 have been produced in the first half of 2014. This contribution summarises the characterisation measurements and presents first results from the full-scale chips.

  7. Ground Optical Signal Processing Architecture for Contributing SSA Space Based Sensor Data

    NASA Astrophysics Data System (ADS)

    Koblick, D.; Klug, M.; Goldsmith, A.; Flewelling, B.; Jah, M.; Shanks, J.; Piña, R.

    2014-09-01

    The main objective of the DARPA program Orbit Outlook (O^2) is to improve the metric tracking and detection performance of the Space Surveillance Network (SSN) by adding a diverse, low-cost network of contributing sensors to the Space Situational Awareness (SSA) mission. To accomplish this objective, not only must a sensor be in constant communication with a planning and scheduling system to process tasking requests, but there must also be an underlying framework to provide useful data products, such as angles-only measurements. Existing optical signal processing implementations such as the Optical Processing Architecture at Lincoln (OPAL) are capable of converting mission data collections to angles-only observations, but may be difficult for many users to obtain, support, and customize for low-cost missions and demonstration programs. The Ground Optical Signal Processing Architecture (GOSPA) will ingest raw imagery and telemetry data from a space-based electro-optical sensor and perform a background removal process that removes anomalous pixels and dominant temporal noise and interpolates over bad pixels. After background removal, the streak end points and target centroids are located using a corner detection algorithm developed by the Air Force Research Laboratory. These identified streak locations are then fused with the corresponding spacecraft telemetry data to determine the Right Ascension and Declination measurements with respect to time. To demonstrate the performance of GOSPA, non-rate tracking collections against a satellite in Geosynchronous Orbit are simulated from a visible optical imaging sensor in a polar Low Earth Orbit. Stars, noise, and bad pixels are added to the simulated images based on look angles and sensor parameters. These collections are run through the GOSPA framework to provide angles-only measurements to the Air Force Research Laboratory Constrained Admissible Region Multiple Hypothesis Filter (CAR-MHF), in which an Initial Orbit Determination is performed and compared to truth data.
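
    The fusion of detected centroid locations with telemetry to obtain Right Ascension and Declination can be sketched with a simple pinhole camera model and a known boresight attitude; all names and parameters below are assumptions, and a real pipeline would use the full telemetry-derived attitude and a distortion model.

    ```python
    # Hedged sketch: convert a streak/target centroid from pixel coordinates
    # to Right Ascension / Declination via a pinhole model.
    import numpy as np

    def pixel_to_radec(px, py, cx, cy, focal_px, R_cam_to_eci):
        """px, py: centroid in pixels; cx, cy: principal point;
        focal_px: focal length in pixels; R_cam_to_eci: 3x3 rotation matrix."""
        v_cam = np.array([(px - cx), (py - cy), focal_px], dtype=float)
        v_cam /= np.linalg.norm(v_cam)
        v_eci = R_cam_to_eci @ v_cam               # line of sight in inertial frame
        ra = np.degrees(np.arctan2(v_eci[1], v_eci[0])) % 360.0
        dec = np.degrees(np.arcsin(np.clip(v_eci[2], -1.0, 1.0)))
        return ra, dec
    ```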

  8. Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data

    NASA Astrophysics Data System (ADS)

    Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.

    2015-04-01

    In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered, map-ready images and products without operator intervention. The initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic correction module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for the generation of high-level products. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for the full-frame sensor currently under development by SPACE-SI is planned. The present paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
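
    The super-fine GCP positioning step can be illustrated as a normalized cross-correlation search of the reference raster chip inside a window of the target image; the interface below is an illustrative sketch, not the STORM module's actual API, and true sub-pixel accuracy would additionally require interpolating around the correlation peak.

    ```python
    # Hedged sketch: refine a GCP position by normalized cross-correlation of a
    # reference raster chip against a search window in the target image.
    import numpy as np

    def refine_gcp(search_win, chip):
        """Return (row, col) offset of the best chip match inside search_win."""
        ch, cw = chip.shape
        c = (chip - chip.mean()) / (chip.std() + 1e-12)
        best, best_rc = -np.inf, (0, 0)
        for r in range(search_win.shape[0] - ch + 1):
            for col in range(search_win.shape[1] - cw + 1):
                w = search_win[r:r + ch, col:col + cw]
                wn = (w - w.mean()) / (w.std() + 1e-12)
                score = float((c * wn).mean())      # normalized cross-correlation
                if score > best:
                    best, best_rc = score, (r, col)
        return best_rc, best
    ```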

  9. Measurement of charge transfer potential barrier in pinned photodiode CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Chen, Cao; Bing, Zhang; Junfeng, Wang; Longsheng, Wu

    2016-05-01

    The charge transfer potential barrier (CTPB) formed beneath the transfer gate causes a noticeable image lag issue in pinned photodiode (PPD) CMOS image sensors (CIS), and is difficult to measure directly since it is embedded inside the device. Starting from an understanding of the CTPB formation mechanism, we report an alternative method to measure the CTPB height by performing a linear extrapolation coupled with a horizontal left-shift on the sensor photoresponse curve under steady-state illumination. A detailed theoretical study of the principle of the proposed method was performed. Application of the measurement to a prototype PPD-CIS chip with an array of 160 × 160 pixels is demonstrated. The method is intended to provide guidance for the optimization of lag-free, high-speed sensors based on PPD devices. Project supported by the National Defense Pre-Research Foundation of China (No. 51311050301095).
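
    The linear-extrapolation ingredient of the measurement can be illustrated generically: fit the linear region of a photoresponse curve and extrapolate it back to the axis. How the resulting intercept and the horizontal left-shift map to the CTPB height follows the paper's derivation, which is not reproduced here; all names below are assumptions.

    ```python
    # Generic illustration only: fit the linear region of a photoresponse curve
    # (output signal vs exposure) and extrapolate it back to the axis.
    import numpy as np

    def extrapolate_linear_region(x, y, lin_slice):
        """x, y: photoresponse samples; lin_slice: slice selecting the linear part."""
        slope, intercept = np.polyfit(x[lin_slice], y[lin_slice], 1)
        x_intercept = -intercept / slope   # where the extrapolated line crosses y = 0
        return slope, intercept, x_intercept
    ```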

  10. Building Change Detection in Very High Resolution Satellite Stereo Image Time Series

    NASA Astrophysics Data System (ADS)

    Tian, J.; Qin, R.; Cerra, D.; Reinartz, P.

    2016-06-01

    There is an increasing demand for robust methods for urban sprawl monitoring. The steadily increasing number of high resolution and multi-view sensors makes it possible to produce datasets with high temporal and spatial resolution; however, less effort has been dedicated to employing very high resolution (VHR) satellite image time series (SITS) to monitor changes in buildings with higher accuracy. In addition, these VHR data are often acquired from different sensors. The objective of this research is to propose a robust time-series data analysis method for VHR stereo imagery. Firstly, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from them are combined, and building probability maps (BPM) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed based on the derivative features of the BPM sets. The change consistency between the object level and the pixel level is checked to remove outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results demonstrate the effectiveness of the proposed method.

  11. Hot pixel generation in active pixel sensors: dosimetric and micro-dosimetric response

    NASA Technical Reports Server (NTRS)

    Scheick, Leif; Novak, Frank

    2003-01-01

    The dosimetric response of an active pixel sensor is analyzed. Heavy ions are seen to damage the pixel in much the same way as gamma radiation. The probability of a hot pixel is seen to exhibit behavior that is not typical of other microdose effects.

  12. Impact of LANDSAT MSS sensor differences on change detection analysis

    NASA Technical Reports Server (NTRS)

    Likens, W. C.; Wrigley, R. C.

    1983-01-01

    Some 512 by 512 pixel subwindows from simultaneously acquired scene pairs obtained by the LANDSAT 2, 3, and 4 multispectral scanners were coregistered, using the LANDSAT 4 scenes as the base to which the other images were registered. Scattergrams between the coregistered scenes (a form of contingency analysis) were used to radiometrically compare data from the various sensors. Mode values were derived and used to visually fit a linear regression. Root mean square errors of the registration varied between 0.1 and 1.5 pixels. There appear to be no major problems preventing the use of LANDSAT 4 MSS with previous MSS sensors for change detection, provided the noise interference can be removed or minimized. Data normalizations for change detection should be based on the data rather than solely on calibration information. This allows simultaneous normalization of the atmosphere as well as the radiometry.
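
    A data-driven radiometric normalization of the kind recommended above can be sketched as a linear fit between the coregistered scenes; the sketch uses an ordinary least-squares fit over all pixels rather than a visual fit to scattergram mode values, so it only approximates the described procedure.

    ```python
    # Hedged sketch: express one sensor's band in another sensor's radiometry
    # via a linear fit between coregistered single-band images.
    import numpy as np

    def normalize_to_reference(src, ref):
        """src, ref: coregistered single-band images from the two sensors."""
        slope, intercept = np.polyfit(src.ravel().astype(float),
                                      ref.ravel().astype(float), 1)
        return slope * src + intercept     # src expressed in ref's radiometry
    ```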

  13. Proton-counting radiography for proton therapy: a proof of principle using CMOS APS technology

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Allinson, N. M.; Anaxagoras, T.; Esposito, M.; Green, S.; Manolopoulos, S.; Nieto-Camero, J.; Parker, D. J.; Price, T.; Evans, P. M.

    2014-06-01

    Despite the early recognition of the potential of proton imaging to assist proton therapy (Cormack 1963 J. Appl. Phys. 34 2722), the modality is still removed from clinical practice, with various approaches in development. For proton-counting radiography applications such as computed tomography (CT), the water-equivalent-path-length that each proton has travelled through an imaged object must be inferred. Typically, scintillator-based technology has been used in various energy/range telescope designs. Here we propose a very different alternative of using radiation-hard CMOS active pixel sensor technology. The ability of such a sensor to resolve the passage of individual protons in a therapy beam has not been previously shown. Here, such capability is demonstrated using a 36 MeV cyclotron beam (University of Birmingham Cyclotron, Birmingham, UK) and a 200 MeV clinical radiotherapy beam (iThemba LABS, Cape Town, SA). The feasibility of tracking individual protons through multiple CMOS layers is also demonstrated using a two-layer stack of sensors. The chief advantages of this solution are the spatial discrimination of events intrinsic to pixelated sensors, combined with the potential provision of information on both the range and residual energy of a proton. The challenges in developing a practical system are discussed.

  14. Proton-counting radiography for proton therapy: a proof of principle using CMOS APS technology

    PubMed Central

    Poludniowski, G; Allinson, N M; Anaxagoras, T; Esposito, M; Green, S; Manolopoulos, S; Nieto-Camero, J; Parker, D J; Price, T; Evans, P M

    2014-01-01

    Despite the early recognition of the potential of proton imaging to assist proton therapy the modality is still removed from clinical practice, with various approaches in development. For proton-counting radiography applications such as Computed Tomography (CT), the Water-Equivalent-Path-Length (WEPL) that each proton has travelled through an imaged object must be inferred. Typically, scintillator-based technology has been used in various energy/range telescope designs. Here we propose a very different alternative of using radiation-hard CMOS Active Pixel Sensor (APS) technology. The ability of such a sensor to resolve the passage of individual protons in a therapy beam has not been previously shown. Here, such capability is demonstrated using a 36 MeV cyclotron beam (University of Birmingham Cyclotron, Birmingham, UK) and a 200 MeV clinical radiotherapy beam (iThemba LABS, Cape Town, SA). The feasibility of tracking individual protons through multiple CMOS layers is also demonstrated using a two-layer stack of sensors. The chief advantages of this solution are the spatial discrimination of events intrinsic to pixelated sensors, combined with the potential provision of information on both the range and residual energy of a proton. The challenges in developing a practical system are discussed. PMID:24785680

  15. The Mammographic Head Demonstrator Developed in the Framework of the “IMI” Project: First Imaging Test Results

    NASA Astrophysics Data System (ADS)

    Bisogni, Maria Giuseppina

    2006-04-01

    In this paper we report on the performance and the first imaging test results of a digital mammographic demonstrator based on GaAs pixel detectors. The heart of this prototype is the X-ray detection unit, a GaAs pixel sensor read out by the PCC/Medipix1 circuit. Since the active area of each sensor is 1 cm2, 18 detectors have been organized in two staggered rows of nine chips each. To cover the typical mammographic format (18 × 24 cm2), a linear scan is performed by means of a stepper motor. The system is integrated into mammographic equipment comprising the X-ray tube, the bias and data acquisition systems, and the PC-based control system. The prototype has been developed in the framework of the Integrated Mammographic Imaging (IMI) project, an industrial research activity aiming to develop innovative instrumentation for morphologic and functional imaging. The project has been supported by the Italian Ministry of Education, University and Research (MIUR) and by five Italian high-tech companies in collaboration with the universities of Ferrara, Roma “La Sapienza”, Pisa and the INFN.

  16. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients; the modeling of these coefficients is enabled by Skellam distribution analysis. We extend these results by solving for Skellam shrinkage operators that minimize the risk functional in the multiscale Poisson image denoising setting. The resulting minimum-risk shrinkage operator effectively produces denoised wavelet coefficients with minimum attainable L2 error.
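
    The paper's minimum-risk Skellam shrinkage operator is not reproduced here; as a much simpler, standard stand-in for multiscale Poisson denoising, the sketch below applies the Anscombe variance-stabilizing transform followed by soft-thresholding of Haar wavelet coefficients (using the PyWavelets package).

    ```python
    # Not the paper's operator: Anscombe transform + Haar soft-thresholding
    # as a simple, standard multiscale Poisson denoiser.
    import numpy as np
    import pywt

    def denoise_poisson(img, threshold=1.0, level=3):
        img = np.asarray(img, dtype=float)
        stab = 2.0 * np.sqrt(img + 3.0 / 8.0)            # Anscombe transform
        coeffs = pywt.wavedec2(stab, 'haar', level=level)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, threshold, mode='soft') for d in detail)
            for detail in coeffs[1:]
        ]
        rec = pywt.waverec2(coeffs, 'haar')[:img.shape[0], :img.shape[1]]
        return (rec / 2.0) ** 2 - 3.0 / 8.0              # approximate inverse
    ```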

  17. Single-exposure quantitative phase imaging in color-coded LED microscopy.

    PubMed

    Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin

    2017-04-03

    We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
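
    The color-leakage correction can be sketched as the inversion of a measured 3x3 crosstalk matrix applied per pixel; the matrix values below are placeholders (each column representing the sensor's RGB response to one LED channel alone), not measured data, and the paper's actual correction scheme may differ.

    ```python
    # Hedged sketch: undo RGB channel crosstalk with the inverse of a measured
    # 3x3 leakage matrix M (placeholder values, not measured data).
    import numpy as np

    M = np.array([[0.92, 0.06, 0.01],
                  [0.07, 0.88, 0.08],
                  [0.01, 0.06, 0.91]])
    M_inv = np.linalg.inv(M)

    def correct_leakage(rgb_image):
        """rgb_image: H x W x 3 raw sensor image."""
        h, w, _ = rgb_image.shape
        flat = rgb_image.reshape(-1, 3).astype(float)
        return np.clip(flat @ M_inv.T, 0, None).reshape(h, w, 3)
    ```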

  18. Active pixel sensor array with multiresolution readout

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)

    1999-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.

  19. Resampling approach for anomalous change detection

    NASA Astrophysics Data System (ADS)

    Theiler, James; Perkins, Simon

    2007-04-01

    We investigate the problem of identifying pixels in pairs of co-registered images that correspond to real changes on the ground. Changes that are due to environmental differences (illumination, atmospheric distortion, etc.) or sensor differences (focus, contrast, etc.) will be widespread throughout the image, and the aim is to avoid these changes in favor of changes that occur in only one or a few pixels. Formal outlier detection schemes (such as the one-class support vector machine) can identify rare occurrences, but will be confounded by pixels that are "equally rare" in both images: they may be anomalous, but they are not changes. We describe a resampling scheme we have developed that formally addresses both of these issues, and reduces the problem to a binary classification, a problem for which a large variety of machine learning tools have been developed. In principle, the effects of misregistration will manifest themselves as pervasive changes, and our method will be robust against them - but in practice, misregistration remains a serious issue.
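
    One reading of the resampling idea (a schematic sketch, not necessarily the authors' exact construction) is to label actual co-registered pixel pairs as one class and pairs in which the second image's pixels are randomly permuted as the other, train a binary classifier, and flag actual pairs that the model scores as more like the permuted class.

    ```python
    # Schematic sketch of reducing anomalous change detection to binary
    # classification via resampling; interface and classifier are assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def anomalous_change_scores(x, y, rng=None):
        """x: (N, Bx) pixels from image 1; y: (N, By) co-registered pixels
        from image 2. Returns a per-pixel anomalousness score."""
        if rng is None:
            rng = np.random.default_rng(0)
        actual = np.hstack([x, y])                             # real pairings
        permuted = np.hstack([x, y[rng.permutation(len(y))]])  # broken pairings
        data = np.vstack([actual, permuted])
        labels = np.r_[np.zeros(len(x)), np.ones(len(x))]
        clf = LogisticRegression(max_iter=1000).fit(data, labels)
        return clf.predict_proba(actual)[:, 1]   # high = looks like a broken pairing
    ```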

  20. Characterization techniques for incorporating backgrounds into DIRSIG

    NASA Astrophysics Data System (ADS)

    Brown, Scott D.; Schott, John R.

    2000-07-01

    The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms require well-characterized field collection campaigns that can be time and cost prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement that reduces the scope of expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes, and that provide the large area coverage typically acquired by actual sensors. This places a heavy burden on synthetic scene builders to construct well-characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real-world scene pixels, and applying target-building techniques to them is time and resource prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide area coverage. The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual sensors, temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods that demonstrate target embedding in real-world imagery.
