Sample records for active pixel image

  1. CMOS Active-Pixel Image Sensor With Simple Floating Gates

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Nakamura, Junichi; Kemeny, Sabrina E.

    1996-01-01

    Experimental complementary metal-oxide/semiconductor (CMOS) active-pixel image sensor integrated circuit features simple floating-gate structure, with metal-oxide/semiconductor field-effect transistor (MOSFET) as active circuit element in each pixel. Provides flexibility of readout modes, no kTC noise, and relatively simple structure suitable for high-density arrays. Features desirable for "smart sensor" applications.

  2. Method and apparatus of high dynamic range image sensor with individual pixel reset

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Pain, Bedabrata (Inventor); Fossum, Eric R. (Inventor)

    2001-01-01

    A wide dynamic range image sensor provides individual pixel reset to vary the integration time of individual pixels. The integration time of each pixel is controlled by column and row reset control signals which activate a logical reset transistor only when both signals coincide for a given pixel.
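
    A minimal behavioral sketch of this coincidence-reset idea (a hypothetical Python toy model, not the patented circuit): each pixel's reset instant is the time at which both its row and column reset signals are active, so its integration time can be set individually.

```python
import numpy as np

def integrate_frame(scene, row_reset_time, col_reset_time, readout_time):
    """Toy model: a pixel resets only when its row AND column reset signals
    coincide, so its integration runs from that instant until readout."""
    # Per-pixel reset instant = earliest time both signals are active together.
    reset = np.maximum.outer(row_reset_time, col_reset_time)
    t_int = readout_time - reset              # individually controlled integration time
    return scene * t_int                      # accumulated signal (noise/saturation ignored)

# Example: the two right-hand columns get a late reset (short integration).
scene = np.ones((4, 4))
row_t = np.zeros(4)
col_t = np.array([0.0, 0.0, 8.0, 8.0])        # ms
print(integrate_frame(scene, row_t, col_t, readout_time=10.0))
```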

  3. Acquisition of STEM Images by Adaptive Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash

    Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of “most informative” pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high-quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of Peak Signal-to-Noise Ratio (PSNR) using these three different active learning methods at different percentages of sampled pixels. At the 20% level, all three active learning methods underperform the original CS without active learning. However, they all beat the original CS as more of the “most informative” pixels are sampled. One can also argue that CS equipped with active learning requires fewer sampled pixels to achieve the same PSNR than CS with randomly sampled pixels, since all three PSNR curves with active learning grow at a faster pace than the curve without active learning. For this particular STEM image, by observing the reconstructed images and the sensing masks, we find that while the method based on the RBF kernel acquires samples more uniformly, the one based on entropy samples more areas of significant change, and thus less uniformly. The KL-divergence method performs the best in terms of reconstruction error (PSNR) for this example [8].
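
    The adaptive acquisition loop described above can be outlined as follows (a hypothetical Python sketch; the reconstruct and score callables stand in for the BPFA reconstruction and the variance/KL/entropy metrics, which are not implemented here).

```python
import numpy as np

def adaptive_sensing(image_shape, reconstruct, score, init_frac=0.10,
                     step_frac=0.01, target_frac=0.20, rng=None):
    """Grow the sensing mask by repeatedly adding the 'most informative' pixels.

    reconstruct(mask) -> image estimate from the pixels measured so far (e.g. BPFA)
    score(recon, mask) -> per-pixel informativeness map (variance, KL, or entropy)
    """
    rng = rng or np.random.default_rng(0)
    n_pix = int(np.prod(image_shape))
    mask = np.zeros(n_pix, dtype=bool)
    mask[rng.choice(n_pix, int(init_frac * n_pix), replace=False)] = True  # 10% random start
    while mask.mean() < target_frac:
        recon = reconstruct(mask.reshape(image_shape))
        info = score(recon, mask.reshape(image_shape)).ravel()
        info[mask] = -np.inf                      # never re-select already measured pixels
        k = int(step_frac * n_pix)                # next 1% of pixels
        mask[np.argpartition(info, -k)[-k:]] = True
    return mask.reshape(image_shape)
```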

  4. Active-Pixel Image Sensor With Analog-To-Digital Converters

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.

    1995-01-01

    Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at time; during time allocated to scanning row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.
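
    A behavioral sketch of the semiparallel readout (hypothetical Python that treats each column ADC as a plain quantizer rather than the first-order Sigma-Delta modulator used on the chip): one row is selected per cycle and every column's pixel value is digitized in parallel.

```python
import numpy as np

def readout_frame(pixel_voltages, adc_bits=8, v_ref=1.0):
    """Row-at-a-time readout with one ADC per column (column-parallel conversion)."""
    rows, cols = pixel_voltages.shape
    levels = 2 ** adc_bits
    digital = np.zeros((rows, cols), dtype=np.int32)
    for r in range(rows):                         # select one row per readout cycle
        row = pixel_voltages[r, :]                # all columns sampled simultaneously
        # Each column's ADC quantizes its own pixel in parallel (vectorized here).
        digital[r, :] = np.clip(row / v_ref * levels, 0, levels - 1).astype(np.int32)
    return digital

frame = readout_frame(np.random.default_rng(1).random((128, 128)))
```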

  5. A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems

    NASA Technical Reports Server (NTRS)

    Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.

    1993-01-01

    A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.

  6. Decoding brain responses to pixelized images in the primary visual cortex: implications for visual cortical prostheses

    PubMed Central

    Guo, Bing-bing; Zheng, Xiao-lin; Lu, Zhen-gang; Wang, Xing; Yin, Zheng-qin; Hou, Wen-sheng; Meng, Ming

    2015-01-01

    Visual cortical prostheses have the potential to restore partial vision. Still limited by the low-resolution visual percepts provided by visual cortical prostheses, implant wearers can currently only “see” pixelized images, and how to obtain the specific brain responses to different pixelized images in the primary visual cortex (the implant area) is still unknown. We conducted a functional magnetic resonance imaging experiment on normal human participants to investigate the brain activation patterns in response to 18 different pixelized images. There were 100 voxels in the brain activation pattern that were selected from the primary visual cortex, and voxel size was 4 mm × 4 mm × 4 mm. Multi-voxel pattern analysis was used to test if these 18 different brain activation patterns were specific. We chose a Linear Support Vector Machine (LSVM) as the classifier in this study. The results showed that the classification accuracies of different brain activation patterns were significantly above chance level, which suggests that the classifier can successfully distinguish the brain activation patterns. Our results suggest that the specific brain activation patterns to different pixelized images can be obtained in the primary visual cortex using a 4 mm × 4 mm × 4 mm voxel size and a 100-voxel pattern. PMID:26692860
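
    A minimal sketch of the multi-voxel pattern analysis step, assuming scikit-learn and synthetic stand-in data (100-voxel patterns, 18 classes); the study's actual preprocessing and cross-validation scheme is not reproduced here.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_classes, n_trials, n_voxels = 18, 10, 100            # 18 pixelized images, 100 voxels
X = rng.normal(size=(n_classes * n_trials, n_voxels))  # stand-in for V1 activation patterns
y = np.repeat(np.arange(n_classes), n_trials)

clf = LinearSVC(max_iter=10000)                        # linear SVM classifier
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"decoding accuracy {acc:.2f} vs chance level {1 / n_classes:.2f}")
```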

  7. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.

  8. Photodiode area effect on performance of X-ray CMOS active pixel sensors

    NASA Astrophysics Data System (ADS)

    Kim, M. S.; Kim, Y.; Kim, G.; Lim, K. T.; Cho, G.; Kim, D.

    2018-02-01

    Compared to conventional TFT-based X-ray imaging devices, CMOS-based X-ray imaging sensors are considered next generation because they can be manufactured with very small pixel pitches and can acquire high-speed images. In addition, CMOS-based sensors have the advantage of integrating various functional circuits within the sensor. The image quality can also be improved by the high fill factor in large pixels. If the size of the subject is small, the size of the pixel must be reduced as a consequence. In addition, the fill factor must be reduced to aggregate various functional circuits within the pixel. In this study, 3T-APS (active pixel sensor) pixels with photodiodes of four different sizes were fabricated and evaluated. It is well known that a larger photodiode leads to improved overall performance. Nonetheless, once the photodiode area exceeds 1000 μm2, the additional performance gained by further enlarging the photodiode diminishes. As a result, considering the fill factor, a pixel pitch > 32 μm is not necessary to achieve high-efficiency image quality. In addition, poor image quality is to be expected unless special sensor-design techniques are included for sensors with a pixel pitch of 25 μm or less.

  9. CMOS Active Pixel Sensor Technology and Reliability Characterization Methodology

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Pain, Bedabrata; Kayali, Sammy

    2006-01-01

    This paper describes the technology, design features and reliability characterization methodology of a CMOS Active Pixel Sensor. Both overall chip reliability and pixel reliability are projected for the imagers.

  10. Pixel electronic noise as a function of position in an active matrix flat panel imaging array

    NASA Astrophysics Data System (ADS)

    Yazdandoost, Mohammad Y.; Wu, Dali; Karim, Karim S.

    2010-04-01

    We present an analysis of output referred pixel electronic noise as a function of position in the active matrix array for both active and passive pixel architectures. Three different noise sources for Active Pixel Sensor (APS) arrays are considered: readout period noise, reset period noise and leakage current noise of the reset TFT during readout. For the state-of-the-art Passive Pixel Sensor (PPS) array, the readout noise of the TFT switch is considered. Measured noise results are obtained by modeling the array connections with RC ladders on a small in-house fabricated prototype. The results indicate that the pixels in the rows located in the middle part of the array have less random electronic noise at the output of the off-panel charge amplifier compared to the ones in rows at the two edges of the array. These results can help optimize for clearer images as well as help define the region-of-interest with the best signal-to-noise ratio in an active matrix digital flat panel imaging array.

  11. Active pixel image sensor with a winner-take-all mode of operation

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Mead, Carver (Inventor); Fossum, Eric R. (Inventor)

    2003-01-01

    An integrated CMOS semiconductor imaging device having two modes of operation that can be performed simultaneously to produce an output image and provide information of a brightest or darkest pixel in the image.

  12. Active pixel sensors with substantially planarized color filtering elements

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor)

    1999-01-01

    A semiconductor imaging system preferably having an active pixel sensor array compatible with a CMOS fabrication process. Color-filtering elements such as polymer filters and wavelength-converting phosphors can be integrated with the image sensor.

  13. Image Accumulation in Pixel Detector Gated by Late External Trigger Signal and its Application in Imaging Activation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakubek, J.; Cejnarova, A.; Platkevic, M.

    Single quantum counting pixel detectors of the Medipix type are starting to be used in various radiographic applications. Compared to standard devices for digital imaging (such as CCDs or CMOS sensors) they present significant advantages: direct conversion of radiation to electric signal, energy sensitivity, noiseless image integration, unlimited dynamic range, absolute linearity. In this article we describe the use of the pixel device TimePix for image accumulation gated by a late trigger signal. A demonstration of the technique is given on imaging coincidence instrumental neutron activation analysis (Imaging CINAA). This method allows one to determine the concentration and distribution of a certain preselected element in an inspected sample.

  14. CMOS foveal image sensor chip

    NASA Technical Reports Server (NTRS)

    Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Bandera, Cesar (Inventor); Xia, Shu (Inventor)

    2002-01-01

    A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.

  15. Position and time resolution measurements with a microchannel plate image intensifier: A comparison of monolithic and pixelated CeBr3 scintillators

    NASA Astrophysics Data System (ADS)

    Ackermann, Ulrich; Eschbaumer, Stephan; Bergmaier, Andreas; Egger, Werner; Sperr, Peter; Greubel, Christoph; Löwe, Benjamin; Schotanus, Paul; Dollinger, Günther

    2016-07-01

    To perform Four Dimensional Age Momentum Correlation measurements in the near future, where one obtains the positron lifetime in coincidence with the three dimensional momentum of the electron annihilating with the positron, we have investigated the time and position resolution of two CeBr3 scintillators (monolithic and an array of pixels) using a Photek IPD340/Q/BI/RS microchannel plate image intensifier. The microchannel plate image intensifier has an active diameter of 40 mm and a stack of two microchannel plates in chevron configuration. The monolithic CeBr3 scintillator was cylindrically shaped with a diameter of 40 mm and a height of 5 mm. The pixelated scintillator array covered the whole active area of the microchannel plate image intensifier and the shape of each pixel was 2.5·2.5·8 mm3 with a pixel pitch of 3.3 mm. For the monolithic setup the measured mean single time resolution was 330 ps (FWHM) at a gamma energy of 511 keV. No significant dependence on the position was detected. The position resolution at the center of the monolithic scintillator was about 2.5 mm (FWHM) at a gamma energy of 662 keV. The single time resolution of the pixelated crystal setup reached 320 ps (FWHM) in the region of the center of the active area of the microchannel plate image intensifier. The position resolution was limited by the cross-section of the pixels. The gamma energy for the pixel setup measurements was 511 keV.

  16. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, hampering the widespread utility of such technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s), and is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936
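
    The underlying idea, interleaving frames with known subpixel shifts onto a finer grid, can be illustrated with a generic shift-and-add sketch (hypothetical Python, not the authors' pipeline).

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Interleave low-resolution line scans onto a grid `factor` times finer.

    frames : list of 1D line scans (time-stretch images are serial per line)
    shifts : known subpixel shift of each frame, in low-res pixel units
    """
    n = len(frames[0])
    hi = np.zeros(n * factor)
    weight = np.zeros(n * factor)
    for frame, s in zip(frames, shifts):
        idx = (np.arange(n) * factor + int(round(s * factor))) % (n * factor)
        hi[idx] += frame
        weight[idx] += 1.0
    weight[weight == 0] = 1.0        # leave unfilled high-res samples at zero
    return hi / weight

lines = [np.sin(np.linspace(0, 6.28, 64) + 0.1 * k) for k in range(4)]
hr = shift_and_add(lines, shifts=[0.0, 0.25, 0.5, 0.75], factor=4)
```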

  17. Method to improve cancerous lesion detection sensitivity in a dedicated dual-head scintimammography system

    DOEpatents

    Kieper, Douglas Arthur [Seattle, WA; Majewski, Stanislaw [Morgantown, WV; Welch, Benjamin L [Hampton, VA

    2012-07-03

    An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.
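
    One possible reading of steps 1-4, reduced to a toy Python sketch (array names are hypothetical and the co-registration is simplified to a vertical flip; this is illustrative only, not the patented procedure).

```python
import numpy as np

def contrast_map(img_a, img_b, breast_mask):
    """Toy normalization-enhanced contrast map from two opposing-view images."""
    # Step 2: invert (mirror) one image so the two views are co-registered.
    img_b_reg = np.flipud(img_b)
    # Step 3: normalize each image by its average count per pixel over the breast area.
    norm_a = img_a / img_a[breast_mask].mean()
    norm_b = img_b_reg / img_b_reg[breast_mask].mean()
    combined = norm_a * norm_b                 # combined normalized response
    # Step 4: re-weight the measured counts by the normalized value.
    return img_a * combined

rng = np.random.default_rng(0)
a = rng.poisson(50, (64, 64)).astype(float)
b = rng.poisson(50, (64, 64)).astype(float)
mask = np.ones((64, 64), dtype=bool)
cmap = contrast_map(a, b, mask)
```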

  18. Method to improve cancerous lesion detection sensitivity in a dedicated dual-head scintimammography system

    DOEpatents

    Kieper, Douglas Arthur [Newport News, VA; Majewski, Stanislaw [Yorktown, VA; Welch, Benjamin L [Hampton, VA

    2008-10-28

    An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.

  19. CMOS Active-Pixel Image Sensor With Intensity-Driven Readout

    NASA Technical Reports Server (NTRS)

    Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina

    1996-01-01

    Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.

  20. Preliminary investigations of active pixel sensors in Nuclear Medicine imaging

    NASA Astrophysics Data System (ADS)

    Ott, Robert; Evans, Noel; Evans, Phil; Osmond, J.; Clark, A.; Turchetta, R.

    2009-06-01

    Three CMOS active pixel sensors have been investigated for their application to Nuclear Medicine imaging. Startracker with 525×525 25 μm square pixels has been coupled via a fibre optic stud to a 2 mm thick segmented CsI(Tl) crystal. Imaging tests were performed using 99mTc sources, which emit 140 keV gamma rays. The system was interfaced to a PC via FPGA-based DAQ and optical link enabling imaging rates of 10 f/s. System noise was measured to be >100e and it was shown that the majority of this noise was fixed pattern in nature. The intrinsic spatial resolution was measured to be ˜80 μm and the system spatial resolution measured with a slit was ˜450 μm. The second sensor, On Pixel Intelligent CMOS (OPIC), had 64×72 40 μm pixels and was used to evaluate noise characteristics and to develop a method of differentiation between fixed pattern and statistical noise. The third sensor, Vanilla, had 520×520 25 μm pixels and a measured system noise of ˜25e. This sensor was coupled directly to the segmented phosphor. Imaging results show that even at this lower level of noise the signal from 140 keV gamma rays is small as the light from the phosphor is spread over a large number of pixels. Suggestions for the 'ideal' sensor are made.

  1. Imaging properties of pixellated scintillators with deep pixels

    PubMed Central

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2015-01-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10×10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm × 1mm × 20 mm pixels) made by Proteus, Inc. with similar 10×10 arrays of LSO:Ce and BGO (1mm × 1mm × 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10×10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors. PMID:26236070

  2. Imaging properties of pixellated scintillators with deep pixels

    NASA Astrophysics Data System (ADS)

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2014-09-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10x10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm x 1mm x 20 mm pixels) made by Proteus, Inc. with similar 10x10 arrays of LSO:Ce and BGO (1mm x 1mm x 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10x10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors.

  3. High-sensitivity brain SPECT system using cadmium telluride (CdTe) semiconductor detector and 4-pixel matched collimator.

    PubMed

    Suzuki, Atsuro; Takeuchi, Wataru; Ishitsu, Takafumi; Tsuchiya, Katsutoshi; Morimoto, Yuichi; Ueno, Yuichiro; Kobashi, Keiji; Kubo, Naoki; Shiga, Tohru; Tamaki, Nagara

    2013-11-07

    For high-sensitivity brain imaging, we have developed a two-head single-photon emission computed tomography (SPECT) system using a CdTe semiconductor detector and a 4-pixel matched collimator (4-PMC). The term '4-PMC' indicates that the collimator hole size is matched to a 2 × 2 array of detector pixels. By contrast, a 1-pixel matched collimator (1-PMC) is defined as a collimator whose hole size is matched to one detector pixel. The performance of the higher-sensitivity 4-PMC was experimentally compared with that of the 1-PMC. The sensitivities of the 1-PMC and 4-PMC were 70 cps/MBq/head and 220 cps/MBq/head, respectively. The SPECT system using the 4-PMC provides superior image resolution in cold- and hot-rod phantoms with the same activity and scan time as the 1-PMC. In addition, with half the usual scan time the 4-PMC provides image quality comparable to that of the 1-PMC. Furthermore, (99m)Tc-ECD brain perfusion images of healthy volunteers obtained using the 4-PMC demonstrated acceptable image quality for clinical diagnosis. In conclusion, our CdTe SPECT system equipped with the higher-sensitivity 4-PMC can provide better spatial resolution than the 1-PMC either in half the imaging time with the same administered activity or, alternatively, in the same imaging time with half the activity.

  4. Terahertz imaging with compressive sensing

    NASA Astrophysics Data System (ADS)

    Chan, Wai Lam

    Most existing terahertz imaging systems are limited by slow image acquisition due to mechanical raster scanning. Other systems using focal plane detector arrays can acquire images in real time, but are either too costly or limited by low sensitivity in the terahertz frequency range. To design faster and more cost-effective terahertz imaging systems, the first part of this thesis proposes two new terahertz imaging schemes based on compressive sensing (CS). Both schemes can acquire amplitude and phase-contrast images efficiently with a single-pixel detector, thanks to powerful CS algorithms which enable the reconstruction of N-by-N pixel images with far fewer than N2 measurements. The first CS Fourier imaging approach successfully reconstructs a 64x64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels which define the image in the Fourier plane. Only about 12% of the pixels are required for reassembling the image of a selected object, equivalent to a 2/3 reduction in acquisition time. The second approach is single-pixel CS imaging, which uses a series of random masks for acquisition. Besides speeding up acquisition with a reduced number of measurements, the single-pixel system can further cut down acquisition time by electrical or optical spatial modulation of random patterns. In order to switch between random patterns at high speed in the single-pixel imaging system, the second part of this thesis implements a multi-pixel electrical spatial modulator for terahertz beams using active terahertz metamaterials. The first generation of this device consists of a 4x4 pixel array, where each pixel is an array of sub-wavelength-sized split-ring resonator elements fabricated on a semiconductor substrate, and is independently controlled by applying an external voltage. The spatial modulator has a uniform modulation depth of around 40 percent across all pixels, and negligible crosstalk, at the resonant frequency. The second-generation spatial terahertz modulator, also based on metamaterials but with a higher resolution (32x32), is under development. An FPGA-based circuit is designed to control the large number of modulator pixels. Once fully implemented, this second-generation device will enable fast terahertz imaging with both pulsed and continuous-wave terahertz sources.
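
    The single-pixel CS acquisition and recovery can be sketched generically (hypothetical Python using random on/off masks and ISTA for the sparse recovery; this is textbook CS, not the thesis' reconstruction code).

```python
import numpy as np

def single_pixel_cs(x_true, m_frac=0.25, n_iter=300, lam=0.05, rng=None):
    """Acquire M << N single-pixel measurements with random masks, then
    recover the signal with ISTA (assumes the signal itself is sparse)."""
    rng = rng or np.random.default_rng(0)
    n = x_true.size
    m = int(m_frac * n)
    A = rng.choice([0.0, 1.0], size=(m, n)) / np.sqrt(m)   # random on/off masks
    y = A @ x_true                                         # single-pixel detector readings
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2                 # gradient step size
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)                   # gradient step on ||y - Ax||^2
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return x

truth = np.zeros(256); truth[[10, 70, 200]] = [1.0, 0.8, 0.5]   # sparse test signal
estimate = single_pixel_cs(truth)
```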

  5. Detection systems for mass spectrometry imaging: a perspective on novel developments with a focus on active pixel detectors.

    PubMed

    Jungmann, Julia H; Heeren, Ron M A

    2013-01-15

    Instrumental developments for imaging and individual particle detection for biomolecular mass spectrometry (imaging) and fundamental atomic and molecular physics studies are reviewed. Ion-counting detectors, array detection systems and high mass detectors for mass spectrometry (imaging) are treated. State-of-the-art detection systems for multi-dimensional ion, electron and photon detection are highlighted. Their application and performance in three different imaging modes--integrated, selected and spectral image detection--are described. Electro-optical and microchannel-plate-based systems are contrasted. The analytical capabilities of solid-state pixel detectors--both charge coupled device (CCD) and complementary metal oxide semiconductor (CMOS) chips--are introduced. The Medipix/Timepix detector family is described as an example of a CMOS hybrid active pixel sensor. Alternative imaging methods for particle detection and their potential for future applications are investigated. Copyright © 2012 John Wiley & Sons, Ltd.

  6. High responsivity CMOS imager pixel implemented in SOI technology

    NASA Technical Reports Server (NTRS)

    Zheng, X.; Wrigley, C.; Yang, G.; Pain, B.

    2000-01-01

    Availability of mature sub-micron CMOS technology and the advent of the new low noise active pixel sensor (APS) concept have enabled the development of low power, miniature, single-chip, CMOS digital imagers in the decade of the 1990's.

  7. A digital pixel cell for address event representation image convolution processing

    NASA Astrophysics Data System (ADS)

    Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivative of activities, contrast, motion, edges,...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae,... Also, there has been a proposal for realizing programmable-kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital implementation reference against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable-kernel image convolution processing.
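
    The event-driven pixel behavior described above (weighted accumulation of incoming address events, firing an output event when a threshold is reached) reduces to a few lines; the following is a behavioral Python sketch with illustrative names, not the digital circuit itself.

```python
class ConvolutionPixel:
    """Digital AER convolution pixel: accumulate kernel-weighted events, fire on threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.accumulator = 0

    def receive_event(self, kernel_weight):
        """Add one incoming event's kernel weight; return True if an output event fires."""
        self.accumulator += kernel_weight
        if abs(self.accumulator) >= self.threshold:
            self.accumulator = 0          # reset after firing, integrate-and-fire style
            return True                   # event routed off-chip for further processing
        return False

pixel = ConvolutionPixel(threshold=8)
fired = [pixel.receive_event(w) for w in [3, 3, 3, -2, 4, 5]]   # -> third event fires
```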

  8. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.

  9. Automatic removal of cosmic ray signatures in Deep Impact images

    NASA Astrophysics Data System (ADS)

    Ipatov, S. I.; A'Hearn, M. F.; Klaasen, K. P.

    We analyzed the results of recognizing cosmic ray (CR) signatures on single images taken during the Deep Impact mission for several codes written by several authors. For automatic removal of CR signatures from many images, we suggest using the code imgclean (http://pdssbn.astro.umd.edu/volume/didoc_0001/document/calibration_software/dical_v5/) written by E. Deutsch, as the other codes considered do not work properly automatically with a large number of images and do not run to completion for some images; however, other codes can be better for analysis of certain specific images. Sometimes imgclean detects false CR signatures near the edge of a comet nucleus, and it often does not recognize all pixels of long CR signatures. Our code rmcr is the only code among those considered that allows one to work with raw images. For most visual images taken during low solar activity at exposure time t > 4 s, the number of clusters of bright pixels on an image per second per sq. cm of CCD was about 2-4, both for dark and normal sky images. At high solar activity, it sometimes exceeded 10. The ratio of the number of CR signatures consisting of n pixels obtained at high solar activity to that at low solar activity was greater for greater n. The number of clusters detected as CR signatures on a single infrared image is at least several times greater than the actual number of CR signatures; the number of clusters based on analysis of two successive dark infrared frames is in agreement with the expected number of CR signatures. Some false CR signatures are glitches, i.e. bright pixels repeatedly present on different infrared images. Our interactive code imr allows a user to choose regions of an image in which glitches detected by imgclean as CR signatures are ignored. In the other regions chosen by the user, the brightness of a pixel is replaced by the local median brightness if it exceeds the median brightness by some factor. The interactive code allows one to delete long CR signatures and prevents removal of false CR signatures near the edge of the comet nucleus. The interactive code can be applied to editing any digital images. The results obtained can be used for other missions to comets.
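
    The replace-by-local-median rule used by the interactive code can be sketched as below (assuming SciPy's median filter; the window size and threshold factor are placeholders, not the values used by imr).

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_cr_spikes(image, factor=5.0, size=5):
    """Replace pixels much brighter than their local median (candidate CR hits)."""
    local_median = median_filter(image, size=size)
    spikes = image > factor * np.maximum(local_median, 1e-6)  # guard against zero medians
    cleaned = image.copy()
    cleaned[spikes] = local_median[spikes]
    return cleaned, spikes
```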

  10. Low Power Camera-on-a-Chip Using CMOS Active Pixel Sensor Technology

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    A second generation image sensor technology has been developed at the NASA Jet Propulsion Laboratory as a result of the continuing need to miniaturize space science imaging instruments. Implemented using standard CMOS, the active pixel sensor (APS) technology permits the integration of the detector array with on-chip timing, control and signal chain electronics, including analog-to-digital conversion.

  11. CMOS Image Sensors for High Speed Applications.

    PubMed

    El-Desouki, Munir; Deen, M Jamal; Fang, Qiyin; Liu, Louis; Tse, Frances; Armstrong, David

    2009-01-01

    Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled devices (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions due to being fabricated in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4∼5 μm) due to limitations in the optics, CMOS technology scaling can allow for an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications allowing CMOS imagers to achieve the image quality and global shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout and simulation results of an ultrahigh acquisition rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second (fps).

  12. Microscope mode secondary ion mass spectrometry imaging with a Timepix detector.

    PubMed

    Kiss, Andras; Jungmann, Julia H; Smith, Donald F; Heeren, Ron M A

    2013-01-01

    In-vacuum active pixel detectors enable high sensitivity, highly parallel time- and space-resolved detection of ions from complex surfaces. For the first time, a Timepix detector assembly was combined with a secondary ion mass spectrometer for microscope mode secondary ion mass spectrometry (SIMS) imaging. Time resolved images from various benchmark samples demonstrate the imaging capabilities of the detector system. The main advantages of the active pixel detector are the higher signal-to-noise ratio and parallel acquisition of arrival time and position. Microscope mode SIMS imaging of biomolecules is demonstrated from tissue sections with the Timepix detector.

  13. 50 μm pixel pitch wafer-scale CMOS active pixel sensor x-ray detector for digital breast tomosynthesis.

    PubMed

    Zhao, C; Konstantinidis, A C; Zheng, Y; Anaxagoras, T; Speller, R D; Kanicki, J

    2015-12-07

    Wafer-scale CMOS active pixel sensors (APSs) have been developed recently for x-ray imaging applications. The small pixel pitch and low noise are very promising properties for medical imaging applications such as digital breast tomosynthesis (DBT). In this work, we evaluated experimentally and through modeling the imaging properties of a 50 μm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). A modified cascaded system model was developed for CMOS APS x-ray detectors by taking into account the device's nonlinear signal and noise properties. Imaging properties such as the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) were extracted from both measurements and the nonlinear cascaded system analysis. The results show that the DynAMITe x-ray detector achieves a high spatial resolution of 10 mm(-1) and a DQE of around 0.5 at spatial frequencies <1 mm(-1). In addition, the modeling results were used to calculate the image signal-to-noise ratio (SNRi) of microcalcifications at various mean glandular doses (MGD). For an average breast (5 cm thickness, 50% glandular fraction), 165 μm microcalcifications can be distinguished at a MGD 27% lower than the clinical value (~1.3 mGy). To detect 100 μm microcalcifications, further optimization of the CMOS APS x-ray detector, image acquisition geometry, and image reconstruction techniques should be considered.
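
    For background, the standard linear-systems relation connecting these three measured quantities (used here only as a reference point, not the modified nonlinear model developed in the paper) is

$$ \mathrm{DQE}(f) = \frac{\mathrm{MTF}^2(f)}{\bar{q}\,\mathrm{NNPS}(f)}, $$

    where $\bar{q}$ is the incident x-ray photon fluence (photons per unit area) and $\mathrm{NNPS}(f)$ is the noise power spectrum normalized by the squared mean signal.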

  14. Digital radiology using active matrix readout: amplified pixel detector array for fluoroscopy.

    PubMed

    Matsuura, N; Zhao, W; Huang, Z; Rowlands, J A

    1999-05-01

    Active matrix array technology has made possible the concept of flat panel imaging systems for radiography. In the conventional approach a thin-film circuit built on glass contains the necessary switching components (thin-film transistors or TFTs) to read out an image formed in either a phosphor or photoconductor layer. Extension of this concept to real time imaging--fluoroscopy--has had problems due to the very low noise levels required. A new design strategy for fluoroscopic active matrix flat panel detectors has therefore been investigated theoretically. In this approach, the active matrix has integrated thin-film amplifiers and readout electronics at each pixel and is called the amplified pixel detector array (APDA). Each amplified pixel consists of three thin-film transistors: an amplifier, a readout, and a reset TFT. The performance of the APDA approach compared to the conventional active matrix was investigated for two semiconductors commonly used to construct active matrix arrays--hydrogenated amorphous silicon and polycrystalline silicon. The results showed that with amplification close to the pixel, the noise from the external charge preamplifiers becomes insignificant. The thermal and flicker noise of the readout and the amplifying TFTs at the pixel become the dominant sources of noise. The magnitude of these noise sources is strongly dependent on the TFT geometry and its fabrication process. Both of these could be optimized to make the APDA active matrix operate at lower noise levels than is possible with the conventional approach. However, the APDA cannot be made to operate ideally (i.e., have noise limited only by the amount of radiation used) at the lowest exposure rate required in medical fluoroscopy.

  15. A neighbor pixel communication filtering structure for Dynamic Vision Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Liu, Shiqi; Lu, Hehui; Zhang, Zilong

    2017-02-01

    For Dynamic Vision Sensors (DVS), Background Activity (BA) induced by thermal noise and junction leakage current is the major cause of image quality deterioration. Inspired by the smoothing-filter principle of horizontal cells in the vertebrate retina, a DVS pixel with a Neighbor Pixel Communication (NPC) filtering structure is proposed to solve this issue. The NPC structure judges the validity of a pixel's activity through communication with its 4 adjacent pixels. A pixel's outputs are suppressed if its activity is judged not to be real. The proposed pixel's area is 23.76×24.71 μm2 and only 3 ns of output latency is introduced. To validate the effectiveness of the structure, a 5×5 pixel array has been implemented in a SMIC 0.13 μm CIS process. Three test cases of the array's behavioral model show that the NPC-DVS is able to filter out the BA.
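
    A software analogue of the neighbor-based validity check (similar in spirit to common DVS background-activity filters; a sketch, not the pixel circuit): an event is kept only if one of its 4 adjacent pixels has also fired within a short time window.

```python
import numpy as np

def npc_filter(events, shape, window_us=2000):
    """Keep an event only if a 4-connected neighbour fired within `window_us` microseconds.

    events : iterable of (timestamp_us, x, y) tuples, time-ordered
    """
    last_ts = np.full(shape, -np.inf)
    kept = []
    for t, x, y in events:
        neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
        supported = any(0 <= nx < shape[0] and 0 <= ny < shape[1]
                        and t - last_ts[nx, ny] <= window_us
                        for nx, ny in neighbours)
        if supported:
            kept.append((t, x, y))         # judged real activity, passed on
        last_ts[x, y] = t                  # isolated events still update the timestamp map
    return kept
```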

  16. Video Image Tracking Engine

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)

    2004-01-01

    A method and system for processing an image including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
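
    The data reduction described here, thresholding a row and keeping only each segment's first pixel, last pixel, and optional sums, can be sketched as follows (illustrative Python, not the patented hardware).

```python
def extract_segments(row, threshold):
    """Return (first, last, pixel_sum, weighted_sum) for each run of above-threshold pixels."""
    segments, start = [], None
    psum = wsum = 0.0
    for x, v in enumerate(row):
        if v > threshold:
            if start is None:
                start, psum, wsum = x, 0.0, 0.0
            psum += v
            wsum += v * x                    # weighted sum: value times x-location
        elif start is not None:
            segments.append((start, x - 1, psum, wsum))
            start = None
    if start is not None:                    # segment running to the end of the row
        segments.append((start, len(row) - 1, psum, wsum))
    return segments

print(extract_segments([0, 5, 7, 0, 0, 9, 9, 9], threshold=4))
```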

  17. The Multidimensional Integrated Intelligent Imaging project (MI-3)

    NASA Astrophysics Data System (ADS)

    Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.

    2009-06-01

    MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.

  18. An active-optics image-motion compensation technology application for high-speed searching and infrared detection system

    NASA Astrophysics Data System (ADS)

    Wu, Jianping; Lu, Fei; Zou, Kai; Yan, Hong; Wan, Min; Kuang, Yan; Zhou, Yanqing

    2018-03-01

    An ultra-high angular velocity, small-caliber, high-precision stable-control technology for active-optics image-motion compensation is put forward in this paper. The image blur caused by relative motion of several hundred °/s between the imaging system and the target is analyzed theoretically. A velocity-matching model of the detection system and the active-optics compensation system is built, and the experimental parameters of the active-optics image-motion compensation platform are designed. High-precision optical compensation under relative motion of several hundred °/s is studied and implemented. The relative motion velocity reaches 250°/s and the image-motion amplitude exceeds 20 pixels; after active-optics compensation, the motion blur is less than one pixel. The bottleneck of ultra-high angular velocity combined with long exposure time in searching and infrared detection systems is successfully broken through.

  19. Novel Si-Ge-C Superlattices for More than Moore CMOS

    DTIC Science & Technology

    2016-03-31

    diodes can be entirely formed by epitaxial growth, CMOS Active Pixel Sensors can be made with Fully-Depleted SOI CMOS. One important advantage of...a NMOS Transfer Gate (TG), which could be part of a 4T pixel APS. PPDs are preferred in CMOS image sensors for the ability of the pinning layer to...than Moore” with the creation of active photonic devices monolithically integrated with CMOS. Applications include Multispectral CMOS Image Sensors

  20. Pixel parallel localized driver design for a 128 x 256 pixel array 3D 1Gfps image sensor

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Dao, V. T. S.; Etoh, T. G.; Charbon, E.

    2017-02-01

    In this paper, a 3D 1Gfps BSI image sensor is proposed, where 128 × 256 pixels are located in the top-tier chip and a 32 × 32 localized driver array in the bottom-tier chip. Pixels are designed with Multiple Collection Gates (MCG), which collects photons selectively with different collection gates being active at intervals of 1ns to achieve 1Gfps. For the drivers, a global PLL is designed, which consists of a ring oscillator with 6-stage current starved differential inverters, achieving a wide frequency tuning range from 40MHz to 360MHz (20ps rms jitter). The drivers are the replicas of the ring oscillator that operates within a PLL. Together with level shifters and XNOR gates, continuous 3.3V pulses are generated with desired pulse width, which is 1/12 of the PLL clock period. The driver array is activated by a START signal, which propagates through a highly balanced clock tree, to activate all the pixels at the same time with virtually negligible skew.

  1. Biological tissue imaging with a position and time sensitive pixelated detector.

    PubMed

    Jungmann, Julia H; Smith, Donald F; MacAleese, Luke; Klinkert, Ivo; Visser, Jan; Heeren, Ron M A

    2012-10-01

    We demonstrate the capabilities of a highly parallel, active pixel detector for large-area, mass spectrometric imaging of biological tissue sections. A bare Timepix assembly (512 × 512 pixels) is combined with chevron microchannel plates on an ion microscope matrix-assisted laser desorption time-of-flight mass spectrometer (MALDI TOF-MS). The detector assembly registers position- and time-resolved images of multiple m/z species in every measurement frame. We prove the applicability of the detection system to biomolecular mass spectrometry imaging on biologically relevant samples by mass-resolved images from Timepix measurements of a peptide-grid benchmark sample and mouse testis tissue slices. Mass-spectral and localization information of analytes at physiologic concentrations are measured in MALDI-TOF-MS imaging experiments. We show a high spatial resolution (pixel size down to 740 × 740 nm(2) on the sample surface) and a spatial resolving power of 6 μm with a microscope mode laser field of view of 100-335 μm. Automated, large-area imaging is demonstrated and the Timepix' potential for fast, large-area image acquisition is highlighted.

  2. Three-dimensional cascaded system analysis of a 50 µm pixel pitch wafer-scale CMOS active pixel sensor x-ray detector for digital breast tomosynthesis.

    PubMed

    Zhao, C; Vassiljev, N; Konstantinidis, A C; Speller, R D; Kanicki, J

    2017-03-07

    High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g.  ±30°) improves the low spatial frequency (below 5 mm -1 ) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.

  3. Three-dimensional cascaded system analysis of a 50 µm pixel pitch wafer-scale CMOS active pixel sensor x-ray detector for digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zhao, C.; Vassiljev, N.; Konstantinidis, A. C.; Speller, R. D.; Kanicki, J.

    2017-03-01

    High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g.  ±30°) improves the low spatial frequency (below 5 mm-1) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.

  4. CMOS Active Pixel Sensors for Low Power, Highly Miniaturized Imaging Systems

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.

    1996-01-01

    The complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology has been developed over the past three years by NASA at the Jet Propulsion Laboratory, and has reached a level of performance comparable to CCDs with greatly increased functionality but at a very reduced power level.

  5. Method and system for non-linear motion estimation

    NASA Technical Reports Server (NTRS)

    Lu, Ligang (Inventor)

    2011-01-01

    A method and system for extrapolating and interpolating a visual signal, including determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector, using a non-linear model, from one of (a) the first pixel position in the first image and the second pixel position in the second image, and (b) the second pixel position in the second image and the third pixel position in the third image, and determining a position of a fourth pixel in a fourth image based upon the third motion vector.
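
    One simple second-order ("non-linear") extrapolation consistent with this description, offered purely as an illustration of predicting a fourth pixel position from three known ones:

```python
def predict_fourth_position(p1, p2, p3):
    """Quadratic extrapolation: assume constant change between successive motion vectors."""
    v1 = (p2[0] - p1[0], p2[1] - p1[1])        # first motion vector
    v2 = (p3[0] - p2[0], p3[1] - p2[1])        # second motion vector
    accel = (v2[0] - v1[0], v2[1] - v1[1])     # change between the two vectors
    v3 = (v2[0] + accel[0], v2[1] + accel[1])  # third (extrapolated, non-linear) motion vector
    return (p3[0] + v3[0], p3[1] + v3[1])

print(predict_fourth_position((0, 0), (1, 1), (3, 4)))   # -> (6, 9)
```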

  6. Curiosity's Mars Hand Lens Imager (MAHLI): Initial Observations and Activities

    NASA Technical Reports Server (NTRS)

    Edgett, K. S.; Yingst, R. A.; Minitti, M. E.; Robinson, M. L.; Kennedy, M. R.; Lipkaman, L. J.; Jensen, E. H.; Anderson, R. C.; Bean, K. M.; Beegle, L. W.

    2013-01-01

    MAHLI (Mars Hand Lens Imager) is a 2-megapixel focusable macro lens color camera on the turret on Curiosity's robotic arm. The investigation centers on stratigraphy, grain-scale texture, structure, mineralogy, and morphology of geologic materials at Curiosity's Gale robotic field site. MAHLI acquires focused images at working distances of 2.1 cm to infinity; for reference, at 2.1 cm the scale is 14 microns/pixel; at 6.9 cm it is 31 microns/pixel, like the Spirit and Opportunity Microscopic Imager (MI) cameras.

  7. Sub-pixel mapping of hyperspectral imagery using super-resolution

    NASA Astrophysics Data System (ADS)

    Sharma, Shreya; Sharma, Shakti; Buddhiraju, Krishna M.

    2016-04-01

    With the development of remote sensing technologies, it has become possible to obtain an overview of landscape elements, which helps in studying the changes on the earth's surface due to climatic, geological, geomorphological and human activities. Remote sensing measures the electromagnetic radiation from the earth's surface and matches the observed signature against the known standard signatures of the various targets. However, a problem arises when image classification techniques assume pixels to be pure. In hyperspectral imagery, images have high spectral resolution but poor spatial resolution. Therefore, the spectra obtained are often contaminated by the presence of mixed pixels, which causes misclassification. To utilise this high spectral information, the spatial resolution has to be enhanced. Many factors make spatial resolution one of the most expensive and hardest characteristics to improve in imaging systems. To solve this problem, post-processing of hyperspectral images is done to retrieve more information from the already acquired images. The approach of enhancing the spatial resolution of images by dividing them into sub-pixels is known as super-resolution, and substantial research has been done in this domain. In this paper, we propose a new method for super-resolution based on ant colony optimization and review the popular methods of sub-pixel mapping of hyperspectral images along with their comparative analysis.
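
    The mixed-pixel problem above is usually attacked first by estimating sub-pixel class fractions. The sketch below shows only that first stage, using linear spectral unmixing by non-negative least squares with synthetic endmember spectra; the paper's ant-colony sub-pixel placement step is not reproduced.

```python
# Hedged sketch: estimate the fractional abundance of each known endmember in
# one mixed hyperspectral pixel by non-negative least squares. The endmember
# spectra are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

endmembers = np.array([[0.10, 0.40, 0.60, 0.30],    # e.g. vegetation (assumed)
                       [0.30, 0.30, 0.30, 0.30],    # e.g. soil (assumed)
                       [0.05, 0.05, 0.10, 0.02]]).T # e.g. water; shape (bands, classes)

mixed_pixel = 0.6 * endmembers[:, 0] + 0.4 * endmembers[:, 1]
abundances, residual = nnls(endmembers, mixed_pixel)
print(abundances)   # approximately [0.6, 0.4, 0.0]
```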

  8. Chromatic Modulator for a High-Resolution CCD or APS

    NASA Technical Reports Server (NTRS)

    Hartley, Frank; Hull, Anthony

    2008-01-01

    A chromatic modulator has been proposed to enable the separate detection of the red, green, and blue (RGB) color components of the same scene by a single charge-coupled device (CCD), active-pixel sensor (APS), or similar electronic image detector. Traditionally, the RGB color-separation problem in an electronic camera has been solved by use of either (1) fixed color filters over three separate image detectors; (2) a filter wheel that repeatedly imposes a red, then a green, then a blue filter over a single image detector; or (3) different fixed color filters over adjacent pixels. The use of separate image detectors necessitates precise registration of the detectors and the use of complicated optics; filter wheels are expensive and add considerably to the bulk of the camera; and fixed pixelated color filters reduce spatial resolution and introduce color-aliasing effects. The proposed chromatic modulator would not exhibit any of these shortcomings. The proposed chromatic modulator would be an electromechanical device fabricated by micromachining. It would include a filter having a spatially periodic pattern of RGB strips at a pitch equal to that of the pixels of the image detector. The filter would be placed in front of the image detector, supported at its periphery by a spring suspension and electrostatic comb drive. The spring suspension would bias the filter toward a middle position in which each filter strip would be registered with a row of pixels of the image detector. Hard stops would limit the excursion of the spring suspension to precisely one pixel row above and one pixel row below the middle position. In operation, the electrostatic comb drive would be actuated to repeatedly snap the filter to the upper extreme, middle, and lower extreme positions. This action would repeatedly place a succession of the differently colored filter strips in front of each pixel of the image detector. To simplify the processing, it would be desirable to encode information on the color of the filter strip over each row (or at least over some representative rows) of pixels at a given instant of time in synchronism with the pixel output at that instant.

  9. Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications

    NASA Technical Reports Server (NTRS)

    Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Staller, C.; Zhou, Z.

    1994-01-01

    JPL, under sponsorship from the NASA Office of Advanced Concepts and Technology, has been developing a second-generation solid-state image sensor technology. Charge-coupled devices (CCD) are a well-established first generation image sensor technology. For both commercial and NASA applications, CCDs have numerous shortcomings. In response, the active pixel sensor (APS) technology has been under research. The major advantages of APS technology are the ability to integrate on-chip timing, control, signal-processing and analog-to-digital converter functions, reduced sensitivity to radiation effects, low power operation, and random access readout.

  10. Can direct electron detectors outperform phosphor-CCD systems for TEM?

    NASA Astrophysics Data System (ADS)

    Moldovan, G.; Li, X.; Kirkland, A.

    2008-08-01

    A new generation of imaging detectors is being considered for application in TEM, but which device architectures can provide the best images? Monte Carlo simulations of the electron-sensor interaction are used here to calculate the expected modulation transfer of monolithic active pixel sensors (MAPS), hybrid active pixel sensors (HAPS) and double-sided silicon strip detectors (DSSD), showing that ideal and nearly ideal transfer can be obtained using DSSD and MAPS sensors. These results strongly favour replacing current phosphor-screen and charge-coupled-device imaging systems with such directly exposed, position-sensitive electron detectors.

  11. Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications

    NASA Technical Reports Server (NTRS)

    Fossum, E.; Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Zhou, Z.

    1994-01-01

    This paper describes ongoing research and development of CMOS active pixel image sensors for low cost commercial applications. A number of sensor designs have been fabricated and tested in both p-well and n-well technologies. Major elements in the development of the sensor include on-chip analog signal processing circuits for the reduction of fixed pattern noise, on-chip timing and control circuits and on-chip analog-to-digital conversion (ADC). Recent results and continuing efforts in these areas will be presented.

  12. Digital radiography using amorphous selenium: photoconductively activated switch (PAS) readout system.

    PubMed

    Reznik, Nikita; Komljenovic, Philip T; Germann, Stephen; Rowlands, John A

    2008-03-01

    A new amorphous selenium (a-Se) digital radiography detector is introduced. The proposed detector generates a charge image in the a-Se layer in a conventional manner, which is stored on electrode pixels at the surface of the a-Se layer. A novel method, called the photoconductively activated switch (PAS), is used to read out the latent x-ray charge image. The PAS readout method uses lateral photoconduction at the a-Se surface, which is a revolutionary modification of the bulk photoinduced discharge (PID) methods. The PAS method addresses and eliminates the fundamental weaknesses of the PID methods (long readout times and high readout noise) while maintaining the structural simplicity and high resolution for which PID optical readout systems are noted. The photoconduction properties of the a-Se surface were investigated and the geometrical design of the electrode pixels for a PAS radiography system was determined. This design was implemented in a single-pixel PAS evaluation system. The results show that the PAS x-ray-induced output charge signal was reproducible and depended linearly on the x-ray exposure in the diagnostic exposure range. Furthermore, the readout was reasonably rapid (10 ms for pixel discharge). The proposed detector allows readout of half a pixel row at a time (odd pixels followed by even pixels), thus permitting the readout of a complete image in 30 s for a 40 cm × 40 cm detector, with the potential of reducing that time by using greater readout light intensity. This demonstrates that a-Se based x-ray detectors using photoconductively activated switches could form a basis for a practical integrated digital radiography system.

  13. Exploring the Hidden Structure of Astronomical Images: A "Pixelated" View of Solar System and Deep Space Features!

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Sienkiewicz, Frank; Sadler, Philip; Antonucci, Paul; Miller, Jaimie

    2013-01-01

    We describe activities created to help student participants in Project ITEAMS (Innovative Technology-Enabled Astronomy for Middle Schools) develop a deeper understanding of picture elements (pixels), image creation, and analysis of the recorded data. ITEAMS is an out-of-school time (OST) program funded by the National Science Foundation (NSF) with…

  14. Tests of monolithic active pixel sensors at national synchrotron light source

    NASA Astrophysics Data System (ADS)

    Deptuch, G.; Besson, A.; Carini, G. A.; Siddons, D. P.; Szelezniak, M.; Winter, M.

    2007-01-01

    The paper discusses basic characterization of Monolithic Active Pixel Sensors (MAPS) carried out at the X12A beam-line at the National Synchrotron Light Source (NSLS), Upton, NY, USA. The tested device was a MIMOSA V (MV) chip, back-thinned down to the epitaxial layer. This 1M-pixel device features a pixel size of 17×17 μm² and was designed in a 0.6 μm CMOS process. The X-ray beam energies used ranged from 5 to 12 keV. Examples of direct X-ray imaging capabilities are presented.

  15. Development of a novel direct X-ray detector using photoinduced discharge (PID) readout for digital radiography

    NASA Astrophysics Data System (ADS)

    Heo, D.; Jeon, S.; Kim, J.-S.; Kim, R. K.; Cha, B. K.; Moon, B. J.; Yoon, J.

    2013-02-01

    We developed a novel direct X-ray detector using photoinduced discharge (PID) readout for digital radiography. The pixel format is 512 × 512 with a 200 μm pixel pitch, and the overall active area of the X-ray imaging panel is 10.24 cm × 10.24 cm. The detector consists of an X-ray absorption layer of amorphous selenium, a charge accumulation layer of metal, and a PID readout layer of amorphous silicon. In particular, the charge accumulation layer is pixelated because the image charges generated by X-rays must be stored pixel by pixel. Here the image charges, or holes, are recombined with electrons generated by the PID method. We used a 405 nm laser diode and a cylindrical lens to make a line beam source with a width of 50 μm for PID readout, which generates charges for each pixel line during the scan. We obtained spatial frequencies of about 1.0 lp/mm for the X-direction (lateral direction) and 0.9 lp/mm for the Y-direction (scanning direction) at 50% modulation transfer function.

  16. PixelLearn

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph

    2006-01-01

    PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.

  17. Functional magnetic resonance imaging of visual object construction and shape discrimination : relations among task, hemispheric lateralization, and gender.

    PubMed

    Georgopoulos, A P; Whang, K; Georgopoulos, M A; Tagaris, G A; Amirikian, B; Richter, W; Kim, S G; Uğurbil, K

    2001-01-01

    We studied the brain activation patterns in two visual image processing tasks requiring judgements on object construction (FIT task) or object sameness (SAME task). Eight right-handed healthy human subjects (four women and four men) performed the two tasks in a randomized block design while 5-mm, multislice functional images of the whole brain were acquired on a 4-tesla system using blood oxygenation dependent (BOLD) activation. Pairs of objects were picked randomly from a set of 25 oriented fragments of a square and presented to the subjects approximately every 5 sec. In the FIT task, subjects had to indicate, by pushing one of two buttons, whether the two fragments could match to form a perfect square, whereas in the SAME task they had to decide whether they were the same or not. In a control task, preceding and following each of the two tasks above, a single square was presented at the same rate and subjects pushed either of the two keys at random. Functional activation maps were constructed based on a combination of conservative criteria. The areas with activated pixels were identified using Talairach coordinates and anatomical landmarks, and the number of activated pixels was determined for each area. Altogether, 379 pixels were activated. The counts of activated pixels did not differ significantly between the two tasks or between the two genders. However, there were significantly more activated pixels in the left (n = 218) than the right side of the brain (n = 161). Of the 379 activated pixels, 371 were located in the cerebral cortex. The Talairach coordinates of these pixels were analyzed with respect to their overall distribution in the two tasks. These distributions differed significantly between the two tasks. With respect to individual dimensions, the two tasks differed significantly in the anterior-posterior and superior-inferior distributions but not in the left-right (including mediolateral, within the left or right side) distribution. Specifically, the FIT distribution was, overall, more anterior and inferior than that of the SAME task. A detailed analysis of the counts and spatial distributions of activated pixels was carried out for 15 brain areas (all in the cerebral cortex) in which consistent activation (in ≥3 subjects) was observed (n = 323 activated pixels). We found the following. Except for the inferior temporal gyrus, which was activated exclusively in the FIT task, all other areas showed activation in both tasks but to different extents. Based on the extent of activation, areas fell within two distinct groups (FIT or SAME) depending on which pixel count (i.e., FIT or SAME) was greater. The FIT group consisted of the following areas, in decreasing FIT/SAME order (brackets indicate ties): GTi, GTs, GC, GFi, GFd, [GTm, GF], GO. The SAME group consisted of the following areas, in decreasing SAME/FIT order: GOi, LPs, Sca, GPrC, GPoC, [GFs, GFm]. These results indicate that there are distributed, graded, and partially overlapping patterns of activation during performance of the two tasks. We attribute these overlapping patterns of activation to the engagement of partially shared processes. Activated pixels fell into three types of clusters: FIT-only (111 pixels), SAME-only (97 pixels), and FIT + SAME (115 pixels). Pixels contained in FIT-only and SAME-only clusters were distributed approximately equally between the left and right hemispheres, whereas pixels in the SAME + FIT clusters were located mostly in the left hemisphere.
With respect to gender, the left-right distribution of activated pixels was very similar in women and men for the SAME-only and FIT + SAME clusters but differed for the FIT-only case in which there was a prominent left side preponderance for women, in contrast to a right side preponderance for men. We conclude that (a) cortical mechanisms common for processing visual object construction and discrimination involve mostly the left hemisphere, (b) cortical mechanisms specific for these tasks engage both hemispheres, and (c) in object construction only, men engage predominantly the right hemisphere whereas women show a left-hemisphere preponderance.

  18. Chromatic Modulator for High Resolution CCD or APS Devices

    NASA Technical Reports Server (NTRS)

    Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)

    2003-01-01

    A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.

  19. Study of current-mode active pixel sensor circuits using amorphous InSnZnO thin-film transistor for 50-μm pixel-pitch indirect X-ray imagers

    NASA Astrophysics Data System (ADS)

    Cheng, Mao-Hsun; Zhao, Chumin; Kanicki, Jerzy

    2017-05-01

    Current-mode active pixel sensor (C-APS) circuits based on amorphous indium-tin-zinc-oxide thin-film transistors (a-ITZO TFTs) are proposed for indirect X-ray imagers. The proposed C-APS circuits include a combination of a hydrogenated amorphous silicon (a-Si:H) p+-i-n+ photodiode (PD) and a-ITZO TFTs. Source-output (SO) and drain-output (DO) C-APS are investigated and compared. Acceptable signal linearity and high gains are realized for the SO C-APS. APS circuit characteristics, including voltage gain, charge gain, signal linearity, charge-to-current conversion gain, and electron-to-voltage conversion gain, are evaluated. The impact of a-ITZO TFT threshold voltage shifts on the C-APS is also considered. A layout for a pixel pitch of 50 μm and an associated fabrication process are suggested. Data line loadings for 4k-resolution X-ray imagers are computed and their impact on circuit performance is taken into consideration. Noise analysis is performed, showing a total input-referred noise of 239 e⁻.

  20. 18F-FDG positron autoradiography with a particle counting silicon pixel detector.

    PubMed

    Russo, P; Lauria, A; Mettivier, G; Montesi, M C; Marotta, M; Aloj, L; Lastoria, S

    2008-11-07

    We report on tests of a room-temperature particle counting silicon pixel detector of the Medipix2 series as the detector unit of a positron autoradiography (AR) system, for samples labelled with ¹⁸F-FDG radiopharmaceutical used in PET studies. The silicon detector (1.98 cm² sensitive area, 300 μm thick) has high intrinsic resolution (55 μm pitch) and works by counting all hits in a pixel above a certain energy threshold. The present work extends the detector characterization with ¹⁸F-FDG of a previous paper. We analysed the system's linearity, dynamic range, sensitivity, background count rate, noise, and its imaging performance on biological samples. Tests have been performed in the laboratory with ¹⁸F-FDG drops (37-37 000 Bq initial activity) and ex vivo in a rat injected with 88.8 MBq of ¹⁸F-FDG. Particles interacting in the detector volume produced a hit in a cluster of pixels whose mean size was 4.3 pixels/event at 11 keV threshold and 2.2 pixels/event at 37 keV threshold. Results show a sensitivity for β⁺ of 0.377 cps Bq⁻¹, a dynamic range of at least five orders of magnitude and a lower detection limit of 0.0015 Bq mm⁻². Real-time ¹⁸F-FDG positron AR images have been obtained in 500-1000 s exposure time of thin (10-20 μm) slices of a rat brain and compared with 20 h film autoradiography of adjacent slices. The analysis of the image contrast and signal-to-noise ratio in a rat brain slice indicated that Poisson noise-limited imaging can be approached in short (e.g. 100 s) exposures, with approximately 100 Bq slice activity, and that the silicon pixel detector produced a higher image quality than film-based AR.

  1. a-Si:H TFT-silicon hybrid low-energy x-ray detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Kyung-Wook; Karim, Karim S.

    Direct conversion crystalline silicon X-ray imagers are used for low-energy X-ray photon (4-20 keV) detection in scientific research applications such as protein crystallography. In this paper, we demonstrate a novel pixel architecture that integrates a crystalline silicon X-ray detector with a thin-film transistor amorphous silicon pixel readout circuit. We describe a simplified two-mask process to fabricate a complete imaging array and present preliminary results that show the fabricated pixel to be sensitive to 5.89-keV photons from a low-activity Fe-55 gamma source. Furthermore, the work presented in this paper can expedite the development of high spatial resolution, low cost, direct conversion imagers for X-ray diffraction and crystallography applications.

  2. Analysis and Enhancement of Low-Light-Level Performance of Photodiode-Type CMOS Active Pixel Imagers Operated with Sub-Threshold Reset

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Yang, Guang; Ortiz, Monico; Wrigley, Christopher; Hancock, Bruce; Cunningham, Thomas

    2000-01-01

    Noise in photodiode-type CMOS active pixel sensors (APS) is primarily due to the reset (kTC) noise at the sense node, since it is difficult to implement in-pixel correlated double sampling for a 2-D array. The signal integrated on the photodiode sense node (SENSE) is calculated by measuring the difference between the voltages on the column bus (COL) before and after the reset (RST) is pulsed. Lower-than-kTC noise can be achieved with photodiode-type pixels by employing the "soft-reset" technique. Soft reset refers to resetting with both the drain and the gate of the n-channel reset transistor kept at the same potential, causing the sense node to be reset using sub-threshold MOSFET current. However, this lowering of noise is achieved only at the expense of higher image lag and low-light-level non-linearity. In this paper, we present an analysis to explain the noise behavior, show evidence of degraded performance under low light levels, and describe new pixels that eliminate non-linearity and lag without compromising noise.
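
    A back-of-the-envelope sketch of the noise levels being discussed, assuming an illustrative 10 fF sense-node capacitance and the commonly cited result that soft reset roughly halves the kTC noise power; both numbers are assumptions for illustration, not values taken from the paper.

```python
# Hedged sketch: hard-reset (kTC) noise on a photodiode sense node, expressed
# in electrons, and the roughly sqrt(2) reduction often attributed to
# sub-threshold ("soft") reset. The capacitance is an assumed value.
import math

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # electron charge, C

def ktc_noise_electrons(cap_farads, temp_kelvin=300.0):
    return math.sqrt(k * temp_kelvin * cap_farads) / q

C = 10e-15                                   # 10 fF sense-node capacitance (assumed)
hard = ktc_noise_electrons(C)                # ~40 e- rms
soft = hard / math.sqrt(2)                   # soft reset: noise power ~ kTC/2
print(f"hard reset: {hard:.1f} e-, soft reset: {soft:.1f} e-")
```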

  3. Amorphous In–Ga–Zn–O thin-film transistor active pixel sensor x-ray imager for digital breast tomosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Chumin; Kanicki, Jerzy, E-mail: kanicki@eecs.umich.edu

    Purpose: The breast cancer detection rate for digital breast tomosynthesis (DBT) is limited by the x-ray image quality. The limiting Nyquist frequency for current DBT systems is around 5 lp/mm, while the fine image details contained in the high spatial frequency region (>5 lp/mm) are lost. Also, today's tomosynthesis patient dose is high (0.67–3.52 mGy). To address current issues, in this paper, for the first time, a high-resolution low-dose organic photodetector/amorphous In–Ga–Zn–O thin-film transistor (a-IGZO TFT) active pixel sensor (APS) x-ray imager is proposed for next generation DBT systems. Methods: The indirect x-ray detector is based on a combination of a novel low-cost organic photodiode (OPD) and a cesium iodide-based (CsI:Tl) scintillator. The proposed APS x-ray imager overcomes the difficulty of weak signal detection, when small pixel size and low exposure conditions are used, by an on-pixel signal amplification with a significant charge gain. The electrical performance of the a-IGZO TFT APS pixel circuit is investigated by SPICE simulation using a modified Rensselaer Polytechnic Institute amorphous silicon (a-Si:H) TFT model. Finally, the noise, detective quantum efficiency (DQE), and resolvability of the complete system are modeled using the cascaded system formalism. Results: The results demonstrate that a large charge gain of 31–122 is achieved for the proposed high-mobility (5–20 cm²/V s) amorphous metal-oxide TFT APS. The charge gain is sufficient to eliminate the TFT thermal noise and flicker noise as well as the external readout circuit noise. Moreover, the low TFT (<10⁻¹³ A) and OPD (<10⁻⁸ A/cm²) leakage currents can further reduce the APS noise. Cascaded system analysis shows that the proposed APS imager with a 75 μm pixel pitch can effectively resolve the Nyquist frequency of 6.67 lp/mm, which can be further improved to ∼10 lp/mm if the pixel pitch is reduced to 50 μm. Moreover, the detector entrance exposure per projection can be reduced from 1 to 0.3 mR without a significant reduction of DQE. The signal-to-noise ratio of the a-IGZO APS imager under 0.3 mR x-ray exposure is comparable to that of an a-Si:H passive pixel sensor imager under 1 mR, indicating good image quality under low dose. A threefold reduction of the current tomosynthesis dose is expected if the proposed technology is combined with an advanced DBT image reconstruction method. Conclusions: The proposed a-IGZO APS x-ray imager with a pixel pitch <75 μm is capable of achieving a high spatial frequency (>6.67 lp/mm) and a low dose (<0.4 mGy) in next generation DBT systems.

  4. Amorphous In-Ga-Zn-O thin-film transistor active pixel sensor x-ray imager for digital breast tomosynthesis.

    PubMed

    Zhao, Chumin; Kanicki, Jerzy

    2014-09-01

    The breast cancer detection rate for digital breast tomosynthesis (DBT) is limited by the x-ray image quality. The limiting Nyquist frequency for current DBT systems is around 5 lp/mm, while the fine image details contained in the high spatial frequency region (>5 lp/mm) are lost. Also, today's tomosynthesis patient dose is high (0.67-3.52 mGy). To address current issues, in this paper, for the first time, a high-resolution low-dose organic photodetector/amorphous In-Ga-Zn-O thin-film transistor (a-IGZO TFT) active pixel sensor (APS) x-ray imager is proposed for next generation DBT systems. The indirect x-ray detector is based on a combination of a novel low-cost organic photodiode (OPD) and a cesium iodide-based (CsI:Tl) scintillator. The proposed APS x-ray imager overcomes the difficulty of weak signal detection, when small pixel size and low exposure conditions are used, by an on-pixel signal amplification with a significant charge gain. The electrical performance of the a-IGZO TFT APS pixel circuit is investigated by SPICE simulation using a modified Rensselaer Polytechnic Institute amorphous silicon (a-Si:H) TFT model. Finally, the noise, detective quantum efficiency (DQE), and resolvability of the complete system are modeled using the cascaded system formalism. The results demonstrate that a large charge gain of 31-122 is achieved for the proposed high-mobility (5-20 cm²/V s) amorphous metal-oxide TFT APS. The charge gain is sufficient to eliminate the TFT thermal noise and flicker noise as well as the external readout circuit noise. Moreover, the low TFT (<10⁻¹³ A) and OPD (<10⁻⁸ A/cm²) leakage currents can further reduce the APS noise. Cascaded system analysis shows that the proposed APS imager with a 75 μm pixel pitch can effectively resolve the Nyquist frequency of 6.67 lp/mm, which can be further improved to ∼10 lp/mm if the pixel pitch is reduced to 50 μm. Moreover, the detector entrance exposure per projection can be reduced from 1 to 0.3 mR without a significant reduction of DQE. The signal-to-noise ratio of the a-IGZO APS imager under 0.3 mR x-ray exposure is comparable to that of an a-Si:H passive pixel sensor imager under 1 mR, indicating good image quality under low dose. A threefold reduction of the current tomosynthesis dose is expected if the proposed technology is combined with an advanced DBT image reconstruction method. The proposed a-IGZO APS x-ray imager with a pixel pitch <75 μm is capable of achieving a high spatial frequency (>6.67 lp/mm) and a low dose (<0.4 mGy) in next generation DBT systems.

  5. Vectorized image segmentation via trixel agglomeration

    DOEpatents

    Prasad, Lakshman [Los Alamos, NM]; Skourikhine, Alexei N. [Los Alamos, NM]

    2006-10-24

    A computer-implemented method transforms an image composed of pixels into a vectorized image specified by a plurality of polygons that can be subsequently used to aid in image processing and understanding. The pixelated image is processed to extract edge pixels that separate different colors, and a constrained Delaunay triangulation of the edge pixels forms a plurality of triangles having edges that cover the pixelated image. A color for each one of the plurality of triangles is determined from the color pixels within each triangle. A filter is formed with a set of grouping rules related to features of the pixelated image and applied to the plurality of triangle edges to merge adjacent triangles consistent with the filter into polygons having a plurality of vertices. The pixelated image may then be reformed into an array of the polygons, which can be represented collectively and efficiently as a standard vector image.
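
    A minimal sketch of the first stages of this pipeline, with several simplifications: a crude gradient threshold stands in for the edge extraction, an unconstrained Delaunay triangulation stands in for the constrained one, each triangle's color is sampled at its centroid rather than averaged, and the rule-based agglomeration into polygons is omitted.

```python
# Hedged sketch of triangle ("trixel") construction over edge pixels; the
# threshold, the unconstrained triangulation, and centroid color sampling are
# illustrative simplifications of the patent's method.
import numpy as np
from scipy.spatial import Delaunay

def trixels(image_gray):
    gy, gx = np.gradient(image_gray.astype(float))
    mag = np.hypot(gx, gy)
    pts = np.argwhere(mag > 0.1 * mag.max())       # crude edge pixels (row, col)
    tri = Delaunay(pts)                            # triangulation of edge pixels
    centroids = pts[tri.simplices].mean(axis=1)    # one centroid per triangle
    colors = image_gray[centroids[:, 0].astype(int),
                        centroids[:, 1].astype(int)]
    return tri, colors

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0  # toy two-region image
tri, colors = trixels(img)
print(len(tri.simplices), "triangles,", np.unique(np.round(colors, 2)))
```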

  6. Gullies in Winter Shadow

    NASA Image and Video Library

    2017-03-21

    This is an odd-looking image. It shows gullies during the winter while entirely in the shadow of the crater wall. Illumination comes only from the winter skylight. We acquire such images because gullies on Mars actively form in the winter when there is carbon dioxide frost on the ground, so we image them in the winter, even though not well illuminated, to look for signs of activity. The dark streaks might be signs of current activity, removing the frost, but further analysis is needed. NB: North is down in the cutout, and the terrain slopes towards the bottom of the image. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 62.3 centimeters (24.5 inches) per pixel (with 2 x 2 binning); objects on the order of 187 centimeters (73.6 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21568

  7. Selective document image data compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1998-05-19

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
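
    A minimal sketch of the gamma-correct-then-threshold step only, with placeholder gamma and threshold values; the edge-filled second image, the Huffman table, and the decimation/interpolation stages described above are not reproduced.

```python
# Hedged sketch: gamma-correct a scanned page, then binarize it so dark
# user-entered marks go to black and the lighter form background goes to
# white. Gamma and threshold values are illustrative assumptions.
import numpy as np

def binarize(scan, gamma=0.8, threshold=0.5):
    norm = scan.astype(float) / 255.0
    corrected = norm ** gamma                 # contrast enhancement by gamma
    return np.where(corrected < threshold, 0, 255).astype(np.uint8)

page = np.full((8, 8), 230, dtype=np.uint8)   # light form background
page[2:4, 2:6] = 40                           # dark handwritten stroke
print(binarize(page))
```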

  8. Selective document image data compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1998-01-01

    A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.

  9. Image Processing for Binarization Enhancement via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A. (Inventor)

    2009-01-01

    A technique for enhancing a gray-scale image to improve conversions of the image to binary employs fuzzy reasoning. In the technique, each pixel in the image is analyzed by comparing its gray-scale value, which is indicative of its relative brightness, to the values of the pixels immediately surrounding it. The degree to which each pixel differs in value from its surrounding pixels is employed as the variable in a fuzzy reasoning-based analysis that determines an appropriate amount by which that pixel's value should be adjusted to reduce vagueness and ambiguity in the image and improve retention of information during binarization of the enhanced gray-scale image.
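
    A minimal sketch of the idea, assuming a tanh-shaped membership function and a fixed gain; the patent's actual rule base and adjustment amounts are not reproduced. Each pixel is compared with its 3×3 neighbourhood and pushed toward the nearer extreme in proportion to its fuzzy degree of being brighter or darker than its surroundings.

```python
# Hedged sketch: pixels that differ from their 3x3 neighbourhood are nudged
# toward black or white, sharpening the image before a later binarization.
# The membership shape and gain are placeholders.
import numpy as np
from scipy.ndimage import uniform_filter

def fuzzy_enhance(gray, gain=0.5):
    gray = gray.astype(float)
    local_mean = uniform_filter(gray, size=3)     # mean of the 3x3 neighbourhood
    diff = gray - local_mean                      # positive if brighter than it
    degree = np.tanh(diff / 32.0)                 # fuzzy degree in (-1, 1)
    out = gray + gain * degree * 127.5            # nudge toward black or white
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((8, 8), 128, dtype=np.uint8)
img[:, 4:] = 150                                  # faint step in a mid-gray image
print(fuzzy_enhance(img))                         # the step is visibly deepened
```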

  10. Fundamental performance differences between CMOS and CCD imagers, part IV

    NASA Astrophysics Data System (ADS)

    Janesick, James; Pinter, Jeff; Potter, Robert; Elliott, Tom; Andrews, James; Tower, John; Grygon, Mark; Keller, Dave

    2010-07-01

    This paper is a continuation of past papers written on fundamental performance differences of scientific CMOS and CCD imagers. New characterization results presented below include: 1) a new 1536 × 1536 × 8 μm 5TPPD pixel CMOS imager, 2) buried channel MOSFETs for random telegraph noise (RTN) and threshold reduction, 3) sub-electron noise pixels, 4) 'MIM pixel' for pixel sensitivity (V/e-) control, 5) '5TPPD RING pixel' for large pixel, high-speed charge transfer applications, 6) pixel-to-pixel blooming control, 7) buried channel photo gate pixels and CMOSCCDs, 8) substrate bias for deep depletion CMOS imagers, 9) CMOS dark spikes and dark current issues and 10) high energy radiation damage test data. Discussion is also given of a 1024 × 1024 × 16 μm 5TPPD pixel imager currently in fabrication and of new stitched CMOS imagers in the design phase, including 4k × 4k × 10 μm and 10k × 10k × 10 μm imager formats.

  11. Micro-valve pump light valve display

    DOEpatents

    Lee, Yee-Chun

    1993-01-19

    A flat panel display incorporates a plurality of micro-pump light valves (MLV's) to form pixels for recreating an image. Each MLV consists of a dielectric drop sandwiched between substrates, at least one of which is transparent, a holding electrode for maintaining the drop outside a viewing area, and a switching electrode for accelerating the drop from a location within the holding electrode to a location within the viewing area. The substrates may further define non-wetting surface areas to create potential energy barriers to assist in controlling movement of the drop. The forces acting on the drop are quadratic in nature to provide a nonlinear response for increased image contrast. A crossed electrode structure can be used to activate the pixels whereby a large flat panel display is formed without active driver components at each pixel.

  12. Micro-valve pump light valve display

    DOEpatents

    Lee, Yee-Chun

    1993-01-01

    A flat panel display incorporates a plurality of micro-pump light valves (MLV's) to form pixels for recreating an image. Each MLV consists of a dielectric drop sandwiched between substrates, at least one of which is transparent, a holding electrode for maintaining the drop outside a viewing area, and a switching electrode for accelerating the drop from a location within the holding electrode to a location within the viewing area. The substrates may further define non-wetting surface areas to create potential energy barriers to assist in controlling movement of the drop. The forces acting on the drop are quadratic in nature to provide a nonlinear response for increased image contrast. A crossed electrode structure can be used to activate the pixels whereby a large flat panel display is formed without active driver components at each pixel.

  13. a-Si:H TFT-silicon hybrid low-energy x-ray detector

    DOE PAGES

    Shin, Kyung-Wook; Karim, Karim S.

    2017-03-15

    Direct conversion crystalline silicon X-ray imagers are used for low-energy X-ray photon (4-20 keV) detection in scientific research applications such as protein crystallography. In this paper, we demonstrate a novel pixel architecture that integrates a crystalline silicon X-ray detector with a thin-film transistor amorphous silicon pixel readout circuit. We describe a simplified two-mask process to fabricate a complete imaging array and present preliminary results that show the fabricated pixel to be sensitive to 5.89-keV photons from a low-activity Fe-55 gamma source. Furthermore, the work presented in this paper can expedite the development of high spatial resolution, low cost, direct conversion imagers for X-ray diffraction and crystallography applications.

  14. Imaging During MESSENGER's Second Flyby of Mercury

    NASA Astrophysics Data System (ADS)

    Chabot, N. L.; Prockter, L. M.; Murchie, S. L.; Robinson, M. S.; Laslo, N. R.; Kang, H. K.; Hawkins, S. E.; Vaughan, R. M.; Head, J. W.; Solomon, S. C.; MESSENGER Team

    2008-12-01

    During MESSENGER's second flyby of Mercury on October 6, 2008, the Mercury Dual Imaging System (MDIS) will acquire 1287 images. The images will include coverage of about 30% of Mercury's surface not previously seen by spacecraft. A portion of the newly imaged terrain will be viewed during the inbound portion of the flyby. On the outbound leg, MDIS will image additional previously unseen terrain as well as regions imaged under different illumination geometry by Mariner 10. These new images, when combined with images from Mariner 10 and from MESSENGER's first Mercury flyby, will enable the first regional-resolution global view of Mercury constituting a combined total coverage of about 96% of the planet's surface. MDIS consists of both a Wide Angle Camera (WAC) and a Narrow Angle Camera (NAC). During MESSENGER's second Mercury flyby, the following imaging activities are planned: about 86 minutes before the spacecraft's closest pass by the planet, the WAC will acquire images through 11 different narrow-band color filters of the approaching crescent planet at a resolution of about 5 km/pixel. At slightly less than 1 hour to closest approach, the NAC will acquire a 4-column x 11-row mosaic with an approximate resolution of 450 m/pixel. At 8 minutes after closest approach, the WAC will obtain the highest-resolution multispectral images to date of Mercury's surface, imaging a portion of the surface through 11 color filters at resolutions of about 250-600 m/pixel. A strip of high-resolution NAC images, with a resolution of approximately 100 m/pixel, will follow these WAC observations. The NAC will next acquire a 15-column x 13-row high-resolution mosaic of the northern hemisphere of the departing planet, beginning approximately 21 minutes after closest approach, with resolutions of 140-300 m/pixel; this mosaic will fill a large gore in the Mariner 10 data. At about 42 minutes following closest approach, the WAC will acquire a 3x3, 11-filter, full-planet mosaic with an average resolution of 2.5 km/pixel. Two NAC mosaics of the entire departing planet will be acquired beginning about 66 minutes after closest approach, with resolutions of 500-700 m/pixel. About 89 minutes following closest approach, the WAC will acquire a multispectral image set with a resolution of about 5 km/pixel. Following this WAC image set, MDIS will continue to acquire occasional images with both the WAC and NAC until 20 hours after closest approach, at which time the flyby data will begin being transmitted to Earth.

  15. Penrose high-dynamic-range imaging

    NASA Astrophysics Data System (ADS)

    Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian

    2016-05-01

    High-dynamic-range (HDR) imaging is becoming increasingly popular and widespread. The most common multishot HDR approach, based on multiple low-dynamic-range images captured with different exposures, has difficulties in handling camera and object movements. The spatially varying exposures (SVE) technology provides a solution to overcome this limitation by obtaining multiple exposures of the scene in only one shot but suffers from a loss in spatial resolution of the captured image. While aperiodic assignment of exposures has been shown to be advantageous during reconstruction in alleviating resolution loss, almost all the existing imaging sensors use the square pixel layout, which is a periodic tiling of square pixels. We propose the Penrose pixel layout, using pixels in aperiodic rhombus Penrose tiling, for HDR imaging. With the SVE technology, Penrose pixel layout has both exposure and pixel aperiodicities. To investigate its performance, we have to reconstruct HDR images in square pixel layout from Penrose raw images with SVE. Since the two pixel layouts are different, the traditional HDR reconstruction methods are not applicable. We develop a reconstruction method for Penrose pixel layout using a Gaussian mixture model for regularization. Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout.
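
    A minimal sketch of the spatially-varying-exposure idea on an ordinary square grid rather than the Penrose layout, with an assumed 2×2 exposure pattern and a simple validity-weighted neighbourhood average; the paper's Gaussian-mixture-regularized reconstruction for the aperiodic layout is far more involved.

```python
# Hedged sketch: recover radiance from a single SVE shot by dividing out the
# per-pixel exposure and averaging unsaturated neighbours. Exposure pattern
# and weights are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def sve_reconstruct(raw, exposures, sat=0.95, floor=0.05):
    radiance = raw / exposures                     # per-pixel exposure removed
    valid = ((raw < sat) & (raw > floor)).astype(float)
    num = uniform_filter(radiance * valid, size=3)
    den = uniform_filter(valid, size=3)
    return np.where(den > 0, num / np.maximum(den, 1e-9), radiance)

rng = np.random.default_rng(0)
scene = rng.uniform(0.05, 20.0, (6, 6))            # true radiance (HDR)
exposures = np.tile([[1.0, 0.25], [0.0625, 1.0]], (3, 3))
raw = np.clip(scene * exposures, 0, 1)             # sensor clips at full well
print(np.round(sve_reconstruct(raw, exposures), 2))
```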

  16. Active pixel sensor pixel having a photodetector whose output is coupled to an output transistor gate

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nakamura, Junichi (Inventor); Kemeny, Sabrina E. (Inventor)

    2005-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. A Simple Floating Gate (SFG) pixel structure could also be employed in the imager to provide a non-destructive readout and smaller pixel sizes.

  17. High-speed high-resolution epifluorescence imaging system using CCD sensor and digital storage for neurobiological research

    NASA Astrophysics Data System (ADS)

    Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi

    2001-04-01

    We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) × 1200 (V) pixels and a wide imaging area of 28.1 × 13.8 mm, while the type II chip has 1776 × 1626 pixels and an active imaging area of 20.4 × 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 × 200 pixel images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.

  18. Contrast computation methods for interferometric measurement of sensor modulation transfer function

    NASA Astrophysics Data System (ADS)

    Battula, Tharun; Georgiev, Todor; Gille, Jennifer; Goma, Sergio

    2018-01-01

    Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
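
    A small sketch of the two contrast measures being compared, applied to a sampled sinusoidal fringe. Michelson contrast uses the fringe extrema; the Fourier contrast here is taken as the first-harmonic amplitude relative to the DC term of the FFT, which is our reading of the idea rather than the paper's exact definition.

```python
# Hedged sketch: Michelson vs. Fourier contrast of an ideal fringe with a
# known modulation of 0.4; both estimates should return ~0.4.
import numpy as np

def michelson(signal):
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

def fourier_contrast(signal):
    spectrum = np.abs(np.fft.rfft(signal))
    return 2.0 * spectrum[1] / spectrum[0]     # first harmonic vs mean level

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
fringe = 1.0 + 0.4 * np.sin(x)                 # ideal modulation of 0.4
print(michelson(fringe), fourier_contrast(fringe))
```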

  19. CMOS VLSI Active-Pixel Sensor for Tracking

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Sun, Chao; Yang, Guang; Heynssens, Julie

    2004-01-01

    An architecture for a proposed active-pixel sensor (APS) and a design to implement the architecture in a complementary metal oxide semiconductor (CMOS) very-large-scale integrated (VLSI) circuit provide for some advanced features that are expected to be especially desirable for tracking pointlike features of stars. The architecture would also make this APS suitable for robotic-vision and general pointing and tracking applications. CMOS imagers in general are well suited for pointing and tracking because they can be configured for random access to selected pixels and to provide readout from windows of interest within their fields of view. However, until now, the architectures of CMOS imagers have not supported multiwindow operation or low-noise data collection. Moreover, smearing and motion artifacts in collected images have made prior CMOS imagers unsuitable for tracking applications. The proposed CMOS imager (see figure) would include an array of 1,024 by 1,024 pixels containing high-performance photodiode-based APS circuitry. The pixel pitch would be 9 μm. The operations of the pixel circuits would be sequenced and otherwise controlled by an on-chip timing and control block, which would enable the collection of image data, during a single frame period, from either the full frame (that is, all 1,024 × 1,024 pixels) or from within as many as 8 different arbitrarily placed windows as large as 8 by 8 pixels each. A typical prior CMOS APS operates in a row-at-a-time ("rolling-shutter") readout mode, which gives rise to exposure skew. In contrast, the proposed APS would operate in a sample-first/read-later mode, suppressing rolling-shutter effects. In this mode, the analog readout signals from the pixels corresponding to the windows of interest (which windows, in the star-tracking application, would presumably contain guide stars) would be sampled rapidly by routing them through a programmable diagonal switch array to an on-chip parallel analog memory array. The diagonal-switch and memory addresses would be generated by the on-chip controller. The memory array would be large enough to hold differential signals acquired from all 8 windows during a frame period. Following the rapid sampling from all the windows, the contents of the memory array would be read out sequentially by use of a capacitive transimpedance amplifier (CTIA) at a maximum data rate of 10 MHz. This data rate is compatible with an update rate of almost 10 Hz, even in full-frame operation.

  20. Luminance compensation for AMOLED displays using integrated MIS sensors

    NASA Astrophysics Data System (ADS)

    Vygranenko, Yuri; Fernandes, Miguel; Louro, Paula; Vieira, Manuela

    2017-05-01

    Active-matrix organic light-emitting diode (AMOLED) displays are ideal for future TV applications due to their ability to faithfully reproduce real images. However, pixel luminance can be affected by instability of the driver TFTs and by aging effects in the OLEDs. This paper reports on a pixel driver utilizing a metal-insulator-semiconductor (MIS) sensor for luminance control of the OLED element. In the proposed pixel architecture for bottom-emission AMOLEDs, the embedded MIS sensor shares the same layer stack with back-channel-etched a-Si:H TFTs to maintain fabrication simplicity. The pixel design for a large-area HD display is presented. The external electronics performs image processing to modify the incoming video using correction parameters for each pixel in the backplane, and also sensor data processing to update the correction parameters. The luminance-adjusting algorithm is based on realistic models for the pixel circuit elements to predict the relation between the programming voltage and OLED luminance. SPICE modeling of the sensing part of the backplane is performed to demonstrate its feasibility. Details on the pixel circuit functionality, including the sensing and programming operations, are also discussed.
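
    A minimal sketch of the external-compensation loop described above, assuming a simple per-pixel linear gain derived from the sensed luminance; the gain cap, target value, and array sizes are placeholders, and the paper's model-based voltage adjustment is not reproduced.

```python
# Hedged sketch: maintain per-pixel correction gains from sensor feedback and
# apply them to incoming video so aged pixels are driven harder. A linear
# gain model is an illustrative simplification.
import numpy as np

def update_gains(target_luminance, measured_luminance, old_gains):
    ratio = target_luminance / np.maximum(measured_luminance, 1e-6)
    return np.clip(old_gains * ratio, 1.0, 2.0)   # cap drive boost at 2x

def compensate(video_frame, gains):
    return np.clip(video_frame * gains, 0, 255).astype(np.uint8)

gains = np.ones((4, 4))
measured = np.full((4, 4), 100.0); measured[1, 2] = 80.0   # one aged pixel
gains = update_gains(100.0, measured, gains)
print(compensate(np.full((4, 4), 200, dtype=np.uint8), gains))
```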

  1. Some spectral and spatial characteristics of LANDSAT data

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Activities are provided for: (1) developing insight into the way in which the LANDSAT MSS produces multispectral data; (2) promoting understanding of what a "pixel" means in a LANDSAT image and the implications of the term "mixed pixel"; (3) explaining the concept of spectral signatures; (4) deriving a simple signature for a class or feature by analysis: of the four band images; (5) understanding the production of false color composites; (6) appreciating the use of color additive techniques; (7) preparing Diazo images; and (8) making quick visual identifications of major land cover types by their characteristic gray tones or colors in LANDSAT images.

  2. Image Edge Extraction via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A. (Inventor); Klinko, Steve (Inventor)

    2008-01-01

    A computer-based technique for detecting edges in gray-level digital images employs fuzzy reasoning to analyze whether each pixel in an image is likely on an edge. The image is analyzed on a pixel-by-pixel basis by analyzing the gradient levels of pixels in a square window surrounding the pixel being analyzed. An edge path passing through the pixel having the greatest intensity gradient is used as input to a fuzzy membership function, which employs fuzzy singletons and inference rules to assign a new gray-level value to the pixel that is related to the pixel's edginess degree.
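
    A minimal sketch of the overall flow, substituting a Sobel gradient and a sigmoidal membership function for the patent's windowed edge-path search and rule base; the midpoint and softness parameters are placeholders.

```python
# Hedged sketch: map each pixel's gradient magnitude through a fuzzy
# membership to an "edginess" gray level. Sobel + sigmoid are stand-ins for
# the patent's actual window analysis and inference rules.
import numpy as np
from scipy.ndimage import sobel

def edginess(gray, midpoint=40.0, softness=10.0):
    gx = sobel(gray.astype(float), axis=1)
    gy = sobel(gray.astype(float), axis=0)
    magnitude = np.hypot(gx, gy)
    membership = 1.0 / (1.0 + np.exp(-(magnitude - midpoint) / softness))
    return (255 * membership).astype(np.uint8)   # bright where edge-like

img = np.zeros((16, 16), dtype=np.uint8); img[:, 8:] = 200
print(edginess(img)[8, 6:10])                    # high values at the step edge
```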

  3. Evaluation of a single-pixel one-transistor active pixel sensor for fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Xu, Man; Ou, Hai; Chen, Jun; Wang, Kai

    2015-08-01

    Since it first appeared in the iPhone 5S in 2013, fingerprint identification (ID) has rapidly gained popularity among consumers. Current fingerprint-enabled smartphones invariably rely on a discrete sensor to perform fingerprint ID. This architecture not only incurs higher material and manufacturing cost, but also provides only static identification and limited authentication. Hence, as the demand for thinner, lighter, and more secure handsets grows, we propose a novel pixel architecture in which a photosensitive device embedded in a display pixel detects the light reflected from the finger touch, enabling high-resolution, high-fidelity and dynamic biometrics. To this end, an amorphous silicon (a-Si:H) dual-gate photo TFT working in both fingerprint-imaging mode and display-driving mode will be developed.

  4. Design of integrated eye tracker-display device for head mounted systems

    NASA Astrophysics Data System (ADS)

    David, Y.; Apter, B.; Thirer, N.; Baal-Zedaka, I.; Efron, U.

    2009-08-01

    We propose an Eye Tracker/Display system based on a novel dual-function device, termed the ETD, which allows the eye tracker and the display to share optical paths and provides on-chip processing. The proposed ETD design is based on a CMOS chip combining Liquid-Crystal-on-Silicon (LCoS) micro-display technology with a near-infrared (NIR) Active Pixel Sensor imager. The eye-tracker operation captures the NIR light back-reflected from the eye's retina. The retinal image is then used for the detection of the current direction of the eye's gaze. The design of the eye-tracking imager is based on "deep p-well" pixel technology, providing low crosstalk while shielding the active pixel circuitry, which serves the imaging and the display drivers, from the photo charges generated in the substrate. The use of the ETD in the HMD design enables a very compact design suitable for Smart Goggle applications. A preliminary optical, electronic and digital design of the goggle and its associated ETD chip and digital control are presented.

  5. Thermal signature analysis of human face during jogging activity using infrared thermography technique

    NASA Astrophysics Data System (ADS)

    Budiarti, Putria W.; Kusumawardhani, Apriani; Setijono, Heru

    2016-11-01

    Thermal imaging has been widely used for many applications. A thermal camera measures an object's temperature from the infrared radiation emitted by the object; any object above absolute zero (0 Kelvin) emits such radiation. A thermal image is a false-color mapping that represents temperature. The human body is one of the objects that emit infrared radiation, and the emitted radiation varies according to the activity being performed. Physical activities such as jogging are among the most common. This experiment therefore investigates the thermal signature profile of jogging activity on the human body, especially the face. The results show that a significant increase, of 7.5%, is found in the periorbital area, near the eyes, and in the forehead. Graphical temperature distributions show that, for all regions (eyes, nose, cheeks, and chin), the pixel area at temperatures of 28.5-30.2°C tends to be constant, since this is the surrounding temperature. The pixel area at 30.2-34.7°C tends to increase, while that at 34.7-37.1°C tends to decrease, because pixels at 34.7-37.1°C change into the 30.2-34.7°C range after the jogging activity, so that the pixel area of that range increases. The trend line over the 10-minute jogging period also shows increasing temperature. The results for each person also show variations due to individual physiology, such as sweat production during physical activity.
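
    A minimal sketch of the pixel-area analysis, assuming the thermal image is available as a 2-D array of temperatures in °C and using the study's three bands; the synthetic before/after frames are illustrative only.

```python
# Hedged sketch: count how many pixels of a thermal image fall into each
# temperature band used in the study. Input frames are synthetic placeholders.
import numpy as np

bands = [(28.5, 30.2), (30.2, 34.7), (34.7, 37.1)]

def pixel_area_per_band(thermal_image):
    return {f"{lo}-{hi} C": int(np.sum((thermal_image >= lo) & (thermal_image < hi)))
            for lo, hi in bands}

rng = np.random.default_rng(1)
before = rng.normal(33.0, 1.5, (120, 160))   # face region before activity
after = before + 0.8                          # crude warm-up after jogging
print(pixel_area_per_band(before))
print(pixel_area_per_band(after))
```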

  6. Path planning on cellular nonlinear network using active wave computing technique

    NASA Astrophysics Data System (ADS)

    Yeniçeri, Ramazan; Yalçın, Müstak E.

    2009-05-01

    This paper introduces a simple algorithm to solve the robot path-finding problem using active wave computing techniques. A two-dimensional Cellular Neural/Nonlinear Network (CNN), consisting of relaxation oscillators, is used to generate active waves and to process the visual information. The network, implemented on a Field Programmable Gate Array (FPGA) chip, can be programmed, controlled and observed by a host computer. The arena of the robot is modelled as the medium of the active waves on the network. Starting from an initial point, the active waves cover the whole medium through their own dynamics. The proposed algorithm works by observing the motion of the wave-front of the active waves. The host program first loads the arena model onto the active-wave generator network and commands it to start generation, then periodically pulls the network image from the generator hardware to analyze the evolution of the active waves. When the algorithm completes, a vectorial data image is generated; the path from any pixel on this image to the wave-generating pixel is traced by following the vectors, as the sketch below illustrates. The robot arena may be a complicated labyrinth or have a simple geometry, but the arena surface must always be flat. Our autowave-generator CNN implementation, hosted on the Xilinx University Program Virtex-II Pro Development System, is operated by a MATLAB program running on the host computer. As the active-wave generator hardware has 16,384 neurons, an arena of 128 × 128 pixels can be modeled and solved by the algorithm. The system also has a monitor on which the network image is displayed in real time.
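
    The wave-front observation can be emulated in software by a breadth-first expansion from the wave-generating pixel: as the wave reaches each free cell it stores which neighbour it came from, yielding the vectorial image from which a path is traced. The sketch below is a plain-Python stand-in for illustration only; the paper's implementation runs as relaxation-oscillator dynamics on a CNN mapped to an FPGA.

    ```python
    from collections import deque

    def wavefront_vectors(grid, goal):
        """grid: 2-D list, 0 = free, 1 = obstacle. goal: (row, col) wave source.
        Returns a dict mapping each reachable cell to the neighbouring cell that
        is one step closer to the goal (the 'vector' stored at that pixel)."""
        rows, cols = len(grid), len(grid[0])
        parent = {goal: goal}
        queue = deque([goal])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in parent:
                    parent[(nr, nc)] = (r, c)   # vector points one step toward goal
                    queue.append((nr, nc))
        return parent

    def trace_path(parent, start):
        """Follow the stored vectors from any start pixel back to the wave source."""
        if start not in parent:
            return []           # start was never reached by the wave
        path, cell = [start], start
        while parent[cell] != cell:
            cell = parent[cell]
            path.append(cell)
        return path

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    vectors = wavefront_vectors(grid, goal=(2, 0))
    print(trace_path(vectors, start=(0, 0)))
    ```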

  7. Development and test of an active pixel sensor detector for heliospheric imager on solar orbiter and solar probe plus

    NASA Astrophysics Data System (ADS)

    Korendyke, Clarence M.; Vourlidas, Angelos; Plunkett, Simon P.; Howard, Russell A.; Wang, Dennis; Marshall, Cheryl J.; Waczynski, Augustyn; Janesick, James J.; Elliott, Thomas; Tun, Samuel; Tower, John; Grygon, Mark; Keller, David; Clifford, Gregory E.

    2013-10-01

    The Naval Research Laboratory is developing next generation CMOS imaging arrays for the Solar Orbiter and Solar Probe Plus missions. The device development is nearly complete, with flight device delivery scheduled for summer of 2013. The 4K × 4K mosaic array with 10 μm pixels is well suited to the panoramic imaging required for the Solar Orbiter mission. The devices are robust (<100 krad) and exhibit minimal performance degradation with respect to radiation. The device design and performance are described.

  8. Super-pixel extraction based on multi-channel pulse coupled neural network

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Super-pixel extraction techniques group pixels into over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, describing an image with super-pixels requires less computation and is easier to interpret, and the approach has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model that stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel's feature information and its spatial context. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-division idea of the SLIC algorithm: the image is first divided into blocks of equal size. Then, within each block, the pixels adjacent to each seed with similar color are grouped together into a super-pixel. Finally, post-processing handles those pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting parameters, and has good potential for super-pixel extraction.
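
    A rough illustration of the block-plus-seed grouping idea (equal-size blocks, then grouping pixels whose colour is close to the block's seed) is sketched below for an RGB image. It is a simplified stand-in: the MPCNN firing dynamics that actually drive the grouping in the paper are not modelled, and the block size and colour threshold are arbitrary.

    ```python
    import numpy as np

    def block_seeded_superpixels(img, block=16, color_thresh=30.0):
        """Assign a super-pixel label to every pixel of an RGB image.

        img          : H x W x 3 array.
        block        : side length of the equal-size blocks (SLIC-style grid).
        color_thresh : pixels within this Euclidean RGB distance of the block's
                       seed (its centre pixel) join the seed's super-pixel;
                       the rest are left for post-processing (label -1).
        """
        h, w, _ = img.shape
        labels = -np.ones((h, w), dtype=int)
        img = img.astype(float)
        label = 0
        for r0 in range(0, h, block):
            for c0 in range(0, w, block):
                r1, c1 = min(r0 + block, h), min(c0 + block, w)
                seed = img[(r0 + r1) // 2, (c0 + c1) // 2]
                dist = np.linalg.norm(img[r0:r1, c0:c1] - seed, axis=-1)
                region = labels[r0:r1, c0:c1]
                region[dist <= color_thresh] = label
                label += 1
        return labels

    labels = block_seeded_superpixels(np.random.randint(0, 255, (64, 64, 3)))
    print(labels.max() + 1, "super-pixels;", int((labels < 0).sum()), "ungrouped pixels")
    ```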

  9. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.

  10. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
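
    The "filled edge array" step in the two patent records above can be illustrated by holding the edge-pixel values fixed and solving Laplace's equation over the remaining pixels. The patents use a multi-grid solver; the sketch below substitutes plain Jacobi relaxation for brevity, and the toy edge mask and iteration count are illustrative only.

    ```python
    import numpy as np

    def fill_from_edges(image, edge_mask, n_iter=500):
        """Build a 'filled edge array': edge pixels keep their image values,
        all other pixels are filled by solving Laplace's equation (here by
        simple Jacobi relaxation rather than the patent's multi-grid solver)."""
        filled = np.where(edge_mask, image.astype(float), image.mean())
        for _ in range(n_iter):
            # Average of the four neighbours (borders replicated).
            padded = np.pad(filled, 1, mode="edge")
            neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                          padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
            filled = np.where(edge_mask, image, neighbours)  # re-impose edge values
        return filled

    image = np.random.rand(64, 64) * 255
    edge_mask = np.zeros(image.shape, dtype=bool)
    edge_mask[::8, :] = True                      # toy 'edges'
    difference = image - fill_from_edges(image, edge_mask)
    # The edge file (values at edge pixels) and the difference array would then
    # be compressed separately, as described in the patent.
    ```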

  11. High-resolution three-dimensional imaging radar

    NASA Technical Reports Server (NTRS)

    Cooper, Ken B. (Inventor); Chattopadhyay, Goutam (Inventor); Siegel, Peter H. (Inventor); Dengler, Robert J. (Inventor); Schlecht, Erich T. (Inventor); Mehdi, Imran (Inventor); Skalare, Anders J. (Inventor)

    2010-01-01

    A three-dimensional imaging radar operating at high frequency (e.g., 670 GHz) is disclosed. The active target illumination inherent in radar solves the problem of low signal power and narrow-band detection by using submillimeter heterodyne mixer receivers. A submillimeter imaging radar may use low phase-noise synthesizers and a fast chirper to generate a frequency-modulated continuous-wave (FMCW) waveform. Three-dimensional images are generated through range information derived for each pixel scanned over a target. A peak-finding algorithm may be used in processing each pixel to differentiate material layers of the target. Improved focusing is achieved through a compensation signal sampled from a point-source calibration target and applied to received signals from active targets prior to FFT-based range compression, to extract and display high-resolution target images. Such an imaging radar has particular application in detecting concealed weapons or contraband.
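
    The per-pixel range information in an FMCW radar comes from the beat frequency f_b = 2RS/c produced by a target at range R for chirp slope S, so an FFT of the de-chirped signal (range compression) followed by peak finding separates reflecting layers. The sketch below simulates this for a single pixel; all waveform parameters and target ranges are synthetic and are not taken from the disclosed radar.

    ```python
    import numpy as np

    c = 3e8                    # speed of light, m/s
    bandwidth = 30e9           # chirp bandwidth, Hz (synthetic)
    t_chirp = 1e-3             # chirp duration, s
    slope = bandwidth / t_chirp
    fs = 2e6                   # sample rate of the de-chirped (beat) signal, Hz
    n = int(fs * t_chirp)
    t = np.arange(n) / fs

    # Synthetic beat signal for one pixel: two reflecting layers at 4.0 m and 4.03 m.
    ranges_true = [4.0, 4.03]
    beat = sum(np.cos(2 * np.pi * (2 * r * slope / c) * t) for r in ranges_true)

    # Range compression: FFT of the windowed beat signal; each bin maps to a range.
    spectrum = np.abs(np.fft.rfft(beat * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    range_axis = freqs * c / (2 * slope)

    # Simple peak finding to differentiate material layers for this pixel.
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]
             and spectrum[i] > 0.1 * spectrum.max()]
    print("estimated layer ranges (m):", np.round(range_axis[peaks], 3))
    ```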

  12. CMOS imager for pointing and tracking applications

    NASA Technical Reports Server (NTRS)

    Sun, Chao (Inventor); Pain, Bedabrata (Inventor); Yang, Guang (Inventor); Heynssens, Julie B. (Inventor)

    2006-01-01

    Systems and techniques to realize pointing and tracking applications with CMOS imaging devices. In general, in one implementation, the technique includes: sampling multiple rows and multiple columns of an active pixel sensor array into a memory array (e.g., an on-chip memory array), and reading out the multiple rows and multiple columns sampled in the memory array to provide image data with reduced motion artifact. Various operation modes may be provided, including TDS, CDS, CQS, a tracking mode to read out multiple windows, and/or a mode employing a sample-first-read-later readout scheme. The tracking mode can take advantage of a diagonal switch array. The diagonal switch array, the active pixel sensor array and the memory array can be integrated onto a single imager chip with a controller. This imager device can be part of a larger imaging system for both space-based applications and terrestrial applications.

  13. Synchrotron based planar imaging and digital tomosynthesis of breast and biopsy phantoms using a CMOS active pixel sensor.

    PubMed

    Szafraniec, Magdalena B; Konstantinidis, Anastasios C; Tromba, Giuliana; Dreossi, Diego; Vecchio, Sara; Rigon, Luigi; Sodini, Nicola; Naday, Steve; Gunn, Spencer; McArthur, Alan; Olivo, Alessandro

    2015-03-01

    The SYRMEP (SYnchrotron Radiation for MEdical Physics) beamline at Elettra is performing the first mammography study on human patients using free-space propagation phase contrast imaging. The stricter spatial resolution requirements of this method currently force the use of conventional films or specialized computed radiography (CR) systems. This also prevents the implementation of three-dimensional (3D) approaches. This paper explores the use of an X-ray detector based on complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology as a possible alternative, for acquisitions both in planar and tomosynthesis geometry. Results indicate higher quality of the images acquired with the synchrotron set-up in both geometries. This improvement can be partly ascribed to the use of parallel, collimated and monochromatic synchrotron radiation (resulting in scatter rejection, no penumbra-induced blurring and optimized X-ray energy), and partly to phase contrast effects. Even though the pixel size of the used detector is still too large - and thus suboptimal - for free-space propagation phase contrast imaging, a degree of phase-induced edge enhancement can clearly be observed in the images. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  14. Amorphous selenium direct detection CMOS digital x-ray imager with 25 micron pixel pitch

    NASA Astrophysics Data System (ADS)

    Scott, Christopher C.; Abbaszadeh, Shiva; Ghanbarzadeh, Sina; Allan, Gary; Farrier, Michael; Cunningham, Ian A.; Karim, Karim S.

    2014-03-01

    We have developed a high resolution amorphous selenium (a-Se) direct detection imager using a large-area compatible back-end fabrication process on top of a CMOS active pixel sensor having 25 micron pixel pitch. Integration of a-Se with CMOS technology requires overcoming CMOS/a-Se interfacial strain, which initiates nucleation of crystalline selenium and results in high detector dark currents. A CMOS-compatible polyimide buffer layer was used to planarize the backplane and provide a low stress and thermally stable surface for a-Se. The buffer layer inhibits crystallization and provides detector stability that is not only a performance factor but also critical for favorable long term cost-benefit considerations in the application of CMOS digital x-ray imagers in medical practice. The detector structure is comprised of a polyimide (PI) buffer layer, the a-Se layer, and a gold (Au) top electrode. The PI layer is applied by spin-coating and is patterned using dry etching to open the backplane bond pads for wire bonding. Thermal evaporation is used to deposit the a-Se and Au layers, and the detector is operated in hole collection mode (i.e., a positive bias on the Au top electrode). High resolution a-Se diagnostic systems typically use 70 to 100 μm pixel pitch and have a pre-sampling modulation transfer function (MTF) that is significantly limited by the pixel aperture. Our results confirm that, for a densely integrated 25 μm pixel pitch CMOS array, the MTF approaches the fundamental material limit, i.e., where the MTF begins to be limited by the a-Se material properties and not the pixel aperture. Preliminary images demonstrating high spatial resolution have been obtained from a first prototype imager.

  15. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    PubMed Central

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two colors, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped, and the larger the watermark, the more pixels need to be flipped. We propose a new pixel-flipping method for invoice images that provides large watermarking capacity. The method comprises a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image retains its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep good connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity. PMID:25489606

  16. Tritium autoradiography with thinned and back-side illuminated monolithic active pixel sensor device

    NASA Astrophysics Data System (ADS)

    Deptuch, G.

    2005-05-01

    The first autoradiographic results for a tritium (3H) marked source obtained with monolithic active pixel sensors are presented. The detector is a high-resolution, back-side illuminated imager, developed within the SUCIMA collaboration for low-energy (<30 keV) electron detection. The sensitivity to these energies is obtained by thinning the detector, originally fabricated in the form of a standard VLSI chip, down to the thickness of the epitaxial layer. The detector used is the 1×10^6-pixel, thinned MIMOSA V chip. The low noise performance and thin (~160 nm) entrance window provide sensitivity to energies as low as ~4 keV. A polymer tritium source was placed directly atop the detector in open-air conditions. A real-time image of the source was obtained.

  17. Design of a Low-Light-Level Image Sensor with On-Chip Sigma-Delta Analog-to- Digital Conversion

    NASA Technical Reports Server (NTRS)

    Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.; Fossum, Eric R.

    1993-01-01

    The design and projected performance of a low-light-level active-pixel-sensor (APS) chip with semi-parallel analog-to-digital (A/D) conversion is presented. The individual elements have been fabricated and tested using MOSIS 2 micrometer CMOS technology, although the integrated system has not yet been fabricated. The imager consists of a 128 x 128 array of active pixels at a 50 micrometer pitch. Each column of pixels shares a 10-bit A/D converter based on first-order oversampled sigma-delta (Sigma-Delta) modulation. The 10-bit outputs of each converter are multiplexed and read out through a single set of outputs. A semi-parallel architecture is chosen to achieve 30 frames/second operation even at low light levels. The sensor is designed for less than 12 e^- rms noise performance.
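
    The column-level converter can be illustrated behaviourally: a first-order sigma-delta modulator accumulates the difference between the input and a 1-bit feedback, and the average of the resulting bitstream tracks the input. The sketch below is a software model under assumed parameters (input range, oversampling ratio), not the circuit described in the paper.

    ```python
    import numpy as np

    def first_order_sigma_delta(x, n_cycles=1024):
        """1-bit first-order sigma-delta modulation of a constant input x in [0, 1].
        Returns the bitstream and its decimated (averaged) value."""
        integrator = 0.0
        bits = np.empty(n_cycles, dtype=int)
        for k in range(n_cycles):
            feedback = 1.0 if (k > 0 and bits[k - 1]) else 0.0
            integrator += x - feedback        # accumulate the quantisation error
            bits[k] = 1 if integrator >= 0.5 else 0
        return bits, bits.mean()              # simple averaging decimation filter

    bits, estimate = first_order_sigma_delta(0.37)
    print("input 0.37  ->  decimated output", round(estimate, 3))
    ```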

  18. Microlens performance limits in sub-2 μm pixel CMOS image sensors.

    PubMed

    Huo, Yijie; Fesenmaier, Christian C; Catrysse, Peter B

    2010-03-15

    CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 μm, however, diffraction affects the optical performance of the pixel and its microlens in particular. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 μm. This severely degrades performance for on- and off-axis pixels in the red, green and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 μm pixel node.

  19. FAIR exempting separate T(1) measurement (FAIREST): a novel technique for online quantitative perfusion imaging and multi-contrast fMRI.

    PubMed

    Lai, S; Wang, J; Jahng, G H

    2001-01-01

    A new pulse sequence, dubbed FAIR exempting separate T(1) measurement (FAIREST) in which a slice-selective saturation recovery acquisition is added in addition to the standard FAIR (flow-sensitive alternating inversion recovery) scheme, was developed for quantitative perfusion imaging and multi-contrast fMRI. The technique allows for clean separation between and thus simultaneous assessment of BOLD and perfusion effects, whereas quantitative cerebral blood flow (CBF) and tissue T(1) values are monitored online. Online CBF maps were obtained using the FAIREST technique and the measured CBF values were consistent with the off-line CBF maps obtained from using the FAIR technique in combination with a separate sequence for T(1) measurement. Finger tapping activation studies were carried out to demonstrate the applicability of the FAIREST technique in a typical fMRI setting for multi-contrast fMRI. The relative CBF and BOLD changes induced by finger-tapping were 75.1 +/- 18.3 and 1.8 +/- 0.4%, respectively, and the relative oxygen consumption rate change was 2.5 +/- 7.7%. The results from correlation of the T(1) maps with the activation images on a pixel-by-pixel basis show that the mean T(1) value of the CBF activation pixels is close to the T(1) of gray matter while the mean T(1) value of the BOLD activation pixels is close to the T(1) range of blood and cerebrospinal fluid. Copyright 2001 John Wiley & Sons, Ltd.

  20. Theory and applications of structured light single pixel imaging

    NASA Astrophysics Data System (ADS)

    Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.

    2018-02-01

    Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework for single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, and provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness, decreased acquisition time, and the ability to take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential of single-pixel imaging.
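
    In any single-pixel scheme, each detector reading is the inner product of the scene with one illumination pattern; stacking the patterns as rows of a matrix P gives measurements y = Px, and an image estimate follows by (pseudo-)inverting P. The simulation below uses random binary patterns and ordinary least squares purely as an illustration; it does not reproduce the frame-theoretic weighting proposed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    side = 16                          # 16 x 16 scene
    n_pixels = side * side

    # Ground-truth scene (a bright square on a dark background), flattened.
    scene = np.zeros((side, side))
    scene[4:10, 6:12] = 1.0
    x = scene.ravel()

    # Structured illumination: random 0/1 patterns, one per measurement.
    n_meas = n_pixels                  # fully determined case for simplicity
    P = rng.integers(0, 2, size=(n_meas, n_pixels)).astype(float)

    # Single-pixel detector output: one scalar per pattern (plus a little noise).
    y = P @ x + rng.normal(0, 0.01, n_meas)

    # Reconstruction by least squares (a frame-theoretic treatment would instead
    # use the canonical dual frame / weighted pseudo-inverse).
    x_hat, *_ = np.linalg.lstsq(P, y, rcond=None)
    print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
    ```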

  1. Impact of defective pixels in AMLCDs on the perception of medical images

    NASA Astrophysics Data System (ADS)

    Kimpe, Tom; Sneyders, Yuri

    2006-03-01

    With LCD displays, each pixel has its own individual transistor that controls the transmittance of that pixel. Occasionally, these individual transistors short or otherwise malfunction, resulting in a defective pixel that always shows the same brightness. With ever increasing display resolution, the number of defective pixels per display increases accordingly. State-of-the-art processes are capable of producing displays with no more than one faulty transistor out of 3 million. A five-megapixel medical LCD panel contains 15 million individual sub-pixels (3 sub-pixels per pixel), each having an individual transistor, which means that a five-megapixel display will on average have 5 failing pixels. This paper investigates the visibility of defective pixels and analyzes their possible impact on the perception of medical images. JND simulations were done to study the effect of defective pixels on medical images. Our results indicate that defective LCD pixels can mask subtle features in medical images in an unexpectedly broad area around the defect and therefore may reduce the quality of diagnosis for specific high-demanding areas such as mammography. As a second contribution, an innovative solution is proposed. A specialized image processing algorithm can make defective pixels completely invisible and, moreover, can recover the information at the defect so that the radiologist perceives the medical image correctly. This correction algorithm has been validated with both JND simulations and psychovisual tests.

  2. Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution

    PubMed Central

    Bishara, Waheb; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan

    2010-01-01

    We demonstrate lensfree holographic microscopy on a chip to achieve ~0.6 µm spatial resolution corresponding to a numerical aperture of ~0.5 over a large field-of-view of ~24 mm2. By using partially coherent illumination from a large aperture (~50 µm), we acquire lower resolution lensfree in-line holograms of the objects with unit fringe magnification. For each lensfree hologram, the pixel size at the sensor chip limits the spatial resolution of the reconstructed image. To circumvent this limitation, we implement a sub-pixel shifting based super-resolution algorithm to effectively recover much higher resolution digital holograms of the objects, permitting sub-micron spatial resolution to be achieved across the entire sensor chip active area, which is also equivalent to the imaging field-of-view (24 mm2) due to unit magnification. We demonstrate the success of this pixel super-resolution approach by imaging patterned transparent substrates, blood smear samples, as well as Caenorhabditis elegans. PMID:20588977
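
    The sub-pixel shifting idea can be illustrated with a basic shift-and-add: several low-resolution frames, each displaced by a known sub-pixel amount, are placed onto a finer grid and averaged. The toy sketch below assumes the shifts are already known as integer offsets on the high-resolution grid and omits the holographic reconstruction itself.

    ```python
    import numpy as np

    def shift_and_add(lr_frames, shifts, factor):
        """Combine low-resolution frames into one high-resolution estimate.

        lr_frames : list of H x W arrays (low-resolution holograms/images).
        shifts    : list of (dy, dx) shifts expressed in HIGH-resolution pixels.
        factor    : super-resolution factor (HR pixel pitch = LR pitch / factor).
        """
        h, w = lr_frames[0].shape
        hr_sum = np.zeros((h * factor, w * factor))
        hr_cnt = np.zeros_like(hr_sum)
        for frame, (dy, dx) in zip(lr_frames, shifts):
            for r in range(h):
                for c in range(w):
                    rr, cc = r * factor + dy, c * factor + dx
                    hr_sum[rr, cc] += frame[r, c]
                    hr_cnt[rr, cc] += 1
        hr_cnt[hr_cnt == 0] = 1            # unvisited HR pixels stay zero
        return hr_sum / hr_cnt

    # Four frames shifted by half an LR pixel in each direction (factor 2).
    lr = [np.random.rand(32, 32) for _ in range(4)]
    shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
    hr = shift_and_add(lr, shifts, factor=2)
    print(hr.shape)   # (64, 64)
    ```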

  3. Cellular image segmentation using n-agent cooperative game theory

    NASA Astrophysics Data System (ADS)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties for image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images, and previous approaches to bright-field segmentation are often limited in scope to the specific images they target. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game-theoretic models which allow each pixel to act as an independent agent with the goal of selecting its best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across a variety of datasets which differ in cell density, cell shape, contrast, and noise levels.

  4. Method and System for Temporal Filtering in Video Compression Systems

    NASA Technical Reports Server (NTRS)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector between the first pixel position in the first image and the second pixel position in the second image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.

  5. Novel image processing approach to detect malaria

    NASA Astrophysics Data System (ADS)

    Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2015-09-01

    In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels indicate that intracellular activity has occurred, signalling the presence of the malaria parasite inside the cell. Preliminary experimental results involving the analysis of red blood cells, either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
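
    The temporal analysis can be sketched as follows: for a stack of frames of the same field of view, compute each pixel's temporal fluctuation and flag dark pixels whose fluctuation is large as candidate intracellular activity. The thresholds and array shapes below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def activity_map(frames, dark_thresh=80, fluctuation_thresh=5.0):
        """frames: T x H x W stack of grayscale frames of the same field of view.
        Returns a boolean map of dark pixels with strong temporal fluctuation,
        used here as a crude proxy for intracellular (parasite) activity."""
        frames = frames.astype(float)
        temporal_std = frames.std(axis=0)           # per-pixel fluctuation over time
        mean_level = frames.mean(axis=0)            # per-pixel brightness
        return (mean_level < dark_thresh) & (temporal_std > fluctuation_thresh)

    frames = np.random.randint(0, 255, size=(50, 128, 128))
    candidates = activity_map(frames)
    print(int(candidates.sum()), "candidate pixels")
    ```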

  6. Active Pixel Sensors: Are CCD's Dinosaurs?

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.

    1993-01-01

    Charge-coupled devices (CCDs) are presently the technology of choice for most imaging applications. In the 23 years since their invention in 1970, they have evolved to a sophisticated level of performance. However, as with all technologies, we can be certain that they will be supplanted someday. In this paper, the Active Pixel Sensor (APS) technology is explored as a possible successor to the CCD. An active pixel is defined as a detector array technology that has at least one active transistor within the pixel unit cell. The APS eliminates the need for nearly perfect charge transfer -- the Achilles' heel of CCDs. This requirement for nearly perfect charge transfer makes CCDs radiation 'soft,' difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response.

  7. Pixel decomposition for tracking in low resolution videos

    NASA Astrophysics Data System (ADS)

    Govinda, Vivekanand; Ralph, Jason F.; Spencer, Joseph W.; Goulermas, John Y.; Yang, Lihua; Abbas, Alaa M.

    2008-04-01

    This paper describes a novel set of algorithms that allows indoor activity to be monitored using data from very low resolution imagers and other non-intrusive sensors. The objects are not resolved but activity may still be determined. This allows the use of such technology in sensitive environments where privacy must be maintained. Spectral un-mixing algorithms from remote sensing were adapted for this environment. These algorithms allow the fractional contributions from different colours within each pixel to be estimated and this is used to assist in the detection and monitoring of small objects or sub-pixel motion.
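
    Linear spectral unmixing models each pixel's colour as a non-negative mixture of known endmember colours, so the fractional contributions follow from a small non-negative least-squares problem per pixel. The sketch below uses made-up RGB endmembers to show the idea; the adapted remote-sensing algorithms in the paper are more elaborate.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Columns are endmember colours (e.g. floor, clothing, skin) in RGB; values invented.
    endmembers = np.array([[0.8, 0.1, 0.9],
                           [0.7, 0.2, 0.6],
                           [0.6, 0.9, 0.5]])   # rows = R, G, B; columns = materials

    def unmix_pixel(rgb):
        """Return non-negative fractional abundances of each endmember in one pixel."""
        fractions, _residual = nnls(endmembers, np.asarray(rgb, dtype=float))
        total = fractions.sum()
        return fractions / total if total > 0 else fractions

    # A pixel that is mostly material 0 with a little material 2.
    mixed = 0.7 * endmembers[:, 0] + 0.3 * endmembers[:, 2]
    print(np.round(unmix_pixel(mixed), 2))      # ~[0.7, 0.0, 0.3]
    ```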

  8. Computational imaging with a single-pixel detector and a consumer video projector

    NASA Astrophysics Data System (ADS)

    Sych, D.; Aksenov, M.

    2018-02-01

    Single-pixel imaging is a novel rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operating modular single-pixel imaging system. Light patterns in our setup are created with help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. Also, we discuss the potential use of the single-pixel imaging system for quantum applications.

  9. The Europa Imaging System (EIS): High-Resolution, 3-D Insight into Europa's Geology, Ice Shell, and Potential for Current Activity

    NASA Astrophysics Data System (ADS)

    Turtle, E. P.; McEwen, A. S.; Collins, G. C.; Fletcher, L. N.; Hansen, C. J.; Hayes, A.; Hurford, T., Jr.; Kirk, R. L.; Barr, A.; Nimmo, F.; Patterson, G.; Quick, L. C.; Soderblom, J. M.; Thomas, N.

    2015-12-01

    The Europa Imaging System will transform our understanding of Europa through global decameter-scale coverage, three-dimensional maps, and unprecedented meter-scale imaging. EIS combines narrow-angle and wide-angle cameras (NAC and WAC) designed to address high-priority Europa science and reconnaissance goals. It will: (A) Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar; (B) Constrain formation processes of surface features and the potential for current activity by characterizing endogenic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure, and by searching for evidence of recent activity, including potential plumes; and (C) Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. The NAC provides very high-resolution, stereo reconnaissance, generating 2-km-wide swaths at 0.5-m pixel scale from 50-km altitude, and uses a gimbal to enable independent targeting. NAC observations also include: near-global (>95%) mapping of Europa at ≤50-m pixel scale (to date, only ~14% of Europa has been imaged at ≤500 m/pixel, with best pixel scale 6 m); regional and high-resolution stereo imaging at <1-m/pixel; and high-phase-angle observations for plume searches. The WAC is designed to acquire pushbroom stereo swaths along flyby ground-tracks, generating digital topographic models with 32-m spatial scale and 4-m vertical precision from 50-km altitude. These data support characterization of cross-track clutter for radar sounding. The WAC also performs pushbroom color imaging with 6 broadband filters (350-1050 nm) to map surface units and correlations with geologic features and topography. EIS will provide comprehensive data sets essential to fulfilling the goal of exploring Europa to investigate its habitability and perform collaborative science with other investigations, including cartographic and geologic maps, regional and high-resolution digital topography, GIS products, color and photometric data products, a geodetic control network tied to radar altimetry, and a database of plume-search observations.

  10. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor (CMOS) Active Pixel Sensor, establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of its own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS active-pixel digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  11. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, it requires high voltage and a bulky system because of its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows a short gating clock to be created and delivered to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11 μm CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 μm.

  12. Alignment by Maximization of Mutual Information

    DTIC Science & Technology

    1995-06-01

    These images of the same pose are very different and are in fact anti-correlated: bright pixels in the left image correspond to dark pixels in the right image, and dark pixels in the left image correspond to bright pixels in the right image. No variant of correlation could match these images together.

  13. Characteristics of Monolithically Integrated InGaAs Active Pixel Imager Array

    NASA Technical Reports Server (NTRS)

    Kim, Q.; Cunningham, T. J.; Pain, B.; Lange, M. J.; Olsen, G. H.

    2000-01-01

    Switching and amplifying characteristics of a newly developed monolithic InGaAs active pixel imager array are presented. The sensor array is fabricated from InGaAs material epitaxially deposited on an InP substrate. It consists of an InGaAs photodiode connected to InP depletion-mode junction field effect transistors (JFETs) for low-leakage, low-power, and fast control of signal amplification, buffering, selection, and reset. This monolithically integrated active pixel sensor configuration eliminates the need for hybridization with a silicon multiplexer. In addition, the configuration allows the sensor to be front-illuminated, making it sensitive to visible as well as near-infrared radiation. By adapting existing 1.55 micrometer fiber-optic communication technology, this integration is well suited to dual-band (visible/IR) optoelectronic applications near room temperature, such as atmospheric gas sensing in space and target identification on Earth. In this paper, two different types of small 4 x 1 test arrays are described. The effectiveness of the switching and amplifying circuits is discussed in terms of circuit performance (leakage, operating frequency, and temperature), in preparation for the second-phase demonstration of integrated, two-dimensional monolithic InGaAs active pixel sensor arrays for applications in transportable shipboard surveillance, night vision, and emission spectroscopy.

  14. Advancements in DEPMOSFET device developments for XEUS

    NASA Astrophysics Data System (ADS)

    Treis, J.; Bombelli, L.; Eckart, R.; Fiorini, C.; Fischer, P.; Hälker, O.; Herrmann, S.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Schaller, G.; Schopper, F.; Soltau, H.; Strüder, L.; Wölfel, S.

    2006-06-01

    DEPMOSFET based Active Pixel Sensor (APS) matrices are a new detector concept for X-ray imaging spectroscopy missions. They can cope with the challenging requirements of the XEUS Wide Field Imager and combine excellent energy resolution, high-speed readout and low power consumption with the attractive feature of random accessibility of pixels. Based on the evaluation of the first prototypes, new concepts have been developed to overcome the minor drawbacks and problems encountered with the older devices. The new devices will have a pixel size of 75 μm × 75 μm. Besides 64 × 64 pixel arrays, prototypes with sizes of 256 × 256 pixels and 128 × 512 pixels and an active area of about 3.6 cm2 will be produced, a milestone on the way towards the full-scale XEUS WFI device. Production of these improved devices is currently under way. At the same time, development of the next generation of front-end electronics has started, which will permit the sensor devices to be operated at the readout speed required by XEUS. Here, a summary of the DEPFET capabilities, the concept of the next-generation sensors and the new front-end electronics is given. Additionally, prospects for new device developments using the DEPFET as a sensitive element are shown, e.g. so-called RNDR pixels, which feature repetitive non-destructive readout to lower the readout noise below the 1 e- ENC limit.

  15. Technique for ship/wake detection

    DOEpatents

    Roskovensky, John K [Albuquerque, NM

    2012-05-01

    An automated ship detection technique includes accessing data associated with an image of a portion of Earth. The data include reflectance values. A first portion of pixels within the image is masked with a cloud and land mask based on the spectral flatness of the reflectance values associated with the pixels. A given pixel selected from the first portion of pixels is unmasked when a threshold number of localized pixels surrounding the given pixel are not masked by the cloud and land mask. A spatial variability image is generated based on spatial derivatives of the reflectance values of the pixels which remain unmasked by the cloud and land mask. The spatial variability image is thresholded to identify one or more regions within the image as possible ship detection regions.
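
    A rough software analogue of the spatial-variability step is sketched below: mask out cloud and land, take spatial derivatives of the remaining reflectance values, and threshold the resulting variability image. The threshold and the synthetic data are illustrative; they are not the values used in the patented technique.

    ```python
    import numpy as np

    def ship_candidates(reflectance, cloud_land_mask, variability_thresh=0.05):
        """reflectance     : 2-D array of reflectance values.
           cloud_land_mask : boolean array, True where pixels are masked out.
           Returns a boolean image of candidate ship-detection pixels."""
        refl = np.where(cloud_land_mask, np.nan, reflectance.astype(float))
        # Spatial variability: gradient magnitude of the unmasked reflectance.
        gy, gx = np.gradient(refl)
        variability = np.hypot(gx, gy)
        return np.nan_to_num(variability, nan=0.0) > variability_thresh

    refl = np.random.rand(100, 100) * 0.02
    refl[40:43, 60:62] += 0.3                     # bright ship-like anomaly
    mask = np.zeros(refl.shape, dtype=bool)
    mask[:10, :] = True                           # e.g. land along the top
    print(int(ship_candidates(refl, mask).sum()), "candidate pixels")
    ```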

  16. Image sensor with high dynamic range linear output

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)

    2007-01-01

    Designs and operational methods to increase the dynamic range of image sensors, and of APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, resulting in multiple integration times. The operational methods can also be used to obtain multiple integration times for each pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, high-speed, high-resolution analog-to-digital conversion can be implemented.

  17. Steganography on quantum pixel images using Shannon entropy

    NASA Astrophysics Data System (ADS)

    Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.

    2016-07-01

    This paper presents a steganographic algorithm based on the least significant bit (LSB) from the most significant bit information (MSBI) and the equivalence of a bit-pixel image to a quantum pixel image, which permits information to be hidden in quantum pixel images for secure transmission through insecure channels. The algorithm offers higher security since it exploits the Shannon entropy of an image.
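
    In classical terms, LSB embedding replaces the least significant bit of selected pixels with message bits, and the Shannon entropy H = -sum_i p_i log2(p_i) of the pixel histogram quantifies how little the image statistics change. The sketch below is a purely classical illustration; the quantum-pixel-image representation used in the paper is not modelled.

    ```python
    import numpy as np

    def shannon_entropy(img):
        """Shannon entropy (bits per pixel) of an 8-bit grayscale image."""
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def embed_lsb(img, message_bits):
        """Embed a bit sequence into the least significant bits of the first pixels."""
        flat = img.ravel().copy()
        bits = np.asarray(message_bits, dtype=np.uint8)
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(img.shape)

    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    secret = np.random.randint(0, 2, 200)
    stego = embed_lsb(cover, secret)
    print("entropy before/after:", round(shannon_entropy(cover), 4),
          round(shannon_entropy(stego), 4))
    ```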

  18. Thermal wake/vessel detection technique

    DOEpatents

    Roskovensky, John K [Albuquerque, NM; Nandy, Prabal [Albuquerque, NM; Post, Brian N [Albuquerque, NM

    2012-01-10

    A computer-automated method for detecting a vessel in water based on an image of a portion of Earth includes generating a thermal anomaly mask. The thermal anomaly mask flags each pixel of the image initially deemed to be a wake pixel based on a comparison of a thermal value of each pixel against other thermal values of other pixels localized about each pixel. Contiguous pixels flagged by the thermal anomaly mask are grouped into pixel clusters. A shape of each of the pixel clusters is analyzed to determine whether each of the pixel clusters represents a possible vessel detection event. The possible vessel detection events are represented visually within the image.
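
    The clustering and shape-analysis steps can be illustrated by labelling connected components of the flagged pixels and keeping only clusters that are long and thin, since a wake tends to form an elongated cluster. The elongation test and thresholds below are crude stand-ins for the patent's shape analysis.

    ```python
    import numpy as np
    from scipy import ndimage

    def wake_like_clusters(anomaly_mask, min_pixels=20, min_elongation=3.0):
        """Group contiguous flagged pixels and keep clusters whose bounding box
        is long and thin, as a crude proxy for the patent's shape analysis."""
        labels, _n = ndimage.label(anomaly_mask)
        keep = []
        for idx, region in enumerate(ndimage.find_objects(labels), start=1):
            if region is None:
                continue
            h = region[0].stop - region[0].start
            w = region[1].stop - region[1].start
            size = int((labels[region] == idx).sum())
            if size >= min_pixels and max(h, w) / max(1, min(h, w)) >= min_elongation:
                keep.append(region)
        return keep

    mask = np.zeros((80, 80), dtype=bool)
    mask[40, 10:50] = True          # a long thin 'wake'
    mask[10:13, 10:13] = True       # a compact blob that should be rejected
    print(len(wake_like_clusters(mask)), "possible wake cluster(s)")
    ```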

  19. First images of a digital autoradiography system based on a Medipix2 hybrid silicon pixel detector.

    PubMed

    Mettivier, Giovanni; Montesi, Maria Cristina; Russo, Paolo

    2003-06-21

    We present the first images of beta autoradiography obtained with the high-resolution hybrid pixel detector consisting of the Medipix2 single photon counting read-out chip bump-bonded to a 300 μm thick silicon pixel detector. This room-temperature system has 256 x 256 square pixels of 55 μm pitch (total sensitive area of 14 x 14 mm2), with a double threshold discriminator and a 13-bit counter in each pixel. It is read out via a dedicated electronic interface and control software, also developed in the framework of the European Medipix2 Collaboration. Digital beta autoradiograms of 14C microscale standard strips (containing separate bands of increasing specific activity in the range 0.0038-32.9 kBq g^-1) indicate system linearity down to a total background noise of 1.8 x 10^-3 counts mm^-2 s^-1. The minimum detectable activity is estimated to be 0.012 Bq for a 36,000 s exposure and 0.023 Bq for a 10,800 s exposure. The measured minimum detection threshold is less than 1600 electrons (equivalent to about 6 keV in Si). This real-time system for beta autoradiography offers a smaller pixel pitch and larger sensitive area than the previous Medipix1-based system. Its 14C sensitivity is better than that of micro-channel-plate based systems, which, however, show higher spatial resolution and sensitive area.

  20. Pixel-wise deblurring imaging system based on active vision for structural health monitoring at a speed of 100 km/h

    NASA Astrophysics Data System (ADS)

    Hayakawa, Tomohiko; Moko, Yushi; Morishita, Kenta; Ishikawa, Masatoshi

    2018-04-01

    In this paper, we propose a pixel-wise deblurring imaging (PDI) system based on active vision for compensating the blur caused by high-speed one-dimensional motion between a camera and a target. The optical axis is controlled by back-and-forth motion of a galvanometer mirror to compensate for the motion. The high-spatial-resolution images captured by our system during high-speed motion are useful for efficient and precise visual inspection, such as visually judging abnormal parts of a tunnel surface to prevent accidents; hence, we applied the PDI system to structural health monitoring. By mounting the system on a vehicle in a tunnel, we confirmed significant improvement in image quality for submillimeter black-and-white stripes and real tunnel-surface cracks at a speed of 100 km/h.

  1. Hyperspectral Imaging and Obstacle Detection for Robotics Navigation

    DTIC Science & Technology

    2005-09-01

    [Record excerpt: the technical specifications list a Brimrose AOTF video adaptor (TeO2 material); the spectral sample table includes samples from a glass case on a person's belt (530 pixels) and from white and blue pick-up truck body panels (600 pixels).]

  2. Development of multi-pixel x-ray source using oxide-coated cathodes.

    PubMed

    Kandlakunta, Praneeth; Pham, Richard; Khan, Rao; Zhang, Tiezhi

    2017-07-07

    Multiple pixel x-ray sources facilitate new designs of imaging modalities that may result in faster imaging speed, improved image quality, and more compact geometry. We are developing a high-brightness multiple-pixel thermionic emission x-ray (MPTEX) source based on oxide-coated cathodes. Oxide cathodes have high emission efficiency and, thereby, produce high emission current density at low temperature when compared to traditional tungsten filaments. Indirectly heated micro-rectangular oxide cathodes were developed using carbonates, which were converted to semiconductor oxides of barium, strontium, and calcium after activation. Each cathode produces a focal spot on an elongated fixed anode. The x-ray beam ON and OFF control is performed by source-switching electronics, which supplies bias voltage to the cathode emitters. In this paper, we report the initial performance of the oxide-coated cathodes and the MPTEX source.

  3. Dependence of the appearance-based perception of criminality, suggestibility, and trustworthiness on the level of pixelation of facial images.

    PubMed

    Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis

    2012-10-01

    While the dependence of face identification on the level of pixelation of facial images has been well studied, similar research on face-based trait perception is underdeveloped. Because depiction formats used for hiding individual identity in visual media and evidential material recorded by surveillance cameras often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) have the ability to discriminate facial cues indicative of these perceived traits from a coarse level of image pixelation (10-12 pixels per face horizontally) and that the discriminability increases with a decrease in the coarseness of pixelation. Perceived criminality and trustworthiness appear to be better carried by the pixelized images than perceived suggestibility.

  4. An investigation of signal performance enhancements achieved through innovative pixel design across several generations of indirect detection, active matrix, flat-panel arrays

    PubMed Central

    Antonuk, Larry E.; Zhao, Qihua; El-Mohri, Youcef; Du, Hong; Wang, Yi; Street, Robert A.; Ho, Jackson; Weisfield, Richard; Yao, William

    2009-01-01

    Active matrix flat-panel imager (AMFPI) technology is being employed for an increasing variety of imaging applications. An important element in the adoption of this technology has been significant ongoing improvements in optical signal collection achieved through innovations in indirect detection array pixel design. Such improvements have a particularly beneficial effect on performance in applications involving low exposures and/or high spatial frequencies, where detective quantum efficiency is strongly reduced due to the relatively high level of additive electronic noise compared to signal levels of AMFPI devices. In this article, an examination of various signal properties, as determined through measurements and calculations related to novel array designs, is reported in the context of the evolution of AMFPI pixel design. For these studies, dark, optical, and radiation signal measurements were performed on prototype imagers incorporating a variety of increasingly sophisticated array designs, with pixel pitches ranging from 75 to 127 μm. For each design, detailed measurements of fundamental pixel-level properties conducted under radiographic and fluoroscopic operating conditions are reported and the results are compared. A series of 127 μm pitch arrays employing discrete photodiodes culminated in a novel design providing an optical fill factor of ∼80% (thereby assuring improved x-ray sensitivity), and demonstrating low dark current, very low charge trapping and charge release, and a large range of linear signal response. In two of the designs having 75 and 90 μm pitches, a novel continuous photodiode structure was found to provide fill factors that approach the theoretical maximum of 100%. Both sets of novel designs achieved large fill factors by employing architectures in which some, or all of the photodiode structure was elevated above the plane of the pixel addressing transistor. Generally, enhancement of the fill factor in either discrete or continuous photodiode arrays was observed to result in no degradation in MTF due to charge sharing between pixels. While the continuous designs exhibited relatively high levels of charge trapping and release, as well as shorter ranges of linearity, it is possible that these behaviors can be addressed through further refinements to pixel design. Both the continuous and the most recent discrete photodiode designs accommodate more sophisticated pixel circuitry than is present on conventional AMFPIs – such as a pixel clamp circuit, which is demonstrated to limit signal saturation under conditions corresponding to high exposures. It is anticipated that photodiode structures such as the ones reported in this study will enable the development of even more complex pixel circuitry, such as pixel-level amplifiers, that will lead to further significant improvements in imager performance. PMID:19673228

  5. III-V infrared research at the Jet Propulsion Laboratory

    NASA Astrophysics Data System (ADS)

    Gunapala, S. D.; Ting, D. Z.; Hill, C. J.; Soibel, A.; Liu, John; Liu, J. K.; Mumolo, J. M.; Keo, S. A.; Nguyen, J.; Bandara, S. V.; Tidrow, M. Z.

    2009-08-01

    The Jet Propulsion Laboratory is actively developing III-V based infrared detectors and focal plane arrays (FPAs) for NASA, DoD, and commercial applications. Currently, we are working on multi-band Quantum Well Infrared Photodetector (QWIP), superlattice detector, and Quantum Dot Infrared Photodetector (QDIP) technologies suitable for large-area imaging arrays with high pixel-to-pixel uniformity and high pixel operability. In this paper we report the first demonstration of a megapixel, simultaneously readable and pixel-co-registered dual-band QWIP FPA. In addition, we present the latest advances in QDIP and superlattice infrared detectors at the Jet Propulsion Laboratory.

  6. Method and apparatus for detecting a desired behavior in digital image data

    DOEpatents

    Kegelmeyer, Jr., W. Philip

    1997-01-01

    A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.

  7. Method and apparatus for detecting a desired behavior in digital image data

    DOEpatents

    Kegelmeyer, Jr., W. Philip

    1997-01-01

    A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.
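
    The pipeline in the two records above (per-pixel features, a decision tree trained on sampled reference pixels, a probability image, then spatial filtering) can be sketched with off-the-shelf tools. The features and labels below are synthetic stand-ins; in particular, the gradient and smoothing features are not the patent's ALOE feature.

    ```python
    import numpy as np
    from scipy import ndimage
    from sklearn.tree import DecisionTreeClassifier

    def pixel_features(img):
        """Stack a few generic per-pixel features (NOT the patent's ALOE feature)."""
        img = img.astype(float)
        gy, gx = np.gradient(img)
        feats = [img, np.hypot(gx, gy), ndimage.uniform_filter(img, 5)]
        return np.stack([f.ravel() for f in feats], axis=1)

    # Reference image with labelled pixels (labels here are purely synthetic).
    reference = np.random.rand(64, 64)
    labels = (ndimage.uniform_filter(reference, 7) > 0.55).astype(int)

    # Train a decision tree on randomly sampled reference pixels.
    X, y = pixel_features(reference), labels.ravel()
    sample = np.random.choice(X.shape[0], 1000, replace=False)
    tree = DecisionTreeClassifier(max_depth=6).fit(X[sample], y[sample])

    # Probability image for a new image, then spatial filtering to enforce
    # local consensus among neighbouring pixels.
    new_img = np.random.rand(64, 64)
    proba = tree.predict_proba(pixel_features(new_img))[:, 1].reshape(new_img.shape)
    smoothed = ndimage.uniform_filter(proba, 5)
    ```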

  8. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters optimized for minimum false positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.

  9. Characterisation of the high dynamic range Large Pixel Detector (LPD) and its use at X-ray free electron laser sources

    NASA Astrophysics Data System (ADS)

    Veale, M. C.; Adkin, P.; Booker, P.; Coughlan, J.; French, M. J.; Hart, M.; Nicholls, T.; Schneider, A.; Seller, P.; Pape, I.; Sawhney, K.; Carini, G. A.; Hart, P. A.

    2017-12-01

    The STFC Rutherford Appleton Laboratory has delivered the Large Pixel Detector (LPD) for MHz frame rate imaging at the European XFEL. The detector system has an active area of 0.5 m × 0.5 m and consists of a million pixels on a 500 μm pitch. Sensors have been produced from 500 μm thick Hamamatsu silicon tiles that have been bump bonded to the readout ASIC using a silver epoxy and gold stud technique. Each pixel of the detector system is capable of measuring 10⁵ 12 keV photons per image at a readout rate of 4.5 MHz. In this paper, results from the testing of these detectors at the Diamond Light Source and the Linac Coherent Light Source (LCLS) are presented. The performance of the detector in terms of linearity, spatial uniformity and the behaviour of the different ASIC gain stages is characterised.

  10. Spatial light modulator array with heat minimization and image enhancement features

    DOEpatents

    Jain, Kanti [Briarcliff Manor, NY; Sweatt, William C [Albuquerque, NM; Zemel, Marc [New Rochelle, NY

    2007-01-30

    An enhanced spatial light modulator (ESLM) array, a microelectronics patterning system and a projection display system using such an ESLM for heat-minimization and resolution enhancement during imaging, and the method for fabricating such an ESLM array. The ESLM array includes, in each individual pixel element, a small pixel mirror (reflective region) and a much larger pixel surround. Each pixel surround includes diffraction-grating regions and resolution-enhancement regions. During imaging, a selected pixel mirror reflects a selected-pixel beamlet into the capture angle of a projection lens, while the diffraction grating of the pixel surround redirects heat-producing unused radiation away from the projection lens. The resolution-enhancement regions of selected pixels provide phase shifts that increase effective modulation-transfer function in imaging. All of the non-selected pixel surrounds redirect all radiation energy away from the projection lens. All elements of the ESLM are fabricated by deposition, patterning, etching and other microelectronic process technologies.

  11. Supervised pixel classification using a feature space derived from an artificial visual system

    NASA Technical Reports Server (NTRS)

    Baxter, Lisa C.; Coggins, James M.

    1991-01-01

    Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated, and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not on image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.

  12. A kind of color image segmentation algorithm based on super-pixel and PCNN

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. The PCNN (Pulse Coupled Neural Network) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but because of the dynamic properties of the PCNN, many unconnected neurons pulse at the same time, so different regions must be identified for further processing. The existing region-growing PCNN segmentation algorithm was designed for grayscale images and cannot be used directly for color image segmentation. In addition, super-pixels better preserve image edges and, at the same time, reduce the influence of individual differences between pixels on the segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to select seeds among the neurons that have not yet fired. Whether to stop growing is then determined by comparing the average of each color channel over all pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed algorithm for color image segmentation is fast and effective, with reasonable accuracy.

  13. Cerebral hypoxia during cardiopulmonary bypass: a magnetic resonance imaging study.

    PubMed

    Mutch, W A; Ryner, L N; Kozlowski, P; Scarth, G; Warrian, R K; Lefevre, G R; Wong, T G; Thiessen, D B; Girling, L G; Doiron, L; McCudden, C; Saunders, J K

    1997-09-01

    Neurocognitive deficits after open heart operations have been correlated to jugular venous oxygen desaturation on rewarming from hypothermic cardiopulmonary bypass (CPB). Using a porcine model, we looked for evidence of cerebral hypoxia by magnetic resonance imaging during CPB. Brain oxygenation was assessed by T2*-weighted imaging, based on the blood oxygenation level-dependent effect (decreased T2*-weighted signal intensity with increased tissue concentrations of deoxyhemoglobin). Pigs were placed on normothermic CPB, then cooled to 28 degrees C for 2 hours of hypothermic CPB, then rewarmed to baseline temperature. T2*-weighted imaging was undertaken before CPB, during normothermic CPB, at 30-minute intervals during hypothermic CPB, after rewarming, and then 15 minutes after death. Imaging was with a Bruker 7.0 Tesla, 40-cm bore magnetic resonance scanner with actively shielded gradient coils. Regions of interest from the magnetic resonance images were analyzed to identify parenchymal hypoxia and correlated with jugular venous oxygen saturation. Post-hoc fuzzy clustering analysis was used to examine spatially distributed regions of interest whose pixels followed similar time courses. Attention was paid to pixels showing decreased T2* signal intensity over time. T2* signal intensity decreased with rewarming and in five of seven experiments correlated with the decrease in jugular venous oxygen saturation. T2* imaging with fuzzy clustering analysis revealed two diffusely distributed pixel groups during CPB. One large group of pixels (50% +/- 13% of total pixel count) showed increased T2* signal intensity (well-oxygenated tissue) during hypothermia, with decreased intensity on rewarming. A second group of pixels (34% +/- 8% of total pixel count) showed a progressive decrease in T2* signal intensity, independent of temperature, suggestive of increased brain hypoxia during CPB. Decreased T2* signal intensity in a diffuse spatial distribution indicates that a large proportion of cerebral parenchyma is hypoxic (evidenced by an increased proportion of tissue deoxyhemoglobin) during CPB in this porcine model. Neuronal damage secondary to parenchymal hypoxia may explain the postoperative neuropsychological dysfunction after cardiac operations.

  14. CMOS active pixel sensor type imaging system on a chip

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)

    2011-01-01

    A single chip camera which includes an integrated image acquisition portion and control portion and which has double sampling/noise reduction capabilities thereon. Part of the integrated structure reduces the noise that is picked up during imaging.

  15. Mitigating illumination gradients in a SAR image based on the image data and antenna beam pattern

    DOEpatents

    Doerry, Armin W.

    2013-04-30

    Illumination gradients in a synthetic aperture radar (SAR) image of a target can be mitigated by determining a correction for pixel values associated with the SAR image. This correction is determined based on information indicative of a beam pattern used by a SAR antenna apparatus to illuminate the target, and also based on the pixel values associated with the SAR image. The correction is applied to the pixel values associated with the SAR image to produce corrected pixel values that define a corrected SAR image.
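
    A hedged sketch of the correction idea, assuming the two-way beam gain has already been mapped onto the image pixel grid; the additional column-trend normalization is an assumption standing in for the data-driven part of the correction described in the patent.

    ```python
    # Flatten illumination gradients by dividing out the antenna beam pattern,
    # then optionally normalizing any residual column-wise trend in the data.
    import numpy as np

    def correct_illumination(sar_image, beam_gain, eps=1e-6):
        """sar_image: 2-D array of pixel magnitudes; beam_gain: relative beam gain per pixel."""
        corrected = sar_image / np.maximum(beam_gain, eps)
        col_trend = np.median(corrected, axis=0, keepdims=True)       # residual trend estimate
        return corrected / np.maximum(col_trend / col_trend.mean(), eps)
    ```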

  16. Shadow-free single-pixel imaging

    NASA Astrophysics Data System (ADS)

    Li, Shunhua; Zhang, Zibang; Ma, Xiao; Zhong, Jingang

    2017-11-01

    Single-pixel imaging is an innovative imaging scheme that has received increasing attention in recent years, for it is applicable to imaging at non-visible wavelengths and imaging under weak light conditions. However, as in conventional imaging, shadows are likely to occur in single-pixel imaging and can have negative effects in practical use. In this paper, the principle of shadow occurrence in single-pixel imaging is analyzed, and a technique for shadow removal is proposed. In the proposed technique, several single-pixel detectors are used to detect the backscattered light at different locations so that the shadows in the reconstructed images corresponding to the different detectors are complementary. A shadow-free reconstruction can be derived by fusing the shadow-complementary images using a maximum selection rule. To deal with the problem of intensity mismatch in image fusion, we put forward a simple calibration. As experimentally demonstrated, the technique is able to reconstruct monochromatic and full-color shadow-free images.
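
    A short sketch of the fusion step, assuming the shadow-complementary reconstructions are already available; the gain calibration by matching mean intensities is an assumption standing in for the paper's calibration.

    ```python
    # Fuse shadow-complementary reconstructions with a maximum selection rule.
    import numpy as np

    def fuse_shadow_complementary(images):
        """images: list of 2-D reconstructions from different single-pixel detectors."""
        target = np.mean([im.mean() for im in images])
        calibrated = [im * (target / im.mean()) for im in images]  # simple intensity calibration
        return np.maximum.reduce(calibrated)                       # maximum selection rule
    ```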

  17. Fast Pixel Buffer For Processing With Lookup Tables

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.

    1992-01-01

    Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table used to address those pixels in main image memory required for processing.

  18. Detector motion method to increase spatial resolution in photon-counting detectors

    NASA Astrophysics Data System (ADS)

    Lee, Daehee; Park, Kyeongjin; Lim, Kyung Taek; Cho, Gyuseong

    2017-03-01

    Medical imaging requires high spatial resolution to identify fine lesions. Photon-counting detectors in medical imaging have recently been rapidly replacing energy-integrating detectors due to the former's high spatial resolution, high efficiency and low noise. Spatial resolution in a photon-counting image is determined by the pixel size: the smaller the pixel size, the higher the spatial resolution that can be obtained. However, detector redesign is required to reduce pixel size, and an expensive fine process is required to integrate a signal processing unit into the reduced pixel. Furthermore, as the pixel size decreases, charge sharing severely deteriorates spatial resolution. To increase spatial resolution, we propose a detector motion method using a large-pixel detector that is less affected by charge sharing. To verify the proposed method, we utilized a UNO-XRI photon-counting detector (1-mm CdTe, Timepix chip) at a maximum X-ray tube voltage of 80 kVp. A spatial resolution similar to that of a 55-μm-pixel image was achieved by applying the proposed method to a 110-μm-pixel detector, with a higher signal-to-noise ratio. The proposed method could be a way to increase spatial resolution without a pixel redesign when pixels suffer severely from charge sharing as pixel size is reduced.

  19. Imaging through scattering media by Fourier filtering and single-pixel detection

    NASA Astrophysics Data System (ADS)

    Jauregui-Sánchez, Y.; Clemente, P.; Lancis, J.; Tajahuerce, E.

    2018-02-01

    We present a novel imaging system that combines the principles of Fourier spatial filtering and single-pixel imaging in order to recover images of an object hidden behind a turbid medium by transillumination. We compare the performance of our single-pixel imaging setup with that of a conventional system. We conclude that the introduction of Fourier gating improves the contrast of images in both cases. Furthermore, we show that the combination of single-pixel imaging and Fourier spatial filtering techniques is particularly well adapted to provide images of objects transmitted through scattering media.

  20. Active hyperspectral imaging using a quantum cascade laser (QCL) array and digital-pixel focal plane array (DFPA) camera.

    PubMed

    Goyal, Anish; Myers, Travis; Wang, Christine A; Kelly, Michael; Tyrrell, Brian; Gokden, B; Sanchez, Antonio; Turner, George; Capasso, Federico

    2014-06-16

    We demonstrate active hyperspectral imaging using a quantum-cascade laser (QCL) array as the illumination source and a digital-pixel focal-plane-array (DFPA) camera as the receiver. The multi-wavelength QCL array used in this work comprises 15 individually addressable QCLs in which the beams from all lasers are spatially overlapped using wavelength beam combining (WBC). The DFPA camera was configured to integrate the laser light reflected from the sample and to perform on-chip subtraction of the passive thermal background. A 27-frame hyperspectral image was acquired of a liquid contaminant on a diffuse gold surface at a range of 5 meters. The measured spectral reflectance closely matches the calculated reflectance. Furthermore, the high-speed capabilities of the system were demonstrated by capturing differential reflectance images of sand and KClO3 particles that were moving at speeds of up to 10 m/s.

  1. Highly sensitive and area-efficient CMOS image sensor using a PMOSFET-type photodetector with a built-in transfer gate

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Ho; Kim, Kyoung-Do; Kong, Jae-Sung; Shin, Jang-Kyoo; Choi, Pyung

    2007-02-01

    In this paper, a new CMOS image sensor is presented, which uses a PMOSFET-type photodetector with a transfer gate that has a high and variable sensitivity. The proposed CMOS image sensor has been fabricated using a 0.35 μm 2-poly 4-metal standard CMOS technology and is composed of a 256 × 256 array of 7.05 × 7.10 μm pixels. The unit pixel has the configuration of a pseudo 3-transistor active pixel sensor (APS) with the PMOSFET-type photodetector with a transfer gate, which provides the function of a conventional 4-transistor APS. The generated photocurrent is controlled by the transfer gate of the PMOSFET-type photodetector. The maximum responsivity of the photodetector is larger than 1.0 × 10³ A/W without any optical lens. The fabricated 256 × 256 CMOS image sensor exhibits a good response to low-level illumination as low as 5 lux.

  2. High dynamic range bio-molecular ion microscopy with the Timepix detector.

    PubMed

    Jungmann, Julia H; MacAleese, Luke; Visser, Jan; Vrakking, Marc J J; Heeren, Ron M A

    2011-10-15

    Highly parallel, active pixel detectors enable novel detection capabilities for large biomolecules in time-of-flight (TOF) based mass spectrometry imaging (MSI). In this work, a 512 × 512 pixel, bare Timepix assembly combined with chevron microchannel plates (MCP) captures time-resolved images of several m/z species in a single measurement. Mass-resolved ion images from Timepix measurements of peptide and protein standards demonstrate the capability to return both mass-spectral and localization information of biologically relevant analytes from matrix-assisted laser desorption ionization (MALDI) on a commercial ion microscope. The use of an MCP-Timepix assembly delivers an increased dynamic range of several orders of magnitude. The Timepix returns well-defined mass spectra even at subsaturation MCP gains, which prolongs the MCP lifetime and allows the gain to be optimized for image quality. The Timepix peak resolution is limited only by the resolution of the in-pixel measurement clock. Oligomers of the protein ubiquitin were measured up to 78 kDa.

  3. Noise performance limits of advanced x-ray imagers employing poly-Si-based active pixel architectures

    NASA Astrophysics Data System (ADS)

    Koniczek, Martin; El-Mohri, Youcef; Antonuk, Larry E.; Liang, Albert; Zhao, Qihua; Jiang, Hao

    2011-03-01

    A decade after the clinical introduction of active matrix, flat-panel imagers (AMFPIs), the performance of this technology continues to be limited by the relatively large additive electronic noise of these systems, resulting in significant loss of detective quantum efficiency (DQE) under conditions of low exposure or high spatial frequencies. An increasingly promising approach for overcoming such limitations involves the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, based on low-temperature polycrystalline silicon (poly-Si) thin-film transistors (TFTs). In this study, a methodology for theoretically examining the limiting noise and DQE performance of circuits employing 1-stage in-pixel amplification is presented. This methodology involves sophisticated SPICE circuit simulations along with cascaded systems modeling. In these simulations, a device model based on the RPI poly-Si TFT model is used with additional controlled current sources corresponding to thermal and flicker (1/f) noise. From measurements of transfer and output characteristics (as well as current noise densities) performed upon individual, representative poly-Si TFT test devices, model parameters suitable for these simulations are extracted. The input stimuli and operating-point-dependent scaling of the current sources are derived from the measured current noise densities (for flicker noise), or from fundamental equations (for thermal noise). Noise parameters obtained from the simulations, along with other parametric information, are input to a cascaded systems model of an AP imager design to provide estimates of DQE performance. In this paper, this method of combining circuit simulations and cascaded systems analysis to predict the lower limits on additive noise (and upper limits on DQE) for large area AP imagers, with signal levels representative of those generated at fluoroscopic exposures, is described, and initial results are reported.

  4. Median filters as a tool to determine dark noise thresholds in high resolution smartphone image sensors for scientific imaging

    NASA Astrophysics Data System (ADS)

    Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.

    2018-01-01

    An evaluation of the use of median filters in the reduction of dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. Due to the large number of photosites, this provides an image sensor with very high sensitivity but also makes it prone to noise effects such as hot-pixels. Similar to earlier research with older smartphone models, no appreciable temperature effects were observed in the overall average pixel values for images taken at ambient temperatures between 5 °C and 25 °C. In this research, hot-pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the uniformity of the temperature effects masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot-pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot-pixels were also reduced by decreasing image resolution. This research provides a methodology to characterise the dark noise behavior of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
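
    A minimal sketch of the resulting hot-pixel treatment, assuming a dark frame as input: pixels above the 9 DN threshold are replaced by the 7 × 7 median-filtered value. The function name and return values are illustrative.

    ```python
    # Define hot-pixels via a dark-frame threshold and replace them with the median of their window.
    import numpy as np
    from scipy.ndimage import median_filter

    def remove_hot_pixels(dark_frame, threshold_dn=9, filter_size=7):
        filtered = median_filter(dark_frame, size=filter_size)
        hot = dark_frame > threshold_dn
        cleaned = np.where(hot, filtered, dark_frame)
        return cleaned, hot.mean()   # cleaned frame and hot-pixel fraction
    ```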

  5. The implementation of CMOS sensors within a real time digital mammography intelligent imaging system: The I-ImaS System

    NASA Astrophysics Data System (ADS)

    Esbrand, C.; Royle, G.; Griffiths, J.; Speller, R.

    2009-07-01

    The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty-first century. The concept of digital imaging, introduced during the 1970s, has since paved the way for established imaging techniques, of which digital mammography, phase contrast imaging and CT imaging are just a few examples. This paper presents a prototype intelligent digital mammography system designed and developed by a European consortium. The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology promoting on-chip data processing, enabling data processing and image acquisition to be performed simultaneously; consequently, statistical analysis of tissue is achievable in real time for the purpose of x-ray beam modulation via a feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel × 40 pixel CMOS MAPS sensing devices with a 32 μm pixel size, each individually coupled to a 100 μm thick thallium-doped structured CsI scintillator. This paper presents the first intelligent images of real excised breast tissue obtained from the prototype system, where the x-ray exposure was modulated via the statistical information extracted from the breast tissue itself. Conventional images were experimentally acquired and the statistical analysis of the data was done off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation using the statistical information extracted from the breast as a feedback mechanism is beneficial and foreseeable in the near future.

  6. A fractal concentration area method for assigning a color palette for image representation

    NASA Astrophysics Data System (ADS)

    Cheng, Qiuming; Li, Qingmou

    2002-05-01

    Displaying a remotely sensed image with a proper color palette is the first task in any kind of image processing and pattern recognition in GIS and image processing environments. The purpose of displaying the image should be not only to provide a visual representation of the variance of the image, although this has been the primary objective of most conventional methods, but also to choose a color palette that reflects real-world features on the ground, which must be the primary objective of employing remotely sensed data. Whereas most conventional methods focus only on the first purpose of image representation, the concentration-area (C-A plot) fractal method proposed in this paper aims to meet both purposes on the basis of pixel values and the pixel value frequency distribution, as well as the spatial and geometrical properties of image patterns. The C-A method establishes power-law relationships between the area A(≥s), with pixel values greater than or equal to s, and the pixel value s itself after plotting these values on log-log paper. A number of straight-line segments can be manually or automatically fitted to the points on the log-log paper, each representing a power-law relationship between the area A and the cutoff pixel value s in a particular range. These straight-line segments yield a group of cutoff values on the basis of which the image can be classified into discrete classes or zones. These zones usually correspond to real-world features on the ground. A Windows program has been prepared in ActiveX format for implementing the C-A method and integrating it into other GIS and image processing systems. A case study of Landsat TM band 5 is used to demonstrate the application of the method and the flexibility of the computer program.
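
    A short sketch of how the C-A curve itself could be computed; fitting the straight-line segments and picking the breakpoints (manually or automatically) is left out, and the cut-off sampling is an assumption.

    ```python
    # Concentration-area (C-A) curve: for each cut-off s, the pixel count with values >= s
    # is plotted against s on log-log axes; segment breakpoints then give class cut-offs.
    import numpy as np

    def concentration_area(img, n_cutoffs=100):
        values = img[np.isfinite(img)]
        cutoffs = np.linspace(values.min(), values.max(), n_cutoffs)
        area = np.array([(values >= s).sum() for s in cutoffs])   # pixel count as area proxy
        keep = (area > 0) & (cutoffs > 0)
        return np.log10(cutoffs[keep]), np.log10(area[keep])      # log-log C-A curve
    ```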

  7. A fast image encryption algorithm based on only blocks in cipher text

    NASA Astrophysics Data System (ADS)

    Wang, Xing-Yuan; Wang, Qian

    2014-03-01

    In this paper, a fast image encryption algorithm is proposed, in which shuffling and diffusion are performed simultaneously. The cipher-text image is divided into blocks of k × k pixels each, while the pixels of the plain text are scanned one by one. Four logistic maps are used to generate the encryption key stream and the new position of each plain-image pixel in the cipher image, including the row and column of the block to which the pixel belongs and the position where the pixel is placed within the block. After encrypting each pixel, the initial conditions of the logistic maps are changed according to the encrypted pixel's value; after encrypting each row of the plain image, the initial condition is also changed by the skew tent map. Finally, it is shown that this algorithm offers high speed, a large key space, and good resistance to differential attacks, statistical analysis, known-plaintext, and chosen-plaintext attacks.
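
    A greatly simplified sketch of the keystream-with-feedback idea only; the actual scheme additionally scatters pixels into k × k blocks of the cipher image and reseeds the maps with a skew tent map after each row, which is not shown. All parameter values are hypothetical.

    ```python
    # Logistic-map keystream with cipher feedback (illustrative, not the full scheme).
    import numpy as np

    def logistic_keystream_encrypt(pixels, x0=0.3456, r=3.99):
        """pixels: 1-D uint8 array (flattened plain image)."""
        x = x0
        out = np.empty_like(pixels)
        for i, p in enumerate(pixels):
            x = r * x * (1.0 - x)                  # logistic map iteration
            k = int(x * 256) % 256                 # keystream byte
            c = p ^ k                              # diffusion via XOR
            out[i] = c
            x = (x + c / 255.0) % 1.0              # perturb state with cipher feedback
            x = min(max(x, 1e-6), 1 - 1e-6)        # keep state inside (0, 1)
        return out
    ```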

  8. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh (Inventor); Cole, David (Inventor); Smith, Roger M. (Inventor); Hancock, Bruce R. (Inventor)

    2017-01-01

    The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage, where a first image is read out, followed by resetting only a subset of pixels in the array to a second voltage, where a second image is read out; the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.
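
    A hedged sketch of how such a measurement could be reduced to an inter-pixel coupling kernel, assuming the two readouts and the reset mask are available; the windowing and normalization choices are assumptions not spelled out in the abstract.

    ```python
    # Average the normalized difference image around each individually reset pixel.
    import numpy as np

    def ipc_kernel(image_all_reset, image_subset_reset, reset_mask, half=1):
        """Estimate an inter-pixel coupling kernel from the two reset images."""
        diff = image_subset_reset - image_all_reset
        kernels = []
        for r, c in zip(*np.nonzero(reset_mask)):
            win = diff[r - half:r + half + 1, c - half:c + half + 1]
            if win.shape == (2 * half + 1, 2 * half + 1) and win[half, half] != 0:
                kernels.append(win / win[half, half])   # normalize to the central pixel
        return np.mean(kernels, axis=0)
    ```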

  9. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Smith, Roger M (Inventor); Hancock, Bruce R. (Inventor); Cole, David (Inventor); Seshadri, Suresh (Inventor)

    2013-01-01

    The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage, where a first image is read out, followed by resetting only a subset of pixels in the array to a second voltage, where a second image is read out; the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.

  10. Spatio-Temporal Mining of PolSAR Satellite Image Time Series

    NASA Astrophysics Data System (ADS)

    Julea, A.; Meger, N.; Trouve, E.; Bolon, Ph.; Rigotti, C.; Fallourd, R.; Nicolas, J.-M.; Vasile, G.; Gay, M.; Harant, O.; Ferro-Famil, L.

    2010-12-01

    This paper presents an original data mining approach for describing Satellite Image Time Series (SITS) spatially and temporally. It relies on the extraction of pixel-based evolutions and sub-evolutions. These evolutions, namely the frequent grouped sequential patterns, are required to cover a minimum surface and to affect pixels that are sufficiently connected. These spatial constraints are actively used to cope with large data volumes and to select evolutions that are meaningful to end-users. In this paper, a specific application to fully polarimetric SAR image time series is presented. Preliminary experiments performed on a RADARSAT-2 SITS covering the Chamonix Mont-Blanc test-site are used to illustrate the proposed approach.

  11. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 mega-pixels and produced high quality images (with significant image processing). In 2005 significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count, where even moderate-cost (~120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.

  12. The effect of split pixel HDR image sensor technology on MTF measurements

    NASA Astrophysics Data System (ADS)

    Deegan, Brian M.

    2014-03-01

    Split-pixel HDR sensor technology is particularly advantageous in automotive applications, because the images are captured simultaneously rather than sequentially, thereby reducing motion blur. However, split-pixel technology introduces artifacts in MTF measurement. To achieve an HDR image, raw images are captured from both large and small sub-pixels and combined to make the HDR output. In some cases, a large sub-pixel is used for long exposure captures and a small sub-pixel for short exposures, to extend the dynamic range. The relative size of the photosensitive area of the pixel (fill factor) plays a very significant role in the output MTF measurement. Given an identical scene, the MTF will be significantly different depending on whether the large or small sub-pixels are used, i.e. a smaller fill factor (e.g. in the short exposure sub-pixel) will result in higher MTF scores but significantly greater aliasing. Simulations of split-pixel sensors revealed that, when raw images from both sub-pixels are combined, there is a significant difference in rising edge (i.e. black-to-white transition) and falling edge (white-to-black) reproduction. Experimental results showed a difference of ~50% in measured MTF50 between the falling and rising edges of a slanted edge test chart.

  13. Avoiding Stair-Step Artifacts in Image Registration for GOES-R Navigation and Registration Assessment

    NASA Technical Reports Server (NTRS)

    Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John

    2016-01-01

    In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.

  14. A semiconductor radiation imaging pixel detector for space radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Kroupa, Martin; Bahadori, Amir; Campbell-Ricketts, Thomas; Empl, Anton; Hoang, Son Minh; Idarraga-Munoz, John; Rios, Ryan; Semones, Edward; Stoffle, Nicholas; Tlustos, Lukas; Turecek, Daniel; Pinsky, Lawrence

    2015-07-01

    Progress in the development of high-performance semiconductor radiation imaging pixel detectors based on technologies developed for use in high-energy physics applications has enabled the development of a completely new generation of compact low-power active dosimeters and area monitors for use in space radiation environments. Such detectors can provide real-time information concerning radiation exposure, along with detailed analysis of the individual particles incident on the active medium. Recent results from the deployment of detectors based on the Timepix from the CERN-based Medipix2 Collaboration on the International Space Station (ISS) are reviewed, along with a glimpse of developments to come. Preliminary results from Orion MPCV Exploration Flight Test 1 are also presented.

  15. CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.

    PubMed

    Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V

    2010-12-01

    We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520 elements) array of active pixel sensors, and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target analyte responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16 elements) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at a 1 kHz sampling frequency driven at 5 V. The developed prototype system demonstrates a low-cost and miniaturized luminescence multisensor system.

  16. Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging.

    PubMed

    Esposito, M; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Evans, P M; Allinson, N M; Wells, K

    2014-07-07

    Recently, CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer scale CMOS APSs being monolithic, sources of non-uniformity of response and regional variations can persist, representing a significant challenge for wafer scale sensor response. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation of amplification, variation between readout components, wafer defects and process variations across the wafer. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer scale stitched CMOS APS. For the first time, a per-pixel analysis of the electro-optical performance of a wafer CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer scale sensors. A complete model of the signal generation in the pixel array has been provided and proved capable of accounting for noise and gain variations across the pixel array. This novel analysis allows readout noise and conversion gain to be evaluated at pixel level, stitching block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance has been further investigated in a typical x-ray application, i.e. mammography, showing CNR uniformity among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently used (i.e. FPIs), a theoretical evaluation of the detective quantum efficiency (DQE) at zero frequency has been performed, resulting in a higher DQE for this detector compared to FPIs. Optical characterization, x-ray contrast measurements and theoretical DQE evaluation suggest that a trade-off can be found between the need for a large imaging area and the requirement of uniform imaging performance, making the DynAMITe large area CMOS APS suitable for a range of bio-medical applications.

  17. SVM Pixel Classification on Colour Image Segmentation

    NASA Astrophysics Data System (ADS)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

    The aim of image segmentation is to simplify the representation of an image by clustering pixels into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, precisely to label every pixel and give each pixel an independent identity. SVM pixel classification for colour image segmentation is the topic highlighted in this paper. It has useful applications in the fields of concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first we need to recognize the type of colour and the texture used as input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. The classifier is then trained using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the output of the SVM classifier are combined to form the final segmented image. The method produces a well-segmented image, with increased quality and faster processing compared with other segmentation methods proposed earlier. One recent application is the Light L16 camera.
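
    A hedged sketch of SVM pixel classification on colour plus texture features; the features here (RGB values plus one Gabor response) and the training interface are stand-ins for the paper's local spatial similarity and steerable-filter features, and all names are hypothetical.

    ```python
    # Per-pixel colour + texture features classified with an RBF-kernel SVM.
    import numpy as np
    from skimage.filters import gabor
    from sklearn.svm import SVC

    def pixel_features(rgb):
        """Stack per-pixel features: the three colour channels plus one Gabor texture response."""
        gray = rgb.mean(axis=-1)
        real, _ = gabor(gray, frequency=0.2)      # a single texture channel for illustration
        return np.concatenate([rgb.reshape(-1, 3), real.reshape(-1, 1)], axis=1)

    def train_and_segment(rgb, labelled_mask, labels):
        """labelled_mask: bool mask of training pixels; labels: their class ids."""
        X = pixel_features(rgb)
        clf = SVC(kernel="rbf", gamma="scale").fit(X[labelled_mask.ravel()], labels)
        return clf.predict(X).reshape(rgb.shape[:2])
    ```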

  18. A Combined Laser-Communication and Imager for Microspacecraft (ACLAIM)

    NASA Technical Reports Server (NTRS)

    Hemmati, H.; Lesh, J.

    1998-01-01

    ACLAIM is a multi-function instrument consisting of a laser communication terminal and an imaging camera that share a common telescope. A single APS (Active Pixel Sensor)-based focal plane array is used to perform both the acquisition and tracking (for laser communication) and science imaging functions.

  19. Integrated imaging sensor systems with CMOS active pixel sensor technology

    NASA Technical Reports Server (NTRS)

    Yang, G.; Cunningham, T.; Ortiz, M.; Heynssens, J.; Sun, C.; Hancock, B.; Seshadri, S.; Wrigley, C.; McCarty, K.; Pain, B.

    2002-01-01

    This paper discusses common approaches to CMOS APS technology, as well as specific results on the five-wire programmable digital camera-on-a-chip developed at JPL. The paper also reports recent research in the design, operation, and performance of APS imagers for several imager applications.

  20. Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2018-04-01

    This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary enhanced image. The final enhanced image is achieved by halving the sum of the original and preliminary enhanced image pixels. Quantitative and qualitative experiments were conducted focusing on pairwise comparisons between original and enhanced images. The final enhanced images generally had the best diagnostic quality and gave more detail about the visibility of vessels and structures in capsule endoscopy images.
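
    A hedged sketch of the enhancement rule as described: a 2 × 2 average (half-unit bilinear weights) per RGB channel gives the preliminary image, and the output is the mean of the original and preliminary images. Treating the overlapped groups of four pixels as a 2 × 2 moving average is an interpretation, not the paper's exact implementation.

    ```python
    # Half-unit weighted bilinear enhancement applied directly to RGB channels.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def half_unit_bilinear_enhance(rgb):
        """rgb: float array of shape (H, W, 3), processed directly in RGB."""
        preliminary = np.empty_like(rgb)
        for ch in range(3):
            preliminary[..., ch] = uniform_filter(rgb[..., ch], size=2)  # 2x2 average, weights 1/4
        return 0.5 * (rgb + preliminary)   # halve the sum of original and preliminary images
    ```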

  1. The SWAP EUV Imaging Telescope Part I: Instrument Overview and Pre-Flight Testing

    NASA Astrophysics Data System (ADS)

    Seaton, D. B.; Berghmans, D.; Nicula, B.; Halain, J.-P.; De Groof, A.; Thibert, T.; Bloomfield, D. S.; Raftery, C. L.; Gallagher, P. T.; Auchère, F.; Defise, J.-M.; D'Huys, E.; Lecat, J.-H.; Mazy, E.; Rochus, P.; Rossi, L.; Schühle, U.; Slemzin, V.; Yalim, M. S.; Zender, J.

    2013-08-01

    The Sun Watcher with Active Pixels and Image Processing (SWAP) is an EUV solar telescope onboard ESA's Project for Onboard Autonomy 2 (PROBA2) mission launched on 2 November 2009. SWAP has a spectral bandpass centered on 17.4 nm and provides images of the low solar corona over a 54×54 arcmin field-of-view with 3.2 arcsec pixels and an imaging cadence of about two minutes. SWAP is designed to monitor all space-weather-relevant events and features in the low solar corona. Given the limited resources of the PROBA2 microsatellite, the SWAP telescope is designed with various innovative technologies, including an off-axis optical design and a CMOS-APS detector. This article provides reference documentation for users of the SWAP image data.

  2. Front end optimization for the monolithic active pixel sensor of the ALICE Inner Tracking System upgrade

    NASA Astrophysics Data System (ADS)

    Kim, D.; Aglieri Rinella, G.; Cavicchioli, C.; Chanlek, N.; Collu, A.; Degerli, Y.; Dorokhov, A.; Flouzat, C.; Gajanana, D.; Gao, C.; Guilloux, F.; Hillemanns, H.; Hristozkov, S.; Junique, A.; Keil, M.; Kofarago, M.; Kugathasan, T.; Kwon, Y.; Lattuca, A.; Mager, M.; Sielewicz, K. M.; Marin Tobon, C. A.; Marras, D.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Pham, T. H.; Puggioni, C.; Reidt, F.; Riedler, P.; Rousset, J.; Siddhanta, S.; Snoeys, W.; Song, M.; Usai, G.; Van Hoorne, J. W.; Yang, P.

    2016-02-01

    ALICE plans to replace its Inner Tracking System during the second long shutdown of the LHC in 2019 with a new 10 m² tracker constructed entirely with monolithic active pixel sensors. The TowerJazz 180 nm CMOS imaging sensor process has been selected to produce the sensor, as it offers a deep p-well allowing full CMOS in-pixel circuitry, as well as different starting materials. First full-scale prototypes have been fabricated and tested. Radiation tolerance has also been verified. In this paper the development of the charge-sensitive front end, and in particular its optimization for uniformity of charge threshold and time response, is presented.

  3. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on the local statistics of the image produces better results than a linear filter. However, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set had an input image and its corresponding three processed images with 3-, 5-, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set of images. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences between image quality processed with the 5- and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between the image quality processed with the 5- and 7-pixel masks at P=0.00528. The identified optimal mask size to produce a good smooth image was found to be 7 pixels. The best mask size for the Jong-Sen Lee filter was found to be 7 × 7 pixels, which yielded 99mTc-MDP bone scan images with the highest acceptable smoothness.
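
    A minimal sketch of a local-statistics (Lee-type) filter with a configurable mask size, so that different mask sizes (3 to 15 pixels) can be compared as in the study; the global noise-variance estimate is a simplification.

    ```python
    # Lee-type local-statistics filter: output = mean + (var / (var + noise_var)) * (x - mean).
    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, size=7, noise_var=None):
        mean = uniform_filter(img, size=size)
        sq_mean = uniform_filter(img * img, size=size)
        var = np.maximum(sq_mean - mean ** 2, 0.0)
        if noise_var is None:
            noise_var = np.mean(var)             # crude global noise estimate
        weight = var / (var + noise_var + 1e-12)
        return mean + weight * (img - mean)
    ```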

  4. Method for removing RFI from SAR images

    DOEpatents

    Doerry, Armin W.

    2003-08-19

    A method of removing RFI from SAR images by comparing two SAR images on a pixel-by-pixel basis and selecting the pixel with the lower magnitude to form a composite image. One SAR image is the conventional image produced by the SAR. The other image is created from phase-history data which has been filtered to remove the frequency bands containing the RFI.
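
    The selection rule itself is direct to express; a short sketch, assuming both complex SAR images are co-registered on the same pixel grid:

    ```python
    # Keep, per pixel, whichever of the two SAR images has the lower magnitude.
    import numpy as np

    def min_magnitude_composite(img_conventional, img_rfi_filtered):
        """Both inputs are complex-valued SAR images on the same grid."""
        keep_filtered = np.abs(img_rfi_filtered) < np.abs(img_conventional)
        return np.where(keep_filtered, img_rfi_filtered, img_conventional)
    ```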

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altunbas, Cem; Lai, Chao-Jen; Zhong, Yuncheng

    Purpose: In using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by changes in detector entrance exposure levels or beam hardening, and they are not accounted for by conventional flat field correction methods. In this work, the authors present a method to identify isolated pixel clusters that exhibit gain variations and propose a pixel gain correction (PGC) method to suppress both beam hardening and exposure level dependent gain variations. Methods: To modulate both the beam spectrum and the entrance exposure, flood field FPD projections were acquired using beam filters of varying thicknesses. “Ideal” pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratios of ideal to measured pixel values in filtered images were utilized as pixel-specific gain correction factors, referred to as the PGC method, and they were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels led to clustered image artifacts in flat field corrected projections, and 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of the PGC method was tested in a variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach. When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of change in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.
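
    A hedged sketch of how such a look-up table could be built, assuming the clustered-artifact pixels have already been flagged; the row-wise polynomial fit and the binning are assumptions about details not spelled out in the abstract.

    ```python
    # Build a pixel gain correction (PGC) look-up table: gain = ideal / measured, binned by pixel value.
    import numpy as np

    def pgc_lookup(flood_stack, flagged, n_bins=64, poly_order=3):
        """flood_stack: (N, H, W) flood images taken through filters of varying thickness;
        flagged: (H, W) bool mask of clustered-artifact pixels."""
        cols = np.arange(flood_stack.shape[2])
        meas, gain = [], []
        for frame in flood_stack:
            # Row-wise polynomial fit as a smooth estimate of the "ideal" response.
            ideal = np.vstack([np.polyval(np.polyfit(cols, row, poly_order), cols)
                               for row in frame])
            m = frame[flagged]
            meas.append(m)
            gain.append(ideal[flagged] / np.maximum(m, 1e-6))
        meas, gain = np.concatenate(meas), np.concatenate(gain)
        bins = np.linspace(meas.min(), meas.max(), n_bins + 1)
        idx = np.clip(np.digitize(meas, bins) - 1, 0, n_bins - 1)
        lut = np.array([gain[idx == b].mean() if np.any(idx == b) else 1.0
                        for b in range(n_bins)])
        return bins, lut   # apply as: corrected = value * lut[np.clip(np.digitize(value, bins) - 1, 0, n_bins - 1)]
    ```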

  6. Evaluation of PET Imaging Resolution Using 350 mu{m} Pixelated CZT as a VP-PET Insert Detector

    NASA Astrophysics Data System (ADS)

    Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2014-02-01

    A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied in high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between a CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in a virtual-pinhole PET geometry. The LSO detector is a 20 × 20 array with a 1.6 mm pitch and 10 mm thickness. The CZT detector uses a 20 × 20 × 5 mm substrate, with 350 μm pitch pixelated anodes and a coplanar cathode. A NEMA NU4 Na-22 point source of 250 μm in diameter was imaged by this system. Experiments show that the image resolution for single-pixel photopeak events was 590 μm FWHM, while the image resolution for double-pixel photopeak events was 640 μm FWHM. The inclusion of double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation of the same PET system in the Geant4 Application for Tomographic Emission (GATE). We defined the LSO detectors as a scanner ring and the 350 μm pixelated CZT detectors as an insert ring. GATE-simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of the MC-simulated data (which did not factor in positron range and acolinearity effects) was 460 μm FWHM for single-pixel events. The image resolutions of the experimental data, MC-simulated data, and theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. An interpolation algorithm for charge-sharing events was also investigated. The PET image reconstructed using the interpolation algorithm shows improved image resolution compared with the reconstruction without the interpolation algorithm.

  7. Active pixel sensor array with multiresolution readout

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)

    1999-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.

  8. Small target detection using bilateral filter and temporal cross product in infrared images

    NASA Astrophysics Data System (ADS)

    Bae, Tae-Wuk

    2011-09-01

    We introduce a spatial and temporal target detection method using a spatial bilateral filter (BF) and the temporal cross product (TCP) of temporal pixels in infrared (IR) image sequences. First, the TCP is presented to extract the characteristics of temporal pixels by using the temporal profile at the spatial coordinates of each pixel. The TCP represents the cross product values formed by the gray-level distance vector between a current temporal pixel and the adjacent temporal pixel, and the horizontal distance vector between the current temporal pixel and a temporal pixel corresponding to a potential target center. The summation of TCP values of temporal pixels in spatial coordinates yields the temporal target image (TTI), which represents the temporal target information of temporal pixels in spatial coordinates. The proposed BF filter is then used to extract the spatial target information. In order to predict the background without targets, the proposed BF filter uses standard deviations obtained by an exponential mapping of the TCP value corresponding to the coordinate of the pixel being processed spatially. The spatial target image (STI) is made by subtracting the predicted image from the original image. Thus, the spatial and temporal target image (STTI) is achieved by multiplying the STI and the TTI, and targets are finally detected in the STTI. In the experimental results, receiver operating characteristic (ROC) curves were computed to compare objective performance. The results show that the proposed algorithm provides better discrimination of targets and clutter and lower false alarm rates than existing target detection methods.

  9. A Hopfield neural network for image change detection.

    PubMed

    Pajares, Gonzalo

    2006-09-01

    This paper outlines an optimization relaxation approach based on the analog Hopfield neural network (HNN) for solving the image change detection problem between two images. A difference image is obtained by subtracting the two images pixel by pixel. The network topology is built so that each pixel in the difference image is a node in the network. Each node is characterized by its state, which determines whether a pixel has changed. An energy function is derived so that the network converges to stable states. The analog Hopfield model allows each node to take on analog state values. Unlike most widely used approaches, where binary labels (changed/unchanged) are assigned to each pixel, the analog property provides the strength of the change. The main contribution of this paper is the customization of the analog Hopfield neural network to derive an automatic image change detection approach. When a pixel is being processed, some existing image change detection procedures consider only inter-pixel relations in its neighborhood. The main drawback of such approaches is that the pixel is labeled as changed or unchanged according to the information supplied by its neighbors, while its own information is ignored. The Hopfield model overcomes this drawback and, for each pixel, allows a tradeoff between the influence of its neighborhood and its own criterion. This is mapped into the energy function to be minimized. The performance of the proposed method is illustrated by a comparative analysis against some existing image change detection methods.
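
    A simplified, hedged sketch of the relaxation idea: each pixel keeps an analog state driven by its own evidence (the difference image) and by its neighbours' states, iterated until it stabilizes. The paper derives its update from an explicit Hopfield energy function; this is only an illustrative fixed-point iteration with the same ingredients, and all parameters are hypothetical.

    ```python
    # Analog relaxation: own evidence vs. neighbourhood influence, iterated to a stable change map.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def relax_change_map(img_a, img_b, alpha=4.0, beta=2.0, n_iter=50):
        diff = np.abs(img_a - img_b)
        evidence = (diff - diff.mean()) / (diff.std() + 1e-12)   # own criterion, zero-centred
        state = np.tanh(evidence)
        for _ in range(n_iter):
            neighbours = uniform_filter(state, size=3)           # neighbourhood influence
            state = np.tanh(alpha * evidence + beta * neighbours)
        return state   # analog change strength in (-1, 1)
    ```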

  10. Fiber pixelated image database

    NASA Astrophysics Data System (ADS)

    Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke

    2016-08-01

    Imaging of physically inaccessible parts of the body, such as the colon, at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on imaging fiber bundles are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past to develop and implement pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited their wider use and adaptability. A database of pixelated images is therefore needed to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help to researchers working on comb structure removal algorithms.

  11. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel, and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual-pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions through multiple low light levels between 20 and 1,000 photons/pixel per frame, up to higher light conditions. We further show that using pixel variance for flat-field correction leads to errors in cameras with good factory calibration.
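
    The sketch below shows one generic way such per-pixel maps might be estimated and applied; it is not the manufacturer's on-camera correction. It assumes a stack of dark frames and a stack of uniformly illuminated flat frames acquired as described in the abstract.

        import numpy as np

        def calibrate_pixelwise(dark_stack, flat_stack):
            """Estimate per-pixel offset, relative gain (PRNU) and read noise."""
            offset = dark_stack.mean(axis=0)              # per-pixel offset map
            read_noise = dark_stack.std(axis=0)           # per-pixel temporal read noise (DN, rms)
            flat = flat_stack.mean(axis=0) - offset       # offset-corrected mean flat field
            gain = flat / flat.mean()                     # relative photoresponse map
            return offset, gain, read_noise

        def correct_frame(raw, offset, gain):
            """Apply the per-pixel correction to a raw frame."""
            return (raw.astype(float) - offset) / gain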

  12. Full-Frame Reference for Test Photo of Moon

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images.

    Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information.

    The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

    The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, to provide to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.

  13. Adaptive box filters for removal of random noise from digital images

    USGS Publications Warehouse

    Eliason, E.M.; McEwen, A.S.

    1990-01-01

    We have developed adaptive box-filtering algorithms to (1) remove random bit errors (pixel values with no relation to the image scene) and (2) smooth noisy data (pixels related to the image scene but with an additive or multiplicative component of noise). For both procedures, we use the standard deviation (σ) of the pixels within a local box surrounding each pixel; hence, they are adaptive filters. This technique effectively reduces speckle in radar images without eliminating fine details. -from Authors
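
    A minimal sketch of the bit-error removal case is shown below, assuming the decision rule is a simple k·σ test against the local box statistics (the exact rule and box size used by the authors may differ).

        import numpy as np

        def adaptive_despike(image, box=5, k=3.0):
            """Replace pixels that deviate from their local-box mean by more than
            k local standard deviations (illustrative adaptive box filter)."""
            img = image.astype(float)
            pad = box // 2
            padded = np.pad(img, pad, mode='reflect')
            out = img.copy()
            rows, cols = img.shape
            for r in range(rows):
                for c in range(cols):
                    win = padded[r:r + box, c:c + box]      # local box around (r, c)
                    mean, sigma = win.mean(), win.std()
                    if abs(img[r, c] - mean) > k * sigma:   # likely a random bit error
                        out[r, c] = mean                    # replace with the local mean
            return out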

  14. Characterisation of the high dynamic range Large Pixel Detector (LPD) and its use at X-ray free electron laser sources

    DOE PAGES

    Veale, M. C.; Adkin, P.; Booker, P.; ...

    2017-12-04

    The STFC Rutherford Appleton Laboratory have delivered the Large Pixel Detector (LPD) for MHz frame rate imaging at the European XFEL. The detector system has an active area of 0.5 m × 0.5 m and consists of a million pixels on a 500 μm pitch. Sensors have been produced from 500 μm thick Hamamatsu silicon tiles that have been bump bonded to the readout ASIC using a silver epoxy and gold stud technique. Each pixel of the detector system is capable of measuring 10⁵ 12 keV photons per image readout at 4.5 MHz. In this paper results from the testing of these detectors at the Diamond Light Source and the Linac Coherent Light Source (LCLS) are presented. The performance of the detector in terms of linearity and spatial uniformity, and the performance of the different ASIC gain stages, are characterised.

  16. Color constancy using bright-neutral pixels

    NASA Astrophysics Data System (ADS)

    Wang, Yanfang; Luo, Yupin

    2014-03-01

    An effective illuminant-estimation approach for color constancy is proposed. Bright and near-neutral pixels are selected to jointly represent the illuminant color and utilized for illuminant estimation. To assess the representing capability of pixels, bright-neutral strength (BNS) is proposed by combining pixel chroma and brightness. Accordingly, a certain percentage of pixels with the largest BNS is selected to be the representative set. For every input image, a proper percentage value is determined via an iterative strategy by seeking the optimal color-corrected image. To compare various color-corrected images of an input image, image color-cast degree (ICCD) is devised using means and standard deviations of RGB channels. Experimental evaluation on standard real-world datasets validates the effectiveness of the proposed approach.
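
    A minimal sketch of the selection-and-averaging idea is given below. The bright-neutral strength used here (brightness divided by one plus a simple chroma measure) and the fixed selection percentage are illustrative stand-ins for the BNS score and the iteratively chosen percentage described in the abstract.

        import numpy as np

        def estimate_illuminant(img, percent=2.0):
            """Estimate the illuminant colour from bright, near-neutral pixels."""
            rgb = img.reshape(-1, 3).astype(float)
            brightness = rgb.sum(axis=1)
            chroma = rgb.max(axis=1) - rgb.min(axis=1)      # crude neutrality measure
            bns = brightness / (1.0 + chroma)               # assumed bright-neutral strength

            n_sel = max(1, int(len(rgb) * percent / 100.0))
            idx = np.argsort(bns)[-n_sel:]                  # top-percent BNS pixels
            illuminant = rgb[idx].mean(axis=0)
            return illuminant / np.linalg.norm(illuminant)  # unit-norm estimate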

  17. Algorithm for Detecting a Bright Spot in an Image

    NASA Technical Reports Server (NTRS)

    2009-01-01

    An algorithm processes the pixel intensities of a digitized image to detect and locate a circular bright spot, the approximate size of which is known in advance. The algorithm is used to find images of the Sun in cameras aboard the Mars Exploration Rovers. (The images are used in estimating orientations of the Rovers relative to the direction to the Sun.) The algorithm can also be adapted to the tracking of circular bright targets in other diverse applications. The first step in the algorithm is to calculate a dark-current ramp, a correction necessitated by the scheme that governs the readout of pixel charges in the charge-coupled-device camera in the original Mars Exploration Rover application. In this scheme, the fraction of each frame period during which dark current is accumulated in a given pixel (and, hence, the dark-current contribution to the pixel image-intensity reading) is proportional to the pixel row number. For the purpose of the algorithm, the dark-current contribution to the intensity reading from each pixel is assumed to equal the average of intensity readings from all pixels in the same row, and the factor of proportionality is estimated on the basis of this assumption. Then the product of the row number and the factor of proportionality is subtracted from the reading from each pixel to obtain a dark-current-corrected intensity reading. The next step in the algorithm is to determine the best location, within the overall image, for a window of N × N pixels (where N is an odd number) large enough to contain the bright spot of interest plus a small margin. (In the original application, the overall image contains 1,024 by 1,024 pixels, the image of the Sun is about 22 pixels in diameter, and N is chosen to be 29.)
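
    A small sketch of the ramp-correction step is given below, under the stated assumption that the dark-current contribution grows linearly with row number; the slope is estimated here by a least-squares fit of the row means against row number, which is one simple way to realize the estimate described above.

        import numpy as np

        def remove_dark_current_ramp(image):
            """Subtract a row-proportional dark-current ramp (illustrative)."""
            img = image.astype(float)
            rows = np.arange(img.shape[0])
            row_means = img.mean(axis=1)
            # Least-squares slope of row mean versus row number.
            slope, intercept = np.polyfit(rows, row_means, 1)
            return img - slope * rows[:, None]    # remove the ramp, keep the scene signal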

  18. Mapping of the Culann-Tohil Region of Io

    NASA Technical Reports Server (NTRS)

    Turtle, E. P.; Keszthelyi, L. P.; Jaeger, W. L.; Radebaugh, J.; Milazzo, M. P.; McEwen, A. S.; Moore, J. M.; Schenk, P. M.; Lopes, R. M. C.

    2003-01-01

    The Galileo spacecraft completed its observations of Jupiter's volcanic moon Io in October 2001 with the orbit I32 flyby, during which new local (13-55 m/pixel) and regional (130-400 m/pixel) resolution images and spectroscopic data of the antijovian hemisphere were returned. We have combined an I32 regional mosaic (330 m/pixel) with lower-resolution C21 color data (1.4 km/pixel, Figure 1) and produced a geomorphologic map of the Culann-Tohil area of this hemisphere. Here we present the geologic features, map units, and structures in this region, and give preliminary conclusions about geologic activity for comparison with other regions to better understand Io's geologic evolution.

  19. Estimating pixel variances in the scenes of staring sensors

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM

    2012-01-24

    A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
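
    One plausible way (not necessarily the patented method) to fold spatial-gradient information into per-pixel error estimates is sketched below: differences that can be explained by sub-pixel jitter acting on steep local gradients are down-weighted. The jitter bound jitter_px and the noise term noise_sigma are assumed parameters.

        import numpy as np

        def change_significance(reference, current, jitter_px=0.5, noise_sigma=2.0):
            """Score scene changes while discounting differences explainable by jitter."""
            ref = reference.astype(float)
            cur = current.astype(float)
            raw_diff = cur - ref

            # Spatial intensity gradients of the reference frame.
            gy, gx = np.gradient(ref)
            grad_mag = np.hypot(gx, gy)

            # Per-pixel error estimate: sensor noise plus jitter acting on local gradients.
            pixel_error = np.sqrt(noise_sigma**2 + (jitter_px * grad_mag) ** 2)

            # Differences are significant only where they exceed the expected error.
            return np.abs(raw_diff) / (pixel_error + 1e-12)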

  20. Measurements with MÖNCH, a 25 μm pixel pitch hybrid pixel detector

    NASA Astrophysics Data System (ADS)

    Ramilli, M.; Bergamaschi, A.; Andrae, M.; Brückner, M.; Cartier, S.; Dinapoli, R.; Fröjdh, E.; Greiffenberg, D.; Hutwelker, T.; Lopez-Cuenca, C.; Mezza, D.; Mozzanica, A.; Ruat, M.; Redford, S.; Schmitt, B.; Shi, X.; Tinti, G.; Zhang, J.

    2017-01-01

    MÖNCH is a hybrid silicon pixel detector based on charge integration and with analog readout, featuring a pixel size of 25×25 μm². The latest working prototype consists of an array of 400×400 identical pixels for a total active area of 1×1 cm². Its design is optimized for the single photon regime. An exhaustive characterization of this large-area prototype has been carried out in the past months, and it confirms an ENC on the order of 35 electrons RMS and a dynamic range of ~4×12 keV photons in high gain mode, which increases to ~100×12 keV photons with the lowest gain setting. The low noise levels of MÖNCH make it a suitable candidate for X-ray detection at energies around 1 keV and below. Imaging applications in particular can benefit significantly from the use of MÖNCH: due to its extremely small pixel pitch, the detector intrinsically offers excellent position resolution. Moreover, in low flux conditions, charge sharing between neighboring pixels allows the use of position interpolation algorithms which grant micrometer-level resolution. Its energy reconstruction and imaging capabilities have been tested for the first time at a low energy beamline at PSI, with photon energies between 1.75 keV and 3.5 keV, and results will be shown.

  1. Variable waveband infrared imager

    DOEpatents

    Hunter, Scott R.

    2013-06-11

    A waveband imager includes an imaging pixel that utilizes photon tunneling with a thermally actuated bimorph structure to convert infrared radiation to visible radiation. Infrared radiation passes through a transparent substrate and is absorbed by a bimorph structure formed with a pixel plate. The absorption generates heat which deflects the bimorph structure and pixel plate towards the substrate and into an evanescent electric field generated by light propagating through the substrate. Penetration of the bimorph structure and pixel plate into the evanescent electric field allows a portion of the visible wavelengths propagating through the substrate to tunnel through the substrate, bimorph structure, and/or pixel plate as visible radiation that is proportional to the intensity of the incident infrared radiation. This converted visible radiation may be superimposed over visible wavelengths passed through the imaging pixel.

  2. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2017-02-01

    Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are also hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current Photo-Detector Array (PDA) technology to counter fundamental limitations of PDA-based imagers while providing high enough imaging spatial resolution and pixel counts. Engineering the CAOS camera platform using, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.

  3. Low Temperature Polycrystalline Silicon Thin Film Transistor Pixel Circuits for Active Matrix Organic Light Emitting Diodes

    NASA Astrophysics Data System (ADS)

    Fan, Ching-Lin; Lin, Yu-Sheng; Liu, Yan-Wei

    A new pixel design and driving method for active matrix organic light emitting diode (AMOLED) displays that use low-temperature polycrystalline silicon thin-film transistors (LTPS-TFTs) with a voltage programming method are proposed and verified using the SPICE simulator. We employed an appropriate TFT model in the SPICE simulation to demonstrate the performance of the pixel circuit. The OLED anode voltage variation error rates are below 0.35% under driving-TFT threshold voltage deviation (ΔVth = ±0.33 V). The OLED current non-uniformity caused by OLED threshold voltage degradation (ΔVTO = +0.33 V) is significantly reduced (below 6%). The simulation results show that the pixel design can improve display image non-uniformity by compensating for the threshold voltage deviation in the driving TFT and the OLED threshold voltage degradation at the same time.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, Julian; Tate, Mark W.; Shanks, Katherine S.

    Pixel Array Detectors (PADs) consist of an x-ray sensor layer bonded pixel-by-pixel to an underlying readout chip. This approach allows both the sensor and the custom pixel electronics to be tailored independently to best match the x-ray imaging requirements. Here we describe the hybridization of CdTe sensors to two different charge-integrating readout chips, the Keck PAD and the Mixed-Mode PAD (MM-PAD), both developed previously in our laboratory. The charge-integrating architecture of each of these PADs extends the instantaneous counting rate by many orders of magnitude beyond that obtainable with photon counting architectures. The Keck PAD chip consists of rapid, 8-frame, in-pixel storage elements with framing periods <150 ns. The second detector, the MM-PAD, has an extended dynamic range by utilizing an in-pixel overflow counter coupled with charge removal circuitry activated at each overflow. This allows the recording of signals from the single-photon level to tens of millions of x-rays/pixel/frame while framing at 1 kHz. Both detector chips consist of a 128×128 pixel array with (150 µm)² pixels.

  5. Evaluation of the MTF for a-Si:H imaging arrays

    NASA Astrophysics Data System (ADS)

    Yorkston, John; Antonuk, Larry E.; Seraji, N.; Huang, Weidong; Siewerdsen, Jeffrey H.; El-Mohri, Youcef

    1994-05-01

    Hydrogenated amorphous silicon imaging arrays are being developed for numerous applications in medical imaging. Diagnostic and megavoltage images have previously been reported and a number of the intrinsic properties of the arrays have been investigated. This paper reports on the first attempt to characterize the intrinsic spatial resolution of the imaging pixels on a 450 μm pitch, n-i-p imaging array fabricated at Xerox P.A.R.C. The pre-sampled modulation transfer function was measured by scanning an approximately 25 μm wide slit of visible-wavelength light across a pixel in both the DATA and FET directions. The results show that the response of the pixel in these orthogonal directions is well described by a simple model that accounts for asymmetries in the pixel response due to geometric aspects of the pixel design.

  6. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    NASA Astrophysics Data System (ADS)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

    We propose a new multiple image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. The scrambled images are encrypted into phase information using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and the synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as the input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, the pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.

  7. Color filter array pattern identification using variance of color difference image

    NASA Astrophysics Data System (ADS)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel. Therefore, empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming the color filter array pattern. We present an identification method of the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, the color difference block is constructed to emphasize the difference between the original pixel and the interpolated pixel. The variance measure of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
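
    A much-simplified variant of the underlying idea is sketched below for the green channel of a Bayer CFA: after local-mean removal, the residual variance tends to be larger on the diagonal that carries the originally sampled green pixels than on the interpolated diagonal. The 3×3 box mean and the two-way decision are illustrative simplifications of the method described in the abstract.

        import numpy as np

        def likely_green_lattice(green):
            """Guess which 2x2 diagonal of a Bayer CFA holds the original green samples."""
            g = green.astype(float)
            # Crude local-mean removal with a 3x3 box (background suppression).
            pad = np.pad(g, 1, mode='reflect')
            local_mean = sum(pad[i:i + g.shape[0], j:j + g.shape[1]]
                             for i in range(3) for j in range(3)) / 9.0
            resid = g - local_mean

            # Residual variance on the two diagonals of the 2x2 CFA cell.
            diag_a = np.concatenate([resid[0::2, 0::2].ravel(), resid[1::2, 1::2].ravel()])
            diag_b = np.concatenate([resid[0::2, 1::2].ravel(), resid[1::2, 0::2].ravel()])
            # Higher residual variance is taken to mark the originally sampled lattice.
            return 'GRBG/GBRG-type' if diag_a.var() > diag_b.var() else 'RGGB/BGGR-type'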

  8. A Multi-Resolution Mode CMOS Image Sensor with a Novel Two-Step Single-Slope ADC for Intelligent Surveillance Systems.

    PubMed

    Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn

    2017-06-25

    In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed for the 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) of the CIS, which supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution modes enable the CIS to reduce total power consumption while the scene remains static (i.e., while no events occur). A prototype sensor of 176 × 144 pixels has been fabricated in a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital) and a frame rate of 14 frames/s.

  9. Single photon detection using Geiger mode CMOS avalanche photodiodes

    NASA Astrophysics Data System (ADS)

    Lawrence, William G.; Stapels, Christopher; Augustine, Frank L.; Christian, James F.

    2005-10-01

    Geiger mode Avalanche Photodiodes fabricated using complementary metal-oxide-semiconductor (CMOS) fabrication technology combine high sensitivity detectors with pixel-level auxiliary circuitry. Radiation Monitoring Devices has successfully implemented CMOS manufacturing techniques to develop prototype detectors with active diameters ranging from 5 to 60 microns and measured detection efficiencies of up to 60%. CMOS active quenching circuits are included in the pixel layout. The actively quenched pixels have a quenching time less than 30 ns and a maximum count rate greater than 10 MHz. The actively quenched Geiger mode avalanche photodiode (GPD) has linear response at room temperature over six orders of magnitude. When operating in Geiger mode, these GPDs act as single photon-counting detectors that produce a digital output pulse for each photon with no associated read noise. Thermoelectrically cooled detectors have less than 1 Hz dark counts. The detection efficiency, dark count rate, and after-pulsing of two different pixel designs are measured and demonstrate the differences in the device operation. Additional applications for these devices include nuclear imaging and replacement of photomultiplier tubes in dosimeters.

  10. Satellite Data Used to Combat Fires

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This visible light/infrared composite image over Montana and Idaho was acquired by the Moderate-resolution Imaging Spectroradiometer on Aug. 23, 2000. The image shows the locations of actively burning wildfires (red pixels) and the thick shroud of smoke they produced (grey-blue pixels). There were 57 wildfires burning across both states. A single MODIS image can be up to 2,330 kilometers wide, allowing fire scientists to monitor a much larger area than can be covered on the ground or by aircraft. Also, because MODIS has detectors that are sensitive to thermal infrared wavelengths of 3.70 and 3.90 micrometers, it can detect fires on the surface even through heavy smoke. For more information, see: NASA Satellite Data Used Operationally to Help Combat Fires in the West. Image courtesy MODIS Science Team, Reto Stockli, and Robert Simmon.

  11. Spatial clustering of pixels of a multispectral image

    DOEpatents

    Conger, James Lynn

    2014-08-19

    A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and its most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
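
    The maximum-similarity score can be sketched as below, using cosine similarity between spectra as an assumed similarity measure (the patent does not fix a particular one); edge pixels wrap around for brevity.

        import numpy as np

        def max_neighbor_similarity(cube):
            """For each pixel of an (H, W, B) multispectral cube, return the highest
            spectral (cosine) similarity to any of its 8 neighbours."""
            spectra = cube.astype(float)
            norms = np.linalg.norm(spectra, axis=2) + 1e-12
            unit = spectra / norms[..., None]

            best = np.full(cube.shape[:2], -np.inf)
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    shifted = np.roll(np.roll(unit, dr, axis=0), dc, axis=1)
                    sim = (unit * shifted).sum(axis=2)   # cosine similarity to this neighbour
                    best = np.maximum(best, sim)
            return best   # threshold this map before clustering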

  12. Active pixel imagers incorporating pixel-level amplifiers based on polycrystalline-silicon thin-film transistors

    PubMed Central

    El-Mohri, Youcef; Antonuk, Larry E.; Koniczek, Martin; Zhao, Qihua; Li, Yixin; Street, Robert A.; Lu, Jeng-Ping

    2009-01-01

    Active matrix, flat-panel imagers (AMFPIs) employing a 2D matrix of a-Si addressing TFTs have become ubiquitous in many x-ray imaging applications due to their numerous advantages. However, under conditions of low exposures and/or high spatial resolution, their signal-to-noise performance is constrained by the modest system gain relative to the electronic additive noise. In this article, a strategy for overcoming this limitation through the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, using polycrystalline-silicon (poly-Si) TFTs is reported. Compared to a-Si, poly-Si offers substantially higher mobilities, enabling higher TFT currents and the possibility of sophisticated AP designs based on both n- and p-channel TFTs. Three prototype indirect detection arrays employing poly-Si TFTs and a continuous a-Si photodiode structure were characterized. The prototypes consist of an array (PSI-1) that employs a pixel architecture with a single TFT, as well as two arrays (PSI-2 and PSI-3) that employ AP architectures based on three and five TFTs, respectively. While PSI-1 serves as a reference with a design similar to that of conventional AMFPI arrays, PSI-2 and PSI-3 incorporate additional in-pixel amplification circuitry. Compared to PSI-1, results of x-ray sensitivity demonstrate signal gains of ∼10.7 and 20.9 for PSI-2 and PSI-3, respectively. These values are in reasonable agreement with design expectations, demonstrating that poly-Si AP circuits can be tailored to provide a desired level of signal gain. PSI-2 exhibits the same high levels of charge trapping as those observed for PSI-1 and other conventional arrays employing a continuous photodiode structure. For PSI-3, charge trapping was found to be significantly lower and largely independent of the bias voltage applied across the photodiode. MTF results indicate that the use of a continuous photodiode structure in PSI-1, PSI-2, and PSI-3 results in optical fill factors that are close to unity. In addition, the greater complexity of PSI-2 and PSI-3 pixel circuits, compared to that of PSI-1, has no observable effect on spatial resolution. Both PSI-2 and PSI-3 exhibit high levels of additive noise, resulting in no net improvement in the signal-to-noise performance of these early prototypes compared to conventional AMFPIs. However, faster readout rates, coupled with implementation of multiple sampling protocols allowed by the nondestructive nature of pixel readout, resulted in a significantly lower noise level of ∼560 e (rms) for PSI-3. PMID:19673229

  14. High-End CMOS Active Pixel Sensors For Space-Borne Imaging Instruments

    DTIC Science & Technology

    2005-07-13

    DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution unlimited. SUPPLEMENTARY NOTES: See also ADM001791, Potentially Disruptive ... Technologies and Their Impact in Space Programs, held in Marseille, France on 4-6 July 2005. The original document contains color images.

  15. Muon Trigger for Mobile Phones

    NASA Astrophysics Data System (ADS)

    Borisyak, M.; Usvyatsov, M.; Mulhearn, M.; Shimmin, C.; Ustyuzhanin, A.

    2017-10-01

    The CRAYFIS experiment proposes to use privately owned mobile phones as a ground detector array for Ultra High Energy Cosmic Rays. Upon interacting with Earth's atmosphere, these events produce extensive particle showers which can be detected by cameras on mobile phones. A typical shower contains minimally-ionizing particles such as muons. As these particles interact with CMOS image sensors, they may leave tracks of faintly-activated pixels that are sometimes hard to distinguish from random detector noise. Triggers that rely on the presence of very bright pixels within an image frame are not efficient in this case. We present a trigger algorithm based on Convolutional Neural Networks which selects images containing such tracks and is evaluated in a lazy manner: the response of each successive layer is computed only if the activation of the current layer satisfies a continuation criterion. Using neural networks increases the sensitivity considerably compared with image thresholding, while the lazy evaluation allows the trigger to execute under the limited computational power of mobile phones.
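
    The control flow of such a lazy trigger can be sketched as below; the layer functions and thresholds are generic stand-ins (real layers would be the network's convolutions), so this shows only the early-exit strategy, not the CRAYFIS network.

        import numpy as np

        def lazy_trigger(patch, layers, thresholds):
            """Evaluate layers lazily: stop as soon as a layer's mean activation
            falls below its continuation threshold."""
            x = patch.astype(float)
            for layer, thr in zip(layers, thresholds):
                x = layer(x)
                if x.mean() < thr:       # continuation criterion not met: reject early
                    return False
            return True                  # all layers passed: keep the image

        # Example with trivial stand-in "layers":
        relu_minus_bias = lambda x: np.maximum(x - 1.0, 0.0)
        keep = lazy_trigger(np.random.poisson(1.2, (64, 64)),
                            layers=[relu_minus_bias, relu_minus_bias],
                            thresholds=[0.05, 0.02])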

  16. Chemical images of marine bio-active compounds by surface enhanced Raman spectroscopy and transposed orthogonal partial least squares (T-OPLS).

    PubMed

    Abbas, Aamer; Josefson, Mats; Nylund, Göran M; Pavia, Henrik; Abrahamsson, Katarina

    2012-08-06

    Surface enhanced Raman spectroscopy combined with transposed Orthogonal Partial Least Squares (T-OPLS) was shown to produce chemical images of the natural antibacterial surface-active compound 1,1,3,3-tetrabromo-2-heptanone (TBH) on Bonnemaisonia hamifera. The use of gold colloids functionalised with the internal standard 4-mercapto-benzonitrile (MBN) made it possible to create images of the relative concentration of TBH over the surfaces. A gradient of TBH could be mapped over and in the close vicinity of the B. hamifera algal vesicles at the attomol/pixel level. T-OPLS produced a measure of the spectral correlation for each pixel of the hyperspectral images whilst not including spectral variation that was linearly independent of the target spectrum. In this paper we show the possibility to retrieve specific spectral information with a low magnitude in a complex matrix. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy

    NASA Astrophysics Data System (ADS)

    Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan

    2016-03-01

    Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel-array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in e.g., mobile phones. Here we report, for the first time, a wavelength-scanning-based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor-array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength-scanning-based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited settings.

  18. A Decision-Based Modified Total Variation Diffusion Method for Impulse Noise Removal

    PubMed Central

    Zhu, Qingxin; Song, Xiuli; Tao, Jinsong

    2017-01-01

    Impulsive noise removal usually employs median filtering, switching median filtering, the total variation L1 method, and variants. These approaches however often introduce excessive smoothing and can result in extensive visual feature blurring and thus are suitable only for images with low density noise. A new method to remove noise is proposed in this paper to overcome this limitation, which divides pixels into different categories based on different noise characteristics. If an image is corrupted by salt-and-pepper noise, the pixels are divided into corrupted and noise-free; if the image is corrupted by random valued impulses, the pixels are divided into corrupted, noise-free, and possibly corrupted. Pixels falling into different categories are processed differently. If a pixel is corrupted, modified total variation diffusion is applied; if the pixel is possibly corrupted, weighted total variation diffusion is applied; otherwise, the pixel is left unchanged. Experimental results show that the proposed method is robust to different noise strengths and suitable for different images, with strong noise removal capability as shown by PSNR/SSIM results as well as the visual quality of restored images. PMID:28536602
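
    The decision step and the selective processing can be illustrated as follows for the salt-and-pepper case; the corrupted pixels here receive a median of their noise-free neighbours, which is a simple stand-in for the modified total variation diffusion used in the paper.

        import numpy as np

        def decision_based_denoise(image, low=0, high=255):
            """Process only pixels judged corrupted; leave noise-free pixels unchanged."""
            img = image.astype(float)
            corrupted = (image == low) | (image == high)        # decision step
            out = img.copy()
            pad = np.pad(img, 1, mode='reflect')
            pad_bad = np.pad(corrupted, 1, mode='reflect')
            for r, c in zip(*np.nonzero(corrupted)):
                win = pad[r:r + 3, c:c + 3]
                good = win[~pad_bad[r:r + 3, c:c + 3]]          # noise-free neighbours
                out[r, c] = np.median(good) if good.size else np.median(win)
            return out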

  19. SAR Image Change Detection Based on Fuzzy Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Huang, G.; Zhao, Z.

    2018-04-01

    Most existing SAR image change detection algorithms consider only single-pixel information from the two images and do not consider the spatial dependencies between image pixels. The change detection results are therefore susceptible to image noise, and the detection performance is not ideal. A Markov Random Field (MRF) can make full use of the spatial dependence of image pixels and improve detection accuracy. When segmenting the difference image, different categories of regions have a high degree of similarity at their junctions, and it is difficult to clearly distinguish the labels of the pixels near the boundaries of the decision region. In the traditional MRF method, each pixel is given a hard label during each iteration; the process thus makes hard decisions and causes a loss of information. This paper applies a combination of fuzzy theory and MRF to the change detection of SAR images. The experimental results show that the proposed method has a better detection effect than the traditional MRF method.

  20. Image indexing using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2001-01-01

    A color correlogram is a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c_1, ..., c_m. Also, the distance values k ∈ [d] to be used in the correlogram are determined, where [d] is the set of distances between pixels in the image, and where d_max is the maximum distance between pixels in the image. Each entry (i, j, k) in the table is the probability of finding a pixel of color c_j at a selected distance k from a pixel of color c_i. A color autocorrelogram, which is a restricted version of the color correlogram that considers only color pairs of the form (i, i), may also be used to identify an image.
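
    A compact autocorrelogram for a greyscale image (colors replaced by grey-level bins, and only axis-aligned offsets counted, for brevity) can be computed as sketched below; the full correlogram would tabulate all (i, j) colour pairs instead of only (i, i).

        import numpy as np

        def autocorrelogram(image, m=16, distances=(1, 3, 5, 7)):
            """P(same colour bin at distance k) for each of m grey-level bins
            (assumes an 8-bit single-channel image)."""
            q = (image.astype(int) * m // 256).clip(0, m - 1)   # quantize to m bins
            H, W = q.shape
            table = np.zeros((m, len(distances)))

            for kk, k in enumerate(distances):
                same = np.zeros(m)
                total = np.zeros(m)
                for dr, dc in ((k, 0), (-k, 0), (0, k), (0, -k)):
                    a = q[max(0, -dr):H - max(0, dr), max(0, -dc):W - max(0, dc)]
                    b = q[max(0, dr):H - max(0, -dr), max(0, dc):W - max(0, -dc)]
                    total += np.bincount(a.ravel(), minlength=m)
                    same += np.bincount(a[a == b], minlength=m)
                table[:, kk] = same / np.maximum(total, 1)
            return table   # rows: colour bins, columns: distances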

  1. Spectral characterisation and noise performance of Vanilla—an active pixel sensor

    NASA Astrophysics Data System (ADS)

    Blue, Andrew; Bates, R.; Bohndiek, S. E.; Clark, A.; Arvanitis, Costas D.; Greenshaw, T.; Laing, A.; Maneuski, D.; Turchetta, R.; O'Shea, V.

    2008-06-01

    This work reports on the characterisation of a new active pixel sensor, Vanilla. The Vanilla comprises 512×512 pixels, each 25 μm × 25 μm. The sensor has a 12-bit digital output for full-frame mode, and it can also be read out in analogue mode, in which a fully programmable region-of-interest (ROI) mode is available. In full frame, the sensor can operate at a readout rate of more than 100 frames per second (fps), while in ROI mode the speed depends on the size, shape and number of ROIs. For example, an ROI of 6×6 pixels can be read at 20,000 fps in analogue mode. Photon transfer curve (PTC) measurements allowed the calculation of the read noise, shot noise, full-well capacity and camera gain constant of the sensor. Spectral response measurements detailed the quantum efficiency (QE) of the detector through the UV and visible regions. Analysis of the ROI readout mode was also performed. These measurements suggest that the Vanilla APS (active pixel sensor) will be suitable for a wide range of applications, including particle physics and medical imaging.
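
    The photon transfer analysis mentioned above follows a standard recipe, a sketch of which is given below; it assumes pairs of offset-corrected flat-field frames at several light levels, with pair differencing used to remove fixed-pattern noise.

        import numpy as np

        def ptc_gain_and_read_noise(frame_pairs):
            """Estimate conversion gain (e-/DN) and read noise from a photon transfer curve.

            frame_pairs : list of (frame_a, frame_b) flat-field pairs, one pair per
                          light level, already offset-corrected.
            """
            means, variances = [], []
            for a, b in frame_pairs:
                a, b = a.astype(float), b.astype(float)
                means.append((a.mean() + b.mean()) / 2.0)
                variances.append(np.var(a - b) / 2.0)    # temporal variance from the pair

            slope, intercept = np.polyfit(means, variances, 1)
            gain_e_per_dn = 1.0 / slope                  # shot-noise slope gives the gain
            read_noise_e = np.sqrt(max(intercept, 0.0)) * gain_e_per_dn
            return gain_e_per_dn, read_noise_e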

  2. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe the experimental image acquisition system we used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the frequency band for HDTV.

  3. Design of a High-resolution Optoelectronic Retinal Prosthesis

    NASA Astrophysics Data System (ADS)

    Palanker, Daniel

    2005-03-01

    It has been demonstrated that electrical stimulation of the retina can produce visual percepts in blind patients suffering from macular degeneration and retinitis pigmentosa. So far retinal implants have had just a few electrodes, whereas at least several thousand pixels would be required for any functional restoration of sight. We will discuss physical limitations on the number of stimulating electrodes and on delivery of information and power to the retinal implant. Using a model of extracellular stimulation we derive the threshold values of current and voltage as a function of electrode size and distance to the target cell. Electrolysis, tissue heating, and cross-talk between neighboring electrodes depend critically on the separation between electrodes and cells, thus strongly limiting the pixel size and spacing. The minimal pixel density required for 20/80 visual acuity (2,500 pixels/mm², pixel size 20 μm) cannot be achieved unless the target neurons are within 7 μm of the electrodes. At a separation of 50 μm, the density drops to 44 pixels/mm², and at 100 μm it is further reduced to 10 pixels/mm². We will present designs of subretinal implants that provide close proximity of electrodes to cells using migration of retinal cells to target areas. Two basic implant geometries will be described: perforated membranes and protruding electrode arrays. In addition, we will discuss delivery of information to the implant that allows for natural eye scanning of the scene, rather than scanning with a head-mounted camera. It operates similarly to "virtual reality" imaging devices, where an image from a video camera is projected by a goggle-mounted collimated infrared LED-LCD display onto the retina, activating an array of powered photodiodes in the retinal implant. Optical delivery of visual information to the implant allows for flexible control of the image processing algorithms and stimulation parameters. In summary, we will describe solutions to some of the major problems facing the realization of a functional retinal implant: high pixel density, proximity of electrodes to target cells, natural eye scanning capability, and real-time image processing adjustable to retinal architecture.

  4. Random On-Board Pixel Sampling (ROPS) X-Ray Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui; Iaroshenko, O.; Li, S.

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.

  5. A compressed sensing X-ray camera with a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.

    2018-01-01

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.

  6. An efficient approach for pixel decomposition to increase the spatial resolution of land surface temperature images from MODIS thermal infrared band data.

    PubMed

    Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe

    2014-12-25

    Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m at nadir) has limited their applicability to many studies that require high spatial resolution, in contrast to the MODIS VNIR band data with a pixel scale of 250-500 m. In this paper we develop an efficient approach for pixel decomposition to increase the spatial resolution of the MODIS LST image using the VNIR band data as assistance. The unique feature of this approach is that the thermal radiance of the parent pixels in the MODIS LST image remains unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination; the approach can therefore be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of the LST change with the normalized difference vegetation index (NDVI) and the normalized difference building index (NDBI), conversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS, was on board the same platform (Terra) as MODIS for Earth observation, an experiment was carried out to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference for comparison with the decomposed LST image. The result showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image, with a root mean square error (RMSE) of 2.7 K for the entire image. Comparison with the evaluation DisTrad (E-DisTrad) and re-sampling methods for pixel decomposition also indicates that our DSPD has the lowest RMSE in all cases, including urban regions, water bodies, and natural terrain. The obvious increase in spatial resolution markedly improves the capability of the coarse MODIS LST images to highlight the details of LST variation. It can therefore be concluded that, in spite of its complicated procedures, the proposed DSPD approach provides an alternative way to improve the spatial resolution of MODIS LST images and hence expands their applicability to the real world.
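
    The radiance-preserving constraint at the heart of the decomposition can be illustrated with the Planck relations; the band wavelength (11 μm), the radiation constants in the per-micron form, and the simple mean-preserving rescaling below are illustrative assumptions, whereas the actual DSPD derives its sub-pixel weights from the NDVI/NDBI-based classification.

        import numpy as np

        C1 = 1.19104e8    # W um^4 m^-2 sr^-1, first radiation constant (wavelength form)
        C2 = 1.43877e4    # um K, second radiation constant

        def planck_radiance(T, wav_um=11.0):
            return C1 / (wav_um**5 * (np.exp(C2 / (wav_um * T)) - 1.0))

        def inverse_planck(L, wav_um=11.0):
            return C2 / (wav_um * np.log(C1 / (wav_um**5 * L) + 1.0))

        def decompose_parent_pixel(parent_lst, initial_subpixel_lst, wav_um=11.0):
            """Rescale initial sub-pixel temperatures (e.g., from NDVI/NDBI relations)
            so that their mean radiance equals the parent pixel's radiance."""
            parent_radiance = planck_radiance(parent_lst, wav_um)
            sub_radiance = planck_radiance(np.asarray(initial_subpixel_lst, float), wav_um)
            sub_radiance *= parent_radiance / sub_radiance.mean()   # preserve parent radiance
            return inverse_planck(sub_radiance, wav_um)

        # e.g. a 1000 m parent pixel at 300 K split into four sub-pixels:
        print(decompose_parent_pixel(300.0, [295.0, 298.0, 302.0, 305.0]))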

  7. Assessing Mesoscale Volcanic Aviation Hazards using ASTER

    NASA Astrophysics Data System (ADS)

    Pieri, D.; Gubbels, T.; Hufford, G.; Olsson, P.; Realmuto, V.

    2006-12-01

    The Advanced Spaceborne Thermal Emission and Reflection (ASTER) imager onboard the NASA Terra Spacecraft is a joint project of the Japanese Ministry for Economy, Trade, and Industry (METI) and NASA. ASTER has acquired over one million multi-spectral 60 km by 60 km images of the earth over the last six years. It consists of three sub-instruments: (a) a four channel VNIR (0.52-0.86 μm) imager with a spatial resolution of 15 m/pixel, including three nadir-viewing bands (1N, 2N, 3N) and one repeated rear-viewing band (3B) for stereo-photogrammetric terrain reconstruction (8-12 m vertical resolution); (b) a SWIR (1.6-2.43 μm) imager with six bands at 30 m/pixel; and (c) a TIR (8.125-11.65 μm) instrument with five bands at 90 m/pixel. Returned data are processed in Japan at the Earth Remote Sensing Data Analysis Center (ERSDAC) and at the Land Processes Distributed Active Archive Center (LP DAAC), located at the USGS Center for Earth Resource Observation and Science (EROS) in Sioux Falls, South Dakota. Within the ASTER Project, the JPL Volcano Data Acquisition and Analyses System (VDAAS) houses over 60,000 ASTER volcano images of 1542 volcanoes worldwide and will be accessible for downloads by the general public and on-line image analyses by researchers in early 2007. VDAAS multi-spectral thermal infrared (TIR) de-correlation stretch products are optimized for volcanic ash detection and have a spatial resolution of 90 m/pixel. Digital elevation models (DEM) stereo-photogrammetrically derived from ASTER Band 3B/3N data are also available within VDAAS at 15 and 30 m/pixel horizontal resolution. Thus, ASTER visible, IR, and DEM data at 15-100 m/pixel resolution within VDAAS can be combined to provide useful boundary conditions on local volcanic eruption plume location, composition, and altitude, as well as on topography of underlying terrain. During and after eruptions, low-altitude winds and ash transport can be affected by topography, and other orographic thermal and water vapor transport effects from the micro (<1 km) to mesoscale (1-100 km). Such phenomena are thus well-observed by ASTER and pose transient and severe hazards to aircraft operating in and out of airports near volcanoes (e.g., Anchorage, AK, USA; Catania, Italy; Kagoshima City, Japan). ASTER image data and derived products provide boundary conditions for 3D mesoscale atmospheric transport and chemistry models (e.g., RAMS) for retrospective and prospective studies of volcanic aerosol transport at low altitudes in takeoff and landing corridors near active volcanoes. Putative ASTER direct downlinks in the future could provide real-time mitigation of such hazards. Some examples of mesoscale analyses for threatened airspace near US and non-US airports will be shown. This work was, in part, carried out at the Jet Propulsion Laboratory of the California Institute of Technology under contract to the NASA Earth Science Research Program and as part of ASTER Science Team activities.

  8. Proof of principle study of the use of a CMOS active pixel sensor for proton radiography.

    PubMed

    Seco, Joao; Depauw, Nicolas

    2011-02-01

    Proof of principle study of the use of a CMOS active pixel sensor (APS) in producing proton radiographic images using the proton beam at the Massachusetts General Hospital (MGH). A CMOS APS, previously tested for use in x-ray radiation therapy applications, was used for proton beam radiographic imaging at the MGH. Two different setups were used as a proof of principle that CMOS can be used as a proton imaging device: (i) a pen with two metal screws to assess spatial resolution of the CMOS and (ii) a phantom with lung tissue, bone tissue, and water to assess tissue contrast of the CMOS. The sensor was then traversed by a double scattered monoenergetic proton beam at 117 MeV, and the energy deposition inside the detector was recorded to assess its energy response. Conventional x-ray images with similar setup at voltages of 70 kVp and proton images using commercial Gafchromic EBT 2 and Kodak X-Omat V films were also taken for comparison purposes. Images were successfully acquired and compared to x-ray kVp and proton EBT2/X-Omat film images. The spatial resolution of the CMOS detector image is subjectively comparable to the EBT2 and Kodak X-Omat V film images obtained at the same object-detector distance. X-rays have apparent higher spatial resolution than the CMOS. However, further studies with different commercial films using proton beam irradiation demonstrate that the distance of the detector to the object is important to the amount of proton scatter contributing to the proton image. Proton images obtained with films at different distances from the source indicate that proton scatter significantly affects the CMOS image quality. Proton radiographic images were successfully acquired at MGH using a CMOS active pixel sensor detector. The CMOS demonstrated spatial resolution subjectively comparable to films at the same object-detector distance. Further work will be done in order to establish the spatial and energy resolution of the CMOS detector for protons. The development and use of CMOS in proton radiography could allow in vivo proton range checks, patient setup QA, and real-time tumor tracking.

  9. Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications

    NASA Astrophysics Data System (ADS)

    Ermeydan, Esra Şengün; Çankaya, Ilyas

    2018-01-01

    Compressed Sensing (CS) with a cyclic-S Hadamard matrix is proposed for single pixel imaging applications in this study. In a single pixel imaging scheme, N = r·c samples should be taken for an r×c pixel image, where · denotes multiplication. CS is a popular technique claiming that sparse signals can be reconstructed from fewer samples than the Nyquist rate requires. Therefore, to solve the slow data acquisition problem in Terahertz (THz) single pixel imaging, CS is a good candidate. However, changing the mask for each measurement is challenging since there are no commercial Spatial Light Modulators (SLMs) for the THz band yet; therefore, circular masks are suggested so that shifting by one or two columns is enough to change the mask for each measurement. The CS masks are designed using cyclic-S matrices based on the Hadamard transform for 9 × 7 and 15 × 17 pixel images within the framework of this study. The 50% compressed images are reconstructed using the total variation based TVAL3 algorithm. Matlab simulations demonstrate that cyclic-S matrices can be used for single pixel imaging based on CS. The circular masks have the advantage of reducing the mechanical SLM to a single sliding strip, whereas CS helps to reduce acquisition time and energy since it allows the image to be reconstructed from fewer samples.
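
    The measurement model with shifted masks can be sketched as follows; a random binary row stands in for a cyclic-S row here, and reconstruction from the measurements (e.g., with a total-variation solver such as TVAL3) is not shown.

        import numpy as np

        def circulant_measurements(image, base_mask_row, n_meas):
            """Simulate single-pixel measurements with circularly shifted binary masks."""
            x = image.astype(float).ravel()
            n = x.size
            assert base_mask_row.size == n
            y = np.empty(n_meas)
            masks = np.empty((n_meas, n))
            for i in range(n_meas):
                masks[i] = np.roll(base_mask_row, i)   # shift the sliding strip by one step
                y[i] = masks[i] @ x                    # bucket-detector reading
            return y, masks

        # e.g. a 9x7 scene measured with roughly half of the full number of masks:
        scene = np.random.rand(9, 7)
        base = (np.random.rand(63) > 0.5).astype(float)   # stand-in for a cyclic-S row
        y, A = circulant_measurements(scene, base, n_meas=32)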

  10. Roughness effects on thermal-infrared emissivities estimated from remotely sensed images

    NASA Astrophysics Data System (ADS)

    Mushkin, Amit; Danilina, Iryna; Gillespie, Alan R.; Balick, Lee K.; McCabe, Matthew F.

    2007-10-01

    Multispectral thermal-infrared images from the Mauna Loa caldera in Hawaii, USA are examined to study the effects of surface roughness on remotely retrieved emissivities. We find up to a 3% decrease in spectral contrast in ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) 90-m/pixel emissivities due to sub-pixel surface roughness variations on the caldera floor. A similar decrease in spectral contrast of emissivities extracted from MASTER (MODIS/ASTER Airborne Simulator) ~12.5-m/pixel data can be described as a function of increasing surface roughness, which was measured remotely from ASTER 15-m/pixel stereo images. The ratio between ASTER stereo images provides a measure of sub-pixel surface-roughness variations across the scene. These independent roughness estimates complement a radiosity model designed to quantify the unresolved effects of multiple scattering and differential solar heating due to sub-pixel roughness elements and to compensate for both sub-pixel temperature dispersion and cavity radiation on TIR measurements.

  11. A CMOS image sensor with programmable pixel-level analog processing.

    PubMed

    Massari, Nicola; Gottardi, Massimo; Gonzo, Lorenzo; Stoppa, David; Simoni, Andrea

    2005-11-01

    A prototype of a 34 x 34 pixel image sensor, implementing real-time analog image processing, is presented. Edge detection, motion detection, image amplification, and dynamic-range boosting are executed at pixel level by means of a highly interconnected pixel architecture based on the absolute value of the difference between neighboring pixels. The analog operations are performed over a kernel of 3 x 3 pixels. The square pixel, consisting of 30 transistors, has a pitch of 35 microm with a fill factor of 20%. The chip was fabricated in a 0.35 microm CMOS technology, and its power consumption is 6 mW with a 3.3 V power supply. The device was fully characterized and achieves a dynamic range of 50 dB with a light power density of 150 nW/mm2 and a frame rate of 30 frames/s. The measured fixed-pattern noise corresponds to 1.1% of the saturation level. The sensor's dynamic range can be extended up to 96 dB using the double-sampling technique.

  12. A hyperspectral image optimizing method based on sub-pixel MTF analysis

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Li, Kai; Wang, Jinqiang; Zhu, Yajie

    2015-04-01

    Hyperspectral imaging is used to collect tens or hundreds of images continuously divided across the electromagnetic spectrum so that details at different wavelengths can be represented. A popular hyperspectral imaging method uses a tunable optical band-pass filter placed in front of the focal plane to acquire images at different wavelengths. In order to alleviate the influence of chromatic aberration in some segments of a hyperspectral series, a hyperspectral optimizing method that uses the sub-pixel MTF to evaluate image blur is provided in this paper. The method extracts the edge feature in the target window and uses the line spread function (LSF) to calculate a reliable position for the edge; the evaluation grid in each line is then interpolated from the real pixel values based on their relative position to the optimal edge, and the sub-pixel MTF is used to analyze the image in the frequency domain, which increases the MTF calculation dimension. The sub-pixel MTF evaluation is reliable, since no image rotation or pixel-value estimation is needed and no artificial information is introduced. Theoretical analysis shows that the method proposed in this paper is reliable and efficient when evaluating common images with edges of small tilt angle in real scenes. It also provides a direction for subsequent hyperspectral image blur evaluation and real-time focal-plane adjustment in related imaging systems.

  13. In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function

    NASA Astrophysics Data System (ADS)

    Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir

    2018-03-01

    We present results of in-flight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full array image of a point-like source by extracting a pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the raytracing model with a half-power diameter of 1.2 arcmin is consistent with an image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the raytracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1 σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.

  14. First evidence of phase-contrast imaging with laboratory sources and active pixel sensors

    NASA Astrophysics Data System (ADS)

    Olivo, A.; Arvanitis, C. D.; Bohndiek, S. E.; Clark, A. T.; Prydderch, M.; Turchetta, R.; Speller, R. D.

    2007-11-01

    The aim of the present work is to achieve a first step towards combining the advantages of an innovative X-ray imaging technique—phase-contrast imaging (XPCi)—with those of a new class of sensors, i.e. CMOS-based active pixel sensors (APSs). The advantages of XPCi are well known and include increased image quality and detection of details invisible to conventional techniques, with potential application fields encompassing the medical, biological, industrial and security areas. Vanilla, one of the APSs developed by the MI-3 collaboration (see http://mi3.shef.ac.uk), was thoroughly characterised and an appropriate scintillator was selected to provide X-ray sensitivity. During this process, a set of phase-contrast images of different biological samples was acquired by means of the well-established free-space propagation XPCi technique. The obtained results are very encouraging and are in excellent agreement with the predictions of a simulation recently developed by some of the authors, thus further supporting its reliability. This paper presents these preliminary results in detail and briefly discusses both the background to this work and its future developments.

  15. Design, optimization and evaluation of a "smart" pixel sensor array for low-dose digital radiography

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Liu, Xinghui; Ou, Hai; Chen, Jun

    2016-04-01

    Amorphous silicon (a-Si:H) thin-film transistors (TFTs) have been widely used to build flat-panel X-ray detectors for digital radiography (DR). As the demand for low-dose X-ray imaging grows, a detector with a high signal-to-noise-ratio (SNR) pixel architecture emerges. The "smart" pixel uses a dual-gate photosensitive TFT for sensing, storage, and switching. It differs from a conventional passive pixel sensor (PPS) and active pixel sensor (APS) in that all three functions are combined into one device instead of three separate units in a pixel. Thus, it is expected to have a high fill factor and high spatial resolution. In addition, it utilizes the amplification effect of the dual-gate photosensitive TFT to form a one-transistor APS that leads to a potentially high SNR. This paper addresses the design, optimization and evaluation of the smart pixel sensor and array for low-dose DR. We design and optimize the smart pixel from the scintillator to the TFT level and validate it through optical and electrical simulation and experiments on a 4x4 sensor array.

  16. Image registration method for medical image sequences

    DOEpatents

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.
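    As a rough illustration of the ideas above (not the patented method), the sketch below segments a low-contrast region by intensity thresholding and estimates its frame-to-frame translation with a plain FFT cross-correlation of the segmented template; the threshold, frame contents, and function name are assumptions for the example.

```python
# Illustrative sketch only: segment a bright region in the first frame by
# thresholding, then estimate its translation in the next frame by FFT
# cross-correlation of the segmented template against the full next frame.
import numpy as np

def register_segmented(prev_frame, next_frame, threshold):
    template = np.where(prev_frame > threshold, prev_frame, 0.0)  # segmented ROI
    corr = np.fft.ifft2(np.fft.fft2(next_frame) *
                        np.conj(np.fft.fft2(template))).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    n = np.array(corr.shape)
    return (np.array([iy, ix]) + n // 2) % n - n // 2   # signed integer shift

rng = np.random.default_rng(6)
frame0 = rng.random((128, 128)) * 0.2
frame0[40:80, 40:80] += 1.0                             # low-contrast region
frame1 = np.roll(frame0, (3, -2), axis=(0, 1))          # next frame, shifted
print(register_segmented(frame0, frame1, threshold=0.8))  # prints the (3, -2) shift
```

    Sub-pixel refinement, as mentioned in the record, could follow by interpolating the correlation surface around its peak.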

  17. Contact CMOS imaging of gaseous oxygen sensor array

    PubMed Central

    Daivasagaya, Daisy S.; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C.; Chodavarapu, Vamsy P.; Bright, Frank V.

    2014-01-01

    We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol–gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors. PMID:24493909

  18. Contact CMOS imaging of gaseous oxygen sensor array.

    PubMed

    Daivasagaya, Daisy S; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C; Chodavarapu, Vamsy P; Bright, Frank V

    2011-10-01

    We describe a compact luminescent gaseous oxygen (O 2 ) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O 2 -sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp) 3 ] 2+ ) encapsulated within sol-gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors.

  19. SOI CMOS Imager with Suppression of Cross-Talk

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Zheng, Xingyu; Cunningham, Thomas J.; Seshadri, Suresh; Sun, Chao

    2009-01-01

    A monolithic silicon-on-insulator (SOI) complementary metal oxide/semiconductor (CMOS) image-detecting integrated circuit of the active-pixel-sensor type, now undergoing development, is designed to operate at visible and near-infrared wavelengths and to offer a combination of high quantum efficiency and low diffusion and capacitive cross-talk among pixels. The imager is designed to be especially suitable for astronomical and astrophysical applications. The imager design could also readily be adapted to general scientific, biological, medical, and spectroscopic applications. One of the conditions needed to ensure both high quantum efficiency and low diffusion cross-talk is a relatively high reverse bias potential (between about 20 and about 50 V) on the photodiode in each pixel. Heretofore, a major obstacle to realization of this condition in a monolithic integrated circuit has been posed by the fact that the required high reverse bias on the photodiode is incompatible with metal oxide/semiconductor field-effect transistors (MOSFETs) in the CMOS pixel readout circuitry. In the imager now being developed, the SOI structure is utilized to overcome this obstacle: The handle wafer is retained and the photodiode is formed in the handle wafer. The MOSFETs are formed on the SOI layer, which is separated from the handle wafer by a buried oxide layer. The electrical isolation provided by the buried oxide layer makes it possible to bias the MOSFETs at CMOS-compatible potentials (between 0 and 3 V), while biasing the photodiode at the required higher potential, and enables independent optimization of the sensory and readout portions of the imager.

  20. Large area CMOS active pixel sensor x-ray imager for digital breast tomosynthesis: Analysis, modeling, and characterization.

    PubMed

    Zhao, Chumin; Kanicki, Jerzy; Konstantinidis, Anastasios C; Patel, Tushita

    2015-11-01

    Large area x-ray imagers based on complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been proposed for various medical imaging applications including digital breast tomosynthesis (DBT). The low electronic noise (50-300 e-) of CMOS APS x-ray imagers provides a possible route to shrink the pixel pitch to smaller than 75 μm for microcalcification detection and a possible reduction of the DBT mean glandular dose (MGD). In this study, the imaging performance of a large area (29×23 cm2) CMOS APS x-ray imager [Dexela 2923 MAM (PerkinElmer, London)] with a pixel pitch of 75 μm was characterized and modeled. The authors developed a cascaded system model for CMOS APS x-ray imagers using both broadband x-ray radiation and monochromatic synchrotron radiation. The experimental data, including the modulation transfer function, noise power spectrum, and detective quantum efficiency (DQE), were theoretically described using the proposed cascaded system model with satisfactory consistency with experimental results. Both the high full well and low full well (LFW) modes of the Dexela 2923 MAM CMOS APS x-ray imager were characterized and modeled. The cascaded system analysis results were further used to extract the contrast-to-noise ratio (CNR) for microcalcifications with sizes of 165-400 μm at various MGDs. The impact of electronic noise on CNR was also evaluated. The LFW mode shows better DQE at low air kerma (Ka<10 μGy) and should be used for DBT. At an air kerma typical of current DBT applications (Ka∼10 μGy, broadband radiation at 28 kVp), DQE values of more than 0.7 and ∼0.3 were achieved using the LFW mode at a spatial frequency of 0.5 line pairs per millimeter (lp/mm) and at the Nyquist frequency of ∼6.7 lp/mm, respectively. It is shown that microcalcifications of 165-400 μm in size can be resolved using an MGD range of 0.3-1 mGy, respectively. In comparison to a General Electric GEN2 prototype DBT system (at an MGD of 2.5 mGy), an increased CNR (by ∼10) for microcalcifications was observed using the Dexela 2923 MAM CMOS APS x-ray imager at a lower MGD (2.0 mGy). The Dexela 2923 MAM CMOS APS x-ray imager is capable of achieving high imaging performance at spatial frequencies up to 6.7 lp/mm. Microcalcifications of 165 μm are distinguishable based on the reported data and their modeling results due to the small pixel pitch of 75 μm. At the same time, a potential dose reduction is expected using the studied CMOS APS x-ray imager.
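    For context, a minimal sketch of the standard frequency-dependent DQE estimate commonly combined with measured MTF and NPS data for detectors of this kind (it is not the authors' cascaded system model); all input curves below are illustrative placeholders.

```python
# Sketch of the commonly used DQE relation DQE(f) = MTF(f)^2 / (q * NNPS(f)),
# where q is the incident photon fluence (photons/mm^2) and NNPS is the noise
# power spectrum normalised by the squared mean signal. Inputs are synthetic.
import numpy as np

def dqe(mtf, nps, mean_signal, fluence_per_mm2):
    nnps = nps / mean_signal**2              # normalised noise power spectrum
    return mtf**2 / (fluence_per_mm2 * nnps)

f = np.linspace(0.0, 6.7, 100)               # lp/mm, up to the Nyquist frequency
mtf = np.exp(-0.35 * f)                      # illustrative MTF curve
nps = np.full_like(f, 2.0e-6)                # illustrative (flat) NPS
print(dqe(mtf, nps, mean_signal=1.0e3, fluence_per_mm2=4.0e5)[:5])
```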

  1. High-dynamic-range coherent diffractive imaging: ptychography using the mixed-mode pixel array detector.

    PubMed

    Giewekemeyer, Klaus; Philipp, Hugh T; Wilke, Robin N; Aquila, Andrew; Osterhoff, Markus; Tate, Mark W; Shanks, Katherine S; Zozulya, Alexey V; Salditt, Tim; Gruner, Sol M; Mancuso, Adrian P

    2014-09-01

    Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single photon detection, detecting fluxes exceeding 1 × 10^8 8-keV photons pixel^-1 s^-1, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10^10 photons µm^-2 s^-1 within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with a very modest attenuation, while "still" images of the empty beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and CDI methodology for reconstruction of non-sensitive detector regions, partially also extending the active detector area, are described.

  2. Digital mammography: observer performance study of the effects of pixel size on radiologists' characterization of malignant and benign microcalcifications

    NASA Astrophysics Data System (ADS)

    Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Adler, Dorit D.; Blane, Caroline E.; Joynt, Lynn K.; Paramagul, Chintana; Roubidoux, Marilyn A.; Wilson, Todd E.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

    1999-05-01

    A receiver operating characteristic (ROC) experiment was conducted to evaluate the effects of pixel size on the characterization of mammographic microcalcifications. Digital mammograms were obtained by digitizing screen-film mammograms with a laser film scanner. One hundred twelve two-view mammograms with biopsy-proven microcalcifications were digitized at a pixel size of 35 micrometers X 35 micrometers. A region of interest (ROI) containing the microcalcifications was extracted from each image. ROI images with pixel sizes of 70 micrometers, 105 micrometers, and 140 micrometers were derived from the ROI of 35 micrometer pixel size by averaging 2 X 2, 3 X 3, and 4 X 4 neighboring pixels, respectively. The ROI images were printed on film with a laser imager. Seven MQSA-approved radiologists participated as observers. The likelihood of malignancy of the microcalcifications was rated on a 10-point confidence rating scale and analyzed with ROC methodology. The classification accuracy was quantified by the area, Az, under the ROC curve. The statistical significance of the differences in the Az values for different pixel sizes was estimated with the Dorfman-Berbaum-Metz (DBM) method for multi-reader, multi-case ROC data. It was found that five of the seven radiologists demonstrated a higher classification accuracy with the 70 micrometer or 105 micrometer images. The average Az also showed a higher classification accuracy in the range of 70 to 105 micrometer pixel size. However, the differences in Az between different pixel sizes did not achieve statistical significance. The low specificity of image features of microcalcifications and the large interobserver and intraobserver variabilities may have contributed to the relatively weak dependence of classification accuracy on pixel size.
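    A short sketch of the block-averaging step described above, which derives the coarser pixel sizes (70, 105, 140 micrometers) from the 35 micrometer digitisation; the synthetic ROI array is an assumption for the example, not study data.

```python
# Sketch: derive coarser pixel sizes by averaging n x n blocks of the original
# 35 um ROI (2x2 -> 70 um, 3x3 -> 105 um, 4x4 -> 140 um).
import numpy as np

def bin_pixels(roi, n):
    """Average non-overlapping n x n blocks (edges that don't divide are trimmed)."""
    h, w = (roi.shape[0] // n) * n, (roi.shape[1] // n) * n
    return roi[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))

roi_35um = np.random.default_rng(1).random((512, 512))
roi_70um, roi_105um, roi_140um = (bin_pixels(roi_35um, n) for n in (2, 3, 4))
print(roi_70um.shape, roi_105um.shape, roi_140um.shape)
```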

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rasca, Anthony P.; Chen, James; Pevtsov, Alexei A., E-mail: anthony.rasca.ctr@nrl.navy.mil

    Recent observations of the photosphere using high spatial and temporal resolution show small dynamic features at or below the current resolving limits. A new pixel dynamics method has been developed to analyze spectral profiles and quantify changes in line displacement, width, asymmetry, and peakedness of photospheric absorption lines. The algorithm evaluates variations of line profile properties in each pixel and determines the statistics of such fluctuations averaged over all pixels in a given region. The method has been used to derive statistical characteristics of pixel fluctuations in observed quiet-Sun regions, an active region with no eruption, and an active region with an ongoing eruption. Using Stokes I images from the Vector Spectromagnetograph (VSM) of the Synoptic Optical Long-term Investigations of the Sun (SOLIS) telescope on 2012 March 13, variations in line width and peakedness of Fe i 6301.5 Å are shown to have a distinct spatial and temporal relationship with an M7.9 X-ray flare in NOAA 11429. This relationship is observed as stationary and contiguous patches of pixels adjacent to a sunspot exhibiting intense flattening in the line profile and line-center displacement as the X-ray flare approaches peak intensity, which is not present in area scans of the non-eruptive active region. The analysis of pixel dynamics allows one to extract quantitative information on differences in plasma dynamics on sub-pixel scales in these photospheric regions. The analysis can be extended to include the Stokes parameters and study signatures of vector components of magnetic fields and coupled plasma properties.
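    An illustrative sketch of per-pixel line-profile statistics of the kind the pixel dynamics method tracks (line-centre displacement, width, asymmetry, peakedness); the synthetic Gaussian absorption line and the weighting scheme are assumptions, not the authors' algorithm or VSM/SOLIS data.

```python
# Sketch: weighted moments of an absorption-line profile, computed per pixel.
# 'profiles' is a synthetic (npix, nlambda) array of spectral profiles.
import numpy as np

def profile_moments(wave, profiles):
    depth = profiles.max(axis=1, keepdims=True) - profiles    # absorption depth
    w = depth / depth.sum(axis=1, keepdims=True)              # per-pixel weights
    centre = (w * wave).sum(axis=1)                           # line displacement
    var = (w * (wave - centre[:, None])**2).sum(axis=1)
    width = np.sqrt(var)
    skew = (w * (wave - centre[:, None])**3).sum(axis=1) / width**3   # asymmetry
    kurt = (w * (wave - centre[:, None])**4).sum(axis=1) / var**2     # peakedness
    return centre, width, skew, kurt

wave = np.linspace(6301.0, 6302.0, 64)                        # Angstrom
line = 1.0 - 0.6 * np.exp(-((wave - 6301.5) / 0.08)**2)       # one Gaussian line
profiles = np.tile(line, (100, 1))                            # 100 pixels
print([m[:2] for m in profile_moments(wave, profiles)])
```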

  4. Low-power priority Address-Encoder and Reset-Decoder data-driven readout for Monolithic Active Pixel Sensors for tracker system

    NASA Astrophysics Data System (ADS)

    Yang, P.; Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Gao, C.; Hillemanns, H.; Junique, A.; Kofarago, M.; Keil, M.; Kugathasan, T.; Kim, D.; Kim, J.; Lattuca, A.; Marin Tobon, C. A.; Marras, D.; Mager, M.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Siddhanta, S.; Usai, G.; van Hoorne, J. W.; Yi, J.

    2015-06-01

    Active Pixel Sensors used in High Energy Particle Physics require low power consumption to reduce the detector material budget, low integration time to reduce the possibility of pile-up, and fast readout to improve the detector data capability. To satisfy these requirements, a novel Address-Encoder and Reset-Decoder (AERD) asynchronous circuit for a fast readout of a pixel matrix has been developed. The AERD data-driven readout architecture performs the address encoding and reset decoding based on an arbitration tree, and allows us to read out only the hit pixels. Compared to the traditional readout structure of the rolling-shutter scheme in Monolithic Active Pixel Sensors (MAPS), AERD can achieve a low readout time and a low power consumption, especially for low hit occupancies. The readout is controlled at the chip periphery with a signal synchronous with the clock, which allows a good separation of digital and analogue signals in the matrix and a reduction of the power consumption. The AERD circuit has been implemented in the TowerJazz 180 nm CMOS Imaging Sensor (CIS) process with full complementary CMOS logic in the pixel. It works at 10 MHz with a matrix height of 15 mm. The energy consumed to read out one pixel is around 72 pJ. A scheme to boost the readout speed to 40 MHz is also discussed. The sensor chip equipped with AERD has been produced and characterised. Test results including electrical beam measurement are presented.

  5. A semiconductor radiation imaging pixel detector for space radiation dosimetry.

    PubMed

    Kroupa, Martin; Bahadori, Amir; Campbell-Ricketts, Thomas; Empl, Anton; Hoang, Son Minh; Idarraga-Munoz, John; Rios, Ryan; Semones, Edward; Stoffle, Nicholas; Tlustos, Lukas; Turecek, Daniel; Pinsky, Lawrence

    2015-07-01

    Progress in the development of high-performance semiconductor radiation imaging pixel detectors based on technologies developed for use in high-energy physics applications has enabled the development of a completely new generation of compact low-power active dosimeters and area monitors for use in space radiation environments. Such detectors can provide real-time information concerning radiation exposure, along with detailed analysis of the individual particles incident on the active medium. Recent results from the deployment of detectors based on the Timepix from the CERN-based Medipix2 Collaboration on the International Space Station (ISS) are reviewed, along with a glimpse of developments to come. Preliminary results from Orion MPCV Exploration Flight Test 1 are also presented. Copyright © 2015 The Committee on Space Research (COSPAR). All rights reserved.

  6. CMOS image sensors: State-of-the-art

    NASA Astrophysics Data System (ADS)

    Theuwissen, Albert J. P.

    2008-09-01

    This paper gives an overview of the state-of-the-art of CMOS image sensors. The main focus is put on the shrinkage of the pixels: what is the effect on the performance characteristics of the imagers and on the various physical parameters of the camera? How is the CMOS pixel architecture optimized to cope with the negative performance effects of the ever-shrinking pixel size? On the other hand, the smaller dimensions in CMOS technology allow further integration at the column level and even at the pixel level. This will make CMOS imagers even smarter than they already are.

  7. Methods in quantitative image analysis.

    PubMed

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber, converting the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits. Eight bits are summarised in one byte. Therefore, grey values can range from 0 to 255 (2^8 = 256 levels). The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value histogram of an existing image (input image) into a new grey value histogram (output image) are most quickly handled by a look-up table (LUT). The histogram of an image can be influenced by the gain, offset and gamma of the camera. Gain defines the voltage range, offset defines the reference voltage and gamma the slope of the regression line between the light intensity and the voltage of the camera. A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix. The distance between the pixels (original pixel and its neighbouring pixel) can influence the various parameters calculated from the co-occurrence matrix. The main goals of image enhancement are elimination of surface roughness in an image (smoothing), correction of defects (e.g. noise), extraction of edges, identification of points, strengthening texture elements and improving contrast. In enhancement, two types of operations can be distinguished: pixel-based (point operations) and neighbourhood-based (matrix operations). The most important pixel-based operations are linear stretching of grey values, application of pre-stored LUTs and histogram equalisation. The neighbourhood-based operations work with so-called filters. These are organising elements with an original or initial point in their centre. Filters can be used to accentuate or to suppress specific structures within the image. Filters can work either in the spatial or in the frequency domain. The method used for analysing alterations of grey value intensities in the frequency domain is the Hartley transform. Filter operations in the spatial domain can be based on averaging or ranking the grey values occurring in the organising element. The most important filters, which are usually applied, are the Gaussian filter and the Laplace filter (both averaging filters), and the median filter, the top-hat filter and the range operator (all ranking filters). Segmentation of objects is traditionally based on threshold grey values. (Abstract truncated.)
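    As a concrete illustration of two of the pixel-based operations mentioned above (shading correction and linear grey-value stretching), a minimal NumPy sketch follows; the array names and the synthetic shading gradient are assumptions for the example, not material from the cited text.

```python
# Sketch: divide by a background (white) image to flatten non-uniform
# illumination, then stretch the grey-value histogram to the full range.
import numpy as np

def shading_correct(image, background, eps=1e-6):
    """Divide by the background image, pixel per pixel."""
    return image / np.maximum(background, eps)

def stretch_grey(image, out_max=255):
    """Spread the histogram over the full available grey-value range."""
    lo, hi = image.min(), image.max()
    return ((image - lo) / (hi - lo) * out_max).astype(np.uint8)

rng = np.random.default_rng(2)
img = rng.integers(40, 200, size=(128, 128)).astype(float)
bg = np.linspace(0.6, 1.0, 128)[None, :] * np.ones((128, 1))   # shading gradient
corrected = stretch_grey(shading_correct(img * bg, bg))
print(corrected.min(), corrected.max())
```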

  8. Single-pixel non-imaging object recognition by means of Fourier spectrum acquisition

    NASA Astrophysics Data System (ADS)

    Chen, Huichao; Shi, Jianhong; Liu, Xialin; Niu, Zhouzhou; Zeng, Guihua

    2018-04-01

    Single-pixel imaging has emerged over recent years as a novel imaging technique with significant application prospects. In this paper, we propose and experimentally demonstrate a scheme that can achieve single-pixel non-imaging object recognition by acquiring the Fourier spectrum. In the experiment, four-step phase-shifting sinusoidal illumination is used to irradiate the object image, the light intensity is measured with a single-pixel detection unit, and the Fourier coefficients of the object image are obtained by a differential measurement. The Fourier coefficients are then cast into binary numbers to obtain the hash value. We propose a new perceptual hashing algorithm, which is combined with a discrete Fourier transform to calculate the hash value. The hash distance is obtained by calculating the difference between the hash values of the object image and the contrast images. By setting an appropriate threshold, the object image can be quickly and accurately recognized. The proposed scheme realizes single-pixel non-imaging perceptual-hashing object recognition using fewer measurements. Our result might open a new path for realizing object recognition without imaging.
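    The sketch below illustrates the two ingredients described above: four-step phase-shifting acquisition of Fourier coefficients with a single-pixel (bucket) detector, and a simple hash built by binarising the coefficients against their median. The pattern model, hash construction, and test objects are illustrative assumptions, not the authors' implementation.

```python
# Sketch: four-step phase-shifting Fourier-spectrum acquisition with a
# single-pixel detector, followed by a median-binarised perceptual hash.
import numpy as np

def fourier_coefficient(obj, fx, fy):
    """One Fourier coefficient from four phase-shifted sinusoid patterns."""
    h, w = obj.shape
    y, x = np.mgrid[0:h, 0:w]
    d = []
    for phi in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
        pattern = 0.5 + 0.5 * np.cos(2 * np.pi * (fx * x / w + fy * y / h) + phi)
        d.append((obj * pattern).sum())          # single-pixel (bucket) value
    return (d[0] - d[2]) + 1j * (d[1] - d[3])    # differential measurement

def perceptual_hash(obj, n=8):
    coeffs = np.array([[fourier_coefficient(obj, fx, fy) for fx in range(n)]
                       for fy in range(n)])
    mags = np.abs(coeffs)
    return (mags > np.median(mags)).ravel()      # binary hash of length n*n

obj_a = np.zeros((64, 64)); obj_a[16:48, 16:48] = 1.0
obj_b = np.roll(obj_a, 4, axis=1)
hamming = np.count_nonzero(perceptual_hash(obj_a) != perceptual_hash(obj_b))
print("hash distance:", hamming)
```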

  9. Carotid Stenosis And Ulcer Detectability As A Function Of Pixel Size

    NASA Astrophysics Data System (ADS)

    Mintz, Leslie J.; Enzmann, Dieter R.; Keyes, Gary S.; Mainiero, Louis M.; Brody, William R.

    1981-11-01

    Digital radiography, in conjunction with digital subtraction methods, can provide high quality images of the vascular system [1-4]. Spatial resolution is one important limiting factor of this imaging technique. Since the spatial resolution of a digital image is a function of pixel size, it is important to determine the pixel size threshold necessary to provide information comparable to that of conventional angiograms. This study was designed to establish the pixel size necessary to accurately identify stenotic and ulcerative lesions of the carotid artery.

  10. Time multiplexing for increased FOV and resolution in virtual reality

    NASA Astrophysics Data System (ADS)

    Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj

    2017-06-01

    We introduce a time multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement of the pixel density or the field of view (FOV), or both. A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the virtual image. Each partial real image uses the full set of physical pixels available in the display. The partial real images are successively formed and combine spatially and temporally to form a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending on their time slot. Shutters or other schemes are used to prevent a partial real image from being imaged through the wrong optical channel or at the wrong time slot. This time multiplexing strategy requires real images to be shown at high frame rates (>120 fps). Available display and shutter technologies are discussed. Several optical designs for achieving this time multiplexing scheme in a compact format are shown. This time multiplexing scheme allows increasing the resolution/FOV of the virtual image not only by increasing the physical pixel density but also by decreasing the pixel switching time, a feature that may be simpler to achieve in certain circumstances.

  11. Geologic map of Io

    USGS Publications Warehouse

    Williams, David A.; Keszthelyi, Laszlo P.; Crown, David A.; Yff, Jessica A.; Jaeger, Windy L.; Schenk, Paul M.; Geissler, Paul E.; Becker, Tammy L.

    2011-01-01

    Io, discovered by Galileo Galilei on January 7–13, 1610, is the innermost of the four Galilean satellites of the planet Jupiter (Galilei, 1610). It is the most volcanically active object in the Solar System, as recognized by observations from six National Aeronautics and Space Administration (NASA) spacecraft: Voyager 1 (March 1979), Voyager 2 (July 1979), Hubble Space Telescope (1990–present), Galileo (1996–2001), Cassini (December 2000), and New Horizons (February 2007). The lack of impact craters on Io in any spacecraft images at any resolution attests to the high resurfacing rate (1 cm/yr) and the dominant role of active volcanism in shaping its surface. High-temperature hot spots detected by the Galileo Solid-State Imager (SSI), Near-Infrared Mapping Spectrometer (NIMS), and Photopolarimeter-Radiometer (PPR) usually correlate with darkest materials on the surface, suggesting active volcanism. The Voyager flybys obtained complete coverage of Io's subjovian hemisphere at 500 m/pixel to 2 km/pixel, and most of the rest of the satellite at 5–20 km/pixel. Repeated Galileo flybys obtained complementary coverage of Io's antijovian hemisphere at 5 m/pixel to 1.4 km/pixel. Thus, the Voyager and Galileo data sets were merged to enable the characterization of the whole surface of the satellite at a consistent resolution. The United States Geological Survey (USGS) produced a set of four global mosaics of Io in visible wavelengths at a spatial resolution of 1 km/pixel, released in February 2006, which we have used as base maps for this new global geologic map. Much has been learned about Io's volcanism, tectonics, degradation, and interior since the Voyager flybys, primarily during and following the Galileo Mission at Jupiter (December 1995–September 2003), and the results have been summarized in books published after the end of the Galileo Mission. Our mapping incorporates this new understanding to assist in map unit definition and to provide a global synthesis of Io's geology.

  12. Design and image-quality performance of high resolution CMOS-based X-ray imaging detectors for digital mammography

    NASA Astrophysics Data System (ADS)

    Cha, B. K.; Kim, J. Y.; Kim, Y. J.; Yun, S.; Cho, G.; Kim, H. K.; Seo, C.-W.; Jeon, S.; Huh, Y.

    2012-04-01

    In digital X-ray imaging systems, X-ray imaging detectors based on scintillating screens with electronic devices such as charge-coupled devices (CCDs), thin-film transistors (TFT), complementary metal oxide semiconductor (CMOS) flat panel imagers have been introduced for general radiography, dental, mammography and non-destructive testing (NDT) applications. Recently, a large-area CMOS active-pixel sensor (APS) in combination with scintillation films has been widely used in a variety of digital X-ray imaging applications. We employed a scintillator-based CMOS APS image sensor for high-resolution mammography. In this work, both powder-type Gd2O2S:Tb and a columnar structured CsI:Tl scintillation screens with various thicknesses were fabricated and used as materials to convert X-ray into visible light. These scintillating screens were directly coupled to a CMOS flat panel imager with a 25 × 50 mm2 active area and a 48 μm pixel pitch for high spatial resolution acquisition. We used a W/Al mammographic X-ray source with a 30 kVp energy condition. The imaging characterization of the X-ray detector was measured and analyzed in terms of linearity in incident X-ray dose, modulation transfer function (MTF), noise-power spectrum (NPS) and detective quantum efficiency (DQE).

  13. Multipurpose active pixel sensor (APS)-based microtracker

    NASA Astrophysics Data System (ADS)

    Eisenman, Allan R.; Liebe, Carl C.; Zhu, David Q.

    1998-12-01

    A new, photon-sensitive, imaging array, the active pixel sensor (APS) has emerged as a competitor to the CCD imager for use in star and target trackers. The Jet Propulsion Laboratory (JPL) has undertaken a program to develop a new generation, highly integrated, APS-based, multipurpose tracker: the Programmable Intelligent Microtracker (PIM). The supporting hardware used in the PIM has been carefully selected to enhance the inherent advantages of the APS. Adequate computation power is included to perform star identification, star tracking, attitude determination, space docking, feature tracking, descent imaging for landing control, and target tracking capabilities. Its first version uses a JPL developed 256 X 256-pixel APS and an advanced 32-bit RISC microcontroller. By taking advantage of the unique features of the APS/microcontroller combination, the microtracker will achieve about an order-of-magnitude reduction in mass and power consumption compared to present state-of-the-art star trackers. It will also add the advantage of programmability to enable it to perform a variety of star, other celestial body, and target tracking tasks. The PIM is already proving the usefulness of its design concept for space applications. It is demonstrating the effectiveness of taking such an integrated approach in building a new generation of high performance, general purpose, tracking instruments to be applied to a large variety of future space missions.

  14. High-resolution pulse-counting array detectors for imaging and spectroscopy at ultraviolet wavelengths

    NASA Technical Reports Server (NTRS)

    Timothy, J. Gethyn; Bybee, Richard L.

    1986-01-01

    The performance characteristics of multianode microchannel array (MAMA) detector systems which have formats as large as 256 x 1024 pixels and which have application to imaging and spectroscopy at UV wavelengths are evaluated. Sealed and open-structure MAMA detector tubes with opaque CsI photocathodes can determine the arrival time of the detected photon to an accuracy of 100 ns or better. Very large format MAMA detectors with CsI and Cs2Te photocathodes and active areas of 52 x 52 mm (2048 x 2048 pixels) will be used as the UV solar blind detectors for the NASA STIS.

  15. Distance-based over-segmentation for single-frame RGB-D images

    NASA Astrophysics Data System (ADS)

    Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao

    2017-11-01

    Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm segments an image into regions of perceptually similar pixels, but performs poorly when based only on a color image in indoor environments. Fortunately, RGB-D images can improve the performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of super-pixels on the image. DBOS fills the holes in depth images to fully utilize the depth information, and applies SLIC-like frameworks for fast running. Additionally, depth features such as the plane projection distance are extracted to compute the distance which is the core of SLIC-like frameworks. Experiments on RGB-D images of the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining speeds comparable to them.

  16. Heterogeneity of Particle Deposition by Pixel Analysis of 2D Gamma Scintigraphy Images

    PubMed Central

    Xie, Miao; Zeman, Kirby; Hurd, Harry; Donaldson, Scott

    2015-01-01

    Background: Heterogeneity of inhaled particle deposition in airways disease may be a sensitive indicator of physiologic changes in the lungs. Using planar gamma scintigraphy, we developed new methods to locate and quantify regions of high (hot) and low (cold) particle deposition in the lungs. Methods: Initial deposition and 24 hour retention images were obtained from healthy (n=31) adult subjects and patients with mild cystic fibrosis lung disease (CF) (n=14) following inhalation of radiolabeled particles (Tc99m-sulfur colloid, 5.4 μm MMAD) under controlled breathing conditions. The initial deposition image of the right lung was normalized to (i.e., same median pixel value), and then divided by, a transmission (Tc99m) image in the same individual to obtain a pixel-by-pixel ratio image. Hot spots were defined where pixel values in the deposition image were greater than 2X those of the transmission, and cold spots as pixels where the deposition image was less than 0.5X of the transmission. The number ratio (NR) of the hot and cold pixels to total lung pixels, and the sum ratio (SR) of total counts in hot pixels to total lung counts were compared between healthy and CF subjects. Other traditional measures of regional particle deposition, nC/P and skew of the pixel count histogram distribution, were also compared. Results: The NR of cold spots was greater in mild CF, 0.221±0.047 (CF) vs. 0.186±0.038 (healthy) (p<0.005), and was significantly correlated with FEV1 %pred in the patients (R=−0.70). nC/P (central to peripheral count ratio), skew of the count histogram, and hot NR or SR were not different between the healthy and mild CF patients. Conclusions: These methods may provide more sensitive measures of airway function and localization of deposition that might be useful for assessing treatment efficacy in these patients. PMID:25393109
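    The hot/cold-spot metrics described above reduce to a few array operations; the sketch below shows one plausible reading of them (normalise the deposition image to the transmission image, form the pixel-by-pixel ratio, then count hot and cold pixels). The synthetic arrays and the normalisation detail are assumptions, not the study's data or code.

```python
# Sketch: pixel-by-pixel deposition/transmission ratio with hot (>2x) and
# cold (<0.5x) thresholds, reported as number ratio (NR) and sum ratio (SR).
import numpy as np

def deposition_metrics(deposition, transmission, lung_mask):
    dep = deposition * (np.median(transmission[lung_mask]) /
                        np.median(deposition[lung_mask]))     # match medians
    ratio = np.where(transmission > 0, dep / transmission, 0.0)
    hot = lung_mask & (ratio > 2.0)
    cold = lung_mask & (ratio < 0.5)
    nr_hot = hot.sum() / lung_mask.sum()
    nr_cold = cold.sum() / lung_mask.sum()
    sr_hot = dep[hot].sum() / dep[lung_mask].sum()
    return nr_hot, nr_cold, sr_hot

rng = np.random.default_rng(3)
trans = rng.uniform(50, 100, (128, 64))
dep = trans * rng.lognormal(mean=0.0, sigma=0.5, size=trans.shape)
mask = np.ones_like(trans, dtype=bool)
print(deposition_metrics(dep, trans, mask))
```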

  17. Physical characterization and performance comparison of active- and passive-pixel CMOS detectors for mammography.

    PubMed

    Elbakri, I A; McIntosh, B J; Rickey, D W

    2009-03-21

    We investigated the physical characteristics of two complementary metal oxide semiconductor (CMOS) mammography detectors. The detectors featured 14-bit image acquisition, 50 microm detector element (del) size and an active area of 5 cm x 5 cm. One detector was a passive-pixel sensor (PPS) with signal amplification performed by an array of amplifiers connected to dels via data lines. The other detector was an active-pixel sensor (APS) with signal amplification performed at each del. Passive-pixel designs have higher read noise due to data line capacitance, and the APS represents an attempt to improve the noise performance of this technology. We evaluated the detectors' resolution by measuring the modulation transfer function (MTF) using a tilted edge. We measured the noise power spectra (NPS) and detective quantum efficiencies (DQE) using mammographic beam conditions specified by the IEC 62220-1-2 standard. Our measurements showed the APS to have much higher gain, slightly higher MTF, and higher NPS. The MTF of both sensors approached 10% near the Nyquist limit. DQE values near dc frequency were in the range of 55-67%, with the APS sensor DQE lower than the PPS DQE for all frequencies. Our results show that lower read noise specifications in this case do not translate into gains in the imaging performance of the sensor. We postulate that the lower fill factor of the APS is a possible cause for this result.

  18. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak Poyneer (2003); Löfdahl (2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors Sjödahl (1994). These errors are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola Poyneer (2003); quadratic polynomial Löfdahl (2010); threshold center of gravity Bailey (2003); Gaussian Nobach & Honkanen (2005) and Pyramid Bailey (2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold centre of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome the above problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed using a sub-pixel level grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The generation of these sub-pixel grid based region-of-interest images is achieved with bi-cubic interpolation. The correlation matching with sub-pixel grid technique was previously reported in electronic speckle photography Sjödahl (1994). This technique is applied here to solar wavefront sensing. A large dynamic range and a better accuracy in the measurements are achieved with the combination of the original pixel grid based correlation matching over a large field of view and a sub-pixel interpolated image grid based correlation matching within a small field of view. The results revealed that the proposed method outperforms all the different peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction), when 5 times improved image sampling was used. This is achieved at the expense of twice the computational cost. With the 5 times improved image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing an appropriate increase in image sampling as a trade-off between the computational speed limitation and the targeted sub-pixel image-shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source; a laser guide star; a Galactic Center extended scene). The results are planned to be submitted to the Optics Express journal.
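    The two-step scheme described above (integer-pixel cross-correlation, then a refined search on an upsampled grid within a few pixels of the coarse peak) can be sketched as below. Cubic-spline upsampling via scipy stands in for the bi-cubic interpolation, and the synthetic noise "granulation" images and parameter values are assumptions for the example.

```python
# Sketch: coarse FFT cross-correlation on the 1-pixel grid, then refinement on
# an upsampled grid within +/-2 pixels of the coarse peak.
import numpy as np
from scipy.ndimage import shift as nd_shift, zoom

def xcorr_peak(ref, img):
    """Integer-pixel displacement of img relative to ref (circular correlation)."""
    c = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    n = np.array(c.shape)
    return (np.array([iy, ix]) + n // 2) % n - n // 2   # wrap to signed shifts

def two_step_shift(ref, img, upsample=5):
    coarse = xcorr_peak(ref, img)                        # step 1: 1-pixel grid
    ref_u, img_u = zoom(ref, upsample, order=3), zoom(img, upsample, order=3)
    best, best_val = coarse * upsample, -np.inf          # step 2: refine +/-2 px
    for dy in range(-2 * upsample, 2 * upsample + 1):
        for dx in range(-2 * upsample, 2 * upsample + 1):
            cand = coarse * upsample + np.array([dy, dx])
            val = np.sum(ref_u * nd_shift(img_u, -cand, order=1, mode="wrap"))
            if val > best_val:
                best, best_val = cand, val
    return best / upsample                               # displacement in pixels

rng = np.random.default_rng(4)
ref = rng.random((32, 32))
img = nd_shift(ref, (1.4, -0.6), order=3, mode="wrap")   # known sub-pixel shift
print(two_step_shift(ref, img))
```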

  19. Photon small-field measurements with a CMOS active pixel sensor.

    PubMed

    Spang, F Jiménez; Rosenberg, I; Hedin, E; Royle, G

    2015-06-07

    In this work the dosimetric performance of CMOS active pixel sensors for the measurement of small photon beams is presented. The detector used consisted of an array of 520  × 520 pixels on a 25 µm pitch. Dosimetric parameters measured with this sensor were compared with data collected with an ionization chamber, a film detector and GEANT4 Monte Carlo simulations. The sensor performance for beam profiles measurements was evaluated for field sizes of 0.5  × 0.5 cm(2). The high spatial resolution achieved with this sensor allowed the accurate measurement of profiles, beam penumbrae and field size under lateral electronic disequilibrium. Field size and penumbrae agreed within 5.4% and 2.2% respectively with film measurements. Agreements with ionization chambers better than 1.0% were obtained when measuring tissue-phantom ratios. Output factor measurements were in good agreement with ionization chamber and Monte Carlo simulation. The data obtained from this imaging sensor can be easily analyzed to extract dosimetric information. The results presented in this work are promising for the development and implementation of CMOS active pixel sensors for dosimetry applications.

  20. Identification of coffee bean varieties using hyperspectral imaging: influence of preprocessing methods and pixel-wise spectra analysis.

    PubMed

    Zhang, Chu; Liu, Fei; He, Yong

    2018-02-01

    Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of the pixel-wise spectra was conducted by different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD). Meanwhile, spatial preprocessing of the gray-scale image at each wavelength was conducted by a median filter (MF). Support vector machine (SVM) models using full sample average spectra, pixel-wise spectra, and the optimal wavelengths selected by second-derivative spectra all achieved classification accuracies over 80%. First, the SVM models built on pixel-wise spectra were used to predict the sample average spectra, and these models obtained over 80% classification accuracy. Second, the SVM models built on sample average spectra were used to predict pixel-wise spectra, but achieved less than 50% classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set, and resulted in good prediction results for both pixel-wise spectra and sample average spectra. The overall results indicated the effectiveness of spectral preprocessing and the adoption of pixel-wise spectra. The results provide an alternative way of data processing for applications of hyperspectral imaging in the food industry.
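    An illustrative sketch of the pixel-wise strategy described above: an SVM trained on individual pixel spectra is applied both to pixel spectra and to sample-average spectra. The synthetic spectra, class structure, and SVM parameters are assumptions for the example, not the study's data or settings.

```python
# Sketch: SVM classification of pixel-wise spectra, then prediction of
# sample-average spectra with the same model.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_pixels, n_bands, n_varieties = 600, 200, 3
labels = rng.integers(0, n_varieties, n_pixels)
# Each variety gets a slightly different mean spectrum plus pixel-level noise.
means = rng.random((n_varieties, n_bands))
pixel_spectra = means[labels] + 0.05 * rng.standard_normal((n_pixels, n_bands))

clf = SVC(kernel="rbf", C=10.0).fit(pixel_spectra, labels)

# Sample-average spectra: average the pixel spectra belonging to each variety.
sample_avg = np.stack([pixel_spectra[labels == k].mean(axis=0)
                       for k in range(n_varieties)])
print(clf.score(pixel_spectra, labels), clf.predict(sample_avg))
```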

  1. Layer by layer: complex analysis with OCT technology

    NASA Astrophysics Data System (ADS)

    Florin, Christian

    2017-03-01

    Standard visualisation systems capture two-dimensional images and need more or less fast image-processing systems. Now, the ASP array (active sensor pixel array) opens a new world in imaging. On the ASP array, each pixel is provided with its own lens and its own signal pre-processing. The OCT technology works in "real time" with the highest accuracy. In ASP-array systems, functionalities of data acquisition and signal processing are integrated even at the pixel level. For the extraction of interferometric features, the time-of-flight (TOF) principle is used. The ASP architecture offers demodulation of the optical signal within a pixel at up to 100 kHz and the reconstruction of the amplitude and its phase. The dynamics of image capture with the ASP array is higher by two orders of magnitude in comparison with conventional image sensors. The OCT technology allows topographic imaging in real time with an extremely high geometric spatial resolution. The optical path length is generated by an axial movement of the reference mirror. The amplitude-modulated optical signal has a carrier frequency proportional to the scan rate and contains the depth information. Each maximum of the signal envelope corresponds to a reflection (or scattering) within the sample. The ASP array simultaneously produces 300 × 300 axial interferograms which touch each other on all sides. In contrast to standard OCT systems, the signal demodulation for detecting the envelope is not limited by the frame rate of the ASP array. When an optical signal arrives at a pixel of the ASP array, an electrical signal is generated. The background is suppressed to avoid saturation of pixels at high light intensity. The sampled signal is continuously multiplied by a reference signal of the same frequency in two paths whose phases are shifted by 90 degrees from each other, and the products are integrated (averaged). The outputs of the two paths are routed to the PC, where the envelope amplitude and the phase are used to compute a three-dimensional tomographic image. Specially designed ASP arrays with a very high frame rate are available for 3D measurement. If ASP arrays are coupled with the OCT method, layer thicknesses can be determined without contact, sealing seams can be inspected, or geometrical shapes can be measured. From a stack of hundreds of single OCT images, images of interest can be selected and fed to the computer for analysis.
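    The per-pixel demodulation described above is essentially an I/Q (lock-in) scheme; the sketch below shows the principle on a synthetic signal, not on ASP hardware, and the carrier frequency, sampling rate and filter choice are illustrative assumptions.

```python
# Sketch: multiply the sampled signal by two references 90 degrees apart,
# low-pass by averaging, and recover the envelope amplitude and phase.
import numpy as np

def iq_demodulate(signal, t, f_carrier, window):
    i_path = signal * np.cos(2 * np.pi * f_carrier * t)
    q_path = signal * np.sin(2 * np.pi * f_carrier * t)
    kernel = np.ones(window) / window                   # simple moving average
    i_lp = np.convolve(i_path, kernel, mode="same")
    q_lp = np.convolve(q_path, kernel, mode="same")
    amplitude = 2.0 * np.hypot(i_lp, q_lp)              # signal envelope
    phase = np.arctan2(-q_lp, i_lp)                     # carrier phase
    return amplitude, phase

fs, f_c = 1.0e6, 50.0e3                                 # sample and carrier rate
t = np.arange(0, 2e-3, 1 / fs)
envelope = np.exp(-((t - 1e-3) / 2e-4) ** 2)            # one reflection
signal = envelope * np.cos(2 * np.pi * f_c * t + 0.7)
amp, ph = iq_demodulate(signal, t, f_c, window=101)
print(amp.max(), ph[np.argmax(amp)])
```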

  2. Pixels, Imagers and Related Fabrication Methods

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor)

    2014-01-01

    Pixels, imagers and related fabrication methods are described. The described methods result in cross-talk reduction in imagers and related devices by generating depletion regions. The devices can also be used with electronic circuits for imaging applications.

  3. Pixels, Imagers and Related Fabrication Methods

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor)

    2016-01-01

    Pixels, imagers and related fabrication methods are described. The described methods result in cross-talk reduction in imagers and related devices by generating depletion regions. The devices can also be used with electronic circuits for imaging applications.

  4. 14C autoradiography with an energy-sensitive silicon pixel detector.

    PubMed

    Esposito, M; Mettivier, G; Russo, P

    2011-04-07

    The first performance tests are presented of a carbon-14 (14C) beta-particle digital autoradiography system with an energy-sensitive hybrid silicon pixel detector based on the Timepix readout circuit. Timepix was developed by the Medipix2 Collaboration and it is similar to the photon-counting Medipix2 circuit, except for an added time-based synchronization logic which allows derivation of energy information from the time-over-threshold signal. This feature permits direct energy measurements in each pixel of the detector array. Timepix is bump-bonded to a 300 µm thick silicon detector with 256 × 256 pixels of 55 µm pitch. Since an energetic beta-particle could release its kinetic energy in more than one detector pixel as it slows down in the semiconductor detector, an off-line image analysis procedure was adopted in which the single-particle cluster of hit pixels is recognized; its total energy is calculated and the position of interaction on the detector surface is attributed to the centre of the charge cluster. Measurements reported are detector sensitivity, (4.11 ± 0.03) × 10^-3 cps mm^-2 kBq^-1 g, background level, (3.59 ± 0.01) × 10^-5 cps mm^-2, and minimum detectable activity, 0.0077 Bq. The spatial resolution is 76.9 µm full-width at half-maximum. These figures are compared with several digital imaging detectors for 14C beta-particle digital autoradiography.
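    A minimal sketch of the off-line cluster analysis described above: contiguous hit pixels are grouped, their per-pixel energies summed, and the event placed at the energy-weighted centroid. The synthetic frame below is an assumption for the example, not Timepix data.

```python
# Sketch: group contiguous hit pixels, sum their energies, and compute the
# energy-weighted centroid of each cluster.
import numpy as np
from scipy import ndimage

def analyse_clusters(frame):
    """frame: 2-D array of per-pixel energies (keV); zero means no hit."""
    labels, n = ndimage.label(frame > 0)
    events = []
    for lab in range(1, n + 1):
        mask = labels == lab
        energy = frame[mask].sum()
        cy, cx = ndimage.center_of_mass(frame, labels, lab)  # energy-weighted
        events.append((energy, cy, cx))
    return events

frame = np.zeros((256, 256))
frame[100:103, 50:52] = [[5, 2], [20, 8], [3, 1]]   # one multi-pixel cluster
frame[200, 200] = 12.0                              # one single-pixel hit
print(analyse_clusters(frame))
```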

  5. Wavefront sensing in space: flight demonstration II of the PICTURE sounding rocket payload

    NASA Astrophysics Data System (ADS)

    Douglas, Ewan S.; Mendillo, Christopher B.; Cook, Timothy A.; Cahoy, Kerri L.; Chakrabarti, Supriya

    2018-01-01

    A NASA sounding rocket for high-contrast imaging with a visible nulling coronagraph, the Planet Imaging Concept Testbed Using a Rocket Experiment (PICTURE) payload, has made two suborbital attempts to observe the warm dust disk inferred around Epsilon Eridani. The first flight in 2011 demonstrated a 5 mas fine pointing system in space. The reduced flight data from the second launch, on November 25, 2015, presented herein, demonstrate active sensing of wavefront phase in space. Despite several anomalies in flight, post-facto reduction of the phase-stepping interferometer data provides insight into the wavefront sensing precision and the system stability for a portion of the pupil. These measurements show the actuation of a 32 × 32-actuator microelectromechanical system deformable mirror. The wavefront sensor reached a median precision of 1.4 nm per pixel, with 95% of samples between 0.8 and 12.0 nm per pixel. The median system stability, including telescope and coronagraph wavefront errors other than tip, tilt, and piston, was 3.6 nm per pixel, with 95% of samples between 1.2 and 23.7 nm per pixel.
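
    Phase-stepping interferometry recovers the wavefront phase at each pixel from interferograms recorded at known reference phase steps. The sketch below shows the generic four-step reduction as an illustration of the principle; it is not the PICTURE flight pipeline, and the 0/90/180/270-degree stepping is an assumption.

    ```python
    import numpy as np

    def four_step_phase(i0, i1, i2, i3):
        """Per-pixel phase from four interferograms taken at reference phase
        steps of 0, 90, 180 and 270 degrees (standard four-step algorithm)."""
        phase = np.arctan2(i3 - i1, i0 - i2)                         # wrapped phase, radians
        modulation = 0.5 * np.sqrt((i3 - i1) ** 2 + (i0 - i2) ** 2)  # fringe amplitude
        return phase, modulation

    # i0..i3 would be (H, W) arrays of detector counts; the wavefront error in nm
    # for a given wavelength is phase / (2 * np.pi) * wavelength_nm.
    ```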

  6. Log polar image sensor in CMOS technology

    NASA Astrophysics Data System (ADS)

    Scheffer, Danny; Dierickx, Bart; Pardo, Fernando; Vlummens, Jan; Meynants, Guy; Hermans, Lou

    1996-08-01

    We report on the design, design issues, fabrication and performance of a log-polar CMOS image sensor. The sensor was developed for use in a videophone system for deaf and hearing-impaired people, who are not able to communicate through a 'normal' telephone. The system allows 15 detailed images per second to be transmitted over existing telephone lines. This frame rate is sufficient for conversations by means of sign language or lip reading. The pixel array of the sensor consists of 76 concentric circles with (up to) 128 pixels per circle, 8013 pixels in total. The interior pixels have a pitch of 14 micrometers, increasing to 250 micrometers at the border. The 8013-pixel image is mapped (log-polar transformation) into an X-Y addressable 76 by 128 array.
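
    A log-polar mapping assigns each Cartesian pixel position to a (ring, sector) cell whose ring index grows with the logarithm of the distance from the centre, which is how the 8013 pixels become addressable as a 76 by 128 array. The ring and sector counts below come from the abstract; the minimum and maximum radii are placeholder assumptions.

    ```python
    import numpy as np

    def log_polar_index(x, y, r_min=7.0, r_max=4000.0, n_rings=76, n_sectors=128):
        """Map a Cartesian position (relative to the array centre, in microns)
        to a (ring, sector) cell of a log-polar array."""
        r = max(np.hypot(x, y), r_min)
        theta = np.arctan2(y, x) % (2 * np.pi)
        # ring index grows logarithmically with radius
        ring = np.log(r / r_min) / np.log(r_max / r_min) * n_rings
        sector = theta / (2 * np.pi) * n_sectors
        return int(np.clip(ring, 0, n_rings - 1)), int(sector) % n_sectors

    print(log_polar_index(250.0, 100.0))
    ```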

  7. Mapping Capacitive Coupling Among Pixels in a Sensor Array

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh; Cole, David M.; Smith, Roger M.

    2010-01-01

    An improved method of mapping the capacitive contribution to cross-talk among pixels in an imaging array of sensors (typically, an imaging photodetector array) has been devised for use in calibrating and/or characterizing such an array. The method involves a sequence of resets of subarrays of pixels to specified voltages and measurement of the voltage responses of neighboring non-reset pixels.
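
    In outline, the calibration resets a sparse pattern of pixels by a known voltage step and measures the voltage induced in the neighbouring non-reset pixels; the ratio of induced to applied voltage gives the coupling coefficient. A simplified sketch of that reduction step is shown below; the array names and the restriction to 4-connected neighbours are assumptions.

    ```python
    import numpy as np

    def coupling_coefficient(response, reset_mask, delta_v):
        """Estimate the mean nearest-neighbour capacitive coupling from the
        voltage response of non-reset pixels adjacent to pixels reset by delta_v."""
        coeffs = []
        rows, cols = np.where(reset_mask)
        for r, c in zip(rows, cols):
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-connected neighbours
                rr, cc = r + dr, c + dc
                if (0 <= rr < response.shape[0] and 0 <= cc < response.shape[1]
                        and not reset_mask[rr, cc]):
                    coeffs.append(response[rr, cc] / delta_v)    # fractional coupling
        return float(np.mean(coeffs))
    ```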

  8. A compressed sensing X-ray camera with a multilayer architecture

    DOE PAGES

    Wang, Zhehui; Laroshenko, O.; Li, S.; ...

    2018-01-25

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.

  10. The Europa Imaging System (EIS), a Camera Suite to investigate Europa's Geology, Ice Shell, and Potential for Current Activity

    NASA Astrophysics Data System (ADS)

    Turtle, E. P.; McEwen, A. S.; Osterman, S. N.; Boldt, J. D.; Strohbehn, K.; EIS Science Team

    2016-10-01

    The EIS narrow-angle camera (NAC) and wide-angle camera (WAC) use identical rad-hard, rapid-readout 4k × 2k CMOS detectors for imaging during close (≤25 km), fast (~4.5 km/s) Europa flybys. The NAC achieves 0.5 m/pixel over a 2-km swath from 50 km, and the WAC provides context pushbroom stereo imaging.

  11. A 100 Mfps image sensor for biological applications

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo

    2018-02-01

    Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications in advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like petals of a flower. One has five collection gates and one drain gate at the center, and can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped, with the first element connected to the last element, which also makes in-pixel signal accumulation possible. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulse neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that an image sensor designed on the basis of this theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.

  12. An Efficient Approach for Pixel Decomposition to Increase the Spatial Resolution of Land Surface Temperature Images from MODIS Thermal Infrared Band Data

    PubMed Central

    Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe

    2015-01-01

    Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m at nadir) has limited their applicability to many studies requiring high spatial resolution, in comparison with the MODIS VNIR band data at pixel scales of 250–500 m. In this paper we develop an efficient pixel decomposition approach to increase the spatial resolution of MODIS LST images using the VNIR band data as assistance. The unique feature of this approach is that the thermal radiance of parent pixels in the MODIS LST image is kept unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition, initial temperature estimation and final temperature determination, so the approach can be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final decomposed LST image, including classification of the surface patterns, establishment of the LST change with the normalized difference vegetation index (NDVI) and the building index (NDBI), conversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS, was on board the same platform (Terra) as MODIS for Earth observation, an experiment was carried out in the study to validate the accuracy and efficiency of our pixel decomposition approach. The ASTER LST image was used as the reference against which the decomposed LST image was compared. The results showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image, with a root mean square error (RMSE) of 2.7 K for the entire image. Comparison with the evaluation DisTrad (E-DisTrad) and re-sampling methods for pixel decomposition also indicates that DSPD has the lowest RMSE in all cases, including urban regions, water bodies, and natural terrain. The obvious increase in spatial resolution substantially improves the capability of the coarse MODIS LST images to highlight the details of LST variation. It can therefore be concluded that, in spite of its complicated procedures, the proposed DSPD approach provides an alternative way to improve the spatial resolution of MODIS LST images and hence expand their applicability to the real world. PMID:25609048
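
    The defining constraint of DSPD is radiance conservation: initial sub-pixel temperature estimates are converted to thermal radiance with the Planck equation, rescaled so that their mean matches the radiance of the parent MODIS pixel, and converted back to temperature. The sketch below shows only that conservation step; the band wavelength and the example sub-pixel estimates are placeholder assumptions rather than values from the paper.

    ```python
    import numpy as np

    C1 = 1.19104e8   # W um^4 m^-2 sr^-1, first radiation constant for spectral radiance
    C2 = 1.43877e4   # um K, second radiation constant

    def planck_radiance(t, wav_um=11.0):
        """Blackbody spectral radiance at temperature t (K) for wavelength wav_um."""
        return C1 / (wav_um ** 5 * (np.exp(C2 / (wav_um * t)) - 1.0))

    def inverse_planck(rad, wav_um=11.0):
        """Brightness temperature (K) corresponding to spectral radiance rad."""
        return C2 / (wav_um * np.log(C1 / (wav_um ** 5 * rad) + 1.0))

    def decompose_pixel(parent_lst, subpixel_estimates, wav_um=11.0):
        """Rescale initial sub-pixel LST estimates so that the mean sub-pixel
        radiance equals the radiance of the parent pixel (radiance conservation)."""
        parent_rad = planck_radiance(parent_lst, wav_um)
        sub_rad = planck_radiance(np.asarray(subpixel_estimates, dtype=float), wav_um)
        sub_rad *= parent_rad / sub_rad.mean()        # enforce conservation
        return inverse_planck(sub_rad, wav_um)

    # e.g. one 1000 m pixel at 300 K split into four sub-pixels
    print(decompose_pixel(300.0, [297.0, 299.0, 301.0, 304.0]))
    ```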

  13. CMOS image sensor with lateral electric field modulation pixels for fluorescence lifetime imaging with sub-nanosecond time response

    NASA Astrophysics Data System (ADS)

    Li, Zhuo; Seo, Min-Woong; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2016-04-01

    This paper presents the design and implementation of a time-resolved CMOS image sensor with a high-speed lateral electric field modulation (LEFM) gating structure for time domain fluorescence lifetime measurement. Time-windowed signal charge can be transferred from a pinned photodiode (PPD) to a pinned storage diode (PSD) by turning on a pair of transfer gates, which are situated beside the channel. Unwanted signal charge can be drained from the PPD to the drain by turning on another pair of gates. The pixel array contains 512 (V) × 310 (H) pixels with 5.6 × 5.6 µm2 pixel size. The imager chip was fabricated using 0.11 µm CMOS image sensor process technology. The prototype sensor has a time response of 150 ps at 374 nm. The fill factor of the pixels is 5.6%. The usefulness of the prototype sensor is demonstrated for fluorescence lifetime imaging through simulation and measurement results.
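
    For the time-gated fluorescence lifetime imaging this sensor targets, a common way to turn two gated charge packets into a lifetime is the rapid lifetime determination formula for a single-exponential decay. The sketch below shows that generic estimator; it is not taken from the paper, and the gate separation and counts are invented for illustration.

    ```python
    import numpy as np

    def two_gate_lifetime(n1, n2, gate_separation_ns):
        """Rapid lifetime determination: for a single-exponential decay sampled by
        two equal-width gates whose starts are separated by gate_separation_ns,
        tau = dt / ln(N1 / N2)."""
        n1 = np.asarray(n1, dtype=float)
        n2 = np.asarray(n2, dtype=float)
        return gate_separation_ns / np.log(n1 / n2)

    # hypothetical per-pixel gate counts: ~2 ns lifetime
    print(two_gate_lifetime(1000.0, 368.0, 2.0))
    ```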

  14. Structural colour printing from a reusable generic nanosubstrate masked for the target image

    NASA Astrophysics Data System (ADS)

    Rezaei, M.; Jiang, H.; Kaminska, B.

    2016-02-01

    Structural colour printing has advantages over traditional pigment-based colour printing. However, the high fabrication cost has hindered its applications in printing large-area images because each image requires patterning structural pixels in nanoscale resolution. In this work, we present a novel strategy to print structural colour images from a pixelated substrate which is called a nanosubstrate. The nanosubstrate is fabricated only once using nanofabrication tools and can be reused for printing a large quantity of structural colour images. It contains closely packed arrays of nanostructures from which red, green, blue and infrared structural pixels can be imprinted. To print a target colour image, the nanosubstrate is first covered with a mask layer to block all the structural pixels. The mask layer is subsequently patterned according to the target colour image to make apertures of controllable sizes on top of the wanted primary colour pixels. The masked nanosubstrate is then used as a stamp to imprint the colour image onto a separate substrate surface using nanoimprint lithography. Different visual colours are achieved by properly mixing the red, green and blue primary colours into appropriate ratios controlled by the aperture sizes on the patterned mask layer. Such a strategy significantly reduces the cost and complexity of printing a structural colour image from lengthy nanoscale patterning into high throughput micro-patterning and makes it possible to apply structural colour printing in personalized security features and data storage. In this paper, nanocone array grating pixels were used as the structural pixels and the nanosubstrate contains structures to imprint the nanocone arrays. Laser lithography was implemented to pattern the mask layer with submicron resolution. The optical properties of the nanocone array gratings are studied in detail. Multiple printed structural colour images with embedded covert information are demonstrated.

  15. How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2011-01-01

    In the early 1980s the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems, then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the equivalent number of pixels and bits per pixel, and on how many pixels would be required to create an equivalent-quality image from a digital camera. It was found that 35-mm frames, for ISO 100 color negative film, contained equivalent pixels of 12 microns for a total of 18 million pixels per frame (6 million pixels per layer) with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased the value to 8 bits per pixel. Higher ISO speed films had larger equivalent pixels and fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high-quality 3.5" x 5.25" print could be obtained from a three-layer system containing 1300 x 1950 pixels per layer, or about 7.6 million pixels in all. In short, it became clear that when a digital camera contained about 6 million pixels (in a single layer using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems for the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging amongst digital camera makers. The question arises of just how many pixels are required, and whether all pixels are equal. This paper will provide a practical look at how many pixels are needed for a good print based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million 5.7-micron pixels or 6 million 7.8-micron pixels? How do intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size? A systematic review of these issues will be provided within the context of image quality and ISO speed models developed over the last 15 years.
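
    The question in the title reduces to simple arithmetic once a target printing resolution is chosen. As a worked example, the 300 dpi figure below is a common print-quality assumption, not a number from the abstract.

    ```python
    def pixels_for_print(width_in, height_in, dpi=300):
        """Pixels per colour layer needed to print width_in x height_in inches at dpi."""
        return int(width_in * dpi) * int(height_in * dpi)

    # 4" x 6" at 300 dpi -> 1200 x 1800 = 2.16 Mpixels per layer, in the same
    # ballpark as the 1300 x 1950 (~2.5 Mpixel) layers quoted above for a
    # high-quality 3.5" x 5.25" print.
    print(pixels_for_print(4, 6))
    ```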

  16. 47 CFR 73.9003 - Compliance requirements for covered demodulator products: Unscreened content.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... operating in a mode compatible with the digital visual interface (DVI) rev. 1.0 Specification as an image having the visual equivalent of no more than 350,000 pixels per frame (e.g. an image with resolution of 720×480 pixels for a 4:3 (nonsquare pixel) aspect ratio), and 30 frames per second. Such an image may...

  17. 47 CFR 73.9004 - Compliance requirements for covered demodulator products: Marked content.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... compatible with the digital visual interface (DVI) Rev. 1.0 Specification as an image having the visual equivalent of no more than 350,000 pixels per frame (e.g., an image with resolution of 720×480 pixels for a 4:3 (nonsquare pixel) aspect ratio), and 30 frames per second. Such an image may be attained by...

  18. Pixelated camouflage patterns from the perspective of hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Racek, František; Jobánek, Adam; Baláž, Teodor; Krejčí, Jaroslav

    2016-10-01

    Pixelated camouflage patterns fulfill the roles of both principles, matching and disrupting, that are exploited for blending a target into the background. This means that a pixelated pattern should respect the natural background in its spectral and spatial characteristics, embodied in micro and macro patterns. Hyperspectral (HS) imaging plays a similar, though reverse, role in the field of reconnaissance systems: an HS camera fundamentally records and extracts both the spectral and the spatial information belonging to the recorded scenery. The article therefore deals with the problems of HS imaging and the subsequent processing of HS images of pixelated camouflage patterns, which are, among other things, characterized by their specific spatial-frequency heterogeneity.

  19. Faxed document image restoration method based on local pixel patterns

    NASA Astrophysics Data System (ADS)

    Akiyama, Teruo; Miyamoto, Nobuo; Oguro, Masami; Ogura, Kenji

    1998-04-01

    A method for restoring degraded faxed document images using the patterns of pixels that make up small areas of a document is proposed. The method effectively restores faxed images that contain halftone textures and/or high-density salt-and-pepper noise, both of which degrade OCR system performance. In the halftone image restoration process, white-centered 3 x 3 pixel patterns, in which black and white pixels alternate, are first identified as halftone textures using the distribution of pixel values, and the white center pixels are then inverted to black. To remove high-density salt-and-pepper noise, it is assumed that the degradation is caused by ill-balanced bias and inappropriate thresholding of the sensor output, which results in the addition of random noise. The restored image can be estimated using an approximation that applies the inverse operation of the assumed degradation process. To process degraded faxed images, the algorithms mentioned above are combined. An experiment was conducted using 24 especially poor-quality examples selected from data sets that exemplify what practical fax-based OCR systems cannot handle. The maximum recovery rate in terms of mean square error was 98.8 percent.
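
    The halftone-texture step can be sketched directly: scan each interior pixel, test whether its 3 x 3 neighbourhood is an alternating black/white checkerboard with a white centre, and if so set the centre pixel to black. This is a simplified reconstruction of the idea, not the authors' implementation; here 1 denotes black and 0 denotes white.

    ```python
    import numpy as np

    def remove_halftone(binary):
        """binary: 2-D array with 1 = black, 0 = white. Detect white-centred
        3 x 3 checkerboard neighbourhoods and set the centre pixel to black."""
        out = binary.copy()
        h, w = binary.shape
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                win = binary[r - 1:r + 2, c - 1:c + 2]
                centre_white = win[1, 1] == 0
                cross_black = (win[0, 1] == 1 and win[2, 1] == 1
                               and win[1, 0] == 1 and win[1, 2] == 1)
                corners_white = (win[0, 0] == 0 and win[0, 2] == 0
                                 and win[2, 0] == 0 and win[2, 2] == 0)
                if centre_white and cross_black and corners_white:
                    out[r, c] = 1            # invert the white centre to black
        return out
    ```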

  20. Weber-aware weighted mutual information evaluation for infrared-visible image fusion

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoyan; Wang, Shining; Yuan, Ding

    2016-10-01

    A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
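
    A compact reading of the metric: pixels are grouped by which source dominates their Weber component, the mutual information between the dominant source and the fused image is computed per group, and the group values are combined with weights proportional to group size. The sketch below implements that weighting with a histogram estimate of mutual information; the bin count is an assumption, and raw intensities stand in for the Weber components.

    ```python
    import numpy as np

    def mutual_information(a, b, bins=64):
        """Histogram estimate of mutual information between two pixel populations."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    def weighted_fusion_metric(fused, ir, vis, ir_dominant_mask):
        """Weight per-group MI by the number of pixels in each dominance group."""
        n_ir = int(ir_dominant_mask.sum())
        n_vis = ir_dominant_mask.size - n_ir
        mi_ir = mutual_information(ir[ir_dominant_mask], fused[ir_dominant_mask])
        mi_vis = mutual_information(vis[~ir_dominant_mask], fused[~ir_dominant_mask])
        return (n_ir * mi_ir + n_vis * mi_vis) / ir_dominant_mask.size
    ```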

  1. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.

  3. An LOD with improved breakdown voltage in full-frame CCD devices

    NASA Astrophysics Data System (ADS)

    Banghart, Edmund K.; Stevens, Eric G.; Doan, Hung Q.; Shepherd, John P.; Meisenzahl, Eric J.

    2005-02-01

    In full-frame image sensors, lateral overflow drain (LOD) structures are typically formed along the vertical CCD shift registers to provide a means for preventing charge blooming in the imager pixels. In a conventional LOD structure, the n-type LOD implant is made through the thin gate dielectric stack in the device active area and adjacent to the thick field oxidation that isolates the vertical CCD columns of the imager. In this paper, a novel LOD structure is described in which the n-type LOD impurities are placed directly under the field oxidation and are, therefore, electrically isolated from the gate electrodes. By reducing the electrical fields that cause breakdown at the silicon surface, this new structure permits a larger amount of n-type impurities to be implanted for the purpose of increasing the LOD conductivity. As a consequence of the improved conductance, the LOD width can be significantly reduced, enabling the design of higher resolution imaging arrays without sacrificing charge capacity in the pixels. Numerical simulations with MEDICI of the LOD leakage current are presented that identify the breakdown mechanism, while three-dimensional solutions to Poisson's equation are used to determine the charge capacity as a function of pixel dimension.

  4. Amplifier based broadband pixel for sub-millimeter wave imaging

    NASA Astrophysics Data System (ADS)

    Sarkozy, Stephen; Drewes, Jonathan; Leong, Kevin M. K. H.; Lai, Richard; Mei, X. B. (Gerry); Yoshida, Wayne; Lange, Michael D.; Lee, Jane; Deal, William R.

    2012-09-01

    Broadband sub-millimeter wave technology has received significant attention for potential applications in security, medical, and military imaging. Despite theoretical advantages of reduced size, weight, and power compared to current millimeter wave systems, sub-millimeter wave systems have been hampered by a fundamental lack of amplification with sufficient gain and noise figure properties. We report a broadband pixel operating from 300 to 340 GHz, biased off a single 2 V power supply. Over this frequency range, the amplifiers provide >40 dB gain and <8 dB noise figure, representing current state-of-the-art performance capabilities. This pixel is enabled by revolutionary enhancements to indium phosphide (InP) high electron mobility transistor technology, based on a sub-50 nm gate and an indium arsenide composite channel with a projected maximum oscillation frequency fmax > 1.0 THz. The first sub-millimeter wave-based images using active amplification are demonstrated as part of the Joint Improvised Explosive Device Defeat Organization Long Range Personnel Imager Program. This development and demonstration may bring to life future sub-millimeter-wave and THz applications such as solutions to brownout problems, ultra-high bandwidth satellite communication cross-links, and future planetary exploration missions.

  5. UV Imaging of R136 with the GHRS and the WFPC-2

    NASA Astrophysics Data System (ADS)

    Malumuth, E. M.; Ebbets, D.; Heap, S. R.; Maran, S. P.; Hutchings, J. B.; Lindler, D. J.

    1994-05-01

    Now that the COSTAR corrective optics have been installed and aligned in the Hubble Space Telescope (HST), the Goddard High Resolution Spectrograph (GHRS) can obtain clean spectra and images of stars in very crowded fields. To demonstrate this restored capability, an Early Release Observation program to observe hot, luminous stars in the center of R136a (the central cluster of the 30 Doradus complex in the Large Magellanic Cloud) has been scheduled in early April. Through this program we will obtain a series of UV images through the Small Science Aperture (SSA) and Large Science Aperture (LSA) of the GHRS. The images will be taken with the N2 mirror and D2 detector (CsTe cathode on a MgF2 window) and thus will have a bandpass that extends from 1150 to 3200 Angstroms. The SSA images will consist of 13 x 13 pixels with a pixel spacing of 0.027 arcsec per pixel. Each pixel covers a 0.11 x 0.11 arcsec area on the sky. Thus each image will cover the entire SSA (0.22 x 0.22 arcsec). The SSA images will include one centered on the initial pointing (located between R136a1 and R136a2; separation = 0.12 arcsec), an image of R136a2, and an image of R136a5 (0.18 arcsec from R136a2). Two LSA images of the central region of R136 will be taken. The first, a 3 x 3 mosaic centered on R136a5, will consist of 22 x 22 pixels each, with a pixel spacing of 0.11 arcsec per pixel. Together these images cover a 5.22 x 5.22 arcsec area. The second will cover the central 1.2 x 1.2 arcsec with a pixel spacing of 0.055 arcsec per pixel. These images will be examined to determine the true pointing for the spectra of R136a2 and R136a5, the imaging characteristics of the GHRS, and the UV brightnesses of all of the stars within the field. In addition to these images, 3 WFPC-2 PC exposures will be obtained with the F336W filter. These images are 5, 10 and 20 seconds in duration. Photometry of the stars in these images will be compared with the GHRS UV photometry, as well as published WFPC photometry.

  6. Superpixel-Augmented Endmember Detection for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Castano, Rebecca; Gilmore, Martha

    2011-01-01

    Superpixels are homogeneous image regions composed of several contiguous pixels. They are produced by shattering the image into contiguous, homogeneous regions that each cover between 20 and 100 image pixels. The segmentation aims for a many-to-one mapping from superpixels to image features; each image feature could contain several superpixels, but each superpixel occupies no more than one image feature. This conservative segmentation is relatively easy to automate in a robust fashion. Superpixel processing is related to the more general idea of improving hyperspectral analysis through spatial constraints, which can recognize subtle features at or below the level of noise by exploiting the fact that their spectral signatures are found in neighboring pixels. Recent work has explored spatial constraints for endmember extraction, showing significant advantages over techniques that ignore pixels' relative positions. Methods such as AMEE (automated morphological endmember extraction) express spatial influence using fixed isometric relationships, such as a local square window or Euclidean distance in pixel coordinates. In other words, two pixels' covariances are based on their spatial proximity, but are independent of their absolute location in the scene. These isometric spatial constraints are most appropriate when spectral variation is smooth and constant over the image. Superpixels are simple to implement, efficient to compute, and are empirically effective. They can be used as a preprocessing step with any desired endmember extraction technique. Superpixels also have a solid theoretical basis in the hyperspectral linear mixing model, making them a principled approach for improving endmember extraction. Unlike existing approaches, superpixels can accommodate non-isometric covariance between image pixels (characteristic of discrete image features separated by step discontinuities). These kinds of image features are common in natural scenes. Analysts can substitute superpixels for image pixels during endmember analysis that leverages the spatial contiguity of scene features to enhance subtle spectral features. Superpixels define populations of image pixels that are independent samples from each image feature, permitting robust estimation of spectral properties, and reducing measurement noise in proportion to the area of the superpixel. This permits improved endmember extraction, and enables automated search for novel and constituent minerals in very noisy, hyperspatial images. This innovation begins with a graph-based segmentation based on the work of Felzenszwalb et al., but then expands their approach to the hyperspectral image domain with a Euclidean distance metric. Then, the mean spectrum of each segment is computed, and the resulting data cloud is used as input into sequential maximum angle convex cone (SMACC) endmember extraction.
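
    The preprocessing step itself is simple: replace each superpixel by the mean spectrum of its member pixels, so that endmember extraction (e.g. SMACC) operates on a much smaller, noise-reduced data cloud. A minimal sketch of that step is given below; it assumes the superpixel labels (from any graph-based segmenter) are contiguous integers starting at zero.

    ```python
    import numpy as np

    def superpixel_mean_spectra(cube, labels):
        """cube: (H, W, B) hyperspectral image; labels: (H, W) superpixel ids
        assumed to run from 0 to n-1. Returns an (n, B) array of mean spectra."""
        h, w, b = cube.shape
        flat = cube.reshape(-1, b)
        ids = labels.ravel()
        n = ids.max() + 1
        sums = np.zeros((n, b))
        np.add.at(sums, ids, flat)                      # accumulate spectra per superpixel
        counts = np.bincount(ids, minlength=n)[:, None]
        return sums / counts                            # mean spectrum of each superpixel
    ```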

  7. Resolution Enhancement of MODIS-derived Water Indices for Studying Persistent Flooding

    NASA Astrophysics Data System (ADS)

    Underwood, L. W.; Kalcic, M. T.; Fletcher, R. M.

    2012-12-01

    Monitoring coastal marshes for persistent flooding and salinity stress is a high priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offer timely and relatively accurate information for aiding wetland vegetation management. Monitoring accuracy is often limited by mixed pixels, which occur when the area represented by a pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within a 250m MODIS pixel. Fusion of MODIS and Landsat can improve both spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, a method for estimating sub-pixel values from a MODIS time series of a Normalized Difference Water Index (NDWI), using temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily, the NDWI computed from daily 250m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30-m data. Each MODIS pixel was disaggregated into a mixture of the eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. A new daily time series was produced using both the reconstructed 250-m MODIS, with enhanced features, and the approximated daily 30-m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike, to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering flooding events by Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High resolution images were formed for all days in 2008 between the first cloud-free Landsat scene and the last cloud-free Landsat scene. To refine and validate flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels adjusted to marsh to optimize thresholds for MODIS-derived time series of NDWI. Seasonal fluctuations were adjusted by subtracting the ten-year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and a combination of indices were compared. Flooding persistence mapped with the higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is that improved mapping of the extent and duration of inundation can be provided.

  9. Improved discrimination among similar agricultural plots using red-and-green-based pseudo-colour imaging

    NASA Astrophysics Data System (ADS)

    Doi, Ryoichi

    2016-04-01

    The effects of a pseudo-colour imaging method were investigated by discriminating among similar agricultural plots in remote sensing images acquired using the Airborne Visible/Infrared Imaging Spectrometer (Indiana, USA) and the Landsat 7 satellite (Fergana, Uzbekistan), and that provided by GoogleEarth (Toyama, Japan). From each dataset, red (R)-green (G)-R-G-blue yellow (RGrgbyB), and RGrgby-1B pseudo-colour images were prepared. From each, cyan, magenta, yellow, key black, L*, a*, and b* derivative grayscale images were generated. In the Airborne Visible/Infrared Imaging Spectrometer image, pixels were selected for corn no tillage (29 pixels), corn minimum tillage (27), and soybean (34) plots. Likewise, in the Landsat 7 image, pixels representing corn (73 pixels), cotton (110), and wheat (112) plots were selected, and in the GoogleEarth image, those representing soybean (118 pixels) and rice (151) were selected. When the 14 derivative grayscale images were used together with an RGB yellow grayscale image, the overall classification accuracy improved from 74 to 94% (Airborne Visible/Infrared Imaging Spectrometer), 64 to 83% (Landsat), or 77 to 90% (GoogleEarth). As an indicator of discriminatory power, the kappa significance improved 1018-fold (Airborne Visible/Infrared Imaging Spectrometer) or greater. The derivative grayscale images were found to increase the dimensionality and quantity of data. Herein, the details of the increases in dimensionality and quantity are further analysed and discussed.
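
    The derivative grayscale images referred to above are standard colour-space conversions of a (pseudo-)colour image: the CMYK separations plus the CIELAB L*, a* and b* channels. The sketch below generates such channels with standard conversion formulas; it is illustrative and not necessarily the author's exact procedure.

    ```python
    import numpy as np
    from skimage.color import rgb2lab

    def derivative_grayscales(rgb):
        """rgb: float array (H, W, 3) in [0, 1]. Returns a dict of single-channel
        derivative images: C, M, Y, K separations and CIELAB L*, a*, b*."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        k = 1.0 - rgb.max(axis=-1)
        denom = np.where(k < 1.0, 1.0 - k, 1.0)     # avoid division by zero for black
        c = (1.0 - r - k) / denom
        m = (1.0 - g - k) / denom
        y = (1.0 - b - k) / denom
        lab = rgb2lab(rgb)
        return {"C": c, "M": m, "Y": y, "K": k,
                "L*": lab[..., 0], "a*": lab[..., 1], "b*": lab[..., 2]}
    ```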

  11. Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks.

    PubMed

    Wang, Lei; Xu, Xin; Dong, Hao; Gui, Rong; Pu, Fangling

    2018-03-03

    Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of the excellent performance of CNN, more and more methods based on CNN are applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can only classify one pixel at a time. Because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously. When classifying a whole PolSAR image, it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use the interrelation of land covers to improve the classification results. The experiments with FFS-CNN are carried out on a Chinese Gaofen-3 PolSAR image and two other real PolSAR images. Experimental results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods.
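
    Classifying every pixel of a patch in one forward pass, rather than one pixel at a time, corresponds to a small fully convolutional network whose output keeps the spatial size of the input and has one channel per class. The toy sketch below illustrates only that idea; the layer sizes and the six-channel input are assumptions, not the published FFS-CNN architecture.

    ```python
    import torch
    import torch.nn as nn

    class PatchClassifier(nn.Module):
        """Toy fully convolutional classifier: input (B, C_in, H, W) patch,
        output (B, n_classes, H, W) per-pixel class scores."""
        def __init__(self, in_channels=6, n_classes=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, n_classes, kernel_size=1),   # per-pixel class scores
            )

        def forward(self, x):
            return self.net(x)

    model = PatchClassifier()
    patch = torch.randn(1, 6, 15, 15)            # one 15 x 15 feature patch
    labels = model(patch).argmax(dim=1)          # (1, 15, 15) label map for the whole patch
    ```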

  12. Pixelated coatings and advanced IR coatings

    NASA Astrophysics Data System (ADS)

    Pradal, Fabien; Portier, Benjamin; Oussalah, Meihdi; Leplan, Hervé

    2017-09-01

    Reosc has developed pixelated infrared coatings on detectors. Reosc manufactured thick pixelated multilayer stacks on IR focal-plane arrays for bi-spectral imaging systems, demonstrating high filter performance, low crosstalk, and no deterioration of the device sensitivities. More recently, a 5-pixel filter matrix was designed and fabricated. Recent developments in pixelated coatings show that high-performance infrared filters can be coated directly on the detector for multispectral imaging. Next-generation space instruments can benefit from this technology to reduce their weight and consumption.

  13. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12.800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.

  14. Synthetic aperture radar images with composite azimuth resolution

    DOEpatents

    Bielek, Timothy P; Bickel, Douglas L

    2015-03-31

    A synthetic aperture radar (SAR) image is produced by using all phase histories of a set of phase histories to produce a first pixel array having a first azimuth resolution, and using less than all phase histories of the set to produce a second pixel array having a second azimuth resolution that is coarser than the first azimuth resolution. The first and second pixel arrays are combined to produce a third pixel array defining a desired SAR image that shows distinct shadows of moving objects while preserving detail in stationary background clutter.

  15. A custom hardware classifier for bruised apple detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Cárdenas, Javier; Figueroa, Miguel; Pezoa, Jorge E.

    2015-09-01

    We present a custom digital architecture for bruised apple classification using hyperspectral images in the near infrared (NIR) spectrum. The algorithm classifies each pixel in an image into one of three classes: bruised, non-bruised, and background. We extract two 5-element feature vectors for each pixel using only 10 out of the 236 spectral bands provided by the hyperspectral camera, thereby greatly reducing both the requirements of the imager and the computational complexity of the algorithm. We then use two linear-kernel support vector machines (SVMs) to classify each pixel. Each SVM was trained with 504 windows of 17×17 pixels per class, taken from 14 hyperspectral images of 320×320 pixels each. The architecture then computes the percentage of bruised pixels in each apple in order to adequately classify the fruit. We implemented the architecture on a Xilinx Zynq Z-7010 field-programmable gate array (FPGA) and tested it on images from a NIR N17E push-broom camera with a frame rate of 25 fps, a band-pixel rate of 1.888 MHz, and 236 spectral bands between 900 and 1700 nanometers in laboratory conditions. Using 28-bit fixed-point arithmetic, the circuit accurately discriminates 95.2% of the pixels corresponding to an apple, 81% of the pixels corresponding to a bruised apple, and 96.4% of the background. With the default threshold settings, the highest false-positive (FP) rate for a bruised apple is 18.7%. The circuit operates at the native frame rate of the camera, consumes 67 mW of dynamic power, and uses less than 10% of the logic resources on the FPGA.
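
    Per-pixel classification with linear SVMs reduces to evaluating decision functions of the form w·x + b on each pixel's 5-element band vector and then counting the fraction of bruised pixels per fruit. A software analogue of that step is sketched below; the weights, biases and the two-classifier decision scheme are placeholders, not the trained FPGA coefficients.

    ```python
    import numpy as np

    def classify_pixels(features, w_fruit, b_fruit, w_bruise, b_bruise):
        """features: (N, 5) band vectors. One linear SVM separates fruit from
        background, a second separates bruised from non-bruised fruit pixels.
        Returns labels: 0 = background, 1 = non-bruised, 2 = bruised."""
        is_fruit = features @ w_fruit + b_fruit > 0
        is_bruised = features @ w_bruise + b_bruise > 0
        labels = np.zeros(len(features), dtype=int)
        labels[is_fruit & ~is_bruised] = 1
        labels[is_fruit & is_bruised] = 2
        return labels

    def bruised_fraction(labels):
        """Fraction of fruit pixels classified as bruised, used to flag the apple."""
        fruit = labels > 0
        return float((labels[fruit] == 2).mean()) if fruit.any() else 0.0
    ```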

  16. Direct imaging detectors for electron microscopy

    NASA Astrophysics Data System (ADS)

    Faruqi, A. R.; McMullan, G.

    2018-01-01

    Electronic detectors used for imaging in electron microscopy are reviewed in this paper. Much of the detector technology is based on developments in microelectronics, which have allowed the design of direct detectors with fine pixels and fast readout that are sufficiently radiation hard for practical use. Detectors included in this review are hybrid pixel detectors, monolithic active pixel sensors based on CMOS technology, and pnCCDs, which share one important feature: they are all direct imaging detectors, relying on directly converting energy in a semiconductor. Traditional methods of recording images in the electron microscope, such as film and CCDs, are mentioned briefly, along with a more detailed description of direct electronic detectors. Many applications benefit from the use of direct electron detectors and a few examples are mentioned in the text. In recent years one of the most dramatic advances in structural biology has been the deployment of the new backthinned CMOS direct detectors to attain near-atomic resolution molecular structures with electron cryo-microscopy (cryo-EM). The development of direct detectors, along with a number of other parallel advances, has seen a very significant amount of new information being recorded in the images, which was not previously possible, and this forms the main emphasis of the review.

  17. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
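
    The colormap-sorting idea can be shown in a few lines: reorder the palette (here by luminance) so that numerically adjacent indices point to visually similar colours, then remap the index image accordingly, after which predictive coders such as DPCM again see small pixel-to-pixel index differences. Luminance ordering is one reasonable choice for the sketch, not necessarily the ordering studied in the paper.

    ```python
    import numpy as np

    def sort_colormap(palette, index_image):
        """palette: (256, 3) RGB colormap; index_image: (H, W) of palette indices.
        Returns (sorted_palette, remapped_indices) with the palette ordered by luminance."""
        luminance = palette @ np.array([0.299, 0.587, 0.114])   # Rec. 601 luma weights
        order = np.argsort(luminance)                           # new palette order
        sorted_palette = palette[order]
        # inverse permutation: old index -> position in the sorted palette
        remap = np.empty_like(order)
        remap[order] = np.arange(len(order))
        return sorted_palette, remap[index_image]

    # After remapping, neighbouring pixels that point to similar colours also have
    # numerically close indices, so pixel-to-pixel index differences become small.
    ```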

  18. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    NASA Astrophysics Data System (ADS)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the probability of a pixel is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster. The regional dissimilarity measure of one sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results can be obtained by solving for the model parameters and the generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of the segmentation results for simulated and real SAR images.
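
    The regional dissimilarity used by the algorithm is the sum, over the pixels of one Voronoi sub-region, of the negative log-likelihood of a Gamma mixture describing a candidate cluster. A small sketch of that quantity follows; the mixture weights and Gamma parameters are placeholders, and the MRF prior and fuzzy memberships of the full objective are omitted.

    ```python
    import numpy as np
    from scipy.stats import gamma

    def regional_dissimilarity(region_pixels, weights, shapes, scales):
        """Negative log-likelihood of the pixels in one Voronoi sub-region under a
        Gamma mixture model describing one candidate cluster."""
        x = np.asarray(region_pixels, dtype=float)[:, None]                    # (N, 1)
        comp = gamma.pdf(x, a=np.asarray(shapes), scale=np.asarray(scales))    # (N, K)
        mix = comp @ np.asarray(weights)               # mixture density per pixel
        return float(-np.sum(np.log(mix + 1e-300)))    # summed over the sub-region

    # e.g. a two-component mixture for one cluster (hypothetical parameters)
    d = regional_dissimilarity([0.8, 1.2, 0.9], weights=[0.6, 0.4],
                               shapes=[2.0, 5.0], scales=[0.5, 0.3])
    print(d)
    ```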

  19. The DEPFET Sensor-Amplifier Structure: A Method to Beat 1/f Noise and Reach Sub-Electron Noise in Pixel Detectors

    PubMed Central

    Lutz, Gerhard; Porro, Matteo; Aschauer, Stefan; Wölfel, Stefan; Strüder, Lothar

    2016-01-01

    Depleted field effect transistors (DEPFET) are used to achieve very low noise signal charge readout with sub-electron measurement precision. This is accomplished by repeatedly reading an identical charge, thereby suppressing not only the white serial noise but also the usually constant 1/f noise. The repetitive non-destructive readout (RNDR) DEPFET is an ideal central element for an active pixel sensor (APS) pixel. The theory has been derived thoroughly and results have been verified on RNDR-DEPFET prototypes. A charge measurement precision of 0.18 electrons has been achieved. The device is well-suited for spectroscopic X-ray imaging and for optical photon counting in pixel sensors, even at high photon numbers in the same cell. PMID:27136549
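
    The benefit of repetitive non-destructive readout is easy to see numerically for the white-noise part: averaging N independent reads of the same charge reduces the read noise by a factor of sqrt(N) (the additional suppression of 1/f noise is not captured by this toy). The noise figures below are illustrative, not the DEPFET values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_charge = 5.0      # electrons stored in the internal gate (illustrative)
    sigma_read = 2.0       # white read noise per single read, in electrons (illustrative)

    for n_reads in (1, 16, 121):
        reads = true_charge + rng.normal(0.0, sigma_read, size=(100_000, n_reads))
        estimate = reads.mean(axis=1)         # average of N non-destructive reads
        print(n_reads, estimate.std())        # approaches sigma_read / sqrt(n_reads)
    ```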

  20. PIXEL PUSHER

    NASA Technical Reports Server (NTRS)

    Stanfill, D. F.

    1994-01-01

    Pixel Pusher is a Macintosh application used for viewing and performing minor enhancements on imagery. It will read image files in JPL's two primary image formats- VICAR and PDS - as well as the Macintosh PICT format. VICAR (NPO-18076) handles an array of image processing capabilities which may be used for a variety of applications including biomedical image processing, cartography, earth resources, and geological exploration. Pixel Pusher can also import VICAR format color lookup tables for viewing images in pseudocolor (256 colors). This program currently supports only eight bit images but will work on monitors with any number of colors. Arbitrarily large image files may be viewed in a normal Macintosh window. Color and contrast enhancement can be performed with a graphical "stretch" editor (as in contrast stretch). In addition, VICAR images may be saved as Macintosh PICT files for exporting into other Macintosh programs, and individual pixels can be queried to determine their locations and actual data values. Pixel Pusher is written in Symantec's Think C and was developed for use on a Macintosh SE30, LC, or II series computer running System Software 6.0.3 or later and 32 bit QuickDraw. Pixel Pusher will only run on a Macintosh which supports color (whether a color monitor is being used or not). The standard distribution medium for this program is a set of three 3.5 inch Macintosh format diskettes. The program price includes documentation. Pixel Pusher was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Think C is a trademark of Symantec Corporation. Macintosh is a registered trademark of Apple Computer, Inc.

  1. ALPIDE, the Monolithic Active Pixel Sensor for the ALICE ITS upgrade

    NASA Astrophysics Data System (ADS)

    Mager, M.; ALICE Collaboration

    2016-07-01

    A new 10 m² inner tracking system based on seven concentric layers of Monolithic Active Pixel Sensors will be installed in the ALICE experiment during the second long shutdown of LHC in 2019-2020. The monolithic pixel sensors will be fabricated in the 180 nm CMOS Imaging Sensor process of TowerJazz. The ALPIDE design takes full advantage of a particular process feature, the deep p-well, which allows for full CMOS circuitry within the pixel matrix, while at the same time retaining the full charge collection efficiency. Together with the small feature size and the availability of six metal layers, this allowed a continuously active low-power front-end to be placed into each pixel and an in-matrix sparsification circuit to be used that sends only the addresses of hit pixels to the periphery. This approach led to a power consumption of less than 40 mW/cm², a spatial resolution of around 5 μm, a peaking time of around 2 μs, while being radiation hard to some 10¹³ 1 MeV neq/cm², fulfilling or exceeding the ALICE requirements. Over the last years of R & D, several prototype circuits have been used to verify radiation hardness, and to optimize pixel geometry and in-pixel front-end circuitry. The positive results led to a submission of full-scale (3 cm×1.5 cm) sensor prototypes in 2014. They are being characterized in a comprehensive campaign that also involves several irradiation and beam tests. A summary of the results obtained and prospects towards the final sensor to instrument the ALICE Inner Tracking System are given.

  2. WE-G-204-03: Photon-Counting Hexagonal Pixel Array CdTe Detector: Optimal Resampling to Square Pixels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, S; Vedantham, S; Karellas, A

    Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65 mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 keV photons. The detector had hexagonal pixels with an apothem of 30 microns, resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways: the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in a square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at Nyquist frequencies alone resulted in 54 micron square pixels. For the photon-counting CdTe detector and after resampling to 53 micron square pixels using quadratic interpolation, the presampling MTF at the Nyquist frequency of 9.434 cycles/mm along the two directions were 0.501 and 0.507. Conclusion: A hexagonal pixel array photon-counting CdTe detector after resampling to square pixels provides high-resolution imaging suitable for fluoroscopy.
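
    The second selection criterion above (minimizing the mean absolute difference between the square and hexagonal aperture functions up to the Nyquist limit) can be sketched numerically. The Python sketch below rasterizes an idealized, fully sensitive hexagonal aperture with a 30 micron apothem, takes its Fourier transform, and scans candidate square pixel sizes; the grid spacing, search range and variable names are illustrative assumptions, not the authors' implementation.

        import numpy as np

        # Hexagonal pixel: apothem 30 um -> pixel spacings of 60 and 51.96 um,
        # hence Nyquist frequencies of 1/(2*pitch) along the two orthogonal directions.
        apothem = 30.0                                   # um
        nyq_x, nyq_y = 1 / (2 * 60.0), 1 / (2 * 51.96)   # cycles/um

        # Rasterize the hexagonal aperture as the intersection of three 60-degree slabs.
        n, step = 1024, 0.25                             # 0.25 um sampling, 256 um field
        x = (np.arange(n) - n / 2) * step
        X, Y = np.meshgrid(x, x)
        hexagon = np.ones((n, n))
        for theta in (0.0, np.pi / 3, 2 * np.pi / 3):
            hexagon *= np.abs(X * np.cos(theta) + Y * np.sin(theta)) <= apothem

        # Aperture function = normalized magnitude of the Fourier transform of the aperture.
        H = np.abs(np.fft.fftshift(np.fft.fft2(hexagon)))
        H /= H.max()
        f = np.fft.fftshift(np.fft.fftfreq(n, d=step))   # cycles/um
        cut_x, cut_y = H[n // 2, :], H[:, n // 2]        # central cuts along the two axes

        def square_aperture(freq, size):
            return np.abs(np.sinc(freq * size))          # np.sinc(x) = sin(pi*x)/(pi*x)

        def mismatch(size):
            """Mean absolute difference between square and hexagonal aperture
            functions along both directions, up to the respective Nyquist limits."""
            err = 0.0
            for nyq, cut in ((nyq_x, cut_x), (nyq_y, cut_y)):
                m = np.abs(f) <= nyq
                err += np.mean(np.abs(square_aperture(f[m], size) - cut[m]))
            return err

        sizes = np.arange(40.0, 70.0, 0.5)               # candidate square pixel sizes, um
        best = min(sizes, key=mismatch)
        print("best-matching square pixel size: %.1f um" % best)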

  3. Noise and spectroscopic performance of DEPMOSFET matrix devices for XEUS

    NASA Astrophysics Data System (ADS)

    Treis, J.; Fischer, P.; Hälker, O.; Herrmann, S.; Kohrs, R.; Krüger, H.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Strüder, L.; Trimpl, M.; Wermes, N.; Wölfel, S.

    2005-08-01

    DEPMOSFET-based Active Pixel Sensor (APS) matrix devices, originally developed to cope with the challenging requirements of the XEUS Wide Field Imager, have proven to be a promising new imager concept for a variety of future X-ray imaging and spectroscopy missions like Simbol-X. The devices combine excellent energy resolution, high-speed readout and low power consumption with the attractive feature of random accessibility of pixels. A production run of sensor prototypes with 64 x 64 pixels, each 75 μm x 75 μm in size, has recently been completed at the MPI semiconductor laboratory in Munich. The devices are built for row-wise readout and require dedicated control and signal processing electronics of the CAMEX type, which is integrated together with the sensor onto a readout hybrid. A number of hybrids incorporating the most promising sensor design variants have been built, and their performance has been studied in detail. A spectroscopic resolution of 131 eV has been measured, and the readout noise is as low as 3.5 e- ENC. Here, the dependence of readout noise and spectroscopic resolution on the device temperature is presented.

  4. Programmable hyperspectral image mapper with on-array processing

    NASA Technical Reports Server (NTRS)

    Cutts, James A. (Inventor)

    1995-01-01

    A hyperspectral imager includes a focal plane having an array of spaced image recording pixels receiving light from a scene moving relative to the focal plane in a longitudinal direction, the recording pixels being transportable at a controllable rate in the focal plane in the longitudinal direction, an electronic shutter for adjusting an exposure time of the focal plane, whereby recording pixels in an active area of the focal plane are removed therefrom and stored upon expiration of the exposure time, an electronic spectral filter for selecting a spectral band of light received by the focal plane from the scene during each exposure time and an electronic controller connected to the focal plane, to the electronic shutter and to the electronic spectral filter for controlling (1) the controllable rate at which the recording is transported in the longitudinal direction, (2) the exposure time, and (3) the spectral band so as to record a selected portion of the scene through M spectral bands with a respective exposure time t(sub q) for each respective spectral band q.

  5. Compact SPAD-Based Pixel Architectures for Time-Resolved Image Sensors

    PubMed Central

    Perenzoni, Matteo; Pancheri, Lucio; Stoppa, David

    2016-01-01

    This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging. The focus of the paper is on pixel architectures featuring small pixel size (<25 μm) and high fill factor (>20%) as a key enabling technology for the successful implementation of high spatial resolution SPAD-based image sensors. A summary of the main CMOS SPAD implementations, their characteristics and integration challenges, is provided from the perspective of targeting large pixel arrays, where one of the key drivers is the spatial uniformity. The main analog techniques aimed at time-gated photon counting and photon timestamping suitable for compact and low-power pixels are critically discussed. The main features of these solutions are the adoption of analog counting techniques and time-to-analog conversion, in NMOS-only pixels. Reliable quantum-limited single-photon counting, self-referenced analog-to-digital conversion, time gating down to 0.75 ns and timestamping with 368 ps jitter are achieved. PMID:27223284

  6. Correction of clipped pixels in color images.

    PubMed

    Xu, Di; Doutre, Colin; Nasiopoulos, Panos

    2011-03-01

    Conventional images store a very limited dynamic range of brightness. The true luma in the bright area of such images is often lost due to clipping. When clipping changes the R, G, B color ratios of a pixel, color distortion also occurs. In this paper, we propose an algorithm to enhance both the luma and chroma of the clipped pixels. Our method is based on the strong chroma spatial correlation between clipped pixels and their surrounding unclipped area. After identifying the clipped areas in the image, we partition the clipped areas into regions with similar chroma, and estimate the chroma of each clipped region based on the chroma of its surrounding unclipped region. We correct the clipped R, G, or B color channels based on the estimated chroma and the unclipped color channel(s) of the current pixel. The last step involves smoothing of the boundaries between regions of different clipping scenarios. Both objective and subjective experimental results show that our algorithm is very effective in restoring the color of clipped pixels. © 2011 IEEE
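
    As a heavily simplified sketch of the idea (not the authors' algorithm, which first partitions the clipped areas into regions of similar chroma and smooths region boundaries), a single clipped channel can be re-estimated from the unclipped channels using color ratios gathered from nearby unclipped pixels:

        import numpy as np

        def correct_clipped(img, clip_level=255, win=7):
            """Per-pixel toy variant of chroma-based clipping correction.

            For each pixel with exactly one clipped channel, the clipped value is
            re-estimated from the unclipped channels using the average color ratios
            of nearby unclipped pixels.  Returns a float image (values may exceed
            clip_level, i.e. the dynamic range is extended)."""
            h, w, _ = img.shape
            out = img.astype(np.float64)
            clipped = img >= clip_level                      # per-channel boolean mask
            for y in range(h):
                for x in range(w):
                    ch = np.flatnonzero(clipped[y, x])
                    if len(ch) != 1:                         # handle the one-channel case only
                        continue
                    c = ch[0]
                    y0, y1 = max(0, y - win), min(h, y + win + 1)
                    x0, x1 = max(0, x - win), min(w, x + win + 1)
                    patch = img[y0:y1, x0:x1].reshape(-1, 3).astype(np.float64)
                    good = patch[(patch < clip_level).all(axis=1)]   # unclipped neighbors
                    if len(good) == 0:
                        continue
                    others = [k for k in range(3) if k != c]
                    # neighborhood ratio of the clipped channel to the other two channels
                    ratio = np.mean(good[:, c]) / max(np.mean(good[:, others]), 1e-6)
                    out[y, x, c] = ratio * np.mean(out[y, x, others])
            return out

        rgb = np.clip(np.random.default_rng(0).normal(180, 60, (32, 32, 3)), 0, 255).astype(np.uint8)
        restored = correct_clipped(rgb)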

  7. Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering

    PubMed Central

    Mars, Kamel; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro

    2017-01-01

    Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a problem of lack of detectors suitable for MHz modulation rate parallel detection, detecting multiple small SRS signals while eliminating extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of Stokes-on and Stokes-off signal at modulation frequency of 20 MHz in the pixel before reading out. The generated small SRS signal is extracted and amplified in a pixel using a high-speed and large area lateral electric field charge modulator (LEFM) employing two-step ion implantation and an in-pixel pair of low-pass filter, a sample and hold circuit and a switched capacitor integrator using a fully differential amplifier. A prototype chip is fabricated using 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system. PMID:29120358

  8. Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering.

    PubMed

    Mars, Kamel; Lioe, De Xing; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro; Hashimoto, Mamoru

    2017-11-09

    Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a problem of lack of detectors suitable for MHz modulation rate parallel detection, detecting multiple small SRS signals while eliminating extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of Stokes-on and Stokes-off signal at modulation frequency of 20 MHz in the pixel before reading out. The generated small SRS signal is extracted and amplified in a pixel using a high-speed and large area lateral electric field charge modulator (LEFM) employing two-step ion implantation and an in-pixel pair of low-pass filter, a sample and hold circuit and a switched capacitor integrator using a fully differential amplifier. A prototype chip is fabricated using 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system.

  9. Development of high energy micro-tomography system at SPring-8

    NASA Astrophysics Data System (ADS)

    Uesugi, Kentaro; Hoshino, Masato

    2017-09-01

    A high energy X-ray micro-tomography system has been developed at BL20B2 in SPring-8. The available energy range is between 20 keV and 113 keV with a Si (511) double-crystal monochromator. The system enables us to image large or heavy materials such as fossils and metals. The X-ray image detector consists of a visible-light conversion system and an sCMOS camera. The effective pixel size can be varied discretely between 6.5 μm/pixel and 25.5 μm/pixel by changing a tandem lens. The format of the camera is 2048 pixels x 2048 pixels. As a demonstration of the system, an alkaline battery and a nodule from Bolivia were imaged. Details of the internal structure of the battery and of a female mold of a trilobite were successfully imaged without breaking the samples.

  10. Enhancing spatial resolution of (18)F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine.

    PubMed

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I; Shi, Kuangyu

    2015-07-07

    Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by (18)F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [(18)F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6   ±   4.2 µm (energy weighted centroid approximation) to 132.3   ±   3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
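
    A minimal sketch of the classification step, assuming scikit-learn and purely synthetic stand-ins for the Geant4-derived topological and energy features (the real feature definitions and labels are not reproduced here):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical per-pixel features for each fired pixel of a positron track
        # (e.g. deposited energy, number of fired neighbors, distance to the cluster
        # centroid); labels mark whether the pixel is the primary (entry) pixel.
        # In the paper these would come from Geant4 simulations of 18F positrons.
        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(5000, 3))
        y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0.8).astype(int)  # toy labels

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        clf.fit(X_train, y_train)

        def primary_pixel(cluster_features):
            """Pick the fired pixel with the largest decision-function score."""
            return int(np.argmax(clf.decision_function(cluster_features)))

        cluster = rng.normal(size=(6, 3))   # one measured track with 6 fired pixels
        print("estimated primary pixel index:", primary_pixel(cluster))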

  11. Enhancing spatial resolution of 18F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I.; Shi, Kuangyu

    2015-07-01

    Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by 18F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [18F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6   ±   4.2 µm (energy weighted centroid approximation) to 132.3   ±   3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.

  12. Fifty Years of Mars Imaging: from Mariner 4 to HiRISE

    NASA Image and Video Library

    2017-11-20

    This image from NASA's Mars Reconnaissance Orbiter (MRO) shows Mars' surface in detail. Mars has captured the imagination of astronomers for thousands of years, but it wasn't until the last half a century that we were able to capture images of its surface in detail. This particular site on Mars was first imaged in 1965 by the Mariner 4 spacecraft during the first successful fly-by mission to Mars. From an altitude of around 10,000 kilometers, this image (the ninth frame taken) achieved a resolution of approximately 1.25 kilometers per pixel. Since then, this location has been observed by six other visible cameras producing images with varying resolutions and sizes. This includes HiRISE (highlighted in yellow), which is the highest-resolution and has the smallest "footprint." This compilation, spanning Mariner 4 to HiRISE, shows each image at full-resolution. Beginning with Viking 1 and ending with our HiRISE image, this animation documents the historic imaging of a particular site on another world. In 1976, the Viking 1 orbiter began imaging Mars in unprecedented detail, and by 1980 had successfully mosaicked the planet at approximately 230 meters per pixel. In 1999, the Mars Orbiter Camera onboard the Mars Global Surveyor (1996) also imaged this site with its Wide Angle lens, at around 236 meters per pixel. This was followed by the Thermal Emission Imaging System on Mars Odyssey (2001), which also provided a visible camera producing the image we see here at 17 meters per pixel. Later in 2012, the High-Resolution Stereo Camera on the Mars Express orbiter (2003) captured this image of the surface at 25 meters per pixel. In 2010, the Context Camera on the Mars Reconnaissance Orbiter (2005) imaged this site at about 5 meters per pixel. Finally, in 2017, HiRISE acquired the highest resolution image of this location to date at 50 centimeters per pixel. When seen at this unprecedented scale, we can discern a crater floor strewn with small rocky deposits, boulders several meters across, and wind-blown deposits in the floors of small craters and depressions. This compilation of Mars images spanning over 50 years gives us a visual appreciation of the evolution of orbital Mars imaging over a single site. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.2 centimeters (20.6 inches) per pixel (with 2 x 2 binning); objects on the order of 156 centimeters (61.4 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22115

  13. Hyperspectral image analysis for the determination of alteration minerals in geothermal fields: Çürüksu (Denizli) Graben, Turkey

    NASA Astrophysics Data System (ADS)

    Uygur, Merve; Karaman, Muhittin; Kumral, Mustafa

    2016-04-01

    Çürüksu (Denizli) Graben hosts various geothermal fields such as Kızıldere, Yenice, Gerali, Karahayıt, and Tekkehamam. Neotectonic activities, which are caused by extensional tectonism, and deep circulation in sub-volcanic intrusions are the heat sources of the hydrothermal solutions. The temperature of the hydrothermal solutions is between 53 and 260 degrees Celsius. Phyllic, argillic, silicic, and carbonatization alterations and various hydrothermal minerals have been identified in various research studies of these areas. Surfaced hydrothermal alteration minerals are one set of potential indicators of geothermal resources. Developing exploration tools to define the surface indicators of geothermal fields can assist in the recognition of geothermal resources. Thermal and hyperspectral imaging and analysis can be used for defining the surface indicators of geothermal fields. This study tests the hypothesis that hyperspectral image analysis based on EO-1 Hyperion images can be used for the delineation and definition of surfaced hydrothermal alteration in geothermal fields. Hyperspectral image analyses were applied to images covering the geothermal fields whose alteration characteristics are known. To reduce data dimensionality and identify spectral endmembers, Kruse's multi-step process was applied to atmospherically and geometrically corrected hyperspectral images. Minimum Noise Fraction was used to reduce the spectral dimensions and isolate noise in the images. Extreme pixels were identified from high-order MNF bands using the Pixel Purity Index. n-Dimensional Visualization was utilized for unique pixel identification. Spectral similarities between pixel spectral signatures and known endmember spectra (USGS Spectral Library) were compared with Spectral Angle Mapper Classification. EO-1 Hyperion hyperspectral images and hyperspectral analysis are sensitive to hydrothermal alteration minerals, as their diagnostic spectral signatures span the visible and shortwave infrared regions observed in geothermal fields. The hyperspectral analysis results indicated that kaolinite, smectite, illite, montmorillonite, and sepiolite minerals were distributed over a wide area covering the hot spring outlets. Rectorite, lizardite, richterite, dumortierite, nontronite, erionite, and clinoptilolite were observed occasionally.
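
    The final step of this workflow, Spectral Angle Mapper classification of pixel spectra against library endmember spectra, can be sketched as follows; the angle threshold and the toy spectra are illustrative, not values from the study:

        import numpy as np

        def spectral_angle(pixels, endmember):
            """Spectral angle (radians) between each pixel spectrum and a reference.

            pixels:    (n_pixels, n_bands) array of reflectance spectra
            endmember: (n_bands,) reference spectrum, e.g. kaolinite from the USGS library"""
            num = pixels @ endmember
            den = np.linalg.norm(pixels, axis=1) * np.linalg.norm(endmember) + 1e-12
            return np.arccos(np.clip(num / den, -1.0, 1.0))

        def sam_classify(pixels, endmembers, max_angle=0.10):
            """Assign each pixel to the endmember with the smallest spectral angle,
            or -1 (unclassified) if no angle is below max_angle."""
            angles = np.stack([spectral_angle(pixels, e) for e in endmembers], axis=1)
            labels = np.argmin(angles, axis=1)
            labels[angles.min(axis=1) > max_angle] = -1
            return labels

        # Toy data: 100 pixels, 50 bands, 3 candidate alteration-mineral endmembers.
        rng = np.random.default_rng(1)
        pixels = rng.random((100, 50))
        endmembers = rng.random((3, 50))
        print(np.bincount(sam_classify(pixels, endmembers, max_angle=0.6) + 1))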

  14. All-CMOS night vision viewer with integrated microdisplay

    NASA Astrophysics Data System (ADS)

    Goosen, Marius E.; Venter, Petrus J.; du Plessis, Monuko; Faure, Nicolaas M.; Janse van Rensburg, Christo; Rademeyer, Pieter

    2014-02-01

    The unrivalled integration potential of CMOS has made it the dominant technology for digital integrated circuits. With the advent of visible light emission from silicon through hot carrier electroluminescence, several applications arose, all of which rely upon the advantages of mature CMOS technologies for a competitive edge in a very active and attractive market. In this paper we present a low-cost night vision viewer which employs only standard CMOS technologies. A commercial CMOS imager is utilized for near infrared image capturing with a 128x96 pixel all-CMOS microdisplay implemented to convey the image to the user. The display is implemented in a standard 0.35 μm CMOS process, with no process alterations or post processing. The display features a 25 μm pixel pitch and a 3.2 mm x 2.4 mm active area, which through magnification presents the virtual image to the user equivalent of a 19-inch display viewed from a distance of 3 meters. This work represents the first application of a CMOS microdisplay in a low-cost consumer product.

  15. Equilibrium radionuclide gated angiography in patients with tricuspid regurgitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handler, B.; Pavel, D.G.; Pietras, R.

    Equilibrium gated radionuclide angiography was performed in 2 control groups (15 patients with no organic heart disease and 24 patients with organic heart disease but without right- or left-sided valvular regurgitation) and in 9 patients with clinical tricuspid regurgitation. The regurgitant index, or ratio of left to right ventricular stroke counts, was significantly lower in patients with tricuspid regurgitation than in either control group. Time-activity variation over the liver was used to compute a hepatic expansion fraction which was significantly higher in patients with tricuspid regurgitation than in either control group. Fourier analysis of time-activity variation in each pixel was used to generate amplitude and phase images. Only pixels with values for amplitude at least 7% of the maximum in the image were retained in the final display. All patients with tricuspid regurgitation had greater than 100 pixels over the liver automatically retained by the computer. These pixels were of phase comparable to that of the right atrium and approximately 180 degrees out of phase with the right ventricle. In contrast, no patient with no organic heart disease and only 1 of 24 patients with organic heart disease had any pixels retained by the computer. In conclusion, patients with tricuspid regurgitation were characterized on equilibrium gated angiography by an abnormally low regurgitant index (7 of 9 patients) reflecting increased right ventricular stroke volume, increased hepatic expansion fraction (7 of 9 patients), and increased amplitude of count variation over the liver in phase with the right atrium (9 of 9 patients).
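
    A minimal sketch of the per-pixel Fourier analysis and the 7% amplitude retention rule, assuming the input is a stack of gated frames (the variable names and toy data are illustrative):

        import numpy as np

        def first_harmonic_images(frames, amplitude_cut=0.07):
            """Amplitude and phase images from a gated time-activity series.

            frames: (n_frames, h, w) array of counts per pixel per gated frame.
            Pixels whose first-harmonic amplitude is below amplitude_cut times the
            maximum amplitude in the image are masked out (the 7% rule above)."""
            n = frames.shape[0]
            t = np.arange(n)
            # First Fourier harmonic over the cardiac cycle for every pixel at once.
            c = np.tensordot(np.exp(-2j * np.pi * t / n), frames, axes=(0, 0))
            amplitude = 2.0 * np.abs(c) / n
            phase = np.degrees(np.angle(c))
            mask = amplitude >= amplitude_cut * amplitude.max()
            return amplitude, np.where(mask, phase, np.nan), mask

        # Toy series: 16 gated frames of a 32 x 32 study.
        rng = np.random.default_rng(2)
        frames = rng.poisson(100, size=(16, 32, 32)).astype(float)
        amplitude, phase, mask = first_harmonic_images(frames)
        print("pixels retained:", int(mask.sum()))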

  16. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    PubMed

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke-. Readout noise under the highest pixel gain condition is 1 e- with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.
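
    A simplified sketch of how two readouts of the same exposure, one at high gain and one at low gain, can be merged into a single linear signal; the gain ratio, switch level and 12-bit scale below are illustrative assumptions, not the sensor's actual parameters:

        import numpy as np

        def merge_sehdr(high_gain, low_gain, gain_ratio=16.0, switch_level=3000):
            """Merge high-gain and low-gain readouts of one exposure into one linear signal.

            The low-noise high-gain sample is used where it is well below saturation;
            the low-gain sample, rescaled by the gain ratio, covers the bright end."""
            high = high_gain.astype(np.float64)
            low = low_gain.astype(np.float64) * gain_ratio
            return np.where(high_gain < switch_level, high, low)

        rng = np.random.default_rng(3)
        scene = rng.uniform(0, 60000, size=(4, 4))        # linear photo-signal (arbitrary units)
        high_gain = np.clip(scene, 0, 4095)               # 12-bit channel that saturates early
        low_gain = np.clip(scene / 16.0, 0, 4095)         # covers the bright end
        print(merge_sehdr(high_gain, low_gain).round(0))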

  17. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process †

    PubMed Central

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-01

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach. PMID:29329210

  18. Dead pixel replacement in LWIR microgrid polarimeters.

    PubMed

    Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P

    2007-06-11

    LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.
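
    A minimal sketch of the first scheme (modified nearest-neighbor replacement), assuming a 2 x 2 microgrid layout so that the nearest pixels sharing a dead pixel's polarization orientation sit two pixels away; it does not implement the redundancy-based scheme:

        import numpy as np

        def replace_dead_same_polarization(img, dead_mask):
            """Replace each dead pixel by the mean of its valid neighbors two pixels
            away, which carry the same micropolarizer orientation in a 2 x 2 microgrid
            (immediate neighbors have different orientations and cannot be used)."""
            out = img.astype(np.float64).copy()
            h, w = img.shape
            offsets = [(-2, 0), (2, 0), (0, -2), (0, 2)]
            for y, x in zip(*np.nonzero(dead_mask)):
                vals = [img[y + dy, x + dx]
                        for dy, dx in offsets
                        if 0 <= y + dy < h and 0 <= x + dx < w
                        and not dead_mask[y + dy, x + dx]]
                if vals:
                    out[y, x] = np.mean(vals)
            return out

        rng = np.random.default_rng(4)
        frame = rng.normal(1000, 50, size=(8, 8))
        dead = np.zeros((8, 8), dtype=bool)
        dead[3, 4] = True
        print(replace_dead_same_polarization(frame, dead)[3, 4])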

  19. A 20 Mfps high frame-depth CMOS burst-mode imager with low power in-pixel NMOS-only passive amplifier

    NASA Astrophysics Data System (ADS)

    Wu, L.; San Segundo Bello, D.; Coppejans, P.; Craninckx, J.; Wambacq, P.; Borremans, J.

    2017-02-01

    This paper presents a 20 Mfps 32 × 84 pixel CMOS burst-mode imager featuring high frame depth with a passive in-pixel amplifier. Compared to the CCD alternatives, CMOS burst-mode imagers are attractive for their low power consumption and integration of circuitry such as ADCs. Due to storage capacitor size and its noise limitations, CMOS burst-mode imagers usually suffer from a lower frame depth than CCD implementations. In order to capture fast transitions over a longer time span, an in-pixel CDS technique has been adopted to reduce the required memory cells for each frame by half. Moreover, integrated with in-pixel CDS, an in-pixel NMOS-only passive amplifier alleviates the kTC noise requirements of the memory bank, allowing the usage of smaller capacitors. Specifically, a dense 108-cell MOS memory bank (10 fF/cell) has been implemented inside a 30 μm pitch pixel, with an area of 25 × 30 μm² occupied by the memory bank. There is an improvement of about 4× in terms of frame depth per pixel area by applying in-pixel CDS and amplification. With the amplifier's gain of 3.3, an FD input-referred RMS noise of 1 mV is achieved at 20 Mfps operation. While the amplification is done without burning DC current, including the pixel source follower biasing, the full pixel consumes 10 μA at 3.3 V supply voltage at full speed. The chip has been fabricated in imec's 130 nm CMOS CIS technology.

  20. High dynamic range pixel architecture for advanced diagnostic medical x-ray imaging applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izadi, Mohammad Hadi; Karim, Karim S.

    2006-05-15

    The most widely used architecture in large-area amorphous silicon (a-Si) flat panel imagers is a passive pixel sensor (PPS), which consists of a detector and a readout switch. While the PPS has the advantage of being compact and amenable toward high-resolution imaging, small PPS output signals are swamped by external column charge amplifier and data line thermal noise, which reduce the minimum readable sensor input signal. In contrast to PPS circuits, on-pixel amplifiers in a-Si technology reduce readout noise to levels that can meet even the stringent requirements for low noise digital x-ray fluoroscopy (<1000 noise electrons). However, larger voltages at the pixel input cause the output of the amplified pixel to become nonlinear, thus reducing the dynamic range. We reported a hybrid amplified pixel architecture based on a combination of PPS and amplified pixel designs that, in addition to low noise performance, also resulted in large-signal linearity and consequently higher dynamic range [K. S. Karim et al., Proc. SPIE 5368, 657 (2004)]. The additional benefit in large-signal linearity, however, came at the cost of an additional pixel transistor. We present an amplified pixel design that achieves the goals of low noise performance and large-signal linearity without the need for an additional pixel transistor. Theoretical calculations and simulation results for noise indicate the applicability of the amplified a-Si pixel architecture for high dynamic range, medical x-ray imaging applications that require switching between low exposure, real-time fluoroscopy and high-exposure radiography.

  1. Characterization of a hybrid energy-resolving photon-counting detector

    NASA Astrophysics Data System (ADS)

    Zang, A.; Pelzer, G.; Anton, G.; Ballabriga Sune, R.; Bisello, F.; Campbell, M.; Fauler, A.; Fiederle, M.; Llopart Cudie, X.; Ritter, I.; Tennert, F.; Wölfel, S.; Wong, W. S.; Michel, T.

    2014-03-01

    Photon-counting detectors in medical x-ray imaging provide a higher dose efficiency than integrating detectors. Even further possibilities for imaging applications arise, if the energy of each photon counted is measured, as for example K-edge-imaging or optimizing image quality by applying energy weighting factors. In this contribution, we show results of the characterization of the Dosepix detector. This hybrid photon- counting pixel detector allows energy resolved measurements with a novel concept of energy binning included in the pixel electronics. Based on ideas of the Medipix detector family, it provides three different modes of operation: An integration mode, a photon-counting mode, and an energy-binning mode. In energy-binning mode, it is possible to set 16 energy thresholds in each pixel individually to derive a binned energy spectrum in every pixel in one acquisition. The hybrid setup allows using different sensor materials. For the measurements 300 μm Si and 1 mm CdTe were used. The detector matrix consists of 16 x 16 square pixels for CdTe (16 x 12 for Si) with a pixel pitch of 220 μm. The Dosepix was originally intended for applications in the field of radiation measurement. Therefore it is not optimized towards medical imaging. The detector concept itself still promises potential as an imaging detector. We present spectra measured in one single pixel as well as in the whole pixel matrix in energy-binning mode with a conventional x-ray tube. In addition, results concerning the count rate linearity for the different sensor materials are shown as well as measurements regarding energy resolution.

  2. Camouflaging in Digital Image for Secure Communication

    NASA Astrophysics Data System (ADS)

    Jindal, B.; Singh, A. P.

    2013-06-01

    The present paper reports on a new type of camouflaging in digital images for hiding crypto-data using moderate bit alteration in the pixels. In the proposed method, cryptography is combined with steganography to provide two-layer security for the hidden data. The novelty of the algorithm proposed in the present work lies in the fact that the information about the hidden bit is reflected by a parity condition in one part of the image pixel. The remaining part of the image pixel is used to perform local pixel adjustment to improve the visual perception of the cover image. In order to examine the effectiveness of the proposed method, image quality measuring parameters are computed. In addition, security analysis is carried out by comparing the histograms of the cover and stego images. This scheme provides higher security as well as robustness to both intentional and unintentional attacks.
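
    The abstract does not fully specify the embedding scheme, but the parity idea can be illustrated with a simplified sketch in which a middle group of bits carries the parity of the hidden bit and the lowest bits are re-chosen to minimize the visible change (a stand-in for the local pixel adjustment step); all bit positions below are assumptions for illustration:

        def embed_bit(pixel, bit):
            """Embed one secret bit into an 8-bit pixel value (toy scheme).

            The bit is encoded as the parity of bits 2-4; if the parity must change,
            bit 2 is toggled and the two lowest bits are then re-chosen so the new
            value stays as close as possible to the original pixel value."""
            p = int(pixel)
            parity = bin((p >> 2) & 0b111).count("1") & 1
            if parity == bit:
                return p
            flipped = p ^ 0b100                      # toggle bit 2 to fix the parity
            base = flipped & ~0b11                   # keep bits 2-7, re-choose bits 0-1
            candidates = [base | low for low in range(4)]
            return min(candidates, key=lambda v: abs(v - p))

        def extract_bit(pixel):
            return bin((int(pixel) >> 2) & 0b111).count("1") & 1

        print(extract_bit(embed_bit(200, 1)), extract_bit(embed_bit(200, 0)))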

  3. Wavelet imaging cleaning method for atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.

    2002-07-01

    We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the utilization of wavelets to identify noise pixels in images of gamma-ray and hadronic induced air showers. This method selects more signal pixels with Cherenkov photons than traditional image processing techniques. In addition, the method is equally efficient at rejecting pixels with noise alone. The inclusion of more signal pixels in an image of an air shower allows for a more accurate reconstruction, especially at lower gamma-ray energies that produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are utilized to show the efficacy of the method for extracting a gamma-ray signal from the background of hadronic generated images.

  4. A hyperspectral image projector for hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.

    2007-04-01

    We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP*) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily-programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally-integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would alleviate expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e. de-convolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.

  5. LSA SAF Meteosat FRP Products: Part 2 - Evaluation and demonstration of use in the Copernicus Atmosphere Monitoring Service (CAMS)

    NASA Astrophysics Data System (ADS)

    Roberts, G.; Wooster, M. J.; Xu, W.; Freeborn, P. H.; Morcrette, J.-J.; Jones, L.; Benedetti, A.; Kaiser, J.

    2015-06-01

    Characterising the dynamics of landscape scale wildfires at very high temporal resolutions is best achieved using observations from Earth Observation (EO) sensors mounted onboard geostationary satellites. As a result, a number of operational active fire products have been developed from the data of such sensors. An example of which are the Fire Radiative Power (FRP) products, the FRP-PIXEL and FRP-GRID products, generated by the Land Surface Analysis Satellite Applications Facility (LSA SAF) from imagery collected by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on-board the Meteosat Second Generation (MSG) series of geostationary EO satellites. The processing chain developed to deliver these FRP products detects SEVIRI pixels containing actively burning fires and characterises their FRP output across four geographic regions covering Europe, part of South America and northern and southern Africa. The FRP-PIXEL product contains the highest spatial and temporal resolution FRP dataset, whilst the FRP-GRID product contains a spatio-temporal summary that includes bias adjustments for cloud cover and the non-detection of low FRP fire pixels. Here we evaluate these two products against active fire data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS), and compare the results to those for three alternative active fire products derived from SEVIRI imagery. The FRP-PIXEL product is shown to detect a substantially greater number of active fire pixels than do alternative SEVIRI-based products, and comparison to MODIS on a per-fire basis indicates a strong agreement and low bias in terms of FRP values. However, low FRP fire pixels remain undetected by SEVIRI, with errors of active fire pixel detection commission and omission compared to MODIS ranging between 9-13 and 65-77% respectively in Africa. Higher errors of omission result in greater underestimation of regional FRP totals relative to those derived from simultaneously collected MODIS data, ranging from 35% over the Northern Africa region to 89% over the European region. High errors of active fire omission and FRP underestimation are found over Europe and South America, and result from SEVIRI's larger pixel area over these regions. An advantage of using FRP for characterising wildfire emissions is the ability to do so very frequently and in near real time (NRT). To illustrate the potential of this approach, wildfire fuel consumption rates derived from the SEVIRI FRP-PIXEL product are used to characterise smoke emissions of the 2007 Peloponnese wildfires within the European Centre for Medium-Range Weather Forecasting (ECMWF) Integrated Forecasting System (IFS), as a demonstration of what can be achieved when using geostationary active fire data within the Copernicus Atmosphere Monitoring System (CAMS). Qualitative comparison of the modelled smoke plumes with MODIS optical imagery illustrates that the model captures the temporal and spatial dynamics of the plume very well, and that high temporal resolution emissions estimates such as those available from geostationary orbit are important for capturing the sub-daily variability in smoke plume parameters such as aerosol optical depth (AOD), which are increasingly less well resolved using daily or coarser temporal resolution emissions datasets. Quantitative comparison of modelled AOD with coincident MODIS and AERONET AOD indicates that the former is overestimated by ∼ 20-30%, but captures the observed AOD dynamics with a high degree of fidelity. 
The case study highlights the potential of using geostationary FRP data to drive fire emissions estimates for use within atmospheric transport models such as those currently implemented as part of the Monitoring Atmospheric Composition and Climate (MACC) programme within the CAMS.

  6. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An

    2017-02-01

    We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns, then by applying an inverse cosine transform to the spectra obtained from the single-pixel detector a full-colour image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
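
    A minimal monochrome sketch of this measurement-and-retrieval chain: each projected pattern is the outer product of two 1-D DCT basis vectors, so each single-pixel reading is one 2-D DCT coefficient, and keeping only low-frequency patterns gives the sub-Nyquist behavior (the scene, sizes and the ideal non-negative projection are illustrative simplifications):

        import numpy as np

        def dct_matrix(n):
            """Orthonormal DCT-II basis matrix of size n x n."""
            k = np.arange(n)[:, None]
            x = np.arange(n)[None, :]
            C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
            C[0, :] = np.sqrt(1.0 / n)
            return C

        n = 32
        C = dct_matrix(n)
        scene = 100.0 * np.kron(np.eye(4), np.ones((8, 8)))   # stand-in 32 x 32 scene

        keep = 16                        # sub-Nyquist: keep only low-frequency patterns
        coeffs = np.zeros((n, n))
        for u in range(keep):
            for v in range(keep):
                pattern = np.outer(C[u], C[v])           # structured-illumination pattern
                                                         # (ignoring the +/- split a real,
                                                         # non-negative projector would need)
                coeffs[u, v] = np.sum(pattern * scene)   # single-pixel (bucket) reading

        recovered = C.T @ coeffs @ C                     # image retrieval: inverse 2-D DCT
        print("rms error: %.2f" % np.sqrt(np.mean((recovered - scene) ** 2)))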

  7. A robust sub-pixel edge detection method of infrared image based on tremor-based retinal receptive field model

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang

    2008-03-01

    Because infrared images contain complex thermal objects, the prevalent edge detection operators are often suited only to particular scenes and sometimes extract edges that are too wide. From a biological point of view, image edge detection operators work reliably when a convolution-based receptive field architecture is assumed. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To cope with the blurred edges of an infrared image, orthogonal polynomial interpolation and sub-pixel edge detection in the neighborhood of each rough edge pixel are then applied to locate the rough edges at the sub-pixel level. Numerical simulations show that this method can locate target edges accurately and robustly.
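
    A rough sketch of the two stages, using an ON-center Difference-of-Gaussians response followed by a simple parabolic sub-pixel refinement (the paper uses orthogonal polynomial interpolation instead, and all parameters and the toy edge below are illustrative):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_response(image, sigma_center=1.0, sigma_surround=1.6):
            """ON-center Difference-of-Gaussians response (center minus surround)."""
            return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

        def subpixel_offset(values, i):
            """Refine a local extremum at integer index i by fitting a parabola
            through (i-1, i, i+1); returns a fractional offset in [-0.5, 0.5]."""
            a, b, c = values[i - 1], values[i], values[i + 1]
            denom = a - 2 * b + c
            return 0.0 if denom == 0 else 0.5 * (a - c) / denom

        # Toy blurred step edge near column 20.3 in every row.
        x = np.arange(40, dtype=float)
        row = 1.0 / (1.0 + np.exp(-(x - 20.3) / 1.5))
        img = np.tile(row, (16, 1))

        resp = dog_response(img)
        grad = np.abs(np.diff(resp[8]))             # |gradient| of the response, one row
        i = int(np.argmax(grad))                    # rough (pixel-level) edge position
        print("edge near column %.2f" % (i + 0.5 + subpixel_offset(grad, i)))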

  8. Anomaly clustering in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Doster, Timothy J.; Ross, David S.; Messinger, David W.; Basener, William F.

    2009-05-01

    The topological anomaly detection algorithm (TAD) differs from other anomaly detection algorithms in that it uses a topological/graph-theoretic model for the image background instead of modeling the image with a Gaussian normal distribution. In the construction of the model, TAD produces a hard threshold separating anomalous pixels from background in the image. We build on this feature of TAD by extending the algorithm so that it gives a measure of the number of anomalous objects, rather than the number of anomalous pixels, in a hyperspectral image. This is done by identifying, and integrating, clusters of anomalous pixels via a graph theoretical method combining spatial and spectral information. The method is applied to a cluttered HyMap image and combines small groups of pixels containing like materials, such as those corresponding to rooftops and cars, into individual clusters. This improves visualization and interpretation of objects.

  9. A new algorithm to reduce noise in microscopy images implemented with a simple program in python.

    PubMed

    Papini, Alessio

    2012-03-01

    All microscopical images contain noise, which increases as the instrument (e.g., a transmission electron microscope or light microscope) approaches its resolution limit. Many methods are available to reduce noise; one of the most commonly used is image averaging. We propose here to use the mode of the pixel values. Simple Python programs process a given number of images recorded consecutively from the same subject and calculate the mode of the pixel values at each position (a, b). The result is a new image containing at (a, b) the mode of the values, so the final pixel value corresponds to one read in at least two of the input pixels at position (a, b). Applying the program to sets of images corrupted with salt-and-pepper noise and GIMP hurl noise with 10-90% standard deviation showed that the mode performs better than averaging of three to eight images. The data suggest that the mode would be more efficient (in the sense of requiring fewer recorded images to reduce noise below a given limit) for a lower number of noisy pixels and a high standard deviation (as in impulse and salt-and-pepper noise), while averaging would be more efficient when the number of varying pixels is high and the standard deviation is low, as in many Gaussian-noise-affected images. The two methods may be used serially. Copyright © 2011 Wiley Periodicals, Inc.
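
    A minimal sketch in the spirit of the programs described (not the authors' code): the per-pixel mode of a short stack of consecutively recorded 8-bit frames, compared with per-pixel averaging on synthetic salt-and-pepper noise:

        import numpy as np

        def mode_stack(images):
            """Per-pixel mode of a stack of 8-bit images of the same subject.

            images: (n, h, w) uint8 array of consecutively recorded frames.
            Returns the most frequent value at each pixel position."""
            counts = np.zeros((256,) + images.shape[1:], dtype=np.uint16)
            for value in range(256):
                counts[value] = (images == value).sum(axis=0)
            return counts.argmax(axis=0).astype(np.uint8)

        rng = np.random.default_rng(5)
        clean = np.full((64, 64), 120, dtype=np.uint8)
        stack = np.repeat(clean[None], 5, axis=0)
        salt = rng.random(stack.shape) < 0.2                 # 20% impulse noise
        stack[salt] = rng.choice([0, 255], size=int(salt.sum()))

        mode_img = mode_stack(stack)
        mean_img = np.rint(stack.mean(axis=0)).astype(np.uint8)
        print("mode errors:", int((mode_img != clean).sum()),
              "mean errors:", int((mean_img != clean).sum()))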

  10. The Effects of Radiation on Imagery Sensors in Space

    NASA Technical Reports Server (NTRS)

    Mathis, Dylan

    2007-01-01

    Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex causal and mediating factor mix. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue considering the continued diffusion of high definition video cameras in the marketplace.

  11. Image reconstruction of dynamic infrared single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Tong, Qi; Jiang, Yilin; Wang, Haiyan; Guo, Limin

    2018-03-01

    The single-pixel imaging technique has recently received much attention. Most current single-pixel imaging addresses relatively static targets or a fixed imaging system, and is limited by the number of measurements received through the single detector. In this paper, we propose a novel dynamic compressive imaging method for the infrared (IR) rosette scanning system to solve the imaging problem when the imaging system itself is in motion. The relationship between adjacent target images and the scene is analyzed under different system movement scenarios, and these relationships are used to build dynamic compressive imaging models. Simulation results demonstrate that the proposed method can improve the reconstruction quality of the IR image and enhance the contrast between target and background in the presence of system movement.

  12. A piecewise-focused high DQE detector for MV imaging.

    PubMed

    Star-Lack, Josh; Shedlock, Daniel; Swahn, Dennis; Humber, Dave; Wang, Adam; Hirsh, Hayley; Zentai, George; Sawkey, Daren; Kruger, Isaac; Sun, Mingshan; Abel, Eric; Virshup, Gary; Shin, Mihye; Fahrig, Rebecca

    2015-09-01

    Electronic portal imagers (EPIDs) with high detective quantum efficiencies (DQEs) are sought to facilitate the use of the megavoltage (MV) radiotherapy treatment beam for image guidance. Potential advantages include high quality (treatment) beam's eye view imaging, and improved cone-beam computed tomography (CBCT) generating images with more accurate electron density maps with immunity to metal artifacts. One approach to increasing detector sensitivity is to couple a thick pixelated scintillator array to an active matrix flat panel imager (AMFPI) incorporating amorphous silicon thin film electronics. Cadmium tungstate (CWO) has many desirable scintillation properties including good light output, a high index of refraction, high optical transparency, and reasonable cost. However, due to the 0 1 0 cleave plane inherent in its crystalline structure, the difficulty of cutting and polishing CWO has, in part, limited its study relative to other scintillators such as cesium iodide and bismuth germanate (BGO). The goal of this work was to build and test a focused large-area pixelated "strip" CWO detector. A 361 × 52 mm scintillator assembly that contained a total of 28 072 pixels was constructed. The assembly comprised seven subarrays, each 15 mm thick. Six of the subarrays were fabricated from CWO with a pixel pitch of 0.784 mm, while one array was constructed from BGO for comparison. Focusing was achieved by coupling the arrays to the Varian AS1000 AMFPI through a piecewise linear arc-shaped fiber optic plate. Simulation and experimental studies of modulation transfer function (MTF) and DQE were undertaken using a 6 MV beam, and comparisons were made between the performance of the pixelated strip assembly and the most common EPID configuration comprising a 1 mm-thick copper build-up plate attached to a 133 mg/cm(2) gadolinium oxysulfide scintillator screen (Cu-GOS). Projection radiographs and CBCT images of phantoms were acquired. The work also introduces the use of a lightweight edge phantom to generate MTF measurements at MV energies and shows its functional equivalence to the more cumbersome slit-based method. Measured and simulated DQE(0)'s of the pixelated CWO detector were 22% and 26%, respectively. The average measured and simulated ratios of CWO DQE(f) to Cu-GOS DQE(f) across the frequency range of 0.0-0.62 mm(-1) were 23 and 29, respectively. 2D and 3D imaging studies confirmed the large dose efficiency improvement and that focus was maintained across the field of view. In the CWO CBCT images, the measured spatial resolution was 7 lp/cm. The contrast-to-noise ratio was dramatically improved reflecting a 22 × sensitivity increase relative to Cu-GOS. The CWO scintillator material showed significantly higher stability and light yield than the BGO material. An efficient piecewise-focused pixelated strip scintillator for MV imaging is described that offers more than a 20-fold dose efficiency improvement over Cu-GOS.

  13. A piecewise-focused high DQE detector for MV imaging

    PubMed Central

    Star-Lack, Josh; Shedlock, Daniel; Swahn, Dennis; Humber, Dave; Wang, Adam; Hirsh, Hayley; Zentai, George; Sawkey, Daren; Kruger, Isaac; Sun, Mingshan; Abel, Eric; Virshup, Gary; Shin, Mihye; Fahrig, Rebecca

    2015-01-01

    Purpose: Electronic portal imagers (EPIDs) with high detective quantum efficiencies (DQEs) are sought to facilitate the use of the megavoltage (MV) radiotherapy treatment beam for image guidance. Potential advantages include high quality (treatment) beam’s eye view imaging, and improved cone-beam computed tomography (CBCT) generating images with more accurate electron density maps with immunity to metal artifacts. One approach to increasing detector sensitivity is to couple a thick pixelated scintillator array to an active matrix flat panel imager (AMFPI) incorporating amorphous silicon thin film electronics. Cadmium tungstate (CWO) has many desirable scintillation properties including good light output, a high index of refraction, high optical transparency, and reasonable cost. However, due to the 0 1 0 cleave plane inherent in its crystalline structure, the difficulty of cutting and polishing CWO has, in part, limited its study relative to other scintillators such as cesium iodide and bismuth germanate (BGO). The goal of this work was to build and test a focused large-area pixelated “strip” CWO detector. Methods: A 361  ×  52 mm scintillator assembly that contained a total of 28 072 pixels was constructed. The assembly comprised seven subarrays, each 15 mm thick. Six of the subarrays were fabricated from CWO with a pixel pitch of 0.784 mm, while one array was constructed from BGO for comparison. Focusing was achieved by coupling the arrays to the Varian AS1000 AMFPI through a piecewise linear arc-shaped fiber optic plate. Simulation and experimental studies of modulation transfer function (MTF) and DQE were undertaken using a 6 MV beam, and comparisons were made between the performance of the pixelated strip assembly and the most common EPID configuration comprising a 1 mm-thick copper build-up plate attached to a 133 mg/cm2 gadolinium oxysulfide scintillator screen (Cu-GOS). Projection radiographs and CBCT images of phantoms were acquired. The work also introduces the use of a lightweight edge phantom to generate MTF measurements at MV energies and shows its functional equivalence to the more cumbersome slit-based method. Results: Measured and simulated DQE(0)’s of the pixelated CWO detector were 22% and 26%, respectively. The average measured and simulated ratios of CWO DQE(f) to Cu-GOS DQE(f) across the frequency range of 0.0–0.62 mm−1 were 23 and 29, respectively. 2D and 3D imaging studies confirmed the large dose efficiency improvement and that focus was maintained across the field of view. In the CWO CBCT images, the measured spatial resolution was 7 lp/cm. The contrast-to-noise ratio was dramatically improved reflecting a 22 × sensitivity increase relative to Cu-GOS. The CWO scintillator material showed significantly higher stability and light yield than the BGO material. Conclusions: An efficient piecewise-focused pixelated strip scintillator for MV imaging is described that offers more than a 20-fold dose efficiency improvement over Cu-GOS. PMID:26328960

  14. The progress of sub-pixel imaging methods

    NASA Astrophysics Data System (ADS)

    Wang, Hu; Wen, Desheng

    2014-02-01

    This paper reviews the principles and characteristics of sub-pixel imaging technology, its current state of development at home and abroad, and the latest research progress. Sub-pixel imaging offers the high resolution required of optical remote sensors, flexible operating modes, and a compact design with no moving parts, making the imaging system well suited to space remote sensing applications. Its application prospects are extensive, and it is a likely development direction for future space optical remote sensing technology.

  15. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

    Water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging because urban water bodies are mostly small and spectral confusion is widespread between water and the complex features of the urban environment. The water index (WI) is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing urban environments at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels using a water index; (2) deriving the most representative endmembers of water and land from neighboring water pixels and an adaptive, iteratively selected optimal neighboring land pixel, respectively; and (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatterplot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then used as a starting point for selecting land-water pixels from the histogram of the WI image, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. SMA is then applied to the mixed land-water pixels for water fraction estimation at the subpixel level. Under the assumption that the endmember signature of a target pixel should be more similar to adjacent pixels due to spatial dependence, the water and land endmembers are determined from neighboring pure land or pure water pixels within a given distance. To obtain the most representative endmembers for SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels: within a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel and subpixel levels were chosen. Results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and a comprehensive analysis of different accuracy evaluation indexes (RMSE and SE) shows that WISMA achieved the best water mapping performance.
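
    The unmixing step above reduces, per mixed pixel, to a two-endmember linear mixing model. As a minimal illustration only (not the WISMA implementation), the sketch below estimates a water fraction by least squares from hypothetical band values and endmember spectra:

    ```python
    import numpy as np

    def water_fraction(mixed_pixel, water_endmember, land_endmember):
        """Estimate the water fraction of a mixed pixel with a two-endmember
        linear mixing model: mixed = f * water + (1 - f) * land + residual.
        Solved by least squares; the fraction is clipped to [0, 1]."""
        d = water_endmember - land_endmember
        f = np.dot(mixed_pixel - land_endmember, d) / np.dot(d, d)
        return float(np.clip(f, 0.0, 1.0))

    # Hypothetical multispectral reflectances (e.g., six OLI bands).
    water = np.array([0.06, 0.05, 0.04, 0.02, 0.01, 0.01])
    land  = np.array([0.10, 0.12, 0.15, 0.30, 0.25, 0.20])
    mixed = 0.4 * water + 0.6 * land

    print(water_fraction(mixed, water, land))  # ~0.4
    ```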

  16. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential process that separates an image into regions with similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to its upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm merges pixels with similar neighbors based on an identified threshold; such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. The function uses the gray values of the image's pixels and their variance; pixels whose level exceeds the threshold are converted into intensity values between 0 and 1, and the remaining values are set to zero. The enhanced Fast Scanning algorithm is evaluated on images of public and private transportation in Iraq by comparing its output with that of the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster than the standard Fast Scanning algorithm.
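
    A minimal sketch of the merging rule described above follows; the particular adaptive threshold (a base tolerance scaled by the image standard deviation) and the union-find bookkeeping are illustrative assumptions, not the authors' exact function:

    ```python
    import numpy as np

    def adaptive_threshold(image, base=10.0, k=0.5):
        """Hypothetical adaptive threshold: a base tolerance scaled by the
        image's global standard deviation (a stand-in for the gray-value /
        variance function described in the abstract)."""
        return base + k * np.std(image)

    def fast_scan_segment(image, threshold):
        """One-pass fast-scanning segmentation: each pixel joins the cluster
        of its left or upper neighbour when the gray-level difference is
        within the threshold, otherwise it starts a new cluster."""
        h, w = image.shape
        labels = np.zeros((h, w), dtype=int)
        parent = [0]                      # union-find parents; index 0 unused

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        next_label = 0
        for y in range(h):
            for x in range(w):
                up = find(labels[y - 1, x]) if y > 0 else 0
                left = find(labels[y, x - 1]) if x > 0 else 0
                near_up = up and abs(float(image[y, x]) - float(image[y - 1, x])) <= threshold
                near_left = left and abs(float(image[y, x]) - float(image[y, x - 1])) <= threshold
                if near_up and near_left:
                    labels[y, x] = left
                    if up != left:        # merge the two clusters
                        parent[max(up, left)] = min(up, left)
                elif near_up:
                    labels[y, x] = up
                elif near_left:
                    labels[y, x] = left
                else:
                    next_label += 1
                    parent.append(next_label)
                    labels[y, x] = next_label
        # Flatten the union-find so every pixel carries its root label.
        return np.vectorize(lambda l: find(l))(labels)

    img = np.array([[10, 11, 60], [12, 11, 62], [13, 55, 58]], dtype=float)
    print(fast_scan_segment(img, adaptive_threshold(img)))
    ```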

  17. Model of Image Artifacts from Dust Particles

    NASA Technical Reports Server (NTRS)

    Willson, Reg

    2008-01-01

    A mathematical model of image artifacts produced by dust particles on lenses has been derived. Machine-vision systems often have to work with camera lenses that become dusty during use. Dust particles on the front surface of a lens produce image artifacts that can potentially affect the performance of a machine-vision algorithm. The present model satisfies a need for a means of synthesizing dust image artifacts for testing machine-vision algorithms for robustness (or the lack thereof) in the presence of dust on lenses. A dust particle can absorb light or scatter light out of some pixels, thereby giving rise to a dark dust artifact. It can also scatter light into other pixels, thereby giving rise to a bright dust artifact. For the sake of simplicity, this model deals only with dark dust artifacts. The model effectively represents dark dust artifacts as an attenuation image consisting of an array of diffuse darkened spots centered at image locations corresponding to the locations of dust particles. The dust artifacts are computationally incorporated into a given test image by simply multiplying the brightness value of each pixel by a transmission factor that incorporates the factor of attenuation, by dust particles, of the light incident on that pixel. With respect to computation of the attenuation and transmission factors, the model is based on a first-order geometric (ray)-optics treatment of the shadows cast by dust particles on the image detector. In this model, the light collected by a pixel is deemed to be confined to a pair of cones defined by the location of the pixel's image in object space, the entrance pupil of the lens, and the location of the pixel in the image plane (see Figure 1). For simplicity, it is assumed that the size of a dust particle is somewhat less than the diameter, at the front surface of the lens, of any collection cone containing all or part of that dust particle. Under this assumption, the shape of any individual dust particle artifact is the shape (typically, circular) of the aperture, and the contribution of the particle to the attenuation factor for a given pixel is the fraction of the cross-sectional area of the collection cone occupied by the particle. Assuming that dust particles do not overlap, the net transmission factor for a given pixel is calculated as one minus the sum of attenuation factors contributed by all dust particles affecting that pixel. In a test, the model was used to synthesize attenuation images for random distributions of dust particles on the front surface of a lens at various relative aperture (F-number) settings. As shown in Figure 2, the attenuation images resembled dust artifacts in real test images recorded while the lens was aimed at a white target.
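
    A rough rendering of the model is sketched below: each dust particle contributes a diffuse darkened spot to an attenuation image, and the test image is multiplied by the resulting per-pixel transmission factor (one minus the summed attenuations). The raised-cosine spot profile and parameter names are illustrative stand-ins, not the article's ray-optics computation:

    ```python
    import numpy as np

    def dust_transmission_map(shape, particles, spot_radius):
        """Build a per-pixel transmission map T = 1 - sum(attenuations), where
        each dust particle contributes a diffuse darkened spot. The raised-cosine
        spot profile is an illustrative stand-in for the ray-optics shadow shape."""
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        attenuation = np.zeros(shape, dtype=float)
        for (cy, cx, strength) in particles:          # strength in [0, 1]
            r = np.hypot(yy - cy, xx - cx)
            spot = np.where(r < spot_radius,
                            0.5 * (1 + np.cos(np.pi * r / spot_radius)), 0.0)
            attenuation += strength * spot
        return np.clip(1.0 - attenuation, 0.0, 1.0)

    # Incorporate the artifacts into a test image by per-pixel multiplication.
    test_image = np.full((120, 160), 200.0)            # uniform bright scene
    particles = [(40, 60, 0.3), (80, 120, 0.15)]       # (row, col, attenuation)
    dusty = test_image * dust_transmission_map(test_image.shape, particles, spot_radius=12)
    print(dusty.min(), dusty.max())
    ```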

  18. Large area CMOS active pixel sensor x-ray imager for digital breast tomosynthesis: Analysis, modeling, and characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Chumin; Kanicki, Jerzy, E-mail: kanicki@eecs.umich.edu; Konstantinidis, Anastasios C.

    Purpose: Large area x-ray imagers based on complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been proposed for various medical imaging applications including digital breast tomosynthesis (DBT). The low electronic noise (50–300 e⁻) of CMOS APS x-ray imagers provides a possible route to shrink the pixel pitch to smaller than 75 μm for microcalcification detection and possible reduction of the DBT mean glandular dose (MGD). Methods: In this study, the imaging performance of a large area (29 × 23 cm²) CMOS APS x-ray imager [Dexela 2923 MAM (PerkinElmer, London)] with a pixel pitch of 75 μm was characterized and modeled. The authors developed a cascaded system model for CMOS APS x-ray imagers using both broadband x-ray radiation and monochromatic synchrotron radiation. The experimental data including modulation transfer function, noise power spectrum, and detective quantum efficiency (DQE) were theoretically described using the proposed cascaded system model with satisfactory consistency to experimental results. Both the high full well and low full well (LFW) modes of the Dexela 2923 MAM CMOS APS x-ray imager were characterized and modeled. The cascaded system analysis results were further used to extract the contrast-to-noise ratio (CNR) for microcalcifications with sizes of 165–400 μm at various MGDs. The impact of electronic noise on CNR was also evaluated. Results: The LFW mode shows better DQE at low air kerma (Kₐ < 10 μGy) and should be used for DBT. At air kerma typical of current DBT applications (Kₐ ∼ 10 μGy, broadband radiation of 28 kVp), DQE of more than 0.7 and ∼0.3 was achieved using the LFW mode at a spatial frequency of 0.5 line pairs per millimeter (lp/mm) and the Nyquist frequency of ∼6.7 lp/mm, respectively. It is shown that microcalcifications of 165–400 μm in size can be resolved using an MGD range of 0.3–1 mGy, respectively. In comparison to a General Electric GEN2 prototype DBT system (at MGD of 2.5 mGy), an increased CNR (by ∼10) for microcalcifications was observed using the Dexela 2923 MAM CMOS APS x-ray imager at a lower MGD (2.0 mGy). Conclusions: The Dexela 2923 MAM CMOS APS x-ray imager is capable of achieving high imaging performance at spatial frequencies up to 6.7 lp/mm. Microcalcifications of 165 μm are distinguishable based on the reported data and their modeling results due to the small pixel pitch of 75 μm. At the same time, potential dose reduction is expected using the studied CMOS APS x-ray imager.
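
    The contrast-to-noise ratio reported above is commonly defined as the signal-background mean difference divided by the background noise; the short sketch below uses that generic definition on synthetic data and is not necessarily the exact metric used in the paper:

    ```python
    import numpy as np

    def cnr(image, signal_mask, background_mask):
        """Generic contrast-to-noise ratio:
        CNR = |mean(signal) - mean(background)| / std(background)."""
        sig = image[signal_mask]
        bkg = image[background_mask]
        return abs(sig.mean() - bkg.mean()) / bkg.std()

    rng = np.random.default_rng(0)
    img = rng.normal(100.0, 5.0, size=(64, 64))   # noisy background
    img[28:36, 28:36] += 12.0                     # small high-contrast detail
    signal = np.zeros_like(img, dtype=bool); signal[28:36, 28:36] = True
    background = np.zeros_like(img, dtype=bool); background[:16, :16] = True
    print(round(cnr(img, signal, background), 2))
    ```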

  19. A New Framework of Removing Salt and Pepper Impulse Noise for the Noisy Image Including Many Noise-Free White and Black Pixels

    NASA Astrophysics Data System (ADS)

    Li, Song; Wang, Caizhu; Li, Yeqiu; Wang, Ling; Sakata, Shiro; Sekiya, Hiroo; Kuroiwa, Shingo

    In this paper, we propose a new framework for removing salt and pepper impulse noise. The key point of the proposed framework is that the number of noise-free white and black pixels in a noisy image can be determined from the noise rates estimated by the Fuzzy Impulse Noise Detection and Reduction Method (FINDRM) and the Efficient Detail-Preserving Approach (EDPA). When the noisy image includes many noise-free white and black pixels, the pixels detected as noisy by the FINDRM are re-checked using the alpha-trimmed mean. Finally, the impulse noise filtering phase of the FINDRM is used to restore the image. Simulation results show that, for noisy images including many noise-free white and black pixels, the proposed framework decreases the False Hit Rate (FHR) more efficiently than the FINDRM, and can therefore be applied more widely.
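
    The re-checking step uses an alpha-trimmed mean of the pixel neighbourhood; the sketch below is a generic version of that idea with a hypothetical window size and tolerance, not the FINDRM/EDPA implementation:

    ```python
    import numpy as np

    def alpha_trimmed_mean(values, alpha=0.25):
        """Mean of the values after discarding the lowest and highest
        alpha fraction (removes extreme 0/255 impulse values)."""
        v = np.sort(np.asarray(values, dtype=float))
        trim = int(alpha * len(v))
        return v[trim:len(v) - trim].mean()

    def recheck_noisy_pixel(image, y, x, window=3, tolerance=20.0):
        """Re-check a pixel flagged as salt-and-pepper noise: if its value is
        close to the alpha-trimmed mean of its neighbourhood, treat it as a
        noise-free white/black pixel instead (hypothetical tolerance)."""
        half = window // 2
        patch = image[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
        return abs(float(image[y, x]) - alpha_trimmed_mean(patch.ravel())) <= tolerance

    img = np.array([[250, 252, 251],
                    [249, 255, 250],     # centre pixel: white but consistent
                    [251, 250, 253]], dtype=float)
    print(recheck_noisy_pixel(img, 1, 1))   # True -> keep as noise-free white
    ```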

  20. High Dynamic Range Imaging at the Quantum Limit with Single Photon Avalanche Diode-Based Image Sensors †

    PubMed Central

    Mattioli Della Rocca, Francescopaolo

    2018-01-01

    This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
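
    Multi-exposure HDR combination of photon counts can be illustrated, in a generic way, by scaling each exposure's counts by its exposure time and averaging only the unsaturated measurements. The sketch below follows that generic scheme (hypothetical exposure times and counter depth); it is not the chip's on-chip bit-plane summation:

    ```python
    import numpy as np

    def merge_exposures(counts, exposure_times, full_scale):
        """Merge short/mid/long photon-count exposures into one HDR estimate of
        the count rate. Each pixel uses an exposure-time-weighted average of the
        exposures that did not saturate (generic scheme, not the chip's)."""
        counts = np.asarray(counts, dtype=float)          # shape (n_exp, H, W)
        times = np.asarray(exposure_times, dtype=float)[:, None, None]
        valid = counts < full_scale                       # drop saturated samples
        weights = np.where(valid, times, 0.0)
        rate = np.where(valid, counts / times, 0.0)
        return (weights * rate).sum(axis=0) / np.maximum(weights.sum(axis=0), 1e-12)

    short, mid, long_ = 0.1, 1.0, 10.0                    # relative exposure times
    true_rate = np.array([[5.0, 900.0]])                  # counts per unit time
    full_scale = 1023                                     # hypothetical 10-bit counter
    counts = [np.minimum(true_rate * t, full_scale) for t in (short, mid, long_)]
    print(merge_exposures(counts, (short, mid, long_), full_scale))
    ```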

  1. Facial recognition using enhanced pixelized image for simulated visual prosthesis.

    PubMed

    Li, Ruonan; Zhhang, Xudong; Zhang, Hui; Hu, Guanshu

    2005-01-01

    A simulated face recognition experiment using enhanced pixelized images is designed and performed for the artificial visual prosthesis. The results of the simulation reveal new characteristics of visual performance in an enhanced pixelization condition, and then new suggestions on the future design of visual prosthesis are provided.

  2. Relation between star formation and AGN activity in typical elliptical galaxies: Analysis of the 2MASS K-band galaxy images

    NASA Astrophysics Data System (ADS)

    Pierce, Katherine

    2014-01-01

    We are carrying out a program of aperture photometry on typical elliptical galaxies. While there are many ways to calculate the flux and magnitude, we use the Aperture Photometry Tool (APT) GUI and the program IRAF (Image Reduction and Analysis Facility). From a sample of 236 galaxies in the 2MASS K-band survey, it was determined that 68 of the galaxies required some form of pixel blocking because unwanted background stars or galaxies could interfere with our readings. My job is to determine a way to block out these pixels without compromising the true flux from the galaxy.

  3. Pixel Stability in the Hubble Space Telescope WFC3/UVIS Detector

    NASA Astrophysics Data System (ADS)

    Bourque, Matthew; Baggett, Sylvia M.; Borncamp, David; Desjardins, Tyler D.; Grogin, Norman A.; Wide Field Camera 3 Team

    2018-06-01

    The Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) Ultraviolet-Visible (UVIS) detector has acquired roughly 12,000 dark images since the installation of WFC3 in 2009, as part of a daily monitoring program to measure the intrinsic dark current of the detector. These images have been reconfigured into 'pixel history' images in which detector columns are extracted from each dark and placed into a new time-ordered array, allowing for efficient analysis of a given pixel's behavior over time. We discuss how we measure each pixel's stability, as well as plans for a new Data Quality (DQ) flag to be introduced in a future release of the WFC3 calibration pipeline (CALWF3) for flagging pixels that are deemed unstable.

  4. Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series

    NASA Astrophysics Data System (ADS)

    Champion, Nicolas

    2016-06-01

    Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images because they may alter the quality of some products, such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages are used; they were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and the pixels of the input ortho-image are labelled as seeds if the difference in reflectance (in the blue channel) with the overlapping ortho-images is bigger than a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding shadow detection, our method is based on the idea that a shadow pixel is darker when compared to the other images of the time series. The detection is composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area; its pixels take the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled as shadow if the difference in reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Finally, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during the cloud detection are not used when computing the median value in the first step; additionally, the NIR channel is used for the shadow detection because it better discriminates shadow pixels. The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
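
    The shadow step described above (a per-pixel median composite, then a NIR difference threshold) can be sketched as follows; the threshold value and array layout are illustrative assumptions, not IGN-France's implementation:

    ```python
    import numpy as np

    def detect_shadows(nir_series, cloud_masks, target_index, threshold=0.05):
        """Flag shadow pixels in one ortho-image of a time series.
        nir_series: (n_images, H, W) NIR reflectances; cloud_masks: same shape,
        True where clouds were detected. A synthetic image is the per-pixel
        median over non-cloud observations; a pixel is labelled shadow when its
        reflectance is lower than the median by more than the threshold."""
        series = np.where(cloud_masks, np.nan, nir_series)   # exclude cloud pixels
        synthetic = np.nanmedian(series, axis=0)
        diff = synthetic - nir_series[target_index]
        return diff > threshold

    rng = np.random.default_rng(1)
    nir = rng.uniform(0.34, 0.36, size=(5, 4, 4))    # 5-date NIR time series
    clouds = np.zeros_like(nir, dtype=bool)
    nir[2, 1, 1] = 0.1                               # darker pixel on date 2 -> shadow
    print(detect_shadows(nir, clouds, target_index=2))
    ```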

  5. Combinational pixel-by-pixel and object-level classifying, segmenting, and agglomerating in performing quantitative image analysis that distinguishes between healthy non-cancerous and cancerous cell nuclei and delineates nuclear, cytoplasm, and stromal material objects from stained biological tissue materials

    DOEpatents

    Boucheron, Laura E

    2013-07-16

    Quantitative object and spatial arrangement-level analysis of tissue is detailed using expert (pathologist) input to guide the classification process. A two-step method is disclosed for imaging tissue, by classifying one or more biological materials, e.g. nuclei, cytoplasm, and stroma, in the tissue into one or more identified classes on a pixel-by-pixel basis, and segmenting the identified classes to agglomerate one or more sets of identified pixels into segmented regions. Typically, the one or more biological materials comprise nuclear material, cytoplasm material, and stromal material. The method further allows a user to mark up the image subsequent to the classification to re-classify said materials. The markup is performed via a graphic user interface to edit designated regions in the image.

  6. Proof of principle study of the use of a CMOS active pixel sensor for proton radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seco, Joao; Depauw, Nicolas

    2011-02-15

    Purpose: Proof of principle study of the use of a CMOS active pixel sensor (APS) in producing proton radiographic images using the proton beam at the Massachusetts General Hospital (MGH). Methods: A CMOS APS, previously tested for use in x-ray radiation therapy applications, was used for proton beam radiographic imaging at the MGH. Two different setups were used as a proof of principle that CMOS can be used as a proton imaging device: (i) a pen with two metal screws to assess the spatial resolution of the CMOS and (ii) a phantom with lung tissue, bone tissue, and water to assess the tissue contrast of the CMOS. The sensor was then traversed by a double scattered monoenergetic proton beam at 117 MeV, and the energy deposition inside the detector was recorded to assess its energy response. Conventional x-ray images with a similar setup at voltages of 70 kVp and proton images using commercial Gafchromic EBT 2 and Kodak X-Omat V films were also taken for comparison purposes. Results: Images were successfully acquired and compared to x-ray kVp and proton EBT2/X-Omat film images. The spatial resolution of the CMOS detector image is subjectively comparable to the EBT2 and Kodak X-Omat V film images obtained at the same object-detector distance. X-rays have apparently higher spatial resolution than the CMOS. However, further studies with different commercial films using proton beam irradiation demonstrate that the distance of the detector to the object is important to the amount of proton scatter contributing to the proton image. Proton images obtained with films at different distances from the source indicate that proton scatter significantly affects the CMOS image quality. Conclusion: Proton radiographic images were successfully acquired at MGH using a CMOS active pixel sensor detector. The CMOS demonstrated spatial resolution subjectively comparable to films at the same object-detector distance. Further work will be done in order to establish the spatial and energy resolution of the CMOS detector for protons. The development and use of CMOS in proton radiography could allow in vivo proton range checks, patient setup QA, and real-time tumor tracking.

  7. Lagrange constraint neural networks for massive pixel parallel image demixing

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.

    2002-03-01

    We have shown that optical remote sensing imaging for detailed sub-pixel decomposition is a unique application of blind source separation (BSS): the mixing of faraway weak signals is truly linear, occurs at the speed of light without delay, and follows the line of sight without multiple paths. In earlier papers, we presented a direct application of a statistical-mechanics de-mixing method called the Lagrange Constraint Neural Network (LCNN). While the BSAO algorithm (using an a posteriori MaxEnt ANN and neighborhood pixel averaging) is not acceptable for remote sensing, a mirror-symmetric LCNN approach is suitable, assuming a priori MaxEnt for the unknown sources averaged over the source statistics (not neighborhood pixel data) in a pixel-by-pixel independent fashion. LCNN reduces the computational complexity, saves a great number of memory devices, and cuts the cost of implementation. The Landsat system is designed to measure radiation in order to deduce surface conditions and materials. For any given material, the amount of emitted and reflected radiation varies with wavelength. In practice, a single pixel of a Landsat image has seven channels receiving 0.1 to 12 microns of radiation from the ground within a 20x20 meter footprint containing a variety of radiating materials. The a priori LCNN algorithm provides the spatial-temporal variation of the mixture, which is hardly de-mixable by other a posteriori BSS or ICA methods. We have already compared the two methods on Landsat remote sensing data at WCCI 2002 in Hawaii. Unfortunately, an absolute benchmark is not possible because ground truth is lacking, so we arbitrarily mix two incoherent sampled images as the ground truth. However, since a constant total probability of co-located sources within the pixel footprint is necessary as the remote sensing constraint (on a clear day the total reflected energy is constant across neighboring receiving pixel sensors), we also have to normalize the two images pixel by pixel. The result is then indeed as expected.

  8. An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability.

    PubMed

    Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U

    2015-03-06

    An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle.

  9. An Ultra-Low Power CMOS Image Sensor with On-Chip Energy Harvesting and Power Management Capability

    PubMed Central

    Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U.

    2015-01-01

    An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle. PMID:25756863

  10. Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.

    PubMed

    Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian

    2009-04-01

    Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
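
    Of the algorithms compared above, barycentric weighting (an intensity-weighted centroid) is the simplest to illustrate. The sketch below applies it to a synthetic fiducial; it is an illustration of the general technique, not the authors' code:

    ```python
    import numpy as np

    def barycentric_subpixel(image):
        """Intensity-weighted centroid (barycentric weighting) of a fiducial
        image patch, returning (row, col) with sub-pixel precision."""
        img = np.asarray(image, dtype=float)
        img = img - img.min()                 # simple background suppression
        yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        total = img.sum()
        return (yy * img).sum() / total, (xx * img).sum() / total

    # Synthetic fiducial (roughly 3 pixels across, like a 3 mm marker sampled
    # on a 1.1 mm grid), centred off-grid.
    yy, xx = np.mgrid[0:9, 0:9]
    true_centre = (4.3, 3.7)
    fiducial = np.exp(-(((yy - true_centre[0]) ** 2 + (xx - true_centre[1]) ** 2) / 2.0))
    print(barycentric_subpixel(fiducial))     # close to (4.3, 3.7)
    ```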

  11. Contrast-guided image interpolation.

    PubMed

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

  12. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern

    PubMed Central

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-01-01

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than other color pixels in the filter array, especially in low light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method. PMID:28657602

  13. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern.

    PubMed

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-06-28

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than other color pixels in the filter array, especially in low light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method.

  14. Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest

    NASA Astrophysics Data System (ADS)

    Feng, W.; Sui, H.; Chen, X.

    2018-04-01

    Studies based on object-based image analysis (OBIA), which represents a paradigm shift in change detection (CD), have achieved remarkable progress in the last decade, with the aim of developing more intelligent interpretation and analysis methods. The prediction accuracy and performance stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and integrated forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search of interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and the regions are subjected to fuzzy c-means (FCM) clustering to obtain the pixel-level pre-classification result, which can be used as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, different super-pixel change possibilities are calculated. Furthermore, the changed and unchanged super-pixels that serve as the training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF based on these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in the accuracy of CD, and also confirm the feasibility and effectiveness of the proposed approach.

  15. Fundamental performance differences of CMOS and CCD imagers: part V

    NASA Astrophysics Data System (ADS)

    Janesick, James R.; Elliott, Tom; Andrews, James; Tower, John; Pinter, Jeff

    2013-02-01

    Previous papers delivered over the last decade have documented developmental progress made on large pixel scientific CMOS imagers that match or surpass CCD performance. New data and discussions presented in this paper include: 1) a new buried channel CCD fabricated on a CMOS process line, 2) new data products generated by high performance custom scientific CMOS 4T/5T/6T PPD pixel imagers, 3) ultimate CTE and speed limits for large pixel CMOS imagers, 4) fabrication and test results of a flight 4k x 4k CMOS imager for NRL's SoloHi Solar Orbiter Mission, 5) a progress report on an ultra large stitched Mk x Nk CMOS imager, 6) data generated by on-chip sub-electron CDS signal chain circuitry used in our imagers, 7) CMOS and CMOS/CCD proton and electron radiation damage data for dose levels up to 10 Mrad, 8) discussions and data for a new class of PMOS pixel CMOS imagers and 9) future CMOS development work planned.

  16. A Compact Polarization Imager

    NASA Technical Reports Server (NTRS)

    Thompson, Karl E.; Rust, David M.; Chen, Hua

    1995-01-01

    A new type of image detector has been designed to analyze the polarization of light simultaneously at all picture elements (pixels) in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a custom-designed charge-coupled device with signal-analysis circuitry, all integrated on a silicon chip. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Other applications include environmental monitoring and robot vision. Innovations in the IDID include two interleaved 512 x 1024 pixel imaging arrays (one for each polarization plane), large dynamic range (well depth of 10^6 electrons per pixel), simultaneous readout and display of both images at 10^6 pixels per second, and on-chip analog signal processing to produce polarization maps in real time. When used with a lithium niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can reveal tiny differences between simultaneous images at two wavelengths.

  17. Precise color images a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High speed imaging systems are used across a wide range of science and engineering. Although high speed camera systems have reached high performance, most of their applications only acquire high speed motion pictures. However, in some fields of science and technology, it is useful to obtain additional information, such as the temperature of combustion flames, thermal plasmas and molten materials. Recent digital high speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels with 256 (8 bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid state memory. In order to obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of the images taken by the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was reduced to at most 0.2 pixels by this method.

  18. Sparsely-sampled hyperspectral stimulated Raman scattering microscopy: a theoretical investigation

    NASA Astrophysics Data System (ADS)

    Lin, Haonan; Liao, Chien-Sheng; Wang, Pu; Huang, Kai-Chih; Bouman, Charles A.; Kong, Nan; Cheng, Ji-Xin

    2017-02-01

    A hyperspectral image corresponds to a data cube with two spatial dimensions and one spectral dimension. Through linear unmixing, hyperspectral images can be decomposed into the spectral signatures of pure components as well as their concentration maps. Due to this distinct advantage for component identification, hyperspectral imaging has become a rapidly emerging platform for engineering better medicine and expediting scientific discovery. Among various hyperspectral imaging techniques, hyperspectral stimulated Raman scattering (HSRS) microscopy acquires data in a pixel-by-pixel scanning manner. Nevertheless, the current image acquisition speed of HSRS is insufficient to capture the dynamics of freely moving subjects. Instead of reducing the pixel dwell time to achieve a speed-up, which would inevitably decrease the signal-to-noise ratio (SNR), we propose to reduce the total number of sampled pixels. The locations of the sampled pixels are carefully engineered with a triangular-wave Lissajous trajectory, and a model-based image in-painting algorithm then recovers the complete data for linear unmixing. Simulation results show that, with careful selection of the trajectory, a fill rate as low as 10% is sufficient to generate accurate linear unmixing results. The proposed framework applies to any hyperspectral beam-scanning imaging platform that demands high acquisition speed.
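
    A sampling mask following a triangular-wave Lissajous trajectory can be sketched as below; the frequency pair and number of samples are arbitrary illustrative choices, not the trajectory parameters optimized in the paper:

    ```python
    import numpy as np

    def triangle_wave(t, freq):
        """Triangular wave in [0, 1] with the given frequency."""
        phase = (t * freq) % 1.0
        return 2.0 * np.abs(phase - 0.5)

    def lissajous_mask(height, width, fx, fy, n_samples):
        """Binary sampling mask whose True pixels lie on a triangular-wave
        Lissajous trajectory (hypothetical fx, fy frequency pair)."""
        t = np.linspace(0.0, 1.0, n_samples)
        cols = np.round(triangle_wave(t, fx) * (width - 1)).astype(int)
        rows = np.round(triangle_wave(t, fy) * (height - 1)).astype(int)
        mask = np.zeros((height, width), dtype=bool)
        mask[rows, cols] = True
        return mask

    mask = lissajous_mask(128, 128, fx=31, fy=37, n_samples=20000)
    print("fill rate: %.1f%%" % (100.0 * mask.mean()))
    ```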

  19. A 10MHz Fiber-Coupled Photodiode Imaging Array for Plasma Diagnostics

    NASA Astrophysics Data System (ADS)

    Brockington, Samuel; Case, Andrew; Witherspoon, F. Douglas

    2013-10-01

    HyperV Technologies has been developing an imaging diagnostic comprised of arrays of fast, low-cost, long-record-length, fiber-optically-coupled photodiode channels to investigate plasma dynamics and other fast, bright events. By coupling an imaging fiber bundle to a bank of amplified photodiode channels, imagers and streak imagers of 100 to 10,000 pixels can be constructed. By interfacing analog photodiode systems directly to commercial analog to digital convertors and modern memory chips, a prototype pixel with an extremely deep record length (128 k points at 40 Msamples/s) has been achieved for a 10 bit resolution system with signal bandwidths of at least 10 MHz. Progress on a prototype 100 Pixel streak camera employing this technique is discussed along with preliminary experimental results and plans for a 10,000 pixel imager. Work supported by USDOE Phase 1 SBIR Grant DE-SC0009492.

  20. A novel digital image sensor with row wise gain compensation for Hyper Spectral Imager (HySI) application

    NASA Astrophysics Data System (ADS)

    Lin, Shengmin; Lin, Chi-Pin; Wang, Weng-Lyang; Hsiao, Feng-Ke; Sikora, Robert

    2009-08-01

    A 256x512 element digital image sensor has been developed which has a large pixel size, slow scan and low power consumption for Hyper Spectral Imager (HySI) applications. The device is a mixed mode, silicon on chip (SOC) IC. It combines analog circuitry, digital circuitry and optical sensor circuitry into a single chip. This chip integrates a 256x512 active pixel sensor array, a programmable gain amplifier (PGA) for row wise gain setting, an I2C interface, SRAM, a 12 bit analog to digital converter (ADC), a voltage regulator, low voltage differential signaling (LVDS) and a timing generator. The device provides 256 pixels of spatial resolution and 512 bands of spectral resolution ranging from 400 nm to 950 nm in wavelength. In row wise gain readout mode, one can set a different gain on each row of the photodetector by storing the gain setting data in the SRAM through the I2C interface. This unique row wise gain setting can be used to compensate for the silicon spectral response non-uniformity, which makes the device well suited to hyper-spectral imager applications. The HySI camera, located on board the Chandrayaan-1 satellite, was successfully launched to the moon on Oct. 22, 2008. The device is currently mapping the moon and sending back excellent images of the moon surface. The device design and the moon image data will be presented in the paper.
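
    Row-wise gain compensation amounts to multiplying each detector row by its own stored gain. The sketch below shows that operation; the mapping of rows to spectral bands and the gain values are assumptions for illustration, not the HySI calibration data:

    ```python
    import numpy as np

    def apply_row_gains(frame, row_gains):
        """Apply a per-row gain to a frame, assuming (for illustration) that
        each row corresponds to one spectral band. Gain values stand in for
        the settings stored in on-chip SRAM."""
        return frame * np.asarray(row_gains, dtype=float)[:, None]

    rows, cols = 512, 256          # assumed: 512 spectral rows x 256 spatial columns
    frame = np.full((rows, cols), 100.0)
    # Boost rows (bands) where silicon responsivity is assumed weakest.
    row_gains = np.linspace(1.0, 4.0, rows)
    flat = apply_row_gains(frame, row_gains)
    print(flat[0, 0], flat[-1, 0])  # 100.0 400.0
    ```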

  1. A Closer Look at the Congo and the Lightning Maximum on Earth

    NASA Technical Reports Server (NTRS)

    Blakeslee, R. J.; Buechler, D. E.; Lavreau, Johan; Goodman, Steven J.

    2008-01-01

    The global maps of maximum mean annual flash density derived from a decade of observations from the Lightning Imaging Sensor on the NASA Tropical Rainfall Measuring Mission (TRMM) satellite show that a 0.5 degree x 0.5 degree pixel west of Bukavu, Democratic Republic of Congo (latitude 2S, longitude 28E) has the most frequent lightning activity anywhere on earth with an average value in excess of 157 fl/sq km/yr. This pixel has a flash density that is much greater than even its surrounding neighbors. By contrast the maximum mean annual flash rate for North America located in central Florida is only 33 fl/sq km/yr. Previous studies have shown that monthly-seasonal-annual lightning maxima on earth occur in regions dominated by coastal (land-sea breeze interactions) or topographic influences (elevated heat sources, enhanced convergence). Using TRMM, Landsat Enhanced Thematic Mapper, and Shuttle Imaging Radar imagery we further examine the unique features of this region situated in the deep tropics and dominated by a complex topography having numerous mountain ridges and valleys to better understand why this pixel, unlike any other, has the most active lightning on the planet.

  2. Terahertz imaging with compressed sensing and phase retrieval.

    PubMed

    Chan, Wai Lam; Moravec, Matthew L; Baraniuk, Richard G; Mittleman, Daniel M

    2008-05-01

    We describe a novel, high-speed pulsed terahertz (THz) Fourier imaging system based on compressed sensing (CS), a new signal processing theory, which allows image reconstruction with fewer samples than traditionally required. Using CS, we successfully reconstruct a 64 x 64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels, which defines the image in the Fourier plane, and observe improved reconstruction quality when we apply phase correction. For our chosen image, only about 12% of the pixels are required for reassembling the image. In combination with phase retrieval, our system has the capability to reconstruct images with only a small subset of Fourier amplitude measurements and thus has potential application in THz imaging with cw sources.
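
    A full compressed-sensing or phase-retrieval reconstruction is beyond a short example, but the measurement model (a random subset of Fourier-plane pixels) can be sketched with a naive zero-filled inverse transform for comparison; everything below is illustrative and is not the reconstruction algorithm of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Simple 64 x 64 test object (a bright rectangle on a dark background).
    obj = np.zeros((64, 64))
    obj[20:44, 24:40] = 1.0

    # Measure only ~12% of the Fourier-plane "pixels", chosen at random.
    fourier = np.fft.fft2(obj)
    mask = rng.random(fourier.shape) < 0.12
    measurements = fourier * mask                 # unmeasured coefficients set to 0

    # Naive zero-filled reconstruction; a CS solver (e.g. l1 minimisation) would
    # recover a far better image from the same measurements.
    recon = np.real(np.fft.ifft2(measurements))
    err = np.linalg.norm(recon - obj) / np.linalg.norm(obj)
    print("sampled fraction: %.2f, relative error: %.2f" % (mask.mean(), err))
    ```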

  3. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it accounts for most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of the decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e., images with expert pixel segmentation). Hybrid color space design is also used to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible; moreover, posterior class pixel probability estimation is easy to compute with the Platt method. A new segmentation scheme using probabilistic pixel classification has therefore been developed. This scheme has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
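
    Pixel classification with an SVM and Platt posterior probabilities can be sketched with scikit-learn (assumed available); the synthetic colour features and class layout below are illustrative, and the sketch omits the vector quantization and hybrid colour space design described above:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic training pixels: 3 colour features per pixel, 2 classes
    # (e.g. nucleus vs background); values are illustrative only.
    n = 500
    nucleus = rng.normal([0.2, 0.1, 0.5], 0.05, size=(n, 3))
    background = rng.normal([0.7, 0.6, 0.8], 0.05, size=(n, 3))
    X = np.vstack([nucleus, background])
    y = np.array([1] * n + [0] * n)

    # SVM with Platt scaling (probability=True) gives posterior class
    # probabilities per pixel, as used in the probabilistic scheme above.
    clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

    test_pixels = np.array([[0.22, 0.12, 0.48], [0.68, 0.62, 0.79]])
    print(clf.predict_proba(test_pixels))   # columns: P(background), P(nucleus)
    ```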

  4. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.

  5. Geometric registration of images by similarity transformation using two reference points

    NASA Technical Reports Server (NTRS)

    Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)

    2011-01-01

    A method for registering a first image to a second image using a similarity transformation. Each image includes a plurality of pixels. The first image pixels are mapped to a set of first image coordinates and the second image pixels are mapped to a set of second image coordinates. The first image coordinates of two reference points in the first image are determined, and the second image coordinates of these reference points in the second image are determined. A Cartesian translation of the set of second image coordinates is performed such that the second image coordinates of the first reference point match its first image coordinates. A similarity transformation of the translated set of second image coordinates is then performed; this transformation scales and rotates the second image coordinates about the first reference point such that the second image coordinates of the second reference point match its first image coordinates.
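
    The two-point similarity registration described above can be sketched compactly with complex arithmetic: translate so the first reference point matches, then scale and rotate about it so the second matches. This is an illustration of the general construction, not the patented implementation:

    ```python
    import numpy as np

    def similarity_from_two_points(p1_a, p2_a, p1_b, p2_b):
        """Return a function mapping second-image coordinates onto the first
        image: translate so p1_b lands on p1_a, then scale and rotate about
        p1_a so that p2_b lands on p2_a. Points are (x, y) pairs."""
        a1, a2 = complex(*p1_a), complex(*p2_a)
        b1, b2 = complex(*p1_b), complex(*p2_b)
        s = (a2 - a1) / (b2 - b1)          # complex scale-and-rotation factor
        def transform(points):
            z = np.asarray(points, dtype=float) @ np.array([1.0, 1j])
            w = a1 + s * (z - b1)          # translate, then scale/rotate about p1
            return np.stack([w.real, w.imag], axis=-1)
        return transform

    # Reference points in the first image and their counterparts in the second.
    t = similarity_from_two_points((10, 10), (50, 30), (100, 200), (140, 260))
    print(t([(100, 200), (140, 260), (120, 230)]))  # first two map onto (10,10), (50,30)
    ```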

  6. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    1995-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  7. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2003-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  8. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2004-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  9. A Chip and Pixel Qualification Methodology on Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank

    2004-01-01

    This paper presents a qualification methodology for imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figures of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.

  10. Characterization of pixel sensor designed in 180 nm SOI CMOS technology

    NASA Astrophysics Data System (ADS)

    Benka, T.; Havranek, M.; Hejtmanek, M.; Jakovenko, J.; Janoska, Z.; Marcisovska, M.; Marcisovsky, M.; Neue, G.; Tomasek, L.; Vrba, V.

    2018-01-01

    A new type of X-ray imaging Monolithic Active Pixel Sensor (MAPS), X-CHIP-02, was developed using a 180 nm deep submicron Silicon On Insulator (SOI) CMOS commercial technology. Two pixel matrices were integrated into the prototype chip, which differ by the pixel pitch of 50 μm and 100 μm. The X-CHIP-02 contains several test structures, which are useful for characterization of individual blocks. The sensitive part of the pixel integrated in the handle wafer is one of the key structures designed for testing. The purpose of this structure is to determine the capacitance of the sensitive part (diode in the MAPS pixel). The measured capacitance is 2.9 fF for 50 μm pixel pitch and 4.8 fF for 100 μm pixel pitch at -100 V (default operational voltage). This structure was used to measure the IV characteristics of the sensitive diode. In this work, we report on a circuit designed for precise determination of sensor capacitance and IV characteristics of both pixel types with respect to X-ray irradiation. The motivation for measurement of the sensor capacitance was its importance for the design of front-end amplifier circuits. The design of pixel elements, as well as circuit simulation and laboratory measurement techniques are described. The experimental results are of great importance for further development of MAPS sensors in this technology.

  11. The analysis and rationale behind the upgrading of existing standard definition thermal imagers to high definition

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.

    2016-05-01

    With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now, with 1280x1024 pixel format IR detector arrays becoming readily available, designers of thermal imager systems face new challenges as pixel sizes shrink and the demand and applications for High Definition (HD) thermal imaging sensors increase. In many instances, upgrading an existing under-sampled SD thermal imaging sensor into a more optimally sampled or oversampled HD thermal imaging sensor is a more cost-effective option, with a shorter time to market, than designing and developing a completely new sensor. This paper presents the analysis and rationale behind the selection of the best-suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher-performing HD thermal imaging sensor. Several commercially available and "soon to be" commercially available HD small-pixel IR detector options are included in the analysis and considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore, with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors are operated at higher temperatures than their SD predecessors. Therefore, as an additional constraint and design goal, the feasibility of achieving the upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed.

  12. Adaptive single-pixel imaging with aggregated sampling and continuous differential measurements

    NASA Astrophysics Data System (ADS)

    Huo, Yaoran; He, Hongjie; Chen, Fan; Tai, Heng-Ming

    2018-06-01

    This paper proposes an adaptive compressive imaging technique with a single-pixel detector and a single arm. The aggregated sampling (AS) method enables a reduction of the resolution of the reconstructed images, with the aim of reducing time and space consumption. A target image with a resolution up to 1024 × 1024 can be reconstructed successfully at a 20% sampling rate. The continuous differential measurement (CDM) method, combined with a ratio factor of significant coefficient (RFSC), improves imaging quality. Moreover, RFSC reduces human intervention in parameter setting. This technique enhances the practicability of single-pixel imaging through lower time and space consumption, better imaging quality and less human intervention.

  13. Design and fabrication of vertically-integrated CMOS image sensors.

    PubMed

    Skorka, Orit; Joseph, Dileepan

    2011-01-01

    Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors.

  14. Design and Fabrication of Vertically-Integrated CMOS Image Sensors

    PubMed Central

    Skorka, Orit; Joseph, Dileepan

    2011-01-01

    Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors. PMID:22163860

  15. Design and characterization of novel monolithic pixel sensors for the ALICE ITS upgrade

    NASA Astrophysics Data System (ADS)

    Cavicchioli, C.; Chalmet, P. L.; Giubilato, P.; Hillemanns, H.; Junique, A.; Kugathasan, T.; Mager, M.; Marin Tobon, C. A.; Martinengo, P.; Mattiazzo, S.; Mugnier, H.; Musa, L.; Pantano, D.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Van Hoorne, J. W.; Yang, P.

    2014-11-01

    Within the R&D activities for the upgrade of the ALICE Inner Tracking System (ITS), Monolithic Active Pixel Sensors (MAPS) are being developed and studied, due to their lower material budget (0.3% X0 in total for each inner layer) and higher granularity (20 μm × 20 μm pixels) with respect to the present pixel detector. This paper presents the design and characterization results of the Explorer0 chip, manufactured in the TowerJazz 180 nm CMOS Imaging Sensor process on a wafer with a high-resistivity (ρ > 1 kΩ cm), 18 μm thick epitaxial layer. The chip is organized in two sub-matrices with different pixel pitches (20 μm and 30 μm), each of them containing several pixel designs. The collection electrode size and shape, as well as the distance between the electrode and the surrounding electronics, are varied; the chip also offers the possibility to decouple the charge integration time from the readout time, and to change the sensor bias. The charge collection properties of the different pixel variants implemented in Explorer0 have been studied using a 55Fe X-ray source and 1-5 GeV/c electrons and positrons. The sensor capacitance has been estimated, and the effect of the sensor bias has also been examined in detail. A second version of the Explorer0 chip (called Explorer1) was submitted for production in March 2013, together with a novel circuit with in-pixel discrimination and a sparsified readout. Results from these submissions are also presented.

  16. Sources of Gullies in Hale Crater

    NASA Image and Video Library

    2017-04-12

    Color from the High Resolution Imaging Science Experiment (HiRISE) instrument onboard NASA's Mars Reconnaissance Orbiter can show mineralogical differences due to the near-infrared filter. The sources of channels on the north rim of Hale Crater show fresh blue, green, purple and light-toned exposures under the overlying reddish dust. The causes and timing of activity in channels and gullies on Mars remain an active area of research. Geologists infer the timing of different events based on what are called "superposition relationships" between different landforms. Areas like this are a puzzle. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.2 centimeters (9.9 inches) per pixel (with 1 x 1 binning); objects on the order of 76 centimeters (29.9 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21586

  17. Enhancing the image resolution in a single-pixel sub-THz imaging system based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Alkus, Umit; Ermeydan, Esra Sengun; Sahin, Asaf Behzat; Cankaya, Ilyas; Altan, Hakan

    2018-04-01

    Compressed sensing (CS) techniques allow for faster imaging when combined with scan architectures, which typically suffer from low speed. When implemented with a subterahertz (sub-THz) single-detector scan imaging system, this technique provides images whose resolution is limited only by the pixel size of the pattern used to scan the image plane. To overcome this limitation, the image of the target can be oversampled; however, this results in slower imaging rates, especially if it is done in two dimensions across the image plane. We show that, by implementing a one-dimensional (1-D) scan of the image plane, a modified approach to CS theory applied with an appropriate reconstruction algorithm allows for successful reconstruction of the reflected oversampled image of a target placed in a standoff configuration from the source. The experiments are done in a reflection-mode configuration where the operating frequency is 93 GHz and the corresponding wavelength is λ = 3.2 mm. To reconstruct the image with fewer samples, CS theory is applied using masks where the pixel size is 5 mm × 5 mm, and each mask covers an image area of 5 cm × 5 cm, meaning that the basic image is resolved as 10 × 10 pixels. To enhance the resolution, the information between two consecutive pixels is used, and oversampling along 1-D, coupled with a modification of the masks in CS theory, allows oversampled images to be reconstructed rapidly in 20 × 20 and 40 × 40 pixel formats. These are then compared using two different reconstruction algorithms, TVAL3 and ℓ1-MAGIC. The performance of these methods is compared for both simulated and real signals. It is found that the modified CS approach coupled with the TVAL3 reconstruction process, even when scanning along only 1-D, allows for rapid, precise reconstruction of the oversampled target.

  18. The Effect Of Pixel Size On The Detection Rate Of Early Pulmonary Sarcoidosis In Digital Chest Radiographic Systems

    NASA Astrophysics Data System (ADS)

    MacMahon, Heber; Vyborny, Carl; Powell, Gregory; Doi, Kunio; Metz, Charles E.

    1984-08-01

    In digital radiography, the pixel size used determines the potential spatial resolution of the system. The need for spatial resolution varies depending on the subject matter imaged. In many areas, including the chest, the minimum spatial resolution requirements have not been determined. Sarcoidosis is a disease which frequently causes subtle interstitial infiltrates in the lungs. As the initial step in an investigation designed to determine the minimum pixel size required in digital chest radiographic systems, we have studied 1 mm pixel digitized images of patients with early pulmonary sarcoidosis. The results of this preliminary study suggest that neither mild interstitial pulmonary infiltrates nor other abnormalities, such as pneumothoraces, can be detected reliably in 1 mm pixel digital images.

  19. ACE: Automatic Centroid Extractor for real time target tracking

    NASA Technical Reports Server (NTRS)

    Cameron, K.; Whitaker, S.; Canaris, J.

    1990-01-01

    A high performance video image processor has been implemented which is capable of grouping contiguous pixels from a raster-scan image into objects and then calculating centroid information for each object in a frame. The algorithm employed to group pixels is very efficient and is guaranteed to work properly for all convex shapes as well as most concave shapes. Processing speeds are adequate for real-time processing of video images with a pixel rate of up to 20 million pixels per second. Pixels may be up to 8 bits wide. The processor is designed to interface directly to a transputer serial-link communications channel with no additional hardware. The full custom VLSI processor was implemented in a 1.6 μm CMOS process and measures 7200 μm on a side.
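
    The record above describes a hardware pixel-grouping and centroid engine; the following is only a minimal software sketch of the same grouping-and-centroid idea using standard connected-component labelling, not the ACE raster-scan algorithm or its VLSI implementation.

    ```python
    # Minimal sketch: group contiguous above-threshold pixels and compute one
    # centroid per group. Illustrative only; not the ACE hardware algorithm.
    import numpy as np
    from scipy import ndimage

    def object_centroids(frame, threshold=128):
        """Group contiguous above-threshold pixels and return one centroid per group."""
        mask = frame >= threshold                  # candidate target pixels
        labels, n = ndimage.label(mask)            # 4-connected grouping by default
        # intensity-weighted centroid (row, col) for each labelled object
        return ndimage.center_of_mass(frame, labels, range(1, n + 1))

    if __name__ == "__main__":
        img = np.zeros((64, 64), dtype=np.uint8)
        img[10:14, 20:25] = 200                    # a synthetic bright blob
        print(object_centroids(img))               # ~[(11.5, 22.0)]
    ```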

  20. 3-D Spatial Resolution of 350 μm Pitch Pixelated CdZnTe Detectors for Imaging Applications.

    PubMed

    Yin, Yongzhi; Chen, Ximeng; Wu, Heyu; Komarov, Sergey; Garson, Alfred; Li, Qiang; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2013-02-01

    We are currently investigating the feasibility of using highly pixelated Cadmium Zinc Telluride (CdZnTe) detectors for sub-500 μm resolution PET imaging applications. A 20 mm × 20 mm × 5 mm CdZnTe substrate was fabricated with 350 μm pitch pixels (250 μm anode pixels with 100 μm gap) and coplanar cathode. Charge sharing among the pixels of a 350 μm pitch detector was studied using collimated 122 keV and 511 keV gamma ray sources. For a 350 μm pitch CdZnTe detector, scatter plots of the charge signal of two neighboring pixels clearly show more charge sharing when the collimated beam hits the gap between adjacent pixels. Using collimated Co-57 and Ge-68 sources, we measured the count profiles and estimated the intrinsic spatial resolution of 350 μm pitch detector biased at -1000 V. Depth of interaction was analyzed based on two methods, i.e., cathode/anode ratio and electron drift time, in both 122 keV and 511 keV measurements. For single-pixel photopeak events, a linear correlation between cathode/anode ratio and electron drift time was shown, which would be useful for estimating the DOI information and preserving image resolution in CdZnTe PET imaging applications.

  1. 3-D Spatial Resolution of 350 μm Pitch Pixelated CdZnTe Detectors for Imaging Applications

    PubMed Central

    Yin, Yongzhi; Chen, Ximeng; Wu, Heyu; Komarov, Sergey; Garson, Alfred; Li, Qiang; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2016-01-01

    We are currently investigating the feasibility of using highly pixelated Cadmium Zinc Telluride (CdZnTe) detectors for sub-500 μm resolution PET imaging applications. A 20 mm × 20 mm × 5 mm CdZnTe substrate was fabricated with 350 μm pitch pixels (250 μm anode pixels with 100 μm gap) and coplanar cathode. Charge sharing among the pixels of a 350 μm pitch detector was studied using collimated 122 keV and 511 keV gamma ray sources. For a 350 μm pitch CdZnTe detector, scatter plots of the charge signal of two neighboring pixels clearly show more charge sharing when the collimated beam hits the gap between adjacent pixels. Using collimated Co-57 and Ge-68 sources, we measured the count profiles and estimated the intrinsic spatial resolution of 350 μm pitch detector biased at −1000 V. Depth of interaction was analyzed based on two methods, i.e., cathode/anode ratio and electron drift time, in both 122 keV and 511 keV measurements. For single-pixel photopeak events, a linear correlation between cathode/anode ratio and electron drift time was shown, which would be useful for estimating the DOI information and preserving image resolution in CdZnTe PET imaging applications. PMID:28250476

  2. Pixel Statistical Analysis of Diabetic vs. Non-diabetic Foot-Sole Spectral Terahertz Reflection Images

    NASA Astrophysics Data System (ADS)

    Hernandez-Cardoso, G. G.; Alfaro-Gomez, M.; Rojas-Landeros, S. C.; Salas-Gutierrez, I.; Castro-Camus, E.

    2018-03-01

    In this article, we present a series of hydration mapping images of the foot soles of diabetic and non-diabetic subjects measured by terahertz reflectance. In addition to the hydration images, we present a series of RYG-color-coded (red yellow green) images where pixels are assigned one of the three colors in order to easily identify areas at risk of ulceration. We also present the statistics of the number of pixels of each color as a potential quantitative indicator of diabetic foot-syndrome deterioration.
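
    As an illustration of the color-coding step, the sketch below thresholds a hydration map into red/yellow/green classes and tallies the per-color pixel counts; the threshold values and the 0-1 hydration scale are assumptions for illustration, not those used in the study.

    ```python
    # Illustrative RYG (red-yellow-green) coding of a per-pixel hydration map.
    # Thresholds are placeholders, not the study's clinical values.
    import numpy as np

    def ryg_code(hydration, low=0.3, high=0.5):
        """Map per-pixel hydration (0..1) to 0=red (at risk), 1=yellow, 2=green."""
        codes = np.full(hydration.shape, 2, dtype=np.uint8)   # green by default
        codes[hydration < high] = 1                            # yellow: intermediate
        codes[hydration < low] = 0                             # red: at-risk area
        return codes

    hydration_map = np.random.rand(32, 32)
    codes = ryg_code(hydration_map)
    # pixel count per color, analogous to the paper's per-color statistics
    print(np.bincount(codes.ravel(), minlength=3))
    ```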

  3. Microradiography with Semiconductor Pixel Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakubek, Jan; Cejnarova, Andrea; Dammer, Jiri

    High-resolution radiography (with X-rays, neutrons, heavy charged particles, etc.), often also exploited in tomographic mode to provide 3D images, is a powerful imaging technique for instant and nondestructive visualization of the fine internal structure of objects. Novel types of semiconductor single-particle-counting pixel detectors offer many advantages for radiation imaging: high detection efficiency, energy discrimination or direct energy measurement, noiseless digital integration (counting), high frame rate and a virtually unlimited dynamic range. This article shows the application and potential of pixel detectors (such as Medipix2 or TimePix) in different fields of radiation imaging.

  4. Optical and electrical characterization of a back-thinned CMOS active pixel sensor

    NASA Astrophysics Data System (ADS)

    Blue, Andrew; Clark, A.; Houston, S.; Laing, A.; Maneuski, D.; Prydderch, M.; Turchetta, R.; O'Shea, V.

    2009-06-01

    This paper reports the first characterization of a back-thinned Vanilla, a 512×512 active pixel sensor (APS) with 25 μm square pixels. Characterization of the detectors was carried out through the analysis of photon transfer curves to yield measurements of full well capacity, noise levels, gain constants and linearity. Spectral characterization of the sensors was also performed in the visible and UV regions. A full comparison against non-back-thinned, front-illuminated Vanilla sensors is included. These measurements suggest that the Vanilla APS will be suitable for a wide range of applications, including particle physics and biomedical imaging.
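
    For context on the photon-transfer-curve analysis mentioned above, the following is a generic mean-variance sketch of how a conversion-gain constant is commonly extracted from flat-field frame pairs; it is not the authors' analysis code, and the frame-pair handling is simplified.

    ```python
    # Generic mean-variance (photon transfer curve) sketch for estimating a
    # conversion-gain constant; offset subtraction and outlier handling omitted.
    import numpy as np

    def conversion_gain(flat_pairs):
        """flat_pairs: list of (frame_a, frame_b) flat-field pairs at increasing exposure.
        Returns K in e-/DN estimated from the slope of variance vs. mean signal."""
        means, variances = [], []
        for a, b in flat_pairs:
            means.append(0.5 * (a.mean() + b.mean()))
            # differencing two frames cancels fixed-pattern noise; /2 recovers temporal variance
            variances.append(np.var(a.astype(float) - b.astype(float)) / 2.0)
        slope, _ = np.polyfit(means, variances, 1)   # var ~ mean / K + read-noise term
        return 1.0 / slope                            # conversion gain in e-/DN
    ```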

  5. Modulation transfer function measurement technique for small-pixel detectors

    NASA Technical Reports Server (NTRS)

    Marchywka, Mike; Socker, Dennis G.

    1992-01-01

    A modulation transfer function (MTF) measurement technique suitable for large-format, small-pixel detector characterization has been investigated. A volume interference grating is used as a test image instead of the bar or sine wave target images normally used. This technique permits a high-contrast, large-area, sinusoidal intensity distribution to illuminate the device being tested, avoiding the need to deconvolve raw data with imaging system characteristics. A high-confidence MTF result at spatial frequencies near 200 cycles/mm is obtained. We present results at several visible light wavelengths with a 6.8-micron-pixel CCD. Pixel response functions are derived from the MTF results.

  6. Impulsive noise suppression in color images based on the geodesic digital paths

    NASA Astrophysics Data System (ADS)

    Smolka, Bogdan; Cyganek, Boguslaw

    2015-02-01

    In this paper, a novel filtering design based on the exploration of the pixel neighborhood by digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in a hybrid spatial-color space. Then, an optimal path of minimum total cost, leading from the pixels of the window's boundary to its center, is determined. The cost of an optimal path serves as a degree of similarity of the central pixel to the samples from the local processing window. If a pixel is an outlier, then all the paths starting from the window's boundary will have high costs, and the minimum cost will also be high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed using the information on the minimum cost assigned to each image pixel. Thus, the costs of the optimal paths are first used to build a smoothed image, and in the second step the minimum cost of the central pixel is used to construct the weights of a soft-switching scheme. Experiments performed on a set of standard color images revealed that the efficiency of the proposed algorithm is superior to state-of-the-art filtering techniques in terms of objective restoration quality measures, especially for high noise contamination ratios. The proposed filter, due to its low computational complexity, can be applied to real-time image denoising and also to the enhancement of video streams.
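
    To make the minimum-cost-path notion concrete, here is a small sketch that computes, inside a single window, the cheapest digital path from the window boundary to its central pixel, using an assumed step cost that mixes spatial distance and color difference; the published filter's exact cost definition, path model and soft-switching weights are not reproduced.

    ```python
    # Dijkstra-style sketch of a minimum-cost digital path from the window
    # boundary to the central pixel; the step-cost definition is an assumption.
    import heapq
    import numpy as np

    def min_path_cost(window, beta=1.0):
        """`window` is an odd-sized (w, w, 3) colour patch; returns the minimum
        total cost of a path from the window boundary to its central pixel."""
        w = window.shape[0]
        c = w // 2
        cost = np.full((w, w), np.inf)
        heap = []
        for i in range(w):
            for j in range(w):
                if i in (0, w - 1) or j in (0, w - 1):   # paths start on the boundary
                    cost[i, j] = 0.0
                    heapq.heappush(heap, (0.0, i, j))
        while heap:                                      # Dijkstra over 8-neighbours
            d, i, j = heapq.heappop(heap)
            if d > cost[i, j]:
                continue
            if (i, j) == (c, c):
                return d
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di or dj) and 0 <= ni < w and 0 <= nj < w:
                        # step cost: spatial move plus colour distance (assumed form)
                        step = np.hypot(di, dj) + beta * np.linalg.norm(
                            window[ni, nj].astype(float) - window[i, j].astype(float))
                        if d + step < cost[ni, nj]:
                            cost[ni, nj] = d + step
                            heapq.heappush(heap, (d + step, ni, nj))
        return cost[c, c]
    ```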

  7. Automatic Sub-Pixel Co-Registration of LandSat-8 OLI and Sentinel-2A MSI Images Using Phase Correlation and Machine Learning Based Mapping

    NASA Technical Reports Server (NTRS)

    Skakun, Sergii; Roger, Jean-Claude; Vermote, Eric F.; Masek, Jeffrey G.; Justice, Christopher O.

    2017-01-01

    This study investigates misregistration issues between Landsat-8/OLI and Sentinel-2A/MSI at 30 m resolution, and between multi-temporal Sentinel-2A images at 10 m resolution, using a phase correlation approach and multiple transformation functions. The co-registration of 45 Landsat-8 to Sentinel-2A pairs and 37 Sentinel-2A to Sentinel-2A pairs was analyzed. Phase correlation proved to be a robust approach that allowed us to identify hundreds to thousands of control points on images acquired more than 100 days apart. Overall, misregistration of up to 1.6 pixels at 30 m resolution between Landsat-8 and Sentinel-2A images, and of 1.2 pixels and 2.8 pixels at 10 m resolution between multi-temporal Sentinel-2A images from the same and different orbits, respectively, was observed. The non-linear Random Forest regression used for constructing the mapping function showed the best results in terms of root mean square error (RMSE), yielding an average RMSE of 0.07±0.02 pixels at 30 m resolution, and 0.09±0.05 and 0.15±0.06 pixels at 10 m resolution for the same and adjacent Sentinel-2A orbits, respectively, for multiple tiles and multiple conditions. A simpler 1st-order polynomial function (affine transformation) yielded an RMSE of 0.08±0.02 pixels at 30 m resolution and 0.12±0.06 (same Sentinel-2A orbits) and 0.20±0.09 (adjacent orbits) pixels at 10 m resolution.
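
    As background for the registration step, the sketch below shows a minimal integer-pixel phase-correlation translation estimator; the study itself works with many such control points, sub-pixel refinement and fitted transformation models, none of which are reproduced here.

    ```python
    # Minimal integer-pixel phase-correlation shift estimator (illustrative).
    import numpy as np

    def phase_correlation_shift(ref, moving):
        """Estimate the (row, col) translation that maps `ref` onto `moving`."""
        F1, F2 = np.fft.fft2(ref), np.fft.fft2(moving)
        cross_power = F2 * np.conj(F1)
        cross_power /= np.abs(cross_power) + 1e-12      # keep phase only
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap peak coordinates to signed shifts
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

    if __name__ == "__main__":
        ref = np.random.rand(128, 128)
        moving = np.roll(ref, (5, -3), axis=(0, 1))     # moving is ref shifted by (+5, -3)
        print(phase_correlation_shift(ref, moving))     # -> (5, -3)
    ```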

  8. Evaluation of a CdTe semiconductor based compact γ camera for sentinel lymph node imaging.

    PubMed

    Russo, Paolo; Curion, Assunta S; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caracò, Corradina; Aloj, Luigi; Lastoria, Secondo

    2011-03-01

    The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. The room-temperature CdTe pixel detector (1 mm thick) has 256 × 256 square pixels arranged with a 55 μm pitch (sensitive area 14.08 × 14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor of 1:5. The detector is operated at a single low-energy threshold of about 20 keV. For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5 × 10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; the corresponding values for the 2.1 mm pinhole were 3.3 × 10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s, an injected activity of 26 MBq of 99mTc, and prior localization with standard gamma camera lymphoscintigraphy. The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, under clinical operating conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter distances from the patient's skin.

  9. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing "original pixel intensity"-based coding approaches using traditional image coders (e.g., JPEG2000) to "residual"-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because the characteristics of HS images, in their spectral domain and in the shape domain of their panchromatic imagery, differ from those of traditional videos. In this paper, a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data where every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated as if it were an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102

  10. A New Low Temperature Polycrystalline Silicon Thin Film Transistor Pixel Circuit for Active Matrix Organic Light Emitting Diode

    NASA Astrophysics Data System (ADS)

    Ching-Lin Fan,; Yi-Yan Lin,; Jyu-Yu Chang,; Bo-Jhang Sun,; Yan-Wei Liu,

    2010-06-01

    This study presents a novel compensation pixel design and driving method, based on a voltage feedback scheme, for active-matrix organic light-emitting diode (AMOLED) displays that use low-temperature polycrystalline silicon thin-film transistors (LTPS-TFTs); the design is verified by SPICE simulation. The measured and simulated LTPS-TFT characteristics show a good fit. The proposed circuit consists of four TFTs and two capacitors with an additional signal line. The error rate of the OLED anode voltage variation is below 0.3% for a driving-TFT threshold voltage deviation of ΔVTH = ±0.33 V. The simulation results show that the pixel design can improve display image non-uniformity by compensating for the threshold voltage deviation of the driving TFT and the degradation of the OLED threshold voltage at the same time.

  11. A New Low Temperature Polycrystalline Silicon Thin Film Transistor Pixel Circuit for Active Matrix Organic Light Emitting Diode

    NASA Astrophysics Data System (ADS)

    Fan, Ching-Lin; Lin, Yi-Yan; Chang, Jyu-Yu; Sun, Bo-Jhang; Liu, Yan-Wei

    2010-06-01

    This study presents a novel compensation pixel design and driving method, based on a voltage feedback scheme, for active-matrix organic light-emitting diode (AMOLED) displays that use low-temperature polycrystalline silicon thin-film transistors (LTPS-TFTs); the design is verified by SPICE simulation. The measured and simulated LTPS-TFT characteristics show a good fit. The proposed circuit consists of four TFTs and two capacitors with an additional signal line. The error rate of the OLED anode voltage variation is below 0.3% for a driving-TFT threshold voltage deviation of ΔVTH = ±0.33 V. The simulation results show that the pixel design can improve display image non-uniformity by compensating for the threshold voltage deviation of the driving TFT and the degradation of the OLED threshold voltage at the same time.

  12. Analysis of identification of digital images from a map of cosmic microwaves

    NASA Astrophysics Data System (ADS)

    Skeivalas, J.; Turla, V.; Jurevicius, M.; Viselga, G.

    2018-04-01

    This paper discusses the identification of digital images from the cosmic microwave background radiation map formed from the data of the European Space Agency "Planck" telescope, by applying covariance functions and wavelet theory. The estimates of the covariance functions of two digital images, or of a single image, are calculated from the random functions formed from the digital images in the form of pixel vectors. The pixel vectors are formed by expanding the pixel arrays of the digital images into single vectors. When the scale of a digital image is varied, the frequencies of single-pixel color waves remain constant and the procedure for calculating covariance functions is not affected. For identification of the images, the RGB format spectrum has been applied. The impact of the RGB spectrum components and the color tensor on the estimates of the covariance functions was analyzed. The identity of digital images is assessed according to the changes in the values of the correlation coefficients within a certain range of values, using the developed computer program.
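
    As a minimal illustration of comparing images through their pixel vectors, the sketch below flattens two images into vectors and computes their normalized cross-covariance (correlation coefficient); it ignores the RGB color tensor and the wavelet machinery discussed in the paper.

    ```python
    # Correlation of two images via their flattened pixel vectors (illustrative).
    import numpy as np

    def image_correlation(img_a, img_b):
        a = img_a.astype(float).ravel()      # pixel vector of image A
        b = img_b.astype(float).ravel()      # pixel vector of image B
        a -= a.mean()
        b -= b.mean()
        # normalised cross-covariance (Pearson correlation coefficient)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    ```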

  13. G-Channel Restoration for RWB CFA with Double-Exposed W Channel

    PubMed Central

    Park, Chulhee; Song, Ki Sun; Kang, Moon Gi

    2017-01-01

    In this paper, we propose a green (G)-channel restoration for a red–white–blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red–blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, a RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels. PMID:28165425

  14. G-Channel Restoration for RWB CFA with Double-Exposed W Channel.

    PubMed

    Park, Chulhee; Song, Ki Sun; Kang, Moon Gi

    2017-02-05

    In this paper, we propose a green (G)-channel restoration for a red-white-blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red-blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, a RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels.
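
    The following sketch only illustrates the basic idea of recovering G from a double-exposed W channel by compensating the exposure difference and subtracting R and B; the exposure ratio, the assumption W ≈ R + G + B, and the absence of spectral weighting are simplifications for illustration, not the authors' algorithm.

    ```python
    # Hedged sketch of RWB-to-RGB G-channel recovery with a double-exposed W channel.
    import numpy as np

    def restore_g(W, R, B, exposure_ratio=2.0):
        """W was integrated `exposure_ratio` times longer than R and B (float arrays)."""
        W_eq = W / exposure_ratio            # compensate the energy (exposure) difference of W
        G = W_eq - R - B                     # assumes W ~ R + G + B for an ideal panchromatic pixel
        return np.clip(G, 0.0, None)
    ```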

  15. The HEXITEC Hard X-Ray Pixelated CdTe Imager for Fast Solar Observations

    NASA Technical Reports Server (NTRS)

    Baumgartner, Wayne H.; Christe, Steven D.; Ryan, Daniel; Inglis, Andrew R.; Shih, Albert Y.; Gregory, Kyle; Wilson, Matt; Seller, Paul; Gaskin, Jessica; Wilson-Hodge, Colleen

    2016-01-01

    There is an increasing demand in solar physics and astrophysics for high-resolution X-ray spectroscopic imaging. Such observations would present groundbreaking opportunities to study the poorly understood high-energy processes in our solar system and beyond, such as solar flares, X-ray binaries, and active galactic nuclei. However, such observations require a new breed of solid-state detectors sensitive to high-energy X-rays, with fine independent pixels to sub-sample the point spread function (PSF) of the X-ray optics. For solar observations in particular, they must also be capable of handling very high count rates, as photon fluxes from solar flares often cause pile-up and saturation in present-generation detectors. The Rutherford Appleton Laboratory (RAL) has recently developed a new cadmium telluride (CdTe) detector system, called HEXITEC (High Energy X-ray Imaging Technology). It is an 80 x 80 array of 250 micron independent pixels sensitive in the 2-200 keV band and capable of a high full-frame readout rate of 10 kHz. HEXITEC provides the smallest independently read out CdTe pixels currently available, which are well matched to the few-arcsecond PSF produced by current and next-generation hard X-ray focusing optics. NASA's Goddard and Marshall Space Flight Centers are collaborating with RAL to develop these detectors for use on future space-borne hard X-ray focusing telescopes. We show the latest results on HEXITEC's imaging capability, energy resolution, and high readout rate, and reveal it to be ideal for such future instruments.

  16. It's not the pixel count, you fool

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2012-01-01

    The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have?", for we need as many megapixels as possible since the other guys are killing us with their "umpteen" megapixel pocket-sized digital cameras. And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is the automatic motion control to stabilize the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase after-sales profits, and don't be surprised if it has been done long ago in some basement lab of a photographic company and, of course, before its time.

  17. SU-C-206-03: Metal Artifact Reduction in X-Ray Computed Tomography Based On Local Anatomical Similarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Yang, X; Rosenfield, J

    Purpose: Metal implants such as orthopedic hardware and dental fillings cause severe bright and dark streaking in reconstructed CT images. These artifacts decrease image contrast and degrade HU accuracy, leading to inaccuracies in target delineation and dose calculation. Additionally, such artifacts negatively impact patient set-up in image guided radiation therapy (IGRT). In this work, we propose a novel method for metal artifact reduction which utilizes the anatomical similarity between neighboring CT slices. Methods: Neighboring CT slices show similar anatomy. Based on this anatomical similarity, the proposed method replaces corrupted CT pixels with pixels from adjacent, artifact-free slices. A gamma map, which is the weighted summation of relative HU error and distance error, is calculated for each pixel in the artifact-corrupted CT image. The minimum value in each pixel's gamma map is used to identify a pixel from the adjacent CT slice to replace the corresponding artifact-corrupted pixel. This replacement only occurs if the minimum value in a particular pixel's gamma map is larger than a threshold. The proposed method was evaluated with clinical images. Results: Highly attenuating dental fillings and hip implants cause severe streaking artifacts on CT images. The proposed method eliminates the dark and bright streaking and improves the implant delineation and visibility. In particular, the image non-uniformity in the central region of interest was reduced from 1.88 and 1.01 to 0.28 and 0.35, respectively. Further, the mean CT HU error was reduced from 328 HU and 460 HU to 60 HU and 36 HU, respectively. Conclusions: The proposed metal artifact reduction method replaces corrupted image pixels with pixels from neighboring slices that are free of metal artifacts. This method proved capable of suppressing streaking artifacts, improving HU accuracy and image detectability.
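
    The sketch below is a loose, illustrative reading of the gamma-map replacement idea: for each pixel, candidate pixels in the adjacent artifact-free slice are scored by a weighted sum of relative HU error and spatial distance, and the pixel is replaced by the minimum-gamma candidate when that minimum exceeds a threshold. The weights, search window and threshold are invented for illustration and are not the values used in the work above.

    ```python
    # Illustrative gamma-map pixel replacement between two adjacent CT slices.
    # Parameters (weights, window, threshold) are assumptions, not clinical values.
    import numpy as np

    def gamma_replace(corrupt, clean, w_hu=1.0, w_dist=0.1, win=5, thresh=0.5):
        """Replace pixels of `corrupt` with pixels from the adjacent artifact-free
        slice `clean` when the minimum gamma value exceeds `thresh`."""
        out = corrupt.astype(float).copy()
        rows, cols = corrupt.shape
        half = win // 2
        dy, dx = np.mgrid[-half:half + 1, -half:half + 1]
        dist_err = np.hypot(dy, dx)                        # spatial distance error
        for i in range(half, rows - half):
            for j in range(half, cols - half):
                patch = clean[i - half:i + half + 1, j - half:j + half + 1].astype(float)
                # relative HU error between the corrupted pixel and candidate pixels
                hu_err = np.abs(patch - corrupt[i, j]) / (np.abs(corrupt[i, j]) + 1e-6)
                gamma = w_hu * hu_err + w_dist * dist_err  # weighted summation
                k = np.unravel_index(np.argmin(gamma), gamma.shape)
                if gamma[k] > thresh:                      # pixel flagged as corrupted
                    out[i, j] = patch[k]
        return out
    ```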

  18. Photon-counting hexagonal pixel array CdTe detector: Spatial resolution characteristics for image-guided interventional applications

    PubMed Central

    Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J.; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo

    2016-01-01

    Purpose: High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. Methods: A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. Results: At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Conclusions: Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications. PMID:27147324

  19. Photon-counting hexagonal pixel array CdTe detector: Spatial resolution characteristics for image-guided interventional applications.

    PubMed

    Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo

    2016-05-01

    High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications.

  20. Providing integrity, authenticity, and confidentiality for header and pixel data of DICOM images.

    PubMed

    Al-Haj, Ali

    2015-04-01

    Exchange of medical images over public networks is subject to different types of security threats. This has triggered persistent demands for secure telemedicine implementations that provide confidentiality, authenticity, and integrity for the transmitted images. The medical image exchange standard (DICOM) offers mechanisms to provide confidentiality for the header data of the image but not for the pixel data. On the other hand, it offers mechanisms to achieve authenticity and integrity for the pixel data but not for the header data. In this paper, we propose a crypto-based algorithm that provides confidentiality, authenticity, and integrity for the pixel data as well as for the header data. This is achieved by applying strong cryptographic primitives utilizing internally generated security data, such as encryption keys, hashing codes, and digital signatures. The security data are generated internally from the header and the pixel data, thus establishing a strong bond between the DICOM data and the corresponding security data. The proposed algorithm has been evaluated extensively using DICOM images of different modalities. Simulation experiments show that confidentiality, authenticity, and integrity have been achieved, as reflected by the results we obtained for normalized correlation, entropy, PSNR, histogram analysis, and robustness.

  1. Pixel-super-resolved lensfree holography using adaptive relaxation factor and positional error correction

    NASA Astrophysics Data System (ADS)

    Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao

    2018-01-01

    Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimal solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to enhance the robustness and SNR of the reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm2 and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate the method's promising potential applications in biological imaging.

  2. Dual-gate photo thin-film transistor: a “smart” pixel for high-resolution and low-dose X-ray imaging

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Ou, Hai; Chen, Jun

    2015-06-01

    Since its emergence a decade ago, the amorphous silicon flat-panel X-ray detector has established itself as a ubiquitous platform for an array of digital radiography modalities. The fundamental building block of a flat-panel detector is called a pixel. In all current pixel architectures, sensing, storage, and readout are kept separate, inevitably compromising resolution by increasing pixel size. To address this issue, we propose a “smart” pixel architecture in which the aforementioned three components are combined in a single dual-gate photo thin-film transistor (TFT). In other words, the dual-gate photo TFT itself functions as a sensor, a storage capacitor, and a switch concurrently. Additionally, by harnessing the amplification effect of such a thin-film transistor, we have for the first time created a single-transistor active pixel sensor. The proof-of-concept device had a W/L ratio of 250μm/20μm and was fabricated using a simple five-mask photolithography process, where a 130nm transparent ITO layer was used as the top photo gate, and a 200nm amorphous silicon layer as the absorbing channel. The preliminary results demonstrated that the photocurrent was increased by four orders of magnitude due to light-induced threshold voltage shift in the sub-threshold region. The device sensitivity can be tuned simply by the photo gate bias to specifically target low-level light detection. The dependence of the threshold voltage on light illumination indicates that a dynamic range of at least 80dB can be achieved. The “smart” pixel technology holds tremendous promise for developing high-resolution and low-dose X-ray imaging and may potentially lower the cancer risk imposed by radiation, especially among paediatric patients.

  3. IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM

    NASA Technical Reports Server (NTRS)

    Martin, M. D.

    1994-01-01

    The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which can not be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA. Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
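
    As a concrete illustration of the "stretch" and "subsample" operations described above, here is a small sketch in generic array code; it mirrors the description (low maps to black, high to white, linear shading in between; subsampling keeps every n-th pixel of every n-th line) rather than IMDISP's actual C implementation.

    ```python
    # Illustrative DN-value "stretch" and "subsample" operations (not IMDISP code).
    import numpy as np

    def stretch(image, low, high):
        """Linearly map DN values so that `low` -> black and `high` -> white (assumes high > low)."""
        out = (image.astype(float) - low) / float(high - low)   # linear ramp between low and high
        return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

    def subsample(image, factor=2):
        """Keep every `factor`-th pixel of every `factor`-th line, from the upper-left corner."""
        return image[::factor, ::factor]
    ```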

  4. Image-based metal artifact reduction in x-ray computed tomography utilizing local anatomical similarity

    NASA Astrophysics Data System (ADS)

    Dong, Xue; Yang, Xiaofeng; Rosenfield, Jonathan; Elder, Eric; Dhabaan, Anees

    2017-03-01

    X-ray computed tomography (CT) has been widely used in radiation therapy treatment planning in recent years. However, metal implants such as dental fillings and hip prostheses can cause severe bright and dark streaking artifacts in reconstructed CT images. These artifacts decrease image contrast and degrade HU accuracy, leading to inaccuracies in target delineation and dose calculation. In this work, a metal artifact reduction method is proposed based on the intrinsic anatomical similarity between neighboring CT slices. Neighboring CT slices from the same patient exhibit similar anatomical features. Exploiting this anatomical similarity, a gamma map is calculated as a weighted summation of relative HU error and distance error for each pixel in an artifact-corrupted CT image relative to a neighboring, artifact-free image. The minimum value in the gamma map for each pixel is used to identify an appropriate pixel from the artifact-free CT slice to replace the corresponding artifact-corrupted pixel. With the proposed method, the mean CT HU error was reduced from 360 HU and 460 HU to 24 HU and 34 HU on head and pelvis CT images, respectively. Dose calculation accuracy also improved, as the dose difference was reduced from greater than 20% to less than 4%. Using 3%/3mm criteria, the gamma analysis failure rate was reduced from 23.25% to 0.02%. An image-based metal artifact reduction method is proposed that replaces corrupted image pixels with pixels from neighboring CT slices free of metal artifacts. This method is shown to be capable of suppressing streaking artifacts, thereby improving HU and dose calculation accuracy.

  5. Sub-pixel spatial resolution wavefront phase imaging

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip (Inventor); Mooney, James T. (Inventor)

    2012-01-01

    A phase imaging method for an optical wavefront acquires a plurality of phase images of the optical wavefront using a phase imager. Each phase image is unique and is shifted with respect to another of the phase images by a known/controlled amount that is less than the size of the phase imager's pixels. The phase images are then combined to generate a single high-spatial resolution phase image of the optical wavefront.

  6. CMOS Imaging of Temperature Effects on Pin-Printed Xerogel Sensor Microarrays.

    PubMed

    Lei Yao; Ka Yi Yung; Chodavarapu, Vamsy P; Bright, Frank V

    2011-04-01

    In this paper, we study the effect of temperature on the operation and performance of a xerogel-based sensor microarray coupled to a complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC) that images the photoluminescence response from the sensor microarray. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors, and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. A correlated double sampling circuit and pixel address/digital control/signal integration circuit are also implemented on-chip. The CMOS imager data are read out as a serial coded signal. The sensor system uses a light-emitting diode to excite target analyte responsive organometallic luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 3 × 3 (9 elements) array of oxygen (O2) sensors. Each group of three sensor elements in the array (arranged in a column) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a mix of two O2 sensitive luminophores in each pin-printed xerogel sensor element. The CMOS imager is designed to be low noise and consumes a static power of 320.4 μW and an average dynamic power of 624.6 μW when operating at a 100-Hz sampling frequency and a 1.8-V dc power supply.

  7. BOREAS TE-18, 60-m, Radiometrically Rectified Landsat TM Imagery

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team used a radiometric rectification process to produce standardized DN values for a series of Landsat TM images of the BOREAS SSA and NSA in order to compare images that were collected under different atmospheric conditions. The images for each study area were referenced to an image that had very clear atmospheric qualities. The reference image for the SSA was collected on 02-Sep-1994, while the reference image for the NSA was collected on 21-Jun-1995. The 23 rectified images cover the period of 07-Jul-1985 to 18-Sep-1994 in the SSA and 22-Jun-1984 to 09-Jun-1994 in the NSA. Each of the reference scenes had coincident atmospheric optical thickness measurements made by RSS-11. The radiometric rectification process is described in more detail by Hall et al. (1991). The original Landsat TM data were received from CCRS for use in the BOREAS project. Due to the nature of the radiometric rectification process and copyright issues, the full-resolution (30-m) images may not be publicly distributed. However, this spatially degraded 60-m resolution version of the images may be openly distributed and is available on the BOREAS CD-ROM series. After the radiometric rectification processing, the original data were degraded to a 60-m pixel size from the original 30-m pixel size by averaging the data over a 2- by 2-pixel window. The data are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
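    The 60-m product amounts to a simple block average of the 30-m data; a minimal sketch (assuming a single-band array, with any odd edge rows or columns trimmed) is:

    import numpy as np

    def degrade_to_60m(band):
        # Average non-overlapping 2 x 2 pixel windows (30 m -> 60 m).
        r, c = band.shape
        r, c = r - r % 2, c - c % 2                 # trim any odd edge row/column
        blocks = band[:r, :c].reshape(r // 2, 2, c // 2, 2)
        return blocks.mean(axis=(1, 3))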

  8. Crack image segmentation based on improved DBC method

    NASA Astrophysics Data System (ADS)

    Cao, Ting; Yang, Nan; Wang, Fengping; Gao, Ting; Wang, Weixing

    2017-11-01

    With the development of computer vision technology, crack detection based on digital image segmentation has attracted global attention among researchers and transportation ministries. Since cracks always exhibit random shapes and complex textures, it is still a challenge to obtain reliable crack detection results. Therefore, a novel crack image segmentation method based on fractal DBC (differential box counting) is introduced in this paper. The proposed method can estimate the fractal feature of every pixel based on neighborhood information, which considers the contribution from all possible directions in the related block. The block moves by just one pixel at a time so that it covers all the pixels in the crack image. Unlike the classic DBC method, which only describes the fractal feature of the related region, this novel method can effectively achieve crack image segmentation according to the fractal feature of every pixel. Experiments show that the proposed method achieves satisfactory results in crack detection.
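    A minimal per-pixel DBC sketch, under simplifying assumptions (the window and box sizes are illustrative and the paper's exact formulation may differ), is:

    import numpy as np

    def dbc_dimension(window, sizes=(2, 3, 4, 6), levels=256):
        # Differential box counting over one local window: for each box size s,
        # count boxes spanning the gray-level range of each s x s block, then
        # take the slope of log(N_r) against log(1/r) as the fractal dimension.
        m = window.shape[0]
        log_inv_r, log_n = [], []
        for s in sizes:
            h = s * levels / m                     # box height in gray-level units
            count = 0
            for i in range(0, m - s + 1, s):
                for j in range(0, m - s + 1, s):
                    block = window[i:i + s, j:j + s]
                    count += int(block.max() // h) - int(block.min() // h) + 1
            log_inv_r.append(np.log(m / s))
            log_n.append(np.log(count))
        return np.polyfit(log_inv_r, log_n, 1)[0]

    def pixel_fractal_map(image, win=12):
        # Slide the window one pixel at a time so every pixel gets a feature.
        pad = win // 2
        padded = np.pad(image.astype(float), pad, mode="reflect")
        out = np.zeros(image.shape, dtype=float)
        for r in range(image.shape[0]):
            for c in range(image.shape[1]):
                out[r, c] = dbc_dimension(padded[r:r + win, c:c + win])
        return out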

  9. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method of realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, enabling the camera pixels to always receive a reasonable exposure through DMD pixel-level modulation. More importantly, it allows different light-intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light-intensity control algorithm to effectively modulate the light intensity and recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement HDRI on different objects.

  10. Fast, Deep-Record-Length, Fiber-Coupled Photodiode Imaging Array for Plasma Diagnostics

    NASA Astrophysics Data System (ADS)

    Brockington, Samuel; Case, Andrew; Witherspoon, F. Douglas

    2014-10-01

    HyperV Technologies has been developing an imaging diagnostic comprising an array of fast, low-cost, long-record-length, fiber-optically-coupled photodiode channels to investigate plasma dynamics and other fast, bright events. By coupling an imaging fiber bundle to a bank of amplified photodiode channels, imagers and streak imagers of 100 to 1000 pixels can be constructed. By interfacing analog photodiode systems directly to commercial analog-to-digital converters and modern memory chips, a prototype 100-pixel array with an extremely deep record length (128 k points at 20 Msamples/s) and 10-bit pixel resolution has already been achieved. HyperV now seeks to extend these techniques to construct a prototype 1000-pixel framing camera with up to a 100 Msamples/s rate and 10 to 12 bit depth. Preliminary experimental results as well as Phase 2 plans will be discussed. Work supported by USDOE Phase 2 SBIR Grant DE-SC0009492.

  11. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for compensating certain visual defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
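    A minimal sketch of table-driven remapping, covering only the simple many-to-one lookup path (the patent's interpolative one-to-many processor and video-rate hardware are not reproduced), under the assumption that the stored look-up tables give the input coordinates to sample for every output pixel:

    import numpy as np

    def remap(image, src_rows, src_cols):
        # src_rows/src_cols play the role of the stored look-up tables: for
        # each output pixel they hold the row/column of the input pixel to use.
        r = np.clip(src_rows, 0, image.shape[0] - 1)
        c = np.clip(src_cols, 0, image.shape[1] - 1)
        return image[r, c]

    rows, cols = np.indices((480, 640))
    frame = np.random.rand(480, 640)
    shifted = remap(frame, rows + 10, cols)   # each output pixel samples 10 rows further down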

  12. Novel spectral imaging system combining spectroscopy with imaging applications for biology

    NASA Astrophysics Data System (ADS)

    Malik, Zvi; Cabib, Dario; Buckwald, Robert A.; Garini, Yuval; Soenksen, Dirk G.

    1995-02-01

    A novel analytical spectral-imaging system and its results in the examination of biological specimens are presented. The SpectraCube 1000 system measures the transmission, absorbance, or fluorescence spectra of images studied by light microscopy. The system is based on an interferometer combined with a CCD camera, enabling measurement of the interferogram for each pixel constructing the image. Fourier transformation of the interferograms derives pixel-by-pixel spectra for 170 × 170 pixels of the image. A special "similarity mapping" program has been developed, enabling application of comparison-based spectral algorithms to all the spatial and spectral information measured by the system in the image. By comparing the spectrum of each pixel in the specimen with a selected reference spectrum (similarity mapping), the spatial distribution of macromolecules possessing the characteristics of the reference spectrum is depicted. The system has been applied to analyses of bone marrow blood cells as well as fluorescent specimens, and has revealed information which could not be unveiled by other techniques. Similarity mapping has enabled visualization of fine details of chromatin packing in the nucleus of cells and other cytoplasmic compartments. Fluorescence analysis by the system has enabled the determination of porphyrin concentrations and distribution in cytoplasmic organelles of living cells.
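    A minimal sketch of similarity mapping, under the assumption of a normalized-correlation score (the abstract does not specify the SpectraCube metric):

    import numpy as np

    def similarity_map(cube, reference):
        # cube: (rows, cols, bands) per-pixel spectra; reference: (bands,).
        # Returns a per-pixel similarity score in [-1, 1].
        spectra = cube.reshape(-1, cube.shape[-1]).astype(float)
        ref = (reference - reference.mean()) / (reference.std() + 1e-12)
        spec = (spectra - spectra.mean(axis=1, keepdims=True)) / \
               (spectra.std(axis=1, keepdims=True) + 1e-12)
        scores = spec @ ref / len(ref)
        return scores.reshape(cube.shape[:2])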

  13. Single Pixel Black Phosphorus Photodetector for Near-Infrared Imaging.

    PubMed

    Miao, Jinshui; Song, Bo; Xu, Zhihao; Cai, Le; Zhang, Suoming; Dong, Lixin; Wang, Chuan

    2018-01-01

    Infrared imaging systems have a wide range of military and civil applications, and 2D nanomaterials have recently emerged as potential sensing materials that may outperform conventional ones such as HgCdTe, InGaAs, and InSb. As an example, 2D black phosphorus (BP) thin film has a thickness-dependent direct bandgap with low shot noise and noncryogenic operation for visible to mid-infrared photodetection. In this paper, the use of a single-pixel photodetector made with few-layer BP thin film for near-infrared imaging applications is demonstrated. The imaging is achieved by combining the photodetector with a digital micromirror device to encode and subsequently reconstruct the image based on a compressive sensing algorithm. Stationary images of a near-infrared laser spot (λ = 830 nm) with up to 64 × 64 pixels are captured using this single-pixel BP camera with 2,000 measurements, which is only about half of the total number of pixels. The imaging platform demonstrated in this work circumvents the grand challenge of scalable BP material growth for photodetector array fabrication and shows the efficacy of utilizing the outstanding performance of the BP photodetector for future high-speed infrared camera applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Cascaded image analysis for dynamic crack detection in material testing

    NASA Astrophysics Data System (ADS)

    Hampel, U.; Maas, H.-G.

    Concrete specimens in civil engineering material testing often show fissures or hairline cracks. These cracks develop dynamically. Starting at a width of a few microns, they usually cannot be detected visually or in an image of a camera imaging the whole specimen. Conventional image analysis techniques will detect fissures only if they show a width in the order of one pixel. To be able to detect and measure fissures with a width of a fraction of a pixel at an early stage of their development, a cascaded image analysis approach has been developed, implemented and tested. The basic idea of the approach is to detect discontinuities in dense surface deformation vector fields. These deformation vector fields between consecutive stereo image pairs, which are generated by cross correlation or least squares matching, show a precision in the order of 1/50 pixel. Hairline cracks can be detected and measured by applying edge detection techniques such as a Sobel operator to the results of the image matching process. Cracks will show up as linear discontinuities in the deformation vector field and can be vectorized by edge chaining. In practical tests of the method, cracks with a width of 1/20 pixel could be detected, and their width could be determined at a precision of 1/50 pixel.
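    A minimal sketch of the discontinuity-detection step, assuming a dense deformation field has already been obtained by image matching (the threshold is an illustrative value; the edge-chaining and vectorization steps are not reproduced):

    import numpy as np
    from scipy.ndimage import sobel

    def crack_candidates(defo_x, defo_y, thresh=0.05):
        # defo_x/defo_y: per-pixel displacement components, in pixels.
        # A Sobel operator on each component highlights discontinuities.
        grad = np.hypot(sobel(defo_x, axis=0), sobel(defo_x, axis=1)) \
             + np.hypot(sobel(defo_y, axis=0), sobel(defo_y, axis=1))
        return grad > thresh      # binary map of displacement discontinuities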

  15. Point spread function based classification of regions for linear digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Israni, Kenny; Avinash, Gopal; Li, Baojun

    2007-03-01

    In digital tomosynthesis, one of the limitations is the presence of out-of-plane blur due to the limited angle acquisition. The point spread function (PSF) characterizes blur in the imaging volume, and is shift-variant in tomosynthesis. The purpose of this research is to classify the tomosynthesis imaging volume into four different categories based on PSF-driven focus criteria. We considered linear tomosynthesis geometry and simple back projection algorithm for reconstruction. The three-dimensional PSF at every pixel in the imaging volume was determined. Intensity profiles were computed for every pixel by integrating the PSF-weighted intensities contained within the line segment defined by the PSF, at each slice. Classification rules based on these intensity profiles were used to categorize image regions. At background and low-frequency pixels, the derived intensity profiles were flat curves with relatively low and high maximum intensities respectively. At in-focus pixels, the maximum intensity of the profiles coincided with the PSF-weighted intensity of the pixel. At out-of-focus pixels, the PSF-weighted intensity of the pixel was always less than the maximum intensity of the profile. We validated our method using human observer classified regions as gold standard. Based on the computed and manual classifications, the mean sensitivity and specificity of the algorithm were 77 ± 8.44% and 91 ± 4.13%, respectively (t = -0.64, p = 0.56, DF = 4). Such a classification algorithm may assist in mitigating out-of-focus blur from tomosynthesis image slices.

  16. Limb Viewing Hyper Spectral Imager (LiVHySI) for airglow measurements onboard YOUTHSAT-1

    NASA Astrophysics Data System (ADS)

    Bisht, R. S.; Hait, A. K.; Babu, P. N.; Sarkar, S. S.; Benerji, A.; Biswas, A.; Saji, A. K.; Samudraiah, D. R. M.; Kirankumar, A. S.; Pant, T. K.; Parimalarangan, T.

    2014-08-01

    The Limb Viewing Hyper Spectral Imager (LiVHySI) is one of the Indian payloads onboard YOUTHSAT (inclination 98.73°, apogee 817 km) launched in April, 2011. The Hyper-spectral imager has been operated in Earth’s limb viewing mode to measure airglow emissions in the spectral range 550-900 nm, from terrestrial upper atmosphere (i.e. 80 km altitude and above) with a line-of-sight range of about 3200 km. The altitude coverage is about 500 km with command selectable lowest altitude. This imaging spectrometer employs a Linearly Variable Filter (LVF) to generate the spectrum and an Active Pixel Sensor (APS) area array of 256 × 512 pixels, placed in close proximity of the LVF as detector. The spectral sampling is done at 1.06 nm interval. The optics used is an eight element f/2 telecentric lens system with 80 mm effective focal length. The detector is aligned with respect to the LVF such that its 512 pixel dimension covers the spectral range. The radiometric sensitivity of the imager is about 20 Rayleigh at noise floor through the signal integration for 10 s at wavelength 630 nm. The imager is being operated during the eclipsed portion of satellite orbits. The integration in the time/spatial domain could be chosen depending upon the season, solar and geomagnetic activity and/or specific target area. This paper primarily aims at describing LiVHySI, its in-orbit operations, quality, potential of the data and its first observations. The images reveal the thermospheric airglow at 630 nm to be the most prominent. These first LiVHySI observations carried out on the night of 21st April, 2011 are presented here, while the variability exhibited by the thermospheric nightglow at O(1D) 630 nm has been described in detail.

  17. High-speed massively parallel scanning

    DOEpatents

    Decker, Derek E [Byron, CA

    2010-07-06

    A new technique for recording a series of images of a high-speed event (such as, but not limited to: ballistics, explosives, laser-induced changes in materials, etc.) is presented. The technique makes use of a lenslet array to take image picture elements (pixels) and concentrate light from each pixel into a spot that is much smaller than the pixel. This array of spots illuminates a detector region (e.g., film, as one embodiment) which is scanned transverse to the light, creating tracks of exposed regions. Each track is a time history of the light intensity for a single pixel. By appropriately configuring the array of concentrated spots with respect to the scanning direction of the detection material, different tracks fit between pixels, and sufficient track lengths are possible, which can be of interest in several high-speed imaging applications.

  18. Hardware Implementation of a Bilateral Subtraction Filter

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way even on computers containing the fastest processors are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9 × 9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9 × 9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) an image pixel pipeline with a 9 × 9-pixel window generator; b) an array of processing elements; c) an adder tree; d) a smoothing-and-delaying unit; and e) a subtraction unit. After each 9 × 9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for its position in the window as well as the pixel value for the central pixel of the window. The absolute difference between these two pixel values is calculated and used as an address in a lookup table. Each processing element has a lookup table, unique for its position in the window, containing the weight coefficients of the Gaussian function for that position. The pixel value is multiplied by the weight, and the outputs of the processing element are the weight and the pixel-value/weight product. The products and weights are fed to the adder tree. The sum of the products and the sum of the weights are fed to the divider, which computes the sum of the products divided by the sum of the weights. The output of the divider is denoted the bilateral smoothed image. The smoothing function is a simple weighted average computed over a 3 × 3 subwindow centered in the 9 × 9 window. After smoothing, the image is delayed by an additional amount of time needed to match the processing time for computing the bilateral smoothed image. The bilateral smoothed image is then subtracted from the 3 × 3 smoothed image to produce the final output. The prototype filter as implemented in a commercially available FPGA processes one pixel per clock cycle. Operation at a clock speed of 66 MHz has been demonstrated, and results of a static timing analysis have been interpreted as suggesting that the clock speed could be increased to as much as 100 MHz.
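    A minimal software sketch of the same data flow (the FPGA uses fixed-point lookup tables for the weights; the Gaussian spatial and range parameters below are illustrative assumptions, and the 3 × 3 smoothing here is a plain box average):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def bilateral_subtraction(image, window=9, sigma_s=3.0, sigma_r=20.0):
        img = image.astype(float)
        pad = window // 2
        padded = np.pad(img, pad, mode="reflect")
        ys, xs = np.mgrid[-pad:pad + 1, -pad:pad + 1]
        spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))     # per-position weight
        out = np.zeros_like(img)
        for r in range(img.shape[0]):
            for c in range(img.shape[1]):
                win = padded[r:r + window, c:c + window]
                rng = np.exp(-((win - img[r, c])**2) / (2 * sigma_r**2))  # |diff|-based weight
                w = spatial * rng
                out[r, c] = (w * win).sum() / w.sum()             # bilateral smoothed pixel
        smoothed_3x3 = uniform_filter(img, size=3)                # simple 3 x 3 average
        return smoothed_3x3 - out                                 # final subtraction output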

  19. The Mammographic Head Demonstrator Developed in the Framework of the “IMI” Project:. First Imaging Tests Results

    NASA Astrophysics Data System (ADS)

    Bisogni, Maria Giuseppina

    2006-04-01

    In this paper we report on the performance and the first imaging test results of a digital mammographic demonstrator based on GaAs pixel detectors. The heart of this prototype is the X-ray detection unit, which is a GaAs pixel sensor read out by the PCC/MEDIPIX1 circuit. Since the active area of the sensor is 1 cm2, 18 detectors have been organized in two staggered rows of nine chips each. To cover the typical mammographic format (18 × 24 cm2) linear scanning is performed by means of a stepper motor. The system is integrated in mammographic equipment comprising the X-ray tube, the bias and data acquisition systems and the PC-based control system. The prototype has been developed in the framework of the Integrated Mammographic Imaging (IMI) project, an industrial research activity aiming to develop innovative instrumentation for morphologic and functional imaging. The project has been supported by the Italian Ministry of Education, University and Research (MIUR) and by five Italian High Tech companies in collaboration with the universities of Ferrara, Roma "La Sapienza", Pisa and the INFN.

  20. On-Orbit Solar Dynamics Observatory (SDO) Star Tracker Warm Pixel Analysis

    NASA Technical Reports Server (NTRS)

    Felikson, Denis; Ekinci, Matthew; Hashmall, Joseph A.; Vess, Melissa

    2011-01-01

    This paper describes the process of identification and analysis of warm pixels in two autonomous star trackers on the Solar Dynamics Observatory (SDO) mission. A brief description of the mission orbit and attitude regimes is discussed and pertinent star tracker hardware specifications are given. Warm pixels are defined and the Quality Index parameter is introduced, which can be explained qualitatively as a manifestation of a possible warm pixel event. A description of the algorithm used to identify warm pixel candidates is given. Finally, analysis of dumps of on-orbit star tracker charge coupled device (CCD) images is presented and an operational plan going forward is discussed. SDO, launched on February 11, 2010, is operated from the NASA Goddard Space Flight Center (GSFC). SDO is in a geosynchronous orbit with a 28.5° inclination. The nominal mission attitude points the spacecraft X-axis at the Sun, with the spacecraft Z-axis roughly aligned with the Solar North Pole. The spacecraft Y-axis completes the triad. In attitude, SDO moves approximately 0.04° per hour, mostly about the spacecraft Z-axis. The SDO star trackers, manufactured by Galileo Avionica, project the images of stars in their 16.4° × 16.4° fields-of-view onto CCD detectors consisting of 512 × 512 pixels. The trackers autonomously identify the star patterns and provide an attitude estimate. Each unit is able to track up to 9 stars. Additionally, each tracker calculates a parameter called the Quality Index, which is a measure of the quality of the attitude solution. Each pixel in the CCD measures the intensity of light, and a warm pixel is defined as having a measurement consistently and significantly higher than the mean background intensity level. A warm pixel should also have lower intensity than a pixel containing a star image and will not move across the field of view as the attitude changes (as would a dim star image). It should be noted that the maximum error introduced in the star tracker attitude solution during suspected warm pixel corruptions is within the specified 3σ attitude error budget requirement of [35, 70, 70] arcseconds. Thus, the star trackers provided attitude accuracy within the specification for SDO. The star tracker images are intentionally defocused so each star image is detected in more than one CCD pixel. The position of each star is calculated as an intensity-weighted average of the illuminated pixels. The exact method of finding the positions is proprietary to the tracker manufacturer. When a warm pixel happens to be in the vicinity of a star, it can corrupt the calculation of the position of that particular star, thereby corrupting the estimate of the attitude.
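    A minimal sketch of the warm-pixel criterion described above, applied to a stack of CCD image dumps, with illustrative thresholds (the mission's actual identification algorithm and the tracker's internal star detection are not reproduced):

    import numpy as np

    def warm_pixel_mask(dumps, k_sigma=5.0, star_level=3000, persist=0.9):
        # dumps: (n_frames, rows, cols) stack of on-orbit CCD dumps.
        # A warm pixel is consistently and significantly above the mean
        # background yet below a star-like level, at a fixed location.
        bg_mean = dumps.mean(axis=(1, 2), keepdims=True)
        bg_std = dumps.std(axis=(1, 2), keepdims=True)
        hot = (dumps > bg_mean + k_sigma * bg_std) & (dumps < star_level)
        return hot.mean(axis=0) >= persist     # flagged in most frames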

  1. A 7 ke-SD-FWC 1.2 e-RMS Temporal Random Noise 128×256 Time-Resolved CMOS Image Sensor With Two In-Pixel SDs for Biomedical Applications.

    PubMed

    Seo, Min-Woong; Kawahito, Shoji

    2017-12-01

    A lock-in pixel CMOS image sensor (CIS) embedded with two in-pixel storage diodes (SDs), offering a large full well capacity (FWC) for a wide signal detection range and low temporal random noise for high sensitivity, has been developed and is presented in this paper. For fast charge transfer from the photodiode to the SDs, a lateral electric field charge modulator (LEFM) is used in the developed lock-in pixel. As a result, the time-resolved CIS achieves a very large SD-FWC of approximately 7 ke-, low temporal random noise of 1.2 e- rms at 20 fps with true correlated double sampling operation, and a fast intrinsic response of less than 500 ps at 635 nm. The proposed imager has an effective pixel array of 128 × 256. The sensor chip is fabricated in a Dongbu HiTek 1P4M 0.11 μm CIS process.

  2. A bio-image sensor for simultaneous detection of multi-neurotransmitters.

    PubMed

    Lee, You-Na; Okumura, Koichi; Horio, Tomoko; Iwata, Tatsuya; Takahashi, Kazuhiro; Hattori, Toshiaki; Sawada, Kazuaki

    2018-03-01

    We report here a new bio-image sensor for simultaneous detection of the spatial and temporal distribution of multiple neurotransmitters. It consists of multiple enzyme-immobilized membranes on a 128 × 128 pixel array with a read-out circuit. Apyrase and acetylcholinesterase (AChE), as selective elements, are used to recognize adenosine 5'-triphosphate (ATP) and acetylcholine (ACh), respectively. To enhance the spatial resolution, hydrogen ion (H+) diffusion barrier layers are deposited on top of the bio-image sensor and their prevention capability is demonstrated. The results are used to design the spacing among enzyme-immobilized pixels and the null H+ sensor to minimize the undesired signal overlap caused by H+ diffusion. Using this bio-image sensor, we can obtain H+ diffusion-independent imaging of concentration gradients of ATP and ACh in real time. The sensing characteristics, such as sensitivity and limit of detection, are determined experimentally. With the proposed bio-image sensor, the possibility exists for customizable monitoring of the activities of various neurochemicals by using different kinds of proton-consuming or proton-generating enzymes. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Angular sensitivity of modeled scientific silicon charge-coupled devices to initial electron direction

    NASA Astrophysics Data System (ADS)

    Plimley, Brian; Coffer, Amy; Zhang, Yigong; Vetter, Kai

    2016-08-01

    Previously, scientific silicon charge-coupled devices (CCDs) with 10.5-μm pixel pitch and a thick (650 μm), fully depleted bulk have been used to measure gamma-ray-induced fast electrons and demonstrate electron track Compton imaging. A model of the response of this CCD was also developed and benchmarked to experiment using Monte Carlo electron tracks. We now examine the trade-off in pixel pitch and electronic noise. We extend our CCD response model to different pixel pitch and readout noise per pixel, including pixel pitch of 2.5 μm, 5 μm, 10.5 μm, 20 μm, and 40 μm, and readout noise from 0 eV/pixel to 2 keV/pixel for 10.5 μm pixel pitch. The CCD images generated by this model using simulated electron tracks are processed by our trajectory reconstruction algorithm. The performance of the reconstruction algorithm defines the expected angular sensitivity as a function of electron energy, CCD pixel pitch, and readout noise per pixel. Results show that our existing pixel pitch of 10.5 μm is near optimal for our approach, because smaller pixels add little new information but are subject to greater statistical noise. In addition, we measured the readout noise per pixel for two different device temperatures in order to estimate the effect of temperature on the reconstruction algorithm performance, although the readout is not optimized for higher temperatures. The noise in our device at 240 K increases the FWHM of angular measurement error by no more than a factor of 2, from 26° to 49° FWHM for electrons between 425 keV and 480 keV. Therefore, a CCD could be used for electron-track-based imaging in a Peltier-cooled device.

  4. 3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue

    USDA-ARS?s Scientific Manuscript database

    Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

  5. Text image authenticating algorithm based on MD5-hash function and Henon map

    NASA Astrophysics Data System (ADS)

    Wei, Jinqiao; Wang, Ying; Ma, Xiaoxue

    2017-07-01

    In order to cater to the evidentiary requirements of text images, this paper proposes a fragile watermarking algorithm based on the MD5 hash function and the Henon map. The algorithm divides a text image into blocks, determines the flippable and non-flippable pixels of every block according to the PSD, generates a watermark from the non-flippable pixels with MD5, encrypts the watermark with the Henon map, and selects the embedding blocks. The simulation results show that the algorithm, which has a good ability in tampering localization, can be used for authentication and forensic verification of the authenticity and integrity of text images.
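    A minimal sketch, with assumed parameters, of two ingredients named in the abstract: an MD5 digest of a block's non-flippable pixels as the watermark, and a Henon-map keystream used to encrypt it by XOR. The block partitioning, the PSD-based flippability test and the embedding step are not reproduced, and the seed values and byte quantization are illustrative.

    import hashlib
    import numpy as np

    def block_watermark(non_flippable_pixels):
        # MD5 digest of the block's non-flippable pixel values (16 bytes).
        return hashlib.md5(np.asarray(non_flippable_pixels).tobytes()).digest()

    def henon_keystream(nbytes, x0=0.1, y0=0.3, a=1.4, b=0.3):
        # Classic Henon iteration, crudely quantized to one byte per step.
        x, y, out = x0, y0, bytearray()
        for _ in range(nbytes):
            x, y = 1.0 - a * x * x + y, b * x
            out.append(int(abs(x) * 1e6) % 256)
        return bytes(out)

    def encrypt_watermark(watermark):
        ks = henon_keystream(len(watermark))
        return bytes(w ^ k for w, k in zip(watermark, ks))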

  6. Pixel-based meshfree modelling of skeletal muscles.

    PubMed

    Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu

    2016-01-01

    This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for construction of simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transition. A multiphase multichannel level set based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods for modeling the human lower leg is demonstrated.

  7. Geometrical superresolved imaging using nonperiodic spatial masking.

    PubMed

    Borkowski, Amikam; Zalevsky, Zeev; Javidi, Bahram

    2009-03-01

    The resolution of every imaging system is limited either by the F-number of its optics or by the geometry of its detection array. The geometrical limitation is caused by lack of spatial sampling points as well as by the shape of every sampling pixel that generates spectral low-pass filtering. We present a novel approach to overcome the low-pass filtering that is due to the shape of the sampling pixels. The approach combines special algorithms together with spatial masking placed in the intermediate image plane and eventually allows geometrical superresolved imaging without relation to the actual shape of the pixels.

  8. A method of object recognition for single pixel imaging

    NASA Astrophysics Data System (ADS)

    Li, Boxuan; Zhang, Wenwen

    2018-01-01

    Computational ghost imaging (CGI), utilizing a single-pixel detector, has been extensively used in many fields. However, in order to achieve a high-quality reconstructed image, a large number of iterations is needed, which limits the flexibility of using CGI in practical situations, especially in the field of object recognition. In this paper, we propose a method utilizing feature matching to identify number objects. In the given system, a recognition accuracy of approximately 90% can be achieved, which provides a new idea for the application of single-pixel imaging in the field of object recognition.

  9. An Investigation into the Spectral Imaging of Hall Thruster Plumes

    DTIC Science & Technology

    2015-07-01

    imaging experiment. It employs a Kodak KAF-3200E 3 megapixel CCD (2184 × 1472 pixels, 6.8 × 6.8 µm pixel size, 14.9 × 10.0 mm active area; SBIG ST camera). The camera was designed for astronomical imaging and thus long exposure

  10. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2003-01-01

    A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.

  11. High-speed on-chip windowed centroiding using photodiode-based CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)

    2004-01-01

    A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
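    A minimal software sketch of the windowed centroid computation described in the two records above (the patented circuit forms the inner products with passive analog elements and a divider; here the same ratios are written out explicitly):

    import numpy as np

    def window_centroid(window):
        # x and y centroids as intensity-weighted coordinate sums divided
        # by the total intensity of the window.
        rows, cols = np.indices(window.shape)
        total = window.sum()
        y = (rows * window).sum() / total
        x = (cols * window).sum() / total
        return x, y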

  12. A 128×96 Pixel Stack-Type Color Image Sensor: Stack of Individual Blue-, Green-, and Red-Sensitive Organic Photoconductive Films Integrated with a ZnO Thin Film Transistor Readout Circuit

    NASA Astrophysics Data System (ADS)

    Seo, Hokuto; Aihara, Satoshi; Watabe, Toshihisa; Ohtake, Hiroshi; Sakai, Toshikatsu; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Hirao, Takashi

    2011-02-01

    A color image was produced by a vertically stacked image sensor with blue (B)-, green (G)-, and red (R)-sensitive organic photoconductive films, each having a thin-film transistor (TFT) array that uses a zinc oxide (ZnO) channel to read out the signal generated in each organic film. The number of pixels of the fabricated image sensor is 128 × 96 for each color, and the pixel size is 100 × 100 µm². The current on/off ratio of the ZnO TFT is over 10⁶, and the B-, G-, and R-sensitive organic photoconductive films show excellent wavelength selectivity. The stacked image sensor can produce a color image at 10 frames per second with a resolution corresponding to the pixel number. This result clearly shows that color separation is achieved without using any conventional color separation optical system such as a color filter array or a prism.

  13. Design and implementation of Gm-APD array readout integrated circuit for infrared 3D imaging

    NASA Astrophysics Data System (ADS)

    Zheng, Li-xia; Yang, Jun-hao; Liu, Zhao; Dong, Huai-peng; Wu, Jin; Sun, Wei-feng

    2013-09-01

    A single-photon-detecting array readout integrated circuit (ROIC) capable of infrared 3D imaging by photon detection and time-of-flight measurement is presented in this paper. InGaAs avalanche photodiodes (APDs), dynamically biased in Geiger operation mode by gate-controlled active quenching circuits (AQCs), are used here. The time-of-flight is accurately measured by a high-accuracy time-to-digital converter (TDC) integrated in the ROIC. For 3D imaging, a frame-rate control technique is applied to the pixels' detection, so that the APD of each pixel is controlled by an individual AQC to sense and quench the avalanche current, providing a digital CMOS-compatible voltage pulse. After each first sense, the detector is reset to wait for the next frame operation. We employ counters with a two-segment coarse-fine architecture, where the coarse conversion is achieved by a 10-bit pseudo-random linear feedback shift register (LFSR) in each pixel and a 3-bit fine conversion is realized by a ring delay line shared by all pixels. The reference clock driving the LFSR counter can be generated within the ring delay-line oscillator or provided by an external clock source. The circuit is designed and implemented in CSMC 0.5 μm standard CMOS technology, and the total chip area is around 2 mm × 2 mm for the 8 × 8 format ROIC with a 150 μm pixel pitch. The simulation results indicate that the time resolution of the proposed ROIC can reach less than 1 ns, and preliminary test results show that the circuit functions correctly.

  14. 1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors.

    PubMed

    Lu, Guo-Neng; Tournier, Arnaud; Roy, François; Deschamps, Benoît

    2009-01-01

    We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as photo-sensing device and source-follower transistor, and can be controlled to store and evacuate charges. Our investigation into this 1T pixel structure includes modeling to obtain analytical description of conversion gain. Model validation has been done by comparing theoretical predictions and experimental results. On the other hand, the 1T pixel structure has been implemented in different configurations, including rectangular-gate and ring-gate designs, and variations of oxidation parameters for the fabrication process. The pixel characteristics are presented and discussed.

  15. A 45 nm Stacked CMOS Image Sensor Process Technology for Submicron Pixel.

    PubMed

    Takahashi, Seiji; Huang, Yi-Min; Sze, Jhy-Jyi; Wu, Tung-Ting; Guo, Fu-Sheng; Hsu, Wei-Cheng; Tseng, Tung-Hsiung; Liao, King; Kuo, Chin-Chia; Chen, Tzu-Hsiang; Chiang, Wei-Chieh; Chuang, Chun-Hao; Chou, Keng-Yu; Chung, Chi-Hsien; Chou, Kuo-Yu; Tseng, Chien-Hsien; Wang, Chuan-Joung; Yaung, Dun-Nien

    2017-12-05

    A submicron pixel's light and dark performance were studied by experiment and simulation. An advanced node technology incorporated with a stacked CMOS image sensor (CIS) is promising in that it may enhance performance. In this work, we demonstrated a low dark current of 3.2 e−/s at 60 °C, an ultra-low read noise of 0.90 e−·rms, a high full well capacity (FWC) of 4100 e−, and blooming of 0.5% in 0.9 μm pixels with a pixel supply voltage of 2.8 V. In addition, the simulation study result of 0.8 μm pixels is discussed.

  16. Observing Bridge Dynamic Deflection in Green Time by Information Technology

    NASA Astrophysics Data System (ADS)

    Yu, Chengxin; Zhang, Guojian; Zhao, Yongqian; Chen, Mingzhi

    2018-01-01

    As traditional surveying methods are of limited use for observing bridge dynamic deflection, information technology is adopted to observe bridge dynamic deflection in green time. Information technology in this study means that digital cameras are used to photograph the bridge in red time to obtain a zero image. Then, a series of successive images are photographed in green time. Deformation point targets are identified and located by the Hough transform. With reference to the control points, the deformation values of these deformation points are obtained by differencing the successive images with the zero image, respectively. Results show that the average measurement accuracies of C0 are 0.46 pixels, 0.51 pixels and 0.74 pixels in the X, Z and comprehensive directions. The average measurement accuracies of C1 are 0.43 pixels, 0.43 pixels and 0.67 pixels in the X, Z and comprehensive directions in these tests. The maximal bridge deflection is 44.16 mm, which is less than 75 mm (the bridge deflection tolerance value). The information technology used in this paper can monitor bridge dynamic deflection and depict deflection trend curves of the bridge in real time. It can provide data support for on-site decisions about bridge structural safety.

  17. Estimation bias from using nonlinear Fourier plane correlators for sub-pixel image shift measurement and implications for the binary joint transform correlator

    NASA Astrophysics Data System (ADS)

    Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.

    2007-09-01

    When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
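    A minimal sketch of the two sub-pixel estimators discussed, applied to one row of correlation values through the peak (the bias-reduction measures proposed in the paper are not reproduced):

    import numpy as np

    def quadratic_offset(cm1, c0, cp1):
        # Parabola through correlation samples at -1, 0, +1 around the
        # integer peak; returns the sub-pixel offset of the vertex.
        denom = cm1 - 2.0 * c0 + cp1
        return 0.0 if denom == 0 else 0.5 * (cm1 - cp1) / denom

    def center_of_mass_offset(corr_row):
        # Offset of the intensity centroid from the window centre.
        idx = np.arange(len(corr_row)) - len(corr_row) // 2
        return float((idx * corr_row).sum() / corr_row.sum())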

  18. A 256×256 low-light-level CMOS imaging sensor with digital CDS

    NASA Astrophysics Data System (ADS)

    Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin

    2016-10-01

    In order to achieve high sensitivity for low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. As the pixel and the column area are highly constrained, it is difficult to implement analog correlated double sampling (CDS) to remove the noise for low-light-level CIS. Therefore, digital CDS is adopted, which performs the subtraction between the reset signal and the pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can be greatly reduced. A 256 × 256 CIS with a CTIA array and digital CDS is implemented in a 0.35 μm CMOS technology. The chip size is 7.7 mm × 6.75 mm, and the pixel size is 15 μm × 15 μm with a fill factor of 20.6%. The measured pixel noise with digital CDS is 24 LSB (RMS value) in dark conditions, which shows a 7.8× reduction compared to the image sensor without digital CDS. Running at 7 fps, this low-light-level CIS can capture recognizable images with the illumination down to 0.1 lux.
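    The off-chip digital CDS amounts to a per-pixel subtraction of the digitized reset frame from the digitized signal frame; a minimal sketch is:

    import numpy as np

    def digital_cds(signal_frame, reset_frame):
        # Subtracting the digitized reset level pixel by pixel reduces the
        # pixel reset noise and part of the column fixed-pattern noise.
        return signal_frame.astype(np.int32) - reset_frame.astype(np.int32)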

  19. Honeycomb-Textured Landforms in Northwestern Hellas Planitia

    NASA Image and Video Library

    2017-11-28

    This image from NASA's Mars Reconnaissance Orbiter (MRO) targets a portion of a group of honeycomb-textured landforms in northwestern Hellas Planitia, which is part of one of the largest and most ancient impact basins on Mars. In a larger Context Camera image, the individual "cells" are about 5 to 10 kilometers wide. With HiRISE, we see much greater detail of these cells, like sand ripples that indicate wind erosion has played some role here. We also see distinctive exposures of bedrock that cut across the floor and wall of the cells. These resemble dykes, which are usually formed by volcanic activity. Additionally, the lack of impact craters suggests that the landscape, along with these features, have been recently reshaped by a process, or number of processes that may even be active today. Scientists have been debating how these honeycombed features are created, theorizing from glacial events, lake formation, volcanic activity, and tectonic activity, to wind erosion. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 53.8 centimeters (21.2 inches) per pixel (with 2 x 2 binning); objects on the order of 161 centimeters (23.5 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22118

  20. Characterization study of an intensified complementary metal-oxide-semiconductor active pixel sensor.

    PubMed

    Griffiths, J A; Chen, D; Turchetta, R; Royle, G J

    2011-03-01

    An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast-light decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10³ to 1.6 at a gain of 3.93 × 10¹. The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10³, dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.

  1. Characterization study of an intensified complementary metal-oxide-semiconductor active pixel sensor

    NASA Astrophysics Data System (ADS)

    Griffiths, J. A.; Chen, D.; Turchetta, R.; Royle, G. J.

    2011-03-01

    An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast-light decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10³ to 1.6 at a gain of 3.93 × 10¹. The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10³, dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.

  2. Adaptive pseudo-color enhancement method of weld radiographic images based on HSI color space and self-transformation of pixels.

    PubMed

    Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong

    2017-06-01

    The radiographic testing (RT) images of a steam turbine manufacturing enterprise have the characteristics of low gray level, low contrast, and blurriness, which lead to substandard image quality. Moreover, such images are not conducive to defect detection and evaluation by the human eye. This study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels to solve these problems. First, the pixel self-transformation is applied to the pixel values of the original RT image. The function values after the pixel self-transformation are assigned to the HSI components in the HSI color space. Thereafter, the average intensity of the enhanced image is adaptively adjusted to 0.5 according to the intensity of the original image. Moreover, the hue range and interval can be adjusted according to personal preference. Finally, the HSI components after the adaptive adjustment can be transformed for display in the red, green, and blue color space. Numerous weld radiographic images from a steam turbine manufacturing enterprise are used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method can improve image definition and make the target and background areas distinct in weld radiographic images. The enhanced images will be more conducive to defect recognition. Moreover, images enhanced using the proposed method conform to the visual properties of the human eye, and the effectiveness of defect recognition and evaluation can be ensured.

  3. Adaptive pseudo-color enhancement method of weld radiographic images based on HSI color space and self-transformation of pixels

    NASA Astrophysics Data System (ADS)

    Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong

    2017-06-01

    The radiographic testing (RT) images of a steam turbine manufacturing enterprise have the characteristics of low gray level, low contrast, and blurriness, which lead to substandard image quality. Moreover, such images are not conducive to defect detection and evaluation by the human eye. This study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels to solve these problems. First, the pixel self-transformation is applied to the pixel values of the original RT image. The function values after the pixel self-transformation are assigned to the HSI components in the HSI color space. Thereafter, the average intensity of the enhanced image is adaptively adjusted to 0.5 according to the intensity of the original image. Moreover, the hue range and interval can be adjusted according to personal preference. Finally, the HSI components after the adaptive adjustment can be transformed for display in the red, green, and blue color space. Numerous weld radiographic images from a steam turbine manufacturing enterprise are used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method can improve image definition and make the target and background areas distinct in weld radiographic images. The enhanced images will be more conducive to defect recognition. Moreover, images enhanced using the proposed method conform to the visual properties of the human eye, and the effectiveness of defect recognition and evaluation can be ensured.
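    A minimal sketch of the pipeline described above, with an assumed (illustrative) self-transformation function and with HSV used as a stand-in for the HSI color space; the paper's actual transformation and saturation handling are not reproduced:

    import numpy as np
    from matplotlib.colors import hsv_to_rgb   # HSV as a stand-in for HSI

    def pseudo_color(gray, hue_lo=0.0, hue_hi=0.7):
        g = gray.astype(float) / max(gray.max(), 1)
        t = np.sin(0.5 * np.pi * g)                           # assumed self-transformation
        intensity = np.clip(t + (0.5 - t.mean()), 0.0, 1.0)   # shift mean intensity to 0.5
        hue = hue_lo + (hue_hi - hue_lo) * t                  # adjustable hue range/interval
        sat = np.full_like(t, 0.9)                            # fixed saturation (assumption)
        return hsv_to_rgb(np.stack([hue, sat, intensity], axis=-1))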

  4. Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.

    PubMed

    Saadia, Ayesha; Rashdi, Adnan

    2016-12-01

    Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects the quality of these images and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter has been proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel, and fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels present in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of an image. In stage 2, the resultant image is further improved by a fractional order integration filter. The effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total Variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, different metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using MATLAB. Simulation results of artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising of echocardiographic images is effective in noise suppression/removal. It not only removes noise from an image but also preserves edges and other important structures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
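    A minimal sketch of the stage-1 idea only, with an assumed Gaussian-style fuzzy membership (the paper's membership functions are not given in the abstract) and without the stage-2 fractional integration filter:

    import numpy as np

    def fuzzy_weighted_mean(image, spread=20.0):
        # Each pixel is replaced by the weighted mean of its 3 x 3 window;
        # weights decrease as a neighbor differs from the local median, so
        # edges and other strong structures are preserved.
        img = image.astype(float)
        padded = np.pad(img, 1, mode="reflect")
        out = np.zeros_like(img)
        for r in range(img.shape[0]):
            for c in range(img.shape[1]):
                win = padded[r:r + 3, c:c + 3]
                ref = np.median(win)
                w = np.exp(-((win - ref) ** 2) / (2.0 * spread ** 2))  # fuzzy weights
                out[r, c] = (w * win).sum() / w.sum()
        return out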

  5. Mapping of major volcanic structures on Pavonis Mons in Tharsis, Mars

    NASA Astrophysics Data System (ADS)

    Orlandi, Diana; Mazzarini, Francesco; Pagli, Carolina; Pozzobon, Riccardo

    2017-04-01

    Pavonis Mons, with its 300 km diameter and 14 km height, is one of the largest volcanoes on Mars. It rests on a topographic high called the Tharsis rise and is located in the centre of a SW-NE trending row of volcanoes, including Arsia and Ascraeus Montes. In this study we mapped and analyzed the volcanic and tectonic structures of Pavonis Mons in order to understand its formation and the relationship between magmatic and tectonic activity. We used the ArcGIS mapping software and a vast set of high-resolution topographic and multi-spectral images including CTX (6 m/pixel) as well as HRSC (12.5 m/pixel) and HiRISE (0.25 m/pixel) mosaic images. Furthermore, we used MOLA (463 m/pixel in the MOLA MEGDR gridded topographic data), THEMIS thermal inertia (IR-day, 100 m/pixel) and THEMIS (IR-night, 100 m/pixel) global image mosaics to map structures at the regional scale. We found a wide range of structures including ring dykes, wrinkle ridges, pit chains, lava flows, lava channels, fissures and depressions that we preliminarily interpreted as coalescent lava tubes. Many sinuous rilles have eroded Pavonis' slopes and culminate in lava aprons, similar to alluvial fans. South of Pavonis Mons we also identified a series of volcanic vents mainly aligned along a SW-NE trend. Displacements across recent crater rims and volcanic deposits (strike-slip faults and wrinkle ridges) have been documented, suggesting that, at least during the most recent volcanic phases, the regional tectonics has contributed to shaping the morphology of Pavonis. The kinematics of the mapped structures is consistent with an ENE-SSW direction of the maximum horizontal stress, suggesting a possible interaction with nearby Valles Marineris. Our study provides new morphometric analysis of volcano-tectonic features that can be used to depict an evolutionary history for the Pavonis volcano.

  6. An Active Fire Temperature Retrieval Model Using Hyperspectral Remote Sensing

    NASA Astrophysics Data System (ADS)

    Quigley, K. W.; Roberts, D. A.; Miller, D.

    2017-12-01

    Wildfire is both an important ecological process and a dangerous natural threat to humans. In situ measurements of wildfire temperature are notoriously difficult to collect due to dangerous conditions. Imaging spectrometry data have the potential to provide some of the most accurate and most highly temporally resolved active-fire temperature retrievals for monitoring and modeling. Recent studies on fire temperature retrieval have used Multiple Endmember Spectral Mixture Analysis applied to Airborne Visible / Infrared Imaging Spectrometer (AVIRIS) bands to model fire temperatures within the regions marked as containing fire, but these methods are less effective at coarser spatial resolutions, as linear mixing methods are degraded by saturation within the pixel. The assumption of a distribution of temperatures within pixels allows us to model pixels with an effective maximum and likely minimum temperature. This assumption allows a more robust approach to modeling temperature at different spatial scales. In this study, instrument-corrected radiance is forward-modeled for different ranges of temperatures, with weighted temperatures from an effective maximum temperature to a likely minimum temperature contributing to the total radiance of the modeled pixel. The effective maximum fire temperature is estimated by minimizing the Root Mean Square Error (RMSE) between modeled and measured radiance. The model was tested using AVIRIS data collected over the 2016 Sherpa Fire in Santa Barbara County, California. While only in situ experimentation would be able to confirm active fire temperatures, the fit of the data to the modeled radiance can be assessed, as can the similarity of the temperature distributions seen at different spatial resolutions. Results show that this model improves upon current modeling methods, producing similar effective temperatures at multiple spatial scales as well as similar modeled area distributions of those temperatures.
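
    The forward-modeling step described above can be sketched as follows. This is a minimal illustration, assuming a simple Planck blackbody radiator for each sub-pixel temperature, a hypothetical linear weighting between the minimum and maximum temperatures, and a grid search for the RMSE minimization; the published model's actual temperature distribution, band selection and instrument corrections are not reproduced.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck constant, speed of light, Boltzmann constant (SI)

def planck(wl_m, T):
    """Blackbody spectral radiance at wavelength wl_m (metres) and temperature T (K)."""
    return 2 * H * C**2 / wl_m**5 / (np.exp(H * C / (wl_m * KB * T)) - 1.0)

def model_radiance(wl_m, t_max, t_min=500.0, n_bins=20):
    """Forward-model pixel radiance as a weighted blend of sub-pixel temperatures.

    The linear weighting from t_min to t_max is an illustrative assumption; units and
    instrument calibration of the measured radiance are glossed over in this sketch.
    """
    temps = np.linspace(t_min, t_max, n_bins)
    weights = np.linspace(1.0, 0.1, n_bins)       # hypothetical area fractions
    weights /= weights.sum()
    return np.sum(weights[:, None] * planck(wl_m[None, :], temps[:, None]), axis=0)

def fit_max_temperature(wl_m, measured, candidates=np.arange(600.0, 1600.0, 10.0)):
    """Grid-search the effective maximum temperature that minimises the RMSE."""
    rmse = [np.sqrt(np.mean((model_radiance(wl_m, t) - measured) ** 2)) for t in candidates]
    return candidates[int(np.argmin(rmse))]
```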

  7. Active pixel sensors: the sensor of choice for future space applications?

    NASA Astrophysics Data System (ADS)

    Leijtens, Johan; Theuwissen, Albert; Rao, Padmakumar R.; Wang, Xinyang; Xie, Ning

    2007-10-01

    It is generally known that active pixel sensors (APS) have a number of advantages over CCD detectors when it comes to cost for mass production, power consumption and ease of integration. Nevertheless, most space applications still use CCD detectors because they tend to give better performance and have a successful heritage. In this respect a change may be at hand with the advent of deep sub-micron processed APS imagers (< 0.25-micron feature size). Measurements performed on test structures at the University of Delft have shown that the imagers are very radiation tolerant even if made in a standard process without the use of special design rules. Furthermore, it was shown that the 1/f noise associated with deep sub-micron imagers is reduced compared to previous-generation APS imagers due to the improved quality of the gate oxides. Considering that end-of-life performance will have to be guaranteed, that only a limited budget for adding shielding metal will be available for most applications, and that lower power operation is always seen as a positive characteristic in space applications, deep sub-micron APS imagers seem to have a number of advantages over CCDs that will probably cause them to replace CCDs in those applications where radiation tolerance and low power operation are important.

  8. A 16 x 16-pixel retinal-prosthesis vision chip with in-pixel digital image processing in a frequency domain by use of a pulse-frequency-modulation photosensor

    NASA Astrophysics Data System (ADS)

    Kagawa, Keiichiro; Furumiya, Tetsuo; Ng, David C.; Uehara, Akihiro; Ohta, Jun; Nunoshita, Masahiro

    2004-06-01

    We are exploring the application of pulse-frequency-modulation (PFM) photosensors to retinal prosthesis for the blind because the behavior of PFM photosensors is similar to that of retinal ganglion cells, which transmit visual data from the retina toward the brain. We have developed retinal-prosthesis vision chips that reshape the output pulses of the PFM photosensor into biphasic current pulses suitable for electric stimulation of retinal cells. In this paper, we introduce image-processing functions into the pixel circuits. We have designed a 16x16-pixel retinal-prosthesis vision chip with several kinds of in-pixel digital image processing such as edge enhancement, edge detection, and low-pass filtering. This chip is a prototype demonstrator of a retinal-prosthesis vision chip applicable to in-vitro experiments. By utilizing the features of the PFM photosensor, we propose a new scheme to implement the above image processing in a frequency domain using digital circuitry. The intensity of incident light is converted to a 1-bit data stream by a PFM photosensor, and then image processing is executed by a 1-bit image processor based on the joining and annihilation of pulses. The retinal-prosthesis vision chip is composed of four blocks: a pixel array block, a row-parallel stimulation current amplifier array block, a decoder block, and a base current generator block. All blocks except the PFM photosensors and stimulation current amplifiers are implemented as digital circuitry, which contributes to robustness against noise and power-supply fluctuation. With our vision chip, we can control the photosensitivity and the intensity and duration of the stimulus biphasic currents, which is necessary for a retinal-prosthesis vision chip. The designed dynamic range is more than 100 dB. The amplitude of the stimulus current is given by a base current, which is common to all pixels, multiplied by a value stored in an amplitude memory in each pixel. The base currents of the negative and positive pulses are common to all pixels and are set in a linear manner, whereas the value in the amplitude memory of each pixel is represented in an exponential manner to cover a wide range. The stimulus currents are output column by column by scanning. The pixel size is 240 μm x 240 μm. Each pixel has a bonding pad on which a stimulus electrode is to be formed. We will show the experimental results of the test chip.

  9. A Different Way to Visualize Solar Changes

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-07-01

    This time series of SDO images of an active region shows coronal dimming as well as flares. These images can be combined into a minimum-value persistence map (bottom panel) that better reveals the entire dimming region. [Adapted from Thompson & Young 2016] What if there were a better way to analyze a comet's tail, the dimming of the Sun's surface, or the path of material in a bright solar eruption? A recent study examines a new technique for looking at these evolving features. Mapping Evolving Features: Sometimes interesting advances in astronomy come from simple, creative new approaches to analyzing old data. Such is the case in a new study by Barbara Thompson and Alex Young (NASA Goddard Space Flight Center), which introduces a technique called persistence mapping to better examine solar phenomena whose dynamic nature makes them difficult to analyze. What is a persistence map? Suppose you have a set of N images of the same spatial region, with each image taken at a different time. To create a persistence map of these images, you combine the set by retaining only the most extreme (for example, the maximum) value for each pixel, discarding the remaining N-1 values for that pixel. Persistence mapping is especially useful for bringing out rare or intermittent features that would often be washed out if the images were combined in a sum or average instead. Thompson and Young describe three example cases where persistence mapping brings something new to the table. [Figure: Top: single SDO image of Comet Lovejoy. Center: 17 minutes of SDO images combined in a persistence map, in which the structure of the tail is clearly visible. Bottom: for comparison, the average pixel value for the same sequence of images. Thompson & Young 2016] A Comet's Tail: As Comet Lovejoy passed through the solar corona in 2011, solar physicists analyzed extreme-ultraviolet images of its tail because the motion of the tail particles reveals information about the local coronal magnetic field. Past analyses have averaged or summed images of the comet in orbit to examine its tail, but a persistence map of the maximum pixel values far more clearly shows the striations within the tail that reveal the directions of the local magnetic field lines. Dimming of the Sun: Dimming of the Sun's corona near active regions tells us about the material that is evacuated during coronal mass ejections. This process can be complex: regions dim at different times, and flares sometimes hide the dimming, making it difficult to observe. But understanding the entire dimming region is necessary to infer the total mass loss and the complete magnetic footprint of a gradual eruption from the Sun's surface. [Figure: SDO and STEREO-A images of a prominence eruption; tracking the falling material is difficult due to the complex background. Thompson & Young 2016] Creating a persistence map of minimum pixel values achieves this and also neatly sidesteps the problem of flares hiding the dimming regions, since the bright pixels are discarded. In the authors' example, a persistence map estimates 50% more mass loss for a coronal dimming event than the traditional image analysis method, and it reveals connections between dimming regions that were previously missed. An Erupting Prominence: The authors' final example is of falling prominence material after a solar eruption, seen in absorption against the bright corona. They show that a persistence map of minimum pixel values constructed over the time the material falls (see the cover image) allows the material's paths to be tracked despite the evolving background behind it. Tracing these trajectories provides information about the local magnetic field. Thompson and Young's examples indicate that persistence mapping clearly provides new information in some cases of intermittent or slowly evolving solar phenomena. It will be interesting to see where else this technique can be applied! Citation: B. J. Thompson and C. A. Young 2016 ApJ 825 27. doi:10.3847/0004-637X/825/1/27
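
    The per-pixel operation behind persistence mapping is simple enough to state directly in code. The sketch below assumes a NumPy stack of co-registered frames and keeps either the maximum or the minimum value of each pixel across the sequence.

```python
import numpy as np

def persistence_map(image_stack, mode="max"):
    """Collapse a (N, rows, cols) time series into a persistence map.

    Keeping the per-pixel extreme (maximum for bright, intermittent features such
    as a comet tail; minimum for coronal dimming) rather than the mean prevents
    rare events from being washed out.
    """
    stack = np.asarray(image_stack)
    return stack.max(axis=0) if mode == "max" else stack.min(axis=0)

# Example: minimum-value persistence map of a dimming sequence
# dimming_map = persistence_map(sdo_frames, mode="min")
```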

  10. Thematic mapper studies of central Andean volcanoes

    NASA Technical Reports Server (NTRS)

    Francis, Peter W.

    1987-01-01

    A series of false color composite images covering the volcanic cordillera was generated. Each image covers 45 km (1536 x 1536 pixels) and was constructed using bands 7, 4, and 2 of the Thematic Mapper (TM) data. Approximately 100 images have been prepared to date. A set of LANDSAT Multispectral Scanner (MSS) images was used in conjunction with the TM hardcopy to compile a computer database of all volcanic structures in the Central Andean province. Over 500 individual structures were identified. About 75 major volcanoes were identified as active or potentially active. A pilot study was begun combining Shuttle Imaging Radar (SIR) data with TM for a test area in northern Chile and Bolivia.

  11. Fast Fiber-Coupled Imaging Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas

    HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full-scale 1024-pixel, 100 Megaframes/s fiber-coupled camera with 12 or 14 bits and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber-optically-coupled imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100-pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority than increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. The cost per channel was $53.31, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed, constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera's performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first-generation prototype system. We experimentally observed backlit high-speed fan blades in initial camera testing and then followed that with full movies and streak images of free-flowing high-speed plasma jets (at 30-50 km/s). Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events with existing techniques is inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next-generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024-channel camera at its own facility, and a second plasma-community beta test track, in which selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full-scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with the test-pixel deployment sites.

  12. Tiled fuzzy Hough transform for crack detection

    NASA Astrophysics Data System (ADS)

    Vaheesan, Kanapathippillai; Chandrakumar, Chanjief; Mathavan, Senthan; Kamal, Khurram; Rahman, Mujib; Al-Habaibeh, Amin

    2015-04-01

    Surface cracks can be the bellwether of the failure of any component under loading, as they indicate the component's fracture due to stress and usage. For this reason, crack detection is indispensable for the condition monitoring and quality control of road surfaces. Pavement images have high levels of intensity variation and texture content, which makes crack detection difficult. Moreover, shallow cracks result in very low contrast image pixels, making their detection difficult. For these reasons, research on pavement crack detection remains active even after years of work. In this paper, the fuzzy Hough transform is employed for the first time to detect cracks on any surface. The contribution of texture pixels to the accumulator array is reduced by using a tiled version of the Hough transform. A precision of 78% and a recall of 72% are obtained for an image set acquired with an industrial imaging system containing very low contrast cracking. When only high-contrast crack segments are considered, the values rise to the mid-to-high 90% range.

  13. Experimental investigation of the 2D ion beam profile generated by an ESI octopole-QMS system.

    PubMed

    Syed, Sarfaraz U A H; Eijkel, Gert B; Kistemaker, Piet; Ellis, Shane; Maher, Simon; Smith, Donald F; Heeren, Ron M A

    2014-10-01

    In this paper, we have employed an ion imaging approach to investigate the behavior of ions exiting a quadrupole mass spectrometer (QMS) system that employs a radio frequency octopole ion guide before the QMS. An in-vacuum active pixel detector (Timepix) is employed at the exit of the QMS to image the ion patterns. The detector assembly simultaneously records the ion impact position and the number of ions per pixel in every measurement frame. The transmission characteristics of the ion beam exiting the QMS are studied using this imaging detector under different operating conditions. Experimental results confirm that the spatial distribution of ions exiting the QMS is heavily influenced by the ion injection conditions. Furthermore, ion images from Timepix measurements of protein standards demonstrate the capability to enhance the quality of the mass spectral information and provide detailed insight into the spatial distribution of ions of different charge states (and hence different m/z) exiting the QMS.

  14. High-polarization-discriminating infrared detection using a single quantum well sandwiched in plasmonic micro-cavity.

    PubMed

    Li, Qian; Li, ZhiFeng; Li, Ning; Chen, XiaoShuang; Chen, PingPing; Shen, XueChu; Lu, Wei

    2014-09-11

    Polarimetric imaging has proved its value in medical diagnostics, bionics, remote sensing, astronomy, and many other fields. Pixel-level, solid-state, monolithically integrated polarimetric imaging photodetectors are the trend for infrared polarimetric imaging devices, and detectors with high polarization discrimination are critical for better polarimetric imaging performance. Here we demonstrate the high infrared-light polarization-resolving capability of a quantum well (QW) detector in a hybrid structure of a single QW and a plasmonic micro-cavity, which uses the QW as the active structure in the near-field regime of the plasmon-enhanced cavity; photoelectric conversion in such a plasmonic micro-cavity has been realized. The detector's extinction ratio reaches 65 at a wavelength of 14.7 μm, about a six-fold enhancement for this type of pixel-level, long-wave infrared polarization photodetector. The enhancement mechanism is attributed to artificial plasmonic modulation of optical propagation and distribution in the plasmonic micro-cavities.

  15. High-Polarization-Discriminating Infrared Detection Using a Single Quantum Well Sandwiched in Plasmonic Micro-Cavity

    PubMed Central

    Li, Qian; Li, ZhiFeng; Li, Ning; Chen, XiaoShuang; Chen, PingPing; Shen, XueChu; Lu, Wei

    2014-01-01

    Polarimetric imaging has proved its value in medical diagnostics, bionics, remote sensing, astronomy, and many other fields. Pixel-level, solid-state, monolithically integrated polarimetric imaging photodetectors are the trend for infrared polarimetric imaging devices, and detectors with high polarization discrimination are critical for better polarimetric imaging performance. Here we demonstrate the high infrared-light polarization-resolving capability of a quantum well (QW) detector in a hybrid structure of a single QW and a plasmonic micro-cavity, which uses the QW as the active structure in the near-field regime of the plasmon-enhanced cavity; photoelectric conversion in such a plasmonic micro-cavity has been realized. The detector's extinction ratio reaches 65 at a wavelength of 14.7 μm, about a six-fold enhancement for this type of pixel-level, long-wave infrared polarization photodetector. The enhancement mechanism is attributed to artificial plasmonic modulation of optical propagation and distribution in the plasmonic micro-cavities. PMID:25208580

  16. Exploration of maximum count rate capabilities for large-area photon counting arrays based on polycrystalline silicon thin-film transistors

    NASA Astrophysics Data System (ADS)

    Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua

    2016-03-01

    Pixelated photon counting detectors with energy discrimination capabilities are of increasing clinical interest for x-ray imaging. Such detectors, presently in clinical use for mammography and under development for breast tomosynthesis and spectral CT, usually employ in-pixel circuits based on crystalline silicon - a semiconductor material that is generally not well-suited for economic manufacture of large-area devices. One interesting alternative semiconductor is polycrystalline silicon (poly-Si), a thin-film technology capable of creating very large-area, monolithic devices. Similar to crystalline silicon, poly-Si allows implementation of the type of fast, complex, in-pixel circuitry required for photon counting - operating at processing speeds that are not possible with amorphous silicon (the material currently used for large-area, active matrix, flat-panel imagers). The pixel circuits of two-dimensional photon counting arrays are generally comprised of four stages: amplifier, comparator, clock generator and counter. The analog front-end (in particular, the amplifier) strongly influences performance and is therefore of interest to study. In this paper, the relationship between incident and output count rate of the analog front-end is explored under diagnostic imaging conditions for a promising poly-Si based design. The input to the amplifier is modeled in the time domain assuming a realistic input x-ray spectrum. Simulations of circuits based on poly-Si thin-film transistors are used to determine the resulting output count rate as a function of input count rate, energy discrimination threshold and operating conditions.

  17. Construction of pixel-level resolution DEMs from monocular images by shape and albedo from shading constrained with low-resolution DEM

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Liu, Wai Chung; Grumpe, Arne; Wöhler, Christian

    2018-06-01

    Lunar Digital Elevation Models (DEMs) are important for successful lunar landing and exploration missions. Lunar DEMs are typically generated by photogrammetry or laser altimetry. Photogrammetric methods require multiple stereo images of the region of interest and may not be applicable where stereo coverage is not available. In contrast, reflectance-based shape reconstruction techniques, such as shape from shading (SfS) and shape and albedo from shading (SAfS), use monocular images to generate DEMs with pixel-level resolution. We present a novel hierarchical SAfS method that refines a lower-resolution DEM to pixel-level resolution given a monocular image with a known light source. We also estimate the corresponding pixel-wise albedo map in the process and use it to regularize the pixel-level shape reconstruction constrained by the low-resolution DEM. In this study, a Lunar-Lambertian reflectance model is applied to estimate the albedo map. Experiments were carried out using monocular images from the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC), with spatial resolutions of 0.5-1.5 m per pixel, constrained by the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM), with a spatial resolution of 60 m. The results indicate that local details are well recovered by the proposed algorithm with plausible albedo estimation. The low-frequency topographic consistency depends on the quality of the low-resolution DEM and the resolution difference between the image and the low-resolution DEM.

  18. Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi

    2016-10-01

    In this paper, we report on a comparison of single-pixel imaging using the Hadamard transform (HT) and ghost imaging (GI) from the viewpoint of visibility under weak-light conditions. To compare the two methods, we discuss image quality based on experimental results and numerical analysis. To detect images with the HT method, we illuminate with Hadamard-pattern masks and recover the image by an orthogonal transform. The GI method, on the other hand, detects images by illuminating random patterns and performing a correlation measurement. To compare the two methods under weak light, we controlled the illumination intensity of a DMD projector to a signal-to-noise ratio of about 0.1. Although the processing speed of the HT method was faster than that of GI, the GI method has an advantage for detection under weak-light conditions. An essential difference between the HT and GI methods is discussed with respect to the reconstruction process. Finally, we also show a typical application of single-pixel imaging, namely hyperspectral imaging using dual optical frequency combs. The optical setup consists of two fiber lasers, a spatial light modulator for generating pattern illumination, and a single-pixel detector. We successfully detect hyperspectral images in the range from 1545 to 1555 nm at 0.01 nm resolution.
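
    A minimal sketch of Hadamard-transform single-pixel imaging is given below, assuming ±1 Hadamard patterns and a single bucket detector; practical systems typically realize the patterns as shifted 0/1 masks on the DMD and take differential measurements, a detail omitted here.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_single_pixel(scene, noise_sigma=0.0, rng=np.random.default_rng(0)):
    """Single-pixel imaging sketch with Hadamard-pattern illumination.

    The flattened scene x (length N, with N a power of two) is probed by the rows
    of a Hadamard matrix H; the single-pixel detector records y = H @ x (plus
    noise), and the image is recovered with the inverse transform x = H.T @ y / N.
    """
    x = np.asarray(scene, dtype=float).ravel()
    n = x.size                      # must be a power of two for scipy's hadamard()
    H = hadamard(n).astype(float)
    y = H @ x + rng.normal(0.0, noise_sigma, n)   # one detector reading per pattern
    x_rec = H.T @ y / n
    return x_rec.reshape(np.shape(scene))
```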

  19. Soccer player recognition by pixel classification in a hybrid color space

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, Nicolas; Macaire, Ludovic; Postaire, Jack-Gerard

    1997-08-01

    Soccer is a very popular sport all over the world. Coaches and sports commentators need accurate information about soccer games, especially about the players' behavior. This information can be gathered by inspectors who watch the match and manually report the actions of the players involved in the principal phases of the game. Generally, these inspectors focus their attention on the few players standing near the ball and do not report the motion of all the other players. It therefore seems desirable to design a system which automatically tracks all the players in real time. That is why we propose to automatically track each player through the successive color images of sequences acquired by a fixed color camera. Each player present in the image is modeled by an active contour model, or snake. When, during a match, one player is hidden by another, the snakes which track these two players merge, and it becomes impossible to track the players unless the snakes are interactively re-initialized. Fortunately, in most cases the two players do not belong to the same team. That is why we present an algorithm which recognizes the teams of the players by pixel classification. Pixels representing the soccer ground must be withdrawn before considering the players themselves; to eliminate these pixels, the color characteristics of the ground are determined interactively. In a second step, dealing with windows containing only one player of one team, the color features which yield the best discrimination between the two teams are selected. Thanks to these color features, the pixels associated with the players of the two teams form two separate clusters in a color space. In fact, there are many color representation systems, and it is interesting to evaluate the features which provide the best separation between the two classes of pixels according to the players' soccer kit. Finally, the classification process for image segmentation is based on the three most discriminating color features, which define the coordinates of each pixel in a 'hybrid color space.' Thanks to this hybrid color representation, each pixel can be assigned to one of the two classes by minimum-distance classification.
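
    The feature-selection and minimum-distance classification steps can be sketched as below. The Fisher-like separability ratio used to rank the candidate colour components is one common choice and is an assumption here; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def select_discriminating_features(team_a, team_b, k=3):
    """Rank candidate colour components by a Fisher-like separability ratio.

    team_a, team_b: (n_samples, n_features) arrays of pixel values sampled from
    players of each team, expressed in several candidate colour systems. The ratio
    used here (between-class over within-class variance) is one common criterion;
    the paper's exact criterion may differ.
    """
    mu_a, mu_b = team_a.mean(axis=0), team_b.mean(axis=0)
    var_a, var_b = team_a.var(axis=0), team_b.var(axis=0)
    score = (mu_a - mu_b) ** 2 / (var_a + var_b + 1e-9)
    return np.argsort(score)[::-1][:k]            # indices forming the hybrid colour space

def classify_min_distance(pixels, mu_a, mu_b):
    """Assign each pixel (rows of `pixels`, already in the hybrid space) to the nearer class mean."""
    d_a = np.linalg.norm(pixels - mu_a, axis=1)
    d_b = np.linalg.norm(pixels - mu_b, axis=1)
    return np.where(d_a <= d_b, 0, 1)             # 0 = team A, 1 = team B
```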

  20. A 65k pixel, 150k frames-per-second camera with global gating and micro-lenses suitable for fluorescence lifetime imaging

    NASA Astrophysics Data System (ADS)

    Burri, Samuel; Powolny, François; Bruschini, Claudio E.; Michalet, Xavier; Regazzoni, Francesco; Charbon, Edoardo

    2014-05-01

    This paper presents our work on a 65k-pixel single-photon avalanche diode (SPAD) based imaging sensor realized in a 0.35 μm standard CMOS process. At a resolution of 512 by 128 pixels, the sensor is read out in 6.4 μs to deliver over 150k monochrome frames per second. The individual pixel has a size of 24 μm² and contains the SPAD with 12T quenching and gating circuitry along with a memory element. The gating signals are distributed across the chip through a balanced tree to minimize the signal skew between pixels. The pixel array is row-addressable and data are sent out of the chip on 128 parallel lines at a frequency of 80 MHz. The system is controlled by an FPGA which generates the gating and readout signals and can be used for arbitrary real-time computation on the frames from the sensor. The communication protocol between the camera and a conventional PC is USB 2.0. The active area of the chip is 5% and can be significantly improved with the application of a micro-lens array. A micro-lens array for use with collimated light has been designed and its performance is reviewed in the paper. Among other high-speed phenomena, the gating circuitry, capable of generating illumination periods shorter than 5 ns, can be used for Fluorescence Lifetime Imaging (FLIM). In order to measure the lifetime of fluorophores excited by a picosecond laser, the sensor's illumination period is synchronized with the excitation laser pulses. A histogram of the photon arrival times relative to the excitation is then constructed by counting the photons arriving during the sensitive time for several positions of the illumination window. The histogram for each pixel is then transferred to a computer where software routines extract the lifetime at each location with an accuracy better than 100 ps. We show results for fluorescence lifetime measurements using different fluorophores with lifetimes ranging from 150 ps to 5 ns.
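
    The lifetime extraction from the per-pixel gated histograms can be illustrated with a simple mono-exponential fit. The log-linear least-squares estimator below is an assumption for illustration; the camera's actual software routines may fit differently (e.g., accounting for the instrument response).

```python
import numpy as np

def fit_lifetime(gate_positions_ns, counts):
    """Estimate a mono-exponential fluorescence lifetime from a gated-count histogram.

    gate_positions_ns: start time of each illumination window relative to the laser pulse.
    counts: photons accumulated at each window position (one histogram per pixel).
    """
    t = np.asarray(gate_positions_ns, dtype=float)
    c = np.asarray(counts, dtype=float)
    mask = c > 0                                  # log() requires positive counts
    slope, _ = np.polyfit(t[mask], np.log(c[mask]), 1)
    return -1.0 / slope                           # lifetime tau in ns

# Example with synthetic data: a 2 ns decay sampled every 0.5 ns
t = np.arange(0.0, 10.0, 0.5)
counts = np.random.default_rng(1).poisson(1000 * np.exp(-t / 2.0))
print(fit_lifetime(t, counts))                    # approximately 2 ns
```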

  1. Optimization method of superpixel analysis for multi-contrast Jones matrix tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki

    2017-02-01

    Local statistics are widely utilized for quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a trade-off between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes which preserve the tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). This method forms superpixels by clustering image pixels in a 6-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features). All image pixels are clustered based on their spatial proximity and optical feature similarity; the optical features are scattering, OCT-A, birefringence and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and the retinal pigment epithelium. Hence, a superpixel can be utilized as a local statistics kernel which would be more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.

  2. Tissue Cancellation in Dual Energy Mammography Using a Calibration Phantom Customized for Direct Mapping.

    PubMed

    Han, Seokmin; Kang, Dong-Goo

    2014-01-01

    An easily implementable tissue cancellation method for dual-energy mammography is proposed to reduce anatomical noise and enhance lesion visibility. For dual-energy calibration, the images of an imaged object are directly mapped onto the images of a customized calibration phantom. Each pixel pair of the low- and high-energy images of the imaged object is compared with the pixel pairs of the low- and high-energy images of the calibration phantom. The correspondence is measured by the absolute difference between the pixel values of the imaged object and those of the calibration phantom, and the closest pixel pair of the calibration phantom images is selected. After the calibration using direct mapping, regions containing a lesion yield a different thickness from the background tissues. Taking advantage of this thickness difference, the visibility of cancerous lesions is enhanced with increased contrast-to-noise ratio, depending on the size of the lesion and the breast thickness. However, some tissues near the edge of the imaged object still remain after tissue cancellation. These residuals appear to be due to the heel effect, scattering, the non-parallel X-ray beam geometry and the Poisson distribution of photons. To improve its performance further, scattering and the heel effect should be compensated.
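
    The direct-mapping calibration amounts to a nearest-pair lookup in the (low, high) signal plane. The sketch below uses a k-d tree with an L1 (absolute-difference) metric to match the criterion described above; the tabulated per-pixel calibration quantity (here called "thickness") and all array names are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def direct_mapping(low_img, high_img, phantom_low, phantom_high, phantom_thickness):
    """Map each (low, high) pixel pair of the object onto the closest calibration-phantom pair.

    phantom_low / phantom_high / phantom_thickness are flattened samples from the
    customized calibration phantom; 'thickness' stands for whatever calibration
    quantity is tabulated per phantom pixel. The closest pair is found by the sum
    of absolute differences in the (low, high) plane (Minkowski p=1).
    """
    tree = cKDTree(np.column_stack([phantom_low.ravel(), phantom_high.ravel()]))
    query = np.column_stack([low_img.ravel(), high_img.ravel()])
    _, idx = tree.query(query, p=1)               # index of the closest phantom pixel pair
    return phantom_thickness.ravel()[idx].reshape(low_img.shape)
```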

  3. Laser pixelation of thick scintillators for medical imaging applications: x-ray studies

    NASA Astrophysics Data System (ADS)

    Sabet, Hamid; Kudrolli, Haris; Marton, Zsolt; Singh, Bipin; Nagarkar, Vivek V.

    2013-09-01

    To achieve the high spatial resolution required in nuclear imaging, scintillation light spread has to be controlled. This has traditionally been achieved by introducing structures in the bulk of the scintillation material, typically by mechanically pixelating the scintillator and filling the resulting inter-pixel gaps with reflective material. Mechanical pixelation, however, is accompanied by various cost and complexity issues, especially for hard, brittle and hygroscopic materials. For example, LSO and LYSO, hard and brittle scintillators of interest to the medical imaging community, are known to crack under thermal and mechanical stress; the material yield drops quickly for large arrays with high-aspect-ratio pixels, and therefore the cost of the pixelation process increases. We are utilizing a novel technique named Laser Induced Optical Barriers (LIOB) for pixelation of scintillators that overcomes the issues associated with mechanical pixelation. With this technique, we can introduce optical barriers within the bulk of scintillator crystals to form pixelated arrays with small pixel size and large thickness. We applied LIOB to LYSO using a high-frequency solid-state laser. Arrays with different crystal thicknesses (5 to 20 mm) and pixel sizes (0.8×0.8 to 1.5×1.5 mm²) were fabricated and tested. The width of the optical barriers was controlled by fine-tuning key parameters such as the lens focal spot size and laser energy density. Here we report on the LIOB process, its optimization, and optical crosstalk measurements using X-rays. Many applications can potentially benefit from LIOB, including but not limited to clinical/pre-clinical PET and SPECT systems and photon-counting CT detectors.

  4. Physical Conditions in the Solar Corona Derived from the Total Solar Eclipse Observations obtained on 2017 August 21 Using a Polarization Camera

    NASA Astrophysics Data System (ADS)

    Gopalswamy, N.; Yashiro, Seiji; Reginald, Nelson; Thakur, Neeharika; Thompson, Barbara J.; Gong, Qian

    2018-01-01

    We present preliminary results obtained by observing the solar corona during the 2017 August 21 total solar eclipse using a polarization camera mounted on an eight-inch Schmidt-Cassegrain telescope. The observations were made from Madras, Oregon, from 17:19 to 17:21 UT. Total and polarized brightness images were obtained at four wavelengths (385, 398.5, 410, and 423 nm). The polarization camera had a polarization mask mounted on a 2048x2048-pixel CCD with a pixel size of 7.4 microns. The resulting images had a size of 975x975 pixels because four neighboring pixels were summed to yield the polarization and total brightness images. The ratio of the 410 and 385 nm images is a measure of the coronal temperature, while the ratio of the 423 and 398.5 nm images is a measure of the coronal flow speed. We compared the temperature map from the eclipse observations with that obtained from the Solar Dynamics Observatory's Atmospheric Imaging Assembly images at six EUV wavelengths, yielding consistent temperature information for the corona.

  5. Comparative study of various pixel photodiodes for digital radiography: Junction structure, corner shape and noble window opening

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Uk; Cho, Minsik; Lee, Dae Hee; Yoo, Hyunjun; Kim, Myung Soo; Bae, Jun Hyung; Kim, Hyoungtaek; Kim, Jongyul; Kim, Hyunduk; Cho, Gyuseong

    2012-05-01

    Recently, large-size 3-transistor (3-Tr) active pixel complementary metal-oxide-semiconductor (CMOS) image sensors have been used for medium-size digital X-ray radiography, such as dental computed tomography (CT), mammography and nondestructive testing (NDT) of consumer products. We designed and fabricated 50 µm × 50 µm 3-Tr test pixels having pixel photodiodes with various structures and shapes, using the TSMC 0.25-µm standard CMOS process, to compare their optical characteristics. The pixel photodiode output was continuously sampled while a test pixel was continuously illuminated with 550-nm light at constant intensity. The measurement was repeated 300 times for each test pixel to obtain reliable results for the mean and variance of the pixel output at each sampling time. The sampling rate was 50 kHz, and the reset period was 200 ms. To estimate the conversion gain, we used the mean-variance method. From the measured results, the n-well/p-substrate photodiode, among the 3 photodiode structures available in a standard CMOS process, showed the best performance at low illumination equivalent to the typical X-ray signal range. The quantum efficiencies of the n+/p-well, n-well/p-substrate, and n+/p-substrate photodiodes were 18.5%, 62.1%, and 51.5%, respectively. From a comparison of pixels with rounded and rectangular corners, we found that a rounded corner structure can reduce the dark current in large pixels: a pixel with four rounded corners showed a dark current about 200 fA lower than that of a pixel with four rectangular corners for our pixel size. Photodiodes with round p-implant openings showed about 5% higher dark current, but about 34% higher sensitivity, than the conventional photodiodes.
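
    The mean-variance (photon transfer) estimate of conversion gain reduces to fitting the slope of variance versus mean signal. A minimal sketch, assuming shot-noise-limited illumination and arrays of per-sample means and variances computed over the repeated measurements:

```python
import numpy as np

def conversion_gain(mean_signal, variance):
    """Estimate the conversion gain with the mean-variance (photon transfer) method.

    For a shot-noise-limited pixel the output variance grows linearly with the mean,
    var = K * mean + const, and the slope K is the conversion gain in output units
    per electron (1/K gives electrons per output unit). The inputs are the means and
    variances of the pixel output at each sampling time over the 300 repetitions.
    """
    K, _offset = np.polyfit(np.asarray(mean_signal), np.asarray(variance), 1)
    return K
```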

  6. Sub-pixel flood inundation mapping from multispectral remotely sensed images based on discrete particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Li, Linyi; Chen, Yun; Yu, Xin; Liu, Rui; Huang, Chang

    2015-03-01

    The study of flood inundation is significant to human life and the economy. Remote sensing technology has provided an effective way to study the spatial and temporal characteristics of inundation. Remotely sensed images with high temporal resolution are widely used in mapping inundation; however, mixed pixels exist due to their relatively low spatial resolution. One of the most popular approaches to resolve this issue is sub-pixel mapping. In this paper, a novel discrete particle swarm optimization (DPSO) based sub-pixel flood inundation mapping (DPSO-SFIM) method is proposed to achieve improved accuracy in mapping inundation at a sub-pixel scale. The evaluation criterion for sub-pixel inundation mapping is formulated, and the DPSO-SFIM algorithm is developed, including particle discrete encoding, fitness function design and the swarm search strategy. The accuracy of DPSO-SFIM in mapping inundation at a sub-pixel scale was evaluated using Landsat ETM+ images from study areas in Australia and China. The results show that DPSO-SFIM consistently outperformed four traditional SFIM methods in these study areas. A sensitivity analysis of DPSO-SFIM was also carried out to evaluate its performance. It is hoped that the results of this study will enhance the application of medium-low spatial resolution images in inundation detection and mapping, and thereby support ecological and environmental studies of river basins.

  7. Polarized-pixel performance model for DoFP polarimeter

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Liu, Haizheng; Liu, Li; Zhao, Yaohong; Zhang, Junchao

    2018-06-01

    A division-of-focal-plane (DoFP) polarimeter is manufactured by placing a micropolarizer array directly onto the focal plane array (FPA) of a detector. Each element of the DoFP polarimeter is a polarized pixel. This paper proposes a performance model for a polarized pixel. The proposed model characterizes the optical and electronic performance of a polarized pixel by three parameters: the major polarization responsivity, the minor polarization responsivity and the polarization orientation. Each parameter corresponds to an intuitive physical feature of a polarized pixel. This paper further extends the model to calibrate polarization images from a DoFP polarimeter. The calibration is evaluated quantitatively with a developed DoFP polarimeter under varying illumination intensity and angle of linear polarization. The experiment shows that our model reduces nonuniformity to 6.79% of the uncalibrated DoLP (degree of linear polarization) images and significantly improves the visual quality of the DoLP images.
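
    A sketch of a three-parameter polarized-pixel response is given below. The Malus-law form with a finite minor responsivity is a standard description used here for illustration; the paper's exact parameterization and its calibration procedure may differ.

```python
import numpy as np

def polarized_pixel_response(intensity, aolp, r_major, r_minor, orientation):
    """Response of one DoFP polarized pixel to fully linearly polarized light.

    r_major / r_minor stand for the responsivities along and across the micropolarizer's
    transmission axis and 'orientation' is that axis angle; aolp is the angle of linear
    polarization of the incoming light. This functional form is an assumption standing
    in for the paper's model.
    """
    delta = aolp - orientation
    return intensity * (r_major * np.cos(delta) ** 2 + r_minor * np.sin(delta) ** 2)

# Example: the four nominal 0/45/90/135 degree pixels of one super-pixel
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
print(polarized_pixel_response(1.0, np.deg2rad(30.0), 0.9, 0.05, angles))
```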

  8. Improved Fast, Deep Record Length, Time-Resolved Visible Spectroscopy of Plasmas Using Fiber Grids

    NASA Astrophysics Data System (ADS)

    Brockington, S.; Case, A.; Cruz, E.; Williams, A.; Witherspoon, F. D.; Horton, R.; Klauser, R.; Hwang, D.

    2017-10-01

    HyperV Technologies is developing a fiber-coupled, deep-record-length, low-light camera head for performing high-time-resolution spectroscopy on visible emission from plasma events. By coupling the output of a spectrometer to an imaging fiber bundle connected to a bank of amplified silicon photomultipliers, time-resolved spectroscopic imagers of 100 to 1,000 pixels can be constructed. A second-generation prototype 32-pixel spectroscopic imager employing this technique was constructed and successfully tested at the University of California at Davis Compact Toroid Injection Experiment (CTIX). Pixel performance of 10 Megaframes/s with record lengths of up to 256,000 frames (25.6 ms) was achieved, with 12-bit pixel resolution. Pixel pitch can be refined by using grids of 100 μm to 1000 μm diameter fibers. Experimental results will be discussed, along with future plans for this diagnostic. Work supported by USDOE SBIR Grant DE-SC0013801.

  9. Radar velocity determination using direction of arrival measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin W.; Bickel, Douglas L.; Naething, Richard M.

    The various technologies presented herein relate to utilizing direction of arrival (DOA) data to determine various flight parameters for an aircraft. A plurality of radar images (e.g., SAR images) can be analyzed to identify a plurality of pixels in the radar images relating to one or more ground targets. In an embodiment, the plurality of pixels can be selected based upon the pixels exceeding a SNR threshold. The DOA data in conjunction with a measurable Doppler frequency for each pixel can be obtained. Multi-aperture technology enables derivation of an independent measure of DOA to each pixel based on interferometric analysis. This independent measure of DOA enables decoupling of the aircraft velocity from the DOA in a range-Doppler map, thereby enabling determination of a radar velocity. The determined aircraft velocity can be utilized to update an onboard INS and keep it aligned, without the need for additional velocity-measuring instrumentation.
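
    The underlying geometry can be sketched with the textbook Doppler relation f_d = 2 (v · u) / λ, where u is the unit line-of-sight vector obtained from the interferometric DOA. Stacking many pixels gives an overdetermined linear system for the aircraft velocity; the patented processing chain certainly contains more steps than this least-squares sketch.

```python
import numpy as np

def estimate_velocity(doppler_hz, unit_los, wavelength):
    """Least-squares aircraft velocity from per-pixel Doppler and direction of arrival.

    doppler_hz: (n_pixels,) measured Doppler frequencies of the selected pixels.
    unit_los:   (n_pixels, 3) unit line-of-sight vectors derived from the DOA data.
    The linear model f_d = 2 (v . u) / lambda is solved for the 3-D velocity v.
    """
    A = 2.0 * np.asarray(unit_los) / wavelength       # (n_pixels, 3)
    b = np.asarray(doppler_hz)                        # (n_pixels,)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                                          # velocity vector in m/s
```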

  10. Image encryption using a synchronous permutation-diffusion technique

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Abdullah, Abdul Hanan; Isnin, Ismail Fauzi; Altameem, Ayman; Lee, Malrey

    2017-03-01

    In the past decade, interest in digital image security has increased among researchers. A synchronous permutation and diffusion technique is designed in order to protect gray-level image content while sending it over the internet. To implement the proposed method, the two-dimensional plain image is converted to one dimension. Afterwards, in order to reduce the processing time, the permutation and diffusion steps for each pixel are performed at the same time. The permutation step uses a chaotic map and deoxyribonucleic acid (DNA) encoding to permute a pixel, while the diffusion step employs a DNA sequence and a DNA operator to encrypt the pixel. Experimental results and extensive security analyses demonstrate the feasibility and validity of the proposed image encryption method.
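
    A much-simplified sketch of the permutation-diffusion idea is given below, using a logistic map for both the permutation ranking and the XOR key stream. The DNA encoding, DNA operators and the synchronous per-pixel interleaving of the published scheme are deliberately omitted; this only illustrates the two basic stages.

```python
import numpy as np

def logistic_sequence(n, x0=0.3729, r=3.99):
    """Chaotic key stream from the logistic map x <- r*x*(1-x)."""
    x, seq = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(image, x0=0.3729):
    """Simplified permutation-diffusion of a grey-level image (sketch only)."""
    arr = np.asarray(image, dtype=np.uint8)
    flat = arr.ravel()
    chaos = logistic_sequence(flat.size, x0)
    perm = np.argsort(chaos)                          # permutation step
    keystream = np.floor(chaos * 256).astype(np.uint8)
    cipher = flat[perm] ^ keystream                   # diffusion step (XOR)
    return cipher.reshape(arr.shape), perm

def decrypt(cipher, perm, x0=0.3729):
    """Invert the diffusion, then the permutation."""
    chaos = logistic_sequence(cipher.size, x0)
    keystream = np.floor(chaos * 256).astype(np.uint8)
    flat = cipher.ravel() ^ keystream
    plain = np.empty_like(flat)
    plain[perm] = flat
    return plain.reshape(cipher.shape)
```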

  11. Estimation and Detection of Images Degraded by Film-Grain Noise

    DTIC Science & Technology

    1976-09-01

    conclude that any isolated pixel was erroneously detected. ... After the first stage we define a pixel to belong to a boundary if one or more of its eight ... degraded image and deciding, according to eq. (5.2-10), which interval R_i it most likely belongs to. The value of the observed pixel is changed to D_i if ... it is decided that the pixel belongs to region R_i. Corresponding to the signal space h defined in eq. (5.2-13) we can rewrite eq. (5.2-11) as ...

  12. Restoration of hot pixels in digital imagers using lossless approximation techniques

    NASA Astrophysics Data System (ADS)

    Hadar, O.; Shleifer, A.; Cohen, E.; Dotan, Y.

    2015-09-01

    During the last twenty years, digital imagers have spread into industrial and everyday devices such as satellites, security cameras, cell phones, laptops and more. "Hot pixels" are the main defects in remote digital cameras. In this paper we demonstrate an improvement over existing restoration methods that use (solely or as an auxiliary tool) some average of the surrounding pixels, such as the method of the Chapman-Koren study 1,2. The proposed method uses the CALIC algorithm and adapts it to make full use of the surrounding pixels.

  13. Electrophoretically mediated microanalysis of a nicotinamide adenine dinucleotide-dependent enzyme and its facile multiplexing using an active pixel sensor UV detector.

    PubMed

    Urban, Pawel L; Goodall, David M; Bergström, Edmund T; Bruce, Neil C

    2007-08-31

    An electrophoretically mediated microanalysis (EMMA) method has been developed for yeast alcohol dehydrogenase, with quantification of the reactant and product cofactors NAD and NADH. The enzyme substrate ethanol (1% (v/v)) was added to the buffer (50 mM borate, pH 8.8). Results are presented for parallel capillary electrophoresis with a novel miniature UV area detector, in which an active pixel sensor images an array of two or six parallel capillaries connected via a manifold to a single output capillary in a commercial CE instrument, allowing conversions with five different yeast alcohol dehydrogenase concentrations to be quantified in a single experiment.

  14. Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers

    Treesearch

    Daniel L. Schmoldt; Jing He; A. Lynn Abbott

    1998-01-01

    Our current approach to automatically labeling features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...
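
    A sketch of the per-pixel feature vector and classifier described above follows, using scikit-learn's MLPClassifier as a stand-in for the back-propagation ANN. The window size, network shape and variable names are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def pixel_features(ct_slice, center, half=1):
    """Build one feature vector per pixel: a (2*half+1)^2 neighbourhood of pixel
    values plus the distance of the pixel to the log centre."""
    padded = np.pad(ct_slice.astype(float), half, mode='reflect')
    rows, cols = ct_slice.shape
    feats = []
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + 2 * half + 1, j:j + 2 * half + 1].ravel()
            dist = np.hypot(i - center[0], j - center[1])
            feats.append(np.append(window, dist))
    return np.array(feats)

# Training sketch: 'labels' is a hypothetical flat array with one class id per pixel
# X = pixel_features(ct_slice, center=(ct_slice.shape[0] / 2, ct_slice.shape[1] / 2))
# clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(X, labels)
# predicted = clf.predict(X).reshape(ct_slice.shape)
```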

  15. Linear dynamic range enhancement in a CMOS imager

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor)

    2008-01-01

    A CMOS imager with increased linear dynamic range but without degradation in noise, responsivity, linearity, fixed-pattern noise, or photometric calibration comprises a linear calibrated dual gain pixel in which the gain is reduced after a pre-defined threshold level by switching in an additional capacitance. The pixel may include a novel on-pixel latch circuit that is used to switch in the additional capacitance.

  16. Assessment of Thematic Mapper band-to-band registration by the block correlation method

    NASA Technical Reports Server (NTRS)

    Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.

    1983-01-01

    Rectangular blocks of pixels from one band image were statistically correlated against blocks centered on identical pixels from a second band image. The block pairs were shifted in pixel increments both vertically and horizontally with respect to each other, and the shift corresponding to the maximum correlation coefficient was taken as the best estimate of the registration error for each block pair. For the band combinations of the Arkansas scene studied, the misregistration of the TM spectral bands within the non-cooled focal plane lies well within the 0.2-pixel target specification. The misregistration between the middle-IR bands is also well within this specification. The thermal-IR band has an apparent misregistration with TM band 7 of approximately 3 pixels in each direction. TM band 3 has a misregistration of approximately 0.2 pixel in the across-scan direction and 0.5 pixel in the along-scan direction with both TM bands 5 and 7.
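
    The block correlation method can be sketched as an integer-pixel search for the shift that maximizes the correlation coefficient between corresponding blocks; sub-pixel estimates such as those quoted above additionally require interpolating the correlation peak, which this sketch omits. Block size and search range below are illustrative choices.

```python
import numpy as np

def block_registration(band_a, band_b, top_left, size=64, max_shift=3):
    """Estimate band-to-band misregistration for one block by integer-pixel correlation.

    A block from band_a is compared against shifted blocks from band_b; the shift
    giving the highest correlation coefficient is the registration-error estimate
    for that block. Blocks are assumed to lie at least max_shift pixels from the edges.
    """
    r0, c0 = top_left
    ref = band_a[r0:r0 + size, c0:c0 + size].astype(float)
    best, best_shift = -2.0, (0, 0)
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            test = band_b[r0 + dr:r0 + dr + size, c0 + dc:c0 + dc + size].astype(float)
            rho = np.corrcoef(ref.ravel(), test.ravel())[0, 1]
            if rho > best:
                best, best_shift = rho, (dr, dc)
    return best_shift, best
```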

  17. Assessment of Thematic Mapper Band-to-band Registration by the Block Correlation Method

    NASA Technical Reports Server (NTRS)

    Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.

    1985-01-01

    Rectangular blocks of pixels from one band image were statistically correlated against blocks centered on identical pixels from a second band image. The block pairs were shifted in pixel increments both vertically and horizontally with respect to each other, and the shift corresponding to the maximum correlation coefficient was taken as the best estimate of the registration error for each block pair. For the band combinations of the Arkansas scene studied, the misregistration of the TM spectral bands within the non-cooled focal plane lies well within the 0.2-pixel target specification. The misregistration between the middle-IR bands is also well within this specification. The thermal-IR band has an apparent misregistration with TM band 7 of approximately 3 pixels in each direction. TM band 3 has a misregistration of approximately 0.2 pixel in the across-scan direction and 0.5 pixel in the along-scan direction with both TM bands 5 and 7.

  18. Mercuric iodide room-temperature array detectors for gamma-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patt, B.

    Significant progress has been made recently in the development of mercuric iodide detector arrays for gamma-ray imaging, making real the possibility of constructing high-performance, small, lightweight, portable gamma-ray imaging systems. New techniques have been applied in detector fabrication and in low-noise electronics, which have produced pixel arrays with high energy resolution, high spatial resolution and high gamma stopping efficiency. Measurements of the energy resolution capability have been made on a 19-element prototypical array. Pixel energy resolutions of 2.98% FWHM and 3.88% FWHM were obtained at 59 keV (241-Am) and 140 keV (99m-Tc), respectively. The pixel spectra for a 14-element section of the array are shown together with the composite of the overlapped individual pixel spectra. These techniques are now being applied to fabricate much larger arrays with thousands of pixels. Extension of these principles to imaging scenarios involving gamma-ray energies up to several hundred keV is also possible. This would enable imaging of the 208 keV and 375-414 keV 239-Pu and 240-Pu structures, as well as the 186 keV line of 235-U.

  19. An RGB colour image steganography scheme using overlapping block-based pixel-value differencing

    PubMed Central

    Pal, Arup Kumar

    2017-01-01

    This paper presents a steganographic scheme based on an RGB colour cover image. The secret message bits are embedded into each colour pixel sequentially by the pixel-value differencing (PVD) technique. PVD basically works on two consecutive non-overlapping components; as a result, the straightforward conventional PVD technique cannot embed the secret message bits into a colour pixel, since a colour pixel consists of three colour components, i.e. red, green and blue. Hence, in the proposed scheme, the three colour components are first arranged into two overlapping blocks, one combining the red and green components and the other combining the green and blue components. The PVD technique is then applied to each block independently to embed the secret data, and the two overlapping blocks are readjusted to obtain the modified three colour components. The notion of overlapping blocks improves the embedding capacity of the cover image. The scheme has been tested on a set of colour images and satisfactory results have been achieved in terms of embedding capacity while upholding acceptable visual quality of the stego-image. PMID:28484623
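
    The basic two-pixel PVD step is sketched below with the commonly used range table (an assumption, not necessarily the authors' exact table); the overlapping-block readjustment and the fall-off-boundary check of the full scheme are omitted.

```python
import math

RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]  # commonly used table

def _range_of(d):
    for lower, upper in RANGES:
        if lower <= d <= upper:
            return lower, upper

def embed_pair(p1, p2, bits):
    """Embed a short bit string into one pixel pair with pixel-value differencing."""
    d = abs(p2 - p1)
    lower, upper = _range_of(d)
    n = int(math.log2(upper - lower + 1))         # capacity of this pair in bits
    payload = bits[:n].ljust(n, '0')
    d_new = lower + int(payload, 2)
    m = d_new - d                                 # required change in the difference
    if p1 >= p2:
        p1_new, p2_new = p1 + math.ceil(m / 2), p2 - math.floor(m / 2)
    else:
        p1_new, p2_new = p1 - math.floor(m / 2), p2 + math.ceil(m / 2)
    return p1_new, p2_new, n

def extract_pair(p1, p2):
    """Recover the embedded bits from a stego pixel pair."""
    d = abs(p2 - p1)
    lower, upper = _range_of(d)
    n = int(math.log2(upper - lower + 1))
    return format(d - lower, '0{}b'.format(n))

# Example: embed '101' into the pair (50, 60) and read it back
print(embed_pair(50, 60, '101'))                  # (49, 62, 3)
print(extract_pair(49, 62))                       # '101'
```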

  20. How to model moon signals using 2-dimensional Gaussian function: Classroom activity for measuring nighttime cloud cover

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Lagrosas, N.

    2016-12-01

    Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, software to visualize signals, and moon images. A normalized Gaussian function is often used to represent the probability density function of a normal distribution. It is described by its mean m and standard deviation s; a smaller standard deviation implies less spread about the mean. For a 2-dimensional Gaussian function, the mean can be described by the coordinates (x0, y0), and the standard deviations by sx and sy. In modelling moon signals obtained from sky cameras, the position of the mean (x0, y0) is found by locating the coordinates of the maximum signal of the moon. The two standard deviations are the weighted mean-square deviations based on the sums of pixel values along the rows and columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel-value distribution of moon signals captured by a sky camera. An example is illustrated in Fig. 1b, taken around 22:20 (local time) on January 31, 2015; local time is 8 hours ahead of coordinated universal time (UTC). The image was produced by a commercial camera (Canon PowerShot A2300) with a 1 s exposure time, f-stop of f/2.8, and 5 mm focal length. One has to choose a camera with high sensitivity at nighttime to effectively detect these signals. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values; the grayscale values are then converted to a double data type matrix so that the Gaussian model and the pixel distribution of the raw signals share the same scale. Subtracting the Gaussian model from the raw data produces a moonless image, as shown in Fig. 1c. This moonless image can be used for quantifying cloud cover as captured by ordinary cameras (Gacal et al., 2016). Cloud cover can be defined as the ratio of the number of pixels whose values exceed 0.07 to the total number of pixels; in this particular image, the cloud cover value is 0.67.
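
    The moon-removal and cloud-cover computation can be sketched directly from the description above. The peak pixel value is used as the Gaussian amplitude, which is an assumption for illustration; the threshold of 0.07 applies to grayscale images scaled to the 0-1 range.

```python
import numpy as np

def moon_model(image):
    """Model the moon signal in a grayscale sky image with a 2-D Gaussian.

    The mean (x0, y0) is the location of the maximum pixel, the two standard
    deviations are intensity-weighted spreads along the rows and columns, and the
    amplitude is taken as the peak pixel value (an illustrative choice).
    """
    img = np.asarray(image, dtype=float)
    y0, x0 = np.unravel_index(np.argmax(img), img.shape)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    sx = np.sqrt(np.sum(img * (xs - x0) ** 2) / total)
    sy = np.sqrt(np.sum(img * (ys - y0) ** 2) / total)
    return img[y0, x0] * np.exp(-(((xs - x0) ** 2) / (2 * sx ** 2) +
                                  ((ys - y0) ** 2) / (2 * sy ** 2)))

def cloud_cover(image, threshold=0.07):
    """Cloud cover = fraction of pixels in the moon-subtracted image above the threshold."""
    residual = np.asarray(image, dtype=float) - moon_model(image)
    return np.count_nonzero(residual > threshold) / residual.size
```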

  1. A robust color signal processing with wide dynamic range WRGB CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2011-01-01

    We have developed a robust color reproduction methodology based on a simple calculation with a new color matrix, using the previously developed wide-dynamic-range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated in a 0.18 μm CMOS technology and has a 45-degree oblique pixel array, a 4.2 μm effective pixel pitch, and W pixels. A W pixel is formed by replacing one of the two G pixels in the Bayer RGB color filter; it has high sensitivity throughout the visible waveband. An emerald green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly contains emerald green and yellow light. These colors are difficult to reproduce accurately with a conventional simple linear matrix because their wavelengths lie in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuation and noise has been achieved.
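
    The EGY-based correction reduces to forming W − (R + G + B) and applying an extended linear matrix to the four signals. The 3x4 matrix coefficients below are purely hypothetical placeholders; the actual matrix would be derived from the sensor's measured spectral sensitivities.

```python
import numpy as np

# Hypothetical 3x4 colour-correction matrix for illustration only.
M = np.array([[ 1.6, -0.4, -0.1, -0.1],
              [-0.3,  1.4, -0.2,  0.1],
              [-0.1, -0.5,  1.7, -0.1]])

def wrgb_to_rgb(r, g, b, w):
    """Colour reproduction sketch for a WRGB sensor: the EGY signal is formed as
    W - (R + G + B) and appended to RGB before applying the extended linear matrix."""
    egy = w - (r + g + b)
    return M @ np.array([r, g, b, egy])

print(wrgb_to_rgb(0.30, 0.45, 0.20, 1.05))   # example interpolated per-pixel signals
```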

  2. Use and imaging performance of CMOS flat panel imager with LiF/ZnS(Ag) and Gadox scintillation screens for neutron radiography

    NASA Astrophysics Data System (ADS)

    Cha, B. K.; Kim, J. Y.; Kim, T. J.; Sim, C.; Cho, G.; Lee, D. H.; Seo, C.-W.; Jeon, S.; Huh, Y.

    2011-01-01

    In a digital neutron radiography system, a thermal neutron imaging detector based on neutron-sensitive scintillating screens coupled to a CMOS (complementary metal oxide semiconductor) flat panel imager is introduced for non-destructive testing (NDT) applications. Recently, large-area CMOS APS (active-pixel sensor) devices in conjunction with scintillation films have been widely used in many digital X-ray imaging applications. Instead of typical imaging detectors such as image plates, cooled-CCD cameras and amorphous silicon flat panel detectors in combination with scintillation screens, we applied a scintillator-based CMOS APS to neutron imaging detection for high resolution neutron radiography. In this work, two kinds of scintillation screens, Gd2O2S:Tb and 6LiF/ZnS:Ag, with various thicknesses were fabricated by a screen printing method. These neutron converter screens consist of a dispersion of Gd2O2S:Tb and 6LiF/ZnS:Ag scintillating particles in an acrylic binder. A CMOS flat panel imager coupled to these scintillating screens, with a 25×50 mm² active area and 48 μm pixel pitch, was used for neutron radiography. A thermal neutron flux of 6×10⁶ n/cm²/s was utilized at the NRF facility of HANARO in KAERI. The neutron imaging characteristics of the detector were investigated in detail in terms of relative light output, linearity and spatial resolution. The experimental results of the scintillating-screen-based CMOS flat panel detector demonstrate the possibility of highly sensitive, high-spatial-resolution imaging in a neutron radiography system.

  3. Low-power low-noise mixed-mode VLSI ASIC for infinite dynamic range imaging applications

    NASA Astrophysics Data System (ADS)

    Turchetta, Renato; Hu, Y.; Zinzius, Y.; Colledani, C.; Loge, A.

    1998-11-01

    Solid state solutions for imaging are mainly represented by CCDs and, more recently, by CMOS imagers. Both devices are based on the integration of the total charge generated by the impinging radiation, with no processing of the single-photon information. The dynamic range of these devices is intrinsically limited by the finite value of noise. Here we present the design of an architecture which allows efficient, in-pixel noise reduction to a practically zero level, thus allowing infinite dynamic range imaging. A detailed calculation of the dynamic range is worked out, showing that noise is efficiently suppressed. This architecture is based on the concept of single-photon counting. In each pixel, we integrate both the front-end, low-noise, low-power analog part and the digital part. The former consists of a charge preamplifier, an active filter for optimal noise bandwidth reduction, a buffer and a threshold comparator; the latter is simply a counter, which can be programmed to act as a normal shift register for the readout of the counters' contents. Two different ASICs based on this concept have been designed for different applications. The first one has been optimized for silicon edge-on microstrip detectors, used in a digital mammography R&D project. It is a 32-channel circuit with a 16-bit binary static counter. It has been optimized for a relatively large detector capacitance of 5 pF. Noise has been measured to be 100 + 7*Cd (pF) electrons rms with the digital part active, showing no degradation of the noise performance with respect to the design values. The power consumption is 3.8 mW/channel for a peaking time of about 1 μs. The second circuit is a prototype for pixel imaging. The total active area is about (250 μm)². The main differences of the electronic architecture with respect to the first prototype are: i) a different optimization of the analog front-end part for low-capacitance detectors, ii) in-pixel 4-bit comparator-offset compensation, iii) a 15-bit pseudo-random counter. The power consumption is 255 μW/channel for a peaking time of 300 ns and an equivalent noise charge of 185 + 97*Cd electrons rms. Simulation and experimental results, as well as imaging results, will be presented.

  4. Measuring and Estimating Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2013-01-01

    Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating the part surface with a flash from flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera in terms of the pixel intensity of image pixels. The technique involves recording the IR video image data and analyzing the data using the normalized pixel intensity and temperature contrast analysis method to characterize void-like flaws in terms of depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and the normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements during data acquisition.
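
    The paper introduces its own contrast definitions, which are not reproduced here. As a point of reference, the sketch below computes one commonly used form of normalized pixel-intensity contrast: the difference between a defect-region and a sound-region intensity evolution, divided by the sound-region evolution. The function name, region coordinates and normalisation are illustrative assumptions.

    ```python
    import numpy as np

    def normalized_contrast(frames, defect_px, sound_px):
        """One common normalized contrast from a post-flash IR frame stack.

        frames    : array of shape (n_frames, height, width) of pixel intensities
        defect_px : (row, col) of a pixel over the suspected void
        sound_px  : (row, col) of a reference pixel over sound material
        """
        defect = frames[:, defect_px[0], defect_px[1]].astype(float)
        sound = frames[:, sound_px[0], sound_px[1]].astype(float)
        # Contrast evolution relative to the sound (defect-free) reference.
        return (defect - sound) / sound
    ```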

  5. Giga-pixel lensfree holographic microscopy and tomography using color image sensors.

    PubMed

    Isikman, Serhan O; Greenbaum, Alon; Luo, Wei; Coskun, Ahmet F; Ozcan, Aydogan

    2012-01-01

    We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm². This constitutes a digital image with ~0.7 billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ± 50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm, in x, y and z, respectively, creating an effective voxel size of ~0.03 µm³ across a sample volume of ~5 mm³, which is equivalent to >150 billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.

  6. Comparison of ultra-widefield fluorescein angiography with the Heidelberg Spectralis® noncontact ultra-widefield module versus the Optos® Optomap®.

    PubMed

    Witmer, Matthew T; Parlitsis, George; Patel, Sarju; Kiss, Szilárd

    2013-01-01

    To compare ultra-widefield fluorescein angiography imaging using the Optos® Optomap® and the Heidelberg Spectralis® noncontact ultra-widefield module. Five patients (ten eyes) underwent ultra-widefield fluorescein angiography using the Optos® panoramic P200Tx imaging system and the noncontact ultra-widefield module in the Heidelberg Spectralis® HRA+OCT system. The images were obtained as a single, nonsteered shot centered on the macula. The area of imaged retina was outlined and quantified using Adobe® Photoshop® C5 software. The total area and the area within each of four visualized quadrants were calculated and compared between the two imaging modalities. Three masked reviewers also evaluated each quadrant per eye (40 total quadrants) to determine which modality imaged the retinal vasculature most peripherally. Optos® imaging captured a total retinal area averaging 151,362 pixels, ranging from 116,998 to 205,833 pixels, while the area captured using the Heidelberg Spectralis® was 101,786 pixels, ranging from 73,424 to 116,319 (P = 0.0002). The average area per individual quadrant imaged by Optos® versus the Heidelberg Spectralis® superiorly was 32,373 vs 32,789 pixels, respectively (P = 0.91), inferiorly was 24,665 vs 26,117 pixels, respectively (P = 0.71), temporally was 47,948 vs 20,645 pixels, respectively (P = 0.0001), and nasally was 46,374 vs 22,234 pixels, respectively (P = 0.0001). The Heidelberg Spectralis® was able to image the superior and inferior retinal vasculature to a more distal point than was the Optos®, in nine of ten eyes (18 of 20 quadrants). The Optos® was able to image the nasal and temporal retinal vasculature to a more distal point than was the Heidelberg Spectralis®, in ten of ten eyes (20 of 20 quadrants). The Optos® and Heidelberg Spectralis® ultra-widefield imaging systems are both excellent modalities that provide fluorescein angiographic views of the peripheral retina. On a single nonsteered image, the Optos® Optomap® covered a significantly larger total retinal surface area, with greater image variability, than did the Heidelberg Spectralis® ultra-widefield module. The Optos® captured an appreciably wider view of the retina temporally and nasally, albeit with peripheral distortion, while the ultra-widefield Heidelberg Spectralis® module was able to image the superior and inferior retinal vasculature more peripherally. The clinical significance of these findings, as well as the area imaged on steered montaged images, remains to be determined.

  7. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  8. On Adapting the Tensor Voting Framework to Robust Color Image Denoising

    NASA Astrophysics Data System (ADS)

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Julià, Carme

    This paper presents an adaptation of the tensor voting framework for color image denoising, while preserving edges. Tensors are used in order to encode the CIELAB color channels, the uniformity and the edginess of image pixels. A specific voting process is proposed in order to propagate color from a pixel to its neighbors by considering the distance between pixels, the perceptual color difference (by using an optimized version of CIEDE2000), a uniformity measurement and the likelihood of the pixels being impulse noise. The original colors are corrected with those encoded by the tensors obtained after the voting process. Peak signal-to-noise ratios and visual inspection show that the proposed methodology has a better performance than state-of-the-art techniques.

  9. Pixel-based flood mapping from SAR imagery: a comparison of approaches

    NASA Astrophysics Data System (ADS)

    Landuyt, Lisa; Van Wesemael, Alexandra; Van Coillie, Frieke M. B.; Verhoest, Niko E. C.

    2017-04-01

    Due to their all-weather, day and night capabilities, SAR sensors have been shown to be particularly suitable for flood mapping applications. They can thus provide spatially distributed flood extent data which are valuable for calibrating, validating and updating flood inundation models. These models are an invaluable tool for water managers to take appropriate measures in times of high water levels. Image analysis approaches to delineate flood extent on SAR imagery are numerous. They can be classified into two categories, i.e. pixel-based and object-based approaches. Pixel-based approaches, e.g. thresholding, are abundant and in general computationally inexpensive. However, large discrepancies between these techniques exist and often subjective user intervention is needed. Object-based approaches require more processing but allow for the integration of additional object characteristics, like contextual information and object geometry, and thus have significant potential to provide an improved classification result. As a means of benchmarking, a selection of pixel-based techniques is applied to an ERS-2 SAR image of the 2006 flood event of the River Dee, United Kingdom. This selection comprises Otsu thresholding, Kittler & Illingworth thresholding, the Fine To Coarse segmentation algorithm and active contour modelling. The different classification results are evaluated and compared by means of several accuracy measures, including binary performance measures.
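
    As one example of the pixel-based techniques listed above, Otsu's method selects the backscatter threshold that maximises the between-class variance of the image histogram; pixels below the threshold (low backscatter, typical of smooth open water) are then labelled as flooded. The sketch below is a generic illustration, not the benchmark implementation used in the study.

    ```python
    import numpy as np

    def otsu_threshold(image, bins=256):
        """Return the threshold maximising the between-class variance."""
        hist, edges = np.histogram(image.ravel(), bins=bins)
        hist = hist.astype(float) / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])

        w0 = np.cumsum(hist)                 # class probability, low class
        w1 = 1.0 - w0                        # class probability, high class
        cum_mean = np.cumsum(hist * centers)
        mT = cum_mean[-1]                    # global mean
        m0 = cum_mean / np.where(w0 > 0, w0, 1)
        m1 = (mT - cum_mean) / np.where(w1 > 0, w1, 1)

        between_var = w0 * w1 * (m0 - m1) ** 2
        return centers[np.argmax(between_var)]

    # Flood mask: low-backscatter pixels below the Otsu threshold, e.g.
    # flood_mask = sar_image < otsu_threshold(sar_image)
    ```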

  10. Image steganography based on 2k correction and coherent bit length

    NASA Astrophysics Data System (ADS)

    Sun, Shuliang; Guo, Yongning

    2014-10-01

    In this paper, a novel algorithm is proposed. First, the edges of the cover image are detected with the Canny operator and the secret data is embedded in edge pixels. A sorting method is used to randomize the order of the edge pixels in order to enhance security. The coherent bit length L is determined by the relevant edge pixels. Finally, the method of 2k correction is applied to achieve better imperceptibility in the stego image. Experiments show that the proposed method outperforms LSB-3 and Jae-Gil Yu's method in PSNR and capacity.
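
    A minimal sketch of the edge-based embedding idea, assuming OpenCV's Canny detector, a fixed number of bits per pixel rather than the adaptive coherent bit length, and no 2k correction step; the thresholds, seed and function name are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    def embed_in_edges(cover_gray, bits, n_lsb=1, seed=42):
        """Hide a bit string in the least significant bits of edge pixels.

        cover_gray : 2D uint8 grayscale cover image
        bits       : iterable of 0/1 values to embed
        n_lsb      : bits replaced per edge pixel (fixed here, adaptive in
                     the original method)
        """
        stego = cover_gray.copy()
        edges = cv2.Canny(cover_gray, 100, 200)      # binary edge map
        ys, xs = np.nonzero(edges)

        # Randomize the embedding order so the payload is not sequential.
        order = np.random.default_rng(seed).permutation(len(ys))

        bits = list(bits)
        idx = 0
        for k in order:
            if idx >= len(bits):
                break
            chunk = bits[idx:idx + n_lsb]
            value = int(''.join(str(b) for b in chunk), 2)
            mask = 0xFF ^ ((1 << len(chunk)) - 1)
            stego[ys[k], xs[k]] = (int(stego[ys[k], xs[k]]) & mask) | value
            idx += n_lsb
        return stego
    ```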

  11. Chip-scale fluorescence microscope based on a silo-filter complementary metal-oxide semiconductor image sensor.

    PubMed

    Ah Lee, Seung; Ou, Xiaoze; Lee, J Eugene; Yang, Changhuei

    2013-06-01

    We demonstrate a silo-filter (SF) complementary metal-oxide semiconductor (CMOS) image sensor for a chip-scale fluorescence microscope. The extruded pixel design with metal walls between neighboring pixels guides fluorescence emission through the thick absorptive filter to the photodiode of a pixel. Our prototype device achieves 13 μm resolution over a wide field of view (4.8 mm × 4.4 mm). We demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.

  12. Learning to merge: a new tool for interactive mapping

    NASA Astrophysics Data System (ADS)

    Porter, Reid B.; Lundquist, Sheng; Ruggiero, Christy

    2013-05-01

    The task of turning raw imagery into semantically meaningful maps and overlays is a key area of remote sensing activity. Image analysts, in applications ranging from environmental monitoring to intelligence, use imagery to generate and update maps of terrain, vegetation, road networks, buildings and other relevant features. Often these tasks can be cast as a pixel labeling problem, and several interactive pixel labeling tools have been developed. These tools exploit training data, which is generated by analysts using simple and intuitive paint-program annotation tools, in order to tailor the labeling algorithm for the particular dataset and task. In other cases, the task is best cast as a pixel segmentation problem. Interactive pixel segmentation tools have also been developed, but these tools typically do not learn from training data like the pixel labeling tools do. In this paper we investigate tools for interactive pixel segmentation that also learn from user input. The input has the form of segment merging (or grouping). Merging examples are 1) easily obtained from analysts using vector annotation tools, and 2) more challenging to exploit than traditional labels. We outline the key issues in developing these interactive merging tools, and describe their application to remote sensing.

  13. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

    Remote sensing images are widely used in AutoCAD, but AutoCAD lacks remote sensing image processing functions. In this paper, ObjectARX is used as the secondary development tool, combined with the Image Engine SDK, to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  14. The fragmented nature of tundra landscape

    NASA Astrophysics Data System (ADS)

    Virtanen, Tarmo; Ek, Malin

    2014-04-01

    The vegetation and land cover structure of tundra areas is fragmented when compared to other biomes. Thus, satellite images of high resolution are required for producing land cover classifications, in order to reveal the actual distribution of land cover types across these large and remote areas. We produced and compared different land cover classifications using three satellite images (QuickBird, Aster and Landsat TM5) with different pixel sizes (2.4 m, 15 m and 30 m pixel size, respectively). The study area, in north-eastern European Russia, was visited in July 2007 to obtain ground reference data. The QuickBird image was classified using supervised segmentation techniques, while the Aster and Landsat TM5 images were classified using a pixel-based supervised classification method. The QuickBird classification showed the highest accuracy when tested against field data, while the Aster image was generally more problematic to classify than the Landsat TM5 image. Use of smaller pixel sized images distinguished much greater levels of landscape fragmentation. The overall mean patch sizes in the QuickBird, Aster, and Landsat TM5-classifications were 871 m2, 2141 m2 and 7433 m2, respectively. In the QuickBird classification, the mean patch size of all the tundra and peatland vegetation classes was smaller than one pixel of the Landsat TM5 image. Water bodies and fens in particular occur in the landscape in small or elongated patches, and thus cannot be realistically classified from larger pixel sized images. Land cover patterns vary considerably at such a fine-scale, so that a lot of information is lost if only medium resolution satellite images are used. It is crucial to know the amount and spatial distribution of different vegetation types in arctic landscapes, as carbon dynamics and other climate related physical, geological and biological processes are known to vary greatly between vegetation types.

  15. Multitemporal and Multiscaled Fractal Analysis of Landsat Satellite Data Using the Image Characterization and Modeling System (ICAMS)

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.

    1997-01-01

    The Image Characterization And Modeling System (ICAMS) is a public domain software package that is designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution, are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30 meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for two dates at various pixel resolutions show that the D values in the August image become increasingly more complex as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. Fractal dimension for a difference image computed for the May and August dates increase with pixel size up to a resolution of 120 meters, and then decline with increasing pixel size. This means that the greatest complexity in the difference images occur around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute differences between the two dates.

  16. LSA SAF Meteosat FRP products - Part 2: Evaluation and demonstration for use in the Copernicus Atmosphere Monitoring Service (CAMS)

    NASA Astrophysics Data System (ADS)

    Roberts, G.; Wooster, M. J.; Xu, W.; Freeborn, P. H.; Morcrette, J.-J.; Jones, L.; Benedetti, A.; Jiangping, H.; Fisher, D.; Kaiser, J. W.

    2015-11-01

    Characterising the dynamics of landscape-scale wildfires at very high temporal resolutions is best achieved using observations from Earth Observation (EO) sensors mounted onboard geostationary satellites. As a result, a number of operational active fire products have been developed from the data of such sensors. An example of which are the Fire Radiative Power (FRP) products, the FRP-PIXEL and FRP-GRID products, generated by the Land Surface Analysis Satellite Applications Facility (LSA SAF) from imagery collected by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard the Meteosat Second Generation (MSG) series of geostationary EO satellites. The processing chain developed to deliver these FRP products detects SEVIRI pixels containing actively burning fires and characterises their FRP output across four geographic regions covering Europe, part of South America and Northern and Southern Africa. The FRP-PIXEL product contains the highest spatial and temporal resolution FRP data set, whilst the FRP-GRID product contains a spatio-temporal summary that includes bias adjustments for cloud cover and the non-detection of low FRP fire pixels. Here we evaluate these two products against active fire data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) and compare the results to those for three alternative active fire products derived from SEVIRI imagery. The FRP-PIXEL product is shown to detect a substantially greater number of active fire pixels than do alternative SEVIRI-based products, and comparison to MODIS on a per-fire basis indicates a strong agreement and low bias in terms of FRP values. However, low FRP fire pixels remain undetected by SEVIRI, with errors of active fire pixel detection commission and omission compared to MODIS ranging between 9-13 % and 65-77 % respectively in Africa. Higher errors of omission result in greater underestimation of regional FRP totals relative to those derived from simultaneously collected MODIS data, ranging from 35 % over the Northern Africa region to 89 % over the European region. High errors of active fire omission and FRP underestimation are found over Europe and South America and result from SEVIRI's larger pixel area over these regions. An advantage of using FRP for characterising wildfire emissions is the ability to do so very frequently and in near-real time (NRT). To illustrate the potential of this approach, wildfire fuel consumption rates derived from the SEVIRI FRP-PIXEL product are used to characterise smoke emissions of the 2007 "mega-fire" event focused on Peloponnese (Greece) and used within the European Centre for Medium-Range Weather Forecasting (ECMWF) Integrated Forecasting System (IFS) as a demonstration of what can be achieved when using geostationary active fire data within the Copernicus Atmosphere Monitoring Service (CAMS). Qualitative comparison of the modelled smoke plumes with MODIS optical imagery illustrates that the model captures the temporal and spatial dynamics of the plume very well, and that high temporal resolution emissions estimates such as those available from a geostationary orbit are important for capturing the sub-daily variability in smoke plume parameters such as aerosol optical depth (AOD), which are increasingly less well resolved using daily or coarser temporal resolution emissions data sets. 
    Quantitative comparison of modelled AOD with coincident MODIS and AERONET (Aerosol Robotic Network) AOD indicates that the former is overestimated by ~ 20-30 %, but captures the observed AOD dynamics with a high degree of fidelity. The case study highlights the potential of using geostationary FRP data to drive fire emissions estimates for use within atmospheric transport models such as those implemented in the Monitoring Atmospheric Composition and Climate (MACC) series of projects for the CAMS.

  17. Solar thematic maps for space weather operations

    USGS Publications Warehouse

    Rigler, E. Joshua; Hill, Steven M.; Reinard, Alysha A.; Steenburgh, Robert A.

    2012-01-01

    Thematic maps are arrays of labels, or "themes", associated with discrete locations in space and time. Borrowing heavily from the terrestrial remote sensing discipline, a numerical technique based on Bayes' theorem captures operational expertise in the form of trained theme statistics, then uses this to automatically assign labels to solar image pixels. Ultimately, regular thematic maps of the solar corona will be generated from high-cadence, high-resolution SUVI images, the solar ultraviolet imager slated to fly on NOAA's next-generation GOES-R series of satellites starting ~2016. These thematic maps will not only provide quicker, more consistent synoptic views of the sun for space weather forecasters, but will also generate the digital thematic pixel masks (e.g., coronal hole, active region, flare, etc.) necessary for a new generation of operational solar data products. This paper presents the mathematical underpinnings of our thematic mapper, as well as some practical algorithmic considerations. Then, using images from the Solar Dynamics Observatory (SDO) Atmospheric Imaging Assembly (AIA) as test data, it presents results from validation experiments designed to ascertain the robustness of the technique with respect to differing expert opinions and changing solar conditions.
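
    A toy illustration of the Bayes-theorem labelling idea, assuming per-theme Gaussian statistics learned from expert-labelled training pixels; the operational SUVI algorithm is more elaborate, and every name below is illustrative.

    ```python
    import numpy as np

    class NaiveBayesPixelLabeler:
        """Assign each pixel the theme with the highest posterior probability,
        assuming independent Gaussian statistics per theme and image channel."""

        def fit(self, X, y):
            # X: (n_pixels, n_channels) training intensities, y: theme labels
            self.themes = np.unique(y)
            self.priors = np.array([(y == t).mean() for t in self.themes])
            self.means = np.array([X[y == t].mean(axis=0) for t in self.themes])
            self.vars_ = np.array([X[y == t].var(axis=0) + 1e-9 for t in self.themes])
            return self

        def predict(self, X):
            # Log-posterior up to a constant: log prior + sum of log Gaussians.
            log_post = np.log(self.priors)[None, :] - 0.5 * np.sum(
                np.log(2 * np.pi * self.vars_)[None, :, :]
                + (X[:, None, :] - self.means[None, :, :]) ** 2
                / self.vars_[None, :, :],
                axis=2)
            return self.themes[np.argmax(log_post, axis=1)]
    ```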

  18. New Methods of Entanglement with Spatial Modes of Light

    DTIC Science & Technology

    2014-02-01

    The report diagnoses the modes of Poincare beams both by state nulling and by imaging polarimetry; the latter entails taking six single-photon images, pixel by pixel, after passage through six different polarization filters (report Figures 12 and 13).

  19. 1024-Pixel CMOS Multimodality Joint Cellular Sensor/Stimulator Array for Real-Time Holistic Cellular Characterization and Cell-Based Drug Screening.

    PubMed

    Park, Jong Seok; Aziz, Moez Karim; Li, Sensen; Chi, Taiyun; Grijalva, Sandra Ivonne; Sung, Jung Hoon; Cho, Hee Cheol; Wang, Hua

    2018-02-01

    This paper presents a fully integrated CMOS multimodality joint sensor/stimulator array with 1024 pixels for real-time holistic cellular characterization and drug screening. The proposed system consists of four pixel groups and four parallel signal-conditioning blocks. Every pixel group contains 16 × 16 pixels, and each pixel includes one gold-plated electrode, four photodiodes, and in-pixel circuits, within a pixel footprint. Each pixel supports real-time extracellular potential recording, optical detection, charge-balanced biphasic current stimulation, and cellular impedance measurement for the same cellular sample. The proposed system is fabricated in a standard 130-nm CMOS process. Rat cardiomyocytes are successfully cultured on-chip. Measured high-resolution optical opacity images, extracellular potential recordings, biphasic current stimulations, and cellular impedance images demonstrate the unique advantages of the system for holistic cell characterization and drug screening. Furthermore, this paper demonstrates the use of optical detection on the on-chip cultured cardiomyocytes to track their cyclic beating pattern and beating rate in real time.

  20. Method for hyperspectral imagery exploitation and pixel spectral unmixing

    NASA Technical Reports Server (NTRS)

    Lin, Ching-Fang (Inventor)

    2003-01-01

    An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve the abundance vector for the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point of the genetic algorithm speeds up its evolution. After obtaining the accurate abundance estimate, the procedure moves to the next pixel and uses the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel using the robust filter, and again uses the genetic algorithm to derive an accurate abundance estimate efficiently based on the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.
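
    The sketch below illustrates the general idea of sequential, per-pixel abundance estimation, but swaps in non-negative least squares for the genetic-algorithm/Kalman-filter alternation described above; it is a simplified stand-in, not the method of the invention.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix_cube(cube, endmembers):
        """Sequentially estimate per-pixel abundance vectors.

        cube       : (rows, cols, bands) hyperspectral image
        endmembers : (bands, n_endmembers) spectral signatures
        Non-negative least squares stands in for the genetic algorithm /
        Kalman filter alternation of the original method.
        """
        rows, cols, _ = cube.shape
        n_end = endmembers.shape[1]
        abundances = np.zeros((rows, cols, n_end))
        for r in range(rows):
            for c in range(cols):
                a, _ = nnls(endmembers, cube[r, c].astype(float))
                abundances[r, c] = a / max(a.sum(), 1e-12)  # sum-to-one
        return abundances
    ```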

  1. Phase information contained in meter-scale SAR images

    NASA Astrophysics Data System (ADS)

    Datcu, Mihai; Schwarz, Gottfried; Soccorsi, Matteo; Chaabouni, Houda

    2007-10-01

    The properties of single look complex SAR satellite images have already been analyzed by many investigators. A common belief is that, apart from inverse SAR methods or polarimetric applications, no information can be gained from the phase of each pixel. This belief is based on the assumption that we obtain uniformly distributed random phases when a sufficient number of small-scale scatterers are mixed in each image pixel. However, the random phase assumption no longer holds for typical high resolution urban remote sensing scenes, where a limited number of prominent human-made scatterers with near-regular shape and sub-meter size lead to correlated phase patterns. If the pixel size shrinks to a critical threshold of about 1 meter, the reflectance of built-up urban scenes becomes dominated by typical metal reflectors, corner-like structures, and multiple scattering. The resulting phases are hard to model, but one can try to classify a scene based on the phase characteristics of neighboring image pixels. We provide a "cooking recipe" of how to analyze existing phase patterns that extend over neighboring pixels.

  2. Three-pass protocol scheme for bitmap image security by using vernam cipher algorithm

    NASA Astrophysics Data System (ADS)

    Rachmawati, D.; Budiman, M. A.; Aulya, L.

    2018-02-01

    Confidentiality, integrity, and efficiency are crucial aspects of data security. Among digital data, image data is particularly prone to abuse through operations like duplication, modification, etc. There are several data security techniques; one of them is cryptography. The security of the Vernam Cipher cryptography algorithm is very dependent on the key exchange process: if the key is leaked, the security of this algorithm collapses. Therefore, a method that minimizes key leakage during the exchange of messages is required. The method used here is known as the Three-Pass Protocol. This protocol enables the message delivery process without a key exchange, so messages can reach the receiver safely without fear of key leakage. The system is built using the Java programming language. The materials used for system testing are images of size 200×200 pixels, 300×300 pixels, 500×500 pixels, 800×800 pixels and 1000×1000 pixels. The experiments showed that the Vernam Cipher algorithm in the Three-Pass Protocol scheme could restore the original image.
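
    A minimal Python sketch of the scheme (the paper's system is written in Java): each party XORs the image bytes with its own random pad, so the data crosses the channel three times and no key is ever exchanged. As a general caveat not discussed in the abstract, plain XOR passes can be combined by an eavesdropper who records all three transmissions, so stronger commutative ciphers are used in practice.

    ```python
    import os

    def xor_bytes(data, key):
        """Vernam-style XOR of a byte string with an equally long key."""
        return bytes(d ^ k for d, k in zip(data, key))

    def three_pass_demo(image_bytes):
        """Three-Pass Protocol with XOR (Vernam-style) encryption."""
        key_a = os.urandom(len(image_bytes))      # Alice's private key
        key_b = os.urandom(len(image_bytes))      # Bob's private key

        pass1 = xor_bytes(image_bytes, key_a)     # Alice -> Bob
        pass2 = xor_bytes(pass1, key_b)           # Bob -> Alice
        pass3 = xor_bytes(pass2, key_a)           # Alice -> Bob (removes key_a)

        recovered = xor_bytes(pass3, key_b)       # Bob removes key_b
        assert recovered == image_bytes
        return recovered

    # Usage: raw bytes of a test image, e.g.
    # three_pass_demo(open("cover.bmp", "rb").read())
    ```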

  3. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    NASA Astrophysics Data System (ADS)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
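
    The final block-wise stage can be sketched as follows, using a plain per-block Otsu threshold as a stand-in for the saliency-weighted foreground/background histogram thresholds of the paper; the coarse grid of block thresholds is then interpolated to a per-pixel threshold image. All parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import zoom
    from skimage.filters import threshold_otsu

    def blockwise_threshold_map(image, block=64):
        """One threshold per block, interpolated to a full-size threshold image."""
        h, w = image.shape
        nby, nbx = int(np.ceil(h / block)), int(np.ceil(w / block))
        block_t = np.zeros((nby, nbx))
        for by in range(nby):
            for bx in range(nbx):
                tile = image[by * block:(by + 1) * block,
                             bx * block:(bx + 1) * block]
                # Guard against flat tiles, where Otsu is undefined.
                block_t[by, bx] = threshold_otsu(tile) if np.ptp(tile) > 0 else tile.min()
        # Bilinear interpolation of the coarse threshold grid to pixel resolution.
        t_map = zoom(block_t, (h / nby, w / nbx), order=1)
        return t_map[:h, :w]

    # Foreground mask (comparison direction depends on whether nuclei are
    # darker or brighter than the background):
    # foreground = image > blockwise_threshold_map(image)
    ```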

  4. Simultaneous fluorescence and quantitative phase microscopy with single-pixel detectors

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Suo, Jinli; Zhang, Yuanlong; Dai, Qionghai

    2018-02-01

    Multimodal microscopy offers high flexibilities for biomedical observation and diagnosis. Conventional multimodal approaches either use multiple cameras or a single camera spatially multiplexing different modes. The former needs expertise demanding alignment and the latter suffers from limited spatial resolution. Here, we report an alignment-free full-resolution simultaneous fluorescence and quantitative phase imaging approach using single-pixel detectors. By combining reference-free interferometry with single-pixel detection, we encode the phase and fluorescence of the sample in two detection arms at the same time. Then we employ structured illumination and the correlated measurements between the sample and the illuminations for reconstruction. The recovered fluorescence and phase images are inherently aligned thanks to single-pixel detection. To validate the proposed method, we built a proof-of-concept setup for first imaging the phase of etched glass with the depth of a few hundred nanometers and then imaging the fluorescence and phase of the quantum dot drop. This method holds great potential for multispectral fluorescence microscopy with additional single-pixel detectors or a spectrometer. Besides, this cost-efficient multimodal system might find broad applications in biomedical science and neuroscience.

  5. Ghost detection and removal based on super-pixel grouping in exposure fusion

    NASA Astrophysics Data System (ADS)

    Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun

    2014-09-01

    A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosts in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and the super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown on conventional display devices directly, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high quality images which have fewer ghost artifacts and provide a better visual quality than previous approaches.
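
    A compact sketch of the detection step: ZNCC is computed per super-pixel between a given exposure and the reference, and super-pixels whose correlation falls below a cutoff are zeroed out in the fusion weight map. The super-pixel labels are assumed to come from any standard over-segmentation (e.g. SLIC), and the 0.8 cutoff is an illustrative choice, not a value from the paper.

    ```python
    import numpy as np

    def zncc(values_a, values_b):
        """Zero mean normalized cross correlation between two pixel groups."""
        a = values_a.astype(float).ravel() - values_a.mean()
        b = values_b.astype(float).ravel() - values_b.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def ghost_weight_map(exposure, reference, labels, threshold=0.8):
        """Down-weight super-pixels of `exposure` that correlate poorly with
        the reference exposure. `labels` holds one super-pixel id per pixel."""
        weights = np.ones_like(exposure, dtype=float)
        for sp in np.unique(labels):
            mask = labels == sp
            if zncc(exposure[mask], reference[mask]) < threshold:
                weights[mask] = 0.0        # exclude suspected ghost region
        return weights
    ```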

  6. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    PubMed Central

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual set of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242

  7. Low cost thermal camera for use in preclinical detection of diabetic peripheral neuropathy in primary care setting

    NASA Astrophysics Data System (ADS)

    Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.

    2018-02-01

    Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US on patients with diabetes. Early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in a primary care setting. The objective of this study is to compare results from a low-cost thermal camera with a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with nerve function affected by DPN. The limitations of using low-cost cameras for DPN imaging are lower resolution (active pixels), frame rate, thermal sensitivity, etc. We integrated two FLIR Lepton sensors (80x60 active pixels, 50° HFOV, thermal sensitivity < 50mK) as one unit. The right and left cameras record videos of the right and left foot, respectively. A compatible embedded system (Raspberry Pi 3 Model B v1.2) is used to configure the sensors and to capture and stream the video via ethernet. The resulting video has 160x120 active pixels (8 frames/second). We compared the temperature measurement of feet obtained using the low-cost camera against the gold-standard high-end FLIR SC305. Twelve subjects (aged 35-76) were recruited. The difference in the temperature measurements between cameras was calculated for each subject, and the results show that the difference between the temperature measurements of the two cameras (mean difference=0.4, p-value=0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.

  8. Single image non-uniformity correction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu

    2016-05-01

    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system was proposed. The algorithm, based on compressive sensing (CS) of a single image, overcame the disadvantages of "ghost artifacts" and heavy computational cost in traditional NUC algorithms. A point-sampling matrix was designed to validate the measurements of CS in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were solved with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image with only 25% of the pixels. A small difference was found between the correction results using 100% of the pixels and the reconstruction results using 40% of the pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.

  9. Correcting speckle contrast at small speckle size to enhance signal to noise ratio for laser speckle contrast imaging.

    PubMed

    Qiu, Jianjun; Li, Yangyang; Huang, Qin; Wang, Yang; Li, Pengcheng

    2013-11-18

    In laser speckle contrast imaging, it has usually been suggested that the speckle size should exceed two camera pixels to eliminate the spatial averaging effect. In this work, we show the benefit of enhancing the signal to noise ratio by correcting the speckle contrast at small speckle size. Through simulations and experiments, we demonstrated that the local speckle contrast, even at a speckle size much smaller than one pixel, can be corrected by dividing the original speckle contrast by the static speckle contrast. Moreover, we show a 50% higher signal to noise ratio of the speckle contrast image at a speckle size below 0.5 pixels than at a speckle size of two pixels. These results indicate the possibility of selecting a relatively large aperture to simultaneously ensure sufficient light intensity and high accuracy and signal to noise ratio, making laser speckle contrast imaging more flexible.
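
    A short sketch of the quantities involved, assuming the usual spatial definition of speckle contrast (standard deviation over mean in a sliding window) and the division-by-static-contrast correction described above; the window size and variable names are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_speckle_contrast(image, window=7):
        """Spatial speckle contrast K = sigma / mean in a sliding window."""
        img = image.astype(float)
        mean = uniform_filter(img, window)
        mean_sq = uniform_filter(img ** 2, window)
        std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0))
        return std / np.maximum(mean, 1e-12)

    def corrected_contrast(dynamic_frame, static_frame, window=7):
        """Correct the spatial-averaging loss at small speckle size by dividing
        the raw contrast by the contrast measured on a static scattering sample."""
        k_raw = local_speckle_contrast(dynamic_frame, window)
        k_static = local_speckle_contrast(static_frame, window)
        return k_raw / np.maximum(k_static, 1e-12)
    ```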

  10. Demosaicking algorithm for the Kodak-RGBW color filter array

    NASA Astrophysics Data System (ADS)

    Rafinazari, M.; Dubois, E.

    2015-01-01

    Digital cameras capture images through different color filter arrays (CFAs) and then reconstruct the full color image. Each CFA pixel only captures one primary color component; the other primary components are estimated using information from neighboring pixels. During the demosaicking algorithm, the two unknown color components are estimated at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm using the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset and the results have been compared with previous work.

  11. Unsupervised color image segmentation using a lattice algebra clustering technique

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Ritter, Gerhard X.

    2011-08-01

    In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green-Blue (RGB) color space. The proposed technique is a two step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebychev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. The two steps in our proposed method are completely unsupervised or autonomous. Illustrative examples are provided to demonstrate the color segmentation results including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
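
    The assignment step can be illustrated in a few lines: given the extreme color vectors found in the first step, each pixel is assigned to the extreme color at the smallest Chebychev (L-infinity) distance. The extraction of the extreme vectors via the min-W and max-M lattice memories is not reproduced here, and all names below are illustrative.

    ```python
    import numpy as np

    def assign_to_extremes(image_rgb, extremes):
        """Cluster pixels by Chebychev distance to a set of extreme colors.

        image_rgb : (H, W, 3) array of RGB values
        extremes  : (K, 3) array of extreme color pixel vectors
        Returns an (H, W) array of cluster labels in [0, K).
        """
        pixels = image_rgb.reshape(-1, 1, 3).astype(float)
        ext = extremes.reshape(1, -1, 3).astype(float)
        # Chebychev (L-infinity) distance between every pixel and every extreme.
        d = np.abs(pixels - ext).max(axis=2)          # shape (H*W, K)
        labels = d.argmin(axis=1)
        return labels.reshape(image_rgb.shape[:2])
    ```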

  12. A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity

    PubMed Central

    Zhang, Fan; Niu, Hanben

    2016-01-01

    In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/8.5 × 10⁷ when illuminated by a 405-nm diode laser and 1/1.4 × 10⁴ when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e− rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena. PMID:27367699

  13. A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity.

    PubMed

    Zhang, Fan; Niu, Hanben

    2016-06-29

    In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/8.5 × 10⁷ when illuminated by a 405-nm diode laser and 1/1.4 × 10⁴ when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e− rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena.

  14. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, S.; Cole, D. M.; Hancock, B. R.; Smith, R. M.

    2008-01-01

    Electronic coupling effects such as Inter-Pixel Capacitance (IPC) affect the quantitative interpretation of image data from CMOS, hybrid visible and infrared imagers alike. Existing methods of characterizing IPC do not provide a map of the spatial variation of IPC over all pixels. We demonstrate a deterministic method that provides a direct quantitative map of the crosstalk across an imager. The approach requires only the ability to reset single pixels to an arbitrary voltage, different from the rest of the imager. No illumination source is required. Mapping IPC independently for each pixel is also made practical by the greater S/N ratio achievable for an electrical stimulus than for an optical stimulus, which is subject to both Poisson statistics and diffusion effects of photo-generated charge. The data we present illustrates a more complex picture of IPC in Teledyne HgCdTe and HyViSi focal plane arrays than is presently understood, including the presence of a newly discovered, long range IPC in the HyViSi FPA that extends tens of pixels in distance, likely stemming from extended field effects in the fully depleted substrate. The sensitivity of the measurement approach has been shown to be good enough to distinguish spatial structure in IPC of the order of 0.1%.

  15. Reduction of time-resolved space-based CCD photometry developed for MOST Fabry Imaging data*

    NASA Astrophysics Data System (ADS)

    Reegen, P.; Kallinger, T.; Frast, D.; Gruberbauer, M.; Huber, D.; Matthews, J. M.; Punz, D.; Schraml, S.; Weiss, W. W.; Kuschnig, R.; Moffat, A. F. J.; Walker, G. A. H.; Guenther, D. B.; Rucinski, S. M.; Sasselov, D.

    2006-04-01

    The MOST (Microvariability and Oscillations of Stars) satellite obtains ultraprecise photometry from space with high sampling rates and duty cycles. Astronomical photometry or imaging missions in low Earth orbits, like MOST, are especially sensitive to scattered light from Earthshine, and all these missions have a common need to extract target information from voluminous data cubes. They consist of upwards of hundreds of thousands of two-dimensional CCD frames (or subrasters) containing from hundreds to millions of pixels each, where the target information, superposed on background and instrumental effects, is contained only in a subset of pixels (Fabry Images, defocused images, mini-spectra). We describe a novel reduction technique for such data cubes: resolving linear correlations of target and background pixel intensities. This step-wise multiple linear regression removes only those target variations which are also detected in the background. The advantage of regression analysis versus background subtraction is the appropriate scaling, taking into account that the amount of contamination may differ from pixel to pixel. The multivariate solution for all pairs of target/background pixels is minimally invasive of the raw photometry while being very effective in reducing contamination due to, e.g. stray light. The technique is tested and demonstrated with both simulated oscillation signals and real MOST photometry.
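
    A highly simplified sketch of the decorrelation idea, assuming the target and background pixel time series have already been extracted from the data cube: the target light curve is regressed on the background pixel intensities and only the fitted (background-correlated) part is removed. The actual pipeline performs a step-wise multiple regression over all target/background pixel pairs rather than the single least-squares fit shown here.

    ```python
    import numpy as np

    def decorrelate(target, background):
        """Remove from `target` the variations linearly predictable from the
        background pixel intensities.

        target     : (n_frames,) light curve of one target pixel
        background : (n_frames, n_bg) intensities of background pixels
        """
        # Design matrix: constant term plus the background time series.
        A = np.column_stack([np.ones_like(target), background])
        coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
        residual = target - A @ coeffs
        return residual + target.mean()    # restore the mean flux level
    ```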

  16. Mk x Nk gated CMOS imager

    NASA Astrophysics Data System (ADS)

    Janesick, James; Elliott, Tom; Andrews, James; Tower, John; Bell, Perry; Teruya, Alan; Kimbrough, Joe; Bishop, Jeanne

    2014-09-01

    Our paper will describe a recently designed Mk x Nk x 10 um pixel CMOS gated imager intended to be first employed at the LLNL National Ignition Facility (NIF). Fabrication involves stitching MxN 1024x1024x10 um pixel blocks together into a monolithic imager (where M = 1, 2, . . 10 and N = 1, 2, . . 10). The imager has been designed for either NMOS or PMOS pixel fabrication using a base 0.18 um/3.3V CMOS process. Details behind the design are discussed, with emphasis on a custom global reset feature which erases unwanted charge from the imager in ~1 us during the fusion ignition process, followed by an exposure to obtain useful data. Performance data generated by prototype imagers with designs similar to the Mk x Nk sensor are presented.

  17. Using Trained Pixel Classifiers to Select Images of Interest

    NASA Technical Reports Server (NTRS)

    Mazzoni, D.; Wagstaff, K.; Castano, R.

    2004-01-01

    We present a machine-learning-based approach to ranking images based on learned priorities. Unlike previous methods for image evaluation, which typically assess the value of each image based on the presence of predetermined specific features, this method involves using two levels of machine-learning classifiers: one level is used to classify each pixel as belonging to one of a group of rather generic classes, and another level is used to rank the images based on these pixel classifications, given some example rankings from a scientist as a guide. Initial results indicate that the technique works well, producing new rankings that match the scientist's rankings significantly better than would be expected by chance. The method is demonstrated for a set of images collected by a Mars field-test rover.
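
    A minimal sketch of the two-level idea using scikit-learn, with random forests standing in for whichever classifiers the authors used: the first level labels pixels into generic classes, and the second level learns to reproduce the scientist's example rankings (expressed here as numeric scores) from per-image class-fraction summaries. Every name and model choice below is an assumption.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def pixel_class_fractions(pixel_clf, pixel_features, n_classes):
        """Level 1: classify every pixel (integer labels 0..n_classes-1 assumed),
        then summarise the image as the fraction of pixels per generic class."""
        labels = pixel_clf.predict(pixel_features)        # (n_pixels,)
        return np.bincount(labels, minlength=n_classes) / len(labels)

    def train_ranker(pixel_clf, images, scores, n_classes):
        """Level 2: learn to reproduce the scientist's example scores from the
        class-fraction summaries; images are then sorted by predicted score."""
        X = np.array([pixel_class_fractions(pixel_clf, img, n_classes)
                      for img in images])
        ranker = RandomForestRegressor(n_estimators=200, random_state=0)
        ranker.fit(X, scores)
        return ranker

    # Usage: new_rank = ranker.predict(summaries).argsort()[::-1]
    ```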

  18. Techniques to derive geometries for image-based Eulerian computations

    PubMed Central

    Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.

    2014-01-01

    Purpose: The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach: Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings: While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value: It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight to the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID:25750470

  19. CMOS Image Sensor and System for Imaging Hemodynamic Changes in Response to Deep Brain Stimulation.

    PubMed

    Zhang, Xiao; Noor, Muhammad S; McCracken, Clinton B; Kiss, Zelma H T; Yadid-Pecht, Orly; Murari, Kartikeya

    2016-06-01

    Deep brain stimulation (DBS) is a therapeutic intervention used for a variety of neurological and psychiatric disorders, but its mechanism of action is not well understood. It is known that DBS modulates neural activity, which changes metabolic demands and thus the cerebral circulation state. However, it is unclear whether there are correlations between electrophysiological, hemodynamic and behavioral changes and whether they have any implications for clinical benefits. In order to investigate these questions, we present a miniaturized system for spectroscopic imaging of brain hemodynamics. The system consists of a 144 × 144, [Formula: see text] pixel pitch, high-sensitivity, analog-output CMOS imager fabricated in a standard 0.35 μm CMOS process, along with a miniaturized imaging system comprising illumination, focusing, analog-to-digital conversion and μSD card based data storage. This enables stand-alone operation without a computer or electrical or fiberoptic tethers. To achieve high sensitivity, the pixel uses a capacitive transimpedance amplifier (CTIA). The nMOS transistors are in the pixel while the pMOS transistors are column-parallel, resulting in a fill factor (FF) of 26%. Running at 60 fps and exposed to 470 nm light, the CMOS imager has a minimum detectable intensity of 2.3 nW/cm², a maximum signal-to-noise ratio (SNR) of 49 dB at 2.45 μW/cm², leading to a dynamic range (DR) of 61 dB, while consuming 167 μA from a 3.3 V supply. In anesthetized rats, the system was able to detect temporal, spatial and spectral hemodynamic changes in response to DBS.

  20. Pixel Paradise

    NASA Technical Reports Server (NTRS)

    1998-01-01

    PixelVision, Inc., has developed a series of integrated imaging engines capable of high-resolution image capture at dynamic speeds. This technology was used originally at Jet Propulsion Laboratory in a series of imaging engines for a NASA mission to Pluto. By producing this integrated package, Charge-Coupled Device (CCD) technology has been made accessible to a wide range of users.
