Time-Resolved and Spectroscopic Three-Dimensional Optical Breast Tomography
2008-04-01
Appendix 1. Each raw image was then cropped to select out the information-rich region, and binned to enhance the signal-to-noise ratio. All the binned...component analysis, near infrared (NIR) imaging, optical mammography , optical imaging using independent component analysis (OPTICA). I. INTRODUCTION N EAR...merging 5 × 5 pixels into one to enhance the SNR, resulting in a total of 352 images of 54 × 55 pixels each. All the binned images corresponding to
Characterization of a hybrid energy-resolving photon-counting detector
NASA Astrophysics Data System (ADS)
Zang, A.; Pelzer, G.; Anton, G.; Ballabriga Sune, R.; Bisello, F.; Campbell, M.; Fauler, A.; Fiederle, M.; Llopart Cudie, X.; Ritter, I.; Tennert, F.; Wölfel, S.; Wong, W. S.; Michel, T.
2014-03-01
Photon-counting detectors in medical x-ray imaging provide a higher dose efficiency than integrating detectors. Even further possibilities for imaging applications arise, if the energy of each photon counted is measured, as for example K-edge-imaging or optimizing image quality by applying energy weighting factors. In this contribution, we show results of the characterization of the Dosepix detector. This hybrid photon- counting pixel detector allows energy resolved measurements with a novel concept of energy binning included in the pixel electronics. Based on ideas of the Medipix detector family, it provides three different modes of operation: An integration mode, a photon-counting mode, and an energy-binning mode. In energy-binning mode, it is possible to set 16 energy thresholds in each pixel individually to derive a binned energy spectrum in every pixel in one acquisition. The hybrid setup allows using different sensor materials. For the measurements 300 μm Si and 1 mm CdTe were used. The detector matrix consists of 16 x 16 square pixels for CdTe (16 x 12 for Si) with a pixel pitch of 220 μm. The Dosepix was originally intended for applications in the field of radiation measurement. Therefore it is not optimized towards medical imaging. The detector concept itself still promises potential as an imaging detector. We present spectra measured in one single pixel as well as in the whole pixel matrix in energy-binning mode with a conventional x-ray tube. In addition, results concerning the count rate linearity for the different sensor materials are shown as well as measurements regarding energy resolution.
SNR improvement for hyperspectral application using frame and pixel binning
NASA Astrophysics Data System (ADS)
Rehman, Sami Ur; Kumar, Ankush; Banerjee, Arup
2016-05-01
Hyperspectral imaging spectrometer systems are increasingly being used in the field of remote sensing for variety of civilian and military applications. The ability of such instruments in discriminating finer spectral features along with improved spatial and radiometric performance have made such instruments a powerful tool in the field of remote sensing. Design and development of spaceborne hyper spectral imaging spectrometers poses lot of technological challenges in terms of optics, dispersion element, detectors, electronics and mechanical systems. The main factors that define the type of detectors are the spectral region, SNR, dynamic range, pixel size, number of pixels, frame rate, operating temperature etc. Detectors with higher quantum efficiency and higher well depth are the preferred choice for such applications. CCD based Si detectors serves the requirement of high well depth for VNIR band spectrometers but suffers from smear. Smear can be controlled by using CMOS detectors. Si CMOS detectors with large format arrays are available. These detectors generally have smaller pitch and low well depth. Binning technique can be used with available CMOS detectors to meet the large swath, higher resolution and high SNR requirements. Availability of larger dwell time of satellite can be used to bin multiple frames to increase the signal collection even with lesser well depth detectors and ultimately increase the SNR. Lab measurements reveal that SNR improvement by frame binning is more in comparison to pixel binning. Effect of pixel binning as compared to the frame binning will be discussed and degradation of SNR as compared to theoretical value for pixel binning will be analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazur, T; Wang, Y; Fischer-Valuck, B
2015-06-15
Purpose: To develop a novel and rapid, SIFT-based algorithm for assessing feature motion on cine MR images acquired during MRI-guided radiotherapy treatments. In particular, we apply SIFT descriptors toward both partitioning cine images into respiratory states and tracking regions across frames. Methods: Among a training set of images acquired during a fraction, we densely assign SIFT descriptors to pixels within the images. We cluster these descriptors across all frames in order to produce a dictionary of trackable features. Associating the best-matching descriptors at every frame among the training images to these features, we construct motion traces for the features. Wemore » use these traces to define respiratory bins for sorting images in order to facilitate robust pixel-by-pixel tracking. Instead of applying conventional methods for identifying pixel correspondences across frames we utilize a recently-developed algorithm that derives correspondences via a matching objective for SIFT descriptors. Results: We apply these methods to a collection of lung, abdominal, and breast patients. We evaluate the procedure for respiratory binning using target sites exhibiting high-amplitude motion among 20 lung and abdominal patients. In particular, we investigate whether these methods yield minimal variation between images within a bin by perturbing the resulting image distributions among bins. Moreover, we compare the motion between averaged images across respiratory states to 4DCT data for these patients. We evaluate the algorithm for obtaining pixel correspondences between frames by tracking contours among a set of breast patients. As an initial case, we track easily-identifiable edges of lumpectomy cavities that show minimal motion over treatment. Conclusions: These SIFT-based methods reliably extract motion information from cine MR images acquired during patient treatments. While we performed our analysis retrospectively, the algorithm lends itself to prospective motion assessment. Applications of these methods include motion assessment, identifying treatment windows for gating, and determining optimal margins for treatment.« less
SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, C; Qi, H; Chen, Z
Purpose: In computed tomography (CT) system, CT images with ring artifacts will be reconstructed when some adjacent bins of detector don’t work. The ring artifacts severely degrade CT image quality. We present a useful CT ring artifacts reduction based on projection data correction, aiming at estimating the missing data of projection data accurately, thus removing the ring artifacts of CT images. Methods: The method consists of ten steps: 1) Identification of abnormal pixel line in projection sinogram; 2) Linear interpolation within the pixel line of projection sinogram; 3) FBP reconstruction using interpolated projection data; 4) Filtering FBP image using meanmore » filter; 5) Forwarding projection of filtered FBP image; 6) Subtraction forwarded projection from original projection; 7) Linear interpolation of abnormal pixel line area in the subtraction projection; 8) Adding the interpolated subtraction projection on the forwarded projection; 9) FBP reconstruction using corrected projection data; 10) Return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We have studied impact of amount of dead bins of CT detector on the accuracy of missing data estimation in projection sinogram. For the simulated case with a resolution of 256 by 256 Shepp-Logan phantom, three iterations are sufficient to restore projection data and reconstruct ring artifact-free images when the dead bins rating is under 30%. The dead-bin-induced artifacts are substantially reduced. More iteration number is needed to reconstruct satisfactory images while the rating of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.« less
Effects of empty bins on image upscaling in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2017-07-01
This paper presents a preliminary study of the effect of empty bins on image upscaling in capsule endoscopy. The presented study was conducted based on results of existing contrast enhancement and interpolation methods. A low contrast enhancement method based on pixels consecutiveness and modified bilinear weighting scheme has been developed to distinguish between necessary empty bins and unnecessary empty bins in the effort to minimize the number of empty bins in the input image, before further processing. Linear interpolation methods have been used for upscaling input images with stretched histograms. Upscaling error differences and similarity indices between pairs of interpolation methods have been quantified using the mean squared error and feature similarity index techniques. Simulation results demonstrated more promising effects using the developed method than other contrast enhancement methods mentioned.
Information-efficient spectral imaging sensor
Sweatt, William C.; Gentry, Stephen M.; Boye, Clinton A.; Grotbeck, Carter L.; Stallard, Brian R.; Descour, Michael R.
2003-01-01
A programmable optical filter for use in multispectral and hyperspectral imaging. The filter splits the light collected by an optical telescope into two channels for each of the pixels in a row in a scanned image, one channel to handle the positive elements of a spectral basis filter and one for the negative elements of the spectral basis filter. Each channel for each pixel disperses its light into n spectral bins, with the light in each bin being attenuated in accordance with the value of the associated positive or negative element of the spectral basis vector. The spectral basis vector is constructed so that its positive elements emphasize the presence of a target and its negative elements emphasize the presence of the constituents of the background of the imaged scene. The attenuated light in the channels is re-imaged onto separate detectors for each pixel and then the signals from the detectors are combined to give an indication of the presence or not of the target in each pixel of the scanned scene. This system provides for a very efficient optical determination of the presence of the target, as opposed to the very data intensive data manipulations that are required in conventional hyperspectral imaging systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian; Maier, Joscha; Sawall, Stefan
2016-07-15
Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector inmore » a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. Conclusions: The IMA is computationally efficient as it required about 10{sup 2} random numbers per ray incident on a detector pixel instead of an estimated 10{sup 8} random numbers per ray as Monte Carlo approaches would need. The spatial–spectral correlations as described by IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.« less
NASA Astrophysics Data System (ADS)
Lloyd, G. R.; Nallala, J.; Stone, N.
2016-03-01
FTIR is a well-established technique and there is significant interest in applying this technique to medical diagnostics e.g. to detect cancer. The introduction of focal plane array (FPA) detectors means that FTIR is particularly suited to rapid imaging of biopsy sections as an adjunct to digital pathology. Until recently however each pixel in the image has been limited to a minimum of 5.5 µm which results in a comparatively low magnification image or histology applications and potentially the loss of important diagnostic information. The recent introduction of higher magnification optics gives image pixels that cover approx. 1.1 µm. This reduction in image pixel size gives images of higher magnification and improved spatial detail can be observed. However, the effect of increasing the magnification on spectral quality and the ability to discriminate between disease states is not well studied. In this work we test the discriminatory performance of FTIR imaging using both standard (5.5 µm) and high (1.1 µm) magnification for the detection of colorectal cancer and explore the effect of binning to degrade high resolution images to determine whether similar diagnostic information and performance can be obtained using both magnifications. Results indicate that diagnostic performance using high magnification may be reduced as compared to standard magnification when using existing multivariate approaches. Reduction of the high magnification data to standard magnification via binning can potentially recover some of the lost performance.
VizieR Online Data Catalog: Mission Accessible Near-Earth Objects Survey (Thirouin+, 2016)
NASA Astrophysics Data System (ADS)
Thirouin, A.; Moskovitz, N.; Binzel, R. P.; Christensen, E.; DeMeo, F. E.; Person, M. J.; Polishook, D.; Thomas, C. A.; Trilling, D.; Willman, M.; Hinkle, M.; Burt, B.; Avner, D.; Aceituno, F. J.
2017-06-01
The data were obtained with the 4.3m Lowell Discovery Channel Telescope (DCT), the 4.1m Southern Astrophysical Research (SOAR) telescope, the 4m Nicholas U. Mayall Telescope, the 2.1m at Kitt Peak Observatory, the 1.8m Perkins telescope, the 1.5m Sierra Nevada Observatory (OSN), and the 1.3m SMARTS telescope between 2013 August and 2015 October. The DCT is forty miles southeast of Flagstaff at the Happy Jack site (Arizona, USA). Images were obtained using the Large Monolithic Imager (LMI), which is a 6144*6160 CCD. The total field of view is 12.5*12.5 with a plate scale of 0.12''/pixel (unbinned). Images were obtained using the 3*3 or 2*2 binning modes. Observations were carried out in situ. The SOAR telescope is located on Cerro Pachon, Chile. Images were obtained using the Goodman High Throughput Spectrograph (Goodman-HTS) instrument in its imaging mode. The instrument consists of a 4096*4096 Fairchild CCD, with a 7.2' diameter field of view (circular field of view) and a plate scale of 0.15''/pixel. Images were obtained using the 2*2 binning mode. Observations were conducted remotely. The Mayall telescope is a 4m telescope located at the Kitt Peak National Observatory (Tucson, Arizona, USA). The National Optical Astronomy Observatory (NOAO) CCD Mosaic-1.1 is a wide field imager composed of an array of eight CCD chips. The field of view is 36'*36', and the plate scale is 0.26''/pixel. Observations were performed remotely. The 2.1m at Kitt Peak Observatory was operated with the STA3 2k*4k CCD, which has a plate scale of 0.305''/pixel and a field of view of 10.2'*6.6'. The instrument was binned 2*2 and the observations were conducted in situ. The Perkins 72'' telescope is located at the Anderson Mesa station at Lowell Observatory (Flagstaff, Arizona, USA). We used the Perkins ReImaging SysteM (PRISM) instrument, a 2*2k Fairchild CCD. The PRISM plate scale is 0.39''/pixel for a field of view of 13'*13'. Observations were performed in situ. The 1.5m telescope located at the OSN at Loma de Dilar in the National Park of Sierra Nevada (Granada, Spain) was operated in situ. Observations were carried out with a 2k*2k CCD, with a total field of view of 7.8'*7.8'. We used 2*2 binning mode, resulting in an effective plate scale of 0.46''/pixel. The 1.3m SMARTS telescope is located at the Cerro Tololo Inter-American Observatory (Coquimbo region, Chile). This telescope is equipped with a camera called ANDICAM (A Novel Dual Imaging CAMera). ANDICAM is a Fairchild 2048*2048 CCD. The pixel scale is 0.371''/pixel, and the field of view is 6'*6'. Observations were carried out in queue mode. (2 data files).
NASA Astrophysics Data System (ADS)
Shankar, A.; Russ, M.; Vijayan, S.; Bednarek, D. R.; Rudin, S.
2017-03-01
Apodized Aperture Pixel (AAP) design, proposed by Ismailova et.al, is an alternative to the conventional pixel design. The advantages of AAP processing with a sinc filter in comparison with using other filters include non-degradation of MTF values and elimination of signal and noise aliasing, resulting in an increased performance at higher frequencies, approaching the Nyquist frequency. If high resolution small field-of-view (FOV) detectors with small pixels used during critical stages of Endovascular Image Guided Interventions (EIGIs) could also be extended to cover a full field-of-view typical of flat panel detectors (FPDs) and made to have larger effective pixels, then methods must be used to preserve the MTF over the frequency range up to the Nyquist frequency of the FPD while minimizing aliasing. In this work, we convolve the experimentally measured MTFs of an Microangiographic Fluoroscope (MAF) detector, (the MAF-CCD with 35μm pixels) and a High Resolution Fluoroscope (HRF) detector (HRF-CMOS50 with 49.5μm pixels) with the AAP filter and show the superiority of the results compared to MTFs resulting from moving average pixel binning and to the MTF of a standard FPD. The effect of using AAP is also shown in the spatial domain, when used to image an infinitely small point object. For detectors in neurovascular interventions, where high resolution is the priority during critical parts of the intervention, but full FOV with larger pixels are needed during less critical parts, AAP design provides an alternative to simple pixel binning while effectively eliminating signal and noise aliasing yet allowing the small FOV high resolution imaging to be maintained during critical parts of the EIGI.
Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 micro sec, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.
NASA Astrophysics Data System (ADS)
Lederman, Dror; Leader, Joseph K.; Zheng, Bin; Sciurba, Frank C.; Tan, Jun; Gur, David
2011-03-01
Quantitative computed tomography (CT) has been widely used to detect and evaluate the presence (or absence) of emphysema applying the density masks at specific thresholds, e.g., -910 or -950 Hounsfield Unit (HU). However, it has also been observed that subjects with similar density-mask based emphysema scores could have varying lung function, possibly indicating differences of disease severity. To assess this possible discrepancy, we investigated whether density distribution of "viable" lung parenchyma regions with pixel values > -910 HU correlates with lung function. A dataset of 38 subjects, who underwent both pulmonary function testing and CT examinations in a COPD SCCOR study, was assembled. After the lung regions depicted on CT images were automatically segmented by a computerized scheme, we systematically divided the lung parenchyma into different density groups (bins) and computed a number of statistical features (i.e., mean, standard deviation (STD), skewness of the pixel value distributions) in these density bins. We then analyzed the correlations between each feature and lung function. The correlation between diffusion lung capacity (DLCO) and STD of pixel values in the bin of -910HU <= PV < -750HU was -0.43, as compared with a correlation of -0.49 obtained between the post-bronchodilator ratio (FEV1/FVC) measured by the forced expiratory volume in 1 second (FEV1) dividing the forced vital capacity (FVC) and the STD of pixel values in the bin of -1024HU <= PV < -910HU. The results showed an association between the distribution of pixel values in "viable" lung parenchyma and lung function, which indicates that similar to the conventional density mask method, the pixel value distribution features in "viable" lung parenchyma areas may also provide clinically useful information to improve assessments of lung disease severity as measured by lung functional tests.
The CTIO Acquisition CCD-TV camera design
NASA Astrophysics Data System (ADS)
Schmidt, Ricardo E.
1990-07-01
A CCD-based Acquisition TV Camera has been developed at CTIO to replace the existing ISIT units. In a 60 second exposure, the new Camera shows a sixfold improvement in sensitivity over an ISIT used with a Leaky Memory. Integration times can be varied over a 0.5 to 64 second range. The CCD, contained in an evacuated enclosure, is operated at -45 C. Only the image section, an area of 8.5 mm x 6.4 mm, gets exposed to light. Pixel size is 22 microns and either no binning or 2 x 2 binning can be selected. The typical readout rates used vary between 3.5 and 9 microseconds/pixel. Images are stored in a PC/XT/AT, which generates RS-170 video. The contrast in the RS-170 frames is automatically enhanced by the software.
VizieR Online Data Catalog: Cheshire Cat galaxies: redshifts and magnitudes (Irwin+, 2015)
NASA Astrophysics Data System (ADS)
Irwin, J. A.; Dupke, R.; Carrasco, E. R.; Maksym, W. P.; Johnson, L.; White, R. E., III
2017-09-01
The optical observations (imaging and spectroscopy) were performed with the Gemini Multi-Object Spectrograph (hereafter GMOS; Hook et al. 2004PASP..116..425H) at the Gemini North Telescope in Hawaii, in queue mode, as part of the program GN-2011A-Q-25. The direct images were recorded through the r' and i' filters during the night of 2011 January 4, in dark time, with seeing median values of 0.8" and 0.9" for the r' and i' filters, respectively. The night was not photometric. Three 300 s exposures (binned by two in both axes, with pixel scale of 0.146") were observed in each filter. Offsets between exposures were used to take into account the gaps between the CCDs (37 un-binned pixels) and for cosmic ray removal. (1 data file).
VizieR Online Data Catalog: New SDSS and Washington photometry in Segue 3 (Hughes+, 2017)
NASA Astrophysics Data System (ADS)
Hughes, J.; Lacy, B.; Sakari, C.; Wallerstein, G.; Davis, C. E.; Schiefelbein, S.; Corrin, O.; Joudi, H.; Le, D.; Haynes, R. M.
2017-10-01
We used the Apache Point Observatory (APO) new Astrophysical Research Consortium Telescope Imaging Camera (ARCTIC) imager and the camera it replaced, Seaver Prototype Imaging camera (SPIcam) for our observations with the 3.5m telescope. The ARCTIC camera has a 4096*4096 STA chip giving 7.5'*7.5' as the FOV when the new 5-inch diameter circular filters are used. The older Washington filters are 3''*3'' and vigniette the FOV. SPIcam had a FOV of 4.8'*4.8'. We have several filter wheels that can handle up to ten 3*3 inch square filters (fewer in full-field mode), where binning 1*1 yields 0.11arcseconds/pixel. The fastest readout time in 2*2 binned mode is about 5s. The blue-UV sensitivity of ARCTIC is greater than that of SPIcam, which was a backside-illuminated SITe TK2048E 2048*2048 pixel CCD with 24 micron pixels, which we also binned (2*2), giving a plate scale of 0.28 arcsec per pixel. Where we combined the data sets, we binned ARCTIC 2*2 and slightly degraded its resolution. We found no irreducible color terms between frames taken with both imagers, internally. From 2013 to 2015, we had 11 half-nights total, and 102 frames had seeing better than 2'', many of which were under photometric conditions, and several nights had subarcsecond seeing. Some of the observations were repeated between SPIcam and ARCTIC, which served to test the new imager. We observed Seg 3 in the Washington filters (Canterna 1976AJ.....81..228C) C and T1 and SDSS ugri filters with both SPIcam and ARCTIC. The frames used are listed in Table1, the overlap between this paper and the Vr-data from Fadely et al. 2011 (Cat. J/AJ/142/88) (not the g and r mag values) and Ortolani et al. 2013 (Cat. J/MNRAS/433/1966) is detailed in Table2. Our photometry is presented in Table3 for all 218 objects detected in our field-of-view in CT1ugri-filters, where we required detections in all filters in order to produce spectral energy distributions (SEDs). We include the z-filter from SDSS DR13 and any 2MASS objects detected, for completeness. (4 data files).
Low-Light Image Enhancement Using Adaptive Digital Pixel Binning
Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki
2015-01-01
This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
Electron imaging with an EBSD detector.
Wright, Stuart I; Nowell, Matthew M; de Kloe, René; Camus, Patrick; Rampton, Travis
2015-01-01
Electron Backscatter Diffraction (EBSD) has proven to be a useful tool for characterizing the crystallographic orientation aspects of microstructures at length scales ranging from tens of nanometers to millimeters in the scanning electron microscope (SEM). With the advent of high-speed digital cameras for EBSD use, it has become practical to use the EBSD detector as an imaging device similar to a backscatter (or forward-scatter) detector. Using the EBSD detector in this manner enables images exhibiting topographic, atomic density and orientation contrast to be obtained at rates similar to slow scanning in the conventional SEM manner. The high-speed acquisition is achieved through extreme binning of the camera-enough to result in a 5 × 5 pixel pattern. At such high binning, the captured patterns are not suitable for indexing. However, no indexing is required for using the detector as an imaging device. Rather, a 5 × 5 array of images is formed by essentially using each pixel in the 5 × 5 pixel pattern as an individual scattered electron detector. The images can also be formed at traditional EBSD scanning rates by recording the image data during a scan or can also be formed through post-processing of patterns recorded at each point in the scan. Such images lend themselves to correlative analysis of image data with the usual orientation data provided by and with chemical data obtained simultaneously via X-Ray Energy Dispersive Spectroscopy (XEDS). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Compensation of PVT Variations in ToF Imagers with In-Pixel TDC
Vornicu, Ion; Carmona-Galán, Ricardo; Rodríguez-Vázquez, Ángel
2017-01-01
The design of a direct time-of-flight complementary metal-oxide-semiconductor (CMOS) image sensor (dToF-CIS) based on a single-photon avalanche-diode (SPAD) array with an in-pixel time-to-digital converter (TDC) must contemplate system-level aspects that affect its overall performance. This paper provides a detailed analysis of the impact of process parameters, voltage supply, and temperature (PVT) variations on the time bin of the TDC array. Moreover, the design and characterization of a global compensation loop is presented. It is based on a phase locked loop (PLL) that is integrated on-chip. The main building block of the PLL is a voltage-controlled ring-oscillator (VCRO) that is identical to the ones employed for the in-pixel TDCs. The reference voltage that drives the master VCRO is distributed to the voltage control inputs of the slave VCROs such that their multiphase outputs become invariant to PVT changes. These outputs act as time interpolators for the TDCs. Therefore the compensation scheme prevents the time bin of the TDCs from drifting over time due to the aforementioned factors. Moreover, the same scheme is used to program different time resolutions of the direct time-of-flight (ToF) imager aimed at 3D ranging or depth map imaging. Experimental results that validate the analysis are provided as well. The compensation loop proves to be remarkably effective. The spreading of the TDCs time bin is lowered from: (i) 20% down to 2.4% while the temperature ranges from 0 °C to 100 °C; (ii) 27% down to 0.27%, when the voltage supply changes within ±10% of the nominal value; (iii) 5.2 ps to 2 ps standard deviation over 30 sample chips, due to process parameters’ variation. PMID:28486405
Compensation of PVT Variations in ToF Imagers with In-Pixel TDC.
Vornicu, Ion; Carmona-Galán, Ricardo; Rodríguez-Vázquez, Ángel
2017-05-09
The design of a direct time-of-flight complementary metal-oxide-semiconductor (CMOS) image sensor (dToF-CIS) based on a single-photon avalanche-diode (SPAD) array with an in-pixel time-to-digital converter (TDC) must contemplate system-level aspects that affect its overall performance. This paper provides a detailed analysis of the impact of process parameters, voltage supply, and temperature (PVT) variations on the time bin of the TDC array. Moreover, the design and characterization of a global compensation loop is presented. It is based on a phase locked loop (PLL) that is integrated on-chip. The main building block of the PLL is a voltage-controlled ring-oscillator (VCRO) that is identical to the ones employed for the in-pixel TDCs. The reference voltage that drives the master VCRO is distributed to the voltage control inputs of the slave VCROs such that their multiphase outputs become invariant to PVT changes. These outputs act as time interpolators for the TDCs. Therefore the compensation scheme prevents the time bin of the TDCs from drifting over time due to the aforementioned factors. Moreover, the same scheme is used to program different time resolutions of the direct time-of-flight (ToF) imager aimed at 3D ranging or depth map imaging. Experimental results that validate the analysis are provided as well. The compensation loop proves to be remarkably effective. The spreading of the TDCs time bin is lowered from: (i) 20% down to 2.4% while the temperature ranges from 0 °C to 100 °C; (ii) 27% down to 0.27%, when the voltage supply changes within ±10% of the nominal value; (iii) 5.2 ps to 2 ps standard deviation over 30 sample chips, due to process parameters' variation.
Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.
NASA Astrophysics Data System (ADS)
Zang, A.; Anton, G.; Ballabriga, R.; Bisello, F.; Campbell, M.; Celi, J. C.; Fauler, A.; Fiederle, M.; Jensch, M.; Kochanski, N.; Llopart, X.; Michel, N.; Mollenhauer, U.; Ritter, I.; Tennert, F.; Wölfel, S.; Wong, W.; Michel, T.
2015-04-01
The Dosepix detector is a hybrid photon-counting pixel detector based on ideas of the Medipix and Timepix detector family. 1 mm thick cadmium telluride and 300 μm thick silicon were used as sensor material. The pixel matrix of the Dosepix consists of 16 x 16 square pixels with 12 rows of (200 μm)2 and 4 rows of (55 μm)2 sensitive area for the silicon sensor layer and 16 rows of pixels with 220 μm pixel pitch for CdTe. Besides digital energy integration and photon-counting mode, a novel concept of energy binning is included in the pixel electronics, allowing energy-resolved measurements in 16 energy bins within one acquisition. The possibilities of this detector concept range from applications in personal dosimetry and energy-resolved imaging to quality assurance of medical X-ray sources by analysis of the emitted photon spectrum. In this contribution the Dosepix detector, its response to X-rays as well as spectrum measurements with Si and CdTe sensor layer are presented. Furthermore, a first evaluation was carried out to use the Dosepix detector as a kVp-meter, that means to determine the applied acceleration voltage from measured X-ray tubes spectra.
An embedded face-classification system for infrared images on an FPGA
NASA Astrophysics Data System (ADS)
Soto, Javier E.; Figueroa, Miguel
2014-10-01
We present a face-classification architecture for long-wave infrared (IR) images implemented on a Field Programmable Gate Array (FPGA). The circuit is fast, compact and low power, can recognize faces in real time and be embedded in a larger image-processing and computer vision system operating locally on an IR camera. The algorithm uses Local Binary Patterns (LBP) to perform feature extraction on each IR image. First, each pixel in the image is represented as an LBP pattern that encodes the similarity between the pixel and its neighbors. Uniform LBP codes are then used to reduce the number of patterns to 59 while preserving more than 90% of the information contained in the original LBP representation. Then, the image is divided into 64 non-overlapping regions, and each region is represented as a 59-bin histogram of patterns. Finally, the algorithm concatenates all 64 regions to create a 3,776-bin spatially enhanced histogram. We reduce the dimensionality of this histogram using Linear Discriminant Analysis (LDA), which improves clustering and enables us to store an entire database of 53 subjects on-chip. During classification, the circuit applies LBP and LDA to each incoming IR image in real time, and compares the resulting feature vector to each pattern stored in the local database using the Manhattan distance. We implemented the circuit on a Xilinx Artix-7 XC7A100T FPGA and tested it with the UCHThermalFace database, which consists of 28 81 x 150-pixel images of 53 subjects in indoor and outdoor conditions. The circuit achieves a 98.6% hit ratio, trained with 16 images and tested with 12 images of each subject in the database. Using a 100 MHz clock, the circuit classifies 8,230 images per second, and consumes only 309mW.
NASA Technical Reports Server (NTRS)
Salu, Yehuda; Tilton, James
1993-01-01
The classification of multispectral image data obtained from satellites has become an important tool for generating ground cover maps. This study deals with the application of nonparametric pixel-by-pixel classification methods in the classification of pixels, based on their multispectral data. A new neural network, the Binary Diamond, is introduced, and its performance is compared with a nearest neighbor algorithm and a back-propagation network. The Binary Diamond is a multilayer, feed-forward neural network, which learns from examples in unsupervised, 'one-shot' mode. It recruits its neurons according to the actual training set, as it learns. The comparisons of the algorithms were done by using a realistic data base, consisting of approximately 90,000 Landsat 4 Thematic Mapper pixels. The Binary Diamond and the nearest neighbor performances were close, with some advantages to the Binary Diamond. The performance of the back-propagation network lagged behind. An efficient nearest neighbor algorithm, the binned nearest neighbor, is described. Ways for improving the performances, such as merging categories, and analyzing nonboundary pixels, are addressed and evaluated.
A novel algorithm for fast and efficient multifocus wavefront shaping
NASA Astrophysics Data System (ADS)
Fayyaz, Zahra; Nasiriavanaki, Mohammadreza
2018-02-01
Wavefront shaping using spatial light modulator (SLM) is a popular method for focusing light through a turbid media, such as biological tissues. Usually, in iterative optimization methods, due to the very large number of pixels in SLM, larger pixels are formed, bins, and the phase value of the bins are changed to obtain an optimum phase map, hence a focus. In this study an efficient optimization algorithm is proposed to obtain an arbitrary map of focus utilizing all the SLM pixels or small bin sizes. The application of such methodology in dermatology, hair removal in particular, is explored and discussed
Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on Mars Reconnaissance Orbiter (MRO)
NASA Astrophysics Data System (ADS)
Murchie, S.; Arvidson, R.; Bedini, P.; Beisser, K.; Bibring, J.-P.; Bishop, J.; Boldt, J.; Cavender, P.; Choo, T.; Clancy, R. T.; Darlington, E. H.; Des Marais, D.; Espiritu, R.; Fort, D.; Green, R.; Guinness, E.; Hayes, J.; Hash, C.; Heffernan, K.; Hemmler, J.; Heyler, G.; Humm, D.; Hutcheson, J.; Izenberg, N.; Lee, R.; Lees, J.; Lohr, D.; Malaret, E.; Martin, T.; McGovern, J. A.; McGuire, P.; Morris, R.; Mustard, J.; Pelkey, S.; Rhodes, E.; Robinson, M.; Roush, T.; Schaefer, E.; Seagrave, G.; Seelos, F.; Silverglate, P.; Slavney, S.; Smith, M.; Shyong, W.-J.; Strohbehn, K.; Taylor, H.; Thompson, P.; Tossman, B.; Wirzburger, M.; Wolff, M.
2007-05-01
The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is a hyperspectral imager on the Mars Reconnaissance Orbiter (MRO) spacecraft. CRISM consists of three subassemblies, a gimbaled Optical Sensor Unit (OSU), a Data Processing Unit (DPU), and the Gimbal Motor Electronics (GME). CRISM's objectives are (1) to map the entire surface using a subset of bands to characterize crustal mineralogy, (2) to map the mineralogy of key areas at high spectral and spatial resolution, and (3) to measure spatial and seasonal variations in the atmosphere. These objectives are addressed using three major types of observations. In multispectral mapping mode, with the OSU pointed at planet nadir, data are collected at a subset of 72 wavelengths covering key mineralogic absorptions and binned to pixel footprints of 100 or 200 m/pixel. Nearly the entire planet can be mapped in this fashion. In targeted mode the OSU is scanned to remove most along-track motion, and a region of interest is mapped at full spatial and spectral resolution (15-19 m/pixel, 362-3920 nm at 6.55 nm/channel). Ten additional abbreviated, spatially binned images are taken before and after the main image, providing an emission phase function (EPF) of the site for atmospheric study and correction of surface spectra for atmospheric effects. In atmospheric mode, only the EPF is acquired. Global grids of the resulting lower data volume observations are taken repeatedly throughout the Martian year to measure seasonal variations in atmospheric properties. Raw, calibrated, and map-projected data are delivered to the community with a spectral library to aid in interpretation.
Mars reconnaissance orbiter's high resolution imaging science experiment (HiRISE)
McEwen, A.S.; Eliason, E.M.; Bergstrom, J.W.; Bridges, N.T.; Hansen, C.J.; Delamere, W.A.; Grant, J. A.; Gulick, V.C.; Herkenhoff, K. E.; Keszthelyi, L.; Kirk, R.L.; Mellon, M.T.; Squyres, S. W.; Thomas, N.; Weitz, C.M.
2007-01-01
The HiRISE camera features a 0.5 m diameter primary mirror, 12 m effective focal length, and a focal plane system that can acquire images containing up to 28 Gb (gigabits) of data in as little as 6 seconds. HiRISE will provide detailed images (0.25 to 1.3 m/pixel) covering ???1% of the Martian surface during the 2-year Primary Science Phase (PSP) beginning November 2006. Most images will include color data covering 20% of the potential field of view. A top priority is to acquire ???1000 stereo pairs and apply precision geometric corrections to enable topographic measurements to better than 25 cm vertical precision. We expect to return more than 12 Tb of HiRISE data during the 2-year PSP, and use pixel binning, conversion from 14 to 8 bit values, and a lossless compression system to increase coverage. HiRISE images are acquired via 14 CCD detectors, each with 2 output channels, and with multiple choices for pixel binning and number of Time Delay and Integration lines. HiRISE will support Mars exploration by locating and characterizing past, present, and future landing sites, unsuccessful landing sites, and past and potentially future rover traverses. We will investigate cratering, volcanism, tectonism, hydrology, sedimentary processes, stratigraphy, aeolian processes, mass wasting, landscape evolution, seasonal processes, climate change, spectrophotometry, glacial and periglacial processes, polar geology, and regolith properties. An Internet Web site (HiWeb) will enable anyone in the world to suggest HiRISE targets on Mars and to easily locate, view, and download HiRISE data products. Copyright 2007 by the American Geophysical Union.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Bo; Zhao Wei
2008-05-15
In breast tomosynthesis a rapid sequence of N images is acquired when the x-ray tube sweeps through different angular views with respect to the breast. Since the total dose to the breast is kept the same as that in regular mammography, the exposure used for each image of tomosynthesis is 1/N. The low dose and high frame rate pose a tremendous challenge to the imaging performance of digital mammography detectors. The purpose of the present work is to investigate the detector performance in different operational modes designed for tomosynthesis acquisition, e.g., binning or full resolution readout, the range of viewmore » angles, and the number of views N. A prototype breast tomosynthesis system with a nominal angular range of {+-}25 deg. was used in our investigation. The system was equipped with an amorphous selenium (a-Se) full field digital mammography detector with pixel size of 85 {mu}m. The detector can be read out in full resolution or 2x1 binning (binning in the tube travel direction). The focal spot blur due to continuous tube travel was measured for different acquisition geometries, and it was found that pixel binning, instead of focal spot blur, dominates the detector modulation transfer function (MTF). The noise power spectrum (NPS) and detective quantum efficiency (DQE) of the detector were measured with the exposure range of 0.4-6 mR, which is relevant to the low dose used in tomosynthesis. It was found that DQE at 0.4 mR is only 20% less than that at highest exposure for both detector readout modes. The detector temporal performance was categorized as lag and ghosting, both of which were measured as a function of x-ray exposure. The first frame lags were 8% and 4%, respectively, for binning and full resolution mode. Ghosting is negligible and independent of the frame rate. The results showed that the detector performance is x-ray quantum noise limited at the low exposures used in each view of tomosynthesis, and the temporal performance at high frame rate (up to 2 frames per second) is adequate for tomosynthesis.« less
dada - a web-based 2D detector analysis tool
NASA Astrophysics Data System (ADS)
Osterhoff, Markus
2017-06-01
The data daemon, dada, is a server backend for unified access to 2D pixel detector image data stored with different detectors, file formats and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines from pixel binning over azimuthal integration to raster scan processing. Common user interactions with dada are by a web frontend, but all parameters for an analysis are encoded into a Uniform Resource Identifier (URI) which can also be written by hand or scripts for batch processing.
Plains South of Valles Marineris
2017-03-28
This enhanced-color sample reveals the incredible diversity of landforms on some Martian plains that appear bland and uniform at larger scales. Here we see layers, small channels suggesting water flow, craters, and indurated sand dunes. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.7 centimeters (10.1 inches) per pixel (with 1 x 1 binning); objects on the order of 77 centimeters (30.3 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21573
Dark Materials on Olympus Mons
2018-01-23
This image from NASA's Mars Reconnaissance Orbiter (MRO) shows blocks of layered terrain within the Olympus Mons aureole. The aureole is a giant apron of chaotic material around the volcano, perhaps formed by enormous landslides off the flanks of the giant volcano. These blocks of layered material have been eroded by the wind into the scenic landscape we see here. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 28.3 centimeters (11.1 inches) per pixel (with 1 x 1 binning); objects on the order of 85 centimeters (33.5 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22181
The Effects of Towfish Motion on Sidescan Sonar Images: Extension to a Multiple-Beam Device
1994-02-01
simulation, the raw simulated sidescan image is formed from pixels G , which are the sum of energies E,", assigned to the nearest range- bin k as noted in...for stable motion at constant velocity V0, are applied to (divided into) the G ,, and the simulated sidescan image is ready to display. Maximal energy...limitation is likely to apply to all multiple-beam sonais of similar construction. The yaw correction was incorporated in the MBEAM model by an
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Beier, Hope T.
2016-03-01
Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow making it ineffective to capture rapid events due to the single photon product per laser pulse causing extensive acquisition time limitations and the requirement of low fluorescence emission efficiency to avoid bias of measurement towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and fluorescence lifetime exponential decay estimation. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques to accurately estimate double exponential decays across many data characteristics including various lifetime values, lifetime component weights, signal-to-noise ratios, and number of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins, improves accuracy of fits for low photon count data. Such a technique reduces the required number of photons for accurate component estimation if lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and improve imaging speed 10-fold.
2017-03-21
This is an odd-looking image. It shows gullies during the winter while entirely in the shadow of the crater wall. Illumination comes only from the winter skylight. We acquire such images because gullies on Mars actively form in the winter when there is carbon dioxide frost on the ground, so we image them in the winter, even though not well illuminated, to look for signs of activity. The dark streaks might be signs of current activity, removing the frost, but further analysis is needed. NB: North is down in the cutout, and the terrain slopes towards the bottom of the image. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 62.3 centimeters (24.5 inches) per pixel (with 2 x 2 binning); objects on the order of 187 centimeters (73.6 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21568
El-Mohri, Youcef; Antonuk, Larry E.; Choroszucha, Richard B.; Zhao, Qihua; Jiang, Hao; Liu, Langechuan
2014-01-01
Thick, segmented crystalline scintillators have shown increasing promise as replacement x-ray converters for the phosphor screens currently used in active matrix flat-panel imagers (AMFPIs) in radiotherapy, by virtue of providing over an order of magnitude improvement in the DQE. However, element-to-element misalignment in current segmented scintillator prototypes creates a challenge for optimal registration with underlying AMFPI arrays, resulting in degradation of spatial resolution. To overcome this challenge, a methodology involving the use of a relatively high resolution AMFPI array in combination with novel binning techniques is presented. The array, which has a pixel pitch of 0.127 mm, was coupled to prototype segmented scintillators based on BGO, LYSO and CsI:Tl materials, each having a nominal element-to-element pitch of 1.016 mm and thickness of ~1 cm. The AMFPI systems incorporating these prototypes were characterized at a radiotherapy energy of 6 MV in terms of MTF, NPS, DQE, and reconstructed images of a resolution phantom acquired using a cone-beam CT geometry. For each prototype, the application of 8×8 pixel binning to achieve a sampling pitch of 1.016 mm was optimized through use of an alignment metric which minimized misregistration and thereby improved spatial resolution. In addition, the application of alternative binning techniques that exclude the collection of signal near septal walls resulted in further significant improvement in spatial resolution for the BGO and LYSO prototypes, though not for the CsI:Tl prototype due to the large amount of optical cross-talk resulting from significant light spread between scintillator elements in that device. The efficacy of these techniques for improving spatial resolution appears to be enhanced for scintillator materials that exhibit mechanical hardness, high density and high refractive index, such as BGO. Moreover, materials that exhibit these properties as well as offer significantly higher light output than BGO, such as CdWO4, should provide the additional benefit of preserving DQE performance. PMID:24487347
Bedrock Outcrops in Kaiser Crater
2017-03-13
This enhanced-color image from NASA Mars Reconnaissance Orbiter shows a patch of well-exposed bedrock on the floor of Kaiser Crater. The wind has stripped off the overlying soil, and created grooves and scallops in the bedrock. The narrow linear ridges are fractures that have been indurated, probably by precipitation of cementing minerals from groundwater flow. The rippled dark blue patches consist of sand. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.3 centimeters (9.9 inches) per pixel (with 1 x 1 binning); objects on the order of 76 centimeters (29.9 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21559
A CMOS In-Pixel CTIA High Sensitivity Fluorescence Imager.
Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert
2011-10-01
Traditionally, charge coupled device (CCD) based image sensors have held sway over the field of biomedical imaging. Complementary metal oxide semiconductor (CMOS) based imagers so far lack sensitivity leading to poor low-light imaging. Certain applications including our work on animal-mountable systems for imaging in awake and unrestrained rodents require the high sensitivity and image quality of CCDs and the low power consumption, flexibility and compactness of CMOS imagers. We present a 132×124 high sensitivity imager array with a 20.1 μm pixel pitch fabricated in a standard 0.5 μ CMOS process. The chip incorporates n-well/p-sub photodiodes, capacitive transimpedance amplifier (CTIA) based in-pixel amplification, pixel scanners and delta differencing circuits. The 5-transistor all-nMOS pixel interfaces with peripheral pMOS transistors for column-parallel CTIA. At 70 fps, the array has a minimum detectable signal of 4 nW/cm(2) at a wavelength of 450 nm while consuming 718 μA from a 3.3 V supply. Peak signal to noise ratio (SNR) was 44 dB at an incident intensity of 1 μW/cm(2). Implementing 4×4 binning allowed the frame rate to be increased to 675 fps. Alternately, sensitivity could be increased to detect about 0.8 nW/cm(2) while maintaining 70 fps. The chip was used to image single cell fluorescence at 28 fps with an average SNR of 32 dB. For comparison, a cooled CCD camera imaged the same cell at 20 fps with an average SNR of 33.2 dB under the same illumination while consuming over a watt.
A CMOS In-Pixel CTIA High Sensitivity Fluorescence Imager
Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert
2012-01-01
Traditionally, charge coupled device (CCD) based image sensors have held sway over the field of biomedical imaging. Complementary metal oxide semiconductor (CMOS) based imagers so far lack sensitivity leading to poor low-light imaging. Certain applications including our work on animal-mountable systems for imaging in awake and unrestrained rodents require the high sensitivity and image quality of CCDs and the low power consumption, flexibility and compactness of CMOS imagers. We present a 132×124 high sensitivity imager array with a 20.1 μm pixel pitch fabricated in a standard 0.5 μ CMOS process. The chip incorporates n-well/p-sub photodiodes, capacitive transimpedance amplifier (CTIA) based in-pixel amplification, pixel scanners and delta differencing circuits. The 5-transistor all-nMOS pixel interfaces with peripheral pMOS transistors for column-parallel CTIA. At 70 fps, the array has a minimum detectable signal of 4 nW/cm2 at a wavelength of 450 nm while consuming 718 μA from a 3.3 V supply. Peak signal to noise ratio (SNR) was 44 dB at an incident intensity of 1 μW/cm2. Implementing 4×4 binning allowed the frame rate to be increased to 675 fps. Alternately, sensitivity could be increased to detect about 0.8 nW/cm2 while maintaining 70 fps. The chip was used to image single cell fluorescence at 28 fps with an average SNR of 32 dB. For comparison, a cooled CCD camera imaged the same cell at 20 fps with an average SNR of 33.2 dB under the same illumination while consuming over a watt. PMID:23136624
NASA Astrophysics Data System (ADS)
Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio
2017-05-01
WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR, band to band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
MRO's High Resolution Imaging Science Experiment (HiRISE): Polar Science Expectations
NASA Technical Reports Server (NTRS)
McEwen, A.; Herkenhoff, K.; Hansen, C.; Bridges, N.; Delamere, W. A.; Eliason, E.; Grant, J.; Gulick, V.; Keszthelyi, L.; Kirk, R.
2003-01-01
The Mars Reconnaissance Orbiter (MRO) is expected to launch in August 2005, arrive at Mars in March 2006, and begin the primary science phase in November 2006. MRO will carry a suite of remote-sensing instruments and is designed to routinely point off-nadir to precisely target locations on Mars for high-resolution observations. The mission will have a much higher data return than any previous planetary mission, with 34 Tbits of returned data expected in the first Mars year in the mapping orbit (255 x 320 km). The HiRISE camera features a 0.5 m telescope, 12 m focal length, and 14 CCDs. We expect to acquire approximately 10,000 observations in the primary science phase (approximately 1 Mars year), including approximately 2,000 images for 1,000 stereo targets. Each observation will be accompanied by a approximately 6 m/pixel image over a 30 x 45 km region acquired by MRO s context imager. Many HiRISE images will be full resolution in the center portion of the swath width and binned (typically 4x4) on the sides. This provides two levels of context, so we step out from 0.3 m/pixel to 1.2 m/pixel to 6 m/pixel (at 300 km altitude). We expect to cover approximately 1% of Mars at better than 1.2 m/pixel, approximately 0.1% at 0.3 m/pixel, approximately 0.1% in 3 colors, and approximately 0.05% in stereo. Our major challenge is to find the dey contacts, exposures and type morphologies to observe.
Filtered Rayleigh Scattering Measurements in a Buoyant Flowfield
2007-03-01
common filter used in FRS applications . Iodine is more attractive than mercury to use in a filter due to its broader range of blocking and transmission...is a 4032x2688 pixel camera with a monochrome or colored CCD imaging sensor. The binning range of the camera is (HxV) 1x1 to 2x8. The manufacturer...center position of the jet of the time averaged image . The z center position is chosen so that it is the average z value bounding helium
2017-03-02
This scene is a jumbled mess. There are blocks and smears of many different rocks types that appear to have been dumped into a pile. That's probably about what happened, as ejecta from the Isidis impact basin to the east. This pile of old rocks is an island surrounded by younger lava flows from Syrtis Major. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 27.4 centimeters (10.8 inches) per pixel (with 1 x 1 binning); objects on the order of 82 centimeters (32.2 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21553
High-energy X-ray diffraction using the Pixium 4700 flat-panel detector.
Daniels, J E; Drakopoulos, M
2009-07-01
The Pixium 4700 detector represents a significant step forward in detector technology for high-energy X-ray diffraction. The detector design is based on digital flat-panel technology, combining an amorphous Si panel with a CsI scintillator. The detector has a useful pixel array of 1910 x 2480 pixels with a pixel size of 154 microm x 154 microm, and thus it covers an effective area of 294 mm x 379 mm. Designed for medical imaging, the detector has good efficiency at high X-ray energies. Furthermore, it is capable of acquiring sequences of images at 7.5 frames per second in full image mode, and up to 60 frames per second in binned region of interest modes. Here, the basic properties of this detector applied to high-energy X-ray diffraction are presented. Quantitative comparisons with a widespread high-energy detector, the MAR345 image plate scanner, are shown. Other properties of the Pixium 4700 detector, including a narrow point-spread function and distortion-free image, allows for the acquisition of high-quality diffraction data at high X-ray energies. In addition, high frame rates and shutterless operation open new experimental possibilities. Also provided are the necessary data for the correction of images collected using the Pixium 4700 for diffraction purposes.
A Year at the Moon on Chandrayaan-1: Moon Mineralogy Mapper Data in a Global Perspective
NASA Astrophysics Data System (ADS)
Boardman, J. W.; Pieters, C. M.; Clark, R. N.; Combe, J.; Green, R. O.; Isaacson, P.; Lundeen, S.; Malaret, E.; McCord, T. B.; Nettles, J. W.; Petro, N. E.; Staid, M.; Varanasi, P.
2009-12-01
The Moon Mineralogy Mapper, M3, a high-fidelity high-resolution imaging spectrometer on Chandrayaan-1 has completed two of its four scheduled optical periods during its maiden year in lunar orbit, collecting over 4.6 billion spectra covering most of the lunar surface. These imaging periods (November 2008-February 2009 and April 2009-August 2009) correspond to times of equatorial solar zenith angle less than sixty degrees, relative to the Chandrayaan-1 orbit. The vast majority of the data collected in these first two optical periods are in Global Mode (85 binned spectral bands from 460 to 2976 nanometers with a 2-by-2 binned angular pixel size of 1.4 milliradians). Full-resolution Target Mode data (259 spectral bands and 0.7 milliradian pixels) will be the focus of the remaining two collection periods. Chandrayaan-1 operated initially in a 100-kilometer polar orbit, yielding 70 meter Target pixels and 140 meter Global pixels. The orbit was raised on May 20, 2009, during Optical Period 2, to a nominal 200 kilometer altitude, effectively doubling the pixel spatial sizes. While the high spatial and spectral resolutions of the data allow detailed examination of specific local areas on the Moon, they can also reveal remarkable features when combined, processed and viewed in a global context. Using preliminary calibration and selenolocation, we have explored the spectral and spatial properties of the Moon as a whole as revealed by M3. The data display striking new diversity and information related to surface mineralogy, distribution of volatiles, thermal processes and photometry. Large volumes of complex imaging spectrometry data are, by their nature, simultaneously information-rich and challenging to process. For an initial assessment of the gross information content of the data set we performed a Principal Components analysis on the entire suite of Global Mode imagery. More than a dozen linearly independent spectral dimensions are present, even at the global scale. An animation of a Grand Tour Projection, sweeping a three-dimensional red/green/blue image visualization window through the M3 hyperdimensional spectral space, confirms both spatially and spectrally that the M3 data will revolutionize our understanding of our nearest celestial neighbor.
The resolved star formation history of M51a through successive Bayesian marginalization
NASA Astrophysics Data System (ADS)
Martínez-García, Eric E.; Bruzual, Gustavo; Magris C., Gladis; González-Lópezlira, Rosa A.
2018-02-01
We have obtained the time and space-resolved star formation history (SFH) of M51a (NGC 5194) by fitting Galaxy Evolution Explorer (GALEX), Sloan Digital Sky Survey and near-infrared pixel-by-pixel photometry to a comprehensive library of stellar population synthesis models drawn from the Synthetic Spectral Atlas of Galaxies (SSAG). We fit for each space-resolved element (pixel) an independent model where the SFH is averaged in 137 age bins, each one 100 Myr wide. We used the Bayesian Successive Priors (BSP) algorithm to mitigate the bias in the present-day spatial mass distribution. We test BSP with different prior probability distribution functions (PDFs); this exercise suggests that the best prior PDF is the one concordant with the spatial distribution of the stellar mass as inferred from the near-infrared images. We also demonstrate that varying the implicit prior PDF of the SFH in SSAG does not affect the results. By summing the contributions to the global star formation rate of each pixel, at each age bin, we have assembled the resolved SFH of the whole galaxy. According to these results, the star formation rate of M51a was exponentially increasing for the first 10 Gyr after the big bang, and then turned into an exponentially decreasing function until the present day. Superimposed, we find a main burst of star formation at t ≈ 11.9 Gyr after the big bang.
Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
2018-01-23
Ladon Basin was a large impact structure that was filled in by the deposits from Ladon Valles, a major ancient river on Mars as seen in this image from NASA's Mars Reconnaissance Orbiter (MRO). These wet sediments were altered into minerals such as various clay minerals. Clays imply chemistry that may have been favorable for life on ancient Mars, if anything lived there, so this could be a good spot for future exploration by rovers and perhaps return of samples to Earth. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.1 centimeters (20.5 inches) per pixel (with 2 x 2 binning); objects on the order of 156 centimeters (61.4 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22183
2018-01-23
Ground cemented by ice cover the high latitudes of Mars, much as they do on Earth's cold climates. A common landform that occurs in icy terrain are polygons as shown in this image from NASA's Mars Reconnaissance Orbiter (MRO). Polygonal patterns form by winter cooling and contraction cracking of the frozen ground. Over time these thin cracks develop and coalesce into a honeycomb network, with a few meters spacing between neighboring cracks. Shallow troughs mark the locations of the underground cracks, which are clearly visible form orbit. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 30.2 centimeters (11.9 inches) per pixel (with 1 x 1 binning); objects on the order of 91 centimeters (35.8 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22180
NASA Astrophysics Data System (ADS)
Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi
2001-04-01
We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) X 1200 (V) pixels and a wide imaging area of 28.1 X 13.8 mm, while the type II chip has 1776 X 1626 pixels and an active imaging area of 20.4 X 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 X 200 pixel-images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.
All-digital full waveform recording photon counting flash lidar
NASA Astrophysics Data System (ADS)
Grund, Christian J.; Harwit, Alex
2010-08-01
Current generation analog and photon counting flash lidar approaches suffer from limitation in waveform depth, dynamic range, sensitivity, false alarm rates, optical acceptance angle (f/#), optical and electronic cross talk, and pixel density. To address these issues Ball Aerospace is developing a new approach to flash lidar that employs direct coupling of a photocathode and microchannel plate front end to a high-speed, pipelined, all-digital Read Out Integrated Circuit (ROIC) to achieve photon-counting temporal waveform capture in each pixel on each laser return pulse. A unique characteristic is the absence of performance-limiting analog or mixed signal components. When implemented in 65nm CMOS technology, the Ball Intensified Imaging Photon Counting (I2PC) flash lidar FPA technology can record up to 300 photon arrivals in each pixel with 100 ps resolution on each photon return, with up to 6000 range bins in each pixel. The architecture supports near 100% fill factor and fast optical system designs (f/#<1), and array sizes to 3000×3000 pixels. Compared to existing technologies, >60 dB ultimate dynamic range improvement, and >104 reductions in false alarm rates are anticipated, while achieving single photon range precision better than 1cm. I2PC significantly extends long-range and low-power hard target imaging capabilities useful for autonomous hazard avoidance (ALHAT), navigation, imaging vibrometry, and inspection applications, and enables scannerless 3D imaging for distributed target applications such as range-resolved atmospheric remote sensing, vegetation canopies, and camouflage penetration from terrestrial, airborne, GEO, and LEO platforms. We discuss the I2PC architecture, development status, anticipated performance advantages, and limitations.
Fifty Years of Mars Imaging: from Mariner 4 to HiRISE
2017-11-20
This image from NASA's Mars Reconnaissance Orbiter (MRO) shows Mars' surface in detail. Mars has captured the imagination of astronomers for thousands of years, but it wasn't until the last half a century that we were able to capture images of its surface in detail. This particular site on Mars was first imaged in 1965 by the Mariner 4 spacecraft during the first successful fly-by mission to Mars. From an altitude of around 10,000 kilometers, this image (the ninth frame taken) achieved a resolution of approximately 1.25 kilometers per pixel. Since then, this location has been observed by six other visible cameras producing images with varying resolutions and sizes. This includes HiRISE (highlighted in yellow), which is the highest-resolution and has the smallest "footprint." This compilation, spanning Mariner 4 to HiRISE, shows each image at full-resolution. Beginning with Viking 1 and ending with our HiRISE image, this animation documents the historic imaging of a particular site on another world. In 1976, the Viking 1 orbiter began imaging Mars in unprecedented detail, and by 1980 had successfully mosaicked the planet at approximately 230 meters per pixel. In 1999, the Mars Orbiter Camera onboard the Mars Global Surveyor (1996) also imaged this site with its Wide Angle lens, at around 236 meters per pixel. This was followed by the Thermal Emission Imaging System on Mars Odyssey (2001), which also provided a visible camera producing the image we see here at 17 meters per pixel. Later in 2012, the High-Resolution Stereo Camera on the Mars Express orbiter (2003) captured this image of the surface at 25 meters per pixel. In 2010, the Context Camera on the Mars Reconnaissance Orbiter (2005) imaged this site at about 5 meters per pixel. Finally, in 2017, HiRISE acquired the highest resolution image of this location to date at 50 centimeters per pixel. When seen at this unprecedented scale, we can discern a crater floor strewn with small rocky deposits, boulders several meters across, and wind-blown deposits in the floors of small craters and depressions. This compilation of Mars images spanning over 50 years gives us a visual appreciation of the evolution of orbital Mars imaging over a single site. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.2 centimeters (20.6 inches) per pixel (with 2 x 2 binning); objects on the order of 156 centimeters (61.4 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22115
Sources of Gullies in Hale Crater
2017-04-12
Color from the High Resolution Imaging Science Experiment (HiRISE) instrument onboard NASA's Mars Reconnaissance Orbiter can show mineralogical differences due to the near-infrared filter. The sources of channels on the north rim of Hale Crater show fresh blue, green, purple and light toned exposures under the the overlying reddish dust. The causes and timing of activity in channels and gullies on Mars remains an active area of research. Geologists infer the timing of different events based on what are called "superposition relationships" between different landforms. Areas like this are a puzzle. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.2 centimeters (9.9 inches) per pixel (with 1 x 1 binning); objects on the order of 76 centimeters (29.9 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21586
2017-04-10
In this image from NASA's Mars Reconnaissance Orbiter, a group of steeply inclined light-toned layers is bounded above and below by unconformities (sudden or irregular changes from one deposit to another) that indicate a "break" where erosion of pre-existing layers was taking place at a higher rate than deposition of new materials. The layered deposits in Melas Basin may have been deposited during the growth of a delta complex. This depositional sequence likely represents a period where materials were being deposited on the floor of a lake or running river. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 28.9 centimeters (11.4 inches) per pixel (with 1 x 1 binning); objects on the order of 87 centimeters (34.2 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21580
2017-03-27
The mound in the center of this image appears to have blocked the path of the dunes as they marched south (north is to the left in this image) across the scene. Many of these transverse dunes have slipfaces that face south, although in some cases, it's hard to tell for certain. Smaller dunes run perpendicular to some of the larger-scale dunes, probably indicating a shift in wind directions in this area. Although it might be hard to tell, this group of dunes is very near the central pit of a 35-kilometer-wide impact crater. Data from other instruments indicate the presence of clay-like materials in the rock exposed in the central pit. The map is projected here at a scale of 50 centimeters (9.8 inches) per pixel. [The original image scale is 52 centimeters (20.5 inches) per pixel (with 2 x 2 binning); objects on the order of 156 centimeters (61.4 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21572
NASA Astrophysics Data System (ADS)
Maddox, Jacob; Delgado-Aparicio, Luis; Pablant, Novimir; Rutman, Max; Hill, Ken; Bitter, Manfred; Reinke, Matthew; Rice, John
2016-10-01
Novel energy resolved measurements of x-ray emissions were used to characterize impurity concentrations, electron temperature, and ΔZeff in a variety of Alcator C-Mod plasmas. A PILATUS2 detector programmed in a multi-energy configuration and used in a pinhole camera geometry provides the capability to function similar to a pulse height analyzer (PHA) but with full plasma profile views and sufficient spatial ( 1 cm), energy ( .5 keV), and temporal ( 10 ms) resolution. Each of the PILATUS2's 100k (487x195) pixels can be set to an energy threshold, which sorts x-ray emissions into energy bins by counting only photons with energy above the threshold energy. By setting every 13th pixel row to the same energy bin and the 12 interjacent pixel rows to different energy bins on the PILATUS2 detector gives 38 poloidal sightlines (487 rows/13 energy bins). The number of photons detected in each energy bin depends on (nZ/ne) , Te, and ne2Zeff, so that these plasma parameters can be extracted by fitting the data to an emission model, which includes free-free, free-bound, and bound-bound emissions from a De/H background plasma with perturbing medium and high-Z impurities, like intrinsic Mo, Fe, and Cu or injected W. Also, radial electron temperature profiles were measured during LHRF and ICRF and compared to Thomson scattering and ECE.
Layered Mantling Deposits in the Northern Mid-Latitudes
2017-02-22
Ice-rich mantling deposits accumulate from the atmosphere in the Martian mid-latitudes in cycles during periods of high obliquity (axial tilt), as recently as several million years ago. These deposits accumulate over cycles in layers, and here in the southern mid-latitudes, where the deposits have mostly eroded away due to warmer temperatures, small patches of the remnant layered deposits can still be observed. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 29.5 centimeters (11.6 inches) per pixel (with 1 x 1 binning); objects on the order of 89 centimeters (35 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21462
VizieR Online Data Catalog: Coordinates and photometry of stars in Haffner 16 (Davidge, 2017)
NASA Astrophysics Data System (ADS)
Davidge, T. J.
2017-11-01
The images and spectra that are the basis of this study were recorded with Gemini Multi-Object Spectrograph (GMOS) on Gemini South as part of program GS-2014A-Q-84 (PI: Davidge). GMOS is the facility visible-light imager and spectrograph. The detector was (the CCDs that make up the GMOS detector have since been replaced) a mosaic of three 2048*4068 EEV CCDs. Each 13.5μm square pixel subtended 0.073arcsec on the sky. The three CCDs covered an area that is larger than that illuminated by the sky so that spectra could be dispersed outside of the sky field. The images and spectra were both recorded with 2*2 pixel binning. The g' (FWHM=0.55) and i' (FWHM=0.45) images of Haffner 16 were recorded on the night of 2013 December 31. The GMOS spectra were recorded during five nights in 2014 March (Mar 19, Mar 27, and Mar 30) and April (Apr 2, and Apr 3). The spectra were dispersed with the R400 grating (λblaze=7640Å, 400lines/mm). (1 data file).
VizieR Online Data Catalog: Times of transits and occultations of WASP-12b (Patra+, 2017)
NASA Astrophysics Data System (ADS)
Patra, K. C.; Winn, J. N.; Holman, M. J.; Yu, L.; Deming, D.; Dai, F.
2017-08-01
Between 2016 October and 2017 February, we observed seven transits of WASP-12 with the 1.2m telescope at the Fred Lawrence Whipple Observatory on Mt. Hopkins, Arizona. Images were obtained with the KeplerCam detector through a Sloan r'-band filter. The typical exposure time was 15s, chosen to give a signal-to-noise ratio of about 200 for WASP-12. The field of view of this camera is 23.1' on a side. We used 2*2 binning, giving a pixel scale of 0.68''. We measured two new occultation times based on hitherto unpublished Spitzer observations in 2013 December (program 90186, P.I. Todorov). Two different transits were observed, one at 3.6μm and one at 4.5μm. The data take the form of a time series of 32*32-pixel subarray images, with an exposure time of 2.0s per image. The data were acquired over a wide range of orbital phases, but for our purpose, we analyzed only the ~14000 images within 4hr of each occultation. (1 data file).
Evaluation of intrinsic respiratory signal determination methods for 4D CBCT adapted for mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Rachael; Pan, Tinsu, E-mail: tpan@mdanderson.org; Rubinstein, Ashley
Purpose: 4D CT imaging in mice is important in a variety of areas including studies of lung function and tumor motion. A necessary step in 4D imaging is obtaining a respiratory signal, which can be done through an external system or intrinsically through the projection images. A number of methods have been developed that can successfully determine the respiratory signal from cone-beam projection images of humans, however only a few have been utilized in a preclinical setting and most of these rely on step-and-shoot style imaging. The purpose of this work is to assess and make adaptions of several successfulmore » methods developed for humans for an image-guided preclinical radiation therapy system. Methods: Respiratory signals were determined from the projection images of free-breathing mice scanned on the X-RAD system using four methods: the so-called Amsterdam shroud method, a method based on the phase of the Fourier transform, a pixel intensity method, and a center of mass method. The Amsterdam shroud method was modified so the sharp inspiration peaks associated with anesthetized mouse breathing could be detected. Respiratory signals were used to sort projections into phase bins and 4D images were reconstructed. Error and standard deviation in the assignment of phase bins for the four methods compared to a manual method considered to be ground truth were calculated for a range of region of interest (ROI) sizes. Qualitative comparisons were additionally made between the 4D images obtained using each of the methods and the manual method. Results: 4D images were successfully created for all mice with each of the respiratory signal extraction methods. Only minimal qualitative differences were noted between each of the methods and the manual method. The average error (and standard deviation) in phase bin assignment was 0.24 ± 0.08 (0.49 ± 0.11) phase bins for the Fourier transform method, 0.09 ± 0.03 (0.31 ± 0.08) phase bins for the modified Amsterdam shroud method, 0.09 ± 0.02 (0.33 ± 0.07) phase bins for the intensity method, and 0.37 ± 0.10 (0.57 ± 0.08) phase bins for the center of mass method. Little dependence on ROI size was noted for the modified Amsterdam shroud and intensity methods while the Fourier transform and center of mass methods showed a noticeable dependence on the ROI size. Conclusions: The modified Amsterdam shroud, Fourier transform, and intensity respiratory signal methods are sufficiently accurate to be used for 4D imaging on the X-RAD system and show improvement over the existing center of mass method. The intensity and modified Amsterdam shroud methods are recommended due to their high accuracy and low dependence on ROI size.« less
2007-03-31
Unlimited, Nivisys, Insight technology, Elcan, FLIR Systems, Stanford photonics Hardware Sensor fusion processors Video processing boards Image, video...Engineering The SPIE Digital Library is a resource for optics and photonics information. It contains more than 70,000 full-text papers from SPIE...conditions Top row: Stanford Photonics XR-Mega-10 Extreme 1400 x 1024 pixels ICCD detector, 33 msec exposure, no binning. Middle row: Andor EEV iXon
Identifying Jets Using Artifical Neural Networks
NASA Astrophysics Data System (ADS)
Rosand, Benjamin; Caines, Helen; Checa, Sofia
2017-09-01
We investigate particle jet interactions with the Quark Gluon Plasma (QGP) using artificial neural networks modeled on those used in computer image recognition. We create jet images by binning jet particles into pixels and preprocessing every image. We analyzed the jets with a Multi-layered maxout network and a convolutional network. We demonstrate each network's effectiveness in differentiating simulated quenched jets from unquenched jets, and we investigate the method that the network uses to discriminate among different quenched jet simulations. Finally, we develop a greater understanding of the physics behind quenched jets by investigating what the network learnt as well as its effectiveness in differentiating samples. Yale College Freshman Summer Research Fellowship in the Sciences and Engineering.
NASA Astrophysics Data System (ADS)
Szantai, Andre; Audouard, Joachim; Madeleine, Jean-Baptiste; Forget, Francois; Pottier, Alizée; Millour, Ehouarn; Gondet, Brigitte; Langevin, Yves; Bibring, Jean-Pierre
2016-10-01
The mapping in space and time of water ice clouds can help to explain the Martian water cycle and atmospheric circulation. For this purpose, an ice cloud index (ICI) corresponding to the depth of a water ice absorption band at 3.4 microns is derived from a series of OMEGA images (spectels) covering 5 Martian years. The ICI values for the corresponding pixels are then binned on a high-resolution regular grid (1° longitude x 1° latitude x 5° Ls x 1 h local time) and averaged. Inside each bin, the cloud cover is calculated by dividing the number of pixels considered as cloudy (after comparison to a threshold) to the number of all (valid) pixelsWe compare the maps of clouds obtained around local time 14:00 with collocated TES cloud observations (which were only obtained around this time of the day). A good agreement is found.Averaged ICI compared to the water ice column variable from the Martian Climate Database (MCD) show a correct correlation (~0.5) , which increases when values limited to the tropics only are compared.The number of gridpoints containing ICI values is small ( ~1%), but by taking several neighbor gridpoints and over longer periods, we can observe a cloud life cycle during daytime. An example in the the tropics, around the northern summer solstice, shows a decrease of cloudiness in the morning followed by an increase in the afternoon.
Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera
Majewski, Stanislaw [Morgantown, VA; Umeno, Marc M [Woodinville, WA
2011-09-13
A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees and operating in a range of 140-159 keV and at a rate of up to 500kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is implemented pixel-by-corresponding pixel to increase signal for any cardiac region viewed in two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first pass studies, bloodpool studies including planar, gated and non-gated EKG studies, planar EKG perfusion studies, and planar hot spot imaging.
2017-03-22
Hellas is an ancient impact structure and is the deepest and broadest enclosed basin on Mars. It measures about 2,300 kilometers across and the floor of the basin, Hellas Planitia, contains the lowest elevations on Mars. The Hellas region can often be difficult to view from orbit due to seasonal frost, water-ice clouds and dust storms, yet this region is intriguing because of its diverse, and oftentimes bizarre, landforms. This image from eastern Hellas Planitia shows some of the unusual features on the basin floor. These relatively flat-lying "cells" appear to have concentric layers or bands, similar to a honeycomb. This "honeycomb" terrain exists elsewhere in Hellas, but the geologic process responsible for creating these features remains unresolved. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.2 centimeters (20.6 inches) per pixel (with 2 x 2 binning); objects on the order of 157 centimeters (61.8 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21570
Dunes of the Southern Highlands
2017-03-23
Sand dunes are scattered across Mars and one of the larger populations exists in the Southern hemisphere, just west of the Hellas impact basin. The Hellespontus region features numerous collections of dark, dune formations that collect both within depressions such as craters, and among "extra-crater" plains areas. This image displays the middle portion of a large dune field composed primarily of crescent-shaped "barchan" dunes. Here, the steep, sunlit side of the dune, called a slip face, indicates the down-wind side of the dune and direction of its migration. Other long, narrow linear dunes known as "seif" dunes are also here and in other locales to the east. NB: "Seif" comes from the Arabic word meaning "sword." The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.5 centimeters (10 inches) per pixel (with 1 x 1 binning); objects on the order of 77 centimeters (30.3 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21571
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Sung, Yu Sub; Park, Bum-Woo; Lee, Youngjoo; Park, Seong Hoon; Lee, Young Kyung; Kang, Suk-Ho
2008-03-01
To find optimal binning, variable binning size linear binning (LB) and non-linear binning (NLB) methods were tested. In case of small binning size (Q <= 10), NLB shows significant better accuracy than the LB. K-means NLB (Q = 26) is statistically significant better than every LB. To find optimal binning method and ROI size of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of textural analysis at HRCT Six-hundred circular regions of interest (ROI) with 10, 20, and 30 pixel diameter, comprising of each 100 ROIs representing six regional disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS) were marked by an experienced radiologist from HRCT images. Histogram (mean) and co-occurrence matrix (mean and SD of angular second moment, contrast, correlation, entropy, and inverse difference momentum) features were employed to test binning and ROI effects. To find optimal binning, variable binning size LB (bin size Q: 4~30, 32, 64, 128, 144, 196, 256, 384) and NLB (Q: 4~30) methods (K-means, and Fuzzy C-means clustering) were tested. For automated classification, a SVM classifier was implemented. To assess cross-validation of the system, a five-folding method was used. Each test was repeatedly performed twenty times. Overall accuracies with every combination of variable ROIs, and binning sizes were statistically compared. In case of small binning size (Q <= 10), NLB shows significant better accuracy than the LB. K-means NLB (Q = 26) is statistically significant better than every LB. In case of 30x30 ROI size and most of binning size, the K-means method showed better than other NLB and LB methods. When optimal binning and other parameters were set, overall sensitivity of the classifier was 92.85%. The sensitivity and specificity of the system for each class were as follows: NL, 95%, 97.9%; GGO, 80%, 98.9%; RO 85%, 96.9%; HC, 94.7%, 97%; EMPH, 100%, 100%; and CONS, 100%, 100%, respectively. We determined the optimal binning method and ROI size of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT.
NASA Technical Reports Server (NTRS)
2006-01-01
[figure removed for brevity, see original site] Click on image for larger version This HiRISE image covers the western portion of the primary cavity of Gratteri crater situated in the Memnonia Fossae region. Gratteri crater is one of five definitive large rayed craters on Mars. Gratteri crater has a diameter of approximately 6.9 kilometers. Crater rays are long, linear features formed from the high-velocity ejection of blocks of material that re-impact the surface in linear clusters or chains that appear to emanate from the main or primary cavity. Such craters have been long recognized as the 'brightest' and 'freshest' craters on the Moon. However, Martian rays differ from lunar rays in that they are not 'bright,' but best recognized by their thermal signature (at night) in 100 meter/pixel THEMIS thermal infrared images. The HiRISE image shows that Gratteri crater has well-developed and sharp crater morphologic features with no discernable superimposed impact craters. The HiRISE sub-image shows that this is true for the ejecta and crater floor up to the full resolution of the image. Massive slumped blocks of materials on the crater floor and the 'spur and gully' morphology with the crater wall may suggest that the subsurface in this area may be thick and homogenous. Gratteri crater's ejecta blanket (as seen in THEMIS images) can be described as 'fluidized,' which may be suggestive of the presence of ground-ice that may have helped to 'liquefy' the ejecta as it was deposited near the crater. Gratteri's ejecta can be observed to have flowed in and around obstacles including an older, degraded crater lying immediately to the SW of Gratteri's primary cavity. Image PSP_001367_1620 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on November 10, 2006. The complete image is centered at -17.7 degrees latitude, 199.9 degrees East longitude. The range to the target site was 257.1 km (160.7 miles). At this distance the image scale ranges from 25.7 cm/pixel (with 1 x 1 binning) to 102.9 cm/pixel (with 4 x 4 binning). The image shown here has been map-projected to 25 cm/pixel and north is up. The image was taken at a local Mars time of 3:33 PM and the scene is illuminated from the west with a solar incidence angle of 64 degrees, thus the sun was about 26 degrees above the horizon. At a solar longitude of 133.6 degrees, the season on Mars is Northern Summer. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace and Technology Corp., Boulder, Colo.Learning Compact Binary Face Descriptor for Face Recognition.
Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie
2015-10-01
Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
Klemm, Matthias; Schweitzer, Dietrich; Peters, Sven; Sauer, Lydia; Hammer, Martin; Haueisen, Jens
2015-01-01
Fluorescence lifetime imaging ophthalmoscopy (FLIO) is a new technique for measuring the in vivo autofluorescence intensity decays generated by endogenous fluorophores in the ocular fundus. Here, we present a software package called FLIM eXplorer (FLIMX) for analyzing FLIO data. Specifically, we introduce a new adaptive binning approach as an optimal tradeoff between the spatial resolution and the number of photons required per pixel. We also expand existing decay models (multi-exponential, stretched exponential, spectral global analysis, incomplete decay) to account for the layered structure of the eye and present a method to correct for the influence of the crystalline lens fluorescence on the retina fluorescence. Subsequently, the Holm-Bonferroni method is applied to FLIO measurements to allow for group comparisons between patients and controls on the basis of fluorescence lifetime parameters. The performance of the new approaches was evaluated in five experiments. Specifically, we evaluated static and adaptive binning in a diabetes mellitus patient, we compared the different decay models in a healthy volunteer and performed a group comparison between diabetes patients and controls. An overview of the visualization capabilities and a comparison of static and adaptive binning is shown for a patient with macular hole. FLIMX's applicability to fluorescence lifetime imaging microscopy is shown in the ganglion cell layer of a porcine retina sample, obtained by a laser scanning microscope using two-photon excitation.
New lightcurve of asteroid (216) Kleopatra to evaluate the shape model
NASA Astrophysics Data System (ADS)
Hannan, Melissa A.; Howell, Ellen S.; Woodney, Laura M.; Taylor, Patrick A.
2014-11-01
Asteroid 216 Kleopatra is an M class asteroid in the Main Belt with an unusual shape model that looks like a dog bone. This model was created, from the radar data taken at Arecibo Observatory (Ostro et al. 1999). The discovery of satellites orbiting Kleopatra (Marchis et al. 2008) has led to determination of its mass and density (Descamps et al. 2011). New higher quality data were taken to improve upon the existing shape model. Radar images were obtained in November and December 2013, at Arecibo Observatory with resolution of 10.5 km per pixel. In addition, observations were made with the fully automated 20-inch telescope of the Murillo Family Observatory located on the CSUSB campus. The telescope was equipped with an Apogee U16M CCD camera with a 31 arcmin square field of view and BVR filters. Image data were acquired on 7 and 9 November, 2013 under mostly clear conditions and with 2x2 binning to a pixel scale of 0.9 arcseconds per pixel. These images were taken close in time to the radar observations in order to determine the rotational phase. These data also can be used to look for color changes with rotation. We used the lightcurve and the existing radar shape model to simulate the new radar observations. Although the model matches fairly well overall, it does not reproduce all of the features in the images, indicating that the model can be improved. Results of this analysis will be presented.
First CRISM Observations of Mars
NASA Astrophysics Data System (ADS)
Murchie, S.; Arvidson, R.; Bedini, P.; Beisser, K.; Bibring, J.; Bishop, J.; Brown, A.; Boldt, J.; Cavender, P.; Choo, T.; Clancy, R. T.; Darlington, E. H.; Des Marais, D.; Espiritu, R.; Fort, D.; Green, R.; Guinness, E.; Hayes, J.; Hash, C.; Heffernan, K.; Humm, D.; Hutcheson, J.; Izenberg, N.; Lees, J.; Malaret, E.; Martin, T.; McGovern, J. A.; McGuire, P.; Morris, R.; Mustard, J.; Pelkey, S.; Robinson, M.; Roush, T.; Seelos, F.; Seelos, K.; Slavney, S.; Smith, M.; Shyong, W. J.; Strohbehn, K.; Taylor, H.; Wirzburger, M.; Wolff, M.
2006-12-01
CRISM will make its first observations of Mars from MRO in late September 2006, and regular science observations begin in early November. CRISM is a gimbaled, hyperspectral imager whose objectives are (1) to map the entire surface using a subset of bands to characterize crustal mineralogy, (2) to map the mineralogy of key areas at high spectral and spatial resolution, and (3) to measure spatial and seasonal variations in the atmosphere. These objectives are addressed using three major types of observations. In the multispectral survey, with the gimbal pointed at planet nadir, data are collected at a subset of 72 wavelengths covering key mineralogic absorptions, and binned to pixel footprints of 100 or 200 m per pixel. Nearly the entire planet will be mapped in this fashion. In targeted orservations, the gimbal is scanned to remove most along-track motion, and a region of interest is mapped at full spatial and spectral resolution (15-19 m per pixel, 362-3920 nm at 6.55 nm per channel). Ten additional abbreviated, spatially-binned images are taken before and after the main image, providing an emission phase function (EPF) of the site for atmospheric study and correction of surface spectra for atmospheric effects. In atmospheric mode, only the EPF is acquired. Global grids of the resulting lower data volume observations are taken repeatedly throughout the Martian year to measure seasonal variations in atmospheric properties. Raw, calibrated, and map-projected data are delivered to the community with a spectral library to aid in interpretation. CRISM has undergone calibrations during its cruise to Mars using internal sources, including a closed loop controlled integrating sphere that serves as a radiometric reference. On 26 September a protective lens cover will be deployed. First data from Mars will focus on targeted observations of Phoenix and MER, targeted observations of sulfate- and phyllosilicate-containing sites identified by Mars Express per OMEGA, acquisition of initial EPF grids, and multispectral survey of the northern plains. Our presentation will discuss first results from targeted observations and multispectral mapping. Data processing and first analysis of EPFs will be discussed in companion abstracts.
Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor.
Silverstein, Evan; Snyder, Michael
2018-05-01
To present and evaluate a straightforward implementation of a marker-less, respiratory motion-tracking process utilizing Kinect v2 camera as a gating tool during 4DCT or during radiotherapy treatments. Utilizing the depth sensor on the Kinect as well as author written C# code, respiratory motion of a subject was tracked by recording depth values obtained at user selected points on the subject, with each point representing one pixel on the depth image. As a patient breathes, specific anatomical points on the chest/abdomen will move slightly within the depth image across pixels. By tracking how depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained based on changing depth values of the selected pixel. Tracking these values was implemented via marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare respiratory traces obtained by each using two different subjects. Analysis of the depth information from the Kinect for purposes of phase- and amplitude-based binning correlated well with the RPM and Anzai systems. Interquartile Range (IQR) values were obtained comparing times correlated with specific amplitude and phase percentages against each product. The IQR time spans indicated the Kinect would measure specific percentage values within 0.077 s for Subject 1 and 0.164 s for Subject 2 when compared to values obtained with RPM or Anzai. For 4DCT scans, these times correlate to less than 1 mm of couch movement and would create an offset of 1/2 an acquired slice. By tracking depth values of user selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized utilizing the Kinect with results comparable to that of the Varian RPM and Anzai belt. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Real time automated inspection
Fant, Karl M.; Fundakowski, Richard A.; Levitt, Tod S.; Overland, John E.; Suresh, Bindinganavle R.; Ulrich, Franz W.
1985-01-01
A method and apparatus relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges are segmented out by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections.
Imaging of gaseous oxygen through DFB laser illumination
NASA Astrophysics Data System (ADS)
Cocola, L.; Fedel, M.; Tondello, G.; Poletto, L.
2016-05-01
A Tunable Diode Laser Absorption Spectroscopy setup with Wavelength Modulation has been used together with a synchronous sampling imaging sensor to obtain two-dimensional transmission-mode images of oxygen content. Modulated laser light from a 760nm DFB source has been used to illuminate a scene from the back while image frames were acquired with a high dynamic range camera. Thanks to synchronous timing between the imaging device and laser light modulation, the traditional lock-in approach used in Wavelength Modulation Spectroscopy was replaced by image processing techniques, and many scanning periods were averaged together to allow resolution of small intensity variation over the already weak absorption signals from oxygen absorption band. After proper binning and filtering, the time-domain waveform obtained from each pixel in a set of frames representing the wavelength scan was used as the single detector signal in a traditional TDLAS-WMS setup, and so processed through a software defined digital lock-in demodulation and a second harmonic signal fitting routine. In this way the WMS artifacts of a gas absorption feature were obtained from each pixel together with intensity normalization parameter, allowing a reconstruction of oxygen distribution in a two-dimensional scene regardless from broadband transmitted intensity. As a first demonstration of the effectiveness of this setup, oxygen absorption images of similar containers filled with either oxygen or nitrogen were acquired and processed.
Layered Ice Near the South Pole of Mars
2017-12-12
The two largest ice sheets in the inner solar system are here on Earth, Antarctica and Greenland. The third largest is at the South Pole of Mars and a small part of it is shown in this image from NASA's Mars Reconnaissance Orbiter (MRO). Much like the terrestrial examples, this ice sheet is layered and scientists refer to it as the South Polar layered deposits. The ice layers contain information about past climates on Mars and deciphering this record has been a major goal of Mars science for decades. This slope, near the ice sheet's edge, shows the internal layers that have this climate record. With stereo images, we can tell the heights of these layers so we can measure their thickness and try to unravel the climatic information they contain. (Be sure to view the digital terrain model for this observation.) The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.0 centimeters (9.8 inches) per pixel (with 1 x 1 binning); objects on the order of 75 centimeters (29.5 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22125
A Sneak Peek into Saheki Secret Layers
2017-04-04
This image from NASA's Mars Reconnaissance Orbiter is of Saheki Crater, about 84 kilometers across, and located in the Southern highlands of Mars, to the north of Hellas Planitia. It's filled with beautiful alluvial fans that formed when water (likely melting snow) carried fine material, such as sand, silt and mud, from the interior crater rim down to the bottom of the crater. Two smaller craters impacted into the alluvial fan surface in Saheki, excavating holes that allow us to see what the fans look like beneath the surface. Exposed along the crater's interior walls, we can see that the fan is made up of multiple individual layers (white and purple tones in the enhanced color image) that were deposited on the floor (the green and brown tones). The brown, circular shapes on the fan layers are small impact craters. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 26.2 centimeters (10.3 inches) per pixel (with 1 x 1 binning); objects on the order of 78 centimeters (30.7 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21577
Wrinkle Ridges and Pit Craters
2016-10-19
Tectonic stresses highly modified this area of Ganges Catena, north of Valles Marineris. The long, skinny ridges (called "wrinkle ridges") are evidence of compressional stresses in Mars' crust that created a crack (fault) where one side was pushed on top of the other side, also known as a thrust fault. As shown by cross-cutting relationships, however, extensional stresses have more recently pulled the crust of Mars apart in this region. (HiRISE imaged this area in 2-by-2 binning mode, so a pixel represents a 50 x 50 square centimeter.) http://photojournal.jpl.nasa.gov/catalog/PIA21112
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-01-01
Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by iterative Gauss-Seidel algorithm. Another employs Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively to each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in image domain also by a MRF Gibbs functional and minimizes the PWLS by iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831
2017-12-12
Lyot Crater (220-kilometers in diameter) is located in the Northern lowlands of Mars. The crater's floor marks the lowest elevation in the Northern Hemisphere as seen in this image from NASA's Mars Reconnaissance Orbiter (MRO). On the crater's floor, we see a network of channels. connecting a series of irregular shaped pits. These resemble terrestrial beaded streams, which are common in the Arctic regions of Earth and develop from uneven permafrost thawing. If terrestrial beaded streams are a good analog, these landforms suggest liquid water flow in the past. If not then these pits may result from the process of sublimation and would indicate pockets of easily accessible near-surface ground ice, which might have potentially preserved evidence of past habitability. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 12.2 centimeters (9.8 inches) per pixel (with 1 x 1 binning); objects on the order of 93 centimeters (36.6 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22186
Depressions and Channels on the Floor of Lyot Crater
2017-12-12
Lyot Crater (220-kilometers in diameter) is located in the Northern lowlands of Mars. The crater's floor marks the lowest elevation in the Northern Hemisphere as seen in this image from NASA's Mars Reconnaissance Orbiter (MRO). On the crater's floor, we see a network of channels. connecting a series of irregular shaped pits. These resemble terrestrial beaded streams, which are common in the Arctic regions of Earth and develop from uneven permafrost thawing. If terrestrial beaded streams are a good analog, these landforms suggest liquid water flow in the past. If not then these pits may result from the process of sublimation and would indicate pockets of easily accessible near-surface ground ice, which might have potentially preserved evidence of past habitability. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 12.2 centimeters (9.8 inches) per pixel (with 1 x 1 binning); objects on the order of 93 centimeters (36.6 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22186
NASA Technical Reports Server (NTRS)
2007-01-01
[figure removed for brevity, see original site] Figure 1 The south polar region of Mars is covered seasonally with translucent carbon dioxide ice. In the spring gas subliming (evaporating) from the underside of the seasonal layer of ice bursts through weak spots, carrying dust from below with it, to form numerous dust fans aligned in the direction of the prevailing wind. The dust gets trapped in the shallow grooves on the surface, helping to define the small-scale structure of the surface. The surface texture is reminiscent of lizard skin (figure 1). Observation Geometry Image PSP_003730_0945 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on 14-May-2007. The complete image is centered at -85.2 degrees latitude, 181.5 degrees East longitude. The range to the target site was 248.5 km (155.3 miles). At this distance the image scale is 24.9 cm/pixel (with 1 x 1 binning) so objects 75 cm across are resolved. The image shown here has been map-projected to 25 cm/pixel . The image was taken at a local Mars time of 06:04 PM and the scene is illuminated from the west with a solar incidence angle of 69 degrees, thus the sun was about 21 degrees above the horizon. At a solar longitude of 237.5 degrees, the season on Mars is Northern Autumn.Klemm, Matthias; Schweitzer, Dietrich; Peters, Sven; Sauer, Lydia; Hammer, Martin; Haueisen, Jens
2015-01-01
Fluorescence lifetime imaging ophthalmoscopy (FLIO) is a new technique for measuring the in vivo autofluorescence intensity decays generated by endogenous fluorophores in the ocular fundus. Here, we present a software package called FLIM eXplorer (FLIMX) for analyzing FLIO data. Specifically, we introduce a new adaptive binning approach as an optimal tradeoff between the spatial resolution and the number of photons required per pixel. We also expand existing decay models (multi-exponential, stretched exponential, spectral global analysis, incomplete decay) to account for the layered structure of the eye and present a method to correct for the influence of the crystalline lens fluorescence on the retina fluorescence. Subsequently, the Holm-Bonferroni method is applied to FLIO measurements to allow for group comparisons between patients and controls on the basis of fluorescence lifetime parameters. The performance of the new approaches was evaluated in five experiments. Specifically, we evaluated static and adaptive binning in a diabetes mellitus patient, we compared the different decay models in a healthy volunteer and performed a group comparison between diabetes patients and controls. An overview of the visualization capabilities and a comparison of static and adaptive binning is shown for a patient with macular hole. FLIMX’s applicability to fluorescence lifetime imaging microscopy is shown in the ganglion cell layer of a porcine retina sample, obtained by a laser scanning microscope using two-photon excitation. PMID:26192624
Real time automated inspection
Fant, K.M.; Fundakowski, R.A.; Levitt, T.S.; Overland, J.E.; Suresh, B.R.; Ulrich, F.W.
1985-05-21
A method and apparatus are described relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections. 43 figs.
NASA Technical Reports Server (NTRS)
2006-01-01
This full HiRISE image shows a cliff-face that has been eroded into the ice-rich polar layered deposits at the head of the large canyon, Chasma Boreale. In a similar way to layers in the Earth's ice caps, these Martian layers are thought to record variations in climate, which makes them very interesting to scientists. This particular cliff-face is several hundred meters high and the layers exposed here are the deepest (and so the oldest) in the polar layered deposits. The lower layers exposed in this scarp appear to be rich in dark sand, and erosion of these layers has produced the sand dunes that cover sections of this cliff-face. A close examination of the layers in the center of the image shows they have curved shapes and intersect each other. Scientists call this cross-bedding and it may indicate that these sandy layers were laid down as a large dunefield before being buried. At the bottom of the image, the floor of Chasma Boreale in this area appears to have been swept clean of sandy material. There is a complex history of erosion and deposition of material at this location. On the right of the image one can see a smooth material that covers the lower layers and which must have been deposited after the main cliff face was initially eroded. Closer to the center of the image, this smooth mantling material is in turn being eroded away to once again expose the layers beneath it. Image PSP_001334_2645 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on November 8, 2006. The complete image is centered at 84.4 degrees latitude, 343.5 degrees East longitude. The range to the target site was 317.4 km (198.4 miles). At this distance the image scale ranges from 31.8 cm/pixel (with 1 x 1 binning) to 63.5 cm/pixel (with 2 x 2 binning). The image shown here has been map-projected to 25 cm/pixel. The image was taken at a local Mars time of 1:38 PM and the scene is illuminated from the west with a solar incidence angle of 67 degrees, thus the sun was about 23 degrees above the horizon. At a solar longitude of 132.3 degrees, the season on Mars is Northern Summer. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace and Technology Corp., Boulder, Colo.Data-driven optimal binning for respiratory motion management in PET.
Kesner, Adam L; Meier, Joseph G; Burckhardt, Darrell D; Schwartz, Jazmin; Lynch, David A
2018-01-01
Respiratory gating has been used in PET imaging to reduce the amount of image blurring caused by patient motion. Optimal binning is an approach for using the motion-characterized data by binning it into a single, easy to understand/use, optimal bin. To date, optimal binning protocols have utilized externally driven motion characterization strategies that have been tuned with population-derived assumptions and parameters. In this work, we are proposing a new strategy with which to characterize motion directly from a patient's gated scan, and use that signal to create a patient/instance-specific optimal bin image. Two hundred and nineteen phase-gated FDG PET scans, acquired using data-driven gating as described previously, were used as the input for this study. For each scan, a phase-amplitude motion characterization was generated and normalized using principle component analysis. A patient-specific "optimal bin" window was derived using this characterization, via methods that mirror traditional optimal window binning strategies. The resulting optimal bin images were validated by correlating quantitative and qualitative measurements in the population of PET scans. In 53% (n = 115) of the image population, the optimal bin was determined to include 100% of the image statistics. In the remaining images, the optimal binning windows averaged 60% of the statistics and ranged between 20% and 90%. Tuning the algorithm, through a single acceptance window parameter, allowed for adjustments of the algorithm's performance in the population toward conservation of motion or reduced noise-enabling users to incorporate their definition of optimal. In the population of images that were deemed appropriate for segregation, average lesion SUV max were 7.9, 8.5, and 9.0 for nongated images, optimal bin, and gated images, respectively. The Pearson correlation of FWHM measurements between optimal bin images and gated images were better than with nongated images, 0.89 and 0.85, respectively. Generally, optimal bin images had better resolution than the nongated images and better noise characteristics than the gated images. We extended the concept of optimal binning to a data-driven form, updating a traditionally one-size-fits-all approach to a conformal one that supports adaptive imaging. This automated strategy was implemented easily within a large population and encapsulated motion information in an easy to use 3D image. Its simplicity and practicality may make this, or similar approaches ideal for use in clinical settings. © 2017 American Association of Physicists in Medicine.
Shu, Jie; Dolman, G E; Duan, Jiang; Qiu, Guoping; Ilyas, Mohammad
2016-04-27
Colour is the most important feature used in quantitative immunohistochemistry (IHC) image analysis; IHC is used to provide information relating to aetiology and to confirm malignancy. Statistical modelling is a technique widely used for colour detection in computer vision. We have developed a statistical model of colour detection applicable to detection of stain colour in digital IHC images. Model was first trained by massive colour pixels collected semi-automatically. To speed up the training and detection processes, we removed luminance channel, Y channel of YCbCr colour space and chose 128 histogram bins which is the optimal number. A maximum likelihood classifier is used to classify pixels in digital slides into positively or negatively stained pixels automatically. The model-based tool was developed within ImageJ to quantify targets identified using IHC and histochemistry. The purpose of evaluation was to compare the computer model with human evaluation. Several large datasets were prepared and obtained from human oesophageal cancer, colon cancer and liver cirrhosis with different colour stains. Experimental results have demonstrated the model-based tool achieves more accurate results than colour deconvolution and CMYK model in the detection of brown colour, and is comparable to colour deconvolution in the detection of pink colour. We have also demostrated the proposed model has little inter-dataset variations. A robust and effective statistical model is introduced in this paper. The model-based interactive tool in ImageJ, which can create a visual representation of the statistical model and detect a specified colour automatically, is easy to use and available freely at http://rsb.info.nih.gov/ij/plugins/ihc-toolbox/index.html . Testing to the tool by different users showed only minor inter-observer variations in results.
Sun, Shaojie; Hu, Chuanmin; Feng, Lian; Swayze, Gregg A.; Holmes, Jamie; Graettinger, George; MacDonald, Ian R.; Garcia, Oscar; Leifer, Ira
2016-01-01
Using fine spatial resolution (~ 7.6 m) hyperspectral AVIRIS data collected over the Deepwater Horizon oil spill in the Gulf of Mexico, we statistically estimated slick lengths, widths and length/width ratios to characterize oil slick morphology for different thickness classes. For all AVIRIS-detected oil slicks (N = 52,100 continuous features) binned into four thickness classes (≤ 50 μm but thicker than sheen, 50–200 μm, 200–1000 μm, and > 1000 μm), the median lengths, widths, and length/width ratios of these classes ranged between 22 and 38 m, 7–11 m, and 2.5–3.3, respectively. The AVIRIS data were further aggregated to 30-m (Landsat resolution) and 300-m (MERIS resolution) spatial bins to determine the fractional oil coverage in each bin. Overall, if 50% fractional pixel coverage were to be required to detect oil with thickness greater than sheen for most oil containing pixels, a 30-m resolution sensor would be needed.
Graphical user interface for a dual-module EMCCD x-ray detector array
NASA Astrophysics Data System (ADS)
Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K.; Bednarek, Daniel R.; Rudin, Stephen
2011-03-01
A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000x to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2kx1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.
Honeycomb-Textured Landforms in Northwestern Hellas Planitia
2017-11-28
This image from NASA's Mars Reconnaissance Orbiter (MRO) targets a portion of a group of honeycomb-textured landforms in northwestern Hellas Planitia, which is part of one of the largest and most ancient impact basins on Mars. In a larger Context Camera image, the individual "cells" are about 5 to 10 kilometers wide. With HiRISE, we see much greater detail of these cells, like sand ripples that indicate wind erosion has played some role here. We also see distinctive exposures of bedrock that cut across the floor and wall of the cells. These resemble dykes, which are usually formed by volcanic activity. Additionally, the lack of impact craters suggests that the landscape, along with these features, have been recently reshaped by a process, or number of processes that may even be active today. Scientists have been debating how these honeycombed features are created, theorizing from glacial events, lake formation, volcanic activity, and tectonic activity, to wind erosion. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 53.8 centimeters (21.2 inches) per pixel (with 2 x 2 binning); objects on the order of 161 centimeters (23.5 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22118
A New Impact Site in the Southern Middle Latitudes
2017-04-05
Over 500 new impact events have been detected from before-and-after images from NASA's Mars Reconnaissance Orbiter, mostly from MRO's Context Camera, with a HiRISE followup. Those new craters that expose shallow ice are of special interest, especially at latitudes where not previously detected, to better map the ice distribution. We hope to find ice at relatively low latitudes both for understanding recent climate change and as a resource for possible future humans on Mars. This new impact, which occurred between August and December 2016 (at 42.5 degree South latitude) would provide an important constraint if ice was detected. Alas, the HiRISE color image does not indicate that ice is exposed. There is an elongated cluster of new craters (or just dark spots where the craters are too small to resolve), due to an oblique impact in which the bolide fragmented in the Martian atmosphere. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.1 centimeters (9.9 inches) per pixel (with 1 x 1 binning); objects on the order of 75 centimeters (29.5 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21578
Optimizing 4DCBCT projection allocation to respiratory bins.
O'Brien, Ricky T; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J
2014-10-07
4D cone beam computed tomography (4DCBCT) is an emerging image guidance strategy used in radiotherapy where projections acquired during a scan are sorted into respiratory bins based on the respiratory phase or displacement. 4DCBCT reduces the motion blur caused by respiratory motion but increases streaking artefacts due to projection under-sampling as a result of the irregular nature of patient breathing and the binning algorithms used. For displacement binning the streak artefacts are so severe that displacement binning is rarely used clinically. The purpose of this study is to investigate if sharing projections between respiratory bins and adjusting the location of respiratory bins in an optimal manner can reduce or eliminate streak artefacts in 4DCBCT images. We introduce a mathematical optimization framework and a heuristic solution method, which we will call the optimized projection allocation algorithm, to determine where to position the respiratory bins and which projections to source from neighbouring respiratory bins. Five 4DCBCT datasets from three patients were used to reconstruct 4DCBCT images. Projections were sorted into respiratory bins using equispaced, equal density and optimized projection allocation. The standard deviation of the angular separation between projections was used to assess streaking and the consistency of the segmented volume of a fiducial gold marker was used to assess motion blur. The standard deviation of the angular separation between projections using displacement binning and optimized projection allocation was 30%-50% smaller than conventional phase based binning and 59%-76% smaller than conventional displacement binning indicating more uniformly spaced projections and fewer streaking artefacts. The standard deviation in the marker volume was 20%-90% smaller when using optimized projection allocation than using conventional phase based binning suggesting more uniform marker segmentation and less motion blur. Images reconstructed using displacement binning and the optimized projection allocation algorithm were clearer, contained visibly fewer streak artefacts and produced more consistent marker segmentation than those reconstructed with either equispaced or equal-density binning. The optimized projection allocation algorithm significantly improves image quality in 4DCBCT images and provides, for the first time, a method to consistently generate high quality displacement binned 4DCBCT images in clinical applications.
SU-E-I-09: The Impact of X-Ray Scattering On Image Noise for Dedicated Breast CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, K; Gazi, P; Boone, J
2015-06-15
Purpose: To quantify the impact of detected x-ray scatter on image noise in flat panel based dedicated breast CT systems and to determine the optimal scanning geometry given practical trade-offs between radiation dose and scatter reduction. Methods: Four different uniform polyethylene cylinders (104, 131, 156, and 184 mm in diameter) were scanned as the phantoms on a dedicated breast CT scanner developed in our laboratory. Both stationary projection imaging and rotational cone-beam CT imaging was performed. For each acquisition type, three different x-ray beam collimations were used (12, 24, and 109 mm measured at isocenter). The aim was to quantifymore » image noise properties (pixel variance, SNR, and image NPS) under different levels of x-ray scatter, in order to optimize the scanning geometry. For both projection images and reconstructed CT images, individual pixel variance and NPS were determined and compared. Noise measurement from the CT images were also performed with different detector binning modes and reconstruction matrix sizes. Noise propagation was also tracked throughout the intermediate steps of cone-beam CT reconstruction, including the inverse-logarithmic process, Fourier-filtering before backprojection. Results: Image noise was lower in the presence of higher scatter levels. For the 184 mm polyethylene phantom, the image noise (measured in pixel variance) was ∼30% lower with full cone-beam acquisition compared to a narrow (12 mm) fan-beam acquisition. This trend is consistent across all phantom sizes and throughout all steps of CT image reconstruction. Conclusion: From purely a noise perspective, the cone-beam geometry (i.e. the full cone-angle acquisition) produces lower image noise compared to the lower-scatter fan-beam acquisition for breast CT. While these results are relevant in homogeneous phantoms, the full impact of scatter on noise in bCT should involve contrast-to-noise-ratio measurements in heterogeneous phantoms if the goal is to optimize the scanning geometry for dedicated breast CT. This work was supported by a grant from the National Institute for Biomedical Imaging and Bioengineering (R01 EB002138)« less
Atmospheric Science using CRISM EPF Sequences
NASA Astrophysics Data System (ADS)
Wolff, M. J.; Clancy, R. T.; Arvidson, R.; Smith, M. D.; Murchie, S. L.; McGuire, P. C.
2006-12-01
Near the end of September 2006, the MRO/CRISM (Compact Reconnaissance Imaging Spectrometer for Mars; Murchie et al., 2006, JGR, in press.) will acquire its first observations of Mars. MRO's Primary Science Phase beginning in early November. One of CRISM's investigations is characterization of seasonal variations in dust and ice aerosols and trace gases using a systematic, global grid of hyperspectral measurements of emission phase functions (EPFs) acquired repetitively throughout the Martian year. EPFs will also be obtained as part of each of approximately 5000 "targeted" observations of surface geologic features. EPF measurements allow accurate determination of column abundances of water vapor, CO, dust and ice aerosols, and their seasonal variations (e.g., Clancy et al., 2003, 108(E9), 5098). EPFs are measured using eleven superimposed images within which the slit field-of-view is swept across a target point on the Martian surface. When EPFs are taken as part of a global grid, 10x spatial pixel binning will be used in all of the images, providing data at 150-200 m/pixel. In the targeted observations, the central image will be obtained at either full resolution or with 2x binning (15-38 m/pixel). In all cases, hyperspectral data (545 wavelengths) will be taken during each of the 11 superimposed scans. There are two types of global EPF grids, one with better temporal sampling and one with better spatial sampling of the atmosphere. The "atmospheric monitoring campaign" consists one Martian day of pole-to-pole EPF's every ~9°\\ of solar longitude (Ls). There is sufficient time for 8 EPFs in an orbit, one approximately every 22°\\ of latitude. Alternate orbits (projected onto the planet) are offset in latitude by about 11°\\ north or south to increase latitudinal resolution. Longitude spacing between the orbits is about 27°. The "seasonal change campaign" occurs approximately every ~36°\\ of Ls. A grid similar to that executed during the atmospheric monitoring campaign is taken on 3 non-contiguous days over about 2 weeks, to provide a higher spatial density grid (longitude spacing about 10°) to monitor seasonal changes in surface material spectral properties, especially absorption and desorption of H2O. Every 3 orbits projected on the planet, the EPFs are offset by 0°, +8°, and -8°\\ north or south to increase latitudinal resolution. Our presentation will discuss several aspects of the atmospheric analyses (optical depths, radiative properties, radiative transfer methodology) to be performed using the early-mission EPFs, with the primary focus being those EPFs planned for the end of September.
An Inverted Crater West of Mawrth Vallis
2017-11-28
This image from NASA's Mars Reconnaissance Orbiter (MRO) captures details of an approximately 1-kilometer inverted crater west of Mawrth Vallis. A Context Camera image provides context for the erosional features observed at this site. The location of this HiRISE image is north of the proposed landing ellipse for the ExoMars 2020 rover mission that will investigate diverse rocks and minerals related to ancient water-related activity in this region. Prolonged erosion removed less resistant rocks leaving behind other rocks that stand up locally such as the crater seen here and other nearby remnants. These resistant layers may belong to a phase of volcanism and/or water-related activity that carved Mawrth Vallis and filled in existing craters, and other lower-lying depressions, with darker materials. Erosion has also exposed these layers down to older, more resistant lighter rocks that are clay-bearing. The diversity of exposed bedrock made this location an ideal candidate for exploring a potentially water-rich ancient environment that might have once harbored life. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 28.7 centimeters (11.3 inches) per pixel (with 1 x 1 binning); objects on the order of 86 centimeters (33.9 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22117
2017-02-10
The broader scene for this image is the fluidized ejecta from Bakhuysen Crater to the southwest, but there's something very interesting going on here on a much smaller scale. A small impact crater, about 25 meters in diameter, with a gouged-out trench extends to the south. The ejecta (rocky material ejected from the crater) mostly extends to the east and west of the crater. This "butterfly" ejecta is very common for craters formed at low impact angles. Taken together, these observations suggest that the crater-forming impactor came in at a low angle from the north, hit the ground and ejected material to the sides. The top of the impactor may have sheared off ("decapitating" the impactor) and continued downrange, forming the trench. We can't prove that's what happened, but this explanation is consistent with the observations. Regardless of how it formed, it's quite an interesting-looking "dragonfly" crater. The map is projected here at a scale of 50 centimeters (19.69 inches) per pixel. [The original image scale is 55.7 centimeters (21.92 inches) per pixel (with 2 x 2 binning); objects on the order of 167 centimeters (65.7 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21454
Improvement of spatial resolution in a Timepix based CdTe photon counting detector using ToT method
NASA Astrophysics Data System (ADS)
Park, Kyeongjin; Lee, Daehee; Lim, Kyung Taek; Kim, Giyoon; Chang, Hojong; Yi, Yun; Cho, Gyuseong
2018-05-01
Photon counting detectors (PCDs) have been recognized as potential candidates in X-ray radiography and computed tomography due to their many advantages over conventional energy-integrating detectors. In particular, a PCD-based X-ray system shows an improved contrast-to-noise ratio, reduced radiation exposure dose, and more importantly, exhibits a capability for material decomposition with energy binning. For some applications, a very high resolution is required, which translates into smaller pixel size. Unfortunately, small pixels may suffer from energy spectral distortions (distortion in energy resolution) due to charge sharing effects (CSEs). In this work, we propose a method for correcting CSEs by measuring the point of interaction of an incident X-ray photon by the time-of-threshold (ToT) method. Moreover, we also show that it is possible to obtain an X-ray image with a reduced pixel size by using the concept of virtual pixels at a given pixel size. To verify the proposed method, modulation transfer function (MTF) and signal-to-noise ratio (SNR) measurements were carried out with the Timepix chip combined with the CdTe pixel sensor. The X-ray test condition was set at 80 kVp with 5 μA, and a tungsten edge phantom and a lead line phantom were used for the measurements. Enhanced spatial resolution was achieved by applying the proposed method when compared to that of the conventional photon counting method. From experiment results, MTF increased from 6.3 (conventional counting method) to 8.3 lp/mm (proposed method) at 0.3 MTF. On the other hand, the SNR decreased from 33.08 to 26.85 dB due to four virtual pixels.
A high-speed pnCCD detector system for optical applications
NASA Astrophysics Data System (ADS)
Hartmann, R.; Buttler, W.; Gorke, H.; Herrmann, S.; Holl, P.; Meidinger, N.; Soltau, H.; Strüder, L.
2006-11-01
Measurements of a frame-store pnCCD detector system, optimized for high-speed applications in the optical and near infrared (NIR) region, will be presented. The device with an image area of 13.5 mm by 13.5 mm and a pixelsize of 51 μm by 51 μm exhibits a readout time faster than 1100 frames per second with an overall electronic noise contribution of less than three electrons. Variable operation modes of the detector system allow for even higher readout speeds by a pixel binning in transfer direction or, at slightly slower readout speeds, a further improvement in noise performance. We will also present the concept of a data acquisition system being able to handle pixel rates of more than 75 megapixel per second. The application of an anti-reflective coating on the ultra-thin entrance window of the back illuminated detector together with the large sensitive volume ensures a high and uniform detection efficiency from the ultra violet to the NIR.
2017-03-08
The material on the floor of this crater appears to have flowed like ice, and contains pits that might result from sublimation of subsurface ice. The surface is entirely dust-covered today. There probably was ice here sometime in the past, but could it persist at some depth? This crater is at latitude 26 degrees north, and near-surface ice at this latitude (rather than further toward one of the poles) could be a valuable resource for future human exploration. A future orbiter with a special kind of radar instrument could answer the question of whether or not there is shallow ice at low latitudes on Mars. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 57.5 centimeters (22.6 inches) per pixel (with 2 x 2 binning); objects on the order of 172 centimeters (67.7 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21556
DOE Office of Scientific and Technical Information (OSTI.GOV)
Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my; Hannan, M.A., E-mail: hannan@eng.ukm.my; Basri, Hassan
Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • Gabor wavelet filter is used to extract the solid waste image features. • Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensormore » intelligent bins. Image processing solutions for the Solid Waste (SW) collection have also been developed; however, during capturing the bin image, it is challenging to position the camera for getting a bin area centralized image. As yet, there is no ideal system which can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area and Gabor wavelet (GW) was introduced for feature extraction of the waste bin image. Image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curves was used to statistically evaluate classifier performance. The results of this developed system are comparable to previous image processing based system. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.« less
Spectral CT Reconstruction with Image Sparsity and Spectral Mean
Zhang, Yi; Xi, Yan; Yang, Qingsong; Cong, Wenxiang; Zhou, Jiliu
2017-01-01
Photon-counting detectors can acquire x-ray intensity data in different energy bins. The signal to noise ratio of resultant raw data in each energy bin is generally low due to the narrow bin width and quantum noise. To address this problem, here we propose an image reconstruction approach for spectral CT to simultaneously reconstructs x-ray attenuation coefficients in all the energy bins. Because the measured spectral data are highly correlated among the x-ray energy bins, the intra-image sparsity and inter-image similarity are important prior acknowledge for image reconstruction. Inspired by this observation, the total variation (TV) and spectral mean (SM) measures are combined to improve the quality of reconstructed images. For this purpose, a linear mapping function is used to minimalize image differences between energy bins. The split Bregman technique is applied to perform image reconstruction. Our numerical and experimental results show that the proposed algorithms outperform competing iterative algorithms in this context. PMID:29034267
A Kp-based model of auroral boundaries
NASA Astrophysics Data System (ADS)
Carbary, James F.
2005-10-01
The auroral oval can serve as both a representation and a prediction of space weather on a global scale, so a competent model of the oval as a function of a geomagnetic index could conveniently appraise space weather itself. A simple model of the auroral boundaries is constructed by binning several months of images from the Polar Ultraviolet Imager by Kp index. The pixel intensities are first averaged into magnetic latitude-magnetic local time (MLT-MLAT) and local time bins, and intensity profiles are then derived for each Kp level at 1 hour intervals of MLT. After background correction, the boundary latitudes of each profile are determined at a threshold of 4 photons cm-2 s1. The peak locations and peak intensities are also found. The boundary and peak locations vary linearly with Kp index, and the coefficients of the linear fits are tabulated for each MLT. As a general rule of thumb, the UV intensity peak shifts 1° in magnetic latitude for each increment in Kp. The fits are surprisingly good for Kp < 6 but begin to deteriorate at high Kp because of auroral boundary irregularities and poor statistics. The statistical model allows calculation of the auroral boundaries at most MLTs as a function of Kp and can serve as an approximation to the shape and extent of the statistical oval.
Kishimoto, S; Mitsui, T; Haruki, R; Yoda, Y; Taniguchi, T; Shimazaki, S; Ikeno, M; Saito, M; Tanaka, M
2014-11-01
We developed a silicon avalanche photodiode (Si-APD) linear-array detector for use in nuclear resonant scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm(2)) with a pixel pitch of 150 μm and depletion depth of 10 μm. An ultrafast frontend circuit allows the X-ray detector to obtain a high output rate of >10(7) cps per pixel. High-performance integrated circuits achieve multichannel scaling over 1024 continuous time bins with a 1 ns resolution for each pixel without dead time. The multichannel scaling method enabled us to record a time spectrum of the 14.4 keV nuclear radiation at each pixel with a time resolution of 1.4 ns (FWHM). This method was successfully applied to nuclear forward scattering and nuclear small-angle scattering on (57)Fe.
Fretted Terrain Valley in Coloe Fossae Region
NASA Technical Reports Server (NTRS)
2006-01-01
[figure removed for brevity, see original site] Figure 1 Click on image for larger version The image in figure 1 shows lineated valley fill in one of a series of enclosed, intersecting troughs known as Coloe (Choloe) Fossae. Lineated valley fill consists of rows of material in valley centers that are parallel to the valley walls. It is probably made of ice-rich material and boulders that are left behind when the ice-rich material sublimates. Very distinct rows can be seen near the south (bottom) wall of the valley. Lineated valley fill is thought to result from mass wasting (downslope movement) of ice-rich material from valley walls towards their centers. It is commonly found in valleys near the crustal dichotomy that separates the two hemispheres of Mars. The valley shown here joins four other valleys with lineated fill near the top left corner of this image. Their juncture is a topographic low, suggesting that the lineated valley fill from the different valleys may be flowing or creeping towards the low area (movement towards the upper left of the image). The valley walls appear smooth at first glance but are seen to be speckled with small craters several meters in diameter at HiRISE resolution (see contrast-enhanced subimage). This indicates that at least some of the wall material has been stable to mass wasting for some period of time. Also seen on the valley wall are elongated features shaped like teardrops. These are most likely slightly older craters that have been degraded due to potentially recent downhill creep. It is unknown whether the valley walls are shedding material today. The subimage is approximately 140 x 400 m (450 x 1280 ft). Image PSP_001372_2160 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on November 11, 2006. The complete image is centered at 35.5 degrees latitude, 56.8 degrees East longitude. The range to the target site was 290.3 km (181.4 miles). At this distance the image scale ranges from 58.1 cm/pixel (with 2 x 2 binning) to 116.2 cm/pixel (with 4 x 4 binning). This image has been map-projected to 50 cm/pixel and north is up. The image was taken at a local Mars time of 3:23 PM and the scene is illuminated from the west with a solar incidence angle of 48 degrees, thus the sun was about 42 degrees above the horizon. At a solar longitude of 133.8 degrees, the season on Mars is Northern Summer. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace and Technology Corp., Boulder, Colo.VizieR Online Data Catalog: BVRI light curves of GR Boo (Wang+, 2017)
NASA Astrophysics Data System (ADS)
Wang, D.; Zhang, L.; Han, X. L.; Lu, H.
2017-11-01
We observed the eclipsing binary GR Boo on May 12, 22 and 24 in 2015 using the SARA 90-cm telescope located at Kitt Peak National Observatory, Arizona, USA. This telescope was equipped with an ARC CCD camera with a resolution of 2048x2048pixels but used at 2x2 binning, resulting in 1024x1024pixels. We used the Bessel BVRI filters. (1 data file).
Graphical User Interface for a Dual-Module EMCCD X-ray Detector Array.
Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen
2011-03-16
A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000× to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2k×1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.
The Days Dwindle Down to a Precious Few
2015-04-27
This image is located just inside the southern rim of Chong Chol crater and was obtained on April 25, 2015, the day following NASA MESSENGER final orbital correction maneuver. The spacecraft fuel tanks are now completely empty, and there is no means to prevent the Sun's gravity from pulling MESSENGER's orbit closer and closer to the surface of Mercury. Impact is expected to occur on April 30, 2015. The image is located just inside the southern rim of Chong Chol crater, named for a Korean poet of the 1500s. It is challenging to obtain good images when the spacecraft is very low above the planet, because of the high speed at which the camera's field of view is moving across the surface. Very short exposure times are used to limit smear, and this image was binned from its original size of 1024 x 1024 pixels to 512 x 512 to improve the image quality. The title of today's image is a line from "September Song" (composed by Kurt Weill, with lyrics by Maxwell Anderson. The song was subsequently covered by artists including Ian McCulloch of Echo & the Bunnymen, Lou Reed, and Bryan Ferry). Date acquired: April 25, 2015 Image Mission Elapsed Time (MET): 72264694 Image ID: 8392292 Instrument: Narrow Angle Camera (NAC) of the Mercury Dual Imaging System (MDIS) Center Latitude: 45.43° N Center Longitude: 298.62° E Resolution: 2.1 meters/pixel Scale: The scene is about 2.1 km (1.3 miles) across. This image has not been map projected. Incidence Angle: 69.9° Emission Angle: 20.1° Phase Angle: 90.0° http://photojournal.jpl.nasa.gov/catalog/PIA19436
NASA Astrophysics Data System (ADS)
Fu, Y.; Brezina, C.; Desch, K.; Poikela, T.; Llopart, X.; Campbell, M.; Massimiliano, D.; Gromov, V.; Kluit, R.; van Beauzekom, M.; Zappon, F.; Zivkovic, V.
2014-01-01
Timepix3 is a newly developed pixel readout chip which is expected to be operated in a wide range of gaseous and silicon detectors. It is made of 256 × 256 pixels organized in a square pixel-array with 55 μm pitch. Oscillators running at 640 MHz are distributed across the pixel-array and allow for a highly accurate measurement of the arrival time of a hit. This paper concentrates on a low-jitter phase locked loop (PLL) that is located in the chip periphery. This PLL provides a control voltage which regulates the actual frequency of the individual oscillators, allowing for compensation of process, voltage, and temperature variations.
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors-Laplacian prior for DCT coefficients, sparsity prior, and graph-signal smoothness prior for image patches-to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with the previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal outperforms the state-of-the-art soft decoding algorithms in both objective and subjective evaluations noticeably.
NASA Technical Reports Server (NTRS)
2007-01-01
[figure removed for brevity, see original site] Figure 1 Every year seasonal carbon dioxide ice, known to us as 'dry ice,' covers the poles of Mars. In the south polar region this ice is translucent, allowing sunlight to pass through and warm the surface below. The ice then sublimes (evaporates) from the bottom of the ice layer, and carves channels in the surface. The channels take on many forms. In the subimage shown here (figure 1) the gas from the dry ice has etched wide shallow channels. This region is relatively flat, which may be the reason these channels have a different morphology than the 'spiders' seen in more hummocky terrain. Observation Geometry Image PSP_003364_0945 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on 15-Apr-2007. The complete image is centered at -85.4 degrees latitude, 104.0 degrees East longitude. The range to the target site was 251.5 km (157.2 miles). At this distance the image scale is 25.2 cm/pixel (with 1 x 1 binning) so objects 75 cm across are resolved. The image shown here has been map-projected to 25 cm/pixel . The image was taken at a local Mars time of 06:57 PM and the scene is illuminated from the west with a solar incidence angle of 75 degrees, thus the sun was about 15 degrees above the horizon. At a solar longitude of 219.6 degrees, the season on Mars is Northern Autumn.NASA Technical Reports Server (NTRS)
2007-01-01
[figure removed for brevity, see original site] Figure 1 There is an enigmatic region near the south pole of Mars known as the 'cryptic' terrain. It stays cold in the spring, even as its albedo darkens and the sun rises in the sky. This region is covered by a layer of translucent seasonal carbon dioxide ice that warms and evaporates from below. As carbon dioxide gas escapes from below the slab of seasonal ice it scours dust from the surface. The gas vents to the surface, where the dust is carried downwind by the prevailing wind. The channels carved by the escaping gas are often radially organized and are known informally as 'spiders' (figure 1). Observation Geometry Image PSP_003179_0945 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on 01-Apr-2007. The complete image is centered at -85.4 degrees latitude, 104.0 degrees East longitude. The range to the target site was 245.9 km (153.7 miles). At this distance the image scale is 49.2 cm/pixel (with 2 x 2 binning) so objects 148 cm across are resolved. The image shown here has been map-projected to 50 cm/pixel . The image was taken at a local Mars time of 06:19 PM and the scene is illuminated from the west with a solar incidence angle of 78 degrees, thus the sun was about 12 degrees above the horizon. At a solar longitude of 210.8 degrees, the season on Mars is Northern Autumn.Science in Motion: Isolated Araneiform Topography
NASA Technical Reports Server (NTRS)
2007-01-01
[figure removed for brevity, see original site] Figure 1 Have you ever found that to describe something you had to go to the dictionary and search for just the right word? The south polar terrain is so full of unearthly features that we had to visit Mr. Webster to find a suitable term. 'Araneiform' means 'spider-like'. These are channels that are carved in the surface by carbon dioxide gas. We do not have this process on Earth. The channels are somewhat radially organized (figure 1) and widen and deepen as they converge. In the past we've just refered to them as 'spiders.' 'Isolated araneiform topography' means that our features look like spiders that are not in contact with each other. Observation Geometry Image PSP_003087_0930 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on 24-Mar-2007. The complete image is centered at -87.1 degrees latitude, 126.3 degrees East longitude. The range to the target site was 244.4 km (152.8 miles). At this distance the image scale is 24.5 cm/pixel (with 1 x 1 binning) so objects 73 cm across are resolved. The image shown here has been map-projected to 25 cm/pixel . The image was taken at a local Mars time of 08:22 PM and the scene is illuminated from the west with a solar incidence angle of 81 degrees, thus the sun was about 9 degrees above the horizon. At a solar longitude of 206.4 degrees, the season on Mars is Northern Autumn.DOE Office of Scientific and Technical Information (OSTI.GOV)
Kishimoto, S., E-mail: syunji.kishimoto@kek.jp; Haruki, R.; Mitsui, T.
We developed a silicon avalanche photodiode (Si-APD) linear-array detector for use in nuclear resonant scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm{sup 2}) with a pixel pitch of 150 μm and depletion depth of 10 μm. An ultrafast frontend circuit allows the X-ray detector to obtain a high output rate of >10{sup 7} cps per pixel. High-performance integrated circuits achieve multichannel scaling over 1024 continuous time bins with a 1 ns resolution for each pixel without dead time. The multichannel scaling method enabled us to record a time spectrummore » of the 14.4 keV nuclear radiation at each pixel with a time resolution of 1.4 ns (FWHM). This method was successfully applied to nuclear forward scattering and nuclear small-angle scattering on {sup 57}Fe.« less
Yardangs: Nature's Weathervanes
2017-11-28
The prominent tear-shaped features in this image from NASA's Mars Reconnaissance Orbiter (MRO) are erosional features called yardangs. Yardangs are composed of sand grains that have clumped together and have become more resistant to erosion than their surrounding materials. As the winds of Mars blow and erode away at the landscape, the more cohesive rock is left behind as a standing feature. (This Context Camera image shows several examples of yardangs that overlie the darker iron-rich material that makes up the lava plains in the southern portion of Elysium Planitia.) Resistant as they may be, the yardangs are not permanent, and will eventually be eroded away by the persistence of the Martian winds. For scientists observing the Red Planet, yardangs serve as a useful indicator of regional prevailing wind direction. The sandy structures are slowly eroded down and carved into elongated shapes that point in the downwind direction, like giant weathervanes. In this instance, the yardangs are all aligned, pointing towards north-northwest. This shows that the winds in this area generally gust in that direction. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 55.8 centimeters (21 inches) per pixel (with 2 x 2 binning); objects on the order of 167 centimeters (65.7 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22119
2015-08-01
lifetime ( t2 ) corresponds to protein- bound NADH (23). Conversely, protein-bound FAD corre- sponds to the short lifetime, whereas free FAD corresponds...single photon counting (TCSPC) electronics (SPC-150, Becker and Hickl). TCSPC uses a fast detector PMT to measure the time between a laser pulse and... Becker and Hickl). A binning of nine surrounding pixels was used. Then, the fluorescence lifetime components were computed for each pixel by deconvolving
Evaluation of imaging quality for flat-panel detector based low dose C-arm CT system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Sungchae
The image quality associated with the extent of the angle of gantry rotation, the number of projection views, and the dose of X-ray radiation was investigated in flat-panel detector (FPD) based C-arm cone-beam computed tomography (CBCT) system for medical applications. A prototype CBCT system for the projection acquisition used the X-ray tube (A-132, Varian inc.) having rhenium-tungsten molybdenum target and flat panel a-Si X-ray detector (PaxScan 4030CB, Varian inc.) having a 397 x 298 mm active area with 388 μm pixel pitch and 1024 x 768 pixels in 2 by 2 binning mode. The performance comparison of X-ray imaging qualitymore » was carried out using the Feldkamp, Davis, and Kress (FDK) reconstruction algorithm between different conditions of projection acquisition. In this work, head-and-dental (75 kVp/20 mA) and chest (90 kVp/25 mA) phantoms were used to evaluate the image quality. The 361 (30 fps x 12 s) projection data during 360 deg. gantry rotation with 1 deg. interval for the 3D reconstruction were acquired. Parke weighting function were applied to handle redundant data and improve the reconstructed image quality in a mobile C-arm system with limited rotation angles. The reconstructed 3D images were investigated for comparison of qualitative image quality in terms of scan protocols (projection views, rotation angles and exposure dose). Furthermore, the performance evaluation in image quality will be investigated regarding X-ray dose and limited projection data for a FPD based mobile C-arm CBCT system. (authors)« less
Improved image retrieval based on fuzzy colour feature vector
NASA Astrophysics Data System (ADS)
Ben-Ahmeida, Ahlam M.; Ben Sasi, Ahmed Y.
2013-03-01
One of Image indexing techniques is the Content-Based Image Retrieval which is an efficient way for retrieving images from the image database automatically based on their visual contents such as colour, texture, and shape. In this paper will be discuss how using content-based image retrieval (CBIR) method by colour feature extraction and similarity checking. By dividing the query image and all images in the database into pieces and extract the features of each part separately and comparing the corresponding portions in order to increase the accuracy in the retrieval. The proposed approach is based on the use of fuzzy sets, to overcome the problem of curse of dimensionality. The contribution of colour of each pixel is associated to all the bins in the histogram using fuzzy-set membership functions. As a result, the Fuzzy Colour Histogram (FCH), outperformed the Conventional Colour Histogram (CCH) in image retrieving, due to its speedy results, where were images represented as signatures that took less size of memory, depending on the number of divisions. The results also showed that FCH is less sensitive and more robust to brightness changes than the CCH with better retrieval recall values.
Development of a fast multi-line x-ray CT detector for NDT
NASA Astrophysics Data System (ADS)
Hofmann, T.; Nachtrab, F.; Schlechter, T.; Neubauer, H.; Mühlbauer, J.; Schröpfer, S.; Ernst, J.; Firsching, M.; Schweiger, T.; Oberst, M.; Meyer, A.; Uhlmann, N.
2015-04-01
Typical X-ray detectors for non-destructive testing (NDT) are line detectors or area detectors, like e.g. flat panel detectors. Multi-line detectors are currently only available in medical Computed Tomography (CT) scanners. Compared to flat panel detectors, line and multi-line detectors can achieve much higher frame rates. This allows time-resolved 3D CT scans of an object under investigation. Also, an improved image quality can be achieved due to reduced scattered radiation from object and detector themselves. Another benefit of line and multi-line detectors is that very wide detectors can be assembled easily, while flat panel detectors are usually limited to an imaging field with a size of approx. 40 × 40 cm2 at maximum. The big disadvantage of line detectors is the limited number of object slices that can be scanned simultaneously. This leads to long scan times for large objects. Volume scans with a multi-line detector are much faster, but with almost similar image quality. Due to the promising properties of multi-line detectors their application outside of medical CT would also be very interesting for NDT. However, medical CT multi-line detectors are optimized for the scanning of human bodies. Many non-medical applications require higher spatial resolutions and/or higher X-ray energies. For those non-medical applications we are developing a fast multi-line X-ray detector.In the scope of this work, we present the current state of the development of the novel detector, which includes several outstanding properties like an adjustable curved design for variable focus-detector-distances, conserving nearly uniform perpendicular irradiation over the entire detector width. Basis of the detector is a specifically designed, radiation hard CMOS imaging sensor with a pixel pitch of 200 μ m. Each pixel has an automatic in-pixel gain adjustment, which allows for both: a very high sensitivity and a wide dynamic range. The final detector is planned to have 256 lines of pixels. By using a modular assembly of the detector, the width can be chosen as multiples of 512 pixels. With a frame rate of up to 300 frames/s (full resolution) or 1200 frame/s (analog binning to 400 μ m pixel pitch) time-resolved 3D CT applications become possible. Two versions of the detector are in development, one with a high resolution scintillator and one with a thick, structured and very efficient scintillator (pitch 400 μ m). This way the detector can even work with X-ray energies up to 450 kVp.
NASA Astrophysics Data System (ADS)
Cooper, N. J.; Lainey, V.; Meunier, L.-E.; Murray, C. D.; Zhang, Q.-F.; Baillie, K.; Evans, M. W.; Thuillot, W.; Vienne, A.
2018-02-01
Aims: Caviar is a software package designed for the astrometric measurement of natural satellite positions in images taken using the Imaging Science Subsystem (ISS) of the Cassini spacecraft. Aspects of the structure, functionality, and use of the software are described, and examples are provided. The integrity of the software is demonstrated by generating new measurements of the positions of selected major satellites of Saturn, 2013-2016, along with their observed minus computed (O-C) residuals relative to published ephemerides. Methods: Satellite positions were estimated by fitting a model to the imaged limbs of the target satellites. Corrections to the nominal spacecraft pointing were computed using background star positions based on the UCAC5 and Tycho2 star catalogues. UCAC5 is currently used in preference to Gaia-DR1 because of the availability of proper motion information in UCAC5. Results: The Caviar package is available for free download. A total of 256 new astrometric observations of the Saturnian moons Mimas (44), Tethys (58), Dione (55), Rhea (33), Iapetus (63), and Hyperion (3) have been made, in addition to opportunistic detections of Pandora (20), Enceladus (4), Janus (2), and Helene (5), giving an overall total of 287 new detections. Mean observed-minus-computed residuals for the main moons relative to the JPL SAT375 ephemeris were - 0.66 ± 1.30 pixels in the line direction and 0.05 ± 1.47 pixels in the sample direction. Mean residuals relative to the IMCCE NOE-6-2015-MAIN-coorb2 ephemeris were -0.34 ± 0.91 pixels in the line direction and 0.15 ± 1.65 pixels in the sample direction. The reduced astrometric data are provided in the form of satellite positions for each image. The reference star positions are included in order to allow reprocessing at some later date using improved star catalogues, such as later releases of Gaia, without the need to re-estimate the imaged star positions. The Caviar software is available for free download from: ftp://ftp://ftp.imcce.fr/pub/softwares/caviar.Full Tables 1 and 5 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/610/A2
Ho, Shirley; Agarwal, Nishant; Myers, Adam D.; ...
2015-05-22
Here, the Sloan Digital Sky Survey has surveyed 14,555 square degrees of the sky, and delivered over a trillion pixels of imaging data. We present the large-scale clustering of 1.6 million quasars between z=0.5 and z=2.5 that have been classified from this imaging, representing the highest density of quasars ever studied for clustering measurements. This data set spans 0~ 11,00 square degrees and probes a volume of 80 h –3 Gpc 3. In principle, such a large volume and medium density of tracers should facilitate high-precision cosmological constraints. We measure the angular clustering of photometrically classified quasars using an optimalmore » quadratic estimator in four redshift slices with an accuracy of ~ 25% over a bin width of δ l ~ 10–15 on scales corresponding to matter-radiation equality and larger (0ℓ ~ 2–3).« less
Transient Slope Lineae Formation in a Well-Preserved Crater
2017-11-20
This enhanced color image from NASA's Mars Reconnaissance Orbiter (MRO) shows what are called "recurring slope lineae"s in Tivat Crater. The narrow, dark flows descend downhill (towards the upper left). Analysis shows that the flows all end at approximately the same slope, which is similar to the angle of repose for sand. RSL are mostly found on steep rocky slopes in dark regions of Mars, such as the southern mid-latitudes, Valles Marineris near the equator, and in Acidalia Planitia on the northern plains. The appearance and growth of these features resemble seeping liquid water, but how they form remains unclear, and this research demonstrated that the RSL flows seen by HiRISE are likely moving granular material like sand and dust. These findings indicate that present-day Mars may not have a significant volume of liquid water. The water-restricted conditions that exist on Mars would make it difficult for Earth-like life to exist near the surface of the planet. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.6 centimeters (10.8 inches) per pixel (with 1 x 1 binning); objects on the order of 77 centimeters (30.3 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22114
NASA Technical Reports Server (NTRS)
Adams, M. L.; Hagyard, M. J.; West, E. A.; Smith, J. E.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
The Marshall Space Flight Center's (MSFC) solar group announces the successful upgrade of our tower vector magnetograph. In operation since 1973, the last major alterations to the system (which includes telescope, filter, polarizing optics, camera, and data acquisition computer) were made in 1982, when we upgraded from an SEC Vidicon camera to a CCD. In 1985, other changes were made which increased the field-of-view from 5 x 5 arc min (2.4 arc sec per pixel) to 6 x 6 arc min with a resolution of 2.81 arc sec. In 1989, the Apollo Telescope Mount H-alpha telescope was coaligned with the optics of the magnetograph. The most recent upgrades (year 2000), funded to support the High Energy Solar Spectroscopic Imager (HESSI) mission, have resulted in a pixel size of 0.64 arc sec over a 7 x 5.2 arc min field-of-view (binning 1x1). This poster describes the physical characteristics of the new system and compares spatial resolution, timing, and versatility with the old system. Finally, we provide a description of our Internet web site, which includes images of our most recent observations, and links to our data archives, as well as the history of magnetography at MSFC and education outreach pages.
A Closer Look at Holden Crater
2017-03-15
Holden Crater in southern Margaritifer Terra displays a series of finely layered deposits on its floor. The layered deposits are especially well exposed in the southwestern section of the crater where erosion by water flowing through a breach in the crater rim created spectacular outcrops. In this location, the deposits appear beneath a cap of alluvial fan materials (tan to brown in this image). Within the deposits, individual layers are nearly flat-lying and can be traced for hundreds of meters to kilometers. Information from the CRISM instrument on the Mars Reconnaissance Orbiter suggests that at least some of these beds contain clays. By contrast, the beds in the overlying alluvial fan are less continuous and dip in varying directions, showing less evidence for clays. Collectively, the characteristics of the finely bedded deposits suggest they may have been deposited into a lake on the crater floor, perhaps fed by runoff related to formation of the overlying fans. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.9 centimeters (10.2 inches) per pixel (with 1 x 1 binning); objects on the order of 78 centimeters (30.7 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21561
Bin Ratio-Based Histogram Distances and Their Application to Image Classification.
Hu, Weiming; Xie, Nianhua; Hu, Ruiguang; Ling, Haibin; Chen, Qiang; Yan, Shuicheng; Maybank, Stephen
2014-12-01
Large variations in image background may cause partial matching and normalization problems for histogram-based representations, i.e., the histograms of the same category may have bins which are significantly different, and normalization may produce large changes in the differences between corresponding bins. In this paper, we deal with this problem by using the ratios between bin values of histograms, rather than bin values' differences which are used in the traditional histogram distances. We propose a bin ratio-based histogram distance (BRD), which is an intra-cross-bin distance, in contrast with previous bin-to-bin distances and cross-bin distances. The BRD is robust to partial matching and histogram normalization, and captures correlations between bins with only a linear computational complexity. We combine the BRD with the ℓ1 histogram distance and the χ(2) histogram distance to generate the ℓ1 BRD and the χ(2) BRD, respectively. These combinations exploit and benefit from the robustness of the BRD under partial matching and the robustness of the ℓ1 and χ(2) distances to small noise. We propose a method for assessing the robustness of histogram distances to partial matching. The BRDs and logistic regression-based histogram fusion are applied to image classification. The experimental results on synthetic data sets show the robustness of the BRDs to partial matching, and the experiments on seven benchmark data sets demonstrate promising results of the BRDs for image classification.
CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) on MRO (Mars Reconnaissance Orbiter)
NASA Astrophysics Data System (ADS)
Murchie, Scott L.; Arvidson, Raymond E.; Bedini, Peter; Beisser, K.; Bibring, Jean-Pierre; Bishop, J.; Boldt, John D.; Choo, Tech H.; Clancy, R. Todd; Darlington, Edward H.; Des Marais, D.; Espiritu, R.; Fasold, Melissa J.; Fort, Dennis; Green, Richard N.; Guinness, E.; Hayes, John R.; Hash, C.; Heffernan, Kevin J.; Hemmler, J.; Heyler, Gene A.; Humm, David C.; Hutchison, J.; Izenberg, Noam R.; Lee, Robert E.; Lees, Jeffrey J.; Lohr, David A.; Malaret, Erick R.; Martin, T.; Morris, Richard V.; Mustard, John F.; Rhodes, Edgar A.; Robinson, Mark S.; Roush, Ted L.; Schaefer, Edward D.; Seagrave, Gordon G.; Silverglate, Peter R.; Slavney, S.; Smith, Mark F.; Strohbehn, Kim; Taylor, Howard W.; Thompson, Patrick L.; Tossman, Barry E.
2004-12-01
CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) is a hyperspectral imager that will be launched on the MRO (Mars Reconnaissance Orbiter) spacecraft in August 2005. MRO"s objectives are to recover climate science originally to have been conducted on the Mars Climate Orbiter (MCO), to identify and characterize sites of possible aqueous activity to which future landed missions may be sent, and to characterize the composition, geology, and stratigraphy of Martian surface deposits. MRO will operate from a sun-synchronous, near-circular (255x320 km altitude), near-polar orbit with a mean local solar time of 3 PM. CRISM"s spectral range spans the ultraviolet (UV) to the mid-wave infrared (MWIR), 383 nm to 3960 nm. The instrument utilizes a Ritchey-Chretien telescope with a 2.12° field-of-view (FOV) to focus light on the entrance slit of a dual spectrometer. Within the spectrometer, light is split by a dichroic into VNIR (visible-near-infrared, 383-1071 nm) and IR (infrared, 988-3960 nm) beams. Each beam is directed into a separate modified Offner spectrometer that focuses a spectrally dispersed image of the slit onto a two dimensional focal plane (FP). The IR FP is a 640 x 480 HgCdTe area array; the VNIR FP is a 640 x 480 silicon photodiode area array. The spectral image is contiguously sampled with a 6.6 nm spectral spacing and an instantaneous field of view of 61.5 μradians. The Optical Sensor Unit (OSU) can be gimbaled to take out along-track smear, allowing long integration times that afford high signal-to-noise ratio (SNR) at high spectral and spatial resolution. The scan motor and encoder are controlled by a separately housed Gimbal Motor Electronics (GME) unit. A Data Processing Unit (DPU) provides power, command and control, and data editing and compression. CRISM acquires three major types of observations of the Martian surface and atmosphere. In Multispectral Mapping Mode, with the gimbal pointed at planet nadir, data are collected at frame rates of 15 or 30 Hz. A commandable subset of wavelengths is saved by the DPU and binned 5:1 or 10:1 cross-track. The combination of frame rates and binning yields pixel footprints of 100 or 200 m. In this mode, nearly the entire planet can be mapped at wavelengths of key mineralogic absorption bands to select regions of interest. In Targeted Mode, the gimbal is scanned over +/-60° from nadir to remove most along-track motion, and a region of interest is mapped at full spatial and spectral resolution. Ten additional abbreviated, pixel-binned observations are taken before and after the main hyperspectral image at longer atmospheric path lengths, providing an emission phase function (EPF) of the site for atmospheric study and correction of surface spectra for atmospheric effects. In Atmospheric Mode, the central observation is eliminated and only the EPF is acquired. Global grids of the resulting lower data volume observation are taken repeatedly throughout the Martian year to measure seasonal variations in atmospheric properties.
A fast double shutter for CCD-based metrology
NASA Astrophysics Data System (ADS)
Geisler, R.
2017-02-01
Image based metrology such as Particle Image Velocimetry (PIV) depends on the comparison of two images of an object taken in fast succession. Cameras for these applications provide the so-called `double shutter' mode: One frame is captured with a short exposure time and in direct succession a second frame with a long exposure time can be recorded. The difference in the exposure times is typically no problem since illumination is provided by a pulsed light source such as a laser and the measurements are performed in a darkened environment to prevent ambient light from accumulating in the long second exposure time. However, measurements of self-luminous processes (e.g. plasma, combustion ...) as well as experiments in ambient light are difficult to perform and require special equipment (external shutters, highspeed image sensors, multi-sensor systems ...). Unfortunately, all these methods incorporate different drawbacks such as reduced resolution, degraded image quality, decreased light sensitivity or increased susceptibility to decalibration. In the solution presented here, off-the-shelf CCD sensors are used with a special timing to combine neighbouring pixels in a binning-like way. As a result, two frames of short exposure time can be captured in fast succession. They are stored in the on-chip vertical register in a line-interleaved pattern, read out in the common way and separated again by software. The two resultant frames are completely congruent; they expose no insensitive lines or line shifts and thus enable sub-pixel accurate measurements. A third frame can be captured at the full resolution analogue to the double shutter technique. Image based measurement techniques such as PIV can benefit from this mode when applied in bright environments. The third frame is useful e.g. for acceleration measurements or for particle tracking applications.
NASA Astrophysics Data System (ADS)
Didierlaurent, D.; Ribes, S.; Batatia, H.; Jaudet, C.; Dierickx, L. O.; Zerdoud, S.; Brillouet, S.; Caselles, O.; Courbon, F.
2012-12-01
This study assesses the accuracy of prospective phase-gated PET/CT data binning and presents a retrospective data binning method that improves image quality and consistency. Respiratory signals from 17 patients who underwent 4D PET/CT were analysed to evaluate the reproducibility of temporal triggers used for the standard phase-based gating method. Breathing signals were reprocessed to implement retrospective PET data binning. The mean and standard deviation of time lags between automatic triggers provided by the Real-time Position Management (RPM, Varian) gating device and inhalation peaks derived from respiratory curves were computed for each patient. The total number of respiratory cycles available for 4D PET/CT according to the binning mode (prospective versus retrospective) was compared. The maximum standardized uptake value (SUVmax), biological tumour volume (BTV) and tumour trajectory measures were determined from the PET/CT images of five patients. Compared to retrospective binning (RB), prospective gating approach led to (i) a significant loss in breathing cycles (15%) and (ii) the inconsistency of data binning due to temporal dispersion of triggers (average 396 ms). Consequently, tumour characterization could be impacted. In retrospective mode, SUVmax was up to 27% higher, where no significant difference appeared in BTV. In addition, prospective mode gave an inconsistent spatial location of the tumour throughout the bins. Improved consistency with breathing patterns and greater motion amplitude of the tumour centroid were observed with retrospective mode. The detection of the tumour motion and trajectory was improved also for small temporal dispersion of triggers. This study shows that the binning mode could have a significant impact on 4D PET images. The consistency of triggers with breathing signals should be checked before clinical use of gated PET/CT images, and our RB method improves 4D PET/CT image quantification.
NASA Astrophysics Data System (ADS)
Zhou, Peng; Zhang, Xi; Sun, Weifeng; Dai, Yongshou; Wan, Yong
2018-01-01
An algorithm based on time-frequency analysis is proposed to select an imaging time window for the inverse synthetic aperture radar imaging of ships. An appropriate range bin is selected to perform the time-frequency analysis after radial motion compensation. The selected range bin is that with the maximum mean amplitude among the range bins whose echoes are confirmed to be contributed by a dominant scatter. The criterion for judging whether the echoes of a range bin are contributed by a dominant scatter is key to the proposed algorithm and is therefore described in detail. When the first range bin that satisfies the judgment criterion is found, a sequence composed of the frequencies that have the largest amplitudes in every moment's time-frequency spectrum corresponding to this range bin is employed to calculate the length and the center moment of the optimal imaging time window. Experiments performed with simulation data and real data show the effectiveness of the proposed algorithm, and comparisons between the proposed algorithm and the image contrast-based algorithm (ICBA) are provided. Similar image contrast and lower entropy are acquired using the proposed algorithm as compared with those values when using the ICBA.
2016-03-10
It's hard to see in the dark. Most HiRISE images are are taken when the sun is at least 15 degrees above the horizon. (If you hold your hand at arm's length with fingers together, it's about five degrees wide on average.) However, to see what's going on in winter, we need to look at times and places where the Sun is just barely over the horizon. This image was taken to look at seasonal frost in gullies during southern winter on Mars, with the Sun only about two degrees over the horizon (just before sunset). To make things more difficult, the gullies are on a steep slope facing away from the sun, so they are in deep shadow. Under these conditions, HiRISE takes what are called "bin 4" images. This means that the image shows less detail, but by adding up the light from 16 pixels (a 4x4 square) we can see details in shadows. Even with the reduced resolution, we can see plenty of detail in the gullies, and learn about the seasonal frost. http://photojournal.jpl.nasa.gov/catalog/PIA20480
NASA Astrophysics Data System (ADS)
Hu, Yue-Houng; Rottmann, Joerg; Fueglistaller, Rony; Myronakis, Marios; Wang, Adam; Huber, Pascal; Shedlock, Daniel; Morf, Daniel; Baturin, Paul; Star-Lack, Josh; Berbeco, Ross
2018-02-01
While megavoltage cone-beam computed tomography (CBCT) using an electronic portal imaging device (EPID) provides many advantages over kilovoltage (kV) CBCT, clinical adoption is limited by its high doses. Multi-layer imager (MLI) EPIDs increase DQE(0) while maintaining high resolution. However, even well-designed, high-performance MLIs suffer from increased electronic noise from each readout, degrading low-dose image quality. To improve low-dose performance, shift-and-bin addition (ShiBA) imaging is proposed, leveraging the unique architecture of the MLI. ShiBA combines hardware readout-binning and super-resolution concepts, reducing electronic noise while maintaining native image sampling. The imaging performance of full-resolution (FR); standard, aligned binned (BIN); and ShiBA images in terms of noise power spectrum (NPS), electronic NPS, modulation transfer function (MTF), and the ideal observer signal-to-noise ratio (SNR)—the detectability index (d‧)—are compared. The FR 4-layer readout of the prototype MLI exhibits an electronic NPS magnitude 6-times higher than a state-of-the-art single layer (SLI) EPID. Although the MLI is built on the same readout platform as the SLI, with each layer exhibiting equivalent electronic noise, the multi-stage readout of the MLI results in electronic noise 50% higher than simple summation. Electronic noise is mitigated in both BIN and ShiBA imaging, reducing its total by ~12 times. ShiBA further reduces the NPS, effectively upsampling the image, resulting in a multiplication by a sinc2 function. Normalized NPS show that neither ShiBA nor BIN otherwise affects image noise. The LSF shows that ShiBA removes the pixilation artifact of BIN images and mitigates the effect of detector shift, but does not quantifiably improve the MTF. ShiBA provides a pre-sampled representation of the images, mitigating phase dependence. Hardware binning strategies lower the quantum noise floor, with 2 × 2 implementation reducing the dose at which DQE(0) degrades by 10% from 0.01 MU to 0.004 MU, representing 20% improvement in d‧.
Measuring Fast Calcium Fluxes in Cardiomyocytes
Golebiewska, Urszula; Scarlata, Suzanne
2011-01-01
Cardiomyocytes have multiple Ca2+ fluxes of varying duration that work together to optimize function 1,2. Changes in Ca2+ activity in response to extracellular agents is predominantly regulated by the phospholipase Cβ- Gαq pathway localized on the plasma membrane which is stimulated by agents such as acetylcholine 3,4. We have recently found that plasma membrane protein domains called caveolae5,6 can entrap activated Gαq7. This entrapment has the effect of stabilizing the activated state of Gαq and resulting in prolonged Ca2+ signals in cardiomyocytes and other cell types8. We uncovered this surprising result by measuring dynamic calcium responses on a fast scale in living cardiomyocytes. Briefly, cells are loaded with a fluorescent Ca2+ indicator. In our studies, we used Ca2+ Green (Invitrogen, Inc.) which exhibits an increase in fluorescence emission intensity upon binding of calcium ions. The fluorescence intensity is then recorded for using a line-scan mode of a laser scanning confocal microscope. This method allows rapid acquisition of the time course of fluorescence intensity in pixels along a selected line, producing several hundreds of time traces on the microsecond time scale. These very fast traces are transferred into excel and then into Sigmaplot for analysis, and are compared to traces obtained for electronic noise, free dye, and other controls. To dissect Ca2+ responses of different flux rates, we performed a histogram analysis that binned pixel intensities with time. Binning allows us to group over 500 traces of scans and visualize the compiled results spatially and temporally on a single plot. Thus, the slow Ca2+ waves that are difficult to discern when the scans are overlaid due to different peak placement and noise, can be readily seen in the binned histograms. Very fast fluxes in the time scale of the measurement show a narrow distribution of intensities in the very short time bins whereas longer Ca2+ waves show binned data with a broad distribution over longer time bins. These different time distributions allow us to dissect the timing of Ca2+fluxes in the cells, and to determine their impact on various cellular events. PMID:22143396
Measuring fast calcium fluxes in cardiomyocytes.
Golebiewska, Urszula; Scarlata, Suzanne
2011-11-29
Cardiomyocytes have multiple Ca(2+) fluxes of varying duration that work together to optimize function (1,2). Changes in Ca(2+) activity in response to extracellular agents is predominantly regulated by the phospholipase Cβ- Gα(q;) pathway localized on the plasma membrane which is stimulated by agents such as acetylcholine (3,4). We have recently found that plasma membrane protein domains called caveolae(5,6) can entrap activated Gα(q;)(7). This entrapment has the effect of stabilizing the activated state of Gα(q;) and resulting in prolonged Ca(2+) signals in cardiomyocytes and other cell types(8). We uncovered this surprising result by measuring dynamic calcium responses on a fast scale in living cardiomyocytes. Briefly, cells are loaded with a fluorescent Ca(2+) indicator. In our studies, we used Ca(2+) Green (Invitrogen, Inc.) which exhibits an increase in fluorescence emission intensity upon binding of calcium ions. The fluorescence intensity is then recorded for using a line-scan mode of a laser scanning confocal microscope. This method allows rapid acquisition of the time course of fluorescence intensity in pixels along a selected line, producing several hundreds of time traces on the microsecond time scale. These very fast traces are transferred into excel and then into Sigmaplot for analysis, and are compared to traces obtained for electronic noise, free dye, and other controls. To dissect Ca(2+) responses of different flux rates, we performed a histogram analysis that binned pixel intensities with time. Binning allows us to group over 500 traces of scans and visualize the compiled results spatially and temporally on a single plot. Thus, the slow Ca(2+) waves that are difficult to discern when the scans are overlaid due to different peak placement and noise, can be readily seen in the binned histograms. Very fast fluxes in the time scale of the measurement show a narrow distribution of intensities in the very short time bins whereas longer Ca(2+) waves show binned data with a broad distribution over longer time bins. These different time distributions allow us to dissect the timing of Ca(2+)fluxes in the cells, and to determine their impact on various cellular events.
Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, P.; Zbijewski, W.; Gang, G. J.
2011-10-15
Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequencymore » content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M {approx} 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of {approx}65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end maximizing NEQ (90 kVp). The analysis quantified fairly intuitive results--e.g., {approx}0.1-0.2 mm pixel size (and a sharp reconstruction filter) optimal for high-frequency tasks (bone detail) compared to {approx}0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific protocol for 1 x 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks along with 2 x 2 binning reconstruction with a smooth filter for low-frequency tasks. The analysis guided selection of specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.« less
Characterization of operating parameters of an in vivo micro CT system
NASA Astrophysics Data System (ADS)
Ghani, Muhammad U.; Ren, Liqiang; Yang, Kai; Chen, Wei R.; Wu, Xizeng; Liu, Hong
2016-03-01
The objective of this study was to characterize the operating parameters of an in-vivo micro CT system. In-plane spatial resolution, noise, geometric accuracy, CT number uniformity and linearity, and phase effects were evaluated using various phantoms. The system employs a flat panel detector with a 127 μm pixel pitch, and a micro focus x-ray tube with a focal spot size ranging from 5-30 μm. The system accommodates three magnification sets of 1.72, 2.54 and 5.10. The in-plane cutoff frequencies (10% MTF) ranged from 2.31 lp/mm (60 mm FOV, M=1.72, 2×2 binning) to 13 lp/mm (10 mm FOV, M=5.10, 1×1 binning). The results were qualitatively validated by a resolution bar pattern phantom and the smallest visible lines were in 30-40 μm range. Noise power spectrum (NPS) curves revealed that the noise peaks exponentially increased as the geometric magnification (M) increased. True in-plane pixel spacing and slice thickness were within 2% of the system's specifications. The CT numbers in cone beam modality are greatly affected by scattering and thus they do not remain the same in the three magnifications. A high linear relationship (R2 > 0.999) was found between the measured CT numbers and Hydroxyapatite (HA) loadings of the rods of a water filled mouse phantom. Projection images of a laser cut acrylic edge acquired at a small focal spot size of 5 μm with 1.5 fps revealed that noticeable phase effects occur at M=5.10 in the form of overshooting at the boundary of air and acrylic. In order to make the CT numbers consistent across all the scan settings, scatter correction methods may be a valuable improvement for this system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kishimoto, S., E-mail: syunji.kishimoto@kek.jp; Haruki, R.; Mitsui, T.
We developed a silicon avalanche photodiode (Si-APD) linear-array detector to be used for time-resolved X-ray scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm{sup 2}) with a pixel pitch of 150 μm and a depletion depth of 10 μm. The multichannel scaler counted X-ray pulses over continuous 2046 time bins for every 0.5 ns and recorded a time spectrum at each pixel with a time resolution of 0.5 ns (FWHM) for 8.0 keV X-rays. Using the detector system, we were able to observe X-ray peaks clearly separated with 2 nsmore » interval in the multibunch-mode operation of the Photon Factory ring. The small-angle X-ray scattering for polyvinylidene fluoride film was also observed with the detector.« less
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data is selected from the image pixel data and linear spot segments are identified from the threshold pixel data selected.. Ihe positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects are tracked by comparing the positions of fust and last pixels of a linear segment present in the captured image with respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear data segment is saved such as sum of pixels and the weighted sum of pixels i.e., each threshold pixel value is multiplied by that pixel's x-location).
NASA Astrophysics Data System (ADS)
McEwen, A. S.; Eliason, E.; Gulick, V. C.; Spinoza, Y.; Beyer, R. A.; HiRISE Team
2010-12-01
The High Resolution Imaging Science Experiment (HiRISE) camera, orbiting Mars since 2006 on the Mars Reconnaissance Orbiter (MRO), has returned more than 17,000 large images with scales as small as 25 cm/pixel. From it’s beginning, the HiRISE team has followed “The People’s Camera” concept, with rapid release of useful images, explanations, and tools, and facilitating public image suggestions. The camera includes 14 CCDs, each read out into 2 data channels, so compressed images are returned from MRO as 28 long (up to 120,000 line) images that are 1024 pixels wide (or binned 2x2 to 512 pixels, etc.). This raw data is very difficult to use, especially for the public. At the HiRISE operations center the raw data are calibrated and processed into a series of B&W and color products, including browse images and JPEG2000-compressed images and tools to make it easy for everyone to explore these enormous images (see http://hirise.lpl.arizona.edu/). Automated pipelines do all of this processing, so we can keep up with the high data rate; images go directly to the format of the Planetary Data System (PDS). After students visually check each image product for errors, they are fully released just 1 month after receipt; captioned images (written by science team members) may be released sooner. These processed HiRISE images have been incorporated into tools such as Google Mars and World Wide Telescope for even greater accessibility. 51 Digital Terrain Models derived from HiRISE stereo pairs have been released, resulting in some spectacular flyover movies produced by members of the public and viewed up to 50,000 times according to YouTube. Public targeting began in 2007 via NASA Quest (http://marsoweb.nas.nasa.gov/HiRISE/quest/) and more than 200 images have been acquired, mostly by students and educators. At the beginning of 2010 we released HiWish (http://www.uahirise.org/hiwish/), opening HiRISE targeting to anyone in the world with Internet access, and already more than 100 public suggestions have been acquired. HiRISE has proven very popular with the public and science community. For example, a Google search on “HiRISE Mars” returns 626,000 results. We've participated in well over a two dozen presentations, specifically talking to middle and high-schoolers about HiRISE. Our images and captions have been featured in high-quality print magazines such as "National Geographic, Ciel et Espace, and Sky and Telescope.
Qu, Bin; Huang, Ying; Wang, Weiyuan; Sharma, Prateek; Kuhls-Gilcrist, Andrew T.; Cartwright, Alexander N.; Titus, Albert H.; Bednarek, Daniel R.; Rudin, Stephen
2011-01-01
Use of an extensible array of Electron Multiplying CCDs (EMCCDs) in medical x-ray imager applications was demonstrated for the first time. The large variable electronic-gain (up to 2000) and small pixel size of EMCCDs provide effective suppression of readout noise compared to signal, as well as high resolution, enabling the development of an x-ray detector with far superior performance compared to conventional x-ray image intensifiers and flat panel detectors. We are developing arrays of EMCCDs to overcome their limited field of view (FOV). In this work we report on an array of two EMCCD sensors running simultaneously at a high frame rate and optically focused on a mammogram film showing calcified ducts. The work was conducted on an optical table with a pulsed LED bar used to provide a uniform diffuse light onto the film to simulate x-ray projection images. The system can be selected to run at up to 17.5 frames per second or even higher frame rate with binning. Integration time for the sensors can be adjusted from 1 ms to 1000 ms. Twelve-bit correlated double sampling AD converters were used to digitize the images, which were acquired by a National Instruments dual-channel Camera Link PC board in real time. A user-friendly interface was programmed using LabVIEW to save and display 2K × 1K pixel matrix digital images. The demonstration tiles a 2 × 1 array to acquire increased-FOV stationary images taken at different gains and fluoroscopic-like videos recorded by scanning the mammogram simultaneously with both sensors. The results show high resolution and high dynamic range images stitched together with minimal adjustments needed. The EMCCD array design allows for expansion to an M×N array for arbitrarily larger FOV, yet with high resolution and large dynamic range maintained. PMID:23505330
NASA Astrophysics Data System (ADS)
Tajeddine, R.; Lainey, V.; Cooper, N. J.; Murray, C. D.
2015-03-01
Context. The Cassini spacecraft has been orbiting Saturn since 2004 and has returned images of satellites with an astrometric resolution as high as a few hundred meters per pixel. Aims: We used the images taken by the Narrow Angle Camera (NAC) of the Image Science Subsystem (ISS) instrument on board Cassini, for the purpose of astrometry. Methods: We applied the same method that was previously developed to reduce Cassini NAC images of Mimas and Enceladus. Results: We provide 5463 astrometric positions in right ascension and declination (α, δ) of the satellites: Tethys, Dione, Rhea, Iapetus, and Phoebe, using images that were taken by Cassini NAC between 2004 and 2012. the mean residuals compared to the JPL ephemeris SAT365 are of the order of hundreds of meters with standard deviations of the order of a few kilometers. The frequency analysis of the residuals shows the remaining unmodelled effects of satellites on the dynamics of other satellites. Full Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/575/A73
X-ray imaging performance of scintillator-filled silicon pore arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, Matthias; Engel, Klaus Juergen; Menser, Bernd
2008-03-15
The need for fine detail visibility in various applications such as dental imaging, mammography, but also neurology and cardiology, is the driver for intensive efforts in the development of new x-ray detectors. The spatial resolution of current scintillator layers is limited by optical diffusion. This limitation can be overcome by a pixelation, which prevents optical photons from crossing the interface between two neighboring pixels. In this work, an array of pores was etched in a silicon wafer with a pixel pitch of 50 {mu}m. A very high aspect ratio was achieved with wall thicknesses of 4-7 {mu}m and pore depthsmore » of about 400 {mu}m. Subsequently, the pores were filled with Tl-doped cesium iodide (CsI:Tl) as a scintillator in a special process, which includes powder melting and solidification of the CsI. From the sample geometry and x-ray absorption measurement the pore fill grade was determined to be 75%. The scintillator-filled samples have a circular active area of 16 mm diameter. They are coupled with an optical sensor binned to the same pixel pitch in order to measure the x-ray imaging performance. The x-ray sensitivity, i.e., the light output per absorbed x-ray dose, is found to be only 2.5%-4.5% of a commercial CsI-layer of similar thickness, thus very low. The efficiency of the pores to transport the generated light to the photodiode is estimated to be in the best case 6.5%. The modulation transfer function is 40% at 4 lp/mm and 10%-20% at 8 lp/mm. It is limited most likely by the optical gap between scintillator and sensor and by K-escape quanta. The detective quantum efficiency (DQE) is determined at different beam qualities and dose settings. The maximum DQE(0) is 0.28, while the x-ray absorption with the given thickness and fill factor is 0.57. High Swank noise is suspected to be the reason, mainly caused by optical scatter inside the CsI-filled pores. The results are compared to Monte Carlo simulations of the photon transport inside the pore array structure. In addition, some x-ray images of technical and anatomical phantoms are shown. This work shows that scintillator-filled pore arrays can provide x-ray imaging with high spatial resolution, but are not suitable in their current state for most of the applications in medical imaging, where increasing the x-ray doses cannot be tolerated.« less
Full-field OCT: ex vivo and in vivo biological imaging applications
NASA Astrophysics Data System (ADS)
Grieve, Katharine; Dubois, Arnaud; Moneron, Gael; Guyot, Elvire; Boccara, Albert C.
2005-04-01
We present results of studies in embryology and ophthalmology performed using our ultrahigh-resolution full-field OCT system. We also discuss recent developments to our ultrashort acquisition time full-field optical coherence tomography system designed to allow in vivo biological imaging. Preliminary results of high-speed imaging in biological samples are presented. The core of the experimental setup is the Linnik interferometer, illuminated by a white light source. En face tomographic images are obtained in real-time without scanning by computing the difference of two phase-opposed interferometric images recorded by high-resolution CCD cameras. An isotropic spatial resolution of ~1 μm is achieved thanks to the short source coherence length and the use of high numerical aperture microscope objectives. A detection sensitivity of ~90 dB is obtained by means of image averaging and pixel binning. In ophthalmology, reconstructed xz images from rat ocular tissue are presented, where cellular-level structures in the retina are revealed, demonstrating the unprecedented resolution of our instrument. Three-dimensional reconstructions of the mouse embryo allowing the study of the establishment of the anterior-posterior axis are shown. Finally we present the first results of embryonic imaging using the new rapid acquisition full-field OCT system, which offers an acquisition time of 10 μs per frame.
A New Instrument for the IRTF: the MIT Optical Rapid Imaging System (MORIS)
NASA Astrophysics Data System (ADS)
Gulbis, Amanda A. S.; Elliot, J. L.; Rojas, F. E.; Bus, S. J.; Rayner, J. T.; Stahlberger, W. E.; Tokunaga, A. T.; Adams, E. R.; Person, M. J.
2010-10-01
NASA's 3-m Infrared Telescope Facility (IRTF) on Mauna Kea, HI plays a leading role in obtaining planetary science observations. However, there has been no capability for high-speed, visible imaging from this telescope. Here we present a new IRTF instrument, MORIS, the MIT Optical Rapid Imaging System. MORIS is based on POETS (Portable Occultation Eclipse and Transit Systems; Souza et al., 2006, PASP, 118, 1550). Its primary component is an Andor iXon camera, a 512x512 array of 16-micron pixels with high quantum efficiency, low read noise, low dark current, and full-frame readout rates of between 3.5 Hz (6 e /pixel read noise) and 35 Hz (49 e /pixel read noise at electron-multiplying gain=1). User-selectable binning and subframing can increase the cadence to a few hundred Hz. An electron-multiplying mode can be employed for photon counting, effectively reducing the read noise to sub-electron levels at the expense of dynamic range. Data cubes, or individual frames, can be triggered to nanosecond accuracy using a GPS. MORIS is mounted on the side-facing widow of SpeX (Rayner et al. 2003, PASP, 115, 362), allowing simultaneous near-infrared and visible observations. The mounting box contains 3:1 reducing optics to produce a 60 arcsec x 60 arcsec field of view at f/12.7. It hosts a ten-slot filter wheel, with Sloan g×, r×, i×, and z×, VR, Johnson V, and long-pass red filters. We describe the instrument design, components, and measured characteristics. We report results from the first science observations, a 24 June 2008 stellar occultation by Pluto. We also discuss a recent overhaul of the optical path, performed in order to eliminate scattered light. This work is supported in part by NASA Planetary Major Equipment grant NNX07AK95G. We are indebted to the University of Hawai'i Institute for Astronomy machine shop, in particular Randy Chung, for fabricating instrument components.
Dual- and Multi-Energy CT: Principles, Technical Approaches, and Clinical Applications
Leng, Shuai; Yu, Lifeng; Fletcher, Joel G.
2015-01-01
In x-ray computed tomography (CT), materials having different elemental compositions can be represented by identical pixel values on a CT image (ie, CT numbers), depending on the mass density of the material. Thus, the differentiation and classification of different tissue types and contrast agents can be extremely challenging. In dual-energy CT, an additional attenuation measurement is obtained with a second x-ray spectrum (ie, a second “energy”), allowing the differentiation of multiple materials. Alternatively, this allows quantification of the mass density of two or three materials in a mixture with known elemental composition. Recent advances in the use of energy-resolving, photon-counting detectors for CT imaging suggest the ability to acquire data in multiple energy bins, which is expected to further improve the signal-to-noise ratio for material-specific imaging. In this review, the underlying motivation and physical principles of dual- or multi-energy CT are reviewed and each of the current technical approaches is described. In addition, current and evolving clinical applications are introduced. © RSNA, 2015 PMID:26302388
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal including determining a first motion vector between a first pixel position in a first image to a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of the first pixel position in the first image and the second pixel position in the second image, and the second pixel position in the second image and the third pixel position in the third image using a non-linear model, determining a position of the fourth pixel in a fourth image based upon the third motion vector.
Monitoring Sand Sheets and Dunes
2017-06-12
NASA's Mars Reconnaissance Orbiter (MRO) captured this crater featuring sand dunes and sand sheets on its floor. What are sand sheets? Snow fall on Earth is a good example of sand sheets: when it snows, the ground gets blanketed with up to a few meters of snow. The snow mantles the ground and "mimics" the underlying topography. Sand sheets likewise mantle the ground as a relatively thin deposit. This kind of environment has been monitored by HiRISE since 2007 to look for movement in the ripples covering the dunes and sheets. This is how scientists who study wind-blown sand can track the amount of sand moving through the area and possibly where the sand came from. Using the present environment is crucial to understanding the past: sand dunes, sheets, and ripples sometimes become preserved as sandstone and contain clues as to how they were deposited The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25 centimeters (9.8 inches) per pixel (with 1 x 1 binning); objects on the order of 75 centimeters (29.5 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21757
Ghadiri, H; Ay, M R; Shiran, M B; Soltanian-Zadeh, H
2013-01-01
Objective: Recently introduced energy-sensitive X-ray CT makes it feasible to discriminate different nanoparticulate contrast materials. The purpose of this work is to present a K-edge ratio method for differentiating multiple simultaneous contrast agents using spectral CT. Methods: The ratio of two images relevant to energy bins straddling the K-edge of the materials is calculated using an analytic CT simulator. In the resulting parametric map, the selected contrast agent regions can be identified using a thresholding algorithm. The K-edge ratio algorithm is applied to spectral images of simulated phantoms to identify and differentiate up to four simultaneous and targeted CT contrast agents. Results: We show that different combinations of simultaneous CT contrast agents can be identified by the proposed K-edge ratio method when energy-sensitive CT is used. In the K-edge parametric maps, the pixel values for biological tissues and contrast agents reach a maximum of 0.95, whereas for the selected contrast agents, the pixel values are larger than 1.10. The number of contrast agents that can be discriminated is limited owing to photon starvation. For reliable material discrimination, minimum photon counts corresponding to 140 kVp, 100 mAs and 5-mm slice thickness must be used. Conclusion: The proposed K-edge ratio method is a straightforward and fast method for identification and discrimination of multiple simultaneous CT contrast agents. Advances in knowledge: A new spectral CT-based algorithm is proposed which provides a new concept of molecular CT imaging by non-iteratively identifying multiple contrast agents when they are simultaneously targeting different organs. PMID:23934964
Investigating at the Moon With new Eyes: The Lunar Reconnaissance Orbiter Mission Camera (LROC)
NASA Astrophysics Data System (ADS)
Hiesinger, H.; Robinson, M. S.; McEwen, A. S.; Turtle, E. P.; Eliason, E. M.; Jolliff, B. L.; Malin, M. C.; Thomas, P. C.
The Lunar Reconnaissance Orbiter Mission Camera (LROC) H. Hiesinger (1,2), M.S. Robinson (3), A.S. McEwen (4), E.P. Turtle (4), E.M. Eliason (4), B.L. Jolliff (5), M.C. Malin (6), and P.C. Thomas (7) (1) Brown Univ., Dept. of Geological Sciences, Providence RI 02912, Harald_Hiesinger@brown.edu, (2) Westfaelische Wilhelms-University, (3) Northwestern Univ., (4) LPL, Univ. of Arizona, (5) Washington Univ., (6) Malin Space Science Systems, (7) Cornell Univ. The Lunar Reconnaissance Orbiter (LRO) mission is scheduled for launch in October 2008 as a first step to return humans to the Moon by 2018. The main goals of the Lunar Reconnaissance Orbiter Camera (LROC) are to: 1) assess meter and smaller- scale features for safety analyses for potential lunar landing sites near polar resources, and elsewhere on the Moon; and 2) acquire multi-temporal images of the poles to characterize the polar illumination environment (100 m scale), identifying regions of permanent shadow and permanent or near permanent illumination over a full lunar year. In addition, LROC will return six high-value datasets such as 1) meter-scale maps of regions of permanent or near permanent illumination of polar massifs; 2) high resolution topography through stereogrammetric and photometric stereo analyses for potential landing sites; 3) a global multispectral map in 7 wavelengths (300-680 nm) to characterize lunar resources, in particular ilmenite; 4) a global 100-m/pixel basemap with incidence angles (60-80 degree) favorable for morphologic interpretations; 5) images of a variety of geologic units at sub-meter resolution to investigate physical properties and regolith variability; and 6) meter-scale coverage overlapping with Apollo Panoramic images (1-2 m/pixel) to document the number of small impacts since 1971-1972, to estimate hazards for future surface operations. LROC consists of two narrow-angle cameras (NACs) which will provide 0.5-m scale panchromatic images over a 5-km swath, a wide-angle camera (WAC) to acquire images at about 100 m/pixel in seven color bands over a 100-km swath, and a common Sequence and Compressor System (SCS). Each NAC has a 700-mm-focal-length optic that images onto a 5000-pixel CCD line-array, providing a cross-track field-of-view (FOV) of 2.86 degree. The NAC readout noise is better than 100 e- , and the data are sampled at 12 bits. Its internal buffer holds 256 MB of uncompressed data, enough for a full-swath image 25-km long or a 2x2 binned image 100-km long. The WAC has two 6-mm- focal-length lenses imaging onto the same 1000 x 1000 pixel, electronically shuttered CCD area-array, one imaging in the visible/near IR, and the other in the UV. Each has a cross-track FOV of 90 degree. From the nominal 50-km orbit, the WAC will have a resolution of 100 m/pixel in the visible, and a swath width of ˜100 km. The seven-band color capability of the WAC is achieved by color filters mounted directly 1 over the detector, providing different sections of the CCD with different filters [1]. The readout noise is less than 40 e- , and, as with the NAC, pixel values are digitized to 12-bits and may be subsequently converted to 8-bit values. The total mass of the LROC system is about 12 kg; the total LROC power consumption averages at 22 W (30 W peak). Assuming a downlink with lossless compression, LRO will produce a total of 20 TeraBytes (TB) of raw data. Production of higher-level data products will result in a total of 70 TB for Planetary Data System (PDS) archiving, 100 times larger than any previous missions. [1] Malin et al., JGR, 106, 17651-17672, 2001. 2
SU-F-T-253: Volumetric Comparison Between 4D CT Amplitude and Phase Binning Mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, G; Ma, R; Reyngold, M
2016-06-15
Purpose: Motion artifact in 4DCT images can affect radiation treatment quality. To identify the most robust and accurate binning method, we compare the volume difference between targets delineated on amplitude and phase binned 4DCT scans. Methods: Varian RPM system and CT scanner were used to acquire 4DCTs of a Quasar phantom with embedded cubic and spherical objects having superior-inferior motion. Eight patients’ respiration waveforms were used to drive the phantom. The 4DCT scan was reconstructed into 10 phase and 10 amplitude bins (2 mm slices). A scan of the static phantom was also acquired. For each waveform, sphere and cubemore » volumes were generated automatically on each phase using HU thresholding. Phase (amplitude) ITVs were the union of object volumes over all phase (amplitude) binned images. The sphere and cube volumes measured in the static phantom scan were V{sub sphere}=4.19cc and V{sub cube}=27.0cc. Volume difference (VD) and dice similarity coefficient (DSC) of the ITVs, and mean volume error (MVE) defined as the average target volume percentage difference between each phase image and the static image, were used to evaluate the performance of amplitude and phase binning. Results: Averaged over the eight breathing traces, the VD and DSC of the internal target volume (ITV) between amplitude and phase binning were 3.4%±3.2% (mean ± std) and 95.9%±2.1% for sphere; 2.1%±3.3% and 98.0% ±1.5% for cube, respectively.For all waveforms, the average sphere MVE of amplitude and phase binning was 6.5% ± 5.0% and 8.2%±6.3%, respectively; and the average cube MVE of amplitude and phase binning was 5.7%±3.5%and 12.9%±8.9%, respectively. Conclusion: ITV volume and spatial overlap as assessed by VD and DSC are similar between amplitude and phase binning. Compared to phase binning, amplitude binning results in lower MVE suggesting it is less susceptible to motion artifact.« less
New Vocabulary: Araneiform and Lace Terrains
NASA Technical Reports Server (NTRS)
2007-01-01
[figure removed for brevity, see original site] [figure removed for brevity, see original site] Figure 1Figure 2 The south polar terrain on Mars contains landforms unlike any that we see on Earth, so much that a new vocabulary is required to describe them. The word 'araneiform' means 'spider-like.' There are radially organized channels on Mars that look spider-like, but we don't want to confuse anyone by talking about 'spiders' when we really mean 'channels,' not 'bugs.' The first subimage (figure 1) shows an example of 'connected araneiform topography,' terrain that is filled with spider-like channels whose arms branch and connect to each other. Gas flows through these channels until it encounters a vent, where is escapes out to the atmosphere, carrying dust along with it. The dark dust is blown around by the prevailing wind. The second subimage (figure 2) shows a different region of the same image where the channels are not radially organized. In this region they form a dense tangled network of tortuous strands. We refer to this as 'lace.' Observation Geometry Image PSP_002651_0930 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on 18-Feb-2007. The complete image is centered at -86.9 degrees latitude, 97.2 degrees East longitude. The range to the target site was 268.7 km (167.9 miles). At this distance the image scale is 53.8 cm/pixel (with 2 x 2 binning) so objects 161 cm across are resolved. The image shown here has been map-projected to 50 cm/pixel . The image was taken at a local Mars time of 04:56 PM and the scene is illuminated from the west with a solar incidence angle of 86 degrees, thus the sun was about 4 degrees above the horizon. At a solar longitude of 186.4 degrees, the season on Mars is Northern Autumn.Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.
2014-01-01
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage. PMID:24694143
The Athena Microscopic Imager Investigation
NASA Technical Reports Server (NTRS)
Herkenhoff, K. E.; Aquyres, S. W.; Bell, J. F., III; Maki, J. N.; Arneson, H. M.; Brown, D. I.; Collins, S. A.; Dingizian, A.; Elliot, S. T.; Geotz, W.
2003-01-01
The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI) [1]. The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD; see Figure 1).The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400 - 700 nm; see Table 1). Technically, the microscopic imager is not a microscope: it has a fixed magnification of 0.4 and is intended to produce images that simulate a geologist s view through a common hand lens. In photographers parlance, the system makes use of a macro lens. The MI uses the same electronics design as the other MER cameras [2, 3] but has optics that yield a field of view of 31 31 mm across a 1024 1024 pixel CCD image (Figure 2). The MI acquires images using only solar or skylightillumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Because the MI has a relatively small depth of field (3 mm), a single MI image of a rough surface will contain both focused and unfocused areas. Coarse focusing will be achieved by moving the IDD away from a rock target after the contact sensor is activated. Multiple images taken at various distances will be acquired to ensure good focus on all parts of rough surfaces. By combining a set of images acquired in this way, a completely focused image can be assembled. Stereoscopic observations can be obtained by moving the MI laterally relative to its boresight. Estimates of the position and orientation of the MI for each acquired image will be stored in the rover computer and returned to Earth with the image data. The MI optics will be protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. The MI will image the same materials measured by other Athena instruments (including surfaces prepared by the Rock Abrasion Tool), as well as rock and soil targets of opportunity. Subsets of the full image array can be selected and/or pixels can be binned to reduce data volume. Image compression will be used to maximize the information contained in the data returned to Earth. The resulting MI data will place other MER instrument data in context and aid in petrologic and geologic interpretations of rocks and soils on Mars.
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph
2006-01-01
PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machinelearning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.
New Mars Camera's First Image of Mars from Mapping Orbit (Full Frame)
NASA Technical Reports Server (NTRS)
2006-01-01
The high resolution camera on NASA's Mars Reconnaissance Orbiter captured its first image of Mars in the mapping orbit, demonstrating the full resolution capability, on Sept. 29, 2006. The High Resolution Imaging Science Experiment (HiRISE) acquired this first image at 8:16 AM (Pacific Time). With the spacecraft at an altitude of 280 kilometers (174 miles), the image scale is 25 centimeters per pixel (10 inches per pixel). If a person were located on this part of Mars, he or she would just barely be visible in this image. The image covers a small portion of the floor of Ius Chasma, one branch of the giant Valles Marineris system of canyons. The image illustrates a variety of processes that have shaped the Martian surface. There are bedrock exposures of layered materials, which could be sedimentary rocks deposited in water or from the air. Some of the bedrock has been faulted and folded, perhaps the result of large-scale forces in the crust or from a giant landslide. The image resolves rocks as small as small as 90 centimeters (3 feet) in diameter. It includes many dunes or ridges of windblown sand. This image (TRA_000823_1720) was taken by the High Resolution Imaging Science Experiment camera onboard the Mars Reconnaissance Orbiter spacecraft on Sept. 29, 2006. Shown here is the full image, centered at minus 7.8 degrees latitude, 279.5 degrees east longitude. The image is oriented such that north is to the top. The range to the target site was 297 kilometers (185.6 miles). At this distance the image scale is 25 centimeters (10 inches) per pixel (with one-by-one binning) so objects about 75 centimeters (30 inches) across are resolved. The image was taken at a local Mars time of 3:30 PM and the scene is illuminated from the west with a solar incidence angle of 59.7 degrees, thus the sun was about 30.3 degrees above the horizon. The season on Mars is northern winter, southern summer. [Photojournal note: Due to the large sizes of the high-resolution TIFF and JPEG files, some systems may experience extremely slow downlink time while viewing or downloading these images; some systems may be incapable of handling the download entirely.] NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The HiRISE camera was built by Ball Aerospace & Technologies Corporation, Boulder, Colo., and is operated by the University of Arizona, Tucson.Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.
2016-01-01
Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in-vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43~73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878
NASA Astrophysics Data System (ADS)
Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.
2016-09-01
Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.
Full-field OCT: applications in ophthalmology
NASA Astrophysics Data System (ADS)
Grieve, Kate; Dubois, Arnaud; Paques, Michel; Le Gargasson, Jean-Francois; Boccara, Albert C.
2005-04-01
We present images of ocular tissues obtained using ultrahigh resolution full-field OCT. The experimental setup is based on the Linnik interferometer, illuminated by a tungsten halogen lamp. En face tomographic images are obtained in real-time without scanning by computing the difference of two phase-opposed interferometric images recorded by a high-resolution CCD camera. A spatial resolution of 0.7 μm × 0.9 μm (axial × transverse) is achieved thanks to the short source coherence length and the use of high numerical aperture microscope objectives. A detection sensitivity of 90 dB is obtained by means of image averaging and pixel binning. Whole unfixed eyes and unstained tissue samples (cornea, lens, retina, choroid and sclera) of ex vivo rat, mouse, rabbit and porcine ocular tissues were examined. The unprecedented resolution of our instrument allows cellular-level resolution in the cornea and retina, and visualization of individual fibers in the lens. Transcorneal lens imaging was possible in all animals, and in albino animals, transscleral retinal imaging was achieved. We also introduce our rapid acquisition full-field optical coherence tomography system designed to accommodate in vivo ophthalmologic imaging. The variations on the original system technology include the introduction of a xenon arc lamp as source, and rapid image acquisition performed by a high-speed CMOS camera, reducing acquisition time to 5 ms per frame.
Analysis of astronomical data from optical superconducting tunnel junctions
NASA Astrophysics Data System (ADS)
de Bruijne, J. H.; Reynolds, A. P.; Perryman, Michael A.; Favata, Fabio; Peacock, Anthony J.
2002-06-01
Currently operating optical superconducting tunnel junction (STJ) detectors, developed at the European Space Agency (ESA), can simultaneously measure the wavelength ((Delta) (gamma) equals 50 nm at 500 nm) and arrival time (to within approximately 5 microsecond(s) ) of individual photons in the range 310 to 720 nm with an efficiency of approximately 70%, and with count rates of the order of 5000 photons s-1 per junction. A number of STJs placed in an array format generates 4-D data: photon arrival time, energy, and array element (X,Y). Such STJ cameras are ideally suited for, e.g., high-time-resolution spectrally resolved monitoring of variable sources or low- resolution spectroscopy of faint extragalactic objects. The reduction of STJ data involves detector efficiency correction, atmospheric extinction correction, sky background subtraction, and, unlike that of data from CCD-based systems, a more complex energy calibration, barycentric arrival time correction, energy range selection, and time binning; these steps are, in many respects, analogous to procedures followed in high-energy astrophysics. We discuss these calibration steps in detail using a representative observation of the cataclysmic variable UZ Fornacis; these data were obtained with ESA's S-Cam2 6 X 6-pixel device. We furthermore discuss issues related to telescope pointing and guiding, differential atmospheric refraction, and atmosphere-induced image motion and image smearing (`seeing') in the focal plane. We also present a simple and effective recipe for extracting the evolution of atmospheric seeing with time from any science exposure and discuss a number of caveats in the interpretation of STJ-based time-binned data, such as light curves and hardness ratio plots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets withmore » various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage.« less
Vectorized image segmentation via trixel agglomeration
Prasad, Lakshman [Los Alamos, NM; Skourikhine, Alexei N [Los Alamos, NM
2006-10-24
A computer implemented method transforms an image comprised of pixels into a vectorized image specified by a plurality of polygons that can be subsequently used to aid in image processing and understanding. The pixelated image is processed to extract edge pixels that separate different colors and a constrained Delaunay triangulation of the edge pixels forms a plurality of triangles having edges that cover the pixelated image. A color for each one of the plurality of triangles is determined from the color pixels within each triangle. A filter is formed with a set of grouping rules related to features of the pixelated image and applied to the plurality of triangle edges to merge adjacent triangles consistent with the filter into polygons having a plurality of vertices. The pixelated image may be then reformed into an array of the polygons, that can be represented collectively and efficiently by standard vector image.
Vañó, Eliseo; Alejo, Luis; Ubeda, Carlos; Gutiérrez‐Larraya, Federico; Garayoa, Julia
2016-01-01
The aim of this study was to assess image quality and radiation dose of a biplane angiographic system with cone‐beam CT (CBCT) capability tuned for pediatric cardiac procedures. The results of this study can be used to explore dose reduction techniques. For pulsed fluoroscopy and cine modes, polymethyl methacrylate phantoms of various thicknesses and a Leeds TOR 18‐FG test object were employed. Various fields of view (FOV) were selected. For CBCT, the study employed head and body dose phantoms, Catphan 504, and an anthropomorphic cardiology phantom. The study also compared two 3D rotational angiography protocols. The entrance surface air kerma per frame increases by a factor of 3–12 when comparing cine and fluoroscopy frames. The biggest difference in the signal‐to‐noise ratio between fluoroscopy and cine modes occurs at FOV 32 cm because fluoroscopy is acquired at a 1440×1440 pixel matrix size and in unbinned mode, whereas cine is acquired at 720×720 pixels and in binned mode. The high‐contrast spatial resolution of cine is better than that of fluoroscopy, except for FOV 32 cm, because fluoroscopy mode with 32 cm FOV is unbinned. Acquiring CBCT series with a 16 cm head phantom using the standard dose protocol results in a threefold dose increase compared with the low‐dose protocol. Although the amount of noise present in the images acquired with the low‐dose protocol is much higher than that obtained with the standard mode, the images present better spatial resolution. A 1 mm diameter rod with 250 Hounsfield units can be distinguished in reconstructed images with an 8 mm slice width. Pediatric‐specific protocols provide lower doses while maintaining sufficient image quality. The system offers a novel 3D imaging mode. The acquisition of CBCT images results in increased doses administered to the patients, but also provides further diagnostic information contained in the volumetric images. The assessed CBCT protocols provide images that are noisy, but with very good spatial resolution. PACS number(s): 87.59.‐e, 87.59.‐C, 87.59.‐cf, 87.59.Dj, 87.57. uq PMID:27455474
Corredoira, Eva; Vañó, Eliseo; Alejo, Luis; Ubeda, Carlos; Gutiérrez-Larraya, Federico; Garayoa, Julia
2016-07-08
The aim of this study was to assess image quality and radiation dose of a biplane angiographic system with cone-beam CT (CBCT) capability tuned for pediatric cardiac procedures. The results of this study can be used to explore dose reduction techniques. For pulsed fluoroscopy and cine modes, polymethyl methacrylate phantoms of various thicknesses and a Leeds TOR 18-FG test object were employed. Various fields of view (FOV) were selected. For CBCT, the study employed head and body dose phantoms, Catphan 504, and an anthropomorphic cardiology phantom. The study also compared two 3D rotational angiography protocols. The entrance surface air kerma per frame increases by a factor of 3-12 when comparing cine and fluoroscopy frames. The biggest difference in the signal-to- noise ratio between fluoroscopy and cine modes occurs at FOV 32 cm because fluoroscopy is acquired at a 1440 × 1440 pixel matrix size and in unbinned mode, whereas cine is acquired at 720 × 720 pixels and in binned mode. The high-contrast spatial resolution of cine is better than that of fluoroscopy, except for FOV 32 cm, because fluoroscopy mode with 32 cm FOV is unbinned. Acquiring CBCT series with a 16 cm head phantom using the standard dose protocol results in a threefold dose increase compared with the low-dose protocol. Although the amount of noise present in the images acquired with the low-dose protocol is much higher than that obtained with the standard mode, the images present better spatial resolution. A 1 mm diameter rod with 250 Hounsfield units can be distinguished in reconstructed images with an 8 mm slice width. Pediatric-specific protocols provide lower doses while maintaining sufficient image quality. The system offers a novel 3D imaging mode. The acquisition of CBCT images results in increased doses administered to the patients, but also provides further diagnostic information contained in the volumetric images. The assessed CBCT protocols provide images that are noisy, but with very good spatial resolution. © 2016 The Authors.
Active Processes: Bright Streaks and Dark Fans
NASA Technical Reports Server (NTRS)
2007-01-01
[figure removed for brevity, see original site] [figure removed for brevity, see original site] Figure 1Figure 2 In a region of the south pole known informally as 'Ithaca' numerous fans of dark frost form every spring. HiRISE collected a time lapse series of these images, starting at Ls = 185 and culminating at Ls = 294. 'Ls' is the way we measure time on Mars: at Ls = 180 the sun passes the equator on its way south; at Ls = 270 it reaches its maximum subsolar latitude and summer begins. In the earliest image (figure 1) fans are dark, but small narrow bright streaks can be detected. In the next image (figure 2), acquired at Ls = 187, just 106 hours later, dramatic differences are apparent. The dark fans are larger and the bright fans are more pronounced and easily detectable. The third image in the sequence shows no bright fans at all. We believe that the bright streaks are fine frost condensed from the gas exiting the vent. The conditions must be just right for the bright frost to condense. Observation Geometry Image PSP_002622_0945 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on 16-Feb-2007. The complete image is centered at -85.2 degrees latitude, 181.5 degrees East longitude. The range to the target site was 246.9 km (154.3 miles). At this distance the image scale is 49.4 cm/pixel (with 2 x 2 binning) so objects 148 cm across are resolved. The image shown here has been map-projected to 50 cm/pixel . The image was taken at a local Mars time of 05:46 PM and the scene is illuminated from the west with a solar incidence angle of 88 degrees, thus the sun was about 2 degrees above the horizon. At a solar longitude of 185.1 degrees, the season on Mars is Northern Autumn.Chandra ACIS Sub-pixel Resolution
NASA Astrophysics Data System (ADS)
Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.
2011-05-01
We investigate how to achieve the best possible ACIS spatial resolution by binning in ACIS sub-pixel and applying an event repositioning algorithm after removing pixel-randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point source sizes and (2) detecting faint point sources. The size of a bright (but no pile-up), on-axis point source can be reduced by about 20-30%. With the improve resolution, we detect 20% more faint sources when embedded on the extended, diffuse emission in a crowded field. We further discuss the false source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure by an aliasing effect for dithered observations and does not worsen the positional accuracy
Li, Guang; Wei, Jie; Olek, Devin; Kadbi, Mo; Tyagi, Neelam; Zakian, Kristen; Mechalakos, James; Deasy, Joseph O; Hunt, Margie
2017-03-01
To compare the image quality of amplitude-binned 4-dimensional magnetic resonance imaging (4DMRI) reconstructed using 2 concurrent respiratory (navigator and bellows) waveforms. A prospective, respiratory-correlated 4DMRI scanning program was used to acquire T2-weighted single-breath 4DMRI images with internal navigator and external bellows. After a 10-second training waveform of a surrogate signal, 2-dimensional MRI acquisition was triggered at a level (bin) and anatomic location (slice) until the bin-slice table was completed for 4DMRI reconstruction. The bellows signal was always collected, even when the navigator trigger was used, to retrospectively reconstruct a bellows-rebinned 4DMRI. Ten volunteers participated in this institutional review board-approved 4DMRI study. Four scans were acquired for each subject, including coronal and sagittal scans triggered by either navigator or bellows, and 6 4DMRI images (navigator-triggered, bellows-rebinned, and bellows-triggered) were reconstructed. The simultaneously acquired waveforms and resulting 4DMRI quality were compared using signal correlation, bin/phase shift, and binning motion artifacts. The consecutive bellows-triggered 4DMRI scan was used for indirect comparison. Correlation coefficients between the navigator and bellows signals were found to be patient-specific and inhalation-/exhalation-dependent, ranging from 0.1 to 0.9 because of breathing irregularities (>50% scans) and commonly observed bin/phase shifts (-1.1 ± 0.6 bin) in both 1-dimensional waveforms and diaphragm motion extracted from 4D images. Navigator-triggered 4DMRI contained many fewer binning motion artifacts at the diaphragm than did the bellows-rebinned and bellows-triggered 4DMRI scans. Coronal scans were faster than sagittal scans because of the fewer slices and higher achievable acceleration factors. Navigator-triggered 4DMRI contains substantially fewer binning motion artifacts than bellows-rebinned and bellows-triggered 4DMRI, primarily owing to the deviation of the external from the internal surrogate. The present study compared 2 concurrent surrogates during the same 4DMRI scan and their resulting 4DMRI quality. The navigator-triggered 4DMRI scanning protocol should be preferred to the bellows-based, especially for coronal scans, for clinical respiratory motion simulation. Copyright © 2016 Elsevier Inc. All rights reserved.
Toward Space-like Photometric Precision from the Ground with Beam-shaping Diffusers
NASA Astrophysics Data System (ADS)
Stefansson, Gudmundur; Mahadevan, Suvrath; Hebb, Leslie; Wisniewski, John; Huehnerhoff, Joseph; Morris, Brett; Halverson, Sam; Zhao, Ming; Wright, Jason; O'rourke, Joseph; Knutson, Heather; Hawley, Suzanne; Kanodia, Shubham; Li, Yiting; Hagen, Lea M. Z.; Liu, Leo J.; Beatty, Thomas; Bender, Chad; Robertson, Paul; Dembicky, Jack; Gray, Candace; Ketzeback, William; McMillan, Russet; Rudyk, Theodore
2017-10-01
We demonstrate a path to hitherto unachievable differential photometric precisions from the ground, both in the optical and near-infrared (NIR), using custom-fabricated beam-shaping diffusers produced using specialized nanofabrication techniques. Such diffusers mold the focal plane image of a star into a broad and stable top-hat shape, minimizing photometric errors due to non-uniform pixel response, atmospheric seeing effects, imperfect guiding, and telescope-induced variable aberrations seen in defocusing. This PSF reshaping significantly increases the achievable dynamic range of our observations, increasing our observing efficiency and thus better averages over scintillation. Diffusers work in both collimated and converging beams. We present diffuser-assisted optical observations demonstrating {62}-16+26 ppm precision in 30 minute bins on a nearby bright star 16 Cygni A (V = 5.95) using the ARC 3.5 m telescope—within a factor of ˜2 of Kepler's photometric precision on the same star. We also show a transit of WASP-85-Ab (V = 11.2) and TRES-3b (V = 12.4), where the residuals bin down to {180}-41+66 ppm in 30 minute bins for WASP-85-Ab—a factor of ˜4 of the precision achieved by the K2 mission on this target—and to 101 ppm for TRES-3b. In the NIR, where diffusers may provide even more significant improvements over the current state of the art, our preliminary tests demonstrated {137}-36+64 ppm precision for a K S = 10.8 star on the 200 inch Hale Telescope. These photometric precisions match or surpass the expected photometric precisions of TESS for the same magnitude range. This technology is inexpensive, scalable, easily adaptable, and can have an important and immediate impact on the observations of transits and secondary eclipses of exoplanets.
Selective document image data compression technique
Fu, C.Y.; Petrich, L.I.
1998-05-19
A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each of pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image by converting all pixels darker than a threshold color value to black. All the pixels that are lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
Selective document image data compression technique
Fu, Chi-Yung; Petrich, Loren I.
1998-01-01
A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each of pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image by converting all pixels darker than a threshold color value to black. All the pixels that are lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.--(235 words)
Image Processing for Binarization Enhancement via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor)
2009-01-01
A technique for enhancing a gray-scale image to improve conversions of the image to binary employs fuzzy reasoning. In the technique, pixels in the image are analyzed by comparing the pixel's gray scale value, which is indicative of its relative brightness, to the values of pixels immediately surrounding the selected pixel. The degree to which each pixel in the image differs in value from the values of surrounding pixels is employed as the variable in a fuzzy reasoning-based analysis that determines an appropriate amount by which the selected pixel's value should be adjusted to reduce vagueness and ambiguity in the image and improve retention of information during binarization of the enhanced gray-scale image.
Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.
Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael
2016-07-01
'Visibility' is a fundamental optical property that represents the observable, by users, proportion of the voxels in a volume during interactive volume rendering. The manipulation of this 'visibility' improves the volume rendering processes; for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of all the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of VH given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VH to medical images that have large intensity ranges and volume dimensions and require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins are used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of the modern graphical processing units (GPUs) and this enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency for the VH construction and thus improved the subsequent VH-driven volume manipulations. This efficiency was achieved without major degradation in the VH visually and numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying Ks (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also had an improved performance when compared to the conventional method of down-sampling of the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fundamental performance differences between CMOS and CCD imagers, part IV
NASA Astrophysics Data System (ADS)
Janesick, James; Pinter, Jeff; Potter, Robert; Elliott, Tom; Andrews, James; Tower, John; Grygon, Mark; Keller, Dave
2010-07-01
This paper is a continuation of past papers written on fundamental performance differences of scientific CMOS and CCD imagers. New characterization results presented below include: 1). a new 1536 × 1536 × 8μm 5TPPD pixel CMOS imager, 2). buried channel MOSFETs for random telegraph noise (RTN) and threshold reduction, 3) sub-electron noise pixels, 4) 'MIM pixel' for pixel sensitivity (V/e-) control, 5) '5TPPD RING pixel' for large pixel, high-speed charge transfer applications, 6) pixel-to-pixel blooming control, 7) buried channel photo gate pixels and CMOSCCDs, 8) substrate bias for deep depletion CMOS imagers, 9) CMOS dark spikes and dark current issues and 10) high energy radiation damage test data. Discussions are also given to a 1024 × 1024 × 16 um 5TPPD pixel imager currently in fabrication and new stitched CMOS imagers that are in the design phase including 4k × 4k × 10 μm and 10k × 10k × 10 um imager formats.
Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction
NASA Astrophysics Data System (ADS)
Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing
2018-02-01
Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.
Penrose high-dynamic-range imaging
NASA Astrophysics Data System (ADS)
Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian
2016-05-01
High-dynamic-range (HDR) imaging is becoming increasingly popular and widespread. The most common multishot HDR approach, based on multiple low-dynamic-range images captured with different exposures, has difficulties in handling camera and object movements. The spatially varying exposures (SVE) technology provides a solution to overcome this limitation by obtaining multiple exposures of the scene in only one shot but suffers from a loss in spatial resolution of the captured image. While aperiodic assignment of exposures has been shown to be advantageous during reconstruction in alleviating resolution loss, almost all the existing imaging sensors use the square pixel layout, which is a periodic tiling of square pixels. We propose the Penrose pixel layout, using pixels in aperiodic rhombus Penrose tiling, for HDR imaging. With the SVE technology, Penrose pixel layout has both exposure and pixel aperiodicities. To investigate its performance, we have to reconstruct HDR images in square pixel layout from Penrose raw images with SVE. Since the two pixel layouts are different, the traditional HDR reconstruction methods are not applicable. We develop a reconstruction method for Penrose pixel layout using a Gaussian mixture model for regularization. Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goggin, L; Kilby, W; Noll, M
2015-06-15
Purpose: A technique using a scintillator-mirror-camera system to measure MLC leakage was developed to provide an efficient alternative to film dosimetry while maintaining high spatial resolution. This work describes the technique together with measurement uncertainties. Methods: Leakage measurements were made for the InCise™ MLC using the Logos XRV-2020A device. For each measurement approximately 170 leakage and background images were acquired using optimized camera settings. Average background was subtracted from each leakage frame before filtering the integrated leakage image to replace anomalous pixels. Pixel value to dose conversion was performed using a calibration image. Mean leakage was calculated within an ROImore » corresponding to the primary beam, and maximum leakage was determined by binning the image into overlapping 1mm x 1mm ROIs. 48 measurements were performed using 3 cameras and multiple MLC-linac combinations in varying beam orientations, with each compared to film dosimetry. Optical and environmental influences were also investigated. Results: Measurement time with the XRV-2020A was 8 minutes vs. 50 minutes using radiochromic film, and results were available immediately. Camera radiation exposure degraded measurement accuracy. With a relatively undamaged camera, mean leakage agreed with film measurement to ≤0.02% in 92% cases, ≤0.03% in 100% (for maximum leakage the values were 88% and 96%) relative to reference open field dose. The estimated camera lifetime over which this agreement is maintained is at least 150 measurements, and can be monitored using reference field exposures. A dependency on camera temperature was identified and a reduction in sensitivity with distance from image center due to optical distortion was characterized. Conclusion: With periodic monitoring of the degree of camera radiation damage, the XRV-2020A system can be used to measure MLC leakage. This represents a significant time saving when compared to the traditional film-based approach without any substantial reduction in accuracy.« less
Image Edge Extraction via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A. (Inventor); Klinko, Steve (Inventor)
2008-01-01
A computer-based technique for detecting edges in gray level digital images employs fuzzy reasoning to analyze whether each pixel in an image is likely on an edge. The image is analyzed on a pixel-by-pixel basis by analyzing gradient levels of pixels in a square window surrounding the pixel being analyzed. An edge path passing through the pixel having the greatest intensity gradient is used as input to a fuzzy membership function, which employs fuzzy singletons and inference rules to assigns a new gray level value to the pixel that is related to the pixel's edginess degree.
An X-ray halo around Cassiopeia A
NASA Astrophysics Data System (ADS)
Stewart, G. C.; Fabian, A. C.; Seward, F. D.
The large-scale X-ray emission of Cas A is characterized, and mechanisms are proposed to explain it. The Einstein HRI image of Murray et al. (1979) is binned into 16-arcsec pixels, a point-spread function based on the 2.04-keV monochromatic Zr source is applied, and the data are modeled as a series of circularly symmetric rings of emission. A significant excess extending to a radius of 6 arcmin (roughly the size of the optical H II region) is found to have a total 0.5-3-keV luminosity of about 5 x 10 to the 34th erg/s, or about 2 percent of the total luminosity of Cas A, which is assumed to lie at a distance of 3 kpc. Thermal bremsstrahlung, synchrotron radiation, and dust scattering of the main-shell emission are examined and found to be plausible emission mechanisms; further observations are required to identify the one active in Cas A.
Large format geiger-mode avalanche photodiode LADAR camera
NASA Astrophysics Data System (ADS)
Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison
2013-05-01
Recently Spectrolab has successfully demonstrated a compact 32x32 Laser Detection and Range (LADAR) camera with single photo-level sensitivity with small size, weight, and power (SWAP) budget for threedimensional (3D) topographic imaging at 1064 nm on various platforms. With 20-kHz frame rate and 500- ps timing uncertainty, this LADAR system provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. At a 10 mph forward speed and 1000 feet above ground level (AGL), it covers 0.5 square-mile per hour with a resolution of 25 in2/pixel after data averaging. In order to increase the forward speed to fit for more platforms and survey a large area more effectively, Spectrolab is developing 32x128 Geiger-mode LADAR camera with 43 frame rate. With the increase in both frame rate and array size, the data collection rate is improved by 10 times. With a programmable bin size from 0.3 ps to 0.5 ns and 14-bit timing dynamic range, LADAR developers will have more freedom in system integration for various applications. Most of the special features of Spectrolab 32x32 LADAR camera, such as non-uniform bias correction, variable range gate width, windowing for smaller arrays, and short pixel protection, are implemented in this camera.
Plane-grating flat-field soft x-ray spectrometer
NASA Astrophysics Data System (ADS)
Hague, C. F.; Underwood, J. H.; Avila, A.; Delaunay, R.; Ringuenet, H.; Marsi, M.; Sacchi, M.
2005-02-01
We describe a soft x-ray spectrometer covering the 120-800 eV range. It is intended for resonant inelastic x-ray scattering experiments performed at third generation synchrotron radiation (SR) facilities and has been developed with SOLEIL, the future French national SR source in mind. The Hettrick-Underwood principle is at the heart of the design using a combination of varied line-spacing plane grating and spherical-mirror to provide a flat-field image. It is slitless for optimum acceptance. This means the source size determines the resolving power. A spot size of ⩽5μm is planned at SOLEIL which, according to simulations, should ensure a resolving power ⩾1000 over the whole energy range. A 1024×1024 pixel charge-coupled device (CCD) with a 13μm×13μm pixel size is used. This is an improvement on the use of microchannel-plate detectors, both as concerns efficiency and spatial resolution. Additionally spectral line curvature is avoided by the use of a horizontal focusing mirror concentrating the beam in the nondispersing direction. It allows for readout using a binning mode to reduce the intrinsically large CCD readout noise. Preliminary results taken at beamlines at Elettra (Trieste) and at BESSY (Berlin) are presented.
Super-pixel extraction based on multi-channel pulse coupled neural network
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Super-pixel extraction techniques group pixels to form over-segmented image blocks according to the similarity among pixels. Compared with the traditional pixel-based methods, the image descripting method based on super-pixel has advantages of less calculation, being easy to perceive, and has been widely used in image processing and computer vision applications. Pulse coupled neural network (PCNN) is a biologically inspired model, which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel feature information and its context spatial structural information. In this paper, a new color super-pixel extraction algorithm based on multi-channel pulse coupled neural network (MPCNN) was proposed. The algorithm adopted the block dividing idea of SLIC algorithm, and the image was divided into blocks with same size first. Then, for each image block, the adjacent pixels of each seed with similar color were classified as a group, named a super-pixel. At last, post-processing was adopted for those pixels or pixel blocks which had not been grouped. Experiments show that the proposed method can adjust the number of superpixel and segmentation precision by setting parameters, and has good potential for super-pixel extraction.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
Fu, C.Y.; Petrich, L.I.
1997-03-25
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace`s equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image
Li, Li; Hou, Qingzheng; Lu, Jianfeng; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen
2014-01-01
Invoice printing just has two-color printing, so invoice font image can be seen as binary image. To embed watermarks into invoice image, the pixels need to be flipped. The more huge the watermark is, the more the pixels need to be flipped. We proposed a new pixels flipping method in invoice image for huge watermarking capacity. The pixels flipping method includes one novel interpolation method for binary image, one flippable pixels evaluation mechanism, and one denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps features well after scaling. The flippable pixels evaluation mechanism ensures that the pixels keep better connectivity and smoothness and the pattern has highest structural similarity after flipping. The proposed denoising method makes invoice font image smoother and fiter for human vision. Experiments show that the proposed flipping method not only keeps the invoice font structure well but also improves watermarking capacity. PMID:25489606
Lamare, F; Le Maitre, A; Dawood, M; Schäfers, K P; Fernandez, P; Rimoldi, O E; Visvikis, D
2014-07-01
Cardiac imaging suffers from both respiratory and cardiac motion. One of the proposed solutions involves double gated acquisitions. Although such an approach may lead to both respiratory and cardiac motion compensation there are issues associated with (a) the combination of data from cardiac and respiratory motion bins, and (b) poor statistical quality images as a result of using only part of the acquired data. The main objective of this work was to evaluate different schemes of combining binned data in order to identify the best strategy to reconstruct motion free cardiac images from dual gated positron emission tomography (PET) acquisitions. A digital phantom study as well as seven human studies were used in this evaluation. PET data were acquired in list mode (LM). A real-time position management system and an electrocardiogram device were used to provide the respiratory and cardiac motion triggers registered within the LM file. Acquired data were subsequently binned considering four and six cardiac gates, or the diastole only in combination with eight respiratory amplitude gates. PET images were corrected for attenuation, but no randoms nor scatter corrections were included. Reconstructed images from each of the bins considered above were subsequently used in combination with an affine or an elastic registration algorithm to derive transformation parameters allowing the combination of all acquired data in a particular position in the cardiac and respiratory cycles. Images were assessed in terms of signal-to-noise ratio (SNR), contrast, image profile, coefficient-of-variation (COV), and relative difference of the recovered activity concentration. Regardless of the considered motion compensation strategy, the nonrigid motion model performed better than the affine model, leading to higher SNR and contrast combined with a lower COV. Nevertheless, when compensating for respiration only, no statistically significant differences were observed in the performance of the two motion models considered. Superior image SNR and contrast were seen using the affine respiratory motion model in combination with the diastole cardiac bin in comparison to the use of the whole cardiac cycle. In contrast, when simultaneously correcting for cardiac beating and respiration, the elastic respiratory motion model outperformed the affine model. In this context, four cardiac bins associated with eight respiratory amplitude bins seemed to be adequate. Considering the compensation of respiratory motion effects only, both affine and elastic based approaches led to an accurate resizing and positioning of the myocardium. The use of the diastolic phase combined with an affine model based respiratory motion correction may therefore be a simple approach leading to significant quality improvements in cardiac PET imaging. However, the best performance was obtained with the combined correction for both cardiac and respiratory movements considering all the dual-gated bins independently through the use of an elastic model based motion compensation.
Classification of breast microcalcifications using spectral mammography
NASA Astrophysics Data System (ADS)
Ghammraoui, B.; Glick, S. J.
2017-03-01
Purpose: To investigate the potential of spectral mammography to distinguish between type I calcifications, consisting of calcium oxalate dihydrate or weddellite compounds that are more often associated with benign lesions, and type II calcifications containing hydroxyapatite which are predominantly associated with malignant tumors. Methods: Using a ray tracing algorithm, we simulated the total number of x-ray photons recorded by the detector at one pixel from a single pencil-beam projection through a breast of 50/50 (adipose/glandular) tissues with inserted microcalcifications of different types and sizes. Material decomposition using two energy bins was then applied to characterize the simulated calcifications into hydroxyapatite and weddellite using maximumlikelihood estimation, taking into account the polychromatic source, the detector response function and the energy dependent attenuation. Results: Simulation tests were carried out for different doses and calcification sizes for multiple realizations. The results were summarized using receiver operating characteristic (ROC) analysis with the area under the curve (AUC) taken as an overall indicator of discrimination performance and showing high AUC values up to 0.99. Conclusion: Our simulation results obtained for a uniform breast imaging phantom indicate that spectral mammography using two energy bins has the potential to be used as a non-invasive method for discrimination between type I and type II microcalcifications to improve early breast cancer diagnosis and reduce the number of unnecessary breast biopsies.
Microlens performance limits in sub-2mum pixel CMOS image sensors.
Huo, Yijie; Fesenmaier, Christian C; Catrysse, Peter B
2010-03-15
CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 mum, however, diffraction affects the optical performance of the pixel and its microlens, in particular. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 microm. This severely degrades performance for on and off-axis pixels in red, green and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 microm pixel node.
Theory and applications of structured light single pixel imaging
NASA Astrophysics Data System (ADS)
Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.
2018-02-01
Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, the methods share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically-motivated reconstruction algorithms, however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more-advanced methods of image acquisition and reconstruction. The proposed frame theoretic framework for single-pixel imaging results in improved noise robustness, decrease in acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single element detector can be developed to realize the full potential associated with single-pixel imaging.
Impact of defective pixels in AMLCDs on the perception of medical images
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Sneyders, Yuri
2006-03-01
With LCD displays, each pixel has its own individual transistor that controls the transmittance of that pixel. Occasionally, these individual transistors will short or alternatively malfunction, resulting in a defective pixel that always shows the same brightness. With ever increasing resolution of displays the number of defect pixels per display increases accordingly. State of the art processes are capable of producing displays with no more than one faulty transistor out of 3 million. A five Mega Pixel medical LCD panel contains 15 million individual sub pixels (3 sub pixels per pixel), each having an individual transistor. This means that a five Mega Pixel display on average will have 5 failing pixels. This paper investigates the visibility of defective pixels and analyzes the possible impact of defective pixels on the perception of medical images. JND simulations were done to study the effect of defective pixels on medical images. Our results indicate that defective LCD pixels can mask subtle features in medical images in an unexpectedly broad area around the defect and therefore may reduce the quality of diagnosis for specific high-demanding areas such as mammography. As a second contribution an innovative solution is proposed. A specialized image processing algorithm can make defective pixels completely invisible and moreover can also recover the information of the defect so that the radiologist perceives the medical image correctly. This correction algorithm has been validated with both JND simulations and psycho visual tests.
Method and System for Temporal Filtering in Video Compression Systems
NASA Technical Reports Server (NTRS)
Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim
2011-01-01
Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector between the first pixel position in the first image and the second pixel position in the second image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
4. TROJAN MILL, DETAIL OF CRUDE ORE BINS FROM NORTH, ...
4. TROJAN MILL, DETAIL OF CRUDE ORE BINS FROM NORTH, c. 1912. SHOWS TIMBER FRAMING UNDER CONSTRUCTION FOR EAST AND WEST CRUDE ORE BINS AT PREVIOUS LOCATION OF CRUSHER HOUSE, AND SNOW SHED PRESENT OVER SOUTH CRUDE ORE BIN WITH PHASE CHANGE IN SNOW SHED CONSTRUCTION INDICATED AT EAST END OF EAST CRUDE ORE BIN. THIS PHOTOGRAPH IS THE FIRST IMAGE OF THE MACHINE SHOP, UPPER LEFT CORNER. CREDIT JW. - Bald Mountain Gold Mill, Nevada Gulch at head of False Bottom Creek, Lead, Lawrence County, SD
Computational imaging with a single-pixel detector and a consumer video projector
NASA Astrophysics Data System (ADS)
Sych, D.; Aksenov, M.
2018-02-01
Single-pixel imaging is a novel rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operating modular single-pixel imaging system. Light patterns in our setup are created with help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. Also, we discuss the potential use of the single-pixel imaging system for quantum applications.
Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications
NASA Astrophysics Data System (ADS)
Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.
2015-06-01
We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non-destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half-maximum (FWHM) across the entire dynamic range, and a noise floor about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.
Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications
Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.
2014-01-01
We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including; the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half maximum (FWHM) across the entire dynamic range, and a noise floor about 20keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications. PMID:25937684
Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications.
Barber, W C; Wessel, J C; Nygard, E; Iwanczyk, J S
2015-06-01
We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including; the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half maximum (FWHM) across the entire dynamic range, and a noise floor about 20keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.
A time-resolved image sensor for tubeless streak cameras
NASA Astrophysics Data System (ADS)
Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji
2014-03-01
This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires high voltage and bulky system due to the structure with a vacuum tube. The proposed time-resolved imager with a simple optics realize a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and a readout circuitry. The delay-based pulse generator in combination with an in-pixel logic allows us to create and to provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11um CMOS image sensor technology. The image array has 30(Vertical) x 128(Memory length) pixels with the pixel pitch of 22.4um. .
Alignment by Maximization of Mutual Information
1995-06-01
Davi Geiger, David Chapman, Jose Robles, Tao Alter, Misha Bolotski, Jonathan Connel, Karen Sarachik, Maja Mataric , Ian Horswill, Colin Angle...the same pose. These images are very different and are in fact anti-correlated: bright pixels in the left image correspond to dark pixels in the right...image; dark pixels in the left image correspond to bright pixels in the right image. No variant of correlation could match these images together
2017-09-04
The combination of morphological and topographic information from stereo images from NASA's Mars Reconnaissance Orbiter, as well as compositional data from near-infrared spectroscopy has been proven to be a powerful tool for understanding the geology of Mars. Beginning with the OMEGA instrument on the European Space Agency's Mars Express orbiter in 2003, the surface of Mars has been examined at near-infrared wavelengths by imaging spectrometers that are capable of detecting specific minerals and mapping their spatial extent. The CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) instrument on our orbiter is a visible/near-infrared imaging spectrometer, and the HiRISE camera works together with it to document the appearance of mineral deposits detected by this orbital prospecting. Mawrth Vallis is one of the regions on Mars that has attracted much attention because of the nature and diversity of the minerals identified by these spectrometers. It is a large, ancient outflow channel on the margin of the Southern highlands and Northern lowlands. Both the OMEGA and CRISM instruments have detected clay minerals here that must have been deposited in a water-rich environment, probably more than 4 billion years ago. For this reason, Mawrth Vallis is one of the two candidate landing sites for the future Mars Express Rover Mission planned by the European Space Agency. This image was targeted on a location where the CRISM instrument detected a specific mineral called alunite, KAl3(SO4)2(OH)6. Alunite is a hydrated aluminum potassium sulfate, a mineral that is notable because it must have been deposited in a wet acidic environment, rich in sulfuric acid. Our image shows that the deposit is bright and colorful, and extensively fractured. The width of the cutout is 1.2 kilometers. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 60.1 centimeters (23.7 inches) per pixel (with 2 x 2 binning); objects on the order of 180 centimeters (70.9 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21936
Technique for ship/wake detection
Roskovensky, John K [Albuquerque, NM
2012-05-01
An automated ship detection technique includes accessing data associated with an image of a portion of Earth. The data includes reflectance values. A first portion of pixels within the image are masked with a cloud and land mask based on spectral flatness of the reflectance values associated with the pixels. A given pixel selected from the first portion of pixels is unmasked when a threshold number of localized pixels surrounding the given pixel are not masked by the cloud and land mask. A spatial variability image is generated based on spatial derivatives of the reflectance values of the pixels which remain unmasked by the cloud and land mask. The spatial variability image is thresholded to identify one or more regions within the image as possible ship detection regions.
Alivov, Yahya; Baturin, Pavlo; Le, Huy Q.; Ducote, Justin; Molloi, Sabee
2014-01-01
We investigated the effect of different imaging parameters such as dose, beam energy, energy resolution, and number of energy bins on image quality of K-edge spectral computed tomography (CT) of gold nanoparticles (GNP) accumulated in an atherosclerotic plaque. Maximum likelihood technique was employed to estimate the concentration of GNP, which served as a targeted intravenous contrast material intended to detect the degree of plaque's inflammation. The simulations studies used a single slice parallel beam CT geometry with an X-ray beam energy ranging between 50 and 140 kVp. The synthetic phantoms included small (3 cm in diameter) cylinder and chest (33x24 cm2) phantom, where both phantoms contained tissue, calcium, and gold. In the simulation studies GNP quantification and background (calcium and tissue) suppression task were pursued. The X-ray detection sensor was represented by an energy resolved photon counting detector (e.g., CdZnTe) with adjustable energy bins. Both ideal and more realistic (12% FWHM energy resolution) implementations of photon counting detector were simulated. The simulations were performed for the CdZnTe detector with pixel pitch of 0.5-1 mm, which corresponds to the performance without significant charge sharing and cross-talk effects. The Rose model was employed to estimate the minimum detectable concentration of GNPs. A figure of merit (FOM) was used to optimize the X-ray beam energy (kVp) to achieve the highest signal-to-noise ratio (SNR) with respect to patient dose. As a result, the successful identification of gold and background suppression was demonstrated. The highest FOM was observed at 125 kVp X-ray beam energy. The minimum detectable GNP concentration was determined to be approximately 1.06 μmol/mL (0.21 mg/mL) for an ideal detector and about 2.5 μmol/mL (0.49 mg/mL) for more realistic (12% FWHM) detector. The studies show the optimal imaging parameters at lowest patient dose using an energy resolved photon counting detector to image GNP in an atherosclerotic plaque. PMID:24334301
Steganography on quantum pixel images using Shannon entropy
NASA Astrophysics Data System (ADS)
Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.
2016-07-01
This paper presents a steganographical algorithm based on least significant bit (LSB) from the most significant bit information (MSBI) and the equivalence of a bit pixel image to a quantum pixel image, which permits to make the information communicate secretly onto quantum pixel images for its secure transmission through insecure channels. This algorithm offers higher security since it exploits the Shannon entropy for an image.
Thermal wake/vessel detection technique
Roskovensky, John K [Albuquerque, NM; Nandy, Prabal [Albuquerque, NM; Post, Brian N [Albuquerque, NM
2012-01-10
A computer-automated method for detecting a vessel in water based on an image of a portion of Earth includes generating a thermal anomaly mask. The thermal anomaly mask flags each pixel of the image initially deemed to be a wake pixel based on a comparison of a thermal value of each pixel against other thermal values of other pixels localized about each pixel. Contiguous pixels flagged by the thermal anomaly mask are grouped into pixel clusters. A shape of each of the pixel clusters is analyzed to determine whether each of the pixel clusters represents a possible vessel detection event. The possible vessel detection events are represented visually within the image.
High resolution Thomson scattering system for steady-state linear plasma sources
NASA Astrophysics Data System (ADS)
Lee, K. Y.; Lee, K. I.; Kim, J. H.; Lho, T.
2018-01-01
The high resolution Thomson scattering system with 63 points along a 25 mm line measures the radial electron temperature (Te) and its density (ne) in an argon plasma. By using a DC arc source with lanthanum hexaboride (LaB6) electrode, plasmas with electron temperature of over 5 eV and densities of 1.5 × 1019 m-3 have been measured. The system uses a frequency doubled (532 nm) Nd:YAG laser with 0.25 J/pulse at 20 Hz. The scattered light is collected and sent to a triple-grating spectrometer via optical-fibers, where images are recorded by an intensified charge coupled device (ICCD) camera. Although excellent in stray-light reduction, a disadvantage comes with its relatively low optical transmission and in sampling a tiny scattering volume. Thus requires accumulating multitude of images. In order to improve photon statistics, pixel binning in the ICCD camera as well as enlarging the intermediate slit-width inside the triple-grating spectrometer has been exploited. In addition, the ICCD camera capture images at 40 Hz while the laser is at 20 Hz. This operation mode allows us to alternate between background and scattering shot images. By image subtraction, influences from the plasma background are effectively taken out. Maximum likelihood estimation that uses a parameter sweep finds best fitting parameters Te and ne with the incoherent scattering spectrum.
High resolution Thomson scattering system for steady-state linear plasma sources.
Lee, K Y; Lee, K I; Kim, J H; Lho, T
2018-01-01
The high resolution Thomson scattering system with 63 points along a 25 mm line measures the radial electron temperature (T e ) and its density (n e ) in an argon plasma. By using a DC arc source with lanthanum hexaboride (LaB 6 ) electrode, plasmas with electron temperature of over 5 eV and densities of 1.5 × 10 19 m -3 have been measured. The system uses a frequency doubled (532 nm) Nd:YAG laser with 0.25 J/pulse at 20 Hz. The scattered light is collected and sent to a triple-grating spectrometer via optical-fibers, where images are recorded by an intensified charge coupled device (ICCD) camera. Although excellent in stray-light reduction, a disadvantage comes with its relatively low optical transmission and in sampling a tiny scattering volume. Thus requires accumulating multitude of images. In order to improve photon statistics, pixel binning in the ICCD camera as well as enlarging the intermediate slit-width inside the triple-grating spectrometer has been exploited. In addition, the ICCD camera capture images at 40 Hz while the laser is at 20 Hz. This operation mode allows us to alternate between background and scattering shot images. By image subtraction, influences from the plasma background are effectively taken out. Maximum likelihood estimation that uses a parameter sweep finds best fitting parameters T e and n e with the incoherent scattering spectrum.
Variable gamma-ray sky at 1 GeV
NASA Astrophysics Data System (ADS)
Pshirkov, M. S.; Rubtsov, G. I.
2013-01-01
We search for the long-term variability of the gamma-ray sky in the energy range E > 1 GeV with 168 weeks of the gamma-ray telescope Fermi-LAT data. We perform a full sky blind search for regions with variable flux looking for deviations from uniformity. We bin the sky into 12288 pixels using the HEALPix package and use the Kolmogorov-Smirnov test to compare weekly photon counts in each pixel with the constant flux hypothesis. The weekly exposure of Fermi-LAT for each pixel is calculated with the Fermi-LAT tools. We consider flux variations in a pixel significant if the statistical probability of uniformity is less than 4 × 10-6, which corresponds to 0.05 false detections in the whole set. We identified 117 variable sources, 27 of which have not been reported variable before. The sources with previously unidentified variability contain 25 active galactic nuclei (AGN) belonging to the blazar class (11 BL Lacs and 14 FSRQs), one AGN of an uncertain type, and one pulsar PSR J0633+1746 (Geminga).
Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis
2012-10-01
While the dependence of face identification on the level of pixelation-transform of the images of faces has been well studied, similar research on face-based trait perception is underdeveloped. Because depiction formats used for hiding individual identity in visual media and evidential material recorded by surveillance cameras often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) have the ability to discriminate between facial cues ndicative of these perceived traits from the coarse level of image pixelation (10-12 pixels per face horizontally) and that the discriminability increases with a decrease in the coarseness of pixelation. Perceived criminality and trustworthiness appear to be better carried by the pixelized images than perceived suggestibility.
Terahertz imaging with compressive sensing
NASA Astrophysics Data System (ADS)
Chan, Wai Lam
Most existing terahertz imaging systems are generally limited by slow image acquisition due to mechanical raster scanning. Other systems using focal plane detector arrays can acquire images in real time, but are either too costly or limited by low sensitivity in the terahertz frequency range. To design faster and more cost-effective terahertz imaging systems, the first part of this thesis proposes two new terahertz imaging schemes based on compressive sensing (CS). Both schemes can acquire amplitude and phase-contrast images efficiently with a single-pixel detector, thanks to the powerful CS algorithms which enable the reconstruction of N-by- N pixel images with much fewer than N2 measurements. The first CS Fourier imaging approach successfully reconstructs a 64x64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels which defines the image in the Fourier plane. Only about 12% of the pixels are required for reassembling the image of a selected object, equivalent to a 2/3 reduction in acquisition time. The second approach is single-pixel CS imaging, which uses a series of random masks for acquisition. Besides speeding up acquisition with a reduced number of measurements, the single-pixel system can further cut down acquisition time by electrical or optical spatial modulation of random patterns. In order to switch between random patterns at high speed in the single-pixel imaging system, the second part of this thesis implements a multi-pixel electrical spatial modulator for terahertz beams using active terahertz metamaterials. The first generation of this device consists of a 4x4 pixel array, where each pixel is an array of sub-wavelength-sized split-ring resonator elements fabricated on a semiconductor substrate, and is independently controlled by applying an external voltage. The spatial modulator has a uniform modulation depth of around 40 percent across all pixels, and negligible crosstalk, at the resonant frequency. The second-generation spatial terahertz modulator, also based on metamaterials with a higher resolution (32x32), is under development. A FPGA-based circuit is designed to control the large number of modulator pixels. Once fully implemented, this second-generation device will enable fast terahertz imaging with both pulsed and continuous-wave terahertz sources.
Evaluation of respiratory and cardiac motion correction schemes in dual gated PET/CT cardiac imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamare, F., E-mail: frederic.lamare@chu-bordeaux.fr; Fernandez, P.; CNRS, INCIA, UMR 5287, F-33400 Talence
Purpose: Cardiac imaging suffers from both respiratory and cardiac motion. One of the proposed solutions involves double gated acquisitions. Although such an approach may lead to both respiratory and cardiac motion compensation there are issues associated with (a) the combination of data from cardiac and respiratory motion bins, and (b) poor statistical quality images as a result of using only part of the acquired data. The main objective of this work was to evaluate different schemes of combining binned data in order to identify the best strategy to reconstruct motion free cardiac images from dual gated positron emission tomography (PET)more » acquisitions. Methods: A digital phantom study as well as seven human studies were used in this evaluation. PET data were acquired in list mode (LM). A real-time position management system and an electrocardiogram device were used to provide the respiratory and cardiac motion triggers registered within the LM file. Acquired data were subsequently binned considering four and six cardiac gates, or the diastole only in combination with eight respiratory amplitude gates. PET images were corrected for attenuation, but no randoms nor scatter corrections were included. Reconstructed images from each of the bins considered above were subsequently used in combination with an affine or an elastic registration algorithm to derive transformation parameters allowing the combination of all acquired data in a particular position in the cardiac and respiratory cycles. Images were assessed in terms of signal-to-noise ratio (SNR), contrast, image profile, coefficient-of-variation (COV), and relative difference of the recovered activity concentration. Results: Regardless of the considered motion compensation strategy, the nonrigid motion model performed better than the affine model, leading to higher SNR and contrast combined with a lower COV. Nevertheless, when compensating for respiration only, no statistically significant differences were observed in the performance of the two motion models considered. Superior image SNR and contrast were seen using the affine respiratory motion model in combination with the diastole cardiac bin in comparison to the use of the whole cardiac cycle. In contrast, when simultaneously correcting for cardiac beating and respiration, the elastic respiratory motion model outperformed the affine model. In this context, four cardiac bins associated with eight respiratory amplitude bins seemed to be adequate. Conclusions: Considering the compensation of respiratory motion effects only, both affine and elastic based approaches led to an accurate resizing and positioning of the myocardium. The use of the diastolic phase combined with an affine model based respiratory motion correction may therefore be a simple approach leading to significant quality improvements in cardiac PET imaging. However, the best performance was obtained with the combined correction for both cardiac and respiratory movements considering all the dual-gated bins independently through the use of an elastic model based motion compensation.« less
Considerations for the Use of STEREO -HI Data for Astronomical Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tappin, S. J., E-mail: james.tappin@stfc.ac.uk
Recent refinements to the photometric calibrations of the Heliospheric Imagers (HI) on board the Solar TErrestrial RElations Observatory ( STEREO ) have revealed a number of subtle effects in the measurement of stellar signals with those instruments. These effects need to be considered in the interpretation of STEREO -HI data for astronomy. In this paper we present an analysis of these effects and how to compensate for them when using STEREO -HI data for astronomical studies. We determine how saturation of the HI CCD detectors affects the apparent count rates of stars after the on-board summing of pixels and exposures.more » Single-exposure calibration images are analyzed and compared with binned and summed science images to determine the influence of saturation on the science images. We also analyze how the on-board cosmic-ray scrubbing algorithm affects stellar images. We determine how this interacts with the variations of instrument pointing to affect measurements of stars. We find that saturation is a significant effect only for the brightest stars, and that its onset is gradual. We also find that degraded pointing stability, whether of the entire spacecraft or of the imagers, leads to reduced stellar count rates and also increased variation thereof through interaction with the on-board cosmic-ray scrubbing algorithm. We suggest ways in which these effects can be mitigated for astronomical studies and also suggest how the situation can be improved for future imagers.« less
Monte Carlo Radiative Transfer Modeling of Lightning Observed in Galileo Images of Jupiter
NASA Technical Reports Server (NTRS)
Dyudine, U. A.; Ingersoll, Andrew P.
2002-01-01
We study lightning on Jupiter and the clouds illuminated by the lightning using images taken by the Galileo orbiter. The Galileo images have a resolution of 25 km/pixel and axe able to resolve the shape of the single lightning spots in the images, which have full widths at half the maximum intensity in the range of 90-160 km. We compare the measured lightning flash images with simulated images produced by our ED Monte Carlo light-scattering model. The model calculates Monte Carlo scattering of photons in a ED opacity distribution. During each scattering event, light is partially absorbed. The new direction of the photon after scattering is chosen according to a Henyey-Greenstein phase function. An image from each direction is produced by accumulating photons emerging from the cloud in a small range (bins) of emission angles. Lightning bolts are modeled either as points or vertical lines. Our results suggest that some of the observed scattering patterns axe produced in a 3-D cloud rather than in a plane-parallel cloud layer. Lightning is estimated to occur at least as deep as the bottom of the expected water cloud. For the six cases studied, we find that the clouds above the lightning are optically thick (tau > 5). Jovian flashes are more regular and circular than the largest terrestrial flashes observed from space. On Jupiter there is nothing equivalent to the 30-40-km horizontal flashes which axe seen on Earth.
Method and apparatus for detecting a desired behavior in digital image data
Kegelmeyer, Jr., W. Philip
1997-01-01
A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.
Method and apparatus for detecting a desired behavior in digital image data
Kegelmeyer, Jr., W. Philip
1997-01-01
A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spacially filtered to enforce local consensus among neighboring pixels and the spacially filtered image is output.
Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; McLauchlan, Lifford
2010-08-01
In this paper, diabetic retinopathy is chosen for a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and and classification parameters optimized for minimum false positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
Spatial light modulator array with heat minimization and image enhancement features
Jain, Kanti [Briarcliff Manor, NY; Sweatt, William C [Albuquerque, NM; Zemel, Marc [New Rochelle, NY
2007-01-30
An enhanced spatial light modulator (ESLM) array, a microelectronics patterning system and a projection display system using such an ESLM for heat-minimization and resolution enhancement during imaging, and the method for fabricating such an ESLM array. The ESLM array includes, in each individual pixel element, a small pixel mirror (reflective region) and a much larger pixel surround. Each pixel surround includes diffraction-grating regions and resolution-enhancement regions. During imaging, a selected pixel mirror reflects a selected-pixel beamlet into the capture angle of a projection lens, while the diffraction grating of the pixel surround redirects heat-producing unused radiation away from the projection lens. The resolution-enhancement regions of selected pixels provide phase shifts that increase effective modulation-transfer function in imaging. All of the non-selected pixel surrounds redirect all radiation energy away from the projection lens. All elements of the ESLM are fabricated by deposition, patterning, etching and other microelectronic process technologies.
Measurements of system sharpness for two digital breast tomosynthesis systems
NASA Astrophysics Data System (ADS)
Marshall, N. W.; Bosmans, H.
2012-11-01
The aim of this work was to propose system sharpness parameters for digital breast tomosynthesis (DBT) systems that include the influence of focus size and focus motion for use in quality assurance protocols. X-ray focus size was measured using a multiple pinhole test object, while detector presampling modulation transfer function (MTF) was measured from projection images of a 10 cm × 10 cm, 1 mm thick steel edge, for the Siemens Inspiration and Hologic Selenia Dimensions DBT systems. The height of the edge above the table was then varied from 1 to 78 mm. The MTF expected from theory for the projection images was calculated from the measured detector MTF, focus size MTF and focus motion MTF and was compared against measured curves. Two methods were used to measure the in-plane MTF in the DBT volume: a tungsten wire of diameter 25 µm and an Al edge 0.2 mm thick, both imaged with a 15 mm thick poly(methyl methacrylate) (PMMA) plate. The in-depth point spread function (PSF) was measured using an angled tungsten wire. The full 3D MTF was estimated with a 0.5 mm diameter aluminium bead held in a 45 mm thick PMMA phantom, with the bead 15 and 65 mm above the table. Inspiration DBT projection images are saved at native detector resolution (85 µm), while the Dimensions re-bins projections to 140 µm pixels (2 × 2 binning); both systems used 2 × 2 binning of projection data before reconstruction. The 50% point for the MTF (MTF0.50) measured in the DBT projection images for the tube-travel direction fell as a function of height above the table from 3.60 to 0.90 mm-1 for the Inspiration system and from 2.50 to 1.20 mm-1 for the Dimensions unit. The maximum deviation of measured MTF0.50 from the calculated value was 13%. MTF0.50 measured in-plane (tube-travel direction) fell as a function of height above the table from 1.66 to 0.97 mm-1 for the Inspiration system and from 2.21 to 1.31 mm-1 for the Dimensions system. The full-width half-maximum for the in-depth PSF was 3.0 and 5.9 mm for the Inspiration and Dimensions systems, respectively. There was no difference in the 3D MTF curves, sectioned in the tube-travel direction, for bead heights of 15 and 65 mm above the table. A 25 µm tungsten wire held within a 15 mm thick PMMA plate was found to be a suitable test object for measurement of in-plane MTF. Evaluation of MTF as a function of height above the table, both in the projection images and in the reconstructed planes, provides important information on the impact of focus size and focus motion on the DBT system's imaging performance.
Measurements of system sharpness for two digital breast tomosynthesis systems.
Marshall, N W; Bosmans, H
2012-11-21
The aim of this work was to propose system sharpness parameters for digital breast tomosynthesis (DBT) systems that include the influence of focus size and focus motion for use in quality assurance protocols. X-ray focus size was measured using a multiple pinhole test object, while detector presampling modulation transfer function (MTF) was measured from projection images of a 10 cm × 10 cm, 1 mm thick steel edge, for the Siemens Inspiration and Hologic Selenia Dimensions DBT systems. The height of the edge above the table was then varied from 1 to 78 mm. The MTF expected from theory for the projection images was calculated from the measured detector MTF, focus size MTF and focus motion MTF and was compared against measured curves. Two methods were used to measure the in-plane MTF in the DBT volume: a tungsten wire of diameter 25 µm and an Al edge 0.2 mm thick, both imaged with a 15 mm thick poly(methyl methacrylate) (PMMA) plate. The in-depth point spread function (PSF) was measured using an angled tungsten wire. The full 3D MTF was estimated with a 0.5 mm diameter aluminium bead held in a 45 mm thick PMMA phantom, with the bead 15 and 65 mm above the table. Inspiration DBT projection images are saved at native detector resolution (85 µm), while the Dimensions re-bins projections to 140 µm pixels (2 × 2 binning); both systems used 2 × 2 binning of projection data before reconstruction. The 50% point for the MTF (MTF(0.50)) measured in the DBT projection images for the tube-travel direction fell as a function of height above the table from 3.60 to 0.90 mm(-1) for the Inspiration system and from 2.50 to 1.20 mm(-1) for the Dimensions unit. The maximum deviation of measured MTF(0.50) from the calculated value was 13%. MTF(0.50) measured in-plane (tube-travel direction) fell as a function of height above the table from 1.66 to 0.97 mm(-1) for the Inspiration system and from 2.21 to 1.31 mm(-1) for the Dimensions system. The full-width half-maximum for the in-depth PSF was 3.0 and 5.9 mm for the Inspiration and Dimensions systems, respectively. There was no difference in the 3D MTF curves, sectioned in the tube-travel direction, for bead heights of 15 and 65 mm above the table. A 25 µm tungsten wire held within a 15 mm thick PMMA plate was found to be a suitable test object for measurement of in-plane MTF. Evaluation of MTF as a function of height above the table, both in the projection images and in the reconstructed planes, provides important information on the impact of focus size and focus motion on the DBT system's imaging performance.
Supervised pixel classification using a feature space derived from an artificial visual system
NASA Technical Reports Server (NTRS)
Baxter, Lisa C.; Coggins, James M.
1991-01-01
Image segmentation involves labelling pixels according to their membership in image regions. This requires the understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. Multiscale structure of regions are investigated and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not by image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.
A kind of color image segmentation algorithm based on super-pixel and PCNN
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Image segmentation is a very important step in the low-level visual computing. Although image segmentation has been studied for many years, there are still many problems. PCNN (Pulse Coupled Neural network) has biological background, when it is applied to image segmentation it can be viewed as a region-based method, but due to the dynamics properties of PCNN, many connectionless neurons will pulse at the same time, so it is necessary to identify different regions for further processing. The existing PCNN image segmentation algorithm based on region growing is used for grayscale image segmentation, cannot be directly used for color image segmentation. In addition, the super-pixel can better reserve the edges of images, and reduce the influences resulted from the individual difference between the pixels on image segmentation at the same time. Therefore, on the basis of the super-pixel, the original PCNN algorithm based on region growing is improved by this paper. First, the color super-pixel image was transformed into grayscale super-pixel image which was used to seek seeds among the neurons that hadn't been fired. And then it determined whether to stop growing by comparing the average of each color channel of all the pixels in the corresponding regions of the color super-pixel image. Experiment results show that the proposed algorithm for the color image segmentation is fast and effective, and has a certain effect and accuracy.
VizieR Online Data Catalog: Photometry and spectroscopy of KELT-11 (Pepper+, 2017)
NASA Astrophysics Data System (ADS)
Pepper, J.; Rodriguez, J. E.; Collins, K. A.; Johnson, J. A.; Fulton, B. J.; Howard, A. W.; Beatty, T. G.; Stassun, K. G.; Isaacson, H.; Colon, K. D.; Lund, M. B.; Kuhn, R. B.; Siverd, R. J.; Gaudi, B. S.; Tan, T. G.; Curtis, I.; Stockdale, C.; Mawet, D.; Bottom, M.; James, D.; Zhou, G.; Bayliss, D.; Cargile, P.; Bieryla, A.; Penev, K.; Latham, D. W.; Labadie-Bartz, J.; Kielkopf, J.; Eastman, J. D.; Oberst, T. E.; Jensen, E. L. N.; Nelson, P.; Sliski, D. H.; Wittenmyer, R. A.; McCrady, N.; Wright, J. T.; Relles, H. M.; Stevens, D. J.; Joner, M. D.; Hintz, E.
2017-08-01
KELT-11b is located in the Kilodegree Extremely Little Telescope (KELT)-South field 23, which is centered at J2000 α=10h43m48s, δ=-20°00'00''. This field was monitored from UT 2010 March 12 to UT 2014 July 9, resulting in 3910 images after post-processing and removal of bad images. We obtained follow-up time-series photometry of KELT-11b. We obtained nine full or partial transits in multiple bands between 2015 January and 2016 February. We observed an ingress of KELT-11b from the Westminster College Observatory (WCO), PA, on UT 2015 January 1 in the I filter. The observations employed a 0.35m f/11 Celestron C14 Schmidt-Cassegrain telescope and SBIG STL-6303E CCD with a 3k*2k array of 9μm pixels, yielding a 24'*16' field of view and 1.4''/pixel image scale at 3*3 pixel binning. We observed a partial transit of KELT-11b using an 0.6m RCOS telescope at the Moore Observatory (MORC), operated by the University of Louisville. The telescope has an Apogee U16M 4K*4K CCD, giving a 26'*26' field of view and 0.39''/pixel. We observed the transit on UT 2015 February 08 in alternating Sloan g and i filters from before the ingress and past the mid-transit. We observed a transit of KELT-11b in the Sloan i-band using one of the Miniature Exoplanet Radial Velocity Array (MINERVA) Project telescopes (Swift et al. 2015JATIS...1b7002S) on the night of UT 2015 February 08. MINERVA used four 0.7m PlaneWave CDK-700 telescopes that are located on Mt. Hopkins, Arizona, at the Fred L. Whipple Observatory. While the four telescopes are normally used to feed a single spectrograph to discover and characterize exoplanets through radial velocity measurements, for the KELT-11 observations, we used a single MINERVA telescope in its photometric imaging mode. That telescope had an Andor iKON-L 2048*2048 camera, which gave a field of view of 20.9'*20.9' and a plate scale of 0.6''/pixel. The camera has a 2048*2048 back-illuminated deep depletion sensor with fringe suppression. Due to the brightness of KELT-11, we heavily defocused for our observations, such that the image of KELT-11 was a "donut" approximately 20 pixels in diameter. On UT 2015 March 08, we observed a partial transit from the Perth Exoplanet Survey Telescope (PEST) Observatory, located in Perth, Australia. The observations were taken with a 0.3m Meade LX200 telescope working at f/5, and with a 31'*21' field of view. The camera is an SBIG ST-8XME, with 1530*1020 pixels, yielding 1.2''/pixel. An ingress was observed using a Cousins I filter. On UT 2015 March 03, we observed a partial transit at the Ivan Curtis Observatory (ICO), located in Adelaide, Australia. The observations were taken with a 0.235m Celestron Schmidt-Cassegrain telescope with an Antares 0.63x focal reducer, giving an overall focal ratio of f/6.3. The camera is an Atik 320e, which uses a cooled Sony ICX274 CCD of 1620*1220 pixels. The field of view is 16.6'*12.3', with a resolution of 0.62''/pixel. An egress was observed using a Johnson R filter. We observed an ingress in the Sloan z-band at the Swarthmore College Peter van de Kamp Observatory (PvdK) on 2015 March 18. The observatory uses a 0.6m RCOS Telescope with an Apogee U16M 4K*4K CCD, giving a 26'*26' field of view. Using 2*2 binning, it has 0.76''/pixel. We observed an egress of KELT-11b in the Sloan i-band during bright time on UT 2015 May 04, using one of the 1m telescopes in the Las Cumbres Observatory Global Telescope (LCOGT) network (http://lcogt.net/) located at the South African Astronomical Observatory (SAAO) in Sutherland, South Africa. The LCOGT telescopes at SAAO have 4K*4K SBIG Science cameras and offer a 16'*16' field of view and an unbinned pixel scale of 0.23''/pixel. We observed one full transit of KELT-11b using the Manner-Vanderbilt Ritchey-Chretien (MVRC) telescope located at the Mt. Lemmon summit of the Steward Observatory, Arizona, on UT 2016 February 22 in the r' filter. The observations employed a 0.6m f/8 RC Optical Systems Ritchey-Chretien telescope and SBIG STX-16803 CCD with a 4k*4k array of 9μm pixels, yielding a 26.6'*26.6' field of view and 0.39''/pixel image scale. The telescope was heavily defocused, resulting in a typical "donut" shaped stellar PSF with a diameter of ~25''. We obtained spectroscopic observations of KELT-11. The observations that provide radial velocity measurements are listed in Table6. We obtained a spectrum with Tillinghast Reflector Echelle Spectrograph (TRES), on the 1.5m telescope at the Fred Lawrence Whipple Observatory (FLWO) on Mt. Hopkins, Arizona, on UT 2015 January 28. The spectrum has a resolution of R=44000, a signal-to-noise ratio (S/N)=100.4. Well before KELT observations of this star began, the radial velocity of HD93396 had been monitored at the Keck Observatory using KECK High Resolution Echelle Spectrometer (HIRES) starting in 2007 as part of the "Retired A Stars" program (Johnson et al. 2006ApJ...652.1724J, 2011ApJS..197...26J). Observations were conducted using the standard setup of the California Planet Survey (Howard et al. 2010ApJ...721.1467H; Johnson et al. 2010PASP..122..149J) using the B5 decker and the iodine cell. Radial velocity measurements were made with respect to a high S/N, iodine-free template observation (Butler et al. 1996PASP..108..500B), which we also use to measure the stellar properties. Exposure times ranged from 50 to 120s depending on the seeing, with an exposure meter ensuring that all exposures reached S/N{simeq}150 per pixel at 550nm. To supplement the HIRES radial velocity spectra, we also observed KELT-11 with the Levy spectrograph on the Automated Planet Finder (APF) telescope at Lick Observatory. We collected 16 radial velocity measurements between 2015 January 12 and 2015 November 4. The observational setup was similar to the setup used for the APF observations described in Fulton et al. (2015ApJ...810...30F). We observed the star through a cell of gaseous iodine using the standard 1''*3'' slit for a spectral resolution of R{simeq}100000, and collected an iodine-free template spectrum using the 0.75''*8'' slit (R{simeq}120000, Vogt et al. 2014PASP..126..359V). Exposure times ranged from 18 to 30 minutes depending on seeing and transparency to obtain S/N{simeq}100pixel-1 at 550nm. (4 data files).
Mitigating illumination gradients in a SAR image based on the image data and antenna beam pattern
Doerry, Armin W.
2013-04-30
Illumination gradients in a synthetic aperture radar (SAR) image of a target can be mitigated by determining a correction for pixel values associated with the SAR image. This correction is determined based on information indicative of a beam pattern used by a SAR antenna apparatus to illuminate the target, and also based on the pixel values associated with the SAR image. The correction is applied to the pixel values associated with the SAR image to produce corrected pixel values that define a corrected SAR image.
Shadow-free single-pixel imaging
NASA Astrophysics Data System (ADS)
Li, Shunhua; Zhang, Zibang; Ma, Xiao; Zhong, Jingang
2017-11-01
Single-pixel imaging is an innovative imaging scheme and receives increasing attention in recent years, for it is applicable for imaging at non-visible wavelengths and imaging under weak light conditions. However, as in conventional imaging, shadows would likely occur in single-pixel imaging and sometimes bring negative effects in practical uses. In this paper, the principle of shadows occurrence in single-pixel imaging is analyzed, following which a technique for shadows removal is proposed. In the proposed technique, several single-pixel detectors are used to detect the backscattered light at different locations so that the shadows in the reconstructed images corresponding to each detector shadows are complementary. Shadow-free reconstruction can be derived by fusing the shadow-complementary images using maximum selection rule. To deal with the problem of intensity mismatch in image fusion, we put forward a simple calibration. As experimentally demonstrated, the technique is able to reconstruct monochromatic and full-color shadow-free images.
Fast Pixel Buffer For Processing With Lookup Tables
NASA Technical Reports Server (NTRS)
Fisher, Timothy E.
1992-01-01
Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate or processing beyond that attainable when data read, one pixel at time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table used to address those pixels in main image memory required for processing.
Detector motion method to increase spatial resolution in photon-counting detectors
NASA Astrophysics Data System (ADS)
Lee, Daehee; Park, Kyeongjin; Lim, Kyung Taek; Cho, Gyuseong
2017-03-01
Medical imaging requires high spatial resolution of an image to identify fine lesions. Photon-counting detectors in medical imaging have recently been rapidly replacing energy-integrating detectors due to the former`s high spatial resolution, high efficiency and low noise. Spatial resolution in a photon counting image is determined by the pixel size. Therefore, the smaller the pixel size, the higher the spatial resolution that can be obtained in an image. However, detector redesigning is required to reduce pixel size, and an expensive fine process is required to integrate a signal processing unit with reduced pixel size. Furthermore, as the pixel size decreases, charge sharing severely deteriorates spatial resolution. To increase spatial resolution, we propose a detector motion method using a large pixel detector that is less affected by charge sharing. To verify the proposed method, we utilized a UNO-XRI photon-counting detector (1-mm CdTe, Timepix chip) at the maximum X-ray tube voltage of 80 kVp. A similar spatial resolution of a 55- μm-pixel image was achieved by application of the proposed method to a 110- μm-pixel detector with a higher signal-to-noise ratio. The proposed method could be a way to increase spatial resolution without a pixel redesign when pixels severely suffer from charge sharing as pixel size is reduced.
Imaging through scattering media by Fourier filtering and single-pixel detection
NASA Astrophysics Data System (ADS)
Jauregui-Sánchez, Y.; Clemente, P.; Lancis, J.; Tajahuerce, E.
2018-02-01
We present a novel imaging system that combines the principles of Fourier spatial filtering and single-pixel imaging in order to recover images of an object hidden behind a turbid medium by transillumination. We compare the performance of our single-pixel imaging setup with that of a conventional system. We conclude that the introduction of Fourier gating improves the contrast of images in both cases. Furthermore, we show that the combination of single-pixel imaging and Fourier spatial filtering techniques is particularly well adapted to provide images of objects transmitted through scattering media.
NASA Astrophysics Data System (ADS)
Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.
2018-01-01
An evaluation of the use of median filters in the reduction of dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. Due to the large number of photosites, this provides an image sensor with very high sensitivity but also makes them prone to noise effects such as hot-pixels. Similar to earlier research with older models of smartphone, no appreciable temperature effects were observed in the overall average pixel values for images taken in ambient temperatures between 5 °C and 25 °C. In this research, hot-pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median-filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the temperature effects' uniformity masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot-pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot-pixels were also reduced by decreasing image resolution. The method outlined in this research provides a methodology to characterise the dark noise behavior of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
Cross, Russell; Olivieri, Laura; O'Brien, Kendall; Kellman, Peter; Xue, Hui; Hansen, Michael
2016-02-25
Traditional cine imaging for cardiac functional assessment requires breath-holding, which can be problematic in some situations. Free-breathing techniques have relied on multiple averages or real-time imaging, producing images that can be spatially and/or temporally blurred. To overcome this, methods have been developed to acquire real-time images over multiple cardiac cycles, which are subsequently motion corrected and reformatted to yield a single image series displaying one cardiac cycle with high temporal and spatial resolution. Application of these algorithms has required significant additional reconstruction time. The use of distributed computing was recently proposed as a way to improve clinical workflow with such algorithms. In this study, we have deployed a distributed computing version of motion corrected re-binning reconstruction for free-breathing evaluation of cardiac function. Twenty five patients and 25 volunteers underwent cardiovascular magnetic resonance (CMR) for evaluation of left ventricular end-systolic volume (ESV), end-diastolic volume (EDV), and end-diastolic mass. Measurements using motion corrected re-binning were compared to those using breath-held SSFP and to free-breathing SSFP with multiple averages, and were performed by two independent observers. Pearson correlation coefficients and Bland-Altman plots tested agreement across techniques. Concordance correlation coefficient and Bland-Altman analysis tested inter-observer variability. Total scan plus reconstruction times were tested for significant differences using paired t-test. Measured volumes and mass obtained by motion corrected re-binning and by averaged free-breathing SSFP compared favorably to those obtained by breath-held SSFP (r = 0.9863/0.9813 for EDV, 0.9550/0.9685 for ESV, 0.9952/0.9771 for mass). Inter-observer variability was good with concordance correlation coefficients between observers across all acquisition types suggesting substantial agreement. Both motion corrected re-binning and averaged free-breathing SSFP acquisition and reconstruction times were shorter than breath-held SSFP techniques (p < 0.0001). On average, motion corrected re-binning required 3 min less than breath-held SSFP imaging, a 37% reduction in acquisition and reconstruction time. The motion corrected re-binning image reconstruction technique provides robust cardiac imaging that can be used for quantification that compares favorably to breath-held SSFP as well as multiple average free-breathing SSFP, but can be obtained in a fraction of the time when using cloud-based distributed computing reconstruction.
All-passive pixel super-resolution of time-stretch imaging
Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.
2017-01-01
Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement of state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate — hampering the widespread utilities of such technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. Here, we present the experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s)—more than four times lower than the originally required readout rate (20 GSa/s) — is thus effective for high-throughput label-free, morphology-based cellular classification down to single-cell precision. Upon integration with the high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936
A fast image encryption algorithm based on only blocks in cipher text
NASA Astrophysics Data System (ADS)
Wang, Xing-Yuan; Wang, Qian
2014-03-01
In this paper, a fast image encryption algorithm is proposed, in which the shuffling and diffusion is performed simultaneously. The cipher-text image is divided into blocks and each block has k ×k pixels, while the pixels of the plain-text are scanned one by one. Four logistic maps are used to generate the encryption key stream and the new place in the cipher image of plain image pixels, including the row and column of the block which the pixel belongs to and the place where the pixel would be placed in the block. After encrypting each pixel, the initial conditions of logistic maps would be changed according to the encrypted pixel's value; after encrypting each row of plain image, the initial condition would also be changed by the skew tent map. At last, it is illustrated that this algorithm has a faster speed, big key space, and better properties in withstanding differential attacks, statistical analysis, known plaintext, and chosen plaintext attacks.
Mapping Electrical Crosstalk in Pixelated Sensor Arrays
NASA Technical Reports Server (NTRS)
Seshadri, Suresh (Inventor); Cole, David (Inventor); Smith, Roger M. (Inventor); Hancock, Bruce R. (Inventor)
2017-01-01
The effects of inter pixel capacitance in a pixilated array may be measured by first resetting all pixels in the array to a first voltage, where a first image is read out, followed by resetting only a subset of pixels in the array to a second voltage, where a second image is read out, where the difference in the first and second images provide information about the inter pixel capacitance. Other embodiments are described and claimed.
Mapping Electrical Crosstalk in Pixelated Sensor Arrays
NASA Technical Reports Server (NTRS)
Smith, Roger M (Inventor); Hancock, Bruce R. (Inventor); Cole, David (Inventor); Seshadri, Suresh (Inventor)
2013-01-01
The effects of inter pixel capacitance in a pixilated array may be measured by first resetting all pixels in the array to a first voltage, where a first image is read out, followed by resetting only a subset of pixels in the array to a second voltage, where a second image is read out, where the difference in the first and second images provide information about the inter pixel capacitance. Other embodiments are described and claimed.
How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 mega-pixels and produced high quality images (with significant image processing). In 2005 significant improvement in image quality was apparent and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sells. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” where the driving feature of the cameras was the pixel count where even moderate cost, ˜120, DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufactures starting to reverse their course and producing DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.
The effect of split pixel HDR image sensor technology on MTF measurements
NASA Astrophysics Data System (ADS)
Deegan, Brian M.
2014-03-01
Split-pixel HDR sensor technology is particularly advantageous in automotive applications, because the images are captured simultaneously rather than sequentially, thereby reducing motion blur. However, split pixel technology introduces artifacts in MTF measurement. To achieve a HDR image, raw images are captured from both large and small sub-pixels, and combined to make the HDR output. In some cases, a large sub-pixel is used for long exposure captures, and a small sub-pixel for short exposures, to extend the dynamic range. The relative size of the photosensitive area of the pixel (fill factor) plays a very significant role in the output MTF measurement. Given an identical scene, the MTF will be significantly different, depending on whether you use the large or small sub-pixels i.e. a smaller fill factor (e.g. in the short exposure sub-pixel) will result in higher MTF scores, but significantly greater aliasing. Simulations of split-pixel sensors revealed that, when raw images from both sub-pixels are combined, there is a significant difference in rising edge (i.e. black-to-white transition) and falling edge (white-to-black) reproduction. Experimental results showed a difference of ~50% in measured MTF50 between the falling and rising edges of a slanted edge test chart.
Dynamically re-configurable CMOS imagers for an active vision system
NASA Technical Reports Server (NTRS)
Yang, Guang (Inventor); Pain, Bedabrata (Inventor)
2005-01-01
A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
Martian Meanders and Scroll-Bars
2017-03-01
This is a portion of an inverted fluvial channel in the region of Aeolis/Zephyria Plana, at the Martian equator. Channels become inverted when the sediments filling them become more resistant to erosion than the surrounding material. Here, the most likely process leading to hardening of the channel material is chemical cementation by precipitation of minerals. Once the surrounding material erodes, the channel is left standing as a ridge. The series of curvilinear lineations are ancient scroll-bars, which are features typical of river meanders (bends) in terrestrial fluvial channels. Scroll-bars are series of ridges that result from the continuous lateral migration of a meander. On Earth, they are more common in mature rivers. The presence of scroll bars suggests that the water flow in this channel may have been sustained for a relatively long time. Measuring characteristics of these scroll-bars and meanders may help to estimate the amount of water that once flowed in this channel, aiding our understanding of the history of water on Mars. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 29.3 centimeters (11.5 inches) per pixel (with 1 x 1 binning); objects on the order of 88 centimeters (29.6 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21551
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.
2014-01-01
Purpose To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210
NASA Astrophysics Data System (ADS)
Chavarrías, C.; Vaquero, J. J.; Sisniega, A.; Rodríguez-Ruano, A.; Soto-Montenegro, M. L.; García-Barreno, P.; Desco, M.
2008-09-01
We propose a retrospective respiratory gating algorithm to generate dynamic CT studies. To this end, we compared three different methods of extracting the respiratory signal from the projections of small-animal cone-beam computed tomography (CBCT) scanners. Given a set of frames acquired from a certain axial angle, subtraction of their average image from each individual frame produces a set of difference images. Pixels in these images have positive or negative values (according to the respiratory phase) in those areas where there is lung movement. The respiratory signals were extracted by analysing the shape of the histogram of these difference images: we calculated the first four central and non-central moments. However, only odd-order moments produced the desired breathing signal, as the even-order moments lacked information about the phase. Each of these curves was compared to a reference signal recorded by means of a pneumatic pillow. Given the similar correlation coefficients yielded by all of them, we selected the mean to implement our retrospective protocol. Respiratory phase bins were separated, reconstructed independently and included in a dynamic sequence, suitable for cine playback. We validated our method in five adult rat studies by comparing profiles drawn across the diaphragm dome, with and without retrospective respiratory gating. Results showed a sharper transition in the gated reconstruction, with an average slope improvement of 60.7%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, L; O’Connell, D; Lee, P
2016-06-15
Purpose: A published 5DCT breathing motion model enables image reconstruction at any user-selected breathing phase, defined by the model as a specific amplitude (v) and rate (f). Generation of reconstructed phase-specific CT scans will be required for time-independent radiation dose distribution simulations. This work answers the question: how many amplitude and rate bins are required to describe the tumor motion with a specific spatial resolution? Methods: 19 lung-cancer patients with 21 tumors were scanned using a free-breathing 5DCT protocol, employing an abdominally positioned pneumatic-bellows breathing surrogate and yielding voxel-specific motion model parameters α and β corresponding to motion as amore » function of amplitude and rate, respectively. Tumor GTVs were contoured on the first (reference) of 25 successive free-breathing fast helical CT image sets. The tumor displacements were binned into widths of 1mm to 5mm in 1mm steps and the total required number of bins recorded. The simulation evaluated the number of bins needed to encompass 100% of the breathing-amplitude and between the 5th and 95th percentile amplitudes to exclude breathing outliers. Results: The mean respiration-induced tumor motion was 9.90mm ± 7.86mm with a maximum of 25mm. The number of bins required was a strong function of the spatial resolution and varied widely between patients. For example, for 2mm bins, between 1–13 amplitude bins and 1–9 rate bins were required to encompass 100% of the breathing amplitude, while 1–6 amplitude bins and 1–3 rate bins were required to encompass 90% of the breathing amplitude. Conclusion: The strong relationship between number of bins and spatial resolution as well as the large variation between patients implies that time-independent radiation dose distribution simulations should be conducted using patient-specific data and that the breathing conditions will have to be carefully considered. This work will lead to the assessment of the dosimetric impact of binning resolution. This study is supported by Siemens Healthcare.« less
SVM Pixel Classification on Colour Image Segmentation
NASA Astrophysics Data System (ADS)
Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.
2018-04-01
The aim of image segmentation is to simplify the representation of an image with the help of cluster pixels into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, precisely to label every pixel in an image to give each pixel an independent identity. SVM pixel classification on colour image segmentation is the topic highlighted in this paper. It holds useful application in the field of concept based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first we need to recognize the type of colour and the texture used as an input to the SVM classifier. These inputs are extracted via local spatial similarity measure model and Steerable filter also known as Gabon Filter. It is then trained by using FCM (Fuzzy C-Means). Both the pixel level information of the image and the ability of the SVM Classifier undergoes some sophisticated algorithm to form the final image. The method has a well developed segmented image and efficiency with respect to increased quality and faster processing of the segmented image compared with the other segmentation methods proposed earlier. One of the latest application result is the Light L16 camera.
Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2018-04-01
This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RBG image components to HSV or any other color space or model; instead, it processes directly RGB components. In each component, a group of four adjacent pixels and half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After calculations, groups of identical pixels are overlapped successively in horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted focusing on pairwise comparisons between original and enhanced images. Final-enhanced images have generally the best diagnostic quality and gave more details about the visibility of vessels and structures in capsule endoscopy images.
Method and apparatus of high dynamic range image sensor with individual pixel reset
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Pain, Bedabrata (Inventor); Fossum, Eric R. (Inventor)
2001-01-01
A wide dynamic range image sensor provides individual pixel reset to vary the integration time of individual pixels. The integration time of each pixel is controlled by column and row reset control signals which activate a logical reset transistor only when both signals coincide for a given pixel.
Leng, Shuai; Yu, Lifeng; Wang, Jia; Fletcher, Joel G; Mistretta, Charles A; McCollough, Cynthia H
2011-09-01
Our purpose was to reduce image noise in spectral CT by exploiting data redundancies in the energy domain to allow flexible selection of the number, width, and location of the energy bins. Using a variety of spectral CT imaging methods, conventional filtered backprojection (FBP) reconstructions were performed and resulting images were compared to those processed using a Local HighlY constrained backPRojection Reconstruction (HYPR-LR) algorithm. The mean and standard deviation of CT numbers were measured within regions of interest (ROIs), and results were compared between FBP and HYPR-LR. For these comparisons, the following spectral CT imaging methods were used:(i) numerical simulations based on a photon-counting, detector-based CT system, (ii) a photon-counting, detector-based micro CT system using rubidium and potassium chloride solutions, (iii) a commercial CT system equipped with integrating detectors utilizing tube potentials of 80, 100, 120, and 140 kV, and (iv) a clinical dual-energy CT examination. The effects of tube energy and energy bin width were evaluated appropriate to each CT system. The mean CT number in each ROI was unchanged between FBP and HYPR-LR images for each of the spectral CT imaging scenarios, irrespective of bin width or tube potential. However, image noise, as represented by the standard deviation of CT numbers in each ROI, was reduced by 36%-76%. In all scenarios, image noise after HYPR-LR algorithm was similar to that of composite images, which used all available photons. No difference in spatial resolution was observed between HYPR-LR processing and FBP. Dual energy patient data processed using HYPR-LR demonstrated reduced noise in the individual, low- and high-energy images, as well as in the material-specific basis images. Noise reduction can be accomplished for spectral CT by exploiting data redundancies in the energy domain. HYPR-LR is a robust method for reducing image noise in a variety of spectral CT imaging systems without losing spatial resolution or CT number accuracy. This method improves the flexibility to select energy bins in the manner that optimizes material identification and separation without paying the penalty of increased image noise or its corollary, increased patient dose.
Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-11-01
Tc-methylene diphosphonate (Tc-MDP) bone scintigraphy images have limited number of counts per pixel. A noise filtering method based on local statistics of the image produces better results than a linear filter. However, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty four bone scan images were processed using mask sizes 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced images with significant loss of clinical details in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set had input image, and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set of images. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences in image quality processed with 5 and 7-pixel mask at a 5% cut-off. A statistically significant difference was found between the image quality processed with 5 and 7-pixel mask at P=0.00528. The identified optimal mask size to produce a good smooth image was found to be 7 pixels. The best mask size for the John-Sen Lee filter was found to be 7×7 pixels, which yielded Tc-methylene diphosphonate bone scan images with the highest acceptable smoothness.
VizieR Online Data Catalog: S4G disk galaxies stellar mass distribution (Diaz-Garcia+, 2016)
NASA Astrophysics Data System (ADS)
Diaz-Garcia, S.; Salo, H.; Laurikainen, E.
2016-08-01
We provide the tabulated radial profiles of mean stellar mass density in bins of total stellar mass (M*, from Munoz-Mateos et al., 2015ApJS..219....3M) and Hubble stage (T, from Buta et al., 2015, Cat. J/ApJS/217/32). We used the 3.6um imaging for the non-highly inclined galaxies (i<65° in Salo et al., 2015, Cat. J/ApJS/219/4) in the Spitzer Survey of Stellar Structure in Galaxies (Sheth et al., 2010, Cat. J/PASP/122/1397). We also provide the averaged stellar contribution to the circular velocity, computed from the radial force profiles of individual galaxies (from Diaz-Garcia et al., 2016A&A...587A.160D). Besides, we provide the FITS files of the bar synthetic images (2D) obtained by stacking images rescaled to a common frame determined by the bar parameters (from Herrera-Endoqui et al., 2015A&A...582A..86H) in bins of M*, T, and galaxy family (from Buta et al. 2015). For the bar stacks, we also tabulate the azimuthally averaged luminosity profiles, the tangential-to-radial forces (Qt), the m=2,4 Fourier amplitudes (A2,A4), and the radial profiles of ellipticity and b4 parameter. The fits files (.fit) of the bar stacks, in units of flux (MJy/sr). The pixel size is 0.02 x rbar, where rbar refers to the bar radius. The images are cut at a radius of 3 x rbar. In every folder, the terminology used to label the ".dat" and ".fit" files, in relation to their content, is the following: a) The term "starmass" is used when the binning of the sample was based on the total stellar mass of the galaxy, from Munoz-Mateos et al. (2015ApJS..219....3M). We indicate the common logarithm of the boundaries: (8.5,9.9.5,10,10.5,11). b) The term "ttype" is used when the binning of the sample was based on the Hubble stage of the galaxy (-3,0,3,5,8,11), from Buta et al. (2015, Cat. J/ApJS/217/32) c) The term "family" is used when the binning of the sample was based on the morphological family of the galaxy (AB,AB,AB,B), from Buta et al. (2015, Cat. J/ApJS/217/32). d) The term "hr" is used when the 1-D luminosity stacks were obtained in a common frame determined by the scalelength of the disks (from Salo et al., 2015, Cat. J/ApJS/219/4). e) The term "kpc" is used when the 1-D luminosity stacks were obtained in a common frame determined by the disk extent in physical units (kpc). f) The term "barred" is used when only barred galaxies are stacked (according to Buta et al., 2015, Cat. J/ApJS/217/32). g) The term "unbarred" is used when only non-barred galaxies are stacked. IDL reading: readcol,'luminositydiskkpc/luminositydiskkpc_*.dat',Radius,$ Steldens,bSteldens,BSteldens,SuBr,bSuBr,BSuBr,Nsample,$ format='F,F,F,F,F,F,F,F',delim=' ' readcol,'luminositydiskhr/luminositydiskhr_*.dat',Radius,$ Steldens,bSteldens,BSteldens,SuBr,bSuBr,BSuB,Nsample,$ format='F,F,F,F,F,F,F,F',delim=' ' readcol,'vrotdiskkpc/vrotdiskkpc_*.dat',Radius,Vrotmean,$ Vrotmedian,Sigma,Nsample,format='F,F,F,F,F',delim=' ' readcol,'vrotdiskhr/vrotdiskhr_*.dat',Radius,Vrotmean,Vrotmedian,$ Sigma,Nsample,format='F,F,F,F,F',delim=' ' readcol,'luminositybar/barsradialluminosity*.dat',Radius,$ Steldens,SuBr,format='F,F,F',delim=' ' readcol,'forceprofbar/barsradialforces_*.dat',Radius,Qt,A2,A4,$ format='F,F,F,F',delim=' ' readcol,'ellipseprofbar/barsradialellipse_*.dat',Radius,ellipticity,b4,$ format='F,F,F',delim=' ' fitsread,'barstackfits/barstack_*.fit',image (10 data files).
Method for removing RFI from SAR images
Doerry, Armin W.
2003-08-19
A method of removing RFI from a SAR by comparing two SAR images on a pixel by pixel basis and selecting the pixel with the lower magnitude to form a composite image. One SAR image is the conventional image produced by the SAR. The other image is created from phase-history data which has been filtered to have the frequency bands containing the RFI removed.
Imaging properties of pixellated scintillators with deep pixels
Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.
2015-01-01
We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10×10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm × 1mm × 20 mm pixels) made by Proteus, Inc. with similar 10×10 arrays of LSO:Ce and BGO (1mm × 1mm × 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10×10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors. PMID:26236070
Imaging properties of pixellated scintillators with deep pixels
NASA Astrophysics Data System (ADS)
Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.
2014-09-01
We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10x10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm x 1mm x 20 mm pixels) made by Proteus, Inc. with similar 10x10 arrays of LSO:Ce and BGO (1mm x 1mm x 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10x10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng
Purpose: In using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by change in detector entrance exposure levels or beam hardening, and they are not accounted by conventional flat field correction methods. In this work, the authors presented a method to identify isolated pixel clusters that exhibit gain variations and proposed a pixel gain correction (PGC) method to suppress both beam hardening and exposure level dependent gain variations. Methods: To modulate both beam spectrum and entrancemore » exposure, flood field FPD projections were acquired using beam filters with varying thicknesses. “Ideal” pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratio of ideal to measured pixel values in filtered images were utilized as pixel-specific gain correction factors, referred as PGC method, and they were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels lead to clustered image artifacts in flat field corrected projections, where 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of PGC method was tested in variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach. When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of change in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.« less
Evaluation of PET Imaging Resolution Using 350 mu{m} Pixelated CZT as a VP-PET Insert Detector
NASA Astrophysics Data System (ADS)
Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2014-02-01
A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied in high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between a CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in virtual-pinhole PET geometry. The LSO detector is a 20 ×20 array, with 1.6 mm pitches, and 10 mm thickness. The CZT detector uses ac 20 ×20 ×5 mm substrate, with 350 μm pitch pixelated anodes and a coplanar cathode. A NEMA NU4 Na-22 point source of 250 μm in diameter was imaged by this system. Experiments show that the image resolution of single-pixel photopeak events was 590 μm FWHM while the image resolution of double-pixel photopeak events was 640 μm FWHM. The inclusion of double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation for the same PET system in Geant4 Application for Emission Tomography. We defined LSO detectors as a scanner ring and 350 μm pixelated CZT detectors as an insert ring. GATE simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of MC-simulated data (which did not factor in positron range and acolinearity effect) was 460 μm at FWHM for single-pixel events. The image resolutions of experimental data, MC simulated data, and theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. The interpolation algorithm for the charge sharing events was also investigated. The PET image that was reconstructed using the interpolation algorithm shows improved image resolution compared with the image resolution without interpolation algorithm.
Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment
NASA Astrophysics Data System (ADS)
Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.
2007-05-01
A pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF at a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineer's design program (Zemax). It describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelets decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it on the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response tabulated on a grid of values of wavelength, position on sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization where the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
Small target detection using bilateral filter and temporal cross product in infrared images
NASA Astrophysics Data System (ADS)
Bae, Tae-Wuk
2011-09-01
We introduce a spatial and temporal target detection method using spatial bilateral filter (BF) and temporal cross product (TCP) of temporal pixels in infrared (IR) image sequences. At first, the TCP is presented to extract the characteristics of temporal pixels by using temporal profile in respective spatial coordinates of pixels. The TCP represents the cross product values by the gray level distance vector of a current temporal pixel and the adjacent temporal pixel, as well as the horizontal distance vector of the current temporal pixel and a temporal pixel corresponding to potential target center. The summation of TCP values of temporal pixels in spatial coordinates makes the temporal target image (TTI), which represents the temporal target information of temporal pixels in spatial coordinates. And then the proposed BF filter is used to extract the spatial target information. In order to predict background without targets, the proposed BF filter uses standard deviations obtained by an exponential mapping of the TCP value corresponding to the coordinate of a pixel processed spatially. The spatial target image (STI) is made by subtracting the predicted image from the original image. Thus, the spatial and temporal target image (STTI) is achieved by multiplying the STI and the TTI, and then targets finally are detected in STTI. In experimental result, the receiver operating characteristics (ROC) curves were computed experimentally to compare the objective performance. From the results, the proposed algorithm shows better discrimination of target and clutters and lower false alarm rates than the existing target detection methods.
A Hopfield neural network for image change detection.
Pajares, Gonzalo
2006-09-01
This paper outlines an optimization relaxation approach based on the analog Hopfield neural network (HNN) for solving the image change detection problem between two images. A difference image is obtained by subtracting pixel by pixel both images. The network topology is built so that each pixel in the difference image is a node in the network. Each node is characterized by its state, which determines if a pixel has changed. An energy function is derived, so that the network converges to stable states. The analog Hopfield's model allows each node to take on analog state values. Unlike most widely used approaches, where binary labels (changed/unchanged) are assigned to each pixel, the analog property provides the strength of the change. The main contribution of this paper is reflected in the customization of the analog Hopfield neural network to derive an automatic image change detection approach. When a pixel is being processed, some existing image change detection procedures consider only interpixel relations on its neighborhood. The main drawback of such approaches is the labeling of this pixel as changed or unchanged according to the information supplied by its neighbors, where its own information is ignored. The Hopfield model overcomes this drawback and for each pixel allows a tradeoff between the influence of its neighborhood and its own criterion. This is mapped under the energy function to be minimized. The performance of the proposed method is illustrated by comparative analysis against some existing image change detection methods.
Fiber pixelated image database
NASA Astrophysics Data System (ADS)
Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke
2016-08-01
Imaging of physically inaccessible parts of the body such as the colon at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on the imaging fiber bundle are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information from an image captured and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own set of images without making source data available which subdued their usage and adaptability universally. A database of pixelated images is the current requirement to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. Sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and would be of great help for researchers working on comb structure removal algorithms.
NASA Astrophysics Data System (ADS)
Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith
2017-02-01
The"scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may spatially vary. In CMOS cameras, the charge to voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 to 1,000 photons / pixel per frame to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.
Fabry-Perot observations of comet Austin
NASA Technical Reports Server (NTRS)
Schultz, David; Scherb, F.; Roesler, F. L.; Li, G.; Harlander, J.; Roberts, T. P. P.; Vandenberk, D.; Nossal, S.; Coakley, M.; Oliversen, Ronald J.
1990-01-01
Preliminary results of a program to observe Comet Austin (1990c1) from 16 April to 4 May and from 11 May to 27 May 1990 using the West Auxiliary of the McMath Solar Telescope on Kitt Peak, Arizona were presetned. The observations were made with a 15 cm duel-etalon Fabry-Perot scanning and imaging spectrometer with two modes of operation: a high resolution mode with a velocity resolution of 1.2 km/s and a medium resolution mode with a velocity resolution 10 km/s. Scanning data was obtained with an RCA C31034A photomultiplier tube and imaging data was obtained with a Photometrics LN2 cooled CCD camera with a 516 by 516 Ford chip. The results include: (1) information on the coma outflow velocity from high resolution spectral profiles of (OI)6300 and NH2 emissions, (2) gaseous water production rates from medium resolution observation of (OI)6300, (3) spectra of H2O(+) emissions in order to study the ionized component of the coma, (4) spatial distribution of H2O(+) emission features from sequences of velocity resolved images (data cubes), and (5) spatial distribution of (OI)6300 and NH2 emissions from medium resolution images. The field of view on the sky was 10.5 arcminutes in diameter. In the imaging mode the CCD was binned 4 by 4 resulting in 7.6 sec power pixel and a subarray readout for a field of view of 10.5 min.
Adaptive box filters for removal of random noise from digital images
Eliason, E.M.; McEwen, A.S.
1990-01-01
We have developed adaptive box-filtering algorithms to (1) remove random bit errors (pixel values with no relation to the image scene) and (2) smooth noisy data (pixels related to the image scene but with an additive or multiplicative component of noise). For both procedures, we use the standard deviation (??) of those pixels within a local box surrounding each pixel, hence they are adaptive filters. This technique effectively reduces speckle in radar images without eliminating fine details. -from Authors
NASA Technical Reports Server (NTRS)
Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Bandera, Cesar (Inventor); Xia, Shu (Inventor)
2002-01-01
A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.
NASA Technical Reports Server (NTRS)
Boardman, J. W.; Pieters, C. M.; Green, R. O.; Clark, R. N.; Sunshine, J.; Combe, J.-P.; Isaacson, P.; Lundeen, S. R.; Malaret, E.; McCord, T.;
2010-01-01
The Moon Mineralogy Mapper (M3), a NASA Discovery Mission of Opportunity, was launched October 22, 2008 from Shriharikota in India on board the Indian ISRO Chandrayaan- 1 spacecraft for a nominal two-year mission in a 100-km polar lunar orbit. M3 is a high-fidelity imaging spectrometer with 260 spectral bands in Target Mode and 85 spectral bands in a reduced-resolution Global Mode. Target Mode pixel sizes are nominally 70 meters and Global pixels (binned 2 by 2) are 140 meters, from the planned 100-km orbit. The mission was cut short, just before halfway, in August, 2009 when the spacecraft ceased operations. Despite the abbreviated mission and numerous technical and scientific challenges during the flight, M3 was able to cover more than 95% of the Moon in Global Mode. These data, presented and analyzed here as a global whole, are revolutionizing our understanding of the Moon. Already, numerous discoveries relating to volatiles and unexpected mineralogy have been published [1], [2], [3]. The rich spectral and spatial information content of the M3 data indicates that many more discoveries and an improved understanding of the mineralogy, geology, photometry, thermal regime and volatile status of our nearest neighbor are forthcoming from these data. Sadly, only minimal high-resolution Target Mode images were acquired, as these were to be the focus of the second half of the mission. This abstract gives the reader a global overview of all the M3 data that were collected and an introduction to their rich spectral character and complexity. We employ a Principal Components statistical method to assess the underlying dimensionality of the Moon as a whole, as seen by M3, and to identify numerous areas that are low-probability targets and thus of potential interest to selenologists.
Large area x-ray detectors for cargo radiography
NASA Astrophysics Data System (ADS)
Bueno, C.; Albagli, D.; Bendahan, J.; Castleberry, D.; Gordon, C.; Hopkins, F.; Ross, W.
2007-04-01
Large area x-ray detectors based on phosphors coupled to flat panel amorphous silicon diode technology offer significant advances for cargo radiologic imaging. Flat panel area detectors provide large object coverage offering high throughput inspections to meet the high flow rate of container commerce. These detectors provide excellent spatial resolution when needed, and enhanced SNR through low noise electronics. If the resolution is reduced through pixel binning, further advances in SNR are achievable. Extended exposure imaging and frame averaging enables improved x-ray penetration of ultra-thick objects, or "select-your-own" contrast sensitivity at a rate many times faster than LDAs. The areal coverage of flat panel technology provides inherent volumetric imaging with the appropriate scanning methods. Flat panel area detectors have flexible designs in terms of electronic control, scintillator selection, pixel pitch, and frame rates. Their cost is becoming more competitive as production ramps up for the healthcare, nondestructive testing (NDT), and homeland protection industries. Typically used medical and industrial polycrystalline phosphor materials such as Gd2O2S:Tb (GOS) can be applied to megavolt applications if the phosphor layer is sufficiently thick to enhance x-ray absorption, and if a metal radiator is used to augment the quantum detection efficiency and reduce x-ray scatter. Phosphor layers ranging from 0.2-mm to 1-mm can be "sandwiched" between amorphous silicon flat panel diode arrays and metal radiators. Metal plates consisting of W, Pb or Cu, with thicknesses ranging from 0.25-mm to well over 1-mm can be used by covering the entire area of the phosphor plate. In some combinations of high density metal and phosphor layers, the metal plate provides an intensification of 25% in signal due to electron emission from the plate and subsequent excitation within the phosphor material. This further improves the SNR of the system.
Color constancy using bright-neutral pixels
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Luo, Yupin
2014-03-01
An effective illuminant-estimation approach for color constancy is proposed. Bright and near-neutral pixels are selected to jointly represent the illuminant color and utilized for illuminant estimation. To assess the representing capability of pixels, bright-neutral strength (BNS) is proposed by combining pixel chroma and brightness. Accordingly, a certain percentage of pixels with the largest BNS is selected to be the representative set. For every input image, a proper percentage value is determined via an iterative strategy by seeking the optimal color-corrected image. To compare various color-corrected images of an input image, image color-cast degree (ICCD) is devised using means and standard deviations of RGB channels. Experimental evaluation on standard real-world datasets validates the effectiveness of the proposed approach.
Algorithm for Detecting a Bright Spot in an Image
NASA Technical Reports Server (NTRS)
2009-01-01
An algorithm processes the pixel intensities of a digitized image to detect and locate a circular bright spot, the approximate size of which is known in advance. The algorithm is used to find images of the Sun in cameras aboard the Mars Exploration Rovers. (The images are used in estimating orientations of the Rovers relative to the direction to the Sun.) The algorithm can also be adapted to tracking of circular shaped bright targets in other diverse applications. The first step in the algorithm is to calculate a dark-current ramp a correction necessitated by the scheme that governs the readout of pixel charges in the charge-coupled-device camera in the original Mars Exploration Rover application. In this scheme, the fraction of each frame period during which dark current is accumulated in a given pixel (and, hence, the dark-current contribution to the pixel image-intensity reading) is proportional to the pixel row number. For the purpose of the algorithm, the dark-current contribution to the intensity reading from each pixel is assumed to equal the average of intensity readings from all pixels in the same row, and the factor of proportionality is estimated on the basis of this assumption. Then the product of the row number and the factor of proportionality is subtracted from the reading from each pixel to obtain a dark-current-corrected intensity reading. The next step in the algorithm is to determine the best location, within the overall image, for a window of N N pixels (where N is an odd number) large enough to contain the bright spot of interest plus a small margin. (In the original application, the overall image contains 1,024 by 1,024 pixels, the image of the Sun is about 22 pixels in diameter, and N is chosen to be 29.)
Limited-angle effect compensation for respiratory binned cardiac SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Wenyuan; Yang, Yongyi, E-mail: yy@ece.iit.edu; Wernick, Miles N.
Purpose: In cardiac single photon emission computed tomography (SPECT), respiratory-binned study is used to combat the motion blur associated with respiratory motion. However, owing to the variability in respiratory patterns during data acquisition, the acquired data counts can vary significantly both among respiratory bins and among projection angles within individual bins. If not properly accounted for, such variation could lead to artifacts similar to limited-angle effect in image reconstruction. In this work, the authors aim to investigate several reconstruction strategies for compensating the limited-angle effect in respiratory binned data for the purpose of reducing the image artifacts. Methods: The authorsmore » first consider a model based correction approach, in which the variation in acquisition time is directly incorporated into the imaging model, such that the data statistics are accurately described among both the projection angles and respiratory bins. Afterward, the authors consider an approximation approach, in which the acquired data are rescaled to accommodate the variation in acquisition time among different projection angles while the imaging model is kept unchanged. In addition, the authors also consider the use of a smoothing prior in reconstruction for suppressing the artifacts associated with limited-angle effect. In our evaluation study, the authors first used Monte Carlo simulated imaging with 4D NCAT phantom wherein the ground truth is known for quantitative comparison. The authors evaluated the accuracy of the reconstructed myocardium using a number of metrics, including regional and overall accuracy of the myocardium, uniformity and spatial resolution of the left ventricle (LV) wall, and detectability of perfusion defect using a channelized Hotelling observer. As a preliminary demonstration, the authors also tested the different approaches on five sets of clinical acquisitions. Results: The quantitative evaluation results show that the three compensation methods could all, but to different extents, reduce the reconstruction artifacts over no compensation. In particular, the model based approach reduced the mean-squared-error of the reconstructed myocardium by as much as 40%. Compared to the approach of data rescaling, the model based approach further improved both the overall and regional accuracy of the myocardium; it also further improved the lesion detectability and the uniformity of the LV wall. When ML reconstruction was used, the model based approach was notably more effective for improving the LV wall; when MAP reconstruction was used, the smoothing prior could reduce the noise level and artifacts with little or no increase in bias, but at the cost of a slight resolution loss of the LV wall. The improvements in image quality by the different compensation methods were also observed in the clinical acquisitions. Conclusions: Compensating for the uneven distribution of acquisition time among both projection angles and respiratory bins can effectively reduce the limited-angle artifacts in respiratory-binned cardiac SPECT reconstruction. Direct incorporation of the time variation into the imaging model together with a smoothing prior in reconstruction can lead to the most improvement in the accuracy of the reconstructed myocardium.« less
Kieper, Douglas Arthur [Seattle, WA; Majewski, Stanislaw [Morgantown, WV; Welch, Benjamin L [Hampton, VA
2012-07-03
An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.
Kieper, Douglas Arthur [Newport News, VA; Majewski, Stanislaw [Yorktown, VA; Welch, Benjamin L [Hampton, VA
2008-10-28
An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
Variable waveband infrared imager
Hunter, Scott R.
2013-06-11
A waveband imager includes an imaging pixel that utilizes photon tunneling with a thermally actuated bimorph structure to convert infrared radiation to visible radiation. Infrared radiation passes through a transparent substrate and is absorbed by a bimorph structure formed with a pixel plate. The absorption generates heat which deflects the bimorph structure and pixel plate towards the substrate and into an evanescent electric field generated by light propagating through the substrate. Penetration of the bimorph structure and pixel plate into the evanescent electric field allows a portion of the visible wavelengths propagating through the substrate to tunnel through the substrate, bimorph structure, and/or pixel plate as visible radiation that is proportional to the intensity of the incident infrared radiation. This converted visible radiation may be superimposed over visible wavelengths passed through the imaging pixel.
The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.
2017-02-01
Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits including increasingly high pixel counts and shrinking pixel sizes, nevertheless, they are also being hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. Recently invented is the Coded Access Optical Sensor (CAOS) Camera platform that works in unison with current Photo-Detector Array (PDA) technology to counter fundamental limitations of PDA-based imagers while providing high enough imaging spatial resolution and pixel counts. Using for example the Texas Instruments (TI) Digital Micromirror Device (DMD) to engineer the CAOS camera platform, ushered in is a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
Photodiode area effect on performance of X-ray CMOS active pixel sensors
NASA Astrophysics Data System (ADS)
Kim, M. S.; Kim, Y.; Kim, G.; Lim, K. T.; Cho, G.; Kim, D.
2018-02-01
Compared to conventional TFT-based X-ray imaging devices, CMOS-based X-ray imaging sensors are considered next generation because they can be manufactured in very small pixel pitches and can acquire high-speed images. In addition, CMOS-based sensors have the advantage of integration of various functional circuits within the sensor. The image quality can also be improved by the high fill-factor in large pixels. If the size of the subject is small, the size of the pixel must be reduced as a consequence. In addition, the fill factor must be reduced to aggregate various functional circuits within the pixel. In this study, 3T-APS (active pixel sensor) with photodiodes of four different sizes were fabricated and evaluated. It is well known that a larger photodiode leads to improved overall performance. Nonetheless, if the size of the photodiode is > 1000 μm2, the degree to which the sensor performance increases as the photodiode size increases, is reduced. As a result, considering the fill factor, pixel-pitch > 32 μm is not necessary to achieve high-efficiency image quality. In addition, poor image quality is to be expected unless special sensor-design techniques are included for sensors with a pixel pitch of 25 μm or less.
Evaluation of the MTF for a-Si:H imaging arrays
NASA Astrophysics Data System (ADS)
Yorkston, John; Antonuk, Larry E.; Seraji, N.; Huang, Weidong; Siewerdsen, Jeffrey H.; El-Mohri, Youcef
1994-05-01
Hydrogenated amorphous silicon imaging arrays are being developed for numerous applications in medical imaging. Diagnostic and megavoltage images have previously been reported and a number of the intrinsic properties of the arrays have been investigated. This paper reports on the first attempt to characterize the intrinsic spatial resolution of the imaging pixels on a 450 micrometers pitch, n-i-p imaging array fabricated at Xerox P.A.R.C. The pre- sampled modulation transfer function was measured by scanning a approximately 25 micrometers wide slit of visible wavelength light across a pixel in both the DATA and FET directions. The results show that the response of the pixel in these orthogonal directions is well described by a simple model that accounts for asymmetries in the pixel response due to geometric aspects of the pixel design.
Multiple image encryption scheme based on pixel exchange operation and vector decomposition
NASA Astrophysics Data System (ADS)
Xiong, Y.; Quan, C.; Tay, C. J.
2018-02-01
We propose a new multiple image encryption scheme based on a pixel exchange operation and a basic vector decomposition in Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images encrypted into phase information are imported using the proposed algorithm and phase keys are obtained from the difference between scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input in a random phase encoding (DRPE) scheme. In the proposed encryption scheme, pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
Color filter array pattern identification using variance of color difference image
NASA Astrophysics Data System (ADS)
Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu
2017-07-01
A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel. Therefore, empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming the color filter array pattern. We present an identification method of the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, the color difference block is constructed to emphasize the difference between the original pixel and the interpolated pixel. The variance measure of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2.
NASA Astrophysics Data System (ADS)
Devarakonda, R.
2014-12-01
Daymet: Daily Surface Weather Data and Climatological Summaries provides gridded estimates of daily weather parameters for North America, including daily continuous surfaces of minimum and maximum temperature, precipitation occurrence and amount, humidity, shortwave radiation, snow water equivalent, and day length. The current data product (Version 2) covers the period January 1, 1980 to December 31, 2013 [1]. Data are available on a daily time step at a 1-km x 1-km spatial resolution in Lambert Conformal Conic projection with a spatial extent that covers the North America as meteorological station density allows. Daymet data can be downloaded from 1) the ORNL Distributed Active Archive Center (DAAC) search and order tools (http://daac.ornl.gov/cgi-bin/cart/add2cart.pl?add=1219) or directly from the DAAC FTP site (http://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=1219) and 2) the Single Pixel Tool (http://daymet.ornl.gov/singlepixel.html) and THREDDS (Thematic Real-time Environmental Data Services) Data Server (TDS) (http://daymet.ornl.gov/thredds_mosaics.html). The Single Pixel Data Extraction Tool [2] allows users to enter a single geographic point by latitude and longitude in decimal degrees. A routine is executed that translates the (lon, lat) coordinates into projected Daymet (x,y) coordinates. These coordinates are used to access the Daymet database of daily-interpolated surface weather variables. The Single Pixel Data Extraction Tool also provides the option to download multiple coordinates programmatically. The ORNL DAAC's TDS provides customized visualization and access to Daymet time series of North American mosaics. Users can subset and download Daymet data via a variety of community standards, including OPeNDAP, NetCDF Subset service, and Open Geospatial Consortium (OGC) Web Map/Coverage Service. References: [1] Thornton, P. E., Thornton, M. M., Mayer, B. W., Wilhelmi, N., Wei, Y., Devarakonda, R., & Cook, R. (2012). "Daymet: Daily surface weather on a 1 km grid for North America, 1980-2008". Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center for Biogeochemical Dynamics (DAAC), 1. [2] Devarakonda R., et al. 2012. Daymet: Single Pixel Data Extraction Tool. Available [http://daymet.ornl.go/singlepixel.html].
Spatial clustering of pixels of a multispectral image
Conger, James Lynn
2014-08-19
A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and the most similar neighboring. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy
NASA Astrophysics Data System (ADS)
Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan
2016-03-01
Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel-array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in e.g., mobile-phones. Here we report, for the first time, a wavelength scanning based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor-array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a halfpitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength scanning based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited-settings.
A Decision-Based Modified Total Variation Diffusion Method for Impulse Noise Removal
Zhu, Qingxin; Song, Xiuli; Tao, Jinsong
2017-01-01
Impulsive noise removal usually employs median filtering, switching median filtering, the total variation L1 method, and variants. These approaches however often introduce excessive smoothing and can result in extensive visual feature blurring and thus are suitable only for images with low density noise. A new method to remove noise is proposed in this paper to overcome this limitation, which divides pixels into different categories based on different noise characteristics. If an image is corrupted by salt-and-pepper noise, the pixels are divided into corrupted and noise-free; if the image is corrupted by random valued impulses, the pixels are divided into corrupted, noise-free, and possibly corrupted. Pixels falling into different categories are processed differently. If a pixel is corrupted, modified total variation diffusion is applied; if the pixel is possibly corrupted, weighted total variation diffusion is applied; otherwise, the pixel is left unchanged. Experimental results show that the proposed method is robust to different noise strengths and suitable for different images, with strong noise removal capability as shown by PSNR/SSIM results as well as the visual quality of restored images. PMID:28536602
SAR Image Change Detection Based on Fuzzy Markov Random Field Model
NASA Astrophysics Data System (ADS)
Zhao, J.; Huang, G.; Zhao, Z.
2018-04-01
Most existing SAR image change detection algorithms only consider single pixel information of different images, and not consider the spatial dependencies of image pixels. So the change detection results are susceptible to image noise, and the detection effect is not ideal. Markov Random Field (MRF) can make full use of the spatial dependence of image pixels and improve detection accuracy. When segmenting the difference image, different categories of regions have a high degree of similarity at the junction of them. It is difficult to clearly distinguish the labels of the pixels near the boundaries of the judgment area. In the traditional MRF method, each pixel is given a hard label during iteration. So MRF is a hard decision in the process, and it will cause loss of information. This paper applies the combination of fuzzy theory and MRF to the change detection of SAR images. The experimental results show that the proposed method has better detection effect than the traditional MRF method.
Image indexing using color correlograms
Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing
2001-01-01
A color correlogram is a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c.sub.i . . . c.sub.m. Also, the distance values k.epsilon.[d] to be used in the correlogram are determined where [d] is the set of distances between pixels in the image, and where dmax is the maximum distance measurement between pixels in the image. Each entry (i, j, k) in the table is the probability of finding a pixel of color c.sub.i at a selected distance k from a pixel of color c.sub.i. A color autocorrelogram, which is a restricted version of the color correlogram that considers color pairs of the form (i,i) only, may also be used to identify an image.
Alivov, Yahya; Baturin, Pavlo; Le, Huy Q; Ducote, Justin; Molloi, Sabee
2014-01-06
We investigated the effect of different imaging parameters, such as dose, beam energy, energy resolution and the number of energy bins, on the image quality of K-edge spectral computed tomography (CT) of gold nanoparticles (GNP) accumulated in an atherosclerotic plaque. A maximum likelihood technique was employed to estimate the concentration of GNP, which served as a targeted intravenous contrast material intended to detect the degree of the plaque's inflammation. The simulation studies used a single-slice parallel beam CT geometry with an x-ray beam energy ranging between 50 and 140 kVp. The synthetic phantoms included small (3 cm in diameter) cylinder and chest (33 × 24 cm(2)) phantoms, where both phantoms contained tissue, calcium and gold. In the simulation studies, GNP quantification and background (calcium and tissue) suppression tasks were pursued. The x-ray detection sensor was represented by an energy resolved photon counting detector (e.g., CdZnTe) with adjustable energy bins. Both ideal and more realistic (12% full width at half maximum (FWHM) energy resolution) implementations of the photon counting detector were simulated. The simulations were performed for the CdZnTe detector with a pixel pitch of 0.5-1 mm, which corresponds to a performance without significant charge sharing and cross-talk effects. The Rose model was employed to estimate the minimum detectable concentration of GNPs. A figure of merit (FOM) was used to optimize the x-ray beam energy (kVp) to achieve the highest signal-to-noise ratio with respect to the patient dose. As a result, the successful identification of gold and background suppression was demonstrated. The highest FOM was observed at the 125 kVp x-ray beam energy. The minimum detectable GNP concentration was determined to be approximately 1.06 µmol mL(-1) (0.21 mg mL(-1)) for an ideal detector and about 2.5 µmol mL(-1) (0.49 mg mL(-1)) for a more realistic (12% FWHM) detector. The studies show the optimal imaging parameters at the lowest patient dose using an energy resolved photon counting detector to image GNP in an atherosclerotic plaque.
Hannan, M A; Arebey, Maher; Begum, R A; Basri, Hassan
2011-12-01
This paper deals with a system of integration of Radio Frequency Identification (RFID) and communication technologies for solid waste bin and truck monitoring system. RFID, GPS, GPRS and GIS along with camera technologies have been integrated and developed the bin and truck intelligent monitoring system. A new kind of integrated theoretical framework, hardware architecture and interface algorithm has been introduced between the technologies for the successful implementation of the proposed system. In this system, bin and truck database have been developed such a way that the information of bin and truck ID, date and time of waste collection, bin status, amount of waste and bin and truck GPS coordinates etc. are complied and stored for monitoring and management activities. The results showed that the real-time image processing, histogram analysis, waste estimation and other bin information have been displayed in the GUI of the monitoring system. The real-time test and experimental results showed that the performance of the developed system was stable and satisfied the monitoring system with high practicability and validity. Copyright © 2011 Elsevier Ltd. All rights reserved.
2005-06-01
Time Fourier Transform WVD Wigner - Ville Distribution GA Genetic Algorithm PSO Particle Swarm Optimization JEM Jet Engine Modulation CPI...of the Wigner - Ville Distribution ( WVD ), cross-terms appear in the time-frequency image. As shown in Figure 9, which is a WVD of range bin 31 of...14 Figure 9. Wigner - Ville Distribution of Unfocused Range Bin 31 (After [3] and [5].) ...15
Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor
NASA Astrophysics Data System (ADS)
Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji
2006-02-01
We have developed an experimental single-chip color HDTV image acquisition system using 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system we used to measured spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high quality single chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the frequency band for HDTV.
Mode Transitions in Hall Effect Thrusters
2013-07-01
bM = number of pixels per bin m = spoke order 0m = spoke order m = 0 em = electron mass, 9.1110 -31 kg im = Xe ion mass, 2.18×10 -25...periodogram spectral estimate, Arb Hz -1 eT = electron temperature eT = electron temperature parallel to magnetic field, eV eT = electron ...Fourier transform of x(t) = inverse angle from 2D DFT, deg-1 = mean electron energy, eV * = material dependent cross-over energy, eV xy
The Brown Dwarf Kinematics Project (BDKP. III. Parallaxes for 70 Ultracool Dwarfs
2012-06-10
highest mass exoplanets (Saumon et al. 1996; Chabrier & Baraffe 1997). In early 2000, the standard stellar spectral classification scheme was extended...Journal, 752:56 (22pp), 2012 June 10 Faherty et al. routine xdimsum was used to perform sky subtractions and mask holes from bright stars.13 3. PARALLAX...epoch. The precise centroids of the stars were measured by binning the stellar profile in the X and Y directions using a box of ∼2′′ around the pixel
Random On-Board Pixel Sampling (ROPS) X-Ray Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui; Iaroshenko, O.; Li, S.
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustratemore » the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.« less
A compressed sensing X-ray camera with a multilayer architecture
NASA Astrophysics Data System (ADS)
Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.
2018-01-01
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe
2014-12-25
Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m under nadir) have limited their capability in applying to many studies required high spatial resolution in comparison of the MODIS VNIR band data with pixel scale of 250-500 m. In this paper we intend to develop an efficient approach for pixel decomposition to increase the spatial resolution of MODIS LST image using the VNIR band data as assistance. The unique feature of this approach is to maintain the thermal radiance of parent pixels in the MODIS LST image unchanged after they are decomposed into the sub-pixels in the resulted image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. Therefore the approach can be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final result of decomposed LST image, including classification of the surface patterns, establishment of LST change with normalized difference of vegetation index (NDVI) and building index (NDBI), reversion of LST into thermal radiance through Planck equation, and computation of weights for the sub-pixels of the resulted image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) with much higher spatial resolution than MODIS data was on-board the same platform (Terra) as MODIS for Earth observation, an experiment had been done in the study to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference to compare with the decomposed LST image. The result showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image with a root mean square error (RMSE) of 2.7 K for entire image. Comparison with the evaluation DisTrad (E-DisTrad) and re-sampling methods for pixel decomposition also indicate that our DSPD has the lowest RMSE in all cases, including urban region, water bodies, and natural terrain. The obvious increase in spatial resolution remarkably uplifts the capability of the coarse MODIS LST images in highlighting the details of LST variation. Therefore it can be concluded that, in spite of complicated procedures, the proposed DSPD approach provides an alternative to improve the spatial resolution of MODIS LST image hence expand its applicability to the real world.
Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications
NASA Astrophysics Data System (ADS)
Ermeydan, Esra Şengün; ćankaya, Ilyas
2018-01-01
Compressed Sensing (CS) with Cyclic-S Hadamard matrix is proposed for single pixel imaging applications in this study. In single pixel imaging scheme, N = r . c samples should be taken for r×c pixel image where . denotes multiplication. CS is a popular technique claiming that the sparse signals can be reconstructed with samples under Nyquist rate. Therefore to solve the slow data acquisition problem in Terahertz (THz) single pixel imaging, CS is a good candidate. However, changing mask for each measurement is a challenging problem since there is no commercial Spatial Light Modulators (SLM) for THz band yet, therefore circular masks are suggested so that for each measurement one or two column shifting will be enough to change the mask. The CS masks are designed using cyclic-S matrices based on Hadamard transform for 9 × 7 and 15 × 17 pixel images within the framework of this study. The %50 compressed images are reconstructed using total variation based TVAL3 algorithm. Matlab simulations demonstrates that cyclic-S matrices can be used for single pixel imaging based on CS. The circular masks have the advantage to reduce the mechanical SLMs to a single sliding strip, whereas the CS helps to reduce acquisition time and energy since it allows to reconstruct the image from fewer samples.
Roughness effects on thermal-infrared emissivities estimated from remotely sensed images
NASA Astrophysics Data System (ADS)
Mushkin, Amit; Danilina, Iryna; Gillespie, Alan R.; Balick, Lee K.; McCabe, Matthew F.
2007-10-01
Multispectral thermal-infrared images from the Mauna Loa caldera in Hawaii, USA are examined to study the effects of surface roughness on remotely retrieved emissivities. We find up to a 3% decrease in spectral contrast in ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) 90-m/pixel emissivities due to sub-pixel surface roughness variations on the caldera floor. A similar decrease in spectral contrast of emissivities extracted from MASTER (MODIS/ASTER Airborne Simulator) ~12.5-m/pixel data can be described as a function of increasing surface roughness, which was measured remotely from ASTER 15-m/pixel stereo images. The ratio between ASTER stereo images provides a measure of sub-pixel surface-roughness variations across the scene. These independent roughness estimates complement a radiosity model designed to quantify the unresolved effects of multiple scattering and differential solar heating due to sub-pixel roughness elements and to compensate for both sub-pixel temperature dispersion and cavity radiation on TIR measurements.
A CMOS image sensor with programmable pixel-level analog processing.
Massari, Nicola; Gottardi, Massimo; Gonzo, Lorenzo; Stoppa, David; Simoni, Andrea
2005-11-01
A prototype of a 34 x 34 pixel image sensor, implementing real-time analog image processing, is presented. Edge detection, motion detection, image amplification, and dynamic-range boosting are executed at pixel level by means of a highly interconnected pixel architecture based on the absolute value of the difference among neighbor pixels. The analog operations are performed over a kernel of 3 x 3 pixels. The square pixel, consisting of 30 transistors, has a pitch of 35 microm with a fill-factor of 20%. The chip was fabricated in a 0.35 microm CMOS technology, and its power consumption is 6 mW with 3.3 V power supply. The device was fully characterized and achieves a dynamic range of 50 dB with a light power density of 150 nW/mm2 and a frame rate of 30 frame/s. The measured fixed pattern noise corresponds to 1.1% of the saturation level. The sensor's dynamic range can be extended up to 96 dB using the double-sampling technique.
A hyperspectral image optimizing method based on sub-pixel MTF analysis
NASA Astrophysics Data System (ADS)
Wang, Yun; Li, Kai; Wang, Jinqiang; Zhu, Yajie
2015-04-01
Hyperspectral imaging is used to collect tens or hundreds of images continuously divided across electromagnetic spectrum so that the details under different wavelengths could be represented. A popular hyperspectral imaging methods uses a tunable optical band-pass filter settled in front of the focal plane to acquire images of different wavelengths. In order to alleviate the influence of chromatic aberration in some segments in a hyperspectral series, in this paper, a hyperspectral optimizing method uses sub-pixel MTF to evaluate image blurring quality was provided. This method acquired the edge feature in the target window by means of the line spread function (LSF) to calculate the reliable position of the edge feature, then the evaluation grid in each line was interpolated by the real pixel value based on its relative position to the optimal edge and the sub-pixel MTF was used to analyze the image in frequency domain, by which MTF calculation dimension was increased. The sub-pixel MTF evaluation was reliable, since no image rotation and pixel value estimation was needed, and no artificial information was introduced. With theoretical analysis, the method proposed in this paper is reliable and efficient when evaluation the common images with edges of small tilt angle in real scene. It also provided a direction for the following hyperspectral image blurring evaluation and the real-time focal plane adjustment in real time in related imaging system.
In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function
NASA Astrophysics Data System (ADS)
Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir
2018-03-01
We present results of inflight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full array image of a point-like source by extracting a pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the raytracing model of 1 {^'.} 2 half-power-diameter is consistent with an image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the raytracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1 σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.
Villiger, Martin; Zhang, Ellen Ziyi; Nadkarni, Seemantini K.; Oh, Wang-Yuhl; Vakoc, Benjamin J.; Bouma, Brett E.
2013-01-01
Polarization mode dispersion (PMD) has been recognized as a significant barrier to sensitive and reproducible birefringence measurements with fiber-based, polarization-sensitive optical coherence tomography systems. Here, we present a signal processing strategy that reconstructs the local retardation robustly in the presence of system PMD. The algorithm uses a spectral binning approach to limit the detrimental impact of system PMD and benefits from the final averaging of the PMD-corrected retardation vectors of the spectral bins. The algorithm was validated with numerical simulations and experimental measurements of a rubber phantom. When applied to the imaging of human cadaveric coronary arteries, the algorithm was found to yield a substantial improvement in the reconstructed birefringence maps. PMID:23938487
Müllner, Marie; Schlattl, Helmut; Hoeschen, Christoph; Dietrich, Olaf
2015-12-01
To demonstrate the feasibility of gold-specific spectral CT imaging for the detection of liver lesions in humans at low concentrations of gold as targeted contrast agent. A Monte Carlo simulation study of spectral CT imaging with a photon-counting and energy-resolving detector (with 6 energy bins) was performed in a realistic phantom of the human abdomen. The detector energy thresholds were optimized for the detection of gold. The simulation results were reconstructed with the K-edge imaging algorithm; the reconstructed gold-specific images were filtered and evaluated with respect to signal-to-noise ratio and contrast-to-noise ratio (CNR). The simulations demonstrate the feasibility of spectral CT with CNRs of the specific gold signal between 2.7 and 4.8 after bilateral filtering. Using the optimized bin thresholds increases the CNRs of the lesions by up to 23% compared to bin thresholds described in former studies. Gold is a promising new CT contrast agent for spectral CT in humans; minimum tissue mass fractions of 0.2 wt% of gold are required for sufficient image contrast. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
VizieR Online Data Catalog: LY And photometric followup (Lu+, 2017)
NASA Astrophysics Data System (ADS)
Lu, H.-P.; Zhang, L.-Y.; Han, X. L.; Pi, Q.-F.; Wang, D.-M.
2017-04-01
We obtained our first photometric data set in R and I bands for LY And on November 24, 2014 using the 1-m RCC reflecting telescope at Yunnan Observatory, which was equipped with an Andor DW436 2048x2048 CCD camera with a field of view of 7.3'x7.3'. The exposure times were 300s for both R and I bands. We obtained our second photometric data set in B, V, R and I bands using the SARA 914-mm telescope at Kitt Peak National Observatory on October 23, 2015. This telescope was equipped with a 2048x2048 pixels CCD and each pixel after 2x2 binning is about 0.86". The exposure times were 120s in B band and 60 s in V, R and I bands, respectively. (3 data files).
NASA Astrophysics Data System (ADS)
Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Adler, Dorit D.; Blane, Caroline E.; Joynt, Lynn K.; Paramagul, Chintana; Roubidoux, Marilyn A.; Wilson, Todd E.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.
1999-05-01
A receiver operating characteristic (ROC) experiment was conducted to evaluate the effects of pixel size on the characterization of mammographic microcalcifications. Digital mammograms were obtained by digitizing screen-film mammograms with a laser film scanner. One hundred twelve two-view mammograms with biopsy-proven microcalcifications were digitized at a pixel size of 35 micrometer X 35 micrometer. A region of interest (ROI) containing the microcalcifications was extracted from each image. ROI images with pixel sizes of 70 micrometers, 105 micrometers, and 140 micrometers were derived from the ROI of 35 micrometer pixel size by averaging 2 X 2, 3 X 3, and 4 X 4 neighboring pixels, respectively. The ROI images were printed on film with a laser imager. Seven MQSA-approved radiologists participated as observers. The likelihood of malignancy of the microcalcifications was rated on a 10-point confidence rating scale and analyzed with ROC methodology. The classification accuracy was quantified by the area, Az, under the ROC curve. The statistical significance of the differences in the Az values for different pixel sizes was estimated with the Dorfman-Berbaum-Metz (DBM) method for multi-reader, multi-case ROC data. It was found that five of the seven radiologists demonstrated a higher classification accuracy with the 70 micrometer or 105 micrometer images. The average Az also showed a higher classification accuracy in the range of 70 to 105 micrometer pixel size. However, the differences in A(subscript z/ between different pixel sizes did not achieve statistical significance. The low specificity of image features of microcalcifications an the large interobserver and intraobserver variabilities may have contributed to the relatively weak dependence of classification accuracy on pixel size.
CMOS image sensors: State-of-the-art
NASA Astrophysics Data System (ADS)
Theuwissen, Albert J. P.
2008-09-01
This paper gives an overview of the state-of-the-art of CMOS image sensors. The main focus is put on the shrinkage of the pixels : what is the effect on the performance characteristics of the imagers and on the various physical parameters of the camera ? How is the CMOS pixel architecture optimized to cope with the negative performance effects of the ever-shrinking pixel size ? On the other hand, the smaller dimensions in CMOS technology allow further integration on column level and even on pixel level. This will make CMOS imagers even smarter that they are already.
Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S
2008-01-01
A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user friendly implementation of the interface along with the high framerate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
Keleshis, C; Ionita, Cn; Yadava, G; Patel, V; Bednarek, Dr; Hoffmann, Kr; Verevkin, A; Rudin, S
2008-01-01
A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user friendly implementation of the interface along with the high framerate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873).
Methods in quantitative image analysis.
Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M
1996-05-01
The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic-noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is made by a camera. The most modern types include a frame-grabber, converting the analog-to-digital signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits. Eight bits are summarised in one byte. Therefore, grey values can have a value between 0 and 256 (2(8)). The human eye seems to be quite content with a display of 5-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value histogram of an existing image (input image) into a new grey value histogram (output image) are most quickly handled by a look-up table (LUT). The histogram of an image can be influenced by gain, offset and gamma of the camera. Gain defines the voltage range, offset defines the reference voltage and gamma the slope of the regression line between the light intensity and the voltage of the camera. A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix. The distance between the pixels (original pixel and its neighbouring pixel) can influence the various parameters calculated from the co-occurrence matrix. The main goals of image enhancement are elimination of surface roughness in an image (smoothing), correction of defects (e.g. noise), extraction of edges, identification of points, strengthening texture elements and improving contrast. In enhancement, two types of operations can be distinguished: pixel-based (point operations) and neighbourhood-based (matrix operations). The most important pixel-based operations are linear stretching of grey values, application of pre-stored LUTs and histogram equalisation. The neighbourhood-based operations work with so-called filters. These are organising elements with an original or initial point in their centre. Filters can be used to accentuate or to suppress specific structures within the image. Filters can work either in the spatial or in the frequency domain. The method used for analysing alterations of grey value intensities in the frequency domain is the Hartley transform. Filter operations in the spatial domain can be based on averaging or ranking the grey values occurring in the organising element. The most important filters, which are usually applied, are the Gaussian filter and the Laplace filter (both averaging filters), and the median filter, the top hat filter and the range operator (all ranking filters). Segmentation of objects is traditionally based on threshold grey values. (AB
Single-pixel non-imaging object recognition by means of Fourier spectrum acquisition
NASA Astrophysics Data System (ADS)
Chen, Huichao; Shi, Jianhong; Liu, Xialin; Niu, Zhouzhou; Zeng, Guihua
2018-04-01
Single-pixel imaging has emerged over recent years as a novel imaging technique, which has significant application prospects. In this paper, we propose and experimentally demonstrate a scheme that can achieve single-pixel non-imaging object recognition by acquiring the Fourier spectrum. In an experiment, a four-step phase-shifting sinusoid illumination light is used to irradiate the object image, the value of the light intensity is measured with a single-pixel detection unit, and the Fourier coefficients of the object image are obtained by a differential measurement. The Fourier coefficients are first cast into binary numbers to obtain the hash value. We propose a new method of perceptual hashing algorithm, which is combined with a discrete Fourier transform to calculate the hash value. The hash distance is obtained by calculating the difference of the hash value between the object image and the contrast images. By setting an appropriate threshold, the object image can be quickly and accurately recognized. The proposed scheme realizes single-pixel non-imaging perceptual hashing object recognition by using fewer measurements. Our result might open a new path for realizing object recognition with non-imaging.
Carotid Stenosis And Ulcer Detectability As A Function Of Pixel Size
NASA Astrophysics Data System (ADS)
Mintz, Leslie J.; Enzmann, Dieter R.; Keyes, Gary S.; Mainiero, Louis M.; Brody, William R.
1981-11-01
Digital radiography, in conjunction with digital subtraction methods can provide high quality images of the vascular system,1-4 Spatial resolution is one important limiting factor of this imaging technique. Since spatial resolution of a digital image is a function of pixel size, it is important to determine the pixel size threshold necessary to provide information comparable to that of conventional angiograms. This study was designed to establish the pixel size necessary to identify accurately stenotic and ulcerative lesions of the carotid artery.
Multi-channel imaging cytometry with a single detector
NASA Astrophysics Data System (ADS)
Locknar, Sarah; Barton, John; Entwistle, Mark; Carver, Gary; Johnson, Robert
2018-02-01
Multi-channel microscopy and multi-channel flow cytometry generate high bit data streams. Multiple channels (both spectral and spatial) are important in diagnosing diseased tissue and identifying individual cells. Omega Optical has developed techniques for mapping multiple channels into the time domain for detection by a single high gain, high bandwidth detector. This approach is based on pulsed laser excitation and a serial array of optical fibers coated with spectral reflectors such that up to 15 wavelength bins are sequentially detected by a single-element detector within 2.5 μs. Our multichannel microscopy system uses firmware running on dedicated DSP and FPGA chips to synchronize the laser, scanning mirrors, and sampling clock. The signals are digitized by an NI board into 14 bits at 60MHz - allowing for 232 by 174 pixel fields in up to 15 channels with 10x over sampling. Our multi-channel imaging cytometry design adds channels for forward scattering and back scattering to the fluorescence spectral channels. All channels are detected within the 2.5 μs - which is compatible with fast cytometry. Going forward, we plan to digitize at 16 bits with an A-toD chip attached to a custom board. Processing these digital signals in custom firmware would allow an on-board graphics processing unit to display imaging flow cytometry data over configurable scanning line lengths. The scatter channels can be used to trigger data buffering when a cell is present in the beam. This approach enables a low cost mechanically robust imaging cytometer.
NASA Astrophysics Data System (ADS)
Villanueva, Steven; Gaudi, B. Scott; Pogge, Richard; Stassun, Keivan G.; Eastman, Jason; Trueblood, Mark; Trueblood, Pat
2018-01-01
The DEdicated MONitor of EXotransits and Transients (DEMONEXT) is a 20 inch (0.5-m) robotic telescope that has been in operation since May 2016. Fully automated, DEMONEXT has observed over 150 transits of exoplanet candidates for the KELT survey, including confirmation observations of KELT-20b. DEMONEXT achieves 2-4 mmag precision with unbinned, 20-120 second exposures, on targets orbiting V<13 host stars. Millimagnitude precision can be achieved by binning the transits on 5-6 minute timescales. During observations of 8 hours with hundreds of consecutive exposures, DEMONEXT maintains sub-pixel (<0.5 pixels) target position stability on the CCD during good observing conditions, with degraded performance during poor observing conditions (<1 pixel). DEMONEXT achieves 1% photometry on targets with V<17 in 5 minute exposures, with detection limits of V~21. In addition to the 150 transits observed by DEMONEXT, 50 supernovae and transients haven been observed for the ASAS-SN supernovae group, as well as time-series observations of Galactic microlensing, active galactic nuclei, stellar variability, and stellar rotation.
Dual-Gated Motion-Frozen Cardiac PET with Flurpiridaz F 18.
Slomka, Piotr J; Rubeaux, Mathieu; Le Meunier, Ludovic; Dey, Damini; Lazewatsky, Joel L; Pan, Tinsu; Dweck, Marc R; Newby, David E; Germano, Guido; Berman, Daniel S
2015-12-01
A novel PET radiotracer, Flurpiridaz F 18, has undergone phase II clinical trial evaluation as a high-resolution PET cardiac perfusion imaging agent. In a subgroup of patients imaged with this agent, we assessed the feasibility and benefit of simultaneous correction of respiratory and cardiac motion. In 16 patients, PET imaging was performed on a 4-ring scanner in dual cardiac and respiratory gating mode. Four sets of data were reconstructed with high-definition reconstruction (HD•PET): ungated and 8-bin electrocardiography-gated images using 5-min acquisition, optimal respiratory gating (ORG)-as developed for oncologic imaging-using a narrow range of breathing amplitude around end-expiration level with 35% of the counts in a 7-min acquisition, and 4-bin respiration-gated and 8-bin electrocardiography-gated images (32 bins in total) using the 7-min acquisition (dual-gating, using all data). Motion-frozen (MF) registration algorithms were applied to electrocardiography-gated and dual-gated data, creating cardiac-MF and dual-MF images. We computed wall thickness, wall/cavity contrast, and contrast-to-noise ratio for standard, ORG, cardiac-MF, and dual-MF images to assess image quality. The wall/cavity contrast was similar for ungated (9.3 ± 2.9) and ORG (9.5 ± 3.2) images and improved for cardiac-MF (10.8 ± 3.6) and dual-MF images (14.8 ± 8.0) (P < 0.05). The contrast-to-noise ratio was 22.2 ± 9.1 with ungated, 24.7 ± 12.2 with ORG, 35.5 ± 12.8 with cardiac-MF, and 42.1 ± 13.2 with dual-MF images (all P < 0.05). The wall thickness was significantly decreased (P < 0.05) with dual-MF (11.6 ± 1.9 mm) compared with ungated (13.9 ± 2.8 mm), ORG (13.1 ± 2.9 mm), and cardiac-MF images (12.1 ± 2.7 mm). Dual (respiratory/cardiac)-gated perfusion imaging with Flurpiridaz F 18 is feasible and improves image resolution, contrast, and contrast-to-noise ratio when MF registration methods are applied. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Time multiplexing for increased FOV and resolution in virtual reality
NASA Astrophysics Data System (ADS)
Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj
2017-06-01
We introduce a time multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement of the pixel density or the Field of View FOV (or both) A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the virtual image. Each partial real image uses the full set of physical pixels available in the display. The partial real images are successively formed and combine spatially and temporally to form a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending of its time slot. Shutters or other schemes are used to avoid that a partial real image be imaged through the wrong optical channels or at the wrong time slot. This time multiplexing strategy needs real images be shown at high frame rates (>120fps). Available display and shutters technologies are discussed. Several optical designs for achieving this time multiplexing scheme in a compact format are shown. This time multiplexing scheme allows increasing the resolution/FOV of the virtual image not only by increasing the physical pixel density but also by decreasing the pixels switching time, a feature that may be simpler to achieve in certain circumstances.
NASA Technical Reports Server (NTRS)
2006-01-01
This HiRISE image is of the north polar layered deposits (PLD) and underlying units exposed along the margins of Chasma Boreale. Chasma Boreale is the largest trough in the north PLD, thought to have formed due to outflow of water from underneath the polar cap, or due to winds blowing off the polar cap, or a combination of both. At the top and left of the image, the bright area with uniform striping is the gently sloping surface of the PLD. In the middle of the image this surface drops off in a steeper scarp, or cliff. At the top of this cliff we see the bright PLD in a side view, or cross-section. From these two perspectives of the PLD it is evident that the PLD are a stack of roughly horizontal layers. The gently sloping top surface cuts through the vertical sequence of layers at a low angle, apparently stretching the layers out horizontally and thus revealing details of the brightness and texture of individual layers. The surface of the PLD on the scarp is also criss-crossed by fine scale fractures. The layers of the PLD are probably composed of differing proportions of ice and dust, believed to be related to the climate conditions at the time they were deposited. In this way, sequences of polar layers are records of past climates on Mars, as ice cores from terrestrial ice sheets hold evidence of past climates on Earth. Further down the scarp in the center of the image the bright layers give way suddenly to a much darker section where a few layers are visible intermittently amongst aprons of dark material. The darkest material, with a smooth surface suggestive of loose grains, is thought to be sandy because similar exposures elsewhere show it to be formed into dunes by the wind. An intermediate-toned material also appears to form aprons draped over layers in the scarp, but its surface contains lobate structures that appear hardened into place and its edges are more abrupt in places, suggesting it may contain some ice or other cementing agent that makes it more competent, or resistant. At the base of the cliff, especially visible on the right side of the image, are several prominent bright layers with regular, rectangular-shaped polygons. Due to similarities in brightness and surface fracturing with the upper PLD, these bottom layers are also likely to be ice rich. The presence of sandy material sandwiched in between the upper PLD and these bottom layers suggests that the climate was once much different from the times during which the icier layers were deposited. The scattered bright and dark points are boulder-sized blocks that are likely pieces of the fractured PLD or other darker layers that have broken off and fallen downhill. At the bottom and right of the image, the floor of Chasma Boreale is dark, with a knobby texture and irregular polygons. Several circular features surrounded by an area that is slightly smoother, lighter, and raised relative to the chasm floor may be impact craters that have been modified after their formation in ice-rich ground. Image PSP_001412_2650 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on November 14, 2006. The complete image is centered at 84.7 degrees latitude, 4.0 degrees East longitude. The range to the target site was 320.9 km (200.6 miles). At this distance the image scale ranges from 32.1 cm/pixel (with 1 x 1 binning) to 128.4 cm/pixel (with 4 x 4 binning). The image shown here has been map-projected to 25 cm/pixel. The image was taken at a local Mars time of 12:52 PM and the scene is illuminated from the west with a solar incidence angle of 67 degrees, thus the sun was about 23 degrees above the horizon. At a solar longitude of 135.3 degrees, the season on Mars is Northern Summer. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace and Technology Corp., Boulder, Colo.Distance-based over-segmentation for single-frame RGB-D images
NASA Astrophysics Data System (ADS)
Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao
2017-11-01
Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. Oversegmentation algorithm segments an image into regions of perceptually similar pixels, but performs badly based on only color image in the indoor environments. Fortunately, RGB-D images can improve the performances on the images of indoor scene. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of super-pixels on the image. DBOS fills the holes in depth images to fully utilize the depth information, and applies SLIC-like frameworks for fast running. Additionally, depth features such as plane projection distance are extracted to compute distance which is the core of SLIC-like frameworks. Experiments on RGB-D images of NYU Depth V2 dataset demonstrate that DBOS outperforms state-ofthe-art methods in quality while maintaining speeds comparable to them.
Heterogeneity of Particle Deposition by Pixel Analysis of 2D Gamma Scintigraphy Images
Xie, Miao; Zeman, Kirby; Hurd, Harry; Donaldson, Scott
2015-01-01
Abstract Background: Heterogeneity of inhaled particle deposition in airways disease may be a sensitive indicator of physiologic changes in the lungs. Using planar gamma scintigraphy, we developed new methods to locate and quantify regions of high (hot) and low (cold) particle deposition in the lungs. Methods: Initial deposition and 24 hour retention images were obtained from healthy (n=31) adult subjects and patients with mild cystic fibrosis lung disease (CF) (n=14) following inhalation of radiolabeled particles (Tc99m-sulfur colloid, 5.4 μm MMAD) under controlled breathing conditions. The initial deposition image of the right lung was normalized to (i.e., same median pixel value), and then divided by, a transmission (Tc99m) image in the same individual to obtain a pixel-by-pixel ratio image. Hot spots were defined where pixel values in the deposition image were greater than 2X those of the transmission, and cold spots as pixels where the deposition image was less than 0.5X of the transmission. The number ratio (NR) of the hot and cold pixels to total lung pixels, and the sum ratio (SR) of total counts in hot pixels to total lung counts were compared between healthy and CF subjects. Other traditional measures of regional particle deposition, nC/P and skew of the pixel count histogram distribution, were also compared. Results: The NR of cold spots was greater in mild CF, 0.221±0.047(CF) vs. 0.186±0.038 (healthy) (p<0.005) and was significantly correlated with FEV1 %pred in the patients (R=−0.70). nC/P (central to peripheral count ratio), skew of the count histogram, and hot NR or SR were not different between the healthy and mild CF patients. Conclusions: These methods may provide more sensitive measures of airway function and localization of deposition that might be useful for assessing treatment efficacy in these patients. PMID:25393109
Mesoscale variability of the Upper Colorado River snowpack
Ling, C.-H.; Josberger, E.G.; Thorndike, A.S.
1996-01-01
In the mountainous regions of the Upper Colorado River Basin, snow course observations give local measurements of snow water equivalent, which can be used to estimate regional averages of snow conditions. We develop a statistical technique to estimate the mesoscale average snow accumulation, using 8 years of snow course observations. For each of three major snow accumulation regions in the Upper Colorado River Basin - the Colorado Rocky Mountains, Colorado, the Uinta Mountains, Utah, and the Wind River Range, Wyoming - the snow course observations yield a correlation length scale of 38 km, 46 km, and 116 km respectively. This is the scale for which the snow course data at different sites are correlated with 70 per cent correlation. This correlation of snow accumulation over large distances allows for the estimation of the snow water equivalent on a mesoscale basis. With the snow course data binned into 1/4?? latitude by 1/4?? longitude pixels, an error analysis shows the following: for no snow course data in a given pixel, the uncertainty in the water equivalent estimate reaches 50 cm; that is, the climatological variability. However, as the number of snow courses in a pixel increases the uncertainty decreases, and approaches 5-10 cm when there are five snow courses in a pixel.
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
NASA Astrophysics Data System (ADS)
Anugu, N.; Garcia, P.
2016-04-01
Wave front sensing for solar telescopes is commonly implemented with the Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak Poyneer (2003); Löfdahl (2010). However, the peak-finding algorithm results are usually biased towards the integer pixels, these errors are called as systematic bias errors Sjödahl (1994). These errors are caused due to the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed by using a Swedish Solar Telescope solar granulation image1. The performance of cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola Poyneer (2003); quadratic polynomial Löfdahl (2010); threshold center of gravity Bailey (2003); Gaussian Nobach & Honkanen (2005) and Pyramid Bailey (2003). The systematic error study reveals that that the pyramid fit is the most robust to pixel locking effects. The RMS error analysis study reveals that the threshold centre of gravity behaves better in low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both the systematic and the RMS error reduction. To overcome the above problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed using a sub-pixel level grid by limiting the field of search to 4 × 4 pixels centered at the first step delivered initial position. The generation of these sub-pixel grid based region of interest images is achieved with the bi-cubic interpolation. The correlation matching with sub-pixel grid technique was previously reported in electronic speckle photography Sjö'dahl (1994). This technique is applied here for the solar wavefront sensing. A large dynamic range and a better accuracy in the measurements are achieved with the combination of the original pixel grid based correlation matching in a large field of view and a sub-pixel interpolated image grid based correlation matching within a small field of view. The results revealed that the proposed method outperforms all the different peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction), when 5 times improved image sampling was used. This measurement is achieved at the expense of twice the computational cost. With the 5 times improved image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in the solar telescopes, particularly, for measuring large dynamic image shifts involved open loop adaptive optics. Also, by choosing an appropriate increment of image sampling in trade-off between the computational speed limitation and the aimed sub-pixel image shift accuracy, it can be employed in closed loop adaptive optics. The study is extended to three other class of sub-aperture images (a point source; a laser guide star; a Galactic Center extended scene). The results are planned to submit for the Optical Express journal.
A multichannel block-matching denoising algorithm for spectral photon-counting CT images.
Harrison, Adam P; Xu, Ziyue; Pourmorteza, Amir; Bluemke, David A; Mollura, Daniel J
2017-06-01
We present a denoising algorithm designed for a whole-body prototype photon-counting computed tomography (PCCT) scanner with up to 4 energy thresholds and associated energy-binned images. Spectral PCCT images can exhibit low signal to noise ratios (SNRs) due to the limited photon counts in each simultaneously-acquired energy bin. To help address this, our denoising method exploits the correlation and exact alignment between energy bins, adapting the highly-effective block-matching 3D (BM3D) denoising algorithm for PCCT. The original single-channel BM3D algorithm operates patch-by-patch. For each small patch in the image, a patch grouping action collects similar patches from the rest of the image, which are then collaboratively filtered together. The resulting performance hinges on accurate patch grouping. Our improved multi-channel version, called BM3D_PCCT, incorporates two improvements. First, BM3D_PCCT uses a more accurate shared patch grouping based on the image reconstructed from photons detected in all 4 energy bins. Second, BM3D_PCCT performs a cross-channel decorrelation, adding a further dimension to the collaborative filtering process. These two improvements produce a more effective algorithm for PCCT denoising. Preliminary results compare BM3D_PCCT against BM3D_Naive, which denoises each energy bin independently. Experiments use a three-contrast PCCT image of a canine abdomen. Within five regions of interest, selected from paraspinal muscle, liver, and visceral fat, BM3D_PCCT reduces the noise standard deviation by 65.0%, compared to 40.4% for BM3D_Naive. Attenuation values of the contrast agents in calibration vials also cluster much tighter to their respective lines of best fit. Mean angular differences (in degrees) for the original, BM3D_Naive, and BM3D_PCCT images, respectively, were 15.61, 7.34, and 4.45 (iodine); 12.17, 7.17, and 4.39 (galodinium); and 12.86, 6.33, and 3.96 (bismuth). We outline a multi-channel denoising algorithm tailored for spectral PCCT images, demonstrating improved performance over an independent, yet state-of-the-art, single-channel approach. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
CMOS Image Sensors for High Speed Applications.
El-Desouki, Munir; Deen, M Jamal; Fang, Qiyin; Liu, Louis; Tse, Frances; Armstrong, David
2009-01-01
Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled devices (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions due to being fabricated in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4∼5 μm) due to limitations in the optics, CMOS technology scaling can allow for an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications allowing CMOS imagers to achieve the image quality and global shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout and simulation results of an ultrahigh acquisition rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second (fps).
Zhang, Chu; Liu, Fei; He, Yong
2018-02-01
Hyperspectral imaging was used to identify and to visualize the coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted by different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD). Meanwhile, spatial preprocessing of the gray-scale image at each wavelength was conducted by median filter (MF). Support vector machine (SVM) models using full sample average spectra and pixel-wise spectra, and the selected optimal wavelengths by second derivative spectra all achieved classification accuracy over 80%. Primarily, the SVM models using pixel-wise spectra were used to predict the sample average spectra, and these models obtained over 80% of the classification accuracy. Secondly, the SVM models using sample average spectra were used to predict pixel-wise spectra, but achieved with lower than 50% of classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set, and resulted in the good prediction results for pixel-wise spectra and sample average spectra. The overall results indicated the effectiveness of using spectral preprocessing and the adoption of pixel-wise spectra. The results provided an alternative way of data processing for applications of hyperspectral imaging in food industry.
Pixels, Imagers and Related Fabrication Methods
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor)
2014-01-01
Pixels, imagers and related fabrication methods are described. The described methods result in cross-talk reduction in imagers and related devices by generating depletion regions. The devices can also be used with electronic circuits for imaging applications.
Pixels, Imagers and Related Fabrication Methods
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor)
2016-01-01
Pixels, imagers and related fabrication methods are described. The described methods result in cross-talk reduction in imagers and related devices by generating depletion regions. The devices can also be used with electronic circuits for imaging applications.
Log polar image sensor in CMOS technology
NASA Astrophysics Data System (ADS)
Scheffer, Danny; Dierickx, Bart; Pardo, Fernando; Vlummens, Jan; Meynants, Guy; Hermans, Lou
1996-08-01
We report on the design, design issues, fabrication and performance of a log-polar CMOS image sensor. The sensor is developed for the use in a videophone system for deaf and hearing impaired people, who are not capable of communicating through a 'normal' telephone. The system allows 15 detailed images per second to be transmitted over existing telephone lines. This framerate is sufficient for conversations by means of sign language or lip reading. The pixel array of the sensor consists of 76 concentric circles with (up to) 128 pixels per circle, in total 8013 pixels. The interior pixels have a pitch of 14 micrometers, up to 250 micrometers at the border. The 8013-pixels image is mapped (log-polar transformation) in a X-Y addressable 76 by 128 array.
Mapping Capacitive Coupling Among Pixels in a Sensor Array
NASA Technical Reports Server (NTRS)
Seshadri, Suresh; Cole, David M.; Smith, Roger M.
2010-01-01
An improved method of mapping the capacitive contribution to cross-talk among pixels in an imaging array of sensors (typically, an imaging photodetector array) has been devised for use in calibrating and/or characterizing such an array. The method involves a sequence of resets of subarrays of pixels to specified voltages and measurement of the voltage responses of neighboring non-reset pixels.
A compressed sensing X-ray camera with a multilayer architecture
Wang, Zhehui; Laroshenko, O.; Li, S.; ...
2018-01-25
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, wemore » first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.« less
A compressed sensing X-ray camera with a multilayer architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui; Laroshenko, O.; Li, S.
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, wemore » first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.« less
A 100 Mfps image sensor for biological applications
NASA Astrophysics Data System (ADS)
Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo
2018-02-01
Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications to advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like petals of a flower. One has five collection gates and one drain gate at the center, which can capture consecutive five frames at 100 Mfps with the pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also makes possible the in-pixel signal accumulation. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulse neutron tomography and dynamic PSP. The paper also briefly explains an expression of the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that the image sensor designed based on the theoretical analysis achieves imaging of consecutive frames at the frame interval of 50 ps.
Photometric normalization of LROC WAC images
NASA Astrophysics Data System (ADS)
Sato, H.; Denevi, B.; Robinson, M. S.; Hapke, B. W.; McEwen, A. S.; LROC Science Team
2010-12-01
The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) acquires near global coverage on a monthly basis. The WAC is a push frame sensor with a 90° field of view (FOV) in BW mode and 60° FOV in 7-color mode (320 nm to 689 nm). WAC images are acquired during each orbit in 10° latitude segments with cross track coverage of ~50 km. Before mosaicking, WAC images are radiometrically calibrated to remove instrumental artifacts and to convert at sensor radiance to I/F. Images are also photometrically normalized to common viewing and illumination angles (30° phase), a challenge due to the wide angle nature of the WAC where large differences in phase angle are observed in a single image line (±30°). During a single month the equatorial incidence angle drifts about 28° and over the course of ~1 year the lighting completes a 360° cycle. The light scattering properties of the lunar surface depend on incidence(i), emission(e), and phase(p) angles as well as soil properties such as single-scattering albedo and roughness that vary with terrain type and state of maturity [1]. We first tested a Lommel-Seeliger Correction (LSC) [cos(i)/(cos(i) + cos(e))] [2] with a phase function defined by an exponential decay plus 4th order polynomial term [3] which did not provide an adequate solution. Next we employed a LSC with an exponential 2nd order decay phase correction that was an improvement, but still exhibited unacceptable frame-to-frame residuals. In both cases we fitted the LSC I/F vs. phase angle to derive the phase corrections. To date, the best results are with a lunar-lambert function [4] with exponential 2nd order decay phase correction (LLEXP2) [(A1exp(B1p)+A2exp(B2p)+A3) * cos(i)/(cos(e) + cos(i)) + B3cos(i)]. We derived the parameters for the LLEXP2 from repeat imaging of a small region and then corrected that region with excellent results. When this correction was applied to the whole Moon the results were less than optimal - no surprise given the variability of the regolith from region to region. As the fitting area increases, the accuracy of curve fitting decreases due to the larger variety of albedo, topography, and composition. Thus we have adopted an albedo-dependent photometric normalization routine. Phase curves are derived for discreet bins of preliminary normalized reflectance calculated from Clementine global mosaic in a fitting area that is composed of predominantly mare in Oceanus Procellarum. The global WAC mosaic was then corrected pixel-by-pixel according to its preliminary reflectance map with satisfactory results. We observed that the phase curves per normalized-reflectance bins become steeper as the reflectance value increases. Further filtering by using FeO, TiO2, or optical maturity [5] for parameter calculations may help elucidate the effects of surface composition and maturity on photometric properties of the surface. [1] Hapke, B.W. (1993) Theory of Reflectance and Emittance Spectroscopy, Cambridge Univ. Press. [2] Schoenberg (1925) Ada. Soc. Febb., vol. 50. [3] Hillier et al. (1999) Icarus 141, 205-225. [4] McEwen (1991) Icarus 92, 298-311. [5] Lucey et al. (2000) JGR, v105, no E8, p20377-20386.
Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe
2015-01-01
Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m under nadir) have limited their capability in applying to many studies required high spatial resolution in comparison of the MODIS VNIR band data with pixel scale of 250–500 m. In this paper we intend to develop an efficient approach for pixel decomposition to increase the spatial resolution of MODIS LST image using the VNIR band data as assistance. The unique feature of this approach is to maintain the thermal radiance of parent pixels in the MODIS LST image unchanged after they are decomposed into the sub-pixels in the resulted image. There are two important steps in the decomposition: initial temperature estimation and final temperature determination. Therefore the approach can be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the final result of decomposed LST image, including classification of the surface patterns, establishment of LST change with normalized difference of vegetation index (NDVI) and building index (NDBI), reversion of LST into thermal radiance through Planck equation, and computation of weights for the sub-pixels of the resulted image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) with much higher spatial resolution than MODIS data was on-board the same platform (Terra) as MODIS for Earth observation, an experiment had been done in the study to validate the accuracy and efficiency of our approach for pixel decomposition. The ASTER LST image was used as the reference to compare with the decomposed LST image. The result showed that the spatial distribution of the decomposed LST image was very similar to that of the ASTER LST image with a root mean square error (RMSE) of 2.7 K for entire image. Comparison with the evaluation DisTrad (E-DisTrad) and re-sampling methods for pixel decomposition also indicate that our DSPD has the lowest RMSE in all cases, including urban region, water bodies, and natural terrain. The obvious increase in spatial resolution remarkably uplifts the capability of the coarse MODIS LST images in highlighting the details of LST variation. Therefore it can be concluded that, in spite of complicated procedures, the proposed DSPD approach provides an alternative to improve the spatial resolution of MODIS LST image hence expand its applicability to the real world. PMID:25609048
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leng, Shuai; Yu, Lifeng; Wang, Jia
Purpose: Our purpose was to reduce image noise in spectral CT by exploiting data redundancies in the energy domain to allow flexible selection of the number, width, and location of the energy bins. Methods: Using a variety of spectral CT imaging methods, conventional filtered backprojection (FBP) reconstructions were performed and resulting images were compared to those processed using a Local HighlY constrained backPRojection Reconstruction (HYPR-LR) algorithm. The mean and standard deviation of CT numbers were measured within regions of interest (ROIs), and results were compared between FBP and HYPR-LR. For these comparisons, the following spectral CT imaging methods were used:(i)more » numerical simulations based on a photon-counting, detector-based CT system, (ii) a photon-counting, detector-based micro CT system using rubidium and potassium chloride solutions, (iii) a commercial CT system equipped with integrating detectors utilizing tube potentials of 80, 100, 120, and 140 kV, and (iv) a clinical dual-energy CT examination. The effects of tube energy and energy bin width were evaluated appropriate to each CT system. Results: The mean CT number in each ROI was unchanged between FBP and HYPR-LR images for each of the spectral CT imaging scenarios, irrespective of bin width or tube potential. However, image noise, as represented by the standard deviation of CT numbers in each ROI, was reduced by 36%-76%. In all scenarios, image noise after HYPR-LR algorithm was similar to that of composite images, which used all available photons. No difference in spatial resolution was observed between HYPR-LR processing and FBP. Dual energy patient data processed using HYPR-LR demonstrated reduced noise in the individual, low- and high-energy images, as well as in the material-specific basis images. Conclusions: Noise reduction can be accomplished for spectral CT by exploiting data redundancies in the energy domain. HYPR-LR is a robust method for reducing image noise in a variety of spectral CT imaging systems without losing spatial resolution or CT number accuracy. This method improves the flexibility to select energy bins in the manner that optimizes material identification and separation without paying the penalty of increased image noise or its corollary, increased patient dose.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Y; Rottmann, J; Myronakis, M
2016-06-15
Purpose: The purpose of this study was to validate the use of a cascaded linear system model for MV cone-beam CT (CBCT) using a multi-layer (MLI) electronic portal imaging device (EPID) and provide experimental insight into image formation. A validated 3D model provides insight into salient factors affecting reconstructed image quality, allowing potential for optimizing detector design for CBCT applications. Methods: A cascaded linear system model was developed to investigate the potential improvement in reconstructed image quality for MV CBCT using an MLI EPID. Inputs to the three-dimensional (3D) model include projection space MTF and NPS. Experimental validation was performedmore » on a prototype MLI detector installed on the portal imaging arm of a Varian TrueBeam radiotherapy system. CBCT scans of up to 898 projections over 360 degrees were acquired at exposures of 16 and 64 MU. Image volumes were reconstructed using a Feldkamp-type (FDK) filtered backprojection (FBP) algorithm. Flat field images and scans of a Catphan model 604 phantom were acquired. The effect of 2×2 and 4×4 detector binning was also examined. Results: Using projection flat fields as an input, examination of the modeled and measured NPS in the axial plane exhibits good agreement. Binning projection images was shown to improve axial slice SDNR by a factor of approximately 1.4. This improvement is largely driven by a decrease in image noise of roughly 20%. However, this effect is accompanied by a subsequent loss in image resolution. Conclusion: The measured axial NPS shows good agreement with the theoretical calculation using a linear system model. Binning of projection images improves SNR of large objects on the Catphan phantom by decreasing noise. Specific imaging tasks will dictate the implementation image binning to two-dimensional projection images. The project was partially supported by a grant from Varian Medical Systems, Inc. and grant No. R01CA188446-01 from the National Cancer Institute.« less
NASA Astrophysics Data System (ADS)
Li, Zhuo; Seo, Min-Woong; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2016-04-01
This paper presents the design and implementation of a time-resolved CMOS image sensor with a high-speed lateral electric field modulation (LEFM) gating structure for time domain fluorescence lifetime measurement. Time-windowed signal charge can be transferred from a pinned photodiode (PPD) to a pinned storage diode (PSD) by turning on a pair of transfer gates, which are situated beside the channel. Unwanted signal charge can be drained from the PPD to the drain by turning on another pair of gates. The pixel array contains 512 (V) × 310 (H) pixels with 5.6 × 5.6 µm2 pixel size. The imager chip was fabricated using 0.11 µm CMOS image sensor process technology. The prototype sensor has a time response of 150 ps at 374 nm. The fill factor of the pixels is 5.6%. The usefulness of the prototype sensor is demonstrated for fluorescence lifetime imaging through simulation and measurement results.
Structural colour printing from a reusable generic nanosubstrate masked for the target image
NASA Astrophysics Data System (ADS)
Rezaei, M.; Jiang, H.; Kaminska, B.
2016-02-01
Structural colour printing has advantages over traditional pigment-based colour printing. However, the high fabrication cost has hindered its applications in printing large-area images because each image requires patterning structural pixels in nanoscale resolution. In this work, we present a novel strategy to print structural colour images from a pixelated substrate which is called a nanosubstrate. The nanosubstrate is fabricated only once using nanofabrication tools and can be reused for printing a large quantity of structural colour images. It contains closely packed arrays of nanostructures from which red, green, blue and infrared structural pixels can be imprinted. To print a target colour image, the nanosubstrate is first covered with a mask layer to block all the structural pixels. The mask layer is subsequently patterned according to the target colour image to make apertures of controllable sizes on top of the wanted primary colour pixels. The masked nanosubstrate is then used as a stamp to imprint the colour image onto a separate substrate surface using nanoimprint lithography. Different visual colours are achieved by properly mixing the red, green and blue primary colours into appropriate ratios controlled by the aperture sizes on the patterned mask layer. Such a strategy significantly reduces the cost and complexity of printing a structural colour image from lengthy nanoscale patterning into high throughput micro-patterning and makes it possible to apply structural colour printing in personalized security features and data storage. In this paper, nanocone array grating pixels were used as the structural pixels and the nanosubstrate contains structures to imprint the nanocone arrays. Laser lithography was implemented to pattern the mask layer with submicron resolution. The optical properties of the nanocone array gratings are studied in detail. Multiple printed structural colour images with embedded covert information are demonstrated.
How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2011-01-01
In the early 1980's the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the number of equivalent number pixels and bits-per-pixel, and how many pixels would be required to create an equivalent quality image from a digital camera. It was found that 35-mm frames, for ISO 100 color negative film, contained equivalent pixels of 12 microns for a total of 18 million pixels per frame (6 million pixels per layer) with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased the value to 8 bit per pixel. Higher ISO speed films had larger equivalent pixels, fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high quality 3.5" x 5.25" print could be obtained from a three layer system containing 1300 x 1950 pixels per layer or about 7.6 million pixels in all. In short, it became clear that when a digital camera contained about 6 million pixels (in a single layer using a color filter array and appropriate image processing) that digital systems would challenge and replace conventional film-based system for the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging amongst digital camera makers. The question arises about just how many pixels are required and are all pixels equal? This paper will provide a practical look at how many pixels are needed for a good print based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million, 5.7-micron pixels or 6 million 7.8-micron pixels? How does intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size? A systematic review of these issues will be provided within the context of image quality and ISO speed models developed over the last 15 years.
47 CFR 73.9003 - Compliance requirements for covered demodulator products: Unscreened content.
Code of Federal Regulations, 2010 CFR
2010-10-01
... operating in a mode compatible with the digital visual interface (DVI) rev. 1.0 Specification as an image having the visual equivalent of no more than 350,000 pixels per frame (e.g. an image with resolution of 720×480 pixels for a 4:3 (nonsquare pixel) aspect ratio), and 30 frames per second. Such an image may...
47 CFR 73.9004 - Compliance requirements for covered demodulator products: Marked content.
Code of Federal Regulations, 2010 CFR
2010-10-01
... compatible with the digital visual interface (DVI) Rev. 1.0 Specification as an image having the visual equivalent of no more than 350,000 pixels per frame (e.g., an image with resolution of 720×480 pixels for a 4:3 (nonsquare pixel) aspect ratio), and 30 frames per second. Such an image may be attained by...
Pixelated camouflage patterns from the perspective of hyperspectral imaging
NASA Astrophysics Data System (ADS)
Racek, František; Jobánek, Adam; Baláž, Teodor; Krejčí, Jaroslav
2016-10-01
Pixelated camouflage patterns fulfill the role of both principles the matching and the disrupting that are exploited for blending the target into the background. It means that pixelated pattern should respect natural background in spectral and spatial characteristics embodied in micro and macro patterns. The HS imaging plays the similar, however the reverse role in the field of reconnaissance systems. The HS camera fundamentally records and extracts both the spectral and spatial information belonging to the recorded scenery. Therefore, the article deals with problems of hyperspectral (HS) imaging and subsequent processing of HS images of pixelated camouflage patterns which are among others characterized by their specific spatial frequency heterogeneity.
Faxed document image restoration method based on local pixel patterns
NASA Astrophysics Data System (ADS)
Akiyama, Teruo; Miyamoto, Nobuo; Oguro, Masami; Ogura, Kenji
1998-04-01
A method for restoring degraded faxed document images using the patterns of pixels that construct small areas in a document is proposed. The method effectively restores faxed images that contain the halftone textures and/or density salt-and-pepper noise that degrade OCR system performance. The halftone image restoration process, white-centered 3 X 3 pixels, in which black-and-white pixels alternate, are identified first using the distribution of the pixel values as halftone textures, and then the white center pixels are inverted to black. To remove high-density salt- and-pepper noise, it is assumed that the degradation is caused by ill-balanced bias and inappropriate thresholding of the sensor output which results in the addition of random noise. Restored image can be estimated using an approximation that uses the inverse operation of the assumed original process. In order to process degraded faxed images, the algorithms mentioned above are combined. An experiment is conducted using 24 especially poor quality examples selected from data sets that exemplify what practical fax- based OCR systems cannot handle. The maximum recovery rate in terms of mean square error was 98.8 percent.
Active pixel image sensor with a winner-take-all mode of operation
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Mead, Carver (Inventor); Fossum, Eric R. (Inventor)
2003-01-01
An integrated CMOS semiconductor imaging device having two modes of operation that can be performed simultaneously to produce an output image and provide information of a brightest or darkest pixel in the image.
Weber-aware weighted mutual information evaluation for infrared-visible image fusion
NASA Astrophysics Data System (ADS)
Luo, Xiaoyan; Wang, Shining; Yuan, Ding
2016-10-01
A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart show a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart show a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images of the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimensional-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.
1993-01-01
A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.
UV Imaging of R136 with the GHRS and the WFPC-2
NASA Astrophysics Data System (ADS)
Malumuth, E. M.; Ebbets, D.; Heap, S. R.; Maran, S. P.; Hutchings, J. B.; Lindler, D. J.
1994-05-01
Now that the COSTAR corrective optics have been installed and aligned in the Hubble Space Telescope (HST), the Goddard High Resolution Spectrograph (GHRS) can obtain clean spectra and images of stars in very crowded fields. To demonstrate this restored capability, an Early Release Observation program to observe hot, luminous stars in the center of R136a (the central cluster of the 30 Doradus complex in the Large Magellanic Cloud) has been scheduled in early April. Through this program we will obtain a series of UV images through the Small Science Aperture (SSA) and Large Science Aperture (LSA) of the GHRS. The images will be taken with the N2 mirror and D2 detector (CsTe cathode on a MgF_2 window) and thus will have a bandpass that extends from 1150 to 3200 Angstroms. The SSA images will consist of 13 x 13 pixels with a pixel spacing of 0\\farcs027 pixel(-1) . Each pixel covers a 0\\farcs11 x 0\\farcs11 area on the sky. Thus each image will cover the entire SSA (0\\farcs22 x 0\\farcs22). The SSA images will include one centered on the initial pointing (located between R136a1 and R136a2; separation = 0\\farcs12), an image of R136a2, and an image of R136a5 (0\\farcs18 from R136a2). Two LSA images of the central region of R136 will be taken. The first, a 3 x 3 mosaic centered on R136a5, will consist of 22 x 22 pixels each, with a pixel spacing of 0\\farcs11 pixel(-1) . Together these images cover a 5\\farcs22 x 5\\farcs22 area. The second, will cover the central 1\\farcs2 x 1\\farcs2 with a pixel spacing of 0\\farcs055 pixel(-1) . These images will be examined to determine the true pointing for the spectra of R136a2 and R136a5, the imaging characteristics of the GHRS, and the UV brightnesses of all of the stars within the field. In addition to these images, 3 WFPC-2 PC exposures will be obtained with the F336W filter. These images are 5, 10 and 20 seconds in duration. Photometry of the stars in these images will be compared with the GHRS UV photometry, as well as published WFPC photometry.
Superpixel-Augmented Endmember Detection for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Thompson, David R.; Castano, Rebecca; Gilmore, Martha
2011-01-01
Superpixels are homogeneous image regions comprised of several contiguous pixels. They are produced by shattering the image into contiguous, homogeneous regions that each cover between 20 and 100 image pixels. The segmentation aims for a many-to-one mapping from superpixels to image features; each image feature could contain several superpixels, but each superpixel occupies no more than one image feature. This conservative segmentation is relatively easy to automate in a robust fashion. Superpixel processing is related to the more general idea of improving hyperspectral analysis through spatial constraints, which can recognize subtle features at or below the level of noise by exploiting the fact that their spectral signatures are found in neighboring pixels. Recent work has explored spatial constraints for endmember extraction, showing significant advantages over techniques that ignore pixels relative positions. Methods such as AMEE (automated morphological endmember extraction) express spatial influence using fixed isometric relationships a local square window or Euclidean distance in pixel coordinates. In other words, two pixels covariances are based on their spatial proximity, but are independent of their absolute location in the scene. These isometric spatial constraints are most appropriate when spectral variation is smooth and constant over the image. Superpixels are simple to implement, efficient to compute, and are empirically effective. They can be used as a preprocessing step with any desired endmember extraction technique. Superpixels also have a solid theoretical basis in the hyperspectral linear mixing model, making them a principled approach for improving endmember extraction. Unlike existing approaches, superpixels can accommodate non-isometric covariance between image pixels (characteristic of discrete image features separated by step discontinuities). These kinds of image features are common in natural scenes. Analysts can substitute superpixels for image pixels during endmember analysis that leverages the spatial contiguity of scene features to enhance subtle spectral features. Superpixels define populations of image pixels that are independent samples from each image feature, permitting robust estimation of spectral properties, and reducing measurement noise in proportion to the area of the superpixel. This permits improved endmember extraction, and enables automated search for novel and constituent minerals in very noisy, hyperspatial images. This innovation begins with a graph-based segmentation based on the work of Felzenszwalb et al., but then expands their approach to the hyperspectral image domain with a Euclidean distance metric. Then, the mean spectrum of each segment is computed, and the resulting data cloud is used as input into sequential maximum angle convex cone (SMACC) endmember extraction.
Wang, Jiali; Byrne, James; Franquiz, Juan; McGoron, Anthony
2007-08-01
develop and validate a PET sorting algorithm based on the respiratory amplitude to correct for abnormal respiratory cycles. using the 4D NCAT phantom model, 3D PET images were simulated in lung and other structures at different times within a respiratory cycle and noise was added. To validate the amplitude binning algorithm, NCAT phantom was used to simulate one case of five different respiratory periods and another case of five respiratory periods alone with five respiratory amplitudes. Comparison was performed for gated and un-gated images and for the new amplitude binning algorithm with the time binning algorithm by calculating the mean number of counts in the ROI (region of interest). an average of 8.87+/-5.10% improvement was reported for total 16 tumors with different tumor sizes and different T/B (tumor to background) ratios using the new sorting algorithm. As both the T/B ratio and tumor size decreases, image degradation due to respiration increases. The greater benefit for smaller diameter tumor and lower T/B ratio indicates a potential improvement in detecting more problematic tumors.
Site-specific multipoint fluorescence measurement system with end-capped optical fibers.
Song, Woosub; Moon, Sucbei; Lee, Byoung-Cheol; Park, Chul-Seung; Kim, Dug Young; Kwon, Hyuk Sang
2011-07-10
We present the development and implementation of a spatially and spectrally resolved multipoint fluorescence correlation spectroscopy (FCS) system utilizing multiple end-capped optical fibers and an inexpensive laser source. Specially prepared end-capped optical fibers placed in an image plane were used to both collect fluorescence signals from the sample and to deliver signals to the detectors. The placement of independently selected optical fibers on the image plane was done by monitoring the end-capped fiber tips at the focus using a CCD, and fluorescence from specific positions of a sample were collected by an end-capped fiber, which could accurately represent light intensities or spectral data without incurring any disturbance. A fast multipoint spectroscopy system with a time resolution of ∼1.5 ms was then implemented using a prism and an electron multiplying charge coupled device with a pixel binning for the region of interest. The accuracy of our proposed system was subsequently confirmed by experimental results, based on an FCS analysis of microspheres in distilled water. We expect that the proposed multipoint site-specific fluorescence measurement system can be used as an inexpensive fluorescence measurement tool to study many intracellular and molecular dynamics in cell biology. © 2011 Optical Society of America
Chromatic Modulator for a High-Resolution CCD or APS
NASA Technical Reports Server (NTRS)
Hartley, Frank; Hull, Anthony
2008-01-01
A chromatic modulator has been proposed to enable the separate detection of the red, green, and blue (RGB) color components of the same scene by a single charge-coupled device (CCD), active-pixel sensor (APS), or similar electronic image detector. Traditionally, the RGB color-separation problem in an electronic camera has been solved by use of either (1) fixed color filters over three separate image detectors; (2) a filter wheel that repeatedly imposes a red, then a green, then a blue filter over a single image detector; or (3) different fixed color filters over adjacent pixels. The use of separate image detectors necessitates precise registration of the detectors and the use of complicated optics; filter wheels are expensive and add considerably to the bulk of the camera; and fixed pixelated color filters reduce spatial resolution and introduce color-aliasing effects. The proposed chromatic modulator would not exhibit any of these shortcomings. The proposed chromatic modulator would be an electromechanical device fabricated by micromachining. It would include a filter having a spatially periodic pattern of RGB strips at a pitch equal to that of the pixels of the image detector. The filter would be placed in front of the image detector, supported at its periphery by a spring suspension and electrostatic comb drive. The spring suspension would bias the filter toward a middle position in which each filter strip would be registered with a row of pixels of the image detector. Hard stops would limit the excursion of the spring suspension to precisely one pixel row above and one pixel row below the middle position. In operation, the electrostatic comb drive would be actuated to repeatedly snap the filter to the upper extreme, middle, and lower extreme positions. This action would repeatedly place a succession of the differently colored filter strips in front of each pixel of the image detector. To simplify the processing, it would be desirable to encode information on the color of the filter strip over each row (or at least over some representative rows) of pixels at a given instant of time in synchronism with the pixel output at that instant.
NASA Astrophysics Data System (ADS)
Doi, Ryoichi
2016-04-01
The effects of a pseudo-colour imaging method were investigated by discriminating among similar agricultural plots in remote sensing images acquired using the Airborne Visible/Infrared Imaging Spectrometer (Indiana, USA) and the Landsat 7 satellite (Fergana, Uzbekistan), and that provided by GoogleEarth (Toyama, Japan). From each dataset, red (R)-green (G)-R-G-blue yellow (RGrgbyB), and RGrgby-1B pseudo-colour images were prepared. From each, cyan, magenta, yellow, key black, L*, a*, and b* derivative grayscale images were generated. In the Airborne Visible/Infrared Imaging Spectrometer image, pixels were selected for corn no tillage (29 pixels), corn minimum tillage (27), and soybean (34) plots. Likewise, in the Landsat 7 image, pixels representing corn (73 pixels), cotton (110), and wheat (112) plots were selected, and in the GoogleEarth image, those representing soybean (118 pixels) and rice (151) were selected. When the 14 derivative grayscale images were used together with an RGB yellow grayscale image, the overall classification accuracy improved from 74 to 94% (Airborne Visible/Infrared Imaging Spectrometer), 64 to 83% (Landsat), or 77 to 90% (GoogleEarth). As an indicator of discriminatory power, the kappa significance improved 1018-fold (Airborne Visible/Infrared Imaging Spectrometer) or greater. The derivative grayscale images were found to increase the dimensionality and quantity of data. Herein, the details of the increases in dimensionality and quantity are further analysed and discussed.
Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks
Xu, Xin; Gui, Rong; Pu, Fangling
2018-01-01
Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of the excellent performance of CNN, more and more methods based on CNN are applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can only classify one pixel each time. Because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously. When classifying a whole PolSAR image, it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use the interrelation of land covers to improve the classification results. The experiments of FFS-CNN are evaluated on a Chinese Gaofen-3 PolSAR image and other two real PolSAR images. Experiment results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods. PMID:29510499
Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks.
Wang, Lei; Xu, Xin; Dong, Hao; Gui, Rong; Pu, Fangling
2018-03-03
Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of the excellent performance of CNN, more and more methods based on CNN are applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can only classify one pixel each time. Because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously. When classifying a whole PolSAR image, it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use the interrelation of land covers to improve the classification results. The experiments of FFS-CNN are evaluated on a Chinese Gaofen-3 PolSAR image and other two real PolSAR images. Experiment results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods.
Pixelated coatings and advanced IR coatings
NASA Astrophysics Data System (ADS)
Pradal, Fabien; Portier, Benjamin; Oussalah, Meihdi; Leplan, Hervé
2017-09-01
Reosc developed pixelated infrared coatings on detector. Reosc manufactured thick pixelated multilayer stacks on IR-focal plane arrays for bi-spectral imaging systems, demonstrating high filter performance, low crosstalk, and no deterioration of the device sensitivities. More recently, a 5-pixel filter matrix was designed and fabricated. Recent developments in pixelated coatings, shows that high performance infrared filters can be coated directly on detector for multispectral imaging. Next generation space instrument can benefit from this technology to reduce their weight and consumptions.
Synthetic aperture radar images with composite azimuth resolution
Bielek, Timothy P; Bickel, Douglas L
2015-03-31
A synthetic aperture radar (SAR) image is produced by using all phase histories of a set of phase histories to produce a first pixel array having a first azimuth resolution, and using less than all phase histories of the set to produce a second pixel array having a second azimuth resolution that is coarser than the first azimuth resolution. The first and second pixel arrays are combined to produce a third pixel array defining a desired SAR image that shows distinct shadows of moving objects while preserving detail in stationary background clutter.
A custom hardware classifier for bruised apple detection in hyperspectral images
NASA Astrophysics Data System (ADS)
Cárdenas, Javier; Figueroa, Miguel; Pezoa, Jorge E.
2015-09-01
We present a custom digital architecture for bruised apple classification using hyperspectral images in the near infrared (NIR) spectrum. The algorithm classifies each pixel in an image into one of three classes: bruised, non-bruised, and background. We extract two 5-element feature vectors for each pixel using only 10 out of the 236 spectral bands provided by the hyperspectral camera, thereby greatly reducing both the requirements of the imager and the computational complexity of the algorithm. We then use two linear-kernel support vector machine (SVM) to classify each pixel. Each SVM was trained with 504 windows of size 17×17-pixel taken from 14 hyperspectral images of 320×320 pixels each, for each class. The architecture then computes the percentage of bruised pixels in each apple in order to adequately classify the fruit. We implemented the architecture on a Xilinx Zynq Z-7010 field-programmable gate array (FPGA) and tested it on images from a NIR N17E push-broom camera with a frame rate of 25 fps, a band-pixel rate of 1.888 MHz, and 236 spectral bands between 900 and 1700 nanometers in laboratory conditions. Using 28-bit fixed-point arithmetic, the circuit accurately discriminates 95.2% of the pixels corresponding to an apple, 81% of the pixels corresponding to a bruised apple, and 96.4% of the background. With the default threshold settings, the highest false positive (FP) for a bruised apple is 18.7%. The circuit operates at the native frame rate of the camera, consumes 67 mW of dynamic power, and uses less than 10% of the logic resources on the FPGA.
Compression of color-mapped images
NASA Technical Reports Server (NTRS)
Hadenfeldt, A. C.; Sayood, Khalid
1992-01-01
In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model
NASA Astrophysics Data System (ADS)
Li, X. L.; Zhao, Q. H.; Li, Y.
2017-09-01
Most of stochastic based fuzzy clustering algorithms are pixel-based, which can not effectively overcome the inherent speckle noise in SAR images. In order to deal with the problem, a regional SAR image segmentation algorithm based on fuzzy clustering with Gamma mixture model is proposed in this paper. First, initialize some generating points randomly on the image, the image domain is divided into many sub-regions using Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, assume the probability of the pixel to be a Gamma mixture model with the parameters respecting to the cluster which the pixel belongs to. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster. The regional dissimilarity measure of one sub-region is defined as the sum of the measures of pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from pixels level to Voronoi sub-regions, and then the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results can be obtained by the solution of model parameters and generating points. Finally, the effectiveness of the proposed algorithm can be proved by the qualitative and quantitative analysis from the segmentation results of the simulated and real SAR images.
NASA Technical Reports Server (NTRS)
Stanfill, D. F.
1994-01-01
Pixel Pusher is a Macintosh application used for viewing and performing minor enhancements on imagery. It will read image files in JPL's two primary image formats- VICAR and PDS - as well as the Macintosh PICT format. VICAR (NPO-18076) handles an array of image processing capabilities which may be used for a variety of applications including biomedical image processing, cartography, earth resources, and geological exploration. Pixel Pusher can also import VICAR format color lookup tables for viewing images in pseudocolor (256 colors). This program currently supports only eight bit images but will work on monitors with any number of colors. Arbitrarily large image files may be viewed in a normal Macintosh window. Color and contrast enhancement can be performed with a graphical "stretch" editor (as in contrast stretch). In addition, VICAR images may be saved as Macintosh PICT files for exporting into other Macintosh programs, and individual pixels can be queried to determine their locations and actual data values. Pixel Pusher is written in Symantec's Think C and was developed for use on a Macintosh SE30, LC, or II series computer running System Software 6.0.3 or later and 32 bit QuickDraw. Pixel Pusher will only run on a Macintosh which supports color (whether a color monitor is being used or not). The standard distribution medium for this program is a set of three 3.5 inch Macintosh format diskettes. The program price includes documentation. Pixel Pusher was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Think C is a trademark of Symantec Corporation. Macintosh is a registered trademark of Apple Computer, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, S; Vedantham, S; Karellas, A
Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 KeV photons. The detector had hexagonal pixels with apothem of 30 microns resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions.more » Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways; the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at Nyquist frequencies alone resulted in 54 microns square pixels. For the photon-counting CdTe detector and after resampling to 53 microns square pixels using quadratic interpolation, the presampling MTF at Nyquist frequency of 9.434 cycles/mm along the two directions were 0.501 and 0.507. Conclusion: Hexagonal pixel array photon-counting CdTe detector after resampling to square pixels provides high-resolution imaging suitable for fluoroscopy.« less
Compact SPAD-Based Pixel Architectures for Time-Resolved Image Sensors
Perenzoni, Matteo; Pancheri, Lucio; Stoppa, David
2016-01-01
This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging. The focus of the paper is on pixel architectures featuring small pixel size (<25 μm) and high fill factor (>20%) as a key enabling technology for the successful implementation of high spatial resolution SPAD-based image sensors. A summary of the main CMOS SPAD implementations, their characteristics and integration challenges, is provided from the perspective of targeting large pixel arrays, where one of the key drivers is the spatial uniformity. The main analog techniques aimed at time-gated photon counting and photon timestamping suitable for compact and low-power pixels are critically discussed. The main features of these solutions are the adoption of analog counting techniques and time-to-analog conversion, in NMOS-only pixels. Reliable quantum-limited single-photon counting, self-referenced analog-to-digital conversion, time gating down to 0.75 ns and timestamping with 368 ps jitter are achieved. PMID:27223284
Correction of clipped pixels in color images.
Xu, Di; Doutre, Colin; Nasiopoulos, Panos
2011-03-01
Conventional images store a very limited dynamic range of brightness. The true luma in the bright area of such images is often lost due to clipping. When clipping changes the R, G, B color ratios of a pixel, color distortion also occurs. In this paper, we propose an algorithm to enhance both the luma and chroma of the clipped pixels. Our method is based on the strong chroma spatial correlation between clipped pixels and their surrounding unclipped area. After identifying the clipped areas in the image, we partition the clipped areas into regions with similar chroma, and estimate the chroma of each clipped region based on the chroma of its surrounding unclipped region. We correct the clipped R, G, or B color channels based on the estimated chroma and the unclipped color channel(s) of the current pixel. The last step involves smoothing of the boundaries between regions of different clipping scenarios. Both objective and subjective experimental results show that our algorithm is very effective in restoring the color of clipped pixels. © 2011 IEEE
Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering
Mars, Kamel; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro
2017-01-01
Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a problem of lack of detectors suitable for MHz modulation rate parallel detection, detecting multiple small SRS signals while eliminating extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of Stokes-on and Stokes-off signal at modulation frequency of 20 MHz in the pixel before reading out. The generated small SRS signal is extracted and amplified in a pixel using a high-speed and large area lateral electric field charge modulator (LEFM) employing two-step ion implantation and an in-pixel pair of low-pass filter, a sample and hold circuit and a switched capacitor integrator using a fully differential amplifier. A prototype chip is fabricated using 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system. PMID:29120358
Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering.
Mars, Kamel; Lioe, De Xing; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro; Hashimoto, Mamoru
2017-11-09
Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a problem of lack of detectors suitable for MHz modulation rate parallel detection, detecting multiple small SRS signals while eliminating extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of Stokes-on and Stokes-off signal at modulation frequency of 20 MHz in the pixel before reading out. The generated small SRS signal is extracted and amplified in a pixel using a high-speed and large area lateral electric field charge modulator (LEFM) employing two-step ion implantation and an in-pixel pair of low-pass filter, a sample and hold circuit and a switched capacitor integrator using a fully differential amplifier. A prototype chip is fabricated using 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system.
Development of high energy micro-tomography system at SPring-8
NASA Astrophysics Data System (ADS)
Uesugi, Kentaro; Hoshino, Masato
2017-09-01
A high energy X-ray micro-tomography system has been developed at BL20B2 in SPring-8. The available range of the energy is between 20keV and 113keV with a Si (511) double crystal monochromator. The system enables us to image large or heavy materials such as fossils and metals. The X-ray image detector consists of visible light conversion system and sCMOS camera. The effective pixel size is variable by changing a tandem lens between 6.5 μm/pixel and 25.5 μm/pixel discretely. The format of the camera is 2048 pixels x 2048 pixels. As a demonstration of the system, alkaline battery and a nodule from Bolivia were imaged. A detail of the structure of the battery and a female mold Trilobite were successfully imaged without breaking those fossils.
Wang, Qian; Liu, Zhen; Ziegler, Sibylle I; Shi, Kuangyu
2015-07-07
Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by (18)F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [(18)F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
NASA Astrophysics Data System (ADS)
Wang, Qian; Liu, Zhen; Ziegler, Sibylle I.; Shi, Kuangyu
2015-07-01
Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by 18F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [18F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
Le, Huy Q.; Molloi, Sabee
2011-01-01
Purpose: To experimentally investigate whether a computed tomography (CT) system based on CdZnTe (CZT) detectors in conjunction with a least-squares parameter estimation technique can be used to decompose four different materials. Methods: The material decomposition process was divided into a segmentation task and a quantification task. A least-squares minimization algorithm was used to decompose materials with five measurements of the energy dependent linear attenuation coefficients. A small field-of-view energy discriminating CT system was built. The CT system consisted of an x-ray tube, a rotational stage, and an array of CZT detectors. The CZT array was composed of 64 pixels, each of which is 0.8×0.8×3 mm. Images were acquired at 80 kVp in fluoroscopic mode at 50 ms per frame. The detector resolved the x-ray spectrum into energy bins of 22–32, 33–39, 40–46, 47–56, and 57–80 keV. Four phantoms were constructed from polymethylmethacrylate (PMMA), polyethylene, polyoxymethylene, hydroxyapatite, and iodine. Three phantoms were composed of three materials with embedded hydroxyapatite (50, 150, 250, and 350 mg∕ml) and iodine (4, 8, 12, and 16 mg∕ml) contrast elements. One phantom was composed of four materials with embedded hydroxyapatite (150 and 350 mg∕ml) and iodine (8 and 16 mg∕ml). Calibrations consisted of PMMA phantoms with either hydroxyapatite (100, 200, 300, 400, and 500 mg∕ml) or iodine (5, 15, 25, 35, and 45 mg∕ml) embedded. Filtered backprojection and a ramp filter were used to reconstruct images from each energy bin. Material segmentation and quantification were performed and compared between different phantoms. Results: All phantoms were decomposed accurately, but some voxels in the base material regions were incorrectly identified. Average quantification errors of hydroxyapatite∕iodine were 9.26∕7.13%, 7.73∕5.58%, and 12.93∕8.23% for the three-material PMMA, polyethylene, and polyoxymethylene phantoms, respectively. The average errors for the four-material phantom were 15.62% and 2.76% for hydroxyapatite and iodine, respectively. Conclusions: The calibrated least-squares minimization technique of decomposition performed well in breast imaging tasks with an energy resolving detector. This method can provide material basis images containing concentrations of the relevant materials that can potentially be valuable in the diagnostic process. PMID:21361191
Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori
2018-01-12
To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke - . Readout noise under the highest pixel gain condition is 1 e - with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single exposure dynamic rage (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.
Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori
2018-01-01
To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single exposure dynamic rage (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach. PMID:29329210
NASA Astrophysics Data System (ADS)
Holte, Elias Peter; Sirbu, Dan; Belikov, Ruslan
2018-01-01
Binary stars have been largely left out of direct imaging surveys for exoplanets, specifically for earth-sized planets in their star's habitable zone. Utilizing new direct imaging techniques brings us closer to being able to detect earth-like exoplanets around binary stars. In preparation for the upcoming WFIRST mission and other direct imaging-capable missions (HabEx, LUVIOR) it is important to understand the expected science yield resulting from the implementation of these imaging techniques. BinCat is a catalog of binary systems within 30 parsecs to be used as a target list for future direct imaging missions. There is a non-static component along with BinCat that allows researchers to predict the expected light-leakage between a binary component and its off-axis companion (a value critical to the aforementioned techniques) at any epoch. This is accomplished by using orbital elements from the Sixth Orbital Catalog to model the orbits of the binaries. The software was validated against the historical data used to generate the orbital parameters. When orbital information is unknown or the binaries are purely optical the proper motion of the pair taken from the Washington Double Star catalog is integrated in time to estimate expected light-leakage.
Dead pixel replacement in LWIR microgrid polarimeters.
Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P
2007-06-11
LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements.We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.
NASA Astrophysics Data System (ADS)
Wu, L.; San Segundo Bello, D.; Coppejans, P.; Craninckx, J.; Wambacq, P.; Borremans, J.
2017-02-01
This paper presents a 20 Mfps 32 × 84 pixels CMOS burst-mode imager featuring high frame depth with a passive in-pixel amplifier. Compared to the CCD alternatives, CMOS burst-mode imagers are attractive for their low power consumption and integration of circuitry such as ADCs. Due to storage capacitor size and its noise limitations, CMOS burst-mode imagers usually suffer from a lower frame depth than CCD implementations. In order to capture fast transitions over a longer time span, an in-pixel CDS technique has been adopted to reduce the required memory cells for each frame by half. Moreover, integrated with in-pixel CDS, an in-pixel NMOS-only passive amplifier alleviates the kTC noise requirements of the memory bank allowing the usage of smaller capacitors. Specifically, a dense 108-cell MOS memory bank (10fF/cell) has been implemented inside a 30μm pitch pixel, with an area of 25 × 30μm2 occupied by the memory bank. There is an improvement of about 4x in terms of frame depth per pixel area by applying in-pixel CDS and amplification. With the amplifier's gain of 3.3, an FD input-referred RMS noise of 1mV is achieved at 20 Mfps operation. While the amplification is done without burning DC current, including the pixel source follower biasing, the full pixel consumes 10μA at 3.3V supply voltage at full speed. The chip has been fabricated in imec's 130nm CMOS CIS technology.
High dynamic range pixel architecture for advanced diagnostic medical x-ray imaging applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izadi, Mohammad Hadi; Karim, Karim S.
2006-05-15
The most widely used architecture in large-area amorphous silicon (a-Si) flat panel imagers is a passive pixel sensor (PPS), which consists of a detector and a readout switch. While the PPS has the advantage of being compact and amenable toward high-resolution imaging, small PPS output signals are swamped by external column charge amplifier and data line thermal noise, which reduce the minimum readable sensor input signal. In contrast to PPS circuits, on-pixel amplifiers in a-Si technology reduce readout noise to levels that can meet even the stringent requirements for low noise digital x-ray fluoroscopy (<1000 noise electrons). However, larger voltagesmore » at the pixel input cause the output of the amplified pixel to become nonlinear thus reducing the dynamic range. We reported a hybrid amplified pixel architecture based on a combination of PPS and amplified pixel designs that, in addition to low noise performance, also resulted in large-signal linearity and consequently higher dynamic range [K. S. Karim et al., Proc. SPIE 5368, 657 (2004)]. The additional benefit in large-signal linearity, however, came at the cost of an additional pixel transistor. We present an amplified pixel design that achieves the goals of low noise performance and large-signal linearity without the need for an additional pixel transistor. Theoretical calculations and simulation results for noise indicate the applicability of the amplified a-Si pixel architecture for high dynamic range, medical x-ray imaging applications that require switching between low exposure, real-time fluoroscopy and high-exposure radiography.« less
Camouflaging in Digital Image for Secure Communication
NASA Astrophysics Data System (ADS)
Jindal, B.; Singh, A. P.
2013-06-01
The present paper reports on a new type of camouflaging in digital image for hiding crypto-data using moderate bit alteration in the pixel. In the proposed method, cryptography is combined with steganography to provide a two layer security to the hidden data. The novelty of the algorithm proposed in the present work lies in the fact that the information about hidden bit is reflected by parity condition in one part of the image pixel. The remaining part of the image pixel is used to perform local pixel adjustment to improve the visual perception of the cover image. In order to examine the effectiveness of the proposed method, image quality measuring parameters are computed. In addition to this, security analysis is also carried by comparing the histograms of cover and stego images. This scheme provides a higher security as well as robustness to intentional as well as unintentional attacks.
Wavelet imaging cleaning method for atmospheric Cherenkov telescopes
NASA Astrophysics Data System (ADS)
Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.
2002-07-01
We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the utilization of wavelets to identify noise pixels in images of gamma-ray and hadronic induced air showers. This method selects more signal pixels with Cherenkov photons than traditional image processing techniques. In addition, the method is equally efficient at rejecting pixels with noise alone. The inclusion of more signal pixels in an image of an air shower allows for a more accurate reconstruction, especially at lower gamma-ray energies that produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are utilized to show the efficacy of the method for extracting a gamma-ray signal from the background of hadronic generated images.
A hyperspectral image projector for hyperspectral imagers
NASA Astrophysics Data System (ADS)
Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.
2007-04-01
We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP*) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily-programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally-integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would alleviate expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e. de-convolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.
CRISM's Global Mapping of Mars, Part 1
NASA Technical Reports Server (NTRS)
2007-01-01
After a year in Mars orbit, CRISM has taken enough images to allow the team to release the first parts of a global spectral map of Mars to the Planetary Data System (PDS), NASA's digital library of planetary data. CRISM's global mapping is called the 'multispectral survey.' The team uses the word 'survey' because a reason for gathering this data set is to search for new sites for targeted observations, high-resolution views of the surface at 18 meters per pixel in 544 colors. Another reason for the multispectral survey is to provide contextual information. Targeted observations have such a large data volume (about 200 megabytes apiece) that only about 1% of Mars can be imaged at CRISM's highest resolution. The multispectral survey is a lower data volume type of observation that fills in the gaps between targeted observations, allowing scientists to better understand their geologic context. The global map is built from tens of thousands of image strips each about 10 kilometers (6.2 miles) wide and thousands of kilometers long. During the multispectral survey, CRISM returns data from only 72 carefully selected wavelengths that cover absorptions indicative of the mineral groups that CRISM is looking for on Mars. Data volume is further decreased by binning image pixels inside the instrument to a scale of about 200 meters (660 feet) per pixel. The total reduction in data volume per square kilometer is a factor of 700, making the multispectral survey manageable to acquire and transmit to Earth. Once on the ground, the strips of data are mosaicked into maps. The multispectral survey is too large to show the whole planet in a single map, so the map is divided into 1,964 'tiles,' each about 300 kilometers (186 miles) across. There are three versions of each tile, processed to progressively greater levels to strip away the obscuring effects of the dusty atmosphere and to highlight mineral variations in surface materials. This is the first version of tile 750, one of 209 tiles just delivered to the PDS. It shows a part of the planet called Tyrrhena Terra in the ancient, heavily cratered highlands. The colored strips are CRISM multispectral survey data acquired over several months, in which each pixel has a calibrated 72-color spectrum of Mars. The three wavelengths shown are 2.53, 1.50, and 1.08 micrometers in the red, green, and blue image planes respectively. At these wavelengths, rocky areas appear brown, dusty areas appear tan, and regions with hazy atmosphere appear bluish. Note that there is a large difference in brightness between strips, because there is no correction for the lighting conditions at the time of each observation. The gray areas between the strips are from an earlier mosaic of the planet taken by the Thermal Emission Imaging System (THEMIS) instrument on Mars Odyssey, and are included only for context. Ultimately the multispectral survey will cover nearly all of this area. CRISM is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter and the Mars Science Laboratory for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, built the orbiter.CRISM's Global Mapping of Mars, Part 2
NASA Technical Reports Server (NTRS)
2007-01-01
After a year in Mars orbit, CRISM has taken enough images to allow the team to release the first parts of a global spectral map of Mars to the Planetary Data System (PDS), NASA's digital library of planetary data. CRISM's global mapping is called the 'multispectral survey.' The team uses the word 'survey' because a reason for gathering this data set is to search for new sites for targeted observations, high-resolution views of the surface at 18 meters per pixel in 544 colors. Another reason for the multispectral survey is to provide contextual information. Targeted observations have such a large data volume (about 200 megabytes apiece) that only about 1% of Mars can be imaged at CRISM's highest resolution. The multispectral survey is a lower data volume type of observation that fills in the gaps between targeted observations, allowing scientists to better understand their geologic context. The global map is built from tens of thousands of image strips each about 10 kilometers (6.2 miles) wide and thousands of kilometers long. During the multispectral survey, CRISM returns data from only 72 carefully selected wavelengths that cover absorptions indicative of the mineral groups that CRISM is looking for on Mars. Data volume is further decreased by binning image pixels inside the instrument to a scale of about 200 meters (660 feet) per pixel. The total reduction in data volume per square kilometer is a factor of 700, making the multispectral survey manageable to acquire and transmit to Earth. Once on the ground, the strips of data are mosaicked into maps. The multispectral survey is too large to show the whole planet in a single map, so the map is divided into 1,964 'tiles,' each about 300 kilometers (186 miles) across. There are three versions of each tile, processed to progressively greater levels to strip away the obscuring effects of the dusty atmosphere and to highlight mineral variations in surface materials. This is the first version of tile 750, one of 209 tiles just delivered to the PDS. It shows a part of the planet called Tyrrhena Terra in the ancient, heavily cratered highlands. The colored strips are CRISM multispectral survey data acquired over several months, in which each pixel has a calibrated 72-color spectrum of Mars. The three wavelengths shown are 2.53, 1.50, and 1.08 micrometers in the red, green, and blue image planes respectively. At these wavelengths, rocky areas appear brown, dusty areas appear tan, and regions with hazy atmosphere appear bluish. Note that there is a large difference in brightness between strips, because there is no correction for the lighting conditions at the time of each observation. The gray areas between the strips are from an earlier mosaic of the planet taken by the Thermal Emission Imaging System (THEMIS) instrument on Mars Odyssey, and are included only for context. Ultimately the multispectral survey will cover nearly all of this area. CRISM is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter and the Mars Science Laboratory for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, built the orbiter.Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform
NASA Astrophysics Data System (ADS)
Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An
2017-02-01
We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns, then by applying an inverse cosine transform to the spectra obtained from the single-pixel detector a full-colour image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
NASA Astrophysics Data System (ADS)
Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang
2008-03-01
Because of complex thermal objects in an infrared image, the prevalent image edge detection operators are often suitable for a certain scene and extract too wide edges sometimes. From a biological point of view, the image edge detection operators work reliably when assuming a convolution-based receptive field architecture. A DoG (Difference-of- Gaussians) model filter based on ON-center retinal ganglion cell receptive field architecture with artificial eye tremors introduced is proposed for the image contour detection. Aiming at the blurred edges of an infrared image, the subsequent orthogonal polynomial interpolation and sub-pixel level edge detection in rough edge pixel neighborhood is adopted to locate the foregoing rough edges in sub-pixel level. Numerical simulations show that this method can locate the target edge accurately and robustly.
Anomaly clustering in hyperspectral images
NASA Astrophysics Data System (ADS)
Doster, Timothy J.; Ross, David S.; Messinger, David W.; Basener, William F.
2009-05-01
The topological anomaly detection algorithm (TAD) differs from other anomaly detection algorithms in that it uses a topological/graph-theoretic model for the image background instead of modeling the image with a Gaussian normal distribution. In the construction of the model, TAD produces a hard threshold separating anomalous pixels from background in the image. We build on this feature of TAD by extending the algorithm so that it gives a measure of the number of anomalous objects, rather than the number of anomalous pixels, in a hyperspectral image. This is done by identifying, and integrating, clusters of anomalous pixels via a graph theoretical method combining spatial and spectral information. The method is applied to a cluttered HyMap image and combines small groups of pixels containing like materials, such as those corresponding to rooftops and cars, into individual clusters. This improves visualization and interpretation of objects.
A new algorithm to reduce noise in microscopy images implemented with a simple program in python.
Papini, Alessio
2012-03-01
All microscopical images contain noise, increasing when (e.g., transmission electron microscope or light microscope) approaching the resolution limit. Many methods are available to reduce noise. One of the most commonly used is image averaging. We propose here to use the mode of pixel values. Simple Python programs process a given number of images, recorded consecutively from the same subject. The programs calculate the mode of the pixel values in a given position (a, b). The result is a new image containing in (a, b) the mode of the values. Therefore, the final pixel value corresponds to that read in at least two of the pixels in position (a, b). The application of the program on a set of images obtained by applying salt and pepper noise and GIMP hurl noise with 10-90% standard deviation showed that the mode performs better than averaging with three-eight images. The data suggest that the mode would be more efficient (in the sense of a lower number of recorded images to process to reduce noise below a given limit) for lower number of total noisy pixels and high standard deviation (as impulse noise and salt and pepper noise), while averaging would be more efficient when the number of varying pixels is high, and the standard deviation is low, as in many cases of Gaussian noise affected images. The two methods may be used serially. Copyright © 2011 Wiley Periodicals, Inc.
The Effects of Radiation on Imagery Sensors in Space
NASA Technical Reports Server (NTRS)
Mathis, Dylan
2007-01-01
Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex causal and mediating factor mix. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue considering the continued diffusion of high definition video cameras in the marketplace.
Image reconstruction of dynamic infrared single-pixel imaging system
NASA Astrophysics Data System (ADS)
Tong, Qi; Jiang, Yilin; Wang, Haiyan; Guo, Limin
2018-03-01
Single-pixel imaging technique has recently received much attention. Most of the current single-pixel imaging is aimed at relatively static targets or the imaging system is fixed, which is limited by the number of measurements received through the single detector. In this paper, we proposed a novel dynamic compressive imaging method to solve the imaging problem, where exists imaging system motion behavior, for the infrared (IR) rosette scanning system. The relationship between adjacent target images and scene is analyzed under different system movement scenarios. These relationships are used to build dynamic compressive imaging models. Simulation results demonstrate that the proposed method can improve the reconstruction quality of IR image and enhance the contrast between the target and the background in the presence of system movement.
Detector Sampling of Optical/IR Spectra: How Many Pixels per FWHM?
NASA Astrophysics Data System (ADS)
Robertson, J. Gordon
2017-08-01
Most optical and IR spectra are now acquired using detectors with finite-width pixels in a square array. Each pixel records the received intensity integrated over its own area, and pixels are separated by the array pitch. This paper examines the effects of such pixellation, using computed simulations to illustrate the effects which most concern the astronomer end-user. It is shown that coarse sampling increases the random noise errors in wavelength by typically 10-20 % at 2 pixels per Full Width at Half Maximum, but with wide variation depending on the functional form of the instrumental Line Spread Function (i.e. the instrumental response to a monochromatic input) and on the pixel phase. If line widths are determined, they are even more strongly affected at low sampling frequencies. However, the noise in fitted peak amplitudes is minimally affected by pixellation, with increases less than about 5%. Pixellation has a substantial but complex effect on the ability to see a relative minimum between two closely spaced peaks (or relative maximum between two absorption lines). The consistent scale of resolving power presented by Robertson to overcome the inadequacy of the Full Width at Half Maximum as a resolution measure is here extended to cover pixellated spectra. The systematic bias errors in wavelength introduced by pixellation, independent of signal/noise ratio, are examined. While they may be negligible for smooth well-sampled symmetric Line Spread Functions, they are very sensitive to asymmetry and high spatial frequency sub-structure. The Modulation Transfer Function for sampled data is shown to give a useful indication of the extent of improperly sampled signal in an Line Spread Function. The common maxim that 2 pixels per Full Width at Half Maximum is the Nyquist limit is incorrect and most Line Spread Functions will exhibit some aliasing at this sample frequency. While 2 pixels per Full Width at Half Maximum is nevertheless often an acceptable minimum for moderate signal/noise work, it is preferable to carry out simulations for any actual or proposed Line Spread Function to find the effects of various sampling frequencies. Where spectrograph end-users have a choice of sampling frequencies, through on-chip binning and/or spectrograph configurations, it is desirable that the instrument user manual should include an examination of the effects of the various choices.
The progress of sub-pixel imaging methods
NASA Astrophysics Data System (ADS)
Wang, Hu; Wen, Desheng
2014-02-01
This paper reviews the Sub-pixel imaging technology principles, characteristics, the current development status at home and abroad and the latest research developments. As Sub-pixel imaging technology has achieved the advantages of high resolution of optical remote sensor, flexible working ways and being miniaturized with no moving parts. The imaging system is suitable for the application of space remote sensor. Its application prospect is very extensive. It is quite possible to be the research development direction of future space optical remote sensing technology.
NASA Astrophysics Data System (ADS)
Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie
2016-10-01
Water body is a fundamental element in urban ecosystems and water mapping is critical for urban and landscape planning and management. As remote sensing has increasingly been used for water mapping in rural areas, this spatially explicit approach applied in urban area is also a challenging work due to the water bodies mainly distributed in a small size and the spectral confusion widely exists between water and complex features in the urban environment. Water index is the most common method for water extraction at pixel level, and spectral mixture analysis (SMA) has been widely employed in analyzing urban environment at subpixel level recently. In this paper, we introduce an automatic subpixel water mapping method in urban areas using multispectral remote sensing data. The objectives of this research consist of: (1) developing an automatic land-water mixed pixels extraction technique by water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and adaptive iterative optimal neighboring land pixel for respectively; (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, the locally weighted scatter plot smoothing is firstly used to the original histogram curve of WI image . And then the Ostu threshold is derived as the start point to select land-water pixels based on histogram of the WI image with the land threshold and water threshold determination through the slopes of histogram curve . Based on the previous process at pixel level, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Then the spectral mixture analysis (SMA) is applied to land-water mixed pixels for water fraction estimation at subpixel level. With the assumption that the endmember signature of a target pixel should be more similar to adjacent pixels due to spatial dependence, the endmember of water and land are determined by neighboring pure land or pure water pixels within a distance. To obtaining the most representative endmembers in SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. According to the spectral similarity in a spatial adjacent region, the spectrum of land endmember is determined by selecting the most representative land pixel in a local window, and the spectrum of water endmember is determined by calculating an average of the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas for reliability evaluation using the Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at pixel level and subpixel level were chosen respectively. Results indicate that the water maps generated by the proposed method correspond as closely with the truth water maps with subpixel precision. And the results showed that the WISMA achieved the best performance in water mapping with comprehensive analysis of different accuracy evaluation indexes (RMSE and SE).
An enhanced fast scanning algorithm for image segmentation
NASA Astrophysics Data System (ADS)
Ismael, Ahmed Naser; Yusof, Yuhanis binti
2015-12-01
Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This will transform the image for a better image analysis and evaluation. An important benefit of segmentation is the identification of region of interest in a particular image. Various algorithms have been proposed for image segmentation and this includes the Fast Scanning algorithm which has been employed on food, sport and medical images. It scans all pixels in the image and cluster each pixel according to the upper and left neighbor pixels. The clustering process in Fast Scanning algorithm is performed by merging pixels with similar neighbor based on an identified threshold. Such an approach will lead to a weak reliability and shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function used the gray'value in the image's pixels and variance Also, the level of the image that is more the threshold are converted into intensity values between 0 and 1, and other values are converted into intensity values zero. The proposed enhanced Fast Scanning algorithm is realized on images of the public and private transportation in Iraq. Evaluation is later made by comparing the produced images of proposed algorithm and the standard Fast Scanning algorithm. The results showed that proposed algorithm is faster in terms the time from standard fast scanning.
Model of Image Artifacts from Dust Particles
NASA Technical Reports Server (NTRS)
Willson, Reg
2008-01-01
A mathematical model of image artifacts produced by dust particles on lenses has been derived. Machine-vision systems often have to work with camera lenses that become dusty during use. Dust particles on the front surface of a lens produce image artifacts that can potentially affect the performance of a machine-vision algorithm. The present model satisfies a need for a means of synthesizing dust image artifacts for testing machine-vision algorithms for robustness (or the lack thereof) in the presence of dust on lenses. A dust particle can absorb light or scatter light out of some pixels, thereby giving rise to a dark dust artifact. It can also scatter light into other pixels, thereby giving rise to a bright dust artifact. For the sake of simplicity, this model deals only with dark dust artifacts. The model effectively represents dark dust artifacts as an attenuation image consisting of an array of diffuse darkened spots centered at image locations corresponding to the locations of dust particles. The dust artifacts are computationally incorporated into a given test image by simply multiplying the brightness value of each pixel by a transmission factor that incorporates the factor of attenuation, by dust particles, of the light incident on that pixel. With respect to computation of the attenuation and transmission factors, the model is based on a first-order geometric (ray)-optics treatment of the shadows cast by dust particles on the image detector. In this model, the light collected by a pixel is deemed to be confined to a pair of cones defined by the location of the pixel s image in object space, the entrance pupil of the lens, and the location of the pixel in the image plane (see Figure 1). For simplicity, it is assumed that the size of a dust particle is somewhat less than the diameter, at the front surface of the lens, of any collection cone containing all or part of that dust particle. Under this assumption, the shape of any individual dust particle artifact is the shape (typically, circular) of the aperture, and the contribution of the particle to the attenuation factor for a given pixel is the fraction of the cross-sectional area of the collection cone occupied by the particle. Assuming that dust particles do not overlap, the net transmission factor for a given pixel is calculated as one minus the sum of attenuation factors contributed by all dust particles affecting that pixel. In a test, the model was used to synthesize attenuation images for random distributions of dust particles on the front surface of a lens at various relative aperture (F-number) settings. As shown in Figure 2, the attenuation images resembled dust artifacts in real test images recorded while the lens was aimed at a white target.
NASA Astrophysics Data System (ADS)
Li, Song; Wang, Caizhu; Li, Yeqiu; Wang, Ling; Sakata, Shiro; Sekiya, Hiroo; Kuroiwa, Shingo
In this paper, we propose a new framework of removing salt and pepper impulse noise. In our proposed framework, the most important point is that the number of noise-free white and black pixels in a noisy image can be determined by using the noise rates estimated by Fuzzy Impulse Noise Detection and Reduction Method (FINDRM) and Efficient Detail-Preserving Approach (EDPA). For the noisy image includes many noise-free white and black pixels, the detected noisy pixel from the FINDRM is re-checked by using the alpha-trimmed mean. Finally, the impulse noise filtering phase of the FINDRM is used to restore the image. Simulation results show that for the noisy image including many noise-free white and black pixels, the proposed framework can decrease the False Hit Rate (FHR) efficiently compared with the FINDRM. Therefore, the proposed framework can be used more widely than the FINDRM.
Mattioli Della Rocca, Francescopaolo
2018-01-01
This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
Facial recognition using enhanced pixelized image for simulated visual prosthesis.
Li, Ruonan; Zhhang, Xudong; Zhang, Hui; Hu, Guanshu
2005-01-01
A simulated face recognition experiment using enhanced pixelized images is designed and performed for the artificial visual prosthesis. The results of the simulation reveal new characteristics of visual performance in an enhanced pixelization condition, and then new suggestions on the future design of visual prosthesis are provided.
Acquisition of STEM Images by Adaptive Compressive Sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash
Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5]more » are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of “most informative” pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of Peak Signal-to-Noise (PSNR) using these three different active learning methods at different percentages of sampled pixels. At 20% level, all the three active learning methods underperform the original CS without active learning. However, they all beat the original CS as more of the “most informative” pixels are sampled. One can also argue that CS equipped with active learning requires less sampled pixels to achieve the same value of PSNR than CS with pixels randomly sampled, since all the three PSNR curves with active learning grow at a faster pace than that without active learning. For this particular STEM image, by observing the reconstructed images and the sensing masks, we find that while the method based on RBF kernel acquires samples more uniformly, the one on entropy samples more areas of significant change, thus less uniformly. The KL-divergence method performs the best in terms of reconstruction error (PSNR) for this example [8].« less
Pixel Stability in the Hubble Space Telescope WFC3/UVIS Detector
NASA Astrophysics Data System (ADS)
Bourque, Matthew; Baggett, Sylvia M.; Borncamp, David; Desjardins, Tyler D.; Grogin, Norman A.; Wide Field Camera 3 Team
2018-06-01
The Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) Ultraviolet-Visible (UVIS) detector has acquired roughly 12,000 dark images since the installation of WFC3 in 2009, as part of a daily monitoring program to measure the instrinsic dark current of the detector. These images have been reconfigured into 'pixel history' images in which detector columns are extracted from each dark and placed into a new time-ordered array, allowing for efficient analysis of a given pixel's behavior over time. We discuss how we measure each pixel's stability, as well as plans for a new Data Quality (DQ) flag to be introduced in a future release of the WFC3 calibration pipeline (CALWF3) for flagging pixels that are deemed unstable.
Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series
NASA Astrophysics Data System (ADS)
Champion, Nicolas
2016-06-01
Detecting clouds and their shadows is one of the primaries steps to perform when processing satellite images because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages are used. They were processed from initial satellite images using a dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are firstly extracted. For that purpose and for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it. The pixels of the input ortho-image are secondly labelled seeds if the difference of reflectance (in the blue channel) with overlapping ortho-images is bigger than a given threshold. Clouds are eventually delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding the shadow detection, our method is based on the idea that a shadow pixel is darker when comparing to the other images of the time series. The detection is basically composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area. Its pixels have a value corresponding to the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled shadows if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Eventually, an optional region-growing step may be used to refine the results. Note that pixels labelled clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR input data channel is used to perform the shadow detection, because it appeared to better discriminate shadow pixels. The method was tested on times series of Landsat 8 and Pléiades-HR images and our first experiments show the feasibility to automate the detection of shadows and clouds in satellite image sequences.
Boucheron, Laura E
2013-07-16
Quantitative object and spatial arrangement-level analysis of tissue are detailed using expert (pathologist) input to guide the classification process. A two-step method is disclosed for imaging tissue, by classifying one or more biological materials, e.g. nuclei, cytoplasm, and stroma, in the tissue into one or more identified classes on a pixel-by-pixel basis, and segmenting the identified classes to agglomerate one or more sets of identified pixels into segmented regions. Typically, the one or more biological materials comprises nuclear material, cytoplasm material, and stromal material. The method further allows a user to markup the image subsequent to the classification to re-classify said materials. The markup is performed via a graphic user interface to edit designated regions in the image.
The ESCRT-III pathway facilitates cardiomyocyte release of cBIN1-containing microparticles
Xu, Bing; Fu, Ying; Liu, Yan; Agvanian, Sosse; Wirka, Robert C.; Baum, Rachel; Zhou, Kang; Shaw, Robin M.
2017-01-01
Microparticles (MPs) are cell–cell communication vesicles derived from the cell surface plasma membrane, although they are not known to originate from cardiac ventricular muscle. In ventricular cardiomyocytes, the membrane deformation protein cardiac bridging integrator 1 (cBIN1 or BIN1+13+17) creates transverse-tubule (t-tubule) membrane microfolds, which facilitate ion channel trafficking and modulate local ionic concentrations. The microfold-generated microdomains continuously reorganize, adapting in response to stress to modulate the calcium signaling apparatus. We explored the possibility that cBIN1-microfolds are externally released from cardiomyocytes. Using electron microscopy imaging with immunogold labeling, we found in mouse plasma that cBIN1 exists in membrane vesicles about 200 nm in size, which is consistent with the size of MPs. In mice with cardiac-specific heterozygous Bin1 deletion, flow cytometry identified 47% less cBIN1-MPs in plasma, supporting cardiac origin. Cardiac release was also evidenced by the detection of cBIN1-MPs in medium bathing a pure population of isolated adult mouse cardiomyocytes. In human plasma, osmotic shock increased cBIN1 detection by enzyme-linked immunosorbent assay (ELISA), and cBIN1 level decreased in humans with heart failure, a condition with reduced cardiac muscle cBIN1, both of which support cBIN1 release in MPs from human hearts. Exploring putative mechanisms of MP release, we found that the membrane fission complex endosomal sorting complexes required for transport (ESCRT)-III subunit charged multivesicular body protein 4B (CHMP4B) colocalizes and coimmunoprecipitates with cBIN1, an interaction enhanced by actin stabilization. In HeLa cells with cBIN1 overexpression, knockdown of CHMP4B reduced the release of cBIN1-MPs. Using truncation mutants, we identified that the N-terminal BAR (N-BAR) domain in cBIN1 is required for CHMP4B binding and MP release. This study links the BAR protein superfamily to the ESCRT pathway for MP biogenesis in mammalian cardiac ventricular cells, identifying elements of a pathway by which cytoplasmic cBIN1 is released into blood. PMID:28806752
The ESCRT-III pathway facilitates cardiomyocyte release of cBIN1-containing microparticles.
Xu, Bing; Fu, Ying; Liu, Yan; Agvanian, Sosse; Wirka, Robert C; Baum, Rachel; Zhou, Kang; Shaw, Robin M; Hong, TingTing
2017-08-01
Microparticles (MPs) are cell-cell communication vesicles derived from the cell surface plasma membrane, although they are not known to originate from cardiac ventricular muscle. In ventricular cardiomyocytes, the membrane deformation protein cardiac bridging integrator 1 (cBIN1 or BIN1+13+17) creates transverse-tubule (t-tubule) membrane microfolds, which facilitate ion channel trafficking and modulate local ionic concentrations. The microfold-generated microdomains continuously reorganize, adapting in response to stress to modulate the calcium signaling apparatus. We explored the possibility that cBIN1-microfolds are externally released from cardiomyocytes. Using electron microscopy imaging with immunogold labeling, we found in mouse plasma that cBIN1 exists in membrane vesicles about 200 nm in size, which is consistent with the size of MPs. In mice with cardiac-specific heterozygous Bin1 deletion, flow cytometry identified 47% less cBIN1-MPs in plasma, supporting cardiac origin. Cardiac release was also evidenced by the detection of cBIN1-MPs in medium bathing a pure population of isolated adult mouse cardiomyocytes. In human plasma, osmotic shock increased cBIN1 detection by enzyme-linked immunosorbent assay (ELISA), and cBIN1 level decreased in humans with heart failure, a condition with reduced cardiac muscle cBIN1, both of which support cBIN1 release in MPs from human hearts. Exploring putative mechanisms of MP release, we found that the membrane fission complex endosomal sorting complexes required for transport (ESCRT)-III subunit charged multivesicular body protein 4B (CHMP4B) colocalizes and coimmunoprecipitates with cBIN1, an interaction enhanced by actin stabilization. In HeLa cells with cBIN1 overexpression, knockdown of CHMP4B reduced the release of cBIN1-MPs. Using truncation mutants, we identified that the N-terminal BAR (N-BAR) domain in cBIN1 is required for CHMP4B binding and MP release. This study links the BAR protein superfamily to the ESCRT pathway for MP biogenesis in mammalian cardiac ventricular cells, identifying elements of a pathway by which cytoplasmic cBIN1 is released into blood.
Lagrange constraint neural networks for massive pixel parallel image demixing
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Hsu, Charles C.
2002-03-01
We have shown that the remote sensing optical imaging to achieve detailed sub-pixel decomposition is a unique application of blind source separation (BSS) that is truly linear of far away weak signal, instantaneous speed of light without delay, and along the line of sight without multiple paths. In early papers, we have presented a direct application of statistical mechanical de-mixing method called Lagrange Constraint Neural Network (LCNN). While the BSAO algorithm (using a posteriori MaxEnt ANN and neighborhood pixel average) is not acceptable for remote sensing, a mirror symmetric LCNN approach is all right assuming a priori MaxEnt for unknown sources to be averaged over the source statistics (not neighborhood pixel data) in a pixel-by-pixel independent fashion. LCNN reduces the computation complexity, save a great number of memory devices, and cut the cost of implementation. The Landsat system is designed to measure the radiation to deduce surface conditions and materials. For any given material, the amount of emitted and reflected radiation varies by the wavelength. In practice, a single pixel of a Landsat image has seven channels receiving 0.1 to 12 microns of radiation from the ground within a 20x20 meter footprint containing a variety of radiation materials. A-priori LCNN algorithm provides the spatial-temporal variation of mixture that is hardly de-mixable by other a-posteriori BSS or ICA methods. We have already compared the Landsat remote sensing using both methods in WCCI 2002 Hawaii. Unfortunately the absolute benchmark is not possible because of lacking of the ground truth. We will arbitrarily mix two incoherent sampled images as the ground truth. However, the constant total probability of co-located sources within the pixel footprint is necessary for the remote sensing constraint (since on a clear day the total reflecting energy is constant in neighborhood receiving pixel sensors), we have to normalized two image pixel-by-pixel as well. Then, the result is indeed as expected.
An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability.
Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U
2015-03-06
An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle.
An Ultra-Low Power CMOS Image Sensor with On-Chip Energy Harvesting and Power Management Capability
Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U.
2015-01-01
An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle. PMID:25756863
Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.
Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian
2009-04-01
Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
Contrast-guided image interpolation.
Wei, Zhe; Ma, Kai-Kuang
2013-11-01
In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45(°) and 135(°) CDMs for interpolating the diagonal pixels and 2) the 0(°) and 90(°) CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.
Oh, Paul; Lee, Sukho; Kang, Moon Gi
2017-01-01
Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than other color pixels in the filter array, especially, in low light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small-numbered RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low light condition. Experimental results show that many important information which are not perceived in color images reconstructed with conventional CFAs are perceived in the images reconstructed with the proposed method. PMID:28657602
Oh, Paul; Lee, Sukho; Kang, Moon Gi
2017-06-28
Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than other color pixels in the filter array, especially, in low light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small-numbered RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low light condition. Experimental results show that many important information which are not perceived in color images reconstructed with conventional CFAs are perceived in the images reconstructed with the proposed method.
Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest
NASA Astrophysics Data System (ADS)
Feng, W.; Sui, H.; Chen, X.
2018-04-01
Studies based on object-based image analysis (OBIA) representing the paradigm shift in change detection (CD) have achieved remarkable progress in the last decade. Their aim has been developing more intelligent interpretation analysis methods in the future. The prediction effect and performance stability of random forest (RF), as a new kind of machine learning algorithm, are better than many single predictors and integrated forecasting method. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search of interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and the regions are subject to the fuzzy c-means (FCM) clustering to obtain the pixel-level pre-classification result, which can be used as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, different super-pixel change possibilities are calculated. Furthermore, the changed and unchanged super-pixels that serve as the training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF based on these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in the accuracy of CD, and also confirm the feasibility and effectiveness of the proposed approach.
Fundamental performance differences of CMOS and CCD imagers: part V
NASA Astrophysics Data System (ADS)
Janesick, James R.; Elliott, Tom; Andrews, James; Tower, John; Pinter, Jeff
2013-02-01
Previous papers delivered over the last decade have documented developmental progress made on large pixel scientific CMOS imagers that match or surpass CCD performance. New data and discussions presented in this paper include: 1) a new buried channel CCD fabricated on a CMOS process line, 2) new data products generated by high performance custom scientific CMOS 4T/5T/6T PPD pixel imagers, 3) ultimate CTE and speed limits for large pixel CMOS imagers, 4) fabrication and test results of a flight 4k x 4k CMOS imager for NRL's SoloHi Solar Orbiter Mission, 5) a progress report on ultra large stitched Mk x Nk CMOS imager, 6) data generated by on-chip sub-electron CDS signal chain circuitry used in our imagers, 7) CMOS and CMOSCCD proton and electron radiation damage data for dose levels up to 10 Mrd, 8) discussions and data for a new class of PMOS pixel CMOS imagers and 9) future CMOS development work planned.
NASA Technical Reports Server (NTRS)
Thompson, Karl E.; Rust, David M.; Chen, Hua
1995-01-01
A new type of image detector has been designed to analyze the polarization of light simultaneously at all picture elements (pixels) in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a custom-designed charge-coupled device with signal-analysis circuitry, all integrated on a silicon chip. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Other applications include environmental monitoring and robot vision. Innovations in the IDID include two interleaved 512 x 1024 pixel imaging arrays (one for each polarization plane), large dynamic range (well depth of 10(exp 6) electrons per pixel), simultaneous readout and display of both images at 10(exp 6) pixels per second, and on-chip analog signal processing to produce polarization maps in real time. When used with a lithium niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can reveal tiny differences between simultaneous images at two wavelengths.
Precise color images a high-speed color video camera system with three intensified sensors
NASA Astrophysics Data System (ADS)
Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.
1999-06-01
High speed imaging systems have been used in a large field of science and engineering. Although the high speed camera systems have been improved to high performance, most of their applications are only to get high speed motion pictures. However, in some fields of science and technology, it is useful to get some other information, such as temperature of combustion flame, thermal plasma and molten materials. Recent digital high speed video imaging technology should be able to get such information from those objects. For this purpose, we have already developed a high speed video camera system with three-intensified-sensors and cubic prism image splitter. The maximum frame rate is 40,500 pps (picture per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels with 256 (8 bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid state memory. In order to get the precise color images from this camera system, we need to develop a digital technique, which consists of a computer program and ancillary instruments, to adjust displacement of images taken from two or three image sensors and to calibrate relationship between incident light intensity and corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment are proposed. Although the displacement of the corresponding circle was more than 8 pixels in original image, the displacement was adjusted within 0.2 pixels at most by this method.
Sparsely-sampled hyperspectral stimulated Raman scattering microscopy: a theoretical investigation
NASA Astrophysics Data System (ADS)
Lin, Haonan; Liao, Chien-Sheng; Wang, Pu; Huang, Kai-Chih; Bouman, Charles A.; Kong, Nan; Cheng, Ji-Xin
2017-02-01
A hyperspectral image corresponds to a data cube with two spatial dimensions and one spectral dimension. Through linear un-mixing, hyperspectral images can be decomposed into spectral signatures of pure components as well as their concentration maps. Due to this distinct advantage on component identification, hyperspectral imaging becomes a rapidly emerging platform for engineering better medicine and expediting scientific discovery. Among various hyperspectral imaging techniques, hyperspectral stimulated Raman scattering (HSRS) microscopy acquires data in a pixel-by-pixel scanning manner. Nevertheless, current image acquisition speed for HSRS is insufficient to capture the dynamics of freely moving subjects. Instead of reducing the pixel dwell time to achieve speed-up, which would inevitably decrease signal-to-noise ratio (SNR), we propose to reduce the total number of sampled pixels. Location of sampled pixels are carefully engineered with triangular wave Lissajous trajectory. Followed by a model-based image in-painting algorithm, the complete data is recovered for linear unmixing. Simulation results show that by careful selection of trajectory, a fill rate as low as 10% is sufficient to generate accurate linear unmixing results. The proposed framework applies to any hyperspectral beam-scanning imaging platform which demands high acquisition speed.
Imaging During MESSENGER's Second Flyby of Mercury
NASA Astrophysics Data System (ADS)
Chabot, N. L.; Prockter, L. M.; Murchie, S. L.; Robinson, M. S.; Laslo, N. R.; Kang, H. K.; Hawkins, S. E.; Vaughan, R. M.; Head, J. W.; Solomon, S. C.; MESSENGER Team
2008-12-01
During MESSENGER's second flyby of Mercury on October 6, 2008, the Mercury Dual Imaging System (MDIS) will acquire 1287 images. The images will include coverage of about 30% of Mercury's surface not previously seen by spacecraft. A portion of the newly imaged terrain will be viewed during the inbound portion of the flyby. On the outbound leg, MDIS will image additional previously unseen terrain as well as regions imaged under different illumination geometry by Mariner 10. These new images, when combined with images from Mariner 10 and from MESSENGER's first Mercury flyby, will enable the first regional- resolution global view of Mercury constituting a combined total coverage of about 96% of the planet's surface. MDIS consists of both a Wide Angle Camera (WAC) and a Narrow Angle Camera (NAC). During MESSENGER's second Mercury flyby, the following imaging activities are planned: about 86 minutes before the spacecraft's closest pass by the planet, the WAC will acquire images through 11 different narrow-band color filters of the approaching crescent planet at a resolution of about 5 km/pixel. At slightly less than 1 hour to closest approach, the NAC will acquire a 4-column x 11-row mosaic with an approximate resolution of 450 m/pixel. At 8 minutes after closest approach, the WAC will obtain the highest-resolution multispectral images to date of Mercury's surface, imaging a portion of the surface through 11 color filters at resolutions of about 250-600 m/pixel. A strip of high-resolution NAC images, with a resolution of approximately 100 m/pixel, will follow these WAC observations. The NAC will next acquire a 15-column x 13- row high-resolution mosaic of the northern hemisphere of the departing planet, beginning approximately 21 minutes after closest approach, with resolutions of 140-300 m/pixel; this mosaic will fill a large gore in the Mariner 10 data. At about 42 minutes following closest approach, the WAC will acquire a 3x3, 11-filter, full- planet mosaic with an average resolution of 2.5 km/pixel. Two NAC mosaics of the entire departing planet will be acquired beginning about 66 minutes after closest approach, with resolutions of 500-700 m/pixel. About 89 minutes following closest approach, the WAC will acquire a multispectral image set with a resolution of about 5 km/pixel. Following this WAC image set, MDIS will continue to acquire occasional images with both the WAC and NAC until 20 hours after closest approach, at which time the flyby data will begin being transmitted to Earth.
A 10MHz Fiber-Coupled Photodiode Imaging Array for Plasma Diagnostics
NASA Astrophysics Data System (ADS)
Brockington, Samuel; Case, Andrew; Witherspoon, F. Douglas
2013-10-01
HyperV Technologies has been developing an imaging diagnostic comprised of arrays of fast, low-cost, long-record-length, fiber-optically-coupled photodiode channels to investigate plasma dynamics and other fast, bright events. By coupling an imaging fiber bundle to a bank of amplified photodiode channels, imagers and streak imagers of 100 to 10,000 pixels can be constructed. By interfacing analog photodiode systems directly to commercial analog to digital convertors and modern memory chips, a prototype pixel with an extremely deep record length (128 k points at 40 Msamples/s) has been achieved for a 10 bit resolution system with signal bandwidths of at least 10 MHz. Progress on a prototype 100 Pixel streak camera employing this technique is discussed along with preliminary experimental results and plans for a 10,000 pixel imager. Work supported by USDOE Phase 1 SBIR Grant DE-SC0009492.
Terahertz imaging with compressed sensing and phase retrieval.
Chan, Wai Lam; Moravec, Matthew L; Baraniuk, Richard G; Mittleman, Daniel M
2008-05-01
We describe a novel, high-speed pulsed terahertz (THz) Fourier imaging system based on compressed sensing (CS), a new signal processing theory, which allows image reconstruction with fewer samples than traditionally required. Using CS, we successfully reconstruct a 64 x 64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels, which defines the image in the Fourier plane, and observe improved reconstruction quality when we apply phase correction. For our chosen image, only about 12% of the pixels are required for reassembling the image. In combination with phase retrieval, our system has the capability to reconstruct images with only a small subset of Fourier amplitude measurements and thus has potential application in THz imaging with cw sources.
A fast and efficient segmentation scheme for cell microscopic image.
Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H
2007-04-27
Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast to process huge quantity of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality but processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it is the one which requires most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving recognition rate. Vector quantization is used in order to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used in order to improve data set size reduction rate and recognition rate. A new decision function quality criterion is defined to select good trade-off between recognition rate and processing time of pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover posterior class pixel probability estimation is easy to compute with Platt method. Then a new segmentation scheme using probabilistic pixel classification has been developed. This one has several free parameters and an automatic selection must dealt with, but criteria for evaluate segmentation quality are not well adapted for cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution in this paper is the definition of a new quality criterion for evaluation of cell segmentation. The results presented here show that the selection of free parameters of the segmentation scheme by optimisation of the new quality cell segmentation criterion produces efficient cell segmentation.
Image Segmentation Analysis for NASA Earth Science Applications
NASA Technical Reports Server (NTRS)
Tilton, James C.
2010-01-01
NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that s now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigat the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA or Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
Geometric registration of images by similarity transformation using two reference points
NASA Technical Reports Server (NTRS)
Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)
2011-01-01
A method for registering a first image to a second image using a similarity transformation. The each image includes a plurality of pixels. The first image pixels are mapped to a set of first image coordinates and the second image pixels are mapped to a set of second image coordinates. The first image coordinates of two reference points in the first image are determined. The second image coordinates of these reference points in the second image are determined. A Cartesian translation of the set of second image coordinates is performed such that the second image coordinates of the first reference point match its first image coordinates. A similarity transformation of the translated set of second image coordinates is performed. This transformation scales and rotates the second image coordinates about the first reference point such that the second image coordinates of the second reference point match its first image coordinates.
Preliminary investigations of active pixel sensors in Nuclear Medicine imaging
NASA Astrophysics Data System (ADS)
Ott, Robert; Evans, Noel; Evans, Phil; Osmond, J.; Clark, A.; Turchetta, R.
2009-06-01
Three CMOS active pixel sensors have been investigated for their application to Nuclear Medicine imaging. Startracker with 525×525 25 μm square pixels has been coupled via a fibre optic stud to a 2 mm thick segmented CsI(Tl) crystal. Imaging tests were performed using 99mTc sources, which emit 140 keV gamma rays. The system was interfaced to a PC via FPGA-based DAQ and optical link enabling imaging rates of 10 f/s. System noise was measured to be >100e and it was shown that the majority of this noise was fixed pattern in nature. The intrinsic spatial resolution was measured to be ˜80 μm and the system spatial resolution measured with a slit was ˜450 μm. The second sensor, On Pixel Intelligent CMOS (OPIC), had 64×72 40 μm pixels and was used to evaluate noise characteristics and to develop a method of differentiation between fixed pattern and statistical noise. The third sensor, Vanilla, had 520×520 25 μm pixels and a measured system noise of ˜25e. This sensor was coupled directly to the segmented phosphor. Imaging results show that even at this lower level of noise the signal from 140 keV gamma rays is small as the light from the phosphor is spread over a large number of pixels. Suggestions for the 'ideal' sensor are made.
A Chip and Pixel Qualification Methodology on Imaging Sensors
NASA Technical Reports Server (NTRS)
Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank
2004-01-01
This paper presents a qualification methodology on imaging sensors. In addition to overall chip reliability characterization based on sensor s overall figure of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.
NASA Astrophysics Data System (ADS)
Goss, Tristan M.
2016-05-01
With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now with 1280x1024 pixel format IR detector arrays becoming readily available designers of thermal imager systems face new challenges as pixel sizes reduce and the demand and applications for High Definition (HD) thermal imaging sensors increases. In many instances the upgrading of existing under-sampled SD thermal imaging sensors into more optimally sampled or oversampled HD thermal imaging sensors provides a more cost effective and reduced time to market option than to design and develop a completely new sensor. This paper presents the analysis and rationale behind the selection of the best suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher performing HD thermal imaging sensor. Several commercially available and "soon to be" commercially available HD small pixel IR detector options are included as part of the analysis and are considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors are operated at higher temperatures when compared to their SD predecessors. Therefore, as an additional constraint and as a design goal, the feasibility of achieving upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed herein.
CRISM's Global Mapping of Mars, Part 3
NASA Technical Reports Server (NTRS)
2007-01-01
After a year in Mars orbit, CRISM has taken enough images to allow the team to release the first parts of a global spectral map of Mars to the Planetary Data System (PDS), NASA's digital library of planetary data. CRISM's global mapping is called the 'multispectral survey.' The team uses the word 'survey' because a reason for gathering this data set is to search for new sites for targeted observations, high-resolution views of the surface at 18 meters per pixel in 544 colors. Another reason for the multispectral survey is to provide contextual information. Targeted observations have such a large data volume (about 200 megabytes apiece) that only about 1% of Mars can be imaged at CRISM's highest resolution. The multispectral survey is a lower data volume type of observation that fills in the gaps between targeted observations, allowing scientists to better understand their geologic context. The global map is built from tens of thousands of image strips each about 10 kilometers (6.2 miles) wide and thousands of kilometers long. During the multispectral survey, CRISM returns data from only 72 carefully selected wavelengths that cover absorptions indicative of the mineral groups that CRISM is looking for on Mars. Data volume is further decreased by binning image pixels inside the instrument to a scale of about 200 meters (660 feet) per pixel. The total reduction in data volume per square kilometer is a factor of 700, making the multispectral survey manageable to acquire and transmit to Earth. Once on the ground, the strips of data are mosaicked into maps. The multispectral survey is too large to show the whole planet in a single map, so the map is divided into 1,964 'tiles,' each about 300 kilometers (186 miles) across. There are three versions of each tile, processed to progressively greater levels to strip away the obscuring effects of the dusty atmosphere and to highlight mineral variations in surface materials. This is the third and most processed version of tile 750, showing a part of Mars called Tyrrhena Terra in the ancient, heavily cratered highlands. The colored strips are CRISM multispectral survey data acquired over several months, in which each pixel began as calibrated 72-color spectrum of Mars. An experimental correction for illumination and atmospheric effects was applied to the data, to show how Mars' surface would appear if each strip was imaged with the same illumination and without an atmosphere. Then, the spectrum for each pixel was transformed into a set of 'summary parameters,' which indicate absorptions showing the presence of different minerals. Detections of the igneous, iron-bearing minerals olivine and pyroxene are shown in the red and blue image planes, respectively. Clay-like minerals called phyllosilicates, which formed when liquid water altered the igneous rocks, are shown in the green image plane. The gray areas between the strips are from an earlier mosaic of the planet taken by the Thermal Emission Imaging System (THEMIS) instrument on Mars Odyssey, and are included for context. Note that most areas imaged by CRISM contain pyroxene, and that olivine-containing rocks are concentrated on smooth deposits that fill some crater floors and the low areas between craters. Phyllosilicate-containing rocks are concentrated in and around small craters, such as the one at 13 degrees south latitude, 97 degrees east longitude. Their concentration in crater materials suggests that they were excavated when the craters formed, from a layer that was buried by the younger, less altered, olivine- and pyroxene-containing rocks. CRISM is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter and the Mars Science Laboratory for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, built the orbiter.Adaptive single-pixel imaging with aggregated sampling and continuous differential measurements
NASA Astrophysics Data System (ADS)
Huo, Yaoran; He, Hongjie; Chen, Fan; Tai, Heng-Ming
2018-06-01
This paper proposes an adaptive compressive imaging technique with one single-pixel detector and single arm. The aggregated sampling (AS) method enables the reduction of resolutions of the reconstructed images. It aims to reduce the time and space consumption. The target image with a resolution up to 1024 × 1024 can be reconstructed successfully at the 20% sampling rate. The continuous differential measurement (CDM) method combined with a ratio factor of significant coefficient (RFSC) improves the imaging quality. Moreover, RFSC reduces the human intervention in parameter setting. This technique enhances the practicability of single-pixel imaging with the benefits from less time and space consumption, better imaging quality and less human intervention.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakubek, J.; Cejnarova, A.; Platkevic, M.
Single quantum counting pixel detectors of Medipix type are starting to be used in various radiographic applications. Compared to standard devices for digital imaging (such as CCDs or CMOS sensors) they present significant advantages: direct conversion of radiation to electric signal, energy sensitivity, noiseless image integration, unlimited dynamic range, absolute linearity. In this article we describe usage of the pixel device TimePix for image accumulation gated by late trigger signal. Demonstration of the technique is given on imaging coincidence instrumental neutron activation analysis (Imaging CINAA). This method allows one to determine concentration and distribution of certain preselected element in anmore » inspected sample.« less
SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.
Nik, S J; Thing, R S; Watts, R; Meyer, J
2012-06-01
To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-rayimaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region ofmaterial quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m-materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. TheMSE was dominated by the variance for the given material combinations,which demonstrates accurate material quantifications. The BEAMsimulations revealed that the optimization of energy bins was accurate to within 1keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequateprediction of image SNR. The BEAM code system is suitable for generating spectral x- ray imaging simulations. © 2012 American Association of Physicists in Medicine.
NASA Technical Reports Server (NTRS)
2007-01-01
[figure removed for brevity, see original site] [figure removed for brevity, see original site] Subimage #1 Figure 1 Subimage #2 Figure 2 [figure removed for brevity, see original site] [figure removed for brevity, see original site] Anaglyph Figure 3 Subimage #3 Figure 4
At the very beginning of spring in the southern hemisphere on Mars the ground is covered with a seasonal layer of carbon dioxide ice. In this image there are two lanes of undisturbed ice bordered by two lanes peppered with fans of dark dust. When we zoom in to the subimage (figure 1), the fans are seen to be pointed in the same direction, dust carried along by the prevailing wind. The fans seem to emanate from spider-like features. The second subimage (figure 2) zooms in to full HiRISE resolution to reveal the nature of the 'spiders.' The arms are channels carved in the surface, blanketed by the seasonal carbon dioxide ice. The seasonal ice, warmed from below, evaporates and the gas is carried along the channels. Wherever a weak spot is found the gas vents to the top of the seasonal ice, carrying along dust from below. The anaglyph (figure 3) of this spider shows that these channels are deep, deepening and widening as they converge. Spiders like this are often draped over the local topography and often channels get larger as they go uphill. This is consistent with a gas eroding the channels. A different channel morphology is apparent in the lanes not showing fans. In these regions the channels are dense, more like lace, and are not radially organized. The third subimage (figure 4) shows an example of 'lace.' Observation Geometry Image PSP_002532_0935 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on 09-Feb-2007. The complete image is centered at -86.4 degrees latitude, 99.1 degrees East longitude. The range to the target site was 276.1 km (172.6 miles). At this distance the image scale is 55.2 cm/pixel (with 2 x 2 binning) so objects 166 cm across are resolved. The image shown here has been map-projected to 50 cm/pixel. The image was taken at a local Mars time of 04:27 PM and the scene is illuminated from the west with a solar incidence angle of 88 degrees, thus the sun was about 2 degrees above the horizon. At a solar longitude of 181.1 degrees, the season on Mars is Northern Autumn.NASA Astrophysics Data System (ADS)
Leijenaar, Ralph T. H.; Nalbantov, Georgi; Carvalho, Sara; van Elmpt, Wouter J. C.; Troost, Esther G. C.; Boellaard, Ronald; Aerts, Hugo J. W. L.; Gillies, Robert J.; Lambin, Philippe
2015-08-01
FDG-PET-derived textural features describing intra-tumor heterogeneity are increasingly investigated as imaging biomarkers. As part of the process of quantifying heterogeneity, image intensities (SUVs) are typically resampled into a reduced number of discrete bins. We focused on the implications of the manner in which this discretization is implemented. Two methods were evaluated: (1) RD, dividing the SUV range into D equally spaced bins, where the intensity resolution (i.e. bin size) varies per image; and (2) RB, maintaining a constant intensity resolution B. Clinical feasibility was assessed on 35 lung cancer patients, imaged before and in the second week of radiotherapy. Forty-four textural features were determined for different D and B for both imaging time points. Feature values depended on the intensity resolution and out of both assessed methods, RB was shown to allow for a meaningful inter- and intra-patient comparison of feature values. Overall, patients ranked differently according to feature values-which was used as a surrogate for textural feature interpretation-between both discretization methods. Our study shows that the manner of SUV discretization has a crucial effect on the resulting textural features and the interpretation thereof, emphasizing the importance of standardized methodology in tumor texture analysis.
Enhancing the image resolution in a single-pixel sub-THz imaging system based on compressed sensing
NASA Astrophysics Data System (ADS)
Alkus, Umit; Ermeydan, Esra Sengun; Sahin, Asaf Behzat; Cankaya, Ilyas; Altan, Hakan
2018-04-01
Compressed sensing (CS) techniques allow for faster imaging when combined with scan architectures, which typically suffer from speed. This technique when implemented with a subterahertz (sub-THz) single detector scan imaging system provides images whose resolution is only limited by the pixel size of the pattern used to scan the image plane. To overcome this limitation, the image of the target can be oversampled; however, this results in slower imaging rates especially if this is done in two-dimensional across the image plane. We show that by implementing a one-dimensional (1-D) scan of the image plane, a modified approach to CS theory applied with an appropriate reconstruction algorithm allows for successful reconstruction of the reflected oversampled image of a target placed in standoff configuration from the source. The experiments are done in reflection mode configuration where the operating frequency is 93 GHz and the corresponding wavelength is λ = 3.2 mm. To reconstruct the image with fewer samples, CS theory is applied using masks where the pixel size is 5 mm × 5 mm, and each mask covers an image area of 5 cm × 5 cm, meaning that the basic image is resolved as 10 × 10 pixels. To enhance the resolution, the information between two consecutive pixels is used, and oversampling along 1-D coupled with a modification of the masks in CS theory allowed for oversampled images to be reconstructed rapidly in 20 × 20 and 40 × 40 pixel formats. These are then compared using two different reconstruction algorithms, TVAL3 and ℓ1-MAGIC. The performance of these methods is compared for both simulated signals and real signals. It is found that the modified CS theory approach coupled with the TVAL3 reconstruction process, even when scanning along only 1-D, allows for rapid precise reconstruction of the oversampled target.
NASA Astrophysics Data System (ADS)
MacMahon, Heber; Vyborny, Carl; Powell, Gregory; Doi, Kunio; Metz, Charles E.
1984-08-01
In digital radiography the pixel size used determines the potential spatial resolution of the system. The need for spatial resolution varies depending on the subject matter imaged. In many areas, including the chest, the minimum spatial resolution requirements have not been determined. Sarcoidosis is a disease which frequently causes subtle interstitial infiltrates in the lungs. As the initial step in an investigation designed to determine the minimum pixel size required in digital chest radiographic systems, we have studied 1 mm pixel digitized images on patients with early pulmonary sarcoidosis. The results of this preliminary study suggest that neither mild interstitial pulmonary infiltrates nor other abnormalities such as pneumothoraces may be detected reliably with 1 mm pixel digital images.
ACE: Automatic Centroid Extractor for real time target tracking
NASA Technical Reports Server (NTRS)
Cameron, K.; Whitaker, S.; Canaris, J.
1990-01-01
A high performance video image processor has been implemented which is capable of grouping contiguous pixels from a raster scan image into groups and then calculating centroid information for each object in a frame. The algorithm employed to group pixels is very efficient and is guaranteed to work properly for all convex shapes as well as most concave shapes. Processing speeds are adequate for real time processing of video images having a pixel rate of up to 20 million pixels per second. Pixels may be up to 8 bits wide. The processor is designed to interface directly to a transputer serial link communications channel with no additional hardware. The full custom VLSI processor was implemented in a 1.6 mu m CMOS process and measures 7200 mu m on a side.
3-D Spatial Resolution of 350 μm Pitch Pixelated CdZnTe Detectors for Imaging Applications.
Yin, Yongzhi; Chen, Ximeng; Wu, Heyu; Komarov, Sergey; Garson, Alfred; Li, Qiang; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2013-02-01
We are currently investigating the feasibility of using highly pixelated Cadmium Zinc Telluride (CdZnTe) detectors for sub-500 μ m resolution PET imaging applications. A 20 mm × 20 mm × 5 mm CdZnTe substrate was fabricated with 350 μ m pitch pixels (250 μ m anode pixels with 100 μ m gap) and coplanar cathode. Charge sharing among the pixels of a 350 μ m pitch detector was studied using collimated 122 keV and 511 keV gamma ray sources. For a 350 μ m pitch CdZnTe detector, scatter plots of the charge signal of two neighboring pixels clearly show more charge sharing when the collimated beam hits the gap between adjacent pixels. Using collimated Co-57 and Ge-68 sources, we measured the count profiles and estimated the intrinsic spatial resolution of 350 μ m pitch detector biased at -1000 V. Depth of interaction was analyzed based on two methods, i.e., cathode/anode ratio and electron drift time, in both 122 keV and 511 keV measurements. For single-pixel photopeak events, a linear correlation between cathode/anode ratio and electron drift time was shown, which would be useful for estimating the DOI information and preserving image resolution in CdZnTe PET imaging applications.
3-D Spatial Resolution of 350 μm Pitch Pixelated CdZnTe Detectors for Imaging Applications
Yin, Yongzhi; Chen, Ximeng; Wu, Heyu; Komarov, Sergey; Garson, Alfred; Li, Qiang; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan
2016-01-01
We are currently investigating the feasibility of using highly pixelated Cadmium Zinc Telluride (CdZnTe) detectors for sub-500 μm resolution PET imaging applications. A 20 mm × 20 mm × 5 mm CdZnTe substrate was fabricated with 350 μm pitch pixels (250 μm anode pixels with 100 μm gap) and coplanar cathode. Charge sharing among the pixels of a 350 μm pitch detector was studied using collimated 122 keV and 511 keV gamma ray sources. For a 350 μm pitch CdZnTe detector, scatter plots of the charge signal of two neighboring pixels clearly show more charge sharing when the collimated beam hits the gap between adjacent pixels. Using collimated Co-57 and Ge-68 sources, we measured the count profiles and estimated the intrinsic spatial resolution of 350 μm pitch detector biased at −1000 V. Depth of interaction was analyzed based on two methods, i.e., cathode/anode ratio and electron drift time, in both 122 keV and 511 keV measurements. For single-pixel photopeak events, a linear correlation between cathode/anode ratio and electron drift time was shown, which would be useful for estimating the DOI information and preserving image resolution in CdZnTe PET imaging applications. PMID:28250476
NASA Astrophysics Data System (ADS)
Hernandez-Cardoso, G. G.; Alfaro-Gomez, M.; Rojas-Landeros, S. C.; Salas-Gutierrez, I.; Castro-Camus, E.
2018-03-01
In this article, we present a series of hydration mapping images of the foot soles of diabetic and non-diabetic subjects measured by terahertz reflectance. In addition to the hydration images, we present a series of RYG-color-coded (red yellow green) images where pixels are assigned one of the three colors in order to easily identify areas in risk of ulceration. We also present the statistics of the number of pixels with each color as a potential quantitative indicator for diabetic foot-syndrome deterioration.
Microradiography with Semiconductor Pixel Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakubek, Jan; Cejnarova, Andrea; Dammer, Jiri
High resolution radiography (with X-rays, neutrons, heavy charged particles, ...) often exploited also in tomographic mode to provide 3D images stands as a powerful imaging technique for instant and nondestructive visualization of fine internal structure of objects. Novel types of semiconductor single particle counting pixel detectors offer many advantages for radiation imaging: high detection efficiency, energy discrimination or direct energy measurement, noiseless digital integration (counting), high frame rate and virtually unlimited dynamic range. This article shows the application and potential of pixel detectors (such as Medipix2 or TimePix) in different fields of radiation imaging.
Development and use of an L3CCD high-cadence imaging system for Optical Astronomy
NASA Astrophysics Data System (ADS)
Sheehan, Brendan J.; Butler, Raymond F.
2008-02-01
A high cadence imaging system, based on a Low Light Level CCD (L3CCD) camera, has been developed for photometric and polarimetric applications. The camera system is an iXon DV-887 from Andor Technology, which uses a CCD97 L3CCD detector from E2V technologies. This is a back illuminated device, giving it an extended blue response, and has an active area of 512×512 pixels. The camera system allows frame-rates ranging from 30 fps (full frame) to 425 fps (windowed & binned frame). We outline the system design, concentrating on the calibration and control of the L3CCD camera. The L3CCD detector can be either triggered directly by a GPS timeserver/frequency generator or be internally triggered. A central PC remotely controls the camera computer system and timeserver. The data is saved as standard `FITS' files. The large data loads associated with high frame rates, leads to issues with gathering and storing the data effectively. To overcome such problems, a specific data management approach is used, and a Python/PYRAF data reduction pipeline was written for the Linux environment. This uses calibration data collected either on-site, or from lab based measurements, and enables a fast and reliable method for reducing images. To date, the system has been used twice on the 1.5 m Cassini Telescope in Loiano (Italy) we present the reduction methods and observations made.
Iodine contrast cone beam CT imaging of breast cancer
NASA Astrophysics Data System (ADS)
Partain, Larry; Prionas, Stavros; Seppi, Edward; Virshup, Gary; Roos, Gerhard; Sutherland, Robert; Boone, John
2007-03-01
An iodine contrast agent, in conjunction with an X-ray cone beam CT imaging system, was used to clearly image three, biopsy verified, cancer lesions in two patients. The lesions were approximately in the 10 mm to 6 mm diameter range. Additional regions were also enhanced with approximate dimensions down to 1 mm or less in diameter. A flat panel detector, with 194 μm pixels in 2 x 2 binning mode, was used to obtain 500 projection images at 30 fps with an 80 kVp X-ray system operating at 112 mAs, for an 8-9 mGy dose - equivalent to two view mammography for these women. The patients were positioned prone, while the gantry rotated in the horizontal plane around the uncompressed, pendant breasts. This gantry rotated 360 degrees during the patient's 16.6 sec breath hold. A volume of 100 cc of 320 mg/ml iodine-contrast was power injected at 4 cc/sec, via catheter into the arm vein of the patient. The resulting 512 x 512 x 300 cone beam CT data set of Feldkamp reconstructed ~(0.3 mm) 3 voxels were analyzed. An interval of voxel contrast values, characteristic of the regions with iodine contrast enhancement, were used with surface rendering to clearly identify up to a total of 13 highlighted volumes. This included the three largest lesions, that were previously biopsied and confirmed to be malignant. The other ten highlighted regions, of smaller diameters, are likely areas of increased contrast trapping unrelated to cancer angiogenesis. However the technique itself is capable of resolving lesions that small.
NASA Astrophysics Data System (ADS)
Mackler, D. A.; Jahn, J.; Pollock, C. J.
2013-12-01
Plamasheet particles transported Earthward during times of active magnetic convection can interact with thermospheric neutrals through charge exchange. The resulting ENAs are free to leave the influence of the magnetosphere and can be remotely detected. ENAs associated with low altitude (300-800 km) ion precipitation in the high latitude inner mangetosphere are termed Low Altitude Emissions (LAEs). LAEs are highly non-isotropic in velocity space such that the pitch angle distribution at the time of charge exchange is near 90 degrees. The observed Geomagnetic Emission Cone (GEC) of LAEs can be mapped spatially, showing where energy is deposited during storm/sub-storm times. In this study we present a statistical look at the particulate albedo of LAEs over the declining phase of solar cycle 23. The particulate albedo is defined as the ratio of the emitting energetic neutrals to the precipitating ions. The precipitating ion differential directional flux maps are built up from combining NOAA 14/15/16 TED and DMSP 13/14/15 SSJ4 data. Low altitude ENA signatures are identified manually using IMAGE/MENA images and selected out. The geomagnetic location of each pixel representing a LAE source region in the neutral images is computed assuming an altitude of 650 km. Before taking the ratio of the resulting flux of neutrals and ions, the Magnetic Local Time (MLT) and Invariant Latitude (IL) bin sizes are changed such that each has less than 20% error in counting statistics. The particulate albedo maps are then evaluated over changes in geomagnetic storm activity.
Modulation transfer function measurement technique for small-pixel detectors
NASA Technical Reports Server (NTRS)
Marchywka, Mike; Socker, Dennis G.
1992-01-01
A modulation transfer function (MTF) measurement technique suitable for large-format, small-pixel detector characterization has been investigated. A volume interference grating is used as a test image instead of the bar or sine wave target images normally used. This technique permits a high-contrast, large-area, sinusoidal intensity distribution to illuminate the device being tested, avoiding the need to deconvolve raw data with imaging system characteristics. A high-confidence MTF result at spatial frequencies near 200 cycles/mm is obtained. We present results at several visible light wavelengths with a 6.8-micron-pixel CCD. Pixel response functions are derived from the MTF results.
NASA Astrophysics Data System (ADS)
Ackermann, Ulrich; Eschbaumer, Stephan; Bergmaier, Andreas; Egger, Werner; Sperr, Peter; Greubel, Christoph; Löwe, Benjamin; Schotanus, Paul; Dollinger, Günther
2016-07-01
To perform Four Dimensional Age Momentum Correlation measurements in the near future, where one obtains the positron lifetime in coincidence with the three dimensional momentum of the electron annihilating with the positron, we have investigated the time and position resolution of two CeBr3 scintillators (monolithic and an array of pixels) using a Photek IPD340/Q/BI/RS microchannel plate image intensifier. The microchannel plate image intensifier has an active diameter of 40 mm and a stack of two microchannel plates in chevron configuration. The monolithic CeBr3 scintillator was cylindrically shaped with a diameter of 40 mm and a height of 5 mm. The pixelated scintillator array covered the whole active area of the microchannel plate image intensifier and the shape of each pixel was 2.5·2.5·8 mm3 with a pixel pitch of 3.3 mm. For the monolithic setup the measured mean single time resolution was 330 ps (FWHM) at a gamma energy of 511 keV. No significant dependence on the position was detected. The position resolution at the center of the monolithic scintillator was about 2.5 mm (FWHM) at a gamma energy of 662 keV. The single time resolution of the pixelated crystal setup reached 320 ps (FWHM) in the region of the center of the active area of the microchannel plate image intensifier. The position resolution was limited by the cross-section of the pixels. The gamma energy for the pixel setup measurements was 511 keV.
Impulsive noise suppression in color images based on the geodesic digital paths
NASA Astrophysics Data System (ADS)
Smolka, Bogdan; Cyganek, Boguslaw
2015-02-01
In the paper a novel filtering design based on the concept of exploration of the pixel neighborhood by digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in the hybrid spatial-color space. Then, an optimal path of minimum total cost, leading from pixels of the window's boundary to its center is determined. The cost of an optimal path serves as a degree of similarity of the central pixel to the samples from the local processing window. If a pixel is an outlier, then all the paths starting from the window's boundary will have high costs and the minimum one will also be high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed using the information on the minimum cost assigned to each image pixel. So, first the costs of optimal paths are used to build a smoothed image and in the second step the minimum cost of the central pixel is utilized for construction of the weights of a soft-switching scheme. The experiments performed on a set of standard color images, revealed that the efficiency of the proposed algorithm is superior to the state-of-the-art filtering techniques in terms of the objective restoration quality measures, especially for high noise contamination ratios. The proposed filter, due to its low computational complexity, can be applied for real time image denoising and also for the enhancement of video streams.
NASA Technical Reports Server (NTRS)
Skakun, Sergii; Roger, Jean-Claude; Vermote, Eric F.; Masek, Jeffrey G.; Justice, Christopher O.
2017-01-01
This study investigates misregistration issues between Landsat-8/OLI and Sentinel-2A/MSI at 30 m resolution, and between multi-temporal Sentinel-2A images at 10 m resolution using a phase correlation approach and multiple transformation functions. Co-registration of 45 Landsat-8 to Sentinel-2A pairs and 37 Sentinel-2A to Sentinel-2A pairs were analyzed. Phase correlation proved to be a robust approach that allowed us to identify hundreds and thousands of control points on images acquired more than 100 days apart. Overall, misregistration of up to 1.6 pixels at 30 m resolution between Landsat-8 and Sentinel-2A images, and 1.2 pixels and 2.8 pixels at 10 m resolution between multi-temporal Sentinel-2A images from the same and different orbits, respectively, were observed. The non-linear Random Forest regression used for constructing the mapping function showed best results in terms of root mean square error (RMSE), yielding an average RMSE error of 0.07+/-0.02 pixels at 30 m resolution, and 0.09+/-0.05 and 0.15+/-0.06 pixels at 10 m resolution for the same and adjacent Sentinel-2A orbits, respectively, for multiple tiles and multiple conditions. A simpler 1st order polynomial function (affine transformation) yielded RMSE of 0.08+/-0.02 pixels at 30 m resolution and 0.12+/-0.06 (same Sentinel-2A orbits) and 0.20+/-0.09 (adjacent orbits) pixels at 10 m resolution.
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated like it is an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
Analysis of identification of digital images from a map of cosmic microwaves
NASA Astrophysics Data System (ADS)
Skeivalas, J.; Turla, V.; Jurevicius, M.; Viselga, G.
2018-04-01
This paper discusses identification of digital images from the cosmic microwave background radiation map formed according to the data of the European Space Agency "Planck" telescope by applying covariance functions and wavelet theory. The estimates of covariance functions of two digital images or single images are calculated according to the random functions formed of the digital images in the form of pixel vectors. The estimates of pixel vectors are formed on expansion of the pixel arrays of the digital images by a single vector. When the scale of a digital image is varied, the frequencies of single-pixel color waves remain constant and the procedure for calculation of covariance functions is not affected. For identification of the images, the RGB format spectrum has been applied. The impact of RGB spectrum components and the color tensor on the estimates of covariance functions was analyzed. The identity of digital images is assessed according to the changes in the values of the correlation coefficients in a certain range of values by applying the developed computer program.
BIN1 is reduced and Cav1.2 trafficking is impaired in human failing cardiomyocytes.
Hong, Ting-Ting; Smyth, James W; Chu, Kevin Y; Vogan, Jacob M; Fong, Tina S; Jensen, Brian C; Fang, Kun; Halushka, Marc K; Russell, Stuart D; Colecraft, Henry; Hoopes, Charles W; Ocorr, Karen; Chi, Neil C; Shaw, Robin M
2012-05-01
Heart failure is a growing epidemic, and a typical aspect of heart failure pathophysiology is altered calcium transients. Normal cardiac calcium transients are initiated by Cav1.2 channels at cardiac T tubules. Bridging integrator 1 (BIN1) is a membrane scaffolding protein that causes Cav1.2 to traffic to T tubules in healthy hearts. The mechanisms of Cav1.2 trafficking in heart failure are not known. To study BIN1 expression and its effect on Cav1.2 trafficking in failing hearts. Intact myocardium and freshly isolated cardiomyocytes from nonfailing and end-stage failing human hearts were used to study BIN1 expression and Cav1.2 localization. To confirm Cav1.2 surface expression dependence on BIN1, patch-clamp recordings were performed of Cav1.2 current in cell lines with and without trafficking-competent BIN1. Also, in adult mouse cardiomyocytes, surface Cav1.2 and calcium transients were studied after small hairpin RNA-mediated knockdown of BIN1. For a functional readout in intact heart, calcium transients and cardiac contractility were analyzed in a zebrafish model with morpholino-mediated knockdown of BIN1. BIN1 expression is significantly decreased in failing cardiomyocytes at both mRNA (30% down) and protein (36% down) levels. Peripheral Cav1.2 is reduced to 42% by imaging, and a biochemical T-tubule fraction of Cav1.2 is reduced to 68%. The total calcium current is reduced to 41% in a cell line expressing a nontrafficking BIN1 mutant. In mouse cardiomyocytes, BIN1 knockdown decreases surface Cav1.2 and impairs calcium transients. In zebrafish hearts, BIN1 knockdown causes a 75% reduction in calcium transients and severe ventricular contractile dysfunction. The data indicate that BIN1 is significantly reduced in human heart failure, and this reduction impairs Cav1.2 trafficking, calcium transients, and contractility. Copyright © 2012 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
BIN1 is Reduced and Cav1.2 Trafficking is Impaired in Human Failing Cardiomyocytes
Hong, Ting-Ting; Smyth, James W.; Chu, Kevin Y.; Vogan, Jacob M.; Fong, Tina S.; Jensen, Brian C.; Fang, Kun; Halushka, Marc K.; Russell, Stuart D.; Colecraft, Henry; Hoopes, Charles W.; Ocorr, Karen; Chi, Neil C.; Shaw, Robin M.
2011-01-01
Background Heart failure is a growing epidemic and a typical aspect of heart failure pathophysiology is altered calcium transients. Normal cardiac calcium transients are initiated by Cav1.2 channels at cardiac T-tubules. BIN1 is a membrane scaffolding protein that causes Cav1.2 to traffic to T-tubules in healthy hearts. The mechanisms of Cav1.2 trafficking in heart failure are not known. Objective To study BIN1 expression and its effect on Cav1.2 trafficking in failing hearts. Methods Intact myocardium and freshly isolated cardiomyocytes from non-failing and end-stage failing human hearts were used to study BIN1 expression and Cav1.2 localization. To confirm Cav1.2 surface expression dependence on BIN1, patch clamp recordings were performed of Cav1.2 current in cell lines with and without trafficking competent BIN1. Also, in adult mouse cardiomyocytes, surface Cav1.2 and calcium transients were studied after shRNA mediated knockdown of BIN1. For a functional readout in intact heart, calcium transients and cardiac contractility were analyzed in a zebrafish model with morpholino mediated knockdown of BIN1. Results BIN1 expression is significantly decreased in failing cardiomyocytes at both mRNA (30% down) and protein (36% down) levels. Peripheral Cav1.2 is reduced 42% by imaging and biochemical T-tubule fraction of Cav1.2 is reduced 68%. Total calcium current is reduced 41% in a cell line expressing non-trafficking BIN1 mutant. In mouse cardiomyocytes, BIN1 knockdown decreases surface Cav1.2 and impairs calcium transients. In zebrafish hearts, BIN1 knockdown causes a 75% reduction in calcium transients and severe ventricular contractile dysfunction. Conclusions The data indicate that BIN1 is significantly reduced in human heart failure, and this reduction impairs Cav1.2 trafficking, calcium transients, and contractility. PMID:22138472
G-Channel Restoration for RWB CFA with Double-Exposed W Channel
Park, Chulhee; Song, Ki Sun; Kang, Moon Gi
2017-01-01
In this paper, we propose a green (G)-channel restoration for a red–white–blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red–blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, a RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels. PMID:28165425
G-Channel Restoration for RWB CFA with Double-Exposed W Channel.
Park, Chulhee; Song, Ki Sun; Kang, Moon Gi
2017-02-05
In this paper, we propose a green (G)-channel restoration for a red-white-blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red-blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, a RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels.
It's not the pixel count, you fool
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2012-01-01
The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have, for we need as many mega pixels as possible since the other guys are killing us with their "umpteen" mega pixel pocket sized digital cameras. And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel-wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is the automatic motion control to stabilize the image on the sensor along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging and what will drive growth of camera sales (not counting the cell phone cameras which totally dominate the market in terms of camera sales) and more importantly after sales profits? Well sit in on the Dark Side of Color and find out what is being done to increase the after sales profits and don't be surprised if has been done long ago in some basement lab of a photographic company and of course, before its time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Yang, X; Rosenfield, J
Purpose: Metal implants such as orthopedic hardware and dental fillings cause severe bright and dark streaking in reconstructed CT images. These artifacts decrease image contrast and degrade HU accuracy, leading to inaccuracies in target delineation and dose calculation. Additionally, such artifacts negatively impact patient set-up in image guided radiation therapy (IGRT). In this work, we propose a novel method for metal artifact reduction which utilizes the anatomical similarity between neighboring CT slices. Methods: Neighboring CT slices show similar anatomy. Based on this anatomical similarity, the proposed method replaces corrupted CT pixels with pixels from adjacent, artifact-free slices. A gamma map,more » which is the weighted summation of relative HU error and distance error, is calculated for each pixel in the artifact-corrupted CT image. The minimum value in each pixel’s gamma map is used to identify a pixel from the adjacent CT slice to replace the corresponding artifact-corrupted pixel. This replacement only occurs if the minimum value in a particular pixel’s gamma map is larger than a threshold. The proposed method was evaluated with clinical images. Results: Highly attenuating dental fillings and hip implants cause severe streaking artifacts on CT images. The proposed method eliminates the dark and bright streaking and improves the implant delineation and visibility. In particular, the image non-uniformity in the central region of interest was reduced from 1.88 and 1.01 to 0.28 and 0.35, respectively. Further, the mean CT HU error was reduced from 328 HU and 460 HU to 60 HU and 36 HU, respectively. Conclusions: The proposed metal artifact reduction method replaces corrupted image pixels with pixels from neighboring slices that are free of metal artifacts. This method proved capable of suppressing streaking artifacts, improving HU accuracy and image detectability.« less
Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J.; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo
2016-01-01
Purpose: High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. Methods: A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. Results: At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Conclusions: Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications. PMID:27147324
Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo
2016-05-01
High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications.
CMOS Active-Pixel Image Sensor With Simple Floating Gates
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Nakamura, Junichi; Kemeny, Sabrina E.
1996-01-01
Experimental complementary metal-oxide/semiconductor (CMOS) active-pixel image sensor integrated circuit features simple floating-gate structure, with metal-oxide/semiconductor field-effect transistor (MOSFET) as active circuit element in each pixel. Provides flexibility of readout modes, no kTC noise, and relatively simple structure suitable for high-density arrays. Features desirable for "smart sensor" applications.
Providing integrity, authenticity, and confidentiality for header and pixel data of DICOM images.
Al-Haj, Ali
2015-04-01
Exchange of medical images over public networks is subjected to different types of security threats. This has triggered persisting demands for secured telemedicine implementations that will provide confidentiality, authenticity, and integrity for the transmitted images. The medical image exchange standard (DICOM) offers mechanisms to provide confidentiality for the header data of the image but not for the pixel data. On the other hand, it offers mechanisms to achieve authenticity and integrity for the pixel data but not for the header data. In this paper, we propose a crypto-based algorithm that provides confidentially, authenticity, and integrity for the pixel data, as well as for the header data. This is achieved by applying strong cryptographic primitives utilizing internally generated security data, such as encryption keys, hashing codes, and digital signatures. The security data are generated internally from the header and the pixel data, thus a strong bond is established between the DICOM data and the corresponding security data. The proposed algorithm has been evaluated extensively using DICOM images of different modalities. Simulation experiments show that confidentiality, authenticity, and integrity have been achieved as reflected by the results we obtained for normalized correlation, entropy, PSNR, histogram analysis, and robustness.
NASA Astrophysics Data System (ADS)
Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao
2018-01-01
Lensfree holography provides a new way to effectively bypass the intrinsical trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel-size, unpredictable disturbance during image acquisition, and sub-optimum solution to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of {29.85 mm2 and achieve half-pitch lateral resolution of 770 nm, surpassing 2.17 times of the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel-size (1.67 μm). Full-FOV imaging result of a typical dicot root is also provided to demonstrate its promising potential applications in biologic imaging.
IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM
NASA Technical Reports Server (NTRS)
Martin, M. D.
1994-01-01
The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accomodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which can not be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA. Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
NASA Astrophysics Data System (ADS)
Dong, Xue; Yang, Xiaofeng; Rosenfield, Jonathan; Elder, Eric; Dhabaan, Anees
2017-03-01
X-ray computed tomography (CT) is widely used in radiation therapy treatment planning in recent years. However, metal implants such as dental fillings and hip prostheses can cause severe bright and dark streaking artifacts in reconstructed CT images. These artifacts decrease image contrast and degrade HU accuracy, leading to inaccuracies in target delineation and dose calculation. In this work, a metal artifact reduction method is proposed based on the intrinsic anatomical similarity between neighboring CT slices. Neighboring CT slices from the same patient exhibit similar anatomical features. Exploiting this anatomical similarity, a gamma map is calculated as a weighted summation of relative HU error and distance error for each pixel in an artifact-corrupted CT image relative to a neighboring, artifactfree image. The minimum value in the gamma map for each pixel is used to identify an appropriate pixel from the artifact-free CT slice to replace the corresponding artifact-corrupted pixel. With the proposed method, the mean CT HU error was reduced from 360 HU and 460 HU to 24 HU and 34 HU on head and pelvis CT images, respectively. Dose calculation accuracy also improved, as the dose difference was reduced from greater than 20% to less than 4%. Using 3%/3mm criteria, the gamma analysis failure rate was reduced from 23.25% to 0.02%. An image-based metal artifact reduction method is proposed that replaces corrupted image pixels with pixels from neighboring CT slices free of metal artifacts. This method is shown to be capable of suppressing streaking artifacts, thereby improving HU and dose calculation accuracy.
NASA Technical Reports Server (NTRS)
2006-01-01
This HiRISE image covers a portion of a delta that partially fills Eberswalde crater in Margaritifer Sinus. The delta was first recognized and mapped using MOC images that revealed various features whose presence required sustained flow and deposition into a lake that once occupied the crater. The HiRISE image resolves meter-scale features that record the migration of channels and delta distributaries as the delta grew over time. Differences in grain-size of sediments within the environments on the delta enable differential erosion of the deposits. As a result, coarser channel deposits are slightly more resistant and stand in relief relative to finer-grained over-bank and more easily eroded distal delta deposits. Close examination of the relict channel deposits confirms the presence of some meter-size blocks that were likely too coarse to have been transported by water flowing within the channels. These blocks may be formed of the sand and gravel that more likely moved along the channels that was lithified and eroded. Numerous meter-scale polygonal structures are common on many surfaces, but mostly those associated with more quiescent depositional environments removed from the channels. The polygons could be the result of deposition of fine-grained sediments that were either exposed and desiccated (dried out), rich in clays that shrunk when the water was removed, turned into rock and then fractured and eroded, or some combination of these processes. Image PSP_001336_1560 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on November 8, 2006. The complete image is centered at -23.8 degrees latitude, 326.4 degrees East longitude. The range to the target site was 256.3 km (160.2 miles). At this distance the image scale is 25.6 cm/pixel (with 1 x 1 binning) so objects 77 cm across are resolved. The image shown here has been map-projected to 25 cm/pixel and north is up. The image was taken at a local Mars time of 3:35 PM and the scene is illuminated from the west with a solar incidence angle of 67 degrees, thus the sun was about 23 degrees above the horizon. At a solar longitude of 132.4 degrees, the season on Mars is Northern Summer. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace and Technology Corp., Boulder, Colo.Sub-pixel spatial resolution wavefront phase imaging
NASA Technical Reports Server (NTRS)
Stahl, H. Philip (Inventor); Mooney, James T. (Inventor)
2012-01-01
A phase imaging method for an optical wavefront acquires a plurality of phase images of the optical wavefront using a phase imager. Each phase image is unique and is shifted with respect to another of the phase images by a known/controlled amount that is less than the size of the phase imager's pixels. The phase images are then combined to generate a single high-spatial resolution phase image of the optical wavefront.
Crack image segmentation based on improved DBC method
NASA Astrophysics Data System (ADS)
Cao, Ting; Yang, Nan; Wang, Fengping; Gao, Ting; Wang, Weixing
2017-11-01
With the development of computer vision technology, crack detection based on digital image segmentation method arouses global attentions among researchers and transportation ministries. Since the crack always exhibits the random shape and complex texture, it is still a challenge to accomplish reliable crack detection results. Therefore, a novel crack image segmentation method based on fractal DBC (differential box counting) is introduced in this paper. The proposed method can estimate every pixel fractal feature based on neighborhood information which can consider the contribution from all possible direction in the related block. The block moves just one pixel every time so that it could cover all the pixels in the crack image. Unlike the classic DBC method which only describes fractal feature for the related region, this novel method can effectively achieve crack image segmentation according to the fractal feature of every pixel. The experiment proves the proposed method can achieve satisfactory results in crack detection.
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
Fast, Deep-Record-Length, Fiber-Coupled Photodiode Imaging Array for Plasma Diagnostics
NASA Astrophysics Data System (ADS)
Brockington, Samuel; Case, Andrew; Witherspoon, F. Douglas
2014-10-01
HyperV Technologies has been developing an imaging diagnostic comprised of an array of fast, low-cost, long-record-length, fiber-optically-coupled photodiode channels to investigate plasma dynamics and other fast, bright events. By coupling an imaging fiber bundle to a bank of amplified photodiode channels, imagers and streak imagers of 100 to 1000 pixels can be constructed. By interfacing analog photodiode systems directly to commercial analog-to-digital converters and modern memory chips, a prototype 100 pixel array with an extremely deep record length (128 k points at 20 Msamples/s) and 10 bit pixel resolution has already been achieved. HyperV now seeks to extend these techniques to construct a prototype 1000 Pixel framing camera with up to 100 Msamples/sec rate and 10 to 12 bit depth. Preliminary experimental results as well as Phase 2 plans will be discussed. Work supported by USDOE Phase 2 SBIR Grant DE-SC0009492.
2017-06-26
Various researchers are often pre-occupied with the quest for flowing water on Mars. However, this image from NASA's Mars Reconnaissance Orbiter (MRO), shows one of the many examples from Mars where lava (when it was molten) behaved in a similar fashion to liquid water. The northern rim of a 30-kilometer diameter crater situated in the western part of the Tharsis volcanic province is shown. The image shows that a lava flow coming from the north-northeast surrounded the crater rim, and rose to such levels that it breached the crater rim at four locations to produce spectacular multi-level lava falls (one in the northwest and three in the north). These lava "falls" cascaded down the wall and terraces of the crater to produce a quasi-circular flow deposit. It seems that the flows were insufficient to fill or even cover the pre-existing deposits of the crater floor. This is evidenced by the darker-toned lavas that overlie the older, and possibly dustier, lighter-toned deposits on the crater floor. This image covers the three falls in the north-central region of the crater wall. The lava flows and falls are distinct as they are rougher than the original features that are smooth and knobby. In a close-up image the rough-textured lava flow to the north has breached the crater wall at a narrow point, where it then cascades downwards, fanning out and draping the steeper slopes of the wall in the process. Image scale is 54.5 centimeters (21.5 inches) per pixel (with 2 x 2 binning); objects on the order of 164 centimeters (64.6 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21763
Programmable remapper for image processing
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)
1991-01-01
A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
Novel spectral imaging system combining spectroscopy with imaging applications for biology
NASA Astrophysics Data System (ADS)
Malik, Zvi; Cabib, Dario; Buckwald, Robert A.; Garini, Yuval; Soenksen, Dirk G.
1995-02-01
A novel analytical spectral-imaging system and its results in the examination of biological specimens are presented. The SpectraCube 1000 system measures the transmission, absorbance, or fluorescence spectra of images studied by light microscopy. The system is based on an interferometer combined with a CCD camera, enabling measurement of the interferogram for each pixel constructing the image. Fourier transformation of the interferograms derives pixel by pixel spectra for 170 X 170 pixels of the image. A special `similarity mapping' program has been developed, enabling comparisons of spectral algorithms of all the spatial and spectral information measured by the system in the image. By comparing the spectrum of each pixel in the specimen with a selected reference spectrum (similarity mapping), there is a depiction of the spatial distribution of macromolecules possessing the characteristics of the reference spectrum. The system has been applied to analyses of bone marrow blood cells as well as fluorescent specimens, and has revealed information which could not be unveiled by other techniques. Similarity mapping has enabled visualization of fine details of chromatin packing in the nucleus of cells and other cytoplasmic compartments. Fluorescence analysis by the system has enabled the determination of porphyrin concentrations and distribution in cytoplasmic organelles of living cells.
Single Pixel Black Phosphorus Photodetector for Near-Infrared Imaging.
Miao, Jinshui; Song, Bo; Xu, Zhihao; Cai, Le; Zhang, Suoming; Dong, Lixin; Wang, Chuan
2018-01-01
Infrared imaging systems have wide range of military or civil applications and 2D nanomaterials have recently emerged as potential sensing materials that may outperform conventional ones such as HgCdTe, InGaAs, and InSb. As an example, 2D black phosphorus (BP) thin film has a thickness-dependent direct bandgap with low shot noise and noncryogenic operation for visible to mid-infrared photodetection. In this paper, the use of a single-pixel photodetector made with few-layer BP thin film for near-infrared imaging applications is demonstrated. The imaging is achieved by combining the photodetector with a digital micromirror device to encode and subsequently reconstruct the image based on compressive sensing algorithm. Stationary images of a near-infrared laser spot (λ = 830 nm) with up to 64 × 64 pixels are captured using this single-pixel BP camera with 2000 times of measurements, which is only half of the total number of pixels. The imaging platform demonstrated in this work circumvents the grand challenges of scalable BP material growth for photodetector array fabrication and shows the efficacy of utilizing the outstanding performance of BP photodetector for future high-speed infrared camera applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sub-pixel mapping of hyperspectral imagery using super-resolution
NASA Astrophysics Data System (ADS)
Sharma, Shreya; Sharma, Shakti; Buddhiraju, Krishna M.
2016-04-01
With the development of remote sensing technologies, it has become possible to obtain an overview of landscape elements which helps in studying the changes on earth's surface due to climate, geological, geomorphological and human activities. Remote sensing measures the electromagnetic radiations from the earth's surface and match the spectral similarity between the observed signature and the known standard signatures of the various targets. However, problem lies when image classification techniques assume pixels to be pure. In hyperspectral imagery, images have high spectral resolution but poor spatial resolution. Therefore, the spectra obtained is often contaminated due to the presence of mixed pixels and causes misclassification. To utilise this high spectral information, spatial resolution has to be enhanced. Many factors make the spatial resolution one of the most expensive and hardest to improve in imaging systems. To solve this problem, post-processing of hyperspectral images is done to retrieve more information from the already acquired images. The algorithm to enhance spatial resolution of the images by dividing them into sub-pixels is known as super-resolution and several researches have been done in this domain.In this paper, we propose a new method for super-resolution based on ant colony optimization and review the popular methods of sub-pixel mapping of hyperspectral images along with their comparative analysis.
Cascaded image analysis for dynamic crack detection in material testing
NASA Astrophysics Data System (ADS)
Hampel, U.; Maas, H.-G.
Concrete probes in civil engineering material testing often show fissures or hairline-cracks. These cracks develop dynamically. Starting at a width of a few microns, they usually cannot be detected visually or in an image of a camera imaging the whole probe. Conventional image analysis techniques will detect fissures only if they show a width in the order of one pixel. To be able to detect and measure fissures with a width of a fraction of a pixel at an early stage of their development, a cascaded image analysis approach has been developed, implemented and tested. The basic idea of the approach is to detect discontinuities in dense surface deformation vector fields. These deformation vector fields between consecutive stereo image pairs, which are generated by cross correlation or least squares matching, show a precision in the order of 1/50 pixel. Hairline-cracks can be detected and measured by applying edge detection techniques such as a Sobel operator to the results of the image matching process. Cracks will show up as linear discontinuities in the deformation vector field and can be vectorized by edge chaining. In practical tests of the method, cracks with a width of 1/20 pixel could be detected, and their width could be determined at a precision of 1/50 pixel.
Point spread function based classification of regions for linear digital tomosynthesis
NASA Astrophysics Data System (ADS)
Israni, Kenny; Avinash, Gopal; Li, Baojun
2007-03-01
In digital tomosynthesis, one of the limitations is the presence of out-of-plane blur due to the limited angle acquisition. The point spread function (PSF) characterizes blur in the imaging volume, and is shift-variant in tomosynthesis. The purpose of this research is to classify the tomosynthesis imaging volume into four different categories based on PSF-driven focus criteria. We considered linear tomosynthesis geometry and simple back projection algorithm for reconstruction. The three-dimensional PSF at every pixel in the imaging volume was determined. Intensity profiles were computed for every pixel by integrating the PSF-weighted intensities contained within the line segment defined by the PSF, at each slice. Classification rules based on these intensity profiles were used to categorize image regions. At background and low-frequency pixels, the derived intensity profiles were flat curves with relatively low and high maximum intensities respectively. At in-focus pixels, the maximum intensity of the profiles coincided with the PSF-weighted intensity of the pixel. At out-of-focus pixels, the PSF-weighted intensity of the pixel was always less than the maximum intensity of the profile. We validated our method using human observer classified regions as gold standard. Based on the computed and manual classifications, the mean sensitivity and specificity of the algorithm were 77+/-8.44% and 91+/-4.13% respectively (t=-0.64, p=0.56, DF=4). Such a classification algorithm may assist in mitigating out-of-focus blur from tomosynthesis image slices.
High-speed massively parallel scanning
Decker, Derek E [Byron, CA
2010-07-06
A new technique for recording a series of images of a high-speed event (such as, but not limited to: ballistics, explosives, laser induced changes in materials, etc.) is presented. Such technique(s) makes use of a lenslet array to take image picture elements (pixels) and concentrate light from each pixel into a spot that is much smaller than the pixel. This array of spots illuminates a detector region (e.g., film, as one embodiment) which is scanned transverse to the light, creating tracks of exposed regions. Each track is a time history of the light intensity for a single pixel. By appropriately configuring the array of concentrated spots with respect to the scanning direction of the detection material, different tracks fit between pixels and sufficient lengths are possible which can be of interest in several high-speed imaging applications.
CMOS Active-Pixel Image Sensor With Intensity-Driven Readout
NASA Technical Reports Server (NTRS)
Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina
1996-01-01
Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.
Biological tissue imaging with a position and time sensitive pixelated detector.
Jungmann, Julia H; Smith, Donald F; MacAleese, Luke; Klinkert, Ivo; Visser, Jan; Heeren, Ron M A
2012-10-01
We demonstrate the capabilities of a highly parallel, active pixel detector for large-area, mass spectrometric imaging of biological tissue sections. A bare Timepix assembly (512 × 512 pixels) is combined with chevron microchannel plates on an ion microscope matrix-assisted laser desorption time-of-flight mass spectrometer (MALDI TOF-MS). The detector assembly registers position- and time-resolved images of multiple m/z species in every measurement frame. We prove the applicability of the detection system to biomolecular mass spectrometry imaging on biologically relevant samples by mass-resolved images from Timepix measurements of a peptide-grid benchmark sample and mouse testis tissue slices. Mass-spectral and localization information of analytes at physiologic concentrations are measured in MALDI-TOF-MS imaging experiments. We show a high spatial resolution (pixel size down to 740 × 740 nm(2) on the sample surface) and a spatial resolving power of 6 μm with a microscope mode laser field of view of 100-335 μm. Automated, large-area imaging is demonstrated and the Timepix' potential for fast, large-area image acquisition is highlighted.
Hardware Implementation of a Bilateral Subtraction Filter
NASA Technical Reports Server (NTRS)
Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven
2009-01-01
A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way even on computers containing the fastest processors are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine- vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9 9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9 9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) An image pixel pipeline with a 9 9- pixel window generator, b) An array of processing elements; c) An adder tree; d) A smoothing-and-delaying unit; and e) A subtraction unit. After each 9 9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for its position in the window as well as the pixel value for the central pixel of the window. The absolute difference between these two pixel values is calculated and used as an address in a lookup table. Each processing element has a lookup table, unique for its position in the window, containing the weight coefficients for the Gaussian function for that position. The pixel value is multiplied by the weight, and the outputs of the processing element are the weight and pixel-value weight product. The products and weights are fed to the adder tree. The sum of the products and the sum of the weights are fed to the divider, which computes the sum of products the sum of weights. The output of the divider is denoted the bilateral smoothed image. The smoothing function is a simple weighted average computed over a 3 3 subwindow centered in the 9 9 window. After smoothing, the image is delayed by an additional amount of time needed to match the processing time for computing the bilateral smoothed image. The bilateral smoothed image is then subtracted from the 3 3 smoothed image to produce the final output. The prototype filter as implemented in a commercially available FPGA processes one pixel per clock cycle. Operation at a clock speed of 66 MHz has been demonstrated, and results of a static timing analysis have been interpreted as suggesting that the clock speed could be increased to as much as 100 MHz.
On-Orbit Solar Dynamics Observatory (SDO) Star Tracker Warm Pixel Analysis
NASA Technical Reports Server (NTRS)
Felikson, Denis; Ekinci, Matthew; Hashmall, Joseph A.; Vess, Melissa
2011-01-01
This paper describes the process of identification and analysis of warm pixels in two autonomous star trackers on the Solar Dynamics Observatory (SDO) mission. A brief description of the mission orbit and attitude regimes is discussed and pertinent star tracker hardware specifications are given. Warm pixels are defined and the Quality Index parameter is introduced, which can be explained qualitatively as a manifestation of a possible warm pixel event. A description of the algorithm used to identify warm pixel candidates is given. Finally, analysis of dumps of on-orbit star tracker charge coupled devices (CCD) images is presented and an operational plan going forward is discussed. SDO, launched on February 11, 2010, is operated from the NASA Goddard Space Flight Center (GSFC). SDO is in a geosynchronous orbit with a 28.5 inclination. The nominal mission attitude points the spacecraft X-axis at the Sun, with the spacecraft Z-axis roughly aligned with the Solar North Pole. The spacecraft Y-axis completes the triad. In attitude, SDO moves approximately 0.04 per hour, mostly about the spacecraft Z-axis. The SDO star trackers, manufactured by Galileo Avionica, project the images of stars in their 16.4deg x 16.4deg fields-of-view onto CCD detectors consisting of 512 x 512 pixels. The trackers autonomously identify the star patterns and provide an attitude estimate. Each unit is able to track up to 9 stars. Additionally, each tracker calculates a parameter called the Quality Index, which is a measure of the quality of the attitude solution. Each pixel in the CCD measures the intensity of light and a warns pixel is defined as having a measurement consistently and significantly higher than the mean background intensity level. A warns pixel should also have lower intensity than a pixel containing a star image and will not move across the field of view as the attitude changes (as would a dim star image). It should be noted that the maximum error introduced in the star tracker attitude solution during suspected warm pixel corruptions is within the specified 36 attitude error budget requirement of [35, 70, 70] arcseconds. Thus, the star trackers provided attitude accuracy within the specification for SDO. The star tracker images are intentionally defocused so each star image is detected in more than one CCD pixel. The position of each star is calculated as an intensity-weighted average of the illuminated pixels. The exact method of finding the positions is proprietary to the tracker manufacturer. When a warm pixel happens to be in the vicinity of a star, it can corrupt the calculation of the position of that particular star, thereby corrupting the estimate of the attitude.
Seo, Min-Woong; Kawahito, Shoji
2017-12-01
A large full well capacity (FWC) for wide signal detection range and low temporal random noise for high sensitivity lock-in pixel CMOS image sensor (CIS) embedded with two in-pixel storage diodes (SDs) has been developed and presented in this paper. For fast charge transfer from photodiode to SDs, a lateral electric field charge modulator (LEFM) is used for the developed lock-in pixel. As a result, the time-resolved CIS achieves a very large SD-FWC of approximately 7ke-, low temporal random noise of 1.2e-rms at 20 fps with true correlated double sampling operation and fast intrinsic response less than 500 ps at 635 nm. The proposed imager has an effective pixel array of and a pixel size of . The sensor chip is fabricated by Dongbu HiTek 1P4M 0.11 CIS process.
Active-Pixel Image Sensor With Analog-To-Digital Converters
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.
1995-01-01
Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at time; during time allocated to scanning row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.
NASA Astrophysics Data System (ADS)
Plimley, Brian; Coffer, Amy; Zhang, Yigong; Vetter, Kai
2016-08-01
Previously, scientific silicon charge-coupled devices (CCDs) with 10.5-μm pixel pitch and a thick (650 μm), fully depleted bulk have been used to measure gamma-ray-induced fast electrons and demonstrate electron track Compton imaging. A model of the response of this CCD was also developed and benchmarked to experiment using Monte Carlo electron tracks. We now examine the trade-off in pixel pitch and electronic noise. We extend our CCD response model to different pixel pitch and readout noise per pixel, including pixel pitch of 2.5 μm, 5 μm, 10.5 μm, 20 μm, and 40 μm, and readout noise from 0 eV/pixel to 2 keV/pixel for 10.5 μm pixel pitch. The CCD images generated by this model using simulated electron tracks are processed by our trajectory reconstruction algorithm. The performance of the reconstruction algorithm defines the expected angular sensitivity as a function of electron energy, CCD pixel pitch, and readout noise per pixel. Results show that our existing pixel pitch of 10.5 μm is near optimal for our approach, because smaller pixels add little new information but are subject to greater statistical noise. In addition, we measured the readout noise per pixel for two different device temperatures in order to estimate the effect of temperature on the reconstruction algorithm performance, although the readout is not optimized for higher temperatures. The noise in our device at 240 K increases the FWHM of angular measurement error by no more than a factor of 2, from 26° to 49° FWHM for electrons between 425 keV and 480 keV. Therefore, a CCD could be used for electron-track-based imaging in a Peltier-cooled device.
USDA-ARS?s Scientific Manuscript database
Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...
Text image authenticating algorithm based on MD5-hash function and Henon map
NASA Astrophysics Data System (ADS)
Wei, Jinqiao; Wang, Ying; Ma, Xiaoxue
2017-07-01
In order to cater to the evidentiary requirements of the text image, this paper proposes a fragile watermarking algorithm based on Hash function and Henon map. The algorithm is to divide a text image into parts, get flippable pixels and nonflippable pixels of every lump according to PSD, generate watermark of non-flippable pixels with MD5-Hash, encrypt watermark with Henon map and select embedded blocks. The simulation results show that the algorithm with a good ability in tampering localization can be used to authenticate and forensics the authenticity and integrity of text images
Pixel-based meshfree modelling of skeletal muscles.
Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu
2016-01-01
This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for construction of simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transition. A multiphase multichannel level set based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods for modeling the human lower leg is demonstrated.
Geometrical superresolved imaging using nonperiodic spatial masking.
Borkowski, Amikam; Zalevsky, Zeev; Javidi, Bahram
2009-03-01
The resolution of every imaging system is limited either by the F-number of its optics or by the geometry of its detection array. The geometrical limitation is caused by lack of spatial sampling points as well as by the shape of every sampling pixel that generates spectral low-pass filtering. We present a novel approach to overcome the low-pass filtering that is due to the shape of the sampling pixels. The approach combines special algorithms together with spatial masking placed in the intermediate image plane and eventually allows geometrical superresolved imaging without relation to the actual shape of the pixels.
A method of object recognition for single pixel imaging
NASA Astrophysics Data System (ADS)
Li, Boxuan; Zhang, Wenwen
2018-01-01
Computational ghost imaging(CGI), utilizing a single-pixel detector, has been extensively used in many fields. However, in order to achieve a high-quality reconstructed image, a large number of iterations are needed, which limits the flexibility of using CGI in practical situations, especially in the field of object recognition. In this paper, we purpose a method utilizing the feature matching to identify the number objects. In the given system, approximately 90% of accuracy of recognition rates can be achieved, which provides a new idea for the application of single pixel imaging in the field of object recognition
An Investigation into the Spectral Imaging of Hall Thruster Plumes
2015-07-01
imaging experiment. It employs a Kodak KAF-3200E 3 megapixel CCD (2184×1472 with 6.8 µm pixels). The camera was designed for astronomical imaging and thus...19 mml 14c--7_0_m_m_~•... ,. ,. 50 mm I· ·I ,. 41 mm I Kodak KAF- 3200E ceo 2184 x 1472 px 14.9 x 10.0 mm 6.8 x 6.8J..Lm pixel size SBIG ST...It employs a Kodak KAF-3200E 3 megapixel CCD (2184×1472 with 6.8 µm pixels). The camera was designed for astronomical imaging and thus long exposure
High-speed on-chip windowed centroiding using photodiode-based CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)
2003-01-01
A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the image array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of pixel signals the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
High-speed on-chip windowed centroiding using photodiode-based CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)
2004-01-01
A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the image array. The plurality of computation elements operates to compute inner products for at least x and y centroids. The plurality of computation elements has only passive elements to provide inner products of pixel signals the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
NASA Astrophysics Data System (ADS)
Seo, Hokuto; Aihara, Satoshi; Watabe, Toshihisa; Ohtake, Hiroshi; Sakai, Toshikatsu; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Hirao, Takashi
2011-02-01
A color image was produced by a vertically stacked image sensor with blue (B)-, green (G)-, and red (R)-sensitive organic photoconductive films, each having a thin-film transistor (TFT) array that uses a zinc oxide (ZnO) channel to read out the signal generated in each organic film. The number of the pixels of the fabricated image sensor is 128×96 for each color, and the pixel size is 100×100 µm2. The current on/off ratio of the ZnO TFT is over 106, and the B-, G-, and R-sensitive organic photoconductive films show excellent wavelength selectivity. The stacked image sensor can produce a color image at 10 frames per second with a resolution corresponding to the pixel number. This result clearly shows that color separation is achieved without using any conventional color separation optical system such as a color filter array or a prism.
1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors.
Lu, Guo-Neng; Tournier, Arnaud; Roy, François; Deschamps, Benoît
2009-01-01
We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as photo-sensing device and source-follower transistor, and can be controlled to store and evacuate charges. Our investigation into this 1T pixel structure includes modeling to obtain analytical description of conversion gain. Model validation has been done by comparing theoretical predictions and experimental results. On the other hand, the 1T pixel structure has been implemented in different configurations, including rectangular-gate and ring-gate designs, and variations of oxidation parameters for the fabrication process. The pixel characteristics are presented and discussed.
A 45 nm Stacked CMOS Image Sensor Process Technology for Submicron Pixel.
Takahashi, Seiji; Huang, Yi-Min; Sze, Jhy-Jyi; Wu, Tung-Ting; Guo, Fu-Sheng; Hsu, Wei-Cheng; Tseng, Tung-Hsiung; Liao, King; Kuo, Chin-Chia; Chen, Tzu-Hsiang; Chiang, Wei-Chieh; Chuang, Chun-Hao; Chou, Keng-Yu; Chung, Chi-Hsien; Chou, Kuo-Yu; Tseng, Chien-Hsien; Wang, Chuan-Joung; Yaung, Dun-Nien
2017-12-05
A submicron pixel's light and dark performance were studied by experiment and simulation. An advanced node technology incorporated with a stacked CMOS image sensor (CIS) is promising in that it may enhance performance. In this work, we demonstrated a low dark current of 3.2 e - /s at 60 °C, an ultra-low read noise of 0.90 e - ·rms, a high full well capacity (FWC) of 4100 e - , and blooming of 0.5% in 0.9 μm pixels with a pixel supply voltage of 2.8 V. In addition, the simulation study result of 0.8 μm pixels is discussed.
Observing Bridge Dynamic Deflection in Green Time by Information Technology
NASA Astrophysics Data System (ADS)
Yu, Chengxin; Zhang, Guojian; Zhao, Yongqian; Chen, Mingzhi
2018-01-01
As traditional surveying methods are limited to observe bridge dynamic deflection; information technology is adopted to observe bridge dynamic deflection in Green time. Information technology used in this study means that we use digital cameras to photograph the bridge in red time as a zero image. Then, a series of successive images are photographed in green time. Deformation point targets are identified and located by Hough transform. With reference to the control points, the deformation values of these deformation points are obtained by differencing the successive images with a zero image, respectively. Results show that the average measurement accuracies of C0 are 0.46 pixels, 0.51 pixels and 0.74 pixels in X, Z and comprehensive direction. The average measurement accuracies of C1 are 0.43 pixels, 0.43 pixels and 0.67 pixels in X, Z and comprehensive direction in these tests. The maximal bridge deflection is 44.16mm, which is less than 75mm (Bridge deflection tolerance value). Information technology in this paper can monitor bridge dynamic deflection and depict deflection trend curves of the bridge in real time. It can provide data support for the site decisions to the bridge structure safety.
NASA Astrophysics Data System (ADS)
Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.
2007-09-01
When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
A 256×256 low-light-level CMOS imaging sensor with digital CDS
NASA Astrophysics Data System (ADS)
Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin
2016-10-01
In order to achieve high sensitivity for low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. As the pixel and the column area are highly constrained, it is difficult to achieve analog correlated double sampling (CDS) to remove the noise for low-light-level CIS. So a digital CDS is adopted, which realizes the subtraction algorithm between the reset signal and pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can be greatly reduced. A 256×256 CIS with CTIA array and digital CDS is implemented in the 0.35μm CMOS technology. The chip size is 7.7mm×6.75mm, and the pixel size is 15μm×15μm with a fill factor of 20.6%. The measured pixel noise is 24LSB with digital CDS in RMS value at dark condition, which shows 7.8× reduction compared to the image sensor without digital CDS. Running at 7fps, this low-light-level CIS can capture recognizable images with the illumination down to 0.1lux.
Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong
2017-06-01
The radiographic testing (RT) image of a steam turbine manufacturing enterprise has the characteristics of low gray level, low contrast, and blurriness, which lead to a substandard image quality. Moreover, it is not conducive for human eyes to detect and evaluate defects. This study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels to solve these problems. First, the pixel's self-transformation is performed to the pixel value of the original RT image. The function value after the pixel's self-transformation is assigned to the HSI components in the HSI color space. Thereafter, the average intensity of the enhanced image is adaptively adjusted to 0.5 according to the intensity of the original image. Moreover, the hue range and interval can be adjusted according to personal habits. Finally, the HSI components after the adaptive adjustment can be transformed to display in the red, green, and blue color space. Numerous weld radiographic images from a steam turbine manufacturing enterprise are used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method can improve image definition and make the target and background areas distinct in weld radiographic images. The enhanced images will be more conducive for defect recognition. Moreover, the image enhanced using the proposed method conforms to the human eye visual properties, and the effectiveness of defect recognition and evaluation can be ensured.
NASA Astrophysics Data System (ADS)
Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong
2017-06-01
The radiographic testing (RT) image of a steam turbine manufacturing enterprise has the characteristics of low gray level, low contrast, and blurriness, which lead to a substandard image quality. Moreover, it is not conducive for human eyes to detect and evaluate defects. This study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels to solve these problems. First, the pixel's self-transformation is performed to the pixel value of the original RT image. The function value after the pixel's self-transformation is assigned to the HSI components in the HSI color space. Thereafter, the average intensity of the enhanced image is adaptively adjusted to 0.5 according to the intensity of the original image. Moreover, the hue range and interval can be adjusted according to personal habits. Finally, the HSI components after the adaptive adjustment can be transformed to display in the red, green, and blue color space. Numerous weld radiographic images from a steam turbine manufacturing enterprise are used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method can improve image definition and make the target and background areas distinct in weld radiographic images. The enhanced images will be more conducive for defect recognition. Moreover, the image enhanced using the proposed method conforms to the human eye visual properties, and the effectiveness of defect recognition and evaluation can be ensured.
Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.
Saadia, Ayesha; Rashdi, Adnan
2016-12-01
Ultrasound is widely used for imaging due to its cost effectiveness and safety feature. However, ultrasound images are inherently corrupted with speckle noise which severely affects the quality of these images and create difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two stage methodology using fuzzy weighted mean and fractional integration filter has been proposed in this research work. In stage-1, image pixels are processed by applying a 3 × 3 window around each pixel and fuzzy logic is used to assign weights to the pixels in each window, replacing central pixel of the window with weighted mean of all neighboring pixels present in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of an image. In stage-2, the resultant image is further improved by fractional order integration filter. Effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and Improved Fractional Order Differential (IFD) model. Comparison has been done on quantitative and qualitative basis. For quantitative analysis different metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results of artificially corrupted standard test images and two real Echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising of Echocardiographic images is effective in noise suppression/removal. It not only removes noise from an image but also preserves edges and other important structure. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kagawa, Keiichiro; Furumiya, Tetsuo; Ng, David C.; Uehara, Akihiro; Ohta, Jun; Nunoshita, Masahiro
2004-06-01
We are exploring the application of pulse-frequency-modulation (PFM) photosensor to retinal prosthesis for the blind because behavior of PFM photosensors is similar to retinal ganglion cells, from which visual data are transmitted from the retina toward the brain. We have developed retinal-prosthesis vision chips that reshape the output pulses of the PFM photosensor to biphasic current pulses suitable for electric stimulation of retinal cells. In this paper, we introduce image-processing functions to the pixel circuits. We have designed a 16x16-pixel retinal-prosthesis vision chip with several kinds of in-pixel digital image processing such as edge enhancement, edge detection, and low-pass filtering. This chip is a prototype demonstrator of the retinal prosthesis vision chip applicable to in-vitro experiments. By utilizing the feature of PFM photosensor, we propose a new scheme to implement the above image processing in a frequency domain by digital circuitry. Intensity of incident light is converted to a 1-bit data stream by a PFM photosensor, and then image processing is executed by a 1-bit image processor based on joint and annihilation of pulses. The retinal prosthesis vision chip is composed of four blocks: a pixels array block, a row-parallel stimulation current amplifiers array block, a decoder block, and a base current generators block. All blocks except PFM photosensors and stimulation current amplifiers are embodied as digital circuitry. This fact contributes to robustness against noises and fluctuation of power lines. With our vision chip, we can control photosensitivity and intensity and durations of stimulus biphasic currents, which are necessary for retinal prosthesis vision chip. The designed dynamic range is more than 100 dB. The amplitude of the stimulus current is given by a base current, which is common for all pixels, multiplied by a value in an amplitude memory of pixel. Base currents of the negative and positive pulses are common for the all pixels, and they are set in a linear manner. Otherwise, the value in the amplitude memory of the pixel is presented in an exponential manner to cover the wide range. The stimulus currents are put out column by column by scanning. The pixel size is 240um x 240um. Each pixel has a bonding pad on which stimulus electrode is to be formed. We will show the experimental results of the test chip.
VizieR Online Data Catalog: Equivalent widths and atomic data for GCs (Lamb+, 2015)
NASA Astrophysics Data System (ADS)
Lamb, M. P.; Venn, K. A.; Shetrone, M. D.; Sakari, C. M.; Pritzl, B. J.
2017-11-01
Optical spectra were gathered with the High Resolution Spectrograph (HRS; Tull 1998, Proc. SPIE, 3355, 387) on the HET. The HRS was configured at resolution R=30000 with 2x2 pixel binning using the 2 arcsec fibre. The HRS splits the incoming beam on to two CCD chips, from which the spectral regions 6000-7000 Å (red chip) and 4800-5900 Å (blue chip) were extracted for this work. Two standard stars were also observed, RGB stars with previously published spectral analyses in each of the GCs M3 and M13. (2 data files).
NASA Tech Briefs, November 2005
NASA Technical Reports Server (NTRS)
2005-01-01
Topics covered include: Laser System for Precise, Unambiguous Range Measurements; Flexible Cryogenic Temperature and Liquid-Level Probes; Precision Cryogenic Dilatometer; Stroboscopic Interferometer for Measuring Mirror Vibrations; Some Improvements in H-PDLCs; Multiple-Bit Differential Detection of OQPSK; Absolute Position Encoders With Vertical Image Binning; Flexible, Carbon-Based Ohmic Contacts for Organic Transistors; GaAs QWIP Array Containing More Than a Million Pixels; AutoChem; Virtual Machine Language; Two-Dimensional Ffowcs Williams/Hawkings Equation Solver; Full Multigrid Flow Solver; Doclet To Synthesize UML; Computing Thermal Effects of Cavitation in Cryogenic Liquids; GUI for Computational Simulation of a Propellant Mixer; Control Program for an Optical-Calibration Robot; SQL-RAMS; Distributing Data from Desktop to Hand-Held Computers; Best-Fit Conic Approximation of Spacecraft Trajectory; Improved Charge-Transfer Fluorescent Dyes; Stability-Augmentation Devices for Miniature Aircraft; Tool Measures Depths of Defects on a Case Tang Joint; Two Heat-Transfer Improvements for Gas Liquefiers; Controlling Force and Depth in Friction Stir Welding; Spill-Resistant Alkali-Metal-Vapor Dispenser; A Methodology for Quantifying Certain Design Requirements During the Design Phase; Measuring Two Key Parameters of H3 Color Centers in Diamond; Improved Compression of Wavelet-Transformed Images; NASA Interactive Forms Type Interface - NIFTI; Predicting Numbers of Problems in Development of Software; Hot-Electron Photon Counters for Detecting Terahertz Photons; Magnetic Variations Associated With Solar Flares; and Artificial Intelligence for Controlling Robotic Aircraft.
X-Ray Computed Tomography Monitors Damage in Composites
NASA Technical Reports Server (NTRS)
Baaklini, George Y.
1997-01-01
The NASA Lewis Research Center recently codeveloped a state-of-the-art x-ray CT facility (designated SMS SMARTSCAN model 100-112 CITA by Scientific Measurement Systems, Inc., Austin, Texas). This multipurpose, modularized, digital x-ray facility includes an imaging system for digital radiography, CT, and computed laminography. The system consists of a 160-kV microfocus x-ray source, a solid-state charge-coupled device (CCD) area detector, a five-axis object-positioning subassembly, and a Sun SPARCstation-based computer system that controls data acquisition and image processing. The x-ray source provides a beam spot size down to 3 microns. The area detector system consists of a 50- by 50- by 3-mm-thick terbium-doped glass fiber-optic scintillation screen, a right-angle mirror, and a scientific-grade, digital CCD camera with a resolution of 1000 by 1018 pixels and 10-bit digitization at ambient cooling. The digital output is recorded with a high-speed, 16-bit frame grabber that allows data to be binned. The detector can be configured to provide a small field-of-view, approximately 45 by 45 mm in cross section, or a larger field-of-view, approximately 60 by 60 mm in cross section. Whenever the highest spatial resolution is desired, the small field-of-view is used, and for larger samples with some reduction in spatial resolution, the larger field-of-view is used.
A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data
NASA Astrophysics Data System (ADS)
Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.
2002-01-01
Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm ``WAVDETECT,'' part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or ``Mexican Hat'' wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low count regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to simulated Chandra ACIS-I image of the Lockman Hole region.
Hyperspectral remote sensing for monitoring species-specific drought impacts in southern California
NASA Astrophysics Data System (ADS)
Coates, Austin Reece
A drought persisting since the winter of 2011-2012 has resulted in severe impacts on shrublands and forests in southern California, USA. Effects of drought on vegetation include leaf wilting, leaf abscission, and potential plant mortality. These impacts vary across plant species, depending on differences in species' adaptations to drought, rooting depth, and edaphic factors. During 2013 and 2014, Airborne Visible Infrared Imaging Spectrometer (AVIRIS) data were acquired seasonally over the Santa Ynez Mountains and Santa Ynez Valley north of Santa Barbara, California. To determine the impacts of drought on individual plant species, spectral mixture analysis was used to model a relative green vegetation fraction (RGVF) for each image date in 2013 and 2014. A July 2011 AVIRIS image acquired during the last nondrought year was used to determine a reference green vegetation (GV) endmember for each pixel. For each image date in 2013 and 2014, a three-endmember model using the 2011 pixel spectrum as GV, a lab nonphotosynthetic vegetation (NPV) spectrum, and a photometric shade spectrum was applied. The resulting RGVF provided a change in green vegetation cover relative to 2011. Reference polygons collected for 14 plant species and land cover classes were used to extract the RGVF values from each date. The deeply rooted tree species and tree species found in mesic areas appeared to be the least affected by the drought, whereas the evergreen chaparral showed the most extreme signs of distress. Coastal sage scrub had large seasonal variability; however, each year, it returned to an RGVF value only slightly below the previous year. By binning all the RGVF values together, a general decreasing trend was observed from the spring of 2013 to the fall of 2014. This study intends to lay the groundwork for future research in the area of multitemporal, hyperspectral remote sensing. With proposed plans for a hyperspectral sensor in space (HyspIRI), this type of research will prove to be invaluable in the years to come. This study also intends to be used as a benchmark to show how specific species of plants are being affected by a prolonged drought. The research performed in this study will provide a reference point for analysis of future droughts.
Fast Fiber-Coupled Imaging Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas
HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full scale 1024 pixel 100 MegaFrames/s fiber coupled camera with 12 or 14 bits, and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber optically-coupled, imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100 pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority over increasing pixel count. In this Phase 2more » effort, HyperV therefore focused on increasing the sample rate and bit-depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was 53.31 dollars, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first generation prototype system. We experimentally observed backlit high speed fan blades in initial camera testing and then followed that with full movies and streak images of free flowing high speed plasma jets (at 30-50 km/s). Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques are inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024 channel camera at its own facility, and a second plasma community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.« less
The Multidimensional Integrated Intelligent Imaging project (MI-3)
NASA Astrophysics Data System (ADS)
Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.
2009-06-01
MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.
NASA Astrophysics Data System (ADS)
Wu, Bo; Liu, Wai Chung; Grumpe, Arne; Wöhler, Christian
2018-06-01
Lunar Digital Elevation Model (DEM) is important for lunar successful landing and exploration missions. Lunar DEMs are typically generated by photogrammetry or laser altimetry approaches. Photogrammetric methods require multiple stereo images of the region of interest and it may not be applicable in cases where stereo coverage is not available. In contrast, reflectance based shape reconstruction techniques, such as shape from shading (SfS) and shape and albedo from shading (SAfS), apply monocular images to generate DEMs with pixel-level resolution. We present a novel hierarchical SAfS method that refines a lower-resolution DEM to pixel-level resolution given a monocular image with known light source. We also estimate the corresponding pixel-wise albedo map in the process and based on that to regularize the shape reconstruction with pixel-level resolution based on the low-resolution DEM. In this study, a Lunar-Lambertian reflectance model is applied to estimate the albedo map. Experiments were carried out using monocular images from the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC), with spatial resolution of 0.5-1.5 m per pixel, constrained by the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM), with spatial resolution of 60 m. The results indicate that local details are well recovered by the proposed algorithm with plausible albedo estimation. The low-frequency topographic consistency depends on the quality of low-resolution DEM and the resolution difference between the image and the low-resolution DEM.
Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging
NASA Astrophysics Data System (ADS)
Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi
2016-10-01
In this paper, we report on comparisons of single-pixel imagings using Hadamard Transform (HT) and the ghost imaging (GI) in the view point of the visibility under weak light conditions. For comparing the two methods, we have discussed about qualities of images based on experimental results and numerical analysis. To detect images by the TH method, we have illuminated the Hadamard-pattern mask and calculated by orthogonal transform. On the other hand, the GH method can detect images by illuminating random patterns and a correlation measurement. For comparing two methods under weak light intensity, we have controlled illuminated intensities of a DMD projector about 0.1 in signal-to-noise ratio. Though a process speed of the HT image was faster then an image via the GI, the GI method has an advantage of detection under weak light condition. An essential difference between the HT and the GI method is discussed about reconstruction process. Finally, we also show a typical application of the single-pixel imaging such as hyperspectral images by using dual-optical frequency combs. An optical setup consists of two fiber lasers, spatial light modulated for generating patten illumination, and a single pixel detector. We are successful to detect hyperspectrul images in a range from 1545 to 1555 nm at 0.01nm resolution.
A general method for motion compensation in x-ray computed tomography
NASA Astrophysics Data System (ADS)
Biguri, Ander; Dosanjh, Manjit; Hancock, Steven; Soleimani, Manuchehr
2017-08-01
Motion during data acquisition is a known source of error in medical tomography, resulting in blur artefacts in the regions that move. It is critical to reduce these artefacts in applications such as image-guided radiation therapy as a clearer image translates into a more accurate treatment and the sparing of healthy tissue close to a tumour site. Most research in 4D x-ray tomography involving the thorax relies on respiratory phase binning of the acquired data and reconstructing each of a set of images using the limited subset of data per phase. In this work, we demonstrate a motion-compensation method to reconstruct images from the complete dataset taken during breathing without recourse to phase-binning or breath-hold techniques. As long as the motion is sufficiently well known, the new method can accurately reconstruct an image at any time during the acquisition time span. It can be applied to any iterative reconstruction algorithm.
A general method for motion compensation in x-ray computed tomography.
Biguri, Ander; Dosanjh, Manjit; Hancock, Steven; Soleimani, Manuchehr
2017-07-24
Motion during data acquisition is a known source of error in medical tomography, resulting in blur artefacts in the regions that move. It is critical to reduce these artefacts in applications such as image-guided radiation therapy as a clearer image translates into a more accurate treatment and the sparing of healthy tissue close to a tumour site. Most research in 4D x-ray tomography involving the thorax relies on respiratory phase binning of the acquired data and reconstructing each of a set of images using the limited subset of data per phase. In this work, we demonstrate a motion-compensation method to reconstruct images from the complete dataset taken during breathing without recourse to phase-binning or breath-hold techniques. As long as the motion is sufficiently well known, the new method can accurately reconstruct an image at any time during the acquisition time span. It can be applied to any iterative reconstruction algorithm.
NASA Astrophysics Data System (ADS)
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki
2017-02-01
Local statistics are widely utilized for quantification and image processing of OCT. For example, local mean is used to reduce speckle, local variation of polarization state (degree-of-polarization-uniformity (DOPU)) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangle kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. Superpixel is a cluster of pixels which is generated by grouping image pixels based on the spatial proximity and similarity of signal values. Superpixels have variant size and flexible shapes which preserve the tissue structure. Here we demonstrate a new superpixel method which is tailored for multifunctional Jones matrix OCT (JM-OCT). This new method forms the superpixels by clustering image pixels in a 6-dimensional (6-D) feature space (spatial two dimensions and four dimensions of optical features). All image pixels were clustered based on their spatial proximity and optical feature similarity. The optical features are scattering, OCT-A, birefringence and DOPU. The method is applied to retinal OCT. Generated superpixels preserve the tissue structures such as retinal layers, sclera, vessels, and retinal pigment epithelium. Hence, superpixel can be utilized as a local statistics kernel which would be more suitable than a uniform rectangle kernel. Superpixelized image also can be used for further image processing and analysis. Since it reduces the number of pixels to be analyzed, it reduce the computational cost of such image processing.
Han, Seokmin; Kang, Dong-Goo
2014-01-01
An easily implementable tissue cancellation method for dual energy mammography is proposed to reduce anatomical noise and enhance lesion visibility. For dual energy calibration, the images of an imaging object are directly mapped onto the images of a customized calibration phantom. Each pixel pair of the low and high energy images of the imaging object was compared to pixel pairs of the low and high energy images of the calibration phantom. The correspondence was measured by absolute difference between the pixel values of imaged object and those of the calibration phantom. Then the closest pixel pair of the calibration phantom images is marked and selected. After the calibration using direct mapping, the regions with lesion yielded different thickness from the background tissues. Taking advantage of the different thickness, the visibility of cancerous lesions was enhanced with increased contrast-to-noise ratio, depending on the size of lesion and breast thickness. However, some tissues near the edge of imaged object still remained after tissue cancellation. These remaining residuals seem to occur due to the heel effect, scattering, nonparallel X-ray beam geometry and Poisson distribution of photons. To improve its performance further, scattering and the heel effect should be compensated.
Laser pixelation of thick scintillators for medical imaging applications: x-ray studies
NASA Astrophysics Data System (ADS)
Sabet, Hamid; Kudrolli, Haris; Marton, Zsolt; Singh, Bipin; Nagarkar, Vivek V.
2013-09-01
To achieve high spatial resolution required in nuclear imaging, scintillation light spread has to be controlled. This has been traditionally achieved by introducing structures in the bulk of scintillation materials; typically by mechanical pixelation of scintillators and fill the resultant inter-pixel gaps by reflecting materials. Mechanical pixelation however, is accompanied by various cost and complexity issues especially for hard, brittle and hygroscopic materials. For example LSO and LYSO, hard and brittle scintillators of interest to medical imaging community, are known to crack under thermal and mechanical stress; the material yield drops quickly with large arrays with high aspect ratio pixels and therefore the pixelation process cost increases. We are utilizing a novel technique named Laser Induced Optical Barriers (LIOB) for pixelation of scintillators that overcomes the issues associated with mechanical pixelation. In this technique, we can introduce optical barriers within the bulk of scintillator crystals to form pixelated arrays with small pixel size and large thickness. We applied LIOB to LYSO using a high-frequency solid-state laser. Arrays with different crystal thickness (5 to 20 mm thick), and pixel size (0.8×0.8 to 1.5×1.5 mm2) were fabricated and tested. The width of the optical barriers were controlled by fine-tuning key parameters such as lens focal spot size and laser energy density. Here we report on LIOB process, its optimization, and the optical crosstalk measurements using X-rays. There are many applications that can potentially benefit from LIOB including but not limited to clinical/pre-clinical PET and SPECT systems, and photon counting CT detectors.
NASA Astrophysics Data System (ADS)
Gopalswamy, N.; Yashiro, Seiji; Reginald, Nelson; Thakur, Neeharika; Thompson, Barbara J.; Gong, Qian
2018-01-01
We present preliminary results obtained by observing the solar corona during the 2017 August 21 total solar eclipse using a polarization camera mounted on an eight-inch Schmidt-Cassegrain telescope. The observations were made from Madras Oregon during 17:19 to 17:21 UT. Total and polarized brightness images were obtained at four wavelengths (385, 398.5, 410, and 423 nm). The polarization camera had a polarization mask mounted on a 2048x2048 pixel CCD with a pixel size of 7.4 microns. The resulting images had a size of 975x975 pixels because four neighboring pixels were summed to yield the polarization and total brightness images. The ratio of 410 and 385 nm images is a measure of the coronal temperature, while that at 423 and 398.5 nm images is a measure of the coronal flow speed. We compared the temperature map from the eclipse observations with that obtained from the Solar Dynamics Observatory’s Atmospheric Imaging Assembly images at six EUV wavelengths, yielding consistent temperature information of the corona.
NASA Astrophysics Data System (ADS)
Li, Linyi; Chen, Yun; Yu, Xin; Liu, Rui; Huang, Chang
2015-03-01
The study of flood inundation is significant to human life and social economy. Remote sensing technology has provided an effective way to study the spatial and temporal characteristics of inundation. Remotely sensed images with high temporal resolutions are widely used in mapping inundation. However, mixed pixels do exist due to their relatively low spatial resolutions. One of the most popular approaches to resolve this issue is sub-pixel mapping. In this paper, a novel discrete particle swarm optimization (DPSO) based sub-pixel flood inundation mapping (DPSO-SFIM) method is proposed to achieve an improved accuracy in mapping inundation at a sub-pixel scale. The evaluation criterion for sub-pixel inundation mapping is formulated. The DPSO-SFIM algorithm is developed, including particle discrete encoding, fitness function designing and swarm search strategy. The accuracy of DPSO-SFIM in mapping inundation at a sub-pixel scale was evaluated using Landsat ETM + images from study areas in Australia and China. The results show that DPSO-SFIM consistently outperformed the four traditional SFIM methods in these study areas. A sensitivity analysis of DPSO-SFIM was also carried out to evaluate its performances. It is hoped that the results of this study will enhance the application of medium-low spatial resolution images in inundation detection and mapping, and thereby support the ecological and environmental studies of river basins.
VizieR Online Data Catalog: Follow-up photometry and spectroscopy of KELT-17 (Zhou+, 2016)
NASA Astrophysics Data System (ADS)
Zhou, G.; Rodriguez, J. E.; Collins, K. A.; Beatty, T.; Oberst, T.; Heintz, T. M.; Stassun, K. G.; Latham, D. W.; Kuhn, R. B.; Bieryla, A.; Lund, M. B.; Labadie-Bartz, J.; Siverd, R. J.; Stevens, D. J.; Gaudi, B. S.; Pepper, J.; Buchhave, L. A.; Eastman, J.; Colon, K.; Cargile, P.; James, D.; Gregorio, J.; Reed, P. A.; Jensen, E. L. N.; Cohen, D. H.; McLeod, K. K.; Tan, T. G.; Zambelli, R.; Bayliss, D.; Bento, J.; Esquerdo, G. A.; Berlind, P.; Calkins, M. L.; Blancato, K.; Manner, M.; Samulski, C.; Stockdale, C.; Nelson, P.; Stephens, D.; Curtis, I.; Kielkopf, J.; Fulton, B. J.; Depoy, D. L.; Marshall, J. L.; Pogge, R.; Gould, A.; Trueblood, M.; Trueblood, P.
2017-05-01
KELT-17, the first exoplanet host discovered through the combined observations of both the Kilodegree Extremely Little Telescope (KELT)-North and KELT-South, is located in KELT-South field 06 (KS06) and KELT-North field 14 (KN14), which are both centered on α=07h39m36s δ=+03°00'00'' (J2000). At the time of identification, the post-processed KELT data set included 2092 images from KN14, taken between UT 2011 October 11 and UT 2013 March 26 and 2636 images from KS06 taken between UT 2010 March 02 and 2013 May 10. The discovery light curves from both KELT-North and KELT-South are shown in Figure1. We obtained higher spatial resolution and precision photometric follow-up observations of KELT-17b in multiple filters. An I-band transit was observed on UT 2015 March 05 at the Canela's Robotic Observatory (CROW) with the 0.3m SCT12 telescope, remotely operated from Portalegre, Portugal. Observations were acquired with the ST10XME CCD camera, with a 30'*20' field of view and a 0.86'' pixel scale. A full multi-color (V and I) transit of KELT-17b was observed on UT 2015 March 12 at Kutztown University Observatory (KUO), located on the campus of Kutztown University in Kutztown, Pennsylvania. KUO's main instrument is the 0.6 m Ritchey-Chretien optical telescope with a focal ratio of f/8. The imaging CCD (KAF-6303E) camera has an array of 3K*2K (9μm) pixels and covers a field of view of 19.5'*13.0'. The Peter van de Kamp Observatory (PvdK) at Swarthmore College (near Philadelphia) houses a 0.62m Ritchey-Chretien reflector with a 4K*4K pixel Apogee CCD. The telescope and camera together have a 26'*26' field of view and a 0.61'' pixel scale. PvdK observed KELT-17b on UT 2015 March 12 in the SDSS z' filter. KELT-17b was observed in both g' and i' on UT 2015 March 12 at Wellesley College's Whitin Observatory in Massachusetts. The telescope is a 0.6m Boller and Chivens with a DFM focal reducer yielding an effective focal ratio of f/9.6. We used an Apogee U230 2K*2K camera with a 0.58''/pixel scale and a 20'*20' field of view. One full transit of KELT-17b was observed from the Westminster College Observatory (WCO), PA, on UT 2015 November 4 in the z' filter. The observations employed a 0.35m f/11 Celestron C14 Schmidt-Cassegrain telescope and SBIG STL-6303E CCD with a ~3K*2K array of 9μm pixels, yielding a 24'*16' field of view and 1.4''/pixel image scale at 3*3 pixel binning. The stellar FWHM was seeing-limited with a typical value of ~3.2''. Three full transits of KELT-17b were observed on UT 2016 February 26 (g' and i') and UT 2016 March 31 (r') using the Manner-Vanderbilt Ritchie-Chrtien (MVRC) telescope located at the Mt. Lemmon summit of Steward Observatory, AZ. The observations employed a 0.6m f/8 RC Optical Systems Ritchie-Chretien telescope and SBIG STX-16803 CCD with a 4K*4K array of 9μm pixels, yielding a 26'*26' field of view and 0.39''/pixel image scale. The telescope was heavily defocused for all three observations resulting in a typical stellar FWHM of ~17''. The Perth Exoplanet Survey Telescope (PEST) observatory is a backyard observatory owned and operated by ThiamGuan (TG) Tan, located in Perth, Australia. It is equipped with a 0.3m Meade LX200 SCT f/10 telescope with focal reducer yielding f/5 and an SBIG ST-8XME CCD camera. The telescope and camera combine to have a 31'*21' field of view and a 1.2'' pixel scale. PEST observed KELT-17b on UT 2016 March 06 in the B band. A series of spectroscopic follow-up observations were performed to characterize the KELT-17 system. We performed low-resolution, high-signal-to-noise reconnaissance spectroscopic follow-up of KELT-17 using the Wide Field Spectrograph (WiFeS) on the Australian National University (ANU) 2.3m telescope at Siding Spring Observatory, Australia in 2015 February. In-depth spectroscopic characterization of KELT-17 was performed by the Tillinghast Reflector Echelle Spectrograph (TRES) on the 1.5m telescope at the Fred Lawrence Whipple Observatory, Mount Hopkins, Arizona, USA. TRES has a wavelength coverage of 3900-9100Å over 51 echelle orders, with a resolving power of λ/Δλ R=44000. A total of 12 out-of-transit observations were taken to characterize the radial velocity orbital variations exhibited by KELT-17. In addition, we also observed spectroscopic transits of KELT-17b with TRES on 2016 February 23 and 2016 February 26 UT, gathering 33 and 29 sets of spectra, respectively. (4 data files).
Polarized-pixel performance model for DoFP polarimeter
NASA Astrophysics Data System (ADS)
Feng, Bin; Shi, Zelin; Liu, Haizheng; Liu, Li; Zhao, Yaohong; Zhang, Junchao
2018-06-01
A division of a focal plane (DoFP) polarimeter is manufactured by placing a micropolarizer array directly onto the focal plane array (FPA) of a detector. Each element of the DoFP polarimeter is a polarized pixel. This paper proposes a performance model for a polarized pixel. The proposed model characterizes the optical and electronic performance of a polarized pixel by three parameters. They are respectively major polarization responsivity, minor polarization responsivity and polarization orientation. Each parameter corresponds to an intuitive physical feature of a polarized pixel. This paper further extends this model to calibrate polarization images from a DoFP (division of focal plane) polarimeter. This calibration work is evaluated quantitatively by a developed DoFP polarimeter under varying illumination intensity and angle of linear polarization. The experiment proves that our model reduces nonuniformity to 6.79% of uncalibrated DoLP (degree of linear polarization) images, and significantly improves the visual effect of DoLP images.
Improved Fast, Deep Record Length, Time-Resolved Visible Spectroscopy of Plasmas Using Fiber Grids
NASA Astrophysics Data System (ADS)
Brockington, S.; Case, A.; Cruz, E.; Williams, A.; Witherspoon, F. D.; Horton, R.; Klauser, R.; Hwang, D.
2017-10-01
HyperV Technologies is developing a fiber-coupled, deep record-length, low-light camera head for performing high time resolution spectroscopy on visible emission from plasma events. By coupling the output of a spectrometer to an imaging fiber bundle connected to a bank of amplified silicon photomultipliers, time-resolved spectroscopic imagers of 100 to 1,000 pixels can be constructed. A second generation prototype 32-pixel spectroscopic imager employing this technique was constructed and successfully tested at the University of California at Davis Compact Toroid Injection Experiment (CTIX). Pixel performance of 10 Megaframes/sec with record lengths of up to 256,000 frames ( 25.6 milliseconds) were achieved. Pixel resolution was 12 bits. Pixel pitch can be refined by using grids of 100 μm to 1000 μm diameter fibers. Experimental results will be discussed, along with future plans for this diagnostic. Work supported by USDOE SBIR Grant DE-SC0013801.