Evaluating the capacity of GF-4 satellite data for estimating fractional vegetation cover
NASA Astrophysics Data System (ADS)
Zhang, C.; Qin, Q.; Ren, H.; Zhang, T.; Sun, Y.
2016-12-01
Fractional vegetation cover (FVC) is a crucial parameter for many agricultural, environmental, meteorological and ecological applications, and is of great importance for studies of ecosystem structure and function. The Chinese GaoFen-4 (GF-4) geostationary satellite, designed for environmental and ecological observation, was launched on December 29, 2015, and was put into official service by the Chinese government on June 13, 2016. From its 36,000 km-altitude orbit, the sensor on GF-4 acquires multi-spectral images with 50 m spatial resolution and high temporal resolution. To take full advantage of this performance, this study evaluated the capacity of GF-4 satellite data for monitoring FVC. To the best of our knowledge, this is the first study to estimate FVC from GF-4 satellite images. First, we developed a procedure for preprocessing GF-4 satellite data, including radiometric calibration and atmospheric correction, to obtain surface reflectance. Then a single image and multi-temporal images were used to extract the vegetation and soil endmembers, respectively. After that, the dimidiate pixel model and a square model based on vegetation indices were used to estimate FVC. Finally, the estimation results were compared with FVC estimated from other existing sensors. The experimental results showed that satisfactory FVC estimation accuracy can be achieved from GF-4 satellite images using the dimidiate pixel model and the square model based on vegetation indices. Moreover, the multi-temporal images increased the probability of finding pure vegetation and soil endmembers, so the high temporal resolution of GF-4 imagery improved the accuracy of FVC estimation. This study demonstrated the capacity of GF-4 satellite data for monitoring FVC. The conclusions are significant for improving the accuracy and spatio-temporal resolution of existing FVC products, and provide a basis for studies of ecosystem structure and function using remote sensing data acquired by the GF-4 satellite.
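The dimidiate (two-component) pixel model referred to throughout these studies reduces, for an NDVI-based index, to a linear scaling between soil and vegetation endmembers. A minimal sketch in Python/NumPy, assuming NDVI input; the endmember values are illustrative placeholders, not values from the study:

```python
import numpy as np

def fvc_dimidiate(ndvi, ndvi_soil, ndvi_veg):
    """Dimidiate (two-component) pixel model:
    FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil), clipped to [0, 1].
    ndvi_soil / ndvi_veg are the pure-soil and pure-vegetation endmembers,
    e.g. taken from extreme NDVI percentiles of single or multi-temporal scenes."""
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

# Illustrative use with hypothetical endmembers
ndvi = np.array([[0.15, 0.42], [0.68, 0.80]])
print(fvc_dimidiate(ndvi, ndvi_soil=0.10, ndvi_veg=0.85))
```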
NASA Astrophysics Data System (ADS)
Dong, L.; Jiang, H.; Yang, L.
2018-04-01
Based on Landsat images from 2006, 2011 and 2015, the dimidiate pixel model, and the Normalized Difference Vegetation Index (NDVI), this paper analyzes the spatio-temporal variation of vegetation coverage in Changchun, China from 2006 to 2015 and investigates the response of vegetation coverage change to natural and anthropogenic factors. The results show that over these ten years the vegetation coverage in Changchun dropped remarkably, reaching its minimum in 2011. Moreover, the decrease in maximum NDVI was significant, about 27.43% from 2006 to 2015. Vegetation coverage change differed significantly among regions of the study area: coverage in Changchun proper showed a slight drop, while in Yushu, Nong'an and Dehui it decreased first and then increased slowly. In addition, temperature and precipitation change and land reclamation all affect vegetation coverage. In short, the study of vegetation coverage change provides scientific and technical support to government and environmental protection departments, so as to promote the coordinated development of ecology and economy.
Zhang, Zhiming; Ouyang, Zhiyun; Xiao, Yi; Xiao, Yang; Xu, Weihua
2017-06-01
Increasing exploitation of karst resources is causing severe environmental degradation because of the fragility and vulnerability of karst areas. By integrating principal component analysis (PCA) with annual seasonal trend analysis (ASTA), this study assessed karst rocky desertification (KRD) within a spatial context. We first produced fractional vegetation cover (FVC) data from a moderate-resolution imaging spectroradiometer normalized difference vegetation index using a dimidiate pixel model. Then, we generated three main components of the annual FVC data using PCA. Subsequently, we generated the slope image of the annual seasonal trends of FVC using median trend analysis. Finally, we combined the three PCA components and annual seasonal trends of FVC with the incidence of KRD for each type of carbonate rock to classify KRD into one of four categories based on K-means cluster analysis: high, moderate, low, and none. The results of accuracy assessments indicated that this combination approach produced greater accuracy and more reasonable KRD mapping than the average FVC based on the vegetation coverage standard. The KRD map for 2010 indicated that the total area of KRD was 78.76 × 10³ km², which constitutes about 4.06% of the eight southwest provinces of China. The largest KRD areas were found in Yunnan province. The combined PCA and ASTA approach was demonstrated to be an easily implemented, robust, and flexible method for the mapping and assessment of KRD, which can be used to enhance regional KRD management schemes or to address assessment of other environmental issues.
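A hedged sketch of the PCA + trend + K-means pipeline described above, in Python with scikit-learn. The synthetic annual_fvc array, the least-squares slope used in place of the paper's median trend analysis, and all parameter choices are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical stack of per-pixel FVC values for one year (n_pixels, n_dates)
rng = np.random.default_rng(0)
annual_fvc = rng.random((1000, 23))            # e.g. 23 MODIS composites per year

# Step 1: three leading principal components of the annual FVC series
pcs = PCA(n_components=3).fit_transform(annual_fvc)

# Step 2: per-pixel trend slope (least-squares slope as a stand-in for median trend analysis)
t = np.arange(annual_fvc.shape[1])
slope = np.polyfit(t, annual_fvc.T, 1)[0]

# Step 3: cluster the combined feature vector into four KRD severity classes
features = np.column_stack([pcs, slope])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))
```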
Variable pixel size ionospheric tomography
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Zheng, Hongwei; Wang, Yanjun; Nie, Wenfeng; Li, Chaokui; Ao, Minsi; Hu, Wusheng; Zhou, Wei
2017-06-01
A novel ionospheric tomography technique based on variable pixel size was developed for the tomographic reconstruction of the ionospheric electron density (IED) distribution. In the variable pixel size computerized ionospheric tomography (VPSCIT) model, the IED distribution is parameterized by a decomposition of the lower and upper ionosphere with different pixel sizes, so the lower and upper IED distributions may be determined very differently by the available data. In most other respects, variable pixel size tomography and constant pixel size tomography are similar. Two differences remain between the two kinds of models: the segments of a GPS signal path must be assigned to the appropriate kind of pixel in the inversion, and the smoothness constraint factor needs to be modified appropriately where the pixels change in size. For a real dataset, the variable pixel size method distinguishes different electron density distribution zones better than the constant pixel size method, provided the effort is spent to identify the regions of the model with the best data coverage. The variable pixel size method can not only greatly improve the efficiency of the inversion, but can also produce IED images with fidelity comparable to those from a uniform pixel size method. In addition, variable pixel size tomography can reduce the underdetermination of the ill-posed inverse problem when the data coverage is irregular or sparse, by adjusting the proportion of pixels with different sizes. In comparison with constant pixel size tomography models, the variable pixel size ionospheric tomography technique achieved relatively good results in a numerical simulation, and a careful validation of its reliability and superiority was performed. Finally, according to the results of the statistical analysis and quantitative comparison, the proposed method offers an improvement of 8% over conventional constant pixel size tomography models in the forward modeling.
Research on ionospheric tomography based on variable pixel height
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Li, Peiqing; He, Jie; Hu, Wusheng; Li, Chaokui
2016-05-01
A novel ionospheric tomography technique based on variable pixel height was developed for the tomographic reconstruction of the ionospheric electron density distribution. The method considers the height of each pixel as an unknown variable, which is retrieved during the inversion process together with the electron density values. In contrast to conventional computerized ionospheric tomography (CIT), which parameterizes the model with a fixed pixel height, the variable-pixel-height computerized ionospheric tomography (VHCIT) model applies a disturbance to the height of each pixel. In comparison with conventional CIT models, the VHCIT technique achieved superior results in a numerical simulation. A careful validation of the reliability and superiority of VHCIT was performed. According to the results of the statistical analysis of the average root mean square errors, the proposed model offers an improvement of 15% compared with conventional CIT models.
NASA Technical Reports Server (NTRS)
Haralick, R. M.
1982-01-01
The facet model was used to accomplish step edge detection. The essence of the facet model is that any analysis made on the basis of the pixel values in some neighborhood has its final authoritative interpretation relative to the underlying grey tone intensity surface of which the neighborhood pixel values are observed noisy samples. Pixels which are part of regions have simple grey tone intensity surfaces over their areas. Pixels which have an edge in them have complex grey tone intensity surfaces over their areas. Specifically, an edge moves through a pixel only if there is some point in the pixel's area having a zero crossing of the second directional derivative taken in the direction of a non-zero gradient at the pixel's center. To determine whether or not a pixel should be marked as a step edge pixel, its underlying grey tone intensity surface was estimated on the basis of the pixels in its neighborhood.
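The edge criterion described above can be sketched numerically. The following Python snippet approximates the underlying grey-tone surface with smoothed finite differences rather than an explicit polynomial facet fit, so it is only an illustration of the zero-crossing test, not Haralick's exact estimator:

```python
import numpy as np
from scipy import ndimage

def facet_step_edges(img, grad_thresh=1.0):
    """Mark step-edge candidates: the second directional derivative along the
    local gradient direction changes sign (zero crossing) and the gradient is
    non-zero. Derivatives come from smoothed finite differences here."""
    img = ndimage.gaussian_filter(img.astype(float), 1.0)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    gxx = ndimage.sobel(gx, axis=1)
    gyy = ndimage.sobel(gy, axis=0)
    gxy = ndimage.sobel(gx, axis=0)
    gmag = np.hypot(gx, gy) + 1e-12
    nx, ny = gx / gmag, gy / gmag
    # second directional derivative taken along the gradient direction
    d2 = gxx * nx**2 + 2 * gxy * nx * ny + gyy * ny**2
    # zero crossing of d2 between neighboring pixels
    zc = (np.sign(d2) != np.sign(np.roll(d2, 1, axis=0))) | \
         (np.sign(d2) != np.sign(np.roll(d2, 1, axis=1)))
    return zc & (gmag > grad_thresh)
```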
NASA Astrophysics Data System (ADS)
Plimley, Brian; Coffer, Amy; Zhang, Yigong; Vetter, Kai
2016-08-01
Previously, scientific silicon charge-coupled devices (CCDs) with 10.5-μm pixel pitch and a thick (650 μm), fully depleted bulk have been used to measure gamma-ray-induced fast electrons and demonstrate electron track Compton imaging. A model of the response of this CCD was also developed and benchmarked to experiment using Monte Carlo electron tracks. We now examine the trade-off in pixel pitch and electronic noise. We extend our CCD response model to different pixel pitch and readout noise per pixel, including pixel pitch of 2.5 μm, 5 μm, 10.5 μm, 20 μm, and 40 μm, and readout noise from 0 eV/pixel to 2 keV/pixel for 10.5 μm pixel pitch. The CCD images generated by this model using simulated electron tracks are processed by our trajectory reconstruction algorithm. The performance of the reconstruction algorithm defines the expected angular sensitivity as a function of electron energy, CCD pixel pitch, and readout noise per pixel. Results show that our existing pixel pitch of 10.5 μm is near optimal for our approach, because smaller pixels add little new information but are subject to greater statistical noise. In addition, we measured the readout noise per pixel for two different device temperatures in order to estimate the effect of temperature on the reconstruction algorithm performance, although the readout is not optimized for higher temperatures. The noise in our device at 240 K increases the FWHM of angular measurement error by no more than a factor of 2, from 26° to 49° FWHM for electrons between 425 keV and 480 keV. Therefore, a CCD could be used for electron-track-based imaging in a Peltier-cooled device.
Zhang, Chu; Liu, Fei; He, Yong
2018-02-01
Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted with different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD), while spatial preprocessing of the gray-scale image at each wavelength was conducted with a median filter (MF). Support vector machine (SVM) models using full sample-average spectra, pixel-wise spectra, and the optimal wavelengths selected from second-derivative spectra all achieved classification accuracy over 80%. First, the SVM models trained on pixel-wise spectra were used to predict the sample-average spectra, and these models obtained over 80% classification accuracy. Second, the SVM models trained on sample-average spectra were used to predict pixel-wise spectra, but achieved classification accuracy below 50%. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra enlarged the calibration set and resulted in good predictions for both pixel-wise and sample-average spectra. The overall results indicated the effectiveness of spectral preprocessing and the adoption of pixel-wise spectra, and provide an alternative way of data processing for applications of hyperspectral imaging in the food industry.
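A minimal, hypothetical illustration of the pixel-wise workflow (moving-average smoothing followed by an SVM classifier) using scikit-learn; the random spectra, labels, window size, and SVM settings are placeholders, not the study's data or tuning:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def moving_average(spectra, window=5):
    """Moving-average (MA) smoothing of each spectrum along the wavelength axis."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, spectra)

# Hypothetical pixel-wise spectra: rows are pixels, columns are wavelengths
rng = np.random.default_rng(1)
X = rng.random((600, 200))
y = rng.integers(0, 2, 600)                  # two coffee-bean varieties (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(moving_average(X), y, random_state=0)
clf = SVC(kernel="rbf", C=10).fit(X_train, y_train)
print("pixel-wise accuracy:", clf.score(X_test, y_test))
```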
Muir, Ryan D; Pogranichney, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J
2014-09-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment.
Polarized-pixel performance model for DoFP polarimeter
NASA Astrophysics Data System (ADS)
Feng, Bin; Shi, Zelin; Liu, Haizheng; Liu, Li; Zhao, Yaohong; Zhang, Junchao
2018-06-01
A division-of-focal-plane (DoFP) polarimeter is manufactured by placing a micropolarizer array directly onto the focal plane array (FPA) of a detector. Each element of the DoFP polarimeter is a polarized pixel. This paper proposes a performance model for a polarized pixel. The proposed model characterizes the optical and electronic performance of a polarized pixel by three parameters: the major polarization responsivity, the minor polarization responsivity, and the polarization orientation. Each parameter corresponds to an intuitive physical feature of a polarized pixel. The paper further extends this model to calibrate polarization images from a DoFP polarimeter. The calibration is evaluated quantitatively with a developed DoFP polarimeter under varying illumination intensity and angle of linear polarization. The experiments show that our model reduces the nonuniformity of DoLP (degree of linear polarization) images to 6.79% of the uncalibrated level, and significantly improves the visual quality of the DoLP images.
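The three-parameter description lends itself to a simple Malus-law-style response function. The sketch below assumes that functional form purely for illustration; the parameter values are hypothetical:

```python
import numpy as np

def polarized_pixel_response(intensity, aolp, r_major, r_minor, orientation):
    """Assumed Malus-law-style response of a polarized pixel: major/minor
    polarization responsivities and the micropolarizer orientation (radians).
    This exact functional form is an assumption made for illustration."""
    delta = aolp - orientation
    return intensity * (r_major * np.cos(delta) ** 2 + r_minor * np.sin(delta) ** 2)

# Sweep the angle of linear polarization for one hypothetical pixel
aolp = np.deg2rad(np.arange(0, 180, 15))
print(polarized_pixel_response(1.0, aolp, r_major=0.95, r_minor=0.05,
                               orientation=np.deg2rad(45)))
```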
Photovoltaic Pixels for Neural Stimulation: Circuit Models and Performance.
Boinagrov, David; Lei, Xin; Goetz, Georges; Kamins, Theodore I; Mathieson, Keith; Galambos, Ludwig; Harris, James S; Palanker, Daniel
2016-02-01
Photovoltaic conversion of pulsed light into pulsed electric current enables optically-activated neural stimulation with miniature wireless implants. In photovoltaic retinal prostheses, patterns of near-infrared light projected from video goggles onto subretinal arrays of photovoltaic pixels are converted into patterns of current to stimulate the inner retinal neurons. We describe a model of these devices and evaluate the performance of photovoltaic circuits, including the electrode-electrolyte interface. Characteristics of the electrodes measured in saline with various voltages, pulse durations, and polarities were modeled as voltage-dependent capacitances and Faradaic resistances. The resulting mathematical model of the circuit yielded dynamics of the electric current generated by the photovoltaic pixels illuminated by pulsed light. Voltages measured in saline with a pipette electrode above the pixel closely matched results of the model. Using the circuit model, our pixel design was optimized for maximum charge injection under various lighting conditions and for different stimulation thresholds. To speed discharge of the electrodes between the pulses of light, a shunt resistor was introduced and optimized for high frequency stimulation.
NASA Astrophysics Data System (ADS)
Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.
2014-10-01
Canopy shadowing mediated by topography is an important source of radiometric distortion on remote sensing images of rugged terrain. Topographic correction based on the sun-canopy-sensor (SCS) model significantly improved over those based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent for fine resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model: the sun-crown-sensor (SCnS) model based on high-resolution satellite imagery (IKONOS) and high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels and form another variety of SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes. We analyzed these improvements with model comparison on the red and near infrared bands. The advantages of SCnS + C and SCnS + W on both bands are expected to facilitate forest classification and change detection applications.
1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors.
Lu, Guo-Neng; Tournier, Arnaud; Roy, François; Deschamps, Benoît
2009-01-01
We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as photo-sensing device and source-follower transistor, and can be controlled to store and evacuate charges. Our investigation into this 1T pixel structure includes modeling to obtain analytical description of conversion gain. Model validation has been done by comparing theoretical predictions and experimental results. On the other hand, the 1T pixel structure has been implemented in different configurations, including rectangular-gate and ring-gate designs, and variations of oxidation parameters for the fabrication process. The pixel characteristics are presented and discussed.
Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model
NASA Astrophysics Data System (ADS)
Li, X. L.; Zhao, Q. H.; Li, Y.
2017-09-01
Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image and the image domain is divided into sub-regions using a Voronoi tessellation; each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the probability of a pixel is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of this probability serves as the dissimilarity measure between the pixel and the cluster, and the regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation is obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of segmentation results on simulated and real SAR images.
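The per-region dissimilarity measure described above can be sketched directly from its definition (negative log-likelihood of a Gamma mixture, summed over the pixels of one Voronoi sub-region). The mixture parameters and sample intensities below are illustrative:

```python
import numpy as np
from scipy.stats import gamma

def regional_dissimilarity(region_pixels, shapes, scales, weights):
    """Sum over the sub-region of -log p(pixel | cluster), where p is a Gamma
    mixture density for one cluster; this is the regional dissimilarity measure."""
    dens = sum(w * gamma.pdf(region_pixels, a=k, scale=s)
               for w, k, s in zip(weights, shapes, scales))
    return -np.sum(np.log(dens + 1e-12))

# Hypothetical SAR intensities of one Voronoi sub-region and one cluster's parameters
pixels = np.random.default_rng(2).gamma(shape=3.0, scale=2.0, size=50)
print(regional_dissimilarity(pixels, shapes=[2.5, 4.0], scales=[1.8, 2.5], weights=[0.6, 0.4]))
```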
Salience Assignment for Multiple-Instance Data and Its Application to Crop Yield Prediction
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Lane, Terran
2010-01-01
An algorithm was developed to generate crop yield predictions from orbital remote sensing observations, by analyzing thousands of pixels per county and the associated historical crop yield data for those counties. The algorithm determines which pixels contain which crop. Since each known yield value is associated with thousands of individual pixels, this is a multiple instance learning problem. Because individual crop growth is related to the resulting yield, this relationship has been leveraged to identify pixels that are individually related to corn, wheat, cotton, and soybean yield. Those that have the strongest relationship to a given crop's yield values are most likely to contain fields with that crop. Remote sensing time series data (a new observation every 8 days) was examined for each pixel, which contains information for that pixel's growth curve, peak greenness, and other relevant features. An alternating-projection (AP) technique was used to first estimate the "salience" of each pixel, with respect to the given target (crop yield), and then those estimates were used to build a regression model that relates input data (remote sensing observations) to the target. This is achieved by constructing an exemplar for each crop in each county that is a weighted average of all the pixels within the county; the pixels are weighted according to the salience values. The new regression model estimate then informs the next estimate of the salience values. By iterating between these two steps, the algorithm converges to a stable estimate of both the salience of each pixel and the regression model. The salience values indicate which pixels are most relevant to each crop under consideration.
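A toy sketch of the alternating-projection idea in Python/scikit-learn: salience weights build county exemplars, a regression model maps exemplars to yield, and salience is refreshed from per-pixel prediction error. The update rules and data here are illustrative stand-ins, not the algorithm's exact formulation:

```python
import numpy as np
from sklearn.linear_model import Ridge

def alternating_salience(pixel_features, county_ids, yields, n_iter=10):
    """Alternate between (a) building weighted county exemplars and fitting a
    regression to county yields, and (b) updating per-pixel salience from how
    well each pixel alone predicts its county's yield (heuristic update)."""
    counties = np.unique(county_ids)
    salience = np.ones(len(pixel_features))
    model = Ridge(alpha=1.0)
    for _ in range(n_iter):
        exemplars = np.array([np.average(pixel_features[county_ids == c], axis=0,
                                         weights=salience[county_ids == c])
                              for c in counties])
        model.fit(exemplars, yields)
        per_pixel_pred = model.predict(pixel_features)
        county_yield = yields[np.searchsorted(counties, county_ids)]
        err = np.abs(per_pixel_pred - county_yield)
        salience = 1.0 / (1.0 + err)          # low error -> high salience
    return salience, model

rng = np.random.default_rng(3)
X = rng.random((500, 8))                      # 500 pixels, 8 time-series features
cid = rng.integers(0, 10, 500)                # 10 counties
y = rng.random(10) * 100                      # hypothetical county yields
salience, model = alternating_salience(X, cid, y)
```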
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal including determining a first motion vector between a first pixel position in a first image to a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of the first pixel position in the first image and the second pixel position in the second image, and the second pixel position in the second image and the third pixel position in the third image using a non-linear model, determining a position of the fourth pixel in a fourth image based upon the third motion vector.
Small pixel cross-talk MTF and its impact on MWIR sensor performance
NASA Astrophysics Data System (ADS)
Goss, Tristan M.; Willers, Cornelius J.
2017-05-01
As pixel sizes reduce in the development of modern High Definition (HD) Mid Wave Infrared (MWIR) detectors, inter-pixel cross-talk becomes increasingly difficult to regulate. The diffusion lengths required to achieve the quantum efficiency and sensitivity of MWIR detectors are typically longer than the pixel pitch, and the probability of inter-pixel cross-talk increases as the pixel pitch/diffusion length ratio decreases. Inter-pixel cross-talk is most conveniently quantified by the focal plane array sampling Modulation Transfer Function (MTF). Cross-talk MTF reduces the ideal sinc MTF of a square pixel that is commonly used when modelling sensor performance. However, cross-talk MTF data are not always readily available from detector suppliers, and since the origins of inter-pixel cross-talk are uniquely device- and manufacturing-process specific, no generic MTF model appears to satisfy the needs of sensor designers and analysts. In this paper cross-talk MTF data are collected from recent publications and the development of a generic cross-talk MTF model to fit these data is investigated. The resulting cross-talk MTF model is then included in a MWIR sensor model and the impact on sensor performance is evaluated in terms of the National Imagery Interpretability Rating Scale (NIIRS) General Image Quality Equation (GIQE) metric for a range of f-number/detector-pitch (Fλ/d) configurations and operating environments. By applying non-linear boost transfer functions in the signal processing chain, the contrast losses due to cross-talk may be compensated for. Boost transfer functions, however, also reduce the signal-to-noise ratio of the sensor. In this paper boost function limits are investigated and included in the sensor performance assessments.
Simulation of urban land surface temperature based on sub-pixel land cover in a coastal city
NASA Astrophysics Data System (ADS)
Zhao, Xiaofeng; Deng, Lei; Feng, Huihui; Zhao, Yanchuang
2014-11-01
Sub-pixel urban land cover has been shown to correlate strongly with land surface temperature (LST), yet these relationships have seldom been used to simulate LST. In this study we provide a new approach to urban LST simulation based on sub-pixel land cover modeling. Landsat TM/ETM+ images of Xiamen city, China from January 2002 and January 2007 were used to derive land cover and to extract transformation rules using logistic regression. The transformation probability, after normalization, was taken as the class percentage within each pixel, and cellular automata were used to simulate sub-pixel land cover for 2007 and 2017. Separately, the correlations between retrieved LST and sub-pixel land cover fractions obtained by spectral mixture analysis for 2002 were examined and a regression model was built. This regression model was then applied to the simulated 2007 land cover to model the LST of 2007, and finally the LST of 2017 was simulated for urban planning and management. The results show that the method is useful for LST simulation; although the simulation accuracy is not yet fully satisfactory, it provides a promising idea and a good starting point for modeling urban LST.
Hyperspectral target detection using heavy-tailed distributions
NASA Astrophysics Data System (ADS)
Willis, Chris J.
2009-09-01
One promising approach to target detection in hyperspectral imagery exploits a statistical mixture model to represent scene content at a pixel level. The process then goes on to look for pixels which are rare, when judged against the model, and marks them as anomalies. It is assumed that military targets will themselves be rare and therefore likely to be detected amongst these anomalies. For the typical assumption of multivariate Gaussianity for the mixture components, the presence of the anomalous pixels within the training data will have a deleterious effect on the quality of the model. In particular, the derivation process itself is adversely affected by the attempt to accommodate the anomalies within the mixture components. This will bias the statistics of at least some of the components away from their true values and towards the anomalies. In many cases this will result in a reduction in the detection performance and an increased false alarm rate. This paper considers the use of heavy-tailed statistical distributions within the mixture model. Such distributions are better able to account for anomalies in the training data within the tails of their distributions, and the balance of the pixels within their central masses. This means that an improved model of the majority of the pixels in the scene may be produced, ultimately leading to a better anomaly detection result. The anomaly detection techniques are examined using both synthetic data and hyperspectral imagery with injected anomalous pixels. A range of results is presented for the baseline Gaussian mixture model and for models accommodating heavy-tailed distributions, for different parameterizations of the algorithms. These include scene understanding results, anomalous pixel maps at given significance levels and Receiver Operating Characteristic curves.
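For context, the baseline Gaussian mixture anomaly detector that the heavy-tailed variants improve upon can be sketched in a few lines with scikit-learn; a multivariate-t (or other heavy-tailed) mixture would take the place of GaussianMixture in the paper's approach. Data and thresholds are synthetic placeholders:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical hyperspectral cube flattened to (n_pixels, n_bands)
rng = np.random.default_rng(4)
pixels = rng.normal(size=(5000, 30))
pixels[:20] += 6.0                                    # a few injected anomalies

# Baseline Gaussian mixture background model
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(pixels)
log_lik = gmm.score_samples(pixels)                   # log-likelihood under the background model
anomaly_mask = log_lik < np.percentile(log_lik, 0.5)  # rare pixels flagged as anomalies
print(anomaly_mask.sum(), "pixels flagged")
```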
Method and System for Temporal Filtering in Video Compression Systems
NASA Technical Reports Server (NTRS)
Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim
2011-01-01
Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector between the first pixel position in the first image and the second pixel position in the second image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
NASA Astrophysics Data System (ADS)
Xin, X.; Li, F.; Peng, Z.; Qinhuo, L.
2017-12-01
Land surface heterogeneity significantly affects the reliability and accuracy of remotely sensed evapotranspiration (ET), and the problem worsens for lower-resolution data. At the same time, temporal extrapolation of the instantaneous latent heat flux (LE) at satellite overpass time to daily ET is crucial for applications of such remote sensing products. The purpose of this paper is to propose a simple but efficient model for estimating daytime evapotranspiration that accounts for the heterogeneity of mixed pixels. To this end, an equation to calculate the evapotranspiration fraction (EF) of mixed pixels was derived based on two key assumptions. Assumption 1: the available energy (AE) of each sub-pixel is approximately equal to that of the other sub-pixels in the same mixed pixel, within an acceptable margin of bias, and equal to the AE of the mixed pixel; this is only a simplification of the equation, and its uncertainties and the resulting errors in estimated ET are very small. Assumption 2: the EF of each sub-pixel equals the EF of the nearest pure pixel(s) of the same land cover type. This equation is intended to correct the spatial-scale error of the EF of mixed pixels and can be used to calculate daily ET with daily AE data. The model was applied to an artificial oasis in the midstream of the Heihe River. HJ-1B satellite data were used to estimate the lumped fluxes at a scale of 300 m after resampling the 30-m resolution datasets to 300 m resolution, which was used to carry out the key step of the model. The results before and after correction were compared with each other and validated using data from eddy-covariance systems. The results indicated that the new model improves the accuracy of daily ET estimation relative to the lumped method. Validation at 12 eddy-covariance sites for 9 days of HJ-1B overpasses showed that R² increased from 0.62 to 0.82, the RMSE decreased from 2.47 MJ/m² to 1.60 MJ/m², and the MBE decreased from 1.92 MJ/m² to 1.18 MJ/m², a quite significant improvement. The model is easy to apply, and the module for inhomogeneous surfaces is independent and can easily be embedded in traditional remote sensing algorithms for heat fluxes to obtain daily ET; such algorithms were mainly designed to calculate LE or ET under unsaturated conditions and do not consider land surface heterogeneity.
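The two assumptions above imply a simple area-weighted form for the EF of a mixed pixel, and daily ET then follows from the daily available energy. A minimal sketch, with hypothetical cover fractions and EF values:

```python
import numpy as np

def mixed_pixel_ef(fractions, ef_pure):
    """EF of a mixed pixel as the area-fraction-weighted mean of the EF of the
    nearest pure pixels of each land-cover type (Assumptions 1 and 2 above:
    all sub-pixels share the mixed pixel's available energy)."""
    fractions = np.asarray(fractions, dtype=float)
    return np.sum(fractions * np.asarray(ef_pure)) / np.sum(fractions)

def daily_et(ef, daily_available_energy_mj):
    """Daily ET in energy units (MJ/m^2): EF times daily available energy."""
    return ef * daily_available_energy_mj

# Hypothetical 300-m pixel: 60% cropland, 30% bare soil, 10% water
ef = mixed_pixel_ef([0.6, 0.3, 0.1], ef_pure=[0.75, 0.25, 0.90])
print(ef, daily_et(ef, daily_available_energy_mj=14.0))
```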
NASA Astrophysics Data System (ADS)
Hartzell, P. J.; Glennie, C. L.; Hauser, D. L.; Okyay, U.; Khan, S.; Finnegan, D. C.
2016-12-01
Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from an exclusively airborne technique to terrestrial modalities. This enables high resolution 3D spatial and spectral quantification of vertical geologic structures for applications such as virtual 3D rock outcrop models for hydrocarbon reservoir analog analysis and mineral quantification in open pit mining environments. In contrast to airborne observation geometry, the vertical surfaces observed by horizontal-viewing terrestrial HSI sensors are prone to extensive topography-induced solar shadowing, which leads to reduced pixel classification accuracy or outright removal of shadowed pixels from analysis tasks. Using a precisely calibrated and registered offset cylindrical linear array camera model, we demonstrate the use of 3D lidar data for sub-pixel HSI shadow detection and the restoration of the shadowed pixel spectra via empirical methods that utilize illuminated and shadowed pixels of similar material composition. We further introduce a new HSI shadow restoration technique that leverages collocated backscattered lidar intensity, which is resistant to solar conditions, obtained by projecting the 3D lidar points through the HSI camera model into HSI pixel space. Using ratios derived from the overlapping lidar laser and HSI wavelengths, restored shadow pixel spectra are approximated using a simple scale factor. Simulations of multiple lidar wavelengths, i.e., multi-spectral lidar, indicate the potential for robust HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance is quantified through HSI pixel classification consistency between full sun and partial sun exposures of a single geologic outcrop.
Luminance compensation for AMOLED displays using integrated MIS sensors
NASA Astrophysics Data System (ADS)
Vygranenko, Yuri; Fernandes, Miguel; Louro, Paula; Vieira, Manuela
2017-05-01
Active-matrix organic light-emitting diode (AMOLED) displays are ideal for future TV applications due to their ability to faithfully reproduce real images. However, pixel luminance can be affected by instability of the driver TFTs and aging effects in the OLEDs. This paper reports on a pixel driver utilizing a metal-insulator-semiconductor (MIS) sensor for luminance control of the OLED element. In the proposed pixel architecture for bottom-emission AMOLEDs, the embedded MIS sensor shares the same layer stack with back-channel-etched a-Si:H TFTs to maintain fabrication simplicity. The pixel design for a large-area HD display is presented. The external electronics performs image processing to modify incoming video using correction parameters for each pixel in the backplane, and also processes sensor data to update the correction parameters. The luminance-adjusting algorithm is based on realistic models of the pixel circuit elements to predict the relation between the programming voltage and OLED luminance. SPICE modeling of the sensing part of the backplane is performed to demonstrate its feasibility. Details of the pixel circuit functionality, including the sensing and programming operations, are also discussed.
Charge Sharing and Charge Loss in a Cadmium-Zinc-Telluride Fine-Pixel Detector Array
NASA Technical Reports Server (NTRS)
Gaskin, J. A.; Sharma, D. P.; Ramsey, B. D.; Six, N. Frank (Technical Monitor)
2002-01-01
Because of its high atomic number, room-temperature operation, low noise, and high spatial resolution, a Cadmium-Zinc-Telluride (CZT) multi-pixel detector is ideal for hard X-ray astrophysical observation. As part of on-going research at MSFC (Marshall Space Flight Center) to develop multi-pixel CdZnTe detectors for this purpose, we have measured charge sharing and charge loss for a 4×4 (750-μm pitch), 1-mm-thick pixel array and modeled these results using a Monte-Carlo simulation. This model was then used to predict the amount of charge sharing for a much finer pixel array (with a 300-μm pitch). Future work will enable us to compare the simulated results for the finer array to measured values.
Pixel-based meshfree modelling of skeletal muscles.
Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu
2016-01-01
This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for construction of simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transition. A multiphase multichannel level set based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods for modeling the human lower leg is demonstrated.
Automation of Endmember Pixel Selection in SEBAL/METRIC Model
NASA Astrophysics Data System (ADS)
Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.
2015-12-01
The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
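A hedged sketch of the percentile-based endmember screening that such automation typically builds on (the cited study itself uses machine learning and search algorithms); the NDVI/LST thresholds and the synthetic scene are illustrative only:

```python
import numpy as np

def candidate_endmember_pixels(ndvi, lst, cloud_mask):
    """Screen SEBAL/METRIC calibration candidates: cold pixels are well-vegetated
    with the lowest surface temperature, hot pixels are sparsely vegetated with
    the highest temperature. Percentile thresholds are illustrative."""
    valid = ~cloud_mask
    cold = valid & (ndvi > np.nanpercentile(ndvi[valid], 95)) \
                 & (lst < np.nanpercentile(lst[valid], 5))
    hot = valid & (ndvi < np.nanpercentile(ndvi[valid], 5)) \
                & (lst > np.nanpercentile(lst[valid], 95))
    return cold, hot

rng = np.random.default_rng(5)
ndvi = rng.uniform(0.05, 0.9, (100, 100))
lst = 330.0 - 30.0 * ndvi + rng.normal(0, 1.0, ndvi.shape)    # synthetic LST (K)
cold, hot = candidate_endmember_pixels(ndvi, lst, cloud_mask=np.zeros_like(ndvi, bool))
print(cold.sum(), "cold candidates,", hot.sum(), "hot candidates")
```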
NASA Astrophysics Data System (ADS)
Lazri, Mourad; Ameur, Soltane
2018-05-01
A model combining three classifiers, namely a support vector machine, an artificial neural network and a random forest (SAR), is designed to improve the classification of convective and stratiform rain. This model (the SAR model) has been trained and then tested on a dataset derived from MSG-SEVIRI (Meteosat Second Generation-Spinning Enhanced Visible and Infrared Imager). Well-classified, mid-classified and misclassified pixels are determined from the combination of the three classifiers. Mid-classified and misclassified pixels, which are considered unreliable, are reclassified using a novel training of the developed scheme in which only the input data corresponding to the pixels in question are used. This whole process is repeated a second time, applied to mid-classified and misclassified pixels separately. Learning and validation of the developed scheme are carried out against co-located observations from ground radar. The developed scheme outperformed each classifier used separately and reached an overall classification accuracy of 97.40%.
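A rough illustration of the agreement-based splitting and retraining idea using scikit-learn; the synthetic features and labels, the vote rule, and the single retraining pass shown are assumptions made for this sketch, not the published scheme's exact procedure:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

# Hypothetical SEVIRI-derived features per pixel and radar-based rain-type labels
rng = np.random.default_rng(6)
X, y = rng.random((2000, 10)), rng.integers(0, 2, 2000)   # 0 = stratiform, 1 = convective

clfs = [SVC(), MLPClassifier(max_iter=500, random_state=0), RandomForestClassifier(random_state=0)]
preds = np.array([c.fit(X, y).predict(X) for c in clfs])
votes = (preds == y).sum(axis=0)          # how many of the three classifiers agree with radar

well = votes == 3                         # well-classified pixels
unreliable = ~well                        # mid-classified (2 of 3) or misclassified pixels

# Retraining step: refit the scheme using only the unreliable pixels, then reclassify
# them (one pass shown; the abstract repeats the process a second time).
if unreliable.any():
    refit = [type(c)(**c.get_params()).fit(X[unreliable], y[unreliable]) for c in clfs]
    new_preds = np.array([c.predict(X[unreliable]) for c in refit])
    print("unreliable pixels reclassified:", unreliable.sum())
```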
Model of Image Artifacts from Dust Particles
NASA Technical Reports Server (NTRS)
Willson, Reg
2008-01-01
A mathematical model of image artifacts produced by dust particles on lenses has been derived. Machine-vision systems often have to work with camera lenses that become dusty during use. Dust particles on the front surface of a lens produce image artifacts that can potentially affect the performance of a machine-vision algorithm. The present model satisfies a need for a means of synthesizing dust image artifacts for testing machine-vision algorithms for robustness (or the lack thereof) in the presence of dust on lenses. A dust particle can absorb light or scatter light out of some pixels, thereby giving rise to a dark dust artifact. It can also scatter light into other pixels, thereby giving rise to a bright dust artifact. For the sake of simplicity, this model deals only with dark dust artifacts. The model effectively represents dark dust artifacts as an attenuation image consisting of an array of diffuse darkened spots centered at image locations corresponding to the locations of dust particles. The dust artifacts are computationally incorporated into a given test image by simply multiplying the brightness value of each pixel by a transmission factor that incorporates the factor of attenuation, by dust particles, of the light incident on that pixel. With respect to computation of the attenuation and transmission factors, the model is based on a first-order geometric (ray)-optics treatment of the shadows cast by dust particles on the image detector. In this model, the light collected by a pixel is deemed to be confined to a pair of cones defined by the location of the pixel's image in object space, the entrance pupil of the lens, and the location of the pixel in the image plane (see Figure 1). For simplicity, it is assumed that the size of a dust particle is somewhat less than the diameter, at the front surface of the lens, of any collection cone containing all or part of that dust particle. Under this assumption, the shape of any individual dust particle artifact is the shape (typically, circular) of the aperture, and the contribution of the particle to the attenuation factor for a given pixel is the fraction of the cross-sectional area of the collection cone occupied by the particle. Assuming that dust particles do not overlap, the net transmission factor for a given pixel is calculated as one minus the sum of attenuation factors contributed by all dust particles affecting that pixel. In a test, the model was used to synthesize attenuation images for random distributions of dust particles on the front surface of a lens at various relative aperture (F-number) settings. As shown in Figure 2, the attenuation images resembled dust artifacts in real test images recorded while the lens was aimed at a white target.
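A small synthetic example of the attenuation-image idea: diffuse dark spots are accumulated and the test image is multiplied by one minus their sum. The Gaussian spot profile is a convenience for this sketch; the model described above derives each spot's strength from the fraction of the collection cone blocked by the particle:

```python
import numpy as np

def dust_attenuation_image(shape, n_particles=20, radius=6.0, max_atten=0.5, seed=0):
    """Synthesize a dark-dust attenuation image: diffuse darkened spots at random
    positions; the per-pixel transmission factor is 1 minus the summed attenuation."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    attenuation = np.zeros(shape)
    for _ in range(n_particles):
        cy, cx = rng.integers(0, shape[0]), rng.integers(0, shape[1])
        r2 = (yy - cy) ** 2 + (xx - cx) ** 2
        attenuation += max_atten * np.exp(-r2 / (2 * radius ** 2))
    return np.clip(1.0 - attenuation, 0.0, 1.0)

# Apply the artifacts to a test image by per-pixel multiplication
test_image = np.full((240, 320), 200.0)
dusty = test_image * dust_attenuation_image(test_image.shape)
```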
ViBe: a universal background subtraction algorithm for video sequences.
Barnich, Olivier; Van Droogenbroeck, Marc
2011-06-01
This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full details (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.
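A simplified, vectorized sketch of the ViBe update described above (per-pixel sample sets, conservative random in-place updates, and spatial propagation into a neighbor's model), using the commonly cited default parameters; edge handling and color support are omitted:

```python
import numpy as np

N_SAMPLES, RADIUS, MIN_MATCHES, SUBSAMPLE = 20, 20, 2, 16   # typical ViBe defaults

def init_model(first_frame, rng):
    """Fill each pixel's sample set from random neighbors of the first frame."""
    h, w = first_frame.shape
    model = np.empty((N_SAMPLES, h, w), dtype=first_frame.dtype)
    for i in range(N_SAMPLES):
        dy, dx = rng.integers(-1, 2, 2)
        model[i] = np.roll(first_frame, (dy, dx), axis=(0, 1))
    return model

def vibe_step(model, frame, rng):
    """Classify a frame and update the background model in place."""
    matches = (np.abs(model.astype(int) - frame.astype(int)) < RADIUS).sum(axis=0)
    background = matches >= MIN_MATCHES
    # conservative update: some background pixels replace one random sample of their own model
    update = background & (rng.random(frame.shape) < 1.0 / SUBSAMPLE)
    idx = rng.integers(0, N_SAMPLES, frame.shape)
    ys, xs = np.nonzero(update)
    model[idx[ys, xs], ys, xs] = frame[ys, xs]
    # spatial propagation: push the value into a random neighbor's sample set
    prop = background & (rng.random(frame.shape) < 1.0 / SUBSAMPLE)
    ys, xs = np.nonzero(prop)
    ny = np.clip(ys + rng.integers(-1, 2, ys.size), 0, frame.shape[0] - 1)
    nx = np.clip(xs + rng.integers(-1, 2, xs.size), 0, frame.shape[1] - 1)
    model[rng.integers(0, N_SAMPLES, ys.size), ny, nx] = frame[ys, xs]
    return ~background                                      # foreground mask

rng = np.random.default_rng(7)
frames = rng.integers(0, 256, (5, 120, 160), dtype=np.uint8)  # synthetic grayscale clip
model = init_model(frames[0], rng)
masks = [vibe_step(model, f, rng) for f in frames[1:]]
```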
Comparison of Sub-Pixel Classification Approaches for Crop-Specific Mapping
This paper examined two non-linear models, Multilayer Perceptron (MLP) regression and Regression Tree (RT), for estimating sub-pixel crop proportions using time-series MODIS-NDVI data. The sub-pixel proportions were estimated for three major crop types including corn, soybean, a...
Super-pixel extraction based on multi-channel pulse coupled neural network
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Super-pixel extraction techniques group pixels into over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, super-pixel-based image description requires less computation and is easier to interpret, and it has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model that stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel's feature information and its spatial context. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm: the image is first divided into blocks of the same size; then, within each image block, the adjacent pixels of each seed with similar color are grouped together as a super-pixel; finally, post-processing is applied to those pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting parameters, and has good potential for super-pixel extraction.
Utilizing soil polypedons to improve model performance for digital soil mapping
USDA-ARS?s Scientific Manuscript database
Most digital soil mapping approaches that use point data to develop relationships with covariate data intersect sample locations with one raster pixel regardless of pixel size. Resulting models are subject to spurious values in covariate data which may limit model performance. An alternative approac...
Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer
NASA Technical Reports Server (NTRS)
Broderick, Daniel
2012-01-01
This software retrieves the surface and atmosphere parameters of multi-angle, multiband spectra. The synthetic spectra are generated by applying the modified Rahman-Pinty-Verstraete Bidirectional Reflectance Distribution Function (BRDF) model and a single-scattering-dominated atmosphere model to surface reflectance data from the Multiangle Imaging SpectroRadiometer (MISR). The aerosol physical model uses a single-scattering approximation with Rayleigh-scattering molecules and Henyey-Greenstein aerosols. The surface and atmosphere parameters of the models are retrieved using the Levenberg-Marquardt algorithm. The software can retrieve the surface and atmosphere parameters at two different scales: the surface parameters are retrieved pixel-by-pixel, while the atmosphere parameters are retrieved for a group of pixels to which the same atmosphere model parameters are applied. This two-scale approach allows one to select the natural scale of the atmosphere properties relative to the surface properties. The software also takes advantage of an intelligent initial condition given by the solution of the neighboring pixels.
A neighbor pixel communication filtering structure for Dynamic Vision Sensors
NASA Astrophysics Data System (ADS)
Xu, Yuan; Liu, Shiqi; Lu, Hehui; Zhang, Zilong
2017-02-01
For Dynamic Vision Sensors (DVS), Background Activity (BA) induced by thermal noise and junction leakage current is the major cause of image-quality deterioration. Inspired by the smoothing-filtering principle of horizontal cells in the vertebrate retina, a DVS pixel with a Neighbor Pixel Communication (NPC) filtering structure is proposed to solve this issue. The NPC structure judges the validity of a pixel's activity through communication with its 4 adjacent pixels; the pixel's outputs are suppressed if its activities are determined not to be real. The proposed pixel's area is 23.76 × 24.71 μm² and only 3 ns of output latency is introduced. To validate the effectiveness of the structure, a 5×5 pixel array has been implemented in the SMIC 0.13 μm CIS process. Three test cases of the array's behavioral model show that the NPC-DVS has the ability to filter the BA.
In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function
NASA Astrophysics Data System (ADS)
Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir
2018-03-01
We present results of inflight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full array image of a point-like source by extracting a pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the raytracing model of 1′.2 half-power diameter is consistent with an image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the raytracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.
Anomaly clustering in hyperspectral images
NASA Astrophysics Data System (ADS)
Doster, Timothy J.; Ross, David S.; Messinger, David W.; Basener, William F.
2009-05-01
The topological anomaly detection algorithm (TAD) differs from other anomaly detection algorithms in that it uses a topological/graph-theoretic model for the image background instead of modeling the image with a Gaussian normal distribution. In the construction of the model, TAD produces a hard threshold separating anomalous pixels from background in the image. We build on this feature of TAD by extending the algorithm so that it gives a measure of the number of anomalous objects, rather than the number of anomalous pixels, in a hyperspectral image. This is done by identifying, and integrating, clusters of anomalous pixels via a graph theoretical method combining spatial and spectral information. The method is applied to a cluttered HyMap image and combines small groups of pixels containing like materials, such as those corresponding to rooftops and cars, into individual clusters. This improves visualization and interpretation of objects.
A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing
NASA Astrophysics Data System (ADS)
Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.
2018-05-01
Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
Daytime Water Detection Based on Sky Reflections
NASA Technical Reports Server (NTRS)
Rankin, Arturo; Matthies, Larry; Bellutta, Paolo
2011-01-01
A water body's surface can be modeled as a horizontal mirror. Water detection cues based on sky reflections and on color variation are complementary. A reflection coefficient model suggests that sky reflections dominate the color of water at ranges > 12 meters. Water detection based on sky reflections (1) geometrically locates the pixel in the sky that is reflecting onto a candidate water pixel on the ground and (2) predicts whether the ground pixel is water based on color similarity and local terrain features. Water detection has been integrated on XUVs.
Penrose high-dynamic-range imaging
NASA Astrophysics Data System (ADS)
Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian
2016-05-01
High-dynamic-range (HDR) imaging is becoming increasingly popular and widespread. The most common multishot HDR approach, based on multiple low-dynamic-range images captured with different exposures, has difficulties in handling camera and object movements. The spatially varying exposures (SVE) technology provides a solution to overcome this limitation by obtaining multiple exposures of the scene in only one shot but suffers from a loss in spatial resolution of the captured image. While aperiodic assignment of exposures has been shown to be advantageous during reconstruction in alleviating resolution loss, almost all the existing imaging sensors use the square pixel layout, which is a periodic tiling of square pixels. We propose the Penrose pixel layout, using pixels in aperiodic rhombus Penrose tiling, for HDR imaging. With the SVE technology, Penrose pixel layout has both exposure and pixel aperiodicities. To investigate its performance, we have to reconstruct HDR images in square pixel layout from Penrose raw images with SVE. Since the two pixel layouts are different, the traditional HDR reconstruction methods are not applicable. We develop a reconstruction method for Penrose pixel layout using a Gaussian mixture model for regularization. Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout.
A Study on Rational Function Model Generation for TerraSAR-X Imagery
Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi
2013-01-01
The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate Rational Polynomial Coefficients (RPCs) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy better than 10−3 pixel. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction. PMID:24021971
Modelling electron distributions within ESA's Gaia satellite CCD pixels to mitigate radiation damage
NASA Astrophysics Data System (ADS)
Seabroke, G. M.; Holland, A. D.; Burt, D.; Robbins, M. S.
2009-08-01
The Gaia satellite is a high-precision astrometric, photometric and spectroscopic ESA cornerstone mission, currently scheduled for launch in 2012. Its primary science drivers are the composition, formation and evolution of the Galaxy. Gaia will achieve its unprecedented positional accuracy requirements with detailed calibration and correction for radiation damage. At L2, protons cause displacement damage in the silicon of CCDs. The resulting traps capture and emit electrons from passing charge packets in the CCD pixel, distorting the image PSF and biasing its centroid. Microscopic models of Gaia's CCDs are being developed to simulate this effect. The key to calculating the probability of an electron being captured by a trap is the 3D electron density within each CCD pixel. However, this has not been physically modelled for the Gaia CCD pixels. In Seabroke, Holland & Cropper (2008), the first paper of this series, we motivated the need for such specialised 3D device modelling and outlined how its future results will fit into Gaia's overall radiation calibration strategy. In this paper, the second of the series, we present our first results using Silvaco's physics-based, engineering software: the ATLAS device simulation framework. Inputting a doping profile, pixel geometry and materials into ATLAS and comparing the results to other simulations reveals that ATLAS has a free parameter, fixed oxide charge, that needs to be calibrated. ATLAS is successfully benchmarked against other simulations and measurements of a test device, identifying how to use it to model Gaia pixels and highlighting the effect of different doping approximations.
Time dependency of temperature of a laser-irradiated infrared target pixel as a low-pass filter
NASA Technical Reports Server (NTRS)
Scholl, Marija S.; Scholl, James W.
1990-01-01
The thermal response of a surface layer of a pixel on an infrared target simulator is discussed. This pixel is maintained at a constant temperature by a rapidly scanning laser beam. An analytical model has been developed to describe the exact temperature dependence of a pixel as a function of time for different pixel refresh rates. The top layer of the pixel surface that generates the gray-body radiation shows the temperature dependence on time that is characteristic of a low-pass filter. The experimental results agree with the analytical predictions. The application of a pulsed laser beam to a noncontact, nondestructive diagnostic technique of surface characterization for the presence of microdefects is discussed.
A pixel read-out architecture implementing a two-stage token ring, zero suppression and compression
NASA Astrophysics Data System (ADS)
Heuvelmans, S.; Boerrigter, M.
2011-01-01
Increasing luminosity in high energy physics experiments leads to new challenges in the design of data acquisition systems for pixel detectors. With the upgrade of the LHCb experiment, the data processing will be changed; hit data from every collision will be transported off the pixel chip, without any trigger selection. A read-out architecture is proposed which is able to obtain low hit data loss on limited silicon area by using the logic beneath the pixels as a data buffer. Zero suppression and redundancy reduction ensure that the data rate off chip is minimized. A C++ model has been created for simulation of functionality and data loss, and for system development. A VHDL implementation has been derived from this model.
Multi-target detection and positioning in crowds using multiple camera surveillance
NASA Astrophysics Data System (ADS)
Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng
2018-04-01
In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinate system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, the correspondences between pixels in the regions of interest are found under multiple constraints and the targets are positioned by pixel clustering. The algorithm can provide appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem where a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem and the three-dimensional position of the targets can also be obtained by pixel clustering. Finally, we verified the algorithm with multiple cameras in experiments, which showed that the algorithm has high accuracy and robustness.
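A minimal sketch of the assignment idea described above: the permutation-matrix formulation can be solved as a linear assignment problem over a combined cost. Here the Hungarian solver from SciPy stands in for the paper's linear-programming solution, and the cost matrices and weights are hypothetical placeholders rather than the paper's constraints.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_pixels(cost_sight, cost_gray, cost_height, w=(1.0, 0.5, 0.5)):
    """Match ROI pixels between two views by minimizing a combined cost.

    Each cost_* is an (n1, n2) matrix for candidate pixel pairs; the
    weights w are illustrative, not values from the paper.
    """
    cost = w[0] * cost_sight + w[1] * cost_gray + w[2] * cost_height
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one matching
    return list(zip(rows, cols)), cost[rows, cols].sum()

# Toy example with 3 candidate pixels in each camera view.
rng = np.random.default_rng(0)
pairs, total = match_pixels(rng.random((3, 3)), rng.random((3, 3)), rng.random((3, 3)))
print(pairs, total)
```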
Modeling and analysis of hybrid pixel detector deficiencies for scientific applications
NASA Astrophysics Data System (ADS)
Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.; Mohseni, Hooman
2015-08-01
Semiconductor hybrid pixel detectors often consist of a pixelated sensor layer bump-bonded to a matching pixelated readout integrated circuit (ROIC). The sensor can range from high resistivity Si to III-V materials, whereas a Si CMOS process is typically used to manufacture the ROIC. Independent device physics and electronic design automation (EDA) tools, with significantly different solvers, are used to determine sensor characteristics and to verify the functional performance of ROICs, respectively. Some physics solvers provide the capability of transferring data to the EDA tool. However, single pixel transient simulations are either not feasible due to convergence difficulties or are prohibitively long. A simplified sensor model, which includes a current pulse in parallel with the detector equivalent capacitor, is often used; even then, SPICE-type top-level (entire array) simulations range from days to weeks. In order to analyze detector deficiencies for a particular scientific application, accurately defined transient behavioral models of all the functional blocks are required. Furthermore, various simulations, such as transient, noise, Monte Carlo, inter-pixel effects, etc. of the entire array need to be performed within a reasonable time frame without trading off accuracy. The sensor and the analog front-end can be modeled using a real-number modeling language as complex mathematical functions, or detailed data can be saved to text files, for further top-level digital simulations. Parasitically aware digital timing is extracted in the Standard Delay Format (SDF) from the pixel digital back-end layout as well as the periphery of the ROIC. For any given input, detector level worst-case and best-case simulations are performed using a Verilog simulation environment to determine the output. Each top-level transient simulation takes no more than 10-15 minutes. The impact of changing key parameters such as sensor Poissonian shot noise, analog front-end bandwidth, jitter due to clock distribution etc. can be accurately analyzed to determine ROIC architectural viability and bottlenecks. Hence the impact of the detector parameters on the scientific application can be studied.
Monte Carlo Modeling of VLWIR HgCdTe Interdigitated Pixel Response
NASA Astrophysics Data System (ADS)
D'Souza, A. I.; Stapelbroek, M. G.; Wijewarnasuriya, P. S.
2010-07-01
Increasing very long-wave infrared (VLWIR, λ_c ≈ 15 μm) pixel operability was approached by subdividing each pixel into four interdigitated subpixels. High response is maintained across the pixel, even if one or two interdigitated subpixels are deselected (turned off), because interdigitation provides that the preponderance of minority carriers photogenerated in the pixel are collected by the selected subpixels. Monte Carlo modeling of the photoresponse of the interdigitated subpixel simulates minority-carrier diffusion from carrier creation to recombination. Each carrier generated at an appropriately weighted random location is assigned an exponentially distributed random lifetime τ_i, where ⟨τ_i⟩ is the bulk minority-carrier lifetime. The minority carrier is allowed to diffuse for a short time dτ, and the fate of the carrier is decided from its present position and the boundary conditions, i.e., whether the carrier is absorbed in a junction, recombined at a surface, reflected from a surface, or recombined in the bulk because it lived for its designated lifetime. If nothing happens, the process is then repeated until one of the boundary conditions is attained. The next step is to go on to the next carrier and repeat the procedure for all the launches of minority carriers. For each minority carrier launched, the original location and boundary condition at fatality are recorded. An example of the results from Monte Carlo modeling is that, for a 20-μm diffusion length, the calculated quantum efficiency (QE) changed from 85% with no subpixels deselected, to 78% with one subpixel deselected, 67% with two subpixels deselected, and 48% with three subpixels deselected. Demonstration of the interdigitated pixel concept and verification of the Monte Carlo modeling utilized λ_c(60 K) ≈ 15 μm HgCdTe pixels in a 96 × 96 array format. The measured collection efficiency for one, two, and three subelements selected, divided by the collection efficiency for all four subelements selected, matched that calculated using Monte Carlo modeling.
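A simplified 2D Python sketch of the random-walk procedure described above: each carrier gets an exponentially distributed lifetime, diffuses in short steps of dτ, and is checked against boundary conditions after every step. The pixel size, junction geometry and diffusion length below are illustrative placeholders, not the device values from the paper, and only collection-versus-bulk-recombination outcomes are tracked.

```python
import numpy as np

rng = np.random.default_rng(1)

def track_carrier(x0, y0, tau_bulk=1.0, dtau=1e-3, diff_len=20.0,
                  pixel=40.0, junction_half=5.0):
    """Follow one minority carrier until it is collected or recombines.

    2D toy version of the abstract's procedure: exponential random
    lifetime, short diffusion steps, then a boundary-condition check.
    Lengths are in micrometers; all numbers are illustrative only.
    """
    D = diff_len ** 2 / tau_bulk           # diffusivity from L^2 = D * tau
    step = np.sqrt(2 * D * dtau)           # rms step per coordinate per dtau
    lifetime = rng.exponential(tau_bulk)
    x, y, t = x0, y0, 0.0
    while t < lifetime:
        x += rng.normal(0, step)
        y += rng.normal(0, step)
        t += dtau
        x = np.clip(x, 0, pixel)            # reflecting pixel walls
        y = np.clip(y, 0, pixel)
        # Collected if the carrier reaches the central junction region.
        if abs(x - pixel / 2) < junction_half and abs(y - pixel / 2) < junction_half:
            return True
    return False                            # bulk recombination

hits = sum(track_carrier(*rng.uniform(0, 40, 2)) for _ in range(2000))
print("collected fraction:", hits / 2000)
```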
ISBDD Model for Classification of Hyperspectral Remote Sensing Imagery
Li, Na; Xu, Zhaopeng; Zhao, Huijie; Huang, Xinchen; Drummond, Jane; Wang, Daming
2018-01-01
The diverse density (DD) algorithm was proposed to handle the problem of low classification accuracy when training samples contain interference such as mixed pixels. The DD algorithm can learn a feature vector from training bags, which comprise instances (pixels). However, the feature vector learned by the DD algorithm cannot always effectively represent one type of ground cover. To handle this problem, an instance space-based diverse density (ISBDD) model that employs a novel training strategy is proposed in this paper. In the ISBDD model, DD values of each pixel are computed instead of learning a feature vector, and as a result, the pixel can be classified according to its DD values. Airborne hyperspectral data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and the Push-broom Hyperspectral Imager (PHI) are applied to evaluate the performance of the proposed model. Results show that the overall classification accuracy of ISBDD model on the AVIRIS and PHI images is up to 97.65% and 89.02%, respectively, while the kappa coefficient is up to 0.97 and 0.88, respectively. PMID:29510547
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose the dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from pixel's temporal predecessor and the maximum likelihood estimate from current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on the natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
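The dynamic fusion of a prior estimate (from the pixel's temporal predecessor) with a maximum-likelihood estimate (from the new observation) reduces, in the scalar case, to a precision-weighted average, i.e. a Kalman-style update. A minimal sketch follows; the variances are placeholders, not quantities from the paper.

```python
def fuse(prior, prior_var, ml, ml_var):
    """Fuse a prior estimate with a maximum-likelihood estimate.

    Precision-weighted average (the scalar Kalman update that the
    dynamic-filtering idea reduces to); variances here are illustrative.
    """
    gain = prior_var / (prior_var + ml_var)
    est = prior + gain * (ml - prior)
    var = (1.0 - gain) * prior_var
    return est, var

# Velocity coefficient of a pixel: prior from its temporal predecessor,
# ML estimate from the current motion-compensated observation.
print(fuse(prior=2.0, prior_var=0.5, ml=2.6, ml_var=0.25))
```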
A Data Field Method for Urban Remotely Sensed Imagery Classification Considering Spatial Correlation
NASA Astrophysics Data System (ADS)
Zhang, Y.; Qin, K.; Zeng, C.; Zhang, E. B.; Yue, M. X.; Tong, X.
2016-06-01
Spatial correlation between pixels is important information for remotely sensed imagery classification. Data field method and spatial autocorrelation statistics have been utilized to describe and model spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey level change between the central pixel and the neighbourhood pixels results in exaggerating the contribution of the central pixel to the whole local window. Besides, Geary's C has also been proven to well characterise and qualify the spatial correlation between each pixel and its neighbourhood pixels. But the extracted object is badly delineated with the distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel to compute statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multi-features (e.g. the spectral feature and spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes inside. The results show that the developed method outperforms the traditional method in terms of classification accuracies.
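A minimal sketch of the final classification step described above: a per-pixel spatial feature is concatenated with the spectral feature and fed to an SVM. Here a simple neighborhood-mean statistic (via scipy's uniform_filter) stands in for the enhanced data-field/Geary's C measure, and the synthetic cube, window size and SVM settings are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

def classify_with_spatial_feature(cube, labels, train_mask):
    """Concatenate spectral and spatial features per pixel, then SVM-classify.

    cube : (H, W, B) image; labels : (H, W) integer classes;
    train_mask : boolean (H, W) of training pixels. The neighborhood-mean
    feature is only a stand-in for the data-field/Geary's C statistic.
    """
    H, W, B = cube.shape
    spatial = np.dstack([uniform_filter(cube[..., b], size=5) for b in range(B)])
    feats = np.concatenate([cube, spatial], axis=-1).reshape(-1, 2 * B)
    y = labels.reshape(-1)
    m = train_mask.reshape(-1)
    clf = SVC(kernel="rbf", C=10.0).fit(feats[m], y[m])
    return clf.predict(feats).reshape(H, W)

rng = np.random.default_rng(4)
cube = rng.random((30, 30, 4))
cube[:, 15:, :] += 0.5                                   # two spectral regions
labels = np.zeros((30, 30), int)
labels[:, :15], labels[:, 15:] = 1, 2
train = rng.random((30, 30)) < 0.1
print((classify_with_spatial_feature(cube, labels, train) == labels).mean())
```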
A Hopfield neural network for image change detection.
Pajares, Gonzalo
2006-09-01
This paper outlines an optimization relaxation approach based on the analog Hopfield neural network (HNN) for solving the image change detection problem between two images. A difference image is obtained by subtracting pixel by pixel both images. The network topology is built so that each pixel in the difference image is a node in the network. Each node is characterized by its state, which determines if a pixel has changed. An energy function is derived, so that the network converges to stable states. The analog Hopfield's model allows each node to take on analog state values. Unlike most widely used approaches, where binary labels (changed/unchanged) are assigned to each pixel, the analog property provides the strength of the change. The main contribution of this paper is reflected in the customization of the analog Hopfield neural network to derive an automatic image change detection approach. When a pixel is being processed, some existing image change detection procedures consider only interpixel relations on its neighborhood. The main drawback of such approaches is the labeling of this pixel as changed or unchanged according to the information supplied by its neighbors, where its own information is ignored. The Hopfield model overcomes this drawback and for each pixel allows a tradeoff between the influence of its neighborhood and its own criterion. This is mapped under the energy function to be minimized. The performance of the proposed method is illustrated by comparative analysis against some existing image change detection methods.
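A toy Python sketch of the idea above: each node of the network corresponds to a pixel of the difference image, its analog state is updated by balancing the pixel's own evidence against the mean state of its neighborhood, and the converged states give the strength of change rather than a hard label. The energy weights, threshold and sigmoid gain are illustrative choices, not the paper's energy function.

```python
import numpy as np

def hopfield_change_map(diff, n_iter=50, alpha=0.6, beta=0.4, tau=0.25):
    """Analog Hopfield-style relaxation for change detection (toy sketch).

    diff : normalized absolute difference image in [0, 1].
    alpha, beta : trade-off between a pixel's own evidence and the mean
        state of its 4-neighborhood (illustrative values).
    Returns analog states in [0, 1]; values near 1 indicate change.
    """
    state = diff.copy()
    for _ in range(n_iter):
        # Mean of the 4-neighborhood, with edge replication.
        pad = np.pad(state, 1, mode="edge")
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        drive = alpha * (diff - tau) + beta * (neigh - 0.5)
        state = 1.0 / (1.0 + np.exp(-10.0 * drive))   # sigmoid activation
    return state

img1 = np.random.rand(32, 32)
img2 = img1.copy()
img2[10:20, 10:20] += 0.8                              # injected change
print(hopfield_change_map(np.abs(img2 - img1))[12, 12].round(2))
```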
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because the spectral and spatial characteristics of HS images differ from those of traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band together with the immediately previous band when we apply the HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
Hyperspectral Image Classification via Kernel Sparse Representation
2013-01-01
Spatial coherency across neighboring pixels is incorporated through a kernelized joint sparsity model, where all of the pixels within a small neighborhood are jointly represented in the feature space by selecting a few common training samples. Index terms: hyperspectral imagery, joint sparsity model, kernel methods, sparse representation.
Modeling misregistration and related effects on multispectral classification
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1981-01-01
The effects of misregistration on the multispectral classification accuracy when the scene registration accuracy is relaxed from 0.3 to 0.5 pixel are investigated. Noise, class separability, spatial transient response, and field size are considered simultaneously with misregistration in their effects on accuracy. Any noise due to the scene, sensor, or to the analog/digital conversion, causes a finite fraction of the measurements to fall outside of the classification limits, even within nominally uniform fields. Misregistration causes field borders in a given band or set of bands to be closer than expected to a given pixel, causing additional pixels to be misclassified due to the mixture of materials in the pixel. Simplified first order models of the various effects are presented, and are used to estimate the performance to be expected.
Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.
Stierstorfer, Karl
2018-01-01
To find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low flux limit. Formulas for the spatial cross-talk, the noise power spectrum, and the DQE of a photon-counting detector working at a given threshold are derived. The parameters are probabilities for event types such as a single count in the central pixel, a double count in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model by extensive use of Monte Carlo techniques: The Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material. A simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation randomizing the location of impact which finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. The frequency-dependent DQE of a photon-counting detector in the low flux limit can be described with an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
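A compact 1D sketch of how such event-type probabilities can be estimated by Monte Carlo: impact positions are randomized within a pixel, a Gaussian charge cloud of fixed width is shared across pixel borders (integrated with the error function), and the threshold decides which pixels register a count. The pitch, cloud width and threshold are illustrative values, not fitted parameters from the paper, and the geometry is reduced to one dimension.

```python
import numpy as np
from math import erf, sqrt

def event_probabilities(pitch=0.3, sigma=0.05, threshold=0.3, n=100_000, seed=0):
    """Estimate single- and double-count probabilities from charge sharing.

    Gaussian charge cloud of fixed width (sigma); pitch, sigma and the
    counting threshold are in arbitrary units and purely illustrative.
    """
    rng = np.random.default_rng(seed)

    def frac(lo, hi, x):
        # Fraction of a unit 1D Gaussian centred at x falling in [lo, hi].
        return 0.5 * (erf((hi - x) / (sqrt(2) * sigma)) - erf((lo - x) / (sqrt(2) * sigma)))

    single = double = 0
    for _ in range(n):
        x = rng.uniform(0, pitch)                 # impact position in the struck pixel
        q_center = frac(0, pitch, x)              # charge kept by the struck pixel
        q_left = frac(-pitch, 0, x)               # charge spilling to the left neighbor
        q_right = frac(pitch, 2 * pitch, x)       # charge spilling to the right neighbor
        hits = (q_center > threshold) + (q_left > threshold) + (q_right > threshold)
        single += hits == 1
        double += hits == 2
    return single / n, double / n

print(event_probabilities())
```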
Land cover mapping at sub-pixel scales
NASA Astrophysics Data System (ADS)
Makido, Yasuyo Kato
One of the biggest drawbacks of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial details depicted in an image. Fine spatial resolution images from satellite sensors such as IKONOS and QuickBird are now available. However, these images are not suitable for large-area studies, since a single image covers only a small area and large-area coverage is therefore costly. Much research has focused on attempting to extract land cover types at sub-pixel scale, and little research has been conducted concerning the spatial allocation of land cover types within a pixel. This study is devoted to the development of new algorithms for predicting land cover distribution using remote sensing imagery at sub-pixel level. The "pixel-swapping" optimization algorithm, which was proposed by Atkinson for predicting sub-pixel land cover distribution, is investigated in this study. Two limitations of this method, the arbitrary spatial range value and the arbitrary exponential model of spatial autocorrelation, are assessed. Various weighting functions, as alternatives to the exponential model, are evaluated in order to derive the optimum weighting function. Two different simulation models were employed to develop spatially autocorrelated binary class maps. In all tested models, Gaussian, Exponential, and IDW, the pixel swapping method improved classification accuracy compared with the initial random allocation of sub-pixels. However, the results suggested that equal weight could be used to increase accuracy and sub-pixel spatial autocorrelation instead of using these more complex models of spatial structure. New algorithms for modeling the spatial distribution of multiple land cover classes at sub-pixel scales are developed and evaluated. Three methods are examined: sequential categorical swapping, simultaneous categorical swapping, and simulated annealing. These three methods are applied to classified Landsat ETM+ data that has been resampled to 210 meters. The result suggested that the simultaneous method can be considered as the optimum method in terms of accuracy performance and computation time. The case study employs remote sensing imagery at the following sites: tropical forests in Brazil and a temperate multiple land-use mosaic in East China. Sub-areas for both sites are used to examine how the characteristics of the landscape affect the performance of the optimum technique. Three types of measurement, Moran's I, mean patch size (MPS), and patch size standard deviation (STDEV), are used to characterize the landscape. All results suggested that this technique could increase the classification accuracy more than traditional hard classification. The methods developed in this study can benefit researchers who employ coarse remote sensing imagery but are interested in detailed landscape information. In many cases, the satellite sensor that provides large spatial coverage has insufficient spatial detail to identify landscape patterns. Application of the super-resolution technique described in this dissertation could potentially solve this problem by providing detailed land cover predictions from the coarse resolution satellite sensor imagery.
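A minimal sketch of the single-class pixel-swapping optimization discussed above: the class proportion inside the coarse pixel is held fixed while 1-valued sub-pixels are moved toward locations with more 1-valued neighbors. Equal neighborhood weights are used here (one of the weighting choices the dissertation evaluates); the grid size, window radius and iteration count are illustrative.

```python
import numpy as np

def pixel_swap(grid, n_iter=200, radius=2):
    """One-class pixel swapping on a binary sub-pixel grid (toy sketch).

    The number of 1s (class proportion) stays fixed; a swap moves the
    least-attractive 1 to the most-attractive 0 whenever that increases
    spatial autocorrelation. Equal weights inside the window are assumed.
    """
    g = grid.copy()
    n = g.shape[0]

    def attractiveness(i, j):
        i0, i1 = max(0, i - radius), min(n, i + radius + 1)
        j0, j1 = max(0, j - radius), min(n, j + radius + 1)
        return g[i0:i1, j0:j1].sum() - g[i, j]     # neighbors only

    for _ in range(n_iter):
        ones = np.argwhere(g == 1)
        zeros = np.argwhere(g == 0)
        a_ones = np.array([attractiveness(i, j) for i, j in ones])
        a_zeros = np.array([attractiveness(i, j) for i, j in zeros])
        if a_zeros.max() <= a_ones.min():          # no swap improves clustering
            break
        g[tuple(ones[a_ones.argmin()])] = 0
        g[tuple(zeros[a_zeros.argmax()])] = 1
    return g

initial = (np.random.default_rng(1).random((20, 20)) < 0.3).astype(int)
print(pixel_swap(initial).sum(), initial.sum())    # class proportion preserved
```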
NASA Astrophysics Data System (ADS)
He, Dianning; Zamora, Marta; Oto, Aytekin; Karczmar, Gregory S.; Fan, Xiaobing
2017-09-01
Differences between region-of-interest (ROI) and pixel-by-pixel analysis of dynamic contrast enhanced (DCE) MRI data were investigated in this study with computer simulations and pre-clinical experiments. ROIs were simulated with 10, 50, 100, 200, 400, and 800 different pixels. For each pixel, a contrast agent concentration as a function of time, C(t), was calculated using the Tofts DCE-MRI model with randomly generated physiological parameters (K^trans and v_e) and the Parker population arterial input function. The average C(t) for each ROI was calculated and then K^trans and v_e for the ROI were extracted. The simulations were run 100 times for each ROI with new K^trans and v_e generated. In addition, white Gaussian noise with 3, 6, and 12 dB signal-to-noise ratios was added to each C(t). For pre-clinical experiments, Copenhagen rats (n = 6) with implanted prostate tumors in the hind limb were used in this study. The DCE-MRI data were acquired with a temporal resolution of ~5 s in a 4.7 T animal scanner, before, during, and after a bolus injection (<5 s) of Gd-DTPA for a total imaging duration of ~10 min. K^trans and v_e were calculated in two ways: (i) by fitting C(t) for each pixel, and then averaging the pixel values over the entire ROI, and (ii) by averaging C(t) over the entire ROI, and then fitting the averaged C(t) to extract K^trans and v_e. The simulation results showed that in heterogeneous ROIs, the pixel-by-pixel averaged K^trans was ~25% to ~50% larger (p < 0.01) than the ROI-averaged K^trans. At higher noise levels, the pixel-averaged K^trans was greater than the ‘true’ K^trans, but the ROI-averaged K^trans was lower than the ‘true’ K^trans. The ROI-averaged K^trans was closer to the true K^trans than the pixel-averaged K^trans for high noise levels. In pre-clinical experiments, the pixel-by-pixel averaged K^trans was ~15% larger than the ROI-averaged K^trans. Overall, with the Tofts model, the extracted physiological parameters from the pixel-by-pixel averages were larger than the ROI averages. These differences were dependent on the heterogeneity of the ROI.
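A minimal simulation sketch of the comparison above, using the standard Tofts model C(t) = K^trans ∫ Cp(τ) exp(-(K^trans/v_e)(t-τ)) dτ. A simple biexponential stand-in is used instead of the Parker population AIF, the ROI size and parameter ranges are illustrative, and noise is omitted; the point is only to contrast fit-then-average against average-then-fit.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 10, 120)                          # minutes
aif = 5.0 * (np.exp(-0.8 * t) - np.exp(-4.0 * t))    # stand-in AIF (not the Parker AIF)

def tofts(t, ktrans, ve):
    """Standard Tofts model: C(t) = Ktrans * conv(AIF, exp(-(Ktrans/ve) t))."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(aif, kernel)[: len(t)] * dt

rng = np.random.default_rng(0)
ktrans_true = rng.uniform(0.1, 0.8, 50)              # heterogeneous ROI of 50 pixels
ve_true = rng.uniform(0.1, 0.5, 50)
curves = np.array([tofts(t, k, v) for k, v in zip(ktrans_true, ve_true)])

# (i) fit each pixel, then average the parameters over the ROI
fits = [curve_fit(tofts, t, c, p0=(0.3, 0.3), bounds=(0, [2.0, 1.0]))[0] for c in curves]
pixel_avg = np.mean(fits, axis=0)
# (ii) average the curves over the ROI, then fit once
roi_avg = curve_fit(tofts, t, curves.mean(axis=0), p0=(0.3, 0.3), bounds=(0, [2.0, 1.0]))[0]
print("pixel-averaged Ktrans:", pixel_avg[0], "ROI-averaged Ktrans:", roi_avg[0])
```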
Subpixel target detection and enhancement in hyperspectral images
NASA Astrophysics Data System (ADS)
Tiwari, K. C.; Arora, M.; Singh, D.
2011-06-01
Hyperspectral data, due to the higher information content afforded by its higher spectral resolution, is increasingly being used for various remote sensing applications, including information extraction at the subpixel level. There is, however, usually a lack of matching fine spatial resolution data, particularly for target detection applications. Thus, there always exists a tradeoff between the spectral and spatial resolutions due to considerations of type of application, its cost and other associated analytical and computational complexities. Typically whenever an object, either manmade, natural or any ground cover class (called target, endmembers, components or class) gets spectrally resolved but not spatially, mixed pixels in the image result. Thus, numerous manmade and/or natural disparate substances may occur inside such mixed pixels giving rise to mixed pixel classification or subpixel target detection problems. Various spectral unmixing models such as Linear Mixture Modeling (LMM) are in vogue to recover components of a mixed pixel. Spectral unmixing outputs both the endmember spectra and their corresponding abundance fractions inside the pixel. It, however, does not provide spatial distribution of these abundance fractions within a pixel. This limits the applicability of hyperspectral data for subpixel target detection. In this paper, a new inverse Euclidean distance based super-resolution mapping method has been presented that achieves subpixel target detection in hyperspectral images by adjusting spatial distribution of abundance fraction within a pixel. Results obtained at different resolutions indicate that super-resolution mapping may effectively aid subpixel target detection.
How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2011-01-01
In the early 1980s the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the equivalent number of pixels and bits per pixel, and how many pixels would be required to create an equivalent quality image from a digital camera. It was found that 35-mm frames, for ISO 100 color negative film, contained equivalent pixels of 12 microns for a total of 18 million pixels per frame (6 million pixels per layer) with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased the value to 8 bits per pixel. Higher ISO speed films had larger equivalent pixels, fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high quality 3.5" x 5.25" print could be obtained from a three layer system containing 1300 x 1950 pixels per layer or about 7.6 million pixels in all. In short, it became clear that once a digital camera contained about 6 million pixels (in a single layer using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems for the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging amongst digital camera makers. The question arises about just how many pixels are required and whether all pixels are equal. This paper will provide a practical look at how many pixels are needed for a good print based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million, 5.7-micron pixels or 6 million 7.8-micron pixels? How do intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size? A systematic review of these issues will be provided within the context of image quality and ISO speed models developed over the last 15 years.
NASA Astrophysics Data System (ADS)
Schuster, J.
2018-02-01
Military requirements demand both single and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC) with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing sophistication of the pixel arrays in these FPAs, sophisticated modeling techniques are required to predict, understand, and benchmark the pixel array MTF that contributes to the total imaging system MTF. To model the pixel array MTF, computationally exhaustive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk) as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF will be given and the simulation approach will be discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) will be discussed.
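A small sketch of the decomposition idea mentioned above: the detector MTF written as a product of constituent contributions, here a pixel-aperture (sinc) term and a lateral-diffusion term approximated by a Gaussian. The Gaussian crosstalk term is a common analytic simplification, not the 2D/3D numerical model described in the abstract, and the pitch and diffusion width are illustrative.

```python
import numpy as np

def detector_mtf(f, pitch_um=10.0, diffusion_sigma_um=3.0):
    """Detector MTF as the product of its constituent contributions.

    f : spatial frequency in cycles/mm.
    Pixel-aperture term is |sinc(pitch * f)|; lateral diffusion (pixel
    crosstalk) is approximated by a Gaussian, a common simplification.
    """
    f_cyc_per_um = np.asarray(f) / 1000.0
    mtf_aperture = np.abs(np.sinc(pitch_um * f_cyc_per_um))   # np.sinc(x) = sin(pi x)/(pi x)
    mtf_diffusion = np.exp(-2 * (np.pi * diffusion_sigma_um * f_cyc_per_um) ** 2)
    return mtf_aperture * mtf_diffusion, mtf_aperture, mtf_diffusion

freqs = np.linspace(0, 50, 6)          # cycles/mm, up to the Nyquist of a 10 um pitch
total, aperture, diffusion = detector_mtf(freqs)
print(np.round(total, 3))
```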
Effect of Using 2 mm Voxels on Observer Performance for PET Lesion Detection
NASA Astrophysics Data System (ADS)
Morey, A. M.; Noo, Frédéric; Kadrmas, Dan J.
2016-06-01
Positron emission tomography (PET) images are typically reconstructed with an in-plane pixel size of approximately 4 mm for cancer imaging. The objective of this work was to evaluate the effect of using smaller pixels on general oncologic lesion-detection. A series of observer studies was performed using experimental phantom data from the Utah PET Lesion Detection Database, which modeled whole-body FDG PET cancer imaging of a 92 kg patient. The data comprised 24 scans over 4 days on a Biograph mCT time-of-flight (TOF) PET/CT scanner, with up to 23 lesions (diam. 6-16 mm) distributed throughout the phantom each day. Images were reconstructed with 2.036 mm and 4.073 mm pixels using ordered-subsets expectation-maximization (OSEM) both with and without point spread function (PSF) modeling and TOF. Detection performance was assessed using the channelized non-prewhitened numerical observer with localization receiver operating characteristic (LROC) analysis. Tumor localization performance and the area under the LROC curve were then analyzed as functions of the pixel size. In all cases, the images with 2 mm pixels provided higher detection performance than those with 4 mm pixels. The degree of improvement from the smaller pixels was larger than that offered by PSF modeling for these data, and provided roughly half the benefit of using TOF. Key results were confirmed by two human observers, who read subsets of the test data. This study suggests that a significant improvement in tumor detection performance for PET can be attained by using smaller voxel sizes than commonly used at many centers. The primary drawback is a 4-fold increase in reconstruction time and data storage requirements.
Optimization of Focusing by Strip and Pixel Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, G J; White, D A; Thompson, C A
Professor Kevin Webb and students at Purdue University have demonstrated the design of conducting strip and pixel arrays for focusing electromagnetic waves [1, 2]. Their key point was to design structures to focus waves in the near field using full wave modeling and optimization methods for design. Their designs included arrays of conducting strips optimized with a downhill search algorithm and arrays of conducting and dielectric pixels optimized with the iterative direct binary search method. They used a finite element code for modeling. This report documents our attempts to duplicate and verify their results. We have modeled 2D conducting strips and both conducting and dielectric pixel arrays with moment method and FDTD codes to compare with Webb's results. New designs for strip arrays were developed with optimization by the downhill simplex method with simulated annealing. Strip arrays were optimized to focus an incident plane wave at a point or at two separated points and to switch between focusing points with a change in frequency. We also tried putting a line current source at the focus point for the plane wave to see how it would work as a directive antenna. We have not tried optimizing the conducting or dielectric pixel arrays, but modeled the structures designed by Webb with the moment method and FDTD to compare with the Purdue results.
Dynamic Black-Level Correction and Artifact Flagging in the Kepler Data Pipeline
NASA Technical Reports Server (NTRS)
Clarke, B. D.; Kolodziejczak, J. J.; Caldwell, D. A.
2013-01-01
Instrument-induced artifacts in the raw Kepler pixel data include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, manifestations of drifting moiré pattern as locally correlated nonstationary noise, and rolling bands in the images, which find their way into the calibrated pixel time series and ultimately into the calibrated target flux time series. Using a combination of raw science pixel data, full frame images, reverse-clocked pixel data and ancillary temperature data, the Kepler pipeline models and removes the FGS crosstalk artifacts by dynamically adjusting the black level correction. By examining the residuals to the model fits, the pipeline detects and flags spatial regions and time intervals of strong time-varying black level (rolling bands) on a per-row, per-cadence basis. These flags are made available to downstream users of the data since the uncorrected rolling band artifacts could complicate processing or lead to misinterpretation of instrument behavior as stellar. This model fitting and artifact flagging is performed within the new stand-alone pipeline module called Dynablack. We discuss the implementation of Dynablack in the Kepler data pipeline and present results regarding the improvement in calibrated pixels and the expected improvement in cotrending performance as a result of including FGS corrections in the calibration. We also discuss the effectiveness of the rolling band flagging for downstream users and illustrate with some affected light curves.
A Causal, Data-driven Approach to Modeling the Kepler Data
NASA Astrophysics Data System (ADS)
Wang, Dun; Hogg, David W.; Foreman-Mackey, Daniel; Schölkopf, Bernhard
2016-09-01
Astronomical observations are affected by several kinds of noise, each with its own causal source; there is photon noise, stochastic source variability, and residuals coming from imperfect calibration of the detector or telescope. The precision of NASA Kepler photometry for exoplanet science—the most precise photometric measurements of stars ever made—appears to be limited by unknown or untracked variations in spacecraft pointing and temperature, and unmodeled stellar variability. Here, we present the causal pixel model (CPM) for Kepler data, a data-driven model intended to capture variability but preserve transit signals. The CPM works at the pixel level so that it can capture very fine-grained information about the variation of the spacecraft. The CPM models the systematic effects in the time series of a pixel using the pixels of many other stars and the assumption that any shared signal in these causally disconnected light curves is caused by instrumental effects. In addition, we use the target star’s future and past (autoregression). By appropriately separating, for each data point, the data into training and test sets, we ensure that information about any transit will be perfectly isolated from the model. The method has four tuning parameters—the number of predictor stars or pixels, the autoregressive window size, and two L2-regularization amplitudes for model components, which we set by cross-validation. We determine values for the tuning parameters that work well for most of the stars and apply the method to a corresponding set of target stars. We find that CPM can consistently produce low-noise light curves. In this paper, we demonstrate that pixel-level de-trending is possible while retaining transit signals, and we think that methods like CPM are generally applicable and might be useful for K2, TESS, etc., where the data are not clean postage stamps like Kepler.
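A minimal sketch of the pixel-level regression idea: the target pixel's time series is modeled as an L2-regularized linear combination of pixels from other (causally disconnected) stars, with the epochs around a suspected transit held out of the fit so transit information cannot leak into the model. The regularization strength, window and synthetic data below are placeholders, and autoregressive terms are omitted.

```python
import numpy as np

def cpm_predict(target, predictors, test_mask, l2=1e3):
    """Predict a target pixel's systematics from other stars' pixels.

    target : (T,) flux time series of the target pixel.
    predictors : (T, P) fluxes of causally disconnected pixels.
    test_mask : boolean (T,) of epochs excluded from training, so that any
        transit there is isolated from the fit (train/test separation).
    l2 : ridge regularization amplitude (illustrative value).
    """
    train = ~test_mask
    X, y = predictors[train], target[train]
    w = np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ y)
    return predictors @ w                      # systematics model for all epochs

T, P = 500, 30
rng = np.random.default_rng(2)
common = np.sin(np.linspace(0, 20, T))         # shared instrumental signal
predictors = common[:, None] + 0.05 * rng.normal(size=(T, P))
target = common + 0.05 * rng.normal(size=T)
target[240:260] -= 0.02                        # injected transit
mask = np.zeros(T, bool)
mask[230:270] = True
detrended = target - cpm_predict(target, predictors, mask)
print(detrended[240:260].mean())               # transit depth survives de-trending
```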
Roughness effects on thermal-infrared emissivities estimated from remotely sensed images
NASA Astrophysics Data System (ADS)
Mushkin, Amit; Danilina, Iryna; Gillespie, Alan R.; Balick, Lee K.; McCabe, Matthew F.
2007-10-01
Multispectral thermal-infrared images from the Mauna Loa caldera in Hawaii, USA are examined to study the effects of surface roughness on remotely retrieved emissivities. We find up to a 3% decrease in spectral contrast in ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) 90-m/pixel emissivities due to sub-pixel surface roughness variations on the caldera floor. A similar decrease in spectral contrast of emissivities extracted from MASTER (MODIS/ASTER Airborne Simulator) ~12.5-m/pixel data can be described as a function of increasing surface roughness, which was measured remotely from ASTER 15-m/pixel stereo images. The ratio between ASTER stereo images provides a measure of sub-pixel surface-roughness variations across the scene. These independent roughness estimates complement a radiosity model designed to quantify the unresolved effects of multiple scattering and differential solar heating due to sub-pixel roughness elements and to compensate for both sub-pixel temperature dispersion and cavity radiation on TIR measurements.
NASA Astrophysics Data System (ADS)
Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang
2008-03-01
Because of the complex thermal objects in an infrared image, the prevalent image edge detection operators are often suited only to particular scenes and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when a convolution-based receptive field architecture is assumed. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. Aiming at the blurred edges of an infrared image, subsequent orthogonal polynomial interpolation and sub-pixel level edge detection in the rough edge pixel neighborhood are adopted to locate the foregoing rough edges at sub-pixel level. Numerical simulations show that this method can locate the target edges accurately and robustly.
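A minimal ON-center Difference-of-Gaussians filtering sketch using scipy.ndimage: the response is the difference between a narrow (center) and a wide (surround) Gaussian blur, thresholded into a rough contour mask. The two widths and the threshold are illustrative choices, and the eye-tremor and sub-pixel interpolation steps of the abstract are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_contours(image, sigma_center=1.0, sigma_surround=3.0, thresh=0.02):
    """ON-center Difference-of-Gaussians response and a rough contour mask.

    sigma_center < sigma_surround mimics a center-surround receptive
    field; the widths and threshold are illustrative only.
    """
    dog = gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)
    return dog, dog > thresh

img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                      # bright square thermal target
response, contours = dog_contours(img)
print(contours.sum(), "contour pixels")
```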
NASA Astrophysics Data System (ADS)
Wu, Bo; Liu, Wai Chung; Grumpe, Arne; Wöhler, Christian
2018-06-01
Lunar Digital Elevation Models (DEMs) are important for successful lunar landing and exploration missions. Lunar DEMs are typically generated by photogrammetry or laser altimetry approaches. Photogrammetric methods require multiple stereo images of the region of interest and may not be applicable in cases where stereo coverage is not available. In contrast, reflectance based shape reconstruction techniques, such as shape from shading (SfS) and shape and albedo from shading (SAfS), apply monocular images to generate DEMs with pixel-level resolution. We present a novel hierarchical SAfS method that refines a lower-resolution DEM to pixel-level resolution given a monocular image with known light source. We also estimate the corresponding pixel-wise albedo map in the process and use it to regularize the shape reconstruction at pixel-level resolution, constrained by the low-resolution DEM. In this study, a Lunar-Lambertian reflectance model is applied to estimate the albedo map. Experiments were carried out using monocular images from the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC), with spatial resolution of 0.5-1.5 m per pixel, constrained by the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM), with spatial resolution of 60 m. The results indicate that local details are well recovered by the proposed algorithm with plausible albedo estimation. The low-frequency topographic consistency depends on the quality of the low-resolution DEM and the resolution difference between the image and the low-resolution DEM.
NASA Astrophysics Data System (ADS)
Grefenstette, Brian W.; Bhalerao, Varun; Cook, W. Rick; Harrison, Fiona A.; Kitaguchi, Takao; Madsen, Kristin K.; Mao, Peter H.; Miyasaka, Hiromasa; Rana, Vikram
2017-08-01
Pixelated Cadmium Zinc Telluride (CdZnTe) detectors are currently flying on the Nuclear Spectroscopic Telescope ARray (NuSTAR) NASA Astrophysics Small Explorer. While the pixel pitch of the detectors is ≈ 605 μm, we can leverage the detector readout architecture to determine the interaction location of an individual photon to much higher spatial accuracy. The sub-pixel spatial location allows us to finely oversample the point spread function of the optics and reduces imaging artifacts due to pixelation. In this paper we demonstrate how the sub-pixel information is obtained, how the detectors were calibrated, and provide ground verification of the quantum efficiency of our Monte Carlo model of the detector response.
Continuous Change Detection and Classification (CCDC) of Land Cover Using All Available Landsat Data
NASA Astrophysics Data System (ADS)
Zhu, Z.; Woodcock, C. E.
2012-12-01
A new algorithm for Continuous Change Detection and Classification (CCDC) of land cover using all available Landsat data is developed. This new algorithm is capable of detecting many kinds of land cover change as new images are collected and at the same time providing land cover maps for any given time. To better identify land cover change, a two-step cloud, cloud shadow, and snow masking algorithm is used for eliminating "noisy" observations. Next, a time series model with seasonality, trend, and break components estimates the surface reflectance and temperature. The time series model is updated continuously with newly acquired observations. Due to the high variability in spectral response for different kinds of land cover change, the CCDC algorithm uses a data-driven threshold derived from all seven Landsat bands. When the difference between observed and predicted values exceeds the thresholds three consecutive times, a pixel is identified as land cover change. Land cover classification is done after change detection. Coefficients from the time series models and the Root Mean Square Error (RMSE) from model fitting are used as classification inputs for the Random Forest Classifier (RFC). We applied this new algorithm to one Landsat scene (Path 12 Row 31) that includes all of Rhode Island as well as much of Eastern Massachusetts and parts of Connecticut. A total of 532 Landsat images acquired between 1982 and 2011 were processed. During this period, 619,924 pixels were detected to change once (91% of total changed pixels) and 60,199 pixels were detected to change twice (8% of total changed pixels). The most frequent land cover change category is from mixed forest to low density residential, which occupies more than 8% of total land cover change pixels.
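A minimal single-band, single-pixel sketch of the approach above: a trend-plus-annual-harmonic model is fit to an initial stable period, and change is flagged the first time the observed-minus-predicted residual exceeds an RMSE-scaled threshold for three consecutive observations. The threshold factor, training length and synthetic time series are illustrative, and only one band is shown instead of all seven.

```python
import numpy as np

def fit_harmonic(t, y):
    """Least-squares fit of trend plus annual harmonic: y ~ a + b*t + c*cos + d*sin."""
    w = 2 * np.pi / 365.25
    X = np.column_stack([np.ones_like(t), t, np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rmse = np.sqrt(np.mean((X @ coef - y) ** 2))
    return coef, rmse

def detect_change(t, y, n_train=30, k=3.0, consecutive=3):
    """Flag the first date where |obs - pred| > k*RMSE three times in a row."""
    coef, rmse = fit_harmonic(t[:n_train], y[:n_train])
    w = 2 * np.pi / 365.25
    run = 0
    for ti, yi in zip(t[n_train:], y[n_train:]):
        pred = coef @ np.array([1.0, ti, np.cos(w * ti), np.sin(w * ti)])
        run = run + 1 if abs(yi - pred) > k * rmse else 0
        if run >= consecutive:
            return ti
    return None

t = np.arange(0, 2000, 16.0)                   # roughly 16-day revisit, in days
y = 0.3 + 0.05 * np.sin(2 * np.pi * t / 365.25) + 0.005 * np.random.randn(t.size)
y[t > 1200] -= 0.15                            # abrupt land cover change
print("change detected near day", detect_change(t, y))
```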
Realistic full wave modeling of focal plane array pixels
Campione, Salvatore; Warne, Larry K.; Jorgenson, Roy E.; ...
2017-11-01
Here, we investigate full-wave simulations of realistic implementations of multifunctional nanoantenna enabled detectors (NEDs). We focus on a 2x2 pixelated array structure that supports two wavelengths of operation. We design each resonating structure independently using full-wave simulations with periodic boundary conditions mimicking the whole infinite array. We then construct a supercell made of a 2x2 pixelated array with periodic boundary conditions mimicking the full NED; in this case, however, each pixel comprises 10-20 antennas per side. In this way, the cross-talk between contiguous pixels is accounted for in our simulations. We observe that, even though there are finite extent effects, the pixels work as designed, each responding at the respective wavelength of operation. This allows us to stress that realistic simulations of multifunctional NEDs need to be performed to verify the design functionality by taking into account finite extent and cross-talk effects.
A polygon-based modeling approach to assess exposure of resources and assets to wildfire
Matthew P. Thompson; Joe Scott; Jeffrey D. Kaiden; Julie W. Gilbertson-Day
2013-01-01
Spatially explicit burn probability modeling is increasingly applied to assess wildfire risk and inform mitigation strategy development. Burn probabilities are typically expressed on a per-pixel basis, calculated as the number of times a pixel burns divided by the number of simulation iterations. Spatial intersection of highly valued resources and assets (HVRAs) with...
NASA Astrophysics Data System (ADS)
Rossella, M.; Bariani, S.; Barnaba, O.; Cattaneo, P. W.; Cervi, T.; Menegolli, A.; Nardò, R.; Prata, M. C.; Romano, E.; Scagliotti, C.; Simonetta, M.; Vercellati, F.
2017-02-01
The MEG II Timing Counter will measure the positron time of arrival with a resolution of 30 ps relying on two arrays of scintillator pixels read out by 6144 Silicon Photomultipliers (SiPMs) from AdvanSiD. They must be characterized, measuring their breakdown voltage, to assure that the gains of the SiPMs of each pixel are as uniform as possible, to maximize the pixel resolution. To do this an automatic test system that can measure sequentially the parameters of 32 devices has been developed.
Characterization and correction of charge-induced pixel shifts in DECam
Gruen, D.; Bernstein, G. M.; Jarvis, M.; ...
2015-05-28
Interaction of charges in CCDs with the already accumulated charge distribution causes both a flux dependence of the point-spread function (an increase of observed size with flux, also known as the brighter/fatter effect) and pixel-to-pixel correlations of the Poissonian noise in flat fields. We describe these effects in the Dark Energy Camera (DECam) with charge dependent shifts of effective pixel borders, i.e. the Antilogus et al. (2014) model, which we fit to measurements of flat-field Poissonian noise correlations. The latter fall off approximately as a power-law r -2.5 with pixel separation r, are isotropic except for an asymmetry in the direct neighbors along rows and columns, are stable in time, and are weakly dependent on wavelength. They show variations from chip to chip at the 20% level that correlate with the silicon resistivity. The charge shifts predicted by the model cause biased shape measurements, primarily due to their effect on bright stars, at levels exceeding weak lensing science requirements. We measure the flux dependence of star images and show that the effect can be mitigated by applying the reverse charge shifts at the pixel level during image processing. Differences in stellar size, however, remain significant due to residuals at larger distance from the centroid.
Scattering property based contextual PolSAR speckle filter
NASA Astrophysics Data System (ADS)
Mullissa, Adugna G.; Tolpekin, Valentyn; Stein, Alfred
2017-12-01
Reliability of the scattering model based polarimetric SAR (PolSAR) speckle filter depends upon the accurate decomposition and classification of the scattering mechanisms. This paper presents an improved scattering property based contextual speckle filter based upon an iterative classification of the scattering mechanisms. It applies a Cloude-Pottier eigenvalue-eigenvector decomposition and a fuzzy H/α classification to determine the scattering mechanisms on a pre-estimate of the coherency matrix. The H/α classification identifies pixels with homogeneous scattering properties. A coarse pixel selection rule groups pixels that are either single bounce, double bounce or volume scatterers. A fine pixel selection rule is applied to pixels within each canonical scattering mechanism. We filter the PolSAR data and depending on the type of image scene (urban or rural) use either the coarse or fine pixel selection rule. Iterative refinement of the Wishart H/α classification reduces the speckle in the PolSAR data. Effectiveness of this new filter is demonstrated by using both simulated and real PolSAR data. It is compared with the refined Lee filter, the scattering model based filter and the non-local means filter. The study concludes that the proposed filter compares favorably with other polarimetric speckle filters in preserving polarimetric information, point scatterers and subtle features in PolSAR data.
Characterization of multiport solid state imagers at megahertz data rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yates, G.J.; Pena, C.R.; Turko, B.T.
1994-08-01
Test results obtained from two recently developed multiport Charge-Coupled Devices (CCDs) operated at pixel rates in the 10-to-100 MHz range will be presented . The CCDs were evaluated in Los Alamos National Laboratory`s High Speed Solid State Imager Test Station (HSTS) which features PC-based programmable clock waveform generation (Tektronix DAS 9200) and synchronously clocked Digital Sampling Oscilloscopes (DSOs) (LeCroy 9424/9314 series) for CCD pixel data acquisition, analysis and storage. The HSTS also provided special designed optical pinhole array test patterns in the 5-to-50 micron diameter range for use with Xenon Strobe and pulsed laser light sources to simultaneously provide multiplemore » single-pixel illumination patterns to study CCD point-spread-function (PSF) and pixel smear characteristics. The two CCDs tested, EEV model CCD-13 and EG&G Reticon model HSO512J, are both 512 {times} 512 pixel arrays with eight (8) and sixteen (16) video output ports respectively. Both devices are generically Frame Transfer CCDs (FT CCDs) designed for parallel bi-directional vertical readout to augment their multiport design for increased pixel rates over common single port serial readout architecture. Although both CCDs were tested similarly, differences in their designs precluded normalization or any direct comparisons of test results. Rate dependent parameters investigated include S/N, PSF, and MTF. The performance observed for the two imagers at various pixel rates from selected typical output ports is discussed.« less
NASA Astrophysics Data System (ADS)
Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.
2006-06-01
In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first blockrow needs to be stored. Data were generated using a fast Monte Carlo simulator using ray tracing. The proposed method was compared to a listmode MLEM algorithm, which used ray tracing for doing forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed the proposed method exhibited increased performance as compared to a standard listmode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
Model-based optimization of near-field binary-pixelated beam shapers
Dorrer, C.; Hassett, J.
2017-01-23
The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by amore » factor of 2 without the additional fabrication complexity or cost. The optimization process preserves the pixel distribution statistical properties. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beamshaping performance. Furthermore, this is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.« less
Wang, Qian; Liu, Zhen; Ziegler, Sibylle I; Shi, Kuangyu
2015-07-07
Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by (18)F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [(18)F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
NASA Astrophysics Data System (ADS)
Wang, Qian; Liu, Zhen; Ziegler, Sibylle I.; Shi, Kuangyu
2015-07-01
Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by 18F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [18F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
NASA Technical Reports Server (NTRS)
Platnick, Steven; Wind, Galina; Zhang, Zhibo; Ackerman, Steven A.; Maddux, Brent
2012-01-01
The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the l.6, 2.1, and 3.7 m spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "notclear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud'edges as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the ID cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.
Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis
2005-07-25
analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model ; unsupervised linear unmixing; hyperspectral imagery...spatial resolution permits different materials present in the area covered by a single pixel. The linear mixture model says that a pixel reflectance in...in r. In the linear mixture model , r is considered as the linear mixture of m1, m2, …, mP as nMαr += (1) where n is included to account for
Shannon L. Savage; Rick L. Lawrence; John R. Squires
2015-01-01
Ecological and land management applications would often benefit from maps of relative canopy cover of each species present within a pixel, instead of traditional remote-sensing based maps of either dominant species or percent canopy cover without regard to species composition. Widely used statistical models for remote sensing, such as randomForest (RF),...
Conditional random fields for pattern recognition applied to structured data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, Tom; Skurikhin, Alexei
In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade” there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features betweenmore » parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.« less
NASA Technical Reports Server (NTRS)
Franklin, Janet; Simonett, David
1988-01-01
The Li-Strahler reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density within 20 percent of sampled values in two bioclimatic zones in West Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field (spatial pattern, shape, and size distribution of trees) and in the imagery (spectral signatures of scene components). Trees are treated as simply shaped objects, and multispectral reflectance of a pixel is assumed to be related only to the proportions of tree crown, shadow, and understory in the pixel. These, in turn, are a direct function of the number and size of trees, the solar illumination angle, and the spectral signatures of crown, shadow and understory. Given the variance in reflectance from pixel to pixel within a homogeneous area of woodland, caused by the variation in the number and size of trees, the model can be inverted to give estimates of average tree size and density. Because the inversion is sensitive to correct determination of component signatures, predictions are not accurate for small areas.
Conditional random fields for pattern recognition applied to structured data
Burr, Tom; Skurikhin, Alexei
2015-07-14
In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade” there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features betweenmore » parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.« less
NASA Astrophysics Data System (ADS)
Drzewiecki, Wojciech
2017-12-01
We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious areas coverages from Landsat imagery. The accuracy of imperviousness mapping in individual time points was assessed based on RMSE, MAE and R2. These measures were also used for the assessment of imperviousness change intensity estimations. The applicability for detection of relevant changes in impervious areas coverages at sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results proved that Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may be also considered for this purpose. However, Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study revealed also limitations of the investigated methodology for detection of subtle changes of imperviousness inside the pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels if the relevant change threshold was set as one or three percent. Also for fi ve percent change threshold most of algorithms did not ensure that the accuracy of change map is higher than the accuracy of random classifi er. For the threshold of relevant change set as ten percent all approaches performed satisfactory.
NASA Astrophysics Data System (ADS)
Platnick, S.; Wind, G.; Zhang, Z.; Ackerman, S. A.; Maddux, B. C.
2012-12-01
The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges (defined by immediate adjacency to "clear" MOD/MYD35 pixels) as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.
A high-resolution and intelligent dead pixel detection scheme for an electrowetting display screen
NASA Astrophysics Data System (ADS)
Luo, ZhiJie; Luo, JianKun; Zhao, WenWen; Cao, Yang; Lin, WeiJie; Zhou, GuoFu
2018-02-01
Electrowetting display technology is realized by tuning the surface energy of a hydrophobic surface by applying a voltage based on electrowetting mechanism. With the rapid development of the electrowetting industry, how to analyze efficiently the quality of an electrowetting display screen has a very important significance. There are two kinds of dead pixels on the electrowetting display screen. One is that the oil of pixel cannot completely cover the display area. The other is that indium tin oxide semiconductor wire connecting pixel and foil was burned. In this paper, we propose a high-resolution and intelligent dead pixel detection scheme for an electrowetting display screen. First, we built an aperture ratio-capacitance model based on the electrical characteristics of electrowetting display. A field-programmable gate array is used as the integrated logic hub of the system for a highly reliable and efficient control of the circuit. Dead pixels can be detected and displayed on a PC-based 2D graphical interface in real time. The proposed dead pixel detection scheme reported in this work has promise in automating electrowetting display experiments.
A novel method to improve MODIS AOD retrievals in cloudy pixels using an analog ensemble approach
NASA Astrophysics Data System (ADS)
Kumar, R.; Raman, A.; Delle Monache, L.; Alessandrini, S.; Cheng, W. Y. Y.; Gaubert, B.; Arellano, A. F.
2016-12-01
Particulate matter (PM) concentrations are one of the fundamental indicators of air quality. Earth orbiting satellite platforms acquire column aerosol abundance that can in turn provide information about the PM concentrations. One of the serious limitations of column aerosol retrievals from low earth orbiting satellites is that these algorithms are based on clear sky assumptions. They do not retrieve AOD in cloudy pixels. After filtering cloudy pixels, these algorithms also arbitrarily remove brightest and darkest 25% of remaining pixels over ocean and brightest and darkest 50% pixels over land to filter any residual contamination from clouds. This becomes a critical issue especially in regions that experience monsoon, like Asia and North America. In case of North America, monsoon season experiences wide variety of extreme air quality events such as fires in California and dust storms in Arizona. Assessment of these episodic events warrants frequent monitoring of aerosol observations from remote sensing retrievals. In this study, we demonstrate a method to fill in cloudy pixels in Moderate Imaging Resolution Spectroradiometer (MODIS) AOD retrievals based on ensembles generated using an analog-based approach (AnEn). It provides a probabilistic distribution of AOD in cloudy pixels using historical records of model simulations of meteorological predictors such as AOD, relative humidity, and wind speed, and past observational records of MODIS AOD at a given target site. We use simulations from a coupled community weather forecasting model with chemistry (WRF-Chem) run at a resolution comparable to MODIS AOD. Analogs selected from summer months (June, July) of 2011-2013 from model and corresponding observations are used as a training dataset. Then, missing AOD retrievals in cloudy pixels in the last 31 days of the selected period are estimated. Here, we use AERONET stations as target sites to facilitate comparison against in-situ measurements. We use two approaches to evaluate the estimated AOD: 1) by comparing against reanalysis AOD, 2) by inverting AOD to PM10 concentrations and then comparing those with measured PM10. AnEn is an efficient approach to generate an ensemble as it involves only one model run and provides an estimate of uncertainty that complies with the physical and chemical state of the atmosphere.
High Accuracy 3D Processing of Satellite Imagery
NASA Technical Reports Server (NTRS)
Gruen, A.; Zhang, L.; Kocaman, S.
2007-01-01
Automatic DSM/DTM generation reproduces not only general features, but also detailed features of the terrain relief. Height accuracy of around 1 pixel in cooperative terrain. RMSE values of 1.3-1.5 m (1.0-2.0 pixels) for IKONOS and RMSE values of 2.9-4.6 m (0.5-1.0 pixels) for SPOT5 HRS. For 3D city modeling, the manual and semi-automatic feature extraction capability of SAT-PP provides a good basis. The tools of SAT-PP allowed the stereo-measurements of points on the roofs in order to generate a 3D city model with CCM The results show that building models with main roof structures can be successfully extracted by HRSI. As expected, with Quickbird more details are visible.
Modeling radiation damage to pixel sensors in the ATLAS detector
NASA Astrophysics Data System (ADS)
Ducourthial, A.
2018-03-01
Silicon pixel detectors are at the core of the current and planned upgrade of the ATLAS detector at the Large Hadron Collider (LHC) . As the closest detector component to the interaction point, these detectors will be subject to a significant amount of radiation over their lifetime: prior to the High-Luminosity LHC (HL-LHC) [1], the innermost layers will receive a fluence in excess of 1015 neq/cm2 and the HL-LHC detector upgrades must cope with an order of magnitude higher fluence integrated over their lifetimes. Simulating radiation damage is essential in order to make accurate predictions for current and future detector performance that will enable searches for new particles and forces as well as precision measurements of Standard Model particles such as the Higgs boson. We present a digitization model that includes radiation damage effects on the ATLAS pixel sensors for the first time. In addition to thoroughly describing the setup, we present first predictions for basic pixel cluster properties alongside early studies with LHC Run 2 proton-proton collision data.
Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.
This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single pixel regression estimator--a regression model that estimates gas concentration on an individual pixel basis.
NASA Astrophysics Data System (ADS)
Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith
2017-02-01
The"scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may spatially vary. In CMOS cameras, the charge to voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 to 1,000 photons / pixel per frame to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.
Sohl, Terry L.; Dornbierer, Jordan; Wika, Steve; Sayler, Kristi L.; Quenzer, Robert
2017-01-01
Land use and land cover (LULC) change occurs at a local level within contiguous ownership and management units (parcels), yet LULC models primarily use pixel-based spatial frameworks. The few parcel-based models being used overwhelmingly focus on small geographic areas, limiting the ability to assess LULC change impacts at regional to national scales. We developed a modified version of the Forecasting Scenarios of land use change model to project parcel-based agricultural change across a large region in the United States Great Plains. A scenario representing an agricultural biofuel scenario was modeled from 2012 to 2030, using real parcel boundaries based on contiguous ownership and land management units. The resulting LULC projection provides a vastly improved representation of landscape pattern over existing pixel-based models, while simultaneously providing an unprecedented combination of thematic detail and broad geographic extent. The conceptual approach is practical and scalable, with potential use for national-scale projections.
Interactive classification and content-based retrieval of tissue images
NASA Astrophysics Data System (ADS)
Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof
2002-11-01
We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.
Evaluation of the MTF for a-Si:H imaging arrays
NASA Astrophysics Data System (ADS)
Yorkston, John; Antonuk, Larry E.; Seraji, N.; Huang, Weidong; Siewerdsen, Jeffrey H.; El-Mohri, Youcef
1994-05-01
Hydrogenated amorphous silicon imaging arrays are being developed for numerous applications in medical imaging. Diagnostic and megavoltage images have previously been reported and a number of the intrinsic properties of the arrays have been investigated. This paper reports on the first attempt to characterize the intrinsic spatial resolution of the imaging pixels on a 450 micrometers pitch, n-i-p imaging array fabricated at Xerox P.A.R.C. The pre- sampled modulation transfer function was measured by scanning a approximately 25 micrometers wide slit of visible wavelength light across a pixel in both the DATA and FET directions. The results show that the response of the pixel in these orthogonal directions is well described by a simple model that accounts for asymmetries in the pixel response due to geometric aspects of the pixel design.
4.3 μm quantum cascade detector in pixel configuration.
Harrer, A; Schwarz, B; Schuler, S; Reininger, P; Wirthmüller, A; Detz, H; MacFarland, D; Zederbauer, T; Andrews, A M; Rothermund, M; Oppermann, H; Schrenk, W; Strasser, G
2016-07-25
We present the design simulation and characterization of a quantum cascade detector operating at 4.3μm wavelength. Array integration and packaging processes were investigated. The device operates in the 4.3μm CO2 absorption region and consists of 64 pixels. The detector is designed fully compatible to standard processing and material growth methods for scalability to large pixel counts. The detector design is optimized for a high device resistance at elevated temperatures. A QCD simulation model was enhanced for resistance and responsivity optimization. The substrate illuminated pixels utilize a two dimensional Au diffraction grating to couple the light to the active region. A single pixel responsivity of 16mA/W at room temperature with a specific detectivity D* of 5⋅107 cmHz/W was measured.
3D track reconstruction capability of a silicon hybrid active pixel detector
NASA Astrophysics Data System (ADS)
Bergmann, Benedikt; Pichotka, Martin; Pospisil, Stanislav; Vycpalek, Jiri; Burian, Petr; Broulim, Pavel; Jakubek, Jan
2017-06-01
Timepix3 detectors are the latest generation of hybrid active pixel detectors of the Medipix/Timepix family. Such detectors consist of an active sensor layer which is connected to the readout ASIC (application specific integrated circuit), segmenting the detector into a square matrix of 256 × 256 pixels (pixel pitch 55 μm). Particles interacting in the active sensor material create charge carriers, which drift towards the pixelated electrode, where they are collected. In each pixel, the time of the interaction (time resolution 1.56 ns) and the amount of created charge carriers are measured. Such a device was employed in an experiment in a 120 GeV/c pion beam. It is demonstrated, how the drift time information can be used for "4D" particle tracking, with the three spatial dimensions and the energy losses along the particle trajectory (dE/dx). Since the coordinates in the detector plane are given by the pixelation ( x, y), the x- and y-resolution is determined by the pixel pitch (55 μm). A z-resolution of 50.4 μm could be achieved (for a 500 μm thick silicon sensor at 130 V bias), whereby the drift time model independent z-resolution was found to be 28.5 μm.
Pixelation Effects in Weak Lensing
NASA Technical Reports Server (NTRS)
High, F. William; Rhodes, Jason; Massey, Richard; Ellis, Richard
2007-01-01
Weak gravitational lensing can be used to investigate both dark matter and dark energy but requires accurate measurements of the shapes of faint, distant galaxies. Such measurements are hindered by the finite resolution and pixel scale of digital cameras. We investigate the optimum choice of pixel scale for a space-based mission, using the engineering model and survey strategy of the proposed Supernova Acceleration Probe as a baseline. We do this by simulating realistic astronomical images containing a known input shear signal and then attempting to recover the signal using the Rhodes, Refregier, and Groth algorithm. We find that the quality of shear measurement is always improved by smaller pixels. However, in practice, telescopes are usually limited to a finite number of pixels and operational life span, so the total area of a survey increases with pixel size. We therefore fix the survey lifetime and the number of pixels in the focal plane while varying the pixel scale, thereby effectively varying the survey size. In a pure trade-off for image resolution versus survey area, we find that measurements of the matter power spectrum would have minimum statistical error with a pixel scale of 0.09' for a 0.14' FWHM point-spread function (PSF). The pixel scale could be increased to 0.16' if images dithered by exactly half-pixel offsets were always available. Some of our results do depend on our adopted shape measurement method and should be regarded as an upper limit: future pipelines may require smaller pixels to overcome systematic floors not yet accessible, and, in certain circumstances, measuring the shape of the PSF might be more difficult than those of galaxies. However, the relative trends in our analysis are robust, especially those of the surface density of resolved galaxies. Our approach thus provides a snapshot of potential in available technology, and a practical counterpart to analytic studies of pixelation, which necessarily assume an idealized shape measurement method.
An Active Fire Temperature Retrieval Model Using Hyperspectral Remote Sensing
NASA Astrophysics Data System (ADS)
Quigley, K. W.; Roberts, D. A.; Miller, D.
2017-12-01
Wildfire is both an important ecological process and a dangerous natural threat that humans face. In situ measurements of wildfire temperature are notoriously difficult to collect due to dangerous conditions. Imaging spectrometry data has the potential to provide some of the most accurate and highest temporally-resolved active fire temperature retrieval information for monitoring and modeling. Recent studies on fire temperature retrieval have used have used Multiple Endmember Spectral Mixture Analysis applied to Airborne Visible applied to Airborne Visible / Infrared Imaging Spectrometer (AVIRIS) bands to model fire temperatures within the regions marked to contain fire, but these methods are less effective at coarser spatial resolutions, as linear mixing methods are degraded by saturation within the pixel. The assumption of a distribution of temperatures within pixels allows us to model pixels with an effective maximum and likely minimum temperature. This assumption allows a more robust approach to modeling temperature at different spatial scales. In this study, instrument-corrected radiance is forward-modeled for different ranges of temperatures, with weighted temperatures from an effective maximum temperature to a likely minimum temperature contributing to the total radiance of the modeled pixel. Effective maximum fire temperature is estimated by minimizing the Root Mean Square Error (RMSE) between modeled and measured fires. The model was tested using AVIRIS collected over the 2016 Sherpa Fire in Santa Barbara County, California,. While only in situ experimentation would be able to confirm active fire temperatures, the fit of the data to modeled radiance can be assessed, as well as the similarity in temperature distributions seen on different spatial resolution scales. Results show that this model improves upon current modeling methods in producing similar effective temperatures on multiple spatial scales as well as a similar modeled area distribution of those temperatures.
NASA Astrophysics Data System (ADS)
Weatherill, Daniel P.; Stefanov, Konstantin D.; Greig, Thomas A.; Holland, Andrew D.
2014-07-01
Pixellated monolithic silicon detectors operated in a photon-counting regime are useful in spectroscopic imaging applications. Since a high energy incident photon may produce many excess free carriers upon absorption, both energy and spatial information can be recovered by resolving each interaction event. The performance of these devices in terms of both the energy and spatial resolution is in large part determined by the amount of diffusion which occurs during the collection of the charge cloud by the pixels. Past efforts to predict the X-ray performance of imaging sensors have used either analytical solutions to the diffusion equation or simplified monte carlo electron transport models. These methods are computationally attractive and highly useful but may be complemented using more physically detailed models based on TCAD simulations of the devices. Here we present initial results from a model which employs a full transient numerical solution of the classical semiconductor equations to model charge collection in device pixels under stimulation from initially Gaussian photogenerated charge clouds, using commercial TCAD software. Realistic device geometries and doping are included. By mapping the pixel response to different initial interaction positions and charge cloud sizes, the charge splitting behaviour of the model sensor under various illuminations and operating conditions is investigated. Experimental validation of the model is presented from an e2v CCD30-11 device under varying substrate bias, illuminated using an Fe-55 source.
Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.
2002-01-01
Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on the average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their dimensional pixel size. ?? 2002 Elsevier Science Inc. All rights reserved.
Haney, C R; Fan, X; Markiewicz, E; Mustafi, D; Karczmar, G S; Stadler, W M
2013-02-01
Sorafenib is a multi-kinase inhibitor that blocks cell proliferation and angiogenesis. It is currently approved for advanced hepatocellular and renal cell carcinomas in humans, where its major mechanism of action is thought to be through inhibition of vascular endothelial growth factor and platelet-derived growth factor receptors. The purpose of this study was to determine whether pixel-by-pixel analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is better able to capture the heterogeneous response of Sorafenib in a murine model of colorectal tumor xenografts (as compared with region of interest analysis). MRI was performed on a 9.4 T pre-clinical scanner on the initial treatment day. Then either vehicle or drug were gavaged daily (3 days) up to the final image. Four days later, the mice were again imaged. The two-compartment model and reference tissue method of DCE-MRI were used to analyze the data. The results demonstrated that the contrast agent distribution rate constant (K(trans)) were significantly reduced (p < 0.005) at day-4 of Sorafenib treatment. In addition, the K(trans) of nearby muscle was also reduced after Sorafenib treatment. The pixel-by-pixel analysis (compared to region of interest analysis) was better able to capture the heterogeneity of the tumor and the decrease in K(trans) four days after treatment. For both methods, the volume of the extravascular extracellular space did not change significantly after treatment. These results confirm that parameters such as K(trans), could provide a non-invasive biomarker to assess the response to anti-angiogenic therapies such as Sorafenib, but that the heterogeneity of response across a tumor requires a more detailed analysis than has typically been undertaken.
NASA Astrophysics Data System (ADS)
Chatterjee, R. S.; Singh, Narendra; Thapa, Shailaja; Sharma, Dravneeta; Kumar, Dheeraj
2017-06-01
The present study proposes land surface temperature (LST) retrieval from satellite-based thermal IR data by single channel radiative transfer algorithm using atmospheric correction parameters derived from satellite-based and in-situ data and land surface emissivity (LSE) derived by a hybrid LSE model. For example, atmospheric transmittance (τ) was derived from Terra MODIS spectral radiance in atmospheric window and absorption bands, whereas the atmospheric path radiance and sky radiance were estimated using satellite- and ground-based in-situ solar radiation, geographic location and observation conditions. The hybrid LSE model which is coupled with ground-based emissivity measurements is more versatile than the previous LSE models and yields improved emissivity values by knowledge-based approach. It uses NDVI-based and NDVI Threshold method (NDVITHM) based algorithms and field-measured emissivity values. The model is applicable for dense vegetation cover, mixed vegetation cover, bare earth including coal mining related land surface classes. The study was conducted in a coalfield of India badly affected by coal fire for decades. In a coal fire affected coalfield, LST would provide precise temperature difference between thermally anomalous coal fire pixels and background pixels to facilitate coal fire detection and monitoring. The derived LST products of the present study were compared with radiant temperature images across some of the prominent coal fire locations in the study area by graphical means and by some standard mathematical dispersion coefficients such as coefficient of variation, coefficient of quartile deviation, coefficient of quartile deviation for 3rd quartile vs. maximum temperature, coefficient of mean deviation (about median) indicating significant increase in the temperature difference among the pixels. The average temperature slope between adjacent pixels, which increases the potential of coal fire pixel detection from background pixels, is significantly larger in the derived LST products than the corresponding radiant temperature images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.
Semiconductor hybrid pixel detectors often consist of a pixellated sensor layer bump bonded to a matching pixelated readout integrated circuit (ROIC). The sensor can range from high resistivity Si to III-V materials, whereas a Si CMOS process is typically used to manufacture the ROIC. Independent, device physics and electronic design automation (EDA) tools are used to determine sensor characteristics and verify functional performance of ROICs respectively with significantly different solvers. Some physics solvers provide the capability of transferring data to the EDA tool. However, single pixel transient simulations are either not feasible due to convergence difficulties or are prohibitively long.more » A simplified sensor model, which includes a current pulse in parallel with detector equivalent capacitor, is often used; even then, spice type top-level (entire array) simulations range from days to weeks. In order to analyze detector deficiencies for a particular scientific application, accurately defined transient behavioral models of all the functional blocks are required. Furthermore, various simulations, such as transient, noise, Monte Carlo, inter-pixel effects, etc. of the entire array need to be performed within a reasonable time frame without trading off accuracy. The sensor and the analog front-end can be modeling using a real number modeling language, as complex mathematical functions or detailed data can be saved to text files, for further top-level digital simulations. Parasitically aware digital timing is extracted in a standard delay format (sdf) from the pixel digital back-end layout as well as the periphery of the ROIC. For any given input, detector level worst-case and best-case simulations are performed using a Verilog simulation environment to determine the output. Each top-level transient simulation takes no more than 10-15 minutes. The impact of changing key parameters such as sensor Poissonian shot noise, analog front-end bandwidth, jitter due to clock distribution etc. can be accurately analyzed to determine ROIC architectural viability and bottlenecks. Hence the impact of the detector parameters on the scientific application can be studied.« less
NASA Astrophysics Data System (ADS)
Liu, W. C.; Wu, B.
2018-04-01
High-resolution 3D modelling of lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However dense matching may fail in poorly textured areas and in situations when the image pair has large illumination differences. As a result, the actual achievable spatial resolution of the 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry is interacted with stereo image matching to create robust and spatially well distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair so that to obtain pixel-resolution 3D models. Experiments using the Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performances of the approach compared with traditional photogrammetric technique. The results and findings from this research contribute to optimal exploitation of image information for high-resolution 3D modelling of the lunar surface, which is of significance for the advancement of lunar and planetary mapping.
2012-09-01
Feasibility (MT Modeling ) a. Continuum of mixture distributions interpolated b. Mixture infeasibilities calculated for each pixel c. Valid detections...Visible/Infrared Imaging Spectrometer BRDF Bidirectional Reflectance Distribution Function CASI Compact Airborne Spectrographic Imager CCD...filtering (MTMF), and was designed by Healey and Slater (1999) to use “a physical model to generate the set of sensor spectra for a target that will be
NASA Astrophysics Data System (ADS)
Maczewski, Lukasz
2010-05-01
The International Linear Collider (ILC) is a project of an electron-positron (e+e-) linear collider with the centre-of-mass energy of 200-500 GeV. Monolithic Active Pixel Sensors (MAPS) are one of the proposed silicon pixel detector concepts for the ILC vertex detector (VTX). Basic characteristics of two MAPS pixel matrices MIMOSA-5 (17 μm pixel pitch) and MIMOSA-18 (10 μm pixel pitch) are studied and compared (pedestals, noises, calibration of the ADC-to-electron conversion gain, detector efficiency and charge collection properties). The e+e- collisions at the ILC will be accompanied by intense beamsstrahlung background of electrons and positrons hitting inner planes of the vertex detector. Tracks of this origin leave elongated clusters contrary to those of secondary hadrons. Cluster characteristics and orientation with respect to the pixels netting are studied for perpendicular and inclined tracks. Elongation and precision of determining the cluster orientation as a function of the angle of incidence were measured. A simple model of signal formation (based on charge diffusion) is proposed and tested using the collected data.
Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors.
Ge, Xiaoliang; Theuwissen, Albert J P
2018-02-27
This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS) image sensors. In order to address the trade-off between the low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated-double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the read-out path. Such readout topology, however, operates in a non-stationery large-signal behavior, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models, and therefore cannot be readily applied for Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and the non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristic of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models.
Analytical expressions for noise and crosstalk voltages of the High Energy Silicon Particle Detector
NASA Astrophysics Data System (ADS)
Yadav, I.; Shrimali, H.; Liberali, V.; Andreazza, A.
2018-01-01
The paper presents design and implementation of a silicon particle detector array with the derived closed form equations of signal-to-noise ratio (SNR) and crosstalk voltages. The noise analysis demonstrates the effect of interpixel capacitances (IPC) between center pixel (where particle hits) and its neighbouring pixels, resulting as a capacitive crosstalk. The pixel array has been designed and simulated in a 180 nm BCD technology of STMicroelectronics. The technology uses the supply voltage (VDD) of 1.8 V and the substrate potential of -50 V. The area of unit pixel is 250×50 μm2 with the substrate resistivity of 125 Ωcm and the depletion depth of 30 μm. The mathematical model includes the effects of various types of noise viz. the shot noise, flicker noise, thermal noise and the capacitive crosstalk. This work compares the results of noise and crosstalk analysis from the proposed mathematical model with the circuit simulation results for a given simulation environment. The results show excellent agreement with the circuit simulations and the mathematical model. The average relative error (AVR) generated for the noise spectral densities with respect to the simulations and the model is 12% whereas the comparison gives the errors of 3% and 11.5% for the crosstalk voltages and the SNR results respectively.
Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors †
Theuwissen, Albert J. P.
2018-01-01
This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS) image sensors. In order to address the trade-off between the low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated-double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the read-out path. Such readout topology, however, operates in a non-stationery large-signal behavior, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models, and therefore cannot be readily applied for Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and the non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristic of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models. PMID:29495496
Liu, Shuguang; Tan, Zhengxi; Chen, Mingshi; Liu, Jinxun; Wein, Anne; Li, Zhengpeng; Huang, Shengli; Oeding, Jennifer; Young, Claudia; Verma, Shashi B.; Suyker, Andrew E.; Faulkner, Stephen P.
2012-01-01
The General Ensemble Biogeochemical Modeling System (GEMS) was established to address the limitations of individual models in two ways. First, it uses multiple site-scale biogeochemical models to perform model simulations. Second, it adopts Monte Carlo ensemble simulations of each simulation unit (one site/pixel or group of sites/pixels with similar biophysical conditions) to incorporate uncertainties and variability (as measured by variances and covariance) of input variables into model simulations. In this chapter, we illustrate the applications of GEMS at the site and regional scales with an emphasis on incorporating agricultural practices. Challenges in modeling soil carbon dynamics and greenhouse gas emissions are also discussed.
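GEMS itself is not reproduced here; the sketch below only illustrates the Monte Carlo ensemble idea described above, drawing input variables for one simulation unit from an assumed mean and covariance and pushing them through a stand-in site-scale model so the output spread reflects the input uncertainty. The toy function site_model and all distribution parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def site_model(soil_carbon, npp):
    """Stand-in site-scale biogeochemical model: not GEMS, purely illustrative."""
    return 0.02 * soil_carbon + 0.45 * npp   # e.g., an annual carbon flux

# Assumed mean and covariance of the input variables for one simulation unit
mean = np.array([5000.0, 600.0])            # soil carbon [g C/m^2], NPP [g C/m^2/yr]
cov = np.array([[250_000.0, 10_000.0],
                [10_000.0, 3_600.0]])       # variances and covariance (assumed)

# Monte Carlo ensemble: propagate input uncertainty through the model
samples = rng.multivariate_normal(mean, cov, size=1000)
fluxes = site_model(samples[:, 0], samples[:, 1])

print(f"ensemble mean flux = {fluxes.mean():.1f}, std = {fluxes.std():.1f} g C/m^2/yr")
```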
Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2018-04-01
This paper proposes a novel enhancement method for capsule endoscopy images based exclusively on the bilinear interpolation algorithm. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted focusing on pairwise comparisons between original and enhanced images. The final-enhanced images generally had the best diagnostic quality and revealed more detail about the visibility of vessels and structures in capsule endoscopy images.
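The sketch below is one plausible reading of the procedure described above, not the authors' reference implementation: each RGB channel is smoothed with overlapping averages of four adjacent pixels (the "half-unit weight" interpretation of the bilinear kernel), and the final image is the mean of the original and the preliminary-enhanced image.

```python
import numpy as np

def half_unit_enhance(rgb):
    """Overlapping 2x2 averages per channel, then final = (original + preliminary)/2."""
    rgb = rgb.astype(np.float64)
    out = np.empty_like(rgb)
    for c in range(rgb.shape[2]):                     # process RGB channels directly
        ch = rgb[:, :, c]
        padded = np.pad(ch, ((0, 1), (0, 1)), mode="edge")
        # average of each group of four adjacent pixels (overlapping 2x2 windows)
        prelim = (padded[:-1, :-1] + padded[:-1, 1:] +
                  padded[1:, :-1] + padded[1:, 1:]) / 4.0
        out[:, :, c] = (ch + prelim) / 2.0            # halve the sum of original and preliminary
    return np.clip(out, 0, 255).astype(np.uint8)

demo = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)   # toy RGB image
print(half_unit_enhance(demo).shape)
```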
Modeling and Measuring Charge-Sharing in Hard X-ray Imagers Using HEXITEC CdTe Detectors
NASA Technical Reports Server (NTRS)
Ryan, Daniel F.; Christe, Steven D.; Shih, Albert Y.; Baumgartner, Wayne H.; Wilson, Matthew D.; Seller, Paul; Gaskin, Jessica A.; Inglis, Andrew
2017-01-01
The Rutherford Appleton Laboratory's HEXITEC ASIC has been designed to provide finely pixelated X-ray spectroscopic imaging in combination with a CdTe or CZT detector layer. Although HEXITEC's small pixels enable higher spatial resolution as well as higher spectral resolution via the small-pixel effect, they also increase the probability of charge sharing, a process which degrades spectral performance by dividing the charge induced by a single photon among multiple pixels. In this paper, we investigate the effect of this process on a continuum X-ray spectrum below the Cd and Te fluorescence energies (23 keV). This is done by comparing laboratory measurements with simulations performed with a custom-designed model of the HEXITEC ASIC. We find that the simulations closely match the observations, implying that we have an adequate understanding of both charge sharing and the HEXITEC ASIC itself. These results can be used to predict the distortion of a spectrum measured with HEXITEC and will help determine to what extent it can be corrected. They also show that models like this one are important tools in developing and interpreting observations from ASICs like HEXITEC.
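The custom HEXITEC model is not reproduced here; the minimal sketch below only illustrates the charge-sharing effect itself, computing the fraction of a 1-D Gaussian charge cloud that spills into the neighbouring pixel as a function of the interaction position. The pitch and cloud width are assumed values, not HEXITEC calibration data.

```python
from math import erf, sqrt

PIXEL_PITCH = 250.0    # um, assumed pixel pitch
CLOUD_SIGMA = 20.0     # um, assumed 1-sigma width of the drifted charge cloud

def shared_fraction(x_hit, sigma=CLOUD_SIGMA, pitch=PIXEL_PITCH):
    """Fraction of a 1-D Gaussian charge cloud centred at x_hit (measured from the
    left edge of the pixel) that drifts into the right-hand neighbour."""
    # charge lying beyond the pixel boundary at x = pitch
    return 0.5 * (1.0 - erf((pitch - x_hit) / (sqrt(2.0) * sigma)))

for x in (125.0, 230.0, 245.0, 249.0):
    f = shared_fraction(x)
    print(f"hit {x:6.1f} um from left edge -> {100*f:5.2f}% of charge shared")
```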
NASA Astrophysics Data System (ADS)
Atzberger, C.; Richter, K.
2009-09-01
The robust and accurate retrieval of vegetation biophysical variables using radiative transfer models (RTM) is seriously hampered by the ill-posedness of the inverse problem. With this research we further develop our previously published (object-based) inversion approach [Atzberger (2004)]. The object-based RTM inversion takes advantage of the geostatistical fact that the biophysical characteristics of nearby pixels are generally more similar than those at a larger distance. A two-step inversion based on PROSPECT+SAIL generated look-up tables is presented that can be easily implemented and adapted to other radiative transfer models. The approach takes into account the spectral signatures of neighboring pixels and optimizes a common value of the average leaf angle (ALA) for all pixels of a given image object, such as an agricultural field. Using a large set of leaf area index (LAI) measurements (n = 58) acquired over six different crops at the Barrax test site (Spain), we demonstrate that the proposed geostatistical regularization yields in most cases more accurate and spatially consistent results compared to the traditional (pixel-based) inversion. Pros and cons of the approach are discussed and possible future extensions presented.
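As a rough sketch of the object-based regularization idea (and not the published algorithm), the fragment below first selects the single ALA value whose look-up-table entries jointly best match all pixels of an object, and then inverts LAI pixel by pixel within that ALA slice. The look-up table here is filled with random numbers standing in for PROSPECT+SAIL output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a PROSPECT+SAIL look-up table:
# keys = (LAI, ALA) combinations, values = reflectance in a few bands.
lai_grid = np.linspace(0.5, 6.0, 12)
ala_grid = np.linspace(30.0, 70.0, 9)
n_bands = 4
lut = {(lai, ala): rng.random(n_bands) for lai in lai_grid for ala in ala_grid}

object_pixels = rng.random((50, n_bands))   # reflectances of one field object (toy data)

def object_based_inversion(pixels):
    """Step 1: common ALA minimising the summed spectral distance over the object.
    Step 2: with ALA fixed, invert LAI pixel by pixel."""
    best_ala, best_cost = None, np.inf
    for ala in ala_grid:
        spectra = np.array([lut[(lai, ala)] for lai in lai_grid])
        d = np.linalg.norm(pixels[:, None, :] - spectra[None, :, :], axis=2)
        cost = d.min(axis=1).sum()          # per-pixel best match, summed over the object
        if cost < best_cost:
            best_ala, best_cost = ala, cost
    spectra = np.array([lut[(lai, best_ala)] for lai in lai_grid])
    d = np.linalg.norm(pixels[:, None, :] - spectra[None, :, :], axis=2)
    return best_ala, lai_grid[d.argmin(axis=1)]

ala, lai = object_based_inversion(object_pixels)
print(f"common ALA = {ala:.0f} deg, mean LAI = {lai.mean():.2f}")
```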
NASA Astrophysics Data System (ADS)
Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian
2016-06-01
Lunar topographic information, e.g., a lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry, of which photogrammetric methods require multiple stereo images of an area. DEMs generated by these methods usually rely on various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of computer vision, has been introduced for pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are to be estimated simultaneously, this becomes a SAfS (Shape and Albedo from Shading) problem, which is under-determined without additional information. Previous work shows strong statistical regularities in the albedo of natural objects, and this is even more applicable to the lunar surface, whose albedo is less complex than the Earth's. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the area with a known light source, while also estimating the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. Experiments are carried out using monocular images from the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) (0.5 m spatial resolution), constrained by the SELENE and LRO Elevation Model (SLDEM 2015) of 60 m spatial resolution. The results indicate that local details are largely recovered by the algorithm, while low-frequency topographic consistency is affected by the low-resolution DEM.
Calibrating the pixel-level Kepler imaging data with a causal data-driven model
NASA Astrophysics Data System (ADS)
Wang, Dun; Foreman-Mackey, Daniel; Hogg, David W.; Schölkopf, Bernhard
2015-01-01
In general, astronomical observations are affected by several kinds of noise, each with its own causal source; there is photon noise, stochastic source variability, and residuals coming from imperfect calibration of the detector or telescope. In particular, the precision of NASA Kepler photometry for exoplanet science—the most precise photometric measurements of stars ever made—appears to be limited by unknown or untracked variations in spacecraft pointing and temperature, and unmodeled stellar variability. Here we present the Causal Pixel Model (CPM) for Kepler data, a data-driven model intended to capture variability but preserve transit signals. The CPM works at the pixel level (not the photometric measurement level); it can capture more fine-grained information about the variation of the spacecraft than is available in the pixel-summed aperture photometry. The basic idea is that CPM predicts each target pixel value from a large number of pixels of other stars sharing the instrument variabilities while not containing any information on possible transits at the target star. In addition, we use the target star's future and past (auto-regression). By appropriately separating the data into training and test sets, we ensure that information about any transit will be perfectly isolated from the fitting of the model. The method has four hyper-parameters (the number of predictor stars, the auto-regressive window size, and two L2-regularization amplitudes for model components), which we set by cross-validation. We determine a generic set of hyper-parameters that works well on most of the stars with 11≤V≤12 mag and apply the method to a corresponding set of target stars with known planet transits. We find that we can consistently outperform (for the purposes of exoplanet detection) the Kepler Pre-search Data Conditioning (PDC) method for exoplanet discovery, often improving the SNR by a factor of two. While we have not yet exhaustively tested the method at other magnitudes, we expect that it should be generally applicable, with positive consequences for subsequent exoplanet detection or stellar variability (in which case we must exclude the autoregressive part to preserve intrinsic variability).
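A simplified sketch of the CPM idea follows: predict a target pixel's light curve from many other stars' pixels with L2-regularized (ridge) linear regression, fit only on time points outside the held-out window so an injected transit cannot leak into the fit. The data are synthetic and the auto-regressive component of the published model is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_predictors = 500, 200

# Toy data: predictor pixels share a common "spacecraft" systematic;
# the target pixel adds a small box-shaped transit (not real Kepler data).
systematic = np.cumsum(rng.normal(0, 0.1, n_time))
predictors = systematic[:, None] * rng.uniform(0.5, 1.5, n_predictors) \
             + rng.normal(0, 0.05, (n_time, n_predictors))
target = systematic + rng.normal(0, 0.05, n_time)
target[240:260] -= 0.5                      # injected transit

test = np.zeros(n_time, dtype=bool)
test[200:300] = True                        # hold out the window around the transit
lam = 1e1                                   # L2-regularization amplitude (hyper-parameter)

# Ridge regression fitted on the training points only (train/test separation)
A, y = predictors[~test], target[~test]
w = np.linalg.solve(A.T @ A + lam * np.eye(n_predictors), A.T @ y)
residual = target - predictors @ w          # systematics-removed light curve

print(f"residual rms outside transit: {residual[~test].std():.3f}")
print(f"mean residual in transit window: {residual[240:260].mean():.3f}")
```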
Is flat fielding safe for precision CCD astronomy?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumer, Michael; Davis, Christopher P.; Roodman, Aaron
2017-07-06
The ambitious goals of precision cosmology with wide-field optical surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST) demand precision CCD astronomy as their foundation. This in turn requires an understanding of previously uncharacterized sources of systematic error in CCD sensors, many of which manifest themselves as static effective variations in pixel area. Such variation renders a critical assumption behind the traditional procedure of flat fielding—that a sensor's pixels comprise a uniform grid—invalid. In this work, we present a method to infer a curl-free model of a sensor's underlying pixel grid from flat-field images, incorporating the superposition of all electrostatic sensor effects—both known and unknown—present in flat-field data. We use these pixel grid models to estimate the overall impact of sensor systematics on photometry, astrometry, and PSF shape measurements in a representative sensor from the Dark Energy Camera (DECam) and a prototype LSST sensor. Applying the method to DECam data recovers known significant sensor effects for which corrections are currently being developed within DES. For an LSST prototype CCD with pixel-response non-uniformity (PRNU) of 0.4%, we find the impact of "improper" flat fielding on these observables is negligible in nominal 0.7'' seeing conditions. Furthermore, these errors scale linearly with the PRNU, so for future LSST production sensors, which may have larger PRNU, our method provides a way to assess whether pixel-level calibration beyond flat fielding will be required.
Zhao, C; Konstantinidis, A C; Zheng, Y; Anaxagoras, T; Speller, R D; Kanicki, J
2015-12-07
Wafer-scale CMOS active pixel sensors (APSs) have been developed recently for x-ray imaging applications. The small pixel pitch and low noise are very promising properties for medical imaging applications such as digital breast tomosynthesis (DBT). In this work, we evaluated experimentally and through modeling the imaging properties of a 50 μm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). A modified cascaded system model was developed for CMOS APS x-ray detectors by taking into account the device's nonlinear signal and noise properties. The imaging properties such as modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) were extracted from both measurements and the nonlinear cascaded system analysis. The results show that the DynAMITe x-ray detector achieves a high spatial resolution of 10 mm⁻¹ and a DQE of around 0.5 at spatial frequencies <1 mm⁻¹. In addition, the modeling results were used to calculate the image signal-to-noise ratio (SNRi) of microcalcifications at various mean glandular doses (MGD). For an average breast (5 cm thickness, 50% glandular fraction), 165 μm microcalcifications can be distinguished at an MGD 27% lower than the clinical value (~1.3 mGy). To detect 100 μm microcalcifications, further optimizations of the CMOS APS x-ray detector, image acquisition geometry and image reconstruction techniques should be considered.
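The modified nonlinear cascaded model of the study is not reproduced here; as a hedged illustration of how the three quantities relate, the sketch below evaluates the standard linear cascaded-systems expression DQE(f) = d̄²·MTF²(f) / (q̄·NPS(f)) on synthetic MTF and NPS curves. All numbers are placeholders, not DynAMITe measurements.

```python
import numpy as np

# Synthetic detector characteristics (illustrative, not DynAMITe data)
f = np.linspace(0.05, 10.0, 200)          # spatial frequency [mm^-1]
mtf = np.sinc(f / 10.0)                    # toy presampling MTF of a 100 um aperture
mean_signal = 5000.0                       # mean detector output d_bar [DN]
nps = 1600.0 * np.exp(-f / 5.0) + 400.0    # toy noise power spectrum [DN^2 mm^2]
q_bar = 2.5e4                              # incident photon fluence [photons/mm^2]

# Standard linear cascaded-systems expression for the DQE
dqe = (mean_signal ** 2) * mtf ** 2 / (q_bar * nps)

print(f"DQE near zero frequency: {dqe[0]:.2f}")
print(f"DQE at 5 mm^-1: {np.interp(5.0, f, dqe):.2f}")
```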
Combined statistical analysis of landslide release and propagation
NASA Astrophysics Data System (ADS)
Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay
2016-04-01
Statistical methods - often coupled with stochastic concepts - are commonly employed to relate areas affected by landslides with environmental layers, and to estimate spatial landslide probabilities by applying these relationships. However, such methods only concern the release of landslides, disregarding their motion. Conceptual models for mass flow routing are used for estimating landslide travel distances and possible impact areas. Automated approaches combining release and impact probabilities are rare. The present work attempts to fill this gap by a fully automated procedure combining statistical and stochastic elements, building on the open source GRASS GIS software: (1) The landslide inventory is subset into release and deposition zones. (2) We employ a traditional statistical approach to estimate the spatial release probability of landslides. (3) We back-calculate the probability distribution of the angle of reach of the observed landslides, employing the software tool r.randomwalk. One set of random walks is routed downslope from each pixel defined as a release area. Each random walk stops when leaving the observed impact area of the landslide. (4) The cumulative probability function (cdf) derived in (3) is used as input to route a set of random walks downslope from each pixel in the study area through the DEM, assigning the probability gained from the cdf to each pixel along the path (impact probability). The impact probability of a pixel is defined as the average impact probability of all sets of random walks impacting a pixel. Further, the average release probabilities of the release pixels of all sets of random walks impacting a given pixel are stored along with the area of the possible release zone. (5) We compute the zonal release probability by increasing the release probability according to the size of the release zone - the larger the zone, the larger the probability that a landslide will originate from at least one pixel within this zone. We quantify this relationship by a set of empirical curves. (6) Finally, we multiply the zonal release probability with the impact probability in order to estimate the combined impact probability for each pixel. We demonstrate the model with a 167 km² study area in Taiwan, using an inventory of landslides triggered by Typhoon Morakot. Analyzing the model results leads us to a set of key conclusions: (i) The average composite impact probability over the entire study area corresponds well to the density of observed landslide pixels. Therefore we conclude that the method is valid in general, even though the concept of the zonal release probability bears some conceptual issues that have to be kept in mind. (ii) The parameters used as predictors cannot fully explain the observed distribution of landslides. The size of the release zone influences the composite impact probability to a larger degree than the pixel-based release probability. (iii) The prediction rate increases considerably when excluding the largest, deep-seated landslides from the analysis. We conclude that such landslides are mainly related to geological features hardly reflected in the predictor layers used.
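The r.randomwalk-based workflow above is not reproduced here; the toy sketch below only illustrates the core routing step, sending random downslope walks from assumed release cells across a synthetic DEM and counting how often each cell is reached as a crude impact-probability surface. The cdf weighting and zonal release probability of the full method are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic DEM: a tilted plane with noise (stand-in for a real study area)
ny, nx = 60, 60
yy, xx = np.mgrid[0:ny, 0:nx]
dem = 100.0 - 0.8 * yy + rng.normal(0, 0.3, (ny, nx))

release_cells = [(5, j) for j in range(20, 40)]   # assumed release zone near the ridge
impact_counts = np.zeros_like(dem)
n_walks = 200                                     # random walks per release cell

for (i0, j0) in release_cells:
    for _ in range(n_walks):
        i, j = i0, j0
        for _ in range(150):                      # cap on walk length
            impact_counts[i, j] += 1
            # candidate neighbours that are lower than the current cell
            nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di or dj) and 0 <= i + di < ny and 0 <= j + dj < nx
                    and dem[i + di, j + dj] < dem[i, j]]
            if not nbrs:
                break                             # local sink: the walk stops
            i, j = nbrs[rng.integers(len(nbrs))]  # random downslope step

impact_probability = impact_counts / (len(release_cells) * n_walks)
print(f"max impact probability: {impact_probability.max():.2f}")
```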
Nonlinear hyperspectral unmixing based on sparse non-negative matrix factorization
NASA Astrophysics Data System (ADS)
Li, Jing; Li, Xiaorun; Zhao, Liaoying
2016-01-01
Hyperspectral unmixing aims at extracting pure material spectra, together with their corresponding proportions, from a mixed pixel. Because they model the distribution of real materials more accurately, nonlinear mixing models (non-LMMs) are usually considered to perform better than LMMs in complicated scenarios. In the past years, numerous nonlinear models have been successfully applied to hyperspectral unmixing. However, most non-LMMs only consider the sum-to-one or positivity constraints, while the widespread sparsity of real material mixtures is a factor that cannot be ignored. That is, for non-LMMs, a pixel is usually composed of only a few spectral signatures of different materials from the full set of pure pixels. Thus, in this paper, a smooth sparsity constraint is incorporated into the state-of-the-art Fan nonlinear model to exploit the sparsity feature in the nonlinear model and use it to enhance the unmixing performance. This sparsity-constrained Fan model is solved with non-negative matrix factorization. The algorithm was implemented on synthetic and real hyperspectral data and showed its advantage over the competing algorithms in the experiments.
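The sparsity-constrained Fan model itself is not shown here; as a hedged illustration of the underlying machinery only, the sketch below runs a plain linear NMF unmixing with an L1 sparsity penalty on the abundances via standard multiplicative updates. The bilinear (Fan) interaction terms, the smoothness of the sparsity term, and real data are all omitted.

```python
import numpy as np

rng = np.random.default_rng(8)

n_bands, n_end, n_pixels = 50, 4, 400
W_true = rng.random((n_bands, n_end))                      # synthetic endmember spectra
H_true = rng.dirichlet(np.full(n_end, 0.3), n_pixels).T    # sparse abundances
V = W_true @ H_true + 0.005 * rng.random((n_bands, n_pixels))  # mixed pixels

lam = 0.05                                                 # L1 sparsity weight on abundances
W = rng.random((n_bands, n_end))
H = rng.random((n_end, n_pixels))
eps = 1e-9

for _ in range(300):
    # Multiplicative updates for ||V - WH||^2 + lam * sum(H)
    H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print(f"reconstruction RMSE: {np.sqrt(np.mean((V - W @ H) ** 2)):.4f}")
print(f"fraction of near-zero abundances: {(H < 0.01).mean():.2f}")
```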
Model of the lines of sight for an off-axis optical instrument Pleiades
NASA Astrophysics Data System (ADS)
Sauvage, Dominique; Gaudin-Delrieu, Catherine; Tournier, Thierry
2017-11-01
The future Earth observation missions aim at delivering images with a high resolution and a large field of view. These images have to be processed to obtain a very accurate localisation. To that end, the individual lines of sight of each photosensitive element must be evaluated according to the location of the pixels in the focal plane. With an off-axis Korsch telescope (like PLEIADES), however, the classical model has to be adapted. This is possible by using optical ground measurements made after the integration of the instrument. The processing of these results leads to several parameters, which are functions of the offsets of the focal plane and the real focal length. This study, which has been proposed for the PLEIADES mission, leads to a more elaborate model which provides the relation between the lines of sight and the location of the pixels with very good accuracy, close to the pixel size.
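The adapted off-axis model of the paper is not reproduced here; for orientation only, the sketch below shows the classical pinhole relation between a pixel's focal-plane position and its viewing direction, which is the starting point that the off-axis corrections refine. All numerical values are placeholders, not PLEIADES parameters.

```python
import numpy as np

def line_of_sight(col, row, col0, row0, pixel_pitch_m, focal_length_m):
    """Unit viewing direction in the instrument frame for the detector element at
    (col, row); (col0, row0) is where the optical axis meets the focal plane.
    Simple pinhole geometry without the off-axis terms discussed above."""
    x = (col - col0) * pixel_pitch_m          # focal-plane offsets of the pixel
    y = (row - row0) * pixel_pitch_m
    v = np.array([x, y, focal_length_m])
    return v / np.linalg.norm(v)

# Example with made-up values: pixel 3000 of a line array, 13 um pitch, 12.9 m focal length
print(line_of_sight(col=3000, row=0, col0=6000, row0=0,
                    pixel_pitch_m=13e-6, focal_length_m=12.9))
```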
Sensitivity of geographic information system outputs to errors in remotely sensed data
NASA Technical Reports Server (NTRS)
Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.
1981-01-01
The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
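The analytical model and Landsat experiments of the study are not reproduced here; the toy Monte Carlo below only illustrates the compensation effect described above: independent per-pixel classification errors partially cancel when about 25 pixels are aggregated into one GIS cell and a per-cell decision is made. The error rate and cell counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_cells = 10_000
pixels_per_cell = 25           # RSD pixels aggregated into one GIS cell
p_error = 0.15                 # assumed per-pixel misclassification rate (binary classes)

# True per-cell class: 1 if the true proportion of class-1 pixels exceeds 0.5
true_fraction = rng.uniform(0, 1, n_cells)
true_cell = true_fraction > 0.5

# Simulate noisy pixel labels, then decide each cell from the aggregated proportion
n_class1 = rng.binomial(pixels_per_cell, true_fraction)           # true class-1 pixels per cell
flips_1to0 = rng.binomial(n_class1, p_error)                       # class-1 pixels mislabeled
flips_0to1 = rng.binomial(pixels_per_cell - n_class1, p_error)     # class-0 pixels mislabeled
observed_fraction = (n_class1 - flips_1to0 + flips_0to1) / pixels_per_cell
observed_cell = observed_fraction > 0.5

cell_error = np.mean(observed_cell != true_cell)
print(f"per-pixel error rate: {p_error:.0%}, per-cell decision error: {cell_error:.1%}")
```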
Pixel electronic noise as a function of position in an active matrix flat panel imaging array
NASA Astrophysics Data System (ADS)
Yazdandoost, Mohammad Y.; Wu, Dali; Karim, Karim S.
2010-04-01
We present an analysis of output-referred pixel electronic noise as a function of position in the active matrix array for both active and passive pixel architectures. Three different noise sources for Active Pixel Sensor (APS) arrays are considered: readout period noise, reset period noise, and leakage current noise of the reset TFT during readout. For the state-of-the-art Passive Pixel Sensor (PPS) array, the readout noise of the TFT switch is considered. Noise results are obtained on a small in-house fabricated prototype, with the array connections modeled as RC ladders. The results indicate that the pixels in the rows located in the middle part of the array have less random electronic noise at the output of the off-panel charge amplifier compared to the ones in rows at the two edges of the array. These results can help optimize for clearer images as well as help define the region-of-interest with the best signal-to-noise ratio in an active matrix digital flat panel imaging array.
A Semiparametric Model for Hyperspectral Anomaly Detection
2012-01-01
treeline ) in the presence of natural background clutter (e.g., trees, dirt roads, grasses). Each target consists of about 7 × 4 pixels, and each pixel...vehicles near the treeline in Cube 1 (Figure 1) constitutes the target set, but, since anomaly detectors are not designed to detect a particular target
Davis, Anthony B.; Xu, Feng; Collins, William D.
2015-03-01
Atmospheric hyperspectral VNIR sensing struggles with sub-pixel cloud variability and with limited spectral resolution that mixes molecular lines. Our generalized radiative transfer model addresses both issues with new propagation kernels characterized by power-law decay in space.
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.
Zhang, Xiangjun; Wu, Xiaolin
2008-06-01
The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.
Taguchi, Katsuyuki; Polster, Christoph; Lee, Okkyun; Stierstorfer, Karl; Kappler, Steffen
2016-12-01
An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple clouds. The clouds (thus, the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near pixel boundaries, producing a count at both of the pixels. This is called double-counting with charge sharing. (A photoelectric effect with K-shell fluorescence x-ray emission would result in double-counting as well). As a result, PCD data are spatially and energetically correlated, although the output of individual PCD pixels is Poisson distributed. Major problems include the lack of a detector noise model for the spatio-energetic cross talk and lack of a computationally efficient simulation tool for generating correlated Poisson data. A Monte Carlo (MC) simulation can accurately simulate these phenomena and produce noisy data; however, it is not computationally efficient. In this study, the authors developed a new detector model and implemented it in an efficient software simulator that uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency; (2) incomplete charge collection and ballistic effect; (3) interaction with PCDs via photoelectric effect (with or without K-shell fluorescence x-ray emission, which may escape from the PCDs or be reabsorbed); and (4) electronic noise. The correlation was modeled by using these two simplifying assumptions: energy conservation and mutual exclusiveness. The mutual exclusiveness is that no more than two pixels measure energy from one photon. The effect of model parameters has been studied and results were compared with MC simulations. The agreement, with respect to the spectrum, was evaluated using the reduced χ² statistic, or a weighted sum of squared errors, χ²_red (≥1), where χ²_red = 1 indicates a perfect fit. The model produced spectra with flat field irradiation that qualitatively agree with previous studies. The spectra generated with different model and geometry parameters allowed for understanding the effect of the parameters on the spectrum and the correlation of data. The agreement between the model and MC data was very strong. The mean spectra with 90 keV and 140 kVp agreed exceptionally well: χ²_red values were 1.049 with 90 keV data and 1.007 with 140 kVp data. The degrees of cross talk (in terms of the relative increase from single pixel irradiation to flat field irradiation) were 22% with 90 keV and 19% with 140 kVp for MC simulations, while they were 21% and 17%, respectively, for the model. The covariance was in strong agreement qualitatively, although it was overestimated. The noisy data generation was very efficient, taking less than a CPU minute as opposed to CPU hours for MC simulators. The authors have developed a novel, computationally efficient PCD model that takes into account double-counting and resulting spatio-energetic correlation between PCD pixels. The MC simulation validated the accuracy.
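The published detector model is not reproduced here; the toy sketch below only shows how correlated integer counts in two neighbouring pixels can be generated with a Poisson random number generator: a fraction of photons is double-counted (one count in each pixel), producing positive covariance while each pixel's output remains a counting process. The photon rate and sharing probability are assumed values.

```python
import numpy as np

rng = np.random.default_rng(11)

n_frames = 10_000
mean_photons = 50.0      # mean photons incident on the pixel pair per frame (toy value)
p_share = 0.20           # assumed probability that a photon is double-counted

counts = np.zeros((n_frames, 2), dtype=int)   # registered counts in pixels A and B
for k in range(n_frames):
    n = rng.poisson(mean_photons)
    shared = rng.binomial(n, p_share)         # photons split between both pixels
    single = n - shared
    # non-shared photons land in A or B with equal probability;
    # shared photons produce one count in each pixel (at most two pixels involved)
    a_only = rng.binomial(single, 0.5)
    counts[k, 0] = a_only + shared
    counts[k, 1] = (single - a_only) + shared

cov = np.cov(counts.T)
print(f"mean counts per pixel: {counts.mean(axis=0)}")
print(f"covariance between the two pixels: {cov[0, 1]:.2f}")
```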
Tokuda, Junichi; Mamata, Hatsuho; Gill, Ritu R; Hata, Nobuhiko; Kikinis, Ron; Padera, Robert F; Lenkinski, Robert E; Sugarbaker, David J; Hatabu, Hiroto
2011-04-01
To investigate the impact of nonrigid motion correction on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in patients with solitary pulmonary nodules (SPNs). Misalignment of focal lesions due to respiratory motion in free-breathing dynamic contrast-enhanced MRI (DCE-MRI) precludes obtaining reliable time-intensity curves, which are crucial for pharmacokinetic analysis for tissue characterization. Single-slice 2D DCE-MRI was obtained in 15 patients. Misalignments of SPNs were corrected using nonrigid B-spline image registration. Pixel-wise pharmacokinetic parameters Ktrans, ve, and kep were estimated from both original and motion-corrected DCE-MRI by fitting the two-compartment pharmacokinetic model to the time-intensity curve obtained in each pixel. The "goodness-of-fit" was tested with a χ²-test on a pixel-by-pixel basis to evaluate the reliability of the parameters. The percentages of reliable pixels within the SPNs were compared between the original and motion-corrected DCE-MRI. In addition, the parameters obtained from benign and malignant SPNs were compared. The percentage of reliable pixels in the motion-corrected DCE-MRI was significantly larger than in the original DCE-MRI (P = 4 × 10⁻⁷). Both Ktrans and kep derived from the motion-corrected DCE-MRI showed significant differences between benign and malignant SPNs (P = 0.024 and 0.015, respectively). The study demonstrated the impact of the nonrigid motion correction technique on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in SPNs. Copyright © 2011 Wiley-Liss, Inc.
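The authors' pipeline is not reproduced here; as a hedged illustration of the per-pixel fitting step, the sketch below fits the standard Tofts two-compartment model, Ct(t) = Ktrans ∫ Cp(τ) exp(−kep(t−τ)) dτ, to a synthetic pixel curve with scipy.optimize.curve_fit. The arterial input function, noise level, and parameter values are all made up.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 300, 61)                       # time axis [s], 5 s sampling
dt = t[1] - t[0]
aif = 5.0 * (t / 60.0) * np.exp(-t / 80.0)        # synthetic arterial input function [mM]

def tofts(t, ktrans, kep):
    """Standard Tofts model via discrete convolution of the AIF with an exponential."""
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(aif, kernel)[: len(t)] * dt

# Synthetic "pixel" time-intensity curve with noise (placeholder for DCE-MRI data)
rng = np.random.default_rng(5)
truth = tofts(t, 0.12 / 60.0, 0.5 / 60.0)         # Ktrans and kep given per second
pixel_curve = truth + rng.normal(0, 0.01, t.size)

params, _ = curve_fit(tofts, t, pixel_curve, p0=(0.05 / 60.0, 0.3 / 60.0), maxfev=5000)
print(f"fitted Ktrans = {params[0]*60:.3f} /min, kep = {params[1]*60:.3f} /min")
```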
NASA Astrophysics Data System (ADS)
Wang, Jinliang; Wu, Xuejiao
2010-11-01
Geometric correction of imagery is a basic application of remote sensing technology. Its precision directly impacts the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) The correction errors were more than one pixel, and some of them several pixels, when the polynomial model was used. The correction accuracy was not stable when the Delaunay model was used. The correction errors were less than one pixel when the collinearity equation was used. (2) 6, 9, 25 and 35 GCPs were selected randomly for geometric correction using the polynomial correction model; the best result was obtained when 25 GCPs were used. (3) The contrast ratio of the image corrected using nearest neighbor resampling was the best and its resampling rate the fastest, compared with the cubic convolution and bilinear methods, but the continuity of pixel gray values was not very good. The contrast of the image corrected using cubic convolution was the worst and its computation time was the longest. According to the above results, bilinear resampling gave the best overall result.
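As an illustration of the polynomial correction idea discussed above, the sketch below fits a first-order (affine) polynomial mapping from map coordinates to image coordinates by least squares over a set of GCPs and reports the fit RMSE in pixels. The GCPs and the underlying transform are synthetic, not the Kunming data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ground control points: (map_x, map_y) -> (image_col, image_row)
n_gcp = 25
map_xy = rng.uniform(0, 10_000, (n_gcp, 2))
true_A = np.array([[0.033, 0.002, 12.0],      # assumed affine transform + offsets
                   [-0.001, 0.034, -7.0]])
img_cr = (np.c_[map_xy, np.ones(n_gcp)] @ true_A.T) + rng.normal(0, 0.3, (n_gcp, 2))

# First-order polynomial (affine) model fitted by least squares
G = np.c_[map_xy, np.ones(n_gcp)]              # design matrix [x, y, 1]
coef, *_ = np.linalg.lstsq(G, img_cr, rcond=None)

residuals = img_cr - G @ coef
rmse = np.sqrt((residuals ** 2).mean())
print(f"GCP fit RMSE = {rmse:.2f} pixels")      # sub-pixel residuals are the usual target
```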
A unified tensor level set for image segmentation.
Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong
2010-06-01
This paper presents a new region-based unified tensor level set model for image segmentation. This model introduces a third-order tensor to comprehensively depict the features of pixels, e.g., gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor. The proposed model has four main advantages compared with the traditional representative method. First, involving the Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, considering the local geometrical features, e.g., orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at the boundary location. Third, due to the unified tensor representation of the pixels, the model segments images more accurately and naturally. Fourth, based on a weighted distance definition, the model possesses the capacity to cope with data varying from scalar to vector, and then to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set methods.
Badel-Mogollón, Jaime; Rodríguez-Figueroa, Laura; Parra-Henao, Gabriel
2017-03-29
Due to the lack of information regarding biophysical and spatio-temporal conditions (hydrometeorological and vegetation coverage density) in areas with Triatoma dimidiata in the Colombian departments of Santander and Boyacá, there is a need to elucidate the association patterns of these variables to determine the distribution and control of this species. To make a spatio-temporal analysis of biophysical variables related to the distribution of T. dimidiata observed in the northeast region of Colombia. We used the Intergovernmental Panel on Climate Change Special Report on Emissions Scenarios (IPCC SRES) databases registering vector presence and hydrometeorological data. We studied the variables of environmental temperature, relative humidity, rainfall and vegetation coverage density at regional and local levels, and we conducted spatial geostatistical, descriptive statistical and Fourier temporal series analyses. Temperatures two meters above the ground and at the covered surface ranged from 14.5°C to 18.8°C in the areas with the highest density of T. dimidiata. The environmental temperature fluctuated between 30 and 32°C. Vegetation coverage density and rainfall showed patterns of annual and biannual peaks. Relative humidity values fluctuated from 66.8 to 85.1%. Surface temperature and soil coverage were the variables that best explained the life cycle of T. dimidiata in the area. High relative humidity promoted shelter-seeking and an increase in the geographic distribution during the annual and biannual peaks of regional rainfall. The ecological and anthropic conditions suggest that T. dimidiata is a highly resilient species.
NASA Astrophysics Data System (ADS)
Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.
2017-12-01
Deep learning techniques have been successfully applied to solve many problems in climate and geoscience using massive-scale observed and modeled data. For extreme climate event detection, several models based on deep neural networks have been proposed recently and attain superior performance that overshadows all previous handcrafted, expert-based methods. The issue arising, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two models using deep neural networks: (1) Convolutional Neural Networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high-resolution climate data from low-resolution climate data. Based on our preliminary work, we have presented two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model reconstructs the resolution of the input to the localization CNNs. We present a best-performing network using the pixel recursive super resolution model that synthesizes details of tropical cyclones in ground truth data while enhancing their resolution. Therefore, this approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process to increase the resolution of data.
Effectual switching filter for removing impulse noise using a SCM detector
NASA Astrophysics Data System (ADS)
Yuan, Jin-xia; Zhang, Hong-juan; Ma, Yi-de
2012-03-01
An effectual method is proposed to remove impulse noise from corrupted color images. The spiking cortical model (SCM) is adopted as a noise detector to identify noisy pixels in each channel of color images, and detected noise pixels are saved in three marking matrices. According to the three marking matrices, the detected noisy pixels are divided into two types (type I and type II). They are filtered differently: an adaptive median filter is used for type I and an adaptive vector median for type II. Noise-free pixels are left unchanged. Extensive experiments show that the proposed method outperforms most of the other well-known filters in the aspects of both visual and objective quality measures, and this method can also reduce the possibility of generating color artifacts while preserving image details.
NASA Astrophysics Data System (ADS)
Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie
2016-10-01
Water bodies are a fundamental element in urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging because urban water bodies are mainly small and spectral confusion between water and complex urban features is widespread. The water index (WI) is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed in analyzing the urban environment at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels using a water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive, iteratively optimized neighboring land pixel, respectively; (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatterplot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then used as the starting point to select land-water pixels from the histogram of the WI image, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the land-water mixed pixels for water fraction estimation at the subpixel level. With the assumption that the endmember signature of a target pixel should be more similar to adjacent pixels due to spatial dependence, the water and land endmembers are determined from neighboring pure land or pure water pixels within a given distance. To obtain the most representative endmembers for SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. According to the spectral similarity in a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel level and at the subpixel level were chosen. The results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and that WISMA achieved the best performance in water mapping based on a comprehensive analysis of different accuracy evaluation indexes (RMSE and SE).
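For a land-water mixed pixel with exactly one water and one land endmember, the linear unmixing step reduces to a closed-form least-squares fraction clipped to [0, 1]; the sketch below shows this reduction with placeholder endmember spectra rather than the adaptively selected neighbours described above.

```python
import numpy as np

def water_fraction(pixel, water_em, land_em):
    """Least-squares water fraction for a two-endmember linear mixture,
    constrained to [0, 1] (sum-to-one is implicit with two endmembers)."""
    d = water_em - land_em
    f = np.dot(pixel - land_em, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

# Placeholder endmember spectra (e.g., 6 reflective bands), not real image statistics
water_em = np.array([0.06, 0.05, 0.04, 0.02, 0.01, 0.01])
land_em  = np.array([0.10, 0.12, 0.14, 0.25, 0.30, 0.28])
mixed_pixel = 0.35 * water_em + 0.65 * land_em + 0.002   # synthetic mixed pixel

print(f"estimated water fraction: {water_fraction(mixed_pixel, water_em, land_em):.2f}")
```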
Pixel-based flood mapping from SAR imagery: a comparison of approaches
NASA Astrophysics Data System (ADS)
Landuyt, Lisa; Van Wesemael, Alexandra; Van Coillie, Frieke M. B.; Verhoest, Niko E. C.
2017-04-01
Due to their all-weather, day-and-night capabilities, SAR sensors have been shown to be particularly suitable for flood mapping applications. They can thus provide spatially distributed flood extent data, which are valuable for calibrating, validating and updating flood inundation models. These models are an invaluable tool for water managers to take appropriate measures in times of high water levels. Image analysis approaches to delineate flood extent on SAR imagery are numerous. They can be classified into two categories, i.e. pixel-based and object-based approaches. Pixel-based approaches, e.g. thresholding, are abundant and in general computationally inexpensive. However, large discrepancies between these techniques exist and subjective user intervention is often needed. Object-based approaches require more processing but allow for the integration of additional object characteristics, like contextual information and object geometry, and thus have significant potential to provide an improved classification result. As a benchmark, a selection of pixel-based techniques is applied to an ERS-2 SAR image of the 2006 flood event of the River Dee, United Kingdom. This selection comprises Otsu thresholding, Kittler & Illingworth thresholding, the Fine To Coarse segmentation algorithm and active contour modelling. The different classification results are evaluated and compared by means of several accuracy measures, including binary performance measures.
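Otsu thresholding, the first of the benchmarked pixel-based techniques, is simple enough to sketch directly: it picks the histogram threshold that maximises the between-class variance. The synthetic bimodal values below stand in for SAR backscatter (dark open water vs. brighter land); they are not data from the River Dee scene.

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Exhaustive Otsu threshold: maximise the between-class variance of the histogram."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

rng = np.random.default_rng(4)
backscatter = np.concatenate([rng.normal(-18, 1.5, 3000),   # open water (toy values, dB)
                              rng.normal(-8, 2.0, 7000)])   # land
t = otsu_threshold(backscatter)
flooded = backscatter < t
print(f"Otsu threshold = {t:.1f} dB, flooded fraction = {flooded.mean():.2f}")
```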
SAR Image Change Detection Based on Fuzzy Markov Random Field Model
NASA Astrophysics Data System (ADS)
Zhao, J.; Huang, G.; Zhao, Z.
2018-04-01
Most existing SAR image change detection algorithms only consider the information of single pixels in the different images and do not consider the spatial dependencies among image pixels. The change detection results are therefore susceptible to image noise, and the detection effect is not ideal. A Markov Random Field (MRF) can make full use of the spatial dependence of image pixels and improve detection accuracy. When segmenting the difference image, different categories of regions have a high degree of similarity where they meet, and it is difficult to clearly distinguish the labels of the pixels near the region boundaries. In the traditional MRF method, each pixel is given a hard label during iteration; MRF is thus a hard-decision process, which causes a loss of information. This paper applies the combination of fuzzy theory and MRF to the change detection of SAR images. The experimental results show that the proposed method has a better detection effect than the traditional MRF method.
SVM Pixel Classification on Colour Image Segmentation
NASA Astrophysics Data System (ADS)
Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.
2018-04-01
The aim of image segmentation is to simplify the representation of an image by clustering pixels into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, and more precisely to label every pixel so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic highlighted in this paper. It holds useful application in the fields of concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first we need to recognize the type of colour and the texture used as input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. The classifier is then trained by using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the ability of the SVM classifier are combined by the algorithm to form the final segmented image. The method produces a well-developed segmented image and is efficient, with increased quality and faster processing of the segmented image compared with the other segmentation methods proposed earlier. One recent application is the Light L16 camera.
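As a hedged illustration of the classification step only, the sketch below trains a scikit-learn SVM on synthetic per-pixel feature vectors (colour plus a texture stand-in). The Gabor/steerable filtering and the FCM-based training of the paper are replaced by placeholder features and labels.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Synthetic per-pixel features: (R, G, B, texture energy) for two classes.
# Real features would come from colour values plus filter-bank responses.
n = 2000
class0 = rng.normal([0.2, 0.5, 0.2, 0.3], 0.08, (n, 4))
class1 = rng.normal([0.6, 0.4, 0.3, 0.7], 0.08, (n, 4))
X = np.vstack([class0, class1])
y = np.r_[np.zeros(n), np.ones(n)]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print(f"pixel classification accuracy: {clf.score(X_test, y_test):.3f}")
```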
Bayesian model for matching the radiometric measurements of aerospace and field ocean color sensors.
Salama, Mhd Suhyb; Su, Zhongbo
2010-01-01
A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and against full- and reduced-resolution MERIS data. The model derived the scale difference between a synthesized satellite pixel and point measurements with R² > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in a heterogeneous region. The method is generic and applicable to different sensors.
Application of field dependent polynomial model
NASA Astrophysics Data System (ADS)
Janout, Petr; Páta, Petr; Skala, Petr; Fliegel, Karel; Vítek, Stanislav; Bednář, Jan
2016-09-01
Extremely wide-field imaging systems have many advantages for large display scenes, whether for use in microscopy, all-sky cameras, or in security technologies. The large viewing angle comes at the cost of the aberrations inherent to these imaging systems. Modeling wavefront aberrations using Zernike polynomials has been known for a long time and is widely used. Our method does not model the system aberrations by modeling the wavefront, but directly models the aberrated Point Spread Function of the imaging system. This is a very complicated task, and with conventional methods it was difficult to achieve the desired accuracy. Our optimization technique of searching the coefficients of space-variant Zernike polynomials can be described as a comprehensive model for ultra-wide-field imaging systems. The advantage of this model is that it describes the whole space-variant system, unlike the majority of models, which are partly invariant. A challenge for this model is that the size of the modeled Point Spread Function is comparable to the pixel size. Issues associated with sampling, pixel size, and the pixel sensitivity profile must therefore be taken into account in the design. The model was verified on a series of laboratory test patterns, test images of laboratory light sources and, consequently, on real images obtained by an extremely wide-field imaging system, WILLIAM. Results of modeling this system are presented in this article.
Highly Reflective Multi-stable Electrofluidic Display Pixels
NASA Astrophysics Data System (ADS)
Yang, Shu
Electronic papers (E-papers) refer to displays that mimic the appearance of printed paper while retaining the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays is that reading on paper causes the least eye fatigue, due to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy of any form is needed to sustain the displayed image. To achieve the equivalent visual effect of a paper print, an ideal E-paper has to be highly reflective with good contrast ratio and full-color capability. To sustain the image with zero power consumption, the display pixels need to be bistable, which means the "on" and "off" states are both lowest-energy states. A pixel can change its state only when sufficient external energy is supplied. There are many emerging technologies competing to demonstrate the first ideal E-paper device. However, none is able to achieve satisfactory visual effect, bistability and video speed at the same time. Challenges come from either the inherent physical/chemical properties or the fabrication process. Electrofluidic display is one of the most promising E-paper technologies. It has successfully demonstrated high reflectivity, brilliant color and video-speed operation by moving a colored pigment dispersion between visible and invisible places with the electrowetting force. However, the pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that are able to sustain grayscale levels without any power consumption, while keeping the favorable features of the previous generation of electrofluidic displays. The pixel design, the fabrication method using multiple-layer dry film photoresist lamination, and the physical/optical characterizations are discussed in detail. Based on the pixel structure, preliminary results of a simplified design and fabrication method are demonstrated. As advanced research topics regarding the device's optical performance, an optical model for evaluating reflective displays' light out-coupling efficiency is first established to guide the pixel design; furthermore, aluminum surface diffusers are analytically modeled and then fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. The achieved results establish the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for large-scale signage applications.
Optimum viewing distance for target acquisition
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
2015-05-01
Human visual system (HVS) "resolution" (a.k.a. visual acuity) varies with illumination level, target characteristics, and target contrast. For signage, computer displays, cell phones, and TVs a viewing distance and display size are selected. Then the number of display pixels is chosen such that each pixel subtends 1 min-1. Resolution of low contrast targets is quite different. It is best described by Barten's contrast sensitivity function. Target acquisition models predict maximum range when the display pixel subtends 3.3 min-1. The optimum viewing distance is nearly independent of magnification. Noise increases the optimum viewing distance.
Automatic sub-pixel coastline extraction based on spectral mixture analysis using EO-1 Hyperion data
NASA Astrophysics Data System (ADS)
Hong, Zhonghua; Li, Xuesu; Han, Yanling; Zhang, Yun; Wang, Jing; Zhou, Ruyan; Hu, Kening
2018-06-01
Many megacities (such as Shanghai) are located in coastal areas; therefore, coastline monitoring is critical for urban security and urban development sustainability. A shoreline is defined as the intersection between coastal land and a water surface, and it moves with the seawater edge as tides rise and fall. Remote sensing techniques have increasingly been used for coastline extraction; however, traditional hard classification methods are performed only at the pixel level, and extracting subpixel accuracy using soft classification methods is both challenging and time consuming due to the complex features in coastal regions. This paper presents an automatic sub-pixel coastline extraction method (ASPCE) from hyperspectral satellite imagery that performs coastline extraction based on spectral mixture analysis and, thus, achieves higher accuracy. The ASPCE method consists of three main components: 1) A Water-Vegetation-Impervious-Soil (W-V-I-S) model is first presented to detect mixed W-V-I-S pixels and determine the endmember spectra in coastal regions; 2) The linear spectral mixture unmixing technique based on Fully Constrained Least Squares (FCLS) is applied to the mixed W-V-I-S pixels to estimate seawater abundance; and 3) The spatial attraction model is used to extract the coastline. We tested this new method using EO-1 images from three coastal regions in China: the South China Sea, the East China Sea, and the Bohai Sea. The results showed that the method is accurate and robust. Root mean square error (RMSE) was utilized to evaluate the accuracy by calculating the distance differences between the extracted coastline and the digitized coastline. The classifier's performance was compared with that of the Multiple Endmember Spectral Mixture Analysis (MESMA), Mixture Tuned Matched Filtering (MTMF), Sequential Maximum Angle Convex Cone (SMACC), Constrained Energy Minimization (CEM), and one classical Normalized Difference Water Index (NDWI). The results from the three test sites indicated that the proposed ASPCE method extracted coastlines more efficiently than did the compared methods, and its coastline extraction accuracy corresponded closely to the digitized coastline, with 0.39 pixels, 0.40 pixels, and 0.35 pixels in the three test regions, showing that the ASPCE method achieves an accuracy below 12.0 m (0.40 pixels). Moreover, in the quantitative accuracy assessment for the three test sites, the ASPCE method shows the best performance in coastline extraction, achieving 0.35-pixel accuracy at the Bohai Sea (China) test site. Therefore, the proposed ASPCE method can extract coastlines more accurately than can the hard classification methods or other spectral unmixing methods.
Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J.
2015-01-01
Abstract. Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100 μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200 μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. Current experimental breast CT systems typically use detectors with a pixel size of 194 μm, with 2×2 binning during the acquisition giving an effective pixel size of 388 μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors. PMID:26158095
Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J
2015-04-01
Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100 μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200 μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. Current experimental breast CT systems typically use detectors with a pixel size of 194 μm, with 2×2 binning during the acquisition giving an effective pixel size of 388 μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors.
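As a rough illustration of the fast quadrant-identification idea, the sketch below assigns an interaction to one quadrant of the seed pixel by comparing the charge shared with its four nearest neighbours; the 3×3 charge values and the hit_quadrant helper are hypothetical and ignore noise thresholds, calibration, and K-escape losses.

```python
import numpy as np

def hit_quadrant(charges):
    """Estimate the quadrant of the seed pixel where an x-ray interacted,
    using charge shared with the four nearest neighbours.

    charges : 3x3 array of collected charge centred on the seed pixel.
    Returns (row_half, col_half): 'top'/'bottom' and 'left'/'right'.
    """
    c = np.asarray(charges, dtype=float)
    up, down = c[0, 1], c[2, 1]
    left, right = c[1, 0], c[1, 2]
    row_half = 'top' if up > down else 'bottom'
    col_half = 'left' if left > right else 'right'
    return row_half, col_half

# toy cluster: most charge in the seed pixel, a little shared up and right
cluster = [[0.04, 0.10, 0.02],
           [0.03, 0.70, 0.08],
           [0.01, 0.02, 0.00]]
print(hit_quadrant(cluster))   # ('top', 'right')
```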
Exploring the limits of identifying sub-pixel thermal features using ASTER TIR data
Vaughan, R.G.; Keszthelyi, L.P.; Davies, A.G.; Schneider, D.J.; Jaworowski, C.; Heasler, H.
2010-01-01
Understanding the characteristics of volcanic thermal emissions and how they change with time is important for forecasting and monitoring volcanic activity and potential hazards. Satellite instruments view volcanic thermal features across the globe at various temporal and spatial resolutions. Thermal features that may be a precursor to a major eruption, or indicative of important changes in an on-going eruption, can be subtle, making them challenging to reliably identify with satellite instruments. The goal of this study was to explore the limits of the types and magnitudes of thermal anomalies that could be detected using satellite thermal infrared (TIR) data. Specifically, the characterization of sub-pixel thermal features with a wide range of temperatures is considered using ASTER multispectral TIR data. First, theoretical calculations were made to define a "thermal mixing detection threshold" for ASTER, which quantifies the limits of ASTER's ability to resolve sub-pixel thermal mixing over a range of hot target temperatures and percent pixel areas. Then, ASTER TIR data were used to model sub-pixel thermal features at the Yellowstone National Park geothermal area (hot spring pools with temperatures from 40 to 90 °C) and at Mount Erebus Volcano, Antarctica (an active lava lake with temperatures from 200 to 800 °C). Finally, various sources of uncertainty in sub-pixel thermal calculations were quantified for these empirical measurements, including pixel resampling, atmospheric correction, and background temperature and emissivity assumptions.
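In its simplest two-component form, the sub-pixel thermal mixing referred to above is L_pixel = f*B(T_hot) + (1-f)*B(T_bg), which can be inverted for the hot-area fraction f if both temperatures are assumed. A minimal single-band sketch, assuming unit emissivity and no atmosphere (so a simplification of the actual ASTER processing), is:

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23

def planck(wav_um, T):
    """Spectral radiance (W m-2 sr-1 m-1) of a blackbody at wavelength wav_um (micrometres)."""
    lam = wav_um * 1e-6
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T))

def hot_fraction(L_pixel, wav_um, T_hot, T_bg):
    """Solve L_pixel = f*B(T_hot) + (1-f)*B(T_bg) for the sub-pixel hot-area fraction f."""
    B_hot, B_bg = planck(wav_um, T_hot), planck(wav_um, T_bg)
    return (L_pixel - B_bg) / (B_hot - B_bg)

# example: a 90-m TIR pixel at 10.6 um, 1% covered by a 90 °C hot spring on 0 °C ground
f_true = 0.01
L = f_true * planck(10.6, 363.0) + (1 - f_true) * planck(10.6, 273.0)
print(hot_fraction(L, 10.6, 363.0, 273.0))   # recovers ~0.01
```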
Simulation study of pixel detector charge digitization
NASA Astrophysics Data System (ADS)
Wang, Fuyue; Nachman, Benjamin; Sciveres, Maurice; Lawrence Berkeley National Laboratory Team
2017-01-01
Reconstruction of tracks from nearly overlapping particles, called Tracking in Dense Environments (TIDE), is an increasingly important component of many physics analyses at the Large Hadron Collider as signatures involving highly boosted jets are investigated. TIDE makes use of the charge distribution inside a pixel cluster to resolve tracks that share one or more of their pixel detector hits. In practice, the pixel charge is discretized using the Time-over-Threshold (ToT) technique. More charge information is better for discrimination, but more challenging for designing and operating the detector. A model of the silicon pixels has been developed in order to study the impact of the precision of the digitized charge distribution on distinguishing multi-particle clusters. The output of the GEANT4-based simulation is used to train neural networks that predict the multiplicity and location of particles depositing energy inside one cluster of pixels. By studying the multi-particle cluster identification efficiency and position resolution, we quantify the trade-off between the number of ToT bits and low-level tracking inputs. As both ATLAS and CMS are designing upgraded detectors, this work provides guidance for the pixel module designs to meet TIDE needs. Work funded by the China Scholarship Council and the Office of High Energy Physics of the U.S. Department of Energy under contract DE-AC02-05CH11231.
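To make the charge-digitization trade-off concrete, here is a small sketch of linear Time-over-Threshold quantisation of a pixel cluster; the threshold, full-scale charge, and bit depths are illustrative assumptions, and real front-end ToT calibrations are generally nonlinear.

```python
import numpy as np

def charge_to_tot(charge_e, threshold_e=3000.0, full_scale_e=30000.0, n_bits=4):
    """Quantise a pixel charge (electrons) into an n-bit Time-over-Threshold code.
    Charges below threshold give 0; the rest are linearly binned up to full scale."""
    levels = 2**n_bits - 1
    code = np.floor(levels * np.clip(charge_e - threshold_e, 0.0, None)
                    / (full_scale_e - threshold_e))
    return np.clip(code, 0, levels).astype(int)

cluster_charge = np.array([[1200., 8000., 2500.],
                           [4500., 22000., 6000.]])
print(charge_to_tot(cluster_charge, n_bits=4))
print(charge_to_tot(cluster_charge, n_bits=2))   # coarser charge information per pixel
```

Fewer ToT bits compress the same cluster into fewer distinguishable charge levels, which is exactly the input degradation whose effect on the neural-network cluster splitting is being quantified.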
Zhao, C; Vassiljev, N; Konstantinidis, A C; Speller, R D; Kanicki, J
2017-03-07
High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm -1 ) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
NASA Astrophysics Data System (ADS)
Zhao, C.; Vassiljev, N.; Konstantinidis, A. C.; Speller, R. D.; Kanicki, J.
2017-03-01
High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm-1) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
NASA Astrophysics Data System (ADS)
Roberts, G.; Wooster, M. J.; Xu, W.; Freeborn, P. H.; Morcrette, J.-J.; Jones, L.; Benedetti, A.; Kaiser, J.
2015-06-01
Characterising the dynamics of landscape scale wildfires at very high temporal resolutions is best achieved using observations from Earth Observation (EO) sensors mounted onboard geostationary satellites. As a result, a number of operational active fire products have been developed from the data of such sensors. Examples are the Fire Radiative Power (FRP) products, FRP-PIXEL and FRP-GRID, generated by the Land Surface Analysis Satellite Applications Facility (LSA SAF) from imagery collected by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on-board the Meteosat Second Generation (MSG) series of geostationary EO satellites. The processing chain developed to deliver these FRP products detects SEVIRI pixels containing actively burning fires and characterises their FRP output across four geographic regions covering Europe, part of South America and northern and southern Africa. The FRP-PIXEL product contains the highest spatial and temporal resolution FRP dataset, whilst the FRP-GRID product contains a spatio-temporal summary that includes bias adjustments for cloud cover and the non-detection of low FRP fire pixels. Here we evaluate these two products against active fire data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS), and compare the results to those for three alternative active fire products derived from SEVIRI imagery. The FRP-PIXEL product is shown to detect a substantially greater number of active fire pixels than do alternative SEVIRI-based products, and comparison to MODIS on a per-fire basis indicates a strong agreement and low bias in terms of FRP values. However, low FRP fire pixels remain undetected by SEVIRI, with active fire pixel commission and omission errors relative to MODIS ranging between 9-13% and 65-77%, respectively, in Africa. Higher errors of omission result in greater underestimation of regional FRP totals relative to those derived from simultaneously collected MODIS data, ranging from 35% over the Northern Africa region to 89% over the European region. High errors of active fire omission and FRP underestimation are found over Europe and South America, and result from SEVIRI's larger pixel area over these regions. An advantage of using FRP for characterising wildfire emissions is the ability to do so very frequently and in near real time (NRT). To illustrate the potential of this approach, wildfire fuel consumption rates derived from the SEVIRI FRP-PIXEL product are used to characterise smoke emissions of the 2007 Peloponnese wildfires within the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS), as a demonstration of what can be achieved when using geostationary active fire data within the Copernicus Atmosphere Monitoring Service (CAMS). Qualitative comparison of the modelled smoke plumes with MODIS optical imagery illustrates that the model captures the temporal and spatial dynamics of the plume very well, and that high temporal resolution emissions estimates such as those available from geostationary orbit are important for capturing the sub-daily variability in smoke plume parameters such as aerosol optical depth (AOD), which are increasingly less well resolved using daily or coarser temporal resolution emissions datasets. Quantitative comparison of modelled AOD with coincident MODIS and AERONET AOD indicates that the former is overestimated by ∼20-30%, but captures the observed AOD dynamics with a high degree of fidelity.
The case study highlights the potential of using geostationary FRP data to drive fire emissions estimates for use within atmospheric transport models such as those currently implemented as part of the Monitoring Atmospheric Composition and Climate (MACC) programme within the CAMS.
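As a hedged illustration of how FRP time series translate into smoke emissions, the sketch below integrates FRP over time and applies a laboratory-derived biomass conversion factor of roughly 0.368 kg per MJ of fire radiative energy plus an assumed particulate emission factor; both numbers and the 15-minute slot length are assumptions for illustration, not the coefficients of the FRP-PIXEL processing chain or of the IFS/CAMS emission system.

```python
def fuel_consumption_rate(frp_mw, kg_per_mj=0.368):
    """Convert Fire Radiative Power (MW) to a biomass combustion rate (kg s-1).
    The ~0.368 kg/MJ factor is a widely cited laboratory value used here as an
    assumption, not necessarily the operational coefficient."""
    return frp_mw * kg_per_mj

def emitted_species(fuel_kg, emission_factor_g_per_kg):
    """Smoke-constituent mass (g) from burned fuel and a species emission factor."""
    return fuel_kg * emission_factor_g_per_kg

frp_time_series_mw = [150.0, 480.0, 320.0]        # hypothetical per-slot regional FRP totals
dt_s = 900.0                                       # 15-minute geostationary imaging slots
fuel = sum(fuel_consumption_rate(f) * dt_s for f in frp_time_series_mw)
print(fuel, emitted_species(fuel, 6.0))            # assumed particulate emission factor, g/kg
```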
Feature discrimination/identification based upon SAR return variations
NASA Technical Reports Server (NTRS)
Rasco, W. A., Sr.; Pietsch, R.
1978-01-01
The look-to-look variation statistics in the returns recorded in-flight by a digital, real-time SAR system are analyzed. The finding that the variations in the look-to-look returns from different classes carry information content unique to those classes was illustrated by a model based on four variants derived from the four-look in-flight SAR data under study. The model was limited to four classes of returns: mowed grass on an athletic field, rough unmowed grass and weeds on a large vacant field, young fruit trees in a large orchard, and metal mobile homes and storage buildings in a large mobile home park. The data population, in excess of 1000 returns, represented over 250 individual pixels from the four classes. The multivariate discriminant model operated on the set of returns for each pixel and assigned that pixel to one of the four classes, based on the target variants and the probability distribution function of the four variants for each class.
NASA Astrophysics Data System (ADS)
Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.
2018-01-01
An evaluation of the use of median filters in the reduction of dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. Due to the large number of photosites, this provides an image sensor with very high sensitivity but also makes them prone to noise effects such as hot-pixels. Similar to earlier research with older models of smartphone, no appreciable temperature effects were observed in the overall average pixel values for images taken in ambient temperatures between 5 °C and 25 °C. In this research, hot-pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median-filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the temperature effects' uniformity masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot-pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot-pixels were also reduced by decreasing image resolution. The method outlined in this research provides a methodology to characterise the dark noise behavior of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
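One plausible way to realise the reported 7×7 median-filter correction with a 9 DN hot-pixel threshold is sketched below; the synthetic dark frame, Poisson noise level, and median-replacement strategy are assumptions for illustration rather than the exact procedure used in the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def find_hot_pixels(dark_frame, kernel=7, threshold_dn=9):
    """Flag hot pixels in a dark frame: pixels exceeding the local median
    (kernel x kernel window) by more than threshold_dn digital numbers."""
    local_median = median_filter(dark_frame.astype(float), size=kernel)
    return (dark_frame - local_median) > threshold_dn

def clean_dark_frame(dark_frame, kernel=7, threshold_dn=9):
    """Replace flagged hot pixels with the local median value."""
    mask = find_hot_pixels(dark_frame, kernel, threshold_dn)
    cleaned = dark_frame.astype(float).copy()
    cleaned[mask] = median_filter(dark_frame.astype(float), size=kernel)[mask]
    return cleaned, mask

rng = np.random.default_rng(1)
dark = rng.poisson(2.0, size=(200, 200)).astype(float)   # synthetic dark noise
dark[50, 60] = 40.0                                       # injected hot pixel
cleaned, mask = clean_dark_frame(dark)
print(mask.sum(), cleaned[50, 60])                        # hot pixel flagged and replaced
```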
NASA Technical Reports Server (NTRS)
Protopapa, S.; Grundy, W. M.; Reuter, D. C.; Hamilton, D. P.; Dalle Ore, C. M.; Cook, J. C.; Cruikshank, D. P.; Schmitt, B.; Philippe, S.; Quirico, E.;
2016-01-01
On July 14th 2015, NASA's New Horizons mission gave us an unprecedented detailed view of the Pluto system. The complex compositional diversity of Pluto's encounter hemisphere was revealed by the Ralph/LEISA infrared spectrometer on board of New Horizons. We present compositional maps of Pluto defining the spatial distribution of the abundance and textural properties of the volatiles methane and nitrogen ices and non-volatiles water ice and tholin. These results are obtained by applying a pixel-by-pixel Hapke radiative transfer model to the LEISA scans. Our analysis focuses mainly on the large scale latitudinal variations of methane and nitrogen ices and aims at setting observational constraints to volatile transport models. Specifically, we find three latitudinal bands: the first, enriched in methane, extends from the pole to 55°N, the second dominated by nitrogen, continues south to 35°N, and the third, composed again mainly of methane, reaches 20°N. We demonstrate that the distribution of volatiles across these surface units can be explained by differences in insolation over the past few decades. The latitudinal pattern is broken by Sputnik Planitia, a large reservoir of volatiles, with nitrogen playing the most important role. The physical properties of methane and nitrogen in this region are suggestive of the presence of a cold trap or possible volatile stratification. Furthermore our modeling results point to a possible sublimation transport of nitrogen from the northwest edge of Sputnik Planitia toward the south.
NASA Astrophysics Data System (ADS)
Protopapa, S.; Grundy, W. M.; Reuter, D. C.; Hamilton, D. P.; Dalle Ore, C. M.; Cook, J. C.; Cruikshank, D. P.; Schmitt, B.; Philippe, S.; Quirico, E.; Binzel, R. P.; Earle, A. M.; Ennico, K.; Howett, C. J. A.; Lunsford, A. W.; Olkin, C. B.; Parker, A.; Singer, K. N.; Stern, A.; Verbiscer, A. J.; Weaver, H. A.; Young, L. A.; New Horizons Science Team
2017-05-01
On July 14th 2015, NASA's New Horizons mission gave us an unprecedented detailed view of the Pluto system. The complex compositional diversity of Pluto's encounter hemisphere was revealed by the Ralph/LEISA infrared spectrometer on board of New Horizons. We present compositional maps of Pluto defining the spatial distribution of the abundance and textural properties of the volatiles methane and nitrogen ices and non-volatiles water ice and tholin. These results are obtained by applying a pixel-by-pixel Hapke radiative transfer model to the LEISA scans. Our analysis focuses mainly on the large scale latitudinal variations of methane and nitrogen ices and aims at setting observational constraints to volatile transport models. Specifically, we find three latitudinal bands: the first, enriched in methane, extends from the pole to 55°N, the second dominated by nitrogen, continues south to 35°N, and the third, composed again mainly of methane, reaches 20°N. We demonstrate that the distribution of volatiles across these surface units can be explained by differences in insolation over the past few decades. The latitudinal pattern is broken by Sputnik Planitia, a large reservoir of volatiles, with nitrogen playing the most important role. The physical properties of methane and nitrogen in this region are suggestive of the presence of a cold trap or possible volatile stratification. Furthermore our modeling results point to a possible sublimation transport of nitrogen from the northwest edge of Sputnik Planitia toward the south.
Rolling Band Artifact Flagging in the Kepler Data Pipeline
NASA Astrophysics Data System (ADS)
Clarke, Bruce; Kolodziejczak, Jeffery J; Caldwell, Douglas A.
2014-06-01
Instrument-induced artifacts in the raw Kepler pixel data include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, manifestations of drifting moiré pattern as locally correlated nonstationary noise and rolling bands in the images. These systematics find their way into the calibrated pixel time series and ultimately into the target flux time series. The Kepler pipeline module Dynablack models the FGS crosstalk artifacts using a combination of raw science pixel data, full frame images, reverse-clocked pixel data and ancillary temperature data. The calibration module (CAL) uses the fitted Dynablack models to remove FGS crosstalk artifacts in the calibrated pixels by adjusting the black level correction per cadence. Dynablack also detects and flags spatial regions and time intervals of strong time-varying black-level. These rolling band artifact (RBA) flags are produced on a per row per cadence basis by searching for transit signatures in the Dynablack fit residuals. The Photometric Analysis module (PA) generates per target per cadence data quality flags based on the Dynablack RBA flags. Proposed future work includes using the target data quality flags as a basis for de-weighting in the Presearch Data Conditioning (PDC), Transiting Planet Search (TPS) and Data Validation (DV) pipeline modules. We discuss the effectiveness of RBA flagging for downstream users and illustrate with some affected light curves. We also discuss the implementation of Dynablack in the Kepler data pipeline and present results regarding the improvement in calibrated pixels and the expected improvement in cotrending performance as a result of including FGS corrections in the calibration. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
A BLIND METHOD TO DETREND INSTRUMENTAL SYSTEMATICS IN EXOPLANETARY LIGHT CURVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morello, G., E-mail: giuseppe.morello.11@ucl.ac.uk
2015-07-20
The study of the atmospheres of transiting exoplanets requires a photometric precision, and repeatability, of one part in ∼10^4. This is beyond the original calibration plans of current observatories, hence the necessity to disentangle the instrumental systematics from the astrophysical signals in raw data sets. Most methods used in the literature are based on an approximate instrument model. The choice of parameters of the model and their functional forms can sometimes be subjective, causing controversies in the literature. Recently, Morello et al. (2014, 2015) have developed a non-parametric detrending method that gave coherent and repeatable results when applied to Spitzer/IRAC data sets that were debated in the literature. Said method is based on independent component analysis (ICA) of individual pixel time-series, hereafter “pixel-ICA”. The main purpose of this paper is to investigate the limits and advantages of pixel-ICA on a series of simulated data sets with different instrument properties, and a range of jitter timescales and shapes, non-stationarity, sudden change points, etc. The performances of pixel-ICA are compared against the ones of other methods, in particular polynomial centroid division and the pixel-level decorrelation method. We find that in simulated cases pixel-ICA performs as well or better than other methods, and it also guarantees a higher degree of objectivity, because of its purely statistical foundation with no prior information on the instrument systematics. The results of this paper, together with previous analyses of Spitzer/IRAC data sets, suggest that photometric precision and repeatability of one part in 10^4 can be achieved with current infrared space instruments.
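A minimal sketch of the pixel-ICA idea, using scikit-learn's FastICA on toy per-pixel light curves, is given below; the box-shaped transit, pointing-jitter systematic, mixing coefficients, and component count are all synthetic assumptions rather than Spitzer/IRAC data or the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical raw photometry: n_pixels time series from the photometric aperture.
# pixel-ICA treats each pixel light curve as a mixture of independent source signals
# (the transit plus instrument systematics) and unmixes them blindly.
rng = np.random.default_rng(2)
n_time, n_pixels = 2000, 9
t = np.linspace(0.0, 1.0, n_time)
transit = np.where(np.abs(t - 0.5) < 0.05, -1.0, 0.0)      # toy box-shaped transit
jitter = np.sin(40 * np.pi * t) * np.exp(-t)               # toy pointing systematic
sources = np.vstack([transit, jitter])
mixing = rng.uniform(0.2, 1.0, size=(n_pixels, 2))         # each pixel sees both signals
pixels = mixing @ sources + 0.02 * rng.normal(size=(n_pixels, n_time))

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(pixels.T).T                 # (2, n_time) independent components
# The component most correlated with a transit template would be kept as the
# astrophysical signal; the remaining components are treated as systematics.
print(components.shape)
```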
Modeling Charge Collection in Detector Arrays
NASA Technical Reports Server (NTRS)
Hardage, Donna (Technical Monitor); Pickel, J. C.
2003-01-01
A detector array charge collection model has been developed for use as an engineering tool to aid in the design of optical sensor missions for operation in the space radiation environment. This model is an enhancement of the prototype array charge collection model that was developed for the Next Generation Space Telescope (NGST) program. The primary enhancements were accounting for drift-assisted diffusion by Monte Carlo modeling techniques and implementing the modeling approaches in a Windows-based code. The modeling is concerned with integrated charge collection within discrete pixels in the focal plane array (FPA), with high fidelity spatial resolution. It is applicable to all detector geometries including monolithic charge-coupled devices (CCDs), Active Pixel Sensors (APS) and hybrid FPA geometries based on a detector array bump-bonded to a readout integrated circuit (ROIC).
Pixel by pixel: the evolving landscapes of remote sensing.
Sally Duncan
1999-01-01
This issue of "Science Findings" focuses on remote sensing research and how it can be used to assess a landscape. The work of PNW Research Station scientists Tom Spies and Warren Cohen and their use of satellite technology in developing the coastal landscape analysis and modeling study (CLAMS) is featured. The CLAMS study area includes more than 5 million...
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
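A minimal sketch of replacing a circuit model with a low-degree per-pixel polynomial is shown below; the logarithmic response, parameter values, and polynomial degree are invented for illustration, and a real implementation would use the paper's fixed-point arithmetic and look-up tables rather than floating point.

```python
import numpy as np

def calibrate_pixel(luminance, response, degree=3):
    """Fit a low-degree polynomial mapping a pixel's (monotonic, nonlinear) response
    back to log-luminance, exploiting monotonicity instead of a circuit model."""
    return np.polyfit(response, np.log(luminance), degree)

def correct_pixel(response, coeffs):
    """Apply the per-pixel polynomial to produce a corrected (log-domain) output,
    removing that pixel's fixed pattern noise."""
    return np.polyval(coeffs, response)

# toy logarithmic pixel: y = a + b*log(L) with per-pixel offset/gain mismatch (FPN)
rng = np.random.default_rng(3)
L = np.logspace(0, 4, 50)                       # calibration luminances
a, b = 120.0 + rng.normal(0, 5), 20.0 + rng.normal(0, 1)
y = a + b * np.log(L) + rng.normal(0, 0.2, L.size)
coeffs = calibrate_pixel(L, y)
print(np.allclose(correct_pixel(y, coeffs), np.log(L), atol=0.1))   # True
```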
The Area Coverage of Geophysical Fields as a Function of Sensor Field-of-View
NASA Technical Reports Server (NTRS)
Key, Jeffrey R.
1994-01-01
In many remote sensing studies of geophysical fields such as clouds, land cover, or sea ice characteristics, the fractional area coverage of the field in an image is estimated as the proportion of pixels that have the characteristic of interest (i.e., are part of the field) as determined by some thresholding operation. The effect of sensor field-of-view on this estimate is examined by modeling the unknown distribution of subpixel area fraction with the beta distribution, whose two parameters depend upon the true fractional area coverage, the pixel size, and the spatial structure of the geophysical field. Since it is often not possible to relate digital number, reflectance, or temperature to subpixel area fraction, the statistical models described are used to determine the effect of pixel size and thresholding operations on the estimate of area fraction for hypothetical geophysical fields. Examples are given for simulated cumuliform clouds and linear openings in sea ice, whose spatial structures are described by an exponential autocovariance function. It is shown that the rate and direction of change in total area fraction with changing pixel size depends on the true area fraction, the spatial structure, and the thresholding operation used.
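The effect can be reproduced numerically: if the sub-pixel area fraction within each pixel follows a beta distribution, the pixel-counting estimate for a given threshold is simply the beta tail probability, while the true area fraction is the distribution mean. The parameter values and thresholds below are illustrative, not fitted to any of the geophysical fields in the paper.

```python
from scipy.stats import beta

# Suppose the sub-pixel cloud fraction within each pixel follows Beta(a, b)
# (parameters here are illustrative only).
a, b = 0.4, 1.6
true_fraction = a / (a + b)                 # area-weighted truth = mean of the Beta

# Pixel counting labels a pixel "cloud" if its sub-pixel fraction exceeds the
# threshold, so the estimated fraction is the tail probability P(X > threshold).
for pixel_threshold in (0.25, 0.5, 0.75):
    est = beta.sf(pixel_threshold, a, b)    # survival function = 1 - CDF
    print(pixel_threshold, true_fraction, est)
```

The printed estimates move above or below the true fraction as the threshold changes, which is the threshold- and pixel-size-dependent bias described in the abstract.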
Bayesian Model for Matching the Radiometric Measurements of Aerospace and Field Ocean Color Sensors
Salama, Mhd. Suhyb; Su, Zhongbo
2010-01-01
A Bayesian model is developed to match aerospace ocean color observation to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full and reduced resolutions of MERIS data. The model derived the scale difference between synthesized satellite pixel and point measurements with R2 > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of reduced resolution MERIS image are derived with less than 12% of relative errors in heterogeneous region. The method is generic and applicable to different sensors. PMID:22163615
Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi
2012-01-01
This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image is then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates. PMID:22969369
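A minimal sketch of classifying pixels by their distance to linear colour models (lines in RGB space) follows; the two colour axes, the image, and the nearest-line rule are hypothetical stand-ins for the calibrated peach and background models of the paper.

```python
import numpy as np

def distance_to_color_line(pixels_rgb, line_direction):
    """Euclidean distance from RGB pixels to a linear colour model: a line through
    the origin of RGB space along line_direction (e.g. a 'red peach' axis)."""
    u = np.asarray(line_direction, float)
    u = u / np.linalg.norm(u)
    p = pixels_rgb.reshape(-1, 3).astype(float)
    proj = (p @ u)[:, None] * u              # projection of each pixel onto the line
    return np.linalg.norm(p - proj, axis=1).reshape(pixels_rgb.shape[:-1])

# classify pixels by the nearest of two hypothetical colour lines (peach vs foliage)
peach_axis   = [200.0, 60.0, 70.0]
foliage_axis = [60.0, 140.0, 50.0]
img = np.random.default_rng(4).integers(0, 256, size=(4, 4, 3))
is_peach = (distance_to_color_line(img, peach_axis)
            < distance_to_color_line(img, foliage_axis))
print(is_peach)
```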
Spectral Unmixing With Multiple Dictionaries
NASA Astrophysics Data System (ADS)
Cohen, Jeremy E.; Gillis, Nicolas
2018-02-01
Spectral unmixing aims at recovering the spectral signatures of materials, called endmembers, mixed in a hyperspectral or multispectral image, along with their abundances. A typical assumption is that the image contains one pure pixel per endmember, in which case spectral unmixing reduces to identifying these pixels. Many fully automated methods have been proposed in recent years, but little work has been done to allow users to select, manually or using a segmentation algorithm, areas where pure pixels are present. Additionally, in a non-blind approach, several spectral libraries may be available rather than a single one, with a fixed number (or an upper or lower bound) of endmembers to choose from each. In this paper, we propose a multiple-dictionary constrained low-rank matrix approximation model that addresses these two problems. We propose an algorithm to compute this model, dubbed M2PALS, and its performance is discussed on both synthetic and real hyperspectral images.
Phase information contained in meter-scale SAR images
NASA Astrophysics Data System (ADS)
Datcu, Mihai; Schwarz, Gottfried; Soccorsi, Matteo; Chaabouni, Houda
2007-10-01
The properties of single look complex SAR satellite images have already been analyzed by many investigators. A common belief is that, apart from inverse SAR methods or polarimetric applications, no information can be gained from the phase of each pixel. This belief is based on the assumption that we obtain uniformly distributed random phases when a sufficient number of small-scale scatterers are mixed in each image pixel. However, the random phase assumption no longer holds for typical high resolution urban remote sensing scenes, where a limited number of prominent human-made scatterers with near-regular shape and sub-meter size lead to correlated phase patterns. If the pixel size shrinks to a critical threshold of about 1 meter, the reflectance of built-up urban scenes becomes dominated by typical metal reflectors, corner-like structures, and multiple scattering. The resulting phases are hard to model, but one can try to classify a scene based on the phase characteristics of neighboring image pixels. We provide a "cooking recipe" of how to analyze existing phase patterns that extend over neighboring pixels.
NASA Technical Reports Server (NTRS)
Shimabukuro, Yosio Edemir; Smith, James A.
1991-01-01
Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment considering three components within the pixels, namely eucalyptus, soil (understory), and shade, was performed. The generated fraction images for shade (shade image) derived from these two methods were compared by considering the performance and computer time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in the pixel is related to different eucalyptus ages.
Where can pixel counting area estimates meet user-defined accuracy requirements?
NASA Astrophysics Data System (ADS)
Waldner, François; Defourny, Pierre
2017-08-01
Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the estimates are below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.
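The bias decomposition can be illustrated with simple arithmetic: under asymmetric errors, the mapped class fraction is the true fraction reduced by omission plus the complement inflated by commission. The numbers below are illustrative, not South African cropland statistics.

```python
def pixel_counting_estimate(true_fraction, omission_rate, commission_rate):
    """Mapped class fraction under asymmetric errors: pixels of the class are missed
    at omission_rate, while other pixels are wrongly included at commission_rate
    (all quantities are area proportions)."""
    return true_fraction * (1 - omission_rate) + (1 - true_fraction) * commission_rate

# illustrative numbers only: 30% cropland, 10% omission, 5% commission
true_p = 0.30
est = pixel_counting_estimate(true_p, 0.10, 0.05)
print(est, est - true_p)    # 0.305, i.e. a bias of +0.5 percentage points
```

When omission and commission are balanced the two terms cancel and pixel counting is unbiased; the framework in the paper asks how much residual imbalance is tolerable once the resolution-driven part of the bias is already committed.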
New DTM Extraction Approach from Airborne Images Derived Dsm
NASA Astrophysics Data System (ADS)
Mousa, Y. A.; Helmholz, P.; Belton, D.
2017-05-01
In this work, a new filtering approach is proposed for fully automatic Digital Terrain Model (DTM) extraction from Digital Surface Models (DSMs) derived from very high resolution airborne images. Our approach represents an enhancement of the existing DTM extraction algorithm Multi-directional and Slope Dependent (MSD) by proposing parameters that are more reliable for the selection of ground pixels and the pixelwise classification. To achieve this, four main steps are implemented. Firstly, 8 well-distributed scanlines are used to search for minima as ground points within a pre-defined filtering window size. These selected ground points are stored with their positions on a 2D surface to create a network of ground points. Then, an initial DTM is created using an interpolation method to fill the gaps in the 2D surface. Afterwards, a pixel-to-pixel comparison between the initial DTM and the original DSM is performed, utilising pixelwise classification of ground and non-ground pixels by applying a vertical height threshold. Finally, the pixels classified as non-ground are removed and the remaining holes are filled. The approach is evaluated using the Vaihingen benchmark dataset provided by the ISPRS working group III/4. The evaluation includes the comparison of our approach, denoted as the Network of Ground Points (NGPs) algorithm, with the DTM created based on MSD as well as a reference DTM generated from LiDAR data. The results show that our proposed approach outperforms the MSD approach.
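A much-simplified sketch of the minimum-seeding and height-threshold steps is given below; the regular block search (instead of the 8 scanlines), window size, 2 m threshold, and interpolation choices are assumptions for illustration, not the NGPs parameters evaluated on the Vaihingen benchmark.

```python
import numpy as np
from scipy.interpolate import griddata

def simple_dtm(dsm, window=32, height_threshold=2.0):
    """Crude minimum-filter DTM sketch: take local minima as ground seeds,
    interpolate an initial DTM, then flag pixels far above it as non-ground."""
    rows, cols = dsm.shape
    seeds, values = [], []
    for r in range(0, rows, window):
        for c in range(0, cols, window):
            block = dsm[r:r + window, c:c + window]
            idx = np.unravel_index(np.argmin(block), block.shape)
            seeds.append((r + idx[0], c + idx[1]))
            values.append(block[idx])
    grid_r, grid_c = np.mgrid[0:rows, 0:cols]
    initial_dtm = griddata(np.array(seeds), np.array(values),
                           (grid_r, grid_c), method='linear')
    # fill border gaps left by linear interpolation with nearest-neighbour values
    nearest = griddata(np.array(seeds), np.array(values),
                       (grid_r, grid_c), method='nearest')
    initial_dtm = np.where(np.isnan(initial_dtm), nearest, initial_dtm)
    non_ground = (dsm - initial_dtm) > height_threshold
    return initial_dtm, non_ground

dsm = np.random.default_rng(5).random((128, 128))   # gently varying synthetic terrain
dsm[40:60, 40:60] += 10.0                            # a building on the terrain
dtm, mask = simple_dtm(dsm)
print(mask[50, 50], mask[10, 10])                    # True (building), False (ground)
```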
X-ray characterization of a multichannel smart-pixel array detector.
Ross, Steve; Haji-Sheikh, Michael; Huntington, Andrew; Kline, David; Lee, Adam; Li, Yuelin; Rhee, Jehyuk; Tarpley, Mary; Walko, Donald A; Westberg, Gregg; Williams, George; Zou, Haifeng; Landahl, Eric
2016-01-01
The Voxtel VX-798 is a prototype X-ray pixel array detector (PAD) featuring a silicon sensor photodiode array of 48 × 48 pixels, each 130 µm × 130 µm × 520 µm thick, coupled to a CMOS readout application specific integrated circuit (ASIC). The first synchrotron X-ray characterization of this detector is presented, and its ability to selectively count individual X-rays within two independent arrival time windows, a programmable energy range, and localized to a single pixel is demonstrated. During our first trial run at Argonne National Laboratory's Advanced Photon Source, the detector achieved a 60 ns gating time and 700 eV full width at half-maximum energy resolution, in agreement with design parameters. Each pixel of the PAD holds two independent digital counters, and the discriminator for X-ray energy features both an upper and a lower threshold to window the energy of interest, discarding unwanted background. This smart-pixel technology allows energy and time resolution to be set and optimized in software. It is found that the detector linearity follows an isolated dead-time model, implying that megahertz count rates should be possible in each pixel. Measurement of the line and point spread functions showed negligible spatial blurring. When combined with the timing structure of the synchrotron storage ring, it is demonstrated that the area detector can perform both picosecond time-resolved X-ray diffraction and fluorescence spectroscopy measurements.
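The isolated (non-paralysable) dead-time model mentioned above can be written as m = n / (1 + n*tau), where n is the true rate and m the measured rate. A small sketch, assuming for illustration that the 60 ns gating time plays the role of the per-pixel dead time, is:

```python
def measured_rate(true_rate_hz, dead_time_s):
    """Non-paralysable (isolated) dead-time model: observed count rate as a
    function of the true rate and the per-event dead time."""
    return true_rate_hz / (1.0 + true_rate_hz * dead_time_s)

def corrected_rate(measured_rate_hz, dead_time_s):
    """Invert the model to recover the true rate from the measured one."""
    return measured_rate_hz / (1.0 - measured_rate_hz * dead_time_s)

tau = 60e-9                                     # assumed per-pixel dead time (60 ns gate)
for n in (1e5, 1e6, 5e6):                       # hypothetical true photon rates per pixel (Hz)
    m = measured_rate(n, tau)
    print(f"true {n:.0e}  measured {m:.3e}  recovered {corrected_rate(m, tau):.3e}")
```

At megahertz rates the rate loss is only a few percent for a 60 ns dead time, which is consistent with the claim that megahertz counting per pixel should be feasible.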
Characterization of a 2-mm thick, 16x16 Cadmium-Zinc-Telluride Pixel Array
NASA Technical Reports Server (NTRS)
Gaskin, Jessica; Richardson, Georgia; Mitchell, Shannon; Ramsey, Brian; Seller, Paul; Sharma, Dharma
2003-01-01
The detector under study is a 2-mm-thick, 16x16 Cadmium-Zinc-Telluride pixel array with a pixel pitch of 300 microns and inter-pixel gap of 50 microns. This detector is a precursor to that which will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs to have a spatial resolution of around 200 microns in order to take full advantage of the HERO angular resolution. We discuss to what degree charge sharing will degrade energy resolution but will improve our spatial resolution through position interpolation. In addition, we discuss electric field modeling for this specific detector geometry and the role this mapping will play in terms of charge sharing and charge loss in the detector.
NASA Astrophysics Data System (ADS)
Fan, Ching-Lin; Lin, Yu-Sheng; Liu, Yan-Wei
A new pixel design and driving method for active matrix organic light emitting diode (AMOLED) displays that use low-temperature polycrystalline silicon thin-film transistors (LTPS-TFTs) with a voltage programming method are proposed and verified using the SPICE simulator. We employed an appropriate TFT model in the SPICE simulation to demonstrate the performance of the pixel circuit. The OLED anode voltage variation error rates are below 0.35% under driving-TFT threshold voltage deviation (ΔVth = ±0.33 V). The OLED current non-uniformity caused by OLED threshold voltage degradation (ΔVTO = +0.33 V) is significantly reduced (below 6%). The simulation results show that the pixel design can improve display image non-uniformity by compensating for the threshold voltage deviation of the driving TFT and the OLED threshold voltage degradation at the same time.
NASA Astrophysics Data System (ADS)
Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon
2008-02-01
This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors while taking diffraction effects into account. Following market trends and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and will soon include zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe the light propagation inside the sensor, because of diffraction effects. Thus we adopt a more fundamental description to take these diffraction effects into account: we chose to use Maxwell-Boltzmann based modeling to compute the propagation of light, and to use software with an FDTD-based (Finite Difference Time Domain) engine to solve this propagation. We present in this article the complete methodology of this modeling: on the one hand, incoherent plane waves are propagated to approximate a product-use diffuse-like source; on the other hand, we use periodic conditions to limit the size of the simulated model and thus both memory use and computation time. After presenting the correlation of the model with measurements, we illustrate its use for the optimization of a 1.75 μm pixel.
NASA Astrophysics Data System (ADS)
Xu, Jianxin; Liang, Hong
2013-07-01
Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, generating TINs, and mapping texture, a 3D model of a real object is obtained. When the object is too large, it is separated into several parts. This paper mainly focuses on the problem of uneven gray levels where two adjacent textures meet. A new algorithm is presented, which performs per-pixel linear interpolation along a loop line buffer. The experimental data derive from a point cloud of the stone lion situated in front of the west gate of Henan Polytechnic University. The modelling flow is composed of three parts: first, the large object is separated into two parts; then each part is modeled; finally, the whole 3D model of the stone lion is composed from the two part models. When the two part models are combined, there is an obvious fissure line in the overlapping section of the two adjacent textures. Some researchers decrease the brightness values of all pixels of the two adjacent textures using various algorithms; however, these algorithms are only partially effective and the fissure line still remains. The gray unevenness of two adjacent textures is handled by the algorithm presented in this paper: the fissure line in the overlapping textures is eliminated, and the gray transition in the overlapping section becomes smoother.
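A simplified sketch of per-pixel linear interpolation across a straight overlap strip (rather than the paper's loop line buffer) is shown below; the strip width and gray levels are invented for illustration.

```python
import numpy as np

def blend_overlap(tex_a, tex_b):
    """Per-pixel linear interpolation across the overlap of two adjacent textures:
    the weight falls linearly from texture A to texture B along the overlap width,
    removing the sharp gray-level step (fissure line) at the seam."""
    overlap = tex_a.shape[1]
    w = np.linspace(1.0, 0.0, overlap)[None, :]      # 1 -> 0 across the overlap
    return w * tex_a + (1.0 - w) * tex_b

# toy gray-level mismatch between the two part models in their shared strip
a = np.full((4, 8), 120.0)       # overlap strip rendered from part model A
b = np.full((4, 8), 150.0)       # same strip rendered from part model B
print(blend_overlap(a, b)[0])    # smooth 120 -> 150 transition, no fissure line
```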
NASA Technical Reports Server (NTRS)
Panciera, Rocco; Walker, Jeffrey P.; Kalma, Jetse; Kim, Edward
2011-01-01
The Soil Moisture and Ocean Salinity (SMOS) mission, launched in November 2009, provides global maps of soil moisture and ocean salinity by measuring the L-band (1.4 GHz) emission of the Earth's surface with a spatial resolution of 40-50 km. Uncertainty in the retrieval of soil moisture over large heterogeneous areas such as SMOS pixels is expected, due to the non-linearity of the relationship between soil moisture and the microwave emission. The current baseline soil moisture retrieval algorithm adopted by SMOS and implemented in the SMOS Level 2 (SMOS L2) processor partially accounts for the sub-pixel heterogeneity of the land surface by modelling the individual contributions of different pixel fractions to the overall pixel emission. This retrieval approach is tested in this study using airborne L-band data over an area the size of a SMOS pixel characterised by a mix of Eucalypt forest and moderate vegetation types (grassland and crops), with the objective of assessing its ability to correct for the soil moisture retrieval error induced by land surface heterogeneity. A preliminary analysis using a traditional uniform-pixel retrieval approach shows that the sub-pixel heterogeneity of land cover type causes significant errors in soil moisture retrieval (7.7% v/v RMSE, 2% v/v bias) in pixels characterised by a significant amount of forest (40-60%). Although the retrieval approach adopted by SMOS partially reduces this error, it is affected by errors beyond the SMOS target accuracy, presenting in particular a strong dry bias when a fraction of the pixel is occupied by forest (4.1% v/v RMSE, -3.1% v/v bias). An extension to the SMOS approach is proposed that accounts for the heterogeneity of vegetation optical depth within the SMOS pixel. The proposed approach is shown to significantly reduce the error in retrieved soil moisture (2.8% v/v RMSE, -0.3% v/v bias) in pixels characterised by a critical amount of forest (40-60%), at the limited cost of only a crude estimate of the optical depth of the forested area (better than 35% uncertainty). This study makes use of an unprecedented data set of airborne L-band observations and ground supporting data from the National Airborne Field Experiment 2005 (NAFE'05), which allowed accurate characterisation of the land surface heterogeneity over an area equivalent in size to a SMOS pixel.
NASA Astrophysics Data System (ADS)
Koniczek, Martin; El-Mohri, Youcef; Antonuk, Larry E.; Liang, Albert; Zhao, Qihua; Jiang, Hao
2011-03-01
A decade after the clinical introduction of active matrix flat-panel imagers (AMFPIs), the performance of this technology continues to be limited by the relatively large additive electronic noise of these systems, resulting in significant loss of detective quantum efficiency (DQE) under conditions of low exposure or high spatial frequencies. An increasingly promising approach for overcoming such limitations involves the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, based on low-temperature polycrystalline silicon (poly-Si) thin-film transistors (TFTs). In this study, a methodology for theoretically examining the limiting noise and DQE performance of circuits employing 1-stage in-pixel amplification is presented. This methodology involves sophisticated SPICE circuit simulations along with cascaded systems modeling. In these simulations, a device model based on the RPI poly-Si TFT model is used with additional controlled current sources corresponding to thermal and flicker (1/f) noise. From measurements of transfer and output characteristics (as well as current noise densities) performed upon individual, representative poly-Si TFT test devices, model parameters suitable for these simulations are extracted. The input stimuli and operating-point-dependent scaling of the current sources are derived from the measured current noise densities (for flicker noise) or from fundamental equations (for thermal noise). Noise parameters obtained from the simulations, along with other parametric information, are input to a cascaded systems model of an AP imager design to provide estimates of DQE performance. In this paper, this method of combining circuit simulations and cascaded systems analysis to predict the lower limits on additive noise (and upper limits on DQE) for large-area AP imagers, with signal levels representative of those generated at fluoroscopic exposures, is described, and initial results are reported.
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
NASA Astrophysics Data System (ADS)
Atzberger, C.
2013-12-01
The robust and accurate retrieval of vegetation biophysical variables using radiative transfer models (RTM) is seriously hampered by the ill-posedness of the inverse problem. This contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results compared to the traditional pixel-based inversion. [Figure: principle of the ill-posed inverse problem and the proposed solution, illustrated in the red-nIR feature space using PROSAIL. (A) The spectral trajectory ('soil trajectory') obtained for one average leaf angle (ALA) and one soil brightness (αsoil) as LAI varies between 0 and 10; (B) 'soil trajectories' for five soil brightness values and three leaf angles; (C) the ill-posed inverse problem: different combinations of ALA × αsoil yield an identical crossing point; (D) the object-based RTM inversion: only one 'soil trajectory' fits all nine pixels within a gliding 3×3 window. Assuming that over short distances (about one pixel) variations in soil brightness can be neglected, the proposed object-based inversion searches for one common set of ALA × αsoil so that the resulting 'soil trajectory' best fits the nine measured pixels.] [Figure: ground-measured vs. retrieved LAI values for three crops; left: the proposed object-based approach, right: pixel-based inversion.]
Performance assessment of a compressive sensing single-pixel imaging system
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Preece, Bradley L.
2017-04-01
Conventional sensors measure the light incident at each pixel in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene and then using a companion process to recover the image. CS has the potential to acquire imagery with information content equivalent to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance physical considerations, reconstruction accuracy, and reconstruction speed to meet operational requirements. Performance modeling of CS imagers is challenging due to the complexity and nonlinearity of the system and reconstruction algorithm. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of a two-handheld object target set was collected using a shortwave infrared single-pixel CS camera for various ranges and numbers of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of CS modeling techniques are discussed.
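A minimal sketch of the single-pixel compressive sensing measurement and an l1-regularised reconstruction is given below; the scene size, number of measurements, ±1 patterns, and the use of scikit-learn's Lasso as a stand-in for basis-pursuit denoising are all assumptions, and the real camera measures a SWIR scene that is sparse in a transform basis rather than directly in the pixel basis.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Single-pixel CS sketch: the scene x (sparse in the pixel basis for simplicity)
# is measured through m random ±1 patterns, each giving one photodetector value.
rng = np.random.default_rng(6)
n = 16 * 16                                            # scene pixels
m = 80                                                 # number of measurements (m << n)
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.uniform(0.5, 1.0, 8)
A = rng.choice([-1.0, 1.0], size=(m, n))               # DMD-style differential patterns
y = A @ x + 0.01 * rng.normal(size=m)                  # bucket-detector measurements

# l1-regularised recovery (Lasso as a convenient stand-in for basis pursuit denoising)
rec = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(A, y).coef_
print(np.linalg.norm(rec - x) / np.linalg.norm(x))     # relative reconstruction error
```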
Quantification of plume opacity by digital photography.
Du, Ke; Rood, Mark J; Kim, Byung J; Kemme, Michael R; Franek, Bill; Mattison, Kevin
2007-02-01
The United States Environmental Protection Agency (USEPA) developed Method 9 to describe how plume opacity can be quantified by humans. However, use of observations by humans introduces subjectivity, and is expensive due to semiannual certification requirements of the observers. The Digital Opacity Method (DOM) was developed to quantify plume opacity at lower cost, with improved objectivity, and to provide a digital record. Photographs of plumes were taken with a calibrated digital camera under specified conditions. Pixel values from those photographs were then interpreted to quantify the plume's opacity using a contrast model and a transmission model. The contrast model determines plume opacity based on pixel values that are related to the change in contrast between two backgrounds that are located behind and next to the plume. The transmission model determines the plume's opacity based on pixel values that are related to radiances from the plume and its background. DOM was field tested with a smoke generator. The individual and average opacity errors of DOM were within the USEPA Method 9 acceptable error limits for both field campaigns. Such results are encouraging and support the use of DOM as an alternative to Method 9.
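As a hedged illustration of the contrast-model idea, the plume transmittance can be approximated by the ratio of the bright/dark background contrast seen through the plume to the contrast seen beside it, with opacity equal to one minus that ratio; the pixel values below are invented, and a faithful implementation would first convert calibrated pixel values to radiance or contrast as specified by DOM.

```python
def contrast_model_opacity(bright_clear, dark_clear, bright_thru, dark_thru):
    """Contrast-model opacity from pixel values of a bright and a dark background
    observed beside the plume (clear) and through the plume (thru).
    The plume transmittance is approximated by the ratio of the two contrasts."""
    transmittance = (bright_thru - dark_thru) / (bright_clear - dark_clear)
    return 1.0 - transmittance

# illustrative pixel values (digital numbers), not field data
print(contrast_model_opacity(bright_clear=210.0, dark_clear=40.0,
                             bright_thru=150.0, dark_thru=65.0))   # ~0.5 opacity
```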
Statistics of Optical Coherence Tomography Data From Human Retina
de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo
2010-01-01
Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model well the distribution of intensities in OCT pseudoimages. Moreover, we show a small but significant correlation between neighboring pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits well the stretched exponential distribution of intensities and their spatial correlation. In normal retinas, fit parameters of this model are relatively constant along retinal layers, but vary across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages.
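To make the distributional modelling above concrete, the following is a minimal sketch (not the authors' code) of a maximum-likelihood fit of a stretched-exponential density p(x) = exp(-(x/λ)^β) / (λΓ(1+1/β)) to positive intensity values; the surrogate data, the parameter names lam and beta, and the optimizer settings are all illustrative assumptions.

```python
# Hedged sketch: maximum-likelihood fit of a stretched-exponential density
#   p(x) = exp(-(x/lam)**beta) / (lam * Gamma(1 + 1/beta)),  x >= 0
# to a vector of positive intensities. Parameter names are illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_likelihood(params, x):
    log_lam, log_beta = params              # optimize in log space for positivity
    lam, beta = np.exp(log_lam), np.exp(log_beta)
    # log p(x) = -(x/lam)**beta - log(lam) - log(Gamma(1 + 1/beta))
    logp = -(x / lam) ** beta - np.log(lam) - gammaln(1.0 + 1.0 / beta)
    return -np.sum(logp)

rng = np.random.default_rng(0)
# Surrogate "intensities": Weibull draws have a qualitatively similar heavy tail
intensities = rng.weibull(0.7, size=5000) * 40.0 + 1e-6

res = minimize(neg_log_likelihood, x0=[np.log(intensities.mean()), 0.0],
               args=(intensities,), method="Nelder-Mead")
lam_hat, beta_hat = np.exp(res.x)
print(f"lambda = {lam_hat:.2f}, beta = {beta_hat:.2f}")
```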
Hierarchical Probabilistic Inference of Cosmic Shear
NASA Astrophysics Data System (ADS)
Schneider, Michael D.; Hogg, David W.; Marshall, Philip J.; Dawson, William A.; Meyers, Joshua; Bard, Deborah J.; Lang, Dustin
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
NASA Technical Reports Server (NTRS)
Pincus, Robert; Platnick, Steven E.; Ackerman, Steve; Hemler, Richard; Hofmann, Patrick
2011-01-01
The properties of clouds that may be observed by satellite instruments, such as optical depth and cloud top pressure, are only loosely related to the way clouds are represented in models of the atmosphere. One way to bridge this gap is through "instrument simulators," diagnostic tools that map the model representation to synthetic observations so that differences between simulator output and observations can be interpreted unambiguously as model error. But simulators may themselves be restricted by limited information available from the host model or by internal assumptions. This work examines the extent to which instrument simulators are able to capture essential differences between MODIS and ISCCP, two similar but independent estimates of cloud properties. We show that the stark differences between MODIS and ISCCP observations of total cloudiness and of the distribution of cloud optical thickness can be traced to different approaches to marginal pixels, which MODIS excludes and ISCCP treats as homogeneous. These pixels, which likely contain broken clouds, cover about 15% of the planet and contain almost all of the optically thinnest clouds observed by either instrument. Instrument simulators cannot reproduce these differences because the host model does not consider unresolved spatial scales and so cannot produce broken pixels. Nonetheless, MODIS and ISCCP observations are consistent for all but the optically thinnest clouds, and models can be robustly evaluated using instrument simulators by excluding ambiguous observations.
Phase Distribution and Selection of Partially Correlated Persistent Scatterers
NASA Astrophysics Data System (ADS)
Lien, J.; Zebker, H. A.
2012-12-01
Interferometric synthetic aperture radar (InSAR) time-series methods can effectively estimate temporal surface changes induced by geophysical phenomena. However, such methods are susceptible to decorrelation due to spatial and temporal baselines (radar pass separation), changes in orbital geometries, atmosphere, and noise. These effects limit the number of interferograms that can be used for differential analysis and obscure the deformation signal. InSAR decorrelation effects may be ameliorated by exploiting pixels that exhibit phase stability across the stack of interferograms. These so-called persistent scatterer (PS) pixels are dominated by a single point-like scatterer that remains phase-stable over the spatial and temporal baseline. By identifying a network of PS pixels for use in phase unwrapping, reliable deformation measurements may be obtained even in areas of low correlation, where traditional InSAR techniques fail to produce useful observations. Many additional pixels can be added to the PS list if we are able to identify those in which a dominant scatterer exhibits partial, rather than complete, correlation across all radar scenes. In this work, we quantify and exploit the phase stability of partially correlated PS pixels. We present a new system model for producing interferometric pixel values from a complex surface backscatter function characterized by signal-to-clutter ratio (SCR). From this model, we derive the joint probabilistic distribution for PS pixel phases in a stack of interferograms as a function of SCR and spatial baselines. This PS phase distribution generalizes previous results that assume the clutter phase contribution is uncorrelated between radar passes. We verify the analytic distribution through a series of radar scattering simulations. We use the derived joint PS phase distribution with maximum-likelihood SCR estimation to analyze an area of the Hayward Fault Zone in the San Francisco Bay Area. We obtain a series of 38 interferometric images of the area from C-band ERS radar satellite passes between May 1995 and December 2000. We compare the estimated SCRs to those calculated with previously derived PS phase distributions. Finally, we examine the PS network density resulting from varying selection thresholds of SCR and compare to other PS identification techniques.
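As a hedged illustration of the pixel model described above, the sketch below simulates the phase of a pixel containing a dominant point scatterer plus complex circular Gaussian clutter, parameterized by the signal-to-clutter ratio (SCR); it is not the authors' derivation or estimator, and the variable names and scene count are illustrative.

```python
# Hedged sketch: simulate the phase of a pixel containing a dominant point
# scatterer (unit amplitude) plus complex circular Gaussian clutter, with the
# clutter power set by the signal-to-clutter ratio (SCR). Illustrative only.
import numpy as np

def simulate_ps_phase(scr, n_scenes=38, n_trials=10000, rng=None):
    rng = rng or np.random.default_rng(0)
    sigma = np.sqrt(1.0 / scr)              # clutter std for a unit-power signal
    clutter = (rng.normal(0, sigma / np.sqrt(2), (n_trials, n_scenes)) +
               1j * rng.normal(0, sigma / np.sqrt(2), (n_trials, n_scenes)))
    pixel = 1.0 + clutter                    # signal plus clutter
    return np.angle(pixel)                   # observed phase per scene

for scr in (1, 5, 20):
    phases = simulate_ps_phase(scr)
    print(f"SCR={scr:>3}: phase std = {phases.std():.3f} rad")
```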
Soccer player recognition by pixel classification in a hybrid color space
NASA Astrophysics Data System (ADS)
Vandenbroucke, Nicolas; Macaire, Ludovic; Postaire, Jack-Gerard
1997-08-01
Soccer is a very popular sport all over the world. Coaches and sports commentators need accurate information about soccer games, especially about player behavior. This information can be gathered by inspectors who watch the match and manually report the actions of the players involved in the principal phases of the game. Generally, these inspectors focus their attention on the few players standing near the ball and do not report the motion of all the other players, so it is desirable to design a system that automatically tracks all the players in real time. We therefore propose to automatically track each player through the successive color images of sequences acquired by a fixed color camera. Each player present in the image is modeled by an active contour model, or snake. When, during the match, one player is hidden by another, the snakes tracking these two players merge, and it becomes impossible to track the players unless the snakes are interactively re-initialized. Fortunately, in most cases the two players do not belong to the same team. We therefore present an algorithm that recognizes the team of each player by pixel classification. First, pixels representing the soccer ground must be withdrawn before considering the players themselves; to eliminate these pixels, the color characteristics of the ground are determined interactively. In a second step, dealing with windows containing only one player of one team, the color features that yield the best discrimination between the two teams are selected. Thanks to these color features, the pixels associated with the players of the two teams form two separate clusters in a color space. In fact, there are many color representation systems, and it is worthwhile to evaluate which features provide the best separation between the two classes of pixels according to the players' soccer suits. Finally, the classification process for image segmentation is based on the three most discriminating color features, which define the coordinates of each pixel in a 'hybrid color space.' Thanks to this hybrid color representation, each pixel can be assigned to one of the two classes by minimum-distance classification.
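As a hedged sketch of the final classification step, the code below assigns pixels to one of two team classes by minimum-distance classification in a three-feature color space; the centroids, feature values, and class names are illustrative assumptions, and the feature-selection stage is not reproduced.

```python
# Hedged sketch: minimum-distance classification of pixels in a 3-feature
# "hybrid" color space, given one centroid per team. Feature extraction and
# centroid estimation are assumed to have been done beforehand.
import numpy as np

def classify_pixels(features, centroid_team_a, centroid_team_b):
    """features: (N, 3) array of pixel coordinates in the hybrid color space."""
    d_a = np.linalg.norm(features - centroid_team_a, axis=1)
    d_b = np.linalg.norm(features - centroid_team_b, axis=1)
    return np.where(d_a <= d_b, "team_a", "team_b")

rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal([0.2, 0.5, 0.8], 0.05, (50, 3)),
                    rng.normal([0.7, 0.3, 0.2], 0.05, (50, 3))])
labels = classify_pixels(pixels, np.array([0.2, 0.5, 0.8]),
                         np.array([0.7, 0.3, 0.2]))
print(np.unique(labels, return_counts=True))
```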
Cerebral hypoxia during cardiopulmonary bypass: a magnetic resonance imaging study.
Mutch, W A; Ryner, L N; Kozlowski, P; Scarth, G; Warrian, R K; Lefevre, G R; Wong, T G; Thiessen, D B; Girling, L G; Doiron, L; McCudden, C; Saunders, J K
1997-09-01
Neurocognitive deficits after open heart operations have been correlated to jugular venous oxygen desaturation on rewarming from hypothermic cardiopulmonary bypass (CPB). Using a porcine model, we looked for evidence of cerebral hypoxia by magnetic resonance imaging during CPB. Brain oxygenation was assessed by T2*-weighted imaging, based on the blood oxygenation level-dependent effect (decreased T2*-weighted signal intensity with increased tissue concentrations of deoxyhemoglobin). Pigs were placed on normothermic CPB, then cooled to 28 degrees C for 2 hours of hypothermic CPB, then rewarmed to baseline temperature. T2*-weighted imaging was undertaken before CPB, during normothermic CPB, at 30-minute intervals during hypothermic CPB, after rewarming, and then 15 minutes after death. Imaging was with a Bruker 7.0 Tesla, 40-cm bore magnetic resonance scanner with actively shielded gradient coils. Regions of interest from the magnetic resonance images were analyzed to identify parenchymal hypoxia and correlated with jugular venous oxygen saturation. Post-hoc fuzzy clustering analysis was used to examine spatially distributed regions of interest whose pixels followed similar time courses. Attention was paid to pixels showing decreased T2* signal intensity over time. T2* signal intensity decreased with rewarming and in five of seven experiments correlated with the decrease in jugular venous oxygen saturation. T2* imaging with fuzzy clustering analysis revealed two diffusely distributed pixel groups during CPB. One large group of pixels (50% +/- 13% of total pixel count) showed increased T2* signal intensity (well-oxygenated tissue) during hypothermia, with decreased intensity on rewarming. A second group of pixels (34% +/- 8% of total pixel count) showed a progressive decrease in T2* signal intensity, independent of temperature, suggestive of increased brain hypoxia during CPB. Decreased T2* signal intensity in a diffuse spatial distribution indicates that a large proportion of cerebral parenchyma is hypoxic (evidenced by an increased proportion of tissue deoxyhemoglobin) during CPB in this porcine model. Neuronal damage secondary to parenchymal hypoxia may explain the postoperative neuropsychological dysfunction after cardiac operations.
NASA Astrophysics Data System (ADS)
Chirouze, J.; Boulet, G.; Jarlan, L.; Fieuzal, R.; Rodriguez, J. C.; Ezzahar, J.; Er-Raki, S.; Bigeard, G.; Merlin, O.; Garatuza-Payan, J.; Watts, C.; Chehbouni, G.
2014-03-01
Instantaneous evapotranspiration rates and surface water stress levels can be deduced from remotely sensed surface temperature data through the surface energy budget. Two families of methods can be defined: contextual methods, where stress levels are scaled on a given image between hot/dry and cool/wet pixels for a particular vegetation cover, and single-pixel methods, which evaluate latent heat as the residual of the surface energy balance for one pixel independently of the others. Four models, two contextual (S-SEBI and a modified triangle method, named VIT) and two single-pixel (TSEB, SEBS), are applied over one growing season (December-May) for a 4 km × 4 km irrigated agricultural area in semi-arid northern Mexico. Their performance, from both local and spatial standpoints, is compared against energy balance data acquired at seven locations within the area, as well as against an uncalibrated soil-vegetation-atmosphere transfer (SVAT) model forced with local in situ data including observed irrigation and rainfall amounts. Stress levels are not always well retrieved by most models, but S-SEBI as well as TSEB, although slightly biased, show good performance. A drop in model performance is observed for all models when vegetation is senescent, mostly due to a poor partitioning both between turbulent fluxes and between the soil/plant components of the latent heat flux and the available energy. As expected, contextual methods perform well when contrasted soil moisture and vegetation conditions are encountered in the same image (therefore, especially in spring and early summer), while they tend to exaggerate the spread in water status in more homogeneous conditions (especially in winter). Surface energy balance models run with available remotely sensed products prove to be nearly as accurate as the uncalibrated SVAT model forced with in situ data.
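The two families of methods can be illustrated with a minimal sketch (not the S-SEBI, VIT, TSEB, or SEBS implementations): a single-pixel residual of the energy balance, and a contextual evaporative-fraction scaling between hot/dry and cool/wet extremes; all input values are illustrative.

```python
# Hedged sketch of the two families of methods described above (not the exact
# S-SEBI / VIT / TSEB / SEBS implementations). Values are illustrative.
import numpy as np

def latent_heat_residual(rn, g, h):
    """Single-pixel family: latent heat as the residual of the energy balance."""
    return rn - g - h                        # LE = Rn - G - H  (W m-2)

def evaporative_fraction_contextual(t_surf, t_hot, t_cold):
    """Contextual family: scale each pixel between hot/dry and cool/wet extremes."""
    ef = (t_hot - t_surf) / (t_hot - t_cold)
    return np.clip(ef, 0.0, 1.0)

t_surface = np.array([300.0, 310.0, 318.0])  # K, three illustrative pixels
print(evaporative_fraction_contextual(t_surface, t_hot=320.0, t_cold=298.0))
print(latent_heat_residual(rn=500.0, g=50.0, h=200.0))
```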
The construction of sparse models of Mars' crustal magnetic field
NASA Astrophysics Data System (ADS)
Moore, Kimberly; Bloxham, Jeremy
2017-04-01
The crustal magnetic field of Mars is a key constraint on Martian geophysical history, especially the timing of the dynamo shutoff. Maps of the crustal magnetic field of Mars show wide variations in the intensity of magnetization, with most of the Northern hemisphere only weakly magnetized. Previous methods of analysis tend to favor smooth solutions for the crustal magnetic field of Mars, making use of techniques such as L2 norms. Here we utilize inversion methods designed for sparse models, to see how much of the surface area of Mars must be magnetized in order to fit available spacecraft magnetic field data. We solve for the crustal magnetic field at 10,000 individual magnetic pixels on the surface of Mars. We employ an L1 regularization, and solve for models where each magnetic pixel is identically zero, unless required otherwise by the data. We find solutions with an adequate fit to the data with over 90% sparsity (90% of magnetic pixels having a field value of exactly 0). We contrast these solutions with L2-based solutions, as well as an elastic net model (combination of L1 and L2). We find our sparse solutions look dramatically different from previous models in the literature, but still give a physically reasonable history of the dynamo (shutting off around 4.1 Ga).
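A hedged toy sketch of the sparse-versus-smooth contrast described above: an L1-regularized linear inversion solved with ISTA, compared with an L2 (ridge) solution; the random design matrix stands in for the mapping from magnetic pixels to spacecraft observations and is purely illustrative.

```python
# Hedged toy sketch: L1-regularized ("sparse") linear inversion via ISTA,
# compared with an L2 (ridge) solution. The design matrix A stands in for the
# mapping from magnetic pixels to spacecraft observations; purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_pix = 200, 500
A = rng.normal(size=(n_obs, n_pix))
x_true = np.zeros(n_pix)
x_true[rng.choice(n_pix, 25, replace=False)] = rng.normal(size=25) * 5
b = A @ x_true + 0.1 * rng.normal(size=n_obs)

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

x_l1 = ista(A, b, lam=15.0)
x_l2 = np.linalg.solve(A.T @ A + 15.0 * np.eye(n_pix), A.T @ b)
print("fraction of exactly-zero pixels, L1:", np.mean(np.abs(x_l1) < 1e-8))
print("fraction of exactly-zero pixels, L2:", np.mean(np.abs(x_l2) < 1e-8))
```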
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1989-01-01
A stochastic-geometric landsurface reflectance model is formulated and tested for the parameterization of spatially variable vegetation and soil at subpixel scales using satellite multispectral images without ground truth. Landscapes are conceptualized as 3-D Lambertian reflecting surfaces consisting of plant canopies, represented by solid geometric figures, superposed on a flat soil background. A computer simulation program is developed to investigate image characteristics at various spatial aggregations representative of satellite observational scales, or pixels. The evolution of the shape and structure of the red-infrared space, or scattergram, of typical semivegetated scenes is investigated by sequentially introducing model variables into the simulation. The analytical moments of the total pixel reflectance, including the mean, variance, spatial covariance, and cross-spectral covariance, are derived in terms of the moments of the individual fractional cover and reflectance components. The moments are applied to the solution of the inverse problem: The estimation of subpixel landscape properties on a pixel-by-pixel basis, given only one multispectral image and limited assumptions on the structure of the landscape. The landsurface reflectance model and inversion technique are tested using actual aerial radiometric data collected over regularly spaced pecan trees, and using both aerial and LANDSAT Thematic Mapper data obtained over discontinuous, randomly spaced conifer canopies in a natural forested watershed. Different amounts of solar backscattered diffuse radiation are assumed and the sensitivity of the estimated landsurface parameters to those amounts is examined.
A Novel BA Complex Network Model on Color Template Matching
Han, Risheng; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved based on the proposed algorithm. To the best of our knowledge, this is the first study about how to model the color space of images using a proper complex network model and apply the complex network model to template matching.
NASA Astrophysics Data System (ADS)
Okamura, Rintaro; Iwabuchi, Hironobu; Schmidt, K. Sebastian
2017-12-01
Three-dimensional (3-D) radiative-transfer effects are a major source of retrieval errors in satellite-based optical remote sensing of clouds. The challenge is that 3-D effects manifest themselves across multiple satellite pixels, which traditional single-pixel approaches cannot capture. In this study, we present two multi-pixel retrieval approaches based on deep learning, a technique that is becoming increasingly successful for complex problems in engineering and other areas. Specifically, we use deep neural networks (DNNs) to obtain multi-pixel estimates of cloud optical thickness and column-mean cloud droplet effective radius from multispectral, multi-pixel radiances. The first DNN method corrects traditional bispectral retrievals based on the plane-parallel homogeneous cloud assumption using the reflectances at the same two wavelengths. The other DNN method uses so-called convolutional layers and retrieves cloud properties directly from the reflectances at four wavelengths. The DNN methods are trained and tested on cloud fields from large-eddy simulations used as input to a 3-D radiative-transfer model to simulate upward radiances. The second DNN-based retrieval, sidestepping the bispectral retrieval step through convolutional layers, is shown to be more accurate. It reduces 3-D radiative-transfer effects that would otherwise affect the radiance values and estimates cloud properties robustly even for optically thick clouds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Steve; Haji-Sheikh, Michael; Huntington, Andrew
The Voxtel VX-798 is a prototype X-ray pixel array detector (PAD) featuring a silicon sensor photodiode array of 48 × 48 pixels, each 130 μm × 130 μm × 520 μm thick, coupled to a CMOS readout application specific integrated circuit (ASIC). The first synchrotron X-ray characterization of this detector is presented, and its ability to selectively count individual X-rays within two independent arrival time windows, a programmable energy range, and localized to a single pixel is demonstrated. During our first trial run at Argonne National Laboratory's Advanced Photon Source, the detector achieved a 60 ns gating time and 700 eV full width at half-maximum energy resolution, in agreement with design parameters. Each pixel of the PAD holds two independent digital counters, and the discriminator for X-ray energy features both an upper and lower threshold to window the energy of interest, discarding unwanted background. This smart-pixel technology allows energy and time resolution to be set and optimized in software. It is found that the detector linearity follows an isolated dead-time model, implying that megahertz count rates should be possible in each pixel. Measurement of the line and point spread functions showed negligible spatial blurring. When combined with the timing structure of the synchrotron storage ring, it is demonstrated that the area detector can perform both picosecond time-resolved X-ray diffraction and fluorescence spectroscopy measurements.
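As a brief illustration of the isolated dead-time model mentioned above, the sketch below uses the standard non-paralyzable relation m = n/(1 + nτ) between measured and true count rates; the dead-time value is an assumed, illustrative number rather than a measured detector parameter.

```python
# Hedged sketch of an isolated (non-paralyzable) dead-time model:
#   measured rate m = n / (1 + n * tau)   =>   n = m / (1 - m * tau)
# The dead-time value below is illustrative, not a measured detector parameter.
import numpy as np

def measured_rate(true_rate, tau):
    return true_rate / (1.0 + true_rate * tau)

def dead_time_corrected(measured, tau):
    return measured / (1.0 - measured * tau)

tau = 100e-9                                  # 100 ns, illustrative
true = np.array([1e5, 1e6, 5e6])              # counts/s per pixel
m = measured_rate(true, tau)
print(np.allclose(dead_time_corrected(m, tau), true))   # True
```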
Improving the quantification of contrast enhanced ultrasound using a Bayesian approach
NASA Astrophysics Data System (ADS)
Rizzo, Gaia; Tonietto, Matteo; Castellaro, Marco; Raffeiner, Bernd; Coran, Alessandro; Fiocco, Ugo; Stramare, Roberto; Grisan, Enrico
2017-03-01
Contrast Enhanced Ultrasound (CEUS) is a sensitive imaging technique to assess tissue vascularity that can be useful in the quantification of different perfusion patterns. This can be particularly important in the early detection and staging of arthritis. In a recent study we have shown that a Gamma-variate can accurately quantify synovial perfusion and is flexible enough to describe many heterogeneous patterns. Moreover, we have shown that the quantitative information gathered through a pixel-by-pixel analysis characterizes the perfusion more effectively. However, the SNR of the data and the nonlinearity of the model make the parameter estimation difficult. Using a classical nonlinear least-squares (NLLS) approach, the number of unreliable estimates (those with an asymptotic coefficient of variation greater than a user-defined threshold) is significant, thus affecting the overall description of the perfusion kinetics and of its heterogeneity. In this work we propose to solve the parameter estimation at the pixel level within a Bayesian framework using Variational Bayes (VB) and an automatic, data-driven prior initialization. When evaluating the pixels for which both VB and NLLS provided reliable estimates, we demonstrated that the parameter values provided by the two methods are well correlated (Pearson's correlation between 0.85 and 0.99). Moreover, the mean number of unreliable pixels drastically reduces from 54% (NLLS) to 26% (VB), without increasing the computational time (0.05 s/pixel for NLLS and 0.07 s/pixel for VB). When considering the efficiency of the algorithms as computational time per reliable estimate, VB outperforms NLLS (0.11 versus 0.25 seconds per reliable estimate, respectively).
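A minimal sketch of the NLLS baseline described above, fitting a standard gamma-variate time-intensity curve C(t) = A(t - t0)^α exp(-(t - t0)/β) to one pixel's simulated CEUS curve; the Variational Bayes estimator itself is not reproduced, and all parameter values are illustrative.

```python
# Hedged sketch: pixel-wise NLLS fit of a standard gamma-variate model
#   C(t) = A * (t - t0)**alpha * exp(-(t - t0)/beta)   for t > t0, else 0,
# as a stand-in for the per-pixel CEUS kinetics described above.
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    dt = np.clip(t - t0, 0.0, None)
    return A * dt ** alpha * np.exp(-dt / beta)

t = np.linspace(0, 60, 120)                   # seconds
rng = np.random.default_rng(3)
true = gamma_variate(t, A=2.0, t0=5.0, alpha=1.5, beta=8.0)
noisy = true + rng.normal(0, 0.05, t.size)    # one pixel's time-intensity curve

p_hat, _ = curve_fit(gamma_variate, t, noisy, p0=[1.0, 4.0, 1.0, 5.0],
                     bounds=([0, 0, 0.1, 0.1], [10, 20, 5, 30]))
print(np.round(p_hat, 2))                     # close to [2.0, 5.0, 1.5, 8.0]
```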
Detailed measurements of shower properties in a high granularity digital electromagnetic calorimeter
NASA Astrophysics Data System (ADS)
van der Kolk, N.
2018-03-01
The MAPS (Monolithic Active Pixel Sensors) prototype of the proposed ALICE Forward Calorimeter (FoCal) is the highest granularity electromagnetic calorimeter, with 39 million pixels with a size of 30 × 30 μm². Particle showers can be studied with unprecedented detail with this prototype. Electromagnetic showers at energies between 2 GeV and 244 GeV have been studied and compared with GEANT4 simulations. Simulation models can be tested in more detail than ever before and the differences observed between FoCal data and GEANT4 simulations illustrate that improvements in electromagnetic models are still possible.
Least-Squares Self-Calibration of Imaging Array Data
NASA Technical Reports Server (NTRS)
Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.
2004-01-01
When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
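A hedged sketch of the self-calibration idea (not the actual pipeline): each measurement is modeled as gain[pixel] × sky[position] + offset[pixel], and the sky values and per-pixel gains and offsets are solved by alternating linear least squares over a dithered data set; array sizes and noise levels are illustrative.

```python
# Hedged sketch of least-squares self-calibration: each measurement is modeled
#   data = gain[pixel] * sky[sky_position] + offset[pixel],
# and sky / gains / offsets are solved by alternating linear least squares over
# a dithered data set. Purely illustrative; not the actual pipeline code.
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_sky, n_dither = 50, 80, 30
gain_true = 1.0 + 0.05 * rng.normal(size=n_pix)
offset_true = 0.2 * rng.normal(size=n_pix)
sky_true = rng.uniform(1.0, 10.0, n_sky)

# Each dither exposes every detector pixel to some sky position
idx = rng.integers(0, n_sky, size=(n_dither, n_pix))
data = gain_true * sky_true[idx] + offset_true + 0.01 * rng.normal(size=idx.shape)

gain, offset = np.ones(n_pix), np.zeros(n_pix)
for _ in range(20):
    # Sky update: least-squares average of (data - offset) weighted by gain
    num, den = np.zeros(n_sky), np.zeros(n_sky)
    np.add.at(num, idx, (data - offset) * gain)
    np.add.at(den, idx, np.broadcast_to(gain ** 2, idx.shape))
    sky = num / np.maximum(den, 1e-12)
    # Per-pixel gain/offset update: linear regression of data against the sky
    s = sky[idx]
    s_mean, d_mean = s.mean(axis=0), data.mean(axis=0)
    gain = ((s - s_mean) * (data - d_mean)).sum(axis=0) / ((s - s_mean) ** 2).sum(axis=0)
    offset = d_mean - gain * s_mean

# A global gain/sky scale degeneracy remains; correlation shows the sky structure is recovered
print("sky correlation:", round(np.corrcoef(sky, sky_true)[0, 1], 4))
```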
Matthew P. Thompson; Julie W. Gilbertson-Day; Joe H. Scott
2015-01-01
We develop a novel risk assessment approach that integrates complementary, yet distinct, spatial modeling approaches currently used in wildfire risk assessment. Motivation for this work stems largely from limitations of existing stochastic wildfire simulation systems, which can generate pixel-based outputs of fire behavior as well as polygon-based outputs of simulated...
Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass.
Han, Guoliang; Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang
2017-11-14
Animals, such as Savannah sparrows and North American monarch butterflies, are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by the excellent navigation ability of animals, this paper proposes a novel image-based polarized light compass, which has the advantages of having a small size and being light weight. Firstly, the polarized light compass, which is composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method of a skylight polarization pattern and the orientation method based on a single scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of CCD pixels and the installation error of the pixelated polarizer, is established. A calibration method based on iterative least squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases with the decrease of the solar elevation angle, and the standard deviation of orientation error is 0.15° at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation.
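To illustrate the measurement step, here is a minimal sketch computing the Stokes parameters, degree of linear polarization (DoLP), and angle of polarization (AoP) from the four analyzer orientations (0°, 45°, 90°, 135°) of a pixelated-polarizer superpixel; the Rayleigh-model step that maps AoP to heading, and the calibration, are not reproduced.

```python
# Hedged sketch: degree (DoLP) and angle (AoP) of linear polarization from a
# pixelated-polarizer "superpixel" with analyzers at 0, 45, 90 and 135 degrees.
# The Rayleigh-model orientation step that maps AoP to heading is not shown.
import numpy as np

def stokes_from_superpixel(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)        # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)            # radians, in (-pi/2, pi/2]
    return s0, dolp, aop

# Synthetic example: 60% polarized light with a 30-degree polarization angle
psi, p, I = np.deg2rad(30.0), 0.6, 1.0
angles = np.deg2rad([0, 45, 90, 135])
intensities = 0.5 * I * (1 + p * np.cos(2 * (psi - angles)))   # Malus-type law
print(np.rad2deg(stokes_from_superpixel(*intensities)[2]))      # ~30 degrees
```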
A method of minimum volume simplex analysis constrained unmixing for hyperspectral image
NASA Astrophysics Data System (ADS)
Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao
2017-07-01
The signal recorded by a low-resolution hyperspectral remote sensor for a given pixel, even without considering the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is a frontier research topic in remote sensing. Unmixing algorithms based on geometry have become popular since hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most of these algorithms rely on the pure-pixel assumption, and since the nonlinear mixing model is complex, it is hard to obtain the optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data, and the results indicate that the proposed method can obtain the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
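As a hedged sketch of the abundance-estimation side of the linear mixing model x = Ea (with nonnegative abundances summing to one), the code below uses nonnegative least squares with a heavily weighted sum-to-one row, a common fully-constrained-least-squares trick; the minimum-volume endmember search itself is not reproduced, and the endmember matrix is synthetic.

```python
# Hedged sketch: abundance estimation under the linear mixing model
#   x = E a,  a >= 0,  sum(a) = 1,
# for a known endmember matrix E. The sum-to-one constraint is imposed softly
# by appending a heavily weighted row of ones. The minimum-volume endmember
# extraction described above is not reproduced here.
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(x, E, delta=1e3):
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    x_aug = np.append(x, delta)
    a, _ = nnls(E_aug, x_aug)                 # nonnegative least squares
    return a

rng = np.random.default_rng(5)
E = rng.uniform(0.1, 0.9, size=(50, 3))       # 50 bands, 3 endmembers
a_true = np.array([0.6, 0.3, 0.1])
x = E @ a_true + 0.001 * rng.normal(size=50)
print(np.round(fcls_abundances(x, E), 3))     # close to [0.6, 0.3, 0.1]
```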
Artificial neural network prediction of ischemic tissue fate in acute stroke imaging
Huang, Shiliang; Shen, Qiang; Duong, Timothy Q
2010-01-01
Multimodal magnetic resonance imaging of acute stroke provides predictive value that can be used to guide stroke therapy. A flexible artificial neural network (ANN) algorithm was developed and applied to predict ischemic tissue fate on three stroke groups: 30-, 60-minute, and permanent middle cerebral artery occlusion in rats. Cerebral blood flow (CBF), apparent diffusion coefficient (ADC), and spin–spin relaxation time constant (T2) were acquired during the acute phase up to 3 hours and again at 24 hours followed by histology. Infarct was predicted on a pixel-by-pixel basis using only acute (30-minute) stroke data. In addition, neighboring pixel information and infarction incidence were also incorporated into the ANN model to improve prediction accuracy. Receiver-operating characteristic analysis was used to quantify prediction accuracy. The major findings were the following: (1) CBF alone poorly predicted the final infarct across three experimental groups; (2) ADC alone adequately predicted the infarct; (3) CBF+ADC improved the prediction accuracy; (4) inclusion of neighboring pixel information and infarction incidence further improved the prediction accuracy; and (5) prediction was more accurate for permanent occlusion, followed by 60- and 30-minute occlusion. The ANN predictive model could thus provide a flexible and objective framework for clinicians to evaluate stroke treatment options on an individual patient basis. PMID:20424631
NASA Astrophysics Data System (ADS)
Gao, Feng; Dong, Junyu; Li, Bo; Xu, Qizhi; Xie, Cui
2016-10-01
Change detection is of high practical value for hazard assessment, crop growth monitoring, and urban sprawl detection. A synthetic aperture radar (SAR) image is an ideal information source for performing change detection since it is independent of atmospheric and sunlight conditions. Existing SAR image change detection methods usually generate a difference image (DI) first and use clustering methods to classify the pixels of the DI into changed and unchanged classes. Some useful information may get lost in the DI generation process. This paper proposes an SAR image change detection method based on the neighborhood-based ratio (NR) and the extreme learning machine (ELM). The NR operator is utilized to obtain interested pixels that have a high probability of being changed or unchanged. Then, image patches centered at these pixels are generated, and ELM is employed to train a model using these patches. Finally, pixels in both original SAR images are classified by the pretrained ELM model. The preclassification result and the ELM classification result are combined to form the final change map. The experimental results obtained on three real SAR image datasets and one simulated dataset show that the proposed method is robust to speckle noise and effective in detecting change information among multitemporal SAR images.
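A minimal sketch of an extreme learning machine classifier as used conceptually above: a random, untrained hidden layer followed by output weights obtained in closed form; patch extraction and the neighborhood-based ratio operator are not shown, and the toy features and labels are illustrative.

```python
# Hedged sketch of an extreme learning machine (ELM) classifier: a random,
# untrained hidden layer followed by output weights solved in closed form.
# Patch extraction and the neighborhood-based ratio operator are not shown.
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Ridge-regularized least squares for the output weights
        self.beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(self.n_hidden),
                                    H.T @ y)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta > 0.5).astype(int)

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 25))                # e.g. flattened 5x5 image patches
y = (X.mean(axis=1) > 0).astype(float)        # toy changed/unchanged labels
clf = ELM().fit(X[:300], y[:300])
print("accuracy:", (clf.predict(X[300:]) == y[300:]).mean())
```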
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-01-01
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
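As a hedged illustration of the low-degree-polynomial idea (not the paper's exact formulation), the sketch below calibrates each pixel's monotonic (logarithmic) response against a reference with a degree-1 polynomial and then corrects raw values by evaluating the fit; all constants are illustrative.

```python
# Hedged sketch: per-pixel calibration of a monotonic (logarithmic) response
# with a low-degree polynomial, then FPN correction by evaluating the fit.
# Not the paper's exact formulation; names and constants are illustrative.
import numpy as np

rng = np.random.default_rng(7)
stimulus = np.logspace(0, 4, 20)                         # calibration exposures
a = 1.0 + 0.1 * rng.normal(size=100)                      # per-pixel gain spread
c = 5.0 * rng.normal(size=100)                            # per-pixel offset spread
responses = c[:, None] + a[:, None] * np.log(stimulus)    # (pixels, exposures)
reference = np.log(stimulus)                              # ideal response

# Degree-1 polynomial per pixel mapping raw response -> reference response
coeffs = [np.polyfit(responses[p], reference, deg=1) for p in range(100)]

def correct(pixel_index, raw_value):
    return np.polyval(coeffs[pixel_index], raw_value)

raw = responses[:, 10]                                    # one exposure, all pixels
corrected = np.array([correct(p, raw[p]) for p in range(100)])
print("FPN std before:", raw.std(), "after:", corrected.std())
```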
Optical modeling of waveguide coupled TES detectors towards the SAFARI instrument for SPICA
NASA Astrophysics Data System (ADS)
Trappe, N.; Bracken, C.; Doherty, S.; Gao, J. R.; Glowacka, D.; Goldie, D.; Griffin, D.; Hijmering, R.; Jackson, B.; Khosropanah, P.; Mauskopf, P.; Morozov, D.; Murphy, A.; O'Sullivan, C.; Ridder, M.; Withington, S.
2012-09-01
The next generation of space missions targeting far-infrared wavelengths will require large-format arrays of extremely sensitive detectors. Transition Edge Sensor (TES) array technology is being developed for future Far-Infrared (FIR) space applications such as the SAFARI instrument for SPICA, where low noise and high sensitivity are required to achieve ambitious science goals. In this paper we describe a modal analysis of multi-moded horn antennas feeding integrating cavities housing TES detectors with superconducting film absorbers. In high-sensitivity TES detector technology the ability to control the electromagnetic and thermo-mechanical environment of the detector is critical. Simulating and understanding the optical behaviour of such detectors at far-IR wavelengths is difficult and requires development of existing analysis tools. The proposed modal approach offers a computationally efficient technique to describe the partially coherent response of the full pixel in terms of optical efficiency and power leakage between pixels. Initial work carried out as part of an ESA technical research project on optical analysis is described, and a prototype SAFARI pixel design is analyzed, where the optical coupling between the incoming field and the pixel (containing the horn, a cavity with an air gap, and a thin absorber layer) is fully included in the model to allow a comprehensive optical characterization. The modal approach described is based on the mode matching technique, where the horn and cavity are described in the traditional way while a technique to include the absorber was developed. Radiation leakage between pixels is also included, making this a powerful analysis tool.
Panchromatic SED modelling of spatially resolved galaxies
NASA Astrophysics Data System (ADS)
Smith, Daniel J. B.; Hayward, Christopher C.
2018-05-01
We test the efficacy of the energy-balance spectral energy distribution (SED) fitting code MAGPHYS for recovering the spatially resolved properties of a simulated isolated disc galaxy, for which it was not designed. We perform 226 950 MAGPHYS SED fits to regions between 0.2 and 25 kpc in size across the galaxy's disc, viewed from three different sight-lines, to probe how well MAGPHYS can recover key galaxy properties based on 21 bands of UV-far-infrared model photometry. MAGPHYS yields statistically acceptable fits to >99 per cent of the pixels within the r-band effective radius and between 59 and 77 percent of pixels within 20 kpc of the nucleus. MAGPHYS is able to recover the distribution of stellar mass, star formation rate (SFR), specific SFR, dust luminosity, dust mass, and V-band attenuation reasonably well, especially when the pixel size is ≳ 1 kpc, whereas non-standard outputs (stellar metallicity and mass-weighted age) are recovered less well. Accurate recovery is more challenging in the smallest sub-regions of the disc (pixel scale ≲ 1 kpc), where the energy balance criterion becomes increasingly incorrect. Estimating integrated galaxy properties by summing the recovered pixel values, the true integrated values of all parameters considered except metallicity and age are well recovered at all spatial resolutions, ranging from 0.2 kpc to integrating across the disc, albeit with some evidence for resolution-dependent biases. These results must be considered when attempting to analyse the structure of real galaxies with actual observational data, for which the `ground truth' is unknown.
MKID digital readout tuning with deep learning
NASA Astrophysics Data System (ADS)
Dodkins, R.; Mahashabde, S.; O'Brien, K.; Thatte, N.; Fruitwala, N.; Walter, A. B.; Meeker, S. R.; Szypryt, P.; Mazin, B. A.
2018-04-01
Microwave Kinetic Inductance Detector (MKID) devices offer inherent spectral resolution, simultaneous read out of thousands of pixels, and photon-limited sensitivity at optical wavelengths. Before taking observations, the readout power and frequency of each pixel must be individually tuned, and if the equilibrium state of the pixels changes, the readout must be retuned. This process has previously been performed through manual inspection, and typically takes one hour per 500 resonators (20 h for a ten-kilopixel array). We present an algorithm based on a deep convolutional neural network (CNN) architecture to determine the optimal bias power for each resonator. The bias point classifications from this CNN model, and those from alternative automated methods, are compared to those from human decisions, and the accuracy of each method is assessed. On a test feed-line dataset, the CNN achieves an accuracy of 90% within 1 dB of the designated optimal value, which is equivalent accuracy to a randomly selected human operator, and superior to the highest scoring alternative automated method by 10%. On a full ten-kilopixel array, the CNN performs the characterization in a matter of minutes, paving the way for future mega-pixel MKID arrays.
On-chip skin color detection using a triple-well CMOS process
NASA Astrophysics Data System (ADS)
Boussaid, Farid; Chai, Douglas; Bouzerdoum, Abdesselam
2004-03-01
In this paper, a current-mode VLSI architecture enabling on-read-out skin detection without the need for any on-chip memory elements is proposed. An important feature of the proposed architecture is that it removes the need for demosaicing. Color separation is achieved using the strong wavelength dependence of the absorption coefficient in silicon. This wavelength dependence causes a very shallow absorption of blue light and enables red light to penetrate deeply into silicon. A triple-well process, allowing a P-well to be placed inside an N-well, is chosen to fabricate three vertically integrated photodiodes acting as the RGB color detector for each pixel. Pixels of an input RGB image are classified as skin or non-skin pixels using a statistical skin color model, chosen to offer an acceptable trade-off between skin detection performance and implementation complexity. A single processing unit is used to classify all pixels of the input RGB image. This results in reduced mismatch and also in an increased pixel fill-factor. Furthermore, the proposed current-mode architecture is programmable, allowing external control of all classifier parameters to compensate for mismatch and changing lighting conditions.
Sparsely-sampled hyperspectral stimulated Raman scattering microscopy: a theoretical investigation
NASA Astrophysics Data System (ADS)
Lin, Haonan; Liao, Chien-Sheng; Wang, Pu; Huang, Kai-Chih; Bouman, Charles A.; Kong, Nan; Cheng, Ji-Xin
2017-02-01
A hyperspectral image corresponds to a data cube with two spatial dimensions and one spectral dimension. Through linear unmixing, hyperspectral images can be decomposed into spectral signatures of pure components as well as their concentration maps. Due to this distinct advantage in component identification, hyperspectral imaging is becoming a rapidly emerging platform for engineering better medicine and expediting scientific discovery. Among various hyperspectral imaging techniques, hyperspectral stimulated Raman scattering (HSRS) microscopy acquires data in a pixel-by-pixel scanning manner. Nevertheless, the current image acquisition speed for HSRS is insufficient to capture the dynamics of freely moving subjects. Instead of reducing the pixel dwell time to achieve speed-up, which would inevitably decrease the signal-to-noise ratio (SNR), we propose to reduce the total number of sampled pixels. The locations of sampled pixels are carefully engineered with a triangular-wave Lissajous trajectory. The complete data are then recovered with a model-based image in-painting algorithm for linear unmixing. Simulation results show that, with careful selection of the trajectory, a fill rate as low as 10% is sufficient to generate accurate linear unmixing results. The proposed framework applies to any hyperspectral beam-scanning imaging platform that demands high acquisition speed.
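A hedged sketch of the sampling-trajectory step: generating a sparse sampling mask from a triangular-wave Lissajous trajectory. The frequencies, image size, and resulting fill rate are illustrative, and the in-painting and unmixing stages are not reproduced.

```python
# Hedged sketch: build a sparse sampling mask from a triangular-wave Lissajous
# trajectory. Frequencies, image size and the resulting fill rate are
# illustrative; the in-painting and linear-unmixing steps are not shown.
import numpy as np
from scipy.signal import sawtooth

def triangle(t, freq):
    return sawtooth(2 * np.pi * freq * t, width=0.5)       # triangle wave in [-1, 1]

def lissajous_mask(shape=(256, 256), fx=7.0, fy=11.0, n_samples=10000):
    t = np.linspace(0, 1, n_samples)
    x = (triangle(t, fx) + 1) / 2 * (shape[1] - 1)
    y = (triangle(t, fy) + 1) / 2 * (shape[0] - 1)
    mask = np.zeros(shape, dtype=bool)
    mask[np.round(y).astype(int), np.round(x).astype(int)] = True
    return mask

mask = lissajous_mask()
print(f"fill rate: {mask.mean():.1%}")                      # fraction of sampled pixels
```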
To BG or not to BG: Background Subtraction for EIT Coronal Loops
NASA Astrophysics Data System (ADS)
Beene, J. E.; Schmelz, J. T.
2003-05-01
One of the few observational tests for various coronal heating models is to determine the temperature profile along coronal loops. Since loops are such an abundant coronal feature, this method originally seemed quite promising - that the coronal heating problem might actually be solved by determining the temperature as a function of arc length and comparing these observations with predictions made by different models. But there are many instruments currently available to study loops, as well as various techniques used to determine their temperature characteristics. Consequently, there are many different, mostly conflicting temperature results. We chose data for ten coronal loops observed with the Extreme ultraviolet Imaging Telescope (EIT), and chose specific pixels along each loop, as well as corresponding nearby background pixels where the loop emission was not present. Temperature analysis from the 171-to-195 and 195-to-284 angstrom image ratios was then performed on three forms of the data: the original data alone, the original data with a uniform background subtraction, and the original data with a pixel-by-pixel background subtraction. The original results show loops of constant temperature, as other authors have found before us, but the 171-to-195 and 195-to-284 results are significantly different. Background subtraction does not change the constant-temperature result or the value of the temperature itself. This does not mean that loops are isothermal, however, because the background pixels, which are not part of any contiguous structure, also produce a constant-temperature result with the same value as the loop pixels. These results indicate that EIT temperature analysis should not be trusted, and the isothermal loops that result from EIT (and TRACE) analysis may be an artifact of the analysis process. Solar physics research at the University of Memphis is supported by NASA grants NAG5-9783 and NAG5-12096.
Automated Sargassum Detection for Landsat Imagery
NASA Astrophysics Data System (ADS)
McCarthy, S.; Gallegos, S. C.; Armstrong, D.
2016-02-01
We implemented a system to automatically detect Sargassum, a floating seaweed, in 30-meter LANDSAT-8 Operational Land Imager (OLI) imagery. Our algorithm for Sargassum detection is an extended form of Hu's approach to derive a floating algae index (FAI) [1]. Hu's algorithm was developed for Moderate Resolution Imaging Spectroradiometer (MODIS) data, but we extended it for use with the OLI bands centered at 655, 865, and 1609 nm, which are comparable to the MODIS bands located at 645, 859, and 1640 nm. We also developed a high resolution true color product to mask cloud pixels in the OLI scene by applying a threshold to top-of-the-atmosphere (TOA) radiances in the red (655 nm), green (561 nm), and blue (443 nm) wavelengths, as well as a method for removing false positive identifications of Sargassum in the imagery. Hu's algorithm derives an FAI for each Sargassum-identified pixel. Our algorithm is currently set to only flag the presence of Sargassum in an OLI pixel by classifying any pixel with an FAI > 0.0 as Sargassum. Additionally, our system geo-locates the flagged Sargassum pixels identified in the OLI imagery onto the U.S. Navy Global HYCOM model grid. One element of the model grid covers an area 0.125 degrees of latitude by 0.125 degrees of longitude. To resolve the differences in spatial coverage between Landsat and HYCOM, a scheme was developed to calculate the percentage of pixels flagged within the grid element and, if above a threshold, to flag that grid element as Sargassum. This work is part of a larger system, sponsored by the NASA Applied Science and Technology Project at J.C. Stennis Space Center, to forecast when and where Sargassum will land on shore. The focus area of this work is currently the Texas coast. Plans call for extending our efforts into the Caribbean. References: [1] Hu, Chuanmin. A novel ocean color index to detect floating algae in the global oceans. Remote Sensing of Environment 113 (2009) 2118-2129.
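A minimal sketch of a floating algae index in the linear-baseline form of Hu (2009), evaluated at the OLI band centers quoted above; the reflectance values and the FAI > 0 flagging rule are illustrative.

```python
# Hedged sketch of a floating algae index (FAI) in the linear-baseline form of
# Hu (2009), using the OLI band centres quoted above. The reflectances and the
# FAI > 0 flagging threshold are illustrative.
import numpy as np

def fai(r_red, r_nir, r_swir, lam_red=655.0, lam_nir=865.0, lam_swir=1609.0):
    # Baseline NIR reflectance interpolated between the red and SWIR bands
    r_nir_baseline = r_red + (r_swir - r_red) * (lam_nir - lam_red) / (lam_swir - lam_red)
    return r_nir - r_nir_baseline

r_red = np.array([0.02, 0.03])
r_nir = np.array([0.015, 0.12])               # second pixel has a floating-algae-like NIR bump
r_swir = np.array([0.01, 0.02])
print(fai(r_red, r_nir, r_swir) > 0.0)        # [False  True] -> second pixel flagged
```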
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.
1997-01-01
The Image Characterization And Modeling System (ICAMS) is a public domain software package that is designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30-meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for the two dates at various pixel resolutions show that the D values in the August image indicate increasing complexity as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. The fractal dimension of a difference image computed for the May and August dates increases with pixel size up to a resolution of 120 meters, and then declines with increasing pixel size. This means that the greatest complexity in the difference images occurs around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute differences between the two dates.
Three dimensional modelling for the target asteroid of HAYABUSA
NASA Astrophysics Data System (ADS)
Demura, H.; Kobayashi, S.; Asada, N.; Hashimoto, T.; Saito, J.
Hayabusa is the first sample return mission of Japan. It was launched on May 9, 2003, and will arrive at the target asteroid 25143 Itokawa in June 2005. The spacecraft has three optical navigation cameras: two wide-angle cameras and a telescopic one. The telescope with a filter wheel was named AMICA (Asteroid Multiband Imaging CAmera). We are going to model the shape of the target asteroid with this telescope (expected resolution: 1 m/pixel at 10 km distance; field of view: 5.7 square degrees; MPP-type CCD with 1024 x 1000 pixels). Because the size of Hayabusa is about 1 x 1 x 1 m, our goal is shape modeling with about 1 m precision on the basis of a camera system that scans via the rotation of the asteroid. This image-based modeling requires sequential images via AMICA and a history of the distance between the asteroid and Hayabusa provided by a Laser Range Finder. We established a system of hierarchically recursive search with sub-pixel matching of Ground Control Points, which are picked up with the Susan operator. The matched dataset is restored under the restriction of epipolar geometry, and the obtained group of three-dimensional points is converted to a polygon model with Delaunay triangulation. The current status of our development of the shape modeling is presented.
NASA Astrophysics Data System (ADS)
Roberts, G.; Wooster, M. J.; Xu, W.; Freeborn, P. H.; Morcrette, J.-J.; Jones, L.; Benedetti, A.; Jiangping, H.; Fisher, D.; Kaiser, J. W.
2015-11-01
Characterising the dynamics of landscape-scale wildfires at very high temporal resolutions is best achieved using observations from Earth Observation (EO) sensors mounted onboard geostationary satellites. As a result, a number of operational active fire products have been developed from the data of such sensors. An example of which are the Fire Radiative Power (FRP) products, the FRP-PIXEL and FRP-GRID products, generated by the Land Surface Analysis Satellite Applications Facility (LSA SAF) from imagery collected by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard the Meteosat Second Generation (MSG) series of geostationary EO satellites. The processing chain developed to deliver these FRP products detects SEVIRI pixels containing actively burning fires and characterises their FRP output across four geographic regions covering Europe, part of South America and Northern and Southern Africa. The FRP-PIXEL product contains the highest spatial and temporal resolution FRP data set, whilst the FRP-GRID product contains a spatio-temporal summary that includes bias adjustments for cloud cover and the non-detection of low FRP fire pixels. Here we evaluate these two products against active fire data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) and compare the results to those for three alternative active fire products derived from SEVIRI imagery. The FRP-PIXEL product is shown to detect a substantially greater number of active fire pixels than do alternative SEVIRI-based products, and comparison to MODIS on a per-fire basis indicates a strong agreement and low bias in terms of FRP values. However, low FRP fire pixels remain undetected by SEVIRI, with errors of active fire pixel detection commission and omission compared to MODIS ranging between 9-13 % and 65-77 % respectively in Africa. Higher errors of omission result in greater underestimation of regional FRP totals relative to those derived from simultaneously collected MODIS data, ranging from 35 % over the Northern Africa region to 89 % over the European region. High errors of active fire omission and FRP underestimation are found over Europe and South America and result from SEVIRI's larger pixel area over these regions. An advantage of using FRP for characterising wildfire emissions is the ability to do so very frequently and in near-real time (NRT). To illustrate the potential of this approach, wildfire fuel consumption rates derived from the SEVIRI FRP-PIXEL product are used to characterise smoke emissions of the 2007 "mega-fire" event focused on Peloponnese (Greece) and used within the European Centre for Medium-Range Weather Forecasting (ECMWF) Integrated Forecasting System (IFS) as a demonstration of what can be achieved when using geostationary active fire data within the Copernicus Atmosphere Monitoring Service (CAMS). Qualitative comparison of the modelled smoke plumes with MODIS optical imagery illustrates that the model captures the temporal and spatial dynamics of the plume very well, and that high temporal resolution emissions estimates such as those available from a geostationary orbit are important for capturing the sub-daily variability in smoke plume parameters such as aerosol optical depth (AOD), which are increasingly less well resolved using daily or coarser temporal resolution emissions data sets. 
Quantitative comparison of modelled AOD with coincident MODIS and AERONET (Aerosol Robotic Network) AOD indicates that the former is overestimated by ~ 20-30 %, but captures the observed AOD dynamics with a high degree of fidelity. The case study highlights the potential of using geostationary FRP data to drive fire emissions estimates for use within atmospheric transport models such as those implemented in the Monitoring Atmospheric Composition and Climate (MACC) series of projects for the CAMS.
[The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].
Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang
2009-08-01
Computer simulation is based on computer graphics to generate a realistic 3D structural scene of vegetation and to simulate the canopy regime using the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are usually complex structures that are tall and have many branches, so hundreds of thousands or even millions of facets are needed to build up a realistic structural scene for a forest, and it is difficult for the radiosity method to compute so many facets. In order to make the radiosity method able to simulate the forest scene at the pixel scale, the authors propose to simplify the structure of forest crowns by abstracting the crowns as ellipsoids. Based on the optical characteristics of the tree components and the characteristics of internal photon energy transfer in real crowns, the authors assigned the optical characteristics of the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometric optical models, a gap model is incorporated to obtain the forest canopy bidirectional reflectance at the pixel scale. Comparing the computer simulation results with the GOMS model and Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, the simulation results are in agreement with the GOMS simulation results and the MISR BRF, although some problems remain to be solved. The authors therefore conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.
Separation of metadata and pixel data to speed DICOM tag morphing.
Ismail, Mahmoud; Philbin, James
2013-01-01
The DICOM information model combines pixel data and metadata in a single DICOM object. It is not possible to access the metadata separately from the pixel data, yet there are use cases where only the metadata is accessed. The current DICOM object format increases the running time of those use cases. Tag morphing is one of them: it includes deletion, insertion or manipulation of one or more of the metadata attributes. It is typically used for order reconciliation on study acquisition, or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.
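To see why separating metadata from pixel data pays off, a metadata-only read can be contrasted with a full read using an ordinary DICOM parser. The sketch below uses pydicom's stop_before_pixels option and a hypothetical file path; it illustrates the access pattern only and is not the Multi-Series DICOM (MSD) format proposed in this work.

```python
from pydicom import dcmread

path = "study/series1/img0001.dcm"   # hypothetical file path

# Header-only parse: stops before the (typically large) Pixel Data element,
# which is the access pattern metadata-only use cases such as tag morphing need.
meta_only = dcmread(path, stop_before_pixels=True)
print(meta_only.PatientID, meta_only.StudyInstanceUID)

# Conventional full parse, including pixel data, for comparison; on large objects
# this is what makes metadata-only workflows slow with the standard object format.
full = dcmread(path)
```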
A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images
NASA Technical Reports Server (NTRS)
Memon, Nasir D.; Galatsanos, Nikolas
1995-01-01
In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
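One plausible realisation of the neighbour-derived re-ordering is sketched below: both encoder and decoder sort the bands by the values of the already-coded left neighbour and difference-code the current pixel in that order, so no ordering information needs to be transmitted. This is an illustrative reading of the idea under simplifying assumptions, not the authors' exact predictor.

```python
import numpy as np

def reorder_predict_residuals(cube):
    """cube: (rows, cols, bands) integer hyperspectral image. For each pixel, the
    bands are ordered by the values of the already-coded left neighbour and each
    band is predicted from the preceding band in that order; the decoder derives
    the same ordering from the neighbour, so no side information is needed."""
    rows, cols, bands = cube.shape
    residuals = cube.astype(np.int32).copy()     # first column is kept verbatim
    for i in range(rows):
        for j in range(1, cols):
            order = np.argsort(cube[i, j - 1, :])        # neighbour-derived order
            px = cube[i, j, :].astype(np.int32)
            residuals[i, j, order[1:]] = px[order[1:]] - px[order[:-1]]
    return residuals   # residuals then go to an entropy coder
```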
Spatial optical crosstalk in CMOS image sensors integrated with plasmonic color filters.
Yu, Yan; Chen, Qin; Wen, Long; Hu, Xin; Zhang, Hui-Fang
2015-08-24
The imaging resolution of complementary metal oxide semiconductor (CMOS) image sensors (CIS) keeps increasing, approaching 7k × 4k. As a result, the pixel size shrinks down to sub-2 μm, which greatly increases the spatial optical crosstalk. Recently, plasmonic color filters have been proposed as an alternative to conventional colorant-pigmented ones. However, there is little work on their size effect and on the spatial optical crosstalk in a CIS model. By numerical simulation, we investigate the size effect of nanocross-array plasmonic color filters and analyze the spatial optical crosstalk of each pixel in a Bayer array of a CIS with a pixel size of 1 μm. It is found that the small pixel size deteriorates the filtering performance of nanocross color filters and induces substantial spatial color crosstalk. By integrating the plasmonic filters in a low metal layer of the standard CMOS process, the crosstalk is reduced significantly and becomes comparable to that of pigmented filters in a state-of-the-art backside-illumination CIS.
CMOS active pixel sensors response to low energy light ions
NASA Astrophysics Data System (ADS)
Spiriti, E.; Finck, Ch.; Baudot, J.; Divay, C.; Juliani, D.; Labalme, M.; Rousseau, M.; Salvador, S.; Vanstalle, M.; Agodi, C.; Cuttone, G.; De Napoli, M.; Romano, F.
2017-12-01
Recently, CMOS active pixel sensors have been used in hadrontherapy ion fragmentation cross-section measurements. Their main purpose is to reconstruct the tracks generated by the non-interacting primary ions or by the produced fragments. In this framework the sensors unexpectedly demonstrated the possibility of also providing information that could contribute to ion type identification. The present analysis shows a clear dependence of the cluster charge and of the number of pixels per cluster (pixels with a collected amount of charge above a given threshold) on both the fragment atomic number Z and the energy loss in the sensor. In the FIRST (Fragmentation of Ions Relevant for Space and Therapy) experiment, this information has been used in the overall particle identification analysis algorithm. The aim of this paper is to present the data analysis and the obtained results. An empirical model is also developed that reproduces the cluster size as a function of the energy deposited in the sensor.
Image reconstruction of dynamic infrared single-pixel imaging system
NASA Astrophysics Data System (ADS)
Tong, Qi; Jiang, Yilin; Wang, Haiyan; Guo, Limin
2018-03-01
The single-pixel imaging technique has recently received much attention. Most current single-pixel imaging work addresses relatively static targets or fixed imaging systems, because the technique is limited by the number of measurements received through the single detector. In this paper, we propose a novel dynamic compressive imaging method for the infrared (IR) rosette scanning system that solves the imaging problem when the imaging system itself is in motion. The relationship between adjacent target images and the scene is analyzed under different system movement scenarios, and these relationships are used to build dynamic compressive imaging models. Simulation results demonstrate that the proposed method can improve the reconstruction quality of the IR image and enhance the contrast between the target and the background in the presence of system movement.
New lightcurve of asteroid (216) Kleopatra to evaluate the shape model
NASA Astrophysics Data System (ADS)
Hannan, Melissa A.; Howell, Ellen S.; Woodney, Laura M.; Taylor, Patrick A.
2014-11-01
Asteroid 216 Kleopatra is an M-class asteroid in the Main Belt with an unusual shape model that looks like a dog bone. This model was created from the radar data taken at Arecibo Observatory (Ostro et al. 1999). The discovery of satellites orbiting Kleopatra (Marchis et al. 2008) has led to determination of its mass and density (Descamps et al. 2011). New higher quality data were taken to improve upon the existing shape model. Radar images were obtained in November and December 2013 at Arecibo Observatory with a resolution of 10.5 km per pixel. In addition, observations were made with the fully automated 20-inch telescope of the Murillo Family Observatory located on the CSUSB campus. The telescope was equipped with an Apogee U16M CCD camera with a 31 arcmin square field of view and BVR filters. Image data were acquired on 7 and 9 November 2013 under mostly clear conditions and with 2x2 binning to a pixel scale of 0.9 arcseconds per pixel. These images were taken close in time to the radar observations in order to determine the rotational phase. These data can also be used to look for color changes with rotation. We used the lightcurve and the existing radar shape model to simulate the new radar observations. Although the model matches fairly well overall, it does not reproduce all of the features in the images, indicating that the model can be improved. Results of this analysis will be presented.
Pixel-by-Pixel SED Fitting of Intermediate Redshift Galaxies
NASA Astrophysics Data System (ADS)
Cohen, Seth H.; Kim, Hwihyun; Petty, Sara M.; Farrah, Duncan
2015-01-01
We select intermediate redshift galaxies from the Hubble Space Telescope CANDELS and GOODS surveys to study their stellar populations on sub-kiloparsec scales by fitting SED models on a pixel-by-pixel basis. Galaxies are chosen to have measured spectroscopic redshifts (z<1.5), to be bright (H_AB<21 mag), to be relatively face-on (b/a > 0.6), and have a minimum of ten individual resolution elements across the face of the galaxy, as defined by the broadest PSF (F160W-band) in the data. The sample contains ~200 galaxies with BViz(Y)JH band HST photometry. The main goal of the study is to better understand the effects of population blending when using a pixel-by-pixel SED fitting (pSED) approach. We outline our pSED fitting method which gives maps of stellar mass, age, star-formation rate, etc. Several examples of individual pSED-fit maps are presented in detail, as well as some preliminary results on the full sample. The pSED method is necessarily biased by the brightest population in a given pixel outshining the rest of the stars, and, therefore, we intend to study this apparent population blending in a set of artificially redshifted images of nearby galaxies, for which we have star-by-star measurements of their stellar populations. This local sample will be used to better interpret the measurements for the higher redshift galaxies. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This archival research is associated with program #13241.
Matched-filter algorithm for subpixel spectral detection in hyperspectral image data
NASA Astrophysics Data System (ADS)
Borough, Howard C.
1991-11-01
Hyperspectral imagery, spatial imagery with associated wavelength data for every pixel, offers a significant potential for improved detection and identification of certain classes of targets. The ability to make spectral identifications of objects which only partially fill a single pixel (due to range or small size) is of considerable interest. Multiband imagery such as Landsat's 5 and 7 band imagery has demonstrated significant utility in the past. Hyperspectral imaging systems with hundreds of spectral bands offer improved performance. To explore the application of different subpixel spectral detection algorithms, a synthesized set of hyperspectral image data (hypercubes) was generated utilizing NASA earth resources and other spectral data. The data was modified using LOWTRAN 7 to model the illumination, atmospheric contributions, attenuations and viewing geometry to represent a nadir view from 10,000 ft. altitude. The base hypercube (HC) represented 16 by 21 spatial pixels with 101 wavelength samples from 0.5 to 2.5 micrometers for each pixel. Insertions were made into the base data to provide random location, random pixel percentage, and random material. Fifteen different hypercubes were generated for blind testing of candidate algorithms. An algorithm utilizing a matched filter in the spectral dimension proved surprisingly good, yielding 100% detections for pixels filled greater than 40% with a standard camouflage paint, and a 50% probability of detection for pixels filled 20% with the paint, with no false alarms. The false alarm rate as a function of the number of spectral bands in the range from 101 to 12 bands was measured and found to increase from zero to 50%, illustrating the value of a large number of spectral bands. This test was on imagery without system noise; the next step is to incorporate typical system noise sources.
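The matched filter in the spectral dimension has a standard form: each pixel's background-subtracted spectrum is projected onto the covariance-whitened target signature, so partially filled pixels score roughly in proportion to their fill fraction. The NumPy sketch below shows this textbook formulation; details of the original implementation may differ.

```python
import numpy as np

def spectral_matched_filter(cube, target):
    """Matched filter in the spectral dimension for a (rows, cols, bands) cube."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)                                   # background mean spectrum
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)  # regularised covariance
    cov_inv = np.linalg.inv(cov)
    d = target.astype(float) - mu
    # Normalised so a pixel equal to the target scores ~1 and pure background ~0;
    # sub-pixel targets score roughly in proportion to their fill fraction.
    scores = (X - mu) @ cov_inv @ d / (d @ cov_inv @ d)
    return scores.reshape(rows, cols)
```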
Joint Dictionary Learning for Multispectral Change Detection.
Lu, Xiaoqiang; Yuan, Yuan; Zheng, Xiangtao
2017-04-01
Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric value of spectral signature and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition of the proposed method is that unchanged pixels in different images can be well reconstructed by the joint dictionary, which corresponds to knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. Then reconstruction error is obtained to discriminate between the changed and unchanged pixels in the different images. To select the proper thresholds for determining changed regions, an automatic threshold selection strategy is presented by minimizing the reconstruction errors of the changed pixels. Extensive experiments on multispectral data have been conducted, and the experimental results compared with the state-of-the-art methods prove the superiority of the proposed method. Contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection. In this case, change detection can be transformed as a sparse representation problem. To the authors' knowledge, few publications utilize joint dictionary learning in change detection; 2) an automatic threshold selection strategy is presented, which minimizes the reconstruction errors of the changed pixels without the prior assumption of the spectral signature. As a result, the threshold value provided by the proposed method can adapt to different data due to the characteristic of joint dictionary learning; and 3) the proposed method makes no prior assumption of the modeling and the handling of the spectral signature, which can be adapted to different data.
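A minimal sketch of the reconstruction-error idea follows, assuming two co-registered images reshaped to (pixels, bands) and an index set of training pixels believed to be unchanged; it uses scikit-learn's dictionary learning and sparse coding routines, and the simple mean-plus-two-sigma threshold stands in for the paper's automatic threshold selection.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def change_map(img1, img2, unchanged_idx, n_atoms=64, k=5, threshold=None):
    """img1, img2: (n_pixels, n_bands) co-registered images; unchanged_idx:
    indices of pixels assumed unchanged, used to learn the joint dictionary."""
    X = np.hstack([img1, img2])                  # joint (stacked) pixel vectors
    dico = DictionaryLearning(n_components=n_atoms, transform_algorithm="omp",
                              transform_n_nonzero_coefs=k, random_state=0)
    D = dico.fit(X[unchanged_idx]).components_
    codes = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=k)
    err = np.linalg.norm(X - codes @ D, axis=1)  # joint reconstruction error
    if threshold is None:
        threshold = err.mean() + 2 * err.std()   # crude stand-in for the automatic scheme
    return err > threshold                       # True where change is declared
```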
NASA Astrophysics Data System (ADS)
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
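For context, a basic ART (Kaczmarz) sweep is sketched below; the system matrix A is taken as given, so the sketch says nothing about the paper's fast area-integral-model computation of its entries.

```python
import numpy as np

def art(A, b, n_iter=10, relax=0.5):
    """Basic ART (Kaczmarz) reconstruction: cycle through the rows of the system
    matrix A, projecting the current image estimate onto each beam equation."""
    m, n = A.shape
    x = np.zeros(n)
    row_norm2 = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(m):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x
```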
HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Michael D.; Dawson, William A.; Hogg, David W.
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domengie, F., E-mail: florian.domengie@st.com; Morin, P.; Bauza, D.
We propose a model for dark current induced by metallic contamination in a CMOS image sensor. Based on Shockley-Read-Hall kinetics, the proposed dark current expression accounts for the electric-field-enhanced emission factor due to the Poole-Frenkel barrier lowering and phonon-assisted tunneling mechanisms. To that aim, we considered the distribution of the electric field magnitude and of the metal atoms in the depth of the pixel. Poisson statistics were used to estimate the random distribution of metal atoms in each pixel for a given contamination dose. Then, we performed a Monte-Carlo-based simulation for each pixel to set the number of metal atoms the pixel contained and the enhancement factor each atom underwent, and obtained a histogram of the number of pixels versus dark current for the full sensor. Excellent agreement with the dark current histogram measured on an ion-implanted gold-contaminated imager has been achieved, in particular, for the description of the distribution tails due to the pixel regions in which the contaminant atoms undergo a large electric field. The agreement remains very good when increasing the temperature by 15 °C. We demonstrated that the amplification of the dark current generated for the typical electric fields encountered in the CMOS image sensors, which depends on the nature of the metal contaminant, may become very large at high electric field. The electron and hole emissions and the resulting enhancement factor are described as a function of the trap characteristics, electric field, and temperature.
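The Poisson/Monte-Carlo construction of the dark-current histogram can be sketched in a few lines; the log-normal enhancement factors below are a placeholder assumption standing in for the field-dependent Poole-Frenkel and tunneling physics, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_pixels = 200_000                 # number of pixels simulated (illustrative)
mean_atoms_per_pixel = 0.2         # set by the contamination dose (assumed value)
i_atom = 5.0                       # baseline dark current per atom, e-/s (assumed value)

# Poisson-distributed number of contaminant atoms in each pixel
atoms = rng.poisson(mean_atoms_per_pixel, n_pixels)

# Each atom receives a field-dependent enhancement factor; a log-normal draw is
# only a placeholder for the Poole-Frenkel / phonon-assisted tunneling physics.
dark = np.zeros(n_pixels)
idx = np.flatnonzero(atoms)
dark[idx] = [i_atom * rng.lognormal(0.0, 1.0, n).sum() for n in atoms[idx]]

counts, edges = np.histogram(dark, bins=200)   # dark-current histogram of the sensor
```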
Automated determination of arterial input function for DCE-MRI of the prostate
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Chang, Ming-Ching; Gupta, Sandeep
2011-03-01
Prostate cancer is one of the commonest cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in literature, none are optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVF). First, we analytically compute bounds on GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel based on local time domain information, and eliminate the pixels with falsely estimated AIFs using the deduced upper and lower bounds. This makes the algorithm robust to signal inhomogeneity. After that, according to spatial information such as similarity and distance between pixels, we formulate the global AIF selection as an energy minimization problem and solve it using a message passing algorithm to further rule out the weak pixels and optimize the detected AIF. Our method is fully automated without training or a priori setting of parameters. Experimental results on clinical data have shown that our method obtained promising detection accuracy (all detected pixels inside major arteries), and a very good match with expert traced manual AIF.
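A hedged sketch of the per-pixel gamma-variate fit follows; the functional form is standard, while the starting values and bounds shown are illustrative rather than the analytically derived bounds of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gvf(t, A, t0, alpha, beta):
    """Gamma variate: zero before the arrival time t0, then a skewed peak."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt ** alpha * np.exp(-dt / beta)

def fit_pixel_curve(t, signal):
    """Fit one pixel's uptake curve; bounds here are illustrative, not the paper's."""
    p0 = [signal.max(), 0.5 * t[np.argmax(signal)], 2.0, 5.0]
    lower = [0.0, 0.0, 0.5, 0.1]
    upper = [10 * signal.max(), t.max(), 10.0, 60.0]
    popt, _ = curve_fit(gvf, t, signal, p0=p0, bounds=(lower, upper))
    return popt  # (A, t0, alpha, beta); candidate AIF pixels violating the bounds are rejected
```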
Digital identification of cartographic control points
NASA Technical Reports Server (NTRS)
Gaskell, R. W.
1988-01-01
Techniques have been developed for the sub-pixel location of control points in satellite images returned by the Voyager spacecraft. The procedure uses digital imaging data in the neighborhood of the point to form a multipicture model of a piece of the surface. Comparison of this model with the digital image in each picture determines the control point locations to about a tenth of a pixel. At this level of precision, previously insignificant effects must be considered, including chromatic aberration, high level imaging distortions, and systematic errors due to navigation uncertainties. Use of these methods in the study of Jupiter's satellite Io has proven very fruitful.
Aircraft target detection algorithm based on high resolution spaceborne SAR imagery
NASA Astrophysics Data System (ADS)
Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing
2018-03-01
In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with a Markov random field (MRF): the Gamma mixture model provides the initial classification result, which is then optimized with the MRF technique using the spatial correlation between pixels. Additionally, morphological methods are employed to extract the airport region of interest (ROI), in which suspected aircraft target samples are identified to reduce false alarms and increase the detection performance. Finally, the aircraft target detection results are presented and verified by simulation tests.
Development of an automated film-reading system for ballistic ranges
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1992-01-01
Software for an automated film-reading system that uses personal computers and digitized shadowgraphs is described. The software identifies pixels associated with fiducial-line and model images, and least-squares procedures are used to calculate the positions and orientations of the images. Automated position and orientation readings for sphere and cone models are compared to those obtained using a manual film reader. When facility calibration errors are removed from these readings, the accuracy of the automated readings is better than the pixel resolution, and it is equal to, or better than, that of the manual readings. The effects of film-reading and facility-calibration errors on calculated aerodynamic coefficients are discussed.
Talwar, Sameer; Roopwani, Rahul; Anderson, Carl A; Buckner, Ira S; Drennen, James K
2017-08-01
Near-infrared chemical imaging (NIR-CI) combines spectroscopy with digital imaging, enabling spatially resolved analysis and characterization of pharmaceutical samples. Hardness and relative density are critical quality attributes (CQA) that affect tablet performance. Intra-sample density or hardness variability can reveal deficiencies in formulation design or the tableting process. This study was designed to develop NIR-CI methods to predict spatially resolved tablet density and hardness. The method was implemented using a two-step procedure. First, NIR-CI was used to develop a relative density/solid fraction (SF) prediction method for pure microcrystalline cellulose (MCC) compacts only. A partial least squares (PLS) model for predicting SF was generated by regressing the spectra of certain representative pixels selected from each image against the compact SF. Pixel selection was accomplished with a threshold based on the Euclidean distance from the median tablet spectrum. Second, micro-indentation was performed on the calibration compacts to obtain hardness values. A univariate model was developed by relating the empirical hardness values to the NIR-CI predicted SF at the micro-indented pixel locations: this model generated spatially resolved hardness predictions for the entire tablet surface.
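The two-step procedure can be sketched with scikit-learn's PLS regression: representative pixels are selected by their Euclidean distance to the median tablet spectrum and regressed against the known compact solid fraction, and the fitted model is then applied pixel-by-pixel. Function names and the quantile threshold below are illustrative assumptions, not the published calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def representative_pixels(image, quantile=0.2):
    """Keep pixels whose spectra lie close (Euclidean distance) to the median spectrum."""
    spectra = image.reshape(-1, image.shape[-1]).astype(float)
    dist = np.linalg.norm(spectra - np.median(spectra, axis=0), axis=1)
    return spectra[dist <= np.quantile(dist, quantile)]

def calibrate_sf_model(images, solid_fractions, n_components=4):
    """Regress selected pixel spectra against the known solid fraction of each compact."""
    X = [representative_pixels(img) for img in images]
    y = [np.full(len(p), sf) for p, sf in zip(X, solid_fractions)]
    return PLSRegression(n_components=n_components).fit(np.vstack(X), np.concatenate(y))

def predict_sf_map(model, image):
    """Apply the calibrated PLS model pixel-by-pixel to obtain a solid-fraction map."""
    rows, cols, bands = image.shape
    return model.predict(image.reshape(-1, bands)).ravel().reshape(rows, cols)
```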
NASA Astrophysics Data System (ADS)
Jin, Y.; Lee, D. K.; Jeong, S. G.
2015-12-01
The ecological and social values of forests have recently been highlighted. Assessments of the biodiversity of forests, as well as their other ecological values, play an important role in regional and national conservation planning. The preservation of habitats is linked to the protection of biodiversity. For mapping habitats, a species distribution model (SDM) is used to predict suitable habitat for significant species, and such distribution modeling is increasingly being used in conservation science. However, pixel-based analysis does not contain contextual or topological information. In order to provide more accurate habitat predictions, a continuous-field view that better approximates the real world is required. Here we analyze and compare, at different scales, habitats of the yellow marten (Martes flavigula), which is a top predator and also an umbrella species in South Korea. Both the object scale, in which a group of pixels with similar spatial and spectral characteristics is treated as a unit, and the pixel scale were used for the SDM. Our analysis using the SDM at different scales suggests that object-scale analysis provides a superior representation of continuous habitat, and thus will be useful in forest conservation planning as well as for species habitat monitoring.
Spectral-spatial classification of hyperspectral imagery with cooperative game
NASA Astrophysics Data System (ADS)
Zhao, Ji; Zhong, Yanfei; Jia, Tianyi; Wang, Xinyu; Xu, Yao; Shu, Hong; Zhang, Liangpei
2018-01-01
Spectral-spatial classification is known to be an effective way to improve classification performance by integrating spectral information and spatial cues for hyperspectral imagery. In this paper, a game-theoretic spectral-spatial classification algorithm (GTA) using a conditional random field (CRF) model is presented, in which CRF is used to model the image considering the spatial contextual information, and a cooperative game is designed to obtain the labels. The algorithm establishes a one-to-one correspondence between image classification and game theory. The pixels of the image are considered as the players, and the labels are considered as the strategies in a game. Similar to the idea of soft classification, the uncertainty is considered to build the expected energy model in the first step. The local expected energy can be quickly calculated, based on a mixed strategy for the pixels, to establish the foundation for a cooperative game. Coalitions can then be formed by the designed merge rule based on the local expected energy, so that a majority game can be performed to make a coalition decision to obtain the label of each pixel. The experimental results on three hyperspectral data sets demonstrate the effectiveness of the proposed classification algorithm.
Adaptive skin segmentation via feature-based face detection
NASA Astrophysics Data System (ADS)
Taylor, Michael J.; Morris, Tim
2014-05-01
Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
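A minimal sketch of the colour model follows: skin samples from detected faces are converted to the normalised rg space, a single Gaussian is fitted, and its (unnormalised) likelihood is evaluated for every pixel. Parameter choices are illustrative, and the face-detection and luma-filtering stages are assumed to have produced the sample pixels already.

```python
import numpy as np

def fit_skin_model(skin_pixels_rgb):
    """Fit a unimodal Gaussian to face-sampled skin pixels in normalised rg space."""
    rgb = skin_pixels_rgb.reshape(-1, 3).astype(float)
    rg = rgb[:, :2] / (rgb.sum(axis=1, keepdims=True) + 1e-9)  # r = R/(R+G+B), g = G/(R+G+B)
    return rg.mean(axis=0), np.linalg.inv(np.cov(rg, rowvar=False))

def skin_probability(image_rgb, mean, cov_inv):
    """Unnormalised Gaussian likelihood that each pixel is skin."""
    h, w, _ = image_rgb.shape
    rgb = image_rgb.reshape(-1, 3).astype(float)
    rg = rgb[:, :2] / (rgb.sum(axis=1, keepdims=True) + 1e-9)
    d = rg - mean
    m2 = np.einsum("ni,ij,nj->n", d, cov_inv, d)   # squared Mahalanobis distance
    return np.exp(-0.5 * m2).reshape(h, w)

# Segmentation is then a binary threshold, e.g. mask = skin_probability(img, m, ci) > 0.5
```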
NASA Astrophysics Data System (ADS)
Donlon, Kevan; Ninkov, Zoran; Baum, Stefi
2016-08-01
Interpixel capacitance (IPC) is a deterministic electronic coupling by which signal generated in one pixel is measured in neighboring pixels. Examination of dark frames from test NIRCam arrays corroborates earlier results and simulations illustrating a signal-dependent coupling. When the signal on an individual pixel is larger, the fractional coupling to nearest neighbors is smaller than when the signal is lower. Frames from test arrays indicate a drop in average coupling from approximately 1.0% at low signals down to approximately 0.65% at high signals, depending on the particular array in question. The photometric ramifications of this non-uniformity are not fully understood. This non-uniformity introduces a non-linearity into the current mathematical model for IPC coupling. IPC coupling has been mathematically formalized as convolution by a blur kernel. Signal dependence requires that the blur kernel be locally defined as a function of signal intensity. Through application of a signal-dependent coupling kernel, the IPC coupling can be modeled computationally. This method allows for simultaneous knowledge of the intrinsic parameters of the image scene, the result of applying a constant IPC, and the result of a signal-dependent IPC. In the age of sub-pixel precision in astronomy, these effects must be properly understood and accounted for in order for the data to accurately represent the object of observation. Implementation of this method is done through Python-scripted processing of images. The introduction of IPC into simulated frames is accomplished through convolution of the image with a blur kernel whose parameters are themselves locally defined functions of the image. These techniques can be used to enhance the data processing pipeline for NIRCam.
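A sketch of the signal-dependent coupling is given below: a 3x3 kernel is applied whose nearest-neighbour coupling alpha is a per-pixel function of the signal, interpolated here linearly between the quoted low- and high-signal values. The linear interpolation and the single-alpha simplification are assumptions made for illustration, not the pipeline's actual kernel.

```python
import numpy as np

def apply_signal_dependent_ipc(image, a_low=0.010, a_high=0.0065, s_max=None):
    """Apply a 3x3 IPC kernel whose nearest-neighbour coupling alpha falls
    linearly from a_low (faint pixels) to a_high (bright pixels)."""
    img = image.astype(float)
    s_max = img.max() if s_max is None else s_max
    alpha = a_low + (a_high - a_low) * np.clip(img / s_max, 0, 1)  # per-pixel coupling
    padded = np.pad(img, 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    # Each pixel keeps (1 - 4*alpha) of its own signal and receives alpha from each
    # of its four neighbours; using each neighbour's own alpha would need a second
    # per-pixel map, so this sketch uses the local value for simplicity.
    return (1 - 4 * alpha) * img + alpha * neighbours
```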
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image, including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data and linear spot segments are identified from the threshold pixel data selected. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear data segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value is multiplied by that pixel's x-location).
SAR backscatter from coniferous forest gaps
NASA Technical Reports Server (NTRS)
Day, John L.; Davis, Frank W.
1992-01-01
A study is in progress comparing Airborne Synthetic Aperture Radar (AIRSAR) backscatter from coniferous forest plots containing gaps to backscatter from adjacent gap-free plots. Issues discussed are how do gaps in the range of 400 to 1600 sq m (approximately 4-14 pixels at intermediate incidence angles) affect forest backscatter statistics and what incidence angles, wavelengths, and polarizations are most sensitive to forest gaps. In order to visualize the slant-range imaging of forest and gaps, a simple conceptual model is used. This strictly qualitative model has led us to hypothesize that forest radar returns at short wavelengths (eg., C-band) and large incidence angles (e.g., 50 deg) should be most affected by the presence of gaps, whereas returns at long wavelengths and small angles should be least affected. Preliminary analysis of 1989 AIRSAR data from forest near Mt. Shasta supports the hypothesis. Current forest backscatter models such as MIMICS and Santa Barbara Discontinuous Canopy Backscatter Model have in several cases correctly predicted backscatter from forest stands based on inputs of measured or estimated forest parameters. These models do not, however, predict within-stand SAR scene texture, or 'intrinsic scene variability' as Ulaby et al. has referred to it. For instance, the Santa Barbara model, which may be the most spatially coupled of the existing models, is not truly spatial. Tree locations within a simulated pixel are distributed according to a Poisson process, as they are in many natural forests, but tree size is unrelated to location, which is not the case in nature. Furthermore, since pixels of a simulated stand are generated independently in the Santa Barbara model, spatial processes larger than one pixel are not modeled. Using a different approach, Oliver modeled scene texture based on an hypothetical forest geometry. His simulated scenes do not agree well with SAR data, perhaps due to the simple geometric model used. Insofar as texture is the expression of biological forest processes, such as succession and disease, and physical ones, such as fire and wind-throw, it contains useful information about the forest, and has value in image interpretation and classification. Forest gaps are undoubtedly important contributors to scene variance. By studying the localized effects of gaps on forest backscatter, guided by our qualitative model, we hope to understand more clearly the manner in which spatial heterogeneities in forests produce variations in backscatter, which collectively give rise to scene texture.
A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
Global maps of rainfall are of great importance in connection with modeling of the earth s climate. Comparison between the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels , usually several kilometers in size. They measure area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from ship borne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on the average only a certain fraction of the observed pixels contain rain. The fraction of area covered by rain decreases, as the size of a pixel becomes smaller. This means that within what looks like a patch of rainy area in a coarse resolution view with larger pixel size, one finds clusters of rainy and dry patches when viewed on a finer scale. The model makes definite predictions about how these and other related statistics depend on the pixel size. These predictions were found to agree well with data. In a subsequent second part of the work we plan to test the model with rain gauge data collected during the TRMM (Tropical Rainfall Measuring Mission) ground validation campaign.
Shu, Jie; Dolman, G E; Duan, Jiang; Qiu, Guoping; Ilyas, Mohammad
2016-04-27
Colour is the most important feature used in quantitative immunohistochemistry (IHC) image analysis; IHC is used to provide information relating to aetiology and to confirm malignancy. Statistical modelling is a technique widely used for colour detection in computer vision. We have developed a statistical model of colour detection applicable to detection of stain colour in digital IHC images. The model was first trained on a large set of colour pixels collected semi-automatically. To speed up the training and detection processes, we removed the luminance (Y) channel of the YCbCr colour space and used 128 histogram bins, which was found to be the optimal number. A maximum likelihood classifier is used to classify pixels in digital slides into positively or negatively stained pixels automatically. The model-based tool was developed within ImageJ to quantify targets identified using IHC and histochemistry. The purpose of the evaluation was to compare the computer model with human evaluation. Several large datasets were prepared and obtained from human oesophageal cancer, colon cancer and liver cirrhosis with different colour stains. Experimental results demonstrate that the model-based tool achieves more accurate results than colour deconvolution and the CMYK model in the detection of brown colour, and is comparable to colour deconvolution in the detection of pink colour. We have also demonstrated that the proposed model shows little inter-dataset variation. A robust and effective statistical model is introduced in this paper. The model-based interactive tool in ImageJ, which can create a visual representation of the statistical model and detect a specified colour automatically, is easy to use and available freely at http://rsb.info.nih.gov/ij/plugins/ihc-toolbox/index.html . Testing of the tool by different users showed only minor inter-observer variations in results.
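The maximum-likelihood step can be sketched as two chroma-only histograms acting as class-conditional likelihoods, one trained on positively stained pixels and one on negatively stained pixels, each with 128 bins per channel. The RGB-to-YCbCr coefficients below are the standard JPEG ones; the sketch is illustrative and is not the ImageJ plugin's code.

```python
import numpy as np

BINS = 128  # histogram bins per chroma channel, as in the paper

def to_cbcr(rgb):
    """JPEG-style RGB -> (Cb, Cr); the luminance channel is discarded."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([cb, cr], axis=-1)

def train_histogram(pixels_rgb):
    """Normalised 2-D CbCr histogram acting as a class-conditional likelihood."""
    cbcr = to_cbcr(pixels_rgb).reshape(-1, 2)
    h, _, _ = np.histogram2d(cbcr[:, 0], cbcr[:, 1],
                             bins=BINS, range=[[0, 256], [0, 256]])
    return (h + 1) / (h.sum() + BINS * BINS)    # Laplace smoothing

def classify(image_rgb, hist_pos, hist_neg):
    """Maximum-likelihood label per pixel: True = positively stained."""
    cbcr = to_cbcr(image_rgb).reshape(-1, 2)
    idx = np.clip((cbcr / 256 * BINS).astype(int), 0, BINS - 1)
    return (hist_pos[idx[:, 0], idx[:, 1]] >
            hist_neg[idx[:, 0], idx[:, 1]]).reshape(image_rgb.shape[:2])
```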
Terrestrial hyperspectral image shadow restoration through fusion with terrestrial lidar
NASA Astrophysics Data System (ADS)
Hartzell, Preston J.; Glennie, Craig L.; Finnegan, David C.; Hauser, Darren L.
2017-05-01
Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from exclusively airborne observations to include terrestrial modalities. In contrast to airborne collection geometry, hyperspectral imagery captured from terrestrial cameras is prone to extensive solar shadowing on vertical surfaces leading to reductions in pixel classification accuracies or outright removal of shadowed areas from subsequent analysis tasks. We demonstrate the use of lidar spatial information for sub-pixel HSI shadow detection and the restoration of shadowed pixel spectra via empirical methods that utilize sunlit and shadowed pixels of similar material composition. We examine the effectiveness of radiometrically calibrated lidar intensity in identifying these similar materials in sun and shade conditions and further evaluate a restoration technique that leverages ratios derived from the overlapping lidar laser and HSI wavelengths. Simulations of multiple lidar wavelengths, i.e., multispectral lidar, indicate the potential for HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance of shadowed HSI pixels is quantified for imagery of a geologic outcrop through improvements in spectral shape, spectral scale, and HSI band correlation.
QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout.
Ni, Yang
2018-02-14
In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout.
The NUC and blind pixel eliminating in the DTDI application
NASA Astrophysics Data System (ADS)
Su, Xiao Feng; Chen, Fan Sheng; Pan, Sheng Da; Gong, Xue Yi; Dong, Yu Cui
2013-12-01
Because infrared CMOS digital TDI (time delay and integration) offers a simple structure, excellent performance and flexible operation, it is being used in more and more applications. Owing to limitations of the production process, the focal plane array of an infrared detector exhibits large non-uniformity (NU) and a certain blind pixel rate; both raise the noise level and degrade TDI performance. In this paper, the most important factors affecting system performance are analyzed: the NU of the optical system, the NU of the plane array and the blind pixels in the plane array. A practical algorithm that accounts for background removal and the linear response model of the infrared detector is used for non-uniformity correction (NUC) when the detector array operates as a digital TDI. To eliminate the impact of blind pixels, a surplus pixel method is introduced, through which the SNR (signal-to-noise ratio) can be improved without changing the spatial and temporal resolution. Finally, an MWIR (medium wave infrared) detector is used in an experiment whose results demonstrate the effectiveness of the method.
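A conventional two-point (gain/offset) correction under the linear response model, plus a simple blind-pixel replacement, is sketched below for orientation; the paper's algorithm additionally involves background removal and the surplus-pixel scheme, which are not reproduced here.

```python
import numpy as np

def two_point_nuc(low_frames, high_frames):
    """Per-pixel gain and offset from uniform low- and high-flux calibration frames,
    assuming the linear response model  raw = gain * flux + offset."""
    low = low_frames.mean(axis=0)
    high = high_frames.mean(axis=0)
    gain = (high - low) / (high.mean() - low.mean())
    offset = low - gain * low.mean()
    return gain, offset

def correct(raw, gain, offset, blind_mask=None):
    """Apply the NUC; optionally replace blind pixels by their 4-neighbour mean."""
    out = (raw - offset) / np.where(gain == 0, 1, gain)
    if blind_mask is not None:
        padded = np.pad(np.where(blind_mask, np.nan, out), 1, constant_values=np.nan)
        neigh = np.nanmean(np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                                     padded[1:-1, :-2], padded[1:-1, 2:]]), axis=0)
        out[blind_mask] = neigh[blind_mask]
    return out
```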
Taguchi, Katsuyuki; Stierstorfer, Karl; Polster, Christoph; Lee, Okkyun; Kappler, Steffen
2018-05-01
The interpixel cross-talk of energy-sensitive photon counting x-ray detectors (PCDs) has been studied and an analytical model (version 2.1) has been developed for double-counting between neighboring pixels due to charge sharing and K-shell fluorescence x-ray emission followed by its reabsorption (Taguchi K, et al., Medical Physics 2016;43(12):6386-6404). While the model version 2.1 simulated the spectral degradation well, it had the following problems that have recently been found to be significant: (1) the spectrum is inaccurate with smaller pixel sizes; (2) the charge cloud size must be smaller than the pixel size; (3) the model underestimates the spectrum/counts for 10-40 keV; and (4) the model version 2.1 cannot handle n-tuple counting with n > 2 (i.e., triple-counting or higher). These problems are inherent to the design of the model version 2.1; therefore, we developed a new model and addressed these problems in this study. We propose a new PCD cross-talk model (version 3.2; PcTK for "photon counting toolkit") that is based on a completely different design concept from the previous version. It uses a numerical approach and starts with a 2-D model of charge sharing (as opposed to an analytical approach and a 1-D model with version 2.1) and addresses all four problems. The model takes the following factors into account: (1) shift-variant electron density of the charge cloud (Gaussian-distributed), (2) detection efficiency, (3) interactions between photons and PCDs via the photoelectric effect, and (4) electronic noise. Correlated noisy PCD data can be generated using either a multivariate normal random number generator or a Poisson random number generator. The effect of the two parameters, the effective charge cloud diameter (d0) and the pixel size (dpix), was studied and the results were compared with Monte Carlo simulations and the previous model version 2.1. Finally, a script for the workflow for CT image quality assessment has been developed, which starts with a few material density images, generates material-specific sinogram (line integral) data and noisy PCD data with spectral distortion using the model version 3.2, and reconstructs PCD-CT images for four energy windows. The model version 3.2 addressed all four problems listed above. The spectra with dpix = 56-113 μm agreed qualitatively with those of the Medipix3 detector with dpix = 55-110 μm without charge summing mode. The counts for 10-40 keV were larger than with the previous model (version 2.1) and agreed very well with MC simulations (root-mean-square difference values with model version 3.2 were decreased to 16%-67% of the values with version 2.1). There were many non-zero off-diagonal elements for n-tuple counting with n > 2 in the normalized covariance matrix of 3 × 3 neighboring pixels. Reconstructed images showed biases and artifacts attributed to the spectral distortion due to charge sharing and fluorescence x rays. We have developed a new PCD model for spatio-energetic cross-talk and correlation between PCD pixels. The workflow demonstrated the utility of the model for general or task-specific image quality assessments for PCD-CT. Note: the program (PcTK) and the workflow scripts have been made available to academic researchers. Interested readers should visit the website (pctk.jhu.edu) or contact the corresponding author. © 2018 American Association of Physicists in Medicine.
Spatial clustering of pixels of a multispectral image
Conger, James Lynn
2014-08-19
A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and its most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
Direct Estimation of Kinetic Parametric Images for Dynamic PET
Wang, Guobao; Qi, Jinyi
2013-01-01
Dynamic positron emission tomography (PET) can monitor spatiotemporal distribution of radiotracer in vivo. The spatiotemporal information can be used to estimate parametric images of radiotracer kinetics that are of physiological and biochemical interests. Direct estimation of parametric images from raw projection data allows accurate noise modeling and has been shown to offer better image quality than conventional indirect methods, which reconstruct a sequence of PET images first and then perform tracer kinetic modeling pixel-by-pixel. Direct reconstruction of parametric images has gained increasing interests with the advances in computing hardware. Many direct reconstruction algorithms have been developed for different kinetic models. In this paper we review the recent progress in the development of direct reconstruction algorithms for parametric image estimation. Algorithms for linear and nonlinear kinetic models are described and their properties are discussed. PMID:24396500
Modelling radiation damage to ESA's Gaia satellite CCDs
NASA Astrophysics Data System (ADS)
Seabroke, George; Holland, Andrew; Cropper, Mark
2008-07-01
The Gaia satellite is a high-precision astrometry, photometry and spectroscopic ESA cornerstone mission, currently scheduled for launch in late 2011. Its primary science drivers are the composition, formation and evolution of the Galaxy. Gaia will not achieve its scientific requirements without detailed calibration and correction for radiation damage. Microscopic models of Gaia's CCDs are being developed to simulate the effect of radiation damage, charge trapping, which causes charge transfer inefficiency. The key to calculating the probability of a photoelectron being captured by a trap is the 3D electron density within each CCD pixel. However, this has not been physically modelled for Gaia CCD pixels. In this paper, the first of a series, we motivate the need for such specialised 3D device modelling and outline how its future results will fit into Gaia's overall radiation calibration strategy.
NASA Astrophysics Data System (ADS)
Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart
2016-11-01
The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Monte Carlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed storage gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable for comparing transfer efficiency performance across different pixel geometries.
Friesz, Aaron M.; Wylie, Bruce K.; Howard, Daniel M.
2017-01-01
Crop cover maps have become widely used in a range of research applications. Multiple crop cover maps have been developed to suit particular research interests. The National Agricultural Statistics Service (NASS) Cropland Data Layers (CDL) are a series of commonly used crop cover maps for the conterminous United States (CONUS) that span from 2008 to 2013. In this investigation, we sought to contribute to the availability of consistent CONUS crop cover maps by extending temporal coverage of the NASS CDL archive back eight additional years to 2000 by creating annual NASS CDL-like crop cover maps derived from a classification tree model algorithm. We used over 11 million records to train a classification tree algorithm and develop a crop classification model (CCM). The model was used to create crop cover maps for the CONUS for years 2000–2013 at 250 m spatial resolution. The CCM and the maps for years 2008–2013 were assessed for accuracy relative to resampled NASS CDLs. The CCM performed well against a withheld test data set with a model prediction accuracy of over 90%. The assessment of the crop cover maps indicated that the model performed well spatially, placing crop cover pixels within their known domains; however, the model did show a bias towards the ‘Other’ crop cover class, which caused frequent misclassifications of pixels around the periphery of large crop cover patch clusters and of pixels that form small, sparsely dispersed crop cover patches.
Estimating cropland NPP using national crop inventory and MODIS derived crop specific parameters
NASA Astrophysics Data System (ADS)
Bandaru, V.; West, T. O.; Ricciuto, D. M.
2011-12-01
Estimates of cropland net primary production (NPP) are needed as input for estimates of carbon flux and carbon stock changes. Cropland NPP is currently estimated using terrestrial ecosystem models, satellite remote sensing, or inventory data. All three of these methods have benefits and problems. Terrestrial ecosystem models are often better suited for prognostic estimates rather than diagnostic estimates. Satellite-based NPP estimates often underestimate productivity on intensely managed croplands and are also limited to a few broad crop categories. Inventory-based estimates are consistent with nationally collected data on crop yields, but they lack sub-county spatial resolution. Integrating these methods will allow for spatial resolution consistent with current land cover and land use, while also maintaining total biomass quantities recorded in national inventory data. The main objective of this study was to improve cropland NPP estimates by using a modification of the CASA NPP model with individual crop biophysical parameters partly derived from inventory data and MODIS 8day 250m EVI product. The study was conducted for corn and soybean crops in Iowa and Illinois for years 2006 and 2007. We used EVI as a linear function for fPAR, and used crop land cover data (56m spatial resolution) to extract individual crop EVI pixels. First, we separated mixed pixels of both corn and soybean that occur when MODIS 250m pixel contains more than one crop. Second, we substituted mixed EVI pixels with nearest pure pixel values of the same crop within 1km radius. To get more accurate photosynthetic active radiation (PAR), we applied the Mountain Climate Simulator (MTCLIM) algorithm with the use of temperature and precipitation data from the North American Land Data Assimilation System (NLDAS-2) to generate shortwave radiation data. Finally, county specific light use efficiency (LUE) values of each crop for years 2006 to 2007 were determined by application of mean county inventory NPP and EVI-derived APAR into the Monteith equation. Results indicate spatial variability in LUE values across Iowa and Illinois. Northern regions of both Iowa and Illinois have higher LUE values than southern regions. This trend is reflected in NPP estimates. Results also show that corn has higher LUE values than soybean, resulting in higher NPP for corn than for soybean. Current NPP estimates were compared with NPP estimates from MOD17A3 product and with county inventory-based NPP estimates. Results indicate that current NPP estimates closely agree with inventory-based estimates, and that current NPP estimates are higher than those of the MOD17A3 product. It was also found that when mixed pixels were substituted with nearest pure pixels, revised NPP estimates were improved showing better agreement with inventory-based estimates.
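The Monteith bookkeeping behind the county-specific LUE values can be sketched directly; the linear fPAR-EVI coefficients below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def fpar_from_evi(evi, a=1.25, b=-0.10):
    """Linear fPAR-EVI relation (coefficients are illustrative assumptions)."""
    return np.clip(a * np.asarray(evi, float) + b, 0.0, 1.0)

def county_lue(npp_inventory, evi_series, par_series):
    """Invert Monteith's equation NPP = LUE * sum(fPAR * PAR) for a county/crop LUE
    (g C per MJ of APAR), given inventory NPP (g C m-2) and seasonal EVI/PAR series."""
    apar = np.sum(fpar_from_evi(evi_series) * np.asarray(par_series, float))
    return npp_inventory / apar

def pixel_npp(lue, evi_series, par_series):
    """Forward Monteith calculation for one pixel over the growing season."""
    return lue * np.sum(fpar_from_evi(evi_series) * np.asarray(par_series, float))
```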
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lentine, Anthony L.; Nielson, Gregory N.; Cruz-Campa, Jose Luis
A photovoltaic module includes colorized reflective photovoltaic cells that act as pixels. The colorized reflective photovoltaic cells are arranged so that reflections from the photovoltaic cells or pixels visually combine into an image on the photovoltaic module. The colorized photovoltaic cell or pixel is composed of a set of 100 to 256 base color sub-pixel reflective segments or sub-pixels. The color of each pixel is determined by the combination of base color sub-pixels forming the pixel. As a result, each pixel can have a wide variety of colors using a set of base colors, which are created from sub-pixel reflective segments having standard film thicknesses.
Measuring distance “as the horse runs”: Cross-scale comparison of terrain-based metrics
Buttenfield, Barbara P.; Ghandehari, M; Leyk, S; Stanislawski, Larry V.; Brantley, M E; Qiang, Yi
2016-01-01
Distance metrics play significant roles in spatial modeling tasks, such as flood inundation (Tucker and Hancock 2010), stream extraction (Stanislawski et al. 2015), power line routing (Kiessling et al. 2003) and analysis of surface pollutants such as nitrogen (Harms et al. 2009). Avalanche risk is based on slope, aspect, and curvature, all directly computed from distance metrics (Gutiérrez 2012). Distance metrics anchor variogram analysis, kernel estimation, and spatial interpolation (Cressie 1993). Several approaches are employed to measure distance. Planar metrics measure straight line distance between two points (“as the crow flies”) and are simple and intuitive, but suffer from uncertainties. Planar metrics assume that Digital Elevation Model (DEM) pixels are rigid and flat, as tiny facets of ceramic tile approximating a continuous terrain surface. In truth, terrain can bend, twist and undulate within each pixel. Working with Light Detection and Ranging (lidar) data or High Resolution Topography to achieve precise measurements presents challenges, as filtering can eliminate or distort significant features (Passalacqua et al. 2015). The current availability of lidar data is far from comprehensive in developed nations, and non-existent in many rural and undeveloped regions. Notwithstanding computational advances, distance estimation on DEMs has never been systematically assessed, due to assumptions that improvements are so small that surface adjustment is unwarranted. For individual pixels, inaccuracies may be small, but additive effects can propagate dramatically, especially in regional models (e.g., disaster evacuation) or global models (e.g., sea level rise) where pixels span dozens to hundreds of kilometers (Usery et al. 2003). Such models are increasingly common, lending compelling reasons to understand shortcomings in the use of planar distance metrics. Researchers have studied curvature-based terrain modeling. Jenny et al. (2011) use curvature to generate hierarchical terrain models. Schneider (2001) creates a ‘plausibility’ metric for DEM-extracted structure lines. d’Oleire-Oltmanns et al. (2014) adopt object-based image processing as an alternative to working with DEMs, acknowledging that the pre-processing involved in converting terrain into an object model is computationally intensive and likely infeasible for some applications. This paper compares planar distance with surface-adjusted distance, evolving from distance “as the crow flies” to distance “as the horse runs”. Several methods are compared for DEMs spanning a range of resolutions for the study area and validated against a 3 meter (m) lidar data benchmark. Error magnitudes vary with pixel size and with the method of surface adjustment. The rate of error increase may also vary with landscape type (terrain roughness, precipitation regimes and land settlement patterns). Cross-scale analysis for a single study area is reported here. Additional areas will be presented at the conference.
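To make the planar versus surface-adjusted contrast concrete, the following sketch compares the two distances along a profile sampled from a synthetic DEM. The profile-sampling adjustment used here is only one of several possible surface-adjustment schemes, not the specific methods evaluated in the paper, and the terrain and cell size are made up.

```python
# Hedged sketch contrasting planar ("as the crow flies") distance with a simple
# surface-adjusted ("as the horse runs") distance along a DEM profile.
import numpy as np

def planar_distance(p0, p1):
    return float(np.hypot(p1[0] - p0[0], p1[1] - p0[1]))

def surface_distance(p0, p1, dem, cellsize, samples=200):
    """Accumulate 3-D segment lengths along an elevation profile between p0 and p1."""
    xs = np.linspace(p0[0], p1[0], samples)
    ys = np.linspace(p0[1], p1[1], samples)
    rows = np.clip((ys / cellsize).astype(int), 0, dem.shape[0] - 1)
    cols = np.clip((xs / cellsize).astype(int), 0, dem.shape[1] - 1)
    z = dem[rows, cols]                       # nearest-cell elevations along the profile
    dxy = np.hypot(np.diff(xs), np.diff(ys))
    return float(np.sum(np.hypot(dxy, np.diff(z))))

# Synthetic undulating terrain on 30 m cells.
cell = 30.0
yy, xx = np.mgrid[0:100, 0:100] * cell
dem = 50.0 * np.sin(xx / 400.0) * np.cos(yy / 300.0)
a, b = (150.0, 200.0), (2700.0, 2500.0)
print("planar :", planar_distance(a, b))
print("surface:", surface_distance(a, b, dem, cell))
```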
Comparison and Analysis of Geometric Correction Models of Spaceborne SAR
Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong
2016-01-01
Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, whose precision is below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy (under one pixel), whereas the RPC model consumes one third of the time of the EDM model.
Spring floods prediction with the use of optical satellite data in Québec
NASA Astrophysics Data System (ADS)
Roy, A.; Royer, A.; Turcotte, R.
2009-04-01
The Centre d'expertise hydrique du Québec (CEHQ) operates a distributed hydrological model, which integrates a snow model, for the management of dams in the south of Québec. The estimation of the quantity of snowmelt water in spring remains a variable with a large uncertainty and generally induces a significant error in stream flow simulation. The National Snow and Ice Data Center (NSIDC) produces, from MODIS (Moderate Resolution Imaging Spectroradiometer) data, continuous and homogeneous spatial snow cover (snow/snow-free) data over the whole territory, but with cloud contamination. This research aims to improve the prediction of spring floods and the estimation of the discharge rate by integrating snow cover data into the CEHQ's snow model. The study is done on two watersheds: the du Nord river watershed (45.8°N) and the Aux Écorces river watershed (47.9°N). The snow model used in the study (SPH-AV) is an implementation, developed by the CEHQ within its hydrological forecast system, of the HYDROTEL snowmelt model used to simulate meltwater. The estimated meltwater is then used as input to the empirical hydrological model MOHYSE to simulate stream flow. MODIS data are considered valid only when the cloud cover on each pixel of the watersheds is less than 30%. A pixel-by-pixel correction is applied to the snowpack when there is a difference between satellite snow cover and modeled snow cover. When the model shows too much snow, a factor is applied to temperatures by an iterative process (starting from the last valid MODIS data) to melt the snow. In the opposite case, the snow quantity added since the last valid MODIS data is found by an iterative process so that the pixel's snow water equivalent equals the minimum nonzero value of its neighbors. The study shows, through the simulations done on the two watersheds, the value of using the snow/snow-free product for the operational update of snow water equivalent with the objective of improving spring snowmelt stream flow simulations. The binary aspect (snow/snow-free) of the data is, however, a limitation. Alternatives are discussed (passive microwave data). Keywords: satellite snow cover data, MODIS, satellite data integration, snow model, hydrological model, stream flow simulation, flood.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Chumin; Kanicki, Jerzy
2014-09-01
The breast cancer detection rate for digital breast tomosynthesis (DBT) is limited by the x-ray image quality. The limiting Nyquist frequency for current DBT systems is around 5 lp/mm, while the fine image details contained in the high spatial frequency region (>5 lp/mm) are lost. Also, today the tomosynthesis patient dose is high (0.67-3.52 mGy). To address current issues, in this paper, for the first time, a high-resolution low-dose organic photodetector/amorphous In-Ga-Zn-O thin-film transistor (a-IGZO TFT) active pixel sensor (APS) x-ray imager is proposed for next generation DBT systems. The indirect x-ray detector is based on a combination of a novel low-cost organic photodiode (OPD) and a cesium iodide-based (CsI:Tl) scintillator. The proposed APS x-ray imager overcomes the difficulty of weak signal detection, when small pixel size and low exposure conditions are used, by an on-pixel signal amplification with a significant charge gain. The electrical performance of the a-IGZO TFT APS pixel circuit is investigated by SPICE simulation using a modified Rensselaer Polytechnic Institute amorphous silicon (a-Si:H) TFT model. Finally, the noise, detective quantum efficiency (DQE), and resolvability of the complete system are modeled using the cascaded system formalism. The result demonstrates that a large charge gain of 31-122 is achieved for the proposed high-mobility (5-20 cm²/V s) amorphous metal-oxide TFT APS. The charge gain is sufficient to eliminate the TFT thermal noise, flicker noise as well as the external readout circuit noise. Moreover, the low TFT (<10⁻¹³ A) and OPD (<10⁻⁸ A/cm²) leakage currents can further reduce the APS noise. Cascaded system analysis shows that the proposed APS imager with a 75 μm pixel pitch can effectively resolve the Nyquist frequency of 6.67 lp/mm, which can be further improved to ∼10 lp/mm if the pixel pitch is reduced to 50 μm. Moreover, the detector entrance exposure per projection can be reduced from 1 to 0.3 mR without a significant reduction of DQE. The signal-to-noise ratio of the a-IGZO APS imager under 0.3 mR x-ray exposure is comparable to that of an a-Si:H passive pixel sensor imager under 1 mR, indicating good image quality under low dose. A threefold reduction of current tomosynthesis dose is expected if the proposed technology is combined with an advanced DBT image reconstruction method. The proposed a-IGZO APS x-ray imager with a pixel pitch < 75 μm is capable of achieving a high spatial frequency (>6.67 lp/mm) and a low dose (<0.4 mGy) in next generation DBT systems.
Probabilistic multi-resolution human classification
NASA Astrophysics Data System (ADS)
Tu, Jun; Ran, H.
2006-02-01
Recently there has been some interest in using infrared cameras for human detection because of the sharply decreasing prices of infrared cameras. The training data used in our work for developing the probabilistic template consist of images known to contain humans in different poses and orientations but having the same height. Multiresolution templates are constructed; they are based on contours and edges. This is done so that the model does not learn the intensity variations among the background pixels and intensity variations among the foreground pixels. Each template at every level is then translated so that the centroid of the non-zero pixels matches the geometrical center of the image. After this normalization step, for each pixel of the template, the probability of it being pedestrian is calculated based on how frequently it appears as 1 in the training data. We also use gait periodicity to verify the pedestrian in a Bayesian manner for the whole blob in a probabilistic way. The videos had quite a lot of variation in the scenes, sizes of people, amount of occlusion and clutter in the backgrounds, as is clearly evident. Preliminary experiments show the robustness of the approach.
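The template-building step described above (centroid normalization followed by per-pixel frequency counting) can be sketched as follows. The silhouettes, image size and training set are synthetic stand-ins, not the paper's data; this is only an illustration of the centroid-aligned frequency estimate.

```python
# Hedged sketch of building a probabilistic template: each training example is a
# binary silhouette (1 = person pixel); after centering on the centroid of its
# non-zero pixels, the per-pixel probability is the frequency of ones across
# the training set. Shapes and data are illustrative assumptions.
import numpy as np

def center_on_centroid(mask):
    """Shift a binary mask so the centroid of its non-zero pixels sits at the image center."""
    ys, xs = np.nonzero(mask)
    dy = int(round(mask.shape[0] / 2 - ys.mean()))
    dx = int(round(mask.shape[1] / 2 - xs.mean()))
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)

def probabilistic_template(masks):
    """Per-pixel probability of belonging to a pedestrian, from binary training masks."""
    centered = np.stack([center_on_centroid(m) for m in masks])
    return centered.mean(axis=0)

# Toy training set: noisy rectangles standing in for pedestrian silhouettes.
rng = np.random.default_rng(1)
masks = []
for _ in range(100):
    m = np.zeros((64, 32), dtype=np.uint8)
    top, left = rng.integers(5, 15), rng.integers(5, 12)
    m[top:top + 40, left:left + 12] = 1
    masks.append(m)

template = probabilistic_template(masks)
print("template shape:", template.shape, "max prob:", template.max())
```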
Charge collection properties in an irradiated pixel sensor built in a thick-film HV-SOI process
NASA Astrophysics Data System (ADS)
Hiti, B.; Cindro, V.; Gorišek, A.; Hemperek, T.; Kishishita, T.; Kramberger, G.; Krüger, H.; Mandić, I.; Mikuž, M.; Wermes, N.; Zavrtanik, M.
2017-10-01
Investigation of HV-CMOS sensors for use as a tracking detector in the ATLAS experiment at the upgraded LHC (HL-LHC) has recently been an active field of research. A potential candidate for a pixel detector built in Silicon-On-Insulator (SOI) technology has already been characterized in terms of radiation hardness to TID (Total Ionizing Dose) and charge collection after a moderate neutron irradiation. In this article we present results of an extensive irradiation hardness study with neutrons up to a fluence of 1×10¹⁶ neq/cm². Charge collection in a passive pixelated structure was measured by Edge Transient Current Technique (E-TCT). The evolution of the effective space charge concentration was found to be compliant with the acceptor removal model, with the minimum of the space charge concentration being reached after 5×10¹⁴ neq/cm². An investigation of the in-pixel uniformity of the detector response revealed parasitic charge collection by the epitaxial silicon layer characteristic for the SOI design. The results were backed by a numerical simulation of charge collection in an equivalent detector layout.
Segmentation of images of abdominal organs.
Wu, Jie; Kamath, Markad V; Noseworthy, Michael D; Boylan, Colm; Poehlman, Skip
2008-01-01
Abdominal organ segmentation, which is the delineation of organ areas in the abdomen, plays an important role in the process of radiological evaluation. Attempts to automate segmentation of abdominal organs will aid radiologists who are required to view thousands of images daily. This review outlines the current state-of-the-art semi-automated and automated methods used to segment abdominal organ regions from computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound images. Segmentation methods generally fall into three categories: pixel based, region based and boundary tracing. While pixel-based methods classify each individual pixel, region-based methods identify regions with similar properties. Boundary tracing is accomplished by a model of the image boundary. This paper evaluates the effectiveness of the above algorithms with an emphasis on their advantages and disadvantages for abdominal organ segmentation. Several evaluation metrics that compare machine-based segmentation with that of an expert (radiologist) are identified and examined. Finally, features based on intensity as well as the texture of a small region around a pixel are explored. This review concludes with a discussion of possible future trends for abdominal organ segmentation.
Shade images of forested areas obtained from LANDSAT MSS data
NASA Technical Reports Server (NTRS)
Shimabukuro, Yosio Edemir; Smith, James A.
1989-01-01
The pixel size in present-day remote sensing systems is large enough to include different types of land cover. Depending upon the target area, several components may be present within the pixel. In forested areas, generally, three main components are present: tree canopy, soil (understory), and shadow. The objective is to generate a shade (shadow) image of forested areas from multispectral measurements of LANDSAT MSS (Multispectral Scanner) data by implementing a linear mixing model, where shadow is considered as one of the primary components in a pixel. The shade images are related to the observed variation in forest structure, i.e., the proportion of inferred shadow in a pixel is related to different forest ages, forest types, and tree crown cover. The Constrained Least Squares (CLS) method is used to generate shade images for forest of eucalyptus and vegetation of cerrado using LANDSAT MSS imagery over the Itapeva study area in Brazil. The resulting shade images may explain the differences in age for the eucalyptus forest and the differences in tree crown cover for the cerrado vegetation.
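The linear mixing and constrained least squares step can be illustrated with a small sketch: each pixel's reflectance vector is modeled as a non-negative, sum-to-one combination of canopy, soil and shade endmembers. The endmember spectra, band count and the soft sum-to-one trick below are illustrative assumptions, not the MSS endmembers or exact solver used in the study.

```python
# Hedged sketch of constrained least squares (CLS) unmixing: solve for
# non-negative fractions that approximately sum to one via an augmented row.
import numpy as np
from scipy.optimize import nnls

# Columns: vegetation, soil, shade endmember reflectance in 4 MSS-like bands.
E = np.array([[0.04, 0.18, 0.01],
              [0.07, 0.22, 0.01],
              [0.35, 0.28, 0.02],
              [0.45, 0.32, 0.02]])

def cls_fractions(pixel, endmembers, weight=100.0):
    """min ||E f - r|| subject to f >= 0 and sum(f) ~= 1 (soft constraint)."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    f, _ = nnls(A, b)
    return f

pixel = 0.5 * E[:, 0] + 0.2 * E[:, 1] + 0.3 * E[:, 2]   # synthetic mixed pixel
veg, soil, shade = cls_fractions(pixel, E)
print(f"vegetation={veg:.2f} soil={soil:.2f} shade={shade:.2f}")
```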
Machine learning to analyze images of shocked materials for precise and accurate measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dresselhaus-Cooper, Leora; Howard, Marylesa; Hock, Margaret C.
A supervised machine learning algorithm, called locally adaptive discriminant analysis (LADA), has been developed to locate boundaries between identifiable image features that have varying intensities. LADA is an adaptation of image segmentation, which includes techniques that find the positions of image features (classes) using statistical intensity distributions for each class in the image. In order to place a pixel in the proper class, LADA considers the intensity at that pixel and the distribution of intensities in local (nearby) pixels. This paper presents the use of LADA to provide, with statistical uncertainties, the positions and shapes of features within ultrafast images of shock waves. We demonstrate the ability to locate image features including crystals, density changes associated with shock waves, and material jetting caused by shock waves. This algorithm can analyze images that exhibit a wide range of physical phenomena because it does not rely on comparison to a model. LADA enables analysis of images from shock physics with statistical rigor independent of underlying models or simulations.
Kriging in the Shadows: Geostatistical Interpolation for Remote Sensing
NASA Technical Reports Server (NTRS)
Rossi, Richard E.; Dungan, Jennifer L.; Beck, Louisa R.
1994-01-01
It is often useful to estimate obscured or missing remotely sensed data. Traditional interpolation methods, such as nearest-neighbor or bilinear resampling, do not take full advantage of the spatial information in the image. An alternative method, a geostatistical technique known as indicator kriging, is described and demonstrated using a Landsat Thematic Mapper image in southern Chiapas, Mexico. The image was first classified into pasture and nonpasture land cover. For each pixel that was obscured by cloud or cloud shadow, the probability that it was pasture was assigned by the algorithm. An exponential omnidirectional variogram model was used to characterize the spatial continuity of the image for use in the kriging algorithm. Assuming a cutoff probability level of 50%, the error was shown to be 17% with no obvious spatial bias but with some tendency to categorize nonpasture as pasture (overestimation). While this is a promising result, the method's practical application in other missing data problems for remotely sensed images will depend on the amount and spatial pattern of the unobscured pixels and missing pixels and the success of the spatial continuity model used.
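Two ingredients named above, the indicator transform of the classified image and an exponential omnidirectional variogram model, can be written compactly as in the sketch below. The nugget, sill and range values are illustrative assumptions, not those fitted to the Chiapas scene.

```python
# Hedged sketch: indicator transform (1 = pasture, 0 = non-pasture) and an
# exponential variogram model of the kind used to describe spatial continuity
# for indicator kriging. Parameter values are assumptions.
import numpy as np

def indicator(classified, pasture_value=1):
    """Indicator transform: 1 where the pixel is pasture, 0 elsewhere."""
    return (classified == pasture_value).astype(float)

def exponential_variogram(h, nugget=0.05, sill=0.25, rng=500.0):
    """gamma(h) = nugget + (sill - nugget) * (1 - exp(-3h / range))."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

lags = np.array([0.0, 100.0, 250.0, 500.0, 1000.0])      # lag distances in metres
print(exponential_variogram(lags))
```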
Glue detection based on teaching points constraint and tracking model of pixel convolution
NASA Astrophysics Data System (ADS)
Geng, Lei; Ma, Xiao; Xiao, Zhitao; Wang, Wen
2018-01-01
On-line glue detection based on machine vision is significant for rust protection and strengthening in car production. Shadow stripes caused by reflected light and the unevenness of the inside front cover of the car reduce the accuracy of glue detection. In this paper, we propose an effective algorithm to distinguish the edges of the glue and shadow stripes. Teaching points are utilized to calculate the slope between two adjacent points. Then a tracking model based on pixel convolution along the motion direction is designed to segment several local rectangular regions using distance. The distance is the height of the rectangular region. The pixel convolution along the motion direction is proposed to extract the edges of glue in each local rectangular region. A dataset with different illuminations and stripe shapes of varying complexity, which includes 500 thousand images captured from the camera of the glue gun machine, is used to evaluate the proposed method. Experimental results demonstrate that the proposed method can detect the edges of glue accurately. The shadow stripes are distinguished and removed effectively. Our method achieves 99.9% accuracy on the image dataset.
Automatic tissue segmentation of breast biopsies imaged by QPI
NASA Astrophysics Data System (ADS)
Majeed, Hassaan; Nguyen, Tan; Kandel, Mikhail; Marcias, Virgilia; Do, Minh; Tangella, Krishnarao; Balla, Andre; Popescu, Gabriel
2016-03-01
The current tissue evaluation method for breast cancer would greatly benefit from higher throughput and less inter-observer variation. Since quantitative phase imaging (QPI) measures physical parameters of tissue, it can be used to find quantitative markers, eliminating observer subjectivity. Furthermore, since the pixel values in QPI remain the same regardless of the instrument used, classifiers can be built to segment various tissue components without need for color calibration. In this work we use a texton-based approach to segment QPI images of breast tissue into various tissue components (epithelium, stroma or lumen). A tissue microarray comprising 900 unstained cores from 400 different patients was imaged using Spatial Light Interference Microscopy. The training data were generated by manually segmenting the images for 36 cores and labelling each pixel (epithelium, stroma or lumen). For each pixel in the data, a response vector was generated by the Leung-Malik (LM) filter bank and these responses were clustered using the k-means algorithm to find the centers (called textons). A random forest classifier was then trained to find the relationship between a pixel's label and the histogram of these textons in that pixel's neighborhood. The segmentation was carried out on the validation set by calculating the texton histogram in a pixel's neighborhood and generating a label based on the model learnt during training. Segmentation of the tissue into various components is an important step toward efficiently computing parameters that are markers of disease. Automated segmentation, followed by diagnosis, can improve the accuracy and speed of analysis leading to better health outcomes.
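The texton pipeline (filter-bank responses, k-means textons, random forest on local texton histograms) can be sketched end to end as below. A small Gaussian-derivative bank stands in for the full Leung-Malik bank, and the image, labels, number of textons and window size are illustrative assumptions rather than the study's settings.

```python
# Hedged sketch of a texton-based segmentation pipeline, assuming a reduced
# filter bank; not the authors' implementation or parameters.
import numpy as np
from scipy import ndimage as ndi
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def filter_bank_responses(img, sigmas=(1, 2, 4)):
    """Stack of smoothed images and x/y derivatives as a mini filter bank."""
    feats = []
    for s in sigmas:
        feats.append(ndi.gaussian_filter(img, s))
        feats.append(ndi.gaussian_filter(img, s, order=(0, 1)))
        feats.append(ndi.gaussian_filter(img, s, order=(1, 0)))
    return np.stack(feats, axis=-1)              # (H, W, n_filters)

def texton_histograms(texton_map, k, window=9):
    """Per-pixel histogram of texton labels in a window x window neighborhood."""
    hists = [ndi.uniform_filter((texton_map == t).astype(float), size=window)
             for t in range(k)]
    return np.stack(hists, axis=-1)              # (H, W, k)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                       # stand-in for a QPI phase image
labels = (img > 0.5).astype(int)                 # stand-in manual segmentation (2 classes)

resp = filter_bank_responses(img)
k = 8
km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(resp.reshape(-1, resp.shape[-1]))
textons = km.labels_.reshape(img.shape)

H = texton_histograms(textons, k)
rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(H.reshape(-1, k), labels.ravel())
pred = rf.predict(H.reshape(-1, k)).reshape(img.shape)
print("training-pixel agreement:", (pred == labels).mean())
```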
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2016-12-01
InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between 2 SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances. We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, X; Cheng, Z; Deen, J
Purpose: Photon counting CT is a new imaging technology that can provide tissue composition information such as calcium/iodine content quantification. Cadmium zinc telluride (CZT) is considered a good candidate for photon counting CT due to its relatively high atomic number and band gap. One potential challenge is the degradation of both spatial and energy resolution as a fine electrode pitch is deployed (<50 µm). We investigated the extent of the charge sharing effect as a function of gap width, bias voltage and depth-of-interaction (DOI). Methods: The initial electron cloud size and diffusion process were modeled analytically. The valid range of the charge sharing effect refers to the range over which both signals of adjacent electrodes are above the triggering threshold (10% of the amplitude of 60 keV X-ray photons). The intensity ratios of output in three regions (I1/I2/I3: left pixel, gap area and right pixel) were calculated. With Gaussian white noise modeled (an SNR of 5 based upon the preliminary experiments), the sub-pitch resolution as a function of the spatial position in-between two pixels was studied. Results: The valid range of charge sharing increases linearly with depth-of-interaction (DOI) but decreases with gap width and bias voltage. For a 1.5 mm thick CZT detector (pitch: 50 µm, bias: 400 V), the range increases from ∼90 µm up to ∼110 µm. Such an increase can be attributed to a longer travel distance and the associated electron cloud broadening. The achievable sub-pitch resolution is in the range of ∼10–30 µm. Conclusion: The preliminary results demonstrate that sub-pixel spatial resolution can be achieved using the ratio of amplitudes of two neighboring pixels. Such a ratio may also be used to correct charge loss and help improve the energy resolution of a CZT detector. The impact of characteristic X-rays hitting adjacent pixels (i.e., multiple interaction) on charge sharing is currently being investigated.
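For intuition on the diffusion part of the charge-sharing picture, the sketch below evaluates the textbook lateral spread of a drifting electron cloud in a uniform field, sigma = sqrt(2 (kT/q) z L / V) for a detector of thickness L biased at V with an interaction at depth z. This is only the diffusion term: the initial cloud size, trapping and the triggering-threshold geometry discussed in the abstract are not modeled, and the numbers here are illustrative, not the study's results.

```python
# Hedged sketch of lateral electron-cloud diffusion during drift in a planar
# CZT detector with a uniform field E = V/L. Diffusion term only; assumptions
# noted in the lead-in.
import numpy as np

def diffusion_sigma_um(z_mm, thickness_mm=1.5, bias_V=400.0, kT_over_q=0.0259):
    """1-sigma lateral spread (micrometres) of the drifting electron cloud."""
    z = z_mm * 1e-3
    L = thickness_mm * 1e-3
    sigma_m = np.sqrt(2.0 * kT_over_q * z * L / bias_V)
    return sigma_m * 1e6

for z in (0.25, 0.75, 1.5):                      # shallow, mid and full-depth drift
    print(f"z = {z:>4} mm  ->  sigma = {diffusion_sigma_um(z):.1f} um")
```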
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1992-01-01
Software for an automated film-reading system that uses personal computers and digitized shadowgraphs is described. The software identifies pixels associated with fiducial-line and model images, and least-squares procedures are used to calculate the positions and orientations of the images. Automated position and orientation readings for sphere and cone models are compared to those obtained using a manual film reader. When facility calibration errors are removed from these readings, the accuracy of the automated readings is better than the pixel resolution, and it is equal to, or better than, the manual readings. The effects of film-reading and facility-calibration errors on calculated aerodynamic coefficients are discussed.
Extension of surface data by use of meteorological satellites
NASA Technical Reports Server (NTRS)
Giddings, L. E.
1976-01-01
Ways of using meteorological satellite data to extend surface data are summarized. Temperature models are prepared from infrared data from ITOS/NOAA, NIMBUS, SMS/GOES, or future LANDSAT satellites. Using temperatures for surface meteorological stations as anchors, an adjustment is made to temperature values for each pixel in the model. The result is an image with an estimated temperature for each pixel. This provides an economical way of producing detailed temperature information for data-sparse areas, such as are found in underdeveloped countries. Related uses of these satellite data are also given, including the use of computer prepared cloud-free composites to extend climatic zones, and their use in discrimination of reflectivity-thermal regime zones.
Data Filtering of Western Hemisphere GOES Wildfire ABBA Products
NASA Astrophysics Data System (ADS)
Theisen, M.; Prins, E.; Schmidt, C.; Reid, J. S.; Hunter, J.; Westphal, D.
2002-05-01
The Fire Locating and Modeling of Burning Emissions (FLAMBE') project was developed to model biomass burning emissions, transport, and radiative effects in real time. The model relies on data from the Geostationary Operational Environmental Satellites (GOES-8, GOES-10) generated by the Wildfire Automated Biomass Burning Algorithm (WF ABBA). In an attempt to develop the most accurate modeling system, the data set needs to be filtered to distinguish the true fire pixels from false alarms. False alarms occur due to reflection of solar radiation off of standing water, surface structure variances, and heat anomalies. The Reoccurring Fire Filtering algorithm (ReFF) was developed to address such false alarms by filtering data dependent on reoccurrence, location in relation to region and satellite, as well as heat intensity. WF ABBA data for the year 2000 during the peak of the burning season were analyzed using ReFF. The analysis resulted in a 45% decrease in total fire pixel occurrence in North America and only a 15% decrease in South America. The lower percentage decrease in South America is a result of fires burning for longer periods of time, less surface variance, as well as an increase in heat intensity of fires for that region. Also, fires are so prevalent in the region that multiple fires may coexist in the same 4-kilometer pixel.
Method of fabrication of display pixels driven by silicon thin film transistors
Carey, Paul G.; Smith, Patrick M.
1999-01-01
Display pixels driven by silicon thin film transistors are fabricated on plastic substrates for use in active matrix displays, such as flat panel displays. The process for forming the pixels involves a prior method for forming individual silicon thin film transistors on low-temperature plastic substrates. Low-temperature substrates are generally considered as being incapable of withstanding sustained processing temperatures greater than about 200.degree. C. The pixel formation process results in a complete pixel and active matrix pixel array. A pixel (or picture element) in an active matrix display consists of a silicon thin film transistor (TFT) and a large electrode, which may control a liquid crystal light valve, an emissive material (such as a light emitting diode or LED), or some other light emitting or attenuating material. The pixels can be connected in arrays wherein rows of pixels contain common gate electrodes and columns of pixels contain common drain electrodes. The source electrode of each pixel TFT is connected to its pixel electrode, and is electrically isolated from every other circuit element in the pixel array.
Selkowitz, David J.; Forster, Richard; Caldwell, Megan K.
2014-01-01
Remote sensing of snow-covered area (SCA) can be binary (indicating the presence/absence of snow cover at each pixel) or fractional (indicating the fraction of each pixel covered by snow). Fractional SCA mapping provides more information than binary SCA, but is more difficult to implement and may not be feasible with all types of remote sensing data. The utility of fractional SCA mapping relative to binary SCA mapping varies with the intended application as well as by spatial resolution, temporal resolution and period of interest, and climate. We quantified the frequency of occurrence of partially snow-covered (mixed) pixels at spatial resolutions between 1 m and 500 m over five dates at two study areas in the western U.S., using 0.5 m binary SCA maps derived from high spatial resolution imagery aggregated to fractional SCA at coarser spatial resolutions. In addition, we used in situ monitoring to estimate the frequency of partially snow-covered conditions for the period September 2013–August 2014 at ten 60 m grid cell footprints at two study areas with continental snow climates. Results from the image analysis indicate that at 40 m, slightly above the nominal spatial resolution of Landsat, mixed pixels accounted for 25%–93% of total pixels, while at 500 m, the nominal spatial resolution of MODIS bands used for snow cover mapping, mixed pixels accounted for 67%–100% of total pixels. Mixed pixels occurred more commonly at the continental snow climate site than at the maritime snow climate site. The in situ data indicate that some snow cover was present between 186 and 303 days, and partial snow cover conditions occurred on 10%–98% of days with snow cover. Four sites remained partially snow-free throughout most of the winter and spring, while six sites were entirely snow covered throughout most or all of the winter and spring. Within 60 m grid cells, the late spring/summer transition from snow-covered to snow-free conditions lasted 17–56 days and averaged 37 days. Our results suggest that mixed snow-covered/snow-free pixels are common at the spatial resolutions imaged by both the Landsat and MODIS sensors. This highlights the additional information available from fractional SCA products and suggests fractional SCA can provide a major advantage for hydrological and climatological monitoring and modeling, particularly when accurate representation of the spatial distribution of snow cover is critical.
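The aggregation used in the image analysis above amounts to block-averaging a fine binary SCA map to coarser cells and counting cells whose fraction falls strictly between 0 and 1. The sketch below illustrates that computation on a synthetic snow field; resolutions and the snow pattern are made up and the percentages printed are not the study's figures.

```python
# Hedged sketch: block-average a fine-resolution binary SCA map to fractional
# SCA and report the share of mixed (partially snow-covered) pixels.
import numpy as np

def aggregate_fractional_sca(binary_sca, factor):
    """Block-average a binary snow map by an integer factor to fractional SCA."""
    h, w = binary_sca.shape
    h, w = h - h % factor, w - w % factor
    blocks = binary_sca[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def mixed_pixel_share(frac_sca):
    mixed = (frac_sca > 0.0) & (frac_sca < 1.0)
    return mixed.mean()

# Synthetic 0.5 m binary snow map with a patchy snow line.
rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:2000, 0:2000]
binary = ((yy + 200 * np.sin(xx / 150.0) + rng.normal(0, 60, yy.shape)) > 1000).astype(float)

for factor, label in [(80, "40 m"), (1000, "500 m")]:   # 0.5 m cells times factor
    frac = aggregate_fractional_sca(binary, factor)
    print(f"{label}: mixed pixels = {100 * mixed_pixel_share(frac):.0f}%")
```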
NASA Astrophysics Data System (ADS)
Sozzi, B.; Olivieri, M.; Mariani, P.; Giunti, C.; Zatti, S.; Porta, A.
2014-05-01
Due to the rapid growth of cooled detector sensitivity in recent years, temperature differences of 10-20 mK between adjacent objects can theoretically be discerned in an image if the calibration algorithm (non-uniformity correction, NUC) is capable of taking into account and compensating for every spatial noise source. To predict how robust the NUC algorithm is in all working conditions, modeling the flux impinging on the detector becomes essential to control and improve the quality of a properly calibrated image in all scene/ambient conditions, including every source of spurious signal. In the literature, the available papers deal only with non-uniformity caused by pixel-to-pixel differences of detector parameters and by the difference between the reflection of the detector cold part and the housing at the operative temperature. These models do not explain the effects on the NUC results due to vignetting, dynamic sources outside and inside the FOV, and reflected contributions from hot spots inside the housing (for example, a thermal reference off the optical path). We propose a mathematical model in which: 1) the detector and the system (opto-mechanical configuration and scene) are considered separately and represented by two independent transfer functions; 2) on every pixel of the array, the amount of photonic signal coming from different spurious sources is considered to evaluate the effect on residual spatial noise due to dynamic operative conditions. This article also contains simulation results showing how this model can be used to predict the amount of spatial noise.
Impact of defective pixels in AMLCDs on the perception of medical images
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Sneyders, Yuri
2006-03-01
With LCD displays, each pixel has its own individual transistor that controls the transmittance of that pixel. Occasionally, these individual transistors will short or otherwise malfunction, resulting in a defective pixel that always shows the same brightness. With the ever increasing resolution of displays, the number of defective pixels per display increases accordingly. State-of-the-art processes are capable of producing displays with no more than one faulty transistor out of 3 million. A five Mega Pixel medical LCD panel contains 15 million individual sub-pixels (3 sub-pixels per pixel), each having an individual transistor. This means that a five Mega Pixel display on average will have 5 failing pixels. This paper investigates the visibility of defective pixels and analyzes the possible impact of defective pixels on the perception of medical images. JND simulations were done to study the effect of defective pixels on medical images. Our results indicate that defective LCD pixels can mask subtle features in medical images in an unexpectedly broad area around the defect and therefore may reduce the quality of diagnosis for specific high-demanding areas such as mammography. As a second contribution, an innovative solution is proposed. A specialized image processing algorithm can make defective pixels completely invisible and moreover can also recover the information of the defect so that the radiologist perceives the medical image correctly. This correction algorithm has been validated with both JND simulations and psychovisual tests.
An investigation of Digital Elevation Model (DEM) structure influence on flood modelling
NASA Astrophysics Data System (ADS)
Sahid; Nurrohman, A. W.; Hadi, M. P.
2018-04-01
Floods are among the natural calamities that cause huge losses and damage. Flood hazard zonation has been widely produced to face the impact of the disaster. DEMs, the primary data used to represent the earth's surface, have been developed from coarse to fine resolution. ASTER GDEM v.2, with 1 arc-second spatial resolution, can be used to derive DEM and TIN data as the basis for river geometry data. Maximum daily peak discharges were used to calculate flood peak discharge. Furthermore, steady flow analysis was used to produce flood inundation models based on four scenarios with return periods of 5, 10, 50, and 100 years. The model results were validated against a 2016 UAV flood map by means of a pixel-by-pixel operation, and the results show that the vertical variance between grid DEM and TIN data is about 0.3 m.
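A pixel-by-pixel validation of this kind can be reduced to binarizing both grids and comparing them cell by cell, as in the sketch below. The agreement and fit statistics shown are generic choices for illustration (not necessarily the metrics used in this study), and the arrays are synthetic stand-ins for the modeled and UAV-derived flood extents.

```python
# Hedged sketch of a pixel-by-pixel comparison between a modeled inundation
# grid and a reference flood map; metrics and data are illustrative.
import numpy as np

def flood_fit_stats(modeled, observed):
    """Overall cell agreement and intersection-over-union of the wet cells."""
    m, o = modeled.astype(bool), observed.astype(bool)
    agreement = (m == o).mean()
    fit = np.logical_and(m, o).sum() / np.logical_or(m, o).sum()
    return agreement, fit

rng = np.random.default_rng(3)
observed = rng.random((500, 500)) < 0.2
modeled = observed ^ (rng.random((500, 500)) < 0.05)      # observed map with 5% disagreement

agreement, fit = flood_fit_stats(modeled, observed)
print(f"agreement = {agreement:.2f}, flood-area fit = {fit:.2f}")
```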
Thermal wake/vessel detection technique
Roskovensky, John K. [Albuquerque, NM]; Nandy, Prabal [Albuquerque, NM]; Post, Brian N. [Albuquerque, NM]
2012-01-10
A computer-automated method for detecting a vessel in water based on an image of a portion of Earth includes generating a thermal anomaly mask. The thermal anomaly mask flags each pixel of the image initially deemed to be a wake pixel based on a comparison of a thermal value of each pixel against other thermal values of other pixels localized about each pixel. Contiguous pixels flagged by the thermal anomaly mask are grouped into pixel clusters. A shape of each of the pixel clusters is analyzed to determine whether each of the pixel clusters represents a possible vessel detection event. The possible vessel detection events are represented visually within the image.
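A thermal anomaly mask of the kind described above can be approximated by comparing each pixel's thermal value with statistics of its local neighborhood and then grouping contiguous flagged pixels into clusters for shape analysis. The window size, threshold and synthetic sea-surface field in the sketch below are assumptions for illustration, not the patented parameters.

```python
# Hedged sketch: flag pixels that deviate from their local thermal background
# and group contiguous flagged pixels into clusters.
import numpy as np
from scipy import ndimage as ndi

def thermal_anomaly_mask(thermal, window=31, k_sigma=3.0):
    """Flag pixels colder or warmer than their local background by k_sigma."""
    local_mean = ndi.uniform_filter(thermal, size=window)
    local_sq = ndi.uniform_filter(thermal ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))
    return np.abs(thermal - local_mean) > k_sigma * local_std

def cluster_anomalies(mask):
    """Group contiguous flagged pixels into labeled clusters for shape analysis."""
    labeled, n = ndi.label(mask)
    return labeled, n

rng = np.random.default_rng(4)
sea = 290.0 + 0.2 * rng.standard_normal((400, 400))       # sea-surface temperature field (K)
sea[200:203, 100:180] += 1.5                               # synthetic wake-like warm streak

mask = thermal_anomaly_mask(sea)
labeled, n = cluster_anomalies(mask)
print("clusters found:", n)
```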
Smart trigger logic for focal plane arrays
Levy, James E; Campbell, David V; Holmes, Michael L; Lovejoy, Robert; Wojciechowski, Kenneth; Kay, Randolph R; Cavanaugh, William S; Gurrieri, Thomas M
2014-03-25
An electronic device includes a memory configured to receive data representing light intensity values from pixels in a focal plane array and a processor that analyzes the received data to determine which light values correspond to triggered pixels, where the triggered pixels are those pixels that meet a predefined set of criteria, and determines, for each triggered pixel, a set of neighbor pixels for which light intensity values are to be stored. The electronic device also includes a buffer that temporarily stores light intensity values for at least one previously processed row of pixels, so that when a triggered pixel is identified in a current row, light intensity values for the neighbor pixels in the previously processed row and for the triggered pixel are persistently stored, as well as a data transmitter that transmits the persistently stored light intensity values for the triggered and neighbor pixels to a data receiver.
Sub-pixel image classification for forest types in East Texas
NASA Astrophysics Data System (ADS)
Westbrook, Joey
Sub-pixel classification is the extraction of information about the proportion of individual materials of interest within a pixel. Landcover classification at the sub-pixel scale provides more discrimination than traditional per-pixel multispectral classifiers for pixels where the material of interest is mixed with other materials. It allows for the un-mixing of pixels to show the proportion of each material of interest. The materials of interest for this study are pine, hardwood, mixed forest and non-forest. The goal of this project was to perform a sub-pixel classification, which allows a pixel to have multiple labels, and compare the result to a traditional supervised classification, which allows a pixel to have only one label. The satellite image used was a Landsat 5 Thematic Mapper (TM) scene of the Stephen F. Austin Experimental Forest in Nacogdoches County, Texas, and the four cover type classes are pine, hardwood, mixed forest and non-forest. Once classified, a multi-layer raster dataset was created that comprised four raster layers, where each layer showed the percentage of that cover type within the pixel area. Percentage cover type maps were then produced and the accuracy of each was assessed using a fuzzy error matrix for the sub-pixel classifications, and the results were compared to the supervised classification, for which a traditional error matrix was used. The overall accuracy of the sub-pixel classification using the aerial photo for both training and reference data was the highest (65% overall) of the three sub-pixel classifications. This is understandable because the analyst can visually observe the cover types actually on the ground for training and reference data, whereas using the FIA (Forest Inventory and Analysis) plot data, the analyst must assume that an entire pixel contains the exact percentage of a cover type found in a plot. An increase in accuracy was found after reclassifying each sub-pixel classification from nine classes with 10 percent intervals to five classes with 20 percent intervals. When compared to the supervised classification, which had a satisfactory overall accuracy of 90%, none of the sub-pixel classifications achieved the same level. However, since traditional per-pixel classifiers assign only one label to pixels throughout the landscape while sub-pixel classifications assign multiple labels to each pixel, the traditional 85% accuracy of acceptance for pixel-based classifications should not apply to sub-pixel classifications. More research is needed in order to define the level of accuracy that is deemed acceptable for sub-pixel classifications.
MODELING THE LINE-OF-SIGHT INTEGRATED EMISSION IN THE CORONA: IMPLICATIONS FOR CORONAL HEATING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viall, Nicholeen M.; Klimchuk, James A.
2013-07-10
One of the outstanding problems in all of space science is uncovering how the solar corona is heated to temperatures greater than 1 MK. Though studied for decades, one of the major difficulties in solving this problem has been unraveling the line-of-sight (LOS) effects in the observations. The corona is optically thin, so a single pixel measures counts from an indeterminate number (perhaps tens of thousands) of independently heated flux tubes, all along that pixel's LOS. In this paper we model the emission in individual pixels imaging the active region corona in the extreme ultraviolet. If LOS effects are not properly taken into account, erroneous conclusions regarding both coronal heating and coronal dynamics may be reached. We model the corona as an LOS integration of many thousands of completely independently heated flux tubes. We demonstrate that despite the superposition of randomly heated flux tubes, nanoflares leave distinct signatures in light curves observed with multi-wavelength and high time cadence data, such as those data taken with the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory. These signatures are readily detected with the time-lag analysis technique of Viall and Klimchuk in 2012. Steady coronal heating leaves a different and equally distinct signature that is also revealed by the technique.
Design of a High-resolution Optoelectronic Retinal Prosthesis
NASA Astrophysics Data System (ADS)
Palanker, Daniel
2005-03-01
It has been demonstrated that electrical stimulation of the retina can produce visual percepts in blind patients suffering from macular degeneration and retinitis pigmentosa. So far retinal implants have had just a few electrodes, whereas at least several thousand pixels would be required for any functional restoration of sight. We will discuss physical limitations on the number of stimulating electrodes and on delivery of information and power to the retinal implant. Using a model of extracellular stimulation we derive the threshold values of current and voltage as a function of electrode size and distance to the target cell. Electrolysis, tissue heating, and cross-talk between neighboring electrodes depend critically on separation between electrodes and cells, thus strongly limiting the pixel size and spacing. The minimal pixel density required for 20/80 visual acuity (2500 pixels/mm², pixel size 20 μm) cannot be achieved unless the target neurons are within 7 μm of the electrodes. At a separation of 50 μm, the density drops to 44 pixels/mm², and at 100 μm it is further reduced to 10 pixels/mm². We will present designs of subretinal implants that provide close proximity of electrodes to cells using migration of retinal cells to target areas. Two basic implant geometries will be described: perforated membranes and protruding electrode arrays. In addition, we will discuss delivery of information to the implant that allows for natural eye scanning of the scene, rather than scanning with a head-mounted camera. It operates similarly to "virtual reality" imaging devices where an image from a video camera is projected by a goggle-mounted collimated infrared LED-LCD display onto the retina, activating an array of powered photodiodes in the retinal implant. Optical delivery of visual information to the implant allows for flexible control of the image processing algorithms and stimulation parameters. In summary, we will describe solutions to some of the major problems facing the realization of a functional retinal implant: high pixel density, proximity of electrodes to target cells, natural eye scanning capability, and real-time image processing adjustable to retinal architecture.
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Fan, Jiahua; Dallas, William J.; Krupinski, Elizabeth A.; Johnson, Jeffrey
2009-08-01
This presentation describes work in progress that is the result of an NIH SBIR Phase 1 project that addresses the widespread concern for the large number of breast cancers and cancer victims [1,2]. The primary goal of the project is to increase the detection rate of microcalcifications as a result of the decrease of spatial noise of the LCDs used to display the mammograms [3,4]. Noise reduction is to be accomplished with the aid of a high performance CCD camera and subsequent application of local-mean equalization and error diffusion [5,6]. A second goal of the project is the actual detection of breast cancer. Contrary to the usual approach to mammography, where the mammograms typically have a pixel matrix of approximately 1900 x 2300 pixels, otherwise known as FFDM or Full-Field Digital Mammograms, we will only use sections of mammograms with a pixel matrix of 256 x 256 pixels. This is because at this time, reduction of spatial noise on an LCD can only be done on relatively small areas like 256 x 256 pixels. In addition, judging the efficacy for detection of breast cancer will be done using two methods: one is a conventional ROC study [7], the other is a vision model developed over several years starting at the Sarnoff Research Center and continuing at Siemens Corporate Research in Princeton, NJ [8].
Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci
NASA Astrophysics Data System (ADS)
Kosmale, Miriam; Popp, Thomas
2016-04-01
Ensemble techniques are widely used in the modelling community, combining different modelling results in order to reduce uncertainties. This approach can also be adapted to satellite measurements. Aerosol_cci is an ESA-funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as it makes sense, but remain essentially different. Datasets are compared with ground-based measurements and with each other. Within this project, three AATSR algorithms (the Swansea University aerosol retrieval, the ADV aerosol retrieval by FMI and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records. Each of these algorithms also provides uncertainty information at the pixel level. Within the presented work, an ensemble of the three AATSR algorithms is constructed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per gridbox. Validation against ground-based AERONET measurements still shows a good correlation for the ensemble compared to the single algorithms. Annual mean maps show the global aerosol distribution, based on a combination of the three aerosol algorithms. In addition, pixel-level uncertainties of each algorithm are used to weight the contributions, in order to reduce the uncertainty of the ensemble. Results of different versions of the ensembles for aerosol optical depth will be presented and discussed. The results are validated against ground-based AERONET measurements. Higher spatial coverage on a daily basis allows better results in annual mean maps. The benefit of using pixel-level uncertainties is analysed.
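One common way to weight contributions by their pixel-level uncertainties is an inverse-variance combination, sketched below. This is only an illustrative choice consistent with the description above, not necessarily the exact weighting used in aerosol_cci; the arrays are synthetic stand-ins for the three AATSR retrievals.

```python
# Hedged sketch: inverse-variance weighted ensemble of per-pixel AOD retrievals,
# tolerating missing (NaN) pixels so that coverage increases.
import numpy as np

def ensemble_aod(aods, sigmas):
    """Inverse-variance weighted mean and resulting uncertainty, ignoring NaNs."""
    aods = np.asarray(aods, dtype=float)
    weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = np.where(np.isnan(aods), 0.0, weights)
    wsum = weights.sum(axis=0)
    mean = np.sum(weights * np.nan_to_num(aods), axis=0) / wsum
    return mean, 1.0 / np.sqrt(wsum)

aod = np.array([[0.21, np.nan, 0.35],
                [0.19, 0.30, np.nan],
                [0.23, 0.28, 0.33]])              # three algorithms x three pixels
sig = np.array([[0.03, 0.05, 0.06],
                [0.04, 0.04, 0.05],
                [0.05, 0.06, 0.04]])

mean, unc = ensemble_aod(aod, sig)
print("ensemble AOD:", np.round(mean, 3), "uncertainty:", np.round(unc, 3))
```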
The resolved star formation history of M51a through successive Bayesian marginalization
NASA Astrophysics Data System (ADS)
Martínez-García, Eric E.; Bruzual, Gustavo; Magris C., Gladis; González-Lópezlira, Rosa A.
2018-02-01
We have obtained the time and space-resolved star formation history (SFH) of M51a (NGC 5194) by fitting Galaxy Evolution Explorer (GALEX), Sloan Digital Sky Survey and near-infrared pixel-by-pixel photometry to a comprehensive library of stellar population synthesis models drawn from the Synthetic Spectral Atlas of Galaxies (SSAG). We fit for each space-resolved element (pixel) an independent model where the SFH is averaged in 137 age bins, each one 100 Myr wide. We used the Bayesian Successive Priors (BSP) algorithm to mitigate the bias in the present-day spatial mass distribution. We test BSP with different prior probability distribution functions (PDFs); this exercise suggests that the best prior PDF is the one concordant with the spatial distribution of the stellar mass as inferred from the near-infrared images. We also demonstrate that varying the implicit prior PDF of the SFH in SSAG does not affect the results. By summing the contributions to the global star formation rate of each pixel, at each age bin, we have assembled the resolved SFH of the whole galaxy. According to these results, the star formation rate of M51a was exponentially increasing for the first 10 Gyr after the big bang, and then turned into an exponentially decreasing function until the present day. Superimposed, we find a main burst of star formation at t ≈ 11.9 Gyr after the big bang.
NASA Astrophysics Data System (ADS)
Lin, Yingzhi; Deng, Xiangzheng; Li, Xing; Ma, Enjun
2014-12-01
Spatially explicit simulation of land use change is the basis for estimating the effects of land use and cover change on energy fluxes, ecology and the environment. At the pixel level, logistic regression is one of the most common approaches used in spatially explicit land use allocation models to determine the relationship between land use and its causal factors in driving land use change, and thereby to evaluate land use suitability. However, these models have a drawback in that they do not determine/allocate land use based on the direct relationship between land use change and its driving factors. Consequently, a multinomial logistic regression method was introduced to address this flaw, and thereby judge the suitability of a type of land use in any given pixel in a case study area of Jiangxi Province, China. A comparison of the two regression methods indicated that the proportion of correctly allocated pixels using multinomial logistic regression was 92.98%, which was 8.47% higher than that obtained using logistic regression. Paired t-test results also showed that pixels were more clearly distinguished by multinomial logistic regression than by logistic regression. In conclusion, multinomial logistic regression is a more efficient and accurate method for the spatial allocation of land use changes. The application of this method in future land use change studies may improve the accuracy of predicting the effects of land use and cover change on energy fluxes, ecology, and environment.
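The multinomial formulation can be sketched as a single model that predicts, for each pixel, the probability of every land use type from its driving factors, and then allocates the most probable type. The features, class count and synthetic data below are illustrative assumptions, not the Jiangxi case-study drivers; scikit-learn's default lbfgs solver fits a multinomial (softmax) model for multiclass targets.

```python
# Hedged sketch: per-pixel land use suitability and allocation with a
# multinomial logistic regression on synthetic driving factors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 20_000
X = rng.normal(size=(n, 5))                    # e.g., elevation, slope, distance to roads...
true_w = rng.normal(size=(5, 4))
y = (X @ true_w + rng.gumbel(size=(n, 4))).argmax(axis=1)   # 4 land use types

# With the default lbfgs solver, multiclass targets are fit with a multinomial loss.
model = LogisticRegression(max_iter=500)
model.fit(X, y)

suitability = model.predict_proba(X)           # per-pixel suitability of each land use type
allocated = suitability.argmax(axis=1)         # allocate the most suitable type per pixel
print("proportion of correctly allocated pixels:", (allocated == y).mean())
```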
Investigating error structure of shuttle radar topography mission elevation data product
NASA Astrophysics Data System (ADS)
Becek, Kazimierz
2008-08-01
An attempt was made to experimentally assess the instrumental component of error of the C-band Shuttle Radar Topography Mission (SRTM) elevation data. This was achieved by comparing elevation data of 302 runways from airports all over the world with the SRTM data product. It was found that the rms of the instrumental error is about ±1.55 m. Modeling of the remaining SRTM error sources, including terrain relief and pixel size, shows that downsampling from 30 m to 90 m (1 to 3 arc-sec pixels) worsened SRTM vertical accuracy threefold. It is suspected that the proximity of large metallic objects is a source of large SRTM errors. The achieved error estimates allow a pixel-based accuracy assessment of the SRTM elevation data product to be constructed. Vegetation-induced errors were not considered in this work.
NASA Astrophysics Data System (ADS)
Janesick, James; Gunawan, Ferry; Dosluoglu, Taner; Tower, John; McCaffrey, Niel
2002-08-01
High performance CMOS pixels are introduced and their development is discussed. 3T (3-transistor) photodiode, 5T pinned diode, 6T photogate and 6T photogate back illuminated CMOS pixels are examined in detail, and the latter three are considered as scientific pixels. The advantages and disadvantages of these options for scientific CMOS pixels are examined. Pixel characterization, which is used to gain a better understanding of CMOS pixels themselves, is also discussed.
Microlens performance limits in sub-2 μm pixel CMOS image sensors.
Huo, Yijie; Fesenmaier, Christian C; Catrysse, Peter B
2010-03-15
CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 μm, however, diffraction affects the optical performance of the pixel and its microlens, in particular. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 μm. This severely degrades performance for on- and off-axis pixels in red, green and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 μm pixel node.
Hysteresis of Soil Point Water Retention Functions Determined by Neutron Radiography
NASA Astrophysics Data System (ADS)
Perfect, E.; Kang, M.; Bilheux, H.; Willis, K. J.; Horita, J.; Warren, J.; Cheng, C.
2010-12-01
Soil point water retention functions are needed for modeling flow and transport in partially-saturated porous media. Such functions are usually determined by inverse modeling of average water retention data measured experimentally on columns of finite length. However, the resulting functions are subject to the appropriateness of the chosen model, as well as the initial and boundary condition assumptions employed. Soil point water retention functions are rarely measured directly and when they are the focus is invariably on the main drying branch. Previous direct measurement methods include time domain reflectometry and gamma beam attenuation. Here we report direct measurements of the main wetting and drying branches of the point water retention function using neutron radiography. The measurements were performed on a coarse sand (Flint #13) packed into 2.6 cm diameter x 4 cm long aluminum cylinders at the NIST BT-2 (50 μm resolution) and ORNL-HFIR CG1D (70 μm resolution) imaging beamlines. The sand columns were saturated with water and then drained and rewetted under quasi-equilibrium conditions using a hanging water column setup. 2048 x 2048 pixel images of the transmitted flux of neutrons through the column were acquired at each imposed suction (~10-15 suction values per experiment). Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert’s law in conjunction with beam hardening and geometric corrections. The pixel rows were averaged and combined with information on the known distribution of suctions within the column to give 2048 point drying and wetting functions for each experiment. The point functions exhibited pronounced hysteresis and varied with column height, possibly due to differences in porosity caused by the packing procedure employed. Predicted point functions, extracted from the hanging water column volumetric data using the TrueCell inverse modeling procedure, showed very good agreement with the range of point functions measured within the column using neutron radiography. Extension of these experiments to 3-dimensions using neutron tomography is planned.
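A simplified per-pixel application of the Beer-Lambert law of the kind described above, omitting the beam-hardening and geometric corrections mentioned in the abstract; the attenuation coefficient and column thickness below are illustrative values, not the calibrated parameters of the experiment.

    # Simplified sketch: per-pixel volumetric water content from neutron transmission
    # using the Beer-Lambert law (no beam-hardening or geometric corrections).
    import numpy as np

    def water_content(I_wet, I_dry, mu_w=3.5, thickness_cm=2.6):
        """theta = -ln(I_wet / I_dry) / (mu_w * d), evaluated pixel by pixel.

        I_wet, I_dry : 2-D arrays of transmitted neutron intensity (wet and dry column)
        mu_w         : effective attenuation coefficient of water (1/cm), illustrative value
        thickness_cm : neutron path length through the column (cm)
        """
        ratio = np.clip(I_wet / I_dry, 1e-6, 1.0)
        return -np.log(ratio) / (mu_w * thickness_cm)

    # Toy example on a small synthetic image pair
    rng = np.random.default_rng(1)
    I_dry = rng.uniform(0.8, 1.0, size=(4, 4))
    I_wet = I_dry * np.exp(-3.5 * 2.6 * 0.1)   # corresponds to theta ~ 0.1 everywhere
    theta = water_content(I_wet, I_dry)
    print(theta.round(3))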
The Floe Size Distribution in the Marginal Ice Zone of the Beaufort and Chukchi Seas
NASA Astrophysics Data System (ADS)
Schweiger, A. J. B.; Stern, H. L., III; Stark, M.; Zhang, J.; Steele, M.; Hwang, P. B.
2014-12-01
Several key processes in the Marginal Ice Zone (MIZ) of the Arctic Ocean are related to the size of the ice floes, whose diameters range from meters to tens of kilometers. The floe size distribution (FSD) influences the mechanical properties of the ice cover, air-sea momentum and heat transfer, lateral melting, and light penetration. However, no existing sea-ice/ocean models currently simulate the FSD in the MIZ. Model development depends on observations of the FSD for parameterization, calibration, and validation. To support the development and implementation of the FSD in the Marginal Ice Zone Modeling and Assimilation System (MIZMAS), we have analyzed the FSD in the Beaufort and Chukchi seas using multiple sources of satellite imagery: NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra and Aqua satellites (250 m pixel size), the USGS Landsat 8 satellite (80 m pixel size), the Canadian Space Agency's synthetic aperture radar (SAR) on RADARSAT (50 meter pixel size), and declassified National Technical Means imagery from the Global Fiducials Library (GFL) of the USGS (1 m pixel size). The procedure for identifying ice floes in the imagery begins with manually delineating cloud-free regions (if necessary). A threshold is then chosen to separate ice from water. Morphological operations and other semi-automated techniques are used to identify individual floes, whose properties are then easily calculated. We use the mean caliper diameter as the measure of floe size. The FSD is adequately described by a power-law in which the exponent characterizes the relative number of large and small floes. Changes in the exponent over time and space reflect changes in physical processes in the MIZ, such as sea-ice deformation, fracturing, and melting. We report results of FSD analysis for the spring and summer of 2013 and 2014, and show how the FSD will be incorporated into the MIZMAS model.
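A minimal sketch of fitting the power-law exponent of a floe size distribution from measured mean caliper diameters; it uses a simple maximum-likelihood estimator for a continuous power law above a lower cutoff and is only a stand-in for the image-analysis pipeline described above.

    # Hedged sketch: maximum-likelihood estimate of a power-law exponent for floe sizes.
    import numpy as np

    def powerlaw_exponent(diameters, d_min):
        """MLE for a continuous power law p(d) ~ d^-alpha for d >= d_min."""
        d = np.asarray(diameters, dtype=float)
        d = d[d >= d_min]
        return 1.0 + d.size / np.sum(np.log(d / d_min))

    # Synthetic floe diameters (metres) drawn from a power law with alpha = 2.5
    rng = np.random.default_rng(2)
    alpha_true, d_min = 2.5, 30.0
    u = rng.uniform(size=10000)
    diameters = d_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

    print("estimated exponent:", round(powerlaw_exponent(diameters, d_min), 2))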
NASA Technical Reports Server (NTRS)
Ackerman, Steven A.; Hemler, Richard S.; Hofman, Robert J. Patrick; Pincus, Robert; Platnick, Steven
2011-01-01
The properties of clouds that may be observed by satellite instruments, such as optical depth and cloud top pressure, are only loosely related to the way clouds are represented in models of the atmosphere. One way to bridge this gap is through "instrument simulators," diagnostic tools that map the model representation to synthetic observations so that differences between simulator output and observations can be interpreted unambiguously as model error. But simulators may themselves be restricted by limited information available from the host model or by internal assumptions. This paper considers the extent to which instrument simulators are able to capture essential differences between MODIS and ISCCP, two similar but independent estimates of cloud properties. The authors review the measurements and algorithms underlying these two cloud climatologies, introduce a MODIS simulator, and detail data sets developed for comparison with global models using ISCCP and MODIS simulators. In nature MODIS observes less mid-level cloudiness than ISCCP, consistent with the different methods used to determine cloud top pressure; aspects of this difference are reproduced by the simulators running in a climate model. But stark differences between MODIS and ISCCP observations of total cloudiness and the distribution of cloud optical thickness can be traced to different approaches to marginal pixels, which MODIS excludes and ISCCP treats as homogeneous. These pixels, which likely contain broken clouds, cover about 15% of the planet and contain almost all of the optically thinnest clouds observed by either instrument. Instrument simulators cannot reproduce these differences because the host model does not consider unresolved spatial scales and so cannot produce broken pixels. Nonetheless, MODIS and ISCCP observations are consistent for all but the optically thinnest clouds, and models can be robustly evaluated using instrument simulators by excluding ambiguous observations.
Neighborhood comparison operator
NASA Technical Reports Server (NTRS)
Gennery, Donald B. (Inventor)
1987-01-01
Digital values in a moving window are compared by an operator having nine comparators (18) connected to line buffers (16) for receiving a succession of central pixels together with eight neighborhood pixels. A single bit of program control determines whether the neighborhood pixels are to be compared with the central pixel or a threshold value. The central pixel is always compared with the threshold. The comparator output, plus 2 bits indicating odd-even pixel/line information about the central pixel, addresses a lookup table (20) to provide 14 bits of information, including 2 bits which control a selector (22) to pass either the central pixel value, the other 12 bits of table information, or the bit-wise logic OR of all neighboring pixels.
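The hardware operator above can be mimicked in software; the sketch below compares each pixel with its eight neighbours (or with a threshold, depending on a mode flag) over a moving 3×3 window. The bit packing is purely illustrative and the patented lookup-table and selector logic are not reproduced.

    # Illustrative software analogue of a 3x3 neighbourhood comparison operator.
    import numpy as np

    def neighborhood_compare(img, threshold, compare_with_center=True):
        """Return an 8-bit mask of neighbour comparisons per pixel, plus a flag for the
        central-pixel-vs-threshold comparison. Edge pixels wrap around and should be ignored."""
        img = np.asarray(img, dtype=float)
        masks = np.zeros(img.shape, dtype=np.uint16)
        center_ge_thresh = img >= threshold
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        for bit, (dy, dx) in enumerate(offsets):
            neigh = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            ref = img if compare_with_center else threshold
            masks |= ((neigh >= ref).astype(np.uint16) << bit)
        return masks, center_ge_thresh

    img = np.arange(25).reshape(5, 5)
    masks, center_flags = neighborhood_compare(img, threshold=12)
    print(masks[2, 2], center_flags[2, 2])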
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.
1990-01-01
An analytical framework is provided for examining the physically based behavior of the normalized difference vegetation index (NDVI) in terms of the variability in bulk subpixel landscape components and with respect to variations in pixel scales, within the context of the stochastic-geometric canopy reflectance model. Analysis focuses on regional scale variability in horizontal plant density and soil background reflectance distribution. Modeling is generalized to different plant geometries and solar angles through the use of the nondimensional solar-geometric similarity parameter. Results demonstrate that, for Poisson-distributed plants and for one deterministic distribution, NDVI increases with increasing subpixel fractional canopy amount, decreasing soil background reflectance, and increasing shadows, at least within the limitations of the geometric reflectance model. The NDVI of a pecan orchard and a juniper landscape is presented and discussed.
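For reference, a per-pixel computation of the usual index NDVI = (NIR - Red) / (NIR + Red); the reflectance arrays below are synthetic placeholders, and the stochastic-geometric canopy model itself is not reproduced here.

    # Per-pixel NDVI from red and near-infrared reflectance (synthetic example).
    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + eps)

    red = np.array([[0.05, 0.10], [0.20, 0.30]])   # brighter soil background raises red
    nir = np.array([[0.40, 0.35], [0.25, 0.32]])   # denser canopy raises near-infrared
    print(ndvi(nir, red).round(2))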
Selecting Pixels for Kepler Downlink
NASA Technical Reports Server (NTRS)
Bryson, Stephen T.; Jenkins, Jon M.; Klaus, Todd C.; Cote, Miles T.; Quintana, Elisa V.; Hall, Jennifer R.; Ibrahim, Khadeejah; Chandrasekaran, Hema; Caldwell, Douglas A.; Van Cleve, Jeffrey E.;
2010-01-01
The Kepler mission monitors > 100,000 stellar targets using 42 2200×1024 pixel CCDs. Bandwidth constraints prevent the downlink of all 96 million pixels per 30-minute cadence, so the Kepler spacecraft downlinks a specified collection of pixels for each target. These pixels are selected by considering the object brightness, background and the signal-to-noise of each pixel, and are optimized to maximize the signal-to-noise ratio of the target. This paper describes pixel selection, creation of spacecraft apertures that efficiently capture selected pixels, and aperture assignment to a target. Diagnostic apertures, short-cadence targets and custom specified shapes are discussed.
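The selection idea can be illustrated with a simplified greedy optimal-aperture construction: pixels are added in order of decreasing expected target flux and kept while the combined signal-to-noise ratio of the aperture still improves. This is only a schematic of the concept, not the flight pipeline's algorithm, and the noise terms are placeholders.

    # Schematic greedy construction of a target aperture that maximizes SNR.
    import numpy as np

    def select_aperture(flux, background_var, read_noise_var):
        """flux: per-pixel expected target flux (e-/cadence); returns (boolean mask, SNR)."""
        order = np.argsort(flux.ravel())[::-1]          # brightest pixels first
        signal, noise_var = 0.0, 0.0
        best_snr, chosen = -np.inf, []
        for idx in order:
            f = flux.ravel()[idx]
            trial_signal = signal + f
            trial_var = noise_var + f + background_var + read_noise_var
            trial_snr = trial_signal / np.sqrt(trial_var)
            if trial_snr <= best_snr:
                break                                    # adding more pixels no longer helps
            signal, noise_var, best_snr = trial_signal, trial_var, trial_snr
            chosen.append(idx)
        mask = np.zeros(flux.size, dtype=bool)
        mask[chosen] = True
        return mask.reshape(flux.shape), best_snr

    flux = np.array([[5, 50, 5], [40, 900, 35], [4, 45, 6]], dtype=float)
    mask, snr = select_aperture(flux, background_var=25.0, read_noise_var=100.0)
    print(mask, round(snr, 1))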
Image Edge Extraction via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A. (Inventor); Klinko, Steve (Inventor)
2008-01-01
A computer-based technique for detecting edges in gray level digital images employs fuzzy reasoning to analyze whether each pixel in an image is likely on an edge. The image is analyzed on a pixel-by-pixel basis by analyzing gradient levels of pixels in a square window surrounding the pixel being analyzed. An edge path passing through the pixel having the greatest intensity gradient is used as input to a fuzzy membership function, which employs fuzzy singletons and inference rules to assign a new gray level value to the pixel that is related to the pixel's edginess degree.
NASA Astrophysics Data System (ADS)
Fu, Y.; Hu-Guo, C.; Dorokhov, A.; Pham, H.; Hu, Y.
2013-07-01
In order to exploit the ability to integrate a charge collecting electrode with analog and digital processing circuitry down to the pixel level, a new type of CMOS pixel sensor with full CMOS capability is presented in this paper. The pixel array is read out based on a column-parallel read-out architecture, where each pixel incorporates a diode, a preamplifier with a double sampling circuitry and a discriminator to completely eliminate analog read-out bottlenecks. The sensor featuring a pixel array of 8 rows and 32 columns with a pixel pitch of 80 μm×16 μm was fabricated in a 0.18 μm CMOS process. The behavior of each pixel-level discriminator isolated from the diode and the preamplifier was studied. The experimental results indicate that all in-pixel discriminators are fully operational and can provide significant improvements in the read-out speed and the power consumption of CMOS pixel sensors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mentuch, Erin; Abraham, Roberto G.; Zibetti, Stefano
2010-12-20
We have measured the near-infrared colors and the fluxes of individual pixels in 68 galaxies common to the Spitzer Infrared Nearby Galaxies Survey and the Large Galaxy Atlas Survey. Pixels from each galaxy are grouped into regions of increasingly red near-infrared colors. As expected, the majority of pixels are shown to have relatively constant NIR flux ratios (log10 I3.6/I1.25 = -0.30 ± 0.07 and log10 I4.5/I3.6 = -0.19 ± 0.02), representing the blackbody continuum emission of main sequence stars. However, pixels with red NIR colors correspond to pixels with higher Hα emission and dust extinction. We show that the NIR colors are correlated to both quantities, with the strongest correlation to the intrinsic Hα emission. In addition, in regions of high star formation, the average intensity of pixels in red-excess regions (at 1.25 μm, 3.6 μm, 4.5 μm, 5.6 μm, 8.0 μm and 24 μm) scales linearly with the intrinsic intensity of Hα emission, and thus with the star formation rate (SFR) within the pixel. This suggests that most NIR-excess regions are not red because their light is being depleted by absorption. Instead, they are red because additional infrared light is being contributed by a process linked to star formation. This is surprising because the shorter wavelength bands in our study (1.25 μm-5.6 μm) do not probe emission from cold (10-20 K) and warm (50-100 K) dust associated with star formation in molecular clouds. However, emission from hot dust (700-1000 K) and/or polycyclic aromatic hydrocarbon (PAH) molecules can explain the additional emission seen at the shorter wavelengths in our study. The contribution from hot dust and/or PAH emission at 2 μm-5 μm and PAH emission at 5.6 μm and 8.0 μm scales linearly with warm dust emission at 24 μm and the intrinsic Hα emission. Since both are tied to the SFR, our analysis shows that the NIR excess continuum emission and PAH emission at ~1-8 μm can be added to spectral energy distribution models in a very straightforward way, by simply adding an additional component to the models that scales linearly with SFR.
Neighborhood comparison operator
NASA Technical Reports Server (NTRS)
Gennery, D. B. (Inventor)
1985-01-01
Digital values in a moving window are compared by an operator having nine comparators connected to line buffers for receiving a succession of central pixels together with eight neighborhood pixels. A single bit of program control determines whether the neighborhood pixels are to be compared with the central pixel or a threshold value. The central pixel is always compared with the threshold. The comparator output plus 2 bits indicating odd-even pixel/line information about the central pixel addresses a lookup table to provide 14 bits of information, including 2 bits which control a selector to pass either the central pixel value, the other 12 bits of table information, or the bit-wise logical OR of all nine pixels through a circuit that implements a very wide OR gate.
Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates
NASA Astrophysics Data System (ADS)
Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.
2010-04-01
Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwelling times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time towards an object is extended beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser. A cyclic, fast-Fourier supported analysis algorithm is used to search for the pattern within return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented. Numerical simulations are performed to investigate the relative merits of the technique.
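The pattern search described above can be sketched as a circular cross-correlation between the transmitted pseudo-random bitstream and the histogram of return photon events, computed via the FFT; the pattern length, rates, and delay below are toy values, not the 2 GHz system's parameters.

    # Toy sketch: recover an unambiguous delay by circularly cross-correlating the
    # transmitted pseudo-random bit pattern with the return-photon histogram (FFT-based).
    import numpy as np

    rng = np.random.default_rng(3)
    n_bins = 4096                                # pattern length in clock bins
    pattern = rng.integers(0, 2, size=n_bins)    # pseudo-random illumination bitstream

    true_delay = 1234                            # round-trip delay in bins (unknown in practice)
    signal_rate, dark_rate = 0.05, 0.01
    returns = np.roll(pattern, true_delay) * signal_rate + dark_rate
    histogram = rng.poisson(returns * 200)       # accumulated photon counts per bin

    # Circular cross-correlation via FFT; the peak position estimates the delay
    xcorr = np.fft.ifft(np.fft.fft(histogram) * np.conj(np.fft.fft(pattern - pattern.mean()))).real
    print("estimated delay:", int(np.argmax(xcorr)), "true delay:", true_delay)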
Shapiro, Stephen L.; Mani, Sudhindra; Atlas, Eugene L.; Cords, Dieter H. W.; Holbrook, Britt
1997-01-01
A data acquisition circuit for a particle detection system that allows for time tagging of particles detected by the system. The particle detection system screens out background noise and discriminates between hits from scattered and unscattered particles. The detection system can also be adapted to detect a wide variety of particle types. The detection system utilizes a particle detection pixel array, each pixel containing a back-biased PIN diode, and a data acquisition pixel array. Each pixel in the particle detection pixel array is in electrical contact with a pixel in the data acquisition pixel array. In response to a particle hit, the affected PIN diodes generate a current, which is detected by the corresponding data acquisition pixels. This current is integrated to produce a voltage across a capacitor, the voltage being related to the amount of energy deposited in the pixel by the particle. The current is also used to trigger a read of the pixel hit by the particle.
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph
2006-01-01
PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.
Overhead longwave infrared hyperspectral material identification using radiometric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelinski, M. E.
Material detection algorithms used in hyperspectral data processing are computationally efficient but can produce relatively high numbers of false positives. Material identification performed as a secondary processing step on detected pixels can help separate true and false positives. This paper presents a material identification processing chain for longwave infrared hyperspectral data of solid materials collected from airborne platforms. The algorithms utilize unwhitened radiance data and an iterative algorithm that determines the temperature, humidity, and ozone of the atmospheric profile. Pixel unmixing is done using constrained linear regression and Bayesian Information Criteria for model selection. The resulting product includes an optimal atmospheric profile and full radiance material model that includes material temperature, abundance values, and several fit statistics. A logistic regression method utilizing all model parameters to improve identification is also presented. This paper details the processing chain and provides justification for the algorithms used. Several examples are provided using modeled data at different noise levels.
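One way to picture the unmixing-with-model-selection step is non-negative least squares over candidate endmember subsets, scored with the Bayesian Information Criterion. The spectra below are random placeholders, and the exact constraint set and statistics of the actual processing chain are not reproduced.

    # Hedged sketch: pick the endmember subset that best explains a pixel radiance
    # spectrum under a non-negativity constraint, using BIC for model selection.
    import itertools
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)
    n_bands, library = 64, {}
    for name in ["quartz", "paint", "asphalt", "vegetation"]:
        library[name] = np.abs(rng.normal(size=n_bands))     # placeholder spectra

    truth = 0.7 * library["paint"] + 0.3 * library["asphalt"]
    pixel = truth + rng.normal(scale=0.02, size=n_bands)      # noisy observed pixel

    best = None
    for k in (1, 2):                                           # try 1- and 2-material models
        for combo in itertools.combinations(library, k):
            A = np.column_stack([library[m] for m in combo])
            abundances, rnorm = nnls(A, pixel)                 # constrained linear regression
            rss = rnorm ** 2
            bic = n_bands * np.log(rss / n_bands) + k * np.log(n_bands)
            if best is None or bic < best[0]:
                best = (bic, combo, abundances)

    print("selected materials:", best[1], "abundances:", np.round(best[2], 2))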
NASA Astrophysics Data System (ADS)
Gong, Rui; Wang, Qing; Shao, Xiaopeng; Zhou, Conghao
2016-12-01
This study aims to expand the applications of color appearance models to representing the perceptual attributes of digital images, which provides more accurate methods for predicting image brightness and image colorfulness. Two typical models, i.e., the CIELAB model and the CIECAM02, were involved in developing algorithms to predict brightness and colorfulness for various images, in which three methods were designed to handle pixels of different color contents. Moreover, massive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions to analyze the characteristics of visual perception on these two attributes and to test the prediction accuracy of each algorithm. Afterward, detailed analyses revealed that image brightness and image colorfulness were predicted well by calculating the CIECAM02 parameters of lightness and chroma; thus, the suitable methods for dealing with different color pixels were determined for image brightness and image colorfulness, respectively. This study provides an example of extending color appearance models to describe image perception.
Flood Identification from Satellite Images Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Chang, L.; Kao, I.; Shih, K.
2011-12-01
Typhoons and storms hit Taiwan several times every year and cause serious flood disasters. Because the rivers are short and steep, their flows are relatively fast, with floods lasting only a few hours and usually less than one day. Flood identification can provide flood disaster and extent information to disaster assistance and recovery centers. Due to weather conditions, aircraft and traditional multispectral satellites are not suitable; hence, the most appropriate way to investigate flooding extent is to use Synthetic Aperture Radar (SAR) satellite imagery. In this study, a back-propagation neural network (BPNN) model and a multivariate linear regression (MLR) model are built to identify the flooding extent from SAR satellite images. The input variables of the BPNN model are the pixel's Radar Cross Section (RCS) value and the mean, standard deviation, minimum and maximum of RCS values among its adjacent 3×3 pixels. The MLR model uses two images from the non-flooding and flooding periods, and the inputs are the difference between the RCS values of the two images and the variances among the adjacent 3×3 pixels. The results show that the BPNN model performs much better than the MLR model. The correct percentages are more than 80% and 73% for training and testing data, respectively. Many misidentified areas are very fragmented and unrelated. In order to reinforce the correct percentage, morphological image analysis is used to modify the outputs of these identification models. Through morphological operations, most of the small, fragmented and misidentified areas can be correctly assigned to flooding or non-flooding areas. The final results show that the flood identification of satellite images is improved considerably and the correct percentages increase to more than 90%.
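The morphological post-processing step can be illustrated with a binary opening and closing that removes small, fragmented misclassified patches from a flood/non-flood map; the structuring element size and noise level here are arbitrary, not the values used in the study.

    # Illustrative cleanup of a fragmented binary flood map with morphological operations.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(5)
    flood = np.zeros((60, 60), dtype=bool)
    flood[20:45, 15:50] = True                       # a contiguous flooded area
    noise = rng.random(flood.shape) < 0.03
    flood_noisy = flood ^ noise                      # speckle-like misclassifications

    structure = np.ones((3, 3), dtype=bool)
    cleaned = ndimage.binary_opening(flood_noisy, structure=structure)   # drop tiny islands
    cleaned = ndimage.binary_closing(cleaned, structure=structure)       # fill tiny holes

    print("pixels changed by cleanup:", int((cleaned ^ flood_noisy).sum()))
    print("agreement with reference :", float((cleaned == flood).mean()))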
NASA Astrophysics Data System (ADS)
Preusker, Frank; Stark, Alexander; Oberst, Jürgen; Matz, Klaus-Dieter; Gwinner, Klaus; Roatsch, Thomas; Watters, Thomas R.
2017-08-01
We selected approximately 10,500 narrow-angle camera (NAC) and wide-angle camera (WAC) images of Mercury acquired from orbit by MESSENGER's Mercury Dual Imaging System (MDIS) with an average resolution of 150 m/pixel to compute a digital terrain model (DTM) for the H6 (Kuiper) quadrangle, which extends from 22.5°S to 22.5°N and from 288.0°E to 360.0°E. From the images, we identified about 21,100 stereo image combinations consisting of at least three images each. We applied sparse multi-image matching to derive approximately 250,000 tie-points representing 50,000 ground points. We used the tie-points to carry out a photogrammetric block adjustment, which improves the image pointing and the accuracy of the ground point positions in three dimensions from about 850 m to approximately 55 m. We then applied high-density (pixel-by-pixel) multi-image matching to derive about 45 billion tie-points. Benefitting from improved image pointing data achieved through photogrammetric block adjustment, we computed about 6.3 billion surface points. By interpolation, we generated a DTM with a lateral spacing of 221.7 m/pixel (192 pixels per degree) and a vertical accuracy of about 30 m. The comparison of the DTM with Mercury Laser Altimeter (MLA) profiles obtained over four years of MESSENGER orbital operations reveals that the DTM is geometrically very rigid. It may be used as a reference to identify MLA outliers (e.g., when MLA operated at its ranging limit) or to map offsets of laser altimeter tracks, presumably caused by residual spacecraft orbit and attitude errors. After the relevant outlier removals and corrections, MLA profiles show excellent agreement with topographic profiles from H6, with a root mean square height difference of only 88 m.
Kirk, R.L.; Howington-Kraus, E.; Rosiek, M.R.; Anderson, J.A.; Archinal, B.A.; Becker, K.J.; Cook, D.A.; Galuszka, D.M.; Geissler, P.E.; Hare, T.M.; Holmberg, I.M.; Keszthelyi, L.P.; Redding, B.L.; Delamere, W.A.; Gallagher, D.; Chapel, J.D.; Eliason, E.M.; King, R.; McEwen, A.S.
2009-01-01
The objectives of this paper are twofold: first, to report our estimates of the meter-to-decameter-scale topography and slopes of candidate landing sites for the Phoenix mission, based on analysis of Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) images with a typical pixel scale of 3 m and Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) images at 0.3 m pixel-1 and, second, to document in detail the geometric calibration, software, and procedures on which the photogrammetric analysis of HiRISE data is based. A combination of optical design modeling, laboratory observations, star images, and Mars images form the basis for software in the U.S. Geological Survey Integrated Software for Imagers and Spectrometers (ISIS) 3 system that corrects the images for a variety of distortions with single-pixel or subpixel accuracy. Corrected images are analyzed in the commercial photogrammetric software SOCET SET (BAE Systems), yielding digital topographic models (DTMs) with a grid spacing of 1 m (3-4 pixels) that require minimal interactive editing. Photoclinometry yields DTMs with single-pixel grid spacing. Slopes from MOC and HiRISE are comparable throughout the latitude zone of interest and compare favorably with those where past missions have landed successfully; only the Mars Exploration Rover (MER) B site in Meridiani Planum is smoother. MOC results at multiple locations have root-mean-square (RMS) bidirectional slopes of 0.8-4.5° at baselines of 3-10 m. HiRISE stereopairs (one per final candidate site and one in the former site) yield 1.8-2.8° slopes at 1-m baseline. Slopes at 1 m from photoclinometry are also in the range 2-3° after correction for image blur. Slopes exceeding the 16° Phoenix safety limit are extremely rare. Copyright 2008 by the American Geophysical Union.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kay, Randolph R; Campbell, David V; Shinde, Subhash L
A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.
A new 9T global shutter pixel with CDS technique
NASA Astrophysics Data System (ADS)
Liu, Yang; Ma, Cheng; Zhou, Quan; Wang, Xinyang
2015-04-01
Benefiting from freedom from motion blur, the global shutter pixel is very widely used in the design of CMOS image sensors for high speed applications such as motion vision, scientific inspection, etc. In global shutter sensors, all pixel signal information needs to be stored in the pixel first and then wait for readout. For a higher frame rate, very fast operation of the pixel array is needed. There are basically two ways to store the signal in the pixel: one is in the charge domain, such as the one shown in [1], which needs a complicated process during pixel fabrication. The other one is in the voltage domain; one example is the one in [2]. This pixel is based on the 4T PPD technology, and normally the driving of the highly capacitive transfer gate limits the speed of the array operation. In this paper we report a new 9T global shutter pixel based on 3-T partially pinned photodiode (PPPD) technology. It incorporates three in-pixel storage capacitors allowing for correlated double sampling (CDS) and pipelined operation of the array (pixel exposure during the readout of the array). Only two control pulses are needed for all the pixels at the end of exposure, which allows high speed exposure control.
Hydrologic impacts of climate change and urbanization in Las Vegas Wash Watershed, Nevada
In this study, a cell-based model for the Las Vegas Wash (LVW) Watershed in Clark County, Nevada, was developed by combining the traditional hydrologic modeling methods (Thornthwaite’s water balance model and the Soil Conservation Service’s Curve Number method) with the pixel-base...
Zhao, Chumin; Kanicki, Jerzy; Konstantinidis, Anastasios C; Patel, Tushita
2015-11-01
Large area x-ray imagers based on complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been proposed for various medical imaging applications including digital breast tomosynthesis (DBT). The low electronic noise (50-300 e-) of CMOS APS x-ray imagers provides a possible route to shrink the pixel pitch to smaller than 75 μm for microcalcification detection and possible reduction of the DBT mean glandular dose (MGD). In this study, imaging performance of a large area (29×23 cm2) CMOS APS x-ray imager [Dexela 2923 MAM (PerkinElmer, London)] with a pixel pitch of 75 μm was characterized and modeled. The authors developed a cascaded system model for CMOS APS x-ray imagers using both broadband x-ray radiation and monochromatic synchrotron radiation. The experimental data including modulation transfer function, noise power spectrum, and detective quantum efficiency (DQE) were theoretically described using the proposed cascaded system model with satisfactory consistency to experimental results. Both high full well and low full well (LFW) modes of the Dexela 2923 MAM CMOS APS x-ray imager were characterized and modeled. The cascaded system analysis results were further used to extract the contrast-to-noise ratio (CNR) for microcalcifications with sizes of 165-400 μm at various MGDs. The impact of electronic noise on CNR was also evaluated. The LFW mode shows better DQE at low air kerma (Ka<10 μGy) and should be used for DBT. At current DBT applications, air kerma (Ka∼10 μGy, broadband radiation of 28 kVp), DQE of more than 0.7 and ∼0.3 was achieved using the LFW mode at spatial frequency of 0.5 line pairs per millimeter (lp/mm) and Nyquist frequency ∼6.7 lp/mm, respectively. It is shown that microcalcifications of 165-400 μm in size can be resolved using a MGD range of 0.3-1 mGy, respectively. In comparison to a General Electric GEN2 prototype DBT system (at MGD of 2.5 mGy), an increased CNR (by ∼10) for microcalcifications was observed using the Dexela 2923 MAM CMOS APS x-ray imager at a lower MGD (2.0 mGy). The Dexela 2923 MAM CMOS APS x-ray imager is capable of achieving a high imaging performance at spatial frequencies up to 6.7 lp/mm. Microcalcifications of 165 μm are distinguishable based on reported data and their modeling results due to the small pixel pitch of 75 μm. At the same time, potential dose reduction is expected using the studied CMOS APS x-ray imager.
Modelling of microcracks image treated with fluorescent dye
NASA Astrophysics Data System (ADS)
Glebov, Victor; Lashmanov, Oleg U.
2015-06-01
The main reasons for catastrophes and accidents are a high level of equipment wear and violation of the production technology. The methods of nondestructive testing are designed to find defects in a timely manner and to prevent breakdown of aggregates. These methods allow determining compliance of object parameters with technical requirements without destroying the object. This work discusses dye penetrant inspection or liquid penetrant inspection (DPI or LPI) methods and a computer model of a microcrack image treated with fluorescent dye. Usually cracks in an image look like broken extended lines with small width (about 1 to 10 pixels) and ragged edges. The inspection method used allows detecting microcracks with a depth of about 10 micrometers or more. During this work a mathematical model of an image of randomly located microcracks treated with fluorescent dye was created in the MATLAB environment. Background noise and distortions introduced by the optical system are considered in the model. The factors that influence the image are the following: 1. Background noise: background noise is caused by bright light from external sources and reduces contrast at object edges. 2. Noise on the image sensor: digital noise manifests itself in the form of randomly located points that differ in their brightness and color. 3. Distortions caused by aberrations of the optical system: after passing through a real optical system the homocentricity of the bundle of rays is violated, or homocentricity remains but the rays intersect at a point that does not coincide with the point of the ideal image. The stronger the influence of the above-listed factors, the worse the image quality and, therefore, the harder it is to analyze the image for inspection of the item. The mathematical model is created using the following algorithm: at the beginning the number of cracks to be modeled is entered from the keyboard. Then a point with a random position is chosen on a matrix of size 1024×1024 pixels (the result image size). This random pixel and two adjacent points are painted with random brightness; the points located at the edges have lower brightness than the central pixel. The width of the paintbrush is 3 pixels. Further, one of the eight possible directions is chosen and the painting continues in this direction. The number of 'steps' is also entered at the beginning of the program. This method of simulating cracks is based on the theory of A.N. Galybin and A.V. Dyskin, which describes crack propagation as a random-walk process. These operations are repeated as many times as there are cracks to simulate. After that, background noise and Gaussian blur (to simulate bad focusing of the optical system) are applied.
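A compact re-creation of the crack-drawing loop described above (random seed point, 3-pixel-wide brush, random choice among eight directions at each step); brightness values, blur, and noise levels are arbitrary stand-ins for the MATLAB model's settings.

    # Hedged re-implementation of the random-walk crack painting described above.
    import numpy as np
    from scipy import ndimage

    def simulate_cracks(n_cracks=5, n_steps=300, size=1024, seed=0):
        rng = np.random.default_rng(seed)
        img = np.zeros((size, size), dtype=float)
        directions = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        for _ in range(n_cracks):
            y, x = rng.integers(1, size - 1, size=2)
            for _ in range(n_steps):
                center = rng.uniform(0.6, 1.0)                 # central pixel is brighter
                img[y, x] = max(img[y, x], center)
                dy, dx = directions[rng.integers(8)]
                # 3-pixel-wide brush: paint the two pixels flanking the walk direction
                for oy, ox in ((-dx, dy), (dx, -dy)):
                    yy = np.clip(y + oy, 0, size - 1)
                    xx = np.clip(x + ox, 0, size - 1)
                    img[yy, xx] = max(img[yy, xx], center * rng.uniform(0.4, 0.8))
                y = int(np.clip(y + dy, 1, size - 2))
                x = int(np.clip(x + dx, 1, size - 2))
        img = ndimage.gaussian_filter(img, sigma=1.0)          # imitate defocus blur
        img += rng.normal(scale=0.02, size=img.shape)          # background/sensor noise
        return np.clip(img, 0.0, 1.0)

    image = simulate_cracks()
    print(image.shape, round(float(image.max()), 2))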
Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer
2014-01-01
Process-oriented ecological models are frequently used for predicting potential impacts of global changes such as climate and land-cover changes, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially at regional scale, and validate the model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model—the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM)—using a data assimilation technique: the Shuffled Complex Evolution algorithm and a model-inversion R package—Flexible Modeling Environment (FME). The new functionality can support multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS to provide options to save the pixel-based or aggregated county-land cover specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) for a single crop pixel with a corn–wheat rotation and a large ecological region (Level II)—Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and is capable of handling land cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, which is a good example for implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can be readily expanded to incorporate other model inversion algorithms and potential R packages, and also be applied to other ecological models.
NASA Astrophysics Data System (ADS)
Ocampo Giraldo, L.; Bolotnikov, A. E.; Camarda, G. S.; De Geronimo, G.; Fried, J.; Gul, R.; Hodges, D.; Hossain, A.; Ünlü, K.; Vernon, E.; Yang, G.; James, R. B.
2018-03-01
We evaluated the sub-pixel position resolution achievable in large-volume CdZnTe pixelated detectors with conventional pixel patterns and for several different pixel sizes: 2.8 mm, 1.72 mm, 1.4 mm and 0.8 mm. Achieving position resolution below the physical dimensions of pixels (sub-pixel resolution) is a practical path for making high-granularity position-sensitive detectors, <100 μm, using a limited number of pixels dictated by the mechanical constraints and multi-channel readout electronics. High position sensitivity is important for improving the imaging capability of CZT gamma cameras. It also allows for making more accurate corrections of response non-uniformities caused by crystal defects, thus enabling use of standard-grade (unselected) and less expensive CZT crystals for producing large-volume position-sensitive CZT detectors feasible for many practical applications. We analyzed the digitized charge signals from a representative 9 pixels and the cathode, generated using a pulsed-laser light beam focused down to 10 μm (650 nm) to scan over a selected 3 × 3 pixel area. We applied our digital pulse processing technique to the time-correlated signals captured from adjacent pixels to achieve and evaluate the capability for sub-pixel position resolution. As an example, we also demonstrated an application of 3D corrections to improve the energy resolution and positional information of the events for the tested detectors.
Giraldo, L. Ocampo; Bolotnikov, A. E.; Camarda, G. S.; ...
2017-12-18
Here, we evaluated the sub-pixel position resolution achievable in large-volume CdZnTe pixelated detectors with conventional pixel patterns and for several different pixel sizes: 2.8 mm, 1.72 mm, 1.4 mm and 0.8 mm. Achieving position resolution below the physical dimensions of pixels (sub-pixel resolution) is a practical path for making high-granularity position-sensitive detectors, <100 μm, using a limited number of pixels dictated by the mechanical constraints and multi-channel readout electronics. High position sensitivity is important for improving the imaging capability of CZT gamma cameras. It also allows for making more accurate corrections of response non-uniformities caused by crystal defects, thus enabling use of standard-grade (unselected) and less expensive CZT crystals for producing large-volume position-sensitive CZT detectors feasible for many practical applications. We analyzed the digitized charge signals from a representative 9 pixels and the cathode, generated using a pulsed-laser light beam focused down to 10 μm (650 nm) to scan over a selected 3×3 pixel area. We applied our digital pulse processing technique to the time-correlated signals captured from adjacent pixels to achieve and evaluate the capability for sub-pixel position resolution. As an example, we also demonstrated an application of 3D corrections to improve the energy resolution and positional information of the events for the tested detectors.
NASA Astrophysics Data System (ADS)
Chen, J. M.; Chen, X.; Ju, W.
2013-03-01
Due to the heterogeneous nature of the land surface, spatial scaling is an inevitable issue in the development of land models coupled with low-resolution Earth system models (ESMs) for predicting land-atmosphere interactions and carbon-climate feedbacks. In this study, a simple spatial scaling algorithm is developed to correct errors in net primary productivity (NPP) estimates made at a coarse spatial resolution based on sub-pixel information of vegetation heterogeneity and surface topography. An eco-hydrological model BEPS-TerrainLab, which considers both vegetation and topographical effects on the vertical and lateral water flows and the carbon cycle, is used to simulate NPP at 30 m and 1 km resolutions for a 5700 km2 watershed with an elevation range from 518 m to 3767 m in the Qinling Mountain, Shaanxi Province, China. Assuming that the NPP simulated at 30 m resolution represents the reality and that at 1 km resolution is subject to errors due to sub-pixel heterogeneity, a spatial scaling index (SSI) is developed to correct the coarse resolution NPP values pixel by pixel. The agreement between the NPP values at these two resolutions is improved considerably from R2 = 0.782 to R2 = 0.884 after the correction. The mean bias error (MBE) in NPP modeled at the 1 km resolution is reduced from 14.8 g C m-2 yr-1 to 4.8 g C m-2 yr-1 in comparison with NPP modeled at 30 m resolution, where the mean NPP is 668 g C m-2 yr-1. The range of spatial variations of NPP at 30 m resolution is larger than that at 1 km resolution. Land cover fraction is the most important vegetation factor to be considered in NPP spatial scaling, and slope is the most important topographical factor for NPP spatial scaling especially in mountainous areas, because of its influence on the lateral water redistribution, affecting water table, soil moisture and plant growth. Other factors including leaf area index (LAI), elevation and aspect have small and additive effects on improving the spatial scaling between these two resolutions.
NASA Astrophysics Data System (ADS)
Chen, J. M.; Chen, X.; Ju, W.
2013-07-01
Due to the heterogeneous nature of the land surface, spatial scaling is an inevitable issue in the development of land models coupled with low-resolution Earth system models (ESMs) for predicting land-atmosphere interactions and carbon-climate feedbacks. In this study, a simple spatial scaling algorithm is developed to correct errors in net primary productivity (NPP) estimates made at a coarse spatial resolution based on sub-pixel information of vegetation heterogeneity and surface topography. An eco-hydrological model BEPS-TerrainLab, which considers both vegetation and topographical effects on the vertical and lateral water flows and the carbon cycle, is used to simulate NPP at 30 m and 1 km resolutions for a 5700 km2 watershed with an elevation range from 518 m to 3767 m in the Qinling Mountain, Shaanxi Province, China. Assuming that the NPP simulated at 30 m resolution represents the reality and that at 1 km resolution is subject to errors due to sub-pixel heterogeneity, a spatial scaling index (SSI) is developed to correct the coarse resolution NPP values pixel by pixel. The agreement between the NPP values at these two resolutions is improved considerably from R2 = 0.782 to R2 = 0.884 after the correction. The mean bias error (MBE) in NPP modelled at the 1 km resolution is reduced from 14.8 g C m-2 yr-1 to 4.8 g C m-2 yr-1 in comparison with NPP modelled at 30 m resolution, where the mean NPP is 668 g C m-2 yr-1. The range of spatial variations of NPP at 30 m resolution is larger than that at 1 km resolution. Land cover fraction is the most important vegetation factor to be considered in NPP spatial scaling, and slope is the most important topographical factor for NPP spatial scaling especially in mountainous areas, because of its influence on the lateral water redistribution, affecting water table, soil moisture and plant growth. Other factors including leaf area index (LAI) and elevation have small and additive effects on improving the spatial scaling between these two resolutions.
Pneumothorax detection in chest radiographs using convolutional neural networks
NASA Astrophysics Data System (ADS)
Blumenfeld, Aviel; Konen, Eli; Greenspan, Hayit
2018-02-01
This study presents a computer assisted diagnosis system for the detection of pneumothorax (PTX) in chest radiographs based on a convolutional neural network (CNN) for pixel classification. Using a pixel classification approach allows utilization of the texture information in the local environment of each pixel while training a CNN model on millions of training patches extracted from a relatively small dataset. The proposed system uses a pre-processing step of lung field segmentation to overcome the large variability in the input images coming from a variety of imaging sources and protocols. Using a CNN classification, suspected pixel candidates are extracted within each lung segment. A postprocessing step follows to remove non-physiological suspected regions and noisy connected components. The overall percentage of suspected PTX area was used as a robust global decision for the presence of PTX in each lung. The system was trained on a set of 117 chest x-ray images with ground truth segmentations of the PTX regions. The system was tested on a set of 86 images and reached diagnosis accuracy of AUC=0.95. Overall preliminary results are promising and indicate the growing ability of CAD based systems to detect findings in medical imaging on a clinical level accuracy.
Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification.
Soares, João V B; Leandro, Jorge J G; Cesar Júnior, Roberto M; Jelinek, Herbert F; Cree, Michael J
2006-09-01
We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on publicly available DRIVE (Staal et al., 2004) and STARE (Hoover et al., 2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, slightly superior to that presented by state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods.
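The classification step can be sketched as fitting one Gaussian mixture per class to labeled feature vectors and assigning each pixel to the class with the higher posterior (likelihood times class prior); the two-dimensional features here are synthetic stand-ins for the intensity-plus-Gabor-response vectors used in the paper.

    # Hedged sketch: Bayesian pixel classification with class-conditional Gaussian mixtures.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(6)
    # Synthetic training features for vessel and non-vessel pixels
    vessel     = rng.normal(loc=[2.0, 1.5], scale=0.6, size=(2000, 2))
    background = rng.normal(loc=[0.0, 0.0], scale=0.8, size=(6000, 2))

    gmm_vessel = GaussianMixture(n_components=3, random_state=0).fit(vessel)
    gmm_back   = GaussianMixture(n_components=3, random_state=0).fit(background)
    prior_vessel = len(vessel) / (len(vessel) + len(background))

    def classify(features):
        """Return True where the vessel posterior exceeds the non-vessel posterior."""
        log_post_vessel = gmm_vessel.score_samples(features) + np.log(prior_vessel)
        log_post_back   = gmm_back.score_samples(features) + np.log(1.0 - prior_vessel)
        return log_post_vessel > log_post_back

    test = np.array([[2.1, 1.4], [0.1, -0.2]])
    print(classify(test))   # expected: [ True False ]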
Bioinspired architecture approach for a one-billion transistor smart CMOS camera chip
NASA Astrophysics Data System (ADS)
Fey, Dietmar; Komann, Marcus
2007-05-01
In the paper we present a massively parallel VLSI architecture for future smart CMOS camera chips with up to one billion transistors. Traditional parallel architectures oriented on central structures and based on MIMD or SIMD approaches will fail to exploit efficiently the potential offered by future micro- or nanoelectronic devices. They require too long and too many global interconnects for the distribution of code or the access to common memory. On the other hand, nature has developed self-organising and emergent principles to successfully manage complex structures built from many interacting simple elements. Therefore we developed a new emergent computing paradigm, denoted Marching Pixels, based on a mixture of bio-inspired computing models such as cellular automata and artificial ants. In the paper we present different Marching Pixels algorithms and the corresponding VLSI array architecture. A detailed synthesis result for a 0.18 μm CMOS process shows that a 256×256 pixel image is processed in less than 10 ms assuming a moderate 100 MHz clock rate for the processor array. Future higher integration densities and a 3D chip stacking technology will allow the integration and processing of megapixel images within the same time since our architecture is fully scalable.
Robust and efficient modulation transfer function measurement with CMOS color sensors
NASA Astrophysics Data System (ADS)
Farsani, Raziyeh A.; Sure, Thomas; Apel, Uwe
2017-06-01
The industry's increasing challenges in improving camera performance through control and testing of the alignment process will be discussed in this paper. The major difficulties, such as special CFAs that have white/clear pixels instead of a Bayer pattern and non-homogeneous backlight illumination of the targets used for such tests, will be outlined, and strategies for handling them will be presented. The proposed algorithms are applied to synthetically generated edges, as well as to experimental images taken from ADAS cameras in standard illumination conditions, to validate the approach. In addition, to consider the influence of the chromatic aberration of the lens and the CFA's influence on the total system MTF, the on-axis focus behavior of the camera module will be presented for each pixel class separately. It will be shown that the repeatability of the measurement results of the system MTF is improved, as a result of a more accurate and robust edge angle detection, elimination of systematic errors, an improved lateral shift of the pixels and analytical modeling of the edge transition. Results also show the necessity of separate contrast measurements in the different pixel classes to ensure a precise focus position.
Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.
Barter, James D; Thompson, Harold R; Richardson, Christine L
2003-03-20
A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 x 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to ones used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization over a wide range of linear and circular polarization combinations to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improvement of the polarimetric modeling of the splitting prism and by implementation of a pixel-by-pixel calibration.
ImgLib2--generic image processing in Java.
Pietzsch, Tobias; Preibisch, Stephan; Tomancák, Pavel; Saalfeld, Stephan
2012-11-15
ImgLib2 is an open-source Java library for n-dimensional data representation and manipulation with focus on image processing. It aims at minimizing code duplication by cleanly separating pixel-algebra, data access and data representation in memory. Algorithms can be implemented for classes of pixel types and generic access patterns by which they become independent of the specific dimensionality, pixel type and data representation. ImgLib2 illustrates that an elegant high-level programming interface can be achieved without sacrificing performance. It provides efficient implementations of common data types, storage layouts and algorithms. It is the data model underlying ImageJ2, the KNIME Image Processing toolbox and an increasing number of Fiji-Plugins. ImgLib2 is licensed under BSD. Documentation and source code are available at http://imglib2.net and in a public repository at https://github.com/imagej/imglib. Supplementary data are available at Bioinformatics Online. saalfeld@mpi-cbg.de
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach for constructing visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that reported in previous papers.
Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert
2015-12-08
Methods of generating a three dimensional representation of an object in a reference plane from a depth map including distances from a reference point to pixels in an image of the object taken from that reference point. Weights are assigned to respective voxels in a three dimensional grid along rays extending from the reference point through the pixels in the image based on the distances in the depth map from the reference point to the respective pixels, and a height map including an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.
Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří
2017-11-10
We propose and demonstrate a spectrally-resolved photoluminescence imaging setup based on the so-called single pixel camera, a compressive-sensing technique which enables imaging by using a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was achieved via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector we attained a realization of a spectrally-resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of the speckle patterns, pattern fineness, and the number of data points. Finally, we compare the presented technique to hyperspectral imaging using sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral areas.
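A toy sketch of single-pixel imaging with random patterns, using the simple correlation (ghost-imaging style) estimator rather than a full sparse-reconstruction solver; the image size, pattern count, surrogate random patterns, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 3000                        # image side length, number of speckle patterns

# Ground-truth scene and a stack of random "speckle" patterns (illustrative surrogates).
scene = np.zeros((n, n)); scene[8:24, 12:20] = 1.0
patterns = rng.random((m, n, n))

# Single-pixel measurements: one scalar per pattern (total light through the pattern).
y = (patterns.reshape(m, -1) @ scene.ravel()) + 0.01 * rng.standard_normal(m)

# Correlation reconstruction: average patterns weighted by mean-removed measurements.
recon = ((y - y.mean())[:, None] * patterns.reshape(m, -1)).mean(axis=0).reshape(n, n)
```

With a spectrometer as the bucket detector, y becomes a vector of intensities per pattern and the same estimator is applied per wavelength channel, yielding a hyperspectral cube.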
Efficient graph-cut tattoo segmentation
NASA Astrophysics Data System (ADS)
Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.
2015-03-01
Law enforcement is interested in exploiting tattoos as an information source to identify, track and prevent gang-related crimes. Many tattoo image retrieval systems have been described. In a retrieval system, tattoo segmentation is an important step for retrieval accuracy since segmentation removes background information in a tattoo image. Existing segmentation methods do not extract the tattoo well when the background includes textures and colors similar to skin tones. In this paper we describe a tattoo segmentation approach that determines skin pixels in regions near the tattoo. In these regions, graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation, we determine which sets of skin pixels are connected to each other and form a closed contour enclosing a tattoo. The regions surrounded by these closed contours are considered tattoo regions. Our method segments tattoos well when the background includes textures and colors similar to skin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, S; Vedantham, S; Karellas, A
Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 keV photons. The detector had hexagonal pixels with apothem of 30 microns resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways; the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at Nyquist frequencies alone resulted in 54 microns square pixels. For the photon-counting CdTe detector and after resampling to 53 microns square pixels using quadratic interpolation, the presampling MTF at Nyquist frequency of 9.434 cycles/mm along the two directions were 0.501 and 0.507. Conclusion: Hexagonal pixel array photon-counting CdTe detector after resampling to square pixels provides high-resolution imaging suitable for fluoroscopy.
NASA Astrophysics Data System (ADS)
Chakrabarty, Abhisek
2016-07-01
Crop fraction, the ratio of crop cover within a ground pixel, is very important for monitoring crop growth. One of the most important variables in crop growth monitoring is the fraction of available solar radiation intercepted by foliage. Late blight of potato (Solanum tuberosum), caused by the oomycete pathogen Phytophthora infestans, is considered one of the most destructive potato crop diseases worldwide. Under favourable climatic conditions, and without intervention (i.e. fungicide sprays), the disease can destroy a potato crop within a few weeks. It is therefore important to evaluate the crop fraction for monitoring healthy and late-blight-affected potato crops. This study was conducted in the potato bowl of West Bengal, which consists of the districts of Hooghly, Howrah, Burdwan, Bankura, and Paschim Medinipur. In this study, different crop fraction estimation methods, such as linear spectral un-mixing, a Normalized Difference Vegetation Index (NDVI) based DPM model (Zhang et al. 2013), a ratio vegetation index (RVI) based DPM model, and the improved Pixel Dichotomy Model (Li et al. 2014), were evaluated using multi-temporal IRS AWiFS data in two successive potato growing seasons of 2012-13 and 2013-14 over the study area and compared with the measured crop fraction. The comparative study based on measured healthy and late-blight-affected potato crop fractions showed that the improved Pixel Dichotomy Model maintained a high coefficient of determination (R2 = 0.835) with a low root mean square error (RMSE = 0.21), whereas the corresponding values for the NDVI-based DPM model and the RVI-based DPM model were 0.763 and 0.694, respectively. The changing pattern of the crop fraction profile of late-blight-affected potato crops was studied with respect to the healthy potato crop fraction, which was extracted from 269 GPS points of potato fields. It showed that the healthy potato crop fraction profile maintained the normal phenological trend, whereas the late-blight-affected profile fell sharply after the crop was affected by late blight. Based on the results of this study, it can therefore be concluded that the improved Pixel Dichotomy Model is the most convenient method for crop fraction estimation in this region, with satisfactory accuracy.
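A minimal sketch of the NDVI-based dimidiate pixel (pixel dichotomy) model referenced above: the vegetation/crop fraction of a pixel is the NDVI scaled linearly between bare-soil and full-vegetation endmember values. The endmember values below are placeholders; in practice they are taken from pure soil and pure crop pixels (or percentile statistics) in the scene.

```python
import numpy as np

def dimidiate_pixel_fvc(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    """Fractional cover from NDVI via the dimidiate pixel model, clipped to [0, 1]."""
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

ndvi = np.array([0.02, 0.30, 0.55, 0.90])
print(dimidiate_pixel_fvc(ndvi))   # -> approximately [0, 0.3125, 0.625, 1.0]
```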
Performance assessment of a single-pixel compressive sensing imaging system
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Preece, Bradley L.
2016-05-01
Conventional electro-optical and infrared (EO/IR) systems capture an image by measuring the light incident at each of the millions of pixels in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process known as sparse reconstruction to recover the image as if a fully populated array that satisfies the Nyquist criteria was used. Therefore, CS operates under the assumption that signal acquisition and data compression can be accomplished simultaneously. CS has the potential to acquire an image with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance between physical considerations (SWaP-C), reconstruction accuracy, and reconstruction speed to meet operational requirements. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of the two-handheld object target set at range was collected using a passive SWIR single-pixel CS camera for various ranges, mirror resolution, and number of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NV-IPM) by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of CS modeling techniques will be discussed.
Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features
Zhu, Ningning; Jia, Yonghong; Ji, Shunping
2018-01-01
We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
NASA Astrophysics Data System (ADS)
Das, Sukanta Kumar; Shukla, Ashish Kumar
2011-04-01
Single-frequency users of a satellite-based augmentation system (SBAS) rely on ionospheric models to mitigate the delay due to the ionosphere. The ionosphere is the major source of range and range rate errors for users of the Global Positioning System (GPS) who require high-accuracy positioning. The purpose of the present study is to develop a tomography model to reconstruct the total electron content (TEC) over the low-latitude Indian region, which lies in the equatorial ionospheric anomaly belt. In the present study, the TEC data collected from six TEC collection stations along a longitudinal belt of around 77 degrees are used. The main objective of the study is to find the optimum pixel size which supports a better reconstruction of the electron density, and hence the TEC, over the low-latitude Indian region. Performance of two reconstruction algorithms, the Algebraic Reconstruction Technique (ART) and the Multiplicative Algebraic Reconstruction Technique (MART), is analyzed for different pixel sizes varying from 1 to 6 degrees in latitude. It is found from the analysis that the optimum pixel size is 5° × 50 km over the Indian region using both the ART and MART algorithms.
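A compact sketch of the two row-action algorithms compared above, written for a generic linear tomography system y = A x (A holding ray-path lengths through pixels, y the measured slant TEC, x the pixel electron densities); the relaxation factors, iteration counts, and the toy system are illustrative assumptions.

```python
import numpy as np

def art(A, y, n_iter=50, lam=0.5):
    """Additive Algebraic Reconstruction Technique (Kaczmarz-style row updates)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for a_i, y_i in zip(A, y):
            x += lam * (y_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

def mart(A, y, n_iter=50, lam=0.5):
    """Multiplicative ART; requires non-negative A, y and a positive initial guess."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for a_i, y_i in zip(A, y):
            proj = a_i @ x
            if proj > 0:
                x *= (y_i / proj) ** (lam * a_i / a_i.max())
    return x

# Tiny toy problem: 4 "rays" through 3 "pixels".
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true
print(art(A, y), mart(A, y))   # both should approach x_true for this consistent system
```

In the ionospheric setting, changing the pixel size changes the number of columns of A and the conditioning of the problem, which is what the study above evaluates.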
Model Development and Testing for THEMIS Controlled Mars Mosaics
NASA Technical Reports Server (NTRS)
Archinal, B. A.; Sides, S.; Weller, L.; Cushing, G.; Titus, T.; Kirk, R. L.; Soderblom, L. A.; Duxbury, T. C.
2005-01-01
As part of our work [1] to develop techniques and procedures to create regional and eventually global THEMIS mosaics of Mars, we are developing algorithms and software to photogrammetrically control THEMIS IR line scanner camera images. We have found from comparison of a limited number of images to MOLA digital image models (DIMs) [2] that the a priori geometry information (i.e. SPICE [3]) for THEMIS images generally allows their relative positions to be specified at the several pixel level (e.g., approx. 5 to 13 pixels). However a need for controlled solutions to improve this geometry to the sub-pixel level still exists. Only with such solutions can seamless mosaics be obtained and likely distortion from spacecraft motion during image collection removed at such levels. Past experience has shown clearly that such mosaics are in heavy demand by users for operational and scientific use, and that they are needed over large areas or globally (as opposed to being available only on a limited basis via labor intensive custom mapping projects). Uses include spacecraft navigation, landing site planning and mapping, registration of multiple data types and image sets, registration of multispectral images, registration of images with topographic information, recovery of thermal properties, change detection searches, etc.
Analysis of Performance of Stereoscopic-Vision Software
NASA Technical Reports Server (NTRS)
Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert
2007-01-01
A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
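The down-range sensitivity quoted above follows from the standard stereo geometry: range z = f·B/d, so a disparity error δd maps to a range error δz ≈ z²·δd/(f·B). A small numeric sketch under assumed camera parameters (the focal length in pixels and the baseline below are placeholders, not the values of the analyzed system; the 0.32 pixel disparity error is the figure reported above).

```python
# Down-range error from stereo disparity error: z = f*B/d  ->  dz ~= z**2 * dd / (f*B)
f_pix = 1000.0       # focal length in pixels (assumed)
baseline = 0.30      # stereo baseline in meters (assumed)
disp_sigma = 0.32    # disparity error standard deviation in pixels (from the study)

for z in (2.0, 5.0, 10.0):                           # ranges in meters
    dz = z**2 * disp_sigma / (f_pix * baseline)
    print(f"range {z:4.1f} m -> down-range sigma {dz*100:.1f} cm")
```

The quadratic growth of δz with range is why disparity accuracy at the sub-pixel level matters so much for long-range terrain mapping.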
Xiao, Xun; Geyer, Veikko F.; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F.
2016-01-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582
NASA Astrophysics Data System (ADS)
Ocampo Giraldo, Luis A.; Bolotnikov, Aleksey E.; Camarda, Giuseppe S.; Cui, Yonggang; De Geronimo, Gianluigi; Gul, Rubi; Fried, Jack; Hossain, Anwar; Unlu, Kenan; Vernon, Emerson; Yang, Ge; James, Ralph B.
2017-05-01
High-resolution position-sensitive detectors have been proposed to correct response non-uniformities in Cadmium Zinc Telluride (CZT) crystals by virtually subdividing the detector's area into small voxels and equalizing responses from each voxel. 3D pixelated detectors coupled with multichannel readout electronics are the most advanced type of CZT devices, offering many options in signal processing and enhancing detector performance. One recent innovation proposed for pixelated detectors is to use the induced (transient) signals from neighboring pixels to achieve high sub-pixel position resolution while keeping large pixel sizes. The main hurdle in achieving this goal is the relatively low signal induced on the neighboring pixels because of the electrostatic shielding effect caused by the collecting pixel. In addition, to achieve high position sensitivity one should rely on time-correlated transient signals, which means that digitized output signals must be used. We present the results of our studies to measure the amplitude of the pixel signals so that these can be used to measure positions of the interaction points. This is done with the processing of digitized correlated time signals measured from several adjacent pixels taking into account rise-time and charge-sharing effects. In these measurements we used a focused pulsed laser to generate a 10-micron beam at one milliwatt (650-nm wavelength) over the detector surface while the collecting pixel was moved in cardinal directions. The results include measurements that present the benefits of combining conventional pixel geometry with digital pulse processing for the best approach in achieving sub-pixel position resolution with pixel dimensions of approximately 2 mm. We also present the sub-pixel resolution measurements at comparable energies from various gamma emitting isotopes.
NASA Astrophysics Data System (ADS)
Melnikova, I.; Mukai, S.; Vasilyev, A.
Data from remote measurements of reflected radiance with the POLDER instrument on board the ADEOS satellite are used for retrieval of the optical thickness, single scattering albedo and phase function parameter of the cloudy and clear atmosphere. A perceptron neural network is used which, from input values of the multiangle radiance and the solar incidence angle, allows the surface albedo, optical thickness, single scattering albedo and phase function parameter to be obtained in the clear-sky case. The last two parameters are determined as optical averages for the atmospheric column. Solar radiances were calculated with the MODTRAN-3 code, taking multiple scattering into account, for training the neural network. All of the mentioned parameters were varied randomly on the basis of statistical models of the possible variation of the measured parameters. Results of processing one frame of remote observations consisting of 150,000 pixels are presented. The methodology elaborated allows operational determination of the optical characteristics of both cloudy and clear atmospheres. Further interpretation of these results makes it possible to extract information about the total content of atmospheric aerosols and absorbing gases in the atmosphere and to create models of the real cloudiness. An analytical method of interpretation, based on asymptotic formulas of multiple scattering theory, is applied to remote observations of reflected radiance for cloudy pixels. Details of the methodology and error analysis were published and discussed earlier. Here we present results of data processing for a pixel size of 6x6 km. In many earlier studies the optical thickness was evaluated under the assumption of conservative scattering, but in the case of true absorption in clouds large errors in the retrieved parameter are possible. The simultaneous retrieval of two parameters at every wavelength independently is an advantage over earlier studies. The analytical methodology is based on inversion of the asymptotic formulas of transfer theory for optically thick stratus clouds. The model of a horizontally infinite layer is considered, and slight horizontal heterogeneity is approximately taken into account. Formulas containing only the measured values of the bidirectional radiance and functions of the solar and viewing angles were derived earlier. Six azimuth harmonics of the reflection function are taken into account. A simple approximation of the cloud-top boundary heterogeneity is used: clouds projecting above the cloud-top plane increase the diffuse component of the incident flux, which is essential for the calculation of radiative characteristics that depend on the illumination conditions. The escape and reflection functions describe this dependence for the reflected radiance, and the local albedo of a semi-infinite medium describes it for the irradiance; thus the functions depending on the solar incidence angle have to be replaced by their modified counterparts. First, the optical thickness of every pixel is obtained with a simple formula assuming conservative scattering for all available viewing directions. Deviations between the obtained values may be taken as a measure of the deviation of the cloud top from a plane, and a special parameter is derived that accounts for the shadowing effect. Then the single scattering albedo and optical thickness (allowing for true absorption) are obtained for pairs of viewing directions with equal optical thickness. After that, the values obtained are averaged and the relative error is evaluated over all viewing directions of every pixel.
The procedure is repeated for all wavelengths and pixels independently.
NASA Astrophysics Data System (ADS)
Di, K.; Liu, Y.; Liu, B.; Peng, M.
2012-07-01
Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of CE-1 and CE-2 CCD cameras based on push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinate of a ground point in lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1 and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) are automatically generated.
Fundamental performance differences between CMOS and CCD imagers, part IV
NASA Astrophysics Data System (ADS)
Janesick, James; Pinter, Jeff; Potter, Robert; Elliott, Tom; Andrews, James; Tower, John; Grygon, Mark; Keller, Dave
2010-07-01
This paper is a continuation of past papers written on fundamental performance differences of scientific CMOS and CCD imagers. New characterization results presented below include: 1) a new 1536 × 1536 × 8 μm 5TPPD pixel CMOS imager, 2) buried channel MOSFETs for random telegraph noise (RTN) and threshold reduction, 3) sub-electron noise pixels, 4) 'MIM pixel' for pixel sensitivity (V/e-) control, 5) '5TPPD RING pixel' for large pixel, high-speed charge transfer applications, 6) pixel-to-pixel blooming control, 7) buried channel photo gate pixels and CMOSCCDs, 8) substrate bias for deep depletion CMOS imagers, 9) CMOS dark spikes and dark current issues and 10) high energy radiation damage test data. Discussions are also given to a 1024 × 1024 × 16 μm 5TPPD pixel imager currently in fabrication and new stitched CMOS imagers that are in the design phase, including 4k × 4k × 10 μm and 10k × 10k × 10 μm imager formats.
The design of high dynamic range ROIC for IRFPAs
NASA Astrophysics Data System (ADS)
Jiang, Dazhao; Liang, Qinghua; Zhang, Qiwen; Chen, Honglei; Ding, Ruijun
2015-10-01
A charge-packet readout integrated circuit (ROIC) technology for IRFPAs is introduced, which enables every pixel to achieve a very high electron storage capacity, improves the SNR, and reduces the possibility of pixel saturation. A LWIR ROIC requires the ability to store a large number of electrons. For a conventional ROIC, the maximum charge capacity is determined by the integration capacitance and the operating voltage, so a higher charge capacity can only be achieved by increasing the area of the integration capacitor or raising the operating voltage. This paper introduces a digital ROIC approach that can achieve a very high charge capacity. The circuit architecture includes a preamplifier, a comparator, a counter, and memory arrays, and the maximum charge capacity of the pixel is determined by the counter bit depth. This new method can achieve a charge capacity of more than 1 Ge- per pixel and outputs the digital signal directly, whereas a conventional ROIC stores less than 50 Me- and outputs an analog signal from the pixel. In this new circuit the comparator is an important module, since the integration voltage must be compared with the threshold voltage throughout the integration period, and its influence is discussed. The circuit was designed in the CSMC 0.35 μm CMOS technology, and simulations were performed with Spectre.
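A back-of-the-envelope sketch of why a counting (charge-packet) ROIC reaches such large effective well capacities: the capacity is the charge removed per comparator trip multiplied by the number of counts the in-pixel counter can hold. The packet size and counter width below are assumptions for illustration, not the paper's design values.

```python
# Effective charge capacity of a counting ROIC pixel (illustrative numbers).
q_packet_e = 1.0e6        # electrons removed per comparator trip / count (assumed)
counter_bits = 12         # in-pixel counter width (assumed)

capacity_e = q_packet_e * (2**counter_bits - 1)
print(f"{capacity_e:.2e} electrons")   # ~4.1e9 e-, i.e. well above 1 Ge- per pixel
```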
Multiparametric dynamic contrast-enhanced ultrasound imaging of prostate cancer.
Wildeboer, Rogier R; Postema, Arnoud W; Demi, Libertario; Kuenen, Maarten P J; Wijkstra, Hessel; Mischi, Massimo
2017-08-01
The aim of this study is to improve the accuracy of dynamic contrast-enhanced ultrasound (DCE-US) for prostate cancer (PCa) localization by means of a multiparametric approach. Thirteen different parameters related to either perfusion or dispersion were extracted pixel-by-pixel from 45 DCE-US recordings in 19 patients referred for radical prostatectomy. Multiparametric maps were retrospectively produced using a Gaussian mixture model algorithm. These were subsequently evaluated on their pixel-wise performance in classifying 43 benign and 42 malignant histopathologically confirmed regions of interest, using a prostate-based leave-one-out procedure. The combination of the spatiotemporal correlation (r), mean transit time (μ), curve skewness (κ), and peak time (PT) yielded an accuracy of 81% ± 11%, which was higher than the best performing single parameters: r (73%), μ (72%), and wash-in time (72%). The negative predictive value increased to 83% ± 16% from 70%, 69% and 67%, respectively. Pixel inclusion based on the confidence level boosted these measures to 90% with half of the pixels excluded, but without disregarding any prostate or region. Our results suggest multiparametric DCE-US analysis might be a useful diagnostic tool for PCa, possibly supporting future targeting of biopsies or therapy. Application in other types of cancer can also be foreseen. • DCE-US can be used to extract both perfusion and dispersion-related parameters. • Multiparametric DCE-US performs better in detecting PCa than single-parametric DCE-US. • Multiparametric DCE-US might become a useful tool for PCa localization.
NASA Astrophysics Data System (ADS)
Luo, Yuan; Wang, Bo-yu; Zhang, Yi; Zhao, Li-ming
2018-03-01
In this paper, addressing the problem that, under different illuminations and random noises, the local texture features of a face image cannot be completely described because the threshold of the local ternary pattern (LTP) cannot be calculated adaptively, a local three-value model, the improved adaptive local ternary pattern (IALTP), is proposed. Firstly, the difference function between the center pixel and the neighborhood pixel weights is established to obtain the statistical characteristics of the central pixel and the neighborhood pixels. Secondly, an adaptive gradient-descent iterative function is established to calculate the difference coefficient, which is defined to be the threshold of the IALTP operator. Finally, the mean and standard deviation of the pixel weights of the local region are used as the coding mode of IALTP. In order to reflect the overall properties of the face and reduce the feature dimension, the two-directional two-dimensional PCA ((2D)2PCA) is adopted. The IALTP is used to extract local texture features of the eye and mouth areas. After combining the global features and local features, the fusion features (IALTP+) are obtained. The experimental results on the Extended Yale B and AR standard face databases indicate that, under different illuminations and random noises, the proposed algorithm is more robust than others and has a smaller feature dimension. The shortest running time reaches 0.3296 s, and the highest recognition rate reaches 97.39%.
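For reference, a sketch of the standard fixed-threshold LTP coding that the IALTP operator generalizes: each 8-neighbor is coded -1/0/+1 against the center pixel with a threshold t, and the ternary pattern is split into "upper" and "lower" binary patterns. The adaptive, gradient-descent threshold of IALTP is not reproduced here; t is a fixed assumption.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Local ternary pattern over the 8-neighborhood; returns upper/lower binary codes."""
    img = img.astype(np.int32)
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= ((nb - center) >  t).astype(np.int32) << bit   # ternary code +1
        lower |= ((nb - center) < -t).astype(np.int32) << bit   # ternary code -1
    return upper, lower

gray = np.random.default_rng(3).integers(0, 256, (32, 32)).astype(np.uint8)
up_code, low_code = ltp_codes(gray, t=5)
```

Histograms of the two binary codes over local regions (eyes, mouth) would then form the texture features that are fused with the global (2D)2PCA features.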
Decision analysis for habitat conservation of an endangered, range-limited salamander
Robinson, Orin J.; McGowan, Conor P.; Apodaca, J.J.
2016-01-01
Many species of conservation concern are habitat limited and often a major focus of management for these species is habitat acquisition and/or restoration. Deciding the location of habitat restoration or acquisition to best benefit a protected species can be a complicated subject with competing management objectives, ecological uncertainties and stochasticity. Structured decision making (SDM) could be a useful approach for explicitly incorporating those complexities while still working toward species conservation and/or recovery. We applied an SDM approach to Red Hills salamander Phaeognathus hubrichti habitat conservation decision making. Phaeognathus hubrichti is a severely range-limited endemic species in south central Alabama and has highly specific habitat requirements. Many known populations live on private lands and the primary mode of habitat protection is habitat conservation planning, but such plans are non-binding and not permanent. Working with stakeholders, we developed an objectives hierarchy linking land acquisition or protection actions to fundamental objectives. We built a model to assess and compare the quality of the habitat in the known range of P. hubrichti. Our model evaluated key habitat attributes of 5814 pixels of 1 km2 each and ranked the pixels from best to worst with respect to P. hubrichti habitat requirements. Our results are a spatially explicit valuation of each pixel, with respect to its probable benefit to P. hubrichti populations. The results of this effort will be used to rank pixels from most to least beneficial, then identify land owners in the most useful areas for salamanders who are willing to sell or enter into a permanent easement agreement.
Charge amplitude distribution of the Gossip gaseous pixel detector
NASA Astrophysics Data System (ADS)
Blanco Carballo, V. M.; Chefdeville, M.; Colas, P.; Giomataris, Y.; van der Graaf, H.; Gromov, V.; Hartjes, F.; Kluit, R.; Koffeman, E.; Salm, C.; Schmitz, J.; Smits, S. M.; Timmermans, J.; Visschers, J. L.
2007-12-01
The Gossip gaseous pixel detector is being developed for the detection of charged particles in extreme high radiation environments as foreseen close to the interaction point of the proposed super LHC. The detecting medium is a thin layer of gas. Because of the low density of this medium, only a few primary electron/ion pairs are created by the traversing particle. To get a detectable signal, the electrons drift towards a perforated metal foil (Micromegas) whereafter they are multiplied in a gas avalanche to provide a detectable signal. The gas avalanche occurs in the high field between the Micromegas and the pixel readout chip (ROC). Compared to a silicon pixel detector, Gossip features a low material budget and a low cooling power. An experiment using X-rays has indicated a possible high radiation tolerance exceeding 10^16 hadrons/cm^2. The amplified charge signal has a broad amplitude distribution due to the limited statistics of the primary ionization and the statistical variation of the gas amplification. Therefore, some degree of inefficiency is inevitable. This study presents experimental results on the charge amplitude distribution for CO2/DME (dimethyl-ether) and Ar/iC4H10 mixtures. The measured curves were fitted with the outcome of a theoretical model. In the model, the physical Landau distribution is approximated by a Poisson distribution that is convoluted with the variation of the gas gain and the electronic noise. The value for the fraction of pedestal events is used for a direct calculation of the cluster density.  For some gases, the measured cluster density is considerably lower than given in literature.
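A Monte Carlo sketch of the amplitude model described above: a Poisson number of primary electrons per track, each multiplied by a fluctuating gas gain (here an exponential single-electron gain, so the sum is Gamma-distributed), plus Gaussian electronic noise. The pedestal fraction P(0 primaries) = exp(-n̄) then gives the cluster density directly. The mean cluster count, gain, and noise values are illustrative assumptions, not fitted parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

n_events   = 100_000
mean_clust = 2.5        # mean number of primary electrons per track (assumed)
mean_gain  = 600.0      # mean single-electron gas gain (assumed)
noise_e    = 80.0       # electronic noise, electrons ENC (assumed)

n_prim = rng.poisson(mean_clust, n_events)
# Sum of exponentially distributed single-electron gains == Gamma(k=n_prim, theta=mean_gain).
signal = np.where(n_prim > 0, rng.gamma(np.maximum(n_prim, 1), mean_gain), 0.0)
amplitude = signal + noise_e * rng.standard_normal(n_events)

pedestal_fraction = np.mean(n_prim == 0)            # ~ exp(-mean_clust)
cluster_density_per_track = -np.log(pedestal_fraction)
```

Fitting a histogram of `amplitude` with this model, and reading the cluster density off the pedestal fraction, mirrors the analysis strategy outlined in the abstract.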
Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.
2017-05-01
Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost and portable multispectral crime scene imaging device would be highly useful and efficient. The second generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatial-spectral correlated errors. Therefore, Fixed Pattern Noise (FPN) correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. Pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. Also, conversion gain is divided into four components: FPN row component, FPN column component, defects component and effective photo response signal component. Conversion gain is then corrected to the average FPN column and row components and defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated from the inverse of the pixel-wise linear radiance to DC model, the spatial uniformity of the corrected image can be enhanced to 7 times that of the raw image, and the bigger the image DC value within its dynamic range, the better the enhancement.
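A sketch of the pixel-wise linear model and its use for correction: per pixel, DC = G·L + Z, where G and Z are fit from flat-field frames at several known radiance levels, and the scene radiance is then recovered by inverting the model. The simulated sensor, frame counts, and radiance levels are illustrative assumptions; the four-component decomposition of the gain described above is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
h, w = 64, 64

# --- Simulated sensor with per-pixel gain G and offset (DSNU) Z ------------------
G_true = 1.0 + 0.05 * rng.standard_normal((h, w))     # conversion-gain non-uniformity
Z_true = 10.0 + 2.0 * rng.standard_normal((h, w))     # dark-signal non-uniformity

def capture(radiance):
    return G_true * radiance + Z_true + 0.5 * rng.standard_normal((h, w))

# --- Calibration: flat fields at several known radiance levels -------------------
levels = np.array([20.0, 60.0, 100.0, 140.0])
frames = np.stack([capture(L) for L in levels])        # shape (n_levels, h, w)

# Per-pixel linear regression of DC against radiance (closed-form slope/intercept).
L_mean = levels.mean()
DC_mean = frames.mean(axis=0)
G_est = ((levels[:, None, None] - L_mean) * (frames - DC_mean)).sum(axis=0) \
        / ((levels - L_mean) ** 2).sum()
Z_est = DC_mean - G_est * L_mean

# --- Correction: invert the model to estimate the incident radiance --------------
scene_dc = capture(80.0)
radiance_est = (scene_dc - Z_est) / G_est              # FPN-corrected radiance estimate
```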
Kolbeck, Carola; Rosentritt, Martin; Lang, Reinhold; Schiller, Manuela; Handel, Gerhard
2009-10-01
To test casting capacities of impression materials under dry and wet sulcular conditions in vitro. An incisor with a circular shoulder preparation (1 mm) was inserted in a primary mold. A shiftable secondary mold allowed adaptation of sulcular depth (1 to 4 mm). An outer circular chamfer assured reproducible positioning of an impression material carrier. Tested materials were PVS of differing viscosities (extra low, Panasil Contact Plus [ELV]; low, Affinis Light Body [LV]; and medium, Virtual Monophase [MV]) and one polyether material of low viscosity (Permadyne Garant [PE]). Impressions were made with sulcular depths of 1 to 4 mm in wet and 1 and 4 mm in dry conditions, cut in half, and digitized with a light microscope (Stemi SV8). Surface area of the region of interest (ROI, at the inner angle of the preparation) was determined with Optimas 6.2. Medians were calculated, and statistical analysis was performed using the Mann-Whitney U test (P ≤ .05). Median values of the measurements under wet conditions demonstrated the smallest ROI areas for the ELV (297-330 [pixel]) and the MV (253-421 [pixel]) materials, followed by the LV (582-745 [pixel]) and the PE (544-823 [pixel]). All materials showed significantly higher values for the wet compared to dry sulcular conditions. Repeated measurements showed no significant differences from the corresponding first determined series. The sulcus model is applicable to assess casting abilities of impression materials in clinically approximated sulcular conditions. The PVS materials with extra low and medium viscosities showed the best properties in dry and wet conditions.
Superpixel-Augmented Endmember Detection for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Thompson, David R.; Castano, Rebecca; Gilmore, Martha
2011-01-01
Superpixels are homogeneous image regions comprised of several contiguous pixels. They are produced by shattering the image into contiguous, homogeneous regions that each cover between 20 and 100 image pixels. The segmentation aims for a many-to-one mapping from superpixels to image features; each image feature could contain several superpixels, but each superpixel occupies no more than one image feature. This conservative segmentation is relatively easy to automate in a robust fashion. Superpixel processing is related to the more general idea of improving hyperspectral analysis through spatial constraints, which can recognize subtle features at or below the level of noise by exploiting the fact that their spectral signatures are found in neighboring pixels. Recent work has explored spatial constraints for endmember extraction, showing significant advantages over techniques that ignore pixels' relative positions. Methods such as AMEE (automated morphological endmember extraction) express spatial influence using fixed isometric relationships, such as a local square window or Euclidean distance in pixel coordinates. In other words, two pixels' covariances are based on their spatial proximity, but are independent of their absolute location in the scene. These isometric spatial constraints are most appropriate when spectral variation is smooth and constant over the image. Superpixels are simple to implement, efficient to compute, and are empirically effective. They can be used as a preprocessing step with any desired endmember extraction technique. Superpixels also have a solid theoretical basis in the hyperspectral linear mixing model, making them a principled approach for improving endmember extraction. Unlike existing approaches, superpixels can accommodate non-isometric covariance between image pixels (characteristic of discrete image features separated by step discontinuities). These kinds of image features are common in natural scenes. Analysts can substitute superpixels for image pixels during endmember analysis that leverages the spatial contiguity of scene features to enhance subtle spectral features. Superpixels define populations of image pixels that are independent samples from each image feature, permitting robust estimation of spectral properties, and reducing measurement noise in proportion to the area of the superpixel. This permits improved endmember extraction, and enables automated search for novel and constituent minerals in very noisy, hyperspatial images. This innovation begins with a graph-based segmentation based on the work of Felzenszwalb et al., but then expands their approach to the hyperspectral image domain with a Euclidean distance metric. Then, the mean spectrum of each segment is computed, and the resulting data cloud is used as input into sequential maximum angle convex cone (SMACC) endmember extraction.
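A minimal sketch of the preprocessing step described above: given a superpixel label map for a hyperspectral cube, replace per-pixel spectra with per-superpixel mean spectra and hand that reduced, noise-averaged set to any endmember extraction routine. The graph-based segmentation and the SMACC extractor themselves are not reproduced here; the label map is an assumed input.

```python
import numpy as np

def superpixel_mean_spectra(cube, labels):
    """cube: (rows, cols, bands); labels: (rows, cols) integer superpixel ids.
    Returns an (n_superpixels, bands) array of mean spectra."""
    rows, cols, bands = cube.shape
    flat_spec = cube.reshape(-1, bands)
    flat_lab = labels.ravel()
    n_sp = flat_lab.max() + 1
    sums = np.zeros((n_sp, bands))
    np.add.at(sums, flat_lab, flat_spec)                 # accumulate spectra per label
    counts = np.bincount(flat_lab, minlength=n_sp)[:, None]
    return sums / counts

# Toy usage: a 10x10 cube with 4 bands and a 2x2 grid of superpixels.
cube = np.random.default_rng(6).random((10, 10, 4))
labels = (np.arange(10)[:, None] // 5) * 2 + (np.arange(10)[None, :] // 5)
means = superpixel_mean_spectra(cube, labels)            # feed these to the endmember extractor
```

Averaging within each superpixel is what provides the noise reduction in proportion to superpixel area mentioned above.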
Circuit models applied to the design of a novel uncooled infrared focal plane array structure
NASA Astrophysics Data System (ADS)
Shi, Shali; Chen, Dapeng; Li, Chaobo; Jiao, Binbin; Ou, Yi; Jing, Yupeng; Ye, Tianchun; Guo, Zheying; Zhang, Qingchuan; Wu, Xiaoping
2007-05-01
This paper describes a circuit model applied to the simulation of the thermal response frequency of a novel substrate-free single-layer bi-material cantilever microstructure used as the focal plane array (FPA) in an uncooled opto-mechanical infrared imaging system. In order to obtain high detection sensitivity for the IR object, gold (Au) is coated alternately on the silicon nitride (SiNx) cantilevers of the pixels (Shi S et al., Sensors and Actuators A, in press), whereas the thermal response frequency decreases (Zhao Y 2002 Dissertation, University of California, Berkeley). A circuit model for such a cantilever microstructure is proposed to be applied to evaluate the thermal response performance. The pixel's thermal frequency (1/τth) is calculated to be 10 Hz under the optimized design parameters, which is compatible with the response of optical readout systems and human eyes.
Spectral analysis of white ash response to emerald ash borer infestations
NASA Astrophysics Data System (ADS)
Calandra, Laura
The emerald ash borer (EAB) (Agrilus planipennis Fairmaire) is an invasive insect that has killed over 50 million ash trees in the US. The goal of this research was to establish a method to identify ash trees infested with EAB using remote sensing techniques at the leaf-level and tree crown level. First, a field-based study at the leaf-level used the range of spectral bands from the WorldView-2 sensor to determine if there was a significant difference between EAB-infested white ash (Fraxinus americana) and healthy leaves. Binary logistic regression models were developed using individual and combinations of wavelengths; the most successful model included 545 and 950 nm bands. The second half of this research employed imagery to identify healthy and EAB-infested trees, comparing pixel- and object-based methods by applying an unsupervised classification approach and a tree crown delineation algorithm, respectively. The pixel-based models attained the highest overall accuracies.
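A sketch of the kind of two-band binary logistic regression used at the leaf level, here with scikit-learn and synthetic reflectance values; the band names follow the abstract, but the data, class separation, and example leaf are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic leaf reflectances in the 545 nm (green) and 950 nm (NIR) bands.
healthy  = np.column_stack([rng.normal(0.10, 0.02, 200), rng.normal(0.55, 0.05, 200)])
infested = np.column_stack([rng.normal(0.14, 0.02, 200), rng.normal(0.45, 0.05, 200)])
X = np.vstack([healthy, infested])
y = np.r_[np.zeros(200), np.ones(200)]          # 0 = healthy, 1 = EAB-infested

model = LogisticRegression().fit(X, y)
print("accuracy on training data:", model.score(X, y))
print("P(infested) for a new leaf:", model.predict_proba([[0.13, 0.48]])[0, 1])
```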
Trends in MODIS Geolocation Error Analysis
NASA Technical Reports Server (NTRS)
Wolfe, R. E.; Nishihama, Masahiro
2009-01-01
Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within orbit and yearly thermally induced cyclic variations in the pointing have been found as well as a general long-term trend.
SuBSENSE: a universal change detection method with local adaptive sensitivity.
St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert
2015-01-01
Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. In addition, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instructions, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online.
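A heavily simplified sketch of the per-pixel feedback idea, not the SuBSENSE method itself: a running-average background model, a per-pixel decision threshold R that grows where the segmentation flickers and shrinks where it is stable, and a per-pixel learning rate tied to that feedback. All update rules and constants below are illustrative assumptions.

```python
import numpy as np

class SimpleFeedbackSubtractor:
    """Toy change detector with per-pixel adaptive threshold and learning rate."""
    def __init__(self, first_frame, r0=20.0, alpha0=0.05):
        self.bg = first_frame.astype(float)
        self.R = np.full(first_frame.shape, r0)        # per-pixel decision threshold
        self.alpha = np.full(first_frame.shape, alpha0)

    def apply(self, frame):
        frame = frame.astype(float)
        dist = np.abs(frame - self.bg)
        fg = dist > self.R                             # foreground where distance exceeds R
        # Feedback: raise R (and slow adaptation) where pixels flicker, lower it elsewhere.
        self.R = np.where(fg, self.R * 1.05, np.maximum(self.R * 0.98, 5.0))
        self.alpha = np.clip(10.0 / self.R, 0.01, 0.2)
        # Update the background only where the pixel was classified as background.
        self.bg = np.where(fg, self.bg, (1 - self.alpha) * self.bg + self.alpha * frame)
        return fg

# Usage sketch: sub = SimpleFeedbackSubtractor(frames[0]); mask = sub.apply(frames[1]); ...
```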
Instrumental Response Model and Detrending for the Dark Energy Camera
Bernstein, G. M.; Abbott, T. M. C.; Desai, S.; ...
2017-09-14
We describe the model for mapping from sky brightness to the digital output of the Dark Energy Camera (DECam) and the algorithms adopted by the Dark Energy Survey (DES) for inverting this model to obtain photometric measures of celestial objects from the raw camera output. This calibration aims for fluxes that are uniform across the camera field of view and across the full angular and temporal span of the DES observations, approaching the accuracy limits set by shot noise for the full dynamic range of DES observations. The DES pipeline incorporates several substantive advances over standard detrending techniques, including principal-components-based sky and fringe subtraction; correction of the "brighter-fatter" nonlinearity; use of internal consistency in on-sky observations to disentangle the influences of quantum efficiency, pixel-size variations, and scattered light in the dome flats; and pixel-by-pixel characterization of instrument spectral response, through combination of internal-consistency constraints with auxiliary calibration data. This article provides conceptual derivations of the detrending/calibration steps, and the procedures for obtaining the necessary calibration data. Other publications will describe the implementation of these concepts for the DES operational pipeline, the detailed methods, and the validation that the techniques can bring DECam photometry and astrometry within ≈2 mmag and ≈3 mas, respectively, of fundamental atmospheric and statistical limits. In conclusion, the DES techniques should be broadly applicable to wide-field imagers.
Target detection using the background model from the topological anomaly detection algorithm
NASA Astrophysics Data System (ADS)
Dorado Munoz, Leidy P.; Messinger, David W.; Ziemann, Amanda K.
2013-05-01
The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is an algorithm based on graph theory that constructs a topological model of the background in a scene, and computes an anomalousness ranking for all of the pixels in the image with respect to the background in order to identify pixels with uncommon or strange spectral signatures. The pixels that are modeled as background are clustered into groups or connected components, which could be representative of spectral signatures of materials present in the background. Therefore, the idea of using the background components given by TAD in target detection is explored in this paper. These connected components are characterized following three different approaches, in which the mean signature and endmembers for each component are calculated and used as background basis vectors in Orthogonal Subspace Projection (OSP) and the Adaptive Subspace Detector (ASD). Likewise, the covariance matrix of those connected components is estimated and used in the detectors Constrained Energy Minimization (CEM) and the Adaptive Coherence Estimator (ACE). The performance of these approaches and the different detectors is compared with a global approach, where the background characterization is derived directly from the image. Experiments and results using the self-test data set provided as part of the RIT blind test target detection project are shown.
Lava flow risk maps at Mount Cameroon volcano
NASA Astrophysics Data System (ADS)
Favalli, M.; Fornaciai, A.; Papale, P.; Tarquini, S.
2009-04-01
Mount Cameroon, in southwest Cameroon, is one of the most active volcanoes in Africa. Rising 4095 m asl, it has erupted nine times since the beginning of the past century, most recently in 1999 and 2000. Mount Cameroon's documented eruptions consist of moderate explosive and effusive eruptions from both summit and flank vents. A 1922 SW-flank eruption produced a lava flow that reached the Atlantic coast near the village of Biboundi, and a lava flow from a 1999 south-flank eruption stopped only 200 m from the sea, threatening the villages of Bakingili and Dibunscha. More than 450,000 people live or work around the volcano, making the risk from lava flow invasion a great concern. In this work we propose both conventional hazard and risk maps and novel quantitative risk maps which relate vent locations to the expected total damage on existing buildings. These maps are based on lava flow simulations starting from 70,000 different vent locations, a probability distribution of vent opening, a law for the maximum length of lava flows, and a database of buildings. The simulations were run over the SRTM Digital Elevation Model (DEM) using DOWNFLOW, a fast DEM-driven model that is able to compute detailed invasion areas of lava flows from each vent. We present three different types of risk maps (90-m pixel) for buildings around Mount Cameroon volcano: (1) a conventional risk map that assigns a probability of devastation by lava flows to each pixel representing buildings; (2) a reversed risk map where each pixel expresses the total damage expected as a consequence of vent opening in that pixel (the damage is expressed as the total surface of urbanized areas invaded); (3) maps of the lava catchments of the main towns around the volcano, in which, within every catchment, the pixels are classified according to the expected impact they might produce on the relative town in the case of a vent opening in that pixel. Maps of type (1) and (3) are useful for long term planning. Maps of type (2) and (3) are useful at the onset of a new eruption, when a vent forms. The combined use of these maps provides an efficient tool for lava flow risk assessment at Mount Cameroon.
Deformation Measurement In The Hayward Fault Zone Using Partially Correlated Persistent Scatterers
NASA Astrophysics Data System (ADS)
Lien, J.; Zebker, H. A.
2013-12-01
Interferometric synthetic aperture radar (InSAR) is an effective tool for measuring temporal changes in the Earth's surface. By combining SAR phase data collected at varying times and orbit geometries, with InSAR we can produce high accuracy, wide coverage images of crustal deformation fields. Changes in the radar imaging geometry, scatterer positions, or scattering behavior between radar passes cause the measured radar return to differ, leading to a decorrelation phase term that obscures the deformation signal and prevents the use of large baseline data. Here we present a new physically-based method of modeling decorrelation from the subset of pixels with the highest intrinsic signal-to-noise ratio, the so-called persistent scatterers (PS). This more complete formulation, which includes both phase and amplitude scintillations, better describes the scattering behavior of partially correlated PS pixels and leads to a more reliable selection algorithm. The new method identifies PS pixels using maximum likelihood signal-to-clutter ratio (SCR) estimation based on the joint interferometric stack phase-amplitude distribution. Our PS selection method is unique in that it considers both phase and amplitude; accounts for correlation between all possible pairs of interferometric observations; and models the effect of spatial and temporal baselines on the stack. We use the resulting maximum likelihood SCR estimate as a criterion for PS selection. We implement the partially correlated persistent scatterer technique to analyze a stack of C-band European Remote Sensing (ERS-1/2) interferometric radar data imaging the Hayward Fault Zone from 1995 to 2000. We show that our technique achieves a better trade-off between PS pixel selection accuracy and network density compared to other PS identification methods, particularly in areas of natural terrain. We then present deformation measurements obtained by the selected PS network. Our results demonstrate that the partially correlated persistent scatterer technique can attain accurate deformation measurements even in areas that suffer decorrelation due to natural terrain. The accuracy of phase unwrapping and subsequent deformation estimation on the spatially sparse PS network depends on both pixel selection accuracy and the density of the network. We find that many additional pixels can be added to the PS list if we are able to correctly identify and add those in which the scattering mechanism exhibits partial, rather than complete, correlation across all radar scenes.
Rizzo, Gaia; Tonietto, Matteo; Castellaro, Marco; Raffeiner, Bernd; Coran, Alessandro; Fiocco, Ugo; Stramare, Roberto; Grisan, Enrico
2017-04-01
Contrast Enhanced Ultrasound (CEUS) is a sensitive imaging technique to assess tissue vascularity and it can be particularly useful in early detection and grading of arthritis. In a recent study we have shown that a Gamma-variate can accurately quantify synovial perfusion and is flexible enough to describe many heterogeneous patterns. However, in some cases the heterogeneity of the kinetics can be such that even the Gamma model does not properly describe the curve, with a high number of outliers. In this work we apply to CEUS data the single compartment recirculation model (SCR), which explicitly takes into account the trapping of the microbubble contrast agent by adding to the single Gamma-variate model its integral. The SCR model, originally proposed for dynamic-susceptibility magnetic resonance imaging, is solved here at the pixel level within a Bayesian framework using Variational Bayes (VB). We also include the automatic relevance determination (ARD) algorithm to automatically infer the model complexity (SCR vs. Gamma model) from the data. We demonstrate that the inclusion of trapping best describes the CEUS patterns in 50% of the pixels, with the other 50% best fitted by a single Gamma. Such results highlight the necessity of using ARD to automatically exclude the irreversible component where it is not supported by the data. VB with ARD returns precise estimates in the majority of the kinetics (88% of the total percentage of pixels) in a limited computational time (on average, 3.6 min per subject). Moreover, the impact of the additional trapping component has been evaluated for the differentiation of rheumatoid and non-rheumatoid patients, by means of a support vector machine classifier with backward feature selection. The results show that the trapping parameter is always present in the selected feature set, and improves the classification.
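For concreteness, one common parameterization of the Gamma-variate bolus model and of the single compartment recirculation (SCR) extension used above, in which the trapped (irreversible) component is proportional to the running integral of the Gamma-variate. The symbols A, alpha, beta, t0 and the trapping fraction K are generic placeholders, not the paper's notation.

```latex
\[
  C_\gamma(t) = A\,(t - t_0)^{\alpha}\, e^{-(t - t_0)/\beta}, \qquad t \ge t_0,
\]
\[
  C_{\mathrm{SCR}}(t) = C_\gamma(t) \;+\; K \int_{t_0}^{t} C_\gamma(\tau)\, \mathrm{d}\tau .
\]
```

Fitting C_SCR pixel by pixel and letting ARD decide whether the trapping term K is supported by the data corresponds to the model-selection step described in the abstract.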
Model-based video segmentation for vision-augmented interactive games
NASA Astrophysics Data System (ADS)
Liu, Lurng-Kuo
2000-04-01
This paper presents an architecture and algorithms for model based video object segmentation and its applications to vision augmented interactive game. We are especially interested in real time low cost vision based applications that can be implemented in software in a PC. We use different models for background and a player object. The object segmentation algorithm is performed in two different levels: pixel level and object level. At pixel level, the segmentation algorithm is formulated as a maximizing a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object level segmentation is used to improve segmentation quality by utilizing the information about the spatial and temporal extent of the object. The concept of an active region, which is defined based on motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling. It also reduces the overall computation complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly even without introductory background frames. Furthermore, we apply different rate of self-tuning on the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in a real time manner without being constrained by helmets, gloves, special sensing devices, or background environment. The potential applications of the proposed algorithms including human computer gesture interface and object based video coding such as MPEG-4 video coding.
Hot pixel generation in active pixel sensors: dosimetric and micro-dosimetric response
NASA Technical Reports Server (NTRS)
Scheick, Leif; Novak, Frank
2003-01-01
The dosimetric response of an active pixel sensor is analyzed. heavy ions are seen to damage the pixel in much the same way as gamma radiation. The probability of a hot pixel is seen to exhibit behavior that is not typical with other microdose effects.
Low complexity pixel-based halftone detection
NASA Astrophysics Data System (ADS)
Ok, Jiheon; Han, Seong Wook; Jarno, Mielikainen; Lee, Chulhee
2011-10-01
With the rapid advances of the internet and other multimedia technologies, the digital document market has been growing steadily. Since most digital images use halftone technologies, quality degradation occurs when one tries to scan and reprint them. Therefore, it is necessary to extract the halftone areas to produce high quality printing. In this paper, we propose a low complexity pixel-based halftone detection algorithm. For each pixel, we considered a surrounding block. If the block contained any flat background regions, text, thin lines, or continuous or non-homogeneous regions, the pixel was classified as a non-halftone pixel. After excluding those non-halftone pixels, the remaining pixels were considered to be halftone pixels. Finally, documents were classified as pictures or photo documents by calculating the halftone pixel ratio. The proposed algorithm proved to be memory-efficient and required low computation costs. The proposed algorithm was easily implemented using GPU.
Spatial light modulator array with heat minimization and image enhancement features
Jain, Kanti [Briarcliff Manor, NY; Sweatt, William C [Albuquerque, NM; Zemel, Marc [New Rochelle, NY
2007-01-30
An enhanced spatial light modulator (ESLM) array, a microelectronics patterning system and a projection display system using such an ESLM for heat-minimization and resolution enhancement during imaging, and the method for fabricating such an ESLM array. The ESLM array includes, in each individual pixel element, a small pixel mirror (reflective region) and a much larger pixel surround. Each pixel surround includes diffraction-grating regions and resolution-enhancement regions. During imaging, a selected pixel mirror reflects a selected-pixel beamlet into the capture angle of a projection lens, while the diffraction grating of the pixel surround redirects heat-producing unused radiation away from the projection lens. The resolution-enhancement regions of selected pixels provide phase shifts that increase effective modulation-transfer function in imaging. All of the non-selected pixel surrounds redirect all radiation energy away from the projection lens. All elements of the ESLM are fabricated by deposition, patterning, etching and other microelectronic process technologies.
NASA Astrophysics Data System (ADS)
Zou, Z.; Xiao, X.
2015-12-01
With a high temporal resolution and a large covering area, MODIS data are particularly useful in assessing vegetation destruction and recovery of a wide range of areas. In this study, MOD13Q1 data of the growing season (Mar. to Nov.) are used to calculate the Maximum NDVI (NDVImax) of each year. This study calculates each pixel's mean and standard deviation of the NDVImaxs in the 8 years before the earthquake. If the pixel's NDVImax of 2008 is two standard deviation smaller than the mean NDVImax, this pixel is detected as a vegetation destructed pixel. For each vegetation destructed pixel, its similar pixels of the same vegetation type are selected within the latitude difference of 0.5 degrees, altitude difference of 100 meters and slope difference of 3 degrees. Then the NDVImax difference of each vegetation destructed pixel and its similar pixels are calculated. The 5 similar pixels with the smallest NDVImax difference in the 8 years before the earthquake are selected as reference pixels. The mean NDVImaxs of these reference pixels after the earthquake are calculated and serve as the criterion to assess the vegetation recovery process.
Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, P.; Zbijewski, W.; Gang, G. J.
2011-10-15
Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequencymore » content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M {approx} 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of {approx}65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end maximizing NEQ (90 kVp). The analysis quantified fairly intuitive results--e.g., {approx}0.1-0.2 mm pixel size (and a sharp reconstruction filter) optimal for high-frequency tasks (bone detail) compared to {approx}0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific protocol for 1 x 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks along with 2 x 2 binning reconstruction with a smooth filter for low-frequency tasks. The analysis guided selection of specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.« less
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.
Guo, Shengwen; Fei, Baowei
2009-03-27
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A minimal path searching approach for active shape model (ASM)-based segmentation of the lung
NASA Astrophysics Data System (ADS)
Guo, Shengwen; Fei, Baowei
2009-02-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 +/- 0.33 pixels, while the error is 1.99 +/- 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung
Guo, Shengwen; Fei, Baowei
2013-01-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs. PMID:24386531
Relating Satellite Gravimetry to Global Snow Water Equivalent
NASA Astrophysics Data System (ADS)
Baumann, S. C.
2017-12-01
The gravimetric satellites GRACE measure changes of Earth's mass. The data mainly show changes in total water storage (TWS) but cannot distinguish between different sources. Hence, other data are necessary to extract the different compartments. Due to the spatial resolution of 200,000 km² and an accuracy of 2.5 cm w.e., other global products are compared with GRACE. In this study, the hydrological model WGHM and the land surface model GLDAS were used. All data were pre-processed in the same way as the GRACE data. Data were converted into monthly 1° grid values. Time is from 01/2003 to 12/2013 with a total of 131 months. The aim of the study was to extract SWE from GRACE as snow is an important factor for permafrost development. The main assumption is that changes in TWS can be linked to changes in SWE if SWE is the dominant compartment of TWS or if SWE changes proportionally with TWS. The study area were two river catchments in North America (Mackenzie, Yukon) and three in Russia (Lena, Ob, Yenisei). TWS as SWE from both models were correlated with GRACE as with each other (1) pixel and (2) catchment based. The (1) pixel based correlation used absolute values and the (2) catchment based correlation the sum of monthly anomalies from the total mean for each catchment. Initial results of the (1) pixel based correlation show a very high correlation of the WGHM vs. GLDAS data. The correlation of GRACE vs. WGHM is higher compared to the correlation between GRACE vs. GLDAS. The (2) correlation of the catchment data was higher than 0.74 in all river catchments for the WGHM vs. GLDAS data (TWS and SWE). As for the pixel based correlation, values for the correlation of GRACE vs. WGHM are higher compared to GRACE vs. GLDAS in all river catchments. Summed monthly WGHM anomalies of the catchments showed a uniform periodically annual pattern. GRACE data showed the same pattern between 2006 and 2011 but had more peaks in the beginning and the end of the study period. Therefore, a second correlation was performed for this uniform shorter period. The (1) pixel based correlation of GRACE vs. the two models had higher values in the river catchments in North America and lower values in Russia. This pattern was more pronounced in the correlation of GRACE vs. WGHM. The (2) catchment based correlation of the shorter period improved for nearly all correlation pairs and river catchments.
NASA Astrophysics Data System (ADS)
Gacal, G. F. B.; Lagrosas, N.
2016-12-01
Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, computer software to visualize signals, and moon images. A normalized Gaussian function is often used to represent probability density functions of normal distribution. It is described by its mean m and standard deviation s. The smaller standard deviation implies less spread from the mean. For the 2-dimensional Gaussian function, the mean can be described by coordinates (x0, y0), while the standard deviations can be described by sx and sy. In modelling moon signals obtained from sky-cameras, the position of the mean (x0, y0) is solved by locating the coordinates of the maximum signal of the moon. The two standard deviations are the mean square weighted deviation based from the sum of total pixel values of all rows/columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel value distribution of moon signals as captured by a sky-camera. An example of this is illustrated in Fig 1b taken around 22:20 (local time) of January 31, 2015. The local time is 8 hours ahead of coordinated universal time (UTC). This image is produced by a commercial camera (Canon Powershot A2300) with 1s exposure time, f-stop of f/2.8, and 5mm focal length. One has to chose a camera with high sensitivity when operated at nighttime to effectively detect these signals. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values. The grayscale values are then converted to a double data type matrix. The last conversion process is implemented for the purpose of having the same scales for both Gaussian model and pixel distribution of raw signals. Subtraction of the Gaussian model from the raw data produces a moonless image as shown in Fig. 1c. This moonless image can be used for quantifying cloud cover as captured by ordinary cameras (Gacal et al, 2016). Cloud cover can be defined as the ratio of number of pixels whose values exceeds 0.07 and the total number of pixels. In this particular image, cloud cover value is 0.67.
TEMPORAL CORRELATION OF CLASSIFICATIONS IN REMOTE SENSING
A bivariate binary model is developed for estimating the change in land cover from satellite images obtained at two different times. The binary classifications of a pixel at the two times are modeled as potentially correlated random variables, conditional on the true states of th...
Linking flood peak, flood volume and inundation extent: a DEM-based approach
NASA Astrophysics Data System (ADS)
Rebolho, Cédric; Furusho-Percot, Carina; Blaquière, Simon; Brettschneider, Marco; Andréassian, Vazken
2017-04-01
Traditionally, flood inundation maps are computed based on the Shallow Water Equations (SWE) in one or two dimensions, with various simplifications that have proved to give good results. However, the complexity of the SWEs often requires a numerical resolution which can need long computing time, as well as detailed cross section data: this often results in restricting these models to rather small areas abundant with high quality data. This, along with the necessity for fast inundation mapping, are the reason why rapid inundation models are being designed, working for (almost) any river with a minimum amount of data and, above all, easily available data. Our model tries to follow this path by using a 100m DEM over France from which are extracted a drainage network and the associated drainage areas. It is based on two pre-existing methods: (1) SHYREG (Arnaud et al.,2013), a regionalized approach used to calculate the 2-year and 10-year flood quantiles (used as approximated bankfull flow and maximum discharge, respectively) for each river pixel of the DEM (below a 10 000 km2 drainage area) and (2) SOCOSE (Mailhol,1980), which gives, amongst other things, an empirical formula of a characteristic flood duration (for each pixel) based on catchment area, average precipitation and temperature. An overflow volume for each river pixel is extracted from a triangular shaped synthetic hydrograph designed with SHYREG quantiles and SOCOSE flood duration. The volume is then spread from downstream to upstream one river pixel at a time. When the entire hydrographic network is processed, the model stops and generates a map of potential inundation area associated with the 10-year flood quantile. Our model can also be calibrated using past-events inundation maps by adjusting two parameters, one which modifies the overflow duration, and the other, equivalent to a minimum drainage area for river pixels to be flooded. Thus, in calibration on a sample of 42 basins, the first draft of the model showed a 0.51 median Fit (intersection of simulated and observed areas divided by the union of the two, Bates and De Roo, 2000) and a 0.74 maximum. Obviously, this approach is quite rough, and would require testing on events of homogeneous return periods (which is not the case for now). The next steps in the test and the development of our method include the use of the AIGA distributed model to simulate past-events hydrographs, the search for a new way to automatically approach bankfull flow and the integration of the results in our model to build dynamic maps of the flood. References Arnaud, P., Eglin, Y., Janet, B., and Payrastre, O. (2013). Notice utilisateur : bases de données SHYREG-Débit. Méthode - Performances - Limites. Bates, P. D. and De Roo, A. P. J. (2000). A simple raster-based model for flood inundation simulation. Journal of Hydrology, 236(1-2):54-77. Mailhol, J. (1980). Pour une approche plus réaliste du temps caractéristique de crues des bassins versants. In Actes du Colloque d'Oxford, volume 129, pages 229-237, Oxford. IAHS-AISH.
From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild.
Asthana, Akshay; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Cheng, Shiyang; Pantic, Maja
2015-06-01
We propose a face alignment framework that relies on the texture model generated by the responses of discriminatively trained part-based filters. Unlike standard texture models built from pixel intensities or responses generated by generic filters (e.g. Gabor), our framework has two important advantages. First, by virtue of discriminative training, invariance to external variations (like identity, pose, illumination and expression) is achieved. Second, we show that the responses generated by discriminatively trained filters (or patch-experts) are sparse and can be modeled using a very small number of parameters. As a result, the optimization methods based on the proposed texture model can better cope with unseen variations. We illustrate this point by formulating both part-based and holistic approaches for generic face alignment and show that our framework outperforms the state-of-the-art on multiple "wild" databases. The code and dataset annotations are available for research purposes from http://ibug.doc.ic.ac.uk/resources.
Vertex shading of the three-dimensional model based on ray-tracing algorithm
NASA Astrophysics Data System (ADS)
Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan
2016-10-01
Ray Tracing Algorithm is one of the research hotspots in Photorealistic Graphics. It is an important light and shadow technology in many industries with the three-dimensional (3D) structure, such as aerospace, game, video and so on. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented to color and render vertices of the 3D model directly. Rendering results are related to the degree of subdivision of the 3D model. A good light and shade effect is achieved by realizing the quad-tree data structure to get adaptive subdivision of a triangle according to the brightness difference of its vertices. The uniform grid algorithm is adopted to improve the rendering efficiency. Besides, the rendering time is independent of the screen resolution. In theory, as long as the subdivision of a model is adequate, cool effects as the same as the way of pixel shading will be obtained. Our practical application can be compromised between the efficiency and the effectiveness.
Quick probabilistic binary image matching: changing the rules of the game
NASA Astrophysics Data System (ADS)
Mustafa, Adnan A. Y.
2016-09-01
A Probabilistic Matching Model for Binary Images (PMMBI) is presented that predicts the probability of matching binary images with any level of similarity. The model relates the number of mappings, the amount of similarity between the images and the detection confidence. We show the advantage of using a probabilistic approach to matching in similarity space as opposed to a linear search in size space. With PMMBI a complete model is available to predict the quick detection of dissimilar binary images. Furthermore, the similarity between the images can be measured to a good degree if the images are highly similar. PMMBI shows that only a few pixels need to be compared to detect dissimilarity between images, as low as two pixels in some cases. PMMBI is image size invariant; images of any size can be matched at the same quick speed. Near-duplicate images can also be detected without much difficulty. We present tests on real images that show the prediction accuracy of the model.
Lehotsky, Á; Szilágyi, L; Bánsághi, S; Szerémy, P; Wéber, G; Haidegger, T
2017-09-01
Ultraviolet spectrum markers are widely used for hand hygiene quality assessment, although their microbiological validation has not been established. A microbiology-based assessment of the procedure was conducted. Twenty-five artificial hand models underwent initial full contamination, then disinfection with UV-dyed hand-rub solution, digital imaging under UV-light, microbiological sampling and cultivation, and digital imaging of the cultivated flora were performed. Paired images of each hand model were registered by a software tool, then the UV-marked regions were compared with the pathogen-free sites pixel by pixel. Statistical evaluation revealed that the method indicates correctly disinfected areas with 95.05% sensitivity and 98.01% specificity. Copyright © 2017 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Forward model with space-variant of source size for reconstruction on X-ray radiographic image
NASA Astrophysics Data System (ADS)
Liu, Jin; Liu, Jun; Jing, Yue-feng; Xiao, Bo; Wei, Cai-hua; Guan, Yong-hong; Zhang, Xuan
2018-03-01
The Forward Imaging Technique is a method to solve the inverse problem of density reconstruction in radiographic imaging. In this paper, we introduce the forward projection equation (IFP model) for the radiographic system with areal source blur and detector blur. Our forward projection equation, based on X-ray tracing, is combined with the Constrained Conjugate Gradient method to form a new method for density reconstruction. We demonstrate the effectiveness of the new technique by reconstructing density distributions from simulated and experimental images. We show that for radiographic systems with source sizes larger than the pixel size, the effect of blur on the density reconstruction is reduced through our method and can be controlled within one or two pixels. The method is also suitable for reconstruction of non-homogeneousobjects.
Method and apparatus of high dynamic range image sensor with individual pixel reset
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Pain, Bedabrata (Inventor); Fossum, Eric R. (Inventor)
2001-01-01
A wide dynamic range image sensor provides individual pixel reset to vary the integration time of individual pixels. The integration time of each pixel is controlled by column and row reset control signals which activate a logical reset transistor only when both signals coincide for a given pixel.
NASA Astrophysics Data System (ADS)
Wilson, J.; Wetmore, P. H.; Malservisi, R.; Ferwerda, B. P.; Teran, O.
2012-12-01
We use recently collected slip vector and total offset data from the Agua Blanca fault (ABF) to constrain a pixel translation digital elevation model (DEM) to reconstruct the slip history of this fault. This model was constructed using a Perl script that reads a DEM file (Easting, Northing, Elevation) and a configuration file with coordinates that define the boundary of each fault segment. A pixel translation vector is defined as a magnitude of lateral offset in an azimuthal direction. The program translates pixels north of the fault and prints their pre-faulting position to a new DEM file that can be gridded and displayed. This analysis, where multiple DEMs are created with different translation vectors, allows us to identify areas of transtension or transpression while seeing the topographic expression in these areas. The benefit of this technique, in contrast to a simple block model, is that the DEM gives us a valuable graphic which can be used to pose new research questions. We have found that many topographic features correlate across the fault, i.e. valleys and ridges, which likely have implications for the age of the ABF, long term landscape evolution rates, and potentially provide conformation for total slip assessments The ABF of northern Baja California, Mexico is an active, dextral strike slip fault that transfers Pacific-North American plate boundary strain out of the Gulf of California and around the "Big Bend" of the San Andreas Fault. Total displacement on the ABF in the central and eastern parts of the fault is 10 +/- 2 km based on offset Early-Cretaceous features such as terrane boundaries and intrusive bodies (plutons and dike swarms). Where the fault bifurcates to the west, the northern strand (northern Agua Blanca fault or NABF) is constrained to 7 +/- 1 km. We have not yet identified piercing points on the southern strand, the Santo Tomas fault (STF), but displacement is inferred to be ~4 km assuming that the sum of slip on the NABF and STF is approximately equal to that to the east. The ABF has varying kinematics along strike due to changes in trend of the fault with respect to the nearly east-trending displacement vector of the Ensenada Block to the north of the fault relative to a stable Baja Microplate to the south. These kinematics include nearly pure strike slip in the central portion of the ABF where the fault trends nearly E-W, and minor components of normal dip-slip motion on the NABF and eastern sections of the fault where the trends become more northerly. A pixel translation vector parallel to the trend of the ABF in the central segment (290 deg, 10.5 km) produces kinematics consistent with those described above. The block between the NABF and STF has a pixel translation vector parallel the STF (291 deg, 3.5 km). We find these vectors are consistent with the kinematic variability of the fault system and realign several major drainages and ridges across the fault. This suggests these features formed prior to faulting, and they yield preferred values of offset: 10.5 km on the ABF, 7 km on the NABF and 3.5 km on the STF. This model is consistent with the kinematic model proposed by Hamilton (1971) in which the ABF is a transform fault, linking extensional regions of Valle San Felipe and the Continental Borderlands.
NASA Technical Reports Server (NTRS)
Geogdzhayev, Igor V.; Cairns, Brian; Mishchenko, Michael I.; Tsigaridis, Kostas; van Noije, Twan
2014-01-01
To evaluate the effect of sampling frequency on the global monthly mean aerosol optical thickness (AOT), we use 6 years of geographical coordinates of Moderate Resolution Imaging Spectroradiometer (MODIS) L2 aerosol data, daily global aerosol fields generated by the Goddard Institute for Space Studies General Circulation Model and the chemical transport models Global Ozone Chemistry Aerosol Radiation and Transport, Spectral Radiationtransport Model for Aerosol Species and Transport Model 5, at a spatial resolution between 1.125 deg × 1.125 deg and 2 deg × 3?: the analysis is restricted to 60 deg S-60 deg N geographical latitude. We found that, in general, the MODIS coverage causes an underestimate of the global mean AOT over the ocean. The long-term mean absolute monthly difference between all and dark target (DT) pixels was 0.01-0.02 over the ocean and 0.03-0.09 over the land, depending on the model dataset. Negative DT biases peak during boreal summers, reaching 0.07-0.12 (30-45% of the global long-term mean AOT). Addition of the Deep Blue pixels tempers the seasonal dependence of the DT biases and reduces the mean AOT difference over land by 0.01-0.02. These results provide a quantitative measure of the effect the pixel exclusion due to cloud contamination, ocean sun-glint and land type has on the MODIS estimates of the global monthly mean AOT. We also simulate global monthly mean AOT estimates from measurements provided by pixel-wide along-track instruments such as the Aerosol Polarimetry Sensor and the Cloud-Aerosol LiDAR with Orthogonal Polarization. We estimate the probable range of the global AOT standard error for an along-track sensor to be 0.0005-0.0015 (ocean) and 0.0029-0.01 (land) or 0.5-1.2% and 1.1-4% of the corresponding global means. These estimates represent errors due to sampling only and do not include potential retrieval errors. They are smaller than or comparable to the published estimate of 0.01 as being a climatologically significant change in the global mean AOT, suggesting that sampling density is unlikely to limit the use of such instruments for climate applications at least on a global, monthly scale.
Time-of-flight camera via a single-pixel correlation image sensor
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua
2018-04-01
A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth map acquisition in presence of ambient light. Digital micro-mirror device and time-modulated IR-laser provide spatial and temporal illumination on the unknown object. Compressed sensing and ‘four bucket principle’ method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. Second-order correlation transform is also introduced to reduce the noise from the detector itself and direct ambient light. Computer simulations are presented to validate the computational models and improvement of reconstructions.
Combined DEM Extration Method from StereoSAR and InSAR
NASA Astrophysics Data System (ADS)
Zhao, Z.; Zhang, J. X.; Duan, M. Y.; Huang, G. M.; Yang, S. C.
2015-06-01
A pair of SAR images acquired from different positions can be used to generate digital elevation model (DEM). Two techniques exploiting this characteristic have been introduced: stereo SAR and interferometric SAR. They permit to recover the third dimension (topography) and, at the same time, to identify the absolute position (geolocation) of pixels included in the imaged area, thus allowing the generation of DEMs. In this paper, StereoSAR and InSAR combined adjustment model are constructed, and unify DEM extraction from InSAR and StereoSAR into the same coordinate system, and then improve three dimensional positioning accuracy of the target. We assume that there are four images 1, 2, 3 and 4. One pair of SAR images 1,2 meet the required conditions for InSAR technology, while the other pair of SAR images 3,4 can form stereo image pairs. The phase model is based on InSAR rigorous imaging geometric model. The master image 1 and the slave image 2 will be used in InSAR processing, but the slave image 2 is only used in the course of establishment, and the pixels of the slave image 2 are relevant to the corresponding pixels of the master image 1 through image coregistration coefficient, and it calculates the corresponding phase. It doesn't require the slave image in the construction of the phase model. In Range-Doppler (RD) model, the range equation and Doppler equation are a function of target geolocation, while in the phase equation, the phase is also a function of target geolocation. We exploit combined adjustment model to deviation of target geolocation, thus the problem of target solution is changed to solve three unkonwns through seven equations. The model was tested for DEM extraction under spaceborne InSAR and StereoSAR data and compared with InSAR and StereoSAR methods respectively. The results showed that the model delivered a better performance on experimental imagery and can be used for DEM extraction applications.
How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 mega-pixels and produced high quality images (with significant image processing). In 2005 significant improvement in image quality was apparent and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sells. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” where the driving feature of the cameras was the pixel count where even moderate cost, ˜120, DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufactures starting to reverse their course and producing DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.
The effect of split pixel HDR image sensor technology on MTF measurements
NASA Astrophysics Data System (ADS)
Deegan, Brian M.
2014-03-01
Split-pixel HDR sensor technology is particularly advantageous in automotive applications, because the images are captured simultaneously rather than sequentially, thereby reducing motion blur. However, split pixel technology introduces artifacts in MTF measurement. To achieve a HDR image, raw images are captured from both large and small sub-pixels, and combined to make the HDR output. In some cases, a large sub-pixel is used for long exposure captures, and a small sub-pixel for short exposures, to extend the dynamic range. The relative size of the photosensitive area of the pixel (fill factor) plays a very significant role in the output MTF measurement. Given an identical scene, the MTF will be significantly different, depending on whether you use the large or small sub-pixels i.e. a smaller fill factor (e.g. in the short exposure sub-pixel) will result in higher MTF scores, but significantly greater aliasing. Simulations of split-pixel sensors revealed that, when raw images from both sub-pixels are combined, there is a significant difference in rising edge (i.e. black-to-white transition) and falling edge (white-to-black) reproduction. Experimental results showed a difference of ~50% in measured MTF50 between the falling and rising edges of a slanted edge test chart.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estmates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
NASA Astrophysics Data System (ADS)
Ermida, Sofia; DaCamara, Carlos C.; Trigo, Isabel F.; Pires, Ana C.; Ghent, Darren
2017-04-01
Land Surface Temperature (LST) is a key climatological variable and a diagnostic parameter of land surface conditions. Remote sensing constitutes the most effective method to observe LST over large areas and on a regular basis. Although LST estimation from remote sensing instruments operating in the Infrared (IR) is widely used and has been performed for nearly 3 decades, there is still a list of open issues. One of these is the LST dependence on viewing and illumination geometry. This effect introduces significant discrepancies among LST estimations from different sensors, overlapping in space and time, that are not related to uncertainties in the methodologies or input data used. Furthermore, these directional effects deviate LST products from an ideally defined LST, which should represent to the ensemble of directional radiometric temperature of all surface elements within the FOV. Angular effects on LST are here conveniently estimated by means of a kernel model of the surface thermal emission, which describes the angular dependence of LST as a function of viewing and illumination geometry. The model is calibrated using LST data as provided by a wide range of sensors to optimize spatial coverage, namely: 1) a LEO sensor - the Moderate Resolution Imaging Spectroradiometer (MODIS) on-board NASA's TERRA and AQUA; and 2) 3 GEO sensors - the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on-board EUMETSAT's Meteosat Second Generation (MSG), the Japanese Meteorological Imager (JAMI) on-board the Japanese Meteorological Association (JMA) Multifunction Transport SATellite (MTSAT-2), and NASA's Geostationary Operational Environmental Satellites (GOES). As shown in our previous feasibility studies the sampling of illumination and view angles has a high impact on the obtained model parameters. This impact may be mitigated when the sampling size is increased by aggregating pixels with similar surface conditions. Here we propose a methodology where land surface is stratified by means of a cluster analysis using information on land cover type, fraction of vegetation cover and topography. The kernel model is then adjusted to LST data corresponding to each cluster. It is shown that the quality of the cluster based kernel model is very close to the pixel based one. Furthermore, the reduced number of parameters (limited to the number of identified clusters, instead of a pixel-by-pixel model calibration) allows improving the kernel model trough the incorporation of a seasonal component. The application of the here discussed procedure towards the harmonization of LST products from multi-sensors is on the framework of the ESA DUE GlobTemperature project.
Image Processing for Binarization Enhancement via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor)
2009-01-01
A technique for enhancing a gray-scale image to improve conversions of the image to binary employs fuzzy reasoning. In the technique, pixels in the image are analyzed by comparing the pixel's gray scale value, which is indicative of its relative brightness, to the values of pixels immediately surrounding the selected pixel. The degree to which each pixel in the image differs in value from the values of surrounding pixels is employed as the variable in a fuzzy reasoning-based analysis that determines an appropriate amount by which the selected pixel's value should be adjusted to reduce vagueness and ambiguity in the image and improve retention of information during binarization of the enhanced gray-scale image.
Dead pixel replacement in LWIR microgrid polarimeters.
Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P
2007-06-11
LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements.We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.
NASA Astrophysics Data System (ADS)
Catanzarite, Joseph; Burke, Christopher J.; Li, Jie; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division
2016-06-01
The Kepler Mission is developing an Analytic Completeness Model (ACM) to estimate detection completeness contours as a function of exoplanet radius and period for each target star. Accurate completeness contours are necessary for robust estimation of exoplanet occurrence rates.The main components of the ACM for a target star are: detection efficiency as a function of SNR, the window function (WF) and the one-sigma depth function (OSDF). (Ref. Burke et al. 2015). The WF captures the falloff in transit detection probability at long periods that is determined by the observation window (the duration over which the target star has been observed). The OSDF is the transit depth (in parts per million) that yields SNR of unity for the full transit train. It is a function of period, and accounts for the time-varying properties of the noise and for missing or deweighted data.We are performing flux-level transit injection (FLTI) experiments on selected Kepler target stars with the goal of refining and validating the ACM. “Flux-level” injection machinery inserts exoplanet transit signatures directly into the flux time series, as opposed to “pixel-level” injection, which inserts transit signatures into the individual pixels using the pixel response function. See Jie Li's poster: ID #2493668, "Flux-level transit injection experiments with the NASA Pleiades Supercomputer" for details, including performance statistics.Since FLTI is affordable for only a small subset of the Kepler targets, the ACM is designed to apply to most Kepler target stars. We validate this model using “deep” FLTI experiments, with ~500,000 injection realizations on each of a small number of targets and “shallow” FLTI experiments with ~2000 injection realizations on each of many targets. From the results of these experiments, we identify anomalous targets, model their behavior and refine the ACM accordingly.In this presentation, we discuss progress in validating and refining the ACM, and we compare our detection efficiency curves with those derived from the associated pixel-level transit injection experiments.Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
NASA Astrophysics Data System (ADS)
Shaw-Stewart, James; Mattle, Thomas; Lippert, Thomas; Nagel, Matthias; Nüesch, Frank; Wokaun, Alexander
2013-08-01
Laser-induced forward transfer (LIFT) has already been used to fabricate various types of organic light-emitting diodes (OLEDs), and the process itself has been optimised and refined considerably since OLED pixels were first demonstrated. In particular, a dynamic release layer (DRL) of triazene polymer has been used, the environmental pressure has been reduced down to a medium vacuum, and the donor receiver gap has been controlled with the use of spacers. Insight into the LIFT process's effect upon OLED pixel performance is presented here, obtained through optimisation of three-colour polyfluorene-based OLEDs. A marked dependence of the pixel morphology quality on the cathode metal is observed, and the laser transfer fluence dependence is also analysed. The pixel device performances are compared to conventionally fabricated devices, and cathode effects have been looked at in detail. The silver cathode pixels show more heterogeneous pixel morphologies, and a correspondingly poorer efficiency characteristics. The aluminium cathode pixels have greater green electroluminescent emission than both the silver cathode pixels and the conventionally fabricated aluminium devices, and the green emission has a fluence dependence for silver cathode pixels.
NASA Astrophysics Data System (ADS)
Kang, Dong-Uk; Cho, Minsik; Lee, Dae Hee; Yoo, Hyunjun; Kim, Myung Soo; Bae, Jun Hyung; Kim, Hyoungtaek; Kim, Jongyul; Kim, Hyunduk; Cho, Gyuseong
2012-05-01
Recently, large-size 3-transistors (3-Tr) active pixel complementary metal-oxide silicon (CMOS) image sensors have been being used for medium-size digital X-ray radiography, such as dental computed tomography (CT), mammography and nondestructive testing (NDT) for consumer products. We designed and fabricated 50 µm × 50 µm 3-Tr test pixels having a pixel photodiode with various structures and shapes by using the TSMC 0.25-m standard CMOS process to compare their optical characteristics. The pixel photodiode output was continuously sampled while a test pixel was continuously illuminated by using 550-nm light at a constant intensity. The measurement was repeated 300 times for each test pixel to obtain reliable results on the mean and the variance of the pixel output at each sampling time. The sampling rate was 50 kHz, and the reset period was 200 msec. To estimate the conversion gain, we used the mean-variance method. From the measured results, the n-well/p-substrate photodiode, among 3 photodiode structures available in a standard CMOS process, showed the best performance at a low illumination equivalent to the typical X-ray signal range. The quantum efficiencies of the n+/p-well, n-well/p-substrate, and n+/p-substrate photodiodes were 18.5%, 62.1%, and 51.5%, respectively. From a comparison of pixels with rounded and rectangular corners, we found that a rounded corner structure could reduce the dark current in large-size pixels. A pixel with four rounded corners showed a reduced dark current of about 200fA compared to a pixel with four rectangular corners in our pixel sample size. Photodiodes with round p-implant openings showed about 5% higher dark current, but about 34% higher sensitivities, than the conventional photodiodes.
A Binary Offset Effect in CCD Readout and Its Impact on Astronomical Data
NASA Astrophysics Data System (ADS)
Boone, K.; Aldering, G.; Copin, Y.; Dixon, S.; Domagalski, R. S.; Gangler, E.; Pecontal, E.; Perlmutter, S.
2018-06-01
We have discovered an anomalous behavior of CCD readout electronics that affects their use in many astronomical applications. An offset in the digitization of the CCD output voltage that depends on the binary encoding of one pixel is added to pixels that are read out one, two, and/or three pixels later. One result of this effect is the introduction of a differential offset in the background when comparing regions with and without flux from science targets. Conventional data reduction methods do not correct for this offset. We find this effect in 16 of 22 instruments investigated, covering a variety of telescopes and many different front-end electronics systems. The affected instruments include LRIS and DEIMOS on the Keck telescopes, WFC3 UVIS and STIS on HST, MegaCam on CFHT, SNIFS on the UH88 telescope, GMOS on the Gemini telescopes, HSC on Subaru, and FORS on VLT. The amplitude of the introduced offset is up to 4.5 ADU per pixel, and it is not directly proportional to the measured ADU level. We have developed a model that can be used to detect this “binary offset effect” in data, and correct for it. Understanding how data are affected and applying a correction for the effect is essential for precise astronomical measurements.
NASA Astrophysics Data System (ADS)
Khlopenkov, Konstantin; Duda, David; Thieman, Mandana; Minnis, Patrick; Su, Wenying; Bedka, Kristopher
2017-10-01
The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low earth orbit and geostationary satellite imagers have to be co-located with EPIC pixels to provide scene identification in order to select anisotropic directional models needed to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-km resolution. An aggregated rating is employed to incorporate several factors and to select the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into EPIC-view domain by convolving composite pixels with the EPIC point spread function defined with a half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day with a temporal lag of under 2 hours in over 95% of the globe.
Image mosaic and topographic map of the moon
Hare, Trent M.; Hayward, Rosalyn K.; Blue, Jennifer S.; Archinal, Brent A.
2015-01-01
Sheet 2: This map is based on data from the Lunar Orbiter Laser Altimeter (LOLA; Smith and others, 2010), an instrument on the National Aeronautics and Space Administration (NASA) Lunar Reconnaissance Orbiter (LRO) spacecraft (Tooley and others, 2010). The image used for the base of this map represents more than 6.5 billion measurements gathered between July 2009 and July 2013, adjusted for consistency in the coordinate system described below, and then converted to lunar radii (Mazarico and others, 2012). For the Mercator portion, these measurements were converted into a digital elevation model (DEM) with a resolution of 0.015625 degrees per pixel, or 64 pixels per degree. In projection, the pixels are 473.8 m in size at the equator. For the polar portion, the LOLA elevation points were used to create a DEM at 240 meters per pixel. A shaded relief map was generated from each DEM with a sun angle of 45° from horizontal, and a sun azimuth of 270°, as measured clockwise from north with no vertical exaggeration. The DEM values were then mapped to a global color look-up table, with each color representing a range of 1 km of elevation. For this map sheet, only larger feature names are shown. For references listed above, please open the full PDF.
NASA Astrophysics Data System (ADS)
Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad
2018-06-01
Unwanted contrast in high resolution satellite images such as shadow areas directly affects the result of further processing in urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work on pixel-level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index which enables to solve the ambiguity of shadow and water bodies. Then the pixel-based results are further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate this new approach. The result shows the superiority of the proposed method over some state-of-the-art shadow detection method with an average of 96% in F-measure.
NASA Technical Reports Server (NTRS)
Khlopenkov, Konstantin; Duda, David; Thieman, Mandana; Minnis, Patrick; Su, Wenying; Bedka, Kristopher
2017-01-01
The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low Earth orbit and geostationary satellite imagers have to be co-located with EPIC pixels to provide scene identification in order to select anisotropic directional models needed to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-kilometer resolution. An aggregated rating is employed to incorporate several factors and to select the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into the EPIC-view domain by convolving composite pixels with the EPIC point spread function (PSF) defined with a half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day with a temporal lag of under 2 hours in over 95 percent of the globe.
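As an illustration of the PSF-weighted averaging step (computed separately for each cloud phase), the sketch below shows the per-footprint bookkeeping only; the radiance values, phase labels and PSF weights are assumed inputs, not DSCOVR data.

import numpy as np

def psf_weighted_average(radiance, cloud_phase, psf_weight, phases=("water", "ice")):
    # Return {phase: PSF-weighted mean radiance} for one EPIC footprint.
    result = {}
    for phase in phases:
        sel = cloud_phase == phase
        w = psf_weight[sel]
        result[phase] = float(np.sum(w * radiance[sel]) / w.sum()) if w.sum() > 0 else np.nan
    return result

rad = np.array([105.2, 98.7, 110.4, 101.3])
phase = np.array(["water", "ice", "water", "water"])
w = np.array([0.40, 0.25, 0.20, 0.15])              # PSF values at the composite pixels
print(psf_weighted_average(rad, phase, w))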
Readout of the upgraded ALICE-ITS
NASA Astrophysics Data System (ADS)
Szczepankiewicz, A.; ALICE Collaboration
2016-07-01
The ALICE experiment will undergo a major upgrade during the second long shutdown of the CERN LHC. As part of this program, the present Inner Tracking System (ITS), which employs different layers of hybrid pixels, silicon drift and strip detectors, will be replaced by a completely new tracker composed of seven layers of monolithic active pixel sensors. The upgraded ITS will have more than twelve billion pixels in total, producing 300 Gbit/s of data when tracking 50 kHz Pb-Pb events. Two families of pixel chips realized with the TowerJazz CMOS imaging process have been developed as candidate sensors: the ALPIDE, which uses a proprietary readout and sparsification mechanism, and the MISTRAL-O, based on a proven rolling shutter architecture. Both chips can operate in continuous mode, with the ALPIDE also supporting triggered operation. As the communication IP blocks are shared between the two chip families, it has been possible to develop common readout electronics. All the sensor components (analog stages, state machines, buffers, FIFOs, etc.) have been modelled in a system-level simulation, which has been used extensively to optimize both the sensor and the whole readout chain design in an iterative process. This contribution covers the progress of the R&D efforts and the overall expected performance of the ALICE-ITS readout system.
NASA Astrophysics Data System (ADS)
Haellstig, Emil J.; Martin, Torleif; Stigwall, Johan; Sjoqvist, Lars; Lindgren, Mikael
2004-02-01
A commercial linear one-dimensional, 1x4096 pixels, zero-twist nematic liquid crystal spatial light modulator (SLM), giving more than 2π phase modulation at λ = 850 nm, was evaluated for beam steering applications. The large ratio (7:1) between the liquid crystal layer thickness and pixel width gives rise to voltage leakage and fringing fields between pixels. Due to the fringing fields the ideal calculated phase patterns cannot be perfectly realized by the device. Losses in high frequency components in the phase patterns were found to limit the maximum deflection angle. The inhomogeneous optical anisotropy of the SLM was determined by modelling of the liquid crystal director distribution within the electrode-pixel structure. The effects of the fringing fields on the amplitude and phase modulation were studied by full vector finite-difference time-domain simulations. It was found that the fringing fields also resulted in coupling into an unwanted polarization mode. Measurements of how this mode coupling affects the beam steering quality were carried out and the results compared with calculated results. A method to compensate for the fringing field effects is discussed and it is shown how the usable steering range of the SLM can be extended to +/- 2 degrees.
Defante, Adrian P; Vreeland, Wyatt N; Benkstein, Kurt D; Ripple, Dean C
2018-05-01
Nanoparticle tracking analysis (NTA) obtains particle size by analysis of particle diffusion through a time series of micrographs and particle count by a count of imaged particles. The number of observed particles imaged is controlled by the scattering cross-section of the particles and by camera settings such as sensitivity and shutter speed. Appropriate camera settings are defined as those that image, track, and analyze a sufficient number of particles for statistical repeatability. Here, we test if image attributes, features captured within the image itself, can provide measurable guidelines to assess the accuracy for particle size and count measurements using NTA. The results show that particle sizing is a robust process independent of image attributes for model systems. However, particle count is sensitive to camera settings. Using open-source software analysis, it was found that a median pixel area of 4 pixels² results in a particle concentration within 20% of the expected value. The distribution of these illuminated pixel areas can also provide clues about the polydispersity of particle solutions prior to using a particle tracking analysis. Using the median pixel area serves as an operator-independent means to assess the quality of the NTA measurement for count. Published by Elsevier Inc.
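The "median illuminated pixel area" criterion can be checked with a short script like the hedged sketch below: threshold a micrograph, label the bright blobs, and take the median blob area in pixels. The threshold and the toy image are assumptions; the published analysis used its own open-source pipeline.

import numpy as np
from scipy import ndimage

def median_blob_area(frame, threshold):
    # Median area (in pixels) of connected bright regions above the threshold.
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0.0
    areas = np.bincount(labels.ravel())[1:]          # skip the background label
    return float(np.median(areas))

# Toy micrograph with two bright particles on a noisy background
rng = np.random.default_rng(1)
img = rng.normal(10, 2, (128, 128))
img[30:32, 40:42] += 50
img[80:83, 90:92] += 50
print(median_blob_area(img, threshold=25))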
An effective approach for gap-filling continental scale remotely sensed time-series
Weiss, Daniel J.; Atkinson, Peter M.; Bhatt, Samir; Mappin, Bonnie; Hay, Simon I.; Gething, Peter W.
2014-01-01
The archives of imagery and modeled data products derived from remote sensing programs with high temporal resolution provide powerful resources for characterizing inter- and intra-annual environmental dynamics. The impressive depth of available time-series from such missions (e.g., MODIS and AVHRR) affords new opportunities for improving data usability by leveraging spatial and temporal information inherent to longitudinal geospatial datasets. In this research we develop an approach for filling gaps in imagery time-series that result primarily from cloud cover, which is particularly problematic in forested equatorial regions. Our approach consists of two complementary gap-filling algorithms and a variety of run-time options that allow users to balance competing demands of model accuracy and processing time. We applied the gap-filling methodology to MODIS Enhanced Vegetation Index (EVI) and daytime and nighttime Land Surface Temperature (LST) datasets for the African continent for 2000–2012, with a 1 km spatial resolution and an 8-day temporal resolution. We validated the method by introducing and filling artificial gaps, and then comparing the original data with model predictions. Our approach achieved R² values above 0.87 even for pixels within 500 km wide introduced gaps. Furthermore, the structure of our approach allows estimation of the error associated with each gap-filled pixel based on the distance to the non-gap pixels used to model its fill value, thus providing a mechanism for including uncertainty associated with the gap-filling process in downstream applications of the resulting datasets. PMID:25642100
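The sketch below is not the authors' two-algorithm method; it only illustrates the general principle of filling temporal gaps per pixel and retaining the distance to the nearest real observation as a proxy for the fill uncertainty. The names and the linear interpolation choice are assumptions.

import numpy as np

def fill_time_series(values):
    # Linearly interpolate NaN gaps; also return the distance (in time steps)
    # from each point to the nearest valid observation.
    t = np.arange(values.size)
    valid = ~np.isnan(values)
    filled = np.interp(t, t[valid], values[valid])
    dist = np.abs(t[:, None] - t[valid][None, :]).min(axis=1)
    return filled, dist

evi = np.array([0.31, 0.35, np.nan, np.nan, 0.52, 0.55, np.nan, 0.48])
filled, dist = fill_time_series(evi)
print(filled.round(3), dist)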
Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.
Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J
2016-01-01
We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses mean and standard deviation to estimate the percentage of negative attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study, in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10-HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
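The core of the Gaussian-model idea can be expressed in one line: given the mean and standard deviation of attenuation values in a nodule, the fraction of negative-attenuation pixels follows from the normal CDF. The sketch below is a hedged illustration; the example numbers and any decision threshold on the resulting fraction are assumptions, not the published criteria.

from math import erf, sqrt

def negative_pixel_fraction(mean_hu, sd_hu):
    # P(X < 0 HU) assuming pixel attenuation X ~ N(mean_hu, sd_hu^2).
    return 0.5 * (1.0 + erf((0.0 - mean_hu) / (sd_hu * sqrt(2.0))))

frac = negative_pixel_fraction(mean_hu=12.0, sd_hu=18.0)   # illustrative ROI statistics
print(f"estimated negative-attenuation fraction: {frac:.1%}")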
Operational Tree Species Mapping in a Diverse Tropical Forest with Airborne Imaging Spectroscopy.
Baldeck, Claire A; Asner, Gregory P; Martin, Robin E; Anderson, Christopher B; Knapp, David E; Kellner, James R; Wright, S Joseph
2015-01-01
Remote identification and mapping of canopy tree species can contribute valuable information towards our understanding of ecosystem biodiversity and function over large spatial scales. However, the extreme challenges posed by highly diverse, closed-canopy tropical forests have prevented automated remote species mapping of non-flowering tree crowns in these ecosystems. We set out to identify individuals of three focal canopy tree species amongst a diverse background of tree and liana species on Barro Colorado Island, Panama, using airborne imaging spectroscopy data. First, we compared two leading single-class classification methods--binary support vector machine (SVM) and biased SVM--for their performance in identifying pixels of a single focal species. From this comparison we determined that biased SVM was more precise and created a multi-species classification model by combining the three biased SVM models. This model was applied to the imagery to identify pixels belonging to the three focal species and the prediction results were then processed to create a map of focal species crown objects. Crown-level cross-validation of the training data indicated that the multi-species classification model had pixel-level producer's accuracies of 94-97% for the three focal species, and field validation of the predicted crown objects indicated that these had user's accuracies of 94-100%. Our results demonstrate the ability of high spatial and spectral resolution remote sensing to accurately detect non-flowering crowns of focal species within a diverse tropical forest. We attribute the success of our model to recent classification and mapping techniques adapted to species detection in diverse closed-canopy forests, which can pave the way for remote species mapping in a wider variety of ecosystems.
Operational Tree Species Mapping in a Diverse Tropical Forest with Airborne Imaging Spectroscopy
Baldeck, Claire A.; Asner, Gregory P.; Martin, Robin E.; Anderson, Christopher B.; Knapp, David E.; Kellner, James R.; Wright, S. Joseph
2015-01-01
Remote identification and mapping of canopy tree species can contribute valuable information towards our understanding of ecosystem biodiversity and function over large spatial scales. However, the extreme challenges posed by highly diverse, closed-canopy tropical forests have prevented automated remote species mapping of non-flowering tree crowns in these ecosystems. We set out to identify individuals of three focal canopy tree species amongst a diverse background of tree and liana species on Barro Colorado Island, Panama, using airborne imaging spectroscopy data. First, we compared two leading single-class classification methods—binary support vector machine (SVM) and biased SVM—for their performance in identifying pixels of a single focal species. From this comparison we determined that biased SVM was more precise and created a multi-species classification model by combining the three biased SVM models. This model was applied to the imagery to identify pixels belonging to the three focal species and the prediction results were then processed to create a map of focal species crown objects. Crown-level cross-validation of the training data indicated that the multi-species classification model had pixel-level producer’s accuracies of 94–97% for the three focal species, and field validation of the predicted crown objects indicated that these had user’s accuracies of 94–100%. Our results demonstrate the ability of high spatial and spectral resolution remote sensing to accurately detect non-flowering crowns of focal species within a diverse tropical forest. We attribute the success of our model to recent classification and mapping techniques adapted to species detection in diverse closed-canopy forests, which can pave the way for remote species mapping in a wider variety of ecosystems. PMID:26153693
Dynamically re-configurable CMOS imagers for an active vision system
NASA Technical Reports Server (NTRS)
Yang, Guang (Inventor); Pain, Bedabrata (Inventor)
2005-01-01
A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue due to the lack of proper motion estimation. Existing moving object detection approaches from UAV aerial images have not dealt with motion-based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either frame differencing or a segmentation approach alone. There are two main purposes for this research: firstly, to develop a new motion model called DMM (dynamic motion model), and secondly, to apply the proposed segmentation approach SUED (segmentation using edge based dilation) using frame difference embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity to segment only the specific area of the moving object rather than searching the whole area of the frame using SUED. At each stage of the proposed scheme, experimental fusion of the DMM and SUED produces extracted moving objects faithfully. Experimental results reveal that the proposed DMM and SUED have successfully demonstrated the validity of the proposed methodology.
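A hedged sketch of frame differencing combined with edge-based dilation, in the spirit of the SUED step, is shown below; the DMM search-window logic is not reproduced, and the thresholds, Sobel edge criterion and toy frames are assumptions.

import numpy as np
from scipy import ndimage

def segment_moving(frame_prev, frame_curr, diff_thresh=25, dilate_iter=2):
    # Frame-difference mask, dilated and intersected with a simple edge mask.
    diff = np.abs(frame_curr.astype(float) - frame_prev.astype(float))
    mask = diff > diff_thresh
    edges = ndimage.sobel(frame_curr.astype(float))
    edge_mask = np.abs(edges) > np.percentile(np.abs(edges), 90)
    grown = ndimage.binary_dilation(mask, iterations=dilate_iter)
    return grown & edge_mask

a = np.zeros((64, 64))
b = a.copy()
b[20:30, 20:30] = 200                                # a "moving" bright patch
print(segment_moving(a, b).sum())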
Spatial scaling of net primary productivity using subpixel landcover information
NASA Astrophysics Data System (ADS)
Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.
2008-10-01
Gridding the land surface into coarse homogeneous pixels may cause important biases in ecosystem model estimations of carbon budget components at local, regional and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is made on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP obtained from calculations made at coarse spatial resolutions is multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to estimates from a coupled carbon-hydrology model (BEPS-TerrainLab) made at a 1-km resolution over a watershed (the Baohe River Basin) located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as its spatial variability.
Image model: new perspective for image processing and computer vision
NASA Astrophysics Data System (ADS)
Ziou, Djemel; Allili, Madjid
2004-05-01
We propose a new image model in which the image support and image quantities are modeled using algebraic topology concepts. The image support is viewed as a collection of chains encoding combinations of pixels grouped by dimension and linking different dimensions with the boundary operators. Image quantities are encoded using the notion of a cochain, which associates with pixels of a given dimension values that can be scalar, vector, or tensor depending on the problem that is considered. This allows algebraic equations to be obtained directly from the physical laws. The coboundary and codual operators, which are generic operations on cochains, make it possible to formulate the classical differential operators, as applied to field functions and differential forms, in both global and local forms. This image model makes the association between the image support and the image quantities explicit, which results in several advantages: it allows the derivation of efficient algorithms that operate in any dimension and the unification of mathematics and physics to solve classical problems in image processing and computer vision. We show the effectiveness of this model by considering isotropic diffusion.
Plate refractive camera model and its applications
NASA Astrophysics Data System (ADS)
Huang, Longxiang; Zhao, Xu; Cai, Shen; Liu, Yuncai
2017-03-01
In real applications, a pinhole camera capturing objects through a planar parallel transparent plate is frequently employed. Due to the refractive effects of the plate, such an imaging system does not comply with the conventional pinhole camera model. Although the system is ubiquitous, it has not been thoroughly studied. This paper aims at presenting a simple virtual camera model, called a plate refractive camera model, which has a form similar to a pinhole camera model and can efficiently model refractions through a plate. The key idea is to employ a pixel-wise viewpoint concept to encode the refraction effects into a pixel-wise pinhole camera model. The proposed camera model realizes an efficient forward projection computation method and has some advantages in applications. First, the model can help to compute the caustic surface to represent the changes of the camera viewpoints. Second, the model has strengths in analyzing and rectifying the image caustic distortion caused by the plate refraction effects. Third, the model can be used to calibrate the camera's intrinsic parameters without removing the plate. Last but not least, the model contributes to putting forward plate refractive triangulation methods in order to solve the plate refractive triangulation problem easily in multiple views. We verify our theory in both synthetic and real experiments.
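The basic physics behind the pixel-wise viewpoint shift can be sketched with the classic parallel-plate displacement formula: a ray entering a plate of thickness d and refractive index n at incidence angle theta exits parallel to itself but laterally shifted. This is only the underlying geometry, not the paper's full camera model; the plate parameters below are assumptions.

from math import sin, cos, sqrt, radians

def plate_lateral_shift(theta_deg, thickness, n):
    # Lateral displacement of a ray after crossing a parallel plate in air.
    t = radians(theta_deg)
    return thickness * sin(t) * (1.0 - cos(t) / sqrt(n * n - sin(t) ** 2))

# Example: 5 mm glass plate (n = 1.5), rays at several incidence angles
for angle in (0, 10, 20, 30):
    print(angle, round(plate_lateral_shift(angle, 5.0, 1.5), 4), "mm")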
Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo
2009-01-01
New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Chumin; Kanicki, Jerzy, E-mail: kanicki@eecs.umich.edu; Konstantinidis, Anastasios C.
Purpose: Large area x-ray imagers based on complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been proposed for various medical imaging applications including digital breast tomosynthesis (DBT). The low electronic noise (50–300 e⁻) of CMOS APS x-ray imagers provides a possible route to shrink the pixel pitch to smaller than 75 μm for microcalcification detection and possible reduction of the DBT mean glandular dose (MGD). Methods: In this study, the imaging performance of a large area (29 × 23 cm²) CMOS APS x-ray imager [Dexela 2923 MAM (PerkinElmer, London)] with a pixel pitch of 75 μm was characterized and modeled. The authors developed a cascaded system model for CMOS APS x-ray imagers using both broadband x-ray radiation and monochromatic synchrotron radiation. The experimental data, including modulation transfer function, noise power spectrum, and detective quantum efficiency (DQE), were theoretically described using the proposed cascaded system model with satisfactory consistency with experimental results. Both high full well and low full well (LFW) modes of the Dexela 2923 MAM CMOS APS x-ray imager were characterized and modeled. The cascaded system analysis results were further used to extract the contrast-to-noise ratio (CNR) for microcalcifications with sizes of 165–400 μm at various MGDs. The impact of electronic noise on CNR was also evaluated. Results: The LFW mode shows better DQE at low air kerma (Ka < 10 μGy) and should be used for DBT. At current DBT application air kerma (Ka ∼ 10 μGy, broadband radiation of 28 kVp), DQE of more than 0.7 and ∼0.3 was achieved using the LFW mode at a spatial frequency of 0.5 line pairs per millimeter (lp/mm) and the Nyquist frequency of ∼6.7 lp/mm, respectively. It is shown that microcalcifications of 165–400 μm in size can be resolved using an MGD range of 0.3–1 mGy. In comparison to a General Electric GEN2 prototype DBT system (at an MGD of 2.5 mGy), an increased CNR (by ∼10) for microcalcifications was observed using the Dexela 2923 MAM CMOS APS x-ray imager at a lower MGD (2.0 mGy). Conclusions: The Dexela 2923 MAM CMOS APS x-ray imager is capable of achieving high imaging performance at spatial frequencies up to 6.7 lp/mm. Microcalcifications of 165 μm are distinguishable based on reported data and their modeling results due to the small pixel pitch of 75 μm. At the same time, potential dose reduction is expected using the studied CMOS APS x-ray imager.
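For reference, the standard frequency-domain bookkeeping behind such cascaded-system DQE analysis can be sketched as DQE(f) = S̄²·MTF²(f) / (q·NPS(f)), with S̄ the mean output signal and q the incident photon fluence. The curves and numbers below are purely illustrative assumptions, not the Dexela 2923 MAM measurements.

import numpy as np

def dqe(mtf, nps, mean_signal, fluence):
    # Detective quantum efficiency versus spatial frequency.
    return (mean_signal ** 2) * (mtf ** 2) / (fluence * nps)

f = np.linspace(0.0, 6.7, 50)                       # frequencies up to the Nyquist limit (lp/mm)
mtf = np.exp(-f / 4.0)                              # illustrative MTF curve
nps = np.full_like(f, 6.0e-3)                       # illustrative flat NPS (DU^2 mm^2)
curve = dqe(mtf, nps, mean_signal=100.0, fluence=2.4e6)
print(round(curve[0], 3))                           # ~0.69 at zero frequency in this toy setup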
Imaging properties of pixellated scintillators with deep pixels
Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.
2015-01-01
We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10×10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm × 1mm × 20 mm pixels) made by Proteus, Inc. with similar 10×10 arrays of LSO:Ce and BGO (1mm × 1mm × 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10×10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors. PMID:26236070
Imaging properties of pixellated scintillators with deep pixels
NASA Astrophysics Data System (ADS)
Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.
2014-09-01
We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10x10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm x 1mm x 20 mm pixels) made by Proteus, Inc. with similar 10x10 arrays of LSO:Ce and BGO (1mm x 1mm x 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10x10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors.
Detector motion method to increase spatial resolution in photon-counting detectors
NASA Astrophysics Data System (ADS)
Lee, Daehee; Park, Kyeongjin; Lim, Kyung Taek; Cho, Gyuseong
2017-03-01
Medical imaging requires high spatial resolution of an image to identify fine lesions. Photon-counting detectors in medical imaging have recently been rapidly replacing energy-integrating detectors due to the former's high spatial resolution, high efficiency and low noise. Spatial resolution in a photon counting image is determined by the pixel size. Therefore, the smaller the pixel size, the higher the spatial resolution that can be obtained in an image. However, detector redesigning is required to reduce pixel size, and an expensive fine process is required to integrate a signal processing unit with reduced pixel size. Furthermore, as the pixel size decreases, charge sharing severely deteriorates spatial resolution. To increase spatial resolution, we propose a detector motion method using a large pixel detector that is less affected by charge sharing. To verify the proposed method, we utilized a UNO-XRI photon-counting detector (1-mm CdTe, Timepix chip) at the maximum X-ray tube voltage of 80 kVp. A spatial resolution similar to that of a 55-μm-pixel image was achieved by application of the proposed method to a 110-μm-pixel detector, with a higher signal-to-noise ratio. The proposed method could be a way to increase spatial resolution without a pixel redesign when pixels severely suffer from charge sharing as pixel size is reduced.
Vectorized image segmentation via trixel agglomeration
Prasad, Lakshman [Los Alamos, NM; Skourikhine, Alexei N [Los Alamos, NM
2006-10-24
A computer implemented method transforms an image comprised of pixels into a vectorized image specified by a plurality of polygons that can be subsequently used to aid in image processing and understanding. The pixelated image is processed to extract edge pixels that separate different colors and a constrained Delaunay triangulation of the edge pixels forms a plurality of triangles having edges that cover the pixelated image. A color for each one of the plurality of triangles is determined from the color pixels within each triangle. A filter is formed with a set of grouping rules related to features of the pixelated image and applied to the plurality of triangle edges to merge adjacent triangles consistent with the filter into polygons having a plurality of vertices. The pixelated image may then be reformed into an array of the polygons, which can be represented collectively and efficiently by a standard vector image.
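A rough sketch of the first steps (edge pixels, triangulation, per-triangle colour) is given below. scipy's Delaunay is unconstrained, unlike the constrained Delaunay triangulation of the patented method, so this is only an approximation of the idea; the toy image and edge coordinates are assumptions.

import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
image = rng.integers(0, 255, (100, 100, 3))          # toy RGB image
edge_pixels = rng.integers(0, 100, (200, 2))         # stand-in edge-pixel coordinates (row, col)

tri = Delaunay(edge_pixels)                          # triangles covering the edge pixels
centroids = edge_pixels[tri.simplices].mean(axis=1).astype(int)
triangle_colours = image[centroids[:, 0], centroids[:, 1]]   # one colour per triangle
print(tri.simplices.shape, triangle_colours.shape)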
Characterization of pixelated TlBr detectors with Tl electrodes
NASA Astrophysics Data System (ADS)
Hitomi, Keitaro; Onodera, Toshiyuki; Kim, Seong-Yun; Shoji, Tadayoshi; Ishii, Keizo
2014-05-01
A 4.36-mm-thick pixelated thallium bromide (TlBr) detector with Tl electrodes was fabricated from a crystal grown by the traveling molten zone method using zone-purified material. The detector had four 1×1 mm² pixelated anodes. The detector performance was characterized at room temperature. The mobility-lifetime products of electrons for each pixel of the TlBr detector were measured to be >2.8×10⁻³ cm²/V. The four pixelated anodes of the detector exhibited energy resolutions of 1.5-1.8% full width at half maximum (FWHM) for 662-keV gamma rays for single-pixel events with the depth correction method. An energy resolution of 4.5% FWHM for 662-keV gamma rays was obtained from a reconstructed energy spectrum using two-pixel events from the two pixelated anodes on the detector.
Simulation of Small-Pitch HgCdTe Photodetectors
NASA Astrophysics Data System (ADS)
Vallone, Marco; Goano, Michele; Bertazzi, Francesco; Ghione, Giovanni; Schirmacher, Wilhelm; Hanna, Stefan; Figgemeier, Heinrich
2017-09-01
Recent studies indicate that the development of infrared HgCdTe-based focal plane arrays (FPAs) with sub-wavelength pixel pitch is an important technological step, offering smaller volume, lower weight, and potentially lower cost. In order to assess the limits of pixel pitch scaling, we present combined three-dimensional optical and electrical simulations of long-wavelength infrared HgCdTe FPAs with 3 μm, 5 μm, and 10 μm pitch. Numerical simulations predict significant cavity effects brought about by the array periodicity. The optical and electrical contributions to spectral inter-pixel crosstalk are investigated as functions of pixel pitch, by illuminating the FPAs with Gaussian beams focused on the central pixel. Despite the FPAs being planar with 100% pixel duty cycle, our calculations suggest that the total crosstalk with nearest-neighbor pixels could be kept acceptably small even with pixels only 3 μm wide and a diffraction-limited optical system.
NASA Astrophysics Data System (ADS)
Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles
2008-01-01
The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of spectral bands of the image creating homogeneous polygons with regard to spatial or spectral characteristics. The segmentation algorithm does not solely rely on the single pixel value, but also on shape, texture, and pixel spatial continuity. The object-based classification is a knowledge-based process where an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied on other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility to use the contextual information associated to objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.
NASA Astrophysics Data System (ADS)
Takeda, Sawako; Tashiro, Makoto S.; Ishisaki, Yoshitaka; Tsujimoto, Masahiro; Seta, Hiromi; Shimoda, Yuya; Yamaguchi, Sunao; Uehara, Sho; Terada, Yukikatsu; Fujimoto, Ryuichi; Mitsuda, Kazuhisa
2014-07-01
The soft X-ray spectrometer (SXS) aboard ASTRO-H is equipped with dedicated digital signal processing units called pulse shape processors (PSPs). The X-ray microcalorimeter system SXS has 36 sensor pixels, which are operated at 50 mK to measure the heat input of X-ray photons and realize an energy resolution of 7 eV FWHM in the range 0.3-12.0 keV. Front-end signal processing electronics are used to filter and amplify the electrical pulse output from the sensor and for analog-to-digital conversion. The digitized pulses from the 36 pixels are multiplexed and are sent to the PSP over low-voltage differential signaling lines. Each of the two identical PSP units consists of an FPGA board, which hosts the hardware logic, and two CPU boards, which host the onboard software. The FPGA board triggers on every pixel event and stores the triggering information as a pulse waveform in the installed memory. The CPU boards read the event data to evaluate pulse heights by an optimal filtering algorithm. The evaluated X-ray photon data (including the pixel ID, energy, and arrival time information) are transferred to the satellite data recorder along with event quality information. The PSP units have been developed and tested with the engineering model (EM) and the flight model. Utilizing the EM PSP, we successfully verified the entire hardware system and the basic software design of the PSPs, including their communication capability and signal processing performance. In this paper, we show the key metrics of the EM test, such as accuracy and synchronicity of sampling clocks, event grading capability, and resultant energy resolution.
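The optimal-filtering step used to evaluate pulse heights can be illustrated with the textbook frequency-domain estimator below, which weights each frequency bin by the pulse template over the noise power spectrum; the record length, template shape and white-noise spectrum are assumptions, not SXS calibration data.

import numpy as np

def optimal_pulse_height(record, template, noise_psd):
    # Frequency-domain optimal-filter estimate of the pulse amplitude.
    D = np.fft.rfft(record)
    S = np.fft.rfft(template)
    num = np.sum((np.conj(S) * D / noise_psd).real)
    den = np.sum((np.abs(S) ** 2) / noise_psd)
    return num / den

n = 1024
t = np.arange(n)
template = np.exp(-t / 200.0) - np.exp(-t / 20.0)    # illustrative pulse shape
noise_psd = np.ones(n // 2 + 1)                      # white-noise assumption
record = 3.5 * template + np.random.default_rng(3).normal(0, 0.05, n)
print(round(optimal_pulse_height(record, template, noise_psd), 3))   # recovers ~3.5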
A Matlab Program for Textural Classification Using Neural Networks
NASA Astrophysics Data System (ADS)
Leite, E. P.; de Souza, C.
2008-12-01
A new MATLAB code that provides tools to perform classification of textural images for applications in the Geosciences is presented. The program, here coined TEXTNN, comprises the computation of variogram maps in the frequency domain for specific lag distances in the neighborhood of a pixel. The result is then converted back to the spatial domain, where directional or omnidirectional semivariograms are extracted. Feature vectors are built with textural information composed of the semivariance values at these lag distances and, moreover, with histogram measures of mean, standard deviation and weighted fill-ratio. This procedure is applied to a selected group of pixels or to all pixels in an image using a moving window. A feed-forward back-propagation Neural Network can then be designed and trained on feature vectors of predefined classes (training set). The training phase minimizes the mean-squared error on the training set. Additionally, at each iteration, the mean-squared error is assessed for the validation and test sets. The program also calculates contingency matrices, global accuracy and kappa coefficient for the three data sets, allowing a quantitative appraisal of the predictive power of the Neural Network models. The interpreter is able to select the best model obtained from a k-fold cross-validation or to use a unique split-sample data set for classification of all pixels in a given textural image. The code is open to the geoscientific community and is very flexible, allowing the experienced user to modify it as necessary. The performance of the algorithms and the end-user program were tested using synthetic images, orbital SAR (RADARSAT) imagery for oil seepage detection, and airborne, multi-polarimetric SAR imagery for geologic mapping. The overall results proved very promising.
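Although TEXTNN itself is a MATLAB program, the semivariance feature it extracts can be sketched language-agnostically. The Python sketch below computes an omnidirectional experimental semivariogram of a pixel neighbourhood at a few lag distances; the window size and lags are assumptions.

import numpy as np

def semivariogram(window, lags):
    # gamma(h) = mean of 0.5 * (z(x) - z(x+h))^2 over all pixel pairs at distance ~h.
    rows, cols = np.indices(window.shape)
    coords = np.column_stack([rows.ravel(), cols.ravel()])
    z = window.ravel().astype(float)
    d = np.hypot(coords[:, None, 0] - coords[None, :, 0],
                 coords[:, None, 1] - coords[None, :, 1])
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    return [sq[(d > h - 0.5) & (d <= h + 0.5)].mean() for h in lags]

win = np.random.default_rng(4).integers(0, 255, (15, 15))   # toy moving-window patch
print(np.round(semivariogram(win, lags=[1, 2, 3, 4]), 2))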
Estimation of Arable Land Loss in Shandong Province, China based on BFAST Model
NASA Astrophysics Data System (ADS)
Liu, Y.
2016-12-01
With the rapid development of the national economy and the rise of industrialization, China has become one of the countries with the fastest urbanization. From 2001 to 2005, China lost over 2000 km² of fertile arable land every year because of urban expansion. The continuous decline in arable land area poses a threat to China's food security. Land surveys are the direct way to obtain arable land statistics, but they take a long time and require large amounts of financial support. Remote sensing is an effective way to survey land use and its dynamics at large scale. This paper aims to evaluate the detailed status of agricultural land loss in Shandong Province, China by using the BFAST (Breaks for Additive Seasonal and Trend) model. First, the 30 m spatial resolution global land cover products GlobeLand30 in 2000 and 2010 are used to locate pixels transforming from agricultural land to artificial cover during this period. Within a MODIS pixel (250 m) area, if over half of the GlobeLand30 pixels have changed from arable land to artificial cover, then the corresponding MODIS pixel is classified as changed area, whose phenology, as reflected by the NDVI time series curve, will also change. Then, BFAST is applied to the MODIS NDVI time series to detect the break point that represents the time when the change occurred. From 2002 to 2010, Shandong Province lost a total of 1063.03 km² of arable land. Arable land loss showed a declining trend from year to year, and most of the loss occurred in 2002 and 2003. Spatially, cities with higher levels of economic development in the central and eastern regions lost more arable land. Finally, comparison of this result with statistical data from China's National Bureau of Statistics shows a strong positive relationship.
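The pixel-aggregation rule described above (a 250 m MODIS pixel is flagged as changed when more than half of the 30 m GlobeLand30 pixels inside it switched from arable land to artificial cover) can be sketched as follows; the class codes and the 8 × 8 block standing in for the MODIS footprint are illustrative assumptions.

import numpy as np

ARABLE, ARTIFICIAL = 10, 80                          # assumed class codes

def flag_changed_modis(lc2000, lc2010, block=8):
    # Boolean grid of coarse pixels with >50 % arable-to-artificial change.
    changed = (lc2000 == ARABLE) & (lc2010 == ARTIFICIAL)
    h, w = changed.shape
    blocks = changed[:h - h % block, :w - w % block]
    blocks = blocks.reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)) > 0.5

lc2000 = np.full((64, 64), ARABLE)
lc2010 = lc2000.copy()
lc2010[:12, :12] = ARTIFICIAL                        # toy urban expansion in one corner
print(flag_changed_modis(lc2000, lc2010))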
Precision Navigation of Cassini Images Using Rings, Icy Satellites, and Fuzzy Bodies
NASA Astrophysics Data System (ADS)
French, Robert S.; Showalter, Mark R.; Gordon, Mitchell K.
2016-10-01
Before images from the Cassini spacecraft can be analyzed, errors in the published pointing information (up to ~110 pixels for the Imaging Science Subsystem Narrow Angle Camera) must be corrected so that the line of sight vector for each pixel is known. This complicated and labor-intensive process involves matching the image contents with known features such as stars, rings, or moons. Metadata, such as lighting geometry or ring radius and longitude, must be computed for each pixel as well. Both steps require mastering the SPICE toolkit, a highly capable piece of software with a steep learning curve. Only after these steps are completed can the actual scientific investigation begin. We have embarked on a three-year project to perform these steps for all 400,000+ Cassini ISS images as well as images taken by the VIMS, UVIS, and CIRS instruments. The result will be a series of SPICE kernels that include accurate pointing information and a series of backplanes that include precomputed metadata for each pixel. All data will be made public through the PDS Ring-Moon Systems Node (http://www.pds-rings.seti.org). We expect this project to dramatically decrease the time required for scientists to analyze Cassini data. In a previous poster (French et al. 2014, DPS #46, 422.01) we discussed our progress navigating images using stars, simple ring models, and well-defined icy bodies. In this poster we will report on our current progress including the use of more sophisticated ring models, navigation of "fuzzy" bodies such as Titan and Saturn, and use of crater matching on high-resolution images of the icy satellites.
Evaluating Vegetation Type Effects on Land Surface Temperature at the City Scale
NASA Astrophysics Data System (ADS)
Wetherley, E. B.; McFadden, J. P.; Roberts, D. A.
2017-12-01
Understanding the effects of different plant functional types and urban materials on surface temperatures has significant consequences for climate modeling, water management, and human health in cities. To date, doing so at the urban scale has been complicated by small-scale surface heterogeneity and limited data. In this study we examined gradients of land surface temperature (LST) across sub-pixel mixtures of different vegetation types and urban materials across the entire Los Angeles, CA, metropolitan area (4,283 km²). We used AVIRIS airborne hyperspectral imagery (36 m resolution, 224 bands, 0.35 - 2.5 μm) to estimate sub-pixel fractions of impervious, pervious, tree, and turfgrass surfaces, validating them with simulated mixtures constructed from image spectra. We then used simultaneously imaged LST retrievals collected at multiple times of day to examine how temperature changed along gradients of the sub-pixel mixtures. Diurnal in situ LST measurements were used to confirm image values. Sub-pixel fractions were well correlated with simulated validation data for turfgrass (r² = 0.71), tree (r² = 0.77), impervious (r² = 0.77), and pervious (r² = 0.83) surfaces. The LST of pure pixels showed the effects of both the diurnal cycle and the surface type, with vegetated classes having a smaller diurnal temperature range of 11.6°C whereas non-vegetated classes had a diurnal range of 16.2°C (similar to in situ measurements collected simultaneously with the imagery). Observed LST across fractional gradients of turf/impervious and tree/impervious sub-pixel mixtures decreased linearly with increasing vegetation fraction. The slopes of decreasing LST were significantly different between tree and turf mixtures, with steeper slopes observed for turf (p < 0.05). These results suggest that the different physiological characteristics and different access to irrigation water of urban trees and turfgrass result in significantly different LST effects, which can be detected at large scales in fractional mixture analysis.
NASA Astrophysics Data System (ADS)
Genocchi, B.; Pickford Scienti, O.; Darambara, DG
2017-05-01
Breast cancer is one of the most frequent tumours in women. During the '90s, the introduction of screening programmes allowed the detection of cancer before the palpable stage, reducing its mortality by up to 50%. About 50% of women aged between 30 and 50 years present dense breast parenchyma. This percentage decreases to 30% for women between 50 and 80 years. In these women, mammography has a sensitivity of around 30%, and small tumours are covered by the dense parenchyma and missed in the mammogram. Interestingly, breast-specific gamma cameras based on semiconductor CdZnTe detectors have been shown to be of great interest for early diagnosis. In fact, due to the high energy and spatial resolution and high sensitivity of CdZnTe, molecular breast imaging has been shown to have a sensitivity of about 90% independently of the breast parenchyma. The aim of this work is to determine the optimal combination of detector pixel size, hole shape, and collimator material in a low dose dual head breast specific gamma camera based on a CdZnTe pixelated detector at 140 keV, in order to achieve a high count rate and the best possible image spatial resolution. The optimal combination has been studied by modeling the system using the Monte Carlo code GATE. Six different pixel sizes from 0.85 mm to 1.6 mm, two hole shapes, hexagonal and square, and two different collimator materials, lead and tungsten, were considered. It was demonstrated that the camera achieved higher count rates and a better signal-to-noise ratio when equipped with square holes and large pixels (> 1.3 mm). In these configurations, the spatial resolution was worse than with small pixel sizes (< 1.3 mm), but remained under 3.6 mm in all cases.
The Si/CdTe semiconductor Compton camera of the ASTRO-H Soft Gamma-ray Detector (SGD)
NASA Astrophysics Data System (ADS)
Watanabe, Shin; Tajima, Hiroyasu; Fukazawa, Yasushi; Ichinohe, Yuto; Takeda, Shin'ichiro; Enoto, Teruaki; Fukuyama, Taro; Furui, Shunya; Genba, Kei; Hagino, Kouichi; Harayama, Atsushi; Kuroda, Yoshikatsu; Matsuura, Daisuke; Nakamura, Ryo; Nakazawa, Kazuhiro; Noda, Hirofumi; Odaka, Hirokazu; Ohta, Masayuki; Onishi, Mitsunobu; Saito, Shinya; Sato, Goro; Sato, Tamotsu; Takahashi, Tadayuki; Tanaka, Takaaki; Togo, Atsushi; Tomizuka, Shinji
2014-11-01
The Soft Gamma-ray Detector (SGD) is one of the instrument payloads onboard ASTRO-H, and will cover a wide energy band (60-600 keV) at a background level 10 times better than instruments currently in orbit. The SGD achieves low background by combining a Compton camera scheme with a narrow field-of-view active shield. The Compton camera in the SGD is realized as a hybrid semiconductor detector system which consists of silicon and cadmium telluride (CdTe) sensors. The design of the SGD Compton camera has been finalized and the final prototype, which has the same configuration as the flight model, has been fabricated for performance evaluation. The Compton camera has overall dimensions of 12 cm×12 cm×12 cm, consisting of 32 layers of Si pixel sensors and 8 layers of CdTe pixel sensors surrounded by 2 layers of CdTe pixel sensors. The detection efficiency of the Compton camera reaches about 15% and 3% for 100 keV and 511 keV gamma rays, respectively. The pixel pitch of the Si and CdTe sensors is 3.2 mm, and the signals from all 13,312 pixels are processed by 208 ASICs developed for the SGD. Good energy resolution is afforded by semiconductor sensors and low noise ASICs, and the obtained energy resolutions with the prototype Si and CdTe pixel sensors are 1.0-2.0 keV (FWHM) at 60 keV and 1.6-2.5 keV (FWHM) at 122 keV, respectively. This results in good background rejection capability due to better constraints on Compton kinematics. Compton camera energy resolutions achieved with the final prototype are 6.3 keV (FWHM) at 356 keV and 10.5 keV (FWHM) at 662 keV, which satisfy the instrument requirements for the SGD Compton camera (better than 2%). Moreover, a low intrinsic background has been confirmed by the background measurement with the final prototype.
Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.
Feng, Bing; Zeng, Gengsheng L
2014-04-10
A challenge for the pixelated detector is that the detector response to a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting a point-source and the centers of sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
Edge Sharpness Assessment by Parametric Modeling: Application to Magnetic Resonance Imaging.
Ahmad, R; Ding, Y; Simonetti, O P
2015-05-01
In biomedical imaging, edge sharpness is an important yet often overlooked image quality metric. In this work, a semi-automatic method to quantify edge sharpness in the presence of significant noise is presented with application to magnetic resonance imaging (MRI). The method is based on parametric modeling of image edges. First, an edge map is automatically generated and one or more edges-of-interest (EOI) are manually selected using graphical user interface. Multiple exclusion criteria are then enforced to eliminate edge pixels that are potentially not suitable for sharpness assessment. Second, at each pixel of the EOI, an image intensity profile is read along a small line segment that runs locally normal to the EOI. Third, the profiles corresponding to all EOI pixels are individually fitted with a sigmoid function characterized by four parameters, including one that represents edge sharpness. Last, the distribution of the sharpness parameter is used to quantify edge sharpness. For validation, the method is applied to simulated data as well as MRI data from both phantom imaging and cine imaging experiments. This method allows for fast, quantitative evaluation of edge sharpness even in images with poor signal-to-noise ratio. Although the utility of this method is demonstrated for MRI, it can be adapted for other medical imaging applications.
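A sketch of the parametric edge model is shown below: a four-parameter sigmoid is fitted to an intensity profile read normal to the edge, and the slope parameter serves as the sharpness measure. The parameterisation and synthetic profile are assumptions, not the published formulation.

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, low, high, center, sharpness):
    # Four-parameter edge model: baseline, amplitude, edge position, edge slope.
    return low + (high - low) / (1.0 + np.exp(-sharpness * (x - center)))

x = np.linspace(-5, 5, 41)                           # positions along the normal profile
rng = np.random.default_rng(5)
profile = sigmoid(x, 20.0, 180.0, 0.3, 1.8) + rng.normal(0, 5.0, x.size)

p0 = [profile.min(), profile.max(), 0.0, 1.0]        # initial guess
popt, _ = curve_fit(sigmoid, x, profile, p0=p0)
print("fitted sharpness parameter:", round(popt[3], 2))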
Gaussian model for emission rate measurement of heated plumes using hyperspectral data
NASA Astrophysics Data System (ADS)
Grauer, Samuel J.; Conrad, Bradley M.; Miguel, Rodrigo B.; Daun, Kyle J.
2018-02-01
This paper presents a novel model for measuring the emission rate of a heated gas plume using hyperspectral data from an FTIR imaging spectrometer. The radiative transfer equation (RTE) is used to relate the spectral intensity of a pixel to presumed Gaussian distributions of volume fraction and temperature within the plume, along a line-of-sight that corresponds to the pixel, whereas previous techniques exclusively presume uniform distributions for these parameters. Estimates of volume fraction and temperature are converted to a column density by integrating the local molecular density along each path. Image correlation velocimetry is then employed on raw spectral intensity images to estimate the volume-weighted normal velocity at each pixel. Finally, integrating the product of velocity and column density along a control surface yields an estimate of the instantaneous emission rate. For validation, emission rate estimates were derived from synthetic hyperspectral images of a heated methane plume, generated using data from a large-eddy simulation. Calculating the RTE with Gaussian distributions of volume fraction and temperature, instead of uniform distributions, improved the accuracy of column density measurement by 14%. Moreover, the mean methane emission rate measured using our approach was within 4% of the ground truth. These results support the use of Gaussian distributions of thermodynamic properties in calculation of the RTE for optical gas diagnostics.
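The column-density step can be illustrated as follows: assume Gaussian profiles of volume fraction and temperature along the line of sight, convert to local molecular number density with the ideal gas law, and integrate along the path. The profile parameters, the shared Gaussian width and the ambient conditions are all assumptions.

import numpy as np

KB = 1.380649e-23                                    # Boltzmann constant, J/K
P_ATM = 101325.0                                     # ambient pressure, Pa

def column_density(path_m, vf_peak, vf_sigma, t_peak, t_amb, n_steps=2001):
    # Integrate volume fraction times local molecular density along the path.
    s = np.linspace(-path_m / 2, path_m / 2, n_steps)
    vol_frac = vf_peak * np.exp(-0.5 * (s / vf_sigma) ** 2)
    temp = t_amb + (t_peak - t_amb) * np.exp(-0.5 * (s / vf_sigma) ** 2)
    n_total = P_ATM / (KB * temp)                    # molecules per m^3 (ideal gas)
    return np.sum(vol_frac * n_total) * (s[1] - s[0])   # molecules per m^2

N = column_density(path_m=2.0, vf_peak=0.02, vf_sigma=0.3, t_peak=600.0, t_amb=300.0)
print(f"{N:.3e} molecules/m^2")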
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kohley, Ralf; Barbier, Rémi; Kubik, Bogna; Ferriol, Sylvain; Clemens, Jean-Claude; Ealet, Anne; Secroun, Aurélia; Conversi, Luca; Strada, Paolo
2016-08-01
Euclid is an ESA mission to map the geometry of the dark Universe with a planned launch date in 2020. Euclid is optimised for two primary cosmological probes, weak gravitational lensing and galaxy clustering. They are implemented through two science instruments on-board Euclid, a visible imager (VIS) and a near-infrared spectro-photometer (NISP), which are being developed and built by the Euclid Consortium instrument development teams. The NISP instrument contains a large focal plane assembly of 16 Teledyne HgCdTe H2RG detectors with 2.3μm cut-off wavelength and SIDECAR readout electronics. The performance of the detector systems is critical to the science return of the mission and extended on-ground tests are being performed for characterisation and calibration purposes. Special attention is given also to effects even on the scale of individual pixels, which are difficult to model and calibrate, and to identify any possible impact on science performance. This paper discusses a variety of undesired pixel behaviour including the known effect of random telegraph signal (RTS) noise based on initial on-ground test results from demonstrator model detector systems. Some stability aspects of the RTS pixel populations are addressed as well.
NASA Astrophysics Data System (ADS)
Gu, Yameng; Zhang, Xuming
2017-05-01
Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
NASA Astrophysics Data System (ADS)
Noh, Myoung-Jong; Howat, Ian M.
2018-02-01
The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
Moving vehicles segmentation based on Gaussian motion model
NASA Astrophysics Data System (ADS)
Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.
2005-07-01
Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyze the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. First, we propose an adaptive background update method in which the background is updated according to changes in illumination conditions and can thus adapt to them sensitively. Second, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of the moving pixels are modeled as a Gaussian and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results on several typical scenes show that the proposed model detects moving vehicles correctly and is robust to spurious motion caused by waving trees and camera vibration.
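The abstract above outlines the core of the approach: a single Gaussian over per-pixel motion vectors, updated with an on-line EM step, with a distance test deciding which vectors belong to vehicles. The following is a minimal Python sketch of that idea; the learning rate, the Mahalanobis gate, and the synthetic data are illustrative assumptions, not values from the paper.

```python
import numpy as np

class OnlineGaussianMotionModel:
    """Minimal sketch of a Gaussian motion model updated online.

    Motion vectors (u, v) of moving pixels are modelled by a single 2-D Gaussian;
    the mean and covariance are refreshed with a small learning rate (a stochastic
    approximation of the EM update), and new vectors are classified by their
    Mahalanobis distance. All parameter values below are illustrative assumptions.
    """

    def __init__(self, lr=0.05, gate=3.0):
        self.mean = np.zeros(2)
        self.cov = np.eye(2)
        self.lr = lr        # learning rate for the online update
        self.gate = gate    # Mahalanobis gate (in standard deviations)

    def update(self, motion_vec):
        d = motion_vec - self.mean
        self.mean += self.lr * d
        self.cov = (1 - self.lr) * self.cov + self.lr * np.outer(d, d)

    def is_vehicle(self, motion_vec):
        d = motion_vec - self.mean
        m2 = d @ np.linalg.solve(self.cov, d)   # squared Mahalanobis distance
        return m2 < self.gate ** 2

# Usage: feed per-pixel motion vectors frame by frame (synthetic data here).
model = OnlineGaussianMotionModel()
for vec in np.random.randn(100, 2) * 0.5 + np.array([4.0, 0.0]):
    model.update(vec)
print(model.is_vehicle(np.array([4.2, 0.1])))   # likely True (vehicle-like motion)
print(model.is_vehicle(np.array([0.1, 0.1])))   # likely False (e.g. waving trees)
```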
Efficient detection of wound-bed and peripheral skin with statistical colour models.
Veredas, Francisco J; Mesa, Héctor; Morente, Laura
2015-04-01
A pressure ulcer is a clinical pathology of localised damage to the skin and underlying tissue caused by pressure, shear or friction. Reliable diagnosis supported by precise wound evaluation is crucial for successful treatment decisions. This paper presents a computer-vision approach to wound-area detection based on statistical colour models. Starting with a training set consisting of 113 real wound images, colour histogram models are created for four different tissue types. Back-projections of colour pixels on those histogram models are used, from a Bayesian perspective, to obtain an estimate of the posterior probability that a pixel belongs to any of those tissue classes. Performance measures obtained from contingency tables based on a gold standard of segmented images supplied by experts have been used for model selection. The resulting fitted model has been validated on a test set consisting of 322 wound images manually segmented and labelled by expert clinicians. The final fitted segmentation model shows robustness and gives high mean performance rates [AUC: .9426 (SD .0563); accuracy: .8777 (SD .0799); F-score: .7389 (SD .1550); Cohen's kappa: .6585 (SD .1787)] when segmenting significant wound areas that include healing tissues.
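As a rough illustration of the back-projection step described above, the sketch below builds Laplace-smoothed colour histograms per tissue class and converts them into per-pixel posterior probabilities via Bayes' rule. The bin count, smoothing, and data layout are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def build_colour_histograms(pixels_by_class, bins=8):
    """Per-class colour histograms over quantised RGB values.
    pixels_by_class: dict class_name -> (N, 3) uint8 array of training pixels.
    A minimal sketch of the histogram-model idea; the bin count is an assumption."""
    hists = {}
    for cls, px in pixels_by_class.items():
        idx = (px // (256 // bins)).astype(int)            # quantise each channel
        h = np.zeros((bins, bins, bins))
        np.add.at(h, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
        hists[cls] = (h + 1) / (h.sum() + bins ** 3)       # Laplace-smoothed likelihood
    return hists

def posterior_map(image, hists, priors):
    """Back-project pixel colours onto the histograms to get class posteriors.
    image: (H, W, 3) uint8 array; priors: dict class_name -> prior probability."""
    bins = next(iter(hists.values())).shape[0]
    idx = (image // (256 // bins)).astype(int)
    post = {}
    for cls, h in hists.items():
        post[cls] = priors[cls] * h[idx[..., 0], idx[..., 1], idx[..., 2]]
    total = sum(post.values())
    return {cls: p / total for cls, p in post.items()}   # normalised posteriors per pixel
```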
Sarnoff JND Vision Model for Flat-Panel Design
NASA Technical Reports Server (NTRS)
Brill, Michael H.; Lubin, Jeffrey
1998-01-01
This document describes adaptation of the basic Sarnoff JND Vision Model created in response to the NASA/ARPA need for a general-purpose model to predict the perceived image quality attained by flat-panel displays. The JND model predicts the perceptual ratings that humans will assign to a degraded color-image sequence relative to its nondegraded counterpart. Substantial flexibility is incorporated into this version of the model so it may be used to model displays at the sub-pixel and sub-frame level. To model a display (e.g., an LCD), the input-image data can be sampled at many times the pixel resolution and at many times the digital frame rate. The first stage of the model downsamples each sequence in time and in space to physiologically reasonable rates, but with minimum interpolative artifacts and aliasing. Luma and chroma parts of the model generate (through multi-resolution pyramid representation) a map of differences between test and reference, called the JND map, from which a summary rating predictor is derived. The latest model extensions have done well in calibration against psychophysical data and against image-rating data given a CRT-based front-end. The software was delivered to NASA Ames and is being integrated with LCD display models at that facility.
Technique for ship/wake detection
Roskovensky, John K [Albuquerque, NM
2012-05-01
An automated ship detection technique includes accessing data associated with an image of a portion of Earth. The data includes reflectance values. A first portion of pixels within the image is masked with a cloud and land mask based on spectral flatness of the reflectance values associated with the pixels. A given pixel selected from the first portion of pixels is unmasked when a threshold number of localized pixels surrounding the given pixel are not masked by the cloud and land mask. A spatial variability image is generated based on spatial derivatives of the reflectance values of the pixels which remain unmasked by the cloud and land mask. The spatial variability image is thresholded to identify one or more regions within the image as possible ship detection regions.
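A conceptual sketch of the processing chain described above (spectral-flatness masking, neighbourhood-based unmasking, spatial-variability image, thresholding) might look as follows in Python; all threshold values and the flatness criterion are illustrative assumptions rather than the patented parameters.

```python
import numpy as np

def detect_ship_regions(refl, flatness_tol=0.02, neighbour_frac=0.5, var_thresh=0.05):
    """Minimal sketch of the masking / spatial-variability idea.
    refl: (H, W, B) reflectance cube; all thresholds are illustrative assumptions."""
    # 1. Cloud/land mask from spectral flatness (spectrally flat pixels are kept as water candidates).
    spectral_std = refl.std(axis=2)
    masked = spectral_std > flatness_tol          # True = masked as cloud/land

    # 2. Unmask isolated masked pixels when most of their 3x3 neighbourhood is unmasked.
    H, W = masked.shape
    unmasked = ~masked
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            if masked[i, j] and unmasked[i - 1:i + 2, j - 1:j + 2].mean() >= neighbour_frac:
                masked[i, j] = False

    # 3. Spatial variability image from reflectance derivatives on unmasked pixels.
    band = refl.mean(axis=2)
    gy, gx = np.gradient(band)
    variability = np.hypot(gx, gy)
    variability[masked] = 0.0

    # 4. Threshold to flag possible ship regions.
    return variability > var_thresh
```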
CVD diamond pixel detectors for LHC experiments
NASA Astrophysics Data System (ADS)
Wedenig, R.; Adam, W.; Bauer, C.; Berdermann, E.; Bergonzo, P.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fizzotti, F.; Foulon, F.; Friedl, M.; Gan, K. K.; Gheeraert, E.; Grigoriev, E.; Hallewell, G.; Hall-Wilton, R.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Karl, C.; Kass, R.; Knöpfle, K. T.; Krammer, M.; Logiudice, A.; Lu, R.; Manfredi, P. F.; Manfredotti, C.; Marshall, R. D.; Meier, D.; Mishina, M.; Oh, A.; Pan, L. S.; Palmieri, V. G.; Pernicka, M.; Peitz, A.; Pirollo, S.; Polesello, P.; Pretzl, K.; Procario, M.; Re, V.; Riester, J. L.; Roe, S.; Roff, D.; Rudge, A.; Runolfsson, O.; Russ, J.; Schnetzer, S.; Sciortino, S.; Speziali, V.; Stelzer, H.; Stone, R.; Suter, B.; Tapper, R. J.; Tesarek, R.; Trawick, M.; Trischuk, W.; Vittone, E.; Wagner, A.; Walsh, A. M.; Weilhammer, P.; White, C.; Zeuner, W.; Ziock, H.; Zoeller, M.; Blanquart, L.; Breugnion, P.; Charles, E.; Ciocio, A.; Clemens, J. C.; Dao, K.; Einsweiler, K.; Fasching, D.; Fischer, P.; Joshi, A.; Keil, M.; Klasen, V.; Kleinfelder, S.; Laugier, D.; Meuser, S.; Milgrome, O.; Mouthuy, T.; Richardson, J.; Sinervo, P.; Treis, J.; Wermes, N.; RD42 Collaboration
1999-08-01
This paper reviews the development of CVD diamond pixel detectors. The preparation of the diamond pixel sensors for bump-bonding to the pixel readout electronics for the LHC and the results from beam tests carried out at CERN are described.
Park, Jong Seok; Aziz, Moez Karim; Li, Sensen; Chi, Taiyun; Grijalva, Sandra Ivonne; Sung, Jung Hoon; Cho, Hee Cheol; Wang, Hua
2018-02-01
This paper presents a fully integrated CMOS multimodality joint sensor/stimulator array with 1024 pixels for real-time holistic cellular characterization and drug screening. The proposed system consists of four pixel groups and four parallel signal-conditioning blocks. Every pixel group contains 16 × 16 pixels, and each pixel includes one gold-plated electrode, four photodiodes, and in-pixel circuits, within a pixel footprint. Each pixel supports real-time extracellular potential recording, optical detection, charge-balanced biphasic current stimulation, and cellular impedance measurement for the same cellular sample. The proposed system is fabricated in a standard 130-nm CMOS process. Rat cardiomyocytes are successfully cultured on-chip. Measured high-resolution optical opacity images, extracellular potential recordings, biphasic current stimulations, and cellular impedance images demonstrate the unique advantages of the system for holistic cell characterization and drug screening. Furthermore, this paper demonstrates the use of optical detection on the on-chip cultured cardiomyocytes to track their cyclic beating pattern and beating rate in real time.
Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis
2012-10-01
While the dependence of face identification on the level of pixelation-transform of the images of faces has been well studied, similar research on face-based trait perception is underdeveloped. Because depiction formats used for hiding individual identity in visual media and evidential material recorded by surveillance cameras often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) have the ability to discriminate between facial cues indicative of these perceived traits from the coarse level of image pixelation (10-12 pixels per face horizontally) and that the discriminability increases with a decrease in the coarseness of pixelation. Perceived criminality and trustworthiness appear to be better carried by the pixelized images than perceived suggestibility.
Method for hyperspectral imagery exploitation and pixel spectral unmixing
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2003-01-01
An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve the abundance vector for the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point speeds up the evolution of the genetic algorithm. After obtaining the accurate abundance estimate, the procedure moves to the next pixel, using the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel with the robust filter, and again refining the robust filter solution with the genetic algorithm. This iteration continues until all pixels in the hyperspectral image cube have been processed.
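The sketch below illustrates only the pixel-to-pixel chaining idea, where each abundance solve is warm-started from the previous pixel's estimate; it uses a generic constrained least-squares solver in place of the patent's genetic algorithm and robust (Kalman-type) filter, so it is a structural analogy rather than the described method.

```python
import numpy as np
from scipy.optimize import minimize

def unmix_cube(cube, endmembers):
    """Sequential per-pixel abundance estimation with warm starts.

    A minimal sketch of the pixel-to-pixel chaining idea only: each pixel's
    abundance solve is initialised from the previous pixel's solution. The
    genetic-algorithm / robust-filter refinement is not reproduced here.
    cube: (N, B) pixel reflectances; endmembers: (B, M) endmember signatures."""
    N, B = cube.shape
    M = endmembers.shape[1]
    abundances = np.zeros((N, M))
    x_prev = np.full(M, 1.0 / M)                              # uniform start for the first pixel

    cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1.0},)  # sum-to-one constraint
    bounds = [(0.0, 1.0)] * M                                 # non-negativity constraint

    for n in range(N):
        obj = lambda x, y=cube[n]: np.sum((endmembers @ x - y) ** 2)
        res = minimize(obj, x_prev, bounds=bounds, constraints=cons, method='SLSQP')
        abundances[n] = res.x
        x_prev = res.x                                        # warm start for the next pixel
    return abundances
```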
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, Julian; Tate, Mark W.; Shanks, Katherine S.
Pixel Array Detectors (PADs) consist of an x-ray sensor layer bonded pixel-by-pixel to an underlying readout chip. This approach allows both the sensor and the custom pixel electronics to be tailored independently to best match the x-ray imaging requirements. Here we describe the hybridization of CdTe sensors to two different charge-integrating readout chips, the Keck PAD and the Mixed-Mode PAD (MM-PAD), both developed previously in our laboratory. The charge-integrating architecture of each of these PADs extends the instantaneous counting rate by many orders of magnitude beyond that obtainable with photon counting architectures. The Keck PAD chip consists of rapid, 8-frame, in-pixel storage elements with framing periods <150 ns. The second detector, the MM-PAD, has an extended dynamic range by utilizing an in-pixel overflow counter coupled with charge removal circuitry activated at each overflow. This allows the recording of signals from the single-photon level to tens of millions of x-rays/pixel/frame while framing at 1 kHz. Both detector chips consist of a 128×128 pixel array with (150 µm)² pixels.
NASA Technical Reports Server (NTRS)
Tralshawala, Nilesh; Brekosky, Regis; Figueroa-Feliciano, Enectali; Li, Mary; Stahle, Carl; Stahle, Caroline
2000-01-01
We report on our progress towards the development of arrays of X-ray microcalorimeters as candidates for the high resolution x-ray spectrometer on the Constellation-X mission. The microcalorimeter arrays (30 x 30) with appropriate pixel sizes (0.25 mm x 0.25 mm) and high packing fractions (greater than 96%) are being developed. Each individual pixel has a 10 micron thick Bi X-ray absorber that is shaped like a mushroom to increase the packing fraction, and a Mo/Au proximity effect superconducting transition edge sensor (TES). These are deposited on a 0.25 or 0.5 micron thick silicon nitride membrane with slits to provide a controllable weak thermal link to the sink temperature. Studies are underway to model, test and optimize the TES pixel uniformity, critical current, heat capacity and the membrane thermal conductance in the array structure. Fabrication issues and procedures, and results of our efforts based on these optimizations will be provided.
Figure-ground segregation: A fully nonlocal approach.
Dimiccoli, Mariella
2016-09-01
We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and it can be estimated without explicitly relying on object boundaries. The methodology is grounded on two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas. Copyright © 2015 Elsevier Ltd. All rights reserved.
Heterogeneity Measurement Based on Distance Measure for Polarimetric SAR Data
NASA Astrophysics Data System (ADS)
Xing, Xiaoli; Chen, Qihao; Liu, Xiuguo
2018-04-01
To effectively test scene heterogeneity in polarimetric synthetic aperture radar (PolSAR) data, this paper introduces a distance measure that utilizes the similarity between a sample and its pixels. Moreover, to account for the influence of the distribution and to model texture, a K distance measure is derived from the Wishart distance measure. Specifically, the average of the pixels in the local window replaces the class-center coherency or covariance matrix. The Wishart and K distance measures are calculated between this average matrix and the pixels. Then, the ratio of the standard deviation to the mean is computed for the Wishart and K distance measures, and these two features are defined and applied to reflect the complexity of the scene. The proposed heterogeneity measure is obtained by integrating the two features using the Pauli basis. Experiments conducted on single-look and multilook PolSAR data demonstrate the effectiveness of the proposed method for the detection of scene heterogeneity.
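As a hedged illustration of the window-based features described above, the following computes the Wishart distance between the local average covariance matrix and every pixel in a window, then forms the ratio of the standard deviation to the mean of those distances. The K distance measure and the Pauli-basis integration are not reproduced here.

```python
import numpy as np

def wishart_distance(sigma, C):
    """Wishart distance between a local mean matrix sigma and a pixel matrix C:
    d = ln|sigma| + tr(sigma^{-1} C). A standard form; the paper's K-distribution
    variant is not reproduced here."""
    sign, logdet = np.linalg.slogdet(sigma)
    return logdet + np.real(np.trace(np.linalg.solve(sigma, C)))

def heterogeneity_feature(window):
    """Ratio of the standard deviation to the mean of Wishart distances between
    the window-average covariance matrix and every pixel matrix in the window.
    window: (n, 3, 3) complex coherency/covariance matrices from a sliding local window."""
    sigma = window.mean(axis=0)            # local average replaces the class centre
    d = np.array([wishart_distance(sigma, C) for C in window])
    return d.std() / d.mean()
```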
Land cover change interacts with drought severity to change fire regimes in Western Amazonia.
Gutiérrez-Vélez, Víctor H; Uriarte, María; DeFries, Ruth; Pinedo-Vásquez, Miguel; Fernandes, Katia; Ceccato, Pietro; Baethgen, Walter; Padoch, Christine
Fire is becoming a pervasive driver of environmental change in Amazonia and is expected to intensify, given projected reductions in precipitation and forest cover. Understanding of the influence of post-deforestation land cover change on fires in Amazonia is limited, even though fires in cleared lands constitute a threat for ecosystems, agriculture, and human health. We used MODIS satellite data to map burned areas annually between 2001 and 2010. We then combined these maps with land cover and climate information to understand the influence of land cover change in cleared lands and dry-season severity on fire occurrence and spread in a focus area in the Peruvian Amazon. Fire occurrence, quantified as the probability of burning of individual 232-m spatial resolution MODIS pixels, was modeled as a function of the area of land cover types within each pixel, drought severity, and distance to roads. Fire spread, quantified as the number of pixels burned in 3 × 3 pixel windows around each focal burned pixel, was modeled as a function of land cover configuration and area, dry-season severity, and distance to roads. We found that vegetation regrowth and oil palm expansion are significantly correlated with fire occurrence, but that the magnitude and sign of the correlation depend on drought severity, successional stage of regrowing vegetation, and oil palm age. Burning probability increased with the area of nondegraded pastures, fallow, and young oil palm and decreased with larger extents of degraded pastures, secondary forests, and adult oil palm plantations. Drought severity had the strongest influence on fire occurrence, overriding the effectiveness of secondary forests, but not of adult plantations, to reduce fire occurrence in severely dry years. Overall, irregular and scattered land cover patches reduced fire spread but irregular and dispersed fallows and secondary forests increased fire spread during dry years. Results underscore the importance of land cover management for reducing fire proliferation in this landscape. Incentives for promoting natural regeneration and perennial crops in cleared lands might help to reduce fire risk if those areas are protected against burning in early stages of development and during severely dry years.
Indirect and direct methods for measuring a dynamic throat diameter in a solid rocket motor
NASA Astrophysics Data System (ADS)
Colbaugh, Lauren
In a solid rocket motor, nozzle throat erosion is dictated by propellant composition, throat material properties, and operating conditions. Throat erosion has a significant effect on motor performance, so it must be accurately characterized to produce a good motor design. In order to correlate throat erosion rate to other parameters, it is first necessary to know what the throat diameter is throughout a motor burn. Thus, an indirect method and a direct method for determining throat diameter in a solid rocket motor are investigated in this thesis. The indirect method looks at the use of pressure and thrust data to solve for throat diameter as a function of time. The indirect method's proof of concept was shown by the good agreement between the ballistics model and the test data from a static motor firing. The ballistics model was within 10% of all measured and calculated performance parameters (e.g. average pressure, specific impulse, maximum thrust, etc.) for tests with throat erosion and within 6% of all measured and calculated performance parameters for tests without throat erosion. The direct method involves the use of x-rays to directly observe a simulated nozzle throat erode in a dynamic environment; this is achieved with a dynamic calibration standard. An image processing algorithm is developed for extracting the diameter dimensions from the x-ray intensity digital images. Static and dynamic tests were conducted. The measured diameter was compared to the known diameter in the calibration standard. All dynamic test results were within +6% / -7% of the actual diameter. Part of the edge detection method consists of dividing the entire x-ray image by an average pixel value, calculated from a set of pixels in the x-ray image. It was found that the accuracy of the edge detection method depends upon the selection of the average pixel value area and subsequently the average pixel value. An average pixel value sensitivity analysis is presented. Both the indirect method and the direct method prove to be viable approaches to determining throat diameter during solid rocket motor operation.
Structure-aware depth super-resolution using Gaussian mixture model
NASA Astrophysics Data System (ADS)
Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon
2015-03-01
This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution under the assumption that pixels with similar color likely belong to similar depths. This assumption might induce a texture transferring from the color image into the depth map and an edge blurring artifact at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model in which an estimated depth map is considered as a feature for computing affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
Chemiresistive Graphene Sensors for Ammonia Detection.
Mackin, Charles; Schroeder, Vera; Zurutuza, Amaia; Su, Cong; Kong, Jing; Swager, Timothy M; Palacios, Tomás
2018-05-09
The primary objective of this work is to demonstrate a novel sensor system as a convenient vehicle for scaled-up repeatability and the kinetic analysis of a pixelated testbed. This work presents a sensor system capable of measuring hundreds of functionalized graphene sensors in a rapid and convenient fashion. The sensor system makes use of a novel array architecture requiring only one sensor per pixel and no selector transistor. The sensor system is employed specifically for the evaluation of Co(tpfpp)ClO4 functionalization of graphene sensors for the detection of ammonia, as an extension of previous work. Co(tpfpp)ClO4-treated graphene sensors were found to provide 4-fold increased ammonia sensitivity over pristine graphene sensors. Sensors were also found to exhibit excellent selectivity over interfering compounds such as water and common organic solvents. The ability to monitor a large sensor array with 160 pixels provides insights into performance variations and reproducibility, which are critical factors in the development of practical sensor systems. All sensors exhibit the same linearly related responses, with variations in response exhibiting Gaussian distributions, a key finding for variation modeling and quality engineering purposes. The mean correlation coefficient between sensor responses was found to be 0.999, indicating highly consistent sensor responses and excellent reproducibility of the Co(tpfpp)ClO4 functionalization. A detailed kinetic model is developed to describe the sensor response profiles. The model consists of two adsorption mechanisms, one reversible and one irreversible, and is shown capable of fitting experimental data with a mean percent error of 0.01%.
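The exact functional form of the two-mechanism kinetic model is not given in the abstract; as a hedged stand-in, the sketch below fits a response profile with the sum of two first-order adsorption terms using scipy's curve_fit. The time base, amplitudes, and rate constants are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_site_response(t, a_rev, k_rev, a_irr, k_irr):
    """Sum of two first-order adsorption terms, one intended as the reversible and
    one as the irreversible site. A hedged stand-in for the paper's kinetic model,
    whose exact functional form is not stated in the abstract."""
    return a_rev * (1 - np.exp(-k_rev * t)) + a_irr * (1 - np.exp(-k_irr * t))

# Illustrative fit to a synthetic sensor-response profile (all values assumed).
t = np.linspace(0, 300, 300)                       # seconds
synthetic = two_site_response(t, 0.8, 0.05, 0.3, 0.005) + 0.01 * np.random.randn(t.size)
params, _ = curve_fit(two_site_response, t, synthetic, p0=[1.0, 0.01, 0.1, 0.001])
print(params)                                      # fitted amplitudes and rate constants
```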
NASA Astrophysics Data System (ADS)
Adam, W.; Berdermann, E.; Bergonzo, P.; Bertuccio, G.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; D'Angelo, P.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Doroshenko, J.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fizzotti, F.; Foster, J.; Foulon, F.; Friedl, M.; Gan, K. K.; Gheeraert, E.; Gobbi, B.; Grim, G. P.; Hallewell, G.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Kass, R.; Koeth, T.; Krammer, M.; Lander, R.; Logiudice, A.; Lu, R.; mac Lynne, L.; Manfredotti, C.; Meier, D.; Mishina, M.; Moroni, L.; Oh, A.; Pan, L. S.; Pernicka, M.; Perera, L.; Pirollo, S.; Plano, R.; Procario, M.; Riester, J. L.; Roe, S.; Rott, C.; Rousseau, L.; Rudge, A.; Russ, J.; Sala, S.; Sampietro, M.; Schnetzer, S.; Sciortino, S.; Stelzer, H.; Stone, R.; Suter, B.; Tapper, R. J.; Tesarek, R.; Trischuk, W.; Tromson, D.; Vittone, E.; Wedenig, R.; Weilhammer, P.; White, C.; Zeuner, W.; Zoeller, M.
2001-06-01
Diamond based pixel detectors are a promising radiation-hard technology for use at the LHC. We present first results on a CMS diamond pixel sensor. With a threshold setting of 2000 electrons, an average pixel efficiency of 78% was obtained for normally incident minimum ionizing particles.
Giraldo, L. Ocampo; Bolotnikov, A. E.; Camarda, G. S.; ...
2017-04-20
For this study, we evaluated the X-Y position resolution achievable in 3D pixelated detectors by processing the signal waveforms read out from neighboring pixels. In these measurements we used a focused light beam, down to 10 μm, generated by a ~1 mW pulsed laser (650 nm) to carry out raster scans over selected 3×3 pixel areas, while recording the charge signals from the 9 pixels and the cathode using two synchronized digital oscilloscopes.
Preliminary Results from Small-Pixel CdZnTe and CdTe Arrays
NASA Technical Reports Server (NTRS)
Ramsey, B. D.; Sharma, D. P.; Meisner, J.; Austin, R. A.
1999-01-01
We have evaluated two small-pixel (0.75 mm) Cadmium-Zinc-Telluride arrays and one Cadmium-Telluride array, all fabricated for MSFC by Metorex (Finland) and the Baltic Science Institute (Riga, Latvia). Each array was optimized for operating temperature and collection bias. It was then exposed to Cadmium-109 and Iron-55 laboratory isotopes to measure the energy resolution for each pixel, and was then scanned with a finely-collimated x-ray beam, of width 50 microns, to examine pixel-to-pixel and inter-pixel charge collection efficiency. Preliminary results from these array tests will be presented.
Limits in point to point resolution of MOS based pixels detector arrays
NASA Astrophysics Data System (ADS)
Fourches, N.; Desforge, D.; Kebbiri, M.; Kumar, V.; Serruys, Y.; Gutierrez, G.; Leprêtre, F.; Jomard, F.
2018-01-01
In high energy physics, point-to-point resolution is a key prerequisite for particle detector pixel arrays. Current and future experiments require the development of inner detectors able to resolve the tracks of particles down to the micron range. Present-day technologies, although not fully implemented in actual detectors, can reach a 5-μm limit, this limit being based on statistical measurements, with a pixel pitch in the 10 μm range. This paper is devoted to the evaluation of the building blocks for use in pixel arrays enabling accurate tracking of charged particles. Based on simulations, we make a quantitative evaluation of the physical and technological limits on pixel size. Attempts to design small pixels based on SOI technology are briefly recalled. A design based on CMOS-compatible technologies that allows a reduction of the pixel size below one micrometer is then introduced. Its physical principle relies on a buried carrier-localizing collecting gate. The fabrication process needed by this pixel design can be based on existing process steps used in silicon microelectronics. The pixel characteristics will be discussed as well as the design of pixel arrays. The existing bottlenecks and how to overcome them will be discussed in the light of recent ion implantation and material characterization experiments.
Directional effects on NDVI and LAI retrievals from MODIS: A case study in Brazil with soybean
NASA Astrophysics Data System (ADS)
Breunig, Fábio Marcelo; Galvão, Lênio Soares; Formaggio, Antônio Roberto; Epiphanio, José Carlos Neves
2011-02-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) is largely used to estimate Leaf Area Index (LAI) using radiative transfer modeling (the "main" algorithm). When this algorithm fails for a pixel, which frequently occurs over Brazilian soybean areas, an empirical model (the "backup" algorithm) based on the relationship between the Normalized Difference Vegetation Index (NDVI) and LAI is utilized. The objective of this study is to evaluate directional effects on NDVI and subsequent LAI estimates using global (biome 3) and local empirical models, as a function of the soybean development in two growing seasons (2004-2005 and 2005-2006). The local model was derived from the pixels that had LAI values retrieved from the main algorithm. In order to keep the reproductive stage for a given cultivar as a constant factor while varying the viewing geometry, pairs of MODIS images acquired in close dates from opposite directions (backscattering and forward scattering) were selected. Linear regression relationships between the NDVI values calculated from these two directions were evaluated for different view angles (0-25°; 25-45°; 45-60°) and development stages (<45; 45-90; >90 days after planting). Impacts on LAI retrievals were analyzed. Results showed higher reflectance values in backscattering direction due to the predominance of sunlit soybean canopy components towards the sensor and higher NDVI values in forward scattering direction due to stronger shadow effects in the red waveband. NDVI differences between the two directions were statistically significant for view angles larger than 25°. The main algorithm for LAI estimation failed in the two growing seasons with gradual crop development. As a result, up to 94% of the pixels had LAI values calculated from the backup algorithm at the peak of canopy closure. Most of the pixels selected to compose the 8-day MODIS LAI product came from the forward scattering view because it displayed larger LAI values than the backscattering. Directional effects on the subsequent LAI retrievals were stronger at the peak of the soybean development (NDVI values between 0.70 and 0.85). When the global empirical model was used, LAI differences up to 3.2 for consecutive days and opposite viewing directions were observed. Such differences were reduced to values up to 1.5 with the local model. Because of the predominance of LAI retrievals from the MODIS backup algorithm during the Brazilian soybean development, care is necessary if one considers using these data in agronomic growing/yield models.
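As a minimal illustration of the quantities compared in the study above, the sketch below computes NDVI from red and near-infrared reflectance and fits the linear regression between backscattering-view and forward scattering-view NDVI values. The numbers are placeholders, not data from the paper.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance arrays."""
    return (nir - red) / (nir + red)

# Relationship between backscattering- and forward-scattering-view NDVI
# (a simple least-squares line, as in the per-angle-class regressions described above).
ndvi_back = np.array([0.62, 0.70, 0.78, 0.83])   # illustrative values only
ndvi_fwd = np.array([0.66, 0.74, 0.82, 0.86])    # illustrative values only
slope, intercept = np.polyfit(ndvi_back, ndvi_fwd, 1)
print(slope, intercept)
```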
NASA Astrophysics Data System (ADS)
Higginbottom, Thomas P.; Symeonakis, Elias; Meyer, Hanna; van der Linden, Sebastian
2018-05-01
Increasing attention is being directed at mapping the fractional woody cover of savannahs using Earth-observation data. In this study, we test the utility of Landsat TM/ETM-based spectral-temporal variability metrics for mapping regional-scale woody cover in the Limpopo Province of South Africa, for 2010. We employ a machine learning framework to compare the accuracies of Random Forest models derived using metrics calculated from different seasons. We compare these results to those from fused Landsat-PALSAR data to establish if seasonal metrics can compensate for structural information from the PALSAR signal. Furthermore, we test the applicability of a statistical variable selection method, the recursive feature elimination (RFE), in the automation of the model building process in order to reduce model complexity and processing time. All of our tests were repeated at four scales (30, 60, 90, and 120 m-pixels) to investigate the role of spatial resolution on modelled accuracies. Our results show that multi-seasonal composites combining imagery from both the dry and wet seasons produced the highest accuracies (R2 = 0.77, RMSE = 9.4, at the 120 m scale). When using a single season of observations, dry season imagery performed best (R2 = 0.74, RMSE = 9.9, at the 120 m resolution). Combining Landsat and radar imagery was only marginally beneficial, offering a mean relative improvement of 1% in accuracy at the 120 m scale. However, this improvement was concentrated in areas with lower densities of woody coverage (<30%), which are areas of concern for environmental monitoring. At finer spatial resolutions, the inclusion of SAR data actually reduced accuracies. Overall, the RFE was able to produce the most accurate model (R2 = 0.8, RMSE = 8.9, at the 120 m pixel scale). For mapping savannah woody cover at the 30 m pixel scale, we suggest that monitoring methodologies continue to exploit the Landsat archive, but should aim to use multi-seasonal derived information. When the coarser 120 m pixel scale is adequate, integration of Landsat and SAR data should be considered, especially in areas with lower woody cover densities. The use of multiple seasonal compositing periods offers promise for large-area mapping of savannahs, even in regions with a limited historical Landsat coverage.
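A minimal sketch of the modelling framework described above (Random Forest regression on spectral-temporal metrics with recursive feature elimination) is given below using scikit-learn; the metric count, sample size, and RFE settings are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

# Random stand-ins for per-pixel spectral-temporal metrics and reference woody cover.
X = np.random.rand(500, 40)        # 40 seasonal metrics per pixel (assumed)
y = np.random.rand(500) * 100      # fractional woody cover (%) reference values (assumed)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=10, step=5)   # drop 5 metrics per iteration
selector.fit(X, y)
print(selector.support_)           # boolean mask of the retained metrics
```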
Advanced Technology for Portable Personal Visualization.
1992-06-01
[Fragmented excerpt from the January-June 1992 progress report on Advanced Technology for Portable Personal Visualization; recoverable items: interactive radiosity; Virtual-Environment Ultrasound; extend the system with support for textures, model partitioning, more complex radiosity emitters, and the replacement of model parts with objects from model libraries; add real-time, interactive radiosity to the display program on Pixel-Planes 5; move the real-time model mesh-generation to ...]
NASA Technical Reports Server (NTRS)
Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.
2017-01-01
There are many flare forecasting models. For an excellent review and comparison of some of them see Barnes et al. (2016). All these models are successful to some degree, but there is a need for better models. We claim the most successful models explicitly or implicitly base their forecasts on various estimates of components of the photospheric current density J, based on observations of the photospheric magnetic field B. However, none of the models we are aware of compute the complete J. We seek to develop a better model based on computing the complete photospheric J. Initial results from this model are presented in this talk. We present a data-driven, near-photospheric, 3-D, non-force-free magnetohydrodynamic (MHD) model that computes time series of the total J, and the associated resistive heating rate in each pixel at the photosphere in the neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of B measured by the Helioseismic & Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series of B in every AR pixel. Errors in B due to these periods can be significant.
Selective document image data compression technique
Fu, C.Y.; Petrich, L.I.
1998-05-19
A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
NASA Astrophysics Data System (ADS)
Abbasi, S.; Galioglu, A.; Shafique, A.; Ceylan, O.; Yazici, M.; Gurbuz, Y.
2017-02-01
A 32x32 prototype of a digital readout IC (DROIC) for medium-wave infrared focal plane arrays (MWIR IR-FPAs) is presented. The DROIC employs in-pixel photocurrent to digital conversion based on a pulse frequency modulation (PFM) loop and boasts a novel feature of off-pixel residue conversion using 10-bit column SAR ADCs. The remaining charge at the end of integration in typical PFM based digital pixel sensors is usually wasted. Previous works employing in-pixel extended counting methods make use of extra memory and counters to convert this left-over charge to digital, thereby performing fine conversion of the incident photocurrent. This results in a low quantization noise and hence keeps the readout noise low. However, focal plane arrays (FPAs) with small pixel pitch are constrained in pixel area, which makes it difficult to benefit from in-pixel extended counting circuitry. Thus, in this work, a novel approach to measure the residue outside the pixel using column-parallel SAR ADCs has been proposed. Moreover, a modified version of the conventional PFM based pixel has been designed to help hold the residue charge and buffer it to the column ADC. In addition to the 2D array of pixels, the prototype consists of 32 SAR ADCs, a timing controller block and a memory block to buffer the residue data coming out of the ADCs. The prototype has been designed and fabricated in 90nm CMOS.
Design of the low area monotonic trim DAC in 40 nm CMOS technology for pixel readout chips
NASA Astrophysics Data System (ADS)
Drozd, A.; Szczygiel, R.; Maj, P.; Satlawa, T.; Grybos, P.
2014-12-01
The recent research in hybrid pixel detectors working in single photon counting mode focuses on nanometer or 3D technologies which allow making pixels smaller and implementing more complex solutions in each of the pixels. Usually a single pixel in readout electronics for X-ray detection comprises a charge amplifier, shaper and discriminator, which allow classification of events occurring at the detector as true or false hits by comparing the amplitude of the obtained signal with a threshold voltage, minimizing the influence of noise effects. However, making the pixel size smaller often causes problems with pixel-to-pixel uniformity, and additional effects like charge sharing become more visible. To improve channel-to-channel uniformity or implement an algorithm for charge sharing effect minimization, small area trimming DACs working in each pixel independently are necessary. However, meeting the requirement of small area often results in poor linearity and even non-monotonicity. In this paper we present a novel low-area thermometer coded 6-bit DAC implemented in 40 nm CMOS technology. Monte Carlo simulations were performed on the described design, proving that under all conditions the designed DAC is inherently monotonic. The presented DAC was implemented in a prototype readout chip with 432 pixels working in single photon counting mode, with two trimming DACs in each pixel. Each DAC occupies an area of 8 μm × 18.5 μm. Measurements and chip tests were performed to obtain reliable statistical results.
Selective document image data compression technique
Fu, Chi-Yung; Petrich, Loren I.
1998-01-01
A method of storing information from filled-in form-documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.
Development of pixellated Ir-TESs
NASA Astrophysics Data System (ADS)
Zen, Nobuyuki; Takahashi, Hiroyuki; Kunieda, Yuichi; Damayanthi, Rathnayaka M. T.; Mori, Fumiakira; Fujita, Kaoru; Nakazawa, Masaharu; Fukuda, Daiji; Ohkubo, Masataka
2006-04-01
We have been developing Ir-based pixellated superconducting transition edge sensors (TESs). For material-science or astronomical applications, a sensor with an energy resolution of a few eV and an imaging capability of over 1000 pixels is desired. In order to achieve this goal, we have been analyzing signals from pixellated TESs. In the case of a 20-pixel array of Ir-TESs, with 45 μm×45 μm pixel sizes, the incident X-ray signals have been classified into 16 groups, and numerical signal analysis has been applied. The raw energy resolution of our pixellated TES is strongly degraded; however, using pulse-shape analysis, we can dramatically improve it. Thus, we consider that pulse-signal analysis will allow this device to be used as a practical TES that identifies the incident photon position.
NASA Astrophysics Data System (ADS)
Heck, E. J.; Owsley, B.; Henebry, G. M.; de Beurs, K.
2017-12-01
Since the beginning of the MODIS data record in 2000, several improvements have been made to all available products. Currently, the MODIS community is in the process of completing the roll out of collection 6. While the community takes great interest in data continuity models, less attention has been given to how the changes that are carried out between collections impact the findings of earlier analyses. Here, we assess differences between change detection results from Collection 005 and 006 of the Nadir BRDF-Adjusted Reflectance product (MCD43), the Gridded Vegetation Indices product (MOD13) and the Land Surface Temperature product (MOD11), all at 0.05-degree resolution across the Western Hemisphere. We applied the non-parametric Seasonal-Kendall trend test to time series from C005 and C006 to identify areas of significant change during the period 2001-2016. We analyzed the significant trends and the differences between collections by country, by IGBP land cover class, and by the Human Impact Index category. Preliminary results from the MOD13 product indicate agreement between C005 and C006 for 65% of the pixels when investigating EVI trends and 74% of the pixels when investigating NDVI trends. Only 1% of the pixels reveal a full disagreement in the signal of the significance trends, e.g. negative trend in C005 paired with a positive trend in C006. Though the percentage of complete reversal is low, there are variations of discrepancies between collections that are consistent between both the MOD13 and MCD43 products. For example, while almost 18% of the pixels revealed a significant browning in C005, only 5.80% of the pixels revealed significant browning in both collections. We found that 11% of the pixels with significant browning in C005, were stable in C006. Vice versa, we found that 19.18% of all pixels revealed a significant positive trend in C006 while these pixels were stable in C005. Even though C006 reveals more than double the percentage of positive trends (from 12% in C005 to 32% in C006), the large clusters of significant trends appear in the same spatial regions. We did not find similar drastic differences between collections when analyzing the trend results based on the Land Surface Temperature product. However, some key areas, such as the Amazon region, reveal significant differences.
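For readers unfamiliar with the trend test used above, the following is a plain implementation of a seasonal Mann-Kendall test (ignoring tie corrections for brevity); it is a generic version of the test, not the authors' processing code, and the example series is synthetic.

```python
import numpy as np
from scipy.stats import norm

def seasonal_kendall(series, period=12):
    """Seasonal (e.g. per-month) Mann-Kendall trend test for a 1-D time series.
    Returns the combined S statistic and a two-sided p-value. A plain
    implementation of the standard test that ignores tie corrections."""
    S, var_S = 0.0, 0.0
    for m in range(period):
        x = series[m::period]
        n = len(x)
        s = 0
        for i in range(n - 1):
            s += np.sign(x[i + 1:] - x[i]).sum()
        S += s
        var_S += n * (n - 1) * (2 * n + 5) / 18.0
    z = (S - np.sign(S)) / np.sqrt(var_S) if var_S > 0 else 0.0   # continuity-corrected z
    p = 2 * norm.sf(abs(z))
    return S, p

# Example: a 16-year monthly NDVI-like series with a weak upward trend (synthetic).
t = np.arange(16 * 12)
series = 0.4 + 0.1 * np.sin(2 * np.pi * t / 12) + 0.0005 * t + 0.02 * np.random.randn(t.size)
print(seasonal_kendall(series, period=12))
```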
NASA Astrophysics Data System (ADS)
Antioquia, C. T.; Uy, S. N.; Caballa, K.; Lagrosas, N.
2014-12-01
Ground-based sky imaging cameras have been used to measure cloud cover over an area to aid radiation budget models. During daytime, certain clouds tend to help decrease atmospheric temperature by obstructing sunrays in the atmosphere. Thus, the detection of clouds plays an important role in the formulation of the radiation budget of the atmosphere. In this study, a wide-angled sky imager (GoPro Hero 2) was brought on board M/Y Vasco to detect and quantify cloud occurrence over the sea during the 2nd 7SEAS field campaign. The camera is just one of a number of scientific instruments used to measure weather, aerosol chemistry and solar radiation, among others. The data collection started during the departure from Manila Bay on 05 September 2012 and went on until the end of the cruise (29 September 2012). The camera was placed in a weather-proof box that was then affixed to a steel mast where other instruments were also attached during the cruise. The data have a temporal resolution of 1 minute, and each image is 500x666 pixels in size. Fig. 1a shows the track of the ship during the cruise. The red, blue, hue, saturation, and value components of the pixels are analysed for cloud occurrence. A pixel is considered to contain thick cloud if it passes all four threshold parameters (R-B, R/B, (R-B)/(R+B), and HSV, where R is the red pixel color value, B is the blue pixel color value, and HSV is the hue-saturation-value representation of the pixel) and thin cloud if it passes two or three parameters. Fig. 1b shows the daily analysis of cloud occurrence. Cloud occurrence here is quantified as the ratio of the pixels with cloud to the total number of pixels in the image. The average cloud cover for the days included in this dataset is 87%. These measurements show a big contrast when compared to cloud cover over land (Manila Observatory), which is usually around 67%. During the cruise, only one day (September 6) had an average cloud occurrence below 50%; the rest of the days had averages of 66% or higher, 98% being the highest. This result gives a general trend of how cloud occurrences over land and over sea differ in the South East Asian region. In this study, these cloud occurrences come from local convection and clouds brought about by Southwest Monsoon winds.
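A hedged sketch of the four-parameter cloud test described above is given below; the threshold values and the pass/fail directions are illustrative assumptions, since the campaign's calibrated settings are not stated in the abstract.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def cloud_pixel_fraction(image_rgb, rb_diff=30.0, rb_ratio=0.8, rb_norm=0.15, sat=0.25):
    """Fraction of image pixels flagged as cloud by the four colour criteria
    (R-B, R/B, (R-B)/(R+B), and HSV saturation). Threshold values and pass/fail
    directions are illustrative assumptions for this sketch only.
    image_rgb: (H, W, 3) uint8 sky image."""
    img = image_rgb.astype(float)
    R, B = img[..., 0], img[..., 2]
    sat_map = rgb_to_hsv(img / 255.0)[..., 1]

    passes = (
        ((R - B) > -rb_diff).astype(int) +                           # small blue excess
        ((R / np.maximum(B, 1.0)) > rb_ratio).astype(int) +          # high red-to-blue ratio
        ((np.abs(R - B) / np.maximum(R + B, 1.0)) < rb_norm).astype(int) +
        (sat_map < sat).astype(int)                                  # whitish, unsaturated pixels
    )
    thick = passes == 4                  # passes all four tests -> thick cloud
    thin = (passes == 2) | (passes == 3) # passes two or three tests -> thin cloud
    return (thick | thin).mean()
```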
Multichroic Bolometric Detector Architecture for Cosmic Microwave Background Polarimetry Experiments
NASA Astrophysics Data System (ADS)
Suzuki, Aritoki
Characterization of the Cosmic Microwave Background (CMB) B-mode polarization signal will test models of inflationary cosmology, as well as constrain the sum of the neutrino masses and other cosmological parameters. The low intensity of the B-mode signal combined with the need to remove polarized galactic foregrounds requires a sensitive millimeter receiver and effective methods of foreground removal. Current bolometric detector technology is reaching the sensitivity limit set by the CMB photon noise. Thus, we need to increase the optical throughput to increase an experiment's sensitivity. To increase the throughput without increasing the focal plane size, we can increase the frequency coverage of each pixel. Increased frequency coverage per pixel has the additional advantage that we can split the signal into frequency bands to obtain spectral information. The detection of multiple frequency bands allows for removal of the polarized foreground emission from synchrotron radiation and thermal dust emission, by utilizing its spectral dependence. Traditionally, spectral information has been captured with a multi-chroic focal plane consisting of a heterogeneous mix of single-color pixels. To maximize the efficiency of the focal plane area, we developed a multi-chroic pixel. This increases the number of pixels per frequency for the same focal plane area. We developed a multi-chroic antenna-coupled transition edge sensor (TES) detector array for CMB polarimetry. In each pixel, a silicon lens-coupled dual polarized sinuous antenna collects light over a two-octave frequency band. The antenna couples the broadband millimeter wave signal into microstrip transmission lines, and on-chip filter banks split the broadband signal into several frequency bands. Separate TES bolometers detect the power in each frequency band and linear polarization. We will describe the design and performance of these devices and present optical data taken with prototype pixels and detector arrays. Our measurements show beams with percent level ellipticity, percent level cross-polarization leakage, and partitioned bands using banks of two and three filters. We will also describe the development of broadband anti-reflection coatings for the high dielectric constant lens. The broadband anti-reflection coating has approximately 100% bandwidth and no detectable loss at cryogenic temperature. We will describe a next generation CMB polarimetry experiment, the POLARBEAR-2, in detail. The POLARBEAR-2 would have focal planes with kilo-pixels of these detectors to achieve high sensitivity. We will also introduce proposed experiments that would use the multi-chroic detector array developed in this work, and conclude with suggestions for future multichroic detector development.
NASA Astrophysics Data System (ADS)
Heck, E. J.; Owsley, B.; Henebry, G. M.; de Beurs, K.
2016-12-01
Since the beginning of the MODIS data record in 2000, several improvements have been made to all available products. Currently, the MODIS community is in the process of completing the roll out of collection 6. While the community takes great interest in data continuity models, less attention has been given to how the changes that are carried out between collections impact the findings of earlier analyses. Here, we assess differences between change detection results from Collection 005 and 006 of the Nadir BRDF-Adjusted Reflectance product (MCD43), the Gridded Vegetation Indices product (MOD13) and the Land Surface Temperature product (MOD11), all at 0.05-degree resolution across the Western Hemisphere. We applied the non-parametric Seasonal-Kendall trend test to time series from C005 and C006 to identify areas of significant change during the period 2001-2016. We analyzed the significant trends and the differences between collections by country, by IGBP land cover class, and by the Human Impact Index category. Preliminary results from the MOD13 product indicate agreement between C005 and C006 for 65% of the pixels when investigating EVI trends and 74% of the pixels when investigating NDVI trends. Only 1% of the pixels reveal a full disagreement in the signal of the significance trends, e.g. negative trend in C005 paired with a positive trend in C006. Though the percentage of complete reversal is low, there are variations of discrepancies between collections that are consistent between both the MOD13 and MCD43 products. For example, while almost 18% of the pixels revealed a significant browning in C005, only 5.80% of the pixels revealed significant browning in both collections. We found that 11% of the pixels with significant browning in C005, were stable in C006. Vice versa, we found that 19.18% of all pixels revealed a significant positive trend in C006 while these pixels were stable in C005. Even though C006 reveals more than double the percentage of positive trends (from 12% in C005 to 32% in C006), the large clusters of significant trends appear in the same spatial regions. We did not find similar drastic differences between collections when analyzing the trend results based on the Land Surface Temperature product. However, some key areas, such as the Amazon region, reveal significant differences.
Spectral X-Ray Diffraction using a 6 Megapixel Photon Counting Array Detector.
Muir, Ryan D; Pogranichniy, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J
2015-03-12
Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to perform separation of dual-energy diffraction measurements into two monochromatic images. Targeted applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.
Spectral x-ray diffraction using a 6 megapixel photon counting array detector
NASA Astrophysics Data System (ADS)
Muir, Ryan D.; Pogranichniy, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.
2015-03-01
Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to perform separation of dual-energy diffraction measurements into two monochromatic images. Targeted applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.
Mapping Electrical Crosstalk in Pixelated Sensor Arrays
NASA Technical Reports Server (NTRS)
Seshadri, Suresh (Inventor); Cole, David (Inventor); Smith, Roger M. (Inventor); Hancock, Bruce R. (Inventor)
2017-01-01
The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage and reading out a first image, then resetting only a subset of pixels in the array to a second voltage and reading out a second image; the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.
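Conceptually, the measurement reduces to differencing the two read-out frames and inspecting the response of pixels that were not re-reset; a minimal sketch of that bookkeeping (not the patented procedure itself) is shown below.

```python
import numpy as np

def crosstalk_map(full_reset_image, partial_reset_image, reset_pixels):
    """Difference image from the two-reset procedure described above.
    full_reset_image:    frame read out after resetting every pixel to a first voltage
    partial_reset_image: frame read out after re-resetting only `reset_pixels` to a second voltage
    reset_pixels:        boolean mask of the pixels that were re-reset.
    The signal appearing in non-reset neighbours of the re-reset pixels reflects
    inter-pixel (capacitive) coupling. A conceptual sketch only."""
    diff = partial_reset_image.astype(float) - full_reset_image.astype(float)
    coupled = diff.copy()
    coupled[reset_pixels] = 0.0   # keep only the response of pixels that were not re-reset
    return diff, coupled
```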
Mapping Electrical Crosstalk in Pixelated Sensor Arrays
NASA Technical Reports Server (NTRS)
Smith, Roger M (Inventor); Hancock, Bruce R. (Inventor); Cole, David (Inventor); Seshadri, Suresh (Inventor)
2013-01-01
The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage and reading out a first image, then resetting only a subset of pixels in the array to a second voltage and reading out a second image; the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.
Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers
NASA Astrophysics Data System (ADS)
Jiang, Chufan; Li, Beiwen; Zhang, Song
2017-04-01
This paper presents a method that can recover absolute phase pixel by pixel without embedding markers in three phase-shifted fringe patterns, acquiring additional images, or introducing additional hardware component(s). The proposed three-dimensional (3D) absolute shape measurement technique includes the following major steps: (1) segment the measured object into different regions using rough prior knowledge of surface geometry; (2) artificially create phase maps at different z planes using geometric constraints of the structured light system; (3) unwrap the phase pixel by pixel for each region by properly referring to the artificially created phase map; and (4) merge the unwrapped phases from all regions into a complete absolute phase map for 3D reconstruction. We demonstrate that conventional three-step phase-shifted fringe patterns can be used to create an absolute phase map pixel by pixel even for objects with a large depth range. We have successfully implemented our proposed computational framework to achieve absolute 3D shape measurement at 40 Hz.
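Steps (1)-(4) above operate on a wrapped phase computed from the three phase-shifted patterns; the standard three-step formula for 2π/3 shifts is sketched below. The paper's region segmentation and reference-plane unwrapping are not reproduced here.

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped phase from three fringe patterns with 2*pi/3 phase shifts
    (the standard three-step formula). Each argument is a float image array."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```

The returned phase lies in (-pi, pi] per pixel; the absolute-phase recovery described above then resolves the 2*pi ambiguities region by region.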
Development of a universal water signature for the LANDSAT-3 Multispectral Scanner, part 2 of 2
NASA Technical Reports Server (NTRS)
Schlosser, E. H.
1980-01-01
A generalized four-channel hyperplane to discriminate water from non-water was developed using LANDSAT-3 multispectral scanner (MSS) scenes and matching same/next-day color infrared aerial photography. The MSS scenes over upstate New York, eastern Washington, Montana and Louisiana taken between May and October 1978 varied in Sun elevation angle from 40 to 58 degrees. The 28 matching air photo frames selected for analysis contained over 1400 water bodies larger than one surface acre. A preliminary water discriminant was used to screen the data and eliminate from further consideration all pixels distant from water in MSS spectral space. Approximately 1300 pixels, half of them non-edge water pixels and half non-water pixels spectrally close to water, were labelled. A linear discriminant was iteratively fitted to the labelled pixels, giving more weight to those pixels that were difficult to discriminate. This discriminant correctly classified 98.7 percent of the water pixels and 98.6 percent of the non-water pixels.
A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image
Li, Li; Hou, Qingzheng; Lu, Jianfeng; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen
2014-01-01
Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into the invoice image, pixels need to be flipped; the larger the watermark, the more pixels must be flipped. We propose a new pixel-flipping method for invoice images that provides a large watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image retains its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep good connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity. PMID:25489606
Measuring and Estimating Normalized Contrast in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2013-01-01
Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating the part surface with a flash from flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera as the pixel intensity of image pixels. The technique involves recording the IR video image data and analyzing the data using the normalized pixel intensity and temperature contrast analysis method to characterize void-like flaws in terms of depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and the normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements made during data acquisition.
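For illustration only, a commonly used form of normalized pixel-intensity contrast (the flash-induced rise of a pixel relative to the rise of a flaw-free reference region) can be computed as sketched below; this is a generic definition, not the new definition introduced in the paper, and the array shapes and region masks are assumptions.

```python
import numpy as np

def normalized_contrast(pix_evolution, sound_region, pre_flash_frame=0):
    """
    Illustrative normalized pixel-intensity contrast for flash thermography.

    pix_evolution : array (frames, rows, cols) of IR pixel intensities
    sound_region  : boolean mask of flaw-free ("sound") reference pixels
    Contrast is defined here, for illustration only, as the flash-induced rise
    of each pixel divided by the rise of the sound-region mean, minus one.
    """
    pre = pix_evolution[pre_flash_frame]                 # pre-flash image
    post = pix_evolution[pre_flash_frame + 1:]           # post-flash frames
    rise = post - pre
    sound_rise = rise[:, sound_region].mean(axis=1)      # reference evolution
    return rise / sound_rise[:, None, None] - 1.0

# Usage sketch with synthetic data (assumed shapes only):
frames = np.random.rand(100, 240, 320) + 1.0
sound = np.zeros((240, 320), dtype=bool)
sound[:50, :50] = True
contrast = normalized_contrast(frames, sound)
print(contrast.shape)   # (99, 240, 320); flaw pixels depart from zero
```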
Vectorial mask optimization methods for robust optical lithography
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.
2012-10-01
Continuous shrinkage of critical dimension in an integrated circuit impels the development of resolution enhancement techniques for low k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for the process variations. However, the lithography systems with larger-NA (NA>0.6) are predominant for current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.
Imaging quality evaluation method of pixel coupled electro-optical imaging system
NASA Astrophysics Data System (ADS)
He, Xu; Yuan, Li; Jin, Chunqi; Zhang, Xiaohui
2017-09-01
With advancements in high-resolution imaging optical fiber bundle fabrication technology, traditional photoelectric imaging systems have become "flexible", with greatly reduced volume and weight. However, traditional image quality evaluation models are limited by the coupled discrete sampling effect of fiber-optic image bundles and charge-coupled device (CCD) pixels. This limitation substantially complicates the design, optimization, assembly, and image-quality evaluation of the coupled discrete-sampling imaging system. Based on the transfer process of a grayscale cosine-distributed optical signal through the fiber-optic image bundle and CCD, a mathematical model of the coupled modulation transfer function (coupled-MTF) is established. This model can be used as a basis for subsequent studies of the function's convergence and periodically oscillating characteristics. We also propose the concept of the average coupled-MTF, which is consistent with the definition of the traditional MTF. Based on this concept, the relationships among core distance, core layer radius, and average coupled-MTF are investigated.
Gaussian mixed model in support of semiglobal matching leveraged by ground control points
NASA Astrophysics Data System (ADS)
Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li
2017-04-01
Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated with a Gaussian mixture model, which strengthens the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. The other term depends on pixel-wise confidence, and we further design a confidence-updating equation based on three rules. With this confidence-based term, the disparity assignment can be heuristically selected among the disparity search ranges during the iteration process. Several iterations are sufficient to produce satisfactory results in our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, a representative variant of SGM that performs well on aerial images.
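The flavour of the GCP-based data term can be sketched as a Gaussian-shaped penalty added to the matching-cost volume before SGM path aggregation. The weights, σ, and the restriction of the penalty to the GCP pixels themselves are illustrative assumptions; the paper's Gaussian mixture formulation, which spreads the influence of GCPs to surrounding pixels, is not reproduced here.

```python
import numpy as np

def gcp_data_term(height, width, disp_range, gcps, sigma=2.0, weight=10.0):
    """
    Illustrative GCP-based penalty added to an SGM cost volume.

    gcps : list of (row, col, disparity) ground control points
    Returns an array (height, width, disp_range): at a GCP pixel, disparities
    far from the GCP disparity are penalised with a Gaussian-shaped term.
    """
    d = np.arange(disp_range, dtype=float)
    term = np.zeros((height, width, disp_range), dtype=float)
    for r, c, d_gcp in gcps:
        # Penalty grows as the candidate disparity departs from the GCP value.
        term[r, c, :] = weight * (1.0 - np.exp(-(d - d_gcp) ** 2 / (2 * sigma ** 2)))
    return term

# Usage: add the term to a matching-cost volume before SGM path aggregation.
cost = np.random.rand(100, 120, 64)                     # hypothetical cost volume
cost += gcp_data_term(100, 120, 64, gcps=[(10, 15, 22), (40, 80, 31)])
```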
Microcomputer-based classification of environmental data in municipal areas
NASA Astrophysics Data System (ADS)
Thiergärtner, H.
1995-10-01
Multivariate data-processing methods used in mineral resource identification can be used to classify urban regions. Using elements of expert systems, geographical information systems, and known classification and prognosis systems, it is possible to outline a single model that consists of persistent and temporary parts of a knowledge base (including graphical input and output handling) and of persistent and temporary elements of a bank of methods and algorithms. Whereas decision rules created by experts are stored directly in expert systems, powerful classification rules in the form of persistent but latent (implicit) decision algorithms may be implemented in the suggested model. The latent functions are transformed into temporary explicit decision rules by learning processes that depend on the actual task(s), parameter set(s), pixel selection(s), and expert control(s). This takes place in both supervised and unsupervised classification of multivariately described pixel sets representing municipal subareas. The model is outlined briefly and illustrated by results obtained in a target area covering a part of the city of Berlin (Germany).
NASA Technical Reports Server (NTRS)
Lyon, R. J. P.; Prelat, A. E.; Kirk, R. (Principal Investigator)
1981-01-01
An attempt was made to match HCMM- and U2HCMR-derived temperature data over two test sites of very local size to similar data collected in the field at nearly the same times. Results indicate that HCMM investigations using resolution cells of 500 m or so are best conducted with areally extensive sites, rather than point observations. The excellent-quality day-VIS imagery is particularly useful for lineament studies, as is the DELTA-T imagery. Attempts to register the ground-observed temperatures (even for 0.5 sq mile targets) were unsuccessful due to excessive pixel-to-pixel noise in the HCMM data. Several computer models were explored and related to thermal parameter value changes with observed data. Unless quite complex models are used, with many parameters that can be observed (perhaps not even measured) only under remote sensing conditions (e.g., roughness, wind shear, etc.), the model outputs do not match the observed data. Empirical relationships may be most readily studied.
The Speedster-EXD- A New Event-Driven Hybrid CMOS X-ray Detector
NASA Astrophysics Data System (ADS)
Griffith, Christopher V.; Falcone, Abraham D.; Prieskorn, Zachary R.; Burrows, David N.
2016-01-01
The Speedster-EXD is a new 64×64 pixel, 40-μm pixel pitch, 100-μm depletion depth hybrid CMOS x-ray detector with the capability of reading out only those pixels containing event charge, thus enabling fast effective frame rates. A global charge threshold can be specified, and pixels containing charge above this threshold are flagged and read out. The Speedster detector has also been designed with other advanced in-pixel features to improve performance, including a low-noise, high-gain capacitive transimpedance amplifier that eliminates interpixel capacitance crosstalk (IPC), and in-pixel correlated double sampling subtraction to reduce reset noise. We measure the best energy resolution on the Speedster-EXD detector to be 206 eV (3.5%) at 5.89 keV and 172 eV (10.0%) at 1.49 keV. The average IPC to the four adjacent pixels is measured to be 0.25%±0.2% (i.e., consistent with zero). The pixel-to-pixel gain variation is measured to be 0.80%±0.03%, and a Monte Carlo simulation is applied to better characterize the contributions to the energy resolution.
Random On-Board Pixel Sampling (ROPS) X-Ray Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui; Iaroshenko, O.; Li, S.
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.
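A minimal sketch of the sampling step, assuming a fixed sampling fraction and leaving the reconstruction (compressed sensing or inpainting) out, might look like this; the frame contents and sampling fraction are illustrative assumptions.

```python
import numpy as np

def random_pixel_sample(frame, fraction, rng=None):
    """
    Illustrative random on-board pixel sampling: read out only a random
    subset of pixels and return their coordinates and values (a sparse frame).
    """
    rng = np.random.default_rng(rng)
    mask = rng.random(frame.shape) < fraction
    rows, cols = np.nonzero(mask)
    return rows, cols, frame[mask]

# Usage: sample 10% of a hypothetical X-ray frame; a compressed-sensing or
# inpainting step (not shown) would reconstruct the full image afterwards.
frame = np.random.poisson(2.0, size=(256, 256)).astype(float)
rows, cols, values = random_pixel_sample(frame, fraction=0.10, rng=0)
print(len(values), "of", frame.size, "pixels read out")
```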
Circuit for high resolution decoding of multi-anode microchannel array detectors
NASA Technical Reports Server (NTRS)
Kasle, David B. (Inventor)
1995-01-01
A circuit for high resolution decoding of multi-anode microchannel array detectors consisting of input registers accepting transient inputs from the anode array; anode encoding logic circuits connected to the input registers; midpoint pipeline registers connected to the anode encoding logic circuits; and pixel decoding logic circuits connected to the midpoint pipeline registers is described. A high resolution algorithm circuit operates in parallel with the pixel decoding logic circuit and computes a high resolution least significant bit to enhance the multianode microchannel array detector's spatial resolution by halving the pixel size and doubling the number of pixels in each axis of the anode array. A multiplexer is connected to the pixel decoding logic circuit and allows a user selectable pixel address output according to the actual multi-anode microchannel array detector anode array size. An output register concatenates the high resolution least significant bit onto the standard ten bit pixel address location to provide an eleven bit pixel address, and also stores the full eleven bit pixel address. A timing and control state machine is connected to the input registers, the anode encoding logic circuits, and the output register for managing the overall operation of the circuit.
Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment
Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...
Characterization of pixel sensor designed in 180 nm SOI CMOS technology
NASA Astrophysics Data System (ADS)
Benka, T.; Havranek, M.; Hejtmanek, M.; Jakovenko, J.; Janoska, Z.; Marcisovska, M.; Marcisovsky, M.; Neue, G.; Tomasek, L.; Vrba, V.
2018-01-01
A new type of X-ray imaging Monolithic Active Pixel Sensor (MAPS), X-CHIP-02, was developed using a 180 nm deep-submicron Silicon On Insulator (SOI) CMOS commercial technology. Two pixel matrices, which differ in pixel pitch (50 μm and 100 μm), were integrated into the prototype chip. The X-CHIP-02 contains several test structures, which are useful for characterization of individual blocks. The sensitive part of the pixel, integrated in the handle wafer, is one of the key structures designed for testing. The purpose of this structure is to determine the capacitance of the sensitive part (the diode in the MAPS pixel). The measured capacitance is 2.9 fF for the 50 μm pixel pitch and 4.8 fF for the 100 μm pixel pitch at -100 V (the default operational voltage). This structure was also used to measure the IV characteristics of the sensitive diode. In this work, we report on a circuit designed for precise determination of the sensor capacitance and the IV characteristics of both pixel types with respect to X-ray irradiation. The motivation for measuring the sensor capacitance is its importance for the design of front-end amplifier circuits. The design of the pixel elements, as well as circuit simulation and laboratory measurement techniques, are described. The experimental results are of great importance for further development of MAPS sensors in this technology.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, where each pixel in the filled edge array that corresponds to an edge pixel has a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each pixel in the filled edge array that does not correspond to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
Fu, C.Y.; Petrich, L.I.
1997-03-25
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, where each pixel in the filled edge array that corresponds to an edge pixel has a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each pixel in the filled edge array that does not correspond to an edge pixel has a value which is a weighted average of the values of the surrounding pixels in the filled edge array that do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques are also described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
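A compact off-line sketch of the filling-and-differencing idea is given below. It uses a plain Jacobi relaxation with periodic boundary handling instead of the multi-grid solver, and a crude gradient threshold as the edge detector; both are simplifications for illustration, not the patented method.

```python
import numpy as np

def fill_from_edges(values, edge_mask, n_iter=2000):
    """
    Fill non-edge pixels by (approximately) solving Laplace's equation with
    the edge pixels held as fixed boundary values. A plain Jacobi iteration
    with periodic wrap-around at the borders is used here for brevity.
    """
    filled = np.where(edge_mask, values, values[edge_mask].mean())
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                      np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(edge_mask, values, avg)   # keep edge pixels fixed
    return filled

# Sketch of the compression split: edge file + residual ("difference") array.
image = np.random.rand(64, 64)
grad = np.hypot(*np.gradient(image))
edges = grad > np.percentile(grad, 90)              # crude edge detector
filled_edge_array = fill_from_edges(image, edges)
difference_array = image - filled_edge_array        # compressed separately
```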
NASA Astrophysics Data System (ADS)
Tamura, K.; Jansen, R. A.; Eskridge, P. B.; Cohen, S. H.; Windhorst, R. A.
2010-06-01
We present the results of a study of the late-type spiral galaxy NGC 0959, before and after application of the pixel-based dust extinction correction described in Tamura et al. (Paper I). Galaxy Evolution Explorer far-UV and near-UV, ground-based Vatican Advanced Technology Telescope UBVR, and Spitzer/Infrared Array Camera 3.6, 4.5, 5.8, and 8.0 μm images are studied through pixel color-magnitude diagrams and pixel color-color diagrams (pCCDs). We define groups of pixels based on their distribution in a pCCD of (B - 3.6 μm) versus (FUV - U) colors after extinction correction. In the same pCCD, we trace their locations before the extinction correction was applied. This shows that selecting pixel groups is not meaningful when using colors uncorrected for dust. We also trace the distribution of the pixel groups on a pixel coordinate map of the galaxy. We find that the pixel-based (two-dimensional) extinction correction is crucial for revealing the spatial variations in the dominant stellar population, averaged over each resolution element. Different types and mixtures of stellar populations, and galaxy structures such as a previously unrecognized bar, become readily discernible in the extinction-corrected pCCD and as coherent spatial structures in the pixel coordinate map.
Characterization of Pixelated Cadmium-Zinc-Telluride Detectors for Astrophysical Applications
NASA Technical Reports Server (NTRS)
Gaskin, Jessica; Sharma, Dharma; Ramsey, Brian; Seller, Paul
2003-01-01
Comparisons of charge-sharing and charge-loss measurements between two pixelated Cadmium-Zinc-Telluride (CdZnTe) detectors are discussed. These properties, along with the detector geometry, help define the limiting energy resolution and spatial resolution of the detector in question. The first detector consists of a 1-mm-thick piece of CdZnTe sputtered with a 4x4 array of pixels with a pixel pitch of 750 microns (inter-pixel gap of 100 microns). Signal readout is via discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap of 50 microns). This crystal is bonded to a custom-built readout chip (ASIC) providing all front-end electronics to each of the 256 independent pixels. These detectors are precursors to the detector that will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs a spatial resolution of around 200 microns to take full advantage of the HERO angular resolution. We discuss the degree to which charge sharing degrades energy resolution while improving spatial resolution through position interpolation.
NASA Astrophysics Data System (ADS)
Kim, Tae-Wook; Park, Sang-Gyu; Choi, Byong-Deok
2011-03-01
The previous pixel-level digital-to-analog conversion (DAC) scheme, which implements part of a DAC in the pixel circuit, has proven very efficient for reducing the peripheral area of an integrated data driver fabricated with low-temperature polycrystalline silicon thin-film transistors (LTPS TFTs). However, because LTPS TFTs suffer from random variations in their characteristics, an open issue in this scheme is how the pixel-level DAC can be made compatible with existing pixel circuits, including schemes that compensate for TFT variations and IR drops on the supply rails, which is of primary importance for active-matrix organic light-emitting diode (AMOLED) displays. In this paper, we show that the pixel-level DAC scheme can be successfully combined with the previous compensation schemes by giving two examples of voltage- and current-programming pixels. The previous pixel-level DAC schemes require two additional TFTs and one capacitor, but for the newly proposed pixel circuits the overhead is no more than two TFTs, because the already existing capacitor is reused. In addition, a detailed analysis shows that the pixel-level DAC can be expanded to a 4-bit resolution, or be applied together with 1:2 demultiplexing driving, for 6- to 8-in. diagonal XGA AMOLED display panels.
Deptuch, Grzegorz W.; Fahim, Farah; Grybos, Pawel; ...
2017-06-28
An on-chip implementable algorithm for allocating an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. A proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparisons of peak amplitudes of pulses within an active neighborhood, and finally latching of the results of these comparisons constitute the three procedural steps of the algorithm. A grouping of pixels into one virtual pixel, which recovers composite signals, and event-driven strobes, which control the comparisons of fractional signals between neighboring pixels, are the actuators of the algorithm. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals that are exchanged between pixels. A test-circuit implementation of the algorithm was achieved with a small array of 32 × 32 pixels, and the device was exposed to an 8 keV X-ray beam highly collimated to a diameter of 3 μm. The results of these tests are given in this paper, assessing the physical implementation of the algorithm.
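The allocation rule itself (the pixel that wins the amplitude comparison within its active neighbourhood keeps the hit) can be mimicked off-line on a frame of pulse peak amplitudes. The numpy sketch below illustrates the principle only, with an assumed threshold and 3x3 neighbourhood, and says nothing about the on-chip circuitry.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def allocate_hits(amplitudes, threshold):
    """
    Assign each charge-shared photon hit to a single pixel: a pixel keeps the
    hit only if it is above threshold and is the maximum of its 3x3
    neighbourhood (i.e., the winner of the inter-pixel amplitude comparison).
    """
    active = amplitudes > threshold
    local_max = amplitudes == maximum_filter(amplitudes, size=3)
    return active & local_max

# Usage with a synthetic frame containing one charge-shared photon.
frame = np.zeros((8, 8))
frame[3, 3], frame[3, 4], frame[4, 3] = 5.0, 2.0, 1.5   # shared charge cloud
hits = allocate_hits(frame, threshold=1.0)
print(np.argwhere(hits))    # -> [[3 3]]: the hit is allocated to one pixel
```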
Method for contour extraction for object representation
Skourikhine, Alexei N.; Prasad, Lakshman
2005-08-30
Contours are extracted to represent a pixelated object in a background pixel field. An object pixel that starts a new contour is located and identified as the first pixel of the new contour. A first contour point is then located at the mid-point of a transition edge of the first pixel. A tracing direction from the first contour point is determined for tracing the new contour. Contour points on mid-points of pixel transition edges are sequentially located along the tracing direction until the first contour point is again encountered, completing the tracing of the new contour. The new contour is then added to a list of extracted contours that represent the object. The contour extraction process associates regions and contours by labeling all the contours belonging to the same object with the same label.
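A roughly analogous result, contour points lying on the mid-points of transition edges of a binary object, can be obtained with scikit-image's sub-pixel contour finder. The sketch below is an off-the-shelf illustration, not the patented tracing and labeling procedure, and the labeling step is deliberately simplified.

```python
import numpy as np
from skimage import measure

# Binary pixel field: 1 = object pixels, 0 = background.
image = np.zeros((10, 12), dtype=float)
image[2:7, 3:9] = 1.0
image[4, 5] = 0.0                      # a hole, producing a second contour

# Contours at level 0.5 pass through mid-points between object and background
# pixel centres, analogous to the transition-edge mid-points described above.
contours = measure.find_contours(image, level=0.5)

# Simplified "labeling": every closed contour found on this object gets the
# same object label, mirroring the association of regions and contours.
extracted = [{"label": 1, "points": c} for c in contours]
for item in extracted:
    print("contour with", len(item["points"]), "points, label", item["label"])
```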
NASA Astrophysics Data System (ADS)
Wu, L.; San Segundo Bello, D.; Coppejans, P.; Craninckx, J.; Wambacq, P.; Borremans, J.
2017-02-01
This paper presents a 20 Mfps 32 × 84 pixel CMOS burst-mode imager featuring high frame depth with a passive in-pixel amplifier. Compared to CCD alternatives, CMOS burst-mode imagers are attractive for their low power consumption and integration of circuitry such as ADCs. Due to storage capacitor size and its noise limitations, CMOS burst-mode imagers usually suffer from a lower frame depth than CCD implementations. In order to capture fast transitions over a longer time span, an in-pixel CDS technique has been adopted to halve the memory cells required for each frame. Moreover, integrated with the in-pixel CDS, an in-pixel NMOS-only passive amplifier alleviates the kTC noise requirements of the memory bank, allowing the use of smaller capacitors. Specifically, a dense 108-cell MOS memory bank (10 fF/cell) has been implemented inside a 30 μm pitch pixel, with an area of 25 × 30 μm2 occupied by the memory bank. Applying in-pixel CDS and amplification improves the frame depth per pixel area by about 4x. With the amplifier's gain of 3.3, an FD input-referred RMS noise of 1 mV is achieved at 20 Mfps operation. The amplification is done without burning DC current; including the pixel source-follower biasing, the full pixel consumes 10 μA from a 3.3 V supply at full speed. The chip has been fabricated in imec's 130 nm CMOS CIS technology.
High dynamic range pixel architecture for advanced diagnostic medical x-ray imaging applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izadi, Mohammad Hadi; Karim, Karim S.
2006-05-15
The most widely used architecture in large-area amorphous silicon (a-Si) flat panel imagers is the passive pixel sensor (PPS), which consists of a detector and a readout switch. While the PPS has the advantage of being compact and amenable to high-resolution imaging, small PPS output signals are swamped by external column charge amplifier and data line thermal noise, which limit the minimum readable sensor input signal. In contrast to PPS circuits, on-pixel amplifiers in a-Si technology reduce readout noise to levels that can meet even the stringent requirements for low-noise digital x-ray fluoroscopy (<1000 noise electrons). However, larger voltages at the pixel input cause the output of the amplified pixel to become nonlinear, thus reducing the dynamic range. We previously reported a hybrid amplified pixel architecture based on a combination of PPS and amplified pixel designs that, in addition to low-noise performance, also achieved large-signal linearity and consequently a higher dynamic range [K. S. Karim et al., Proc. SPIE 5368, 657 (2004)]. The additional benefit in large-signal linearity, however, came at the cost of an additional pixel transistor. We present an amplified pixel design that achieves the goals of low-noise performance and large-signal linearity without the need for an additional pixel transistor. Theoretical calculations and simulation results for noise indicate the applicability of the amplified a-Si pixel architecture to high-dynamic-range medical x-ray imaging applications that require switching between low-exposure, real-time fluoroscopy and high-exposure radiography.
Laser pixelation of thick scintillators for medical imaging applications: x-ray studies
NASA Astrophysics Data System (ADS)
Sabet, Hamid; Kudrolli, Haris; Marton, Zsolt; Singh, Bipin; Nagarkar, Vivek V.
2013-09-01
To achieve the high spatial resolution required in nuclear imaging, scintillation light spread has to be controlled. This has traditionally been achieved by introducing structures in the bulk of the scintillation material, typically by mechanically pixelating the scintillator and filling the resultant inter-pixel gaps with reflective material. Mechanical pixelation, however, is accompanied by various cost and complexity issues, especially for hard, brittle, and hygroscopic materials. For example, LSO and LYSO, hard and brittle scintillators of interest to the medical imaging community, are known to crack under thermal and mechanical stress; the material yield drops quickly for large arrays with high-aspect-ratio pixels, and the cost of the pixelation process increases accordingly. We are utilizing a novel technique named Laser Induced Optical Barriers (LIOB) for pixelation of scintillators that overcomes the issues associated with mechanical pixelation. In this technique, we introduce optical barriers within the bulk of scintillator crystals to form pixelated arrays with small pixel size and large thickness. We applied LIOB to LYSO using a high-frequency solid-state laser. Arrays with different crystal thicknesses (5 to 20 mm) and pixel sizes (0.8×0.8 to 1.5×1.5 mm2) were fabricated and tested. The width of the optical barriers was controlled by fine-tuning key parameters such as the lens focal spot size and the laser energy density. Here we report on the LIOB process, its optimization, and optical crosstalk measurements using X-rays. Many applications can potentially benefit from LIOB, including but not limited to clinical/pre-clinical PET and SPECT systems and photon-counting CT detectors.
Use of LANDSAT images of vegetation cover to estimate effective hydraulic properties of soils
NASA Technical Reports Server (NTRS)
Eagleson, Peter S.; Jasinski, Michael F.
1988-01-01
The estimation of the spatially variable surface moisture and heat fluxes of natural, semivegetated landscapes is difficult due to the highly random nature of the vegetation (e.g., plant species, density, and stress) and the soil (e.g., moisture content, and soil hydraulic conductivity). The solution to that problem lies, in part, in the use of satellite remotely sensed data, and in the preparation of those data in terms of the physical properties of the plant and soil. The work was focused on the development and testing of a stochastic geometric canopy-soil reflectance model, which can be applied to the physically-based interpretation of LANDSAT images. The model conceptualizes the landscape as a stochastic surface with bulk plant and soil reflective properties. The model is particularly suited for regional scale investigations where the quantification of the bulk landscape properties, such as fractional vegetation cover, is important on a pixel by pixel basis. A summary of the theoretical analysis and the preliminary testing of the model with actual aerial radiometric data is provided.
NASA Technical Reports Server (NTRS)
Platnick, Steven; Meyer, Kerry G.; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin G.; Arnold, G. Thomas; Zhang, Zhibo; Hubanks, Paul A.; Holz, Robert E.;
2016-01-01
The MODIS Level-2 cloud product (Earth Science Data Set names MOD06 and MYD06 for Terra and Aqua MODIS, respectively) provides pixel-level retrievals of cloud-top properties (day and night pressure, temperature, and height) and cloud optical properties (optical thickness, effective particle radius, and water path for both liquid water and ice cloud thermodynamic phases; daytime only). Collection 6 (C6) reprocessing of the product was completed in May 2014 and March 2015 for MODIS Aqua and Terra, respectively. Here we provide an overview of major C6 optical property algorithm changes relative to the previous Collection 5 (C5) product. Notable C6 optical and microphysical algorithm changes include: (i) new ice cloud optical property models and a more extensive cloud radiative transfer code lookup table (LUT) approach, (ii) improvement in the skill of the shortwave-derived cloud thermodynamic phase, (iii) separate cloud effective radius retrieval datasets for each spectral combination used in previous collections, (iv) separate retrievals for partly cloudy pixels and those associated with cloud edges, (v) failure metrics that provide diagnostic information for pixels having observations that fall outside the LUT solution space, and (vi) enhanced pixel-level retrieval uncertainty calculations. The C6 algorithm changes collectively can result in significant changes relative to C5, though the magnitude depends on the dataset and the pixel's retrieval location in the cloud parameter space. Example Level-2 granule and Level-3 gridded dataset differences between the two collections are shown. While the emphasis is on the suite of cloud optical property datasets, other MODIS cloud datasets are discussed when relevant.
Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections
NASA Astrophysics Data System (ADS)
Bertholet, J.; Wan, H.; Toftegaard, J.; Schmidt, M. L.; Chotard, F.; Parikh, P. J.; Poulsen, P. R.
2017-02-01
Radio-opaque fiducial markers of different shapes are often implanted in or near abdominal or thoracic tumors to act as surrogates for the tumor position during radiotherapy. They can be used for real-time treatment adaptation, but this requires a robust, automatic segmentation method able to handle arbitrarily shaped markers in a rotational imaging geometry such as cone-beam computed tomography (CBCT) projection images and intra-treatment images. In this study, we propose a fully automatic dynamic programming (DP) assisted template-based (TB) segmentation method. Based on an initial DP segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated. The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations as deemed by a low normalized cross-correlation coefficient and contrast-to-noise ratio. For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations.
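The template-matching step of the method above, choosing the position with the highest normalized cross-correlation inside a search area, can be written directly in numpy. The sketch below is illustrative: the template, search area, and data are assumptions, not the authors' implementation.

```python
import numpy as np

def best_ncc_position(search_area, template):
    """
    Return the (row, col) offset in `search_area` with the highest normalized
    cross-correlation against `template`, plus the NCC score itself.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(search_area.shape[0] - th + 1):
        for c in range(search_area.shape[1] - tw + 1):
            w = search_area[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Usage: centre the search area on a prior marker-position estimate (assumed
# available) and accept the match only if the NCC score is high enough.
rng = np.random.default_rng(0)
projection = rng.random((64, 64))
template = projection[20:30, 25:35].copy()       # stand-in 2D marker template
pos, score = best_ncc_position(projection, template)
print(pos, round(score, 3))                      # -> (20, 25) 1.0
```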
Platnick, Steven; Meyer, Kerry G; King, Michael D; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G Thomas; Zhang, Zhibo; Hubanks, Paul A; Holz, Robert E; Yang, Ping; Ridgway, William L; Riedi, Jérôme
2017-01-01
The MODIS Level-2 cloud product (Earth Science Data Set names MOD06 and MYD06 for Terra and Aqua MODIS, respectively) provides pixel-level retrievals of cloud-top properties (day and night pressure, temperature, and height) and cloud optical properties (optical thickness, effective particle radius, and water path for both liquid water and ice cloud thermodynamic phases-daytime only). Collection 6 (C6) reprocessing of the product was completed in May 2014 and March 2015 for MODIS Aqua and Terra, respectively. Here we provide an overview of major C6 optical property algorithm changes relative to the previous Collection 5 (C5) product. Notable C6 optical and microphysical algorithm changes include: (i) new ice cloud optical property models and a more extensive cloud radiative transfer code lookup table (LUT) approach, (ii) improvement in the skill of the shortwave-derived cloud thermodynamic phase, (iii) separate cloud effective radius retrieval datasets for each spectral combination used in previous collections, (iv) separate retrievals for partly cloudy pixels and those associated with cloud edges, (v) failure metrics that provide diagnostic information for pixels having observations that fall outside the LUT solution space, and (vi) enhanced pixel-level retrieval uncertainty calculations. The C6 algorithm changes collectively can result in significant changes relative to C5, though the magnitude depends on the dataset and the pixel's retrieval location in the cloud parameter space. Example Level-2 granule and Level-3 gridded dataset differences between the two collections are shown. While the emphasis is on the suite of cloud optical property datasets, other MODIS cloud datasets are discussed when relevant.
NASA Astrophysics Data System (ADS)
Wang, Junbang; Sun, Wenyi
2014-11-01
Remote sensing is widely applied in the study of terrestrial primary production and the global carbon cycle. Research on the spatial heterogeneity in images from different sensors and resolutions would improve the application of remote sensing. In this study, two sites on alpine meadow grassland in Qinghai, China, which have distinct fractional vegetation cover, were used to test and analyze differences between the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI) derived from the Huanjing (HJ) and Landsat Thematic Mapper (TM) sensors. The results showed that: 1) NDVI estimated from HJ was smaller than the corresponding values from TM at the two sites, whereas EVI was almost the same for the two sensors. 2) The overall variance represented by the HJ data was consistently about half that of Landsat TM, although the nominal pixel size is approximately 30 m for both sensors. The overall variance from EVI is greater than that from NDVI. The difference of the range between the two sensors is about 6 pixels at 30 m resolution, and the difference of the range between the two vegetation indices is about 1 pixel. 3) The sill decreased when pixel size increased from 30 m to 1 km; it decreased quickly when pixel size changed from 30 m or 90 m to 250 m, but slowly when changed from 250 m to 500 m. HJ can capture this spatial heterogeneity to some extent, and this study provides a foundation for the use of the sensor for validation of net primary productivity estimates obtained from ecosystem process models.
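For reference, the two indices compared in this study are computed from surface-reflectance bands as follows; these are the standard formulas, with the commonly used MODIS-style EVI coefficients, and the band values are hypothetical.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0):
    """Enhanced Vegetation Index with the commonly used MODIS coefficients."""
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)

# Usage on hypothetical surface-reflectance bands (values in 0-1):
nir = np.array([0.45, 0.30])
red = np.array([0.08, 0.12])
blue = np.array([0.04, 0.06])
print("NDVI:", ndvi(nir, red))
print("EVI: ", evi(nir, red, blue))
```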
NASA Astrophysics Data System (ADS)
Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.
2009-02-01
Images of mastectomy breast specimens have been acquired with a bench-top experimental cone beam CT (CBCT) system. The resulting images have been segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast to a compressed breast without altering the breast volume or regional breast density. With this technique, the 3D breast deformation is separated into two 2D deformations in the coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged, then resampling onto the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view: the image data were first deformed by distorting the voxels with a uniform distortion ratio, then deformed again using distortion ratios varying with breast thickness and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest-wall-to-nipple direction while shrinking it in the medial-to-lateral direction; the data were re-sampled and converted into data for uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the same volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.
Li, Jia; Xia, Changqun; Chen, Xiaowu
2017-10-12
Image-based salient object detection (SOD) has been extensively studied in past decades. However, video-based SOD is much less explored due to the lack of large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos. In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects who free-view all videos. From the user data, we find that salient objects in a video can be defined as objects that consistently pop out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object/region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at the pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are constructed in an unsupervised manner that automatically infers a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. In experiments, the proposed unsupervised approach is compared with 31 state-of-the-art models on the proposed dataset and outperforms 30 of them, including 19 image-based classic (unsupervised or non-deep learning) models, six image-based deep learning models, and five video-based unsupervised models. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
Schaffrath, David; Bernhofer, Christian
2013-01-01
Grasslands in Inner Mongolia are important for livestock farming, while ecosystem functioning and water consumption are dominated by evapotranspiration (ET). In this paper we studied the spatiotemporal distribution and variability of ET and its components in Inner Mongolian grasslands over a period of 10 years, from 2002 to 2011. ET was modelled pixel-wise for more than 3000 1 km² pixels with the physically based hydrological model BROOK90. The model was parameterised from eddy-covariance measurements, and daily input was generated from MODIS leaf area index and surface temperatures. Modelled ET was also compared with the ET provided by the MODIS MOD16 ET data. The study showed ET to be highly variable in both time and space in Inner Mongolian grasslands. The mean coefficient of variation of 8-day ET in the study area varied between 25% and 40%, and was up to 75% for individual pixels, indicating a high intra-annual variability of ET. Generally, ET equals or exceeds precipitation (P) during the vegetation period, but high precipitation in 2003 clearly exceeded ET in that year, indicating a recharge of soil moisture and groundwater. Despite the high interannual and intra-annual variations of spatial ET, the study also showed the existence of an intrinsic long-term spatial pattern of ET distribution, which can be explained partly by altitude and longitude (R² = 0.49). In conclusion, the results of this research suggest the development of dynamic and productive rangeland management systems according to the inherent variability of rainfall, productivity and ET in order to restore and protect Inner Mongolian grasslands.
Estimating Mixed Broadleaves Forest Stand Volume Using Dsm Extracted from Digital Aerial Images
NASA Astrophysics Data System (ADS)
Sohrabi, H.
2012-07-01
In the mixed old-growth broadleaf Hyrcanian forests, it is difficult to estimate stand volume at the plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach has been proposed and tested for estimating forest stand volume. The approach is based on the idea that forest volume can be estimated from the variation of tree heights within a plot; in other words, the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots with a systematic random design were collected in the Tonekaon forest located in the Hyrcanian zone. A digital surface model (DSM) measures the height of the first surface on the ground, including terrain features, trees, buildings, etc., which provides a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with the ground pixel size of the extracted DSMs varied from 1 to 10 m in 1 m steps. The DSMs were checked manually for probable errors. Corresponding to the ground samples, the standard deviation and range of the DSM pixels were calculated. For modeling, a non-linear regression method was used. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate predictor for modeling. The relative bias and RMSE of the estimation were 5.8 and 49.8 percent, respectively. Compared to other approaches for estimating stand volume from passive remote sensing data in mixed broadleaf forests, these results are more encouraging. One problem with this method occurs when the tree canopy cover is fully closed: in this situation, the standard deviation of height is low while the stand volume is high. In future studies, applying forest stratification could be investigated.
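The abstract does not state the non-linear model form, so the sketch below assumes a simple power-law relation between plot volume and the standard deviation of DSM heights, fitted with scipy's curve_fit; the plot data are hypothetical and purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(height_sd, a, b):
    """Hypothetical power-law relation: stand volume vs. DSM height std. dev."""
    return a * height_sd ** b

# Hypothetical plot data: std. dev. of 5 m DSM pixels per 0.1 ha plot (m)
# and the field-measured stand volume (m^3/ha).
height_sd = np.array([2.1, 3.4, 4.8, 6.0, 7.5, 9.1])
volume = np.array([110.0, 190.0, 260.0, 330.0, 420.0, 510.0])

params, _ = curve_fit(power_model, height_sd, volume, p0=(50.0, 1.0))
pred = power_model(height_sd, *params)
rmse_pct = 100 * np.sqrt(np.mean((pred - volume) ** 2)) / volume.mean()
print("fitted a, b:", params, "relative RMSE (%):", round(rmse_pct, 1))
```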