Subpixel target detection and enhancement in hyperspectral images
NASA Astrophysics Data System (ADS)
Tiwari, K. C.; Arora, M.; Singh, D.
2011-06-01
Hyperspectral data, owing to the higher information content afforded by its higher spectral resolution, is increasingly being used for various remote sensing applications, including information extraction at the subpixel level. There is, however, usually a lack of matching fine spatial resolution data, particularly for target detection applications. Thus, there always exists a tradeoff between spectral and spatial resolution, driven by the type of application, its cost, and the associated analytical and computational complexities. Typically, whenever an object, whether manmade, natural, or any ground cover class (called a target, endmember, component, or class), is spectrally resolved but not spatially resolved, mixed pixels result in the image. Numerous disparate manmade and/or natural substances may thus occur inside such mixed pixels, giving rise to mixed pixel classification or subpixel target detection problems. Various spectral unmixing models, such as Linear Mixture Modeling (LMM), are in vogue to recover the components of a mixed pixel. Spectral unmixing outputs both the endmember spectra and their corresponding abundance fractions inside the pixel. It does not, however, provide the spatial distribution of these abundance fractions within a pixel, which limits the applicability of hyperspectral data for subpixel target detection. In this paper, a new inverse-Euclidean-distance-based super-resolution mapping method is presented that achieves subpixel target detection in hyperspectral images by adjusting the spatial distribution of abundance fractions within a pixel. Results obtained at different resolutions indicate that super-resolution mapping may effectively aid subpixel target detection.
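The paper's inverse-Euclidean-distance mapping step is not detailed in the abstract, but the LMM unmixing it builds on can be sketched. Below is a minimal fully constrained unmixing example: nonnegative least squares with a soft sum-to-one row (a common FCLS device). The endmember spectra and abundances are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(E, x, delta=1e3):
    """Fully constrained least squares: abundances a >= 0, sum(a) = 1.
    E: (bands, p) endmember matrix; x: (bands,) mixed-pixel spectrum.
    The sum-to-one constraint is enforced softly via an appended row
    weighted by delta."""
    p = E.shape[1]
    E_aug = np.vstack([E, delta * np.ones((1, p))])
    x_aug = np.append(x, delta)
    a, _ = nnls(E_aug, x_aug)
    return a

# Two synthetic endmembers over three bands, mixed 70/30
E = np.array([[0.2, 0.8],
              [0.4, 0.6],
              [0.9, 0.1]])
x = E @ np.array([0.7, 0.3])
a = unmix_fcls(E, x)
```

On noise-free data the true abundances are recovered; real pixels would yield an approximate, still nonnegative and sum-to-one, estimate.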
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
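The two-class computational shortcut mentioned above, a single transform that simultaneously diagonalizes both class covariance matrices, can be sketched with the standard whiten-then-rotate construction (the covariance matrices below are illustrative, not from the paper):

```python
import numpy as np

def simultaneous_diagonalizer(C1, C2):
    """Return T with T @ C1 @ T.T = I and T @ C2 @ T.T diagonal."""
    d, U = np.linalg.eigh(C1)
    W = (U / np.sqrt(d)).T          # whitening transform for C1
    _, V = np.linalg.eigh(W @ C2 @ W.T)
    return V.T @ W                  # rotation in whitened space

C1 = np.array([[2.0, 0.5], [0.5, 1.0]])
C2 = np.array([[1.0, -0.3], [-0.3, 3.0]])
T = simultaneous_diagonalizer(C1, C2)
```

In the transformed coordinates both class covariances are diagonal, so likelihood computations factor over bands.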
The Effects of Radiation on Imagery Sensors in Space
NASA Technical Reports Server (NTRS)
Mathis, Dylan
2007-01-01
Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex causal and mediating factor mix. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue considering the continued diffusion of high definition video cameras in the marketplace.
A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing
NASA Astrophysics Data System (ADS)
Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.
2018-05-01
Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
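The closure property underlying the model, that a linear mixture of Gaussian-distributed endmembers is itself Gaussian (and, component by component, a mixture of GMMs is again a GMM), can be checked numerically for the single-component case. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.6, 0.4])                          # abundances
mu = [np.array([1.0, 2.0]), np.array([4.0, 0.5])]  # endmember means
S  = [0.04 * np.eye(2), 0.09 * np.eye(2)]          # endmember covariances

# sample endmember realizations per pixel, then mix linearly
n = 200_000
e0 = rng.multivariate_normal(mu[0], S[0], n)
e1 = rng.multivariate_normal(mu[1], S[1], n)
x = a[0] * e0 + a[1] * e1

# predicted Gaussian for the mixed pixel: mean a.mu, covariance a^2.S
mu_pred = a[0] * mu[0] + a[1] * mu[1]
S_pred  = a[0]**2 * S[0] + a[1]**2 * S[1]
```

The empirical mean and covariance of the mixed pixels match the predicted Gaussian; with multi-component GMM endmembers the same algebra applies per component combination.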
ISBDD Model for Classification of Hyperspectral Remote Sensing Imagery
Li, Na; Xu, Zhaopeng; Zhao, Huijie; Huang, Xinchen; Drummond, Jane; Wang, Daming
2018-01-01
The diverse density (DD) algorithm was proposed to handle the problem of low classification accuracy when training samples contain interference such as mixed pixels. The DD algorithm can learn a feature vector from training bags, which comprise instances (pixels). However, the feature vector learned by the DD algorithm cannot always effectively represent one type of ground cover. To handle this problem, an instance space-based diverse density (ISBDD) model that employs a novel training strategy is proposed in this paper. In the ISBDD model, DD values of each pixel are computed instead of learning a feature vector, and as a result, the pixel can be classified according to its DD values. Airborne hyperspectral data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and the Push-broom Hyperspectral Imager (PHI) are applied to evaluate the performance of the proposed model. Results show that the overall classification accuracies of the ISBDD model on the AVIRIS and PHI images are up to 97.65% and 89.02%, respectively, while the kappa coefficients are up to 0.97 and 0.88, respectively. PMID:29510547
Sub-pixel mapping of hyperspectral imagery using super-resolution
NASA Astrophysics Data System (ADS)
Sharma, Shreya; Sharma, Shakti; Buddhiraju, Krishna M.
2016-04-01
With the development of remote sensing technologies, it has become possible to obtain an overview of landscape elements, which helps in studying changes on the earth's surface due to climatic, geological, geomorphological and human activities. Remote sensing measures the electromagnetic radiation from the earth's surface and matches the observed signature against the known standard signatures of various targets. A problem arises, however, when image classification techniques assume pixels to be pure. Hyperspectral images have high spectral resolution but poor spatial resolution; the spectra obtained are therefore often contaminated by the presence of mixed pixels, causing misclassification. To utilise this rich spectral information, the spatial resolution has to be enhanced. Many factors make spatial resolution one of the most expensive and hardest properties to improve in imaging systems. To address this, post-processing of hyperspectral images is performed to retrieve more information from the already acquired images. Enhancing the spatial resolution of images by dividing them into sub-pixels is known as super-resolution, and considerable research has been done in this domain. In this paper, we propose a new super-resolution method based on ant colony optimization and review the popular sub-pixel mapping methods for hyperspectral images, along with a comparative analysis.
GPU implementation of the simplex identification via split augmented Lagrangian
NASA Astrophysics Data System (ADS)
Sevilla, Jorge; Nascimento, José M. P.
2015-10-01
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach that aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient GPU implementation, using CUDA, of an unsupervised linear unmixing method. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
Sub-pixel image classification for forest types in East Texas
NASA Astrophysics Data System (ADS)
Westbrook, Joey
Sub-pixel classification is the extraction of information about the proportions of individual materials of interest within a pixel. Landcover classification at the sub-pixel scale provides more discrimination than traditional per-pixel multispectral classifiers for pixels where the material of interest is mixed with other materials, as it allows the un-mixing of pixels to show the proportion of each material of interest. The goal of this project was to perform a sub-pixel classification, which allows a pixel to have multiple labels, and compare the result to a traditional supervised classification, which allows a pixel to have only one label. The satellite image used was a Landsat 5 Thematic Mapper (TM) scene of the Stephen F. Austin Experimental Forest in Nacogdoches County, Texas, and the four cover type classes were pine, hardwood, mixed forest and non-forest. Once classified, a multi-layer raster dataset was created comprising four raster layers, where each layer showed the percentage of that cover type within the pixel area. Percentage cover type maps were then produced, and the accuracy of each was assessed using a fuzzy error matrix for the sub-pixel classifications; the results were compared to the supervised classification, for which a traditional error matrix was used. The sub-pixel classification using the aerial photo for both training and reference data had the highest overall accuracy (65%) of the three sub-pixel classifications. This is understandable because the analyst can visually observe the cover types actually on the ground for training and reference data, whereas using the FIA (Forest Inventory and Analysis) plot data, the analyst must assume that an entire pixel contains the exact percentage of a cover type found in a plot.
An increase in accuracy was found after reclassifying each sub-pixel classification from nine classes with 10 percent intervals to five classes with 20 percent intervals. When compared to the supervised classification, which had a satisfactory overall accuracy of 90%, none of the sub-pixel classifications achieved the same level. However, since traditional per-pixel classifiers assign only one label to each pixel while sub-pixel classifications assign multiple labels, the traditional 85% accuracy threshold for pixel-based classifications should not apply to sub-pixel classifications. More research is needed to define the level of accuracy deemed acceptable for sub-pixel classifications.
Spectral Unmixing With Multiple Dictionaries
NASA Astrophysics Data System (ADS)
Cohen, Jeremy E.; Gillis, Nicolas
2018-02-01
Spectral unmixing aims at recovering the spectral signatures of materials, called endmembers, mixed in a hyperspectral or multispectral image, along with their abundances. A typical assumption is that the image contains one pure pixel per endmember, in which case spectral unmixing reduces to identifying these pixels. Many fully automated methods have been proposed in recent years, but little work has been done to allow users to select areas where pure pixels are present, either manually or using a segmentation algorithm. Additionally, in a non-blind approach, several spectral libraries may be available rather than a single one, with a fixed number (or an upper or lower bound) of endmembers to choose from each. In this paper, we propose a multiple-dictionary constrained low-rank matrix approximation model that addresses these two problems. We propose an algorithm to compute this model, dubbed M2PALS, and its performance is discussed on both synthetic and real hyperspectral images.
Design of the small pixel pitch ROIC
NASA Astrophysics Data System (ADS)
Liang, Qinghua; Jiang, Dazhao; Chen, Honglei; Zhai, Yongcheng; Gao, Lei; Ding, Ruijun
2014-11-01
As the third-generation IRFPA technology trend toward resolution enhancement has steadily progressed, the pixel pitch of IRFPAs has been greatly reduced. A 640×512 readout integrated circuit (ROIC) for an IRFPA with 15 μm pixel pitch is presented in this paper. The 15 μm pixel pitch ROIC design faces many challenges. As is well known, the integrating capacitor is a key performance parameter when considering pixel area, charge capacity and dynamic range, so we adopt the effective method of 2×2 pixels sharing one integrating capacitor to solve this problem. The input unit cell architecture contains two parallel sample-and-hold parts, which not only allow the FPA to be operated in full-frame snapshot mode but also save unit circuit area. Different applications call for matching input unit circuits. Because the dimension of 2×2 pixels is 30 μm × 30 μm, an input stage based on direct injection (DI), which has a medium injection ratio and small layout area, proves suitable for middle wave (MW), while BDI with a three-transistor cascode amplifier is used for long wave (LW). By adopting a 0.35 μm 2P4M mixed-signal process, the circuit architecture achieves an effective charge capacity of 7.8 Me- per pixel with a 2.2 V output range for MW and 7.3 Me- per pixel with a 2.6 V output range for LW. According to the simulation results, the circuit works well under a 5 V power supply and achieves less than 0.1% nonlinearity.
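As a sanity check on the quoted figures, the integrating capacitance implied by a stated well capacity and output swing follows from the textbook relation Q = C·V (the relation is standard, not taken from the paper; the numbers are from the abstract):

```python
# Back-of-envelope check of the quoted charge capacities.
q = 1.602e-19  # electron charge, coulombs

def integration_cap(n_electrons, v_swing):
    """Capacitance (F) implied by a well capacity and an output voltage swing."""
    return n_electrons * q / v_swing

c_mw = integration_cap(7.8e6, 2.2)   # MW channel: roughly 0.57 pF
c_lw = integration_cap(7.3e6, 2.6)   # LW channel: roughly 0.45 pF
```

Sub-picofarad shared capacitors of this size are plausible in a 30 μm × 30 μm shared cell, which is consistent with the 2×2 sharing scheme described.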
NASA Astrophysics Data System (ADS)
Xin, X.; Li, F.; Peng, Z.; Qinhuo, L.
2017-12-01
Land surface heterogeneities significantly affect the reliability and accuracy of remotely sensed evapotranspiration (ET), and the problem worsens at lower resolutions. At the same time, temporal-scale extrapolation of the instantaneous latent heat flux (LE) at satellite overpass time to daily ET is crucial for applications of such remote sensing products. The purpose of this paper is to propose a simple but efficient model for estimating daytime evapotranspiration that accounts for the heterogeneity of mixed pixels. To do so, an equation to calculate the evapotranspiration fraction (EF) of mixed pixels was derived based on two key assumptions. Assumption 1: the available energy (AE) of each sub-pixel is approximately equal to that of every other sub-pixel in the same mixed pixel, within an acceptable margin of bias, and equal to the AE of the mixed pixel; this serves only to simplify the equation, and its uncertainties and the resulting errors in estimated ET are very small. Assumption 2: the EF of each sub-pixel equals the EF of the nearest pure pixel(s) of the same land cover type. This equation is intended to correct the spatial scale error of the EF of mixed pixels and can be used to calculate daily ET from daily AE data. The model was applied to an artificial oasis in the midstream of the Heihe River. HJ-1B satellite data were used to estimate the lumped fluxes at the 300 m scale after resampling the 30-m resolution datasets to 300 m resolution, which served as the key step of the model. The results before and after correction were compared to each other and validated using data from eddy-correlation systems. Results indicated that the new model improves the accuracy of daily ET estimation relative to the lumped method.
Validations at 12 eddy-correlation sites over 9 days of HJ-1B overpasses showed that R² increased from 0.62 to 0.82, the RMSE decreased from 2.47 MJ/m² to 1.60 MJ/m², and the MBE decreased from 1.92 MJ/m² to 1.18 MJ/m², a quite significant enhancement. The model is easy to apply, and the module for inhomogeneous surfaces is independent and easy to embed in traditional remote sensing algorithms for heat fluxes to obtain daily ET; those algorithms were mainly designed to calculate LE or ET under unsaturated conditions and did not consider land surface heterogeneity.
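Under the two assumptions above, the EF of a mixed pixel reduces to an area-weighted mean of the EFs of the nearest pure pixels of each cover type, and daily ET follows by multiplying by the daily available energy. A minimal sketch (the cover fractions, EF values, and AE value are invented for illustration):

```python
import numpy as np

def mixed_pixel_ef(fractions, ef_pure):
    """EF of a mixed pixel as the area-weighted mean of the EFs of the
    nearest pure pixels of each cover type (assumptions 1 and 2)."""
    return float(np.asarray(fractions, float) @ np.asarray(ef_pure, float))

def daily_et(ef, daily_ae):
    """Daily ET (same units as AE, e.g. MJ/m^2) from EF and daily AE."""
    return ef * daily_ae

ef = mixed_pixel_ef([0.7, 0.3], [0.8, 0.4])  # 70% crop, 30% bare soil
et = daily_et(ef, 12.0)                      # 12 MJ/m^2 daily available energy
```

Here ef = 0.7·0.8 + 0.3·0.4 = 0.68, so the corrected daily ET for this pixel is 0.68 × 12.0 MJ/m².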
NASA Astrophysics Data System (ADS)
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limitation of the spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics, and the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
Selkowitz, David J.; Forster, Richard; Caldwell, Megan K.
2014-01-01
Remote sensing of snow-covered area (SCA) can be binary (indicating the presence/absence of snow cover at each pixel) or fractional (indicating the fraction of each pixel covered by snow). Fractional SCA mapping provides more information than binary SCA, but is more difficult to implement and may not be feasible with all types of remote sensing data. The utility of fractional SCA mapping relative to binary SCA mapping varies with the intended application as well as by spatial resolution, temporal resolution and period of interest, and climate. We quantified the frequency of occurrence of partially snow-covered (mixed) pixels at spatial resolutions between 1 m and 500 m over five dates at two study areas in the western U.S., using 0.5 m binary SCA maps derived from high spatial resolution imagery aggregated to fractional SCA at coarser spatial resolutions. In addition, we used in situ monitoring to estimate the frequency of partially snow-covered conditions for the period September 2013–August 2014 at 10 60-m grid cell footprints at two study areas with continental snow climates. Results from the image analysis indicate that at 40 m, slightly above the nominal spatial resolution of Landsat, mixed pixels accounted for 25%–93% of total pixels, while at 500 m, the nominal spatial resolution of MODIS bands used for snow cover mapping, mixed pixels accounted for 67%–100% of total pixels. Mixed pixels occurred more commonly at the continental snow climate site than at the maritime snow climate site. The in situ data indicate that some snow cover was present between 186 and 303 days, and partial snow cover conditions occurred on 10%–98% of days with snow cover. Four sites remained partially snow-free throughout most of the winter and spring, while six sites were entirely snow covered throughout most or all of the winter and spring. Within 60 m grid cells, the late spring/summer transition from snow-covered to snow-free conditions lasted 17–56 days and averaged 37 days. 
Our results suggest that mixed snow-covered/snow-free pixels are common at the spatial resolutions imaged by both the Landsat and MODIS sensors. This highlights the additional information available from fractional SCA products and suggests that fractional SCA can provide a major advantage for hydrological and climatological monitoring and modeling, particularly when accurate representation of the spatial distribution of snow cover is critical.
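The aggregation used in the image analysis, block-averaging a fine binary SCA map into coarse fractional SCA and then counting partially covered pixels, can be sketched as follows (the toy map below stands in for the 0.5 m binary product):

```python
import numpy as np

def binary_to_fractional(binary_sca, block):
    """Aggregate a binary SCA map (1 = snow) to fractional SCA by block averaging."""
    h, w = binary_sca.shape
    assert h % block == 0 and w % block == 0
    return binary_sca.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def mixed_fraction(frac_sca):
    """Share of coarse pixels that are partially (neither fully nor not-at-all) snow covered."""
    mixed = (frac_sca > 0) & (frac_sca < 1)
    return mixed.mean()

snow = np.zeros((8, 8))
snow[:, :3] = 1                       # snow covers the left 3 of 8 columns
f = binary_to_fractional(snow, 4)     # 2x2 fractional map
```

In this toy case the snow edge crosses the left-hand coarse pixels, so half of the coarse pixels come out mixed, illustrating why edge-dominated scenes yield high mixed-pixel percentages at 500 m.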
Lagrange constraint neural network for audio varying BSS
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Hsu, Charles C.
2002-03-01
Lagrange Constraint Neural Network (LCNN) is a statistical-mechanical ab-initio model that does not assume the artificial neural network (ANN) model at all, but derives it from the first principles of the Hamilton and Lagrange methodology: H(S,A) = f(S) − λC(S,A(x,t)), incorporating the measurement constraint C(S,A(x,t)) = λ([A]S − X) + (λ₀ − 1)(Σᵢ sᵢ − 1) via the vector Lagrange multiplier λ, with the a priori Shannon entropy f(S) = −Σᵢ sᵢ log sᵢ as the contrast function of an unknown number of independent sources sᵢ. Szu et al. first solved, in 1997, the general Blind Source Separation (BSS) problem for a spatially and temporally varying mixing matrix in real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] is necessarily filled with diurnal and seasonal variations. Because ground truth is difficult to ascertain in remote sensing, we illustrate in this paper each step of the LCNN algorithm for simulated spatially-temporally varying BSS of speech and music audio mixing. We review and compare LCNN with the popular a-posteriori maximum entropy methodologies, defined by an ANN weight matrix [W] and sigmoid σ post-processing, H(Y = σ([W]X)), of Bell-Sejnowski, Amari and Oja (BSAO), called Independent Component Analysis (ICA). Both are mirror-symmetric MaxEnt methodologies and work for a constant unknown mixing matrix [A], but the major difference is whether the ensemble average is taken over neighborhood pixel data X (in BSAO) or over the a priori source variables S (in LCNN); this dictates which method works for a spatially-temporally varying [A(x,t)], which does not permit neighborhood pixel averaging. We demonstrate the sharper de-mixing of the LCNN method in a controlled ground-truth experiment simulating a varying mixture of two pieces of music with similar kurtosis (15 seconds composed of Saint-Saëns' Swan and Rachmaninoff's cello concerto).
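LCNN itself is not available as a packaged implementation, but the a-posteriori ICA baseline it is compared against (the BSAO family, valid for a constant mixing matrix [A]) can be sketched with scikit-learn's FastICA. The sources and mixing matrix below are synthetic stand-ins for the audio experiment:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 4000)
s = np.c_[np.sign(np.sin(3 * t)), np.sin(5 * t)]   # two independent sources
A = np.array([[1.0, 0.5], [0.4, 1.0]])             # constant mixing matrix
x = s @ A.T                                        # observed mixtures

ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
s_hat = ica.fit_transform(x)                       # recovered sources (up to sign/order)
```

ICA of this kind recovers the sources only up to permutation and sign, and only for constant [A]; the spatially-temporally varying case is precisely where the abstract argues the LCNN formulation is needed.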
Evaluation of Aster Images for Characterization and Mapping of Amethyst Mining Residues
NASA Astrophysics Data System (ADS)
Markoski, P. R.; Rolim, S. B. A.
2012-07-01
The objective of this work was to evaluate the potential of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images, from the VNIR (Visible and Near Infrared) and SWIR (Short Wave Infrared) subsystems, for discriminating and mapping amethyst mining residues (basalt) in the Ametista do Sul region, Rio Grande do Sul State, Brazil. This region provides most of the world's amethyst production. The basalt is extracted during the mining process and deposited outside the mine, and as a result, mounds of residue (basalt) build up. These mounds are often smaller than the ASTER pixel size (VNIR: 15 meters; SWIR: 30 meters). Thus, the pixel composition becomes a mixture of various materials, hampering identification and mapping. To address this problem, the multispectral Maximum Likelihood (MaxVer) algorithm and the hyperspectral Spectral Angle Mapper (SAM) technique were used in this work. Images from the ASTER VNIR and SWIR subsystems were used to perform the classifications. The SAM technique produced better results than the MaxVer algorithm. The main error found with both techniques was confusion between the "shadow" and "mining residues/basalt" classes. With the SAM technique the confusion decreased because it employed the basalt spectral curve as a reference, while the multispectral technique employed pixel groups that could have spectral mixture with other targets. The results showed that in tropical terrains such as the study area, ASTER data can be efficacious for the characterization of mining residues.
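The reason SAM reduced the shadow/basalt confusion is that the spectral angle is insensitive to multiplicative illumination changes: a shadowed basalt pixel has roughly the same spectral shape, just scaled down. This is easy to verify (reference and pixel spectra below are invented, not from the ASTER data):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

basalt_ref = np.array([0.10, 0.12, 0.15, 0.18])   # illustrative 4-band reference
px_shadow  = 0.5 * basalt_ref                     # same shape at half brightness
px_other   = np.array([0.30, 0.25, 0.15, 0.08])   # a spectrally different target

a_shadow = spectral_angle(px_shadow, basalt_ref)  # ~0: SAM ignores the scaling
a_other  = spectral_angle(px_other, basalt_ref)   # clearly nonzero
```

A per-pixel classifier thresholding this angle against each reference curve is the core of the SAM technique used in the paper.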
Single-pixel imaging based on compressive sensing with spectral-domain optical mixing
NASA Astrophysics Data System (ADS)
Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin
2017-11-01
In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.
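The recovery step can be sketched in software: pseudorandom measurement patterns, a handful of single-pixel measurements, and sparse reconstruction. Here ±1-coded patterns and an L1 (Lasso) solver stand in for the PRBS spectrum shaper and the paper's recovery algorithm, neither of which is specified in the abstract:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                     # signal length, measurements, sparsity

x = np.zeros(n)                          # k-sparse "image" vector
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

Phi = rng.choice([-1.0, 1.0], size=(m, n))   # pseudorandom +/-1 patterns
y = Phi @ x                                  # single-pixel energy measurements

# L1-regularized recovery of the sparse vector from m < n measurements
x_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100_000).fit(Phi, y).coef_
```

With m = 32 measurements of a length-64, 3-sparse signal, the L1 solver recovers the signal almost exactly, which is the compressive-sensing effect the single-pixel camera exploits.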
Estimating Mixed Broadleaves Forest Stand Volume Using Dsm Extracted from Digital Aerial Images
NASA Astrophysics Data System (ADS)
Sohrabi, H.
2012-07-01
In the mixed old-growth broadleaved stands of the Hyrcanian forests, it is difficult to estimate stand volume at the plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating forest stand volume. The approach is based on the idea that forest volume can be estimated from the variation of tree heights within plots; in other words, the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots were collected under a systematic random design in the Tonekaon forest, located in the Hyrcanian zone. A digital surface model (DSM) records the height of the first surface on the ground, including terrain features, trees, buildings, etc., providing a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with ground pixel sizes for the extracted DSMs varying from 1 to 10 m in 1 m steps. The DSMs were checked manually for probable errors. Corresponding to the ground samples, the standard deviation and range of the DSM pixels were calculated. For modeling, a non-linear regression method was used. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate predictor for modeling. The relative bias and RMSE of estimation were 5.8 and 49.8 percent, respectively. Compared to other approaches for estimating stand volume from passive remote sensing data in mixed broadleaved forests, these results are encouraging. One big problem with this method occurs when the tree canopy cover is completely closed: in this situation, the standard deviation of height is low while stand volume is high. In future studies, applying forest stratification could be investigated.
Estimating cropland NPP using national crop inventory and MODIS derived crop specific parameters
NASA Astrophysics Data System (ADS)
Bandaru, V.; West, T. O.; Ricciuto, D. M.
2011-12-01
Estimates of cropland net primary production (NPP) are needed as input for estimates of carbon flux and carbon stock changes. Cropland NPP is currently estimated using terrestrial ecosystem models, satellite remote sensing, or inventory data. All three of these methods have benefits and problems. Terrestrial ecosystem models are often better suited for prognostic estimates rather than diagnostic estimates. Satellite-based NPP estimates often underestimate productivity on intensely managed croplands and are also limited to a few broad crop categories. Inventory-based estimates are consistent with nationally collected data on crop yields, but they lack sub-county spatial resolution. Integrating these methods allows for spatial resolution consistent with current land cover and land use, while also maintaining the total biomass quantities recorded in national inventory data. The main objective of this study was to improve cropland NPP estimates by using a modification of the CASA NPP model with individual crop biophysical parameters derived in part from inventory data and the MODIS 8-day 250 m EVI product. The study was conducted for corn and soybean crops in Iowa and Illinois for the years 2006 and 2007. We used EVI as a linear function for fPAR and used crop land cover data (56 m spatial resolution) to extract individual crop EVI pixels. First, we separated mixed pixels of corn and soybean that occur when a MODIS 250 m pixel contains more than one crop. Second, we substituted mixed EVI pixels with the nearest pure-pixel values of the same crop within a 1 km radius. To obtain more accurate photosynthetically active radiation (PAR), we applied the Mountain Climate Simulator (MTCLIM) algorithm, using temperature and precipitation data from the North American Land Data Assimilation System (NLDAS-2), to generate shortwave radiation data.
Finally, county specific light use efficiency (LUE) values of each crop for years 2006 to 2007 were determined by application of mean county inventory NPP and EVI-derived APAR into the Monteith equation. Results indicate spatial variability in LUE values across Iowa and Illinois. Northern regions of both Iowa and Illinois have higher LUE values than southern regions. This trend is reflected in NPP estimates. Results also show that corn has higher LUE values than soybean, resulting in higher NPP for corn than for soybean. Current NPP estimates were compared with NPP estimates from MOD17A3 product and with county inventory-based NPP estimates. Results indicate that current NPP estimates closely agree with inventory-based estimates, and that current NPP estimates are higher than those of the MOD17A3 product. It was also found that when mixed pixels were substituted with nearest pure pixels, revised NPP estimates were improved showing better agreement with inventory-based estimates.
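The LUE inversion in the final step follows directly from the Monteith equation, NPP = LUE × Σ(fPAR × PAR), with fPAR taken as a linear function of EVI. A sketch with invented seasonal values (the fPAR-EVI coefficients and all numbers are placeholders, not the paper's calibration):

```python
import numpy as np

def fpar_from_evi(evi, a=1.0, b=0.0):
    """fPAR as a linear function of EVI (coefficients a, b are illustrative)."""
    return np.clip(a * np.asarray(evi, float) + b, 0.0, 1.0)

def lue_from_inventory(npp_county, evi, par):
    """Invert the Monteith equation NPP = LUE * sum(fPAR * PAR) for LUE."""
    apar = np.sum(fpar_from_evi(evi) * np.asarray(par, float))
    return npp_county / apar

evi = [0.3, 0.5, 0.6, 0.4]          # 8-day EVI composites over a season
par = [120.0, 150.0, 160.0, 130.0]  # PAR per composite period
lue = lue_from_inventory(450.0, evi, par)  # county inventory NPP / seasonal APAR
```

Repeating this per county and per crop gives the spatial LUE fields whose north-south gradient the study reports.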
A fast fully constrained geometric unmixing of hyperspectral images
NASA Astrophysics Data System (ADS)
Zhou, Xin; Li, Xiao-run; Cui, Jian-tao; Zhao, Liao-ying; Zheng, Jun-peng
2014-11-01
A great challenge in hyperspectral image analysis is decomposing a mixed pixel into a collection of endmembers and their corresponding abundance fractions. This paper presents an improved implementation of the barycentric coordinate approach to unmixing hyperspectral images, integrated with the most-negative remove projection method to meet the abundance sum-to-one constraint (ASC) and abundance non-negativity constraint (ANC). The original barycentric coordinate approach interprets the unmixing problem as a simplex volume ratio problem, solved by calculating the determinants of two augmented matrices: one consists of all the endmembers, and the other consists of the to-be-unmixed pixel together with all the endmembers except the one corresponding to the abundance being estimated. In this paper, we first modify the barycentric coordinate algorithm by applying the matrix determinant lemma to simplify the unmixing process, so that the calculation involves only linear matrix and vector operations. This avoids the per-pixel determinant computation required by the original algorithm. At the end of this step, the estimated abundances meet the ASC. Then, the most-negative remove projection method is used to make the abundance fractions meet the full constraints. The algorithm is demonstrated on both synthetic and real images. It yields abundance maps similar to those obtained by FCLS, while running faster owing to its computational simplicity.
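The determinant-ratio step described above is just Cramer's rule applied to the augmented endmember matrix, and can be sketched directly. The two-band, three-endmember scene below is invented for illustration, and this sketches the original determinant formulation, not the matrix determinant lemma speedup:

```python
import numpy as np

# Hypothetical 2-band scene with 3 endmembers, so the endmember simplex
# is a triangle and the augmented matrices below are square.
E = np.array([[0.1, 0.8, 0.3],
              [0.2, 0.1, 0.9]])  # columns are endmember spectra

def barycentric_abundances(pixel, E):
    """Abundance of endmember j = det(M_j) / det(M), where M is the
    endmember matrix augmented with a row of ones and M_j has column j's
    spectrum replaced by the pixel (a simplex volume ratio)."""
    m = E.shape[1]
    M = np.vstack([E, np.ones(m)])
    denom = np.linalg.det(M)
    a = np.empty(m)
    for j in range(m):
        Mj = M.copy()
        Mj[:-1, j] = pixel  # keep the bottom row of ones
        a[j] = np.linalg.det(Mj) / denom
    return a  # sums to one (ASC) by construction; entries may be negative

true_a = np.array([0.5, 0.3, 0.2])
pixel = E @ true_a                      # noiseless mixed pixel
a = barycentric_abundances(pixel, E)
```

For a pixel inside the simplex the recovered abundances are exact; pixels outside produce negative entries, which is what the most-negative remove projection step then corrects.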
Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.
Li, Xingyu; Plataniotis, Konstantinos N
2017-01-01
In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed directly from estimated stain spectra using a matrix inverse operation, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address the color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our method effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
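As a rough illustration of the NMF-based computation module, a generic Lee-Seung multiplicative-update NMF (our stand-in, not the paper's actual solver) can factor a toy optical-density matrix built from two hypothetical stain vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy optical-density matrix: 3 color channels x 200 pixels, built from
# two invented stain vectors so an exact rank-2 factorization exists.
W_true = np.array([[0.65, 0.07],
                   [0.70, 0.99],
                   [0.29, 0.11]])          # columns: stain spectra
H_true = rng.uniform(0.0, 1.0, size=(2, 200))  # stain depths per pixel
V = W_true @ H_true

def nmf(V, rank, iters=1000, eps=1e-9):
    """Plain Lee-Seung multiplicative updates for V ~= W @ H under a
    Frobenius loss; nonnegativity is preserved because the updates only
    multiply positive quantities."""
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, rank))
    H = rng.uniform(0.1, 1.0, (rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # relative residue
```

The factorization is only identifiable up to scaling and permutation of the stains, which is why the paper's color analysis module is needed to anchor the stain spectra.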
The LSST Camera 500 watt -130 degC Mixed Refrigerant Cooling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowden, Gordon B.; Langton, Brian J.; /SLAC
2014-05-28
The LSST Camera has a higher cryogenic heat load than previous CCD telescope cameras due to its large size (634 mm diameter focal plane, 3.2 Gigapixels) and its close-coupled front-end electronics operating at low temperature inside the cryostat. Various refrigeration technologies were considered for this telescope/camera environment, and MMR-Technology's mixed refrigerant technology was chosen; a collaboration with that company was started in 2009. The system, based on a cluster of Joule-Thomson refrigerators running a special blend of mixed refrigerants, is described. Both the advantages and problems of applying this technology to telescope camera refrigeration are discussed. Test results from a prototype refrigerator running in a realistic telescope configuration are reported. Current and future stages of the development program are described.
NASA Astrophysics Data System (ADS)
Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie
2016-10-01
Water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. Although remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging, because urban water bodies are mainly small in size and spectral confusion is widespread between water and the complex features of the urban environment. The water index (WI) is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed in analyzing urban environments at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels using a water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive iterative search for the optimal neighboring land pixel, respectively; and (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatter plot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then used as the starting point for selecting land-water pixels from the histogram of the WI image, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the land-water mixed pixels for water fraction estimation at the subpixel level.
Under the assumption that the endmember signature of a target pixel should be most similar to those of adjacent pixels due to spatial dependence, the water and land endmembers are determined from neighboring pure land or pure water pixels within a given distance. To obtain the most representative endmembers for SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. According to the spectral similarity within a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel and subpixel levels were chosen. Results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and a comprehensive analysis of different accuracy evaluation indexes (RMSE and SE) shows that WISMA achieved the best performance in water mapping.
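With one water and one land endmember, the final linear-unmixing step reduces to a closed-form least-squares fraction. A minimal sketch with invented band reflectances:

```python
import numpy as np

# Sketch of the subpixel water-fraction step: the linear mixing model
# p = f*w + (1-f)*l has least-squares solution
# f = (p - l).(w - l) / ||w - l||^2. The spectra below are made up.
water = np.array([0.06, 0.04, 0.03, 0.01])  # hypothetical multispectral bands
land = np.array([0.10, 0.12, 0.15, 0.30])

def water_fraction(pixel, water, land):
    d = water - land
    f = (pixel - land) @ d / (d @ d)
    return float(np.clip(f, 0.0, 1.0))  # physical fractions lie in [0, 1]

mixed = 0.35 * water + 0.65 * land      # synthetic mixed land-water pixel
f = water_fraction(mixed, water, land)
```

For a noiseless two-endmember mixture the fraction is recovered exactly; the quality of real estimates hinges on how representative the selected endmembers are, which is the point of the adaptive selection scheme above.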
Design of FPGA ICA for hyperspectral imaging processing
NASA Astrophysics Data System (ADS)
Nordin, Anis; Hsu, Charles C.; Szu, Harold H.
2001-03-01
The remote sensing problem addressed by hyperspectral imaging can be transformed into a blind source separation problem. Using this model, hyperspectral imagery can be de-mixed into sub-pixel spectra that indicate the different materials present in each pixel. This can be further used to deduce areas that contain forest, water, or biomass, without even knowing the sources that constitute the image. This form of remote sensing allows previously blurred images to reveal the specific terrain in a region. The blind source separation problem can be solved using an Independent Component Analysis (ICA) algorithm. The ICA algorithm has previously been implemented successfully in software packages such as MATLAB, which has a downloadable version of FastICA. The challenge now lies in implementing it in hardware, or firmware, in order to improve its computational speed. A hardware implementation also solves the insufficient-memory problem encountered by software packages like MATLAB when employing ICA on high resolution images with a large number of channels. Here, a pipelined firmware solution, realized using FPGAs, is drawn out and simulated in C. Since C code can be translated into HDLs or used directly on FPGAs, it can be used to simulate the actual hardware implementation. The simulated results of the program are presented here, where seven channels are used to model the 200 different channels involved in hyperspectral imaging.
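A minimal sketch of the BSS pipeline such firmware implements: whitening followed by deflationary FastICA with a tanh contrast. The two-source toy mixture is our assumption, not the seven-channel Landsat model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two independent non-Gaussian sources (stand-ins for material abundances)
# mixed by an unknown 2x2 matrix, then recovered blindly.
n = 5000
S = rng.uniform(-1.0, 1.0, size=(2, n))
A = np.array([[0.8, 0.3],
              [0.4, 0.7]])                 # unknown mixing matrix
X = A @ S

# Whitening: rotate/scale the mixtures to unit covariance.
X = X - X.mean(axis=1, keepdims=True)
d, V = np.linalg.eigh(X @ X.T / n)
Z = V @ np.diag(d ** -0.5) @ V.T @ X

# Deflationary FastICA fixed-point iteration with a tanh contrast.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ Z
        g = np.tanh(wx)
        w_new = (Z * g).mean(axis=1) - (1.0 - g ** 2).mean() * w
        for j in range(i):                 # deflate against found components
            w_new -= (w_new @ W[j]) * W[j]
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1.0) < 1e-12
        w = w_new
        if done:
            break
    W[i] = w
S_hat = W @ Z  # recovered sources, up to sign and permutation
```

Each fixed-point update uses only dot products and elementwise nonlinearities, which is what makes a pipelined FPGA realization of this kind of loop attractive.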
Spectral Unmixing Analysis of Time Series Landsat 8 Images
NASA Astrophysics Data System (ADS)
Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.
2018-05-01
Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Nevertheless, using temporal information can improve unmixing performance compared to independent image analyses. Moreover, different land cover types may exhibit different temporal patterns, which can aid their discrimination. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used to initialize the endmembers. First, nonnegative least squares (NNLS) is used to estimate abundance maps from the endmembers. Then, each endmember is updated as the mean value of its "purified" pixels, i.e., the residuals of the mixed pixels after excluding the contributions of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework yields the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimates than a "separate unmixing" approach.
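The two alternating steps can be sketched on synthetic data. In this sketch VCA initialization is replaced by a perturbed-truth initialization and NNLS by simple multiplicative updates; both substitutions are our assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scene: 5 bands, 2 endmembers, 100 pixels (all values invented).
E_true = np.array([[0.9, 0.1],
                   [0.8, 0.2],
                   [0.7, 0.3],
                   [0.2, 0.8],
                   [0.1, 0.9]])
A_true = rng.dirichlet([1.0, 1.0], size=100).T   # 2 x 100 abundances
X = E_true @ A_true

def nnls_mult(EtE, Etx, iters=300, eps=1e-12):
    """Multiplicative-update approximation to nonnegative least squares
    (stand-in for a proper NNLS solver)."""
    a = np.full(EtE.shape[0], 0.5)
    for _ in range(iters):
        a *= Etx / (EtE @ a + eps)
    return a

# Perturbed-truth initialization, clipped to stay nonnegative.
E = np.clip(E_true + rng.normal(0.0, 0.02, E_true.shape), 1e-3, None)
for _ in range(10):
    # Step 1: abundance estimation per pixel.
    EtE = E.T @ E
    A = np.column_stack([nnls_mult(EtE, E.T @ x) for x in X.T])
    # Step 2: endmember update from "purified" pixels: subtract the
    # contribution of all other endmembers, then take a weighted mean.
    for k in range(E.shape[1]):
        others = np.delete(np.arange(E.shape[1]), k)
        R = X - E[:, others] @ A[others, :]      # purified pixels for k
        w = A[k, :]
        E[:, k] = (R @ w) / (w @ w)
A = np.column_stack([nnls_mult(E.T @ E, E.T @ x) for x in X.T])
rec = np.linalg.norm(X - E @ A) / np.linalg.norm(X)
```

On noiseless data the alternating loop drives the reconstruction residual close to zero; the time-series version applies the same two steps jointly across dates.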
Multi-target detection and positioning in crowds using multiple camera surveillance
NASA Astrophysics Data System (ADS)
Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng
2018-04-01
In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinate system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Third, the correspondences between pixels in the regions of interest are found under multiple constraints, and the targets are positioned by pixel clustering. The algorithm provides appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem in which a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem, and the three-dimensional positions of the targets can then be obtained by pixel clustering. Finally, we verified the algorithm with multiple cameras in experiments, which showed that the algorithm has high accuracy and robustness.
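The permutation-matrix formulation can be illustrated on a toy cost matrix of grayscale differences. The tiny example below enumerates all permutations for clarity; a real implementation would use a polynomial-time assignment/LP solver, and the pixel values are invented:

```python
import itertools
import numpy as np

# Grayscale values of candidate pixels seen by two cameras; the permutation
# minimizing total grayscale difference plays the role of the "similar
# permutation matrix" in the correspondence model.
g1 = np.array([10.0, 200.0, 90.0, 55.0])
g2 = np.array([88.0, 12.0, 57.0, 198.0])   # a shuffled, noisy copy of g1

cost = np.abs(g1[:, None] - g2[None, :])   # pairwise grayscale differences

# Minimize the linear objective over all permutation matrices.
best_perm = min(itertools.permutations(range(len(g1))),
                key=lambda p: sum(cost[i, j] for i, j in enumerate(p)))
```

Because the objective is linear in the permutation-matrix entries, the LP relaxation over doubly stochastic matrices attains its optimum at a permutation, which is why the linear programming formulation in the paper recovers discrete correspondences.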
Exploring the limits of identifying sub-pixel thermal features using ASTER TIR data
Vaughan, R.G.; Keszthelyi, L.P.; Davies, A.G.; Schneider, D.J.; Jaworowski, C.; Heasler, H.
2010-01-01
Understanding the characteristics of volcanic thermal emissions and how they change with time is important for forecasting and monitoring volcanic activity and potential hazards. Satellite instruments view volcanic thermal features across the globe at various temporal and spatial resolutions. Thermal features that may be a precursor to a major eruption, or indicative of important changes in an on-going eruption, can be subtle, making them challenging to reliably identify with satellite instruments. The goal of this study was to explore the limits of the types and magnitudes of thermal anomalies that could be detected using satellite thermal infrared (TIR) data. Specifically, the characterization of sub-pixel thermal features with a wide range of temperatures is considered using ASTER multispectral TIR data. First, theoretical calculations were made to define a "thermal mixing detection threshold" for ASTER, which quantifies the limits of ASTER's ability to resolve sub-pixel thermal mixing over a range of hot target temperatures and pixel area percentages. Then, ASTER TIR data were used to model sub-pixel thermal features at the Yellowstone National Park geothermal area (hot spring pools with temperatures from 40 to 90 °C) and at Mount Erebus Volcano, Antarctica (an active lava lake with temperatures from 200 to 800 °C). Finally, various sources of uncertainty in sub-pixel thermal calculations were quantified for these empirical measurements, including pixel resampling, atmospheric correction, and background temperature and emissivity assumptions.
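The two-component sub-pixel thermal mixing model underlying such a detection threshold can be sketched with the Planck function. The band wavelength, temperatures, and fraction below are illustrative, not values from the study:

```python
import math

H = 6.62607015e-34  # Planck constant [J s]
C = 2.99792458e8    # speed of light [m/s]
K = 1.380649e-23    # Boltzmann constant [J/K]

def planck(lam, T):
    """Spectral radiance B(lambda, T) [W m^-3 sr^-1]."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * T))

def mixed_radiance(lam, f, t_hot, t_bg):
    """Linear sub-pixel mixing: hot target filling fraction f of the pixel."""
    return f * planck(lam, t_hot) + (1.0 - f) * planck(lam, t_bg)

def solve_fraction(lam, L, t_hot, t_bg):
    """Invert the two-component mixing model for the hot-target fraction."""
    b_hot, b_bg = planck(lam, t_hot), planck(lam, t_bg)
    return (L - b_bg) / (b_hot - b_bg)

lam = 10.6e-6                                  # a TIR band near 10.6 um
L = mixed_radiance(lam, 0.03, 1073.0, 258.0)   # 3% lava at 800 C over a -15 C scene
f = solve_fraction(lam, L, 1073.0, 258.0)
```

In practice the inversion is limited by sensor noise, atmospheric correction, and the assumed background temperature and emissivity, which is exactly the uncertainty budget the study quantifies.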
NASA Astrophysics Data System (ADS)
Gu, Lingjia; Ren, Ruizhi; Zhao, Kai; Li, Xiaofeng
2014-01-01
The precision of snow parameter retrieval is unsatisfactory for current practical demands, primarily because of the mixed pixel problem caused by the low spatial resolution of satellite passive microwave data. A snow passive microwave unmixing method is proposed in this paper, based on land cover type data and the passive microwave antenna gain function. The land cover of Northeast China is partitioned into grass, farmland, bare soil, forest, and water body types. The component brightness temperatures (CBT), i.e., the unmixed data, are obtained at 1 km resolution using the proposed unmixing method. Snow depths determined from the CBT with three snow depth retrieval algorithms are validated against field measurements taken in forest and farmland areas of Northeast China in January 2012 and 2013. The results show that the overall retrieval precision of snow depth is improved by 17% in farmland areas and 10% in forest areas when using the CBT rather than the mixed pixels. The snow cover results based on the CBT are also compared with existing MODIS snow cover products, demonstrating that more snow cover information can be obtained, with up to 86% accuracy.
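The unmixing idea can be illustrated by recovering component brightness temperatures from coarse observations by least squares, with simple footprint fractions standing in for the combined land-cover/antenna-gain weights (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Each coarse passive-microwave observation is a weighted mix of component
# brightness temperatures (CBT); weights are the class fractions inside
# the footprint. 5 classes, e.g. grass, farmland, bare soil, forest, water.
n_obs, n_classes = 40, 5
W = rng.dirichlet(np.ones(n_classes), size=n_obs)       # footprint fractions
T_true = np.array([255.0, 250.0, 260.0, 248.0, 271.0])  # CBT per class [K]
T_obs = W @ T_true                                      # mixed observations

# Recover the per-class CBTs by least squares over many footprints.
T_cbt, *_ = np.linalg.lstsq(W, T_obs, rcond=None)
```

With noiseless observations and more footprints than classes the CBTs are recovered exactly; the real method additionally weights each footprint by the antenna gain pattern rather than by area alone.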
Amini, Kasra; Boll, Rebecca; Lauer, Alexandra; Burt, Michael; Lee, Jason W L; Christensen, Lauge; Brauße, Felix; Mullins, Terence; Savelyev, Evgeny; Ablikim, Utuq; Berrah, Nora; Bomme, Cédric; Düsterer, Stefan; Erk, Benjamin; Höppner, Hauke; Johnsson, Per; Kierspel, Thomas; Krecinic, Faruk; Küpper, Jochen; Müller, Maria; Müller, Erland; Redlin, Harald; Rouzée, Arnaud; Schirmel, Nora; Thøgersen, Jan; Techert, Simone; Toleikis, Sven; Treusch, Rolf; Trippel, Sebastian; Ulmer, Anatoli; Wiese, Joss; Vallance, Claire; Rudenko, Artem; Stapelfeldt, Henrik; Brouard, Mark; Rolles, Daniel
2017-07-07
Laser-induced adiabatic alignment and mixed-field orientation of 2,6-difluoroiodobenzene (C6H3F2I) molecules are probed by Coulomb explosion imaging following either near-infrared strong-field ionization or extreme-ultraviolet multi-photon inner-shell ionization using free-electron laser pulses. The resulting photoelectrons and fragment ions are captured by a double-sided velocity map imaging spectrometer and projected onto two position-sensitive detectors. The ion side of the spectrometer is equipped with a pixel imaging mass spectrometry camera, a time-stamping pixelated detector that can record the hit positions and arrival times of up to four ions per pixel per acquisition cycle. Thus, the time-of-flight trace and ion momentum distributions for all fragments can be recorded simultaneously. We show that we can obtain a high degree of one- and three-dimensional alignment and mixed-field orientation, and compare the Coulomb explosion process induced at both wavelengths.
Unmixing AVHRR Imagery to Assess Clearcuts and Forest Regrowth in Oregon
NASA Technical Reports Server (NTRS)
Hlavka, Christine A.; Spanner, Michael A.
1995-01-01
Advanced Very High Resolution Radiometer imagery provides frequent and low-cost coverage of the earth, but its coarse spatial resolution (approx. 1.1 km by 1.1 km) does not lend itself to standard techniques of automated categorization of land cover classes because the pixels are generally mixed; that is, the extent of the pixel includes several land use/cover classes. Unmixing procedures were developed to extract land use/cover class signatures from mixed pixels, using Landsat Thematic Mapper data as a source for the training set, and to estimate fractions of class coverage within pixels. Application of these unmixing procedures to mapping forest clearcuts and regrowth in Oregon indicated that unmixing is a promising approach for mapping major trends in land cover with AVHRR bands 1 and 2. Including thermal bands by unmixing AVHRR bands 1-4 did not lead to significant improvements in accuracy, but experiments with unmixing these four bands did indicate that use of weighted least squares techniques might lead to improvements in other applications of unmixing.
Lagrange constraint neural networks for massive pixel parallel image demixing
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Hsu, Charles C.
2002-03-01
We have shown that remote sensing optical imaging for detailed sub-pixel decomposition is a unique application of blind source separation (BSS): the mixing is truly linear for faraway weak signals, instantaneous at the speed of light without delay, and along the line of sight without multipath. In earlier papers, we presented a direct application of a statistical-mechanical de-mixing method called the Lagrange Constraint Neural Network (LCNN). While the BSAO algorithm (using an a posteriori MaxEnt ANN and neighborhood pixel averages) is not acceptable for remote sensing, a mirror-symmetric LCNN approach is, assuming a priori MaxEnt for the unknown sources averaged over the source statistics (not neighborhood pixel data) in a pixel-by-pixel independent fashion. LCNN reduces computational complexity, saves a great number of memory devices, and cuts the cost of implementation. The Landsat system is designed to measure radiation in order to deduce surface conditions and materials. For any given material, the amount of emitted and reflected radiation varies with wavelength. In practice, a single pixel of a Landsat image has seven channels receiving 0.1 to 12 microns of radiation from the ground within a 20x20 meter footprint containing a variety of radiating materials. The a priori LCNN algorithm recovers the spatial-temporal variation of the mixture, which is hardly de-mixable by other a posteriori BSS or ICA methods. We have already compared both methods on Landsat remote sensing data at WCCI 2002 in Hawaii. Unfortunately, an absolute benchmark is not possible for lack of ground truth. We therefore arbitrarily mix two incoherent sampled images to serve as the ground truth. However, since a constant total probability of co-located sources within the pixel footprint is necessary as the remote sensing constraint (on a clear day the total reflected energy is constant across neighboring receiving pixel sensors), we also have to normalize the two images pixel by pixel. The result is then indeed as expected.
A method of minimum volume simplex analysis constrained unmixing for hyperspectral image
NASA Astrophysics Data System (ADS)
Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao
2017-07-01
The signal recorded by a low resolution hyperspectral remote sensor for a given pixel, even setting aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is a research frontier in remote sensing. Geometry-based unmixing algorithms have become popular because hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most of these algorithms rely on the pure-pixel assumption, and since the non-linear mixing model is complex, it is hard to obtain optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By taking the abundance fractions into account, we obtain the pure endmember set and the corresponding abundance fractions together, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data; the results indicate that the proposed method obtains the distinct signatures correctly, without redundant endmembers, and yields much better performance than pure-pixel-based algorithms.
Satellite mapping of Nile Delta coastal changes
NASA Technical Reports Server (NTRS)
Blodget, H. W.; Taylor, P. T.; Roark, J. H.
1989-01-01
Multitemporal, multispectral scanner (MSS) Landsat data have been used to monitor erosion and sedimentation along the Rosetta Promontory of the Nile Delta. These processes have accelerated significantly since the completion of the Aswan High Dam in 1964. Digital differencing of four MSS data sets, using standard algorithms, shows that changes observed over a single-year period generally occur as strings of single mixed pixels along the coast; these can therefore only be used qualitatively to indicate areas where changes occur. Areas of change recorded over a multi-year period are generally larger and thus identified by clusters of pixels, which reduces the errors introduced by mixed pixels. Satellites provide a synoptic perspective using data acquired at frequent intervals, permitting multi-year monitoring of delta evolution on a regional scale.
NASA Astrophysics Data System (ADS)
Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu
2017-06-01
Scale problems are a major source of concern in the field of remote sensing. Because remote sensing is a complex technological system, the connotations of scale and scale effect in remote sensing are not yet fully understood. This paper therefore first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measure for analyzing pixel-based scale. However, traditional fractal dimension calculation does not consider the impact of spatial resolution, so the change of the scale effect with spatial resolution cannot be clearly reflected. Therefore, this paper proposes using spatial resolution as a modified scale parameter in two fractal methods to further analyze the pixel-based scale effect. To verify the results of the two modified methods (MFBM, the Modified Windowed Fractal Brownian Motion method based on surface area, and MDBM, the Modified Windowed Double Blanket Method), an existing scale effect analysis method (the information entropy method) is used for evaluation. Six sub-regions of building areas and farmland areas were cut out from QuickBird images as experimental data. The results of the experiment show that both the fractal dimension and the information entropy follow the same trend as spatial resolution decreases, with inflection points appearing at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which result in fewer mixed pixels in the image, and that these inflection points are significantly indicative of the observed features. The experimental results therefore indicate that the modified fractal methods effectively reflect the pixel-based scale effect in remote sensing data and help analyze the observation scale from different aspects. This research will ultimately benefit remote sensing data selection and application.
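A toy version of the information entropy comparison, with block averaging standing in for coarser spatial resolution. The synthetic noise image is our assumption; real scenes would additionally show the inflection points at geo-object feature scales:

```python
import numpy as np

rng = np.random.default_rng(3)

def shannon_entropy(img, bins=32):
    """Information entropy of the gray-level histogram over a fixed
    radiometric range, the scale measure used for comparison."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 255.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def downsample(img, factor):
    """Coarsen spatial resolution by block averaging, which is what
    creates mixed pixels at geo-object boundaries."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = rng.uniform(0.0, 255.0, size=(256, 256))  # synthetic stand-in scene
entropies = [shannon_entropy(downsample(img, f)) for f in (1, 2, 4, 8)]
```

For this textureless noise scene the entropy decreases monotonically as averaging narrows the gray-level distribution; on real imagery the decrease flattens or breaks at the feature scales the paper identifies.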
Small-angle solution scattering using the mixed-mode pixel array detector.
Koerner, Lucas J; Gillilan, Richard E; Green, Katherine S; Wang, Suntao; Gruner, Sol M
2011-03-01
Solution small-angle X-ray scattering (SAXS) measurements were obtained using a 128 × 128 pixel X-ray mixed-mode pixel array detector (MMPAD) with an 860 µs readout time. The MMPAD offers advantages for SAXS experiments: a pixel full-well of >2 × 10^7 10 keV X-rays, a maximum flux rate of 10^8 X-rays pixel^-1 s^-1, and a sub-pixel point-spread function. Data from the MMPAD were quantitatively compared with data from a charge-coupled device (CCD) fiber-optically coupled to a phosphor screen. MMPAD solution SAXS data from lysozyme solutions were of equal or better quality than data captured by the CCD. The read-noise (normalized by pixel area) of the MMPAD was less than that of the CCD by an average factor of 3.0. Short sample-to-detector distances were required owing to the small MMPAD area (19.2 mm × 19.2 mm), and were revealed to be advantageous with respect to detector read-noise. As predicted by Shannon sampling theory and confirmed by the acquisition of lysozyme solution SAXS curves, the MMPAD at short distances is capable of sufficiently sampling a solution SAXS curve for protein shape analysis. The readout speed of the MMPAD was demonstrated by continuously monitoring lysozyme sample evolution as radiation damage accumulated. These experiments prove that a small, suitably configured MMPAD is appropriate for time-resolved solution scattering measurements.
NASA Technical Reports Server (NTRS)
Shimabukuro, Yosio Edemir; Smith, James A.
1991-01-01
Constrained-least-squares and weighted-least-squares mixing models for generating fraction images from multispectral remote sensing data are presented. An experiment considering three components within the pixels, namely eucalyptus, soil (understory), and shade, was performed. The shade fraction images generated by these two methods were compared in terms of performance and computing time. The derived shade images are related to the observed variation in forest structure; that is, the fraction of inferred shade in a pixel is related to different eucalyptus ages.
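The sum-to-one constrained least-squares fit can be sketched by appending a heavily weighted constraint row to the mixing system, a common numerical trick. The three component spectra below are invented for illustration:

```python
import numpy as np

# Hypothetical 4-band mixing matrix; columns are component spectra
# (eucalyptus, soil understory, shade), all values made up.
E = np.array([[0.30, 0.45, 0.05],
              [0.50, 0.40, 0.04],
              [0.40, 0.55, 0.03],
              [0.60, 0.35, 0.02]])

def cls_fractions(pixel, E, weight=1e6):
    """Sum-to-one constrained least squares: enforce sum(f) = 1 by
    appending a heavily weighted row of ones to the system."""
    A = np.vstack([E, weight * np.ones(E.shape[1])])
    b = np.append(pixel, weight * 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

true_f = np.array([0.6, 0.3, 0.1])   # e.g. 10% inferred shade
pixel = E @ true_f                   # noiseless mixed pixel
f = cls_fractions(pixel, E)
```

Mapping `f[2]` over every pixel produces the shade fraction image discussed above; the weighted-least-squares variant instead weights each band by its noise estimate.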
Favre-Averaged Turbulence Statistics in Variable Density Mixing of Buoyant Jets
NASA Astrophysics Data System (ADS)
Charonko, John; Prestridge, Kathy
2014-11-01
Variable density mixing of a heavy fluid jet with lower density ambient fluid in a subsonic wind tunnel was experimentally studied using Particle Image Velocimetry and Planar Laser Induced Fluorescence to simultaneously measure velocity and density. Flows involving the mixing of fluids with large density ratios are important in a range of physical problems including atmospheric and oceanic flows, industrial processes, and inertial confinement fusion. Here we focus on buoyant jets with coflow. Results from two different Atwood numbers, 0.1 (Boussinesq limit) and 0.6 (non-Boussinesq case), reveal that buoyancy is important for most of the turbulent quantities measured. Statistical characteristics of the mixing important for modeling these flows such as the PDFs of density and density gradients, turbulent kinetic energy, Favre averaged Reynolds stress, turbulent mass flux velocity, density-specific volume correlation, and density power spectra were also examined and compared with previous direct numerical simulations. Additionally, a method for directly estimating Reynolds-averaged velocity statistics on a per-pixel basis is extended to Favre-averages, yielding improved accuracy and spatial resolution as compared to traditional post-processing of velocity and density fields.
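The density-weighted (Favre) average used in these statistics can be illustrated per pixel; the synthetic density and velocity samples below stand in for simultaneous PIV/PLIF measurements:

```python
import numpy as np

rng = np.random.default_rng(5)

# Per-pixel time series of simultaneous density and velocity samples
# (synthetic, with velocity deliberately correlated with density).
rho = rng.uniform(1.0, 3.0, size=10000)
u = 2.0 + 0.5 * (rho - rho.mean()) + rng.normal(0.0, 0.1, size=rho.size)

u_reynolds = u.mean()                    # plain Reynolds average
u_favre = (rho * u).mean() / rho.mean()  # density-weighted (Favre) average

# Favre fluctuation u'' = u - u_favre; its density-weighted mean vanishes
# by construction, which is what makes Favre averaging convenient in
# variable-density turbulence models.
u_pp = u - u_favre
check = (rho * u_pp).mean() / rho.mean()
```

When velocity and density fluctuations correlate, as in a heavy jet, the Favre and Reynolds averages differ, and the gap is proportional to the turbulent mass flux the paper measures.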
Study of run time errors of the ATLAS pixel detector in the 2012 data taking period
NASA Astrophysics Data System (ADS)
Gandrajula, Reddy Pratap
The high resolution silicon pixel detector is critical for event vertex reconstruction and particle track reconstruction in the ATLAS detector. During pixel data taking, some modules (silicon pixel sensor + front-end chip + module control chip (MCC)) go into an auto-disabled state in which they do not send data for storage. The modules become operational again after reconfiguration. The source of the problem is not fully understood; one possible source is the occurrence of single event upsets (SEU) in the MCC. Such a module goes into either a Timeout or a Busy state. This report studies the different types and rates of errors occurring during pixel data taking, including the dependence of the error rates on the pixel detector geometry.
Davis, Anthony B.; Xu, Feng; Collins, William D.
2015-03-01
Atmospheric hyperspectral VNIR sensing struggles with the sub-pixel variability of clouds and with limited spectral resolution that mixes molecular lines. Our generalized radiative transfer model addresses both issues with new propagation kernels characterized by power-law decay in space.
Quantification of brain tissue through incorporation of partial volume effects
NASA Astrophysics Data System (ADS)
Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.
1992-06-01
This research addresses the problem of automatically quantifying the various types of brain tissue (CSF, white matter, and gray matter) using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials, which can then be quantified. Emphasis is placed on repeatable results for which a confidence in the solution can be measured. Results are presented for both simulated and actual data, assuming a single Gaussian noise source and a uniform distribution of partial volume pixels. Thus far, results have been mixed, with no clear advantage shown in taking partial volume effects into account. Because the fitting problem is ill-conditioned, it is not yet clear whether these results are due to problems with the model or with the method of solution.
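Assuming, as above, a single Gaussian noise source and a uniform distribution of partial volume pixels, the mixture density has a closed form (the convolution of a uniform density with a Gaussian is a difference of normal CDFs). The tissue means, weights, and noise level below are illustrative only:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pv_pdf(x, mu1, mu2, sigma):
    """Density of a partial-volume pixel: intensity uniform on
    (mu1, mu2) plus Gaussian noise, integrated in closed form."""
    return (norm_cdf((x - mu1) / sigma) - norm_cdf((x - mu2) / sigma)) / (mu2 - mu1)

def tissue_pdf(x, mu, sigma):
    """Pure-tissue density: a single Gaussian noise source about the mean."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def brain_pdf(x, w_a=0.2, w_pv=0.1, w_b=0.7,
              mu_a=30.0, mu_b=90.0, sigma=5.0):
    """Two tissue classes plus a uniform partial-volume term between
    their means (weights and means are illustrative, not fitted)."""
    return (w_a * tissue_pdf(x, mu_a, sigma)
            + w_pv * pv_pdf(x, mu_a, mu_b, sigma)
            + w_b * tissue_pdf(x, mu_b, sigma))

# Sanity check: the model density integrates to ~1 over the intensity range.
total = sum(brain_pdf(0.5 * i) * 0.5 for i in range(0, 300))
```

Fitting the weights and means of `brain_pdf` to the image histogram, then thresholding where adjacent component densities cross, is the shape of the quantification procedure described above.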
NASA Astrophysics Data System (ADS)
Umar, M.; Rhoads, Bruce L.; Greenberg, Jonathan A.
2018-01-01
Although past work has noted that contrasts in turbidity often are detectable on remotely sensed images of rivers downstream from confluences, no systematic methodology has been developed for assessing mixing over distance of confluent flows with differing surficial suspended sediment concentrations (SSSC). In contrast to field measurements of mixing below confluences, satellite remote-sensing can provide detailed information on spatial distributions of SSSC over long distances. This paper presents a methodology that uses remote-sensing data to estimate spatial patterns of SSSC downstream of confluences along large rivers and to determine changes in the amount of mixing over distance from confluences. The method develops a calibrated Random Forest (RF) model by relating training SSSC data from river gaging stations to derived spectral indices for the pixels corresponding to gaging-station locations. The calibrated model is then used to predict SSSC values for every river pixel in a remotely sensed image, which provides the basis for mapping of spatial variability in SSSCs along the river. The pixel data are used to estimate average surficial values of SSSC at cross sections spaced uniformly along the river. Based on the cross-section data, a mixing metric is computed for each cross section. The spatial pattern of change in this metric over distance can be used to define rates and length scales of surficial mixing of suspended sediment downstream of a confluence. This type of information is useful for exploring the potential influence of various controlling factors on mixing downstream of confluences, for evaluating how mixing in a river system varies over time and space, and for determining how these variations influence water quality and ecological conditions along the river.
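A sketch of the cross-section mixing metric; the specific metric (coefficient of variation of predicted SSSC across a section) and all the numbers are our assumptions for illustration:

```python
import numpy as np

def mixing_metric(sssc_cross_section):
    """Coefficient of variation of per-pixel SSSC predictions across a
    cross section: large just below a confluence, near zero when mixed."""
    vals = np.asarray(sssc_cross_section, dtype=float)
    return vals.std() / vals.mean()

# Hypothetical RF-predicted SSSC (mg/L) sampled across three cross
# sections downstream of a junction of ~100 and ~300 mg/L tributaries.
sections = [
    [100, 100, 105, 290, 300, 300],   # just below the junction
    [140, 150, 180, 230, 255, 260],   # partially mixed
    [196, 198, 200, 200, 202, 204],   # nearly mixed
]
metrics = [mixing_metric(s) for s in sections]
```

Plotting such a metric against downstream distance, and reading off where it falls below a chosen threshold, yields the mixing length scale the methodology is designed to estimate.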
Nonlinear hyperspectral unmixing based on sparse non-negative matrix factorization
NASA Astrophysics Data System (ADS)
Li, Jing; Li, Xiaorun; Zhao, Liaoying
2016-01-01
Hyperspectral unmixing aims at extracting pure material spectra, together with their corresponding proportions, from a mixed pixel. Because they model the distribution of real materials more accurately, nonlinear mixing models (non-LMMs) are usually considered to outperform LMMs in complicated scenarios. In the past years, numerous nonlinear models have been successfully applied to hyperspectral unmixing. However, most non-LMMs account only for the sum-to-one or positivity constraints, while the sparsity that is widespread in real material mixing cannot be ignored: a pixel is usually composed of only a few spectral signatures drawn from the full set of pure materials. In this paper, therefore, a smooth sparsity constraint is incorporated into the state-of-the-art Fan nonlinear model to exploit this sparsity and enhance unmixing performance. The sparsity-constrained Fan model is solved with non-negative matrix factorization. The algorithm was applied to synthetic and real hyperspectral data and showed an advantage over competing algorithms in the experiments.
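As a rough illustration of sparsity-constrained NMF unmixing (using the plain linear mixing model rather than the paper's Fan model, with illustrative parameter values), multiplicative updates with an L1 penalty on the abundances look like this:

```python
import numpy as np

def sparse_nmf_unmix(V, p, lam=0.001, n_iter=1000, seed=0):
    """Factor V (bands x pixels) into endmembers W (bands x p) and
    abundances H (p x pixels), with an L1 sparsity penalty lam on H.
    Linear-mixing sketch only; the paper adds sparsity to the Fan model.
    Sum-to-one can be imposed by rescaling H afterwards."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], p)) + 0.1
    H = rng.random((p, V.shape[1])) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity; lam shrinks H.
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Recover 2 endmembers from 30 synthetic mixed pixels over 6 bands.
rng = np.random.default_rng(1)
W_true = rng.random((6, 2))
H_true = rng.dirichlet((0.5, 0.5), size=30).T   # sparse-ish abundances
V = W_true @ H_true
W_est, H_est = sparse_nmf_unmix(V, p=2)
rel_err = np.linalg.norm(V - W_est @ H_est) / np.linalg.norm(V)
print(rel_err)  # small reconstruction error on this exact-rank toy data
```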
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, Julian; Tate, Mark W.; Shanks, Katherine S.
Pixel Array Detectors (PADs) consist of an x-ray sensor layer bonded pixel-by-pixel to an underlying readout chip. This approach allows both the sensor and the custom pixel electronics to be tailored independently to best match the x-ray imaging requirements. Here we describe the hybridization of CdTe sensors to two different charge-integrating readout chips, the Keck PAD and the Mixed-Mode PAD (MM-PAD), both developed previously in our laboratory. The charge-integrating architecture of each of these PADs extends the instantaneous counting rate by many orders of magnitude beyond that obtainable with photon counting architectures. The Keck PAD chip consists of rapid, 8-frame, in-pixel storage elements with framing periods <150 ns. The second detector, the MM-PAD, has an extended dynamic range by utilizing an in-pixel overflow counter coupled with charge removal circuitry activated at each overflow. This allows the recording of signals from the single-photon level to tens of millions of x-rays/pixel/frame while framing at 1 kHz. Both detector chips consist of a 128×128 pixel array with (150 µm)² pixels.
Variable pixel size ionospheric tomography
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Zheng, Hongwei; Wang, Yanjun; Nie, Wenfeng; Li, Chaokui; Ao, Minsi; Hu, Wusheng; Zhou, Wei
2017-06-01
A novel ionospheric tomography technique based on variable pixel size was developed for the tomographic reconstruction of the ionospheric electron density (IED) distribution. In the variable pixel size computerized ionospheric tomography (VPSCIT) model, the IED distribution is parameterized by decomposing the lower and upper ionosphere with different pixel sizes, so that the lower and upper IED distributions can be determined quite differently by the available data. In most other respects, variable pixel size ionospheric tomography is similar to constant pixel size tomography. The two kinds of model differ in two main respects: first, the segments of each GPS signal ray path must be assigned to the appropriate kind of pixel in the inversion; second, the smoothness constraint factor must be modified appropriately where the pixel size changes. For a real dataset, the variable pixel size method distinguishes different electron density distribution zones better than the constant pixel size method, particularly when effort is spent to identify the regions of the model with the best data coverage. The variable pixel size method can not only greatly improve the efficiency of the inversion, but also produce IED images with fidelity comparable to that of a uniform pixel size method. In addition, variable pixel size tomography can reduce the underdetermination of the ill-posed inverse problem when data coverage is irregular or sparse, by adjusting the proportions of pixels with different sizes. In comparison with constant pixel size tomography models, the variable pixel size technique achieved relatively good results in a numerical simulation, and a careful validation of its reliability and superiority was performed.
Finally, according to the results of the statistical analysis and quantitative comparison, the proposed method offers an improvement of 8% compared with conventional constant pixel size tomography models in the forward modeling.
Small-angle solution scattering using the mixed-mode pixel array detector
Koerner, Lucas J.; Gillilan, Richard E.; Green, Katherine S.; Wang, Suntao; Gruner, Sol M.
2011-01-01
Solution small-angle X-ray scattering (SAXS) measurements were obtained using a 128 × 128 pixel X-ray mixed-mode pixel array detector (MMPAD) with an 860 µs readout time. The MMPAD offers advantages for SAXS experiments: a pixel full-well of >2 × 10⁷ 10 keV X-rays, a maximum flux rate of 10⁸ X-rays pixel⁻¹ s⁻¹, and a sub-pixel point-spread function. Data from the MMPAD were quantitatively compared with data from a charge-coupled device (CCD) fiber-optically coupled to a phosphor screen. MMPAD solution SAXS data from lysozyme solutions were of equal or better quality than data captured by the CCD. The read-noise (normalized by pixel area) of the MMPAD was less than that of the CCD by an average factor of 3.0. Short sample-to-detector distances were required owing to the small MMPAD area (19.2 mm × 19.2 mm), and were revealed to be advantageous with respect to detector read-noise. As predicted by Shannon sampling theory and confirmed by the acquisition of lysozyme solution SAXS curves, the MMPAD at short distances is capable of sufficiently sampling a solution SAXS curve for protein shape analysis. The readout speed of the MMPAD was demonstrated by continuously monitoring lysozyme sample evolution as radiation damage accumulated. These experiments prove that a small, suitably configured MMPAD is appropriate for time-resolved solution scattering measurements.
Methods of editing cloud and atmospheric layer affected pixels from satellite data
NASA Technical Reports Server (NTRS)
Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.; Goodier, B. G.
1981-01-01
The location and migration of cloud, land and water features were examined in spectral space (reflective VIS vs. emissive IR). Daytime HCMM data showed two distinct types of cloud affected pixels in the south Texas test area. High altitude cirrus and/or cirrostratus and "subvisible cirrus" (SCi) reflected the same or only slightly more than land features. In the emissive band, the digital counts ranged from 1 to over 75 and overlapped land features. Pixels consisting of cumulus clouds, or of mixed cumulus and landscape, clustered in a different area of spectral space than the high altitude cloud pixels. Cumulus affected pixels were more reflective than land and water pixels. In August the high altitude clouds and SCi were more emissive than similar clouds were in July. Four-channel TIROS-N data were examined with the objective of developing a multispectral screening technique for removing SCi contaminated data.
A digital pixel cell for address event representation image convolution processing
NASA Astrophysics Data System (ADS)
Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons also generate events according to their information levels: neurons with more information (activity, derivative of activity, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, etc. There has also been a proposal for realizing programmable-kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have previously been proposed using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation that performs the weighted additions and fires the events. This provides, for a given technology, a fully digital reference implementation against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel, which will be used to implement a full AER convolution chip for programmable-kernel image convolution processing.
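The accumulate-and-fire behaviour described above can be sketched behaviourally as follows (class and attribute names are illustrative, not taken from the chip design):

```python
class DigitalAERPixel:
    """Digital convolution pixel: accumulates signed kernel weights
    carried by incoming address events and fires an output event each
    time the accumulator reaches the programmed threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.acc = 0        # in-pixel digital accumulator
        self.fired = 0      # output events routed off-chip

    def receive_event(self, weight):
        self.acc += weight  # weighted addition of one input event
        # Fire (and subtract threshold) whenever the threshold is reached.
        while self.acc >= self.threshold:
            self.acc -= self.threshold
            self.fired += 1

pix = DigitalAERPixel(threshold=10)
for w in (4, 4, 4, -2, 7):  # kernel weights of successive input events
    pix.receive_event(w)
print(pix.fired, pix.acc)   # one output event fired; residue of 7 remains
```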
NASA Technical Reports Server (NTRS)
Myneni, Ranga
2003-01-01
The problem of how the scale, or spatial resolution, of reflectance data impacts retrievals of vegetation leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) has been investigated. We define the goal of scaling as the process by which it is established that LAI and FPAR values derived from coarse resolution sensor data equal the arithmetic average of values derived independently from fine resolution sensor data. The increasing probability of land cover mixtures with decreasing resolution is defined as heterogeneity, which is a key concept in scaling studies. The effect of pixel heterogeneity on spectral reflectances and LAI/FPAR retrievals is investigated with 1 km Advanced Very High Resolution Radiometer (AVHRR) data aggregated to different coarse spatial resolutions. It is shown that LAI retrieval errors at coarse resolution are inversely related to the proportion of the dominant land cover in such pixels. Further, larger errors in LAI retrievals are incurred when forests are minority biomes in non-forest pixels, and vice versa, than when forest biomes are mixed with one another. A physically based technique for scaling, with an explicitly spatial-resolution-dependent radiative transfer formulation, is developed. The successful application of this theory to scaling LAI retrievals from AVHRR data of different resolutions is demonstrated.
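The scaling goal and the heterogeneity effect can be illustrated with a toy aggregation: for any nonlinear retrieval, applying it to a block-averaged reflectance of a mixed pixel differs from averaging the fine-resolution retrievals (the quadratic "retrieval" below is invented purely for illustration and is not a real LAI algorithm):

```python
import numpy as np

def block_average(fine, f):
    """Aggregate a fine-resolution field by arithmetic f x f block averaging."""
    h, w = fine.shape
    return fine.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def retrieve_lai(refl):
    # Toy nonlinear retrieval (illustrative only).
    return 8.0 * refl ** 2

# One mixed coarse pixel: half dark (non-forest), half bright (forest).
fine_refl = np.array([[0.2, 0.2],
                      [0.8, 0.8]])
lai_scaling_goal = block_average(retrieve_lai(fine_refl), 2)  # avg of fine LAI
lai_coarse = retrieve_lai(block_average(fine_refl, 2))        # naive coarse LAI
print(lai_scaling_goal[0, 0], lai_coarse[0, 0])  # differ for a mixed pixel
```

The gap between the two numbers is exactly the heterogeneity-induced retrieval error the abstract discusses; for a homogeneous block the two coincide.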
Shannon L. Savage; Rick L. Lawrence; John R. Squires
2015-01-01
Ecological and land management applications would often benefit from maps of relative canopy cover of each species present within a pixel, instead of traditional remote-sensing based maps of either dominant species or percent canopy cover without regard to species composition. Widely used statistical models for remote sensing, such as randomForest (RF),...
NASA Technical Reports Server (NTRS)
Panciera, Rocco; Walker, Jeffrey P.; Kalma, Jetse; Kim, Edward
2011-01-01
The Soil Moisture and Ocean Salinity (SMOS) mission, launched in November 2009, provides global maps of soil moisture and ocean salinity by measuring the L-band (1.4 GHz) emission of the Earth's surface with a spatial resolution of 40-50 km. Uncertainty in the retrieval of soil moisture over large heterogeneous areas such as SMOS pixels is expected, due to the non-linearity of the relationship between soil moisture and the microwave emission. The current baseline soil moisture retrieval algorithm adopted by SMOS and implemented in the SMOS Level 2 (SMOS L2) processor partially accounts for the sub-pixel heterogeneity of the land surface by modelling the individual contributions of different pixel fractions to the overall pixel emission. This retrieval approach is tested in this study using airborne L-band data over an area the size of a SMOS pixel characterised by a mix of Eucalypt forest and moderate vegetation types (grassland and crops), with the objective of assessing its ability to correct for the soil moisture retrieval error induced by land surface heterogeneity. A preliminary analysis using a traditional uniform pixel retrieval approach shows that the sub-pixel heterogeneity of land cover type causes significant errors in soil moisture retrieval (7.7% v/v RMSE, 2% v/v bias) in pixels characterised by a significant amount of forest (40-60%). Although the retrieval approach adopted by SMOS partially reduces this error, it is still affected by errors beyond the SMOS target accuracy, presenting in particular a strong dry bias when a fraction of the pixel is occupied by forest (4.1% v/v RMSE, -3.1% v/v bias). An extension to the SMOS approach is proposed that accounts for the heterogeneity of vegetation optical depth within the SMOS pixel.
The proposed approach is shown to significantly reduce the error in retrieved soil moisture (2.8% v/v RMSE, -0.3% v/v bias) in pixels characterised by a critical amount of forest (40-60%), at the limited cost of only a crude estimate of the optical depth of the forested area (better than 35% uncertainty). This study makes use of an unprecedented data set of airborne L-band observations and ground supporting data from the National Airborne Field Experiment 2005 (NAFE'05), which allowed accurate characterisation of the land surface heterogeneity over an area equivalent in size to a SMOS pixel.
Dead pixel replacement in LWIR microgrid polarimeters.
Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P
2007-06-11
LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.
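A minimal sketch of the first (modified nearest-neighbour) scheme, assuming a standard 2×2 microgrid layout in which the nearest pixels of the same polarization sit two pixels away (the function name and array values are illustrative):

```python
import numpy as np

def replace_dead_pixels(img, dead_mask):
    """Modified nearest-neighbour replacement on a 2x2 microgrid:
    a dead pixel is replaced by the mean of its nearest live
    neighbours of the SAME polarization, which lie two pixels away."""
    out = img.astype(float).copy()
    h, w = img.shape
    for r, c in zip(*np.nonzero(dead_mask)):
        vals = [out[r + dr, c + dc]
                for dr, dc in ((-2, 0), (2, 0), (0, -2), (0, 2))
                if 0 <= r + dr < h and 0 <= c + dc < w
                and not dead_mask[r + dr, c + dc]]
        if vals:  # leave the pixel untouched if no live neighbour exists
            out[r, c] = np.mean(vals)
    return out

img = np.arange(16.0).reshape(4, 4)
dead = np.zeros((4, 4), dtype=bool)
dead[1, 1] = True
fixed = replace_dead_pixels(img, dead)
print(fixed[1, 1])  # mean of the in-bounds same-polarization neighbours
```

The redundancy-based scheme the paper prefers instead exploits the fact that the four Stokes measurements over-determine the polarization state, but its details are not reproduced here.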
Pixel decomposition for tracking in low resolution videos
NASA Astrophysics Data System (ADS)
Govinda, Vivekanand; Ralph, Jason F.; Spencer, Joseph W.; Goulermas, John Y.; Yang, Lihua; Abbas, Alaa M.
2008-04-01
This paper describes a novel set of algorithms that allows indoor activity to be monitored using data from very low resolution imagers and other non-intrusive sensors. The objects are not resolved but activity may still be determined. This allows the use of such technology in sensitive environments where privacy must be maintained. Spectral un-mixing algorithms from remote sensing were adapted for this environment. These algorithms allow the fractional contributions from different colours within each pixel to be estimated and this is used to assist in the detection and monitoring of small objects or sub-pixel motion.
NASA Technical Reports Server (NTRS)
Wrigley, Christopher James (Inventor); Hancock, Bruce R. (Inventor); Cunningham, Thomas J. (Inventor); Newton, Kenneth W. (Inventor)
2014-01-01
An analog-to-digital converter (ADC) converts pixel voltages from a CMOS image sensor into a digital output. A voltage ramp generator generates a voltage ramp with a linear first portion and a non-linear second portion. A digital output generator generates a digital output based on the voltage ramp, the pixel voltages, and the output of an array of comparators that compare the voltage ramp to the pixel voltages. A return lookup table linearizes the digital output values.
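The scheme can be illustrated with a toy single-slope conversion: a ramp that is linear for low signals and non-linear for high signals, a comparator that latches the count at which the ramp passes the pixel voltage, and a return lookup table that maps the raw count back to volts (all segment shapes and constants below are invented for illustration):

```python
import numpy as np

# Toy voltage ramp: linear for low signals (fine quantization),
# non-linear (accelerating) for high signals to extend the range.
counts = np.arange(1024)
ramp = np.where(counts < 512,
                counts * 1e-3,                                # linear portion
                0.512 + 50.0 * ((counts - 512) * 1e-3) ** 2)  # non-linear portion

def convert(v_pixel):
    """Comparator behaviour: first count at which the ramp reaches v_pixel."""
    return int(np.searchsorted(ramp, v_pixel))

def linearize(code):
    """Return lookup table: map a raw count back to a voltage."""
    return float(ramp[code])

code = convert(0.100)  # a pixel voltage on the linear segment
print(code, linearize(code))
```

Because the lookup table is just the ramp itself sampled at each count, the round trip `linearize(convert(v))` recovers `v` to within one ramp step.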
Mapping shorelines to subpixel accuracy using Landsat imagery
NASA Astrophysics Data System (ADS)
Abileah, Ron; Vignudelli, Stefano; Scozzari, Andrea
2013-04-01
A promising method to accurately map the shoreline of oceans, lakes, reservoirs, and rivers is proposed and verified in this work. The method is applied to multispectral satellite imagery in two stages. The first stage is a classification of each image pixel into land/water categories using the conventional 'dark pixel' method. The approach presented here makes use of a single shortwave IR (SWIR) image band, if available. It is well known that SWIR has the least water-leaving radiance and relatively little sensitivity to water pollutants and suspended sediments. It is generally the darkest (over water) and most reliable single band for land-water discrimination. The boundary of the water cover map determined in stage 1 underestimates the water cover and often misses the true shoreline by up to one pixel. A more accurate shoreline would be obtained by connecting the center points of pixels with an exactly 50-50 mix of water and land. Stage 2 therefore finds the 50-50 mix points. According to the method proposed, image data is interpolated and up-sampled to ten times the original resolution. The local gradient in radiance is used to find the direction to the shore, searching along that path for the interpolated pixel closest to a 50-50 mix. Landsat images with 30 m resolution, processed by this method, may thus provide the shoreline accurate to 3 m. Compared to similar approaches available in the literature, the method proposed discriminates sub-pixels crossed by the shoreline using a criterion based on the absolute value of radiance, rather than its gradient. Preliminary experimentation with the algorithm shows that 10 m accuracy is easily achieved and in some cases is better than 5 m. The proposed method can be used to study long-term shoreline changes by exploiting the 30 years of archived world-wide coverage Landsat imagery. Landsat imagery is free and easily accessible for downloading.
Some applications that exploit the Landsat dataset and the new method are discussed in the companion poster: "Case-studies of potential applications for highly resolved shorelines."
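In one dimension, the stage-2 idea reduces to up-sampling a transect across the shoreline and picking the sample closest to the 50-50 radiance mix between the water and land levels (the radiance values and pixel spacing below are invented for illustration):

```python
import numpy as np

# SWIR radiance along a transect crossing the shoreline (water is dark,
# land is bright); pixel centres at 30 m spacing.
transect = np.array([2.0, 2.1, 2.0, 9.5, 21.0, 20.8, 21.1])
x = np.arange(len(transect)) * 30.0            # metres

# Stage 2 sketch: up-sample 10x and locate the 50-50 mixed sample.
x_fine = np.arange(x[0], x[-1], 3.0)           # 3 m grid (30 m / 10)
r_fine = np.interp(x_fine, x, transect)
halfway = 0.5 * (np.median(transect[:3]) + np.median(transect[-3:]))
shore = x_fine[np.argmin(np.abs(r_fine - halfway))]
print(shore)  # sub-pixel shoreline position in metres
```

The full 2-D method searches along the local radiance gradient direction rather than a fixed transect, but the 50-50 criterion is the same.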
NASA Astrophysics Data System (ADS)
Atzberger, C.
2013-12-01
The robust and accurate retrieval of vegetation biophysical variables using RTMs is seriously hampered by the ill-posedness of the inverse problem. This contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results than the traditional pixel-based inversion. Principle of the ill-posed inverse problem and the proposed solution, illustrated in the red-nIR feature space using PROSAIL: [A] spectral trajectory ('soil trajectory') obtained for one leaf angle (ALA) and one soil brightness (αsoil) when LAI varies between 0 and 10; [B] 'soil trajectories' for 5 soil brightness values and three leaf angles; [C] ill-posed inverse problem: different combinations of ALA × αsoil yield an identical crossing point; [D] object-based RTM inversion: only one 'soil trajectory' fits all nine pixels within a gliding (3×3) window. The black dots (plus the rectangle = central pixel) represent the hypothetical positions of nine pixels within a 3×3 (gliding) window. Assuming that over short distances (~1 pixel) variations in soil brightness can be neglected, the proposed object-based inversion searches for one common set of ALA × αsoil so that the resulting 'soil trajectory' best fits the nine measured pixels. Ground-measured vs. retrieved LAI values for three crops: left, the proposed object-based approach; right, pixel-based inversion.
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous approaches.
SVGA and XGA LCOS microdisplays for HMD applications
NASA Astrophysics Data System (ADS)
Bolotski, Michael; Alvelda, Phillip
1999-07-01
MicroDisplay liquid crystal on silicon (LCOS) display devices are based on a combination of technologies, together with the extreme integration capability of conventionally fabricated CMOS substrates. Two recent SVGA (800 × 600) pixel resolution designs were demonstrated, based on 10-micron and 12.5-micron pixel pitch architectures; the resulting microdisplays measure approximately 10 mm and 12 mm in diagonal, respectively. Further, an XGA (1024 × 768) resolution display fabricated with a 12.5-micron pixel pitch and a 16-mm diagonal was also demonstrated. Both the larger SVGA and the XGA designs were based on the same 12.5-micron pixel-pitch design, demonstrating a quickly scalable design architecture for rapid prototyping life-cycles. All three microdisplay designs described above function in grayscale and high-performance Field-Sequential-Color (FSC) operating modes. The fast liquid crystal operating modes and new scalable high-performance pixel addressing architectures presented in this paper enable substantially improved color, contrast, and brightness while still satisfying the optical, packaging, and power requirements of portable commercial and defense applications, including ultra-portable helmet, eyeglass, and head-mounted systems. The entire suite of The MicroDisplay Corporation's technologies was devised to create a line of mixed-signal application-specific integrated circuits (ASICs) as single-chip display systems. Mixed-signal circuits can integrate computing, memory, and communication circuitry on the same substrate as the display drivers and pixel array for a multifunctional complete system-on-a-chip. For helmet and head-mounted displays this can include capabilities such as the incorporation of customized symbology and information storage directly on the display substrate. System-on-a-chip benefits also include reduced head-supported weight through the elimination of off-chip drive electronics.
Mitigation of image artifacts in LWIR microgrid polarimeter images
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; Tyo, J. Scott; Boger, James K.; Black, Wiley T.; Bowers, David M.; Kumar, Rakesh
2007-09-01
Microgrid polarimeters, also known as division of focal plane (DoFP) polarimeters, are composed of an integrated array of micropolarizing elements that immediately precedes the FPA. The result of the DoFP device is that neighboring pixels sense different polarization states. The measurements made at each pixel can be combined to estimate the Stokes vector at every reconstruction point in a scene. DoFP devices have the advantage that they are mechanically rugged and inherently optically aligned. However, they suffer from the severe disadvantage that the neighboring pixels that make up the Stokes vector estimates have different instantaneous fields of view (IFOV). This IFOV error leads to spatial differencing that causes false polarization signatures, especially in regions of the image where the scene changes rapidly in space. Furthermore, when the polarimeter is operating in the LWIR, the FPA has inherent response problems such as nonuniformity and dead pixels that make the false polarization problem that much worse. In this paper, we present methods that use spatial information from the scene to mitigate two of the biggest problems that confront DoFP devices. The first is a polarimetric dead pixel replacement (DPR) scheme, and the second is a reconstruction method that chooses the most appropriate polarimetric interpolation scheme for each particular pixel in the image based on the scene properties. We have found that these two methods can greatly improve both the visual appearance of polarization products as well as the accuracy of the polarization estimates, and can be implemented with minimal computational cost.
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
Lattice algebra approach to multispectral analysis of ancient documents.
Valdiviezo-N, Juan C; Urcid, Gonzalo
2013-02-01
This paper introduces a lattice algebra procedure that can be used for the multispectral analysis of historical documents and artworks. Assuming the presence of linearly mixed spectral pixels captured in a multispectral scene, the proposed method computes the scaled min- and max-lattice associative memories to determine the purest pixels that best represent the spectra of single pigments. The estimation of fractional proportions of pure spectra at each image pixel is used to build pigment abundance maps that can be used for subsequent restoration of damaged parts. Application examples include multispectral images acquired from the Archimedes Palimpsest and a Mexican pre-Hispanic codex.
NASA Astrophysics Data System (ADS)
Lin, Shengmin; Lin, Chi-Pin; Wang, Weng-Lyang; Hsiao, Feng-Ke; Sikora, Robert
2009-08-01
A 256x512 element digital image sensor has been developed with a large pixel size, slow scan and low power consumption for Hyper Spectral Imager (HySI) applications. The device is a mixed-mode, system-on-chip (SOC) IC. It combines analog circuitry, digital circuitry and optical sensor circuitry in a single chip, integrating a 256x512 active pixel sensor array, a programmable gain amplifier (PGA) for row-wise gain setting, an I2C interface, SRAM, a 12-bit analog-to-digital converter (ADC), a voltage regulator, low-voltage differential signalling (LVDS) and a timing generator. The device provides 256 pixels of spatial resolution and 512 bands of spectral resolution ranging from 400 nm to 950 nm in wavelength. In row-wise gain readout mode, a different gain can be set on each row of the photodetector by storing the gain-setting data in the SRAM through the I2C interface. This unique row-wise gain setting can be used to compensate for the non-uniformity of the silicon spectral response, making the device well suited to hyperspectral imager applications. The HySI camera on board the Chandrayaan-1 satellite was successfully launched to the Moon on Oct. 22, 2008; the device is currently mapping the Moon and sending back excellent images of the lunar surface. The device design and the Moon image data are presented in this paper.
Learning to merge: a new tool for interactive mapping
NASA Astrophysics Data System (ADS)
Porter, Reid B.; Lundquist, Sheng; Ruggiero, Christy
2013-05-01
The task of turning raw imagery into semantically meaningful maps and overlays is a key area of remote sensing activity. Image analysts, in applications ranging from environmental monitoring to intelligence, use imagery to generate and update maps of terrain, vegetation, road networks, buildings and other relevant features. Often these tasks can be cast as a pixel labeling problem, and several interactive pixel labeling tools have been developed. These tools exploit training data, which is generated by analysts using simple and intuitive paint-program annotation tools, in order to tailor the labeling algorithm for the particular dataset and task. In other cases, the task is best cast as a pixel segmentation problem. Interactive pixel segmentation tools have also been developed, but these tools typically do not learn from training data like the pixel labeling tools do. In this paper we investigate tools for interactive pixel segmentation that also learn from user input. The input has the form of segment merging (or grouping). Merging examples are 1) easily obtained from analysts using vector annotation tools, and 2) more challenging to exploit than traditional labels. We outline the key issues in developing these interactive merging tools, and describe their application to remote sensing.
A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity
Zhang, Fan; Niu, Hanben
2016-01-01
In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/8.5 × 10⁷ when illuminated by a 405-nm diode laser and 1/1.4 × 10⁴ when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e− rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena.
Superpixel-based graph cuts for accurate stereo matching
NASA Astrophysics Data System (ADS)
Feng, Liting; Qin, Kaihuai
2017-06-01
Estimating the surface normal vector and disparity of a pixel simultaneously, also known as the three-dimensional label method, has been widely used in recent continuous stereo matching to achieve sub-pixel accuracy. However, due to the infinite label space, it is extremely hard to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating PatchMatch with graph cuts, to approach this critical computational problem. Besides, to obtain a robust and precise matching cost, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF-related methods, our method has several advantages: its sub-modular property ensures sub-problem optimality, which is easy to exploit in parallel; graph cuts can simultaneously update multiple pixels, avoiding the local minima caused by sequential optimizers like belief propagation; it uses segmentation results for a better local expansion move; and local propagation and randomization can easily generate the initial solution without resorting to external methods. Middlebury experiments show that our method achieves higher accuracy than other MRF-based algorithms.
Rapid dissolution of propofol emulsions under sink conditions.
Damitz, Robert; Chauhan, Anuj
2015-03-15
Pain accompanying intravenous injections of propofol is a major problem in anesthesia. The pain is ascribed to the interaction of propofol with the local vasculature and could be affected by rapid dissolution of the emulsion formulation to release the drug. In this paper, we measure the dissolution of propofol emulsions, including the commercial formulation Diprivan(®). We image the turbidity of blood protein sink solutions after emulsions are injected. The images are digitized, and the drug release times are estimated from the pixel intensity data for a range of starting emulsion droplet sizes. Drug release times are compared to a mechanistic model. After injection, pixel intensity or turbidity decreases due to reductions in emulsion droplet size. Drug release times can still be measured even if the emulsion does not completely dissolve, as with Diprivan(®). Both pure propofol emulsions and Diprivan(®) release drug very rapidly (under five seconds). Reducing emulsion droplet size significantly increases the drug release rate. The observed drug release times are slightly longer than the model prediction, likely due to imperfect mixing. Drug release from emulsions occurs very rapidly after injection; this could be a contributing factor to pain on injection of propofol emulsions. Copyright © 2015. Published by Elsevier B.V.
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714
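The scale-by-scale fusion step described above can be sketched as follows, assuming the MEMD decomposition has already produced scale-aligned IMF stacks for the two input images; the pixel-wise max-magnitude selection rule used here is a common fusion rule and an assumption, not necessarily the exact rule of the paper.

```python
import numpy as np

def fuse_imfs(imfs_a, imfs_b):
    """Fuse two scale-aligned IMF stacks of shape (scales, H, W):
    at every scale and pixel keep the coefficient with the larger
    magnitude, then sum over scales to form the fused image."""
    choose_a = np.abs(imfs_a) >= np.abs(imfs_b)
    return np.where(choose_a, imfs_a, imfs_b).sum(axis=0)
```

Because MEMD aligns common frequency scales across channels, the per-scale comparison is meaningful, which is exactly the property the abstract highlights over univariate EMD.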
In-plane "superresolution" MRI with phaseless sub-pixel encoding.
Hennel, Franciszek; Tian, Rui; Engel, Maria; Pruessmann, Klaas P
2018-04-15
This work addresses the acquisition of high-resolution imaging data using multiple excitations without sensitivity to fluctuations of the transverse magnetization phase, a major problem of multi-shot MRI. The concept of superresolution MRI based on microscopic tagging is analyzed using an analogy with the optical method of structured illumination. Sinusoidal tagging is shown to provide subpixel resolution by mixing neighboring spatial frequency (k-space) bands. It represents a phaseless modulation added on top of the standard Fourier encoding, which allows the phase fluctuations to be discarded at an intermediate reconstruction step. Improvements are proposed to correct for tag distortions due to magnetic field inhomogeneity and to avoid the propagation of Gibbs ringing from intermediate low-resolution images to the final image. The method was applied to diffusion-weighted EPI. Artifact-free superresolution images can be obtained, despite the finite duration of the tagging sequence and the related pattern distortions, by a field-map-based phase correction of band-wise reconstructed images. The ringing present in the intermediate images can be suppressed by partially overlapping the mixed k-space bands in combination with an adapted filter. High-resolution diffusion-weighted images of the human head were obtained with a three-shot EPI sequence despite motion-related phase fluctuations between the shots. Owing to its phaseless character, tagging-based sub-pixel encoding is an alternative to k-space segmentation in the presence of unknown phase fluctuations, in particular those due to motion under strong diffusion gradients. The proposed improvements render the method practicable in realistic conditions. © 2018 International Society for Magnetic Resonance in Medicine.
Automatic sub-pixel coastline extraction based on spectral mixture analysis using EO-1 Hyperion data
NASA Astrophysics Data System (ADS)
Hong, Zhonghua; Li, Xuesu; Han, Yanling; Zhang, Yun; Wang, Jing; Zhou, Ruyan; Hu, Kening
2018-06-01
Many megacities (such as Shanghai) are located in coastal areas; therefore, coastline monitoring is critical for urban security and the sustainability of urban development. A shoreline is defined as the intersection between coastal land and a water surface, and it moves with the seawater edge as tides rise and fall. Remote sensing techniques have increasingly been used for coastline extraction; however, traditional hard classification methods operate only at the pixel level, and achieving subpixel accuracy with soft classification methods is both challenging and time consuming due to the complex features of coastal regions. This paper presents an automatic sub-pixel coastline extraction method (ASPCE) for hyperspectral satellite imagery that performs coastline extraction based on spectral mixture analysis and thus achieves higher accuracy. The ASPCE method consists of three main components: 1) a Water-Vegetation-Impervious-Soil (W-V-I-S) model is first presented to detect mixed W-V-I-S pixels and determine the endmember spectra in coastal regions; 2) the linear spectral unmixing technique based on Fully Constrained Least Squares (FCLS) is applied to the mixed W-V-I-S pixels to estimate seawater abundance; and 3) the spatial attraction model is used to extract the coastline. We tested this new method using EO-1 images from three coastal regions in China: the South China Sea, the East China Sea, and the Bohai Sea. The results showed that the method is accurate and robust. Root mean square error (RMSE) was used to evaluate the accuracy by calculating the distance differences between the extracted coastline and the digitized coastline. The classifier's performance was compared with that of the Multiple Endmember Spectral Mixture Analysis (MESMA), Mixture Tuned Matched Filtering (MTMF), Sequential Maximum Angle Convex Cone (SMACC), Constrained Energy Minimization (CEM), and the classical Normalized Difference Water Index (NDWI).
The results from the three test sites indicated that the proposed ASPCE method extracted coastlines more efficiently than the compared methods, and its extracted coastline corresponded closely to the digitized coastline, with errors of 0.39, 0.40, and 0.35 pixels in the three test regions, showing that the ASPCE method achieves an accuracy below 12.0 m (0.40 pixels). Moreover, in the quantitative accuracy assessment for the three test sites, the ASPCE method showed the best coastline extraction performance, achieving 0.35 pixels at the Bohai Sea, China test site. Therefore, the proposed ASPCE method can extract coastlines more accurately than hard classification methods or other spectral unmixing methods.
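The FCLS step in component 2) can be sketched with a well-known trick: append a heavily weighted sum-to-one row to the endmember system and solve it with non-negative least squares. The penalty weight `delta` and the example endmember matrix below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(E, x, delta=1e3):
    """Fully Constrained Least Squares abundance estimation.
    E: (bands, endmembers) endmember spectra; x: (bands,) mixed pixel.
    Non-negativity comes from NNLS; the sum-to-one constraint is
    enforced softly by a heavily weighted row of ones (weight delta)."""
    _, p = E.shape
    A = np.vstack([E, delta * np.ones((1, p))])
    b = np.append(x, delta)
    a, _ = nnls(A, b)
    return a
```

For a coastal pixel, the seawater abundance is then the component of `a` corresponding to the water endmember, which step 3) feeds into the spatial attraction model.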
Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications
NASA Astrophysics Data System (ADS)
Ermeydan, Esra Şengün; Çankaya, Ilyas
2018-01-01
Compressed Sensing (CS) with a cyclic-S Hadamard matrix is proposed for single-pixel imaging applications in this study. In a conventional single-pixel imaging scheme, N = r · c samples must be taken for an r × c pixel image. CS is a popular technique claiming that sparse signals can be reconstructed from fewer samples than the Nyquist rate requires; therefore, CS is a good candidate for solving the slow data acquisition problem in Terahertz (THz) single-pixel imaging. However, changing the mask for each measurement is challenging, since there are no commercial Spatial Light Modulators (SLMs) for the THz band yet. Circular masks are therefore suggested, so that shifting one or two columns is enough to change the mask between measurements. Within the framework of this study, CS masks are designed using cyclic-S matrices based on the Hadamard transform for 9 × 7 and 15 × 17 pixel images. The 50% compressed images are reconstructed using the total-variation-based TVAL3 algorithm. Matlab simulations demonstrate that cyclic-S matrices can be used for single-pixel imaging based on CS. The circular masks have the advantage of reducing the mechanical SLM to a single sliding strip, whereas CS helps to reduce acquisition time and energy since it allows the image to be reconstructed from fewer samples.
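As a rough sketch of why a cyclic measurement matrix needs only a single sliding mask, the following builds a small cyclic S-type matrix from a quadratic-residue sequence. This is one common construction and an assumption for illustration; the paper's 9 × 7 and 15 × 17 Hadamard-based masks and the TVAL3 reconstruction are not reproduced here.

```python
import numpy as np

def cyclic_s_matrix(n):
    """Cyclic 0/1 S-type matrix of prime order n = 4k - 1 built from
    the quadratic-residue sequence: every row is a one-step cyclic
    shift of the first row, so a single sliding strip of mask cells
    can realize all n measurement patterns."""
    qr = {(i * i) % n for i in range(1, n)}
    first = np.array([1 if i in qr else 0 for i in range(n)])
    return np.array([np.roll(first, -k) for k in range(n)])
```

Each measurement is then one inner product `y[k] = S[k] @ scene`, and shifting the physical mask by one cell moves from row k to row k+1.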
Particle tracking with a Timepix based triple GEM detector
NASA Astrophysics Data System (ADS)
George, S. P.; Murtas, F.; Alozy, J.; Curioni, A.; Rosenfeld, A. B.; Silari, M.
2015-11-01
This paper details the response of a triple GEM detector with a 55 μm pitch pixelated ASIC for readout. The detector is operated as a micro-TPC with a 9.5 cm³ sensitive volume and characterized with a mixed beam of 120 GeV protons and positive pions. A process for reconstructing incident particle tracks from individual ionization clusters is described, and scans of the gain and drift fields are performed. The angular resolution of the measured tracks is characterized. The readout was also operated in a mixed mode in which some pixels measure drift time and others charge; this was used to measure the energy deposition in the detector and the charge cloud size as a function of interaction depth. Future uses of the device, including microdosimetry, are discussed.
A secure steganography for privacy protection in healthcare system.
Liu, Jing; Tang, Guangming; Sun, Yifeng
2013-04-01
Private data in healthcare systems require confidentiality protection during transmission. Steganography is the art of concealing data in a cover medium to convey messages confidentially. In this paper, we propose a steganographic method that can provide private data in medical systems with very secure protection. In our method, a cover image is first mapped into a 1D pixel sequence by a Hilbert filling curve and then divided into non-overlapping embedding units of three consecutive pixels. We use the adaptive pixel pair match (APPM) method to embed digits in the pixel value differences (PVD) of the three pixels, where the base of the embedded digits depends on the differences among the three pixels. By solving an optimization problem, minimal distortion of the pixel triples caused by data embedding can be obtained. The experimental results show that our method is more suitable for privacy protection in healthcare systems than prior steganographic works.
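The first step above, mapping the 2D cover image to a 1D pixel sequence, can be sketched with the standard Hilbert-curve index-to-coordinate algorithm below (the APPM embedding itself is not reproduced):

```python
def hilbert_d2xy(order, d):
    """Map a 1D Hilbert-curve index d to (x, y) coordinates on a
    2**order x 2**order grid, using the classic bit-manipulation
    formulation of the curve."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate the quadrant when needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Traversing d = 0, 1, 2, … visits every pixel exactly once while keeping consecutive sequence elements spatially adjacent, which preserves local pixel correlation for the difference-based embedding.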
Macias-Montero, Jose-Gabriel; Sarraj, Maher; Chmeissani, Mokhtar; Puigdengoles, Carles; Lorenzo, Gianluca De; Martínez, Ricardo
2013-08-01
VIP-PIX will be a low-noise, low-power pixel readout electronics with digital output for pixelated Cadmium Telluride (CdTe) detectors. The proposed pixel will be part of a 2D pixel-array detector for various types of nuclear medicine imaging devices such as positron-emission tomography (PET) scanners, Compton gamma cameras, and positron-emission mammography (PEM) scanners. Each pixel will include a SAR ADC that provides the deposited energy with 10-bit resolution. Simultaneously, the self-triggered pixel, connected to a global time-to-digital converter (TDC) with 1 ns resolution, will provide the event's time stamp. The analog part of the readout chain and the ADC have been fabricated in TSMC 0.25 μm mixed-signal CMOS technology and characterized with an external test pulse. The power consumption of these parts is 200 μW from a 2.5 V supply. The chain offers 4 switchable gains from ±10 mV/fC to ±40 mV/fC and an input charge dynamic range of up to ±70 fC at the minimum gain for both polarities. Based on noise measurements, the expected equivalent noise charge (ENC) is 65 e⁻ RMS at room temperature.
Self-amplified CMOS image sensor using a current-mode readout circuit
NASA Astrophysics Data System (ADS)
Santos, Patrick M.; de Lima Monteiro, Davies W.; Pittet, Patrick
2014-05-01
The feature size of CMOS processes has decreased over the past few years, and problems such as reduced dynamic range have become more significant in voltage-mode pixels, even though integrating more functionality inside the pixel has become easier. This work contributes on both fronts: it enables a high signal excursion range by using current-mode circuits, and it adds functionality by performing signal amplification inside the pixel. The classic 3T pixel architecture was rebuilt with small modifications to integrate a transconductance amplifier providing a current as output. The matrix of these new pixels operates as one large transistor sourcing an amplified current that is used for signal processing. This current is controlled by the intensity of the light received by the matrix, modulated pixel by pixel. The output current can be controlled by the biasing circuits to achieve a very large range of output signal levels. It can also be controlled through the matrix size, which permits a very high degree of freedom in the signal level, subject to the allowable current densities inside the integrated circuit. In addition, the matrix can operate at very short integration times. Its applications would be those in which fast image processing and high signal amplification are required and low resolution is not a major problem, such as UV image sensors. Simulation results are presented to support the operation, control, design, signal excursion levels, and linearity of a pixel matrix conceived using this new sensor concept.
The realization of an SVGA OLED-on-silicon microdisplay driving circuit
NASA Astrophysics Data System (ADS)
Bohua, Zhao; Ran, Huang; Fei, Ma; Guohua, Xie; Zhensong, Zhang; Huan, Du; Jiajun, Luo; Yi, Zhao
2012-03-01
An 800 × 600 pixel organic light-emitting diode-on-silicon (OLEDoS) driving circuit is proposed. The pixel cell circuit utilizes a subthreshold-voltage-scaling structure which can modulate the pixel current between 170 pA and 11.4 nA. In order to keep the voltage of the column bus at a relatively high level, the sample-and-hold circuits adopt a ping-pong operation. The driving circuit is fabricated in a commercially available 0.35 μm two-poly four-metal 3.3 V mixed-signal CMOS process. The pixel cell area is 15 × 15 μm² and the total chip occupies 15.5 × 12.3 mm². Experimental results show that the chip can work properly at a frame frequency of 60 Hz and has a 64 grayscale (monochrome) display. The total power consumption of the chip is about 85 mW with a 3.3 V supply voltage.
Zhang, J.-H.; Zhou, Z.-M.; Wang, P.-J.; Yao, F.-M.; Yang, L.
2011-01-01
A field spectroradiometer was used to measure spectra of different snow and snow-covered land-surface objects in the Beijing area. The results showed that for a pure snow spectrum, the snow reflectance peaks appeared from the visible region to the 800 nm band; there was an obvious absorption valley in the snow spectrum near the 1030 nm wavelength. Compared with fresh snow, the reflection peaks of old snow and melting snow showed different degrees of decline in the ranges of 300~1300, 1700~1800 and 2200~2300 nm; the lowest values came from compacted snow and frozen ice. For the mixed vegetation and snow spectral characteristics, the spectral reflectance increased for the snow-covered land types (including pine leaf with snow and pine leaf on a snow background) due to the influence of the snow background in the range of 350~1300 nm. However, the spectral reflectance of the mixed pixel retained a vegetation spectral character. Finally, based on the spectral analysis of snow, vegetation, and mixed snow/vegetation pixels, mixed-spectrum fitting equations were established, and the results showed good correlation between the simulated and observed spectral curves (correlation coefficient R² = 0.9509).
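A minimal version of the two-endmember linear fit implied by such mixed-spectrum equations can be sketched as below, assuming a simple mixture mixed(λ) ≈ f·snow(λ) + (1−f)·veg(λ); the paper's empirically fitted equations may take a different form.

```python
import numpy as np

def fit_mixture_fraction(mixed, snow, veg):
    """Least-squares estimate of the snow fraction f in a
    two-endmember linear mixture mixed = f*snow + (1-f)*veg,
    clipped to the physically meaningful range [0, 1]."""
    d = snow - veg                       # spectral contrast direction
    f = np.dot(mixed - veg, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))
```

The closed form follows from projecting the observed spectrum onto the line between the two endmember spectra, which is the one-dimensional special case of linear spectral unmixing.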
Proportion estimation and classification of mixed pixels in multispectral data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crouse, K.R.
1979-01-01
Remote sensing applications to crop productivity estimations are discussed with detailed instructions for developing classifier skills in multispectral data analysis for corn, soybeans, oats, and alfalfa crops. (PCS)
NASA Astrophysics Data System (ADS)
Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao
2018-01-01
Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel-size, unpredictable disturbances during image acquisition, and sub-optimal solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method that addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to significantly enhance the robustness and SNR of the reconstruction. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm² and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel-size (1.67 μm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate the method's promising potential for applications in biological imaging.
Development of N+ in P pixel sensors for a high-luminosity large hadron collider
NASA Astrophysics Data System (ADS)
Kamada, Shintaro; Yamamura, Kazuhisa; Unno, Yoshinobu; Ikegami, Yoichi
2014-11-01
Hamamatsu Photonics K.K. is developing an N+-in-p planar pixel sensor with high radiation tolerance for the high-luminosity Large Hadron Collider (HL-LHC). The N+-in-p planar pixel sensor is a candidate for the HL-LHC and offers the advantage of high radiation tolerance at a reasonable price compared with the N+-in-n planar sensor, the three-dimensional sensor, and the diamond sensor. However, it still presents some problems that need to be solved, such as its slim edge and the danger of sparks between the sensor and the readout integrated circuit. We are now attempting to solve these problems with wafer-level processes, which is important for mass production. To date, we have obtained a 250-μm edge with an applied bias voltage of 1000 V. To protect against high-voltage sparks from the edge, we suggest some possible designs for the N+ edge.
Phase information contained in meter-scale SAR images
NASA Astrophysics Data System (ADS)
Datcu, Mihai; Schwarz, Gottfried; Soccorsi, Matteo; Chaabouni, Houda
2007-10-01
The properties of single look complex SAR satellite images have already been analyzed by many investigators. A common belief is that, apart from inverse SAR methods or polarimetric applications, no information can be gained from the phase of each pixel. This belief is based on the assumption that we obtain uniformly distributed random phases when a sufficient number of small-scale scatterers are mixed in each image pixel. However, the random phase assumption no longer holds for typical high resolution urban remote sensing scenes, when a limited number of prominent human-made scatterers with near-regular shape and sub-meter size lead to correlated phase patterns. If the pixel size shrinks to a critical threshold of about 1 meter, the reflectance of built-up urban scenes becomes dominated by typical metal reflectors, corner-like structures, and multiple scattering. The resulting phases are hard to model, but one can try to classify a scene based on the phase characteristics of neighboring image pixels. We provide a "cooking recipe" of how to analyze existing phase patterns that extend over neighboring pixels.
Spatial reasoning to determine stream network from LANDSAT imagery
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Wang, S.; Elliott, D. B.
1983-01-01
In LANDSAT imagery, spectral and spatial information can be used to detect the drainage network as well as the relative elevation model in mountainous terrain. To do this, the mixed material-reflectance information in the original LANDSAT imagery must be separated. From the material reflectance information, big visible rivers can be detected. From the topographic modulation information, ridges and valleys can be detected and assigned relative elevations. A complete elevation model can be generated by interpolating values for non-ridge and non-valley pixels. The small streams not detectable from material reflectance information can be located in the valleys, with flow direction known from the elevation model. Finally, the flow directions of big visible rivers can be inferred by solving a consistent labeling problem based on a set of spatial reasoning constraints.
Spatial and spectral simulation of LANDSAT images of agricultural areas
NASA Technical Reports Server (NTRS)
Pont, W. F., Jr. (Principal Investigator)
1982-01-01
A LANDSAT scene simulation capability was developed to study the effects of small fields and misregistration on LANDSAT-based crop proportion estimation procedures. The simulation employs a pattern of ground polygons, each with a crop ID, planting date, and scale factor. Historical greenness/brightness crop development profiles generate the mean signal values for each polygon. Historical within-field covariances add texture to pixels in each polygon. The planting dates and scale factors create between-field/within-crop variation; between-crop variation is achieved by these together with crop profile differences. The LANDSAT point spread function is used to add correlation between nearby pixels; its net effect is to blur the image. Mixed pixels and misregistration are also simulated.
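The point-spread-function blurring that creates the mixed pixels can be sketched as a direct 2D convolution of the simulated ground scene. This is illustrative only: the uniform kernel below stands in for the actual LANDSAT PSF, which is not reproduced here.

```python
import numpy as np

def apply_psf(img, psf):
    """Blur a simulated ground radiance image with a point-spread
    function by direct 2D convolution (same output size, zero padding).
    Pixels straddling field boundaries become mixtures of the
    neighbouring field signals."""
    ph, pw = psf.shape
    H, W = img.shape
    pad = np.zeros((H + ph - 1, W + pw - 1))
    pad[ph // 2:ph // 2 + H, pw // 2:pw // 2 + W] = img
    out = np.empty((H, W))
    flipped = psf[::-1, ::-1]            # convolution flips the kernel
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[i:i + ph, j:j + pw] * flipped)
    return out
```

Shifting the scene by a fraction of the kernel support before blurring would similarly emulate the misregistration effect studied in the paper.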
New SOFRADIR 10μm pixel pitch infrared products
NASA Astrophysics Data System (ADS)
Lefoul, X.; Pere-Laperne, N.; Augey, T.; Rubaldo, L.; Aufranc, Sébastien; Decaens, G.; Ricard, N.; Mazaleyrat, E.; Billon-Lanfrey, D.; Gravrand, Olivier; Bisotto, Sylvette
2014-10-01
Recent advances in the miniaturization of IR imaging technology have led to a growing market for mini thermal-imaging sensors. In that respect, Sofradir's development of smaller pixel pitches has made much more compact products available to users. When this competitive advantage is combined with the smaller coolers made possible by HOT technology, we achieve valuable reductions in the size, weight and power of the overall package. At the same time, we are moving towards a global offer based on digital interfaces that simplifies the IR system design process for our customers while freeing up more space. This paper discusses recent developments in hot and small pixel pitch technologies, as well as work on a compact packaging solution developed by SOFRADIR in collaboration with CEA-LETI.
Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.
Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves
2012-06-01
This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
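The polynomial post-nonlinear model can be sketched as below, assuming the common quadratic form g(z) = z + b·z² applied elementwise to the linear mixture; the parameter names and the example endmember matrix are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ppnmm_pixel(M, a, b, sigma=0.0, rng=None):
    """Generate one pixel under a polynomial post-nonlinear mixing
    model: the linear mixture M @ a is distorted elementwise by
    z + b*z**2 and corrupted by additive white Gaussian noise."""
    z = M @ a                       # linear mixture of pure spectra
    x = z + b * z ** 2              # polynomial post-nonlinearity
    if sigma > 0:
        rng = rng or np.random.default_rng(0)
        x = x + rng.normal(0.0, sigma, size=x.shape)
    return x
```

Setting b = 0 recovers the plain linear mixing model, which is why the polynomial term can be interpreted as a measured departure from linearity.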
SVGA and XGA active matrix microdisplays for head-mounted applications
NASA Astrophysics Data System (ADS)
Alvelda, Phillip; Bolotski, Michael; Brown, Imani L.
2000-03-01
The MicroDisplay Corporation's liquid crystal on silicon (LCOS) display devices are based on the union of several technologies with the extreme integration capability of conventionally fabricated CMOS substrates. The fast liquid crystal operation modes and new scalable high-performance pixel addressing architectures presented in this paper enable substantially improved color, contrast, and brightness while still satisfying the optical, packaging, and power requirements of portable applications. The entire suite of MicroDisplay's technologies was devised to create a line of mixed-signal application-specific integrated circuits (ASICs) in single-chip display systems. Mixed-signal circuits can integrate computing, memory, and communication circuitry on the same substrate as the display drivers and pixel array for a multifunctional complete system-on-a-chip. System-on-a-chip benefits also include reduced head supported weight requirements through the elimination of off-chip drive electronics.
NASA Astrophysics Data System (ADS)
Li, Linyi; Chen, Yun; Yu, Xin; Liu, Rui; Huang, Chang
2015-03-01
The study of flood inundation is significant to human life and the social economy. Remote sensing technology has provided an effective way to study the spatial and temporal characteristics of inundation. Remotely sensed images with high temporal resolutions are widely used in mapping inundation. However, mixed pixels do exist due to their relatively low spatial resolutions. One of the most popular approaches to resolve this issue is sub-pixel mapping. In this paper, a novel discrete particle swarm optimization (DPSO) based sub-pixel flood inundation mapping (DPSO-SFIM) method is proposed to achieve improved accuracy in mapping inundation at a sub-pixel scale. The evaluation criterion for sub-pixel inundation mapping is formulated, and the DPSO-SFIM algorithm is developed, including particle discrete encoding, fitness function design and the swarm search strategy. The accuracy of DPSO-SFIM in mapping inundation at a sub-pixel scale was evaluated using Landsat ETM+ images from study areas in Australia and China. The results show that DPSO-SFIM consistently outperformed the four traditional SFIM methods in these study areas. A sensitivity analysis of DPSO-SFIM was also carried out to evaluate its performance. It is hoped that the results of this study will enhance the application of medium-low spatial resolution images in inundation detection and mapping, and thereby support the ecological and environmental studies of river basins.
Human vision-based algorithm to hide defective pixels in LCDs
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Coulier, Stefaan; Van Hoey, Gert
2006-02-01
Producing displays without pixel defects, or repairing defective pixels, is not technically possible at this moment. This paper presents a new approach to solve this problem: defects are made invisible to the user by image processing algorithms based on characteristics of the human eye. The performance of this new algorithm has been evaluated using two different methods. First, the theoretical response of the human eye was analyzed on a series of images, both before and after applying the defective-pixel compensation algorithm. These results show that it is indeed possible to mask a defective pixel. The second method was a psycho-visual test in which users were asked whether or not a defective pixel could be perceived. The results of these user tests also confirm the value of the new algorithm. Our defective-pixel correction algorithm can be implemented very efficiently and cost-effectively as a pixel-data-processing algorithm inside the display, for instance in an FPGA, a DSP or a microprocessor. The described techniques are valid for both monochrome and color displays, ranging from high-quality medical displays to consumer LCD-TV applications.
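A toy sketch of the idea of defect hiding, under the assumption that the goal is to preserve the local average luminance the eye perceives: the light a stuck-off pixel should have emitted is redistributed to its neighbours. This is a hypothetical simplification, not the paper's human-vision-model algorithm.

```python
import numpy as np

def mask_dead_pixel(img, r, c):
    """Hypothetical compensation sketch: distribute the luminance that
    a stuck-off pixel at (r, c) should have emitted over its in-bounds
    4-neighbours, approximating the eye's local spatial averaging."""
    out = img.astype(float).copy()
    target = out[r, c]          # value the defective pixel should show
    out[r, c] = 0.0             # pixel is stuck off
    nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    nbrs = [(i, j) for i, j in nbrs
            if 0 <= i < out.shape[0] and 0 <= j < out.shape[1]]
    for i, j in nbrs:
        out[i, j] = min(255.0, out[i, j] + target / len(nbrs))
    return out
```

The clipping at 255 hints at why a real algorithm must be smarter: near-white surroundings leave no headroom, so the compensation has to be shaped by a contrast-sensitivity model rather than a uniform split.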
Attenuating Stereo Pixel-Locking via Affine Window Adaptation
NASA Technical Reports Server (NTRS)
Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.
2006-01-01
For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially-peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it can correct much larger initial disparity errors than previous approaches, and it is more general in that it applies not only to the ground plane.
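The three-point parabola refinement the paper critiques can be sketched generically (a minimal version, not the authors' code): given the integer disparity with the lowest cost, fit a parabola through the costs at the two neighboring disparities and take its vertex.

```python
def subpixel_disparity(cost, d):
    """Refine an integer disparity d by fitting a parabola to the
    matching costs at d-1, d, d+1 (the standard three-point method
    identified as the source of pixel-locking)."""
    c0, c1, c2 = cost[d - 1], cost[d], cost[d + 1]
    denom = c0 - 2.0 * c1 + c2
    if denom <= 0:           # no convex minimum; keep the integer estimate
        return float(d)
    return d + 0.5 * (c0 - c2) / denom

# Costs sampled from (x - 2.3)^2: the true minimum sits at 2.3.
costs = [5.29, 1.69, 0.09, 0.49, 2.89, 7.29]
refined = subpixel_disparity(costs, 2)   # recovers 2.3 for an exact parabola
```

For real cost curves that are not parabolic near the minimum, this estimator biases results toward integer disparities, which is exactly the pixel-locking histogram artifact the paper addresses.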
Pixelated camouflage patterns from the perspective of hyperspectral imaging
NASA Astrophysics Data System (ADS)
Racek, František; Jobánek, Adam; Baláž, Teodor; Krejčí, Jaroslav
2016-10-01
Pixelated camouflage patterns fulfill both the matching and the disrupting principles that are exploited to blend a target into the background. This means a pixelated pattern should respect the natural background in the spectral and spatial characteristics embodied in its micro- and macro-patterns. Hyperspectral (HS) imaging plays a similar, though reverse, role in the field of reconnaissance systems: an HS camera records and extracts both the spectral and spatial information in the recorded scenery. The article therefore deals with HS imaging and the subsequent processing of HS images of pixelated camouflage patterns, which are characterized, among other things, by their specific spatial-frequency heterogeneity.
NASA Astrophysics Data System (ADS)
Marson, Avishai; Stern, Adrian
2015-05-01
One of the main limitations of horizontal-parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we have shown that this problem can be alleviated by applying a color sub-pixel rendering technique [1]. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the human eye's lower acuity for chromatic detail. Here we supply further support for the technique by analyzing the spectra of the subsampled images.
Analysis of lithology: Vegetation mixes in multispectral images
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M.; Adams, J. D.
1982-01-01
Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
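The simple linear (checkerboard) mix mentioned above can be inverted with least squares. This is a minimal sketch assuming unconstrained least squares with a sum-to-one renormalization; practical unmixing pipelines typically add non-negativity constraints as well.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Invert the linear (checkerboard) mixing model pixel ≈ E @ f.
    endmembers: (n_bands, n_endmembers) matrix, one pure spectrum per column."""
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return f / f.sum()   # renormalize so abundance fractions sum to one

# Two illustrative endmember spectra over three bands (e.g. rock and vegetation).
E = np.array([[0.2, 0.8],
              [0.4, 0.1],
              [0.6, 0.3]])
mixed = E @ np.array([0.7, 0.3])   # a 70/30 checkerboard mixture
fractions = unmix(mixed, E)
```

Removing the vegetation component, as the abstract describes, amounts to subtracting `fractions[1] * E[:, 1]` from the pixel spectrum before rock/soil identification.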
Resampling approach for anomalous change detection
NASA Astrophysics Data System (ADS)
Theiler, James; Perkins, Simon
2007-04-01
We investigate the problem of identifying pixels in pairs of co-registered images that correspond to real changes on the ground. Changes that are due to environmental differences (illumination, atmospheric distortion, etc.) or sensor differences (focus, contrast, etc.) will be widespread throughout the image, and the aim is to avoid these changes in favor of changes that occur in only one or a few pixels. Formal outlier detection schemes (such as the one-class support vector machine) can identify rare occurrences, but will be confounded by pixels that are "equally rare" in both images: they may be anomalous, but they are not changes. We describe a resampling scheme we have developed that formally addresses both of these issues, and reduces the problem to a binary classification, a problem for which a large variety of machine learning tools have been developed. In principle, the effects of misregistration will manifest themselves as pervasive changes, and our method will be robust against them - but in practice, misregistration remains a serious issue.
ASIC Readout Circuit Architecture for Large Geiger Photodiode Arrays
NASA Technical Reports Server (NTRS)
Vasile, Stefan; Lipson, Jerold
2012-01-01
The objective of this work was to develop a new class of readout integrated circuit (ROIC) arrays to be operated with Geiger avalanche photodiode (GPD) arrays, by integrating multiple functions at the pixel level (smart-pixel or active pixel technology) in 250-nm CMOS (complementary metal oxide semiconductor) processes. In order to pack a maximum of functions within a minimum pixel size, the ROIC array is a full, custom application-specific integrated circuit (ASIC) design using a mixed-signal CMOS process with compact primitive layout cells. The ROIC array was processed to allow assembly in bump-bonding technology with photon-counting infrared detector arrays into 3-D imaging cameras (LADAR). The ROIC architecture was designed to work with either common- anode Si GPD arrays or common-cathode InGaAs GPD arrays. The current ROIC pixel design is hardwired prior to processing one of the two GPD array configurations, and it has the provision to allow soft reconfiguration to either array (to be implemented into the next ROIC array generation). The ROIC pixel architecture implements the Geiger avalanche quenching, bias, reset, and time to digital conversion (TDC) functions in full-digital design, and uses time domain over-sampling (vernier) to allow high temporal resolution at low clock rates, increased data yield, and improved utilization of the laser beam.
Li, Yang-yang; Zhao, Kai; Ren, Jian-hua; Ding, Yan-ling; Wu, Li-li
2014-01-01
Soil salinity is a global problem, especially in developing countries, affecting the environment and the productivity of agricultural areas. Salt has a significant effect on the complex dielectric constant of wet soil; however, there is no suitable model to describe the variation in the backscattering coefficient due to changes in soil salinity content. The purpose of this paper is to use backscattering models to understand the behavior of the backscattering coefficient in saline soils, based on an analysis of their dielectric constant. The effects of moisture and salinity on the dielectric constant are analyzed using a combined Dobson mixing model and seawater dielectric constant model, and the backscattering coefficient is then simulated using the AIEM. Laboratory measurements were also performed on ground samples. The frequency dependence of the laboratory results was not the same as that of the simulated results: the frequency dependence of the ionic conductivity of an electrolyte solution is influenced by its ionic components. Finally, the backscattering coefficients simulated from the measured dielectric constants with the AIEM were compared with the backscattering coefficients extracted from a RADARSAT-2 image. The results show that RADARSAT-2 is potentially able to measure soil salinity; however, the mixed pixel problem needs to be considered more thoroughly.
Shape, Illumination, and Reflectance from Shading
2013-05-29
…the global entropy of log-reflectance; 3) an "absolute" prior on reflectance, which prefers to paint the scene with certain colors (white, gray, green, …). A smoothness prior penalizes the difference in log-RGB from pixel i to pixel j, where c(·; α, σ) is the negative log-likelihood of a discrete univariate Gaussian scale mixture (GSM), parametrized by mixing coefficients α and Gaussian scales σ:

g_s(R) = Σ_i Σ_{j∈N(i)} c(R_i − R_j; α_R, σ_R)    (6)

where R_i − R_j is now a 3-vector of the log-RGB differences.
A Method for Qualitative Mapping of Thick Oil Spills Using Imaging Spectroscopy
Clark, Roger N.; Swayze, Gregg A.; Leifer, Ira; Livo, K. Eric; Lundeen, Sarah; Eastwood, Michael; Green, Robert O.; Kokaly, Raymond F.; Hoefen, Todd; Sarture, Charles; McCubbin, Ian; Roberts, Dar; Steele, Denis; Ryan, Thomas; Dominguez, Roseanne; Pearson, Neil
2010-01-01
A method is described to create qualitative images of thick oil in oil spills on water using near-infrared imaging spectroscopy data. The method uses simple 'three-point-band depths' computed for each pixel in an imaging spectrometer image cube using the organic absorption features due to chemical bonds in aliphatic hydrocarbons at 1.2, 1.7, and 2.3 microns. The method is not quantitative because sub-pixel mixing and layering effects are not considered, which are necessary to make a quantitative volume estimate of oil.
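A 'three-point band depth' can be sketched as follows: interpolate a straight-line continuum between two shoulder wavelengths and measure how far the center reflectance dips below it. The straight-line continuum and interpolation details are assumptions consistent with, but not quoted from, the method description.

```python
import numpy as np

def band_depth(wl, refl, left, center, right):
    """Three-point band depth for one pixel spectrum: the continuum is
    the straight line between the two shoulder reflectances, and
    depth = 1 - R_center / R_continuum (0 = no absorption)."""
    r_l = np.interp(left, wl, refl)
    r_c = np.interp(center, wl, refl)
    r_r = np.interp(right, wl, refl)
    t = (center - left) / (right - left)      # position of the band center
    continuum = (1 - t) * r_l + t * r_r       # linear continuum at the center
    return 1.0 - r_c / continuum

# Toy spectrum with an absorption dip at 1.7 µm, shoulders at 1.6 and 1.8 µm.
depth = band_depth([1.6, 1.7, 1.8], [0.5, 0.3, 0.5], 1.6, 1.7, 1.8)
```

Applying this at, e.g., the 1.2, 1.7, and 2.3 µm aliphatic features per pixel yields the qualitative thick-oil images the abstract describes.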
NASA Technical Reports Server (NTRS)
Hill, C. L.
1984-01-01
A computer-implemented classification has been derived from Landsat-4 Thematic Mapper data acquired over Baldwin County, Alabama on January 15, 1983. One set of spectral signatures was developed from the data by utilizing a 3x3 pixel sliding window approach, and an analysis of the resulting classification identified forested areas. Additional information regarding only the forested areas was then extracted by employing a pixel-by-pixel signature development program, which derived spectral statistics only for pixels within the forested land covers. The spectral statistics from both approaches were integrated and the data classified. This classification was evaluated by comparing the spectral classes produced from the data against corresponding ground verification polygons. The iterative data analysis technique resulted in an overall classification accuracy of 88.4 percent correct for slash pine, young pine, loblolly pine, natural pine, and mixed hardwood-pine. An accuracy assessment matrix has been produced for the classification.
Mixing geometric and radiometric features for change classification
NASA Astrophysics Data System (ADS)
Fournier, Alexandre; Descombes, Xavier; Zerubia, Josiane
2008-02-01
Most basic change detection algorithms use a pixel-based approach. While such an approach is well suited to monitoring large-area changes (such as urban growth) in low resolution images, an object-based approach seems more relevant when change detection is specifically aimed at targets such as small buildings and vehicles. In this paper, we present an approach that mixes radiometric and geometric features to qualify the changed zones. The goal is to establish correspondences (appearance, disappearance, substitution, ...) between the detected changes and the underlying objects. We proceed by first clustering the change map (containing each pixel's bitemporal radiometry) into different classes using the entropy-kmeans algorithm. Assuming that most man-made objects have a polygonal shape, a polygonal approximation algorithm is then used to characterize the resulting zone shapes, allowing us to refine the initial rough classification by integrating the polygon orientations into the state space. Tests are currently conducted on Quickbird data.
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. The key problem of our method is then to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of the intensity variation from its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of the coefficients is reformulated as a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation with pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
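The move from constant brightness to a high-order temporal model can be illustrated with a plain polynomial fit of one pixel's intensity over time; this is a toy stand-in for the paper's dynamic-filtering estimator, with made-up sample values.

```python
import numpy as np

# Intensity samples of one pixel at four consecutive frame times.
times = np.array([0.0, 1.0, 2.0, 3.0])
intensity = np.array([10.0, 12.0, 16.0, 22.0])

# Second-order model intensity(t) = a*t^2 + v*t + i0: the quadratic
# term captures "acceleration" of brightness that a constant-brightness
# assumption would miss.
coef = np.polyfit(times, intensity, 2)

# Interpolate the pixel at the up-converted frame instant t = 1.5.
mid = np.polyval(coef, 1.5)
```

A linear model would predict 14.0 at t = 1.5 (midway between 12 and 16); the quadratic model predicts 13.75, reflecting the accelerating intensity.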
Sparsely-sampled hyperspectral stimulated Raman scattering microscopy: a theoretical investigation
NASA Astrophysics Data System (ADS)
Lin, Haonan; Liao, Chien-Sheng; Wang, Pu; Huang, Kai-Chih; Bouman, Charles A.; Kong, Nan; Cheng, Ji-Xin
2017-02-01
A hyperspectral image corresponds to a data cube with two spatial dimensions and one spectral dimension. Through linear unmixing, hyperspectral images can be decomposed into the spectral signatures of pure components as well as their concentration maps. Due to this distinct advantage in component identification, hyperspectral imaging is a rapidly emerging platform for engineering better medicine and expediting scientific discovery. Among various hyperspectral imaging techniques, hyperspectral stimulated Raman scattering (HSRS) microscopy acquires data in a pixel-by-pixel scanning manner. Nevertheless, the current image acquisition speed for HSRS is insufficient to capture the dynamics of freely moving subjects. Instead of reducing the pixel dwell time to achieve speed-up, which would inevitably decrease the signal-to-noise ratio (SNR), we propose to reduce the total number of sampled pixels. The locations of sampled pixels are carefully engineered with a triangular-wave Lissajous trajectory. A model-based image inpainting algorithm then recovers the complete data for linear unmixing. Simulation results show that, with careful selection of the trajectory, a fill rate as low as 10% is sufficient to generate accurate linear unmixing results. The proposed framework applies to any hyperspectral beam-scanning imaging platform that demands high acquisition speed.
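Sample-location engineering with a triangular-wave Lissajous trajectory can be sketched as follows; the grid size, wave periods, and step count are illustrative assumptions, not the paper's design values.

```python
import numpy as np

def triangle(t, period):
    """Triangular wave in [0, 1]."""
    phase = (t / period) % 1.0
    return 2 * np.minimum(phase, 1 - phase)

def lissajous_mask(n, px, py, n_steps):
    """Binary sampling mask from a triangular-wave Lissajous trajectory:
    x and y follow triangle waves with co-prime periods px, py, so the
    beam sweeps the field of view without immediately retracing itself."""
    t = np.arange(n_steps)
    x = np.clip((triangle(t, px) * n).astype(int), 0, n - 1)
    y = np.clip((triangle(t, py) * n).astype(int), 0, n - 1)
    mask = np.zeros((n, n), dtype=bool)
    mask[y, x] = True
    return mask

mask = lissajous_mask(64, 127, 331, 4000)
fill_rate = mask.mean()   # fraction of pixels actually visited
```

The inpainting step would then reconstruct the unvisited pixels of each spectral band from the values at `mask` locations before linear unmixing.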
NASA Astrophysics Data System (ADS)
Zhu, H.; Zhao, H. L.; Jiang, Y. Z.; Zang, W. B.
2018-05-01
Soil moisture is an important hydrological element, and obtaining it accurately and effectively is of great significance for water resource management in irrigation areas. Retrieving soil moisture content from multi-source remote sensing data introduces multi-spatial-scale problems, which lead to inconsistency between soil moisture values retrieved at different spatial scales. In addition, agricultural water use management has a suitable spatial scale for soil moisture information, so as to satisfy the demands of dynamic management of water use and water demand in a given unit. We propose to use the land parcel unit as the minimum unit for soil moisture research in agricultural water use areas, dividing study units according to soil characteristics, vegetation coverage of the underlying layer, and hydrological characteristics, and we propose a division method for land parcel units. Based on multi-source thermal infrared and near-infrared remote sensing data, we calculate the NDVI and TVDI indices and build a statistical model between the TVDI index and soil moisture from ground monitoring stations. We then study a soil moisture retrieval method at the land parcel unit scale. The method has been applied in the Hetao irrigation area. Results show that, compared with the pixel scale, soil moisture content retrieved at the land parcel unit scale displays a stronger correlation with ground truth, so the retrieval method shows good applicability in the Hetao irrigation area. 
Using land parcel units with unified crop and soil attributes as the research units better matches the characteristics of agricultural water use areas, avoids problems such as mixed-pixel decomposition and excessive dependence on high-resolution data that arise when pixels are used as research units, and does not involve the compromises between spatial scale and simulation precision of grid-based simulation. When application needs are met, the production efficiency of the products can also be improved to a certain degree.
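A TVDI of the kind used in the statistical model above is computed in NDVI/surface-temperature space. This is a generic sketch: the bin count and the straight-line dry/wet edge fits are assumptions, not the authors' exact procedure.

```python
import numpy as np

def tvdi(ts, ndvi, n_bins=20):
    """Temperature-Vegetation Dryness Index. Fit the dry edge Ts_max(NDVI)
    and wet edge Ts_min(NDVI) as lines through per-bin temperature extremes,
    then TVDI = (Ts - Ts_min) / (Ts_max - Ts_min): ~1 = dry, ~0 = wet."""
    bins = np.linspace(ndvi.min(), ndvi.max(), n_bins + 1)
    centers, t_max, t_min = [], [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (ndvi >= lo) & (ndvi < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            t_max.append(ts[sel].max())
            t_min.append(ts[sel].min())
    a_d, b_d = np.polyfit(centers, t_max, 1)   # dry edge
    a_w, b_w = np.polyfit(centers, t_min, 1)   # wet edge
    return (ts - (a_w * ndvi + b_w)) / ((a_d * ndvi + b_d) - (a_w * ndvi + b_w))

# Synthetic scene: dry pixels on Ts = 320 - 30*NDVI, wet pixels on 290 - 5*NDVI.
ndvi = np.tile(np.linspace(0.1, 0.8, 50), 2)
ts = np.concatenate([320 - 30 * ndvi[:50], 290 - 5 * ndvi[50:]])
dryness = tvdi(ts, ndvi)
```

Aggregating `dryness` over each land parcel unit, rather than per pixel, is the scale change the abstract argues for.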
G-Channel Restoration for RWB CFA with Double-Exposed W Channel
Park, Chulhee; Song, Ki Sun; Kang, Moon Gi
2017-01-01
In this paper, we propose a green (G)-channel restoration for a red–white–blue (RWB) color filter array (CFA) image sensor using the dual sampling technique. By using white (W) pixels instead of G pixels, the RWB CFA provides high-sensitivity imaging and an improved signal-to-noise ratio compared to the Bayer CFA. However, owing to this high sensitivity, the W pixel values become rapidly over-saturated before the red–blue (RB) pixel values reach the appropriate levels. Because the missing G color information included in the W channel cannot be restored with a saturated W, multiple captures with dual sampling are necessary to solve this early W-pixel saturation problem. Each W pixel has a different exposure time when compared to those of the R and B pixels, because the W pixels are double-exposed. Therefore, a RWB-to-RGB color conversion method is required in order to restore the G color information, using a double-exposed W channel. The proposed G-channel restoration algorithm restores G color information from the W channel by considering the energy difference caused by the different exposure times. Using the proposed method, the RGB full-color image can be obtained while maintaining the high-sensitivity characteristic of the W pixels. PMID:28165425
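The exposure-compensation step at the heart of the RWB-to-RGB conversion can be sketched as follows. The linear form and the coefficients a and c are illustrative assumptions, not the paper's calibrated model.

```python
def restore_g(w, r, b, exposure_ratio=2.0, a=0.3, c=0.2):
    """Sketch of G restoration from a double-exposed W pixel: first undo
    the longer W exposure by the known exposure ratio, then subtract
    illustrative R and B contributions to leave a G estimate.
    Coefficients a and c are hypothetical, not calibrated values."""
    w_eq = w / exposure_ratio          # scale W back to the R/B exposure
    return max(w_eq - a * r - c * b, 0.0)

g_est = restore_g(200.0, 100.0, 50.0)  # W double-exposed relative to R, B
```

The key point the abstract makes is that `exposure_ratio` differs per capture under dual sampling, so the energy difference must be compensated before any such channel arithmetic.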
NASA Astrophysics Data System (ADS)
McCarty, C.; Moersch, J.
2017-12-01
Sedimentary processes have slowed over Mars' geologic history. Analysis of the surface today can provide insight into the processes that may have affected it over its history. Sub-resolved checkerboard mixtures of materials with different thermal inertias (and therefore different grain sizes) can lead to differences in thermal inertia values inferred from night and day radiance observations. Information about the grain size distribution of a surface can help determine the degree of sorting it has experienced, or its geologic maturity. Standard methods for deriving thermal inertia from measurements made with THEMIS can give values for the same location that vary by as much as 20% between scenes. Such methods assume that each THEMIS pixel contains material with uniform thermophysical properties. Here we propose that if a mixture of small and large particles is present within a pixel, the inferred thermal inertia will be strongly dominated by whichever particle is warmer at the time of the measurement, because the power radiated by a surface is proportional (by the Stefan-Boltzmann law) to the fourth power of its temperature. This effect will result in a change in thermal inertia values inferred from measurements taken at different times of day and night. Therefore, we expect to see correlation between the magnitude of diurnal variations in inferred thermal inertia values and the degree of grain size mixing for a given pixel location. Preliminary work has shown that the magnitude of such diurnal variation in inferred thermal inertias is sufficient to detect geologically useful differences in grain size distributions. We hypothesize that at least some of the 20% variability in thermal inertias inferred from multiple scenes for a given location could be attributed to sub-pixel grain size mixing rather than uncertainty inherent to the experiment, as previously thought. 
Mapping the difference in inferred thermal inertias from day and night THEMIS observations may prove to be a new way of distinguishing surfaces that have relatively uniform grain sizes from those that have mixed grain sizes. Assessing the effects of different geologic processes can be aided by noting variations in grain size distributions, so this method may be useful as a new way to extract geologic interpretations from the THEMIS thermal data set.
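The warm-component dominance argument follows directly from the Stefan-Boltzmann law and can be checked numerically; this is a minimal sketch assuming unit emissivity, a two-component checkerboard mix, and illustrative temperatures.

```python
def mixed_pixel_temperature(f, t_a, t_b):
    """Brightness temperature of a pixel whose area is a checkerboard
    mix of two thermal components with area fraction f and 1-f:
    emitted power adds linearly and scales as T^4 (Stefan-Boltzmann),
    so the warmer component dominates the inferred temperature."""
    return (f * t_a**4 + (1 - f) * t_b**4) ** 0.25

# 50/50 mix of 290 K and 270 K material within one pixel.
t_mix = mixed_pixel_temperature(0.5, 290.0, 270.0)
```

Because the T^4 weighting is nonlinear, `t_mix` exceeds the 280 K arithmetic mean; whichever component is warmer (fines by day, coarse material at night) pulls the apparent temperature, and hence the inferred thermal inertia, toward itself, producing the diurnal variation the abstract proposes to map.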
Design of the low area monotonic trim DAC in 40 nm CMOS technology for pixel readout chips
NASA Astrophysics Data System (ADS)
Drozd, A.; Szczygiel, R.; Maj, P.; Satlawa, T.; Grybos, P.
2014-12-01
Recent research in hybrid pixel detectors working in single photon counting mode focuses on nanometer and 3D technologies, which allow making pixels smaller and implementing more complex solutions in each pixel. Usually, a single pixel in readout electronics for X-ray detection comprises a charge amplifier, a shaper, and a discriminator that classifies events occurring at the detector as true or false hits by comparing the amplitude of the obtained signal with a threshold voltage, which minimizes the influence of noise. However, making the pixel size smaller often causes problems with pixel-to-pixel uniformity, and additional effects like charge sharing become more visible. To improve channel-to-channel uniformity or to implement an algorithm minimizing the charge sharing effect, small-area trimming DACs working independently in each pixel are necessary. However, meeting the small-area requirement often results in poor linearity and even non-monotonicity. In this paper we present a novel low-area thermometer-coded 6-bit DAC implemented in 40 nm CMOS technology. Monte Carlo simulations performed on the described design prove that the DAC is inherently monotonic under all conditions. The presented DAC was implemented in a prototype readout chip with 432 pixels working in single photon counting mode, with two trimming DACs in each pixel; each DAC occupies an area of 8 μm × 18.5 μm. Measurements and chip tests were performed to obtain reliable statistical results.
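Why thermometer coding is inherently monotonic can be shown in a few lines with a behavioral model; the 10% unit-cell mismatch below is a hypothetical value for illustration, not the chip's measured statistics.

```python
import numpy as np

def thermometer_dac(code, cell_currents):
    """Behavioral model of a thermometer-coded DAC: input code k switches
    on the first k unit cells, so each code increment only *adds* one
    (positive) cell current and can never decrease the output, whatever
    the cell-to-cell mismatch."""
    return cell_currents[:code].sum()

rng = np.random.default_rng(0)
cells = 1.0 + 0.1 * rng.standard_normal(63)            # 6 bits -> 63 mismatched unit cells
ramp = [thermometer_dac(k, cells) for k in range(64)]  # transfer curve, codes 0..63
```

A binary-weighted DAC of the same area budget offers no such guarantee: at a major code transition, mismatch between the large and small current sources can make the output step backwards.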
Design methodology: edgeless 3D ASICs with complex in-pixel processing for pixel detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.
The design methodology for the development of 3D integrated edgeless pixel detectors with in-pixel processing using Electronic Design Automation (EDA) tools is presented. A large area 3 tier 3D detector with one sensor layer and two ASIC layers containing one analog and one digital tier, is built for x-ray photon time of arrival measurement and imaging. A full custom analog pixel is 65μm x 65μm. It is connected to a sensor pixel of the same size on one side, and on the other side it has approximately 40 connections to the digital pixel. A 32 x 32 edgeless array without any peripheral functional blocks constitutes a sub-chip. The sub-chip is an indivisible unit, which is further arranged in a 6 x 6 array to create the entire 1.248cm x 1.248cm ASIC. Each chip has 720 bump-bond I/O connections, on the back of the digital tier to the ceramic PCB. All the analog tier power and biasing is conveyed through the digital tier from the PCB. The assembly has no peripheral functional blocks, and hence the active area extends to the edge of the detector. This was achieved by using a few flavors of almost identical analog pixels (minimal variation in layout) to allow for peripheral biasing blocks to be placed within pixels. The 1024 pixels within a digital sub-chip array have a variety of full custom, semi-custom and automated timing driven functional blocks placed together. The methodology uses a modified mixed-mode on-top digital implementation flow to not only harness the tool efficiency for timing and floor-planning but also to maintain designer control over compact parasitically aware layout. The methodology uses the Cadence design platform, however it is not limited to this tool.
Bringing the Coastal Zone into Finer Focus
NASA Astrophysics Data System (ADS)
Guild, L. S.; Hooker, S. B.; Kudela, R. M.; Morrow, J. H.; Torres-Perez, J. L.; Palacios, S. L.; Negrey, K.; Dungan, J. L.
2015-12-01
Measurements over extents from sub-meter to tens of meters are critical science requirements for the design and integration of remote sensing instruments for coastal zone research. Various coastal ocean phenomena operate at different scales (e.g., meters to kilometers). For example, river plumes and algal blooms have typical extents of tens of meters and can therefore be resolved with satellite data; however, shallow benthic ecosystems (e.g., coral, seagrass, and kelp) are best studied for biodiversity and change at resolutions of sub-meter to meter, below the pixel size of typical satellite products. Natural phenomena do not delineate neatly into gridded pixels, and the coastal zone is complicated by mixed pixels at the land-sea interface, with a range of bio-optical signals from terrestrial and water components. In many standard satellite products, these coastal mixed pixels are masked out because they confound the algorithms behind the ocean color parameter suite. Finer spatial resolution satellite data can be obtained at the land/sea interface, but spectral resolution is sacrificed. This remote sensing resolution challenge thwarts the advancement of coastal zone research. Further, remote sensing of benthic ecosystems and shallow sub-surface phenomena must contend with sensing through the sea surface and through a water column with varying light conditions, from the open ocean to the water's edge. For coastal waters, more than 80% of the remote sensing signal is scattered or absorbed by atmospheric constituents, sun glint from the sea surface, and water column components. 
In addition to in-water measurements from platforms such as ships, gliders, moorings, and divers, low-altitude aircraft outfitted with high-quality bio-optical radiometers, with channels matched to both the in-water sensors and the higher-altitude ocean color sensors, bridge sea-truth measurements to the pixels acquired from satellite and high-altitude platforms. We highlight a novel NASA airborne calibration, validation, and research capability for addressing the coastal remote sensing resolution challenge.
Sekine, Hiroshi; Kobayashi, Masahiro; Onuki, Yusuke; Kawabata, Kazunari; Tsuboi, Toshiki; Matsuno, Yasushi; Takahashi, Hidekazu; Inoue, Shunsuke; Ichikawa, Takeshi
2017-12-09
CMOS image sensors (CISs) with a global shutter (GS) function are strongly required in order to avoid image degradation. However, CISs with the GS function have generally been inferior in performance to rolling shutter (RS) CISs, because they have more components; this problem is particularly notable at small pixel pitch. The newly developed 3.4 µm pitch GS CIS solves this problem by using multiple accumulation shutter technology and a gentle slope light guide structure. As a result, the developed GS pixel achieves 1.8 e⁻ temporal noise and 16,200 e⁻ full well capacity with charge domain memory in 120 fps operation. The sensitivity and parasitic light sensitivity are 28,000 e⁻/lx·s and −89 dB, respectively. Moreover, the incident light angle dependence of the sensitivity and parasitic light sensitivity are improved by the gentle slope light guide structure.
Babin, D; Pižurica, A; Bellens, R; De Bock, J; Shang, Y; Goossens, B; Vansteenkiste, E; Philips, W
2012-07-01
Extraction of structural and geometric information from 3-D images of blood vessels is a well known and widely addressed segmentation problem. The segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, particularly in diagnostics and surgery on arteriovenous malformations (AVMs). However, techniques addressing segmentation of the inner AVM structure are rare. In this work we present a novel pixel-profiling method applied to the segmentation of 3-D angiography AVM images. Our algorithm stands out in situations with low resolution images and high variability of pixel intensity. Another advantage of our method is that its parameters are set automatically, requiring little manual user intervention. Results on phantoms and real data demonstrate its effectiveness and its potential for fine delineation of AVM structure. Copyright © 2012 Elsevier B.V. All rights reserved.
Impervious surface mapping with Quickbird imagery
Lu, Dengsheng; Hetrick, Scott; Moran, Emilio
2010-01-01
This research selects two study areas with different urban development, sizes, and spatial patterns to explore suitable methods for mapping impervious surface distribution using Quickbird imagery. The selected methods include per-pixel supervised classification, segmentation-based classification, and a hybrid method. A comparative analysis of the results indicates that per-pixel supervised classification produces a large number of “salt-and-pepper” pixels, and segmentation-based methods can significantly reduce this problem. However, neither method can effectively resolve the spectral confusion of impervious surfaces with water/wetland and bare soils, or the impacts of shadows. In order to accurately map impervious surface distribution from Quickbird images, manual editing is necessary and may be the only way to separate impervious surfaces from spectrally confused land covers and shadowed areas. This research indicates that the hybrid method, consisting of thresholding techniques, unsupervised classification and limited manual editing, provides the best performance. PMID:21643434
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Buss, James R.; Kopriva, Ivica
2004-04-01
We proposed a physics approach to solve a physical inverse problem, namely choosing the unique equilibrium solution at the minimum free energy H = E − T₀S, which includes the Wiener solution (least-mean-squares minimization of E) and ICA (maximization of S) as special cases. "Unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data alone, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing for a single pixel in real-world cases of remote sensing, early tumor detection, and SARS. The indeterminacy among the multiple solutions of the inverse problem is regulated, and a solution selected, by means of the absolute minimum of isothermal free energy as the ground truth of the local equilibrium condition at the single-pixel footprint.
Investigation of SIS Up-Converters for Use in Multi-pixel Receivers
NASA Astrophysics Data System (ADS)
Uzawa, Yoshinori; Kojima, Takafumi; Shan, Wenlei; Gonzalez, Alvaro; Kroug, Matthias
2018-02-01
We propose the use of SIS junctions as a frequency up-converter based on quasiparticle mixing in frequency division multiplexing circuits for multi-pixel heterodyne receivers. Our theoretical calculation showed that SIS junctions have the potential to achieve positive gain and low-noise characteristics in the frequency up-conversion process at local oscillator (LO) frequencies larger than the voltage scale of the dc nonlinearity of the SIS junction. We experimentally observed up-conversion gain in a mixer with four-series Nb-based SIS junctions at the LO frequency of 105 GHz for the first time.
Some spectral and spatial characteristics of LANDSAT data
NASA Technical Reports Server (NTRS)
1982-01-01
Activities are provided for: (1) developing insight into the way in which the LANDSAT MSS produces multispectral data; (2) promoting understanding of what a "pixel" means in a LANDSAT image and the implications of the term "mixed pixel"; (3) explaining the concept of spectral signatures; (4) deriving a simple signature for a class or feature by analysis of the four band images; (5) understanding the production of false color composites; (6) appreciating the use of color additive techniques; (7) preparing Diazo images; and (8) making quick visual identifications of major land cover types by their characteristic gray tones or colors in LANDSAT images.
Shadow-free single-pixel imaging
NASA Astrophysics Data System (ADS)
Li, Shunhua; Zhang, Zibang; Ma, Xiao; Zhong, Jingang
2017-11-01
Single-pixel imaging is an innovative imaging scheme that has received increasing attention in recent years, because it is applicable to imaging at non-visible wavelengths and under weak light conditions. However, as in conventional imaging, shadows are likely to occur in single-pixel imaging and can have negative effects in practical use. In this paper, the principle of shadow occurrence in single-pixel imaging is analyzed, and a technique for shadow removal is then proposed. In the proposed technique, several single-pixel detectors are used to detect the backscattered light at different locations, so that the shadows in the reconstructed images corresponding to the different detectors are complementary. A shadow-free reconstruction can be derived by fusing the shadow-complementary images using a maximum selection rule. To deal with the problem of intensity mismatch in image fusion, we put forward a simple calibration. As experimentally demonstrated, the technique is able to reconstruct monochromatic and full-color shadow-free images.
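The maximum selection rule above amounts to a per-pixel maximum over the shadow-complementary reconstructions, after aligning detector gains. A minimal NumPy sketch (the scalar least-squares gain calibration and the toy arrays are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def calibrate_gain(img, ref, mask=None):
    """Estimate a scalar gain aligning img to ref over selected pixels
    (least-squares ratio); a simple stand-in for intensity calibration."""
    if mask is None:
        mask = np.ones(img.shape, dtype=bool)
    return float(np.sum(ref[mask] * img[mask]) / np.sum(img[mask] ** 2))

def fuse_max(images):
    """Fuse shadow-complementary reconstructions with the maximum selection
    rule: each output pixel takes the brightest (least shadowed) value."""
    return np.max(np.stack(images, axis=0), axis=0)

# toy example: two reconstructions, each shadowed in a different half
a = np.array([[0.1, 0.9], [0.1, 0.9]])   # left column in shadow
b = np.array([[0.8, 0.2], [0.8, 0.2]])   # right column in shadow
fused = fuse_max([a, b])
```

In practice the gain from `calibrate_gain` would be applied to each detector's reconstruction before fusing, so that the maximum rule compares values on a common intensity scale.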
Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy.
Zhang, Jialin; Sun, Jiasong; Chen, Qian; Li, Jiaji; Zuo, Chao
2017-09-18
High-resolution wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and the physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimal solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to significantly enhance the robustness and SNR of the reconstruction. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm²) and achieve a half-pitch lateral resolution of 770 nm, surpassing by a factor of 2.17 the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 µm). A full-FOV imaging result of a typical dicot root is also provided to demonstrate promising potential applications in biological imaging.
NASA Astrophysics Data System (ADS)
Liu, Haijian; Wu, Changshan
2018-06-01
Crown-level tree species classification is a challenging task due to the spectral similarity among different tree species. Shadow, underlying objects, and other materials within a crown may decrease the purity of extracted crown spectra and further reduce classification accuracy. To address this problem, an innovative pixel-weighting approach was developed for tree species classification at the crown level. The method utilized high-density discrete LiDAR data for individual tree delineation and Airborne Imaging Spectrometer for Applications (AISA) hyperspectral imagery for pure crown-scale spectra extraction. Specifically, three steps were included: 1) individual tree identification using LiDAR data; 2) pixel-weighted representative crown spectra calculation using hyperspectral imagery, in which pixel-based illuminated-leaf fractions estimated using a linear spectral mixture analysis (LSMA) were employed as weighting factors; and 3) representative-spectra-based tree species classification using a support vector machine (SVM) approach. Analysis of the results suggests that the developed pixel-weighting approach (OA = 82.12%, Kc = 0.74) performed better than the treetop-based (OA = 70.86%, Kc = 0.58) and pixel-majority (OA = 72.26%, Kc = 0.62) methods in terms of classification accuracy. McNemar tests indicated that the differences in accuracy between the pixel-weighting and treetop-based approaches, as well as between the pixel-weighting and pixel-majority approaches, were statistically significant.
NASA Astrophysics Data System (ADS)
Senthil Kumar, A.; Keerthi, V.; Manjunath, A. S.; Werff, Harald van der; Meer, Freek van der
2010-08-01
Classification of hyperspectral images has been receiving considerable attention, with many new applications reported from the commercial and military sectors. Hyperspectral images are composed of a large number of spectral channels and have the potential to deliver a great deal of information about a remotely sensed scene. However, in addition to high dimensionality, hyperspectral image classification is compounded by the coarse ground pixel size of the sensor, required to obtain an adequate signal-to-noise ratio within a fine spectral passband. This makes multiple ground features jointly occupy a single pixel. Spectral mixture analysis typically begins with pixel classification using spectral matching techniques, followed by the use of spectral unmixing algorithms for estimating endmember abundance values in the pixel. Spectral matching techniques are analogous to supervised pattern recognition approaches and try to estimate the similarity between the spectral signatures of the pixel and a reference target. In this paper, we propose a spectral matching approach that combines two schemes—the variable interval spectral average (VISA) method and the spectral curve matching (SCM) method. The VISA method helps to detect transient spectral features at different scales of spectral windows, while the SCM method finds a match between these features of the pixel and one of the library spectra by least-squares fitting. We also compare the performance of the combined algorithm with other spectral matching techniques using simulated and AVIRIS hyperspectral data sets. Our results indicate that the proposed combination technique exhibits stronger performance than the other methods in the classification of both pure and mixed-class pixels simultaneously.
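The SCM step can be read as a least-squares fit of each library spectrum to the pixel spectrum, with the residual serving as the match score. A simplified sketch under that reading (the VISA windowing stage is omitted, and the toy spectra are invented):

```python
import numpy as np

def scm_match(pixel, library):
    """Simplified spectral curve matching: for each library spectrum s, fit a
    scale factor c minimizing ||pixel - c*s||^2 (closed form), and return the
    index of the best-fitting spectrum together with all residuals."""
    residuals = []
    for s in library:
        c = np.dot(pixel, s) / np.dot(s, s)      # closed-form LS scale
        residuals.append(np.sum((pixel - c * s) ** 2))
    residuals = np.array(residuals)
    return int(np.argmin(residuals)), residuals

# toy 5-band spectra: the pixel is a scaled copy of library spectrum 1
lib = [np.array([1.0, 2.0, 3.0, 2.0, 1.0]),
       np.array([3.0, 1.0, 0.5, 1.0, 3.0])]
best, res = scm_match(0.7 * lib[1], lib)
```

Because the fit is scale-invariant, a pixel that is a brightness-scaled version of a library spectrum matches it with zero residual, which is the behavior a curve-matching (rather than magnitude-matching) score should have.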
Shade images of forested areas obtained from LANDSAT MSS data
NASA Technical Reports Server (NTRS)
Shimabukuro, Yosio Edemir; Smith, James A.
1989-01-01
The pixel size in present-day remote sensing systems is large enough to include different types of land cover. Depending upon the target area, several components may be present within a pixel. In forested areas, generally, three main components are present: tree canopy, soil (understory), and shadow. The objective is to generate a shade (shadow) image of forested areas from multispectral measurements of LANDSAT MSS (Multispectral Scanner) data by implementing a linear mixing model in which shadow is considered one of the primary components in a pixel. The shade images are related to the observed variation in forest structure, i.e., the proportion of inferred shadow in a pixel is related to different forest ages, forest types, and tree crown cover. The Constrained Least Squares (CLS) method is used to generate shade images for eucalyptus forest and cerrado vegetation using LANDSAT MSS imagery over the Itapeva study area in Brazil. The resulting shade images may explain the differences in age for the eucalyptus forest and the differences in tree crown cover for the cerrado vegetation.
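A linear mixing model with a sum-to-one constraint can be sketched as follows; this is a simplified stand-in for the paper's CLS solver (the sum-to-one handling via a heavily weighted extra row, the clipping to [0, 1], and the toy endmember spectra are all assumptions):

```python
import numpy as np

def unmix_scls(pixel, endmembers, w=1e3):
    """Simplified sum-to-one constrained unmixing: ordinary least squares on
    the endmember matrix augmented with a heavily weighted sum-to-one row,
    followed by clipping to [0, 1]. A sketch of the CLS idea only."""
    E = np.column_stack(endmembers)                  # bands x endmembers
    A = np.vstack([E, w * np.ones((1, E.shape[1]))]) # soft sum-to-one row
    b = np.concatenate([pixel, [w]])
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(f, 0.0, 1.0)

# toy 4-band endmembers: canopy, soil (understory), shadow
canopy = np.array([0.05, 0.08, 0.45, 0.50])
soil   = np.array([0.20, 0.25, 0.30, 0.35])
shadow = np.array([0.02, 0.02, 0.03, 0.03])
mixed  = 0.5 * canopy + 0.2 * soil + 0.3 * shadow
fracs  = unmix_scls(mixed, [canopy, soil, shadow])
```

The shadow fraction recovered per pixel is what a shade image maps across the scene.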
Nouri, Hamideh; Anderson, Sharolyn; Sutton, Paul; Beecham, Simon; Nagler, Pamela L.; Jarchow, Christopher J.; Roberts, Dar A.
2017-01-01
This research addresses the question of whether or not the Normalised Difference Vegetation Index (NDVI) is scale invariant (i.e. constant over spatial aggregation) for pure pixels of urban vegetation. It has long been recognized that there are issues related to the modifiable areal unit problem (MAUP) pertaining to indices such as NDVI and to images at varying spatial resolutions. These issues are relevant to using NDVI values in spatial analyses. We compare two different methods of calculating a mean NDVI: 1) using pixel values of NDVI within feature/object boundaries and 2) first calculating the mean red and mean near-infrared across all feature pixels and then calculating NDVI. We explore the nature and magnitude of the differences between these methods for images taken from two sensors, a 1.24 m resolution WorldView-3 and a 0.1 m resolution digital aerial image. We apply these methods over an urban park located in the Adelaide Parklands of South Australia. We demonstrate that the MAUP is not an issue for the calculation of NDVI within a sensor for pure urban vegetation pixels. This may prove useful for future rule-based monitoring of the ecosystem functioning of green infrastructure.
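The two averaging orders being compared can be sketched directly; the toy reflectance values are invented, and for near-pure vegetation pixels the two means come out very close:

```python
import numpy as np

def mean_of_ndvi(red, nir):
    """Method 1: mean of per-pixel NDVI values within the feature boundary."""
    ndvi = (nir - red) / (nir + red)
    return ndvi.mean()

def ndvi_of_means(red, nir):
    """Method 2: NDVI computed from the mean red and mean NIR of the feature."""
    r, n = red.mean(), nir.mean()
    return (n - r) / (n + r)

# toy 2x2 'pure vegetation' feature with slight pixel-to-pixel variation
red = np.array([[0.05, 0.06], [0.05, 0.07]])
nir = np.array([[0.50, 0.55], [0.48, 0.60]])
m1 = mean_of_ndvi(red, nir)
m2 = ndvi_of_means(red, nir)
```

Because NDVI is a nonlinear ratio, the two orders of averaging are not algebraically identical; the paper's finding is that the discrepancy is negligible for pure vegetation pixels within a single sensor.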
NASA Technical Reports Server (NTRS)
Wilcox, Mike
1993-01-01
The number of pixels per unit area sampling an image determines Nyquist resolution. Therefore, the highest pixel density is the goal. Unfortunately, as reduction in pixel size approaches the wavelength of light, sensitivity is lost and noise increases. Animals face the same problems and have achieved novel solutions. Emulating these solutions offers potentially unlimited sensitivity with detector size approaching the diffraction limit. Once an image is 'captured', cellular preprocessing of information allows extraction of high resolution information from the scene. Computer simulation of this system promises hyperacuity for machine vision.
Improvement of the energy resolution of pixelated CdTe detectors for applications in 0νββ searches
NASA Astrophysics Data System (ADS)
Gleixner, T.; Anton, G.; Filipenko, M.; Seller, P.; Veale, M. C.; Wilson, M. D.; Zang, A.; Michel, T.
2015-07-01
Experiments trying to detect 0νββ are very challenging. Their requirements include a good energy resolution and a good detection efficiency. With current finely pixelated CdTe detectors there is a trade-off between energy resolution and detection efficiency, which limits their performance. It will be shown with simulations that this problem can be mostly negated by analysing the cathode signal, which increases the optimal sensor thickness. We will compare different types of finely pixelated CdTe detectors (Timepix, Dosepix, HEXITEC) from this point of view.
NASA Astrophysics Data System (ADS)
Herrmann, Christoph; Engel, Klaus-Jürgen; Wiegert, Jens
2010-12-01
The most obvious problem in obtaining spectral information with energy-resolving photon-counting detectors in clinical computed tomography (CT) is the huge x-ray flux present in conventional CT systems. At high tube voltages (e.g. 140 kVp), despite the beam shaper, this flux can be close to 10⁹ cps mm⁻² in the direct beam or in regions behind the object which are close to the direct beam. Without accepting the drawbacks of truncated reconstruction, i.e. estimating missing direct-beam projection data, a photon-counting energy-resolving detector has to be able to deal with such high count rates. Sub-structuring pixels into sub-pixels is not enough to reduce the count rate per pixel to values that today's direct-converting Cd[Zn]Te material can cope with (≤10 Mcps in an optimistic view). Below 300 µm pixel pitch, x-ray cross-talk (Compton scatter and K-escape) and the effect of charge diffusion between pixels are problematic. By organising the detector in several different layers, the count rate can be further reduced. However, this alone does not limit the count rates to the required level, since the high stopping power of the material becomes a disadvantage in the layered approach: a simple absorption calculation for 300 µm pixel pitch shows that the layer thickness required to stay below 10 Mcps per pixel for the top layers in the direct beam is significantly below 100 µm. In a horizontal multi-layer detector, such thin layers are very difficult to manufacture due to the brittleness of Cd[Zn]Te. In a vertical configuration (also called edge-on illumination (Lundqvist et al 2001 IEEE Trans. Nucl. Sci. 48 1530-6, Roessl et al 2008 IEEE NSS-MIC-RTSD 2008, Conf. Rec. Talk NM2-3)), bonding of the readout electronics (with pixel pitches below 100 µm) is not straightforward, although it has already been done successfully (Pellegrini et al 2004 IEEE NSS MIC 2004 pp 2104-9). Obviously, for the top detector layers, materials with lower stopping power would be advantageous.
The possible choices are, however, quite limited, since only 'mature' materials that operate at room temperature and can be manufactured reliably should reasonably be considered. Since GaAs is still known to cause reliability problems, the simplest choice is Si, albeit with the drawback of strong Compton scatter, which can cause considerable inter-pixel cross-talk. To investigate the potential and the problems of Si in a multi-layer detector, this paper studies the combination of top detector layers made of Si with lower layers made of Cd[Zn]Te, using Monte Carlo simulated detector responses. It is found that the inter-pixel cross-talk due to Compton scatter is indeed very high; however, with an appropriate cross-talk correction scheme, which is also described, the negative effects of cross-talk are shown to be removed to a very large extent.
Timepix Device Efficiency for Pattern Recognition of Tracks Generated by Ionizing Radiation
NASA Astrophysics Data System (ADS)
Leroy, Claude; Asbah, Nedaa; Gagnon, Louis-Guilaume; Larochelle, Jean-Simon; Pospisil, Stanislav; Soueid, Paul
2014-06-01
A hybrid silicon pixelated TIMEPIX detector (256 × 256 pixels with 55 µm pitch) operated in Time-Over-Threshold (TOT) mode was exposed to radioactive sources (241Am, 106Ru, 137Cs) and to protons and alpha-particles Rutherford-backscattered from a thin gold foil, with the proton and alpha-particle beams delivered by the Tandem Accelerator of the University of Montreal. Measurements were also performed with different mixed radiation fields of heavy charged particles (protons and alpha-particles), photons, and electrons, produced by simultaneous exposure of TIMEPIX to the radioactive sources and to proton beams on top of the radioactive sources. All measurements were performed in vacuum. The TOT mode of operation has allowed the direct measurement of the energy deposited in each pixel. The efficiency of track recognition with this device was tested by comparing the experimental activities of the radioactive sources (determined from the number of measured tracks) with their expected activities. The efficiency of track recognition of incident protons and alpha-particles of different energies was measured as a function of the incidence angle. The operation of TIMEPIX in TOT mode has allowed a 3D mapping of the charge-sharing effect in the whole volume of the silicon sensor. The effect of the bias voltage on charge sharing was investigated, as the level of charge sharing is related to the local profile of the electric field in the sensor. The results of the present measurements demonstrate the capability of TIMEPIX to differentiate between particle species in mixed radiation fields and to measure their energy deposition. Single-track analysis gives good precision (significantly better than the 55 µm size of one detector pixel) on the coordinates of the impact point of protons interacting in the TIMEPIX silicon layer.
Resolving the percentage of component terrains within single resolution elements
NASA Technical Reports Server (NTRS)
Marsh, S. E.; Switzer, P.; Kowalik, W. S.; Lyon, R. J. P.
1980-01-01
An approximate maximum likelihood technique employing a widely available discriminant analysis program is discussed that has been developed for resolving the percentage of component terrains within single resolution elements. The method uses all four channels of Landsat data simultaneously and does not require prior knowledge of the percentage of components in mixed pixels. It was tested in five cases that were chosen to represent mixtures of outcrop, soil and vegetation which would typically be encountered in geologic studies with Landsat data. For all five cases, the method proved to be superior to single band weighted average and linear regression techniques and permitted an estimate of the total area occupied by component terrains to within plus or minus 6% of the true area covered. Its major drawback is a consistent overestimation of the pixel component percent of the darker materials (vegetation) and an underestimation of the pixel component percent of the brighter materials (sand).
Getting small: new 10μm pixel pitch cooled infrared products
NASA Astrophysics Data System (ADS)
Reibel, Y.; Pere-Laperne, N.; Augey, T.; Rubaldo, L.; Decaens, G.; Bourqui, M.-L.; Manissadjian, A.; Billon-Lanfrey, D.; Bisotto, S.; Gravrand, O.; Destefanis, G.; Druart, G.; Guerineau, N.
2014-06-01
Recent advances in the miniaturization of IR imaging technology have led to a burgeoning market for mini thermal-imaging sensors. Seen in this context, our development of smaller pixel pitches has opened the door to very compact products. When this competitive advantage is combined with smaller coolers, thanks to HOT technology, we achieve valuable reductions in the size, weight, and power of the overall package. At the same time, we are moving towards a global offer based on digital interfaces, which provides our customers with lower power consumption and a simplified IR system design process while freeing up more space. Additionally, we are also investigating a new wafer-level camera solution taking advantage of progress in micro-optics. This paper discusses recent developments in HOT and small-pixel-pitch technologies, as well as efforts on the compact packaging solutions developed by SOFRADIR in collaboration with CEA-LETI and ONERA.
Chaos based video encryption using maps and Ikeda time delay system
NASA Astrophysics Data System (ADS)
Valli, D.; Ganesan, K.
2017-12-01
Chaos-based cryptosystems are an efficient approach to high-speed, highly secure multimedia encryption because of their elegant features, such as randomness, mixing, ergodicity, and sensitivity to initial conditions and control parameters. In this paper, two chaos-based cryptosystems are proposed: one based on a higher-dimensional 12D chaotic map and the other on the Ikeda delay differential equation (DDE), suitable for designing a real-time secure symmetric video encryption scheme. These encryption schemes employ a substitution box (S-box) to diffuse the relationship between the pixels of the plain video and the cipher video, along with diffusion of the current input pixel with the previous cipher pixel, known as cipher block chaining (CBC). The proposed method enhances robustness against statistical, differential and chosen/known plain-text attacks. Detailed analysis is carried out in this paper to demonstrate the security and uniqueness of the proposed scheme.
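The S-box-plus-CBC diffusion structure can be sketched as follows. The keystream here comes from a 1-D logistic map standing in for the paper's 12D map / Ikeda DDE, and the key-derived S-box is simulated with a seeded permutation; both are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Byte keystream from the logistic map x <- r*x*(1-x); a 1-D stand-in
    for the paper's higher-dimensional chaos sources."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) & 0xFF)
    return np.array(ks, dtype=np.uint8)

def make_sbox(seed=1):
    """Key-derived S-box simulated as a seeded random byte permutation."""
    rng = np.random.default_rng(seed)
    sbox = rng.permutation(256).astype(np.uint8)
    inv = np.empty(256, dtype=np.uint8)
    inv[sbox] = np.arange(256, dtype=np.uint8)
    return sbox, inv

def encrypt(pixels, ks, sbox, iv=0):
    out, prev = [], iv
    for p, k in zip(pixels, ks):
        c = int(sbox[p ^ k]) ^ prev        # substitute, then CBC-chain
        out.append(c)
        prev = c
    return np.array(out, dtype=np.uint8)

def decrypt(cipher, ks, inv_sbox, iv=0):
    out, prev = [], iv
    for c, k in zip(cipher, ks):
        out.append(int(inv_sbox[c ^ prev]) ^ k)
        prev = int(c)
    return np.array(out, dtype=np.uint8)

# round-trip demo on a short scan line of pixel values
pixels = np.array([10, 200, 10, 10, 55], dtype=np.uint8)
ks = logistic_keystream(0.3561, 3.99, len(pixels))
sbox, inv_sbox = make_sbox(seed=1)
cipher = encrypt(pixels, ks, sbox)
recovered = decrypt(cipher, ks, inv_sbox)
```

The CBC chaining (`^ prev`) is what propagates a change in one plain pixel into all subsequent cipher pixels, the property exploited against differential attacks.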
Noise characterization of a 512×16 spad line sensor for time-resolved spectroscopy applications
NASA Astrophysics Data System (ADS)
Finlayson, Neil; Usai, Andrea; Erdogan, Ahmet T.; Henderson, Robert K.
2018-02-01
Time-resolved spectroscopy in the presence of noise is challenging. We have developed a new 512-pixel line sensor with 16 single-photon avalanche diode (SPAD) detectors per pixel and ultrafast in-pixel time-correlated single photon counting (TCSPC) histogramming for such applications. SPADs are near shot-noise-limited detectors, but we are still faced with the problem of high dark count rate (DCR) SPADs. The noisiest SPADs can be switched off to optimise signal-to-noise ratios (SNR), at the expense of longer acquisition/exposure times than would be possible if more SPADs were exploited. Here we present a detailed noise characterization of our array. We build a DCR map for the sensor and demonstrate the effect of switching off the noisiest SPADs in each pixel. 24% of the SPADs in the array are measured to have a DCR in excess of 1 kHz, while selecting the best SPAD per pixel reduces the DCR to 53 ± 7 Hz across the entire array. We demonstrate that selection of the lowest-DCR SPAD in each pixel leads to the emergence of sparse spatial sampling noise in the sensor.
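The per-pixel SPAD selection can be sketched on a synthetic DCR map (the log-normal DCR distribution is an assumption, with parameters chosen so that roughly a quarter of SPADs exceed 1 kHz, mirroring the reported 24%):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic DCR map: 512 pixels x 16 SPADs, log-normal with a hot-SPAD tail
dcr = rng.lognormal(mean=5.5, sigma=2.0, size=(512, 16))  # Hz, illustrative

hot_fraction = np.mean(dcr > 1000.0)          # share of SPADs above 1 kHz
best_per_pixel = dcr.min(axis=1)              # lowest-DCR SPAD in each pixel
mean_best = best_per_pixel.mean()
```

Keeping only the minimum-DCR SPAD per pixel drives the effective dark count far below the array average, at the cost of the sparse spatial sampling the abstract describes.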
A neural net based architecture for the segmentation of mixed gray-level and binary pictures
NASA Technical Reports Server (NTRS)
Tabatabai, Ali; Troudet, Terry P.
1991-01-01
A neural-net-based architecture is proposed to perform segmentation in real time for mixed gray-level and binary pictures. In this approach, the composite picture is divided into 16 x 16 pixel blocks, which are identified as character blocks or image blocks on the basis of a dichotomy measure computed by an adaptive 16 x 16 neural net. For compression purposes, each image block is further divided into 4 x 4 subblocks; a one-bit nonparametric quantizer is used to encode 16 x 16 character and 4 x 4 image blocks; and the binary map and quantizer levels are obtained through a neural net segmentor over each block. The efficiency of the neural segmentation in terms of computational speed, data compression, and quality of the compressed picture is demonstrated. The effect of weight quantization is also discussed. VLSI implementations of such adaptive neural nets in CMOS technology are described and simulated in real time for a maximum block size of 256 pixels.
Alternative Optimizations of X-ray TES Arrays: Soft X-rays, High Count Rates, and Mixed-Pixel Arrays
NASA Technical Reports Server (NTRS)
Kilbourne, C. A.; Bandler, S. R.; Brown, A.-D.; Chervenak, J. A.; Figueroa-Feliciano, E.; Finkbeiner, F. M.; Iyomoto, N.; Kelley, R. L.; Porter, F. S.; Smith, S. J.
2007-01-01
We are developing arrays of superconducting transition-edge sensors (TES) for imaging spectroscopy telescopes such as the XMS on Constellation-X. While our primary focus has been on arrays that meet the XMS requirements (of which the foremost is an energy resolution of 2.5 eV at 6 keV and a bandpass from approx. 0.3 keV to 12 keV), we have also investigated other optimizations that might be used to extend the XMS capabilities. In one of these optimizations, improved resolution below 1 keV is achieved by reducing the heat capacity. Such pixels can be based on our XMS-style TESs with the separate absorbers omitted. These pixels can be added to an array with broadband response either as a separate array or interspersed, depending on other factors that include telescope design and science requirements. In one version of this approach, we have designed and fabricated a composite array of low-energy and broadband pixels to provide high spectral resolving power over a broader energy bandpass than could be obtained with a single TES design. The array consists of alternating pixels with and without overhanging absorbers. To explore optimizations for higher count rates, we are also optimizing the design and operating temperature of pixels that are coupled to a solid substrate. We will present the performance of these variations and discuss other optimizations that could be used to enhance the XMS or enable other astrophysics experiments.
Giewekemeyer, Klaus; Philipp, Hugh T; Wilke, Robin N; Aquila, Andrew; Osterhoff, Markus; Tate, Mark W; Shanks, Katherine S; Zozulya, Alexey V; Salditt, Tim; Gruner, Sol M; Mancuso, Adrian P
2014-09-01
Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single-photon detection, detecting fluxes exceeding 1 × 10⁸ 8-keV photons pixel⁻¹ s⁻¹, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10¹⁰ photons µm⁻² s⁻¹ within an area of approximately 325 nm × 603 nm. This was done without the need for a beam stop and with very modest attenuation, while 'still' images of the empty-beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and the CDI methodology for reconstruction of non-sensitive detector regions, which partially also extends the active detector area, are described.
Study on pixel matching method of the multi-angle observation from airborne AMPR measurements
NASA Astrophysics Data System (ADS)
Hou, Weizhen; Qie, Lili; Li, Zhengqiang; Sun, Xiaobing; Hong, Jin; Chen, Xingfeng; Xu, Hua; Sun, Bin; Wang, Han
2015-10-01
In the along-track scanning mode, the same place along the ground track can be detected by the Advanced Multi-angular Polarized Radiometer (AMPR) at several different scanning angles from -55 to 55 degrees, which provides a possible means of obtaining multi-angular detections from nearby pixels. However, owing to the ground sample spacing and the spatial footprint of the detection, the differing footprint sizes cannot guarantee the spatial matching of partly overlapping pixels, which becomes a bottleneck for the effective use of the multi-angular information detected by AMPR to study aerosol and surface polarized properties. Based on our definition and calculation of the pixel coincidence rate for multi-angular detection, an effective pixel-matching method for multi-angle observations is presented to solve the spatial matching problem for airborne AMPR. Each AMPR pixel is assumed to be an ellipse whose major and minor axes depend on the flight attitude and the scanning angle. By defining a coordinate system and origin, latitude and longitude can be transformed into Euclidean distances, and the pixel coincidence rate of two nearby ellipses can be calculated. Via a traversal of the ground pixels, those pixels with a high coincidence rate can be selected and merged; with further quality control of the observation data, a ground-pixel dataset with multi-angular detection can be obtained and analyzed, providing support for the multi-angular and polarized retrieval algorithm research in the next study.
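One plausible reading of the pixel coincidence rate is the overlap area of two elliptical footprints divided by the area of the smaller footprint; that quantity is easy to estimate by Monte Carlo sampling. A sketch under that assumption (the footprint parameters are invented, and the paper's exact definition may differ):

```python
import numpy as np

def in_ellipse(pts, cx, cy, a, b, theta):
    """True for points inside an ellipse with center (cx, cy), semi-axes
    a, b, rotated by theta (footprint orientation along the scan)."""
    dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

def coincidence_rate(e1, e2, n=200_000, seed=0):
    """Monte Carlo overlap area of two elliptical footprints divided by the
    area of the smaller footprint."""
    rng = np.random.default_rng(seed)
    cx, cy, a, b, _ = e1
    r = max(a, b)
    # sample uniformly in e1's bounding box, keep the points inside e1
    pts = rng.uniform([cx - r, cy - r], [cx + r, cy + r], size=(n, 2))
    inside1 = in_ellipse(pts, *e1)
    both = inside1 & in_ellipse(pts, *e2)
    area1 = np.pi * a * b
    overlap = area1 * both.sum() / inside1.sum()
    smaller = min(area1, np.pi * e2[2] * e2[3])
    return overlap / smaller

e1 = (0.0, 0.0, 2.0, 1.0, 0.3)          # (cx, cy, a, b, theta)
e2 = (1.0, 0.2, 2.0, 1.0, 0.5)          # partly overlapping footprint
rate_same = coincidence_rate(e1, e1)
rate_partial = coincidence_rate(e1, e2)
```

A threshold on this rate would then decide which nearby footprints count as the same ground pixel and may be merged.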
Micro-pixelation and color mixing in biological photonic structures (presentation video)
NASA Astrophysics Data System (ADS)
Bartl, Michael H.; Nagi, Ramneet K.
2014-03-01
The world of insects displays myriad hues of coloration effects produced by elaborate nano-scale architectures built into wings and exoskeleton. For example, we have recently found many weevils possess photonic architectures with cubic lattices. In this talk, we will present high-resolution three-dimensional reconstructions of weevil photonic structures with diamond and gyroid lattices. Moreover, by reconstructing entire scales we found arrays of single-crystalline domains, each oriented such that only selected crystal faces are visible to an observer. This pixel-like arrangement is key to the angle-independent coloration typical of weevils—a strategy that could enable a new generation of coating technologies.
Salience Assignment for Multiple-Instance Data and Its Application to Crop Yield Prediction
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Lane, Terran
2010-01-01
An algorithm was developed to generate crop yield predictions from orbital remote sensing observations by analyzing thousands of pixels per county and the associated historical crop yield data for those counties. The algorithm determines which pixels contain which crop. Since each known yield value is associated with thousands of individual pixels, this is a multiple-instance learning problem. Because individual crop growth is related to the resulting yield, this relationship has been leveraged to identify pixels that are individually related to corn, wheat, cotton, and soybean yield. Those that have the strongest relationship to a given crop's yield values are most likely to contain fields with that crop. Remote sensing time series data (a new observation every 8 days) were examined for each pixel, capturing that pixel's growth curve, peak greenness, and other relevant features. An alternating-projection (AP) technique was used to first estimate the "salience" of each pixel with respect to the given target (crop yield), and those estimates were then used to build a regression model that relates the input data (remote sensing observations) to the target. This is achieved by constructing an exemplar for each crop in each county that is a weighted average of all the pixels within the county, with the pixels weighted according to their salience values. The new regression model estimate then informs the next estimate of the salience values. By iterating between these two steps, the algorithm converges to a stable estimate of both the salience of each pixel and the regression model. The salience values indicate which pixels are most relevant to each crop under consideration.
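The alternating-projection loop can be sketched in miniature: salience-weighted exemplars, a linear fit, and a salience update that rewards pixels the current model explains well. The exponential re-weighting rule and the toy county data are assumptions, not the paper's exact update:

```python
import numpy as np

def ap_salience(bags, yields, n_iter=20):
    """Toy alternating projection: (1) form each county exemplar as a
    salience-weighted average of its pixel features, (2) fit a linear model
    from exemplars to yields, (3) re-weight pixels by how well the model
    explains them individually. A sketch of the idea only."""
    saliences = [np.full(len(b), 1.0 / len(b)) for b in bags]
    beta = None
    for _ in range(n_iter):
        X = np.array([s @ b for s, b in zip(saliences, bags)])
        A = np.column_stack([X, np.ones(len(X))])      # add intercept
        beta, *_ = np.linalg.lstsq(A, yields, rcond=None)
        for i, b in enumerate(bags):
            preds = np.column_stack([b, np.ones(len(b))]) @ beta
            err = (preds - yields[i]) ** 2
            w = np.exp(-err / (err.mean() + 1e-12))    # low error -> salient
            saliences[i] = w / w.sum()
    return saliences, beta

# toy data: 3 counties, each with two 'crop' pixels whose greenness tracks
# yield (yield = 2 x greenness) plus one uninformative pixel
bags = [np.array([[1.0], [1.0], [7.0]]),
        np.array([[2.0], [2.0], [7.0]]),
        np.array([[3.0], [3.0], [7.0]])]
yields = np.array([2.0, 4.0, 6.0])
saliences, beta = ap_salience(bags, yields)
```

Over the iterations, salience concentrates on the crop pixels and the fitted slope approaches the true yield-greenness relationship, illustrating the positive feedback between the two steps.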
GENIE: a hybrid genetic algorithm for feature classification in multispectral images
NASA Astrophysics Data System (ADS)
Perkins, Simon J.; Theiler, James P.; Brumby, Steven P.; Harvey, Neal R.; Porter, Reid B.; Szymanski, John J.; Bloch, Jeffrey J.
2000-10-01
We consider the problem of pixel-by-pixel classification of a multispectral image using supervised learning. Conventional supervised classification techniques, such as maximum likelihood classification, and less conventional ones, such as neural networks, typically base such classifications solely on the spectral components of each pixel. It is easy to see why: the color of a pixel provides a nice, bounded, fixed-dimensional space in which these classifiers work well. It is often the case, however, that spectral information alone is not sufficient to correctly classify a pixel. Perhaps spatial neighborhood information is required as well; or perhaps the raw spectral components do not themselves make for easy classification, but some arithmetic combination of them would. In either case we have the problem of selecting suitable spatial, spectral or spatio-spectral features that allow the classifier to do its job well. The number of all possible such features is extremely large; how can we select a suitable subset? We have developed GENIE, a hybrid learning system that combines a genetic algorithm, which searches a space of image processing operations for a set that can produce suitable feature planes, with a more conventional classifier that uses those feature planes to output a final classification. In this paper we show that the hybrid GA provides significant advantages over either a GA alone or conventional classification methods alone. We present results using high-resolution IKONOS data, looking for regions of burned forest and for roads.
Sub-pixel localization of highways in AVIRIS images
NASA Technical Reports Server (NTRS)
Salu, Yehuda
1995-01-01
Roads and highways show up clearly in many bands of AVIRIS images. A typical lane in the U.S. is 12 feet wide, and the total width of a four-lane highway, including 18 feet of paved shoulders, is 19.8 m. Such a highway will cover only a portion of any 20x20 m AVIRIS pixel that it traverses; the remainder of these pixels will usually be covered by vegetation. An interesting problem is to precisely determine the location of a highway within the AVIRIS pixels that it traverses. This information may be used for alignment and spatial calibration of AVIRIS images. Also, since the reflection properties of highway surfaces do not change with time and can be determined once and for all, such information can help in calculating and filtering out the atmospheric noise that contaminates AVIRIS measurements. The purpose of this report is to describe a method for sub-pixel localization of highways.
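The geometry behind the abstract, a wide road partially covering a 20 m pixel, already permits a toy version of sub-pixel localization: with known road and vegetation reflectances, the mixed-pixel radiance gives the covered fraction, which gives the road's offset. This is a hypothetical one-dimensional sketch, not the report's method; the reflectance values are invented.

```python
# A 19.8 m wide highway crossing a row of 20 m pixels: the fraction of each
# pixel covered depends on the highway's sub-pixel offset.
PIXEL = 20.0
ROAD_W = 19.8

def coverage(offset):
    """Covered fraction of pixels [0,20) and [20,40) for a road edge at `offset`."""
    road = (offset, offset + ROAD_W)
    def overlap(lo, hi):
        return max(0.0, min(hi, road[1]) - max(lo, road[0]))
    return overlap(0, PIXEL) / PIXEL, overlap(PIXEL, 2 * PIXEL) / PIXEL

# Mixed-pixel radiance: f * road + (1 - f) * vegetation (invented values)
ROAD_R, VEG_R = 0.30, 0.08

def mixed(f):
    return f * ROAD_R + (1 - f) * VEG_R

# Sub-pixel localization: invert a measured radiance back to the offset.
f1, _ = coverage(5.0)                       # "truth": road edge at 5.0 m
meas = mixed(f1)                            # what the sensor would record
f_est = (meas - VEG_R) / (ROAD_R - VEG_R)   # recover the covered fraction
offset_est = PIXEL * (1.0 - f_est)          # road fills the tail of the pixel
print(round(offset_est, 2))                 # 5.0
```

The real problem is harder (unknown reflectances, atmosphere, 2-D geometry), but the inversion from mixed radiance to fractional coverage is the core idea.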
Structural colour printing from a reusable generic nanosubstrate masked for the target image
NASA Astrophysics Data System (ADS)
Rezaei, M.; Jiang, H.; Kaminska, B.
2016-02-01
Structural colour printing has advantages over traditional pigment-based colour printing. However, the high fabrication cost has hindered its application to printing large-area images, because each image requires patterning structural pixels at nanoscale resolution. In this work, we present a novel strategy to print structural colour images from a pixelated substrate called a nanosubstrate. The nanosubstrate is fabricated only once using nanofabrication tools and can be reused for printing a large quantity of structural colour images. It contains closely packed arrays of nanostructures from which red, green, blue and infrared structural pixels can be imprinted. To print a target colour image, the nanosubstrate is first covered with a mask layer to block all the structural pixels. The mask layer is subsequently patterned according to the target colour image to make apertures of controllable sizes on top of the wanted primary colour pixels. The masked nanosubstrate is then used as a stamp to imprint the colour image onto a separate substrate surface using nanoimprint lithography. Different visual colours are achieved by mixing the red, green and blue primary colours in appropriate ratios controlled by the aperture sizes on the patterned mask layer. Such a strategy significantly reduces the cost and complexity of printing a structural colour image, replacing lengthy nanoscale patterning with high-throughput micro-patterning, and makes it possible to apply structural colour printing to personalized security features and data storage. In this paper, nanocone array grating pixels were used as the structural pixels, and the nanosubstrate contains structures to imprint the nanocone arrays. Laser lithography was used to pattern the mask layer with submicron resolution. The optical properties of the nanocone array gratings are studied in detail. Multiple printed structural colour images with embedded covert information are demonstrated.
Low temperature performance of a commercially available InGaAs image sensor
NASA Astrophysics Data System (ADS)
Nakaya, Hidehiko; Komiyama, Yutaka; Kashikawa, Nobunari; Uchida, Tomohisa; Nagayama, Takahiro; Yoshida, Michitoshi
2016-08-01
We report the evaluation results of a commercially available InGaAs image sensor manufactured by Hamamatsu Photonics K. K., which has sensitivity between 0.95 μm and 1.7 μm at room temperature. The sensor format is 128×128 pixels with a 20 μm pitch. It was tested with our original readout electronics and cooled down to 80 K by a mechanical cooler to minimize the dark current. While the readout noise and dark current were 200 e- and 20 e-/s/pixel, respectively, we found no serious problems with the linearity, wavelength response, or intra-pixel response.
Obstacle Detection Algorithms for Rotorcraft Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.; Huang, Ying; Narasimhamurthy, Anand; Pande, Nitin; Ahumada, Albert (Technical Monitor)
2001-01-01
In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated at both the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter.
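Steger's detector locates line centers to sub-pixel accuracy from derivative-filter responses; stripped to its essence, the sub-pixel step is a parabola fitted through three samples around the response maximum. The snippet below is that one step only, a 1-D simplification of the full algorithm (which works on 2-D Hessian eigen-directions):

```python
import numpy as np

def subpixel_peak(v):
    """Sub-pixel extremum offset from three samples v[-1], v[0], v[+1]
    around the peak pixel, via a quadratic (parabolic) fit."""
    denom = v[0] - 2.0 * v[1] + v[2]
    return 0.5 * (v[0] - v[2]) / denom

# A line cross-section sampled on the pixel grid, true center at x = 0.3
xs = np.array([-1.0, 0.0, 1.0])
profile = 1.0 - (xs - 0.3) ** 2   # quadratic ridge: fit is exact here
print(round(subpixel_peak(profile), 2))  # 0.3
```

For a quadratic profile the recovery is exact; for real wire profiles (roughly Gaussian) the fit is slightly biased, which is one reason the full method uses derivative filters at an appropriate scale.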
Controlling bridging and pinching with pixel-based mask for inverse lithography
NASA Astrophysics Data System (ADS)
Kobelkov, Sergey; Tritchkov, Alexander; Han, JiWan
2016-03-01
Inverse Lithography Technology (ILT) has become a viable computational lithography candidate in recent years, as it can produce mask output that results in process latitude and CD control in the fab that is hard to match with conventional OPC/SRAF insertion approaches. An approach to solving the inverse lithography problem as a nonlinear, constrained minimization problem over a domain of mask pixels was suggested in the paper by Y. Granik, "Fast pixel-based mask optimization for inverse lithography," in 2006. The present paper extends this method to satisfy bridging and pinching constraints imposed on print contours. Specifically, objective functions that penalize constraint violations are proposed, and their minimization with gradient-descent methods is considered. This approach has been tested with an ILT-based Local Printability Enhancement (LPTM) tool in an automated flow to eliminate hotspots that can be present on the full chip after conventional SRAF placement/OPC, and has been applied in 14 nm and 10 nm node production, in single- and multiple-patterning flows.
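The pixel-based formulation can be illustrated with a 1-D toy: gradient descent on a quadratic penalty that pushes the simulated "print" of a pixel mask toward a target contour. The 3-tap blur standing in for the optical model, the step size, and the target are all assumptions for illustration; the paper's bridging/pinching penalties would be additional terms of the same gradient-descent form.

```python
import numpy as np

def blur(m):
    # crude 1-D stand-in for the optical model: circular 3-tap smoothing
    return 0.25 * np.roll(m, 1) + 0.5 * m + 0.25 * np.roll(m, -1)

target = np.zeros(32)
target[10:22] = 1.0              # desired print contour

mask = np.full(32, 0.5)          # pixel-based mask, values kept in [0, 1]
for _ in range(500):
    resid = blur(mask) - target
    # gradient of the penalty 0.5 * ||blur(mask) - target||^2;
    # the symmetric blur is its own adjoint, so grad = blur(resid)
    mask -= 0.5 * blur(resid)
    mask = np.clip(mask, 0.0, 1.0)

print(round(float(np.abs(blur(mask) - target).max()), 2))
```

The interior of the contour prints exactly while a bounded residual remains at the edges, the blur cannot reproduce a perfect step, which is exactly the regime where extra contour constraints (anti-bridging, anti-pinching) earn their keep.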
NASA Astrophysics Data System (ADS)
Chen, Chao; Gao, Nan; Wang, Xiangjun; Zhang, Zonghua
2018-05-01
Three-dimensional (3D) shape measurement based on fringe pattern projection techniques has been commonly used in various fields. One of the remaining challenges in fringe pattern projection is that camera sensor saturation may occur if there is a large range of reflectivity variation across the surface that causes measurement errors. To overcome this problem, a novel fringe pattern projection method is proposed to avoid image saturation and maintain high-intensity modulation for measuring shiny surfaces by adaptively adjusting the pixel-to-pixel projection intensity according to the surface reflectivity. First, three sets of orthogonal color fringe patterns and a sequence of uniform gray-level patterns with different gray levels are projected onto a measured surface by a projector. The patterns are deformed with respect to the object surface and captured by a camera from a different viewpoint. Subsequently, the optimal projection intensity at each pixel is determined by fusing different gray levels and transforming the camera pixel coordinate system into the projector pixel coordinate system. Finally, the adapted fringe patterns are created and used for 3D shape measurement. Experimental results on a flat checkerboard and shiny objects demonstrate that the proposed method can measure shiny surfaces with high accuracy.
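The per-pixel intensity adaptation step described above can be sketched directly: project a few uniform gray levels, estimate each camera pixel's response gain from its brightest unsaturated capture, then choose the projector level that lands the pixel just below saturation. All numeric values (saturation, target level, gains) are illustrative assumptions, and the linear gain model is a simplification of the paper's procedure, which also maps between camera and projector coordinates.

```python
import numpy as np

SATURATION = 255.0
TARGET = 240.0                             # high modulation, but unsaturated

levels = np.array([50.0, 100.0, 150.0])    # uniform gray-level patterns
gainmap = np.array([[0.4, 2.0],            # per-pixel response "gain";
                    [1.0, 3.5]])           # shiny spots have gain > 1

# captured intensity = gain * projected level, clipped at sensor saturation
captures = np.clip(levels[:, None, None] * gainmap, 0.0, SATURATION)

# estimate gain per pixel from the brightest unsaturated capture
unsat = captures < SATURATION
est_gain = np.where(unsat, captures / levels[:, None, None], 0.0).max(axis=0)

optimal_level = TARGET / est_gain          # adapted projection intensity
# (a real projector would also clamp optimal_level to its own output range)
adapted = np.clip(est_gain * optimal_level, 0.0, SATURATION)
print(adapted)  # every pixel captured at 240: no saturation, high modulation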
Development of a High Dynamic Range Pixel Array Detector for Synchrotrons and XFELs
NASA Astrophysics Data System (ADS)
Weiss, Joel Todd
Advances in synchrotron radiation light source technology have opened new lines of inquiry in materials science, biology, and everything in between. However, x-ray detector capabilities must advance in concert with light source technology to fully realize experimental possibilities. X-ray free electron lasers (XFELs) place particularly large demands on the capabilities of detectors, and developments towards diffraction-limited storage ring sources also necessitate detectors capable of measuring very high flux [1-3]. The detector described herein builds on the Mixed-Mode Pixel Array Detector (MM-PAD) framework, developed previously by our group to perform high dynamic range imaging, and the Adaptive Gain Integrating Pixel Detector (AGIPD), developed for the European XFEL by a collaboration between Deutsches Elektronen-Synchrotron (DESY), the Paul Scherrer Institute (PSI), the University of Hamburg, and the University of Bonn, led by Heinz Graafsma [4, 5]. The feasibility of combining adaptive gain with charge removal techniques to increase dynamic range in XFEL experiments is assessed by simulating XFEL scatter with a pulsed infrared laser. The strategy is incorporated into pixel prototypes, which are evaluated with direct current injection to simulate very high incident x-ray flux. A fully functional 16x16 pixel hybrid integrating x-ray detector featuring several different pixel architectures based on the prototypes was developed. This dissertation describes its operation and characterization. To extend dynamic range, charge is removed from the integration node of the front-end amplifier without interrupting integration. The number of times this process occurs is recorded by a digital counter in the pixel. The parameter limiting full well is thereby shifted from the size of an integration capacitor to the depth of a digital counter.
The result is similar to that achieved by counting pixel array detectors, but the integrators presented here are designed to tolerate a sustained flux >10^11 x-rays/pixel/second. In addition, digitization of residual analog signals allows sensitivity to single x-rays or low-flux signals. Pixel high-flux linearity is evaluated by direct exposure to an unattenuated synchrotron source x-ray beam, and flux measurements of more than 10^10 9.52 keV x-rays/pixel/s are made. Detector sensitivity to small signals is evaluated and dominant sources of error are identified. These new pixels boast multiple orders of magnitude improvement in maximum sustained flux over the MM-PAD, which is capable of measuring a sustained flux in excess of 10^8 x-rays/pixel/second while maintaining sensitivity to smaller signals, down to single x-rays.
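The full-well arithmetic implied by the charge-removal scheme can be made concrete. The parameter values below are placeholder assumptions, not the detector's actual design numbers; the point is only that full well scales with counter depth rather than capacitor size.

```python
# Full well of a charge-removal pixel: counter depth times the charge
# quantum removed per event, plus the residual analog range.
counter_bits = 18          # depth of the in-pixel digital counter (assumed)
xrays_per_removal = 100    # x-ray equivalents removed per dump event (assumed)
residual_range = 100       # x-ray equivalents resolvable by the residual ADC

full_well = (2**counter_bits - 1) * xrays_per_removal + residual_range
print(f"{full_well:.3e} x-ray equivalents per frame")
```

Doubling the integration capacitor would only double full well; adding one counter bit doubles it too, at far lower cost in pixel area.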
Nouri, Hamideh; Anderson, Sharolyn; Sutton, Paul; Beecham, Simon; Nagler, Pamela; Jarchow, Christopher J; Roberts, Dar A
2017-04-15
This research addresses the question as to whether or not the Normalised Difference Vegetation Index (NDVI) is scale invariant (i.e. constant over spatial aggregation) for pure pixels of urban vegetation. It has long been recognized that there are issues related to the modifiable areal unit problem (MAUP) pertaining to indices such as NDVI and images at varying spatial resolutions. These issues are relevant to using NDVI values in spatial analyses. We compare two different methods of calculation of a mean NDVI: 1) using pixel values of NDVI within feature/object boundaries and 2) first calculating the mean red and mean near-infrared across all feature pixels and then calculating NDVI. We explore the nature and magnitude of these differences for images taken from two sensors, a 1.24 m resolution WorldView-3 and a 0.1 m resolution digital aerial image. We apply these methods over an urban park located in the Adelaide Parklands of South Australia. We demonstrate that the MAUP is not an issue for calculation of NDVI within a sensor for pure urban vegetation pixels. This may prove useful for future rule-based monitoring of the ecosystem functioning of green infrastructure. Copyright © 2017 Elsevier B.V. All rights reserved.
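The two aggregation orders compared in the abstract, mean of per-pixel NDVI versus NDVI of the band means, are easy to state in numpy. The synthetic reflectance values below are illustrative stand-ins for pure vegetation pixels.

```python
import numpy as np

def mean_of_ndvi(red, nir):
    """Method 1: compute NDVI per pixel, then average over the feature."""
    ndvi = (nir - red) / (nir + red)
    return float(ndvi.mean())

def ndvi_of_means(red, nir):
    """Method 2: average red and NIR over the feature, then compute NDVI."""
    r, n = red.mean(), nir.mean()
    return float((n - r) / (n + r))

# Synthetic reflectances for a patch of homogeneous "pure" vegetation pixels
rng = np.random.default_rng(0)
red = rng.uniform(0.04, 0.06, 100)
nir = rng.uniform(0.40, 0.50, 100)
a = mean_of_ndvi(red, nir)
b = ndvi_of_means(red, nir)
print(round(a, 3), round(b, 3))  # nearly identical for homogeneous pixels
```

Because NDVI is a ratio, the two orders of operation agree only approximately; for homogeneous (pure) pixels the Jensen-type discrepancy is tiny, which is consistent with the paper's finding that MAUP is not an issue within a sensor for pure vegetation pixels.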
2006-11-14
The martian region called Deuteronilus is characterized by hills and mesas surrounded by broad debris slopes. Some of the slopes have surface markings that may indicate volatiles are mixed in with the debris. Image information: VIS instrument. Latitude 41.9N, Longitude 18.1E. 19 meter/pixel resolution. http://photojournal.jpl.nasa.gov/catalog/PIA01773
MIXS on BepiColombo and its DEPFET based focal plane instrumentation
NASA Astrophysics Data System (ADS)
Treis, J.; Andricek, L.; Aschauer, F.; Heinzinger, K.; Herrmann, S.; Hilchenbach, M.; Lauf, T.; Lechner, P.; Lutz, G.; Majewski, P.; Porro, M.; Richter, R. H.; Schaller, G.; Schnecke, M.; Schopper, F.; Soltau, H.; Stefanescu, A.; Strüder, L.; de Vita, G.
2010-12-01
Focal plane instrumentation based on DEPFET Macropixel devices, a combination of the DEPFET detector-amplifier structure with a silicon drift chamber (SDD), has been proposed for the MIXS (Mercury Imaging X-ray Spectrometer) instrument on ESA's Mercury exploration mission BepiColombo. MIXS images X-ray fluorescence radiation from the surface of Mercury onto the focal plane detector with a lightweight X-ray mirror system, to measure the spatially resolved element abundance in Mercury's crust. The sensor needs an energy resolution better than 200 eV FWHM at 1 keV and is required to cover an energy range from 0.5 to 10 keV, for a pixel size of 300×300 μm2. The main challenges for the instrument are radiation damage and the difficult thermal environment in Mercury orbit. Production of the first batch of flight devices has been finished at the MPI semiconductor laboratory. Prototype modules have been assembled to verify the electrical properties of the devices; selected results are presented here. The prototype devices, Macropixel prototypes for the SIMBOL-X focal plane, are electrically fully compatible but have a pixel size of 0.5×0.5 mm2. Excellent homogeneity and near-Fano-limited energy resolution at high readout speeds have been observed on these devices.
Herrmann, Christoph; Engel, Klaus-Jürgen; Wiegert, Jens
2010-12-21
The most obvious problem in obtaining spectral information with energy-resolving photon-counting detectors in clinical computed tomography (CT) is the huge x-ray flux present in conventional CT systems. At high tube voltages (e.g. 140 kVp), despite the beam shaper, this flux can be close to 10⁹ cps mm⁻² in the direct beam or in regions behind the object which are close to the direct beam. Without accepting the drawbacks of truncated reconstruction, i.e. estimating missing direct-beam projection data, a photon-counting energy-resolving detector has to be able to deal with such high count rates. Sub-structuring pixels into sub-pixels is not enough to reduce the count rate per pixel to values that today's direct-converting Cd[Zn]Te material can cope with (≤ 10 Mcps in an optimistic view): below 300 µm pixel pitch, x-ray cross-talk (Compton scatter and K-escape) and charge diffusion between pixels become problematic. By organising the detector in several different layers, the count rate can be further reduced. However, this alone does not limit the count rates to the required level, since the high stopping power of the material becomes a disadvantage in the layered approach: a simple absorption calculation for 300 µm pixel pitch shows that the layer thickness required to keep the top layers in the direct beam below 10 Mcps/pixel is significantly below 100 µm. In a horizontal multi-layer detector, such thin layers are very difficult to manufacture due to the brittleness of Cd[Zn]Te. In a vertical configuration (also called edge-on illumination (Lundqvist et al 2001 IEEE Trans. Nucl. Sci. 48 1530-6; Roessl et al 2008 IEEE NSS-MIC-RTSD 2008, Conf. Rec. Talk NM2-3)), bonding of the readout electronics (with pixel pitches below 100 µm) is not straightforward, although it has already been done successfully (Pellegrini et al 2004 IEEE NSS MIC 2004 pp 2104-9). Obviously, for the top detector layers, materials with lower stopping power would be advantageous.
The possible choices are, however, quite limited, since only 'mature' materials, which operate at room temperature and can be manufactured reliably, should reasonably be considered. Since GaAs is still known to cause reliability problems, the simplest choice is Si, albeit with the drawback of strong Compton scatter, which can cause considerable inter-pixel cross-talk. To investigate the potential and the problems of Si in a multi-layer detector, this paper studies the combination of top detector layers made of Si with lower layers made of Cd[Zn]Te, using Monte Carlo simulated detector responses. It is found that the inter-pixel cross-talk due to Compton scatter is indeed very high; however, with an appropriate cross-talk correction scheme, which is also described, the negative effects of cross-talk are shown to be removed to a very large extent.
Inverse analysis of non-uniform temperature distributions using multispectral pyrometry
NASA Astrophysics Data System (ADS)
Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling
2016-05-01
Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
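The forward model behind the abstract, each spectral channel of the pyrometer sees an area-fraction-weighted sum of Planck radiances at the sub-pixel temperatures, makes the inverse problem concrete. The sketch below uses plain linear least squares on a noiseless simulation with four fixed candidate temperatures; the paper's improved Levenberg-Marquardt treatment addresses the noisy, ill-posed version of this same system.

```python
import numpy as np

# Planck's law (spectral radiance, arbitrary scale); SI constants
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

# Candidate sub-pixel temperatures and wavelengths spanning the 8-13 um band
temps = np.array([500.0, 600.0, 700.0, 800.0])
lams = np.linspace(8e-6, 13e-6, 12)

# Forward model: channel radiance = sum over temperatures of fraction * Planck
A = np.array([[planck(l, T) for T in temps] for l in lams])
true_f = np.array([0.1, 0.4, 0.3, 0.2])    # temperature area fractions
signal = A @ true_f                        # simulated "one pixel" measurement

# Inverse: recover the area fractions from the multispectral signal
f, *_ = np.linalg.lstsq(A, signal, rcond=None)
print(np.round(f, 3))  # ≈ [0.1, 0.4, 0.3, 0.2]
```

Even in this clean setting the columns of A (Planck curves at nearby temperatures over a narrow band) are quite similar, which is exactly why the real problem is ill-posed and why measurement noise pushes the relative errors into the few-percent range reported in the paper.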
NASA Astrophysics Data System (ADS)
Xiao, Zhitao; Leng, Yanyi; Geng, Lei; Xi, Jiangtao
2018-04-01
In this paper, a new convolutional neural network method is proposed for the inspection and classification of galvanized stamping parts. First, all workpieces are divided into normal and defective by image processing, and the region of interest (ROI) extracted from each defective workpiece is input to trained fully convolutional networks (FCN). The network uses the end-to-end, pixel-to-pixel training scheme that is currently the most advanced approach in semantic segmentation, predicting a result for each pixel. Second, different pixel values are assigned to workpiece, defect and background in the training images, and the pixel values and pixel counts in the output image are used to recognize defects. Finally, a threshold on the defect area, set according to the needs of the project, is applied to achieve the specific classification of the workpiece. The experimental results show that the proposed method can successfully detect and classify defects in galvanized stamping parts under ordinary camera and illumination conditions, and its accuracy can reach 99.6%. Moreover, it avoids complex image preprocessing and difficult feature extraction, and shows better adaptability.
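The post-processing step, counting labeled pixels in the segmentation output and thresholding the defect area, is simple enough to sketch directly. The label encoding and the threshold value here are illustrative assumptions, since the paper only says the threshold depends on project needs.

```python
import numpy as np

# Assumed label encoding for the FCN output: 0 = background, 1 = workpiece,
# 2 = defect. The area threshold is project-dependent (arbitrary here).
DEFECT_AREA_THRESHOLD = 5   # in pixels

def classify(label_map):
    """Count defect-labeled pixels and threshold on the defect area."""
    defect_area = int(np.count_nonzero(label_map == 2))
    return "defective" if defect_area > DEFECT_AREA_THRESHOLD else "normal"

seg = np.zeros((8, 8), dtype=int)
seg[1:7, 1:7] = 1           # workpiece region
seg[3:6, 3:6] = 2           # a 9-pixel defect
print(classify(seg))        # defective
```

In practice one would threshold per connected defect region rather than the total count, but the pixel-value/pixel-count bookkeeping is the same.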
A Spiral-Based Downscaling Method for Generating 30 m Time Series Image Data
NASA Astrophysics Data System (ADS)
Liu, B.; Chen, J.; Xing, H.; Wu, H.; Zhang, J.
2017-09-01
The spatial detail and updating frequency of land cover data are important factors influencing land surface dynamic monitoring applications at high spatial resolution. However, the fragmented patches and seasonal variability of some land cover types (e.g. small crop fields, wetlands) make the generation of land cover data labor-intensive and difficult. Utilizing high spatial resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolutions of available remote sensing data such as the Landsat or MODIS datasets can hardly satisfy the minimum mapping unit and the update frequency of current land cover mapping/updating at the same time. Generating high resolution time series may be a compromise to cover this shortage in the land cover updating process. One popular way is to downscale multi-temporal MODIS data with high spatial resolution auxiliary data such as Landsat. However, the usual manner of downscaling a pixel based on a window may lead to an underdetermined problem in heterogeneous areas, resulting in uncertainty in some high spatial resolution pixels; the downscaled multi-temporal data can therefore hardly reach the spatial resolution of Landsat data. A spiral-based method is introduced here to downscale image data of low spatial but high temporal resolution to high spatial and high temporal resolution. By searching for similar pixels around the adjacent region along a spiral, a pixel set is built up pixel by pixel. Solving the linear system over the constructed pixel set prevents the underdetermined problem to a large extent. With the help of ordinary least squares, the method inverts the endmember values of the linear system, and the high spatial resolution image is reconstructed band by band on the basis of the high spatial resolution class map and the endmember values.
Then, the high spatial resolution time series is formed from these high spatial resolution images. A simulated experiment and a remote sensing image downscaling experiment were conducted. In the simulated experiment, the 30 m class map dataset GlobeLand30 was adopted to investigate the effect of avoiding the underdetermined problem in the downscaling procedure, and a comparison between the spiral and the window was conducted. Further, MODIS NDVI and Landsat image data were adopted to generate a 30 m NDVI time series in the remote sensing image downscaling experiment. The simulated experiment showed that the proposed method performs robustly when downscaling pixels in heterogeneous regions and is superior to traditional window-based methods. The high resolution time series generated may benefit the mapping and updating of land cover data.
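The core linear system can be sketched in numpy: each coarse pixel's value is a class-fraction-weighted sum of unknown per-class endmember values, and gathering enough coarse pixels (which the paper does by walking a spiral of similar pixels) keeps the system overdetermined for ordinary least squares. The class labels, fractions, and NDVI values below are synthetic illustrations.

```python
import numpy as np

def invert_endmembers(fractions, coarse_values):
    """Solve coarse ≈ F @ e by ordinary least squares for the per-class
    endmember values e.

    fractions: (n_coarse_pixels, n_classes) class-area fractions inside each
    coarse pixel, derived from a 30 m class map. coarse_values: observed
    coarse-resolution values (e.g. MODIS NDVI)."""
    e, *_ = np.linalg.lstsq(fractions, coarse_values, rcond=None)
    return e

# Three classes with known endmember NDVI values (e.g. forest, soil, water)
true_e = np.array([0.8, 0.2, 0.05])
rng = np.random.default_rng(2)
F = rng.dirichlet(np.ones(3), size=10)   # fractions in 10 coarse pixels
coarse = F @ true_e                      # simulated coarse observations
e = invert_endmembers(F, coarse)
print(np.round(e, 2))  # ≈ [0.8, 0.2, 0.05]

# Reconstruction: each fine pixel takes its class's inverted endmember value
class_map = np.array([[0, 0, 1], [2, 1, 0]])
fine = e[class_map]
```

With fewer coarse pixels than classes the system becomes underdetermined, which is precisely the failure mode the spiral search is designed to avoid in heterogeneous areas.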
Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures
NASA Astrophysics Data System (ADS)
Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.
2016-05-01
Linear unmixing is a method of decomposing a mixed signature to determine the component materials that are present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not the case for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed in the proposed method is that no neighboring pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a small dataset of 25 distinct signatures hand-selected from a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra, and the full larger dataset. Simulated spectra are created from three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90% with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is then expanded to include the signatures from the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures.
The spectral similarity of the inputs to the output component signatures is calculated using the spectral angle mapper. Results show that iterative methods significantly outperform the traditional methods under the given test conditions.
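Both halves of this evaluation, unmixing a single mixed signature and scoring results with the spectral angle mapper, can be sketched compactly. The unmixing below is plain least squares with non-negativity clipping, a simplification of the constrained and iterative solvers the paper compares; the endmember spectra and abundances are synthetic.

```python
import numpy as np

def unmix(E, mixed):
    """Least-squares unmixing of one mixed signature against an endmember
    matrix E (bands x materials); abundances clipped non-negative and
    renormalized. A simplification of the solvers compared in the paper."""
    a, *_ = np.linalg.lstsq(E, mixed, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()

def spectral_angle(u, v):
    """Spectral angle mapper: arccos of cosine similarity, in radians.
    Invariant to overall illumination scaling of either spectrum."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

rng = np.random.default_rng(1)
E = rng.uniform(0.1, 0.9, (50, 3))       # three pure spectra, 50 bands
true_a = np.array([0.5, 0.3, 0.2])       # abundances (swept in the paper)
mixed = E @ true_a                       # simulated single-pixel signature
a = unmix(E, mixed)
print(np.round(a, 3))  # ≈ [0.5, 0.3, 0.2] in this noiseless case
```

In the noiseless, known-endmember case the recovery is exact; the paper's harder setting, where the candidate library holds ~1600 signatures and the mixture members are unknown, is where the iterative methods show their advantage.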
Mapping of the Ronda peridotite massif (Spain) from AVIRIS spectro-imaging survey: A first attempt
NASA Technical Reports Server (NTRS)
Pinet, P. C.; Chabrillat, S.; Ceuleneer, G.
1993-01-01
In both AVIRIS and ISM data, through the use of mixing models, geological boundaries of the Ronda massif are identified with respect to the surrounding rocks. We can also produce first-order vegetation maps. The ISM and AVIRIS instruments give consistent results. On the basis of endmember fraction images, it is then possible to discard areas that are highly vegetated or do not belong to the peridotite massif. Within the remaining part of the mosaic, spectro-mixing analysis reveals spectral variations in the peridotite massif between the well-exposed areas. Spatially organized units are depicted, related to differences in the relative depth of the absorption band at 1 micron, which may be due to differing pyroxene content. At this stage, it is worth noting that, although the mineralogical variations observed in the rocks are at a sub-pixel scale for the airborne analysis, we see an emerging spatial pattern in the distribution of spectral variations across the massif which might be prevailingly related to mineralogy. Although it is known from fieldwork that the Ronda peridotite massif exhibits mineralogical variations at local scale in pyroxene content, and at regional scale in mineral facies ranging from garnet- to spinel- to plagioclase-lherzolites, no attempt has yet been made to produce a synoptic map relating the two scales of analysis. The present work is a first attempt to reach this objective, though much more work is still required. In particular, for the purpose of mineralogical interpretation, it is critical to relate the airborne observations to the field work and laboratory spectra of Ronda rocks already obtained, using image endmembers and associated reference endmembers. Also, the rather rough linear mixing model used here is taken as a 'black-box' process that does not necessarily apply correctly to the physical situation at the sub-pixel level.
One may think of using ground-truth observations bearing on the sub-pixel statistical characteristics (texture, structural pattern, surface distribution and vegetation contribution, e.g. grass) to produce a more advanced mixing model, physically appropriate to the geologic and environmental contexts.
A kind of color image segmentation algorithm based on super-pixel and PCNN
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. PCNN (Pulse Coupled Neural Network) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but due to the dynamic properties of PCNN, many unconnected neurons will pulse at the same time, so it is necessary to identify different regions for further processing. The existing PCNN image segmentation algorithm based on region growing was designed for grayscale images and cannot be used directly for color image segmentation. In addition, super-pixels better preserve image edges and, at the same time, reduce the influence of individual differences between pixels on the segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Growing is then stopped or continued by comparing the average of each color channel over all the pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed algorithm for color image segmentation is fast and effective, with good accuracy.
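The stopping criterion sketched in the abstract, compare the per-channel means over the candidate regions of the colour super-pixel image, reduces to a small predicate. The tolerance value is an arbitrary assumption; the paper does not state one.

```python
import numpy as np

def should_grow(region, candidate, tol=10.0):
    """Continue region growing only if the mean of each RGB channel over the
    grown region stays within `tol` of the candidate super-pixel's means.
    region, candidate: (n_pixels, 3) arrays of RGB values."""
    diff = np.abs(region.mean(axis=0) - candidate.mean(axis=0))
    return bool(np.all(diff < tol))

grass = np.full((40, 3), (60.0, 160.0, 70.0))   # one super-pixel region
sky = np.full((30, 3), (120.0, 150.0, 230.0))   # a spectrally distinct one
print(should_grow(grass, grass + 3.0), should_grow(grass, sky))  # True False
```

Checking all three channels (rather than a single grayscale value) is what lets the colour extension stop growth at boundaries that a grayscale PCNN would merge across.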
A Hopfield neural network for image change detection.
Pajares, Gonzalo
2006-09-01
This paper outlines an optimization relaxation approach based on the analog Hopfield neural network (HNN) for solving the image change detection problem between two images. A difference image is obtained by subtracting the two images pixel by pixel. The network topology is built so that each pixel in the difference image is a node in the network. Each node is characterized by its state, which determines whether a pixel has changed. An energy function is derived, so that the network converges to stable states. The analog Hopfield model allows each node to take on analog state values. Unlike most widely used approaches, where binary labels (changed/unchanged) are assigned to each pixel, the analog property provides the strength of the change. The main contribution of this paper is reflected in the customization of the analog Hopfield neural network to derive an automatic image change detection approach. When a pixel is being processed, some existing image change detection procedures consider only interpixel relations on its neighborhood. The main drawback of such approaches is the labeling of this pixel as changed or unchanged according to the information supplied by its neighbors, where its own information is ignored. The Hopfield model overcomes this drawback and for each pixel allows a tradeoff between the influence of its neighborhood and its own criterion. This is mapped into the energy function to be minimized. The performance of the proposed method is illustrated by comparative analysis against some existing image change detection methods.
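The tradeoff the paper describes, each node balancing its own difference-image evidence against its neighbors' states while relaxing toward an energy minimum, can be sketched with a simple synchronous update. This is an illustrative analog-relaxation toy, not the paper's exact energy function or weights.

```python
import numpy as np

def hopfield_change_map(diff, n_iters=50, w_self=1.0, w_neigh=0.5):
    """Analog Hopfield-style relaxation for change detection (sketch).

    diff: absolute difference image scaled to [0, 1]. Each pixel node holds
    an analog state in (-1, 1); its update trades off its own evidence
    (data term) against the mean state of its 4-neighborhood."""
    data = 2.0 * diff - 1.0                  # map evidence to [-1, 1]
    state = np.zeros_like(diff)
    for _ in range(n_iters):
        pad = np.pad(state, 1, mode="edge")  # 4-neighbor mean, edge-safe
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1]
                 + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        # tanh keeps the states analog and bounded, like an analog neuron
        state = np.tanh(w_self * data + w_neigh * neigh)
    return state   # sign = changed/unchanged, magnitude = strength of change

# Two synthetic 8x8 images differing in a 3x3 block
img1 = np.zeros((8, 8))
img2 = img1.copy()
img2[2:5, 2:5] = 1.0
out = hopfield_change_map(np.abs(img2 - img1))
print(out[3, 3] > 0, out[0, 0] < 0)  # changed block positive, background negative
```

The analog output is the point of the method: rather than a hard changed/unchanged bit, each pixel carries a strength of change, with the neighborhood term smoothing isolated noise responses.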
Multiple-Event, Single-Photon Counting Imaging Sensor
NASA Technical Reports Server (NTRS)
Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.
2011-01-01
The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for registering photon counts. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency substantially undermines any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register one million or more photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting from ultra-low-light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.
Calibration and Compensation of Instrumental Errors in Imaging Polarimeters
2007-04-01
procedures for polarimeters 3. Examine impact of focal plane nonuniformity on polarimeters 4. Understand the role of bandwidth in broadband polarimetry. 5... nonuniformity (NU) noise. NU noise results from pixel-to-pixel variations in the photodetector response. NU noise is a persistent problem for all... nonuniformity noise in imaging polarimeters," Proc SPIE Vol. 5888: Polarization Science and Remote Sensing II, pp. 58880J 1 - 10, J. A. Shaw and J. S
NASA Astrophysics Data System (ADS)
Koffeman, E. N.
2007-12-01
Several years ago a revolutionary miniature TPC was developed using a pixel chip with a Micromegas foil spanned over it. To overcome the mechanical stability problems and improve the positioning accuracy while spanning a foil on top of a small readout chip, a process has been developed in which a Micromegas-like grid is applied on a CMOS wafer in a post-processing step. This aluminum grid is supported on insulating pillars that are created by etching after the grid has been made. The energy resolution (measured on the absorption of the X-rays from a 55Fe source) was remarkably good. Several geometries have since been tested, and we now believe that a 'Gas On Slimmed Silicon Pixel chip' (Gossip) may be realized. The drift region of such a gaseous pixel detector would be reduced to a millimeter. Such a detector is potentially very radiation hard (SLHC vertexing), but aging and sparking must be eliminated.
Forest Dragon-3: Decadal Trends of Northeastern Forests in China from Earth Observation Synergy
NASA Astrophysics Data System (ADS)
Schmullius, C.; Balling, J.; Schratz, P.; Thiel, C.; Santoro, M.; Wegmuller, U.; Li, Z.; Yong, P.
2016-08-01
In Forest DRAGON 3, the synergy of Earth Observation products to derive information on decadal trends of forests in northeast China was investigated. Following up the results of Forest-DRAGON 1 and 2, Growing Stock Volume (GSV) products from different years were investigated to derive information on vegetation trends in northeast China. The BIOMASAR maps of 2005 and 2010, produced within the previous DRAGON projects, set the base for all analyses. We took a closer look at scale problems in GSV derivation, which are introduced by differing land cover within one pixel, investigating differences across pixel classes with varying land-cover class percentages. We developed an approach to select pixels containing forest only, with the aim of undertaking a detailed analysis of retrieved GSV values for such pixels for the years 2005 and 2010. Using existing land cover products at different scales, the plausibility of changes in the BIOMASAR maps was checked.
Automated cloud classification with a fuzzy logic expert system
NASA Technical Reports Server (NTRS)
Tovinkere, Vasanth; Baum, Bryan A.
1993-01-01
An unresolved problem in current cloud retrieval algorithms concerns the analysis of scenes containing overlapping cloud layers. Cloud parameterizations are very important both in global climate models and in studies of the Earth's radiation budget. Most cloud retrieval schemes, such as the bispectral method used by the International Satellite Cloud Climatology Project (ISCCP), have no way of determining whether overlapping cloud layers exist in any group of satellite pixels. One promising method uses fuzzy logic to determine whether mixed cloud and/or surface types exist within a group of pixels, such as cirrus, land, and water, or cirrus and stratus. When two or more class types are present, fuzzy logic uses membership values to assign the group of pixels partially to the different class types. The strength of fuzzy logic lies in its ability to work with patterns that may include more than one class, facilitating greater information extraction from satellite radiometric data. The development of the fuzzy logic rule-based expert system involves training the fuzzy classifier with spectral and textural features calculated from accurately labeled 32x32 regions of Advanced Very High Resolution Radiometer (AVHRR) 1.1-km data. The spectral data consist of AVHRR channels 1 (0.55-0.68 μm), 2 (0.725-1.1 μm), 3 (3.55-3.93 μm), 4 (10.5-11.5 μm), and 5 (11.5-12.5 μm), which cover visible, near-infrared, and infrared window regions. The textural features are based on the gray level difference vector (GLDV) method. A sophisticated new Interactive Visual Image Classification System (IVICS) is used to label samples chosen from scenes collected during the FIRE IFO II. The training samples are drawn from predefined classes: ocean, land, unbroken stratiform, broken stratiform, and cirrus. The November 28, 1991 NOAA overpasses contain complex multilevel cloud situations ideal for training and validating the fuzzy logic expert system.
Blood vessels segmentation of hatching eggs based on fully convolutional networks
NASA Astrophysics Data System (ADS)
Geng, Lei; Qiu, Ling; Wu, Jun; Xiao, Zhitao
2018-04-01
FCNs, trained end-to-end, pixels-to-pixels, predict a result for each pixel and have been widely used for semantic segmentation. In order to realize blood vessel segmentation of hatching eggs, a method based on FCN is proposed in this paper. The training datasets are composed of patches extracted from very few images in order to augment the data. The network combines lower-layer features with deconvolution to enable precise segmentation. The proposed method is free from the problem that training deep networks requires large-scale samples. Experimental results on hatching eggs demonstrate that this method yields more accurate segmentation outputs than previous approaches. It provides a convenient reference for subsequent fertility detection.
Ercan, A; Tate, M W; Gruner, S M
2006-03-01
X-ray pixel array detectors (PADs) are generally thought of as either digital photon counters (DPADs) or X-ray analog-integrating pixel array detectors (APADs). Experiences with APADs, which are especially well suited for X-ray imaging experiments where transient or high instantaneous flux events must be recorded, are reported. The design, characterization and experimental applications of several APAD designs developed at Cornell University are discussed. The simplest design is a 'flash' architecture, wherein successive integrated X-ray images, as short as several hundred nanoseconds in duration, are stored in the detector chips for later off-chip digitization. Radiography experiments using a prototype flash APAD are summarized. Another design has been implemented that combines flash capability with the ability to continuously stream X-ray images at slower (e.g. millisecond) rates. Progress is described towards radiation-hardened APADs that can be tiled to cover a large area. A mixed-mode PAD, designed by combining many of the attractive features of both APADs and DPADs, is also described.
NASA Astrophysics Data System (ADS)
Cui, Qian; Shi, Jiancheng; Xu, Yuanliu
2011-12-01
Water is a basic need for human society and a determining factor in ecosystem stability as well. There are many lakes on the Tibetan Plateau, which can lead to floods and mudslides when the water area expands sharply. At present, water area is usually extracted from TM or SPOT data for their high spatial resolution; however, their temporal resolution is insufficient. MODIS data have high temporal resolution and broad coverage, so they are a valuable resource for detecting changes in water area. Because of their low spatial resolution, however, mixed pixels are common. In this paper, four spectral libraries are built using the MOD09A1 product; based on these, water bodies are extracted at the sub-pixel level utilizing Multiple Endmember Spectral Mixture Analysis (MESMA) on the MODIS daily reflectance product MOD09GA. The unmixing result is compared with contemporaneous TM data, which shows that the method has high accuracy.
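MESMA repeatedly fits simple mixture models drawn from the spectral libraries and keeps the combination with the lowest residual. For a single two-endmember model with a sum-to-one constraint, the abundance fraction has a closed-form least squares solution; a minimal sketch (our own simplification, not the authors' code):

```python
def unmix_two_endmembers(pixel, water, land):
    """Least squares fraction f minimising ||pixel - f*water - (1-f)*land||^2,
    clipped to the physically meaningful range [0, 1]."""
    d = [a - b for a, b in zip(water, land)]
    num = sum((p - b) * di for p, b, di in zip(pixel, land, d))
    den = sum(di * di for di in d)
    f = max(0.0, min(1.0, num / den))
    return f, 1.0 - f
```

MESMA would evaluate many such candidate endmember combinations per pixel and keep the best-fitting one.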
Digital colour management system for colour parameters reconstruction
NASA Astrophysics Data System (ADS)
Grudzinski, Karol; Lasmanowicz, Piotr; Assis, Lucas M. N.; Pawlicka, Agnieszka; Januszko, Adam
2013-10-01
Digital Colour Management System (DCMS) and its application to a new adaptive camouflage system are presented in this paper. The DCMS is a digital colour rendering method which allows the transformation of a real image into a set of colour pixels displayed on a computer monitor. Consequently, it can analyse the colours of the pixels which comprise images of environments such as desert, semi-desert, jungle, farmland or rocky mountains in order to prepare an adaptive camouflage pattern best suited to the terrain. This system is described in the present work, as well as the use of the subtractive colour-mixing method to construct a real-time colour-changing electrochromic window/pixel (ECD) for camouflage purposes. The ECD with glass/ITO/Prussian Blue(PB)/electrolyte/CeO2-TiO2/ITO/glass configuration was assembled and characterized. The ECD switched between green and yellow upon application of ±1.5 V, and the colours were controlled by the Digital Colour Management System and described by CIE LAB parameters.
Characterization System of Multi-pixel Array TES Microcalorimeter
NASA Astrophysics Data System (ADS)
Yoshimoto, Shota; Maehata, Keisuke; Mitsuda, Kazuhisa; Yamanaka, Yoshihiro; Sakai, Kazuhiro; Nagayoshi, Kenichiro; Yamamoto, Ryo; Hayashi, Tasuku; Muramatsu, Haruka
We have constructed a characterization system for a 64-pixel array transition-edge sensor (TES) microcalorimeter using a 3He-4He dilution refrigerator (DR) with a cooling power of 60 µW at a temperature of 100 mK. A stick equipped with 384 Manganin wires was inserted into the refrigerator to perform characteristic measurements of the 64-pixel array TES microcalorimeter and superconducting quantum interference device (SQUID) array amplifiers. The stick and Manganin wires were thermally anchored at temperatures of 4 K and 1 K with sufficient thermal contact. The cold ends of the Manganin wires were thermally anchored and connected to CuNi-clad NbTi wires at a 0.7 K anchor. The CuNi-clad NbTi wires were then wired to connectors placed on the holder mounted on the cold stage attached to the base plate of the mixing chamber. The heat flow to the cold stage through the installed wires was estimated to be 0.15 µW. In the operation test the characterization system maintained a temperature below 100 mK.
Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model
NASA Astrophysics Data System (ADS)
Li, X. L.; Zhao, Q. H.; Li, Y.
2017-09-01
Most stochastically based fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, some generating points are initialized randomly on the image, and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the pixel intensity is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster, and the regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in that region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to Voronoi sub-regions, and the regional objective function is established under the framework of fuzzy clustering. The optimal segmentation results are obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of segmentation results on simulated and real SAR images.
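The regional dissimilarity measure, the summed negative log-likelihood of a sub-region's pixels under one cluster's Gamma mixture, can be sketched as follows (the shape/scale parameterisation and parameter values are our assumptions for illustration):

```python
from math import exp, lgamma, log

def gamma_logpdf(x, shape, scale):
    """Log-density of the Gamma distribution (x > 0) with given shape and scale."""
    return (shape - 1) * log(x) - x / scale - shape * log(scale) - lgamma(shape)

def regional_dissimilarity(pixels, weights, shapes, scales):
    """Negative log-likelihood of a Voronoi sub-region under one cluster's
    Gamma mixture: the sum of per-pixel dissimilarity measures."""
    total = 0.0
    for x in pixels:
        p = sum(w * exp(gamma_logpdf(x, k, s))
                for w, k, s in zip(weights, shapes, scales))
        total += -log(p)
    return total
```

In the clustering loop, each sub-region would be assigned (fuzzily) to the cluster with the smallest regional dissimilarity.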
Interactive rendering of acquired materials on dynamic geometry using frequency analysis.
Bagher, Mahdi Mohammad; Soler, Cyril; Subr, Kartic; Belcour, Laurent; Holzschuch, Nicolas
2013-05-01
Shading acquired materials with high-frequency illumination is computationally expensive. Estimating the shading integral requires multiple samples of the incident illumination. The number of samples required may vary across the image, and the image itself may have high- and low-frequency variations, depending on a combination of several factors. Adaptively distributing computational budget across the pixels for shading is a challenging problem. In this paper, we depict complex materials such as acquired reflectances, interactively, without any precomputation based on geometry. In each frame, we first estimate the frequencies in the local light field arriving at each pixel, as well as the variance of the shading integrand. Our frequency analysis accounts for combinations of a variety of factors: the reflectance of the object projecting to the pixel, the nature of the illumination, the local geometry and the camera position relative to the geometry and lighting. We then exploit this frequency information (bandwidth and variance) to adaptively sample for reconstruction and integration. For example, fewer pixels per unit area are shaded for pixels projecting onto diffuse objects, and fewer samples are used for integrating illumination incident on specular objects.
NASA Astrophysics Data System (ADS)
Chung, Kunook; Sui, Jingyang; Demory, Brandon; Ku, Pei-Cheng
2017-07-01
Additive color mixing across the visible spectrum was demonstrated from an InGaN based light-emitting diode (LED) pixel comprising red, green, and blue subpixels monolithically integrated and enabled by local strain engineering. The device was fabricated using a top-down approach on a metal-organic chemical vapor deposition-grown sample consisting of a typical LED epitaxial stack. The three color subpixels were defined in a single lithographic step. The device was characterized for its electrical properties and emission spectra under an uncooled condition, which is desirable in practical applications. The color mixing was controlled by pulse-width modulation, and the degree of color control was also characterized.
Mixed ethnicity and behavioural problems in the Millennium Cohort Study
Zilanawala, Afshin; Sacker, Amanda; Kelly, Yvonne
2018-01-01
Background The population of mixed ethnicity individuals in the UK is growing. Despite this demographic trend, little is known about mixed ethnicity children and their problem behaviours. We examine trajectories of behavioural problems among non-mixed and mixed ethnicity children from early to middle childhood using nationally representative cohort data in the UK. Methods Data from 16 330 children from the Millennium Cohort Study with total difficulties scores were analysed. We estimated trajectories of behavioural problems by mixed ethnicity using growth curve models. Results White mixed (mean total difficulties score: 8.3), Indian mixed (7.7), Pakistani mixed (8.9) and Bangladeshi mixed (7.2) children had fewer problem behaviours than their non-mixed counterparts at age 3 (9.4, 10.1, 13.1 and 11.9, respectively). White mixed, Pakistani mixed and Bangladeshi mixed children had growth trajectories in problem behaviours significantly different from that of their non-mixed counterparts. Conclusions Using a detailed mixed ethnic classification revealed diverging trajectories between some non-mixed and mixed children across the early life course. Future studies should investigate the mechanisms, which may influence increasing behavioural problems in mixed ethnicity children. PMID:26912571
Remote sensing image stitch using modified structure deformation
NASA Astrophysics Data System (ADS)
Pan, Ke-cheng; Chen, Jin-wei; Chen, Yueting; Feng, Huajun
2012-10-01
To stitch remote sensing images seamlessly without producing the visual artifacts caused by severe intensity discrepancy and structure misalignment, we modify the original structure-deformation-based stitching algorithm, which has two main problems. First, using the Poisson equation to propagate deformation vectors changes the topological relationship between the key points and their surrounding pixels, which may introduce wrong image characteristics. Second, the diffusion area of the sparse matrix is too limited to rectify the global intensity discrepancy. To solve the first problem, we adopt a spring-mass model and introduce an external force to keep the topological relationship between key points and their surrounding pixels. To solve the second problem, we apply a tensor voting algorithm to obtain the global intensity correspondence curve of the two images. Both simulated and experimental results show that our algorithm is faster and achieves better results than the original algorithm.
A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system
NASA Astrophysics Data System (ADS)
Ge, Zhuo; Zhu, Ying; Liang, Guanhao
2017-01-01
To provide 3D environment information for a quadruped robot autonomous navigation system walking through rough terrain, a novel 3D terrain reconstruction method based on stereo vision is presented. To address the problems that images collected by stereo sensors have large regions of similar grayscale and that image matching has poor real-time performance, the watershed algorithm and the fuzzy c-means clustering algorithm are combined for contour extraction. To address the problem of mismatches, a dual constraint combining region matching and pixel matching is established for matching optimization. Using the stereo-matched edge pixel pairs, 3D coordinates are estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method yields a high stereo matching ratio and reconstructs 3D scenes quickly and efficiently.
NASA Astrophysics Data System (ADS)
Moriya, Gentaro; Chikatsu, Hirofumi
2011-07-01
Recently, the pixel counts and features of consumer-grade digital cameras have been increasing remarkably thanks to modern semiconductor and digital technology, and there are many low-priced consumer-grade digital cameras with more than 10 megapixels on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is greatly anticipated in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and circular target location. Target location with subpixel accuracy has been investigated as a star tracker issue, and many target location algorithms have been developed. It is widely accepted that the least squares model with ellipse fitting is the most accurate algorithm. However, there are still problems for efficient digital close-range photogrammetry: reconfirmation of the subpixel target location algorithms for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motive, an empirical test of several subpixel target location algorithms and an indicator for estimating accuracy are investigated in this paper using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
Satellite monitoring of cyanobacterial harmful algal bloom ...
Cyanobacterial harmful algal blooms (cyanoHABs) cause extensive problems in lakes worldwide, including human and ecological health risks, anoxia and fish kills, and taste and odor problems. CyanoHABs are a particular concern because of their dense biomass and the risk of exposure to toxins in both recreational waters and drinking source waters. Successful cyanoHAB assessment by satellites may provide a first-line-of-defense indicator for human and ecological health protection. In this study, assessment methods were developed to determine the utility of satellite technology for detecting cyanoHAB occurrence frequency at locations of potential management interest. The European Space Agency's MEdium Resolution Imaging Spectrometer (MERIS) was evaluated to prepare for the equivalent Sentinel-3 Ocean and Land Colour Imager (OLCI) launched in 2016. Based on the 2012 National Lakes Assessment site evaluation guidelines and National Hydrography Dataset, there were 275,897 lakes and reservoirs greater than 1 hectare in the 48 conterminous U.S. states. Results from this evaluation show that 5.6% of waterbodies were resolvable by satellites with 300 m single-pixel resolution and 0.7% of waterbodies were resolvable when a 3x3 pixel array was applied based on minimum Euclidean distance from shore. Satellite data were also spatially joined to US public water surface intake (PWSI) locations, where single-pixel resolution resolved 57% of PWSI and a 3x3 pixel array resolved 33% of
Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.
Hui, Zhuo; Sankaranarayanan, Aswin C
2017-10-01
This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently, refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it does not require iterative optimization techniques nor the need for careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.
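The per-pixel constraint that the BRDF lies in the non-negative span of a dictionary leads to a non-negative least squares problem. For a toy dictionary of just two atoms, the active-set solution can be written out directly (a didactic sketch under our own simplifications; real BRDF dictionaries have many more atoms and would use a general NNLS solver, and the atoms are assumed linearly independent):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def nnls_two_atoms(a1, a2, b):
    """Minimise ||c1*a1 + c2*a2 - b||^2 subject to c1, c2 >= 0."""
    g11, g12, g22 = dot(a1, a1), dot(a1, a2), dot(a2, a2)
    r1, r2 = dot(a1, b), dot(a2, b)
    det = g11 * g22 - g12 * g12
    c1 = (g22 * r1 - g12 * r2) / det
    c2 = (g11 * r2 - g12 * r1) / det
    if c1 >= 0 and c2 >= 0:          # unconstrained optimum is already feasible
        return c1, c2
    # otherwise clamp one coefficient to zero and keep the better candidate
    candidates = [(max(0.0, r1 / g11), 0.0), (0.0, max(0.0, r2 / g22))]
    def resid(c):
        return sum((c[0] * x + c[1] * y - z) ** 2
                   for x, y, z in zip(a1, a2, b))
    return min(candidates, key=resid)
```

Each pixel's measured intensities (under the known lights) would play the role of `b`, and the dictionary BRDFs evaluated for the estimated normal the role of the atoms.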
Synthesis of blind source separation algorithms on reconfigurable FPGA platforms
NASA Astrophysics Data System (ADS)
Du, Hongtao; Qi, Hairong; Szu, Harold H.
2005-03-01
Recent advances in intelligence technology have boosted the development of micro-Unmanned Air Vehicles (UAVs), including Silver Fox, Shadow, and Scan Eagle, for various surveillance and reconnaissance applications. These affordable and reusable devices have to fit a series of size, weight, and power constraints. Cameras used on such micro-UAVs are therefore mounted directly at a fixed angle without any motion-compensated gimbals. This mounting scheme results in the so-called jitter effect, where jitter is defined as sub-pixel or small-amplitude vibrations. The jitter blur caused by the jitter effect needs to be corrected before any other processing algorithms can be practically applied. Jitter restoration has been addressed by various optimization techniques, including Wiener approximation, maximum a-posteriori probability (MAP), etc. However, these algorithms normally assume a spatially invariant blur model, which is not the case with jitter blur. Szu et al. developed a smart real-time algorithm based on auto-regression (AR), with its natural generalization to unsupervised artificial neural network (ANN) learning, to achieve restoration accuracy at the sub-pixel level. This algorithm resembles the capability of the human visual system, in which an agreement between the pair of eyes indicates "signal"; otherwise, jitter noise. Using this non-statistical method, a deterministic blind source separation (BSS) process can then be carried out independently for each single pixel, based on a deterministic minimum of the Helmholtz free energy with a generalization of Shannon's information theory applied to open dynamic systems. From a hardware implementation point of view, the process of jitter restoration of an image using Szu's algorithm can be optimized by pixel-based parallelization.
In our previous work, a parallel-structured independent component analysis (ICA) algorithm was implemented on both a Field Programmable Gate Array (FPGA) and an Application-Specific Integrated Circuit (ASIC) using standard-height cells. ICA is an algorithm that can solve BSS problems by carrying out all-order statistical, decorrelation-based transforms, under the assumption that neighboring pixels share the same but unknown mixing matrix A. In this paper, we continue our investigation of the design challenges of firmware approaches to smart algorithms. Two levels of parallelization can be explored: pixel-based parallelization and parallelization of the restoration algorithm performed at each pixel. This paper focuses on the latter, and we use ICA as an example to explain the design and implementation methods. It is well known that the capacity constraints of a single FPGA have limited the implementation of many complex algorithms, including ICA. Using the reconfigurability of FPGAs, we show in this paper how to manipulate an FPGA-based system to provide extra computing power for the parallelized ICA algorithm with limited FPGA resources. The synthesis targeting the Pilchard reconfigurable FPGA platform is reported. The Pilchard board is embedded with a single Xilinx VIRTEX 1000E FPGA and transfers data directly to the CPU over a 64-bit memory bus at a maximum frequency of 133 MHz. Both the feasibility evaluations and experimental results validate the effectiveness and practicality of this synthesis, which can be extended to spatially variant jitter restoration for micro-UAV deployment.
Correlation of ERTS MSS data and earth coordinate systems
NASA Technical Reports Server (NTRS)
Malila, W. A. (Principal Investigator); Hieber, R. H.; Mccleer, A. P.
1973-01-01
The author has identified the following significant results. Experience has revealed a problem in the analysis and interpretation of ERTS-1 multispectral scanner (MSS) data: accurately correlating ERTS-1 MSS pixels with analysis areas specified on aerial photographs or topographic maps for training recognition computers and/or evaluating recognition results. It is difficult for an analyst to identify accurately which ERTS-1 pixels on a digital image display belong to specific areas and test plots, especially when they are small. A computer-aided procedure to correlate coordinates from topographic maps and/or aerial photographs with ERTS-1 data coordinates has been developed. In the procedure, a map transformation from earth coordinates to ERTS-1 scan line and point numbers is calculated using selected ground control points and the method of least squares. The map transformation is then applied to the earth coordinates of selected areas to obtain the corresponding ERTS-1 point and line numbers. An optional provision allows moving the boundaries of the plots inward by variable distances so that the selected pixels will not overlap adjacent features.
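The map transformation described, a least squares fit from earth coordinates to scan line and point numbers using ground control points, amounts to fitting one affine model per output coordinate. A minimal sketch (our own illustrative implementation, not the original ERTS software):

```python
def _solve3(m, v):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        for r in range(c + 1, 3):
            f = a[r][c] / a[c][c]
            for k in range(c, 4):
                a[r][k] -= f * a[c][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (a[r][3] - sum(a[r][k] * x[k] for k in range(r + 1, 3))) / a[r][r]
    return x

def fit_map_transform(gcps):
    """gcps: (easting, northing, scan_line, point) tuples. Returns least squares
    affine coefficients for scan line and point as functions of (E, N)."""
    rows = [(1.0, e, n) for e, n, _, _ in gcps]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    line_c = _solve3(ata, [sum(r[i] * g[2] for r, g in zip(rows, gcps)) for i in range(3)])
    point_c = _solve3(ata, [sum(r[i] * g[3] for r, g in zip(rows, gcps)) for i in range(3)])
    return line_c, point_c

def apply_affine(coef, e, n):
    return coef[0] + coef[1] * e + coef[2] * n
```

With the fitted coefficients, the earth coordinates of a training plot's corners map directly to ERTS-1 scan line and point numbers.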
NASA Astrophysics Data System (ADS)
Dong, J.; Liu, W.; Han, W.; Lei, T.; Xia, J.; Yuan, W.
2017-12-01
Winter wheat is a staple food crop for most of the world's population, and the area and spatial distribution of winter wheat are key elements in estimating crop production and ensuring food security. However, winter wheat planting areas contain substantial spatial heterogeneity with mixed pixels for coarse- and moderate-resolution satellite data, leading to significant errors in crop acreage estimation. This study has developed a phenology-based approach using moderate-resolution satellite data to estimate sub-pixel planting fractions of winter wheat. Based on unmanned aerial vehicle (UAV) observations, the unique characteristics of winter wheat with high vegetation index values at the heading stage (May) and low values at the harvest stage (June) were investigated. The differences in vegetation index between heading and harvest stages increased with the planting fraction of winter wheat, and therefore the planting fractions were estimated by comparing the NDVI differences of a given pixel with those of predetermined pure winter wheat and non-winter wheat pixels. This approach was evaluated using aerial images and agricultural statistical data in an intensive agricultural region, Shandong Province in North China. The method explained 60% and 85% of the spatial variation in county- and municipal-level statistical data, respectively. More importantly, the predetermined pure winter wheat and non-winter wheat pixels can be automatically identified using MODIS data according to their NDVI differences, which strengthens the potential to use this method at regional and global scales without any field observations as references.
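The sub-pixel fraction retrieval described above is a linear scaling of a pixel's heading-to-harvest NDVI difference between the pure winter wheat and non-winter wheat reference values; a minimal sketch (function and variable names are ours):

```python
def wheat_fraction(dndvi_pixel, dndvi_pure_wheat, dndvi_non_wheat):
    """Sub-pixel winter wheat planting fraction from the NDVI difference
    between the heading (May) and harvest (June) stages, clipped to [0, 1]."""
    f = (dndvi_pixel - dndvi_non_wheat) / (dndvi_pure_wheat - dndvi_non_wheat)
    return max(0.0, min(1.0, f))
```

The two reference values would come from the automatically identified pure winter wheat and non-winter wheat MODIS pixels.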
Continuous Change Detection and Classification (CCDC) of Land Cover Using All Available Landsat Data
NASA Astrophysics Data System (ADS)
Zhu, Z.; Woodcock, C. E.
2012-12-01
A new algorithm for Continuous Change Detection and Classification (CCDC) of land cover using all available Landsat data is developed. This new algorithm is capable of detecting many kinds of land cover change as new images are collected, while at the same time providing land cover maps for any given time. To better identify land cover change, a two-step cloud, cloud shadow, and snow masking algorithm is used to eliminate "noisy" observations. Next, a time series model with components of seasonality, trend, and break estimates the surface reflectance and temperature. The time series model is updated continuously with newly acquired observations. Due to the high variability in spectral response for different kinds of land cover change, the CCDC algorithm uses a data-driven threshold derived from all seven Landsat bands. When the difference between observed and predicted values exceeds the threshold three consecutive times, a pixel is identified as land cover change. Land cover classification is done after change detection. Coefficients from the time series models and the Root Mean Square Error (RMSE) from model fitting are used as classification inputs for the Random Forest Classifier (RFC). We applied this new algorithm to one Landsat scene (Path 12 Row 31) that includes all of Rhode Island as well as much of eastern Massachusetts and parts of Connecticut. A total of 532 Landsat images acquired between 1982 and 2011 were processed. During this period, 619,924 pixels were detected to change once (91% of all changed pixels) and 60,199 pixels were detected to change twice (8%). The most frequent land cover change category is from mixed forest to low-density residential, which accounts for more than 8% of all land cover change pixels.
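The change rule, flagging a pixel when the observation departs from the time-series prediction by more than the threshold three consecutive times, can be sketched as follows (a simplified single-band illustration; the real CCDC threshold is data-driven and derived from all seven Landsat bands):

```python
def detect_change(observed, predicted, threshold, consecutive=3):
    """Return the index at which a run of `consecutive` exceedances begins,
    or None if no change is detected."""
    run = 0
    for t, (obs, pred) in enumerate(zip(observed, predicted)):
        if abs(obs - pred) > threshold:
            run += 1
            if run == consecutive:
                return t - consecutive + 1
        else:
            run = 0
    return None
```

The predictions would come from the continuously updated seasonality-plus-trend model, and a detected break would start a new model segment.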
NASA Astrophysics Data System (ADS)
Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando
2017-06-01
Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such estimation is not biased and may be used instead of validation based on an external data-set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not statistically independent of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting whole patches, rather than individual pixels/objects, into one set or the other would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm that splits training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy, and its predictive capability is no lower than that of the original. When its results are validated with an external data-set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
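The proposed modification, bootstrapping whole training patches rather than individual pixels, can be sketched as below; the helper name and the patch-id representation are assumptions for illustration, not the authors' code:

```python
import random

def patchwise_bootstrap(patch_ids, seed=0):
    """Draw a bootstrap sample at the level of whole training patches, so
    the spatially correlated pixels of one patch are never split between
    the in-bag and out-of-bag sets.  patch_ids[i] is the patch label of
    sample i; returns (in_bag, oob) lists of sample indices."""
    rng = random.Random(seed)
    unique = sorted(set(patch_ids))
    # bootstrap: draw as many patches as there are patches, with replacement
    drawn = {rng.choice(unique) for _ in unique}
    in_bag = [i for i, p in enumerate(patch_ids) if p in drawn]
    oob = [i for i, p in enumerate(patch_ids) if p not in drawn]
    return in_bag, oob
```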
Segmentation of suspicious objects in an x-ray image using automated region filling approach
NASA Astrophysics Data System (ADS)
Fu, Kenneth; Guest, Clark; Das, Pankaj
2009-08-01
To accommodate the flow of commerce, cargo inspection systems require a high probability of detection and low false alarm rate while still maintaining a minimum scan speed. Since objects of interest (high atomic-number metals) will often be heavily shielded to avoid detection, any detection algorithm must be able to identify such objects despite the shielding. Since pixels of a shielded object have a greater opacity than the shielding, we use a clustering method to classify objects in the image by pixel intensity levels. We then look within each intensity level region for sub-clusters of pixels with greater opacity than the surrounding region. A region containing an object has an enclosed-contour region (a hole) inside of it. We apply a region filling technique to fill in the hole, which represents a shielded object of potential interest. One method for region filling is seed-growing, which puts a "seed" starting point in the hole area and uses a selected structural element to fill out that region. However, automatic seed point selection is a hard problem; it requires additional information to decide if a pixel is within an enclosed region. Here, we propose a simple, robust method for region filling that avoids the problem of seed point selection. In our approach, we calculate the gradients Gx and Gy at each pixel in a binary image. Along each row y, we fill in 1s between each pair of locations x1 and x2 where Gx(x1,y) = -1 and Gx(x2,y) = +1, and we do the same in the y-direction. The intersection of the two results is the filled region. We give a detailed discussion of our algorithm, discuss the strengths this method has over other methods, and show results of using our method.
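The gradient-pair filling rule can be sketched for a small binary image as follows; this is an illustrative reimplementation under the stated rule, not the authors' code:

```python
def _fill_rows(grid):
    """Mark cells lying between a -1 gradient and the next +1 gradient
    along each row (runs that never close are discarded)."""
    out = [[0] * len(row) for row in grid]
    for y, row in enumerate(grid):
        start = None
        for x in range(1, len(row)):
            g = row[x] - row[x - 1]          # Gx at (x, y)
            if g == -1:
                start = x                    # entering a candidate hole
            elif g == 1 and start is not None:
                for i in range(start, x):    # closing edge: mark the run
                    out[y][i] = 1
                start = None
    return out

def fill_holes(img):
    """Fill enclosed holes in a 0/1 image: row-wise fill, column-wise fill
    (via transpose), then intersect the two and OR with the original."""
    rows = _fill_rows(img)
    cols = [list(c) for c in zip(*_fill_rows([list(c) for c in zip(*img)]))]
    return [[img[y][x] | (rows[y][x] & cols[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

Note how the intersection step rejects concavities that are open on one side: the unclosed direction produces no marks, so nothing is filled there.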
Mega-pixel PQR laser chips for interconnect, display, ITS, and biocell-tweezers OEIC
NASA Astrophysics Data System (ADS)
Kwon, O'Dae; Yoon, J. H.; Kim, D. K.; Kim, Y. C.; Lee, S. E.; Kim, S. S.
2008-02-01
We describe a photonic quantum ring (PQR) laser device with a three-dimensional toroidal whispering gallery cavity. We have succeeded in fabricating the first genuine mega-pixel laser chips via regular semiconductor technology. This has been realized because the present injection laser, emitting surface-normal dominant 3D whispering gallery modes (WGMs), can be operated CW with extremely low operating currents (μA-nA per pixel), together with lasing temperature stabilities well above 140 deg C with minimal redshifts, which solves the well-known integration problems facing the conventional VCSEL. Such properties, unusual for quantum well lasers, become usual because the active region, involving a vertically confining DBR structure in addition to the 2D concave WGM geometry, induces a 'photonic quantum ring (PQR)-like' carrier distribution through a photonic quantum corral effect. A few applications of such mega-pixel PQR chips are as follows: (A) Next-generation 3D semiconductor technologies demand a strategy for inter-chip and intra-chip optical interconnect schemes, with the high-density emitter array as a key. (B) Due to mounting traffic problems and fatalities, ITS technology today is looking for a revolutionary change. We thus outline how 'SLEEP-ITS' can emerge with the PQR's position-sensing capability. (C) We describe a recent PQR 'hole' laser of convex WGM: mega-pixel PQR 'hole' laser chips are even easier to fabricate than PQR 'mesa' lasers. Genuine Laguerre-Gaussian (LG) beam patterns of PQR holes are very promising for biocell manipulations such as sorting mouse myeloid leukemia (M1) cells. (D) The energy-saving and 3D speckle-free PQR laser can outdo LEDs, in view of the red GaAs and blue GaN devices fabricated recently.
Global-scale patterns of forest fragmentation
Riitters, K.; Wickham, J.; O'Neill, R.; Jones, B.; Smith, E.
2000-01-01
We report an analysis of forest fragmentation based on 1-km resolution land-cover maps for the globe. Measurements in analysis windows from 81 km² (9 × 9 pixels, "small" scale) to 59,049 km² (243 × 243 pixels, "large" scale) were used to characterize the fragmentation around each forested pixel. We identified six categories of fragmentation (interior, perforated, edge, transitional, patch, and undetermined) from the amount of forest and its occurrence as adjacent forest pixels. Interior forest exists only at relatively small scales; at larger scales, forests are dominated by edge and patch conditions. At the smallest scale, there were significant differences in fragmentation among continents; within continents, there were significant differences among individual forest types. Tropical rain forest fragmentation was most severe in North America and least severe in Europe - Asia. Forest types with a high percentage of perforated conditions were mainly in North America (five types) and Europe - Asia (four types), in both temperate and subtropical regions. Transitional and patch conditions were most common in 11 forest types, of which only a few would be considered as "naturally patchy" (e.g., dry woodland). The five forest types with the highest percentage of interior conditions were in North America; in decreasing order, they were cool rain forest, coniferous, conifer boreal, cool mixed, and cool broadleaf. Copyright © 2000 by The Resilience Alliance.
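The windowed classification can be illustrated by computing the proportion of forest (Pf) in an analysis window around a pixel. The threshold below is a placeholder; the published model also uses forest adjacency (Pff) to separate all six categories:

```python
def fragmentation_class(window, patch_threshold=0.4):
    """Classify the center pixel of a forest(1)/non-forest(0) window by the
    proportion of forest Pf it contains.  Illustrative thresholds only:
    the full model combines Pf with the adjacency measure Pff."""
    flat = [v for row in window for v in row]
    pf = sum(flat) / len(flat)
    if pf == 1.0:
        return "interior"          # the window is entirely forest
    if pf < patch_threshold:
        return "patch"             # little forest in the neighborhood
    return "edge/perforated/transitional"
```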
A hyperspectral image projector for hyperspectral imagers
NASA Astrophysics Data System (ADS)
Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.
2007-04-01
We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP*) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily-programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally-integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would alleviate expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e., de-convolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.
Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai
2015-01-01
The key problem of computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between a central pixel and its individual neighboring pixels and the spatial similarity between the central pixel and its neighborhood, which effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method achieves more accurate segmentation of vascular-adhesion, pleural-adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.
Image reconstruction of dynamic infrared single-pixel imaging system
NASA Astrophysics Data System (ADS)
Tong, Qi; Jiang, Yilin; Wang, Haiyan; Guo, Limin
2018-03-01
Single-pixel imaging techniques have recently received much attention. Most current single-pixel imaging addresses relatively static targets or a fixed imaging system, a restriction imposed by the number of measurements that must be collected through the single detector. In this paper, we propose a novel dynamic compressive imaging method for the infrared (IR) rosette scanning system that solves the imaging problem when the imaging system itself is in motion. The relationship between adjacent target images and the scene is analyzed under different system movement scenarios. These relationships are used to build dynamic compressive imaging models. Simulation results demonstrate that the proposed method can improve the reconstruction quality of the IR image and enhance the contrast between the target and the background in the presence of system movement.
Shade images of forested areas obtained from Landsat MSS data
NASA Technical Reports Server (NTRS)
Shimabukuro, Yosio Edemir; Smith, James A.
1989-01-01
The objective of this report is to generate a shade (shadow) image of forested areas from Landsat MSS data by implementing a linear mixing model in which shadow is considered one of the primary components of a pixel. The shade images are related to the observed variation in forest structure; i.e., the proportion of inferred shadow in a pixel is related to different forest ages, forest types, and tree crown cover. The constrained least-squares method is used to generate shade images for eucalyptus forest and 'cerrado' vegetation over the Itapeva study area in Brazil. The resulting shade images may explain the differences in age for the eucalyptus forest and the differences in tree crown cover for the cerrado vegetation.
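For the two-endmember case (shade plus one vegetation endmember), a sum-to-one constrained least-squares estimate of the shade fraction has a closed form. A minimal sketch, assuming unit-sum two-endmember mixing; the full report solves the general constrained problem over more endmembers:

```python
def shade_fraction(pixel, shade, veg):
    """Least-squares shade fraction f for the two-endmember mixing model
    pixel ≈ f*shade + (1-f)*veg, applied band by band and clipped to the
    physically valid range [0, 1]."""
    d = [s - v for s, v in zip(shade, veg)]      # endmember difference
    r = [p - v for p, v in zip(pixel, veg)]      # pixel relative to veg
    f = sum(a * b for a, b in zip(r, d)) / sum(a * a for a in d)
    return min(max(f, 0.0), 1.0)
```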
Toward real-time quantum imaging with a single pixel camera
Lawrie, B. J.; Pooser, R. C.
2013-03-19
In this paper, we present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively pass macropixels of quantum correlated modes from each of the twin beams to a high quantum efficiency balanced detector. Finally, in low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.
Monte Carlo Optimization of Crystal Configuration for Pixelated Molecular SPECT Scanners
NASA Astrophysics Data System (ADS)
Mahani, Hojjat; Raisali, Gholamreza; Kamali-Asl, Alireza; Ay, Mohammad Reza
2017-02-01
The resolution-sensitivity-PDA tradeoff is the most challenging problem in the design and optimization of pixelated preclinical SPECT scanners. In this work, we addressed this challenge from a crystal point of view by searching for an optimal pixelated scintillator using GATE Monte Carlo simulation. Various crystal configurations were investigated, and the influence of different pixel sizes, pixel gaps, and three scintillators on the tomographic resolution, sensitivity, and PDA of the camera was evaluated. The crystal configuration was then optimized using two objective functions: the weighted-sum and the figure-of-merit methods. The CsI(Na) reveals the highest sensitivity, of the order of 43.47 cps/MBq, in comparison to the NaI(Tl) and the YAP(Ce), for a 1.5×1.5 mm² pixel size and 0.1 mm gap. The results show that the spatial resolution, in terms of FWHM, improves from 3.38 to 2.21 mm while the sensitivity simultaneously deteriorates from 42.39 cps/MBq to 27.81 cps/MBq when the pixel size varies from 2×2 mm² to 0.5×0.5 mm² for a 0.2 mm gap. The PDA worsens from 0.91 to 0.42 when the pixel size decreases from 1×1 mm² to 0.5×0.5 mm² for a 0.2 mm gap at 15° incident angle. The two objective functions agree that the 1.5×1.5 mm² pixel size, 0.1 mm Epoxy gap CsI(Na) configuration provides the best compromise for small-animal imaging using the HiReSPECT scanner. Our study highlights that the crystal configuration can significantly affect the performance of the camera, and thereby that Monte Carlo optimization of pixelated detectors is mandatory in order to achieve an optimal quality tomogram.
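A weighted-sum objective like the one used for the optimization can be sketched generically; the normalization of each metric to [0, 1] (larger is better) and the weights below are assumptions, not the paper's values:

```python
def weighted_sum(metrics, weights):
    """Scalarize competing design metrics (each pre-normalized to [0, 1],
    larger is better) into a single score; the crystal configuration that
    maximizes the score is the chosen compromise under these weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to one"
    return sum(m * w for m, w in zip(metrics, weights))
```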
Advancements in DEPMOSFET device developments for XEUS
NASA Astrophysics Data System (ADS)
Treis, J.; Bombelli, L.; Eckart, R.; Fiorini, C.; Fischer, P.; Hälker, O.; Herrmann, S.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Schaller, G.; Schopper, F.; Soltau, H.; Strüder, L.; Wölfel, S.
2006-06-01
DEPMOSFET-based Active Pixel Sensor (APS) matrices are a new detector concept for X-ray imaging spectroscopy missions. They can cope with the challenging requirements of the XEUS Wide Field Imager, combining excellent energy resolution, high-speed readout, and low power consumption with the attractive feature of random accessibility of pixels. From the evaluation of the first prototypes, new concepts have been developed to overcome the minor drawbacks and problems encountered with the older devices. The new devices will have a pixel size of 75 μm × 75 μm. Besides 64 × 64 pixel arrays, prototypes with sizes of 256 × 256 pixels and 128 × 512 pixels and an active area of about 3.6 cm² will be produced, a milestone on the way towards the fully grown XEUS WFI device. The production of these improved devices is currently under way. At the same time, development of the next generation of front-end electronics has been started, which will make it possible to operate the sensor devices at the readout speed required by XEUS. Here, a summary of the DEPFET capabilities, the concept of the next-generation sensors, and the new front-end electronics is given. Additionally, prospects of new device developments using the DEPFET as a sensitive element are shown, e.g. so-called RNDR pixels, which feature repetitive non-destructive readout to lower the readout noise below the 1 e- ENC limit.
NASA Technical Reports Server (NTRS)
Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John
2016-01-01
In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact that limits the accuracy of image offset estimation at the subpixel scale using image correlation. When the two images to be registered have the same pixel size, subpixel image registration preferentially selects offset values where the image pixel boundaries are nearly aligned. Because of the shape of the curve plotting input displacement against estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create a ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite and then scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size, estimating the shift is computationally efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale of the truth maps used for registering GOES-R ABI images.
Wire Detection Algorithms for Navigation
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia I.
2002-01-01
In this research we addressed the problem of obstacle detection for low-altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post-processing to reduce false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer-generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel-thick wires. High dimensionality of the data as such does not present a major problem for SVMs; however, it is desirable to have a large number of training examples, especially for high-dimensional data.
The main difficulty in using SVMs (or any other example-based learning method) is the need for a very good set of positive and negative examples since the performance depends on the quality of the training set.
Convex set and linear mixing model
NASA Technical Reports Server (NTRS)
Xu, P.; Greeley, R.
1993-01-01
A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
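Linear mixing as a point in the convex closure of the endmembers can be written directly; a minimal sketch with illustrative names:

```python
def mix(endmembers, fractions):
    """Form a mixed-pixel spectrum as a convex combination of endmember
    spectra: nonnegative abundance fractions summing to one, applied band
    by band.  Any such point lies in the convex closure of the endmembers,
    which is exactly the linear mixing model."""
    assert all(f >= 0 for f in fractions), "fractions must be nonnegative"
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to one"
    bands = len(endmembers[0])
    return [sum(f * e[b] for f, e in zip(fractions, endmembers))
            for b in range(bands)]
```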
Spectral Dimensionality and Scale of Urban Radiance
NASA Technical Reports Server (NTRS)
Small, Christopher
2001-01-01
Characterization of urban radiance and reflectance is important for understanding the effects of solar energy flux on the urban environment as well as for satellite mapping of urban settlement patterns. Spectral mixture analyses of Landsat and Ikonos imagery suggest that the urban radiance field can very often be described with combinations of three or four spectral endmembers. Dimensionality estimates of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) radiance measurements of urban areas reveal the existence of 30 to 60 spectral dimensions. The extent to which broadband imagery collected by operational satellites can represent the higher dimensional mixing space is a function of both the spatial and spectral resolution of the sensor. AVIRIS imagery offers the spatial and spectral resolution necessary to investigate the scale dependence of the spectral dimensionality. Dimensionality estimates derived from Minimum Noise Fraction (MNF) eigenvalue distributions show a distinct scale dependence for AVIRIS radiance measurements of Milpitas, California. Apparent dimensionality diminishes from almost 40 to less than 10 spectral dimensions between scales of 8000 m and 300 m. The 10 to 30 m scale of most features in urban mosaics results in substantial spectral mixing at the 20 m scale of high altitude AVIRIS pixels. Much of the variance at pixel scales is therefore likely to result from actual differences in surface reflectance at pixel scales. Spatial smoothing and spectral subsampling of AVIRIS spectra both result in substantial loss of information and reduction of apparent dimensionality, but the primary spectral endmembers in all cases are analogous to those found in global analyses of Landsat and Ikonos imagery of other urban areas.
Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.
Pang, Jiahao; Cheung, Gene
2017-04-01
Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
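A 1-D analogue of graph Laplacian regularized denoising, minimizing ||x - y||^2 + lam * x^T L x with L the Laplacian of a path graph, can be sketched as follows. The path graph, unit edge weights, and Gauss-Seidel solver are simplifications of the paper's patch-graph setting:

```python
def denoise_chain(y, lam=1.0, iters=200):
    """Graph-Laplacian-regularized denoising of a 1-D signal: solve the
    stationarity system (I + lam*L) x = y, where L is the Laplacian of a
    path graph over adjacent samples, by Gauss-Seidel sweeps."""
    n = len(y)
    x = list(y)
    for _ in range(iters):
        for i in range(n):
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            # stationarity: (1 + lam*deg_i) x_i = y_i + lam * sum of neighbors
            x[i] = (y[i] + lam * sum(x[j] for j in nbrs)) / (1.0 + lam * len(nbrs))
    return x
```

The system is diagonally dominant, so Gauss-Seidel converges; larger `lam` means stronger smoothing toward neighbors.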
Contrast computation methods for interferometric measurement of sensor modulation transfer function
NASA Astrophysics Data System (ADS)
Battula, Tharun; Georgiev, Todor; Gille, Jennifer; Goma, Sergio
2018-01-01
Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
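Michelson contrast itself is a one-line computation on a sampled fringe; the MTF at a given spatial frequency is this measured contrast normalized by the input fringe contrast. A minimal sketch:

```python
def michelson_contrast(fringe):
    """Michelson contrast of a sampled fringe pattern:
    (Imax - Imin) / (Imax + Imin)."""
    imax, imin = max(fringe), min(fringe)
    return (imax - imin) / (imax + imin)
```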
A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images
NASA Technical Reports Server (NTRS)
Memon, Nasir D.; Galatsanos, Nikolas
1995-01-01
In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements over our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
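The core idea, deriving a band order from an already-decoded spatially adjacent pixel so that no ordering side information needs to be transmitted, can be sketched as follows. The previous-band difference predictor and the function names are illustrative assumptions, not the paper's exact scheme:

```python
def reorder_by_neighbor(neighbor, current):
    """Order the current pixel's bands by the neighbor's band values, then
    predict each reordered value from its predecessor.  Because the decoder
    has already decoded the neighbor, it can reconstruct the same order
    without any transmitted overhead."""
    order = sorted(range(len(neighbor)), key=lambda b: neighbor[b])
    reordered = [current[b] for b in order]
    residuals = [reordered[0]] + [reordered[i] - reordered[i - 1]
                                  for i in range(1, len(reordered))]
    return order, residuals
```

When the two pixels share spectral structure, the reordered sequence is nearly monotone and the residuals are small, which is what the entropy coder exploits.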
An analysis of Landsat-4 Thematic Mapper geometric properties
NASA Technical Reports Server (NTRS)
Walker, R. E.; Zobrist, A. L.; Bryant, N. A.; Gohkman, B.; Friedman, S. Z.; Logan, T. L.
1984-01-01
Landsat-4 Thematic Mapper data of Washington, DC, Harrisburg, PA, and Salton Sea, CA were analyzed to determine geometric integrity and conformity of the data to known earth surface geometry. Several tests were performed. Intraband correlation and interband registration were investigated. No problems were observed in the intraband analysis, and aside from indications of slight misregistration between bands of the primary versus bands of the secondary focal planes, interband registration was well within the specified tolerances. A substantial number of ground control points were found and used to check the images' conformity to the Space Oblique Mercator (SOM) projection of their respective areas. The means of the residual offsets, which included nonprocessing-related measurement errors, were close to the one-pixel level in the two scenes examined. The Harrisburg scene residual mean was 28.38 m (0.95 pixels) with a standard deviation of 19.82 m (0.66 pixels), while the mean and standard deviation for the Salton Sea scene were 40.46 m (1.35 pixels) and 30.57 m (1.02 pixels), respectively. Overall, the data were judged to be of high geometric quality, with errors close to those targeted by the TM sensor design specifications.
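The meters-to-pixels conversion behind the reported residuals is direct, given the 30 m Thematic Mapper pixel:

```python
def residual_in_pixels(residual_m, pixel_m=30.0):
    """Express a geometric-registration residual, measured in meters on
    the ground, in Thematic Mapper pixel units (30 m pixels)."""
    return residual_m / pixel_m
```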
NASA Astrophysics Data System (ADS)
Okamura, Rintaro; Iwabuchi, Hironobu; Schmidt, K. Sebastian
2017-12-01
Three-dimensional (3-D) radiative-transfer effects are a major source of retrieval errors in satellite-based optical remote sensing of clouds. The challenge is that 3-D effects manifest themselves across multiple satellite pixels, which traditional single-pixel approaches cannot capture. In this study, we present two multi-pixel retrieval approaches based on deep learning, a technique that is becoming increasingly successful for complex problems in engineering and other areas. Specifically, we use deep neural networks (DNNs) to obtain multi-pixel estimates of cloud optical thickness and column-mean cloud droplet effective radius from multispectral, multi-pixel radiances. The first DNN method corrects traditional bispectral retrievals based on the plane-parallel homogeneous cloud assumption using the reflectances at the same two wavelengths. The other DNN method uses so-called convolutional layers and retrieves cloud properties directly from the reflectances at four wavelengths. The DNN methods are trained and tested on cloud fields from large-eddy simulations used as input to a 3-D radiative-transfer model to simulate upward radiances. The second DNN-based retrieval, sidestepping the bispectral retrieval step through convolutional layers, is shown to be more accurate. It reduces 3-D radiative-transfer effects that would otherwise affect the radiance values and estimates cloud properties robustly even for optically thick clouds.
Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering
Mars, Kamel; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro
2017-01-01
Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a problem of lack of detectors suitable for MHz modulation rate parallel detection, detecting multiple small SRS signals while eliminating extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of Stokes-on and Stokes-off signal at modulation frequency of 20 MHz in the pixel before reading out. The generated small SRS signal is extracted and amplified in a pixel using a high-speed and large area lateral electric field charge modulator (LEFM) employing two-step ion implantation and an in-pixel pair of low-pass filter, a sample and hold circuit and a switched capacitor integrator using a fully differential amplifier. A prototype chip is fabricated using 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system. PMID:29120358
Malkusch, Wolf
2005-01-01
The enzyme-linked immunospot (ELISPOT) assay was originally developed for the detection of individual antibody-secreting B-cells. Since then, the method has been improved, and ELISPOT is used to determine the production of tumor necrosis factor (TNF)-alpha, interferon (IFN)-gamma, or various interleukins such as IL-4 and IL-5. ELISPOT measurements are performed in 96-well plates with nitrocellulose membranes, either visually or by means of image analysis. Image analysis offers various procedures to overcome variable background intensity and to separate true from false spots. ELISPOT readers offer a complete solution for precise and automatic evaluation of ELISPOT assays. The number, size, and intensity of each single spot can be determined, printed, or saved for further statistical evaluation. Cytokine spots are always round but, because their edges blend into the background, have a nonsmooth borderline. Resolution is a key feature for precise detection of ELISPOT. In standard applications, shape and edge steepness are essential parameters, in addition to size and color, for accurate spot recognition. These parameters require a minimum spot diameter of 6 pixels. Collecting one single image per well with a standard color camera of 750 x 560 pixels results in a resolution far too low to capture all of the spots in a specimen. IFN-gamma spots may be only 25 microm in diameter, and TNF-alpha spots just 15 microm. A 750 x 560 pixel image of a 6-mm well has a pixel size of 12 microm, resulting in only 1 or 2 pixels per spot. Using precise microscope optics in combination with a high-resolution (1300 x 1030 pixel) integrating digital color camera, and at least 2 x 2 images per well, yields a pixel size of 2.5 microm and, as a minimum, a 6-pixel spot diameter. New approaches try to detect two cytokines per cell at the same time (e.g., IFN-gamma and IL-5).
Standard staining procedures produce brownish spots (horseradish peroxidase) and blue spots (alkaline phosphatase). Problems may occur with color overlaps from cells producing both cytokines, resulting in violet spots. The latest experiments therefore try to use fluorescence labels as markers. Fluorescein isothiocyanate results in green spots and rhodamine in red spots; cells producing both cytokines appear yellow. These colors can be separated much more easily than violet, red, and blue, especially at high resolution.
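The pixel-budget arithmetic in the abstract reduces to a one-line helper; the pixel footprints used below are the ones quoted above.

```python
def spot_diameter_px(spot_um: float, pixel_um: float) -> float:
    """Spot diameter in image pixels, given the pixel footprint on the specimen."""
    return spot_um / pixel_um

# Standard camera, one image per well: ~12 um pixel footprint on the specimen.
print(spot_diameter_px(25, 12))    # IFN-gamma spots: ~2 px, too coarse to recognise
# High-resolution camera, 2x2 images per well: ~2.5 um pixel footprint.
print(spot_diameter_px(15, 2.5))   # even the small TNF-alpha spots reach 6 px
```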
Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai
2015-01-01
The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between a central pixel and each of its neighbors and the spatial similarity between the central pixel and its neighborhood, and effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method achieves more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms. PMID:25945120
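A generic FCM-with-spatial-smoothing sketch in numpy. The spatial step here is a plain neighbourhood average of the membership values, a simplified stand-in for the paper's enhanced spatial function, but it shows the intended effect: an isolated noisy pixel is pulled toward its region's label.

```python
import numpy as np

def fcm_spatial(img, c=2, m=2.0, iters=30, seed=0):
    """Fuzzy c-means on a grayscale image with simple spatial membership smoothing."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    x = img.ravel().astype(float)
    u = rng.dirichlet(np.ones(c), size=x.size)        # memberships, rows sum to 1
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)           # weighted cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))              # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
        # Spatial step: average each pixel's memberships with its 4-neighbours.
        ug = u.reshape(H, W, c)
        pad = np.pad(ug, ((1, 1), (1, 1), (0, 0)), mode='edge')
        ug = (ug + pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 5
        u = ug.reshape(-1, c)
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

img = np.zeros((8, 8)); img[:, 4:] = 1.0
img[3, 1] = 1.0                                       # an isolated noisy pixel
centers, u = fcm_spatial(img)
labels = u.argmax(axis=1).reshape(8, 8)
print(labels[3, 1] == labels[3, 0])                   # noisy pixel follows its region
```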
All-digital full waveform recording photon counting flash lidar
NASA Astrophysics Data System (ADS)
Grund, Christian J.; Harwit, Alex
2010-08-01
Current-generation analog and photon-counting flash lidar approaches suffer from limitations in waveform depth, dynamic range, sensitivity, false alarm rates, optical acceptance angle (f/#), optical and electronic cross talk, and pixel density. To address these issues Ball Aerospace is developing a new approach to flash lidar that employs direct coupling of a photocathode and microchannel plate front end to a high-speed, pipelined, all-digital Read Out Integrated Circuit (ROIC) to achieve photon-counting temporal waveform capture in each pixel on each laser return pulse. A unique characteristic is the absence of performance-limiting analog or mixed-signal components. When implemented in 65 nm CMOS technology, the Ball Intensified Imaging Photon Counting (I2PC) flash lidar FPA technology can record up to 300 photon arrivals in each pixel with 100 ps resolution on each photon return, with up to 6000 range bins in each pixel. The architecture supports near-100% fill factor, fast optical system designs (f/#<1), and array sizes up to 3000×3000 pixels. Compared to existing technologies, >60 dB ultimate dynamic range improvement and >10^4 reductions in false alarm rates are anticipated, while achieving single-photon range precision better than 1 cm. I2PC significantly extends long-range and low-power hard-target imaging capabilities useful for autonomous hazard avoidance (ALHAT), navigation, imaging vibrometry, and inspection applications, and enables scannerless 3D imaging for distributed-target applications such as range-resolved atmospheric remote sensing, vegetation canopies, and camouflage penetration from terrestrial, airborne, GEO, and LEO platforms. We discuss the I2PC architecture, development status, anticipated performance advantages, and limitations.
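The quoted timing figures are mutually consistent, as a quick round-trip calculation shows: 100 ps bins correspond to about 1.5 cm of range each, and 6000 of them span roughly 90 m of waveform record.

```python
C = 299_792_458.0                      # speed of light, m/s

def range_bin_size(dt_s: float) -> float:
    """Round-trip: light covers the range twice, so one bin spans c*dt/2."""
    return C * dt_s / 2.0

bin_m = range_bin_size(100e-12)        # 100 ps timing resolution
window_m = 6000 * bin_m                # 6000 range bins per pixel
print(f"{bin_m * 100:.2f} cm per bin, {window_m:.0f} m record length")
```

The sub-centimetre range precision claimed in the abstract would then come from sub-bin processing (e.g., waveform centroiding), not from the raw bin width.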
High-speed low-power voltage-programmed driving scheme for AMOLED displays
NASA Astrophysics Data System (ADS)
Xingheng, Xia; Weijing, Wu; Xiaofeng, Song; Guanming, Li; Lei, Zhou; Lirong, Zhang; Miao, Xu; Lei, Wang; Junbiao, Peng
2015-12-01
A new voltage-programmed driving scheme, the mixed parallel addressing scheme, is presented for AMOLED displays, in which one compensation interval is divided into a first compensation frame and the subsequent N−1 post-compensation frames without periods of initialization and threshold voltage detection. The proposed driving scheme has the advantages of both high speed and low driving power owing to its combination of pipelining and one-time threshold-voltage detection. To accompany the driving scheme, we also propose a new voltage-programmed compensation pixel circuit consisting of five TFTs and two capacitors (5T2C). In-Zn-O thin-film transistors (IZO TFTs) are used to build the proposed 5T2C pixel circuit. It is shown that the non-uniformity of the proposed pixel circuit is considerably reduced compared with that of the conventional 2T1C pixel circuit. The number of frames (N) preserved in the proposed driving scheme is measured and can be up to 35 while the variation of the OLED current remains in an acceptable range. Moreover, the proposed voltage-programmed driving scheme can be even more valuable for an AMOLED display with high resolution, and may also be applied to other compensation pixel circuits. Project supported by the State Key Development Program for Basic Research of China (No. 2015CB655000), the National Natural Science Foundation of China (Nos. 61204089, 61306099, 61036007, 51173049, U1301243), and the Fundamental Research Funds for the Central Universities (Nos. 2013ZZ0046, 2014ZZ0028).
Hyperspectral remote sensing image retrieval system using spectral and texture features.
Zhang, Jing; Geng, Wenhao; Liang, Xi; Li, Jiafeng; Zhuo, Li; Zhou, Qianlan
2017-06-01
Although many content-based image retrieval systems have been developed, few studies have focused on hyperspectral remote sensing images. In this paper, a hyperspectral remote sensing image retrieval system based on spectral and texture features is proposed. The main contributions are fourfold: (1) to account for the "mixed pixel" problem in hyperspectral images, endmembers are extracted as spectral features by an improved automatic pixel purity index algorithm, and texture features are extracted with the gray level co-occurrence matrix; (2) a similarity measurement is designed for the retrieval system, in which the similarity of spectral features is measured with a mixed spectral information divergence and spectral angle match measure, and the similarity of texture features is measured with Euclidean distance; (3) considering the limits of the human visual system, the retrieval results are returned after synthesizing true color images based on the hyperspectral image characteristics; (4) the retrieval results are optimized by adjusting the feature weights of the similarity measurements according to the user's relevance feedback. Experimental results on NASA data sets show that our system achieves retrieval performance comparable or superior to existing hyperspectral analysis schemes.
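The two spectral similarity measures named in contribution (2) have standard textbook forms; a sketch follows, with made-up test spectra.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; invariant to brightness scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def spectral_information_divergence(a, b, eps=1e-12):
    """Symmetric KL divergence between spectra normalised to probability vectors."""
    p = a / a.sum() + eps
    q = b / b.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

s1 = np.array([0.1, 0.4, 0.8, 0.3])
s2 = 2.0 * s1                          # same spectral shape, different brightness
s3 = np.array([0.8, 0.3, 0.1, 0.5])    # different shape
print(spectral_angle(s1, s2))          # ~0: SAM ignores the scaling
print(spectral_information_divergence(s1, s3) > spectral_information_divergence(s1, s2))
```

A mixed measure, as in the paper, typically combines the two (e.g., SID weighted by a function of the angle) so that both shape and probability-distribution differences contribute.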
Xiao, Xun; Geyer, Veikko F.; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F.
2016-01-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582
Global crop production forecasting data system analysis
NASA Technical Reports Server (NTRS)
Castruccio, P. A. (Principal Investigator); Loats, H. L.; Lloyd, D. G.
1978-01-01
The author has identified the following significant results. Findings led to the development of a theory of radiometric discrimination employing the mathematical framework of the theory of discrimination between scintillating radar targets. The theory indicated that the functions which drive accuracy of discrimination are the contrast ratio between targets, and the number of samples, or pixels, observed. Theoretical results led to three primary consequences, as regards the data system: (1) agricultural targets must be imaged at correctly chosen times, when the relative evolution of the crop's development is such as to maximize their contrast; (2) under these favorable conditions, the number of observed pixels can be significantly reduced with respect to wall-to-wall measurements; and (3) remotely sensed radiometric data must be suitably mixed with other auxiliary data, derived from external sources.
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2016-12-01
InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets can mitigate correlated noise, but prior estimation of orbital uncertainties is still required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of the interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between two SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair, with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as the model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances.
We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
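The trick of applying an exponential covariance as a Fourier-domain convolution rather than a dense matrix product can be demonstrated on a toy grid. The sketch below assumes a regular pixel grid and an isotropic exp(-r/L) covariance, as in the abstract; zero padding makes the FFT convolution linear rather than circular, so it matches the dense result exactly.

```python
import numpy as np

def exp_cov_apply_fft(v, L):
    """Compute C @ v for C_ij = exp(-||xi - xj|| / L) on a regular 2-D grid,
    as a zero-padded FFT convolution with the kernel exp(-r / L)."""
    H, W = v.shape
    dy = np.arange(-(H - 1), H)
    dx = np.arange(-(W - 1), W)
    k = np.exp(-np.hypot(dy[:, None], dx[None, :]) / L)
    sh = (H + k.shape[0] - 1, W + k.shape[1] - 1)      # full linear-convolution size
    full = np.fft.irfft2(np.fft.rfft2(v, sh) * np.fft.rfft2(k, sh), sh)
    return full[H - 1:2 * H - 1, W - 1:2 * W - 1]      # crop to the grid

# Check against the dense covariance on a tiny grid.
H, W, L = 6, 5, 2.0
rng = np.random.default_rng(1)
v = rng.normal(size=(H, W))
ii, jj = np.mgrid[0:H, 0:W]
coords = np.column_stack([ii.ravel(), jj.ravel()]).astype(float)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
dense = (np.exp(-d / L) @ v.ravel()).reshape(H, W)
print(np.allclose(exp_cov_apply_fft(v, L), dense))     # True
```

The dense product costs O(N^2) per application while the FFT route costs O(N log N), which is what makes the all-pixels-at-once solver tractable.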
NASA Astrophysics Data System (ADS)
Manolakis, Dimitris G.
2004-10-01
The linear mixing model is widely used in hyperspectral imaging applications to model the reflectance spectra of mixed pixels in the SWIR atmospheric window or the radiance spectra of plume gases in the LWIR atmospheric window. In both cases it is important to detect the presence of materials or gases and then estimate their amount, if they are present. The detection and estimation algorithms available for these tasks are related, but they are not identical. The objective of this paper is to investigate theoretically how the heavy tails observed in hyperspectral background data affect the quality of abundance estimates, and whether the F-test used for endmember selection is robust to the presence of heavy tails when the model fits the data.
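For reference, the linear mixing model's abundance estimation step reduces to ordinary least squares in the unconstrained case. A sketch with made-up endmember spectra (the paper's analysis concerns how heavy-tailed noise degrades exactly this estimate):

```python
import numpy as np

# Endmember spectra as columns: 5 bands, 3 materials (illustrative values).
E = np.array([[0.10, 0.60, 0.30],
              [0.20, 0.55, 0.35],
              [0.45, 0.40, 0.30],
              [0.60, 0.20, 0.50],
              [0.70, 0.10, 0.65]])
a_true = np.array([0.5, 0.3, 0.2])           # abundance fractions, sum to 1
pixel = E @ a_true                            # noise-free mixed-pixel spectrum

# Unconstrained least-squares abundance estimate.
a_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(a_hat, 3))                     # recovers [0.5, 0.3, 0.2]
```

With noise added, the estimator's variance depends on the noise distribution's tails, which is the sensitivity the paper studies.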
Chromatic Modulator for a High-Resolution CCD or APS
NASA Technical Reports Server (NTRS)
Hartley, Frank; Hull, Anthony
2008-01-01
A chromatic modulator has been proposed to enable the separate detection of the red, green, and blue (RGB) color components of the same scene by a single charge-coupled device (CCD), active-pixel sensor (APS), or similar electronic image detector. Traditionally, the RGB color-separation problem in an electronic camera has been solved by use of either (1) fixed color filters over three separate image detectors; (2) a filter wheel that repeatedly imposes a red, then a green, then a blue filter over a single image detector; or (3) different fixed color filters over adjacent pixels. The use of separate image detectors necessitates precise registration of the detectors and the use of complicated optics; filter wheels are expensive and add considerably to the bulk of the camera; and fixed pixelated color filters reduce spatial resolution and introduce color-aliasing effects. The proposed chromatic modulator would not exhibit any of these shortcomings. The proposed chromatic modulator would be an electromechanical device fabricated by micromachining. It would include a filter having a spatially periodic pattern of RGB strips at a pitch equal to that of the pixels of the image detector. The filter would be placed in front of the image detector, supported at its periphery by a spring suspension and electrostatic comb drive. The spring suspension would bias the filter toward a middle position in which each filter strip would be registered with a row of pixels of the image detector. Hard stops would limit the excursion of the spring suspension to precisely one pixel row above and one pixel row below the middle position. In operation, the electrostatic comb drive would be actuated to repeatedly snap the filter to the upper extreme, middle, and lower extreme positions. This action would repeatedly place a succession of the differently colored filter strips in front of each pixel of the image detector. 
To simplify the processing, it would be desirable to encode information on the color of the filter strip over each row (or at least over some representative rows) of pixels at a given instant of time in synchronism with the pixel output at that instant.
Waterjet and laser etching: the nonlinear inverse problem
NASA Astrophysics Data System (ADS)
Bilbao-Guillerna, A.; Axinte, D. A.; Billingham, J.; Cadot, G. B. J.
2017-07-01
In waterjet and laser milling, material is removed from a solid surface in a succession of layers to create a new shape, in a depth-controlled manner. The inverse problem consists of defining the control parameters, in particular, the two-dimensional beam path, to arrive at a prescribed freeform surface. Waterjet milling (WJM) and pulsed laser ablation (PLA) are studied in this paper, since a generic nonlinear material removal model is appropriate for both of these processes. The inverse problem is usually solved for this kind of process by simply controlling dwell time in proportion to the required depth of milling at a sequence of pixels on the surface. However, this approach is only valid when shallow surfaces are etched, since it does not take into account either the footprint of the beam or its overlapping on successive passes. A discrete adjoint algorithm is proposed in this paper to improve the solution. Nonlinear effects and non-straight passes are included in the optimization, while the calculation of the Jacobian matrix does not require large computation times. Several tests are performed to validate the proposed method and the results show that tracking error is reduced typically by a factor of two in comparison to the pixel-by-pixel approach and the classical raster path strategy with straight passes. The tracking error can be as low as 2-5% and 1-2% for WJM and PLA, respectively, depending on the complexity of the target surface.
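The shortcoming of the pixel-by-pixel dwell-time inverse is easy to reproduce in one dimension: once the beam footprint is wider than a pixel, the naive inverse no longer reproduces the target depth. All numbers below are illustrative, and the forward model is a simplified linear convolution rather than the paper's full nonlinear removal model.

```python
import numpy as np

def etch_depth(dwell, footprint, rate=1.0):
    """Simplified forward model (1-D): removed depth is the dwell-time profile
    convolved with the beam footprint, scaled by a linear etch rate."""
    return rate * np.convolve(dwell, footprint, mode='same')

target = np.zeros(50)
target[20:30] = 1.0                          # desired trench depth profile
footprint = np.array([0.25, 0.5, 0.25])      # normalised beam footprint

naive_dwell = target / 1.0                   # pixel-by-pixel inverse: ignores the footprint
achieved = etch_depth(naive_dwell, footprint)
err = np.max(np.abs(achieved - target))
print(err)                                   # trench edges are rounded off: error = 0.25
```

For a delta-function footprint the naive inverse would be exact; correcting the edge error for realistic footprints and overlapping passes is precisely what the adjoint-based optimization addresses.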
The Need of Nested Grids for Aerial and Satellite Images and Digital Elevation Models
NASA Astrophysics Data System (ADS)
Villa, G.; Mas, S.; Fernández-Villarino, X.; Martínez-Luceño, J.; Ojeda, J. C.; Pérez-Martín, B.; Tejeiro, J. A.; García-González, C.; López-Romero, E.; Soteres, C.
2016-06-01
Usual workflows for the production, archiving, dissemination and use of Earth observation images (both aerial and from remote sensing satellites) pose big interoperability problems, for example: non-alignment of pixels at the different levels of the pyramids, which makes it impossible to overlay, compare and mosaic different orthoimages without resampling them, and the need to apply multiple resamplings and compression-decompression cycles. These problems cause great inefficiencies in production, dissemination through web services and processing in "Big Data" environments. Most of them can be avoided, or at least greatly reduced, with the use of a common "nested grid" for multiresolution production, archiving, dissemination and exploitation of orthoimagery, digital elevation models and other raster data. "Nested grids" are space allocation schemas that organize image footprints, pixel sizes and pixel positions at all pyramid levels, in order to achieve coherent and consistent multiresolution coverage of a whole working area. A "nested grid" must be complemented by an appropriate "tiling schema", ideally based on the "quad-tree" concept. In recent years a "de facto standard" grid and tiling schema has emerged and has been adopted by virtually all major geospatial data providers. It has also been adopted by OGC in its "WMTS Simple Profile" standard. In this paper we explain how the adequate use of this tiling schema as a common nested grid for orthoimagery, DEMs and other types of raster data constitutes the most practical solution to most of the interoperability problems of these types of data.
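The de facto standard schema referred to here is the Web-Mercator quad-tree tiling used by major web map providers. A sketch of its tile-index math, including the parent-child nesting property that makes the grid "nested":

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Tile indices in the standard Web-Mercator quad-tree tiling schema."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(0.0, 0.0, 1))        # (1, 1)
# Nesting: a tile's parent at the next-coarser zoom level is simply (x//2, y//2),
# so pixels at every pyramid level stay aligned without any resampling.
x11, y11 = lonlat_to_tile(-3.7, 40.4, 11)
print((x11 // 2, y11 // 2) == lonlat_to_tile(-3.7, 40.4, 10))   # True
```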
NASA Astrophysics Data System (ADS)
Sun, Kang; Cady-Pereira, Karen; Miller, David J.; Tao, Lei; Zondlo, Mark A.; Nowak, John B.; Neuman, J. A.; Mikoviny, Tomas; Müller, Markus; Wisthaler, Armin; Scarino, Amy J.; Hostetler, Chris A.
2015-05-01
Ammonia measurements from a vehicle-based, mobile open-path sensor and those from aircraft were compared with Tropospheric Emission Spectrometer (TES) NH3 columns at the pixel scale during the NASA Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) field experiment. Spatial and temporal mismatches were reduced by having the mobile laboratory sample in the same areas as the TES footprints. To examine how large heterogeneities in the NH3 surface mixing ratios may affect validation, a detailed spatial survey was performed within a single TES footprint around the overpass time. The TES total NH3 column above a single footprint showed excellent agreement with the in situ total column constructed from surface measurements, with a difference of 2% (within the combined measurement uncertainties). The comparison was then extended to a TES transect of nine footprints where aircraft data (5-80 ppbv) were available in a narrow spatiotemporal window (<10 km, <1 h). The TES total NH3 columns above the nine footprints agreed to within 6% of the in situ total columns derived from the aircraft-based measurements. Finally, to examine how TES captures surface spatial gradients at the interpixel scale, ground-based, mobile measurements were performed directly underneath a TES transect, covering nine footprints within ±1.5 h of the overpass. The TES total columns were strongly correlated (R2 = 0.82) with the median NH3 mixing ratios measured at the surface. These results provide the first in situ validation of the TES total NH3 column product, and the methodology is applicable to other satellite observations of short-lived species at the pixel scale.
Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.
Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin
2012-06-10
The digital pixel driving scheme makes organic light-emitting diode (OLED) microdisplays more immune to pixel luminance variations and simplifies the circuit architecture and design flow compared to the analog pixel driving scheme. Additionally, it is easily applied in fully digital systems. However, the data bottleneck becomes a notable problem as the number of pixels and gray levels grows dramatically. This paper discusses the ability of digital driving to achieve kilo-gray levels for mega-pixel displays. The optimal scan strategy is proposed for creating ultra-high gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve the gray level linearity. A 1280×1024×3 OLED-on-silicon microdisplay with 4096 gray levels is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in a 0.35 μm 3.3 V/6 V dual-voltage, one-polysilicon-layer, four-metal-layer (1P4M) complementary metal-oxide semiconductor (CMOS) process with custom top metal. The design aspects of the optimal scan controller are also discussed. Test results show that the gray level linearity of the correction schemes for the optimal scan strategy is acceptable to the human eye.
Fundamental performance differences between CMOS and CCD imagers: part III
NASA Astrophysics Data System (ADS)
Janesick, James; Pinter, Jeff; Potter, Robert; Elliott, Tom; Andrews, James; Tower, John; Cheng, John; Bishop, Jeanne
2009-08-01
This paper is a status report on scientific CMOS imager developments since our previous publications were written. Focus is now on CMOS design and process optimization, because the fundamental problems affecting performance are reasonably well understood. Topics covered include a low-cost custom scientific CMOS fabrication approach, substrate bias for deep-depletion imagers, near-IR and x-ray point-spread performance, custom-fabricated high-resistivity epitaxial and SOI silicon wafers for backside-illuminated imagers, buried-channel MOSFETs for ultra-low-noise performance, 1 e- charge transfer imagers, high-speed transfer pixels, RTS/flicker noise versus MOSFET geometry, pixel offset and gain non-uniformity measurements, high-S/N dCDS/aCDS signal processors, pixel thermal dark current sources, radiation damage topics, CCDs fabricated in CMOS, and future large CMOS imagers planned at Sarnoff.
Robert L. Kremens; Matthew B. Dickinson
2015-01-01
We have simulated the radiant emission spectra from wildland fires such as would be observed at a scale encompassing the pre-frontal fuel bed, the flaming front and the zone of post-frontal combustion and cooling. For these simulations, we developed a 'mixed-pixel' model where the fire infrared spectrum is estimated as the linear superposition of spectra of...
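A minimal version of such a mixed-pixel model is a linear superposition of Planck spectra weighted by within-pixel area fractions. The temperatures and area fractions below are illustrative, not values from the study, and real fire spectra would also include emissivity and atmospheric terms.

```python
import numpy as np

H = 6.62607015e-34    # Planck constant, J s
C = 299792458.0       # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck(lam_m, T):
    """Blackbody spectral radiance B(lambda, T), W sr^-1 m^-3."""
    return (2 * H * C**2 / lam_m**5) / np.expm1(H * C / (lam_m * KB * T))

lam = np.linspace(1e-6, 14e-6, 200)   # 1-14 um wavelength grid
# Within-pixel components: (temperature K, area fraction), summing to 1.
components = {'flaming front': (1200.0, 0.05),
              'smouldering':   (600.0, 0.15),
              'ambient fuel':  (300.0, 0.80)}
mixed = sum(f * planck(lam, T) for T, f in components.values())
# The mixed spectrum is bounded by the pure-component spectra.
print(np.all(mixed <= planck(lam, 1200.0)))
```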
NASA Astrophysics Data System (ADS)
Chatterjee, R. S.; Singh, Narendra; Thapa, Shailaja; Sharma, Dravneeta; Kumar, Dheeraj
2017-06-01
The present study proposes land surface temperature (LST) retrieval from satellite-based thermal IR data by a single-channel radiative transfer algorithm, using atmospheric correction parameters derived from satellite-based and in-situ data and land surface emissivity (LSE) derived by a hybrid LSE model. Atmospheric transmittance (τ) was derived from Terra MODIS spectral radiance in atmospheric window and absorption bands, whereas the atmospheric path radiance and sky radiance were estimated using satellite- and ground-based in-situ solar radiation, geographic location, and observation conditions. The hybrid LSE model, which is coupled with ground-based emissivity measurements, is more versatile than previous LSE models and yields improved emissivity values through a knowledge-based approach. It uses NDVI-based and NDVI Threshold Method (NDVITHM) based algorithms together with field-measured emissivity values, and is applicable to dense vegetation cover, mixed vegetation cover, and bare earth, including coal-mining-related land surface classes. The study was conducted in a coalfield of India badly affected by coal fires for decades. In such a coalfield, LST provides the precise temperature difference between thermally anomalous coal fire pixels and background pixels needed for coal fire detection and monitoring. The derived LST products were compared with radiant temperature images across some of the prominent coal fire locations in the study area, both graphically and by standard dispersion coefficients such as the coefficient of variation, coefficient of quartile deviation, coefficient of quartile deviation for the 3rd quartile vs. maximum temperature, and coefficient of mean deviation (about the median), indicating a significant increase in the temperature difference among the pixels.
The average temperature slope between adjacent pixels, which improves the separability of coal fire pixels from background pixels, is significantly larger in the derived LST products than in the corresponding radiant temperature images.
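A standard NDVI-threshold (NDVITHM) emissivity baseline of the kind the hybrid model builds on can be sketched as follows. The emissivity and threshold values are illustrative, and the paper's full model additionally folds in field-measured emissivities and class knowledge.

```python
import numpy as np

def emissivity_ndvithm(ndvi, e_veg=0.985, e_soil=0.965,
                       ndvi_soil=0.2, ndvi_veg=0.5):
    """NDVI-threshold emissivity: pure soil below the lower threshold, pure
    vegetation above the upper one, and in between a mix weighted by the
    fractional vegetation cover Pv (illustrative parameter values)."""
    ndvi = np.asarray(ndvi, dtype=float)
    pv = np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0) ** 2
    return e_veg * pv + e_soil * (1.0 - pv)

e = emissivity_ndvithm(np.array([0.1, 0.35, 0.8]))   # bare, mixed, dense vegetation
print(np.round(e, 4))                                 # [0.965 0.97 0.985]
```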
Automated determination of arterial input function for DCE-MRI of the prostate
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Chang, Ming-Ching; Gupta, Sandeep
2011-03-01
Prostate cancer is one of the most common cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in the literature, none are optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVF). First, we analytically compute bounds on GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel based on local time-domain information, and eliminate pixels with falsely estimated AIFs using the deduced upper and lower bounds. This makes the algorithm robust to signal inhomogeneity. After that, using spatial information such as similarity and distance between pixels, we formulate global AIF selection as an energy minimization problem and solve it using a message passing algorithm to further rule out weak pixels and optimize the detected AIF. Our method is fully automated, without training or a priori setting of parameters. Experimental results on clinical data have shown that our method obtained promising detection accuracy (all detected pixels inside major arteries) and a very good match with expert-traced manual AIFs.
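The gamma variate function (GVF) at the heart of the method has convenient analytic structure; in particular its peak lies at t0 + alpha*beta, the kind of relation usable for bounding parameters when screening candidate pixels. A sketch with made-up parameters (the paper's exact parameterization and bounds may differ):

```python
import numpy as np

def gvf(t, A, t0, alpha, beta):
    """Gamma variate: A * (t - t0)^alpha * exp(-(t - t0)/beta) for t > t0, else 0."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

A, t0, alpha, beta = 2.0, 5.0, 3.0, 1.5
t = np.linspace(0, 30, 3001)
y = gvf(t, A, t0, alpha, beta)
t_peak = t[np.argmax(y)]
print(t_peak)   # the analytic peak location: t0 + alpha*beta = 9.5
```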
NASA Astrophysics Data System (ADS)
Granja, Carlos; Polansky, Stepan; Vykydal, Zdenek; Pospisil, Stanislav; Owens, Alan; Kozacek, Zdenek; Mellab, Karim; Simcak, Marek
2016-06-01
The Space Application of Timepix based Radiation Monitor (SATRAM) is a spacecraft platform radiation monitor on board the Proba-V satellite launched in an 820 km altitude low Earth orbit in 2013. The is a technology demonstration payload is based on the Timepix chip equipped with a 300 μm silicon sensor with signal threshold of 8 keV/pixel to low-energy X-rays and all charged particles including minimum ionizing particles. For X-rays the energy working range is 10-30 keV. Event count rates can be up to 106 cnt/(cm2 s) for detailed event-by-event analysis or over 1011 cnt/(cm2 s) for particle-counting only measurements. The single quantum sensitivity (zero-dark current noise level) combined with per-pixel spectrometry and micro-scale pattern recognition analysis of single particle tracks enables the composition (particle type) and spectral characterization (energy loss) of mixed radiation fields to be determined. Timepix's pixel granularity and particle tracking capability also provides directional sensitivity for energetic charged particles. The payload detector response operates in wide dynamic range in terms of absorbed dose starting from single particle doses in the pGy level, particle count rate up to 106-10 /cm2/s and particle energy loss (threshold at 150 eV/μm). The flight model in orbit was successfully commissioned in 2013 and has been sampling the space radiation field in the satellite environment along its orbit at a rate of several frames per minute of varying exposure time. This article describes the design and operation of SATRAM together with an overview of the response and resolving power to the mixed radiation field including summary of the principal data products (dose rate, equivalent dose rate, particle-type count rate). 
A preliminary evaluation of the response of the embedded Timepix detector to space radiation in the satellite environment is presented, together with first results in the form of a detailed visualization of the mixed radiation field at the position of the payload and the resulting spatially and temporally correlated maps of cumulative dose rate along the satellite orbit.
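For scale, a back-of-envelope conversion from energy deposited in the sensor to absorbed dose, one of the data products listed above. The 300 μm silicon thickness is from the abstract and the 55 μm pixel pitch is a standard Timepix chip parameter; the per-frame deposited energy is invented for illustration.

```python
KEV_TO_J = 1.602e-16                 # 1 keV in joules
side_cm = 256 * 55e-4                # 256 x 256 pixels at 55 um pitch
volume_cm3 = side_cm**2 * 300e-4     # 300 um thick silicon sensor
mass_kg = volume_cm3 * 2.33e-3       # silicon density 2.33 g/cm^3

deposited_keV = 2.0e4                # illustrative per-frame energy total
dose_Gy = deposited_keV * KEV_TO_J / mass_kg   # absorbed dose = energy / mass
```

For this made-up frame the result lands in the tens of nGy, consistent with the pGy-per-particle sensitivity quoted above.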
Monitoring Bridge Dynamic Deformation in Vibration by Digital Photography
NASA Astrophysics Data System (ADS)
Yu, Chengxin; Zhang, Guojian; Liu, Xiaodong; Fan, Li; Hai, Hua
2018-01-01
This study adopts digital photography, based on PST-TBPM (photographing scale transformation-time baseline parallax method), to monitor bridge dynamic deformation in vibration. Firstly, we photograph the static bridge to obtain a zero (reference) image. Then we continuously photograph the vibrating bridge to obtain the successive images. Based on the reference points on each image, PST-TBPM is used to calculate the dynamic deformation values of the deformation points. Results show that the average measurement accuracies are 0.685 pixels (0.51 mm) and 0.635 pixels (0.47 mm) in the X and Z directions, respectively. The maximal deformations of the bridge in the X and Z directions are 4.53 pixels and 5.21 pixels, respectively. PST-TBPM is valid in solving the problem that the photographing direction is not perpendicular to the bridge. Digital photography as used in this study can be applied to assess bridge health by monitoring the dynamic deformation of a bridge in vibration, and the deformation trend curves can also warn of possible dangers over time.
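The conversion between the pixel and millimeter figures quoted above rests on a photographing scale; a minimal sketch follows, assuming the usual pinhole relation. The full PST-TBPM also corrects for non-perpendicular photography, and all numbers here are illustrative.

```python
def pixel_to_mm(pixel_disp, object_distance_mm, focal_length_mm, pixel_pitch_mm):
    """Convert an image-plane displacement in pixels to object-plane mm."""
    # one pixel on the sensor spans pitch * (distance / focal length) on the object
    scale_mm_per_pixel = pixel_pitch_mm * object_distance_mm / focal_length_mm
    return pixel_disp * scale_mm_per_pixel
```

For example, a 5 μm pixel pitch, 50 mm lens, and 10 m standoff give a scale of 1 mm per pixel.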
Validating spatial structure in canopy water content using geostatistics
NASA Technical Reports Server (NTRS)
Sanderson, E. W.; Zhang, M. H.; Ustin, S. L.; Rejmankova, E.; Haxo, R. S.
1995-01-01
Heterogeneity in ecological phenomena is scale dependent and affects the hierarchical structure of image data. AVIRIS pixels average reflectance produced by complex absorption and scattering interactions among biogeochemical composition, canopy architecture, view and illumination angles, species distributions, and plant cover, as well as other factors. These scales affect validation of pixel reflectance, typically performed by relating pixel spectra to ground measurements acquired at scales of 1 m(exp 2) or less (e.g., field spectra, foliage and soil samples, etc.). As image analyses become more sophisticated, such as those for detection of canopy chemistry, better validation becomes a critical problem. This paper presents a methodology for bridging between point measurements and pixels using geostatistics. Geostatistics have been extensively used in geological and hydrogeological studies but have received little application in ecological studies. The key criterion for kriging estimation is that the phenomenon varies in space and that an underlying controlling process produces spatial correlation between the measured data points. Ecological variation meets this requirement because communities vary along environmental gradients such as soil moisture, nutrient availability, or topography.
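A minimal sketch of the empirical semivariogram, the building block of the kriging approach described above, for a 1-D transect of point measurements; the data and lag spacing are illustrative.

```python
import numpy as np

def semivariogram(x, z, lags, tol):
    """gamma(h) = 0.5 * mean((z_i - z_j)^2) over pairs separated by ~h."""
    d = np.abs(x[:, None] - x[None, :])       # pairwise separations
    sq = 0.5 * (z[:, None] - z[None, :])**2   # half squared differences
    gam = []
    for h in lags:
        m = (np.abs(d - h) <= tol) & (d > 0)
        gam.append(sq[m].mean() if m.any() else np.nan)
    return np.array(gam)
```

For spatially correlated data, gamma(h) grows with lag h until the range of correlation is reached; fitting a model to this curve supplies the weights for kriging between field points and pixels.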
True Ortho Generation of Urban Area Using High Resolution Aerial Photos
NASA Astrophysics Data System (ADS)
Hu, Yong; Stanley, David; Xin, Yubin
2016-06-01
The pros and cons of existing methods for true ortho generation are analyzed based on a critical literature review of its two major processing stages: visibility analysis and occlusion compensation. Existing methods process frame and pushbroom images using different algorithms for visibility analysis due to the need for perspective centers in z-buffer (and similar) techniques. For occlusion compensation, the pixel-based approach is likely to produce excessive seamlines in the ortho-rectified images due to the use of a quality measure rated on a pixel-by-pixel basis. In this paper, we propose innovative solutions to tackle these problems. For visibility analysis, an elevation buffer technique is introduced that employs plain elevations instead of the distances from perspective centers used by the z-buffer, and has the advantage of sensor independence. For occlusion compensation, a segment-oriented strategy is developed to evaluate a plain cost measure per segment instead of the tedious quality rating per pixel. The cost measure directly evaluates the imaging geometry characteristics in ground space, and is also sensor independent. Experimental results are demonstrated using aerial photos acquired with an UltraCam camera.
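A toy sketch of the elevation-buffer idea for visibility analysis: among all DSM cells mapping to the same image pixel, only the cell with the greatest stored elevation is kept as visible, with no reference to perspective-center distances. The data layout and names are illustrative.

```python
def elevation_buffer(projections, elevations):
    """projections[i]: image pixel hit by DSM cell i.
    Returns, per image pixel, the visible cell: the greatest elevation
    wins the buffer slot; lower cells projecting there are occluded."""
    buf = {}
    for cell, (pix, z) in enumerate(zip(projections, elevations)):
        if pix not in buf or z > buf[pix][1]:
            buf[pix] = (cell, z)
    return {pix: cell for pix, (cell, _) in buf.items()}
```

Because only elevations are compared, the same pass works for frame and pushbroom geometries alike, which is the sensor-independence claim above.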
NASA Technical Reports Server (NTRS)
Kimble, Randy A.; Pain, Bedabrata; Norton, Timothy J.; Haas, J. Patrick; Oegerle, William R. (Technical Monitor)
2002-01-01
Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are no overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on an event-driven CMOS Active Pixel Sensor (APS). APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.
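The fractional-pixel centroiding performed by the off-chip electronics can be sketched as a center-of-mass computation over a small pixel window; the 3x3 splash values below are illustrative.

```python
import numpy as np

def centroid(window):
    """Center of mass of an event splash, in (row, col) pixel coordinates."""
    w = np.asarray(window, dtype=float)
    ys, xs = np.indices(w.shape)
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total
```

A symmetric splash centroids to the central pixel; any asymmetry in the charge splash shifts the estimate by a fraction of a pixel, which is what pushes the resolution below the pixel pitch.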
Joint Dictionary Learning for Multispectral Change Detection.
Lu, Xiaoqiang; Yuan, Yuan; Zheng, Xiangtao
2017-04-01
Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric value of the spectral signature and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition of the proposed method is that unchanged pixels in different images can be well reconstructed by the joint dictionary, which encodes knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. Then the reconstruction error is obtained to discriminate between the changed and unchanged pixels in the different images. To select proper thresholds for determining changed regions, an automatic threshold selection strategy is presented by minimizing the reconstruction errors of the changed pixels. Extensive experiments on multispectral data have been conducted, and the experimental results, compared with state-of-the-art methods, prove the superiority of the proposed method. The contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection, whereby change detection can be transformed into a sparse representation problem; to the authors' knowledge, few publications utilize joint dictionary learning in change detection; 2) an automatic threshold selection strategy is presented, which minimizes the reconstruction errors of the changed pixels without any prior assumption on the spectral signature; as a result, the threshold value provided by the proposed method can adapt to different data due to the characteristics of joint dictionary learning; and 3) the proposed method makes no prior assumption in the modeling and handling of the spectral signature, and so can be adapted to different data.
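The reconstruction-error test at the heart of the method can be sketched as follows; for brevity, plain least squares stands in for sparse coding, and the dictionary and spectra are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# joint dictionary: each atom stacks a pixel's band values in image 1
# on top of its band values in image 2 (8 = two stacked 4-band spectra)
D = rng.normal(size=(8, 4))

def recon_error(pair, D):
    """Residual norm after reconstructing a stacked pixel pair from D."""
    coef, *_ = np.linalg.lstsq(D, pair, rcond=None)
    return np.linalg.norm(pair - D @ coef)

unchanged = D @ rng.normal(size=4)        # lies in the span of the dictionary
changed = unchanged + rng.normal(size=8)  # deviates from that span
```

An unchanged pixel pair reconstructs almost exactly while a changed pair does not; thresholding this error is what separates the two classes.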
Poblete, Tomas; Ortega-Farías, Samuel; Ryu, Dongryeol
2018-01-30
Water stress caused by water scarcity has a negative impact on the wine industry. Several strategies have been implemented for optimizing water application in vineyards. In this regard, midday stem water potential (SWP) and thermal infrared (TIR) imaging for the crop water stress index (CWSI) have been used to assess plant water stress on a vine-by-vine basis without considering the spatial variability. Unmanned Aerial Vehicle (UAV)-borne TIR images are used to assess the canopy temperature variability within vineyards that can be related to the vine water status. Nevertheless, when aerial TIR images are captured over canopy, internal shadow canopy pixels cannot be detected, leading to mixed information that negatively impacts the relationship between CWSI and SWP. This study proposes a methodology for automatic coregistration of thermal and multispectral images (ranging between 490 and 900 nm) obtained from a UAV to remove shadow canopy pixels using a modified scale invariant feature transformation (SIFT) computer vision algorithm and K-means++ clustering. Our results indicate that our proposed methodology improves the relationship between CWSI and SWP when shadow canopy pixels are removed from a drip-irrigated Cabernet Sauvignon vineyard. In particular, the coefficient of determination (R²) increased from 0.64 to 0.77. In addition, values of the root mean square error (RMSE) and standard error (SE) decreased from 0.2 to 0.1 MPa and 0.24 to 0.16 MPa, respectively. Finally, this study shows that the negative effect of shadow canopy pixels was higher in those vines with water stress compared with well-watered vines.
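The crop water stress index referred to above is conventionally computed from canopy temperature against wet and dry reference temperatures; the one-liner below is that standard definition, not the paper's specific calibration.

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop water stress index from canopy and reference temperatures (same units)."""
    # 0 at the well-watered (wet) limit, 1 at the non-transpiring (dry) limit
    return (t_canopy - t_wet) / (t_dry - t_wet)
```

Shadowed canopy pixels bias t_canopy low, which is why removing them tightens the CWSI-SWP relationship reported above.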
Single Particle Damage Events in Candidate Star Camera Sensors
NASA Technical Reports Server (NTRS)
Marshall, Paul; Marshall, Cheryl; Polidan, Elizabeth; Wacyznski, Augustyn; Johnson, Scott
2005-01-01
Si charge coupled devices (CCDs) are currently the preeminent detector in star cameras as well as in the near ultraviolet (UV) to visible wavelength region for astronomical observations in space and in Earth-observing space missions. Unfortunately, the performance of CCDs is permanently degraded by total ionizing dose (TID) and displacement damage effects. TID produces threshold voltage shifts on the CCD gates, while displacement damage reduces the charge transfer efficiency (CTE), increases the dark current, produces dark current nonuniformities, and creates random telegraph noise in individual pixels. In addition to these long term effects, cosmic ray and trapped proton transients also interfere with device operation on orbit. In the present paper, we investigate the dark current behavior of CCDs, in particular the formation and annealing of hot pixels. Such pixels degrade the ability of a CCD to perform science and can also present problems for star camera functions (especially if their numbers are not correctly anticipated). To date, most dark current radiation studies have been performed by irradiating the CCDs at room temperature, but this can result in a significantly optimistic picture of the hot pixel count. We know from the Hubble Space Telescope (HST) that high dark current pixels (so-called hot pixels or hot spikes) accumulate as a function of time on orbit. For example, the HST Advanced Camera for Surveys/Wide Field Camera instrument performs monthly anneals, despite the loss of observational time, in order to partially anneal the hot pixels. Note that the significant reduction in hot pixel populations for room temperature anneals is not presently understood, since none of the commonly expected defects in Si (e.g. divacancy, E center, and A-center) anneal at such a low temperature.
A HST Wide Field Camera 3 (WFC3) CCD manufactured by E2V was irradiated while operating at -83 C, and the dark current was studied as a function of temperature while the CCD was warmed through a sequence of temperatures up to a maximum of +30 C. The device was then cooled back down to -83 C and re-measured. Hot pixel populations were tracked during the warm-up and cool-down. Hot pixel annealing began below 40 C, and the anneal process was largely completed before the detector reached +30 C. There was no apparent sharp temperature dependence in the annealing. Although a large fraction of the hot pixels fell below the threshold to be counted as hot, they nevertheless remained warmer than the remaining population. The details of the mechanism for the formation and annealing of hot pixels are not presently understood, but it appears likely that hot pixels are associated with displacement damage occurring in high electric field regions.
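The hot-pixel bookkeeping described above amounts to thresholding per-pixel dark current and tracking the same pixels across an anneal; the threshold, units, and values below are illustrative.

```python
import numpy as np

def hot_pixel_mask(dark_current, threshold):
    """Boolean mask of pixels whose dark current exceeds the hot threshold."""
    return dark_current > threshold

before = np.array([0.1, 5.0, 0.2, 7.5])  # per-pixel dark current before anneal
after = np.array([0.1, 0.8, 0.2, 7.5])   # after anneal: pixel 1 dropped below

# annealed = hot before, no longer hot after; note pixel 1 still runs
# warmer than the normal population, as observed in the abstract
annealed = hot_pixel_mask(before, 2.0) & ~hot_pixel_mask(after, 2.0)
```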
Downsampling Photodetector Array with Windowing
NASA Technical Reports Server (NTRS)
Patawaran, Ferze D.; Farr, William H.; Nguyen, Danh H.; Quirk, Kevin J.; Sahasrabudhe, Adit
2012-01-01
In a photon counting detector array, each pixel in the array produces an electrical pulse when an incident photon on that pixel is detected. Detection and demodulation of an optical communication signal that modulated the intensity of the optical signal requires counting the number of photon arrivals over a given interval. As the size of photon counting photodetector arrays increases, parallel processing of all the pixels exceeds the resources available in current application-specific integrated circuit (ASIC) and gate array (GA) technology; the desire for a high fill factor in avalanche photodiode (APD) detector arrays also precludes this. Through the use of downsampling and windowing portions of the detector array, the processing is distributed between the ASIC and GA. This allows demodulation of the optical communication signal incident on a large photon counting detector array, as well as providing architecture amenable to algorithmic changes. The detector array readout ASIC functions as a parallel-to-serial converter, serializing the photodetector array output for subsequent processing. Additional downsampling functionality for each pixel is added to this ASIC. Due to the large number of pixels in the array, the readout time of the entire photodetector is greater than the time between photon arrivals; therefore, a downsampling pre-processing step is done in order to increase the time allowed for the readout to occur. Each pixel drives a small counter that is incremented at every detected photon arrival or, equivalently, the charge in a storage capacitor is incremented. At the end of a user-configurable counting period (calculated independently from the ASIC), the counters are sampled and cleared. This downsampled photon count information is then sent one counter word at a time to the GA. For a large array, processing even the downsampled pixel counts exceeds the capabilities of the GA. 
Windowing of the array, whereby several subsets of pixels are designated for processing, is used to further reduce the computational requirements. Because the photon count information is sent one word at a time to the GA, the aggregation of the pixels in a window can be achieved by selecting only the designated pixel counts from the serial stream of photon counts, thereby obviating the need to store the entire frame of pixel counts in the gate array. The pixel count sequence from each window can then be processed, forming lower-rate pixel statistics for each window. By having this processing occur in the GA rather than in the ASIC, future changes to the processing algorithm can be readily implemented. The high-bandwidth requirements of a photon counting array combined with the properties of the optical modulation being detected by the array present a unique problem that has not been addressed by current CCD or CMOS sensor array solutions.
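The downsample-then-window pipeline can be sketched as follows: sampled per-pixel counters are serialized by the readout, and only counts belonging to a designated window are selected off the serial stream, so the full frame is never stored in the gate array. Array sizes and counts are illustrative.

```python
import numpy as np

frame = np.arange(16).reshape(4, 4)   # downsampled counts for one period
window = (slice(1, 3), slice(1, 3))   # a designated 2x2 window of interest

serial = frame.ravel()                # readout ASIC serializes the frame
keep = np.zeros((4, 4), dtype=bool)
keep[window] = True
window_counts = serial[keep.ravel()]  # pick designated pixels off the stream
window_total = window_counts.sum()    # a lower-rate per-window statistic
```

Aggregating per window rather than per pixel is what brings the processing load within the gate array's capacity.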
Superpixel-Augmented Endmember Detection for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Thompson, David R.; Castano, Rebecca; Gilmore, Martha
2011-01-01
Superpixels are homogeneous image regions composed of several contiguous pixels. They are produced by shattering the image into contiguous, homogeneous regions that each cover between 20 and 100 image pixels. The segmentation aims for a many-to-one mapping from superpixels to image features: each image feature can contain several superpixels, but each superpixel occupies no more than one image feature. This conservative segmentation is relatively easy to automate in a robust fashion. Superpixel processing is related to the more general idea of improving hyperspectral analysis through spatial constraints, which can recognize subtle features at or below the level of noise by exploiting the fact that their spectral signatures are found in neighboring pixels. Recent work has explored spatial constraints for endmember extraction, showing significant advantages over techniques that ignore pixels' relative positions. Methods such as AMEE (automated morphological endmember extraction) express spatial influence using fixed isometric relationships: a local square window or Euclidean distance in pixel coordinates. In other words, two pixels' covariances are based on their spatial proximity, but are independent of their absolute location in the scene. These isometric spatial constraints are most appropriate when spectral variation is smooth and constant over the image. Superpixels are simple to implement, efficient to compute, and empirically effective. They can be used as a preprocessing step with any desired endmember extraction technique. Superpixels also have a solid theoretical basis in the hyperspectral linear mixing model, making them a principled approach for improving endmember extraction. Unlike existing approaches, superpixels can accommodate non-isometric covariance between image pixels (characteristic of discrete image features separated by step discontinuities). Such image features are common in natural scenes.
Analysts can substitute superpixels for image pixels during endmember analysis that leverages the spatial contiguity of scene features to enhance subtle spectral features. Superpixels define populations of image pixels that are independent samples from each image feature, permitting robust estimation of spectral properties, and reducing measurement noise in proportion to the area of the superpixel. This permits improved endmember extraction, and enables automated search for novel and constituent minerals in very noisy, hyperspatial images. This innovation begins with a graph-based segmentation based on the work of Felzenszwalb et al., but then expands their approach to the hyperspectral image domain with a Euclidean distance metric. Then, the mean spectrum of each segment is computed, and the resulting data cloud is used as input into sequential maximum angle convex cone (SMACC) endmember extraction.
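The preprocessing reduces to averaging the spectrum over each segment and handing the much smaller data cloud to any endmember extractor; in the sketch below the segmentation labels are given rather than computed, and the tiny cube is illustrative.

```python
import numpy as np

def superpixel_means(cube, labels):
    """cube: (rows, cols, bands); labels: (rows, cols) segment ids.
    Returns one mean spectrum per superpixel."""
    flat = cube.reshape(-1, cube.shape[2])
    lab = labels.ravel()
    return np.array([flat[lab == s].mean(axis=0) for s in np.unique(lab)])
```

Averaging N pixels per segment suppresses independent measurement noise by roughly sqrt(N), which is the noise-reduction claim above; the resulting mean spectra feed directly into SMACC or any other extractor.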
William H. Cooke; Dennis M. Jacobs
2002-01-01
FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information...
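The NDVI used for flagging problem plots is the standard normalized band ratio; the reflectance values below are illustrative.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    # vegetated pixels score high; bare soil and water score near or below zero
    return (nir - red) / (nir + red)
```

A plot whose pixel NDVI disagrees strongly with its recorded land use/land cover class is a candidate for elimination.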
NASA Astrophysics Data System (ADS)
Pinsky, Lawrence; Stoffle, Nicholas; Jakubek, Jan; Pospisil, Stanislav; Leroy, Claude; Gutierrez, Andrea; Kitamura, Hisashi; Yasuda, Nakahiro; Uchihori, Yulio
2011-02-01
The Medipix2 Collaboration, based at CERN, has developed the TimePix version of the Medipix pixel readout chip, which has the ability to provide either an ADC or a TDC capability separately in each of its 256×256 pixels. When coupled to a Si detector layer, the device is an excellent candidate for application as an active dosimeter for use in space radiation environments. In order to facilitate such a development, data have been taken with heavy ions at the HIMAC facility in Chiba, Japan. In particular, the problem of determining the resolution of such a detector system with respect to heavy ions of differing charges and energies, but with similar dE/dx values, has been explored for several ions. The ultimate problem is to parse the information in the pixel "footprint" images resulting from the drift of the charge cloud produced in the detector layer. In addition, with the use of converter materials, the detector can be used as a neutron detector, and it has been used both as a charged particle and neutron detector to evaluate the detailed properties of the radiation fields produced by hadron therapy beams. New versions of the basic chip design are ongoing.
MODELING THE LINE-OF-SIGHT INTEGRATED EMISSION IN THE CORONA: IMPLICATIONS FOR CORONAL HEATING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viall, Nicholeen M.; Klimchuk, James A.
2013-07-10
One of the outstanding problems in all of space science is uncovering how the solar corona is heated to temperatures greater than 1 MK. Though studied for decades, one of the major difficulties in solving this problem has been unraveling the line-of-sight (LOS) effects in the observations. The corona is optically thin, so a single pixel measures counts from an indeterminate number (perhaps tens of thousands) of independently heated flux tubes, all along that pixel's LOS. In this paper we model the emission in individual pixels imaging the active region corona in the extreme ultraviolet. If LOS effects are not properly taken into account, erroneous conclusions regarding both coronal heating and coronal dynamics may be reached. We model the corona as an LOS integration of many thousands of completely independently heated flux tubes. We demonstrate that despite the superposition of randomly heated flux tubes, nanoflares leave distinct signatures in light curves observed with multi-wavelength and high time cadence data, such as those data taken with the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory. These signatures are readily detected with the time-lag analysis technique of Viall and Klimchuk in 2012. Steady coronal heating leaves a different and equally distinct signature that is also revealed by the technique.
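The time-lag analysis referenced above can be sketched as locating the peak of the cross-correlation between light curves of two passbands; the synthetic light curves below stand in for AIA channel data.

```python
import numpy as np

def peak_lag(a, b):
    """Lag (in samples) by which light curve a trails light curve b
    at maximum cross-correlation."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

hot = np.zeros(50)
hot[10:15] = 1.0    # hotter channel brightens first
cool = np.zeros(50)
cool[16:21] = 1.0   # cooler channel brightens as the plasma cools
```

A consistent hot-to-cool ordering of the lags across channel pairs is the nanoflare cooling signature; steady heating produces near-zero lags instead.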
Stereo using monocular cues within the tensor voting framework.
Mordohai, Philippos; Medioni, Gérard
2006-06-01
We address the fundamental problem of matching in two static images. The remaining challenges are related to occlusion and lack of texture. Our approach addresses these difficulties within a perceptual organization framework, considering both binocular and monocular cues. Initially, matching candidates for all pixels are generated by a combination of matching techniques. The matching candidates are then embedded in disparity space, where perceptual organization takes place in 3D neighborhoods and, thus, does not suffer from problems associated with scanline or image neighborhoods. The assumption is that correct matches produce salient, coherent surfaces, while wrong ones do not. Matching candidates that are consistent with the surfaces are kept and grouped into smooth layers. Thus, we achieve surface segmentation based on geometric and not photometric properties. Surface overextensions, which are due to occlusion, can be corrected by removing matches whose projections are not consistent in color with their neighbors of the same surface in both images. Finally, the projections of the refined surfaces on both images are used to obtain disparity hypotheses for unmatched pixels. The final disparities are selected after a second tensor voting stage, during which information is propagated from more reliable pixels to less reliable ones. We present results on widely used benchmark stereo pairs.
Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F
2016-08-01
Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Towards an active real-time THz camera: first realization of a hybrid system
NASA Astrophysics Data System (ADS)
May, T.; am Weg, C.; Alcin, A.; Hils, B.; Löffler, T.; Roskos, H. G.
2007-04-01
We report the realization of a hybrid system for stand-off THz reflectometry measurements. The design combines the best of two worlds: the high radiation power of sub-THz micro-electronic emitters and the high sensitivity of coherent opto-electronic detection. Our system is based on a commercially available multiplied Gunn source with a cw output power of 0.6 mW at 0.65 THz. We combine it with electro-optic mixing with femtosecond light pulses in a ZnTe crystal. This scheme can be described as heterodyne detection with a Ti:sapphire fs-laser acting as local oscillator and therefore allows for phase-sensitive measurements. Example images of test objects are obtained with mechanical scanning optics and with measurement times per pixel as short as 10 ms. The test objects are placed at a distance of 1 m from the detector and also from the source. The results indicate diffraction-limited resolution. Different contrast mechanisms, based on absorption, scattering, and difference in optical thickness, are employed. Our evaluation shows that it should be possible to realize a real-time multi-pixel detector with several hundreds of pixels and a dynamic range of at least two orders of magnitude in power.
Spectral-spatial classification of hyperspectral imagery with cooperative game
NASA Astrophysics Data System (ADS)
Zhao, Ji; Zhong, Yanfei; Jia, Tianyi; Wang, Xinyu; Xu, Yao; Shu, Hong; Zhang, Liangpei
2018-01-01
Spectral-spatial classification is known to be an effective way to improve classification performance by integrating spectral information and spatial cues for hyperspectral imagery. In this paper, a game-theoretic spectral-spatial classification algorithm (GTA) using a conditional random field (CRF) model is presented, in which CRF is used to model the image considering the spatial contextual information, and a cooperative game is designed to obtain the labels. The algorithm establishes a one-to-one correspondence between image classification and game theory. The pixels of the image are considered as the players, and the labels are considered as the strategies in a game. Similar to the idea of soft classification, the uncertainty is considered to build the expected energy model in the first step. The local expected energy can be quickly calculated, based on a mixed strategy for the pixels, to establish the foundation for a cooperative game. Coalitions can then be formed by the designed merge rule based on the local expected energy, so that a majority game can be performed to make a coalition decision to obtain the label of each pixel. The experimental results on three hyperspectral data sets demonstrate the effectiveness of the proposed classification algorithm.
Photon-counting-based diffraction phase microscopy combined with single-pixel imaging
NASA Astrophysics Data System (ADS)
Shibuya, Kyuki; Araki, Hiroyuki; Iwata, Tetsuo
2018-04-01
We propose a photon-counting (PC)-based quantitative-phase imaging (QPI) method for use in diffraction phase microscopy (DPM) that is combined with a single-pixel imaging (SPI) scheme (PC-SPI-DPM). This combination of DPM with the SPI scheme overcomes a low optical throughput problem that has occasionally prevented us from obtaining quantitative-phase images in DPM through use of a high-sensitivity single-channel photodetector such as a photomultiplier tube (PMT). The introduction of a PMT allowed us to perform PC with ease and thus solved a dynamic range problem that was inherent to SPI. As a proof-of-principle experiment, we performed a comparison study of analogue-based SPI-DPM and PC-SPI-DPM for a 125-nm-thick indium tin oxide (ITO) layer coated on a silica glass substrate. We discuss the basic performance of the method and potential future modifications of the proposed system.
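The single-pixel imaging scheme underlying the method can be sketched in a few lines: the scene is sampled through a sequence of patterns onto one detector, and the image is recovered from the pattern/measurement pairs. A 4-pixel scene with orthogonal Hadamard patterns keeps the example exact; real systems use far larger pattern sets, and in PC-SPI-DPM the single detector is a photon-counting PMT.

```python
import numpy as np

image = np.array([1.0, 2.0, 3.0, 4.0])       # flattened 2x2 scene
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]], dtype=float)  # Hadamard patterns: H @ H.T = 4I

measurements = H @ image                     # one scalar reading per pattern
recovered = H.T @ measurements / 4.0         # exact inverse by orthogonality
```

In hardware the ±1 patterns are typically realized differentially with a spatial light modulator; the reconstruction arithmetic is unchanged.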
Yao, Tao; Yin, Shi-Min; Xiangli, Bin; Lü, Qun-Bo
2010-06-01
Based on an in-depth analysis of the relative radiometric calibration theory and acquired calibration data for pixel response nonuniformity correction of the CCD (charge-coupled device) in a spaceborne visible interferential imaging spectrometer, a pixel response nonuniformity correction method for CCDs adapted to visible and infrared interferential imaging spectrometer systems was developed, which effectively resolves the engineering problem of nonuniformity correction in detector arrays for interferential imaging spectrometer systems. The quantitative impact of CCD nonuniformity on interferogram correction and recovered spectrum accuracy is also given. Furthermore, an improved method, with calibration and nonuniformity correction performed after the instrument is assembled, is proposed. The method saves time and manpower; it can correct nonuniformity caused by sources in the spectrometer system other than the CCD itself, can acquire recalibration data when the working environment changes, and can more effectively improve the nonuniformity calibration accuracy of interferential imaging spectrometer systems.
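The generic form of such a pixel response nonuniformity correction is the standard two-point dark/flat calibration sketched below; the frames are synthetic and the scheme is the textbook version, not the paper's specific method.

```python
import numpy as np

def nonuniformity_correct(raw, dark, flat):
    """Two-point correction: per-pixel gain from a flat-field frame,
    per-pixel offset from a dark frame."""
    gain = (flat - dark).mean() / (flat - dark)
    return (raw - dark) * gain
```

By construction, feeding the flat-field frame itself through the correction yields a uniform response at the mean level.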
Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.
2014-01-01
We propose a new method for joint segmentation of monotonously growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem using the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or of shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.
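What the monodirectional infinite links encode can be seen on a toy example: one pixel observed in three frames, with the constraint that foreground may never revert to background. Brute-force enumeration below stands in for the graph cut, and the costs are illustrative.

```python
from itertools import product

D_bg = [10, 2, 10]   # data cost of labeling the pixel background at frame t
D_fg = [1, 5, 1]     # data cost of labeling it foreground
INF = float("inf")

def energy(labels):  # labels[t] = 1 for foreground
    e = sum(D_fg[t] if lab else D_bg[t] for t, lab in enumerate(labels))
    # growth constraint: foreground at t followed by background at t+1 is
    # forbidden -- exactly what an infinite link from frame t to t+1 enforces
    for t in range(len(labels) - 1):
        if labels[t] == 1 and labels[t + 1] == 0:
            e += INF
    return e

best = min(product([0, 1], repeat=3), key=energy)
```

Without the constraint, the noisy middle frame would flip to background (energy 4); with it, the globally optimal labeling keeps the shape through all three frames (energy 7), which is the regularizing effect the graph cut computes at scale.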
Fast image interpolation for motion estimation using graphics hardware
NASA Astrophysics Data System (ADS)
Kelly, Francis; Kokaram, Anil
2004-05-01
Motion estimation and compensation are key to high-quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
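As an illustration of the interpolation step this abstract refers to, here is a minimal pure-Python sketch (not the paper's GPU implementation) of bilinear interpolation and a sum-of-absolute-differences block cost evaluated at a fractional displacement; the function names and list-of-rows image layout are assumptions for illustration.

```python
import math

def bilinear(img, x, y):
    """Bilinearly interpolate an image (list of rows) at fractional (x, y)."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0][x0] + fx * img[y0][x1]
    bottom = (1 - fx) * img[y1][x0] + fx * img[y1][x1]
    return (1 - fy) * top + fy * bottom

def sad(block, img, x, y):
    """Sum of absolute differences between a block and the interpolated
    window of img whose top-left corner sits at fractional position (x, y)."""
    return sum(abs(block[j][i] - bilinear(img, x + i, y + j))
               for j in range(len(block))
               for i in range(len(block[0])))
```

A full-search sub-pixel block matcher would simply evaluate `sad` over a grid of fractional `(x, y)` candidates and keep the minimum; the interpolation inside `sad` is the per-pixel cost that graphics hardware can accelerate.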
NASA Astrophysics Data System (ADS)
Evans, Aaron H.
Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique that calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires the aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. The disadvantages of these calibrations are that 1) the user must manually identify extremely dry and wet pixels in the image, and 2) each calibration is only applicable over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques that automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using 1) only dry pixels and 2) both dry and wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: 1) Big Cypress, 2) Disney Wilderness, 3) Everglades, 4) near Gainesville, FL, and 5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all but Big Cypress and Gainesville. 
Evaporative fraction is not very sensitive to instantaneous available energy but it is sensitive to temperature when wet pixels are included because temperature is required for estimating wet pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. Eddy covariance comparison and temporal interpolation produced acceptable bias error for most cases suggesting automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.
Spickermann, Gunnar; Friederich, Fabian; Roskos, Hartmut G; Bolívar, Peter Haring
2009-11-01
We present a 64x48 pixel 2D electro-optical terahertz (THz) imaging system using a photonic mixing device time-of-flight camera as an optical demodulating detector array. The combination of electro-optic detection with a time-of-flight camera increases sensitivity drastically, enabling the use of a nonamplified laser source for high-resolution real-time THz electro-optic imaging.
Fuzzy Classification of Ocean Color Satellite Data for Bio-optical Algorithm Constituent Retrievals
NASA Technical Reports Server (NTRS)
Campbell, Janet W.
1998-01-01
The ocean has traditionally been viewed as a two-class system. Morel and Prieur (1977) classified ocean water according to the dominant absorbing particle suspended in the water column. Case 1 water is described as having a high concentration of phytoplankton (and detritus) relative to other particles. Conversely, case 2 water is described as having inorganic particles, such as suspended sediments, in high concentrations. Little work has gone into the problem of mixing bio-optical models for these different water types. An approach is put forth here to blend bio-optical algorithms based on a fuzzy classification scheme. This scheme involves two procedures. First, a clustering procedure identifies classes and builds class statistics from in-situ optical measurements. Next, a classification procedure assigns satellite pixels partial memberships to these classes based on their ocean color reflectance signature. These membership assignments can be used as the basis for weighting retrievals from class-specific bio-optical algorithms. This technique is demonstrated with in-situ optical measurements and an image from the SeaWiFS ocean color satellite.
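The membership-weighted blending of class-specific retrievals can be sketched as follows; this is a hypothetical illustration (not the authors' code), assuming each class has its own algorithm output for the pixel and the memberships need not be pre-normalized.

```python
def blended_retrieval(memberships, retrievals):
    """Blend class-specific bio-optical algorithm outputs for one pixel,
    weighting each by the pixel's fuzzy membership in that class."""
    total = sum(memberships)
    return sum(m * r for m, r in zip(memberships, retrievals)) / total
```

For example, a pixel with equal membership in case 1 and case 2 water would receive the average of the two class-specific chlorophyll retrievals, while a pure case 1 pixel would receive the case 1 retrieval unchanged.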
Photovoltaic retinal prosthesis for restoring sight to the blind: implant design and fabrication
NASA Astrophysics Data System (ADS)
Wang, Lele; Mathieson, Keith; Kamins, Theodore I.; Loudin, James; Galambos, Ludwig; Harris, James S.; Palanker, Daniel
2012-03-01
We have designed and fabricated a silicon photodiode array for use as a subretinal prosthesis aimed at restoring sight to patients who have lost photoreceptors due to retinal degeneration. The device operates in photovoltaic mode. Each pixel in the two-dimensional array independently converts pulsed infrared light into biphasic electric current to stimulate the remaining retinal neurons without a wired power connection. To enhance the maximum voltage and charge injection levels, each pixel contains three photodiodes connected in series. Active and return electrodes in each pixel ensure localized current flow and are sputter-coated with iridium oxide to provide high charge injection. The fabrication process consists of eight mask layers and includes deep reactive ion etching, oxidation, and a polysilicon trench refill for in-pixel photodiode separation and isolation of adjacent pixels. Simulation of design parameters included TSUPREM4 computation of doping profiles for the n+ and p+ doped regions and MATLAB computation of the anti-reflection coating layer thicknesses. The main process steps are illustrated in detail, and problems encountered are discussed. The I-V characterization of the device shows that the dark reverse current is on the order of 10-100 pA, negligible compared to the stimulation current, and that the reverse breakdown voltage is higher than 20 V. The measured photo-responsivity per photodiode is about 0.33 A/W at 880 nm.
Non-integer expansion embedding techniques for reversible image watermarking
NASA Astrophysics Data System (ADS)
Xiang, Shijun; Wang, Yi
2015-12-01
This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding. The rounding operation places a constraint on a predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors when embedding data into an audio or image file by expanding only the integer part of a prediction error while keeping its fractional part unchanged. The advantage of the NIPE technique is that it brings a predictor fully into play by estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method that estimates a pixel from its four immediate neighbors in a single pass is included in the proposed scheme. The proposed noncausal image predictor provides better performance than Sachnev et al.'s noncausal double-set prediction method (where prediction in two passes introduces distortion because half of the pixels are predicted from watermarked pixels). In comparison with several existing state-of-the-art works, experimental results show that the NIPE technique with the new noncausal prediction strategy reduces the embedding distortion for the same embedding payload.
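A minimal sketch of the integer-part-only expansion idea, assuming a plain single-bit expansion of the prediction error; the paper's full NIPE scheme, its noncausal predictor and payload control are not reproduced, and the function names are hypothetical.

```python
import math

def nipe_embed(x, predicted, bit):
    """Embed one bit by expanding the integer part of the (generally
    non-integer) prediction error e = x - predicted, leaving the
    fractional part of e unchanged."""
    e = x - predicted
    k = math.floor(e)          # integer part of the prediction error
    f = e - k                  # fractional part, kept as-is
    return predicted + 2 * k + bit + f

def nipe_extract(xw, predicted):
    """Recover the embedded bit and the original value from a
    watermarked value, given the same predictor output."""
    ew = xw - predicted
    kw = math.floor(ew)
    bit = kw % 2               # expanded integer part is 2k + bit
    k = (kw - bit) // 2
    return bit, predicted + k + (ew - kw)
```

Because only the integer part is doubled, the fractional part of the prediction error round-trips untouched, which is what removes the need for rounding the predictor's output.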
Homogeneity study of a GaAs:Cr pixelated sensor by means of X-rays
NASA Astrophysics Data System (ADS)
Billoud, T.; Leroy, C.; Papadatos, C.; Pichotka, M.; Pospisil, S.; Roux, J. S.
2018-04-01
Direct conversion semiconductor detectors have become an indispensable tool in radiation detection. In order to obtain a high detection efficiency, especially when detecting X or γ rays, high-Z semiconductor sensors are necessary. Like other compound semiconductors, GaAs compensated by chromium (GaAs:Cr) suffers from a number of defects that affect the charge collection efficiency and homogeneity of the material. Precise knowledge of this problem is important to predict the performance of such detectors and eventually correct their response in specific applications. In this study we analyse the homogeneity and mobility-lifetime products (μeτe) of a 500 μm thick GaAs:Cr pixelated sensor connected to a Timepix chip. The detector is irradiated by 23 keV X-rays, each pixel recording the number of photon interactions and the charge they induce on its electrode. The μeτe products are extracted on a per-pixel basis, using the Hecht equation corrected for the small pixel effect. The detector shows good time stability under the experimental conditions. Significant inhomogeneities are observed in photon counting and charge collection efficiencies. An average μeτe of 1.0 × 10⁻⁴ cm²V⁻¹ is found and compared with values obtained by other methods for the same material. Solutions to improve the response are discussed.
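The per-pixel μeτe extraction rests on the Hecht relation; below is a sketch of the plain single-carrier Hecht charge-collection efficiency under a uniform-field assumption. The small-pixel-effect correction used in the paper is deliberately omitted, and the function name and units are illustrative.

```python
import math

def hecht_cce(mu_tau, bias_volts, thickness_cm):
    """Single-carrier Hecht charge-collection efficiency:
    eta = (lambda/d) * (1 - exp(-d/lambda)), with carrier drift length
    lambda = mu*tau*E and a uniform field E = V/d assumed."""
    drift_length = mu_tau * bias_volts / thickness_cm   # cm
    return (drift_length / thickness_cm) * (
        1.0 - math.exp(-thickness_cm / drift_length))
```

Inverting this relation against the measured per-pixel collected charge (relative to the deposited charge) yields a μeτe estimate for each pixel; efficiency approaches 1 as the drift length grows far beyond the sensor thickness.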
William H. Cooke; Dennis M. Jacobs
2005-01-01
FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information....
Smart image sensors: an emerging key technology for advanced optical measurement and microsystems
NASA Astrophysics Data System (ADS)
Seitz, Peter
1996-08-01
Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed with novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. 
It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.
NASA Technical Reports Server (NTRS)
Kimble, Randy A.; Pain, B.; Norton, T. J.; Haas, P.; Fisher, Richard R. (Technical Monitor)
2001-01-01
Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution for the readout while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are not overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on using an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem, while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.
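The fractional-pixel centroiding of a light splash mentioned above amounts to an intensity-weighted center of mass over a small pixel window; a minimal sketch (illustrative only, not the off-chip flight electronics):

```python
def centroid(window):
    """Intensity-weighted center of mass of a small pixel window,
    returning the event position (x, y) to fractional-pixel accuracy."""
    total = sum(sum(row) for row in window)
    cx = sum(v * i for row in window for i, v in enumerate(row)) / total
    cy = sum(v * j for j, row in enumerate(window) for v in row) / total
    return cx, cy
```

A symmetric splash centers on its brightest pixel, while any asymmetry in the charge cloud shifts the centroid by a fraction of a pixel, which is how the readout achieves MCP-limited rather than pixel-limited resolution.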
NASA Technical Reports Server (NTRS)
Martin, William G.; Cairns, Brian; Bal, Guillaume
2014-01-01
This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
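The scaling argument, namely that the misfit and its full gradient come from one forward and one adjoint application regardless of the number of parameters, can be illustrated on a linear toy operator. This is a sketch with a plain matrix standing in for the 3D VRTE solver; the names are illustrative.

```python
def apply(A, v):
    """Forward operator: matrix-vector product (stand-in for a forward solve)."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def apply_adjoint(A, v):
    """Adjoint operator: transpose-vector product (stand-in for an adjoint solve)."""
    return [sum(A[i][j] * v[i] for i in range(len(A)))
            for j in range(len(A[0]))]

def misfit_and_gradient(A, params, data):
    """Misfit f(p) = 0.5 * ||A p - d||^2 and its gradient A^T (A p - d):
    one forward application plus one adjoint application, independent of
    how many entries params has."""
    residual = [m - d for m, d in zip(apply(A, params), data)]  # forward pass
    f = 0.5 * sum(r * r for r in residual)                      # scalar misfit
    grad = apply_adjoint(A, residual)                           # adjoint pass
    return f, grad
```

Computing the full Jacobian instead would cost one forward solve per parameter (or per measurement); the adjoint route collapses that to two solves, which is the scalability claim of the abstract.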
Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais
2017-01-01
In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate the approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, firstly we develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of the perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we study the strong convergence analysis of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.
NASA Astrophysics Data System (ADS)
Gacal, G. F. B.; Tan, F.; Antioquia, C. T.; Lagrosas, N.
2014-12-01
Cloud detection at nighttime poses a real problem to researchers because of the lack of optimal sensors that can specifically detect clouds during this time of day. Hence, lidars and satellites are currently among the instruments used to determine cloud presence in the atmosphere. These clouds play a significant role in the nighttime weather system because they serve as barriers to thermal radiation from the Earth, reflecting this radiation back toward the surface. This effectively lowers the rate at which the atmosphere cools at night. The objective of this study is to detect cloud occurrences at nighttime in order to study patterns of cloud occurrence and the effects of clouds on local weather. In this study, a commercial camera (Canon Powershot A2300) is operated continuously to capture nighttime clouds. The camera is housed inside a weather-proof box with a glass cover and is placed on the rooftop of the Manila Observatory building to gather pictures of the sky every 5 min and observe cloud dynamics and evolution in the atmosphere. To detect pixels with clouds, the pictures are converted from their native JPEG format to grayscale. The pixels are then screened for clouds by comparing the values of pixels with and without clouds. In grayscale format, pixels with clouds have greater values than pixels without clouds. Based on the observations, a threshold of 0.34 of the maximum pixel value is enough to discern pixels with clouds from pixels without clouds. Figs. 1a and 1b are sample unprocessed pictures of cloudless (May 22-23, 2014) and cloudy skies (May 23-24, 2014), respectively. Figs. 1c and 1d show the percentage occurrence of nighttime clouds on May 22-23 and May 23-24, 2014, respectively. The cloud occurrence in a pixel is defined as the ratio of the number of times the pixel has clouds to the total number of observations. Fig. 1c shows less than 50% cloud occurrence while Fig. 
1d shows greater cloud occurrence than Fig. 1c. These graphs demonstrate the capability of the camera to detect and measure cloud occurrence at nighttime. Continuous collection of nighttime pictures is currently being implemented. In regions where there is a dearth of scientific data, the measured nighttime cloud occurrence will serve as a baseline for future cloud studies in this part of the world.
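The thresholding and per-pixel occurrence statistics described above can be sketched as follows (illustrative pure Python, not the authors' processing chain; the 0.34-of-maximum threshold is the value reported in the abstract, and 8-bit grayscale is assumed):

```python
def cloud_mask(gray, threshold_fraction=0.34, max_value=255):
    """Flag pixels whose grayscale value exceeds a fixed fraction of the
    maximum pixel value as cloudy (1) versus clear (0)."""
    cut = threshold_fraction * max_value
    return [[1 if v > cut else 0 for v in row] for row in gray]

def occurrence(masks):
    """Per-pixel cloud occurrence: the fraction of frames in which each
    pixel was flagged as cloudy."""
    n = len(masks)
    h, w = len(masks[0]), len(masks[0][0])
    return [[sum(m[j][i] for m in masks) / n for i in range(w)]
            for j in range(h)]
```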
CMOS Active Pixel Sensors as energy-range detectors for proton Computed Tomography.
Esposito, M; Anaxagoras, T; Evans, P M; Green, S; Manolopoulos, S; Nieto-Camero, J; Parker, D J; Poludniowski, G; Price, T; Waltham, C; Allinson, N M
2015-06-03
Since the first proof of concept in the early 1970s, a number of technologies have been proposed to perform proton CT (pCT) as a means of mapping tissue stopping power for accurate treatment planning in proton therapy. Previous prototypes of energy-range detectors for pCT have mainly been based on scintillator-based calorimeters that measure the residual energy of protons after passing through the patient. However, such an approach is limited by the need for only a single proton to pass through the energy-range detector in a read-out cycle. A novel approach to this problem is the use of pixelated detectors, where the independent read-out of each pixel allows the residual energies of a number of protons to be measured simultaneously in the same read-out cycle, facilitating a faster and more efficient pCT scan. This paper investigates the suitability of CMOS Active Pixel Sensors (APSs) for tracking individual protons as they pass through a number of CMOS layers forming an energy-range telescope. Measurements performed at the iThemba Laboratories are presented and analysed in terms of correlation to confirm the proton-tracking capability of CMOS APSs.
NASA Astrophysics Data System (ADS)
Atzberger, C.; Richter, K.
2009-09-01
The robust and accurate retrieval of vegetation biophysical variables using radiative transfer models (RTM) is seriously hampered by the ill-posedness of the inverse problem. With this research we further develop our previously published (object-based) inversion approach [Atzberger (2004)]. The object-based RTM inversion takes advantage of the geostatistical fact that the biophysical characteristics of nearby pixels are generally more similar than those at a larger distance. A two-step inversion based on PROSPECT+SAIL generated look-up tables is presented that can be easily implemented and adapted to other radiative transfer models. The approach takes into account the spectral signatures of neighboring pixels and optimizes a common value of the average leaf angle (ALA) for all pixels of a given image object, such as an agricultural field. Using a large set of leaf area index (LAI) measurements (n = 58) acquired over six different crops at the Barrax test site (Spain), we demonstrate that the proposed geostatistical regularization yields in most cases more accurate and spatially consistent results compared to the traditional (pixel-based) inversion. Pros and cons of the approach are discussed and possible future extensions presented.
Conditional random fields for pattern recognition applied to structured data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, Tom; Skurikhin, Alexei
In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
A CT and MRI scan to MCNP input conversion program.
Van Riper, Kenneth A
2005-01-01
We describe a new program to read a sequence of tomographic scans and prepare the geometry and material sections of an MCNP input file. Image processing techniques include contrast controls and mapping of grey scales to colour. The user interface provides several tools with which the user can associate a range of image intensities with an MCNP material. Materials are loaded from a library. A separate material assignment can be made to a pixel intensity or range of intensities when that intensity dominates the image boundaries; this material is assigned to all pixels with that intensity contiguous with the boundary. Material fractions are computed in a user-specified voxel grid overlaying the scans. New materials are defined by mixing the library materials using the fractions. The geometry can be written as an MCNP lattice or as individual cells. A combination algorithm can be used to join neighbouring cells with the same material.
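The intensity-to-material assignment and the voxel material fractions might be sketched as follows. The function names and data layout are hypothetical; the actual program's interactive tools, boundary-contiguity rule and MCNP card output are not reproduced.

```python
def assign_materials(image, ranges):
    """Map each pixel intensity to a material id via user-defined
    (low, high, material) intensity ranges; unmatched pixels get None."""
    def lookup(v):
        for lo, hi, mat in ranges:
            if lo <= v <= hi:
                return mat
        return None
    return [[lookup(v) for v in row] for row in image]

def material_fractions(material_map):
    """Fraction of each material inside one voxel (here, for simplicity,
    the whole window is treated as a single voxel)."""
    flat = [m for row in material_map for m in row]
    return {m: flat.count(m) / len(flat) for m in set(flat)}
```

In the real program these fractions, accumulated per cell of the user-specified voxel grid, drive the mixing of library materials into the new MCNP material definitions.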
NASA Astrophysics Data System (ADS)
Takehara, Hironari; Miyazawa, Kazuya; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Kim, Soo Hyeon; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun
2014-01-01
A CMOS image sensor with stacked photodiodes was fabricated using 0.18 µm mixed signal CMOS process technology. Two photodiodes were stacked at the same position of each pixel of the CMOS image sensor. The stacked photodiodes consist of shallow high-concentration N-type layer (N+), P-type well (PW), deep N-type well (DNW), and P-type substrate (P-sub). PW and P-sub were shorted to ground. By monitoring the voltage of N+ and DNW individually, we can observe two monochromatic colors simultaneously without using any color filters. The CMOS image sensor is suitable for fluorescence imaging, especially contact imaging such as a lensless observation system of digital enzyme-linked immunosorbent assay (ELISA). Since the fluorescence increases with time in digital ELISA, it is possible to observe fluorescence accurately by calculating the difference from the initial relation between the pixel values for both photodiodes.
Segregation of asphalt mixes caused by surge silos : final report.
DOT National Transportation Integrated Search
1982-01-01
Segregation of asphalt mixes continues to be a problem in Virginia, particularly with base mixes and coarse surface mixes. Although the problem is encountered primarily on jobs using surge silos, it has been related to other factors such as mix desig...
Digital Ethics: Computers, Photographs, and the Manipulation of Pixels.
ERIC Educational Resources Information Center
Mercedes, Dawn
1996-01-01
Summarizes negative aspects of computer technology and problems inherent in the field of digital imaging. Considers the postmodernist response that borrowing and alteration are essential characteristics of the technology. Discusses the implications of this for education and research. (MJP)
Multichroic Bolometric Detector Architecture for Cosmic Microwave Background Polarimetry Experiments
NASA Astrophysics Data System (ADS)
Suzuki, Aritoki
Characterization of the Cosmic Microwave Background (CMB) B-mode polarization signal will test models of inflationary cosmology, as well as constrain the sum of the neutrino masses and other cosmological parameters. The low intensity of the B-mode signal, combined with the need to remove polarized galactic foregrounds, requires a sensitive millimeter receiver and effective methods of foreground removal. Current bolometric detector technology is reaching the sensitivity limit set by the CMB photon noise, so we need to increase the optical throughput to increase an experiment's sensitivity. To increase the throughput without increasing the focal plane size, we can increase the frequency coverage of each pixel. Increased frequency coverage per pixel has the additional advantage that we can split the signal into frequency bands to obtain spectral information. The detection of multiple frequency bands allows removal of the polarized foreground emission from synchrotron radiation and thermal dust emission by exploiting its spectral dependence. Traditionally, spectral information has been captured with a focal plane consisting of a heterogeneous mix of single-color pixels. To maximize the efficiency of the focal plane area, we developed a multi-chroic pixel, which increases the number of pixels per frequency for the same focal plane area. We developed a multi-chroic antenna-coupled transition edge sensor (TES) detector array for CMB polarimetry. In each pixel, a silicon lens-coupled dual-polarized sinuous antenna collects light over a two-octave frequency band. The antenna couples the broadband millimeter-wave signal into microstrip transmission lines, and on-chip filter banks split the broadband signal into several frequency bands. Separate TES bolometers detect the power in each frequency band and linear polarization. We describe the design and performance of these devices and present optical data taken with prototype pixels and detector arrays. 
Our measurements show beams with percent-level ellipticity, percent-level cross-polarization leakage, and partitioned bands using banks of two and three filters. We also describe the development of broadband anti-reflection coatings for the high-dielectric-constant lens. The broadband anti-reflection coating has approximately 100% bandwidth and no detectable loss at cryogenic temperature. We describe a next-generation CMB polarimetry experiment, POLARBEAR-2, in detail. POLARBEAR-2 would have focal planes with kilo-pixel arrays of these detectors to achieve high sensitivity. We also introduce proposed experiments that would use the multi-chroic detector arrays developed in this work, and conclude with suggestions for future multichroic detector development.
The fundamentals of average local variance--Part I: Detecting regular patterns.
Bøcher, Peder Klith; McCloy, Keith R
2006-02-01
The method of average local variance (ALV) computes the mean of the standard deviation values derived for a 3 x 3 moving window on a successively coarsened image to produce a function of ALV versus spatial resolution. In developing ALV, the authors used approximately a doubling of the pixel size at each coarsening of the image. They hypothesized that ALV is low when the pixel size is smaller than the size of scene objects because the pixels on an object will have similar response values. When the pixels and objects are of similar size, they will tend to vary in response and the ALV values will increase. As the pixel size increases further, more objects will be contained in a single pixel and ALV will decrease. The authors showed that various cover types produced single-peak ALV functions that peaked when the pixel size was 1/2 to 3/4 of the object size. This paper reports on work done to explore the characteristics of the various forms of the ALV function and to understand the location of the peaks that occur in this function. The work was conducted using synthetically generated image data. The investigation showed that the hypothesis as originally proposed is not adequate. A new hypothesis is proposed: the ALV function has peak locations that are related to the geometric size of pattern structures in the scene. These structures are not always the same as scene objects. Only in cases where the size of and separation between scene objects are equal does the ALV function detect the size of the objects. In situations where the distance between scene objects is larger than their size, the ALV function has a peak at the object separation, not at the object size. This work has also shown that multiple object structures of different sizes and separations in the image produce multiple peaks in the ALV function and that some of these structures are not implicitly recognized as such from our perspective. 
However, the magnitude of these peaks depends on the response mix in the structures, complicating their interpretation and analysis. The analysis of the ALV function is thus more complex than generally reported in the literature.
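The ALV computation described above is straightforward to prototype. The sketch below is a minimal illustration under assumed details (3 x 3 windows, 2 x 2 block-mean coarsening, a synthetic checkerboard scene where object size equals object separation), not the authors' implementation:

```python
import numpy as np

def local_std(img):
    """Standard deviation within each 3 x 3 window (valid positions only)."""
    h, w = img.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = img[i:i + 3, j:j + 3].std()
    return out

def alv_curve(img, levels):
    """ALV (mean of 3 x 3 local std) at successively doubled pixel sizes."""
    curve = []
    for _ in range(levels):
        curve.append(local_std(img).mean())
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        # coarsen: aggregate 2 x 2 pixel blocks by their mean
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return curve

# Synthetic scene: a regular checkerboard of 8-pixel squares, so object
# size equals object separation and a single ALV peak is expected.
y, x = np.mgrid[0:64, 0:64]
scene = (((y // 8) + (x // 8)) % 2).astype(float)
curve = alv_curve(scene, levels=4)
```

On this scene the ALV values rise as the coarsened pixel size approaches the 8-pixel pattern structure, consistent with the hypothesis above.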
NASA Astrophysics Data System (ADS)
Xie, Bing; Duan, Zhemin; Chen, Yu
2017-11-01
Scene-matching navigation can help a UAV achieve autonomous navigation and other missions. However, aerial multi-frame images acquired by a UAV in a complex flight environment are easily affected by jitter, noise and exposure, which lead to image blur, deformation and other issues, and reduce the detection rate of regional targets of interest. To address this problem, we propose a graded sub-pixel motion estimation algorithm that combines time-domain characteristics with frequency-domain phase correlation. Experimental results demonstrate the validity and accuracy of the proposed algorithm.
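The frequency-domain phase-correlation step referred to above can be sketched as follows. This is a generic textbook formulation with an assumed parabolic-fit sub-pixel refinement, not the paper's graded algorithm:

```python
import numpy as np

def estimate_shift(a, b):
    """Translation of image b relative to a via phase correlation,
    with sub-pixel refinement by a 1-D parabolic fit around the peak."""
    cross = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    cross /= np.abs(cross) + 1e-12              # normalised cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(vals, p, n):
        # parabolic interpolation around the peak (cyclic neighbours)
        c, l, r = vals[p], vals[(p - 1) % n], vals[(p + 1) % n]
        denom = c - 0.5 * (l + r)
        return p + (0.25 * (r - l) / denom if abs(denom) > 1e-12 else 0.0)

    dy = refine(corr[:, px], py, corr.shape[0])
    dx = refine(corr[py, :], px, corr.shape[1])
    # map cyclic peak positions into the signed range [-n/2, n/2)
    if dy > corr.shape[0] / 2:
        dy -= corr.shape[0]
    if dx > corr.shape[1] / 2:
        dx -= corr.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))    # known cyclic shift
dy, dx = estimate_shift(img, shifted)
```

For a pure cyclic shift the correlation surface is an impulse and the estimate recovers the shift exactly; real aerial frames with blur and noise broaden the peak, which is where sub-pixel refinement matters.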
Du, Bo; Zhang, Yuxiang; Zhang, Liangpei; Tao, Dacheng
2016-08-18
Hyperspectral images provide great potential for target detection; however, they also introduce new challenges, so hyperspectral target detection should be treated as a new problem and modeled differently. Many classical detectors have been proposed based on the linear mixing model and the sparsity model. However, the former type of model cannot deal well with spectral variability given limited endmembers, and the latter usually treats target detection as a simple classification problem and pays little attention to the low target probability. In this case, can we find an efficient way to utilize both the high-dimensional features behind hyperspectral images and the limited target information to extract small targets? This paper proposes a novel sparsity-based detector named the hybrid sparsity and statistics detector (HSSD) for target detection in hyperspectral imagery, which can effectively deal with the above two problems. The proposed algorithm designs a hypothesis-specific dictionary based on the prior hypotheses for the test pixel, which can avoid an imbalanced number of training samples for a class-specific dictionary. Then, a purification process is employed for the background training samples in order to construct an effective competition between the two hypotheses. Next, a sparse-representation-based binary hypothesis model merged with additive Gaussian noise is proposed to represent the image. Finally, a generalized likelihood ratio test is performed to obtain a more robust detection decision than reconstruction-residual-based detection methods. Extensive experimental results with three hyperspectral datasets confirm that the proposed HSSD algorithm clearly outperforms the state-of-the-art target detectors.
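The binary-hypothesis idea behind such detectors can be illustrated with a deliberately simplified stand-in: a ridge least-squares fit replaces the sparse coding and the likelihood-ratio machinery of the actual HSSD, and the dictionaries, spectra, and mixing coefficients below are synthetic assumptions:

```python
import numpy as np

def detector_score(x, B, T, lam=1e-3):
    """Residual-ratio score: larger values favour the target hypothesis.

    A ridge least-squares fit stands in for the sparse coding step of the
    actual HSSD; B and T hold background and target spectra as columns.
    """
    def residual(D):
        a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
        return np.linalg.norm(x - D @ a)
    r0 = residual(B)                        # H0: background dictionary only
    r1 = residual(np.hstack([B, T]))        # H1: background + target
    return r0 / (r1 + 1e-12)

rng = np.random.default_rng(1)
bands = 50
B = rng.random((bands, 4))                  # 4 synthetic background spectra
t = rng.random(bands)                       # 1 synthetic target spectrum
T = t[:, None]
background_pixel = B @ np.array([0.3, 0.2, 0.4, 0.1])
target_pixel = 0.5 * background_pixel + 0.5 * t

score_target = detector_score(target_pixel, B, T)
score_background = detector_score(background_pixel, B, T)
```

A pixel containing target abundance is explained far better once the target spectrum joins the dictionary, so its residual ratio is much larger than that of a pure background pixel.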
Niphadkar, Madhura; Nagendra, Harini; Tarantino, Cristina; Adamo, Maria; Blonda, Palma
2017-01-01
The establishment of invasive alien species in varied habitats across the world is now recognized as a genuine threat to the preservation of biodiversity. In particular, plant invasions in the understorey of tropical forests are detrimental to the persistence of healthy ecosystems. Monitoring such invasions using Very High Resolution (VHR) satellite remote sensing has been shown to be valuable in designing management interventions for conservation of native habitats. Object-based classification methods are very helpful in identifying invasive plants in various habitats, owing to their inherent ability to mimic human pattern recognition. However, these methods have not been tested adequately in dense tropical mixed forests where invasion occurs in the understorey. This study compares a pixel-based and an object-based classification method for mapping the understorey invasive shrub Lantana camara (Lantana) in a tropical mixed forest habitat in the Western Ghats biodiversity hotspot in India. Overall, a hierarchical approach of mapping the top canopy first, and then further processing for the understorey shrub using measures such as texture and vegetation indices, proved effective in separating Lantana from other cover types. In the first method, we implement a simple parametric supervised classification for mapping cover types, and then process within these types for Lantana delineation. In the second method, we use an object-based segmentation algorithm to map cover types, and then perform further processing to separate Lantana. The improved ability of the object-based approach to delineate structurally distinct objects with characteristic spectral and spatial properties of their own, as well as with reference to their surroundings, allows for much greater flexibility in identifying invasive understorey shrubs among the complex vegetation of the tropical forest than the parametric classifier provides. 
Conservation practices in tropical mixed forests can benefit greatly by adopting methods which use high resolution remotely sensed data and advanced techniques to monitor the patterns and effective functioning of native ecosystems by periodically mapping disturbances such as invasion. PMID:28620400
Applications of Fractal Analytical Techniques in the Estimation of Operational Scale
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Quattrochi, Dale A.
2000-01-01
The observational scale and the resolution of remotely sensed imagery are essential considerations in the interpretation process. Many atmospheric, hydrologic, and other natural and human-influenced spatial phenomena are inherently scale dependent and are governed by different physical processes at different spatial domains. This spatial and operational heterogeneity constrains the ability to compare interpretations of phenomena and processes observed in higher spatial resolution imagery to similar interpretations obtained from lower resolution imagery. This is a particularly acute problem, since long-term global change investigations will require high spatial resolution Earth Observing System (EOS), Landsat 7, or commercial satellite data to be combined with lower resolution imagery from older sensors such as Landsat TM and MSS. Fractal analysis is a useful technique for identifying the effects of scale changes on remotely sensed imagery. The fractal dimension of an image is a non-integer value between two and three which indicates the degree of complexity in the texture and shapes depicted in the image. A true fractal surface exhibits self-similarity, a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution, and the slope of the fractal dimension-resolution relationship would be zero. Most geographical phenomena, however, are not self-similar at all scales, but they can be modeled by a stochastic fractal in which the scaling properties of the image exhibit patterns that can be described by statistics such as area-perimeter ratios and autocovariances. 
Stochastic fractal models relax the self-similarity assumption and use measurements at many scales and resolutions to represent the varying form of a phenomenon as the pixel size is increased in a convolution process. We have observed that for images of homogeneous land covers, the fractal dimension varies linearly with changes in resolution or pixel size over the range of past, current, and planned space-borne sensors. This relationship differs significantly in images of agricultural, urban, and forest land covers, with urban areas retaining the same level of complexity, forested areas growing smoother, and agricultural areas growing more complex as small pixels are aggregated into larger, mixed pixels. Images of scenes having a mixture of land covers have fractal dimensions that exhibit a non-linear, complex relationship to pixel size. Measuring the fractal dimension of a difference image derived from two images of the same area obtained on different dates showed that the fractal dimension increased steadily, then exhibited a sharp decrease at increasing levels of pixel aggregation. This breakpoint of the fractal dimension/resolution plot is related to the spatial domain or operational scale of the phenomenon exhibiting the predominant visible difference between the two images (in this case, mountain snow cover). The degree to which an image departs from a theoretical ideal fractal surface provides clues as to how much information is altered or lost in the processes of rescaling and rectification. The measured fractal dimension of complex, composite land covers such as urban areas also provides a useful textural index that can assist image classification of complex scenes.
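One common way to estimate the fractal dimension of an image surface, and to track how it changes as pixels are aggregated, is the variogram method. The sketch below is a generic illustration on synthetic surfaces (white noise and a smooth ramp), with assumed lags and smoothing choices; it is not the authors' measurement procedure:

```python
import numpy as np

def fractal_dimension(img, lags=(1, 2, 4, 8)):
    """Fractal dimension of an image surface via the variogram method.

    Fits log E[(z(p+h) - z(p))^2] against log h; for a fractional Brownian
    surface the slope is 2H and D = 3 - H.
    """
    gammas = []
    for h in lags:
        dx = img[:, h:] - img[:, :-h]
        dy = img[h:, :] - img[:-h, :]
        gammas.append(np.mean(np.concatenate([dx.ravel() ** 2, dy.ravel() ** 2])))
    slope = np.polyfit(np.log(lags), np.log(gammas), 1)[0]
    H = float(np.clip(slope / 2.0, 0.0, 1.0))
    return 3.0 - H

def coarsen(img):
    """One resolution step: aggregate 2 x 2 pixel blocks by their mean."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(2)
rough = rng.random((128, 128))          # white noise: maximally complex, D near 3
plane = np.add.outer(np.arange(64.0), np.arange(64.0))  # smooth ramp, D near 2
d_rough = fractal_dimension(rough)
d_coarse = fractal_dimension(coarsen(coarsen(rough)))
d_plane = fractal_dimension(plane)
```

Computing D on the image at each aggregation level yields the fractal dimension-resolution plot whose slope and breakpoints the abstract discusses.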
Resolution Enhancement of MODIS-derived Water Indices for Studying Persistent Flooding
NASA Astrophysics Data System (ADS)
Underwood, L. W.; Kalcic, M. T.; Fletcher, R. M.
2012-12-01
Monitoring coastal marshes for persistent flooding and salinity stress is a high-priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offers timely and relatively accurate information for aiding wetland vegetation management. Monitoring accuracy is often limited by mixed pixels, which occur when the area represented by a pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within a 250 m MODIS pixel. Fusion of MODIS and Landsat can improve both spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, a method for estimating sub-pixel values from a MODIS time series of a Normalized Difference Water Index (NDWI), using temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily, NDWI computed from daily 250 m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30-m data. Each MODIS pixel was disaggregated into a mixture of the eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. 
A new daily time series was produced using both the reconstructed 250-m MODIS, with enhanced features, and the approximated daily 30-m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike, to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering flooding events by Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High-resolution images were formed for all days in 2008 between the first cloud-free Landsat scene and the last cloud-free Landsat scene. To refine and validate flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels adjusted to marsh to optimize thresholds for MODIS-derived time series of NDWI. Seasonal fluctuations were adjusted by subtracting the ten-year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and a combination of indices were compared. Flooding persistence mapped with the higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is that it provides improved mapping of the extent and duration of inundation.
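The disaggregation step described above, exponential temporal weighting of the bracketing Landsat dates followed by a fraction-weighted sum over cover types, can be sketched for a single MODIS pixel. All cover types, NDWI values, dates, and the decay constant below are illustrative assumptions, not values from the study:

```python
import math

def exponential_weights(t_modis, t_before, t_after, tau=16.0):
    """Normalised weights for the two bracketing Landsat dates; weight decays
    exponentially with the date gap (tau is an assumed e-folding time in days)."""
    wb = math.exp(-abs(t_modis - t_before) / tau)
    wa = math.exp(-abs(t_modis - t_after) / tau)
    s = wb + wa
    return wb / s, wa / s

def reconstruct_ndwi(fractions, ndwi_before, ndwi_after,
                     t_modis, t_before, t_after):
    """Fraction-weighted NDWI for one MODIS pixel.

    fractions: cover type -> areal fraction inside the MODIS pixel
    ndwi_before / ndwi_after: cover type -> mean Landsat NDWI on each date
    """
    wb, wa = exponential_weights(t_modis, t_before, t_after)
    return sum(f * (wb * ndwi_before[c] + wa * ndwi_after[c])
               for c, f in fractions.items())

fractions = {"marsh": 0.6, "open_water": 0.4}
ndwi_before = {"marsh": -0.1, "open_water": 0.5}
ndwi_after = {"marsh": 0.1, "open_water": 0.6}
wb, wa = exponential_weights(10, 0, 30)
value = reconstruct_ndwi(fractions, ndwi_before, ndwi_after,
                         t_modis=10, t_before=0, t_after=30)
```

The Landsat date closer to the MODIS acquisition receives the larger weight, and the reconstructed pixel value stays within the range spanned by the per-class NDWI means.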
Resolution Enhancement of MODIS-Derived Water Indices for Studying Persistent Flooding
NASA Technical Reports Server (NTRS)
Underwood, L. W.; Kalcic, Maria; Fletcher, Rose
2012-01-01
Monitoring coastal marshes for persistent flooding and salinity stress is a high-priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offers timely and relatively accurate information for aiding wetland vegetation management. Monitoring accuracy is often limited by mixed pixels, which occur when the area represented by a pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within a 250 m MODIS pixel. Fusion of MODIS and Landsat can improve both spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, a method for estimating sub-pixel values from a MODIS time series of a Normalized Difference Water Index (NDWI), using temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily, NDWI computed from daily 250 m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30-m data. Each MODIS pixel was disaggregated into a mixture of the eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. 
A new daily time series was produced using both the reconstructed 250-m MODIS, with enhanced features, and the approximated daily 30-m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike, to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering flooding events by Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High-resolution images were formed for all days in 2008 between the first cloud-free Landsat scene and the last cloud-free Landsat scene. To refine and validate flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels adjusted to marsh to optimize thresholds for MODIS-derived time series of NDWI. Seasonal fluctuations were adjusted by subtracting the ten-year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and a combination of indices were compared. Flooding persistence mapped with the higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is that it provides improved mapping of the extent and duration of inundation.
Enhancement of breast periphery region in digital mammography
NASA Astrophysics Data System (ADS)
Menegatti Pavan, Ana Luiza; Vacavant, Antoine; Petean Trindade, Andre; Quini, Caio Cesar; Rodrigues de Pina, Diana
2018-03-01
Volumetric breast density has been shown to be one of the strongest risk factors for breast cancer. This metric can be estimated from digital mammograms. During mammography acquisition, the breast is compressed and part of it loses contact with the paddle, resulting in an uncompressed peripheral region with thickness variation. Reliable density estimation in the breast periphery is therefore a problem, which affects the accuracy of volumetric breast density measurement. The aim of this study was to enhance the breast periphery to solve the problem of thickness variation. Herein, we present an automatic algorithm that corrects breast periphery thickness without changing pixel values in the internal breast region. The correction of peripheral pixel values was based on mean values over iso-distance lines from the breast skin-line, using only adipose tissue information. The algorithm automatically detects the periphery region where thickness should be corrected, and a correction factor is applied to the periphery image to enhance the region. We also compare our contribution with two other state-of-the-art algorithms and show its accuracy by means of different quality measures. Experienced radiologists subjectively evaluated the resulting images from the three methods relative to the original mammogram. The mean pixel value, skewness, and kurtosis of the histograms of the three methods were used as comparison metrics. As a result, the methodology presented herein proved to be a good approach to apply before calculating volumetric breast density.
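The iso-distance idea can be illustrated on a toy raster: compute a distance-from-skin-line map, then rescale each iso-distance band so its mean matches the innermost, fully compressed band. This is a heavily simplified sketch (mask peeling as the distance transform, a square synthetic "breast", uniform scaling per band), not the authors' algorithm:

```python
import numpy as np

def isodistance_correction(img, mask):
    """Flatten the periphery thickness falloff using iso-distance bands.

    Distance from the skin line is approximated by repeatedly peeling the
    breast mask one layer at a time; each iso-distance band is then scaled
    so its mean matches the mean of the innermost (fully compressed) band,
    leaving the innermost region unchanged.
    """
    dist = np.zeros(mask.shape, dtype=int)
    current = mask.copy()
    d = 0
    while current.any():
        d += 1
        interior = np.zeros_like(current)
        interior[1:-1, 1:-1] = (current[1:-1, 1:-1]
                                & current[:-2, 1:-1] & current[2:, 1:-1]
                                & current[1:-1, :-2] & current[1:-1, 2:])
        dist[current & ~interior] = d
        current = interior
    out = img.astype(float).copy()
    reference = img[dist == d].mean()          # innermost band: full thickness
    for band in range(1, d):
        m = dist == band
        band_mean = img[m].mean()
        if band_mean > 0:
            out[m] *= reference / band_mean    # per-band correction factor
    return out

# Toy "breast": a square mask whose pixel values fall off towards the edge,
# mimicking the signal loss in the uncompressed periphery.
mask = np.zeros((11, 11), dtype=bool)
mask[1:10, 1:10] = True
yy, xx = np.mgrid[0:11, 0:11]
layer = np.minimum.reduce([yy, 10 - yy, xx, 10 - xx])
img = 10.0 * layer * mask
corrected = isodistance_correction(img, mask)
```

After correction every iso-distance band carries the same mean as the fully compressed interior, which is the property the enhancement aims for.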
NASA Astrophysics Data System (ADS)
Eppenhof, Koen A. J.; Pluim, Josien P. W.
2017-02-01
Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
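The self-supervised training-pair construction described above (a random deformation supplies the ground-truth error map, so no manual labelling is needed) can be sketched without the network itself. The smoothing scheme, amplitude, and nearest-neighbour warp below are illustrative assumptions:

```python
import numpy as np

def random_deformation(shape, amplitude=4.0, rng=None):
    """Smooth random 2-D deformation field (dy, dx components per pixel)."""
    rng = rng if rng is not None else np.random.default_rng()
    field = rng.standard_normal((2,) + shape)
    for _ in range(8):                          # crude smoothing: 3 x 3 box blur
        padded = np.pad(field, ((0, 0), (1, 1), (1, 1)), mode="edge")
        field = sum(padded[:, i:i + shape[0], j:j + shape[1]]
                    for i in range(3) for j in range(3)) / 9.0
    field *= amplitude / (np.abs(field).max() + 1e-12)
    return field

def make_training_pair(img, rng=None):
    """(image, deformed image, per-pixel error map): the ground-truth error
    is the norm of the applied deformation at every pixel."""
    field = random_deformation(img.shape, rng=rng)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(yy + field[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + field[1]).astype(int), 0, w - 1)
    deformed = img[src_y, src_x]                # nearest-neighbour warp
    error_map = np.hypot(field[0], field[1])
    return img, deformed, error_map

rng = np.random.default_rng(3)
img = rng.random((32, 32))
original, deformed, err = make_training_pair(img, rng=rng)
```

A patch-based regressor would then be trained to predict `err` from patches of `original` and `deformed`, mirroring the supervised setup in the abstract.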
Digital radiology using active matrix readout: amplified pixel detector array for fluoroscopy.
Matsuura, N; Zhao, W; Huang, Z; Rowlands, J A
1999-05-01
Active matrix array technology has made possible the concept of flat panel imaging systems for radiography. In the conventional approach a thin-film circuit built on glass contains the necessary switching components (thin-film transistors or TFTs) to read out an image formed in either a phosphor or photoconductor layer. Extension of this concept to real time imaging--fluoroscopy--has had problems due to the very low noise required. A new design strategy for fluoroscopic active matrix flat panel detectors has therefore been investigated theoretically. In this approach, the active matrix has integrated thin-film amplifiers and readout electronics at each pixel and is called the amplified pixel detector array (APDA). Each amplified pixel consists of three thin-film transistors: an amplifier, a readout, and a reset TFT. The performance of the APDA approach compared to the conventional active matrix was investigated for two semiconductors commonly used to construct active matrix arrays--hydrogenated amorphous silicon and polycrystalline silicon. The results showed that with amplification close to the pixel, the noise from the external charge preamplifiers becomes insignificant. The thermal and flicker noise of the readout and the amplifying TFTs at the pixel become the dominant sources of noise. The magnitude of these noise sources is strongly dependent on the TFT geometry and its fabrication process. Both of these could be optimized to make the APDA active matrix operate at lower noise levels than is possible with the conventional approach. However, the APDA cannot be made to operate ideally (i.e., have noise limited only by the amount of radiation used) at the lowest exposure rate required in medical fluoroscopy.
Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates
NASA Astrophysics Data System (ADS)
Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.
2010-04-01
Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwell times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time to an object extends beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser. A cyclic, fast-Fourier-transform-based analysis algorithm is used to search for the pattern within the return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented. Numerical simulations are performed to investigate the relative merits of the technique.
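The pattern-search principle, recovering an unambiguous delay by cyclically correlating the photon-return histogram against the pseudo-random trigger bitstream via the FFT, can be demonstrated on simulated data. Pattern length, signal strength, and background rate below are arbitrary assumptions, far smaller than the GHz-rate system described:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4096
pattern = rng.integers(0, 2, n).astype(float)   # pseudo-random trigger bitstream

# Simulated photon-count histogram: the trigger pattern delayed (cyclically)
# by the round-trip time, plus Poisson background counts.
true_delay = 1234
returns = np.roll(pattern, true_delay) * 5.0 + rng.poisson(1.0, n)

# Circular cross-correlation via the FFT: the peak index recovers the delay
# even when it far exceeds the interval between individual trigger pulses.
corr = np.fft.ifft(np.fft.fft(returns) * np.conj(np.fft.fft(pattern))).real
estimated_delay = int(np.argmax(corr))
```

Because the pseudo-random pattern has a sharply peaked autocorrelation, the delay is recovered uniquely over the whole pattern period, which is what extends the unambiguous range to kilometres.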
NASA Technical Reports Server (NTRS)
Clark, John M.; Schaeffer, Blake A.; Darling, John A.; Urquhart, Erin A.; Johnston, John M.; Ignatius, Amber R.; Myer, Mark H.; Loftin, Keith A.; Werdell, P. Jeremy; Stumpf, Richard P.
2017-01-01
Cyanobacterial harmful algal blooms (cyanoHAB) cause extensive problems in lakes worldwide, including human and ecological health risks, anoxia and fish kills, and taste and odor problems. CyanoHABs are a particular concern in both recreational waters and drinking water sources because of their dense biomass and the risk of exposure to toxins. Successful cyanoHAB assessment using satellites may provide an indicator for human and ecological health protection. In this study, methods were developed to assess the utility of satellite technology for detecting cyanoHAB frequency of occurrence at locations of potential management interest. The European Space Agency's MEdium Resolution Imaging Spectrometer (MERIS) was evaluated to prepare for the equivalent series of Sentinel-3 Ocean and Land Colour Imagers (OLCI) launched in 2016 as part of the Copernicus program. Based on the 2012 National Lakes Assessment site evaluation guidelines and National Hydrography Dataset, the continental United States contains 275,897 lakes and reservoirs greater than 1 ha in area. Results from this study show that 5.6% of waterbodies were resolvable by satellites with 300 m single-pixel resolution and 0.7% of waterbodies were resolvable when a three by three pixel (3 x 3-pixel) array was applied based on minimum Euclidean distance from shore. Satellite data were spatially joined to U.S. public water surface intake (PWSI) locations, where single-pixel resolution resolved 57% of the PWSI locations and a 3 x 3-pixel array resolved 33% of the PWSI locations. Recreational and drinking water sources in Florida and Ohio were ranked from 2008 through 2011 by cyanoHAB frequency above the World Health Organization's (WHO) high threshold for risk of 100,000 cells/mL. The ranking identified waterbodies with values above the WHO high threshold, where Lake Apopka, FL (99.1%) and Grand Lake St. Marys, OH (83%) had the highest observed bloom frequencies per region. 
The method presented here may indicate locations with high exposure to cyanoHABs and therefore can be used to assist in prioritizing management resources and actions for recreational and drinking water sources.
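One plausible reading of the resolvability criterion, that a 3 x 3 array of 300 m pixels fits only if some point of the waterbody is far enough from shore, can be sketched on small rasterized lake masks. The distance computation and the half-block-width threshold are assumptions for illustration, not the study's GIS procedure:

```python
import numpy as np

def max_distance_from_shore(lake_mask, cell_size):
    """Largest Euclidean distance (m) from any water cell to the shoreline.

    Brute force over boundary cells; adequate for small rasters.
    """
    water = np.argwhere(lake_mask)
    water_set = set(map(tuple, water))
    shore = np.array([p for p in water
                      if any((p[0] + dy, p[1] + dx) not in water_set
                             for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)))])
    dists = np.sqrt(((water[:, None, :] - shore[None, :, :]) ** 2).sum(-1)).min(1)
    return dists.max() * cell_size

def resolvable(lake_mask, cell_size, pixel=300.0, array_side=3):
    """Crude criterion: an array_side x array_side block of sensor pixels fits
    if some point of the waterbody lies at least half the block width from shore."""
    return max_distance_from_shore(lake_mask, cell_size) >= array_side * pixel / 2.0

big = np.ones((40, 40), dtype=bool)     # a 4 km x 4 km lake on a 100 m grid
small = np.ones((7, 7), dtype=bool)     # a 700 m x 700 m pond
```

Under this criterion the large lake supports a 3 x 3 array, while the small pond supports only a single 300 m pixel, mirroring why so few waterbodies pass the stricter 3 x 3 test.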
Clark, John M.; Schaeffer, Blake A.; Darling, John A.; Urquhart, Erin A.; Johnston, John M.; Ignatius, Amber R.; Myer, Mark H.; Loftin, Keith A.; Werdell, P. Jeremy; Stumpf, Richard P.
2017-01-01
Cyanobacterial harmful algal blooms (cyanoHAB) cause extensive problems in lakes worldwide, including human and ecological health risks, anoxia and fish kills, and taste and odor problems. CyanoHABs are a particular concern in both recreational waters and drinking water sources because of their dense biomass and the risk of exposure to toxins. Successful cyanoHAB assessment using satellites may provide an indicator for human and ecological health protection. In this study, methods were developed to assess the utility of satellite technology for detecting cyanoHAB frequency of occurrence at locations of potential management interest. The European Space Agency's MEdium Resolution Imaging Spectrometer (MERIS) was evaluated to prepare for the equivalent series of Sentinel-3 Ocean and Land Colour Imagers (OLCI) launched in 2016 as part of the Copernicus program. Based on the 2012 National Lakes Assessment site evaluation guidelines and National Hydrography Dataset, the continental United States contains 275,897 lakes and reservoirs >1 ha in area. Results from this study show that 5.6% of waterbodies were resolvable by satellites with 300 m single-pixel resolution and 0.7% of waterbodies were resolvable when a three by three pixel (3 × 3-pixel) array was applied based on minimum Euclidean distance from shore. Satellite data were spatially joined to U.S. public water surface intake (PWSI) locations, where single-pixel resolution resolved 57% of the PWSI locations and a 3 × 3-pixel array resolved 33% of the PWSI locations. Recreational and drinking water sources in Florida and Ohio were ranked from 2008 through 2011 by cyanoHAB frequency above the World Health Organization’s (WHO) high threshold for risk of 100,000 cells mL−1. The ranking identified waterbodies with values above the WHO high threshold, where Lake Apopka, FL (99.1%) and Grand Lake St. Marys, OH (83%) had the highest observed bloom frequencies per region. 
The method presented here may indicate locations with high exposure to cyanoHABs and therefore can be used to assist in prioritizing management resources and actions for recreational and drinking water sources.
Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J.
2015-01-01
Abstract. Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100 μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200 μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. 
Current experimental breast CT systems typically use detectors with a pixel size of 194 μm, with 2×2 binning during the acquisition giving an effective pixel size of 388 μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors. PMID:26158095
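The "fast, practical" quadrant-identification idea, inferring which quarter of a pixel an x-ray struck from how charge spreads to neighbouring pixels, can be sketched with a toy decision rule. The 3 x 3 charge values and the simple sign test below are illustrative assumptions, not the paper's estimator or its CZT charge-transport model:

```python
def interaction_quadrant(charges):
    """Quadrant of the central pixel in which an x-ray interaction occurred.

    charges: 3 x 3 nested list of induced-charge values centred on the pixel
    with the largest signal. Charge shared with the top vs bottom and left
    vs right neighbours indicates the side of the pixel centre on which the
    interaction took place.
    """
    top = sum(charges[0])
    bottom = sum(charges[2])
    left = sum(row[0] for row in charges)
    right = sum(row[2] for row in charges)
    vert = "top" if top >= bottom else "bottom"
    horiz = "left" if left >= right else "right"
    return vert + "-" + horiz

# Interaction in the upper-left quadrant of the centre pixel: more charge
# spills into the top and left neighbours than the bottom and right ones.
charges = [[0.04, 0.08, 0.01],
           [0.07, 0.70, 0.02],
           [0.02, 0.05, 0.01]]
quadrant = interaction_quadrant(charges)
```

Such a comparator maps naturally onto simple readout logic, which is why a quadrant-level estimate is feasible in an application-specific integrated circuit even when full maximum-likelihood estimation is not.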
Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J
2015-04-01
Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of [Formula: see text]. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a [Formula: see text] array of [Formula: see text] pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent [Formula: see text]-escape photons. 
Current experimental breast CT systems typically use detectors with a pixel size of [Formula: see text], with [Formula: see text] binning during the acquisition giving an effective pixel size of [Formula: see text]. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors.
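The fast quadrant-identification idea can be illustrated with a charge-weighted centroid over the 3 × 3 neighbourhood of the hit pixel. This is a hedged sketch, not the ASIC algorithm evaluated in the study; the function name, the 3 × 3 charge model, and the example numbers are illustrative assumptions.

```python
import numpy as np

def quadrant_estimate(charges):
    """Estimate the sub-pixel quadrant of an x-ray interaction from charge
    shared with the 8 neighbours of the pixel with maximum signal.

    `charges` is a 3x3 array of collected charge centred on the hit pixel,
    with row 0 the bottom row. Returns (qx, qy) in {-1, +1}: the sign of
    the charge-weighted centroid, i.e. which quadrant of the centre pixel
    the interaction occurred in.
    """
    charges = np.asarray(charges, dtype=float)
    total = charges.sum()
    offsets = np.array([-1.0, 0.0, 1.0])
    cx = (charges.sum(axis=0) * offsets).sum() / total  # column-weighted x
    cy = (charges.sum(axis=1) * offsets).sum() / total  # row-weighted y
    return (1 if cx >= 0 else -1, 1 if cy >= 0 else -1)

# An interaction in the upper-right quadrant shares more charge with the
# right and top neighbours (illustrative numbers).
shared = np.array([[0.00, 0.00, 0.00],   # bottom row
                   [0.00, 0.80, 0.10],   # middle row, right neighbour hit
                   [0.00, 0.05, 0.02]])  # top row
```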
Design of a High-resolution Optoelectronic Retinal Prosthesis
NASA Astrophysics Data System (ADS)
Palanker, Daniel
2005-03-01
It has been demonstrated that electrical stimulation of the retina can produce visual percepts in blind patients suffering from macular degeneration and retinitis pigmentosa. So far retinal implants have had just a few electrodes, whereas at least several thousand pixels would be required for any functional restoration of sight. We will discuss physical limitations on the number of stimulating electrodes and on delivery of information and power to the retinal implant. Using a model of extracellular stimulation we derive the threshold values of current and voltage as a function of electrode size and distance to the target cell. Electrolysis, tissue heating, and cross-talk between neighboring electrodes depend critically on separation between electrodes and cells, thus strongly limiting the pixel size and spacing. Minimal pixel density required for 20/80 visual acuity (2500 pixels/mm2, pixel size 20 um) cannot be achieved unless the target neurons are within 7 um of the electrodes. At a separation of 50 um, the density drops to 44 pixels/mm2, and at 100 um it is further reduced to 10 pixels/mm2. We will present designs of subretinal implants that provide close proximity of electrodes to cells using migration of retinal cells to target areas. Two basic implant geometries will be described: perforated membranes and protruding electrode arrays. In addition, we will discuss delivery of information to the implant that allows for natural eye scanning of the scene, rather than scanning with a head-mounted camera. It operates similarly to ``virtual reality'' imaging devices where an image from a video camera is projected by a goggle-mounted collimated infrared LED-LCD display onto the retina, activating an array of powered photodiodes in the retinal implant. Optical delivery of visual information to the implant allows for flexible control of the image processing algorithms and stimulation parameters. 
In summary, we will describe solutions to some of the major problems facing the realization of a functional retinal implant: high pixel density, proximity of electrodes to target cells, natural eye scanning capability, and real-time image processing adjustable to retinal architecture.
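The quoted pixel densities follow directly from the pixel pitch, since for square pixels density = 1/pitch². A quick check of the 20/80-acuity figure (the function name is illustrative):

```python
def pixel_density_per_mm2(pitch_um):
    """Pixels per square millimetre for a square pixel of the given
    pitch in micrometres: density = 1 / pitch^2."""
    pitch_mm = pitch_um / 1000.0
    return 1.0 / pitch_mm ** 2

# 20 um pixels give the 2500 pixels/mm^2 required for 20/80 acuity.
density = pixel_density_per_mm2(20)
```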
Contact CMOS imaging of gaseous oxygen sensor array
Daivasagaya, Daisy S.; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C.; Chodavarapu, Vamsy P.; Bright, Frank V.
2014-01-01
We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol–gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors. PMID:24493909
Edge detection for optical synthetic aperture based on deep neural network
NASA Astrophysics Data System (ADS)
Tan, Wenjie; Hui, Mei; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin
2017-09-01
Synthetic aperture optics systems can meet the demand that next-generation space telescopes be lighter, larger, and foldable. However, the boundaries of segmented aperture systems are much more complex than those of a whole aperture. More edge regions mean more imaging edge pixels, which are often mixed and discretized. In order to achieve high-resolution imaging, it is necessary to identify the gaps between the sub-apertures and the edges of the projected fringes. In this work, we introduce a deep neural network algorithm for edge detection in optical synthetic aperture imaging. According to the detection needs, we constructed image sets from experiments and simulations. Based on MatConvNet, a MATLAB toolbox, we ran the neural network, trained it on the training image set, and tested its performance on the validation set. Training was stopped when the test error on the validation set stopped declining. Given an input image, the trained multi-hidden-layer network scans it pixel by pixel, taking the neighborhood around each pixel as its input block. The network output judges whether the center of the input block lies on a fringe edge. We experimented with various pre-processing and post-processing techniques to reveal their influence on edge detection performance. Compared with traditional algorithms and their improvements, our method makes its decision over a much larger neighborhood, and is therefore more global and comprehensive. Experiments on more than 2,000 images are also given to prove that our method outperforms classical algorithms in edge detection on optical images.
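The pixel-by-pixel scanning scheme can be sketched as follows, with a randomly initialised multi-layer perceptron standing in for the trained MatConvNet model. All names, sizes, and weights here are illustrative assumptions, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(patch_size, hidden=16):
    """Randomly initialised multi-hidden-layer classifier (a stand-in
    for the trained network; weights would normally come from training)."""
    d = patch_size * patch_size
    return {
        "W1": rng.standard_normal((d, hidden)) * 0.1, "b1": np.zeros(hidden),
        "W2": rng.standard_normal((hidden, 1)) * 0.1, "b2": np.zeros(1),
    }

def is_edge(net, patch):
    """Binary judgment: does the centre of this patch lie on a fringe edge?"""
    h = np.tanh(patch.ravel() @ net["W1"] + net["b1"])
    score = h @ net["W2"] + net["b2"]
    return score.item() > 0.0

def scan_image(net, image, patch_size=9):
    """Scan pixel by pixel, feeding the neighbourhood around each pixel to
    the network; border pixels without a full neighbourhood are skipped."""
    r = patch_size // 2
    h, w = image.shape
    edge_map = np.zeros((h, w), dtype=bool)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = image[y - r:y + r + 1, x - r:x + r + 1]
            edge_map[y, x] = is_edge(net, patch)
    return edge_map

net = init_mlp(9)
edge_map = scan_image(net, rng.random((32, 32)))
```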
Qualitative and quantitative ultrasound attributes of maternal-foetal structures in pregnant ewes.
da Silva, Pda; Uscategui, Rar; Santos, Vjc; Taira, A R; Mariano, Rsg; Rodrigues, Mgk; Simões, Apr; Maronezi, M C; Avante, M L; Vicente, Wrr; Feliciano, Mar
2018-06-01
The aim of this study was to examine foetal organs and placental tissue to establish a correlation between the changes in the composition of these structures associated with their maturation and the ultrasonographic characteristics of the images. Twenty-four pregnant ewes were included in the study. Ultrasonography assessments were performed in B-mode, from the ninth gestational week until parturition. The lungs, liver and kidneys of foetuses and placentomes were located in transverse and longitudinal sections to evaluate the echogenicity (hypoechoic, isoechoic, hyperechoic or mixed) and echotexture (homogeneous and heterogeneous) of the tissues of interest. For quantitative evaluation of the ultrasonographic characteristics, a computerized image analysis was performed using commercial software (Image ProPlus®). Mean numerical pixel values (NPVs), pixel heterogeneity (standard deviation of NPVs) and minimum and maximum pixel values were measured by selecting five circular regions of interest in each assessed tissue. All evaluated tissues presented significant variations in the NPVs, except for the liver. Pulmonary NPVmean, NPVmin and NPVmax decreased gradually through the gestational weeks. The renal parameters decreased gradually with the advancement of the gestational weeks until the 17th week and later stabilized. The placentome NPVmean, NPVmin and NPVmax decreased gradually over the course of the weeks. The hepatic tissue did not show echogenicity and echotexture variations and presented medium echogenicity and homogeneous echotexture throughout the experimental period. It was concluded that numerical pixel evaluation of maternal-foetal tissues was applicable and allowed the identification of quantitative ultrasonographic characteristics showing changes in echogenicity related to gestational age. © 2018 Blackwell Verlag GmbH.
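The NPV statistics described above can be computed by sampling a circular region of interest; a minimal sketch (function and variable names are illustrative, not tied to the commercial software used in the study):

```python
import numpy as np

def roi_stats(image, cx, cy, radius):
    """Numerical pixel values inside a circular region of interest:
    mean (NPVmean), heterogeneity (standard deviation of NPVs),
    minimum (NPVmin) and maximum (NPVmax)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    vals = image[mask]
    return vals.mean(), vals.std(), vals.min(), vals.max()

# The study sampled five such circular ROIs per assessed tissue.
img = np.full((64, 64), 100.0)
mean, sd, vmin, vmax = roi_stats(img, 32, 32, 10)
```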
Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A
2012-09-01
Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.
Evaluating an image-fusion algorithm with synthetic-image-generation tools
NASA Astrophysics Data System (ADS)
Gross, Harry N.; Schott, John R.
1996-06-01
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. 
Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
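The three mixing models differ only in their constraints. Below is a sketch of the unconstrained and sum-to-one (partially constrained) least-squares solutions; the fully constrained case additionally requires fractions in [0, 1] and is typically solved with quadratic programming, which is omitted here. The endmember values are illustrative.

```python
import numpy as np

def unmix_unconstrained(E, x):
    """Unconstrained least-squares abundance fractions.
    E is (bands x endmembers), x is the (bands,) mixed-pixel spectrum."""
    f, *_ = np.linalg.lstsq(E, x, rcond=None)
    return f

def unmix_sum_to_one(E, x):
    """Partially constrained unmixing: fractions forced to sum to one
    via a Lagrange-multiplier correction of the unconstrained solution."""
    G_inv = np.linalg.inv(E.T @ E)
    f_u = G_inv @ E.T @ x
    ones = np.ones(E.shape[1])
    lam = (1.0 - ones @ f_u) / (ones @ G_inv @ ones)
    return f_u + lam * (G_inv @ ones)

# Two endmembers mixed 30/70: both estimators recover the fractions.
E = np.array([[0.1, 0.9],
              [0.4, 0.6],
              [0.8, 0.2]])
x = E @ np.array([0.3, 0.7])
f = unmix_sum_to_one(E, x)
```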
Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho
2016-03-11
This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines the correlation and the least-square image matching so that the sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility.
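Correlation-based matching achieves sub-pixel accuracy by refining an integer correlation peak. A simplified 1-D sketch is below (normalised cross-correlation with parabolic peak interpolation, rather than the paper's full combination of correlation and least-square image matching); the signals are illustrative.

```python
import numpy as np

def subpixel_shift(signal, template):
    """Locate `template` in `signal` to sub-pixel accuracy: find the
    integer peak of the normalised cross-correlation, then refine it by
    fitting a parabola through the peak and its two neighbours."""
    n, m = len(signal), len(template)
    t = template - template.mean()
    scores = np.empty(n - m + 1)
    for i in range(n - m + 1):
        w = signal[i:i + m] - signal[i:i + m].mean()
        scores[i] = (w @ t) / (np.linalg.norm(w) * np.linalg.norm(t) + 1e-12)
    k = int(np.argmax(scores))
    if 0 < k < len(scores) - 1:
        y0, y1, y2 = scores[k - 1], scores[k], scores[k + 1]
        k += 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # parabola vertex offset
    return k

# A Gaussian target sampled at a non-integer offset of 23.4 samples.
dx = 0.05
profile = lambda u: np.exp(-((u - 3.0) ** 2))
signal = profile(np.arange(200) * dx)
template = profile((23.4 + np.arange(60)) * dx)
estimate = subpixel_shift(signal, template)
```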
System and Method for Multi-Wavelength Optical Signal Detection
NASA Technical Reports Server (NTRS)
McGlone, Thomas D. (Inventor)
2017-01-01
The system and method for multi-wavelength optical signal detection enables the detection of optical signal levels significantly below those processed at the discrete circuit level by the use of mixed-signal processing methods implemented with integrated circuit technologies. The present invention is configured to detect and process small signals, which enables the reduction of the optical power required to stimulate detection networks, and lowers the required laser power to make specific measurements. The present invention provides an adaptation of active pixel networks combined with mixed-signal processing methods to provide an integer representation of the received signal as an output. The present invention also provides multi-wavelength laser detection circuits for use in various systems, such as a differential absorption light detection and ranging system.
Masking Strategies for Image Manifolds.
Dadkhahi, Hamid; Duarte, Marco F
2016-07-07
We consider the problem of selecting an optimal mask for an image manifold, i.e., choosing a subset of the pixels of the image that preserves the manifold's geometric structure present in the original data. Such masking implements a form of compressive sensing through emerging imaging sensor platforms for which the power expense grows with the number of pixels acquired. Our goal is for the manifold learned from masked images to resemble its full image counterpart as closely as possible. More precisely, we show that one can indeed accurately learn an image manifold without having to consider a large majority of the image pixels. In doing so, we consider two masking methods that preserve the local and global geometric structure of the manifold, respectively. In each case, the process of finding the optimal masking pattern can be cast as a binary integer program, which is computationally expensive but can be approximated by a fast greedy algorithm. Numerical experiments show that the relevant manifold structure is preserved through the data-dependent masking process, even for modest mask sizes.
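The greedy approximation can be sketched as follows. Here each pixel's score is its total contribution to all pairwise squared distances, a simplified stand-in for the paper's objective; with this separable score the greedy selection reduces to taking the top-k pixels.

```python
import numpy as np

def greedy_mask(images, k):
    """Greedy sketch of mask selection. `images` is (n_images, n_pixels).
    Score each pixel by its total contribution to all pairwise squared
    distances between images, then keep the k highest-scoring pixels as
    an approximation to the mask that best preserves pairwise geometry
    (a stand-in for the binary integer program in the paper)."""
    n, p = images.shape
    score = np.zeros(p)
    for i in range(n):
        for j in range(i + 1, n):
            score += (images[i] - images[j]) ** 2
    return np.argsort(score)[::-1][:k]

# Only pixels 0 and 1 vary across these images, so they form the mask.
images = np.zeros((3, 5))
images[0, 0] = 1.0
images[1, 1] = 2.0
mask = greedy_mask(images, 2)
```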
Exploring of PST-TBPM in Monitoring Bridge Dynamic Deflection in Vibration
NASA Astrophysics Data System (ADS)
Zhang, Guojian; Liu, Shengzhen; Zhao, Tonglong; Yu, Chengxin
2018-01-01
This study adopts digital photography to monitor bridge dynamic deflection in vibration. The digital photography used in this study is based on PST-TBPM (photographing scale transformation-time baseline parallax method). First, a digital camera captures the static bridge as a zero image. Then, the camera captures the vibrating bridge every three seconds as successive images. Based on the reference system, PST-TBPM is used to process the images to obtain the bridge's dynamic deflection in vibration. Results show that the average measurement accuracies are 0.615 pixels and 0.79 pixels in the X and Z directions. The maximal deflection of the bridge is 7.14 pixels. PST-TBPM is effective in solving the problem of the photographing direction not being perpendicular to the bridge. Digital photography as used in this study can assess bridge health by monitoring the dynamic deflection in vibration. The deformation trend curves depicted over time can also warn of possible dangers.
4K x 2K pixel color video pickup system
NASA Astrophysics Data System (ADS)
Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou
1998-12-01
This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a high enough output data rate for super-high-definition images. The present study is an attempt to fill the gap in this respect. The authors solve the problem with a new imaging method in which four HDTV sensors are attached to a new color separation optics so that their pixel sample patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no image sensors exist for such images.
NASA Astrophysics Data System (ADS)
Bhardwaj, Rupali
2018-03-01
Reversible data hiding embeds a secret message in a cover image in such a manner that, upon extraction of the secret message, both the cover image and the secret message are recovered without error. The goal of most reversible data hiding algorithms is to improve the embedding rate and enhance the visual quality of the stego image. An improved encrypted-domain-based reversible data hiding algorithm to embed two binary bits in each gray pixel of the original cover image with minimum distortion of stego-pixels is employed in this paper. Highlights of the proposed algorithm are minimum distortion of pixel values, elimination of the underflow and overflow problems, and equivalence of the stego image and cover image with a PSNR of ∞ (for the Lena, Goldhill, and Barbara images). The experimental outcomes reveal that, in terms of average PSNR and embedding rate for natural images, the proposed algorithm performs better than other conventional ones.
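The paper's encrypted-domain algorithm is not specified here in enough detail to reproduce. As a classical illustration of the reversibility property, the difference-expansion scheme (due to Tian, not this paper) hides one bit per pixel pair and recovers the original pixels exactly; overflow/underflow handling is omitted for brevity.

```python
def embed_bit(x, y, b):
    """Difference expansion: hide bit b in the pixel pair (x, y).
    The integer average is invariant; the expanded difference carries b."""
    l = (x + y) // 2          # integer average (preserved by embedding)
    h = x - y                 # difference
    h2 = 2 * h + b            # expanded difference carries the hidden bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_bit(x2, y2):
    """Recover the hidden bit and the original pixel pair exactly."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    b = h2 & 1
    h = h2 >> 1
    return b, l + (h + 1) // 2, l - h // 2
```

Note that the expanded difference roughly doubles |x − y|, which is why practical schemes add a location map to skip pairs that would overflow.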
Memory color assisted illuminant estimation through pixel clustering
NASA Astrophysics Data System (ADS)
Zhang, Heng; Quan, Shuxue
2010-01-01
The under-constrained nature of illuminant estimation means that certain assumptions, such as the gray world theory, are needed to resolve the problem. Including more constraints in this process may help explore the useful information in an image and improve the accuracy of the estimated illuminant, provided that the constraints hold. Based on the observation that most personal images contain one or more of the following categories: neutral objects, human beings, sky, and plants, we propose a method for illuminant estimation through the clustering of pixels of gray and three dominant memory colors: skin tone, sky blue, and foliage green. Analysis shows that samples of the above colors cluster around small areas under different illuminants and their characteristics can be used to effectively detect pixels falling into each of the categories. The algorithm requires knowledge of the spectral sensitivity response of the camera, and a spectral database consisting of the CIE standard illuminants and reflectance or radiance data of samples of the above colors.
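The gray world assumption mentioned above can be stated in a few lines; the memory-color clustering itself depends on the camera's spectral sensitivity database and is not reproduced here.

```python
import numpy as np

def gray_world_illuminant(image):
    """Gray-world estimate: the average scene reflectance is assumed
    achromatic, so the normalised mean RGB vector estimates the
    illuminant colour. `image` has shape (h, w, 3)."""
    mean_rgb = image.reshape(-1, 3).mean(axis=0)
    return mean_rgb / np.linalg.norm(mean_rgb)

# A neutral scene under a reddish light yields a reddish estimate.
scene = np.ones((4, 4, 3)) * np.array([2.0, 1.0, 1.0])
estimate = gray_world_illuminant(scene)
```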
Jennifer L. R. Jensen; Karen S. Humes; Andrew T. Hudak; Lee A. Vierling; Eric Delmelle
2011-01-01
This study presents an alternative assessment of the MODIS LAI product for a 58,000 ha evergreen needleleaf forest located in the western Rocky Mountain range in northern Idaho by using lidar data to model (R² = 0.86, RMSE = 0.76) and map LAI at higher resolution across a large number of MODIS pixels in their entirety. Moderate resolution (30 m) lidar-based LAI estimates...
Giewekemeyer, Klaus; Philipp, Hugh T.; Wilke, Robin N.; Aquila, Andrew; Osterhoff, Markus; Tate, Mark W.; Shanks, Katherine S.; Zozulya, Alexey V.; Salditt, Tim; Gruner, Sol M.; Mancuso, Adrian P.
2014-01-01
Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single photon detection, detecting fluxes exceeding 1 × 10⁸ 8-keV photons pixel⁻¹ s⁻¹, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10¹⁰ photons µm⁻² s⁻¹ within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with a very modest attenuation, while ‘still’ images of the empty beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and CDI methodology for reconstruction of non-sensitive detector regions, partially also extending the active detector area, are described. PMID:25178008
NASA Astrophysics Data System (ADS)
Beltrame, Francesco; Diaspro, Alberto; Fato, Marco; Martin, I.; Ramoino, Paola; Sobel, Irwin E.
1995-03-01
Confocal microscopy systems can be linked to 3D data oriented devices for the interactive navigation of the operator through a 3D object space. Such environments are sometimes called `virtual reality' or `augmented reality' systems. We consider optical confocal laser scanning microscopy images, in fluorescence with various excitations and emissions, and versus time. The aim of our study has been the quantitative spatial analysis of confocal data using the false-color composition technique. Starting from three 2D confocal fluorescent images at the same slice location in a given biological specimen, a new single-image representation of all three parameters has been generated by the false-color technique on an HP 9000/735 workstation connected to the confocal microscope. The color composite result of the mapping of the three parameters is displayed at a resolution of 24 bits per pixel. The operator may independently vary the mix of each of the three components in the false-color composite via three (R, G, B) mixing sliders. Furthermore, by using the pixel data in the three fluorescent component images, a 3D space containing the density distribution of these three parameters has been constructed. The histogram is displayed in stereo: it can be used for clustering purposes by the operator, through an original thresholding algorithm.
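The three-channel false-color composition with mixing sliders amounts to weighting each component image before stacking into RGB. A minimal sketch (the normalisation choice is an assumption for display purposes):

```python
import numpy as np

def false_color(ch_r, ch_g, ch_b, weights=(1.0, 1.0, 1.0)):
    """Compose three single-parameter images into one RGB false-colour
    image; `weights` plays the role of the three (R, G, B) mixing
    sliders. Output is scaled to [0, 1] for display."""
    stacked = np.stack([w * c for w, c in zip(weights, (ch_r, ch_g, ch_b))],
                       axis=-1)
    return np.clip(stacked / max(stacked.max(), 1e-12), 0.0, 1.0)

# With the green and blue sliders at zero, only the red channel survives.
red = np.ones((2, 2))
zero = np.zeros((2, 2))
composite = false_color(red, zero, zero)
```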
Optimal Control of Evolution Mixed Variational Inclusions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alduncin, Gonzalo, E-mail: alduncin@geofisica.unam.mx
2013-12-15
Optimal control problems of primal and dual evolution mixed variational inclusions, in reflexive Banach spaces, are studied. The solvability analysis of the mixed state systems is established via duality principles. The optimality analysis is performed in terms of perturbation conjugate duality methods, and proximation penalty-duality algorithms to mixed optimality conditions are further presented. Applications to nonlinear diffusion constrained problems as well as quasistatic elastoviscoplastic bilateral contact problems exemplify the theory.
Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.
Haoliang Yuan; Yuan Yan Tang
2017-04-01
Classification of the pixels in a hyperspectral image (HSI) is an important task and has been popularly applied in many practical applications. Its major challenge is high dimensionality combined with small sample size. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, which is formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed method outperforms many SL methods.
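The RLR framework that motivates SSSLR has a closed-form projection. A sketch is below; this is plain ridge regression for subspace learning, not the proposed SSSLR itself, and the label encoding is an assumption.

```python
import numpy as np

def rlr_projection(X, Y, lam=1.0):
    """Ridge linear regression for subspace learning: learn a projection
    W minimising ||XW - Y||^2 + lam * ||W||^2, where X is (n_samples, d)
    pixel spectra and Y is (n_samples, c), e.g. one-hot class labels.
    Pixels are then projected into the c-dimensional space as X @ W."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))   # 20 pixels, 5 spectral bands
Y = rng.standard_normal((20, 3))   # 3 target dimensions
W = rlr_projection(X, Y, lam=0.5)
```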
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the demand for high-quality digital images. For example, digital still cameras now have several megapixels. Although video cameras have higher frame rates, their resolution is lower than that of still cameras. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3-CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the utility of the camera.
Zhang, Yue; Zou, Huanxin; Luo, Tiancheng; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng
2016-01-01
The superpixel segmentation algorithm, as a preprocessing technique, should show good performance in fast segmentation speed, accurate boundary adherence and homogeneous regularity. A fast superpixel segmentation algorithm by iterative edge refinement (IER) works well on optical images. However, it may generate poor superpixels for polarimetric synthetic aperture radar (PolSAR) images due to the influence of strong speckle noise and many small-sized or slim regions. To solve these problems, we utilized a fast revised Wishart distance instead of Euclidean distance in the local relabeling of unstable pixels, and in the initialization step initialized the unstable pixel set with all pixels in place of only the initial grid edge pixels. Then, postprocessing with the dissimilarity measure is employed to remove the generated small isolated regions as well as to preserve strong point targets. Finally, the superiority of the proposed algorithm is validated with extensive experiments on four simulated and two real-world PolSAR images from Experimental Synthetic Aperture Radar (ESAR) and Airborne Synthetic Aperture Radar (AirSAR) data sets, which demonstrate that the proposed method shows better performance with respect to several commonly used evaluation measures, even with about nine times higher computational efficiency, as well as fine boundary adherence and strong point targets preservation, compared with three state-of-the-art methods. PMID:27754385
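Wishart-based dissimilarities for PolSAR compare pixel covariance (or coherency) matrices rather than intensity vectors. The sketch below uses the commonly cited symmetric revised Wishart form; the exact "fast revised Wishart distance" of the paper may differ, and the matrices are illustrative.

```python
import numpy as np

def sym_revised_wishart(C1, C2):
    """Symmetric revised Wishart dissimilarity between two polarimetric
    covariance matrices of dimension q (q = 3 for full-pol data):
    d = 0.5 * (Tr(C1^-1 C2) + Tr(C2^-1 C1)) - q, which is zero when
    the two matrices are equal."""
    q = C1.shape[0]
    return 0.5 * (np.trace(np.linalg.inv(C1) @ C2)
                  + np.trace(np.linalg.inv(C2) @ C1)).real - q

# A symmetric positive-definite example matrix.
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.5, 0.2],
              [0.0, 0.2, 1.0]])
C = A @ A.T + np.eye(3)
```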
Line fitting based feature extraction for object recognition
NASA Astrophysics Data System (ADS)
Li, Bing
2014-06-01
Image feature extraction plays a significant role in image-based pattern recognition applications. In this paper, we propose a new approach to generate hierarchical features. This approach applies line fitting to adaptively divide regions based upon the amount of information, and creates line-fitting features for each subsequent region. It overcomes the feature-wasting drawback of the wavelet-based approach and demonstrates high performance in real applications. For gray-scale images, we propose a diffusion-equation approach to map information-rich pixels (pixels near edges and ridge pixels) to high values, and pixels in homogeneous regions to small values near zero, forming energy-map images. After the energy-map images are generated, we propose a line-fitting approach to divide regions recursively and create features for each region simultaneously. This new feature extraction approach is similar to wavelet-based hierarchical feature extraction, in which high-layer features represent global characteristics and low-layer features represent local characteristics. However, the new approach uses line fitting to adaptively focus on information-rich regions, so that we avoid the feature-waste problems of the wavelet approach in homogeneous regions. Finally, experiments on handwriting word recognition show that the new method provides higher performance than the regular handwriting word recognition approach.
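The energy-map idea can be illustrated with a simple linear diffusion pass followed by a gradient magnitude, which sends pixels near edges to high values and homogeneous regions toward zero. This is only a crude stand-in for the paper's diffusion-equation formulation; the function name and parameters are illustrative, and the boundaries are periodic because of `np.roll`:

```python
import numpy as np

def energy_map(img, iters=10, lam=0.2):
    """Crude energy map: gradient magnitude of a diffusion-smoothed
    image. Pixels near edges get high values, homogeneous regions
    values near zero (a simple stand-in for the paper's scheme)."""
    u = img.astype(float)
    for _ in range(iters):  # explicit linear diffusion (heat equation)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += lam * lap
    gy, gx = np.gradient(u)
    return np.hypot(gx, gy)

img = np.zeros((16, 16)); img[:, 8:] = 1.0   # vertical step edge
e = energy_map(img)  # high values near column 7-8, near zero elsewhere
```

Line fitting would then subdivide regions wherever the accumulated energy exceeds a budget, spending features only where the map is information-rich.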
Mixed methods research in mental health nursing.
Kettles, A M; Creswell, J W; Zhang, W
2011-08-01
Mixed methods research is becoming more widely used in order to answer research questions and to investigate research problems in mental health and psychiatric nursing. However, two separate literature searches, one in Scotland and one in the USA, revealed that few mental health nursing studies identified mixed methods research in their titles. Many studies used the term 'embedded', but few studies identified in the literature were mixed methods embedded studies. The history, philosophical underpinnings, definition, types of mixed methods research and associated pragmatism are discussed, as well as the need for mixed methods research. Examples of mental health nursing mixed methods research are used to illustrate the different types of mixed methods: convergent parallel, embedded, explanatory and exploratory, in their sequential and concurrent combinations. Implementing mixed methods research is also discussed briefly, and the problem of identifying mixed methods research in mental health and psychiatric nursing is discussed, with some possible solutions proposed. © 2011 Blackwell Publishing.
Sollazzo, Alice; Brzozowska, Beata; Cheng, Lei; Lundholm, Lovisa; Scherthan, Harry
2018-01-01
Cells react differently to clustered and dispersed DNA double strand breaks (DSB). Little is known about the initial reaction to simultaneous induction of DSBs with different complexities. Here, we used live cell microscopy to analyse the behaviour of 53BP1-GFP (green fluorescence protein) foci formation at DSBs induced in U2OS cells by alpha particles, X-rays or mixed beams over a 75 min period post irradiation. X-ray-induced foci rapidly increased and declined over the observation interval. After an initial increase, mixed beam-induced foci remained at a constant level over the observation interval, similarly as alpha-induced foci. The average areas of radiation-induced foci were similar for mixed beams and X-rays, being significantly smaller than those induced by alpha particles. Pixel intensities were highest for mixed beam-induced foci and showed the lowest level of variability over time as compared to foci induced by alphas and X-rays alone. Finally, mixed beam-exposed foci showed the lowest level of mobility as compared to alpha and X-ray exposure. The results suggest paralysation of chromatin around foci containing clustered DNA damage. PMID:29419809
Luminance uniformity compensation for OLED panels based on FPGA
NASA Astrophysics Data System (ADS)
Ou, Peng; Yang, Gang; Jiang, Quan; Yu, Jun-Sheng; Wu, Qi-Peng; Shang, Fu-Hai; Yin, Wei; Wang, Jun; Zhong, Jian; Luo, Kai-Jun
2009-09-01
To address the problem of luminance uniformity in organic light-emitting diode (OLED) panels, a new brightness calculation method based on bilinear interpolation is proposed. The irradiance time needed for each pixel to reach the same luminance is computed in Matlab. A 64×32-pixel, single-color, passive-matrix OLED panel is adopted as the test panel for luminance uniformity adjustment, and a new compensation circuit scheme based on an FPGA is designed. VHDL is used to program each pixel's irradiance time within one frame period. The irradiance brightness is controlled by changing the irradiance time, and finally, luminance compensation of the panel is realized. The simulation results indicate that the design is reasonable.
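The bilinear-interpolation step at the heart of such a brightness calculation can be sketched as follows. The corner values might be luminance measurements at the four corners of a panel region, and the interpolated value gives a per-pixel target; this is a hedged sketch of the interpolation kernel, not the authors' Matlab code:

```python
def bilerp(c00, c10, c01, c11, x, y):
    """Bilinear interpolation of four corner values at fractional
    position (x, y) in [0, 1]^2: weight each corner by the area of
    the opposite sub-rectangle."""
    return (c00 * (1 - x) * (1 - y) + c10 * x * (1 - y) +
            c01 * (1 - x) * y + c11 * x * y)

# midpoint of a panel region whose left edge is dim and right edge bright
print(bilerp(0.0, 1.0, 0.0, 1.0, 0.5, 0.5))  # → 0.5
```

Each pixel's irradiance time would then be scaled from the interpolated luminance estimate so that dim regions are driven longer within the frame period.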
Taguchi, Katsuyuki; Stierstorfer, Karl; Polster, Christoph; Lee, Okkyun; Kappler, Steffen
2018-05-01
The interpixel cross-talk of energy-sensitive photon counting x-ray detectors (PCDs) has been studied and an analytical model (version 2.1) has been developed for double-counting between neighboring pixels due to charge sharing and K-shell fluorescence x-ray emission followed by its reabsorption (Taguchi K, et al., Medical Physics 2016;43(12):6386-6404). While the model version 2.1 simulated the spectral degradation well, it had the following problems that have recently been found to be significant: (1) the spectrum is inaccurate with smaller pixel sizes; (2) the charge cloud size must be smaller than the pixel size; (3) the model underestimates the spectrum/counts for 10-40 keV; and (4) the model version 2.1 cannot handle n-tuple-counting with n > 2 (i.e., triple-counting or higher). These problems are inherent to the design of the model version 2.1; therefore, we developed a new model and addressed these problems in this study. We propose a new PCD cross-talk model (version 3.2; Pc TK for "photon counting toolkit") that is based on a completely different design concept from the previous version. It uses a numerical approach and starts with a 2-D model of charge sharing (as opposed to an analytical approach and a 1-D model with version 2.1) and addresses all four problems. The model takes the following factors into account: (1) shift-variant electron density of the charge cloud (Gaussian-distributed), (2) detection efficiency, (3) interactions between photons and PCDs via the photoelectric effect, and (4) electronic noise. Correlated noisy PCD data can be generated using either a multivariate normal random number generator or a Poisson random number generator. The effect of the two parameters, the effective charge cloud diameter (d0) and pixel size (dpix), was studied and results were compared with Monte Carlo simulations and the previous model version 2.1.
Finally, a script for a CT image quality assessment workflow has been developed, which starts with a few material density images, generates material-specific sinogram (line integral) data and noisy PCD data with spectral distortion using the model version 3.2, and reconstructs PCD-CT images for four energy windows. The model version 3.2 addressed all four problems listed above. The spectra with dpix = 56-113 μm agreed qualitatively with those of the Medipix3 detector with dpix = 55-110 μm without charge summing mode. The counts for 10-40 keV were larger than with the previous model (version 2.1) and agreed with MC simulations very well (root-mean-square difference values with model version 3.2 decreased to 16%-67% of the values with version 2.1). There were many non-zero off-diagonal elements for n-tuple-counting with n > 2 in the normalized covariance matrix of 3 × 3 neighboring pixels. Reconstructed images showed biases and artifacts attributed to the spectral distortion caused by charge sharing and fluorescence x rays. We have developed a new PCD model for spatio-energetic cross-talk and correlation between PCD pixels. The workflow demonstrated the utility of the model for general or task-specific image quality assessments for PCD-CT. Note: The program (Pc TK) and the workflow scripts have been made available to academic researchers. Interested readers should visit the website (pctk.jhu.edu) or contact the corresponding author. © 2018 American Association of Physicists in Medicine.
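A core ingredient of any such charge-sharing model is the fraction of a Gaussian charge cloud that each pixel collects. Assuming a separable 2-D Gaussian cloud (in the spirit of the model's Gaussian-distributed cloud, though Pc TK itself is not reproduced here), the per-pixel fraction reduces to a product of 1-D error-function integrals:

```python
import math

def pixel_charge_fraction(x0, y0, sigma, px, py, d_pix):
    """Fraction of a 2-D Gaussian charge cloud (center (x0, y0),
    std sigma, all in the same length units) collected by the pixel
    whose lower-left corner is (px, py) with side d_pix. Separable
    Gaussian -> product of 1-D error-function integrals."""
    def seg(a, b, mu):
        return 0.5 * (math.erf((b - mu) / (sigma * math.sqrt(2))) -
                      math.erf((a - mu) / (sigma * math.sqrt(2))))
    return seg(px, px + d_pix, x0) * seg(py, py + d_pix, y0)

# cloud centered on a pixel corner is shared ~equally by 4 pixels
print(pixel_charge_fraction(0, 0, 5, 0, 0, 55))  # → ~0.25
```

Summing this fraction over a 3 × 3 pixel neighborhood for each photon position is one way double- and triple-counting probabilities arise as the cloud diameter approaches the pixel size.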
Case mix planning in hospitals: a review and future agenda.
Hof, Sebastian; Fügener, Andreas; Schoenfelder, Jan; Brunner, Jens O
2017-06-01
The case mix planning problem deals with choosing the ideal composition and volume of patients in a hospital. With many countries having recently changed to systems where hospitals are reimbursed for patients according to their diagnosis, case mix planning has become an important tool in strategic and tactical hospital planning. Selecting patients in such a payment system can have a significant impact on a hospital's revenue. The contribution of this article is to provide the first literature review focusing on the case mix planning problem. We describe the problem, distinguish it from similar planning problems, and evaluate the existing literature with regard to problem structure and managerial impact. Further, we identify gaps in the literature. We hope to foster research in the field of case mix planning, which only lately has received growing attention despite its fundamental economic impact on hospitals.
Superconducting Microwave Resonator Arrays for Submillimeter/Far-Infrared Imaging
NASA Astrophysics Data System (ADS)
Noroozian, Omid
Superconducting microwave resonators have the potential to revolutionize submillimeter and far-infrared astronomy, and with it our understanding of the universe. The field of low-temperature detector technology has reached a point where extremely sensitive devices like transition-edge sensors are now capable of detecting radiation limited by the background noise of the universe. However, the size of these detector arrays is limited to only a few thousand pixels. This is because of the cost and complexity of fabricating large-scale arrays of these detectors, which can require up to 10 lithographic levels on chip, and the complicated SQUID-based multiplexing circuitry and wiring for readout of each detector. In order to make substantial progress, next-generation ground-based telescopes such as CCAT or future space telescopes require focal planes with large-scale detector arrays of 10⁴-10⁶ pixels. Arrays using microwave kinetic inductance detectors (MKIDs) are a potential solution. These arrays can be easily made with a single layer of superconducting metal film deposited on a silicon substrate and patterned using conventional optical lithography. Furthermore, MKIDs are inherently multiplexable in the frequency domain, allowing ~10³ detectors to be read out using a single coaxial transmission line and cryogenic amplifier, drastically reducing cost and complexity. An MKID uses the change in the microwave surface impedance of a superconducting thin-film microresonator to detect photons. Absorption of photons in the superconductor breaks Cooper pairs into quasiparticles, changing the complex surface impedance, which results in a perturbation of resonator frequency and quality factor. For excitation and readout, the resonator is weakly coupled to a transmission line. The complex amplitude of a microwave probe signal tuned on-resonance and transmitted on the feedline past the resonator is perturbed as photons are absorbed in the superconductor.
The perturbation can be detected using a cryogenic amplifier and subsequent homodyne mixing at room temperature. In an array of MKIDs, all the resonators are coupled to a shared feedline and are tuned to slightly different frequencies. They can be read out simultaneously using a comb of frequencies generated and measured using digital techniques. This thesis documents an effort to demonstrate the basic operation of ~256-pixel arrays of lumped-element MKIDs made from superconducting TiNx on silicon. The resonators are designed and simulated for optimum operation. Various properties of the resonators and arrays are measured and compared to theoretical expectations. A particularly exciting observation is the extremely high quality factors (~3×10⁷) of our TiNx resonators, which is essential for ultra-high sensitivity. The arrays are tightly packed both in space and in frequency, which is desirable for larger full-size arrays. However, this can cause a serious problem in terms of microwave crosstalk between neighboring pixels. We show that by properly designing the resonator geometry, crosstalk can be eliminated; this is supported by our measurement results. We also tackle the problem of excess frequency noise in MKIDs. Intrinsic noise in the form of an excess resonance frequency jitter exists in planar superconducting resonators that are made on dielectric substrates. We conclusively show that this noise is due to fluctuations of the resonator capacitance. In turn, the capacitance fluctuations are thought to be driven by two-level system (TLS) fluctuators in a thin layer on the surface of the device. With a modified resonator design we demonstrate with measurements that this noise can be substantially reduced. An optimized version of this resonator was designed for the multiwavelength submillimeter kinetic inductance camera (MUSIC) instrument for the Caltech Submillimeter Observatory.
An exact algorithm for optimal MAE stack filter design.
Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior
2007-02-01
We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.
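A stack filter itself is defined by threshold decomposition: binarize the window at every gray level, apply the same positive Boolean function at each level, and sum the binary outputs. The sketch below illustrates this definition (the paper's design algorithm, which *chooses* the Boolean function to minimize MAE, is not reproduced; the majority function shown yields the median filter):

```python
import numpy as np

def stack_filter(window_vals, pbf, levels):
    """Evaluate a stack filter by threshold decomposition: binarize
    the window at each level t, apply the positive Boolean function
    pbf, and sum the binary outputs over all levels."""
    return sum(pbf(window_vals >= t) for t in range(1, levels + 1))

# majority as the positive Boolean function -> median filter
majority = lambda b: int(b.sum() > b.size // 2)
w = np.array([3, 1, 2])               # 3-pixel window, gray levels 0-3
print(stack_filter(w, majority, 3))   # → 2, the median of the window
```

The design problem in the paper is to pick, per window configuration, the Boolean outputs that minimize mean absolute error over training data while respecting the stacking (monotonicity) constraint.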
Validating Phasing and Geometry of Large Focal Plane Arrays
NASA Technical Reports Server (NTRS)
Standley, Shaun P.; Gautier, Thomas N.; Caldwell, Douglas A.; Rabbette, Maura
2011-01-01
The Kepler Mission is designed to survey our region of the Milky Way galaxy to discover hundreds of Earth-sized and smaller planets in or near the habitable zone. The Kepler photometer is an array of 42 CCDs (charge-coupled devices) in the focal plane of a 95-cm Schmidt camera onboard the Kepler spacecraft. Each 50x25-mm CCD has 2,200 x 1,024 pixels. The CCDs accumulate photons and are read out every six seconds to prevent saturation. The data is integrated for 30 minutes, and then the pixel data is transferred to onboard storage. The data is subsequently encoded and transmitted to the ground. During End-to-End Information System (EEIS) testing of the Kepler Mission System (KMS), there was a need to verify that the pixels requested by the science team operationally were correctly collected, encoded, compressed, stored, and transmitted by the FS, and subsequently received, decoded, uncompressed, and displayed by the Ground Segment (GS) without the outputs of any CCD modules being flipped, mirrored, or otherwise corrupted during the extensive FS and GS processing. This would normally be done by projecting an image on the focal plane array (FPA), collecting the data in a flight-like way, and making a comparison between the original data and the data reconstructed by the science data system. Projecting a focused image onto the FPA through the telescope would normally involve using a collimator suspended over the telescope opening. There were several problems with this approach: the collimation equipment is elaborate and expensive; as conceived, it could only illuminate a limited section of the FPA (.25 percent) during a given test; the telescope cover would have to be deployed during testing to allow the image to be projected into the telescope; the equipment was bulky and difficult to situate in temperature-controlled environments; and given all the above, test setup, execution, and repeatability were significant concerns. 
Instead of using this complicated approach of projecting an optical image on the FPA, the Kepler project developed a method using known defect features in the CCDs to verify proper collection and reassembly of the pixels, thereby avoiding the costs and risks of the optical projection approach. The CCDs composing the Kepler FPA, as all CCDs, had minor defects. At ambient temperature, some pixels look far brighter than they should. These "hot" pixels have a higher rate of charge leakage than the others due to manufacturing variations. They are usually stable over time, and appear at temperatures above 5 °C. The hot pixels on the Kepler FPA were mapped before photometer assembly during module testing. Selected hot pixels were used as target "stars" for the purposes of EEIS testing. "Dead" pixels are permanently off, producing a permanently black pixel. These can also be used if there is some illumination of the FPA. During EEIS testing, Dark Current Full Frame Images (FFIs) taken at room temperature were used to create the hot pixel maps for all 84 Kepler photometer CCD channels. Data from two separate nights were used to create two hot pixel maps per channel, which were cross-correlated to remove cosmic ray events which appear to be hot pixels. These hot pixel maps obtained during EEIS testing were compared to the maps made during module testing to verify that the end-to-end data flow was correct.
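Combining the two nights' hot-pixel maps amounts to keeping only pixels that are hot in both dark frames, which rejects one-off cosmic-ray hits. A minimal sketch of the idea (the threshold value and function name are illustrative, not Kepler's actual pipeline):

```python
import numpy as np

def stable_hot_pixels(dark1, dark2, thresh):
    """Hot-pixel map from two dark frames taken on different nights:
    keep only pixels above threshold in BOTH frames, rejecting
    transient cosmic-ray hits that appear in only one frame."""
    return (dark1 > thresh) & (dark2 > thresh)

d1 = np.zeros((4, 4)); d2 = np.zeros((4, 4))
d1[1, 1] = d2[1, 1] = 9.0   # genuine hot pixel (present both nights)
d1[2, 3] = 9.0              # cosmic-ray hit (one night only)
print(stable_hot_pixels(d1, d2, 5.0).sum())  # → 1
```

The resulting boolean map can then be compared pixel-for-pixel against the module-test map to confirm that no CCD output was flipped or mirrored in the end-to-end data flow.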
Xu, Yihua; Pitot, Henry C
2006-03-01
In the studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest should be separated from the other components based on the difference of color and density. Common background problems seen on the captured sample image such as uneven light illumination or color shading can cause severe problems in the measurement. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven light illumination background, can be corrected. With Pixel_Separator different types of objects can be separated from each other in relation to their color, such as seen with different colors in immunohistochemically stained slides. The resultant images of such objects separated from other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
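A background correction of the kind BK_Correction performs can be approximated by flat-field-style normalization against a blank-field image; this sketch only illustrates the idea of removing uneven illumination, not the program's actual algorithm:

```python
import numpy as np

def correct_background(img, bg):
    """Correct uneven illumination by normalizing the sample image
    with a blank-field background image, rescaled so the overall
    brightness level is preserved (a simple flat-field correction)."""
    return img * (bg.mean() / np.maximum(bg, 1e-9))

# an image that is pure illumination gradient flattens to a constant
bg = np.array([[1.0, 2.0], [3.0, 2.0]])
print(correct_background(bg, bg))  # every entry becomes bg.mean() = 2.0
```

After such a correction, objects of the same stain darkness measure consistently across the slide, which is what the subsequent particle analysis requires.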
Wei Wu; Charles Hall; Lianjun Zhang
2006-01-01
We predicted the spatial pattern of hourly probability of cloud cover in the Luquillo Experimental Forest (LEF) in northeastern Puerto Rico using four different models. The probability of cloud cover (defined as "the percentage of the area covered by clouds in each pixel on the map" in this paper) at any hour and any place is a function of three topographic variables...
Oriented Markov random field based dendritic spine segmentation for fluorescence microscopy images.
Cheng, Jie; Zhou, Xiaobo; Miller, Eric L; Alvarez, Veronica A; Sabatini, Bernardo L; Wong, Stephen T C
2010-10-01
Dendritic spines have been shown to be closely related to various functional properties of the neuron. Usually dendritic spines are manually labeled to analyze their morphological changes, which is very time-consuming and susceptible to operator bias, even with the assistance of computers. To deal with these issues, several methods have been recently proposed to automatically detect and measure the dendritic spines with little human interaction. However, problems such as degraded detection performance for images with larger pixel size (e.g. 0.125 μm/pixel instead of 0.08 μm/pixel) still exist in these methods. Moreover, the shapes of detected spines are also distorted. For example, the "necks" of some spines are missed. Here we present an oriented Markov random field (OMRF) based algorithm which improves spine detection as well as their geometric characterization. We begin with the identification of a region of interest (ROI) containing all the dendrites and spines to be analyzed. For this purpose, we introduce an adaptive procedure for identifying the image background. Next, the OMRF model is discussed within a statistical framework and the segmentation is solved as a maximum a posteriori estimation (MAP) problem, whose optimal solution is found by a knowledge-guided iterative conditional mode (KICM) algorithm. Compared with the existing algorithms, the proposed algorithm not only provides a more accurate representation of the spine shape, but also improves the detection performance by more than 50% with regard to reducing both the misses and false detection.
Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong
2017-06-01
The radiographic testing (RT) image of a steam turbine manufacturing enterprise has the characteristics of low gray level, low contrast, and blurriness, which lead to substandard image quality and make it difficult for the human eye to detect and evaluate defects. This study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels to solve these problems. First, the pixel self-transformation is applied to the pixel values of the original RT image. The function value after the pixel self-transformation is assigned to the HSI components in the HSI color space. Thereafter, the average intensity of the enhanced image is adaptively adjusted to 0.5 according to the intensity of the original image. Moreover, the hue range and interval can be adjusted according to personal preference. Finally, the HSI components after the adaptive adjustment can be transformed back to the red, green, and blue color space for display. Numerous weld radiographic images from a steam turbine manufacturing enterprise are used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method can improve image definition and make the target and background areas distinct in weld radiographic images. The enhanced images will be more conducive to defect recognition. Moreover, images enhanced using the proposed method conform to the visual properties of the human eye, and the effectiveness of defect recognition and evaluation can be ensured.
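The core of such a pseudo-color scheme is mapping each gray value to a hue while holding saturation and intensity fixed, then converting back to RGB. The sketch below uses Python's HSV conversion as a stand-in for the HSI transform described, and simplifies away the pixel self-transformation and adaptive intensity steps:

```python
import colorsys

def pseudo_color(gray):
    """Map a normalized gray value in [0, 1] to an RGB pseudo-color
    by assigning it to the hue channel (dark -> blue, bright -> red)
    with full saturation and full value. HSV stands in for HSI here."""
    hue = (1.0 - gray) * 2.0 / 3.0   # gray=1 -> hue 0 (red), gray=0 -> 2/3 (blue)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

print(pseudo_color(1.0))  # → (1.0, 0.0, 0.0): brightest regions shown red
```

Because the eye distinguishes hues far better than gray levels, small radiographic density differences around a weld defect become visibly distinct colors.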
NASA Astrophysics Data System (ADS)
Shen, Wei; Zhao, Kai; Jiang, Yuan; Wang, Yan; Bai, Xiang; Yuille, Alan
2017-11-01
Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to be able to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the groundtruth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. Additionally, the usefulness of the obtained skeletons and scales (thickness) are verified on two object detection applications: Foreground object segmentation and object proposal detection.
The Graphical User Interface Crisis: Danger and Opportunity.
ERIC Educational Resources Information Center
Boyd, Lawrence H.; And Others
This paper examines graphic computing environments, identifies potential problems in providing access to blind people, and describes programs and strategies being developed to provide this access. The paper begins with an explanation of how graphic user interfaces differ from character-based systems in their use of pixels, visual metaphors such as…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lecomte, Roger; Arpin, Louis; Beaudoin, Jean-François
Purpose: LabPET II is a new generation APD-based PET scanner designed to achieve sub-mm spatial resolution using truly pixelated detectors and highly integrated parallel front-end processing electronics. Methods: The basic element uses a 4×8 array of 1.12×1.12 mm² Lu1.9Y0.1SiO5:Ce (LYSO) scintillator pixels with one-to-one coupling to a 4×8 pixelated monolithic APD array mounted on a ceramic carrier. Four detector arrays are mounted on a daughter board carrying two flip-chip, 64-channel, mixed-signal, application-specific integrated circuits (ASIC) on the backside, each interfacing to two detector arrays. Fully parallel signal processing was implemented in silico by encoding time and energy information using a dual-threshold Time-over-Threshold (ToT) scheme. The self-contained 128-channel detector module was designed as a generic component for ultra-high resolution PET imaging of small to medium-size animals. Results: Energy and timing performance were optimized by carefully setting ToT thresholds to minimize the noise/slope ratio. ToT spectra clearly show a resolved 511 keV photopeak and Compton edge with ToT resolution well below 10%. After correction for nonlinear ToT response, energy resolution is typically 24±2% FWHM. Coincidence time resolution between opposing 128-channel modules is below 4 ns FWHM. Initial imaging results demonstrate that 0.8 mm hot spots of a Derenzo phantom can be resolved. Conclusion: A new generation PET scanner featuring truly pixelated detectors was developed and shown to achieve a spatial resolution approaching the physical limit of PET. Future plans are to integrate a small-bore dedicated mouse version of the scanner within a PET/CT platform.
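A Time-over-Threshold scheme estimates pulse energy from how long the shaped signal stays above a threshold: larger pulses stay up longer. A single-threshold sketch of the idea (the LabPET II ASIC uses a dual-threshold variant; the sampling interval and pulse shapes here are illustrative):

```python
def time_over_threshold(samples, thresh, dt):
    """Time-over-Threshold energy surrogate: the total duration the
    sampled pulse spends above the threshold, in the units of dt.
    Monotonic (but nonlinear) in pulse amplitude, hence the need for
    a nonlinearity correction before quoting energy resolution."""
    return sum(s > thresh for s in samples) * dt

small = [0, 2, 4, 2, 0]   # low-energy pulse
big = [0, 4, 8, 4, 0]     # high-energy pulse, same shape
print(time_over_threshold(big, 3.0, 1.0))    # → 3.0
print(time_over_threshold(small, 3.0, 1.0))  # → 1.0
```

Encoding energy as a duration lets a purely digital back end record both timing and energy without an on-chip ADC, which is what makes fully parallel per-pixel processing practical.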
NASA Astrophysics Data System (ADS)
Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul
2018-07-01
Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office interpreted remotely sensed data was the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
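One concrete difference between per-polygon and per-pixel accuracy analysis is whether each assessed object counts once or in proportion to its area. A small sketch of area-weighted overall accuracy (the input format and function name are illustrative, not a standard from the reviewed literature):

```python
def area_weighted_accuracy(polygons):
    """Overall accuracy of an object-based map where each assessed
    polygon contributes in proportion to its area (e.g. pixel count),
    rather than one vote per polygon. Input: (area, correct?) pairs."""
    total = sum(area for area, _ in polygons)
    correct = sum(area for area, ok in polygons if ok)
    return correct / total

# two small correct polygons and one large misclassified one
print(area_weighted_accuracy([(10, True), (10, True), (80, False)]))  # → 0.2
```

The same sample scored per polygon would report 2/3 ≈ 67% accuracy, which illustrates why the review stresses clarifying the response design when objects vary widely in size.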
An Active Fire Temperature Retrieval Model Using Hyperspectral Remote Sensing
NASA Astrophysics Data System (ADS)
Quigley, K. W.; Roberts, D. A.; Miller, D.
2017-12-01
Wildfire is both an important ecological process and a dangerous natural threat that humans face. In situ measurements of wildfire temperature are notoriously difficult to collect due to dangerous conditions. Imaging spectrometry data has the potential to provide some of the most accurate and most highly temporally resolved active fire temperature retrieval information for monitoring and modeling. Recent studies on fire temperature retrieval have used Multiple Endmember Spectral Mixture Analysis applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) bands to model fire temperatures within the regions marked as containing fire, but these methods are less effective at coarser spatial resolutions, as linear mixing methods are degraded by saturation within the pixel. The assumption of a distribution of temperatures within pixels allows us to model pixels with an effective maximum and likely minimum temperature. This assumption allows a more robust approach to modeling temperature at different spatial scales. In this study, instrument-corrected radiance is forward-modeled for different ranges of temperatures, with weighted temperatures from an effective maximum temperature to a likely minimum temperature contributing to the total radiance of the modeled pixel. The effective maximum fire temperature is estimated by minimizing the Root Mean Square Error (RMSE) between modeled and measured fires. The model was tested using AVIRIS data collected over the 2016 Sherpa Fire in Santa Barbara County, California. While only in situ experimentation would be able to confirm active fire temperatures, the fit of the data to modeled radiance can be assessed, as well as the similarity in temperature distributions seen at different spatial resolution scales.
Results show that this model improves upon current modeling methods in producing similar effective temperatures on multiple spatial scales as well as a similar modeled area distribution of those temperatures.
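The forward-modeling-and-fit scheme described in this abstract can be sketched numerically. The snippet below is an illustrative reconstruction, not the authors' code: it assumes a Planck blackbody radiance function, a fixed flat weighting over temperatures between a likely minimum and an effective maximum, hypothetical SWIR wavelengths, and a simple grid search minimizing RMSE between modeled and "measured" radiance.

```python
import numpy as np

# Physical constants for the Planck radiance function
H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody (W m^-2 sr^-1 m^-1)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / np.expm1(b)

def modeled_pixel_radiance(wavelengths, t_max, t_min, weights):
    """Weighted sum of blackbody radiances from t_min to t_max."""
    temps = np.linspace(t_min, t_max, len(weights))
    rad = sum(w * planck_radiance(wavelengths, t) for w, t in zip(weights, temps))
    return rad / weights.sum()

def retrieve_t_max(wavelengths, measured, t_min, weights, t_grid):
    """Grid search for the effective maximum temperature minimizing RMSE."""
    best_t, best_rmse = None, np.inf
    for t_max in t_grid:
        model = modeled_pixel_radiance(wavelengths, t_max, t_min, weights)
        rmse = np.sqrt(np.mean((model - measured) ** 2))
        if rmse < best_rmse:
            best_t, best_rmse = t_max, rmse
    return best_t

# Synthetic check: generate a "measured" pixel at a known maximum temperature
wl = np.linspace(1.0e-6, 2.5e-6, 50)          # hypothetical SWIR band (m)
w = np.ones(5)                                 # flat temperature weighting
truth = modeled_pixel_radiance(wl, 900.0, 400.0, w)
t_hat = retrieve_t_max(wl, truth, 400.0, w, np.arange(600.0, 1201.0, 25.0))
```

In a real retrieval the weighting over temperatures would be physically motivated and the radiance would come from atmospherically corrected AVIRIS spectra rather than a synthetic blackbody mixture.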
Quantifying riverine surface currents from time sequences of thermal infrared imagery
Puleo, J.A.; McKenna, T.E.; Holland, K.T.; Calantoni, J.
2012-01-01
River surface currents are quantified from thermal and visible band imagery using two methods. One method utilizes time stacks of pixel intensity to estimate the streamwise velocity at multiple locations. The other method uses particle image velocimetry to solve for optimal two-dimensional pixel displacements between successive frames. Field validation was carried out on the Wolf River, a small coastal plain river near Landon, Mississippi, United States, on 26-27 May 2010 by collecting imagery in association with in situ velocities sampled using electromagnetic current meters deployed 0.1 m below the river surface. Comparisons are made between mean in situ velocities and image-derived velocities from 23 thermal and 6 visible-band image sequences (5 min length) during daylight and darkness conditions. The thermal signal was a small apparent temperature contrast induced by turbulent mixing of a thin layer of cooler water near the river surface with underlying warmer water. The visible-band signal was foam on the water surface. For thermal imagery, streamwise velocities derived from the pixel time stack and particle image velocimetry techniques were generally highly correlated with mean streamwise current meter velocities during darkness (r² typically greater than 0.9) and early morning daylight (r² typically greater than 0.83). Streamwise velocities from the pixel time stack technique had high correlation for visible-band imagery during early morning daylight hours with respect to mean current meter velocities (r² > 0.86). Streamwise velocities from the particle image velocimetry technique for visible-band imagery had weaker correlations, with only three of the six correlations performed having an r² exceeding 0.6. Copyright 2012 by the American Geophysical Union.
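The pixel time stack technique lends itself to a compact sketch. Assuming a time stack array (time x streamwise pixels), the cross-correlation lag between the intensity time series at the two end pixels gives the travel time of advected features, and hence the streamwise speed. This is an illustrative reconstruction under those assumptions, not the authors' implementation; the advection simulation and all parameter values are invented.

```python
import numpy as np

def timestack_velocity(stack, dx_m, dt_s):
    """Estimate streamwise speed from a time stack (time x space).

    Cross-correlates the intensity time series at the two end pixels;
    the lag of peak correlation gives the feature travel time.
    """
    s0 = stack[:, 0] - stack[:, 0].mean()
    s1 = stack[:, -1] - stack[:, -1].mean()
    corr = np.correlate(s1, s0, mode="full")
    lag = abs(np.argmax(corr) - (len(s0) - 1))   # frames of travel time
    distance = dx_m * (stack.shape[1] - 1)       # meters between end pixels
    return distance / (lag * dt_s)               # speed magnitude (m/s)

# Synthetic check: a random pattern advected at 0.5 m/s across 10 pixels
rng = np.random.default_rng(0)
n_t, n_x, dx, dt, v_true = 200, 10, 0.2, 0.1, 0.5
signal = rng.normal(size=n_t + n_x * 10)
shift_frames = dx / (v_true * dt)                # frames per pixel of travel
stack = np.stack([signal[int(i * shift_frames): int(i * shift_frames) + n_t]
                  for i in range(n_x)], axis=1)
v_est = timestack_velocity(stack, dx, dt)
```

A field implementation would add sub-frame lag interpolation and quality control on the correlation peak, but the lag-to-speed conversion is the core of the method.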
Willemse, Elias J; Joubert, Johan W
2016-09-01
In this article we present benchmark datasets for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities (MCARPTIF). The problem is a generalisation of the Capacitated Arc Routing Problem (CARP), and closely represents waste collection routing. Four different test sets are presented, each consisting of multiple instance files, which can be used to benchmark different solution approaches for the MCARPTIF. An in-depth description of the datasets can be found in "Constructive heuristics for the Mixed Capacity Arc Routing Problem under Time Restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [2] and "Splitting procedures for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, in press) [4]. The datasets are publicly available from "Library of benchmark test sets for variants of the Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [3].
A survey of mixed finite element methods
NASA Technical Reports Server (NTRS)
Brezzi, F.
1987-01-01
This paper is an introduction to and an overview of mixed finite element methods. It discusses the mixed formulation of certain basic problems in elasticity and hydrodynamics. It also discusses special techniques for solving the discrete problem.
Mixed-Strategy Chance Constrained Optimal Control
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.
2013-01-01
This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
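The advantage of mixing two deterministic control actions can be seen in a toy example. The numbers below are invented for illustration and the setup is far simpler than the paper's optimal control formulation: two candidate actions with known costs and constraint-violation probabilities, and a randomized policy that attains the risk bound exactly while beating the best feasible deterministic cost.

```python
# Toy chance-constrained choice between two deterministic actions.
# Action A: low cost but violates the constraint too often.
# Action B: safe but expensive.  Risk bound delta = 0.1.
cost = {"A": 1.0, "B": 3.0}
p_violate = {"A": 0.15, "B": 0.02}
delta = 0.10

# Best deterministic policy: must pick B, since A alone exceeds the bound.
det_cost = cost["B"]

# Mixed strategy: play A with probability q, B with 1 - q, choosing the
# largest q that keeps total violation probability within delta.
q = (delta - p_violate["B"]) / (p_violate["A"] - p_violate["B"])
mixed_risk = q * p_violate["A"] + (1 - q) * p_violate["B"]
mixed_cost = q * cost["A"] + (1 - q) * cost["B"]
```

The mixed policy's expected cost (about 1.77 here) is strictly below the deterministic optimum of 3.0, which mirrors the paper's point that at most two deterministic actions need to be mixed.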
Gaussian mixed model in support of semiglobal matching leveraged by ground control points
NASA Astrophysics Data System (ADS)
Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li
2017-04-01
Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated by a Gaussian mixture model, which strengthens the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. Another term depends on pixel-wise confidence, and we further design a confidence updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among disparity search ranges during the iteration process. Several iterations are sufficient to produce satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, a representative variant of SGM that performs excellently on aerial images.
NASA Astrophysics Data System (ADS)
Celenk, Mehmet; Song, Yinglei; Ma, Limin; Zhou, Min
2003-05-01
A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes the morphological watershed to extract boundaries of cells from their grey-level images. It generates a sequence of Euclidean distances by selecting pixels in a clockwise direction on the boundary of the cell and calculating the Euclidean distances of the selected pixels from the centroid of the cell. A feature vector associated with each cell is then obtained by applying the auto-regressive moving-average (ARMA) model to the generated sequence of Euclidean distances. The clustering measure J3 = trace{Sw^-1 Sm}, involving the within-class (Sw) and mixed-class (Sm) scattering matrices, is computed for both cell classes to provide insight into the extent to which the different cell classes in the training data are separated. Our test results suggest that the algorithm is highly accurate for the development of an interactive, computer-assisted diagnosis (CAD) tool.
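Two building blocks of this pipeline, the centroid-distance boundary signature and the J3 clustering measure, can be sketched directly. This is a minimal illustration, not the authors' code; it assumes Sm is the total (mixed) scatter matrix, so that J3 = trace(Sw^-1 Sm), and the boundary and feature data are synthetic.

```python
import numpy as np

def boundary_signature(boundary_xy):
    """Euclidean distances of ordered boundary pixels from the centroid."""
    centroid = boundary_xy.mean(axis=0)
    return np.linalg.norm(boundary_xy - centroid, axis=1)

def j3_measure(features, labels):
    """Clustering measure J3 = trace(Sw^-1 Sm) from within-class (Sw)
    and mixed/total (Sm) scatter matrices."""
    sw = np.zeros((features.shape[1],) * 2)
    for c in np.unique(labels):
        x = features[labels == c]
        d = x - x.mean(axis=0)
        sw += d.T @ d                       # accumulate within-class scatter
    d_all = features - features.mean(axis=0)
    sm = d_all.T @ d_all                    # mixed (total) scatter
    return np.trace(np.linalg.inv(sw) @ sm)

# A circular cell yields a flat (constant) distance signature
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([5 + 2 * np.cos(theta), 5 + 2 * np.sin(theta)], axis=1)
sig = boundary_signature(circle)

# Well-separated synthetic classes give J3 well above the dimensionality
rng = np.random.default_rng(1)
f = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
lab = np.array([0] * 50 + [1] * 50)
j3 = j3_measure(f, lab)
```

In the paper the distance sequence is further summarized by ARMA coefficients; here the signature itself is shown only to make the construction concrete.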
NASA Technical Reports Server (NTRS)
2002-01-01
This Moderate-resolution Imaging Spectroradiometer (MODIS) image over Argentina was acquired on April 24, 2000, and was produced using a combination of the sensor's 250-m and 500-m resolution 'true color' bands. This image was presented on June 13, 2000 as a gift to Argentinian President Fernando de la Rua by NASA Administrator Dan Goldin. Note the Parana River, which runs due south from the top of the image before turning east to empty into the Atlantic Ocean. Note the yellowish sediment from the Parana River mixing with the reddish sediment from the Uruguay River as it empties into the Rio de la Plata. The water level of the Parana seems high, which could explain the high sediment discharge. A variety of land surface features are visible in this image. To the north, the greenish pixels show forest regions, as well as characteristic clusters of rectangular patterns of agricultural fields. In the lower left of the image, the lighter green pixels show arable regions where there is grazing and farming. (Image courtesy Jacques Descloitres, MODIS Land Group, NASA GSFC)
NASA Technical Reports Server (NTRS)
Stoner, E. R.; May, G. A.; Kalcic, M. T. (Principal Investigator)
1981-01-01
Sample segments of ground-verified land cover data collected in conjunction with the USDA/ESS June Enumerative Survey were merged with LANDSAT data and served as a focus for unsupervised spectral class development and accuracy assessment. Multitemporal data sets were created from single-date LANDSAT MSS acquisitions from a nominal scene covering an eleven-county area in north central Missouri. Classification accuracies for the four land cover types predominant in the test site showed significant improvement in going from unitemporal to multitemporal data sets. Transformed LANDSAT data sets did not significantly improve classification accuracies. Regression estimators yielded mixed results for different land covers. Misregistration of two LANDSAT data sets by as much as one and one-half pixels did not significantly alter overall classification accuracies. Existing algorithms for scene-to-scene overlay proved adequate for multitemporal data analysis as long as statistical class development and accuracy assessment were restricted to field-interior pixels.
Direct write electron beam lithography: a historical overview
NASA Astrophysics Data System (ADS)
Pfeiffer, Hans C.
2010-09-01
Maskless pattern generation capability in combination with practically limitless resolution made probe-forming electron beam systems attractive tools in the semiconductor fabrication process. However, serial exposure of pattern elements with a scanning beam is a slow process, and throughput presented a key challenge in electron beam lithography from the beginning. To meet this challenge, imaging concepts with increasing exposure efficiency have been developed, projecting ever larger numbers of pixels in parallel. This evolution started in the 1960s with the SEM-type Gaussian beam systems writing one pixel at a time directly on wafers. During the 1970s IBM pioneered the concept of shaped beams containing multiple pixels, which led to higher throughput and an early success of e-beam direct write (EBDW) in large-scale manufacturing of semiconductor chips. EBDW in a mix-and-match approach with optical lithography provided unique flexibility in part number management and cycle time reduction and proved extremely cost effective in IBM's Quick-Turn-Around-Time (QTAT) facilities. But shaped beams did not keep pace with Moore's law because of limitations imposed by the physics of charged particles: Coulomb interactions between beam electrons cause image blur and consequently limit beam current and throughput. A new technology approach was needed. Physically separating beam electrons into multiple beamlets to reduce Coulomb interaction led to the development of massively parallel projection of pixels. Electron projection lithography (EPL) - a mask-based imaging technique emulating optical steppers - was pursued during the 1990s by Bell Labs with SCALPEL and by IBM with PREVAIL in partnership with Nikon. In 2003 Nikon shipped the first NCR-EB1A e-beam stepper based on the PREVAIL technology to Selete. It exposed pattern segments containing 10 million pixels in a single shot and represented the first successful demonstration of massively parallel pixel projection.
However, the window of opportunity for EPL had closed with the quick implementation of immersion lithography, and the interest of the industry has since shifted back to maskless lithography (ML2). This historical overview of EBDW will highlight opportunities and limitations of the technology, with particular focus on the technical challenges facing the current ML2 development efforts in Europe and the US. A brief status report and risk assessment of the ML2 approaches will be provided.
NASA Astrophysics Data System (ADS)
Liu, Lingling; Zhang, Xiaoyang; Yu, Yunyue; Donnelly, Alison
2017-02-01
The timing of fall foliage coloration, especially peak coloration, is of great importance to the climate change research community as it has implications for carbon storage in forests. However, its long-term variation and response to climate change are poorly understood. To address this issue, we examined the long-term trends and breakpoints in satellite-derived peak coloration onset from 1982 to 2014 using an innovative approach that combines Singular Spectrum Analysis (SSA) with Breaks for Additive Seasonal and Trend (BFAST). The peak coloration trend was then evaluated using both field foliage coloration observations and flux tower measurements. Finally, interannual changes in peak coloration onset were correlated with temperature and precipitation variation. Results showed that temporal trends in satellite-derived peak coloration onset were comparable with both field observations and flux tower measurements of gross primary productivity. Specifically, a breakpoint in long-term peak coloration onset was detected in 25% of pixels, which were mainly distributed at latitudes north of 37°N. The breakpoint tended to occur between 1998 and 2004. Peak coloration onset was delayed before the breakpoint but shifted to an earlier trend after the breakpoint in nearly all pixels. The remaining 75% of pixels exhibited monotonic trends, of which 35% showed a late trend and 40% an early trend. The results indicate that the onset of peak coloration experienced a late trend during the 1980s and 1990s in most deciduous and mixed forests. However, the trend was reversed during the most recent decade, when the timing of peak coloration became earlier. The onset of peak coloration was significantly correlated with late summer and autumn temperature in 55.5% of pixels from 1982 to 2014. This pattern of temperature impacts was also verified using field observations and flux tower measurements.
Of the remaining 44.5% of pixels, 12.2% showed a significantly positive correlation between the onset of peak coloration and cumulative precipitation during the late summer and autumn period from 1982 to 2014. Our findings can improve understanding of the impact of changes in autumn phenology on carbon uptake in forests, which in turn facilitates more reliable measures of carbon dynamics in vegetation-climate interaction models.
Background concentrations for high resolution satellite observing systems of methane
NASA Astrophysics Data System (ADS)
Benmergui, J. S.; Propp, A. M.; Turner, A. J.; Wofsy, S. C.
2017-12-01
Emerging satellite technologies promise to measure total column dry-air mole fractions of methane (XCH4) at resolutions on the order of a kilometer. XCH4 is linearly related to regional methane emissions through enhancements in the mixed layer, giving these satellites the ability to constrain emissions at unprecedented resolution. However, XCH4 is also sensitive to variability in the transport of upwind concentrations (the "background concentration"). Variations in the background concentration are caused by synoptic-scale transport in both the free troposphere and the stratosphere, as well as by the rate of methane oxidation. Misspecification of the background concentration is aliased onto retrieved emissions as bias. This work explores several methods of specifying the background concentration for high resolution satellite observations of XCH4. We conduct observing system simulation experiments (OSSEs) that simulate the retrieval of emissions in the Barnett Shale using observations from a 1.33 km resolution XCH4 imaging satellite. We test background concentrations defined (1) from an external continental-scale model, (2) using pixels along the edge of the image as a boundary value, (3) using differences between adjacent pixels, and (4) using differences between the same pixel separated by one hour in time. We measure success by the accuracy of the retrieval, the potential for bias induced by misspecification of the background, and the computational expedience of the method. Pathological scenarios are presented for each method.
Naked-eye 3D imaging employing a modified MIMO micro-ring conjugate mirrors
NASA Astrophysics Data System (ADS)
Youplao, P.; Pornsuwancharoen, N.; Amiri, I. S.; Thieu, V. N.; Yupapin, P.
2018-03-01
In this work, the use of a micro-conjugate mirror that can produce a 3D image from an incident probe and display it is proposed. By using the proposed system together with the concept of naked-eye 3D imaging, a pixel and a large-volume pixel of a 3D image can be created and displayed for naked-eye perception, which is valuable for large-volume naked-eye 3D imaging applications. In operation, a naked-eye 3D image that has a large pixel volume is constructed using the MIMO micro-ring conjugate mirror system. Thereafter, these 3D images, formed by the first micro-ring conjugate mirror system, can be transmitted through an optical link over a short distance and reconstructed via the recovery conjugate mirror at the other end of the transmission. The image transmission is performed by the Fourier integral in MATLAB and compared with the Opti-wave program results. The Fourier convolution is also included for large-volume image transmission. Simulation is used for the manipulation, where the array of the micro-conjugate mirror system is designed and simulated for the MIMO system. The naked-eye 3D imaging is confirmed by the concept of the conjugate mirror in both the input and output images, in terms of four-wave mixing (FWM), which is discussed and interpreted.
NASA Astrophysics Data System (ADS)
Morikawa, Junko
2015-05-01
A mobile apparatus for quantitative micro-scale thermography using a micro-bolometer was developed, based on our original techniques such as an achromatic lens design to capture micro-scale images in the long-wave infrared, video signal superimposing for real-time emissivity correction, and pseudo-acceleration of the timeframe. The instrument was designed to fit in a 17 cm x 28 cm x 26 cm carrying box. The video signal synthesizer enabled direct digital recording of monitored temperature or positioning data. The encoded digital signal embedded in each image was decoded on readout; the protocol to encode/decode the measured data was originally defined. The mixed signals of the IR camera and the imposed data were applied to pixel-by-pixel emissivity corrections and the pseudo-acceleration of periodic thermal phenomena. Because the emissivity of industrial materials and biological tissues is usually inhomogeneous, the temperature dependence differs at each pixel. The time-scale resolution for periodic thermal events was improved with the "pseudo-acceleration" algorithm, which reduces noise by integrating multiple image frames while keeping time resolution. The anisotropic thermal properties of composite materials, such as the cellular plastics used for thermal insulation and biometric composite materials, were analyzed using these techniques.
To BG or not to BG: Background Subtraction for EIT Coronal Loops
NASA Astrophysics Data System (ADS)
Beene, J. E.; Schmelz, J. T.
2003-05-01
One of the few observational tests for various coronal heating models is to determine the temperature profile along coronal loops. Since loops are such an abundant coronal feature, this method originally seemed quite promising - that the coronal heating problem might actually be solved by determining the temperature as a function of arc length and comparing these observations with predictions made by different models. But there are many instruments currently available to study loops, as well as various techniques used to determine their temperature characteristics. Consequently, there are many different, mostly conflicting temperature results. We chose data for ten coronal loops observed with the Extreme ultraviolet Imaging Telescope (EIT), and chose specific pixels along each loop, as well as corresponding nearby background pixels where the loop emission was not present. Temperature analysis from the 171-to-195 and 195-to-284 angstrom image ratios was then performed on three forms of the data: the original data alone, the original data with a uniform background subtraction, and the original data with a pixel-by-pixel background subtraction. The original results show loops of constant temperature, as other authors have found before us, but the 171-to-195 and 195-to-284 results are significantly different. Background subtraction does not change the constant-temperature result or the value of the temperature itself. This does not mean that loops are isothermal, however, because the background pixels, which are not part of any contiguous structure, also produce a constant-temperature result with the same value as the loop pixels. These results indicate that EIT temperature analysis should not be trusted, and the isothermal loops that result from EIT (and TRACE) analysis may be an artifact of the analysis process. Solar physics research at the University of Memphis is supported by NASA grants NAG5-9783 and NAG5-12096.
Back-illuminated imager and method for making electrical and optical connections to same
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor)
2010-01-01
Methods for bringing or exposing metal pads or traces to the backside of a backside-illuminated imager allow the pads or traces to reside on the illumination side for electrical connection. These methods provide a solution to a key packaging problem for backside thinned imagers. The methods also provide alignment marks for integrating color filters and microlenses to the imager pixels residing on the frontside of the wafer, enabling high performance multispectral and high sensitivity imagers, including those with extremely small pixel pitch. In addition, the methods incorporate a passivation layer for protection of devices against external contamination, and allow interface trap density reduction via thermal annealing. Backside-illuminated imagers with illumination side electrical connections are also disclosed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruner, Sol
2012-01-20
The primary focus of the grant is the development of new x-ray detectors for biological and materials work at synchrotron sources, especially Pixel Array Detectors (PADs), and the training of students via research applications to problems in biophysics and materials science using novel x-ray methods. This Final Progress Report provides a high-level overview of the most important accomplishments. These major areas of accomplishment include: (1) Development and application of x-ray Pixel Array Detectors; (2) Development and application of methods of high pressure x-ray crystallography as applied to proteins; (3) Studies on the synthesis and structure of novel mesophase materials derived from block co-polymers.
Multi-frame partially saturated images blind deconvolution
NASA Astrophysics Data System (ADS)
Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2016-12-01
When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by the saturated pixels separately by modeling a weighting matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that restored images have richer details and fewer negative effects compared to state-of-the-art methods.
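The saturation-handling step can be made concrete with a small sketch. The snippet below is an assumption-laden illustration, not the paper's method: it builds a per-pixel weight mask that zeroes out saturated pixels and their neighborhood (the PSF support), so that a weighted data-fidelity term would ignore clipped measurements during deconvolution.

```python
import numpy as np

def saturation_weights(blurred, kernel_size, sat_level=0.99):
    """Weight mask excluding saturated pixels and their neighborhood
    (the kernel support) from the deconvolution data-fidelity term."""
    sat = blurred >= sat_level
    # Dilate the saturated region by the kernel half-width so every pixel
    # whose blurred value depends on a clipped pixel is down-weighted.
    r = kernel_size // 2
    padded = np.pad(sat, r, mode="constant")
    h, w = blurred.shape
    dilated = np.zeros_like(sat)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            dilated |= padded[r + dy: r + dy + h, r + dx: r + dx + w]
    return (~dilated).astype(float)

img = np.zeros((8, 8))
img[3, 3] = 1.0          # a single saturated (clipped) pixel
w = saturation_weights(img, kernel_size=3, sat_level=1.0)
```

For a 3x3 kernel, the 3x3 block around the clipped pixel gets weight 0 and every other pixel keeps weight 1; the paper's weighting matrix is updated per iteration rather than fixed like this.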
Using DSLR cameras in digital holography
NASA Astrophysics Data System (ADS)
Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge
2017-08-01
In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worth exploring. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from that in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object-replication problem reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical deduction of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.
Satellite observations of turbidity in the Dead Sea
NASA Astrophysics Data System (ADS)
Nehorai, R.; Lensky, I. M.; Hochman, L.; Gertman, I.; Brenner, S.; Muskin, A.; Lensky, N. G.
2013-06-01
A methodology to attain the daily variability of turbidity in the Dead Sea by means of remote sensing was developed. 250 m/pixel Moderate Resolution Imaging Spectroradiometer (MODIS) surface reflectance data were used to characterize the seasonal cycle of turbidity and the plume spreading generated by flood events in the lake. Fifteen-minute-interval images from the Meteosat Second Generation 1.6 km/pixel high-resolution visible (HRV) channel were used to monitor daily variations of turbidity. The HRV reflectance was normalized throughout the day to correct for the changing geometry and then calibrated against available MODIS surface reflectance. Finally, hourly averaged reflectance maps are presented for summer and winter. The results show that turbidity is concentrated along the silty shores of the lake and the southern embayments, with a gradual decrease of turbidity values from the shoreline toward the center of the lake. This pattern is most pronounced following nighttime hours of intense winds. A few hours after the winds calm, the concentric turbidity pattern fades. In situ and remote sensing observations show a clear relation between wind intensity, wave amplitude and water turbidity. In summer and winter similar concentric turbidity patterns are observed, but with a much narrower structure in winter. A simple Lagrangian trajectory model suggests that the combined effects of horizontal transport and vertical mixing of suspended particles lead to more effective mixing in winter. The dynamics of suspended matter contributions from winter desert floods are also presented in terms of hourly turbidity maps showing the spreading of the plumes and their decay.
The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Chen, M.; Wang, X.; Dou, A.; Wu, X.
2018-04-01
The seismic damage information of buildings extracted from remote sensing (RS) imagery is meaningful for supporting relief efforts and effectively reducing the losses caused by an earthquake. Both traditional pixel-based and object-oriented methods have shortcomings in extracting object information. Pixel-based methods cannot make full use of the contextual information of objects. Object-oriented methods face the problems that image segmentation is often not ideal and that the choice of feature space is difficult. In this paper, a new strategy is proposed that combines a Convolutional Neural Network (CNN) with image segmentation to extract building damage information from remote sensing imagery. The key idea of this method includes two steps: first, the CNN is used to predict the probability of each pixel; then the probabilities are integrated within each segmentation spot. The method is tested by extracting collapsed and uncollapsed buildings from aerial imagery acquired over Longtoushan Town after the Ms 6.5 Ludian earthquake in Yunnan Province. The results show that the proposed method is effective in extracting building damage information after an earthquake.
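The second step, integrating per-pixel CNN probabilities within each segmentation spot, is simple enough to sketch. This is an illustrative reconstruction under the assumption that integration means averaging the probabilities inside each spot and thresholding the mean; the toy probability map and segment labels are invented.

```python
import numpy as np

def integrate_by_segment(prob_map, segments, threshold=0.5):
    """Average the per-pixel CNN probabilities inside each segmentation
    spot and label the whole spot as damaged if the mean exceeds the
    threshold."""
    labels = {}
    for seg_id in np.unique(segments):
        mean_p = prob_map[segments == seg_id].mean()
        labels[seg_id] = int(mean_p > threshold)
    return labels

# Toy example: two segments, one dominated by high "collapsed" probabilities
prob = np.array([[0.9, 0.8, 0.2],
                 [0.7, 0.9, 0.1],
                 [0.8, 0.6, 0.2]])
segs = np.array([[0, 0, 1],
                 [0, 0, 1],
                 [0, 0, 1]])
out = integrate_by_segment(prob, segs)
```

Averaging over a spot suppresses isolated pixel-level errors, which is the stated motivation for combining the CNN with segmentation rather than using per-pixel predictions directly.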
NASA Astrophysics Data System (ADS)
Wang, Shuai; Sun, Huayan; Guo, Huichao
2018-01-01
To address the problem of beam scanning with a low-resolution APD array in three-dimensional imaging, a beam-scanning method based on a liquid crystal phase spatial light modulator is proposed to realize high-resolution imaging with a low-resolution APD array. First, the liquid crystal phase spatial light modulator is used to generate a beam array, which is then scanned. Since the sub-beam divergence angle in the beam array is smaller than the field angle of a single pixel in the APD array, each APD pixel responds only to the three-dimensional information at the beam illumination position. Through scanning of the beam array, a single pixel collects the target's three-dimensional information multiple times, thereby improving the resolution of the APD detector. Finally, the algorithm is simulated in MATLAB using two-dimensional scalar diffraction theory, realizing splitting and scanning with a resolution of 5 x 5. The feasibility is verified theoretically.
Crystallization mosaic effect generation by superpixels
NASA Astrophysics Data System (ADS)
Xie, Yuqi; Bo, Pengbo; Yuan, Ye; Wang, Kuanquan
2015-03-01
Art effect generation from digital images using computational tools has been a hot research topic in recent years. We propose a new method for generating crystallization mosaic effects from color images. Two key problems in generating a pleasant mosaic effect are studied: grouping pixels into mosaic tiles and arranging mosaic tiles to adapt to image features. To give a visually pleasant mosaic effect, we propose to create mosaic tiles by pixel clustering in the feature space of color information, taking the compactness of tiles into consideration as well. Moreover, we propose a method for processing feature boundaries in images that guides the arrangement of mosaic tiles near image features. This method gives mosaic tiles of nearly uniform shape, adapting to feature lines in an aesthetic way. The new approach considers both the color distance and the Euclidean distance of pixels, and is thus capable of producing mosaic tiles in a more pleasing manner. Experiments are included to demonstrate the computational efficiency of the present method and its capability of generating visually pleasant mosaic tiles. Comparisons with existing approaches are also included to show the superiority of the new method.
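The core idea of clustering pixels by both color distance and spatial Euclidean distance is the same trade-off used in SLIC-style superpixels. The sketch below illustrates that combined metric only; it is not the paper's formulation, and the parameter names (s for the sampling interval, m for the compactness weight) follow the SLIC convention rather than anything stated in the abstract.

```python
import numpy as np

def combined_distance(pixel_lab, pixel_xy, center_lab, center_xy, s, m=10.0):
    """SLIC-style distance mixing color distance and spatial Euclidean
    distance; s is the sampling interval, m weights spatial compactness."""
    d_c = np.linalg.norm(pixel_lab - center_lab)   # color-space distance
    d_s = np.linalg.norm(pixel_xy - center_xy)     # image-plane distance
    return np.sqrt(d_c ** 2 + (d_s / s) ** 2 * m ** 2)

# A pixel identical in color but far away scores worse than a nearby one,
# which is what keeps clusters (tiles) spatially compact.
near = combined_distance(np.array([50., 0., 0.]), np.array([2., 2.]),
                         np.array([50., 0., 0.]), np.array([0., 0.]), s=10.0)
far = combined_distance(np.array([50., 0., 0.]), np.array([20., 20.]),
                        np.array([50., 0., 0.]), np.array([0., 0.]), s=10.0)
```

Raising m pushes the clustering toward uniformly shaped, compact tiles; lowering it lets tiles follow color features more freely, which mirrors the compactness-versus-feature-adaptation balance the paper describes.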
Breast cancer mitosis detection in histopathological images with spatial feature extraction
NASA Astrophysics Data System (ADS)
Albayrak, Abdülkadir; Bilgin, Gökhan
2013-12-01
In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As a main part of this study, the Haralick texture descriptor has been proposed with different spatial window sizes in the RGB and La*b* color spaces. In this way, the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. The extracted features are compared over various sample sizes by Support Vector Machines using k-fold cross-validation. The results show that the separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
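Haralick descriptors are computed from a grey-level co-occurrence matrix (GLCM) over a spatial window. The following is a minimal from-scratch sketch, not the study's feature extractor: it assumes intensities in [0, 1], a single pixel offset, a coarse quantization, and shows only the contrast descriptor out of the Haralick set.

```python
import numpy as np

def glcm(window, levels=8, offset=(0, 1)):
    """Grey-level co-occurrence matrix for one offset, symmetric and
    normalized to sum to 1."""
    q = np.clip((window * levels).astype(int), 0, levels - 1)  # quantize
    dy, dx = offset
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1   # count co-occurring pairs
    m += m.T                                     # make symmetric
    return m / m.sum()

def haralick_contrast(p):
    """Haralick contrast: weights co-occurrences by squared level gap."""
    i, j = np.indices(p.shape)
    return np.sum((i - j) ** 2 * p)

# A uniform window has zero contrast; a 0/1 checkerboard has maximal gaps
flat = np.full((6, 6), 0.5)
check = np.indices((6, 6)).sum(axis=0) % 2 / 1.0
c_flat = haralick_contrast(glcm(flat))
c_check = haralick_contrast(glcm(check))
```

In practice one would use a library implementation (e.g. scikit-image's co-occurrence functions) and compute the full set of Haralick statistics per window and per color channel, as the study does across RGB and La*b* spaces.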
Towards a true aerosol-and-cloud retrieval scheme
NASA Astrophysics Data System (ADS)
Thomas, Gareth; Poulsen, Caroline; Povey, Adam; McGarragh, Greg; Jerg, Matthias; Siddans, Richard; Grainger, Don
2014-05-01
The Optimal Retrieval of Aerosol and Cloud (ORAC) - formerly the Oxford-RAL Aerosol and Cloud retrieval - offers a framework that can provide consistent and well-characterised properties of both aerosols and clouds from a range of imaging satellite instruments. Several practical issues stand in the way of achieving the potential of this combined scheme, however; in particular, the sometimes conflicting priorities and requirements of the aerosol and cloud retrieval problems, and the question of unambiguously identifying aerosol and cloud pixels. This presentation will describe recent developments made to the ORAC scheme for both aerosol and cloud, and detail how these are being integrated into a single retrieval framework. The implementation of a probabilistic method for pixel identification will also be presented, for both cloud detection and aerosol/cloud type selection. The method is based on Bayesian techniques applied to the optimal estimation retrieval output of ORAC and is particularly aimed at providing additional information in the so-called "twilight zone", where pixels cannot be unambiguously identified as either aerosol or cloud and traditional cloud or aerosol products do not provide results.
Kriging in the Shadows: Geostatistical Interpolation for Remote Sensing
NASA Technical Reports Server (NTRS)
Rossi, Richard E.; Dungan, Jennifer L.; Beck, Louisa R.
1994-01-01
It is often useful to estimate obscured or missing remotely sensed data. Traditional interpolation methods, such as nearest-neighbor or bilinear resampling, do not take full advantage of the spatial information in the image. An alternative method, a geostatistical technique known as indicator kriging, is described and demonstrated using a Landsat Thematic Mapper image in southern Chiapas, Mexico. The image was first classified into pasture and nonpasture land cover. For each pixel that was obscured by cloud or cloud shadow, the probability that it was pasture was assigned by the algorithm. An exponential omnidirectional variogram model was used to characterize the spatial continuity of the image for use in the kriging algorithm. Assuming a cutoff probability level of 50%, the error was shown to be 17% with no obvious spatial bias but with some tendency to categorize nonpasture as pasture (overestimation). While this is a promising result, the method's practical application in other missing data problems for remotely sensed images will depend on the amount and spatial pattern of the unobscured pixels and missing pixels and the success of the spatial continuity model used.
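For reference, the exponential omnidirectional variogram model mentioned above has a simple closed form; the nugget, sill, and range values below are illustrative, not those fitted to the Chiapas image:

```python
import math

def exponential_variogram(h, nugget=0.05, sill=0.25, rng=300.0):
    """Exponential variogram model:
        gamma(h) = nugget + (sill - nugget) * (1 - exp(-3h / range)),
    with gamma(0) = 0. The factor 3 makes `rng` the practical range at
    which gamma reaches ~95% of the sill; conventions vary by package."""
    if h == 0:
        return 0.0
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * h / rng))
```

In indicator kriging, this fitted model supplies the weights that turn nearby unobscured indicator values (pasture / nonpasture) into a probability estimate at each cloud-obscured pixel.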
Automatic segmentation of colon glands using object-graphs.
Gunduz-Demir, Cigdem; Kandemir, Melih; Tosun, Akif Burak; Sokmensuer, Cenk
2010-02-01
Gland segmentation is an important step in automating the analysis of biopsies that contain glandular structures. However, this remains a challenging problem, as variation in staining, fixation, and sectioning procedures leads to a considerable amount of artifacts and variance in tissue sections, which may result in large variations in gland appearance. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands by making use of the organizational properties of these objects, which are quantified through the definition of object-graphs. As opposed to the previous literature, the proposed approach employs object-based information for the gland segmentation problem, instead of using pixel-based information alone. Working with images of colon tissues, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracies for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variances in tissues.
Experimental characterization of the perceptron laser rangefinder
NASA Technical Reports Server (NTRS)
Kweon, I. S.; Hoffman, Regis; Krotkov, Eric
1991-01-01
In this report, we characterize experimentally a scanning laser rangefinder that employs active sensing to acquire three-dimensional images. We present experimental techniques applicable to a wide variety of laser scanners, and document the results of applying them to a device manufactured by Perceptron. Nominally, the sensor acquires data over a 60 deg x 60 deg field of view in 256 x 256 pixel images at 2 Hz. It digitizes both range and reflectance pixels to 12 bits, providing a maximum range of 40 m and a depth resolution of 1 cm. We present methods and results from experiments to measure geometric parameters including the field of view, angular scanning increments, and minimum sensing distance. We characterize qualitatively problems caused by implementation flaws, including internal reflections and range drift over time, and problems caused by inherent limitations of the rangefinding technology, including sensitivity to ambient light and surface material. We characterize statistically the precision and accuracy of the range measurements. We conclude that the performance of the Perceptron scanner does not compare favorably with the nominal performance, that scanner modifications are required, and that further experimentation must be conducted.
NASA Astrophysics Data System (ADS)
Hao, Yudong; Zhao, Yang; Li, Dacheng
1999-11-01
Grating projection 3D profilometry has three major problems that must be handled with great care: local shadows, phase discontinuities and surface isolations. Carrying no information, shadow areas give no clue about the profile there. Phase discontinuities often baffle phase unwrappers because they may be generated for several reasons that are difficult to distinguish. Spatial phase unwrapping will inevitably fail if the object under test has surface isolations. In this paper, a complementary grating projection profilometry is reported which attempts to tackle the three aforementioned problems simultaneously. This technique involves projecting two grating patterns from both sides of the CCD camera. Phase unwrapping is carried out pixel by pixel using the two phase maps, based on the excess fraction method, which is immune to phase discontinuities and surface isolations. Complementary projection ensures that no area in the visible volume of the CCD is devoid of fringe information, although in some cases a small area of the reconstructed profile is of low accuracy compared with others. The system calibration procedures and measurement results are presented in detail, and possible improvements are discussed.
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving target detection is gaining popularity in civilian and military applications. On monitoring platforms for motion detection, low-resolution stationary cameras are increasingly being replaced by moving HD cameras mounted on UAVs. The pixels belonging to moving targets in HD video taken by a UAV are always a small minority of the frame, and the background is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents running them at the full resolution of the frame. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background, and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for the problem.
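The frame-differencing step the abstract describes can be sketched as follows. The background-registration step that aligns the previous frame (e.g. a global homography fit, which the paper runs on the GPU) is assumed to have been applied already, and `thresh` is an illustrative parameter:

```python
import numpy as np

def frame_difference_targets(prev_aligned, curr, thresh=25):
    """Detect candidate moving-target pixels by differencing the current
    frame against the previous frame after background registration.
    Inputs are 8-bit greyscale frames; returns a binary uint8 mask."""
    # widen dtype so the subtraction cannot wrap around
    diff = np.abs(curr.astype(np.int16) - prev_aligned.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

In practice the mask would be cleaned with morphological filtering and connected-component grouping before targets are reported.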
Pixel-based parametric source depth map for Cerenkov luminescence imaging
NASA Astrophysics Data System (ADS)
Altabella, L.; Boschi, F.; Spinelli, A. E.
2016-01-01
Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem caused by photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (e.g., 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multiview methods. Difficulty in the convergence of 3D algorithms can discourage the use of this technique for recovering the depth and intensity of the source. For these reasons, we developed a faster corrected 2D approach based on multispectral acquisitions that obtains the source depth and its intensity using a pixel-based fitting of the source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method and to obtain the parametric map of source depth. With this approach we obtain parametric source-depth maps with a precision between 3% and 7% for the MC simulations and 5-6% for the experimental data. Using this method we are able to obtain reliable information about the depth of the Cerenkov luminescence source with a simple and flexible procedure.
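The abstract does not detail the fitting model, but the usual way multispectral intensities constrain source depth is through wavelength-dependent attenuation. As a hedged one-pixel sketch under a simple Beer-Lambert assumption (not the paper's tissue model, and with illustrative coefficients):

```python
import math

def source_depth(i1, i2, mu1, mu2):
    """Estimate source depth d from intensities at two wavelengths,
    assuming I_k = I0 * exp(-mu_k * d) with a shared, unknown I0, so
        I1 / I2 = exp((mu2 - mu1) * d)  =>  d = ln(I1/I2) / (mu2 - mu1).
    mu1, mu2 are effective attenuation coefficients (1/mm); all values
    in this sketch are illustrative, not measured optical properties."""
    return math.log(i1 / i2) / (mu2 - mu1)
```

A per-pixel fit over more than two spectral bands, as in the paper, overdetermines `d` and improves precision against noise.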
Image model: new perspective for image processing and computer vision
NASA Astrophysics Data System (ADS)
Ziou, Djemel; Allili, Madjid
2004-05-01
We propose a new image model in which the image support and image quantities are modeled using algebraic topology concepts. The image support is viewed as a collection of chains encoding combinations of pixels grouped by dimension, with the boundary operators linking the different dimensions. Image quantities are encoded using the notion of a cochain, which associates with the pixels of a given dimension values that can be scalar, vector, or tensor, depending on the problem considered. This allows algebraic equations to be obtained directly from the physical laws. The coboundary and codual operators, which are generic operations on cochains, make it possible to formulate the classical differential operators, as applied to field functions and differential forms, in both global and local forms. This image model makes the association between the image support and the image quantities explicit, which results in several advantages: it allows the derivation of efficient algorithms that operate in any dimension, and the unification of mathematics and physics to solve classical problems in image processing and computer vision. We show the effectiveness of this model by considering isotropic diffusion.
Cao, Qiongmin; Yuan, Guoan; Yin, Lijie; Chen, Dezhen; He, Pinjing; Wang, Hai
2016-12-01
In this research, morphological techniques were used to characterize the dechlorination process of PVC in mixed waste plastics, and two important factors influencing this process, namely the proportion of PVC in the mixed plastics and the heating rate adopted in the pyrolysis process, were investigated. During pyrolysis of mixed plastics containing PVC, the morphologic characteristics describing PVC dechlorination behavior were obtained with the help of a high-speed infrared camera and image processing tools. At the same time, the emission of hydrogen chloride (HCl) was monitored to determine the start and end of HCl release. The PVC content in the mixed plastics varied from 0% to 12% by mass, and the heating rate was varied from 10 to 60 °C/min. The morphologic parameters "bubble ratio" (BR) and "pixel area" (PA) were found to have obvious features matching the PVC dechlorination process, and can therefore be used to characterize the dechlorination of PVC alone and in mixed plastics. It was also found that the shape of the HCl emission curve is independent of the PVC proportion in the mixed plastics but shifts to the right with elevated heating rate, all of which is quantitatively reflected in the morphologic parameter vs. temperature curves. Copyright © 2016 Elsevier Ltd. All rights reserved.
Radiotracer Technology in Mixing Processes for Industrial Applications
Othman, N.; Kamarudin, S. K.
2014-01-01
Many problems associated with the mixing process remain unsolved and result in poor mixing performance. The residence time distribution (RTD) and the mixing time are the most important parameters determining the homogenisation achieved in a mixing vessel, and they are discussed in detail in this paper. In addition, this paper reviews the current problems associated with conventional tracers, mathematical models, and the computational fluid dynamics simulations involved in radiotracer experiments, as well as hybrid radiotracer techniques. PMID:24616642
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; ...
2015-05-19
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of the actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared the methods on a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced the mixing-fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
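For contrast with the probabilistic models compared above, the deterministic two-source, one-tracer case has a simple closed form:

```python
def two_source_fraction(delta_sample, delta_a, delta_b):
    """Fraction f of source A in a two-source, one-tracer linear mixing model:
        delta_sample = f * delta_a + (1 - f) * delta_b.
    A minimal deterministic sketch; the probabilistic approaches in the
    study (SIAR, PMC, SIRS) additionally propagate source-composition
    uncertainty and handle more than two sources."""
    return (delta_sample - delta_b) / (delta_a - delta_b)
```

The ill-posedness the paper addresses appears as soon as the number of sources exceeds the number of tracers plus one, at which point the fractions are no longer uniquely determined and probabilistic treatment becomes necessary.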
NASA Astrophysics Data System (ADS)
Wang, Min
2017-06-01
This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. We then establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. On the other hand, we generalize the corresponding results for the generalized mixed variational inequality problem in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.
Unsupervised motion-based object segmentation refined by color
NASA Astrophysics Data System (ADS)
Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris
2003-06-01
For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low complexity solution. For still images, several approaches exist based on colour, but these fall short in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation, with many segments covering each single physical object. Other colour segmentation approaches limit the number of segments in some way to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real world object segmentation, because real world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because efficient motion estimators such as the 3DRS block matcher lack sufficient resolution, the resulting segmentation is not at pixel resolution but at block resolution. Existing pixel resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less closely to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near the edges of homogeneously coloured areas.
On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance that a block is unique and thus decrease the chance of a wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other, by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge, few methods exist which adopt this approach. One example is [meshrefine]. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, the method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects. NEW METHOD As mentioned above, we start with motion segmentation and afterwards refine the edges of this segmentation with a pixel resolution colour segmentation method. There are several reasons for this approach: + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous. + This approach restricts the computationally expensive pixel resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
+ The motion cue alone is often enough to reliably distinguish objects from one another and the background. To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known for its ability to estimate motion vectors which closely resemble the true motion. BLOCK-BASED MOTION SEGMENTATION As mentioned above, we start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known K-means segmentation method [K-means]. Several other methods (e.g. [kmeansc]) adapt K-means for connectedness by adding a weighted shape error. This adds the additional difficulty of finding the correct weights for the shape parameters. Also, these methods often bias one particular pre-defined shape. The presented method, which we call K-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation method to such a degree that it allows the method to use least squares for the robust fitting of affine motion models for each segment. Contrary to [parmkm], the segmentation step still operates on vectors instead of model parameters. To make sure the segmentation is temporally consistent, the segmentation of the previous frame is used as initialisation for every new frame. We also present a scheme which makes the algorithm independent of the initially chosen number of segments. COLOUR-BASED INTRA-BLOCK SEGMENTATION The block resolution motion-based segmentation forms the starting point for the pixel resolution segmentation. The pixel resolution segmentation is obtained from the block resolution segmentation by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in either one of two neighbouring blocks that belong to different clusters.
This assumption allows us to do the pixel resolution segmentation on each pair of such neighbouring blocks separately. Because of the local nature of the segmentation, it largely avoids problems with heterogeneously coloured areas. Because no new segments are introduced in this step, it also does not suffer from oversegmentation problems. The presented method has no problems with bifurcations. For the pixel resolution segmentation itself, we reclassify pixels such that we optimize an error norm which favours similarly coloured regions and straight edges. SEGMENTATION MEASURE To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we decided to define a ground truth output which we find desirable for a given input. We define the measure for the segmentation quality as how different the segmentation is from the ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately. Also, it allows us to evaluate which parts of a frame suffer from oversegmentation or undersegmentation. The proposed algorithm has been tested on several typical sequences. CONCLUSIONS In this abstract we presented a new video segmentation method which performs well in the segmentation of multiple independently moving foreground objects from each other and the background. It combines the strong points of both colour and motion segmentation in the way we expected. One of the weak points is that the segmentation method suffers from undersegmentation when adjacent objects display similar motion. In sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.
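The least-squares fitting of an affine motion model to each segment's block vectors, mentioned in the K-regions description above, can be sketched as follows. This is a hedged illustration of the model-fitting step only, outside the segmentation loop:

```python
import numpy as np

def fit_affine_motion(positions, vectors):
    """Least-squares fit of an affine motion model v = A @ x + b to the
    motion vectors of one segment.
    positions, vectors: (N, 2) arrays of block centres and their estimated
    motion vectors. Returns A (2x2) and b (2,)."""
    # design matrix rows [x, y, 1]; solve X @ P = vectors for P (3x2)
    X = np.hstack([positions, np.ones((len(positions), 1))])
    P, *_ = np.linalg.lstsq(X, vectors, rcond=None)
    return P[:2].T, P[2]
```

Blocks whose vectors deviate strongly from the fitted model are natural candidates for reassignment to a neighbouring segment, which is how the model fit and the segmentation reinforce each other.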
Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology
NASA Astrophysics Data System (ADS)
Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, where it models interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method on renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. The utility of our method extends to fields such as oncology, genomics, and non-biological problems.
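The Cantor pairing function used above to collapse node pairs during graph simplification has a simple closed form and an exact inverse:

```python
import math

def cantor_pair(k1, k2):
    """Cantor pairing: a bijection from pairs of non-negative integers to a
    single non-negative integer, so a node pair can be stored as one label."""
    s = k1 + k2
    return s * (s + 1) // 2 + k2

def cantor_unpair(z):
    """Inverse of the pairing: recover (k1, k2) from z."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    k2 = z - t
    return w - k2, k2
```

Because the map is bijective, merged node labels can be unpacked again without any auxiliary lookup table, which is what makes it attractive for cheaply shrinking the graph before clustering.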
Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Linderoth
2011-11-06
The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Portigal, F.P.; Harrill, R.W.
1996-08-01
Disregard for environmental issues, rising population and the ravages of war have left Honduras with an environmental catastrophe. Vast regions of southern Honduras have been denuded of vegetation causing rapid desertification. This has resulted in decreasing rainfall, falling agricultural yields and food shortages. Deforestation is accelerating in La Mosquitia, a vast region virtually without roads in the northeast extending into Nicaragua. This is the home of the indigenous Garifuna, Miskito, Pech and Tawahkan people and is the largest unbroken tropical forest in Central America. Increasing demand for resources, incursion by Ladino peasant settlers, poverty and the cattle industry are pushing the colonization fronts deeper into La Mosquitia. Linear mixing models are used to identify subtle evidence of the forward fringes of the Tawahka Reserve colonization front.
NASA Technical Reports Server (NTRS)
Chen, Zhangxin; Ewing, Richard E.
1996-01-01
Multigrid algorithms for nonconforming and mixed finite element methods for second order elliptic problems on triangular and rectangular finite elements are considered. The construction of several coarse-to-fine intergrid transfer operators for nonconforming multigrid algorithms is discussed. The equivalence between the nonconforming and mixed finite element methods with and without projection of the coefficient of the differential problems into finite element spaces is described.
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Fusing a lower-resolution color image with a higher-resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame containing the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm, based on widely available, robust, and simple numerical methods, that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
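As a hedged illustration of pixel-level fusion by least squares, consider an unconstrained objective in the spirit of the formulation above (this is an illustrative objective with a closed-form minimizer, not the authors' exact constrained problem):

```python
import numpy as np

def fuse_pixel(color, mono, lam=4.0):
    """Per-pixel least-squares fusion sketch: find x minimising
        ||x - color||^2 + lam * (mean(x) - mono)^2,
    i.e. stay close to the low-resolution colour while matching the
    high-resolution monochrome intensity (channel mean used as a crude
    luminance). color: (..., 3) float array, mono: (...) float array.
    Setting the gradient to zero gives, per channel,
        x = color - (lam/3) * (xbar - mono),
    with xbar = (mean(color) + (lam/3)*mono) / (1 + lam/3)."""
    cbar = color.mean(axis=-1)
    a = lam / 3.0
    xbar = (cbar + a * mono) / (1.0 + a)
    return color - (a * (xbar - mono))[..., None]
```

Because the minimizer is computed independently at every pixel, the scheme is embarrassingly parallel, which is the property the paper exploits.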
Land cover mapping at sub-pixel scales
NASA Astrophysics Data System (ADS)
Makido, Yasuyo Kato
One of the biggest drawbacks of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial detail depicted in an image. Fine spatial resolution images from satellite sensors such as IKONOS and QuickBird are now available. However, these images are not suitable for large-area studies, since a single image covers only a small area and is therefore costly for large-area studies. Much research has focused on attempting to extract land cover types at the sub-pixel scale, but little research has been conducted concerning the spatial allocation of land cover types within a pixel. This study is devoted to the development of new algorithms for predicting land cover distribution at the sub-pixel level from remotely sensed imagery. The "pixel-swapping" optimization algorithm, proposed by Atkinson for predicting sub-pixel land cover distribution, is investigated in this study. Two limitations of this method, the arbitrary spatial range value and the arbitrary exponential model of spatial autocorrelation, are assessed. Various weighting functions, as alternatives to the exponential model, are evaluated in order to derive the optimum weighting function. Two different simulation models were employed to develop spatially autocorrelated binary class maps. In all tested models (Gaussian, Exponential, and IDW), the pixel-swapping method improved classification accuracy compared with the initial random allocation of sub-pixels. However, the results suggested that equal weights could be used instead of these more complex models of spatial structure to increase accuracy and sub-pixel spatial autocorrelation. New algorithms for modeling the spatial distribution of multiple land cover classes at sub-pixel scales are developed and evaluated. Three methods are examined: sequential categorical swapping, simultaneous categorical swapping, and simulated annealing.
These three methods are applied to classified Landsat ETM+ data that has been resampled to 210 m. The results suggested that the simultaneous method can be considered the optimum method in terms of accuracy and computation time. The case study employs remote sensing imagery at the following sites: tropical forests in Brazil and a temperate mixed land-cover mosaic in East China. Sub-areas of both sites are used to examine how the characteristics of the landscape affect the performance of the optimum technique. Three measurements, Moran's I, mean patch size (MPS), and patch size standard deviation (STDEV), are used to characterize the landscape. All results suggested that this technique can increase classification accuracy more than traditional hard classification. The methods developed in this study can benefit researchers who employ coarse remote sensing imagery but are interested in detailed landscape information. In many cases, a satellite sensor that provides large spatial coverage has insufficient spatial detail to identify landscape patterns. Application of the super-resolution technique described in this dissertation could potentially solve this problem by providing detailed land cover predictions from coarse-resolution satellite sensor imagery.
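Atkinson's pixel-swapping step, which the dissertation above extends to multiple classes, can be sketched for the binary case with equal neighbour weights (the uniform weighting the study found competitive with distance-decay models). Block boundaries play the role of coarse pixels, so class proportions inside each coarse pixel are preserved; `np.roll` wraps at image edges as a simplification:

```python
import numpy as np

def attractiveness(grid, window=1):
    """Sum of neighbouring binary values in a square window (equal weights)."""
    att = np.zeros(grid.shape)
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            if dy == 0 and dx == 0:
                continue
            att += np.roll(np.roll(grid, dy, 0), dx, 1)
    return att

def pixel_swap_once(grid, block=4):
    """One pixel-swapping pass (after Atkinson): inside each coarse pixel
    (a block x block tile), swap the least-attractive '1' with the
    most-attractive '0' when that increases spatial autocorrelation.
    A binary sketch; the dissertation's categorical variants generalise it."""
    grid = grid.copy()
    att = attractiveness(grid)
    for by in range(0, grid.shape[0], block):
        for bx in range(0, grid.shape[1], block):
            sub = grid[by:by + block, bx:bx + block]
            a = att[by:by + block, bx:bx + block]
            ones = np.argwhere(sub == 1)
            zeros = np.argwhere(sub == 0)
            if len(ones) == 0 or len(zeros) == 0:
                continue  # nothing to swap in a pure block
            worst1 = ones[np.argmin(a[tuple(ones.T)])]
            best0 = zeros[np.argmax(a[tuple(zeros.T)])]
            if a[tuple(best0)] > a[tuple(worst1)]:
                sub[tuple(worst1)], sub[tuple(best0)] = 0, 1
    return grid
```

Iterating this pass until no swaps occur clusters each class spatially while the coarse-pixel abundance fractions from the spectral unmixing stay fixed.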
General Tricomi-Rassias problem and oblique derivative problem for generalized Chaplygin equations
NASA Astrophysics Data System (ADS)
Wen, Guochun; Chen, Dechang; Cheng, Xiuzhen
2007-09-01
Many authors have discussed the Tricomi problem for some second order equations of mixed type, which has important applications in gas dynamics. In particular, Bers proposed the Tricomi problem for Chaplygin equations in multiply connected domains [L. Bers, Mathematical Aspects of Subsonic and Transonic Gas Dynamics, Wiley, New York, 1958]. And Rassias proposed the exterior Tricomi problem for mixed equations in a doubly connected domain and proved the uniqueness of solutions for the problem [J.M. Rassias, Lecture Notes on Mixed Type Partial Differential Equations, World Scientific, Singapore, 1990]. In the present paper, we discuss the general Tricomi-Rassias problem for generalized Chaplygin equations. This is a general oblique derivative problem that includes the exterior Tricomi problem as a special case. We first give the representation of solutions of the general Tricomi-Rassias problem, and then prove the uniqueness and existence of solutions for the problem by a new method. We shall also discuss another general oblique derivative problem for generalized Chaplygin equations.
Forward model with space-variant of source size for reconstruction on X-ray radiographic image
NASA Astrophysics Data System (ADS)
Liu, Jin; Liu, Jun; Jing, Yue-feng; Xiao, Bo; Wei, Cai-hua; Guan, Yong-hong; Zhang, Xuan
2018-03-01
The Forward Imaging Technique is a method for solving the inverse problem of density reconstruction in radiographic imaging. In this paper, we introduce a forward projection equation (the IFP model) for a radiographic system with areal source blur and detector blur. The forward projection equation, based on X-ray tracing, is combined with the Constrained Conjugate Gradient method to form a new method for density reconstruction. We demonstrate the effectiveness of the new technique by reconstructing density distributions from simulated and experimental images. We show that for radiographic systems with source sizes larger than the pixel size, the effect of blur on the density reconstruction is reduced through our method and can be controlled within one or two pixels. The method is also suitable for reconstruction of non-homogeneous objects.
Cheating prevention in visual cryptography.
Hu, Chih-Ming; Tzeng, Wen-Guey
2007-01-01
Visual cryptography (VC) is a method of encrypting a secret image into shares such that stacking a sufficient number of shares reveals the secret image. Shares are usually presented as transparencies, with each participant holding one. Most previous research on VC focuses on improving two parameters: pixel expansion and contrast. In this paper, we study the cheating problem in VC and extended VC. We consider attacks by malicious adversaries who may deviate from the scheme in any way. We present three cheating methods and apply them to attack existing VC or extended VC schemes. We improve one cheat-preventing scheme, and we propose a generic method that converts a VCS to another VCS with the property of cheating prevention. The overhead of the conversion is near optimal in both contrast degradation and pixel expansion.
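For readers unfamiliar with VC, the basic (2,2) Naor-Shamir scheme that such cheating analyses build on can be sketched as follows: each secret pixel expands into two subpixels per share, a white pixel yields identical random subpixel patterns on both shares, a black pixel yields complementary patterns, and stacking transparencies acts as a pixel-wise OR of black subpixels. This is a textbook illustration, not the authors' cheat-preventing construction.

```python
import numpy as np

def vc_shares(secret, rng=None):
    """(2,2) visual cryptography with pixel expansion 2 (1 = black).
    White secret pixel -> same pattern on both shares; black -> complement."""
    rng = np.random.default_rng(rng)
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=int)
    s2 = np.zeros((h, 2 * w), dtype=int)
    for y in range(h):
        for x in range(w):
            pat = [0, 1] if rng.integers(2) else [1, 0]  # one black subpixel
            s1[y, 2 * x: 2 * x + 2] = pat
            s2[y, 2 * x: 2 * x + 2] = pat if secret[y, x] == 0 else [1 - p for p in pat]
    return s1, s2

def stack(s1, s2):
    """Stacking transparencies is a pixel-wise OR of the black subpixels."""
    return s1 | s2
```

Each share alone shows exactly one black subpixel per secret pixel regardless of its value, so a single transparency leaks nothing; stacked, white pixels stay half-black while black pixels become fully black, which is the contrast the scheme trades for security.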
Luma-chroma space filter design for subpixel-based monochrome image downsampling.
Fang, Lu; Au, Oscar C; Cheung, Ngai-Man; Katsaggelos, Aggelos K; Li, Houqiang; Zou, Feng
2013-10-01
In general, subpixel-based downsampling can achieve higher apparent resolution of the down-sampled images on LCD or OLED displays than pixel-based downsampling. With the frequency domain analysis of subpixel-based downsampling, we discover special characteristics of the luma-chroma color transform choice for monochrome images. With these, we model the anti-aliasing filter design for subpixel-based monochrome image downsampling as a human visual system-based optimization problem with a two-term cost function and obtain a closed-form solution. One cost term measures the luminance distortion and the other term measures the chrominance aliasing in our chosen luma-chroma space. Simulation results suggest that the proposed method can achieve sharper down-sampled gray/font images compared with conventional pixel and subpixel-based methods, without noticeable color fringing artifacts.
A phase-based stereo vision system-on-a-chip.
Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia
2007-02-01
A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase unwrapping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640×480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
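The phase-measurement principle behind such systems can be illustrated in 1-D: the local phase of each signal is taken from a complex Gabor filter response, and disparity follows from the wrapped phase difference divided by the filter's radian frequency, giving sub-pixel shifts without explicit unwrapping. A minimal sketch, not the FPGA implementation; the filter parameters below are illustrative assumptions.

```python
import numpy as np

def gabor_phase(signal, f0, sigma=8.0):
    """Local phase from a complex Gabor filter response."""
    t = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * f0 * t)
    return np.angle(np.convolve(signal, kernel, mode="same"))

def phase_disparity(left, right, f0=0.1):
    """Sub-pixel disparity d = wrap(phi_L - phi_R) / (2*pi*f0)."""
    dphi = gabor_phase(left, f0) - gabor_phase(right, f0)
    dphi = np.angle(np.exp(1j * dphi))   # wrap to (-pi, pi]: no unwrapping pass
    return dphi / (2 * np.pi * f0)
```

Wrapping the phase difference instead of unwrapping each phase map is what keeps the computation local and pipeline-friendly, at the cost of an unambiguous range of half a filter wavelength.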
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
NASA Astrophysics Data System (ADS)
Anugu, N.; Garcia, P.
2016-04-01
Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position of the maximum correlation gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards integer pixels; these errors are known as systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although its systematic errors are large. No single algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching.
The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel grid, limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The sub-pixel-grid region-of-interest images are generated with bi-cubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography (Sjödahl 1994); here the technique is applied to solar wavefront sensing. A large dynamic range and better measurement accuracy are achieved by combining original-pixel-grid correlation matching over a large field of view with sub-pixel interpolated-grid correlation matching within a small field of view. The results revealed that the proposed method outperforms all the peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5-times-improved image sampling is used, at the expense of twice the computational cost. With the 5-times-improved image sampling, the wavefront accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wavefront sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. By choosing an appropriate increment of image sampling, trading off computational speed against the desired sub-pixel image-shift accuracy, it can also be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source, a laser guide star, and a Galactic Center extended scene).
The results are planned to be submitted to the journal Optics Express.
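The two-step coarse-to-fine matching above can be sketched in 1-D: step 1 finds the integer-pixel correlation peak over the full search range, and step 2 repeats the matching on a 1/upsample-pixel grid confined to one pixel around that peak. Catmull-Rom cubic interpolation stands in here for the paper's bicubic resampling, and the names and parameters are ours.

```python
import numpy as np

def cubic_sample(sig, pos):
    """Catmull-Rom cubic interpolation at a fractional position
    (a 1-D stand-in for the paper's bicubic resampling)."""
    i = int(np.floor(pos))
    t = pos - i
    p0, p1, p2, p3 = sig[np.clip([i - 1, i, i + 1, i + 2], 0, len(sig) - 1)]
    return 0.5 * (2 * p1 + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def best_shift(ref, tgt, upsample=5, coarse_range=4):
    """Two-step shift estimation: integer-pixel correlation peak first,
    then a sub-pixel grid search restricted to +/-1 pixel around it."""
    n = len(ref)
    def score(s):
        return sum(ref[k] * cubic_sample(tgt, k + s) for k in range(n))
    coarse = range(-coarse_range, coarse_range + 1)
    s0 = max(coarse, key=score)                      # step 1: 1-pixel grid
    fine = [s0 + k / upsample for k in range(-upsample, upsample + 1)]
    return max(fine, key=score)                      # step 2: sub-pixel grid
```

Restricting the fine search to a small window around the coarse peak is what keeps the cost to roughly twice that of the plain integer-pixel correlation.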
Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm
NASA Astrophysics Data System (ADS)
Kania, Adhe; Sidarto, Kuntjoro Adji
2016-02-01
Many engineering and practical problems can be modeled as mixed integer nonlinear programs. This paper proposes to solve such problems with a modification of the spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.
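The spiral dynamics update of Tamura and Yasuda rotates and contracts a population of points around the current best point, x_i(k+1) = x* + r R(θ)(x_i(k) − x*). The sketch below adds a simple rounding repair for the integer variables, which is one plausible way to adapt the method to mixed integer problems; it is our illustration, not necessarily the paper's exact modification.

```python
import numpy as np

def spiral_optimize(f, bounds, int_mask, n=30, iters=200, r=0.95,
                    theta=np.pi / 4, seed=0):
    """Spiral dynamics optimization with rounding of integer variables.
    Each point spirals toward the current best; coordinates flagged in
    int_mask are rounded (and all are clipped to bounds) before evaluation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = len(lo)
    # rotation acting on consecutive coordinate pairs (identity on leftovers)
    R = np.eye(d)
    c, s = np.cos(theta), np.sin(theta)
    for i in range(0, d - 1, 2):
        R[i:i + 2, i:i + 2] = [[c, -s], [s, c]]
    pts = rng.uniform(lo, hi, size=(n, d))

    def repair(x):
        x = np.clip(x, lo, hi)
        return np.where(int_mask, np.round(x), x)

    best = min((repair(p) for p in pts), key=f)
    for _ in range(iters):
        pts = best + r * (R @ (pts - best).T).T   # rotate and contract
        cand = min((repair(p) for p in pts), key=f)
        if f(cand) < f(best):
            best = cand
    return best, f(best)
```

On a convex test function with one integer coordinate, the population sweeps decreasing-radius spirals around each incumbent best, so the search densely samples its neighbourhood while the rounding keeps integer feasibility.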
GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.
Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N
2018-01-01
Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
1992-08-01
limits of these topics will be included. Digital SAR processing is indispensable for SAR. Theories and specific algorithms will be given, along with basic processor configurations. If N independent pixel values are added, it follows from the laws of probability theory that the mean value of the sum is identical with
Retrieval of Cloud Properties for Partially Cloud-Filled Pixels During CRYSTAL-FACE
NASA Astrophysics Data System (ADS)
Nguyen, L.; Minnis, P.; Smith, W. L.; Khaiyer, M. M.; Heck, P. W.; Sun-Mack, S.; Uttal, T.; Comstock, J.
2003-12-01
Partially cloud-filled pixels can be a significant problem for remote sensing of cloud properties. The retrieved optical depth and effective particle size are generally too small or too large, respectively, when derived from radiances that are assumed to be overcast but actually contain radiation from both clear and cloudy areas within the satellite imager field of view. This study presents a method for reducing the impact of such partially cloud-filled pixels by estimating the cloud fraction within each pixel using higher resolution visible (VIS, 0.65 μm) imager data. Although the nominal resolutions for most channels on the Geostationary Operational Environmental Satellite (GOES) imager and the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra are 4 and 1 km, respectively, both instruments also take VIS channel data at higher resolution, 1 km and 0.25 km, respectively. Thus, it may be possible to obtain an improved estimate of cloud fraction within the lower resolution pixels by using the information contained in the higher resolution VIS data. GOES and MODIS multi-spectral data, taken during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment (CRYSTAL-FACE), are analyzed with the algorithm used for the Atmospheric Radiation Measurement Program (ARM) and the Clouds and the Earth's Radiant Energy System (CERES) to derive cloud amount, temperature, height, phase, effective particle size, optical depth, and water path. Normally, the algorithm assumes that each pixel is either entirely clear or cloudy. In this study, a threshold method is applied to the higher resolution VIS data to estimate the partial cloud fraction within each low-resolution pixel. The cloud properties are then derived from the observed low-resolution radiances using the cloud cover estimate to properly extract the radiances due only to the cloudy part of the scene.
This approach is applied to both GOES and MODIS data to estimate the improvement in the retrievals for each resolution. Results are compared with the radar reflectivity techniques employed by the NOAA ETL MMCR and the PARSL 94 GHz radars located at the CRYSTAL-FACE Eastern & Western Ground Sites, respectively. This technique is most likely to yield improvements for low and midlevel layer clouds that have little thermal variability in cloud height.
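The extraction step described above follows from the linear relation R_obs = f·R_cld + (1 − f)·R_clr, so once the fraction f is estimated from the high-resolution VIS mask, the cloudy-part radiance is R_cld = (R_obs − (1 − f)·R_clr)/f. A minimal sketch of this partly-cloudy unmixing idea; the array layout, threshold, and names are our assumptions.

```python
import numpy as np

def cloudy_radiance(obs, clear, vis_hi, thresh, block=4):
    """Estimate per-pixel cloud fraction from a high-resolution VIS
    threshold mask, then remove the clear-sky contribution:
        R_obs = f*R_cld + (1-f)*R_clr  =>  R_cld = (R_obs - (1-f)*R_clr)/f."""
    mask = vis_hi > thresh                                   # high-res cloud mask
    h, w = obs.shape
    f = mask.reshape(h, block, w, block).mean(axis=(1, 3))   # fraction per pixel
    r_cld = np.full(obs.shape, np.nan)                       # NaN where no cloud
    cloudy = f > 0
    r_cld[cloudy] = (obs[cloudy] - (1 - f[cloudy]) * clear[cloudy]) / f[cloudy]
    return f, r_cld
```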
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1989-01-01
A stochastic-geometric landsurface reflectance model is formulated and tested for the parameterization of spatially variable vegetation and soil at subpixel scales using satellite multispectral images without ground truth. Landscapes are conceptualized as 3-D Lambertian reflecting surfaces consisting of plant canopies, represented by solid geometric figures, superposed on a flat soil background. A computer simulation program is developed to investigate image characteristics at various spatial aggregations representative of satellite observational scales, or pixels. The evolution of the shape and structure of the red-infrared space, or scattergram, of typical semivegetated scenes is investigated by sequentially introducing model variables into the simulation. The analytical moments of the total pixel reflectance, including the mean, variance, spatial covariance, and cross-spectral covariance, are derived in terms of the moments of the individual fractional cover and reflectance components. The moments are applied to the solution of the inverse problem: The estimation of subpixel landscape properties on a pixel-by-pixel basis, given only one multispectral image and limited assumptions on the structure of the landscape. The landsurface reflectance model and inversion technique are tested using actual aerial radiometric data collected over regularly spaced pecan trees, and using both aerial and LANDSAT Thematic Mapper data obtained over discontinuous, randomly spaced conifer canopies in a natural forested watershed. Different amounts of solar backscattered diffuse radiation are assumed and the sensitivity of the estimated landsurface parameters to those amounts is examined.
CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.
Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V
2010-12-01
We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520 elements) array of active pixel sensors, and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target analyte responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16 elements) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at a 1 kHz sampling frequency driven at 5 V. The developed prototype system demonstrates a low-cost and miniaturized luminescence multisensor system.
CMOS Imaging of Temperature Effects on Pin-Printed Xerogel Sensor Microarrays.
Lei Yao; Ka Yi Yung; Chodavarapu, Vamsy P; Bright, Frank V
2011-04-01
In this paper, we study the effect of temperature on the operation and performance of xerogel-based sensor microarrays coupled to a complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC) that images the photoluminescence response from the sensor microarray. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors, and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. A correlated double sampling circuit and a pixel address/digital control/signal integration circuit are also implemented on-chip. The CMOS imager data are read out as a serial coded signal. The sensor system uses a light-emitting diode to excite target analyte responsive organometallic luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 3 × 3 (9 elements) array of oxygen (O2) sensors. Each group of three sensor elements in the array (arranged in a column) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a mix of two O2-sensitive luminophores in each pin-printed xerogel sensor element. The CMOS imager is designed to be low noise and consumes a static power of 320.4 μW and an average dynamic power of 624.6 μW when operating at a 100-Hz sampling frequency and a 1.8-V dc power supply.
A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-01-01
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two set approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
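Empirical quantile mapping, the correction applied within each hydroclimatic area, replaces each satellite value with the gauge value found at the same empirical quantile of the reference samples. A generic sketch of the idea (not the paper's fitted, per-area implementation; names are ours):

```python
import numpy as np

def quantile_map(spp, gauge_ref, spp_ref):
    """Empirical quantile mapping: map each satellite value to its quantile
    in the satellite reference sample, then read off the gauge value at
    that same quantile."""
    spp_ref = np.sort(spp_ref)
    gauge_ref = np.sort(gauge_ref)
    q = np.searchsorted(spp_ref, spp) / len(spp_ref)       # empirical quantile
    return np.quantile(gauge_ref, np.clip(q, 0.0, 1.0))    # gauge value at q
```

With reference samples drawn from one hydroclimatic area, the mapping corrects the full distribution of the satellite product rather than just its mean, which is why grouping pixels with similar temporal distributions matters.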
Pixel detectors for x-ray imaging spectroscopy in space
NASA Astrophysics Data System (ADS)
Treis, J.; Andritschke, R.; Hartmann, R.; Herrmann, S.; Holl, P.; Lauf, T.; Lechner, P.; Lutz, G.; Meidinger, N.; Porro, M.; Richter, R. H.; Schopper, F.; Soltau, H.; Strüder, L.
2009-03-01
Pixelated semiconductor detectors for X-ray imaging spectroscopy are foreseen as key components of the payload of various future space missions exploring the X-ray sky. Located on the platform of the new Spectrum-Roentgen-Gamma satellite, the eROSITA (extended Roentgen Survey with an Imaging Telescope Array) instrument will perform an imaging all-sky survey up to an X-ray energy of 10 keV with unprecedented spectral and angular resolution. The instrument will consist of seven parallel-oriented mirror modules, each having its own pnCCD camera in the focus. The satellite-borne X-ray observatory SIMBOL-X will be the first mission to use formation-flying techniques to implement an X-ray telescope with an unprecedented focal length of around 20 m. The detector instrumentation consists of separate high- and low-energy detectors, a monolithic 128 × 128 DEPFET macropixel array and a pixelated CdZnTe detector respectively, making the energy band between 0.5 and 80 keV accessible. A similar concept is proposed for the next-generation X-ray observatory IXO. Finally, the MIXS (Mercury Imaging X-ray Spectrometer) instrument on the European Mercury exploration mission BepiColombo will use DEPFET macropixel arrays together with a small X-ray telescope to perform a spatially resolved planetary XRF analysis of Mercury's crust. Here, the mission concepts and their scientific targets are briefly discussed, and the resulting requirements on the detector devices together with the implementation strategies are shown.
NASA Astrophysics Data System (ADS)
Gao, Yuan; Ma, Jiayi; Yuille, Alan L.
2017-05-01
This paper addresses the problem of face recognition when there are only a few, or even a single, labeled examples of the face that we wish to recognize. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables such as bad lighting or wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables such as expression changes). The small number of labeled examples means that it is hard to remove these nuisance variables between the training and testing faces to obtain good recognition performance. To address the problem we propose a method called Semi-Supervised Sparse Representation based Classification (S3RC). This is based on recent work on sparsity where faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions, different glasses). The main idea is that (i) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework, and then (ii) prototype face images are estimated as a gallery dictionary via a Gaussian Mixture Model (GMM), with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We have conducted experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method delivers significantly improved performance over existing methods.
Characterization of the Hokuyo URG-04LX laser rangefinder for mobile robot obstacle negotiation
NASA Astrophysics Data System (ADS)
Okubo, Yoichi; Ye, Cang; Borenstein, Johann
2009-05-01
This paper presents a characterization study of the Hokuyo URG-04LX scanning laser rangefinder (LRF). The Hokuyo LRF is similar in function to the Sick LRF, which has been the de facto standard range sensor for mobile robot obstacle avoidance and mapping applications for the last decade. Problems with the Sick LRF are its relatively large size, weight, and power consumption, allowing its use only on relatively large mobile robots. The Hokuyo LRF is substantially smaller, lighter, and consumes less power, and is therefore more suitable for small mobile robots. The question is whether it performs just as well as the Sick LRF in typical mobile robot applications. In 2002, two of the authors of the present paper published a characterization study of the Sick LRF. For the present paper we used the exact same test apparatus and test procedures as we did in the 2002 paper, but this time to characterize the Hokuyo LRF. As a result, we are in the unique position of being able to provide not only a detailed characterization study of the Hokuyo LRF, but also to compare the Hokuyo LRF with the Sick LRF under identical test conditions. Among the tested characteristics are sensitivity to a variety of target surface properties and incidence angles, which may potentially affect the sensing performance. We also discuss the performance of the Hokuyo LRF with regard to the mixed pixels problem associated with LRFs. Lastly, the present paper provides a calibration model for improving the accuracy of the Hokuyo LRF.
Analysis of deep learning methods for blind protein contact prediction in CASP12.
Wang, Sheng; Sun, Siqi; Xu, Jinbo
2018-03-01
Here we present the results of protein contact prediction achieved in CASP12 by our RaptorX-Contact server, which is an early implementation of our deep learning method for contact prediction. On a set of 38 free-modeling target domains with a median family size of around 58 effective sequences, our server obtained an average top L/5 long- and medium-range contact accuracy of 47% and 44%, respectively (L = length). A complete implementation has an average accuracy of 59% and 57%, respectively. Our deep learning method formulates contact prediction as a pixel-level image labeling problem and simultaneously predicts all residue pairs of a protein using a combination of two deep residual neural networks, taking as input the residue conservation information, predicted secondary structure and solvent accessibility, contact potential, and coevolution information. Our approach differs from existing methods mainly in (1) formulating contact prediction as a pixel-level image labeling problem instead of an image-level classification problem; (2) simultaneously predicting all contacts of an individual protein to make effective use of contact occurrence patterns; and (3) integrating both one-dimensional and two-dimensional deep convolutional neural networks to effectively learn complex sequence-structure relationship including high-order residue correlation. This paper discusses the RaptorX-Contact pipeline, both contact prediction and contact-based folding results, and finally the strength and weakness of our method. © 2017 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Eagleson, P. S.
1985-01-01
Research activities conducted from February 1, 1985 to July 31, 1985 and preliminary conclusions regarding research objectives are summarized. The objective is to determine the feasibility of using LANDSAT data to estimate effective hydraulic properties of soils. The general approach is to apply the climatic-climax hypothesis (Eagleson, 1982) to natural water-limited vegetation systems using canopy cover estimated from LANDSAT data. Natural water-limited systems typically consist of inhomogeneous vegetation canopies interspersed with bare soils. The ground resolution associated with one pixel from LANDSAT MSS (or TM) data is generally greater than the scale of the plant canopy or canopy clusters. Thus a method for resolving percent canopy cover at a subpixel level must be established before the Eagleson hypothesis can be tested. Two formulations are proposed which extend existing methods of analyzing mixed pixels to naturally vegetated landscapes. The first method involves use of the normalized vegetation index. The second approach is a physical model based on radiative transfer principles. Both methods are to be analyzed for their feasibility on selected sites.
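The first formulation, a two-endmember linear mixture of a vegetation index, can be written as NDVI_pixel = f·NDVI_veg + (1 − f)·NDVI_soil and inverted for the sub-pixel canopy fraction f. A minimal sketch with illustrative endmember values (our assumptions, not the study's calibrated numbers):

```python
def canopy_fraction(ndvi, ndvi_soil=0.15, ndvi_veg=0.85):
    """Sub-pixel canopy cover from a two-endmember linear mixture of NDVI:
        NDVI_pixel = f * NDVI_veg + (1 - f) * NDVI_soil
    solved for f and clipped to [0, 1]."""
    f = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return min(max(f, 0.0), 1.0)
```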
Twelve tips for getting started using mixed methods in medical education research.
Lavelle, Ellen; Vuk, Jasna; Barber, Carolyn
2013-04-01
Mixed methods research, which is gaining popularity in medical education, provides a new and comprehensive approach for addressing teaching, learning, and evaluation issues in the field. The aim of this article is to provide medical education researchers with 12 tips, based on consideration of current literature in the health professions and in educational research, for conducting and disseminating mixed methods research. Engaging in mixed methods research requires consideration of several major components: the mixed methods paradigm, types of problems, mixed method designs, collaboration, and developing or extending theory. Mixed methods research is an ideal tool for addressing a full range of problems in medical education, including the development of theory and the improvement of practice.
Amplifier based broadband pixel for sub-millimeter wave imaging
NASA Astrophysics Data System (ADS)
Sarkozy, Stephen; Drewes, Jonathan; Leong, Kevin M. K. H.; Lai, Richard; Mei, X. B. (Gerry); Yoshida, Wayne; Lange, Michael D.; Lee, Jane; Deal, William R.
2012-09-01
Broadband sub-millimeter wave technology has received significant attention for potential applications in security, medical, and military imaging. Despite theoretical advantages of reduced size, weight, and power compared to current millimeter wave systems, sub-millimeter wave systems have been hampered by a fundamental lack of amplification with sufficient gain and noise figure properties. We report a broadband pixel operating from 300 to 340 GHz, biased off a single 2 V power supply. Over this frequency range, the amplifiers provide > 40 dB gain and <8 dB noise figure, representing the current state-of-the-art performance capabilities. This pixel is enabled by revolutionary enhancements to indium phosphide (InP) high electron mobility transistor technology, based on a sub-50 nm gate and indium arsenide composite channel with a projected maximum oscillation frequency fmax>1.0 THz. The first sub-millimeter wave-based images using active amplification are demonstrated as part of the Joint Improvised Explosive Device Defeat Organization Long Range Personnel Imager Program. This development and demonstration may bring to life future sub-millimeter-wave and THz applications such as solutions to brownout problems, ultra-high bandwidth satellite communication cross-links, and future planetary exploration missions.
Water Detection Based on Object Reflections
NASA Technical Reports Server (NTRS)
Rankin, Arturo L.; Matthies, Larry H.
2012-01-01
Water bodies are challenging terrain hazards for terrestrial unmanned ground vehicles (UGVs) for several reasons. Traversing through deep water bodies could cause costly damage to the electronics of UGVs. Additionally, a UGV that is either broken down due to water damage or becomes stuck in a water body during an autonomous operation will require rescue, potentially drawing critical resources away from the primary operation and increasing the operation cost. Thus, robust water detection is a critical perception requirement for UGV autonomous navigation. One of the properties useful for detecting still water bodies is that their surface acts as a horizontal mirror at high incidence angles. Still water bodies in wide-open areas can be detected by geometrically locating the exact pixels in the sky that are reflecting on candidate water pixels on the ground, predicting if ground pixels are water based on color similarity to the sky and local terrain features. But in cluttered areas where reflections of objects in the background dominate the appearance of the surface of still water bodies, detection based on sky reflections is of marginal value. Specifically, this software attempts to solve the problem of detecting still water bodies on cross-country terrain in cluttered areas at low cost.
Low Complexity Compression and Speed Enhancement for Optical Scanning Holography
Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Kim, T.; Kim, Y. S.
2016-01-01
In this paper we report a low complexity compression method that is suitable for compact optical scanning holography (OSH) systems with different optical settings. Our proposed method can be divided into 2 major parts. First, an automatic decision maker is applied to select the rows of holographic pixels to be scanned. This process enhances the speed of acquiring a hologram, and also lowers the data rate. Second, each row of down-sampled pixels is converted into a one-bit representation with delta modulation (DM). Existing DM-based hologram compression techniques suffer from the disadvantage that a core parameter, commonly known as the step size, has to be determined in advance. However, the correct value of the step size for compressing each row of the hologram depends on the dynamic range of the pixels, which can deviate significantly with the object scene, as well as with OSH systems with different optical settings. We have overcome this problem by incorporating a dynamic step-size adjustment scheme. The proposed method is applied in the compression of holograms that are acquired with 2 different OSH systems, demonstrating a compression ratio of over two orders of magnitude, while preserving favorable fidelity on the reconstructed images. PMID:27708410
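The one-bit delta modulation with a dynamic step size described above can be sketched as follows. The adaptation rule used here (grow the step when successive bits agree, shrink it when they alternate) is one common scheme and is an assumption, not the paper's exact adjustment.

```python
def dm_encode(samples, step0=1.0, grow=1.5, shrink=0.5, min_step=1e-3):
    """One-bit delta modulation with a dynamic step size.

    Each output bit says whether the running estimate should move up (1)
    or down (0); the step grows when successive bits agree (slope
    overload) and shrinks when they alternate (granular noise).
    """
    bits, est, step, prev = [], 0.0, step0, None
    for x in samples:
        b = 1 if x >= est else 0
        bits.append(b)
        if prev is not None:
            step = step * grow if b == prev else max(min_step, step * shrink)
        est += step if b else -step
        prev = b
    return bits

def dm_decode(bits, step0=1.0, grow=1.5, shrink=0.5, min_step=1e-3):
    """Rebuild the sample estimates from the bit stream (same rule)."""
    out, est, step, prev = [], 0.0, step0, None
    for b in bits:
        if prev is not None:
            step = step * grow if b == prev else max(min_step, step * shrink)
        est += step if b else -step
        out.append(est)
        prev = b
    return out
```

Because the decoder replays the same adaptation rule, no per-row step size needs to be transmitted, which is the point of the dynamic scheme.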
Hierarchical Probabilistic Inference of Cosmic Shear
NASA Astrophysics Data System (ADS)
Schneider, Michael D.; Hogg, David W.; Marshall, Philip J.; Dawson, William A.; Meyers, Joshua; Bard, Deborah J.; Lang, Dustin
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.
Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui
2018-05-01
In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding two shortest paths that originate from the vanishing point and terminate at two pixels in the last row of the image. The proposed approach has been implemented and tested over 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions very accurately and robustly, achieving promising performance.
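The minimum-cost map at the heart of the method above is a standard Dijkstra computation over a pixel grid. The sketch below uses a plain intensity-difference edge cost as a stand-in for the paper's combined disparity-plus-gradient cost, and restricts the output to chosen target pixels (e.g., the last image row).

```python
import heapq

def dijkstra_cost_map(grid, source, targets):
    """Dijkstra over a 4-connected pixel grid.

    Edge cost between neighbors is the absolute intensity difference
    (an illustrative stand-in for the disparity+gradient cost in the
    paper). Returns minimum path costs from `source` to `targets`.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + abs(grid[nr][nc] - grid[r][c])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return {t: dist[t] for t in targets}
```

With the vanishing point as `source` and the bottom-row pixels as `targets`, the two cheapest paths would trace the road borders.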
NASA Astrophysics Data System (ADS)
Kim, Daeik D.; Thomas, Mikkel A.; Brooke, Martin A.; Jokerst, Nan M.
2004-06-01
Arrays of embedded bipolar junction transistor (BJT) photodetectors (PDs) and a parallel mixed-signal processing system were fabricated as a silicon complementary metal oxide semiconductor (Si-CMOS) circuit for the integration of optical sensors on the surface of the chip. The circuit was fabricated with the AMI 1.5um n-well CMOS process, and the embedded PNP BJT PD has a pixel size of 8um by 8um. The BJT PD was chosen to take advantage of its higher photocurrent gain compared with PiN-type detectors, since the target application is a low-speed, high-sensitivity sensor. The photocurrent generated by the BJT PD is processed by a mixed-signal system, which consists of parallel first-order low-pass delta-sigma oversampling analog-to-digital converters (ADCs). There are 8 parallel ADCs on the chip, and groups of 8 BJT PDs are selected with CMOS switches. An array of PDs is composed of three or six groups of PDs, depending on the number of rows.
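The first-order delta-sigma oversampling ADCs mentioned above can be modeled in a few lines: an integrator accumulates the error between the input and the fed-back one-bit output, and the density of ones in the bit stream tracks the input level. This is an idealized behavioral model, not the circuit implementation.

```python
def delta_sigma_first_order(samples):
    """Idealized first-order delta-sigma modulator.

    Integrates the error between the input (assumed in [0, 1]) and the
    fed-back 1-bit output; the ones-density of the output bit stream
    approximates the input level.
    """
    integ, bits = 0.0, []
    for x in samples:
        fb = bits[-1] if bits else 0   # 1-bit DAC feedback
        integ += x - fb
        bits.append(1 if integ >= 0.5 else 0)
    return bits
```

For a constant input of 0.75, three quarters of the output bits are ones, so a simple digital low-pass filter recovers the photocurrent level.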
NASA Astrophysics Data System (ADS)
Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian
2016-06-01
Lunar topographic information, e.g., a lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry, of which photogrammetric methods require multiple stereo images of an area. DEMs generated from these methods usually involve various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of computer vision, has been introduced for pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are to be estimated simultaneously, this is a SAfS (Shape and Albedo from Shading) problem, which is under-determined without additional information. Previous work shows strong statistical regularities in the albedo of natural objects, and this is even more applicable to the lunar surface, whose albedo is less complex than the Earth's. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the area with known light source, while also estimating the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model.
Experiments are carried out using the monocular images from Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) (0.5 m spatial resolution), constrained by the SELENE and LRO Elevation Model (SLDEM 2015) of 60 m spatial resolution. The results indicate that local details are largely recovered by the algorithm while low frequency topographic consistency is affected by the low-resolution DEM.
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, Thomas C. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data, and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value is multiplied by that pixel's x-location).
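The segment extraction described above amounts to run-length analysis of a thresholded row: record only each run's first and last positions, plus the pixel sum and x-weighted sum used for centroid tracking. A minimal sketch:

```python
def extract_segments(row, threshold):
    """Find linear spot segments (runs of above-threshold pixels) in one
    image row, keeping only first/last positions plus the pixel sum and
    the x-weighted sum (pixel value times x-location)."""
    segments, start = [], None
    for x, v in enumerate(row + [threshold - 1]):  # sentinel closes last run
        if v > threshold and start is None:
            start = x
        elif v <= threshold and start is not None:
            run = row[start:x]
            segments.append({
                "first": start,
                "last": x - 1,
                "sum": sum(run),
                "xsum": sum(i * p for i, p in enumerate(run, start)),
            })
            start = None
    return segments
```

The centroid of a spot is then `xsum / sum`, so the tracker never needs to store the full pixel data.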
NASA Astrophysics Data System (ADS)
Ramage, J. M.; Brodzik, M. J.; Hardman, M.
2016-12-01
Passive microwave (PM) 18 GHz and 36 GHz horizontally- and vertically-polarized brightness temperature (Tb) channels from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) have been important sources of information about snow melt status in glacial environments, particularly at high latitudes. PM data are sensitive to the changes in near-surface liquid water that accompany melt onset, melt intensification, and refreezing. Overpasses are frequent enough that in most areas multiple (2-8) observations per day are possible, yielding the potential for determining the dynamic state of the snow pack during transition seasons. AMSR-E Tb data have been used effectively to determine melt onset and melt intensification using daily Tb and diurnal amplitude variation (DAV) thresholds. Due to mixed pixels in historically coarse spatial resolution Tb data, melt analysis has been impractical in ice-marginal zones where pixels may be only fractionally snow/ice covered, and in areas where the glacier is near large bodies of water: even small regions of open water in a pixel severely impact the microwave signal. We use the new enhanced-resolution Calibrated Passive Microwave Daily EASE-Grid 2.0 Brightness Temperature (CETB) Earth System Data Record product's twice-daily observations to test and update existing snow melt algorithms by determining appropriate melt thresholds for both Tb and DAV for the CETB 18 and 36 GHz channels. We use the enhanced resolution data to evaluate melt characteristics along glacier margins and melt transition zones during the melt seasons in locations spanning a wide range of melt scenarios, including the Patagonian Andes, the Alaskan Coast Range, and the Russian High Arctic icecaps. We quantify how improvement of spatial resolution from the original 12.5 - 25 km-scale pixels to the enhanced resolution of 3.125 - 6.25 km improves the ability to evaluate melt timing across boundaries and transition zones in diverse glacial environments.
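A Tb/DAV melt classifier of the kind described above can be sketched per pixel from the twice-daily observations. The threshold values below are placeholders for illustration, not the calibrated thresholds the study derives.

```python
def detect_melt(tb_am, tb_pm, tb_thresh=252.0, dav_thresh=18.0):
    """Flag melt days from twice-daily brightness temperatures (K).

    A day is flagged when either overpass exceeds the Tb threshold and
    the diurnal amplitude variation (DAV, the morning/evening Tb
    difference) exceeds its threshold. Threshold values here are
    illustrative only.
    """
    melt = []
    for am, pm in zip(tb_am, tb_pm):
        dav = abs(pm - am)
        melt.append(max(am, pm) >= tb_thresh and dav >= dav_thresh)
    return melt
```

Running this on each enhanced-resolution (3.125 - 6.25 km) pixel time series yields a per-pixel melt-onset date as the first flagged day.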
Research and implementation of simulation for TDICCD remote sensing in vibration of optical axis
NASA Astrophysics Data System (ADS)
Liu, Zhi-hong; Kang, Xiao-jun; Lin, Zhe; Song, Li
2013-12-01
During the exposure time, the charge transfer speed in the push-broom direction and the line-by-line scanning speed of the sensor must match each other strictly for a space-borne TDICCD push-broom camera. However, since attitude disturbance of the satellite and vibration of the camera are inevitable, it is impossible to eliminate the speed mismatch, which makes the signals of different targets overlay each other and results in a decline in image resolution. The effects of velocity mismatch can be visually observed and analyzed by simulating the degradation of image quality caused by vibration of the optical axis, which is significant for the evaluation of image quality and the design of image restoration algorithms. How to model the imaging process in the time and space domains is the first problem to be solved. As the vibration information for simulation is usually given by a continuous curve while the pixels of the original image matrix and the sensor matrix are discrete, the two cannot always match each other well. The simulation is also influenced by discrete sampling over the integration time. Consequently, an appropriate discrete modeling and simulation method is quite significant for improving simulation accuracy and efficiency. This paper analyzes discretization schemes in the time and space domains and presents a method, based on the principle of the TDICCD sensor, to simulate the image quality of the optical system under vibration of the line of sight. The gray value of pixels in the sensor matrix is obtained by weighted arithmetic, which solves the pixel-mismatch problem. The results, compared with hardware test experiments, indicate that this simulation system performs well in accuracy and reliability.
Model-based video segmentation for vision-augmented interactive games
NASA Astrophysics Data System (ADS)
Liu, Lurng-Kuo
2000-04-01
This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling. It also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning on the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4 video coding.
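The pixel-level MAP decision described above can be illustrated with a minimal two-class sketch: each pixel is labeled background or foreground by comparing Gaussian log-likelihoods plus log-priors. The Gaussian models and the prior value are illustrative assumptions, not the paper's learned models.

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-density of a univariate Gaussian at x."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def map_label(pixel, bg, fg, prior_fg=0.3):
    """MAP decision for one pixel given (mean, variance) Gaussian
    background/foreground models and a foreground prior. A sketch of
    the pixel-level step; the paper refines it at the object level."""
    lb = gaussian_loglik(pixel, *bg) + math.log(1 - prior_fg)
    lf = gaussian_loglik(pixel, *fg) + math.log(prior_fg)
    return "foreground" if lf > lb else "background"
```

The object-level stage would then override isolated pixel decisions using the active region's spatial and temporal extent.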
Non-parametric analysis of LANDSAT maps using neural nets and parallel computers
NASA Technical Reports Server (NTRS)
Salu, Yehuda; Tilton, James
1991-01-01
Nearest neighbor approaches and a new neural network, the Binary Diamond, are used for the classification of images of ground pixels obtained by LANDSAT satellite. The performances are evaluated by comparing classifications of a scene in the vicinity of Washington DC. The problem of optimal selection of categories is addressed as a step in the classification process.
Non-rigid image registration using graph-cuts.
Tang, Tommy W H; Chung, Albert C S
2007-01-01
Non-rigid image registration is an ill-posed yet challenging problem due to its extremely high degrees of freedom and inherent requirement of smoothness. The graph-cuts method is a powerful combinatorial optimization tool that has been successfully applied to image segmentation and stereo matching. Under some specific constraints, the graph-cuts method yields either a global minimum or a local minimum in a strong sense. Thus, it is interesting to see the effects of using graph cuts in non-rigid image registration. In this paper, we formulate non-rigid image registration as a discrete labeling problem. Each pixel in the source image is assigned a displacement label (which is a vector) indicating which position in the floating image it spatially corresponds to. A smoothness constraint based on the first derivative is used to penalize sharp changes in displacement labels across pixels. The whole system can be optimized by using the graph-cuts method via alpha-expansions. We compare 2D and 3D registration results of our method with two state-of-the-art approaches. It is found that our method is more robust to different challenging non-rigid registration cases, with higher registration accuracy.
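The discrete-labeling formulation above (data cost per displacement label plus a first-derivative smoothness penalty) can be shown in one dimension, where it is solvable exactly by dynamic programming. This is only a 1-D illustration of the energy being minimized; the paper solves the 2-D/3-D version with graph cuts via alpha-expansions, not with this DP.

```python
def label_displacements(src, flt, labels, lam=1.0):
    """Toy 1-D registration as discrete labeling.

    Each source pixel picks a displacement label d so that src[i]
    matches flt[i+d]; the data cost is the matching error and a
    first-derivative term lam*|d_i - d_{i-1}| penalizes label jumps.
    Solved exactly by dynamic programming (Viterbi-style).
    """
    n, INF = len(src), float("inf")
    cost = [[abs(src[i] - flt[i + d]) if 0 <= i + d < len(flt) else INF
             for d in labels] for i in range(n)]
    best, back = cost[0][:], []
    for i in range(1, n):
        prev, cur, bp = best, [], []
        for j, d in enumerate(labels):
            c, k = min((prev[m] + lam * abs(labels[m] - d), m)
                       for m in range(len(labels)))
            cur.append(cost[i][j] + c)
            bp.append(k)
        best = cur
        back.append(bp)
    j = best.index(min(best))
    out = [j]
    for bp in reversed(back):   # trace the optimal label path back
        j = bp[j]
        out.append(j)
    return [labels[j] for j in reversed(out)]
```

In higher dimensions this chain DP no longer applies, which is exactly why graph cuts are used there.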
NASA Astrophysics Data System (ADS)
Tian, J.; Krauß, T.; d'Angelo, P.
2017-05-01
Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the large number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM), and in turn the normalized digital surface model (nDSM), is generated by using a new step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing low-level pixels and those high-level pixels more likely to be trees or shadows. This boundary then serves as the initial level-set function, which is further refined to fit the best possible boundaries through distance-regularized level-set curve evolution. During the fitting procedure, the edge-based active contour model is adopted and implemented using edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.
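The first filtering step above, removing low-level pixels via the nDSM, reduces to a simple height-above-terrain threshold. The minimum height used here is an illustrative assumption.

```python
def building_candidates(dsm, dtm, min_height=2.5):
    """Threshold the normalized DSM (height above terrain, in meters)
    to drop low-level pixels. This is only the first filtering step;
    tree/shadow removal and level-set refinement follow in the paper."""
    return [[(h - t) >= min_height for h, t in zip(dsm_row, dtm_row)]
            for dsm_row, dtm_row in zip(dsm, dtm)]
```

The resulting boolean mask provides the initial rooftop boundaries that seed the level-set evolution.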
A weak Hamiltonian finite element method for optimal control problems
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Bless, Robert R.
1989-01-01
A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.
A weak Hamiltonian finite element method for optimal control problems
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Bless, Robert R.
1990-01-01
A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.
Weak Hamiltonian finite element method for optimal control problems
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Bless, Robert R.
1991-01-01
A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.
Spatial clustering of pixels of a multispectral image
Conger, James Lynn
2014-08-19
A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and its most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
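The maximum-similarity computation above can be sketched directly. Cosine similarity between spectra is used here as one plausible choice of spectral similarity measure; the patent does not mandate a specific one.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two spectrum vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def max_similarity_map(image):
    """For each pixel (a spectrum vector), the similarity to its most
    similar 4-connected neighbor. Pixels scoring below a threshold on
    this map would be left unclustered."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            out[r][c] = max(
                cosine_similarity(image[r][c], image[nr][nc])
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                if 0 <= nr < rows and 0 <= nc < cols)
    return out
```

Clustering then grows regions only through pixel pairs whose score clears the minimum threshold, and each cluster's pixels are replaced by the average of their original values.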
NASA Technical Reports Server (NTRS)
Cacciani, Alessandro; Rosati, P.; Ricci, D.; Marquedant, R.; Smith, E.
1988-01-01
The magneto-optical filter (MOF) was used to get high and intermediate l-modes of solar oscillations. For very low l-modes the imaging capability of the MOF is still attractive since it allows a pixel by pixel intensity normalization. However, a crude attempt to get very low l power spectra from Dopplergrams obtained at Mt. Wilson gave noisy results. This means that a careful analysis of all the factors potentially affecting high resolution Dopplergrams should be accomplished. In order to better investigate this problem, a nonimaging channel using the lock-in amplifier technique was considered. Two systems are now operational, one at JPL and the other at University of Rome. Observations in progress are used to discuss the MOF stability, the noise level, and the possible application in asteroseismology.
Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning
NASA Astrophysics Data System (ADS)
Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.
2018-04-01
At present, intelligent video analysis technology is widely used in various fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of images still has some unavoidable problems. Pixel-based target tracking cannot reflect the real position information of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's space coordinate system after converting the 2-D coordinates of the target into 3-D coordinates. The experimental results show that our method can recover the real position changes of targets well and can accurately obtain the trajectory of the target in space.
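When tracked targets move on a ground plane, the 2-D-to-3-D conversion above reduces to applying a planar homography obtained from Zhang-style calibration plus the camera's pose over the plane. The matrix below is a made-up example, not a calibrated one.

```python
def apply_homography(H, u, v):
    """Map an image pixel (u, v) to ground-plane coordinates via a 3x3
    homography H (obtainable from camera calibration and the ground
    plane's pose). H here is an assumed example matrix."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # dehomogenize
```

Applying this per frame to a tracked pixel yields the target's trajectory in ground coordinates, which is comparable across cameras and scenes.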
The CZCS geolocation algorithms
NASA Technical Reports Server (NTRS)
Wilson, W. H.; Smith, R. C.; Nolten, J. W.
1981-01-01
The Coastal Zone Color Scanner (CZCS) on board the Nimbus 7 satellite was designed to measure surface radiance upwelled from the ocean in 6 spectral bands. The CZCS spectrometer obtains its information from a rotating mirror and is timed to collect data when the mirror views the Earth's surface between ca. 40 degrees to the left and right of the subsatellite track. Each scan is divided into 1968 picture elements (pixels) of 0.04 degrees of scan each. In order to avoid directly reflected Sun glint, the rotating mirror shaft can be tilted so that it scans across the subsatellite track up to 20 degrees forward or aft of the point directly beneath the satellite. The CZCS is the first satellite-borne instrument to have this tilted-scan capability and therefore poses some new problems in locating the Earth-surface position of viewed pixels.
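The scan geometry quoted above implies a simple per-pixel scan angle: 1968 pixels of 0.04 degrees each span about +/-39.3 degrees about the subsatellite track, consistent with the "ca. 40 degrees" in the text. A sketch (centering convention assumed):

```python
def scan_angle(pixel_index, n_pixels=1968, step_deg=0.04):
    """Across-track scan angle of a CZCS pixel, measured from the
    subsatellite direction, assuming the scan is centered on it.
    1968 pixels of 0.04 deg each span roughly +/-39.3 deg."""
    return (pixel_index - (n_pixels - 1) / 2) * step_deg
```

Full geolocation would combine this angle with the mirror tilt and the spacecraft ephemeris and attitude.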
Subpixel resolution from multiple images
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Kanefsky, Rob; Stutz, John; Kraft, Richard
1994-01-01
Multiple images taken from similar locations and under similar lighting conditions contain similar, but not identical, information. Slight differences in instrument orientation and position produce mismatches between the projected pixel grids. These mismatches ensure that any point on the ground is sampled differently in each image. If all the images can be registered with respect to each other to within a small fraction of a pixel, then the information from the multiple images can be combined to increase linear resolution by roughly the square root of the number of images. In addition, the gray-scale resolution of the composite image is also improved. We describe methods for multiple image registration and combination, and discuss some of the problems encountered in developing and extending them. We display test results with 8:1 resolution enhancement, and Viking Orbiter imagery with 2:1 and 4:1 enhancements.
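The combination step above can be illustrated in one dimension with a shift-and-add scheme: each sample of each registered image lands in the high-resolution bin nearest its known subpixel position, and bins average what they receive. This is a simplified sketch of the combination idea, not the paper's full Bayesian method.

```python
def shift_and_add(images, shifts, scale):
    """Combine registered low-resolution 1-D signals onto a finer grid.

    `shifts` are the known subpixel offsets (in low-res pixels) of each
    image; `scale` is the resolution-enhancement factor. Bins average
    all samples assigned to them.
    """
    n = len(images[0]) * scale
    acc, cnt = [0.0] * n, [0] * n
    for img, s in zip(images, shifts):
        for i, v in enumerate(img):
            j = int(round((i + s) * scale))  # nearest high-res bin
            if 0 <= j < n:
                acc[j] += v
                cnt[j] += 1
    return [a / c if c else 0.0 for a, c in zip(acc, cnt)]
```

Two images offset by half a pixel thus interleave cleanly into a grid of twice the sampling density.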
Structure-aware depth super-resolution using Gaussian mixture model
NASA Astrophysics Data System (ADS)
Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon
2015-03-01
This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution, under the assumption that pixels with similar color likely belong to similar depth. This assumption might induce texture transfer from the color image into the depth map and an edge-blurring artifact at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model, in which an estimated depth map is considered as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
Abstract generalized vector quasi-equilibrium problems in noncompact Hadamard manifolds.
Lu, Haishu; Wang, Zhihua
2017-01-01
This paper deals with the abstract generalized vector quasi-equilibrium problem in noncompact Hadamard manifolds. We prove the existence of solutions to the abstract generalized vector quasi-equilibrium problem under suitable conditions and provide applications to an abstract vector quasi-equilibrium problem, a generalized scalar equilibrium problem, a scalar equilibrium problem, and a perturbed saddle point problem. Finally, as an application of the existence of solutions to the generalized scalar equilibrium problem, we obtain a weakly mixed variational inequality and two mixed variational inequalities. The results presented in this paper unify and generalize many known results in the literature.
NASA Astrophysics Data System (ADS)
Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter
2018-05-01
This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, the restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
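The restarted simulated annealing idea above, resetting the temperature when the schedule bottoms out and continuing from the best solution found, can be sketched generically. The cooling schedule, restart policy, and parameters here are one plausible reading of the restart mechanism, not the article's exact algorithm.

```python
import math
import random

def restarted_sa(cost, neighbor, init, t0=10.0, alpha=0.95,
                 t_min=1e-3, restarts=3, seed=0):
    """Simulated annealing with temperature restarts.

    Whenever the geometric cooling schedule drops below t_min, the
    temperature is reset to t0 and the search restarts from the best
    solution found so far (an assumed restart policy for illustration).
    """
    rng = random.Random(seed)
    best = cur = init
    for _ in range(restarts):
        t = t0
        while t > t_min:
            cand = neighbor(cur, rng)
            d = cost(cand) - cost(cur)
            # accept improvements always, worsenings with Boltzmann prob.
            if d <= 0 or rng.random() < math.exp(-d / t):
                cur = cand
                if cost(cur) < cost(best):
                    best = cur
            t *= alpha
        cur = best
    return best
```

For RMALB/S, `init` would encode a task assignment, model sequence, and robot allocation, with `cost` returning the makespan.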
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
Topological numbering of features on a mesh
NASA Technical Reports Server (NTRS)
Atallah, Mikhail J.; Hambrusch, Susanne E.; Tewinkel, Lynn E.
1988-01-01
Assume an n×n binary image is given containing horizontally convex features; i.e., for each feature, the pixels in each of its rows form an interval on that row. The problem of assigning topological numbers to such features is considered; i.e., assign a number to every feature f so that all features to the left of f have a smaller number assigned to them. This problem arises in solutions to the stereo matching problem. A parallel algorithm to solve the topological numbering problem in O(n) time on an n×n mesh of processors is presented. The key idea of the solution is to create a tree from which the topological numbers can be obtained, even though the tree does not uniquely represent the "to the left of" relationship between the features.
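As a point of comparison, the numbering goal itself is easy to state sequentially: given explicit "a is to the left of b" pairs, any topological sort yields valid numbers. The sketch below uses Kahn's algorithm; it is not the paper's O(n) mesh-parallel, tree-based algorithm, only an illustration of the ordering constraint.

```python
from collections import defaultdict, deque

def topological_numbers(edges, n):
    """Assign numbers 0..n-1 to n features so that every feature to the
    left of f receives a smaller number than f. `edges` lists pairs
    (a, b) meaning "a is to the left of b"."""
    succ = defaultdict(list)
    indeg = [0] * n
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    q = deque(f for f in range(n) if indeg[f] == 0)
    number, k = [0] * n, 0
    while q:
        f = q.popleft()
        number[f] = k           # next available topological number
        k += 1
        for g in succ[f]:
            indeg[g] -= 1
            if indeg[g] == 0:
                q.append(g)
    return number

# Features 0 and 1 are both left of 2; 0 is left of 1.
nums = topological_numbers([(0, 1), (0, 2), (1, 2)], 3)
```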
Reproducibility and calibration of MMC-based high-resolution gamma detectors
Bates, C. R.; Pies, C.; Kempf, S.; ...
2016-07-15
Here, we describe a prototype γ-ray detector based on a metallic magnetic calorimeter with an energy resolution of 46 eV at 60 keV and a reproducible response function that follows a simple second-order polynomial. The simple detector calibration allows adding high-resolution spectra from different pixels and different cool-downs without loss in energy resolution to determine γ-ray centroids with high accuracy. As an example of an application in nuclear safeguards enabled by such a γ-ray detector, we discuss the non-destructive assay of 242Pu in a mixed-isotope Pu sample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lentine, Anthony L.; Nielson, Gregory N.; Cruz-Campa, Jose Luis
A photovoltaic module includes colorized reflective photovoltaic cells that act as pixels. The colorized reflective photovoltaic cells are arranged so that reflections from the photovoltaic cells or pixels visually combine into an image on the photovoltaic module. The colorized photovoltaic cell or pixel is composed of a set of 100 to 256 base-color sub-pixel reflective segments, or sub-pixels. The color of each pixel is determined by the combination of base-color sub-pixels forming the pixel. As a result, each pixel can have a wide variety of colors using a set of base colors, which are created from sub-pixel reflective segments having standard film thicknesses.
A Kind of Nonlinear Programming Problem Based on Mixed Fuzzy Relation Equations Constraints
NASA Astrophysics Data System (ADS)
Li, Jinquan; Feng, Shuang; Mi, Honghai
In this work, a kind of nonlinear programming problem with a non-differentiable objective function, under constraints expressed by a system of mixed fuzzy relation equations, is investigated. First, some properties of this kind of optimization problem are obtained. Then, based on these properties, a polynomial-time algorithm for this kind of optimization problem is proposed. Furthermore, we show that this algorithm is optimal for the optimization problem considered in this paper. Finally, numerical examples are provided to illustrate the algorithm.
NASA Astrophysics Data System (ADS)
Sargsyan, M. Z.; Poghosyan, H. M.
2018-04-01
A dynamical problem for a rectangular strip with variable coefficients of elasticity is solved by an asymptotic method. It is assumed that the strip is orthotropic, the elasticity coefficients are exponential functions of y, and mixed boundary conditions are posed. The solution of the inner problem is obtained using Bessel functions.
Image recovery by removing stochastic artefacts identified as local asymmetries
NASA Astrophysics Data System (ADS)
Osterloh, K.; Bücherl, T.; Zscherpel, U.; Ewert, U.
2012-04-01
Stochastic artefacts are frequently encountered in digital radiography and tomography with neutrons. Most obviously, they are caused by ubiquitous scattered radiation hitting the CCD sensor. They appear as scattered dots and, at higher frequencies of occurrence, may obscure the image. Some of these dotted interferences vary with time; however, a large portion of them remains persistent, so the problem cannot be resolved by collecting stacks of images and merging them into a median image. The situation becomes even worse in computed tomography (CT), where each artefact causes a circular pattern in the reconstructed plane. Therefore, these stochastic artefacts have to be removed completely and automatically while leaving the original image content untouched. A simplified image acquisition and artefact removal tool was developed at BAM and is available to interested users. Furthermore, an algorithm complying with all the requirements mentioned above was developed that reliably removes artefacts that may even exceed the size of a single pixel, without affecting other parts of the image. It consists of an iterative two-step algorithm that adjusts pixel values within a 3 × 3 matrix inside a 5 × 5 kernel, and the centre pixel within a 3 × 3 kernel, respectively. It has been applied to thousands of images obtained from the NECTAR facility at the FRM II in Garching, Germany, without any need for visual control. In essence, the procedure identifies and corrects locally asymmetric intensity distributions, recording each treatment of a pixel. Searching for local asymmetry with subsequent correction, rather than replacing individually identified pixels, constitutes the basic idea of the algorithm. The efficiency of the proposed algorithm is demonstrated on a severely spoiled example of neutron radiography and tomography, compared with median filtering, the most convenient alternative approach, by visual check, histogram and power spectrum analysis.
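A minimal single-pass sketch of the idea, flagging pixels that deviate asymmetrically from their local neighbourhood, can be written with a median/MAD test inside a 5×5 window. The BAM algorithm is iterative and two-step, so this only illustrates the local-asymmetry principle; the threshold and kernel handling here are assumptions.

```python
import numpy as np

def remove_spot_artefacts(img, k=5, thresh=3.0):
    """Replace pixels that deviate strongly and one-sidedly from their
    local neighbourhood with the local median. `thresh` is in units of
    the local median absolute deviation (MAD)."""
    out = img.astype(float).copy()
    r = k // 2
    pad = np.pad(out, r, mode="edge")   # snapshot used for all windows
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + k, j:j + k]
            med = np.median(win)
            mad = np.median(np.abs(win - med)) + 1e-9
            if abs(out[i, j] - med) > thresh * mad:
                out[i, j] = med         # treat as artefact, restore locally
    return out

# A flat field with one hot pixel: the spot is removed, the rest untouched.
img = np.full((7, 7), 10.0)
img[3, 3] = 1000.0
clean = remove_spot_artefacts(img)
```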
Correction for spatial averaging in laser speckle contrast analysis
Thompson, Oliver; Andrews, Michael; Hirst, Evan
2011-01-01
Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
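The quantities involved are simple: speckle contrast is K = σ/⟨I⟩ over the intensity, and the linear correction validated in the paper divides the measured contrast by a system factor. A sketch with illustrative names follows; fully developed static speckle has exponential intensity statistics and hence K ≈ 1 before any pixel-area averaging.

```python
import numpy as np

def speckle_contrast(img):
    """Speckle contrast K = sigma / mean of the intensity samples."""
    img = np.asarray(img, dtype=float)
    return img.std() / img.mean()

def correct_contrast(k_measured, system_factor):
    """Linear system-factor correction for spatial (pixel-area)
    averaging: K_corrected = K_measured / s."""
    return k_measured / system_factor

# Exponential intensity statistics mimic fully developed speckle: K ~ 1.
rng = np.random.default_rng(1)
k = speckle_contrast(rng.exponential(1.0, 100_000))
```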
Plantar pressure cartography reconstruction from 3 sensors.
Abou Ghaida, Hussein; Mottet, Serge; Goujon, Jean-Marc
2014-01-01
Foot problem diagnosis is often made using pressure mapping systems, which are unfortunately confined to laboratory settings. In the context of e-health and telemedicine for home monitoring of patients with foot problems, our focus is to present a system acceptable for daily use. We developed an ambulatory instrumented insole using three pressure sensors to visualize plantar pressure cartographies. We show that a standard insole with fixed sensor positions can be used for different foot sizes. The results show an average error, measured at each pixel, of 0.01 daN, with a standard deviation of 0.005 daN.
A calibration method immune to the projector errors in fringe projection profilometry
NASA Astrophysics Data System (ADS)
Zhang, Ruihua; Guo, Hongwei
2017-08-01
In the fringe projection technique, system calibration is a tedious task of establishing the mapping relationship between object depths and fringe phases. In particular, it is not easy to accurately determine the parameters of the projector in such a system, which may induce errors in the measurement results. To solve this problem, this paper proposes a new calibration method that uses the cross-ratio invariance in the system geometry to determine the phase-to-depth relations. We analyze the epipolar geometry of the fringe projection system. On each epipolar plane, a depth variation along an incident ray induces a pixel movement along the epipolar line on the image plane of the camera. These depth variations and pixel movements are connected by projective transformations, under which the cross-ratio of each remains invariant. Based on this fact, we suggest measuring the depth map by use of this cross-ratio invariance. First, we shift the reference board in its perpendicular direction to three positions with known depths and measure their phase maps as the reference phase maps; second, when measuring an object, we calculate the object depth at each pixel by equating the cross-ratio of the depths to that of the corresponding pixels having the same phase on the image plane of the camera. This method is immune to errors arising from the projector, including distortions both in the geometric shapes and in the intensity profiles of the projected fringe patterns. The experimental results demonstrate that the proposed method is feasible and valid.
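The cross-ratio step can be made concrete. For four collinear values the cross-ratio is CR = ((p1-p3)(p2-p4)) / ((p2-p3)(p1-p4)), and it is invariant under projective maps. Equating the cross-ratio of the three known reference depths and the unknown depth to that of the four corresponding same-phase pixel coordinates gives a closed-form solution for the depth. The sketch below uses illustrative variable names, not the paper's notation.

```python
def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio of four collinear values (projective invariant)."""
    return (p1 - p3) * (p2 - p4) / ((p2 - p3) * (p1 - p4))

def depth_from_cross_ratio(z1, z2, z3, u1, u2, u3, u4):
    """Solve cross_ratio(z1, z2, z3, z) == cross_ratio(u1, u2, u3, u4)
    for the unknown depth z. z1..z3 are three calibrated reference
    depths; u1..u4 are the same-phase pixel coordinates along the
    epipolar line (u4 belongs to the unknown depth)."""
    c = cross_ratio(u1, u2, u3, u4)
    a = z1 - z3
    b = c * (z2 - z3)
    # (z1 - z3)(z2 - z) = c (z2 - z3)(z1 - z)  solved for z:
    return (b * z1 - a * z2) / (b - a)

# Identity (linear) depth-to-pixel map: the recovered depth matches.
z = depth_from_cross_ratio(0.0, 10.0, 20.0, 0.0, 10.0, 20.0, 5.0)
```

Because the cross-ratio is preserved by any projective transformation, the same formula recovers the depth even when the depth-to-pixel map is a nontrivial Möbius map, which is exactly why the method needs no projector model.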
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
NASA Astrophysics Data System (ADS)
Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.
2017-12-01
Deep learning techniques have been successfully applied to many problems in climate science and geoscience using massive observed and modeled data sets. For extreme climate event detection, several models based on deep neural networks have recently been proposed and attain performance that overshadows all previous handcrafted, expert-based methods. The issue, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two deep neural network models: (1) convolutional neural networks (CNNs) to detect and localize extreme climate events, and (2) a pixel-recursive super-resolution model to reconstruct high-resolution climate data from low-resolution climate data. Based on our preliminary work, we present two CNNs in our framework for different purposes: detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel-recursive super-resolution model enhances the resolution of the input to the localization CNN. We present a network, based on the pixel-recursive super-resolution model, that synthesizes details of tropical cyclones in ground-truth data while enhancing their resolution. This approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process used to increase the resolution of the data.
Edge Probability and Pixel Relativity-Based Speckle Reducing Anisotropic Diffusion.
Mishra, Deepak; Chaudhury, Santanu; Sarkar, Mukul; Soin, Arvinder Singh; Sharma, Vivek
2018-02-01
Anisotropic diffusion filters are one of the best choices for speckle reduction in the ultrasound images. These filters control the diffusion flux flow using local image statistics and provide the desired speckle suppression. However, inefficient use of edge characteristics results in either oversmooth image or an image containing misinterpreted spurious edges. As a result, the diagnostic quality of the images becomes a concern. To alleviate such problems, a novel anisotropic diffusion-based speckle reducing filter is proposed in this paper. A probability density function of the edges along with pixel relativity information is used to control the diffusion flux flow. The probability density function helps in removing the spurious edges and the pixel relativity reduces the oversmoothing effects. Furthermore, the filtering is performed in superpixel domain to reduce the execution time, wherein a minimum of 15% of the total number of image pixels can be used. For performance evaluation, 31 frames of three synthetic images and 40 real ultrasound images are used. In most of the experiments, the proposed filter shows a better performance as compared to the state-of-the-art filters in terms of the speckle region's signal-to-noise ratio and mean square error. It also shows a comparative performance for figure of merit and structural similarity measure index. Furthermore, in the subjective evaluation, performed by the expert radiologists, the proposed filter's outputs are preferred for the improved contrast and sharpness of the object boundaries. Hence, the proposed filtering framework is suitable to reduce the unwanted speckle and improve the quality of the ultrasound images.
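For orientation, the classical Perona-Malik filter that such speckle-reducing methods build on attenuates the diffusion flux wherever local gradients (likely edges) are large. The sketch below is that textbook baseline, not the paper's edge-probability / pixel-relativity flux; the parameters are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, lam=0.2):
    """Classical Perona-Malik anisotropic diffusion with an exponential
    conduction coefficient: flux is small across strong edges, so
    homogeneous regions are smoothed while edges are preserved."""
    u = img.astype(float).copy()

    def g(d):
        # conduction coefficient: ~1 in flat regions, ~0 across edges
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        # one-sided differences to the four neighbours (periodic borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# A noisy flat patch: diffusion shrinks the variance, preserves the mean.
rng = np.random.default_rng(0)
noisy = 100.0 + rng.normal(0, 5, (32, 32))
smooth = perona_malik(noisy)
```

The proposed filter replaces the conduction coefficient g with terms derived from an edge probability density and pixel-relativity information, and runs in a superpixel domain for speed.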
NASA Astrophysics Data System (ADS)
Narasimhan, T. N.; White, A. F.; Tokunaga, T.
1986-12-01
At Riverton, Wyoming, low pH process waters from an abandoned uranium mill tailings pile have been infiltrating into and contaminating the shallow water table aquifer. The contamination process has been governed by transient infiltration rates, saturated-unsaturated flow, as well as transient chemical reactions between the many chemical species present in the mixing waters and the sediments. In the first part of this two-part series [White et al., 1984] we presented field data as well as an interpretation based on a static mixing model. As an upper bound, we estimated that 1.7% of the tailings water had mixed with the native groundwater. In the present work we present the results of numerical investigation of the dynamic mixing process. The model, DYNAMIX (DYNAmic MIXing), couples a chemical speciation algorithm, PHREEQE, with a modified form of the transport algorithm, TRUMP, specifically designed to handle the simultaneous migration of several chemical constituents. The overall problem of simulating the evolution and migration of the contaminant plume was divided into three sub problems that were solved in sequential stages. These were the infiltration problem, the reactive mixing problem, and the plume-migration problem. The results of the application agree reasonably with the detailed field data. The methodology developed in the present study demonstrates the feasibility of analyzing the evolution of natural hydrogeochemical systems through a coupled analysis of transient fluid flow as well as chemical reactions. It seems worthwhile to devote further effort toward improving the physicochemical capabilities of the model as well as to enhance its computational efficiency.
Submesoscale Sea Surface Temperature Variability from UAV and Satellite Measurements
NASA Astrophysics Data System (ADS)
Castro, S. L.; Emery, W. J.; Tandy, W., Jr.; Good, W. S.
2017-12-01
Technological advances in spatial resolution of observations have revealed the importance of short-lived ocean processes with scales of O(1km). These submesoscale processes play an important role for the transfer of energy from the meso- to small scales and for generating significant spatial and temporal intermittency in the upper ocean, critical for the mixing of the oceanic boundary layer. Submesoscales have been observed in sea surface temperatures (SST) from satellites. Satellite SST measurements are spatial averages over the footprint of the satellite. When the variance of the SST distribution within the footprint is small, the average value is representative of the SST over the whole pixel. If the variance is large, the spatial heterogeneity is a source of uncertainty in satellite derived SSTs. Here we show evidence that the submesoscale variability in SSTs at spatial scales of 1km is responsible for the spatial variability within satellite footprints. Previous studies of the spatial variability in SST, using ship-based radiometric data suggested that variability at scales smaller than 1 km is significant and affects the uncertainty of satellite-derived skin SSTs. We examine data collected by a calibrated thermal infrared radiometer, the Ball Experimental Sea Surface Temperature (BESST), flown on a UAV over the Arctic Ocean and compare them with coincident measurements from the MODIS spaceborne radiometer to assess the spatial variability of SST within 1 km pixels. By taking the standard deviation of all the BESST measurements within individual MODIS pixels we show that significant spatial variability exists within the footprints. The distribution of the surface variability measured by BESST shows a peak value of O(0.1K) with 95% of the pixels showing σ < 0.45K. More importantly, high-variability pixels are located at density fronts in the marginal ice zone, which are a primary source of submesoscale intermittency near the surface in the Arctic Ocean. 
Wavenumber spectra of the BESST SSTs indicate a spectral slope of -2, consistent with the presence of submesoscale processes. Furthermore, not only do the BESST wavenumber spectra match the MODIS SST spectra well, they also extend the -2 spectral slope by two decades relative to MODIS, from wavelengths of 8 km down to 0.08 km.
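The footprint statistic used here, the standard deviation of all fine-scale (BESST) samples falling inside each coarse (MODIS) pixel, is straightforward to compute. A sketch with illustrative names:

```python
import numpy as np

def within_pixel_std(fine_sst, pixel_id):
    """Standard deviation of high-resolution SST samples grouped by the
    coarse satellite pixel each sample falls in. A large value flags a
    heterogeneous footprint (e.g. a density front)."""
    fine_sst = np.asarray(fine_sst, dtype=float)
    pixel_id = np.asarray(pixel_id)
    return {p: fine_sst[pixel_id == p].std() for p in np.unique(pixel_id)}

# Two coarse pixels: one uniform, one straddling a temperature front.
stds = within_pixel_std([10.0, 10.0, 10.0, 9.0, 11.0, 13.0],
                        [0, 0, 0, 1, 1, 1])
```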
Olugbara, Oludayo
2014-01-01
This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms—being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem. PMID:24883369
Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter and of the number of spectral elements N, and mildly dependent on the spectral degree η via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.
ERIC Educational Resources Information Center
Marin Quintero, Maider J.
2013-01-01
The structure tensor for vector valued images is most often defined as the average of the scalar structure tensors in each band. The problem with this definition is the assumption that all bands provide the same amount of edge information giving them the same weights. As a result non-edge pixels can be reinforced and edges can be weakened…
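The conventional definition the entry refers to, summing (or averaging) the per-band scalar structure tensors J = [[Ix², IxIy], [IxIy, Iy²]] with equal weights, can be sketched as follows; the equal weighting is exactly what the entry argues against.

```python
import numpy as np

def multiband_structure_tensor(img):
    """Structure tensor of a vector-valued image as the unweighted sum
    of per-band scalar tensors. Returns the Jxx, Jxy, Jyy components
    (before any spatial smoothing)."""
    Jxx = Jxy = Jyy = 0.0
    for band in np.moveaxis(np.asarray(img, dtype=float), -1, 0):
        gy, gx = np.gradient(band)      # derivatives along rows, columns
        Jxx = Jxx + gx * gx
        Jxy = Jxy + gx * gy
        Jyy = Jyy + gy * gy
    return Jxx, Jxy, Jyy

# A vertical step edge in one band puts all the energy in Jxx.
img = np.zeros((5, 5, 2))
img[:, 3:, 0] = 1.0
Jxx, Jxy, Jyy = multiband_structure_tensor(img)
```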
Method of fabrication of display pixels driven by silicon thin film transistors
Carey, Paul G.; Smith, Patrick M.
1999-01-01
Display pixels driven by silicon thin film transistors are fabricated on plastic substrates for use in active matrix displays, such as flat panel displays. The process for forming the pixels involves a prior method for forming individual silicon thin film transistors on low-temperature plastic substrates. Low-temperature substrates are generally considered as being incapable of withstanding sustained processing temperatures greater than about 200 °C. The pixel formation process results in a complete pixel and active matrix pixel array. A pixel (or picture element) in an active matrix display consists of a silicon thin film transistor (TFT) and a large electrode, which may control a liquid crystal light valve, an emissive material (such as a light emitting diode or LED), or some other light emitting or attenuating material. The pixels can be connected in arrays wherein rows of pixels contain common gate electrodes and columns of pixels contain common drain electrodes. The source electrode of each pixel TFT is connected to its pixel electrode, and is electrically isolated from every other circuit element in the pixel array.
Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara
2018-04-06
The present study tested the combination of an established and a validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning and food matching and standardization based on natural language processing. The former is specific because it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching first describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model trained on fake-food images acquired by 124 study participants and providing fifty-five food classes was 92.18 %, while the food matching was performed with a classification accuracy of 93 %. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing the images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
Davila, Stephen J; Hadjar, Omar; Eiceman, Gary A
2013-07-16
A linear pixel-based detector array, the IonCCD, is characterized for use under ambient conditions with thermal (<1 eV) positive ions derived from purified air and a 10 mCi ⁶³Ni foil. The IonCCD combined with a drift-tube ion mobility spectrometer permitted the direct detection of gas-phase ions at atmospheric pressure and confirmed a limit of detection of 3000 ions/pixel/frame established previously in both the keV (1-2 keV) and the hyper-thermal (10-40 eV) regimes. The results demonstrate the broad-band applicability of the IonCCD over a range of 10⁵ in ion energy and 10¹⁰ in operating pressure. The Faraday detector of a drift tube for an ion mobility spectrometer was replaced with the IonCCD, providing images of ion profiles over the cross-section of the drift tube. Patterns in the ion profiles were developed in the drift-tube cross-section by control of electric fields between wires of Bradbury-Nielson and Tyndall-Powell shutter designs at distances of 1-8 cm from the detector. Results showed that ion beams formed at the wire sets retained their shape, with limited mixing by diffusion and Coulombic repulsion. Beam broadening was determined to be 95 μm/cm for hydrated protons in air with a moisture level of ~10 ppmv. These findings suggest the value of the IonCCD in further studies of ion motion and diffusion of thermalized ions, in enhancing computational results from simulation programs, and in the design and operation of ion mobility spectrometers.
How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2011-01-01
In the early 1980s the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems, then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the equivalent number of pixels and bits per pixel, and on how many pixels would be required to create an equivalent-quality image from a digital camera. It was found that 35-mm frames of ISO 100 color negative film contained equivalent pixels of 12 microns, for a total of 18 million pixels per frame (6 million pixels per layer), with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased the value to 8 bits per pixel. Higher-ISO-speed films had larger equivalent pixels and fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high-quality 3.5" x 5.25" print could be obtained from a three-layer system containing 1300 x 1950 pixels per layer, or about 7.6 million pixels in all. In short, it became clear that when a digital camera contained about 6 million pixels (in a single layer, using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems for the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging amongst digital camera makers. The question arises of just how many pixels are required, and whether all pixels are equal. This paper provides a practical look at how many pixels are needed for a good print, based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million 5.7-micron pixels or 6 million 7.8-micron pixels? How do intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size?
A systematic review of these issues will be provided within the context of image quality and ISO speed models developed over the last 15 years.
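The underlying arithmetic is simple: pixels needed = print dimension × printing resolution. The sketch below assumes the common 300 dpi rule of thumb for photo-quality prints viewed at arm's length; the paper's quality models are considerably more detailed than this.

```python
def pixels_for_print(width_in, height_in, dpi=300):
    """Pixel dimensions and total pixel count needed to print a given
    size at a given resolution (dots per inch)."""
    w = round(width_in * dpi)
    h = round(height_in * dpi)
    return w, h, w * h

w, h, total = pixels_for_print(6, 4)          # a 4" x 6" print
w2, h2, total2 = pixels_for_print(5.25, 3.5)  # the 3.5" x 5.25" size cited
```

At 300 dpi a 4"×6" print needs 1800 × 1200, about 2.2 million pixels per layer; the 1300 × 1950 per-layer figure quoted for a 3.5" × 5.25" print corresponds to roughly 370 dpi.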
Application of a Mixed Consequential Ethical Model to a Problem Regarding Test Standards.
ERIC Educational Resources Information Center
Busch, John Christian
The work of the ethicist Charles Curran and the problem-solving strategy of the mixed consequentialist ethical model are applied to a traditional social science measurement problem--that of how to adjust a recommended standard in order to be fair to the test-taker and society. The focus is on criterion-referenced teacher certification tests.…
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; LeMaster, Daniel A.
2012-06-01
Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.
DOT National Transportation Integrated Search
2000-04-01
Approximately 500 million tons of hot mix asphalt (HMA) are placed in the United States each year. With this large quantity of HMA, it is expected that some construction problems will occur from time to time. One problem that has been observed for ye...
Impact of defective pixels in AMLCDs on the perception of medical images
NASA Astrophysics Data System (ADS)
Kimpe, Tom; Sneyders, Yuri
2006-03-01
With LCD displays, each pixel has its own individual transistor that controls the transmittance of that pixel. Occasionally, these individual transistors short or otherwise malfunction, resulting in a defective pixel that always shows the same brightness. With the ever-increasing resolution of displays, the number of defective pixels per display increases accordingly. State-of-the-art processes are capable of producing displays with no more than one faulty transistor out of 3 million. A five-megapixel medical LCD panel contains 15 million individual subpixels (3 subpixels per pixel), each having an individual transistor. This means that a five-megapixel display on average will have 5 failing pixels. This paper investigates the visibility of defective pixels and analyzes their possible impact on the perception of medical images. JND simulations were done to study the effect of defective pixels on medical images. Our results indicate that defective LCD pixels can mask subtle features in medical images in an unexpectedly broad area around the defect and therefore may reduce the quality of diagnosis for specific high-demanding areas such as mammography. As a second contribution, an innovative solution is proposed. A specialized image processing algorithm can make defective pixels completely invisible and, moreover, can also recover the information of the defect so that the radiologist perceives the medical image correctly. This correction algorithm has been validated with both JND simulations and psychovisual tests.
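The paper's specialized correction algorithm is not described in the abstract; purely as a point of comparison, a naive median-of-neighbors replacement for a stuck pixel (not the authors' method) can be sketched as:

```python
import numpy as np

def mask_defective_pixel(img, y, x):
    """Crude baseline, NOT the paper's algorithm: replace a stuck pixel
    at (y, x) with the median of its valid 8-neighbors. This reduces,
    but does not eliminate, the defect's perceptual impact."""
    h, w = img.shape
    nbrs = [img[j, i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))
            if (j, i) != (y, x)]
    out = img.copy()
    out[y, x] = np.median(nbrs)
    return out
```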
Thermal wake/vessel detection technique
Roskovensky, John K [Albuquerque, NM; Nandy, Prabal [Albuquerque, NM; Post, Brian N [Albuquerque, NM
2012-01-10
A computer-automated method for detecting a vessel in water based on an image of a portion of Earth includes generating a thermal anomaly mask. The thermal anomaly mask flags each pixel of the image initially deemed to be a wake pixel based on a comparison of a thermal value of each pixel against other thermal values of other pixels localized about each pixel. Contiguous pixels flagged by the thermal anomaly mask are grouped into pixel clusters. A shape of each of the pixel clusters is analyzed to determine whether each of the pixel clusters represents a possible vessel detection event. The possible vessel detection events are represented visually within the image.
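The masking-and-clustering pipeline described in the claim can be sketched as follows; the window size and threshold factor are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy import ndimage

def thermal_anomaly_clusters(thermal, win=5, k=2.0):
    """Flag pixels whose thermal value exceeds the local mean by more
    than k local standard deviations (the thermal anomaly mask), then
    group contiguous flagged pixels into clusters for shape analysis.
    win and k are illustrative parameters."""
    local_mean = ndimage.uniform_filter(thermal, size=win)
    local_sq = ndimage.uniform_filter(thermal ** 2, size=win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
    mask = thermal > local_mean + k * local_std   # anomaly (wake) mask
    labels, n = ndimage.label(mask)               # contiguous pixel clusters
    return mask, labels, n
```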
Smart trigger logic for focal plane arrays
Levy, James E; Campbell, David V; Holmes, Michael L; Lovejoy, Robert; Wojciechowski, Kenneth; Kay, Randolph R; Cavanaugh, William S; Gurrieri, Thomas M
2014-03-25
An electronic device includes a memory configured to receive data representing light intensity values from pixels in a focal plane array and a processor that analyzes the received data to determine which light values correspond to triggered pixels, where the triggered pixels are those pixels that meet a predefined set of criteria, and determines, for each triggered pixel, a set of neighbor pixels for which light intensity values are to be stored. The electronic device also includes a buffer that temporarily stores light intensity values for at least one previously processed row of pixels, so that when a triggered pixel is identified in a current row, light intensity values for the neighbor pixels in the previously processed row and for the triggered pixel are persistently stored, as well as a data transmitter that transmits the persistently stored light intensity values for the triggered and neighbor pixels to a data receiver.
Method and System for Temporal Filtering in Video Compression Systems
NASA Technical Reports Server (NTRS)
Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim
2011-01-01
Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from the first pixel position in the first image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining the position of a fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation and estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side information and the Slepian-Wolf code bits.
The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
NASA Astrophysics Data System (ADS)
Janesick, James; Gunawan, Ferry; Dosluoglu, Taner; Tower, John; McCaffrey, Niel
2002-08-01
High-performance CMOS pixels are introduced and their development is discussed. 3T (3-transistor) photodiode, 5T pinned-diode, 6T photogate, and 6T photogate back-illuminated CMOS pixels are examined in detail, and the latter three are considered scientific pixels. The advantages and disadvantages of these options for scientific CMOS pixels are examined. Pixel characterization, which is used to gain a better understanding of CMOS pixels themselves, is also discussed.
Microlens performance limits in sub-2-μm pixel CMOS image sensors.
Huo, Yijie; Fesenmaier, Christian C; Catrysse, Peter B
2010-03-15
CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 μm, however, diffraction affects the optical performance of the pixel and its microlens, in particular. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 μm. This severely degrades performance for on- and off-axis pixels in the red, green, and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 μm pixel node.
NASA Astrophysics Data System (ADS)
Singh, Mandeep; Khare, Kedar
2018-05-01
We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem where the cost function to be minimized consists of an L2-norm squared data fitting term and a modified Huber penalty term that are minimized alternately in an adaptive fashion. The technique can provide full pixel resolution complex-valued images of the selected ROI which is not possible to achieve with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from a large field-of-view digital holographic microscopy data. The complementary phase information in addition to the usual absorption information already available in the form of bright field microscopy can make the methodology attractive to the biomedical user community.
Model-based error diffusion for high fidelity lenticular screening.
Lau, Daniel; Smith, Trebor
2006-04-17
Digital halftoning is the process of converting a continuous-tone image into an arrangement of black and white dots for binary display devices such as digital ink-jet and electrophotographic printers. As printers achieve print resolutions exceeding 1,200 dots per inch, it is becoming increasingly important for halftoning algorithms to consider the variations and interactions in the size and shape of printed dots between neighboring pixels. In the case of lenticular screening, where statistically independent images are spatially multiplexed together, ignoring these variations and interactions, such as dot overlap, results in poor lenticular image quality. To this end, we describe our use of model-based error diffusion for the lenticular screening problem, where statistical independence between component images is achieved by restricting the diffusion of error to only those pixels of the same component image. To avoid instabilities, the proposed approach involves a novel error-clipping procedure.
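A minimal sketch of component-restricted error diffusion with error clipping, under the assumption that the component images are column-interleaved; the diffusion weights and clip bound are illustrative, not the authors' model-based values:

```python
import numpy as np

def lenticular_error_diffusion(img, n_components=2, clip=0.5):
    """Sketch of component-restricted error diffusion. Columns are
    assumed to interleave the component images (x % n_components gives
    the component), so stepping x by n_components keeps diffused error
    inside one component image."""
    h, w = img.shape
    out = np.zeros((h, w))
    err = img.astype(float).copy()
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if err[y, x] >= 0.5 else 0.0
            # clip the quantization error to avoid instabilities
            e = np.clip(err[y, x] - out[y, x], -clip, clip)
            if x + n_components < w:
                err[y, x + n_components] += e * 7 / 16
            if y + 1 < h:
                err[y + 1, x] += e * 5 / 16   # same column, same component
                if x + n_components < w:
                    err[y + 1, x + n_components] += e * 4 / 16
    return out
```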
High-resolution three-dimensional imaging radar
NASA Technical Reports Server (NTRS)
Cooper, Ken B. (Inventor); Chattopadhyay, Goutam (Inventor); Siegel, Peter H. (Inventor); Dengler, Robert J. (Inventor); Schlecht, Erich T. (Inventor); Mehdi, Imran (Inventor); Skalare, Anders J. (Inventor)
2010-01-01
A three-dimensional imaging radar operating at a high frequency (e.g., 670 GHz) is disclosed. The active target illumination inherent in radar solves the problem of low signal power and narrow-band detection by using submillimeter heterodyne mixer receivers. A submillimeter imaging radar may use low-phase-noise synthesizers and a fast chirper to generate a frequency-modulated continuous-wave (FMCW) waveform. Three-dimensional images are generated through range information derived for each pixel scanned over a target. A peak-finding algorithm may be used in processing each pixel to differentiate material layers of the target. Improved focusing is achieved through a compensation signal sampled from a point-source calibration target and applied to received signals from active targets prior to FFT-based range compression to extract and display high-resolution target images. Such an imaging radar has particular application in detecting concealed weapons or contraband.
NASA Astrophysics Data System (ADS)
Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei
2018-06-01
Sea-land segmentation is a key step in the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local-similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images that tackles the problem using superpixels and multi-scale features. Considering the connectivity and local similarity of sea and land, we cast the sea-land segmentation task in terms of superpixels rather than pixels, so that similar pixels are clustered and the local similarity is explored. Moreover, the multi-scale features are elaborately designed, comprising a gray histogram and multi-scale total variation. Experimental results on the infrared bands of Landsat-8 satellite images demonstrate that the proposed method obtains more accurate and more robust sea-land segmentation results than traditional algorithms.
NASA Astrophysics Data System (ADS)
Dutt, Ashutosh; Mishra, Ashish; Goswami, D. R.; Kumar, A. S. Kiran
2016-05-01
The push-broom sensors in bands meant to study oceans generally suffer from residual non-uniformity even after radiometric correction. The in-orbit data from OCM-2 show pronounced striping in the lower bands. There have been many attempts and different approaches to solve the problem using the image data itself; the success of each algorithm depends on the quality of the uniform region identified. In this paper, an image-based destriping algorithm is presented, with constraints derived from a ground calibration exercise. The basis of the methodology is the determination of pixel-to-pixel non-uniformity from uniform segments identified and collected across a large number of images covering the dynamic range of the sensor. The results show the effectiveness of the algorithm over different targets. The performance is evaluated qualitatively by visual inspection and quantitatively by two parameters.
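One plausible reading of the gain-estimation step, normalizing each detector's mean response over the collected uniform segments, can be sketched as follows (the exact statistic used by the authors is not specified in the abstract):

```python
import numpy as np

def column_destripe_gains(uniform_segments):
    """Estimate per-detector (per-column) multiplicative corrections
    from a list of uniform image segments (each rows x columns): each
    detector's mean response is normalized by the mean response of all
    detectors over the same segments."""
    data = np.vstack(uniform_segments).astype(float)
    col_mean = data.mean(axis=0)          # mean response of each detector
    return col_mean.mean() / col_mean     # multiplicative correction

def destripe(image, gains):
    """Apply the per-column gains to remove striping."""
    return image * gains[np.newaxis, :]
```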
Cloud screening Coastal Zone Color Scanner images using channel 5
NASA Technical Reports Server (NTRS)
Eckstein, B. A.; Simpson, J. J.
1991-01-01
Clouds are removed from Coastal Zone Color Scanner (CZCS) data using channel 5. Instrumentation problems require pre-processing of channel 5 before an intelligent cloud-screening algorithm can be used. For example, at intervals of about 16 lines, the sensor records anomalously low radiances. Moreover, the calibration equation yields negative radiances when the sensor records zero counts, and pixels corrupted by electronic overshoot must also be excluded. The remaining pixels may then be used in conjunction with the procedure of Simpson and Humphrey to determine the CZCS cloud mask. These results plus in situ observations of phytoplankton pigment concentration show that pre-processing and proper cloud-screening of CZCS data are necessary for accurate satellite-derived pigment concentrations. This is especially true in the coastal margins, where pigment content is high and image distortion associated with electronic overshoot is also present. The pre-processing algorithm is critical to obtaining accurate global estimates of pigment from spacecraft data.
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize, or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed that realizes compression and encryption simultaneously, with a key that is easily distributed, stored, or memorized. The input image is divided into 4 blocks to compress and encrypt; then the pixels of two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling their original row vectors with a logistic map, and the random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
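The circulant-matrix construction driven by a logistic map can be sketched as follows; the parameter values x0 and mu stand in for the key and are illustrative:

```python
import numpy as np
from scipy.linalg import circulant

def logistic_circulant(n, x0=0.37, mu=3.99, burn=100):
    """Build an n x n circulant measurement matrix whose generating
    vector is driven by the logistic map x_{k+1} = mu*x_k*(1-x_k).
    x0 and mu play the role of the (small, easily stored) key."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = mu * x * (1 - x)
    row = np.empty(n)
    for k in range(n):
        x = mu * x * (1 - x)
        row[k] = 2 * x - 1                # map (0, 1) to (-1, 1)
    return circulant(row)
```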
NASA Astrophysics Data System (ADS)
Wu, Jianping; Lu, Fei; Zou, Kai; Yan, Hong; Wan, Min; Kuang, Yan; Zhou, Yanqing
2018-03-01
An ultra-high-angular-velocity, small-aperture, high-precision stabilization control technology for active-optics image-motion compensation is put forward in this paper. The image blur caused by relative motion of several hundred degrees per second between the imaging system and the target is theoretically analyzed. A velocity-match model of the detection system and the active-optics compensation system is built, and the experimental parameters of an active-optics image-motion compensation platform are designed. High-velocity, high-precision optical compensation control at several hundred degrees per second is studied and implemented. With a relative motion velocity of up to 250°/s and an image-motion amplitude of more than 20 pixels, motion blur after active-optics compensation is less than one pixel. The bottleneck of ultra-high angular velocity combined with long exposure time in search and infrared detection systems is thus successfully broken through.
Land Cover Classification in a Complex Urban-Rural Landscape with Quickbird Imagery
Moran, Emilio Federico.
2010-01-01
High spatial resolution images have been increasingly used for urban land use/cover classification, but the high spectral variation within the same land cover, the spectral confusion among different land covers, and the shadow problem often lead to poor classification performance based on the traditional per-pixel spectral-based classification methods. This paper explores approaches to improve urban land cover classification with Quickbird imagery. Traditional per-pixel spectral-based supervised classification, incorporation of textural images and multispectral images, spectral-spatial classifier, and segmentation-based classification are examined in a relatively new developing urban landscape, Lucas do Rio Verde in Mato Grosso State, Brazil. This research shows that use of spatial information during the image classification procedure, either through the integrated use of textural and spectral images or through the use of segmentation-based classification method, can significantly improve land cover classification performance. PMID:21643433
A robust fuzzy local Information c-means clustering algorithm with noise detection
NASA Astrophysics Data System (ADS)
Shang, Jiayu; Li, Shiren; Huang, Junwei
2018-04-01
Fuzzy c-means clustering (FCM), especially with spatial constraints (FCM_S), is an effective algorithm for image segmentation. Its reliability stems not only from representing the fuzziness of each pixel's class membership but also from exploiting spatial contextual information. However, these algorithms still have problems when processing noisy images: they are sensitive to parameters that must be tuned according to prior knowledge of the noise. In this paper, we propose a new FCM algorithm combining gray-level and spatial constraints, called the spatial and gray-level denoised fuzzy c-means (SGDFCM) algorithm. The new algorithm overcomes the parameter disadvantages mentioned above by considering the noise possibility of each pixel, which improves robustness and preserves more detail. Furthermore, the noise possibility can be calculated in advance, making the algorithm both effective and efficient.
Estimating the exceedance probability of rain rate by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
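The core idea, modeling the conditional exceedance probability with a logistic link, can be sketched with a plain gradient-ascent fit; this omits the paper's partial-likelihood treatment of dependent data:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, iters=3000):
    """Minimal logistic regression: models P(rain rate > threshold |
    covariates X) and fits weights by gradient ascent on the
    log-likelihood. Illustrative only."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w += lr * Xb.T @ (y - p) / len(y)       # log-likelihood gradient
    return w

def predict_exceedance(X, w):
    """Conditional probability that rain rate exceeds the threshold."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))
```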
Impact of LANDSAT MSS sensor differences on change detection analysis
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1983-01-01
Some 512 by 512 pixel subwindows of simultaneously acquired scene pairs obtained by the LANDSAT 2, 3, and 4 multispectral band scanners were coregistered, using the LANDSAT 4 scenes as the base to which the other images were registered. Scattergrams between the coregistered scenes (a form of contingency analysis) were used to compare data from the various sensors radiometrically. Mode values were derived and used to visually fit a linear regression. Root mean square errors of the registration varied between 0.1 and 1.5 pixels. There appear to be no major problems preventing the use of LANDSAT 4 MSS with previous MSS sensors for change detection, provided the noise interference can be removed or minimized. Data normalizations for change detection should be based on the data rather than solely on calibration information; this allows simultaneous normalization of the atmosphere as well as the radiometry.
Image Segmentation Method Using Fuzzy C-Means Clustering Based on Multi-Objective Optimization
NASA Astrophysics Data System (ADS)
Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li
2018-04-01
Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means clustering (FCM) is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula, where λ adjusts the weight of the pixel's local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different experimental results show that the novel fuzzy C-means approach achieves efficient performance and computational time while segmenting images corrupted by different types of noise.
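For reference, the baseline fuzzy C-means iteration that the paper modifies (without the λ-weighted local-information term) looks like:

```python
import numpy as np

def fcm(pixels, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on a 1-D array of pixel intensities: alternate
    between updating cluster centers from the fuzzified memberships and
    updating memberships from the distances to the centers."""
    rng = np.random.default_rng(seed)
    x = pixels.astype(float).reshape(-1, 1)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        um = u ** m                              # fuzzified memberships
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.abs(x - centers.T) + 1e-12        # distances to each center
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers.ravel(), u
```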
Visibility enhancement of color images using Type-II fuzzy membership function
NASA Astrophysics Data System (ADS)
Singh, Harmandeep; Khehra, Baljit Singh
2018-04-01
Images taken in poor environmental conditions lose visibility and hide information in digital images. Therefore, image enhancement techniques are necessary for improving the significant details of these images. An extensive review has shown that histogram-based enhancement techniques greatly suffer from over-/under-enhancement issues, while fuzzy-based enhancement techniques suffer from over-/under-saturated pixel problems. In this paper, a novel Type-II fuzzy-based image enhancement technique is proposed for improving the visibility of images. The Type-II fuzzy logic can automatically extract the local atmospheric light and roughly eliminate the atmospheric veil during local detail enhancement. The proposed technique has been evaluated on 10 well-known weather-degraded color images and is also compared with four well-known existing image enhancement techniques. The experimental results reveal that the proposed technique outperforms the others in terms of visible edge ratio, color gradient, and number of saturated pixels.
Super-pixel extraction based on multi-channel pulse coupled neural network
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Super-pixel extraction techniques group pixels to form over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, image description based on super-pixels requires less computation and is easier to perceive, and it has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model, which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel's feature information and its spatial context. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm: the image is first divided into blocks of equal size; then, for each image block, the adjacent pixels of each seed with similar color are classified into a group, named a super-pixel; finally, post-processing is applied to those pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of superpixels and the segmentation precision by setting parameters, and has good potential for super-pixel extraction.
Observed Thermal Impacts of Wind Farms Over Northern Illinois.
Slawsky, Lauren M; Zhou, Liming; Baidya Roy, Somnath; Xia, Geng; Vuille, Mathias; Harris, Ronald A
2015-06-25
This paper assesses impacts of three wind farms in northern Illinois using land surface temperature (LST) data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments onboard the Terra and Aqua satellites for the period 2003-2013. Changes in LST between two periods (before and after construction of the wind turbines) and between wind farm pixels and nearby non-wind-farm pixels are quantified. An areal mean increase in LST by 0.18-0.39 °C is observed at nighttime over the wind farms, with the geographic distribution of this warming effect generally spatially coupled with the layout of the wind turbines (referred to as the spatial coupling), while there is no apparent impact on daytime LST. The nighttime LST warming effect varies with seasons, with the strongest warming in winter months of December-February, and the tightest spatial coupling in summer months of June-August. Analysis of seasonal variations in wind speed and direction from weather balloon sounding data and Automated Surface Observing System hourly observations from nearby stations suggest stronger winds correspond to seasons with greater warming and larger downwind impacts. The early morning soundings in Illinois are representative of the nighttime boundary layer and exhibit strong temperature inversions across all seasons. The strong and relatively shallow inversion in summer leaves warm air readily available to be mixed down and spatially well coupled with the turbine. Although the warming effect is strongest in winter, the spatial coupling is more erratic and spread out than in summer. These results suggest that the observed warming signal at nighttime is likely due to the net downward transport of heat from warmer air aloft to the surface, caused by the turbulent mixing in the wakes of the spinning turbine rotor blades.
Bio-inspired multi-mode optic flow sensors for micro air vehicles
NASA Astrophysics Data System (ADS)
Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik
2013-06-01
Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro-air-vehicles (MAV). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide-field-of-view optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp-based optic flow algorithm which is modified from the conventional EMD (Elementary Motion Detector) algorithm to give an optimum partitioning of hardware blocks in the analog and digital domains as well as adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in the digital domain, remains in a pixel-level analog processing unit. The rest of the blocks, including feature detection and time-stamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single-slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with external time-stamp processing, respectively.
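The core of the time-stamp approach can be sketched in a few lines: once a feature is detected at two neighboring photoreceptors, velocity follows from the pixel pitch divided by the difference of the latched time stamps. This is a hedged illustration of the principle only, not the chip's mixed-mode implementation; the pitch value is hypothetical.

```python
def timestamp_velocity(t_a, t_b, pitch_um=10.0):
    """1-D time-stamp optic flow: a feature seen at pixel A at time t_a and
    at the adjacent pixel B at time t_b moved one pixel pitch in (t_b - t_a),
    so its velocity is pitch / (t_b - t_a). Pitch value is illustrative."""
    dt = t_b - t_a
    if dt == 0:
        return float("inf")  # feature seen simultaneously: unresolvable speed
    return pitch_um / dt

# A feature crossing one 10 um pitch in 2 time units moves at 5 um per unit.
v = timestamp_velocity(0.0, 2.0, pitch_um=10.0)
```

On hardware, the division is replaced by the look-up tables mentioned in the abstract, which is what keeps the chip-level digital processing cheap.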
NASA Astrophysics Data System (ADS)
Qie, G.; Wang, G.; Wang, M.
2016-12-01
Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors are ignored, which results in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City, Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that statistically significantly improved the fit of the models to the data and reduced the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained from the integrated models compared with traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
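The linear spectral unmixing step at the heart of the method above can be sketched as a least-squares problem: a pixel's spectrum is modeled as a linear combination of endmember spectra, and the recovered coefficients are the abundance fractions. The endmember values below are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical endmember spectra (rows: endmembers, columns: bands).
# In a real LSUA these come from the image or a spectral library.
endmembers = np.array([
    [0.05, 0.08, 0.45, 0.50],   # vegetation
    [0.20, 0.25, 0.30, 0.35],   # soil
    [0.02, 0.03, 0.04, 0.05],   # shadow / dark surface
])

def unmix_pixel(spectrum, E):
    """Least-squares abundance estimate, clipped to be non-negative and
    renormalized so the fractions sum to one."""
    f, *_ = np.linalg.lstsq(E.T, spectrum, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()

# A pixel that is 70% vegetation and 30% soil:
mixed = 0.7 * endmembers[0] + 0.3 * endmembers[1]
fractions = unmix_pixel(mixed, endmembers)
```

The recovered vegetation fraction is exactly the quantity the authors feed into their regression and kNN models as an extra predictor.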
Developing Methods for Fraction Cover Estimation Toward Global Mapping of Ecosystem Composition
NASA Astrophysics Data System (ADS)
Roberts, D. A.; Thompson, D. R.; Dennison, P. E.; Green, R. O.; Kokaly, R. F.; Pavlick, R.; Schimel, D.; Stavros, E. N.
2016-12-01
Terrestrial vegetation seldom covers an entire pixel due to spatial mixing at many scales. Estimating the fractional contributions of photosynthetic green vegetation (GV), non-photosynthetic vegetation (NPV), and substrate (soil, rock, etc.) to mixed spectra can significantly improve quantitative remote measurement of terrestrial ecosystems. Traditional methods for estimating fractional vegetation cover rely on vegetation indices that are sensitive to variable substrate brightness, NPV and sun-sensor geometry. Spectral mixture analysis (SMA) is an alternate framework that provides estimates of fractional cover. However, simple SMA, in which the same set of endmembers is used for an entire image, fails to account for natural spectral variability within a cover class. Multiple Endmember Spectral Mixture Analysis (MESMA) is a variant of SMA that allows the number and types of pure spectra to vary on a per-pixel basis, thereby accounting for endmember variability and generating more accurate cover estimates, but at a higher computational cost. Routine generation and delivery of GV, NPV, and substrate (S) fractions using MESMA is currently in development for large, diverse datasets acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). We present initial results, including our methodology for ensuring consistency and generalizability of fractional cover estimates across a wide range of regions, seasons, and biomes. We also assess uncertainty and provide a strategy for validation. GV, NPV, and S fractions are an important precursor for deriving consistent measurements of ecosystem parameters such as plant stress and mortality, functional trait assessment, disturbance susceptibility and recovery, and biomass and carbon stock assessment. Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
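The MESMA idea described above, letting the endmember set vary per pixel, can be sketched as a search over candidate endmember combinations, keeping whichever combination reconstructs the pixel with the lowest error. The spectral library values here are hypothetical, and the fit is unconstrained least squares rather than the full constrained solver a production pipeline would use.

```python
import itertools
import numpy as np

def sma_fit(spectrum, E):
    """Least-squares fractions for one endmember set plus reconstruction RMSE."""
    f, *_ = np.linalg.lstsq(E.T, spectrum, rcond=None)
    rmse = float(np.sqrt(np.mean((E.T @ f - spectrum) ** 2)))
    return f, rmse

def mesma_pixel(spectrum, libraries):
    """MESMA-style per-pixel model selection: try every combination of one
    candidate spectrum per class, keep the lowest-RMSE combination."""
    best = None
    for combo in itertools.product(*libraries.values()):
        f, rmse = sma_fit(spectrum, np.array(combo))
        if best is None or rmse < best[2]:
            best = (combo, f, rmse)
    return best

# Hypothetical library: two GV variants, one NPV, one substrate spectrum.
gv1 = [0.05, 0.08, 0.45, 0.50]
gv2 = [0.04, 0.07, 0.55, 0.60]
npv = [0.15, 0.18, 0.22, 0.25]
sub = [0.20, 0.25, 0.30, 0.35]
library = {"GV": [gv1, gv2], "NPV": [npv], "S": [sub]}

# Pixel mixed from the *second* GV variant; MESMA should pick gv2, not gv1.
pixel = 0.5 * np.array(gv2) + 0.3 * np.array(npv) + 0.2 * np.array(sub)
combo, fractions, rmse = mesma_pixel(pixel, library)
```

The exhaustive loop also makes the quoted computational cost concrete: the number of fits per pixel is the product of the per-class library sizes.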
HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Michael D.; Dawson, William A.; Hogg, David W.
2015-07-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
[The optimizing design and experiment for a MOEMS micro-mirror spectrometer].
Mo, Xiang-xia; Wen, Zhi-yu; Zhang, Zhi-hai; Guo, Yuan-jun
2011-12-01
A MOEMS micro-mirror spectrometer, which uses a micro-mirror as a light switch so that the spectrum can be detected by a single detector, has the advantages of transforming DC into AC, applying Hadamard transform optics without an additional template, high pixel resolution and low cost. In this spectrometer, the vital problem is the conflict between the scale of the slit and the light intensity. Hence, in order to improve the resolution of this spectrometer, the present paper gives an analysis of the new effects caused by the micro structure and the optimal values of the key factors. Firstly, the effects of the diffraction limit, spatial sampling rate and curved slit image on the resolution of the spectrum were proposed. Then, the results were simulated and the key values were tested on the micro-mirror spectrometer. Finally, taking all three effects into account, the micro system was optimized. With a scale of 70 mm x 130 mm, decreasing the height of the image at the plane of the micro mirror cannot diminish the influence of the curved slit image in the spectrum; under the demand of the spatial sampling rate, the resolution must be twice the pixel resolution; only if the width of the slit is 1.818 μm and the pixel resolution is 2.2786 μm can the spectrometer achieve its best performance.
Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing
NASA Astrophysics Data System (ADS)
Tian, Q.; Fainman, Y.; Lee, Sing H.
1989-02-01
The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of the Foley-Sammon (F-S) transform, Hotelling trace criterion (HTC), Fukunaga-Koontz (F-K) transform, linear discriminant function (LDF) and generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form. However, the calculations of the discriminant vectors differ between algorithms. Summaries of how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. Especially in the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular, pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also compare theoretically the classification effectiveness of the discriminant functions from F-S, HTC and F-K with that of LDF and GMF, and of the linear-mapping-based algorithms with the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image databases, each image consisting of 64 x 64 pixels.
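The pseudo-inverse shortcut for the underdetermined case can be verified numerically: for a p x n data matrix X of mean-removed training images (n < p, full column rank), the pseudo-inverse of the singular p x p matrix X Xᵀ equals X (XᵀX)⁻² Xᵀ, which needs only the small n x n image correlation matrix. The data here are random stand-ins for training images.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 64, 10                      # pixels per image, number of training images
X = rng.standard_normal((p, n))    # columns play the role of training images

# Direct pseudo-inverse of the large, singular p x p pixel correlation matrix:
R = X @ X.T
R_pinv_direct = np.linalg.pinv(R)

# Same result via the small, non-singular n x n image correlation matrix:
# pinv(X X^T) = X (X^T X)^-2 X^T  when X has full column rank.
G = X.T @ X
G_inv = np.linalg.inv(G)
R_pinv_small = X @ G_inv @ G_inv @ X.T
```

The identity follows from the SVD X = UΣVᵀ: both expressions reduce to UΣ⁻²Uᵀ, so no p x p inversion is ever needed.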
NASA Astrophysics Data System (ADS)
Gu, Yameng; Zhang, Xuming
2017-05-01
Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
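The weighted-averaging core of the multiframe NLM scheme above can be sketched as follows. For simplicity this sketch weights raw intensity patches with a Gaussian kernel; the paper's method instead computes weights from the SCM pulse-output NMI features, which this illustration does not reproduce. Patch size, search radius and the smoothing parameter h are hypothetical.

```python
import numpy as np

def nlm_denoise(frames, patch=1, search=2, h=0.15):
    """Minimal multiframe non-local means: restore the middle frame by a
    weighted average over all pixels in a search window across all frames,
    with weights from patch similarity (raw patches, not SCM/NMI features)."""
    mid = len(frames) // 2
    ref = frames[mid]
    out = np.zeros_like(ref, dtype=float)
    H, W = ref.shape
    pad = [np.pad(f.astype(float), patch + search, mode="reflect") for f in frames]
    for i in range(H):
        for j in range(W):
            ci, cj = i + patch + search, j + patch + search
            ref_patch = pad[mid][ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            num = den = 0.0
            for f in pad:                         # the considered + neighbor frames
                for di in range(-search, search + 1):
                    for dj in range(-search, search + 1):
                        q = f[ci + di - patch:ci + di + patch + 1,
                              cj + dj - patch:cj + dj + patch + 1]
                        w = np.exp(-np.mean((q - ref_patch) ** 2) / h ** 2)
                        num += w * f[ci + di, cj + dj]
                        den += w
            out[i, j] = num / den
    return out
```

Averaging across neighboring frames is what lets the method exploit the temporal redundancy of multiframe OCT data beyond single-frame NLM.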
Toward one Giga frames per second--evolution of in situ storage image sensors.
Etoh, Takeharu G; Son, Dao V T; Yamada, Tetsuo; Charbon, Edoardo
2013-04-08
The ISIS is an ultra-fast image sensor with in-pixel storage. The evolution of the ISIS in the past and in the near future is reviewed and forecasted. To cover the storage area with a light shield, the conventional frontside illuminated ISIS has a limited fill factor. To achieve higher sensitivity, a BSI ISIS was developed. To avoid direct intrusion of light and migration of signal electrons to the storage area on the frontside, a cross-sectional sensor structure with thick pnpn layers was developed, and named "Tetratified structure". By folding and looping in-pixel storage CCDs, an image signal accumulation sensor, ISAS, is proposed. The ISAS has a new function, the in-pixel signal accumulation, in addition to the ultra-high-speed imaging. To achieve much higher frame rate, a multi-collection-gate (MCG) BSI image sensor architecture is proposed. The photoreceptive area forms a honeycomb-like shape. Performance of a hexagonal CCD-type MCG BSI sensor is examined by simulations. The highest frame rate is theoretically more than 1Gfps. For the near future, a stacked hybrid CCD/CMOS MCG image sensor seems most promising. The associated problems are discussed. A fine TSV process is the key technology to realize the structure.
Guided-wave high-performance spectrometers for the MEOS miniature earth observation satellite
NASA Astrophysics Data System (ADS)
Kruzelecky, Roman V.; Wong, Brian; Zou, Jing; Jamroz, Wes; Sloan, James; Cloutis, Edward
2017-11-01
The MEOS Miniature Earth Observing Satellite is a low-cost mission being developed for the Canadian Space Agency with international collaborations that will innovatively combine remote correlated atmospheric/land-cover measurements with the corresponding atmospheric and ecosystem modelling in near real-time to obtain simultaneous variations in lower tropospheric GHG mixing ratios and the resulting responses of the surface ecosystems. MEOS will provide lower tropospheric CO2, CH4, CO, N2O, H2O and aerosol mixing ratios over natural sources and sinks using two kinds of synergistic observations: a forward limb measurement and a follow-on nadir measurement over the same geographical tangent point. The measurements will be accomplished using separate limb and nadir suites of innovative miniature line-imaging spectrometers and will be spatially coordinated such that the same air mass is observed in both views within a few minutes. The limb data will consist of 16-pixel vertical spectral line imaging to provide 1-km vertical resolution, while the corresponding nadir measurements will view sixteen 5 by 10 km2 ground pixels with a 160-km East-West swath width. To facilitate the mission accommodation on a low-cost microsat with a net payload mass under 22 kg, groundbreaking miniature guided-wave spectrometers with advanced optical filtering and coding technologies will be employed based on MPBC's patented IOSPEC technologies. The data synergy requirements for each view will be innovatively met using two complementary miniature line-imaging spectrometers to provide broad-band measurements from 1200 to 2450 nm at about 1.2 nm/pixel bandwidth using a multislit binary-coded MEMS-IOSPEC and simultaneous high-resolution multiple microchannels at 0.03 nm FWHM using the revolutionary FP-IOSPEC Fabry-Perot guided-wave spectrometer concept.
The guided-wave spectrometer integration provides an order of magnitude reduction in the mass and volume relative to traditional bulk-optic spectrometers while also providing significant performance advantages; including an optically immersed master grating for minimal optical aberrations, robust optical alignment using a low-loss dielectric IR waveguide, and simultaneous broad-band spectral acquisition using advanced infrared linear arrays and multiplexing electronics. This paper describes the trial bread-boarding of the groundbreaking new spectrometer concepts and associated technologies towards the MEOS mission requirements.
State space approach to mixed boundary value problems.
NASA Technical Reports Server (NTRS)
Chen, C. F.; Chen, M. M.
1973-01-01
A state-space procedure for the formulation and solution of mixed boundary value problems is established. This procedure is a natural extension of the method used in initial value problems; however, certain special theorems and rules must be developed. The scope of the applications of the approach includes beam, arch, and axisymmetric shell problems in structural analysis, boundary layer problems in fluid mechanics, and eigenvalue problems for deformable bodies. Many classical methods in these fields developed by Holzer, Prohl, Myklestad, Thomson, Love-Meissner, and others can be either simplified or unified under new light shed by the state-variable approach. A beam problem is included as an illustration.
NASA Technical Reports Server (NTRS)
Knopp, Jerome
1996-01-01
Astronauts are required to interface with complex systems that require sophisticated displays to communicate effectively. Lightweight, head-mounted real-time displays that present holographic images for comfortable viewing may be the ideal solution. We describe an implementation of a liquid crystal television (LCTV) as a spatial light modulator (SLM) for the display of holograms. The implementation required the solution of a complex set of problems. These include field calculations, determination of the LCTV-SLM complex transmittance characteristics and a precise knowledge of the signal mapping between the LCTV and the frame-grabbing board that controls it. Realizing the hologram is further complicated by the coupling that occurs between the phase and amplitude in the LCTV transmittance. A single drive signal (a gray-level signal from a framegrabber) determines both amplitude and phase. Since they are not independently controllable (as is true in the ideal SLM), one must deal with the problem of optimizing (in some sense) the hologram based on this constraint. Solutions for the above problems have been found. An algorithm has been developed for field calculations that uses an efficient outer-product formulation. Juday's MEDOF (Minimum Euclidean Distance Optimal Filter) algorithm, originally used for filter calculations, has been successfully adapted to handle metrics appropriate for holography. This has solved the problem of optimizing the hologram to the constraints imposed by coupling. Two laboratory methods have been developed for determining an accurate mapping of framegrabber pixels to LCTV pixels. A friendly software system has been developed that integrates the hologram calculation and realization process using a simple set of instructions. The computer code and all the laboratory measurement techniques determining SLM parameters have been proven with the production of a high-quality test image.
Neighborhood comparison operator
NASA Technical Reports Server (NTRS)
Gennery, Donald B. (Inventor)
1987-01-01
Digital values in a moving window are compared by an operator having nine comparators (18) connected to line buffers (16) for receiving a succession of central pixels together with eight neighborhood pixels. A single bit of program control determines whether the neighborhood pixels are to be compared with the central pixel or a threshold value. The central pixel is always compared with the threshold. The comparator output, plus 2 bits indicating odd-even pixel/line information about the central pixel, addresses a lookup table (20) to provide 14 bits of information, including 2 bits which control a selector (22) to pass either the central pixel value, the other 12 bits of table information, or the bit-wise logic OR of all neighboring pixels.
Mapping target signatures via partial unmixing of AVIRIS data
NASA Technical Reports Server (NTRS)
Boardman, Joseph W.; Kruse, Fred A.; Green, Robert O.
1995-01-01
A complete spectral unmixing of a complicated AVIRIS scene may not always be possible or even desired. High quality data of spectrally complex areas are very high dimensional and are consequently difficult to fully unravel. Partial unmixing provides a method of solving only that fraction of the data inversion problem that directly relates to the specific goals of the investigation. Many applications of imaging spectrometry can be cast in the form of the following question: 'Are my target signatures present in the scene, and if so, how much of each target material is present in each pixel?' This is a partial unmixing problem. The number of unmixing endmembers is one greater than the number of spectrally defined target materials. The one additional endmember can be thought of as the composite of all the other scene materials, or 'everything else'. Several workers have proposed partial unmixing schemes for imaging spectrometry data, but each has significant limitations for operational application. The low probability detection methods described by Farrand and Harsanyi and the foreground-background method of Smith et al are both examples of such partial unmixing strategies. The new method presented here builds on these innovative analysis concepts, combining their different positive attributes while attempting to circumvent their limitations. This new method partially unmixes AVIRIS data, mapping apparent target abundances, in the presence of an arbitrary and unknown spectrally mixed background. It permits the target materials to be present in abundances that drive significant portions of the scene covariance. Furthermore it does not require a priori knowledge of the background material spectral signatures. The challenge is to find the proper projection of the data that hides the background variance while simultaneously maximizing the variance amongst the targets.
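One classical filter in this family of partial-unmixing methods, shown here as an illustration rather than as the new method the abstract announces, is the constrained-energy-minimization style matched filter: it gives unit response to the known target spectrum while minimizing the output energy of the unknown mixed background, so no background signatures need to be specified a priori.

```python
import numpy as np

def cem_filter(cube, target):
    """Constrained-energy-minimization matched filter (a classical partial
    unmixing approach): w = R^-1 t / (t^T R^-1 t) gives unit response to the
    target spectrum t and suppresses background variance.
    `cube` is (n_pixels, n_bands); returns an apparent-abundance score per pixel."""
    R = cube.T @ cube / cube.shape[0]     # band correlation matrix of the scene
    Rinv = np.linalg.inv(R)
    w = Rinv @ target / (target @ Rinv @ target)
    return cube @ w

# Synthetic scene: random background plus one pixel of pure target material.
rng = np.random.default_rng(1)
background = rng.standard_normal((200, 5))
target = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
cube = np.vstack([background, target])
scores = cem_filter(cube, target)
```

By construction a pure-target pixel scores exactly 1, while background pixels are projected toward zero, which is the "hide the background variance" behavior described above.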
Second-order polynomial model to solve the least-cost lumber grade mix problem
Urs Buehlmann; Xiaoqiu Zuo; R. Edward Thomas
2010-01-01
Material costs when cutting solid wood parts from hardwood lumber for secondary wood products manufacturing account for 20 to 50 percent of final product cost. These costs can be minimized by proper selection of the lumber quality used. The lumber quality selection problem is referred to as the least-cost lumber grade mix problem in the industry. The objective of this...
Selecting Pixels for Kepler Downlink
NASA Technical Reports Server (NTRS)
Bryson, Stephen T.; Jenkins, Jon M.; Klaus, Todd C.; Cote, Miles T.; Quintana, Elisa V.; Hall, Jennifer R.; Ibrahim, Khadeejah; Chandrasekaran, Hema; Caldwell, Douglas A.; Van Cleve, Jeffrey E.;
2010-01-01
The Kepler mission monitors > 100,000 stellar targets using 42 2200 x 1024 pixel CCDs. Bandwidth constraints prevent the downlink of all 96 million pixels per 30-minute cadence, so the Kepler spacecraft downlinks a specified collection of pixels for each target. These pixels are selected by considering the object brightness, background and the signal-to-noise of each pixel, and are optimized to maximize the signal-to-noise ratio of the target. This paper describes pixel selection, creation of spacecraft apertures that efficiently capture selected pixels, and aperture assignment to a target. Diagnostic apertures, short-cadence targets and custom specified shapes are discussed.
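The SNR-driven selection idea can be sketched with a greedy rule: add candidate pixels in order of decreasing per-pixel SNR for as long as the aggregate SNR of the aperture keeps improving. This is an illustration of the optimization criterion only, not the actual Kepler pipeline; the signal and noise values are made up.

```python
import numpy as np

def select_pixels(signal, noise):
    """Greedy aperture selection: sort pixels by per-pixel SNR and accumulate
    them while the aggregate SNR  sum(signal) / sqrt(sum(noise^2))  improves."""
    order = np.argsort(-(signal / noise))
    chosen, best_snr = [], 0.0
    s = v = 0.0
    for idx in order:
        s2, v2 = s + signal[idx], v + noise[idx] ** 2
        snr = s2 / np.sqrt(v2)
        if snr <= best_snr:        # adding this pixel would dilute the aperture
            break
        chosen.append(int(idx))
        best_snr, s, v = snr, s2, v2
    return sorted(chosen), best_snr

# Hypothetical target: a bright core pixel, a wing pixel, and a faint pixel.
signal = np.array([100.0, 50.0, 1.0])
noise = np.array([10.0, 10.0, 10.0])
aperture, snr = select_pixels(signal, noise)
```

With these numbers the faint third pixel is rejected: including it adds noise faster than signal, dropping the aggregate SNR.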
Image Edge Extraction via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A. (Inventor); Klinko, Steve (Inventor)
2008-01-01
A computer-based technique for detecting edges in gray-level digital images employs fuzzy reasoning to analyze whether each pixel in an image is likely on an edge. The image is analyzed on a pixel-by-pixel basis by analyzing gradient levels of pixels in a square window surrounding the pixel being analyzed. An edge path passing through the pixel having the greatest intensity gradient is used as input to a fuzzy membership function, which employs fuzzy singletons and inference rules to assign a new gray level value to the pixel that is related to the pixel's edginess degree.
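The final mapping from gradient strength to an output gray level can be sketched with a simple ramp-shaped membership function. The thresholds below are hypothetical, and this sketch collapses the patent's singleton-and-rule machinery into a single membership curve for illustration.

```python
def edge_membership(gradient, low=10.0, high=60.0):
    """Ramp fuzzy membership for 'this pixel is on an edge' as a function of
    the strongest gradient through the pixel (thresholds are illustrative)."""
    if gradient <= low:
        return 0.0
    if gradient >= high:
        return 1.0
    return (gradient - low) / (high - low)

def new_gray_level(gradient, max_gray=255):
    """Map the edginess degree onto the output gray-level range."""
    return int(round(edge_membership(gradient) * max_gray))
```

A weak gradient maps to black, a strong one to white, and intermediate gradients to proportional gray levels, so the output image encodes each pixel's edginess degree directly.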
NASA Astrophysics Data System (ADS)
Fu, Y.; Hu-Guo, C.; Dorokhov, A.; Pham, H.; Hu, Y.
2013-07-01
In order to exploit the ability to integrate a charge collecting electrode with analog and digital processing circuitry down to the pixel level, a new type of CMOS pixel sensors with full CMOS capability is presented in this paper. The pixel array is read out based on a column-parallel read-out architecture, where each pixel incorporates a diode, a preamplifier with a double sampling circuitry and a discriminator to completely eliminate analog read-out bottlenecks. The sensor featuring a pixel array of 8 rows and 32 columns with a pixel pitch of 80 μm×16 μm was fabricated in a 0.18 μm CMOS process. The behavior of each pixel-level discriminator isolated from the diode and the preamplifier was studied. The experimental results indicate that all in-pixel discriminators which are fully operational can provide significant improvements in the read-out speed and the power consumption of CMOS pixel sensors.
Deconvolution of mixing time series on a graph
Blocker, Alexander W.; Airoldi, Edoardo M.
2013-01-01
In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, yt = Axt, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
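The ill-posedness of a single step y = A x can be made concrete with a toy network-tomography instance: two aggregate link loads observed, three origin-destination flows to recover. A Tikhonov-regularized solve is shown as a minimal stand-in for the paper's multilevel state-space inference; the routing matrix is invented.

```python
import numpy as np

def ridge_deconvolve(A, y, lam=1e-6):
    """Tikhonov-regularized solution of the ill-posed inverse problem y = A x:
    x = argmin ||y - A x||^2 + lam ||x||^2, solved via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Hypothetical routing matrix: link 1 carries flows 1+2, link 2 carries 2+3.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
y = np.array([3.0, 5.0])       # observed aggregate link loads
x_hat = ridge_deconvolve(A, y)
```

Because A has more columns than rows, infinitely many flow vectors explain y; the regularizer picks the minimum-norm one, which is exactly the kind of ambiguity the paper's multilevel prior is designed to resolve more intelligently.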
An equivalent domain integral method for three-dimensional mixed-mode fracture problems
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Raju, I. S.
1991-01-01
A general formulation of the equivalent domain integral (EDI) method for mixed mode fracture problems in cracked solids is presented. The method is discussed in the context of a 3-D finite element analysis. The J integral consists of two parts: the volume integral of the crack front potential over a torus enclosing the crack front and the crack surface integral due to the crack front potential plus the crack face loading. In mixed mode crack problems the total J integral is split into J sub I, J sub II, and J sub III representing the severity of the crack front in three modes of deformations. The direct and decomposition methods are used to separate the modes. These two methods were applied to several mixed mode fracture problems, were analyzed, and results were found to agree well with those available in the literature. The method lends itself to be used as a post-processing subroutine in a general purpose finite element program.
An equivalent domain integral method for three-dimensional mixed-mode fracture problems
NASA Technical Reports Server (NTRS)
Shivakumar, K. N.; Raju, I. S.
1992-01-01
A general formulation of the equivalent domain integral (EDI) method for mixed mode fracture problems in cracked solids is presented. The method is discussed in the context of a 3-D finite element analysis. The J integral consists of two parts: the volume integral of the crack front potential over a torus enclosing the crack front and the crack surface integral due to the crack front potential plus the crack face loading. In mixed mode crack problems the total J integral is split into J sub I, J sub II, and J sub III representing the severity of the crack front in three modes of deformations. The direct and decomposition methods are used to separate the modes. These two methods were applied to several mixed mode fracture problems, were analyzed, and results were found to agree well with those available in the literature. The method lends itself to be used as a post-processing subroutine in a general purpose finite element program.
Phenological features for winter rapeseed identification in Ukraine using satellite data
NASA Astrophysics Data System (ADS)
Kravchenko, Oleksiy
2014-05-01
Winter rapeseed is one of the major oilseed crops in Ukraine that is characterized by high profitability and often grown with violations of the crop rotation requirements, leading to soil degradation. Therefore, rapeseed identification using satellite data is a promising direction for operational estimation of the crop acreage and rotation control. Crop acreage of rapeseed is about 0.5-3% of the total area of Ukraine, which poses a major problem for identification using satellite data [1]. While winter rapeseed could be classified using biomass features observed during autumn vegetation, these features are quite unstable due to field-to-field differences in planting dates as well as spatial and temporal heterogeneity in soil moisture availability. Due to this fact, autumn biomass level features could be used only locally (at NUTS-3 level) and are not suitable for large-scale country-wide crop identification. We propose to use crop parameters at the flowering phenological stage for crop identification and present a method for parameter estimation using time series of moderate-resolution data. Rapeseed flowering could be observed as a bell-shaped peak in the red reflectance time series. However, the duration of the flowering period that is observable by satellite data is only about two weeks, which is quite a short period taking into account inevitable cloud coverage issues. Thus we need daily time series to resolve the flowering peak, and due to this we are limited to moderate-resolution data. We used daily atmospherically corrected MODIS data coming from the Terra and Aqua satellites within the 90-160 DOY period to perform feature calculations. An empirical BRDF correction is used to minimize angular effects. We used Gaussian Processes Regression (GPR) for temporal interpolation to minimize errors due to residual cloud coverage, atmospheric correction and mixed pixel problems. We estimate 12 parameters for each time series.
They are red and near-infrared (NIR) reflectance and the timing at four stages: before and after the flowering, at the peak flowering and at the maximum NIR level. We used a Support Vector Machine for data classification. The most relevant feature for classification is the flowering peak timing, followed by the flowering peak magnitude. The dependency of the peak time on latitude, as a sole feature, could be used to reject 90% of non-rapeseed pixels, which greatly reduces the imbalance of the classification problem. To assess the accuracy of our approach we performed a stratified area frame sampling survey in Odessa region (NUTS-2 level) in 2013. The omission error is about 12.6% while the commission error is higher, at the level of 22%. This fact is explained by the high viewing angle composition criterion that is used in our approach to mitigate the high cloud coverage problem. However, the errors are quite stable spatially and could be easily corrected by a regression technique. To do this we performed area estimation for Odessa region using a regression estimator and obtained good area estimation accuracy with 4.6% error (1σ). [1] Gallego, F.J., et al., Efficiency assessment of using satellite data for crop area estimation in Ukraine. Int. J. Appl. Earth Observ. Geoinf. (2014), http://dx.doi.org/10.1016/j.jag.2013.12.013
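Extracting the flowering-peak timing and magnitude from a daily reflectance series can be sketched with a quadratic fit around the series maximum. This is a light-weight stand-in for the Gaussian-process interpolation the authors use; the bell-shaped reflectance model below is synthetic.

```python
import numpy as np

def flowering_peak(doy, red):
    """Estimate flowering-peak timing and magnitude from a red-reflectance
    time series by fitting a parabola to the five samples around the maximum;
    the parabola's vertex gives a sub-day peak estimate."""
    i = int(np.argmax(red))
    lo, hi = max(0, i - 2), min(len(doy), i + 3)
    a, b, c = np.polyfit(doy[lo:hi], red[lo:hi], 2)
    t_peak = -b / (2.0 * a)
    return t_peak, float(np.polyval([a, b, c], t_peak))

# Synthetic bell-shaped flowering signal peaking at DOY 130:
doy = np.arange(120, 141, dtype=float)
red = 0.05 + 0.1 * np.exp(-((doy - 130.0) / 4.0) ** 2)
t_peak, r_peak = flowering_peak(doy, red)
```

The two recovered quantities correspond to the two most relevant classification features named above: peak timing and peak magnitude.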
Neighborhood comparison operator
NASA Technical Reports Server (NTRS)
Gennery, D. B. (Inventor)
1985-01-01
Digital values in a moving window are compared by an operator having nine comparators connected to line buffers for receiving a succession of central pixels together with eight neighborhood pixels. A single bit of program control determines whether the neighborhood pixels are to be compared with the central pixel or a threshold value. The central pixel is always compared with the threshold. The comparator output, plus 2 bits indicating odd-even pixel/line information about the central pixel, addresses a lookup table to provide 14 bits of information, including 2 bits which control a selector to pass either the central pixel value, the other 12 bits of table information, or the bit-wise logical OR of all nine pixels through a circuit that implements a very wide OR gate.
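The comparator stage of this operator can be sketched in software as follows; the lookup-table and selector stages are omitted, and the window layout and bit ordering here are illustrative assumptions, not the patent's hardware.

```python
def neighborhood_compare(window, threshold, compare_to_center):
    """Software sketch of the nine-comparator stage: `window` is a 3x3 list of
    pixel values; the single control bit `compare_to_center` selects whether
    the eight neighbors are compared against the central pixel or against the
    threshold. The central pixel is always compared with the threshold."""
    center = window[1][1]
    ref = center if compare_to_center else threshold
    # One comparator bit per neighbor, row-major order, center skipped
    neighbor_bits = [
        int(window[r][c] >= ref)
        for r in range(3) for c in range(3)
        if not (r == 1 and c == 1)
    ]
    center_bit = int(center >= threshold)
    return center_bit, neighbor_bits
```

The nine output bits (plus the odd/even pixel/line bits, not modeled here) would form the lookup-table address in the hardware.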
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal, including: determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from these motion vectors using a non-linear model; and determining the position of a fourth pixel in a fourth image based upon the third motion vector.
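As a minimal sketch of the extrapolation step: if the non-linear model is taken to be a constant-acceleration (quadratic) motion model — an assumption for illustration, since the abstract does not fix the model — the fourth position follows from the two measured motion vectors.

```python
def extrapolate_position(p1, p2, p3):
    """Given a pixel's (x, y) positions in three consecutive frames, fit a
    quadratic (constant-acceleration) motion model to the two motion vectors
    and predict the position in a fourth frame. The quadratic model is an
    illustrative assumption, not the patent's specific non-linear model."""
    v1 = (p2[0] - p1[0], p2[1] - p1[1])   # first motion vector
    v2 = (p3[0] - p2[0], p3[1] - p2[1])   # second motion vector
    a = (v2[0] - v1[0], v2[1] - v1[1])    # per-frame acceleration
    v3 = (v2[0] + a[0], v2[1] + a[1])     # third (extrapolated) motion vector
    return (p3[0] + v3[0], p3[1] + v3[1])
```

For purely linear motion the acceleration term vanishes and this reduces to ordinary linear extrapolation.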
The Backscatter Cloudprobe with Polarization Detection: A New Aircraft Ice Water Detector
NASA Astrophysics Data System (ADS)
Freer, M.; Baumgardner, D.; Axisa, D.
2017-12-01
The differentiation of liquid water and ice crystals smaller than 100 μm in mixed phase clouds continues to challenge the cloud measurement community. In situ imaging probes now have pixel resolution down to about 5 μm, but at least 10 pixels are needed to accurately distinguish a water droplet from an ice crystal. This presents a major obstacle for the understanding of cloud glaciation in general, and the formation and evolution of cloud ice in particular. A new sensor has recently been developed that can detect and quantify supercooled liquid droplets and ice crystals. The Backscatter Cloudprobe with Polarization Detection (BCPD) is a very lightweight, compact and low power optical spectrometer that has already undergone laboratory, wind tunnel and flight tests that have validated its capabilities. The BCPD applies to single particles the optical approach that has been used for years in remote sensing to distinguish liquid water from ice crystals in ensembles of cloud particles. The sensor is mounted inside an aircraft and projects a linearly polarized laser beam to the outside through a heated window. Particles that pass through the sample volume of the laser scatter light, and the photons scattered in the back direction pass through another heated window where they are collected and focused onto a beam splitter that directs them onto two photodetectors. The P-detector senses the light with polarization parallel to that of the incident light and the S-detector measures the light that is polarized perpendicular to that of the laser. The polarization ratio, S/P, is sensitive to the asphericity of a particle and is used to distinguish liquid water from ice crystals. The BCPD has now been exercised in an icing wind tunnel where it was compared with other cloud spectrometers. It has also been flown on the NCAR C-130 and on a commercial Citation, making measurements in all-water, all-ice and mixed phase clouds.
Results from these three applications clearly show that the BCPD can be employed successfully to derive ice fraction in mixed phase clouds at sizes less than 50 μm, a size range that has previously been inaccessible to cloud researchers.
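The polarization-ratio principle can be illustrated with a toy classifier; the threshold value below is an arbitrary illustrative assumption, not a BCPD calibration constant.

```python
def classify_phase(s_signal, p_signal, ice_threshold=0.2):
    """Toy illustration of the BCPD principle: the polarization ratio S/P grows
    with particle asphericity, so near-spherical liquid droplets keep the
    incident polarization (low S/P) while aspherical ice crystals depolarize
    the backscattered light (high S/P). The 0.2 threshold is an assumption
    made for illustration only."""
    ratio = s_signal / p_signal
    return ("ice" if ratio > ice_threshold else "liquid"), ratio
```

In practice a real instrument would derive the ice fraction from the distribution of per-particle ratios rather than a single hard threshold.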
Snow and Dust over Inner Mongolia
NASA Technical Reports Server (NTRS)
2002-01-01
A severe snow-and-sand storm hit an 80,000 square-mile (205,000-square-km) stretch of the Chinese region of Inner Mongolia on New Year's Eve, killing 21 people and leaving thousands of people to face possible starvation. The affected area is located about 250 miles (400 km) northwest of Beijing. It is the worst snowstorm to hit the region in more than 50 years. Lasting about 3 days, the storm dumped 24 inches (60 cm) of snow mixed with sand from the Gobi Desert, stranding many residents in deep drifts. The Chinese Red Cross reports that almost 1 million people were affected by the storm and at least 10,000 head of livestock are confirmed dead. As many as 120,000 residents are in need of food and other supplies. The Sea-viewing Wide Field-of-view Sensor (SeaWiFS), flying aboard the OrbView-2 satellite, acquired this image of the storm on January 2, 2001, as it approached China's eastern provinces. You can see storm clouds (white pixels) and windblown dust (brownish pixels) crossing the Yellow Sea and East China Sea toward Japan and the western Pacific. Provided by the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE
Vulnerability of CMOS image sensors in Megajoule Class Laser harsh environment.
Goiffon, V; Girard, S; Chabane, A; Paillet, P; Magnan, P; Cervantes, P; Martin-Gonthier, P; Baggio, J; Estribeau, M; Bourgade, J-L; Darbon, S; Rousseau, A; Glebov, V Yu; Pien, G; Sangster, T C
2012-08-27
CMOS image sensors (CIS) are promising candidates as part of optical imagers for the plasma diagnostics devoted to the study of fusion by inertial confinement. However, the harsh radiative environment of Megajoule Class Lasers threatens the performance of these optical sensors. In this paper, the vulnerability of CIS to the transient and mixed pulsed radiation environment associated with such facilities is investigated during an experiment at the OMEGA facility at the Laboratory for Laser Energetics (LLE), Rochester, NY, USA. The transient and permanent effects of the 14 MeV neutron pulse on CIS are presented. The behavior of the tested CIS shows that active pixel sensors (APS) exhibit better hardness to this harsh environment than a CCD. A first order extrapolation of the reported results to the higher level of radiation expected for Megajoule Class Laser facilities (Laser Megajoule in France or the National Ignition Facility in the USA) shows that temporarily saturated pixels due to transient neutron-induced single event effects will be the major issue for the development of radiation-tolerant plasma diagnostic instruments, whereas the permanent degradation of the CIS related to displacement damage or total ionizing dose effects could be reduced by applying well known mitigation techniques.
PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization
NASA Astrophysics Data System (ADS)
Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh
2017-05-01
Multiple-point Geostatistics is a well-known general statistical framework by which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses an energy concept to model geological phenomena. While honoring the hard data points, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all surrounding patches of the simulation nodes. Therefore, it preserves pattern continuity in both continuous and categorical variables very well. It also shows a fuzzy result in every realization, similar to the expected result of multiple realizations of other statistical models. While the main core of most previous Multiple-point Geostatistics methods is sequential, the parallel core of our algorithm enables it to use the GPU efficiently to reduce CPU time. A new validation method for MPS is also proposed in this paper.
Motion camera based on a custom vision sensor and an FPGA architecture
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel
1998-09-01
A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC which is used for high level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing like spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.
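The time-of-travel computation that the FPGA performs can be sketched in a few lines; the function name, units and example values are illustrative assumptions, not the article's actual parameters.

```python
def edge_velocity(pixel_pitch_um, t_arrival_a_us, t_arrival_b_us):
    """Sketch of the time-of-travel principle: a moving edge triggers events
    (with timestamps from the temporal reference) at two adjacent pixels; the
    velocity component along the pixel axis is the pitch divided by the time
    elapsed between the two events."""
    dt = t_arrival_b_us - t_arrival_a_us
    return pixel_pitch_um / dt  # micrometers per microsecond
```

An edge crossing a 10 μm pitch in 2 μs would yield a 5 μm/μs velocity component.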
Fabrication of digital rainbow holograms and 3-D imaging using SEM based e-beam lithography.
Firsov, An; Firsov, A; Loechel, B; Erko, A; Svintsov, A; Zaitsev, S
2014-11-17
Here we present an approach for creating full-color digital rainbow holograms based on mixing three basic colors. Much like in a color TV with three luminescent points per screen pixel, each color pixel of the initial image is represented by three (R, G, B) distinct diffractive gratings in the hologram structure. Changes of either the duty cycle or the area of the gratings are used to provide the proper R, G, B intensities. Special algorithms allow one to design rather complicated 3D images (that might even replace each other as the hologram rotates). The software developed ("RainBow") provides stability of colorization of the rotated image by equalizing the angular blur from the gratings responsible for the R, G, B basic colors. The approach based on R, G, B color synthesis allows one to fabricate gray-tone rainbow holograms containing white color, which is hardly possible in traditional dot-matrix technology. Budgetary electron beam lithography based on an SEM column was used to fabricate practical examples of digital rainbow holograms. The results of fabrication of large rainbow holograms, from design to imprinting, are presented. Advantages of the EBL in comparison to traditional optical (dot-matrix) technology are considered.
NASA Astrophysics Data System (ADS)
Hishe, Hadgu; Giday, Kidane; Neka, Mulugeta; Soromessa, Teshome; Van Orshoven, Jos; Muys, Bart
2015-01-01
Comprehensive and less costly forest inventory approaches are required to monitor the spatiotemporal dynamics of key species in forest ecosystems. Subpixel analysis using the ERDAS Imagine subpixel classification procedure was tested to extract Olea europaea subsp. cuspidata and Juniperus procera canopies from Landsat 7 Enhanced Thematic Mapper Plus imagery. Control points with various canopy area fractions of the target species were collected to develop signatures for each of the species. With these signatures, the Imagine subpixel classification procedure was run for each species independently. The subpixel process enabled the detection of O. europaea subsp. cuspidata and J. procera trees in pure and mixed pixels. A total of 100 pixels per species were field verified. An overall accuracy of 85% was achieved for O. europaea subsp. cuspidata and 89% for J. procera. A high overall accuracy of species detection in a natural forest was achieved, which encourages using the algorithm for future species monitoring activities. We recommend that the algorithm be validated in similar environments to build knowledge of its capability and ensure its wider usage.
Reflectance of vegetation, soil, and water
NASA Technical Reports Server (NTRS)
Wiegand, C. L.; Gausman, H. W.; Leamer, R. W.; Richardson, A. J.; Gerbermann, A. H. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Iron deficient and normal grain sorghum plants were sufficiently different spectrally in ERTS-1 band 5 CCT data to detect chlorotic sorghum areas 2.8 acres (1.1 hectares) or larger in size in computer printouts of the MSS data. The ratio of band 5 to band 7, or band 7 minus band 5, relates to vegetation ground cover conditions and helps to select training samples representative of differing vegetation maturity or vigor classes and to estimate ground cover or green vegetation density in the absence of ground information. The four plant parameters (leaf area index, plant population, plant cover, and plant height) explained 87 to 93% of the variability in band 6 digital counts and from 59 to 90% of the variation in bands 4 and 5. A ground area 2244 acres in size was classified on a pixel by pixel basis using simultaneously acquired aircraft support and ERTS-1 data. Overall recognition for vegetables, immature crops and mixed shrubs, and bare soil categories was 64.5% for aircraft and 59.6% for spacecraft data. Overall recognition results on a per field basis were 61.8% for aircraft and 62.8% for ERTS-1 data.
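The two greenness measures mentioned above can be sketched directly; the band values in the test are illustrative digital counts, not data from the report.

```python
def greenness_features(band5, band7):
    """Sketch of the two ERTS-1 MSS vegetation measures described above: the
    band-5-to-band-7 ratio and the band-7-minus-band-5 difference. Green
    vegetation absorbs in band 5 (red) and reflects strongly in band 7
    (near-infrared), so denser green cover gives a lower ratio and a larger
    difference."""
    return band5 / band7, band7 - band5
```

A vegetated pixel (low band 5, high band 7) therefore separates from bare soil (similar counts in both bands) on either measure.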
Fully-coupled analysis of jet mixing problems. Three-dimensional PNS model, SCIP3D
NASA Technical Reports Server (NTRS)
Wolf, D. E.; Sinha, N.; Dash, S. M.
1988-01-01
Numerical procedures formulated for the analysis of 3D jet mixing problems, as incorporated in the computer model SCIP3D, are described. The overall methodology closely parallels that developed in the earlier 2D axisymmetric jet mixing model, SCIPVIS. SCIP3D integrates the 3D parabolized Navier-Stokes (PNS) jet mixing equations, cast in mapped Cartesian or cylindrical coordinates, employing the explicit MacCormack algorithm. A pressure-split variant of this algorithm is employed in subsonic regions, with a sublayer approximation utilized for treating the streamwise pressure component. SCIP3D contains both the kε and kW turbulence models, and employs a two-component mixture approach to treat jet exhausts of arbitrary composition. Specialized grid procedures are used to adjust the grid growth in accordance with the growth of the jet, including a hybrid Cartesian/cylindrical grid procedure for rectangular jets which moves the hybrid coordinate origin towards the flow origin as the jet transitions from a rectangular to circular shape. Numerous calculations are presented for rectangular mixing problems, as well as for a variety of basic unit problems exhibiting the overall capabilities of SCIP3D.
NASA Technical Reports Server (NTRS)
Haralick, R. M.
1982-01-01
The facet model was used to accomplish step edge detection. The essence of the facet model is that any analysis made on the basis of the pixel values in some neighborhood has its final authoritative interpretation relative to the underlying grey tone intensity surface of which the neighborhood pixel values are observed noisy samples. Pixels which are part of regions have simple grey tone intensity surfaces over their areas. Pixels which have an edge in them have complex grey tone intensity surfaces over their areas. Specifically, an edge passes through a pixel only if there is some point in the pixel's area having a zero crossing of the second directional derivative taken in the direction of a non-zero gradient at the pixel's center. To determine whether or not a pixel should be marked as a step edge pixel, its underlying grey tone intensity surface was estimated on the basis of the pixels in its neighborhood.
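The facet-model criterion can be sketched in one dimension: fit a cubic to the noisy samples in each pixel's neighborhood (the estimated intensity surface) and mark the pixel if the second derivative of the fit crosses zero inside the pixel while the gradient there is non-zero. This 1-D analogue with a fixed 5-sample window and ad hoc tolerances is a simplifying assumption; the paper works with 2-D surfaces and directional derivatives.

```python
import numpy as np

def facet_step_edge_1d(signal, half=2):
    """1-D sketch of facet-model step edge detection: at each pixel, fit a
    cubic polynomial to the samples in its neighborhood, then mark the pixel
    as a step edge if the fit's second derivative has a zero crossing within
    the pixel (|x0| <= 0.5) where the first derivative is non-zero."""
    x = np.arange(-half, half + 1)
    edges = []
    for i in range(half, len(signal) - half):
        c3, c2, c1, c0 = np.polyfit(x, signal[i - half:i + half + 1], 3)
        if abs(c3) < 1e-12:                # flat neighborhood, no cubic term
            continue
        x0 = -c2 / (3 * c3)                # f''(x) = 6*c3*x + 2*c2 = 0 here
        grad = 3 * c3 * x0**2 + 2 * c2 * x0 + c1   # f'(x0)
        if abs(x0) <= 0.5 and abs(grad) > 1e-3:    # crossing inside the pixel
            edges.append(i)
    return edges
```

On an ideal unit step, the two pixels straddling the transition are flagged and the flat regions are not.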
Shapiro, Stephen L.; Mani, Sudhindra; Atlas, Eugene L.; Cords, Dieter H. W.; Holbrook, Britt
1997-01-01
A data acquisition circuit for a particle detection system that allows for time tagging of particles detected by the system. The particle detection system screens out background noise and discriminates between hits from scattered and unscattered particles. The detection system can also be adapted to detect a wide variety of particle types. The detection system utilizes a particle detection pixel array, each pixel containing a back-biased PIN diode, and a data acquisition pixel array. Each pixel in the particle detection pixel array is in electrical contact with a pixel in the data acquisition pixel array. In response to a particle hit, the affected PIN diodes generate a current, which is detected by the corresponding data acquisition pixels. This current is integrated to produce a voltage across a capacitor, the voltage being related to the amount of energy deposited in the pixel by the particle. The current is also used to trigger a read of the pixel hit by the particle.
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph
2006-01-01
PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.
NASA Astrophysics Data System (ADS)
Martel, Anne L.
2004-04-01
In order to extract quantitative information from dynamic contrast-enhanced MR images (DCE-MRI) it is usually necessary to identify an arterial input function. This is not a trivial problem if there are no major vessels present in the field of view. Most existing techniques rely on operator intervention or use various curve parameters to identify suitable pixels, but these are often specific to the anatomical region or the acquisition method used. They also require the signal from several pixels to be averaged in order to improve the signal to noise ratio; however, this introduces errors due to partial volume effects. We have described previously how factor analysis can be used to automatically separate arterial and venous components from DCE-MRI studies of the brain, but although that method works well for single slice images through the brain when the blood-brain barrier is intact, it runs into problems for multi-slice images with more complex dynamics. This paper will describe a factor analysis method that is more robust in such situations and is relatively insensitive to the number of physiological components present in the data set. The technique is very similar to that used to identify spectral end-members from multispectral remote sensing images.
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is a very important image preprocessing technology, especially when the image is captured under poor imaging conditions or when dealing with high-bit-depth images. The benefactor of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is obtaining a high dynamic range, high contrast image for human perception or interpretation. It is therefore necessary to integrate either empirical or statistical knowledge of human vision psychology and perception into image enhancement. Human vision psychology holds that humans' perception of, and response to, an intensity fluctuation δu of a visual signal is weighted by the background stimulus u, instead of being plainly uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law and Stevens's law. This paper integrates these three laws of human vision psychology and perception into a very popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were done on high-bit-depth star images captured in night scenes and on infrared images, both static images and video streams. The jitter problem in video streams is reduced by using the difference between the current frame's plateau value and the previous frame's plateau value to correct the current frame's plateau value. To account for random noise, the pixel value mapping process depends not only on the current pixel but also on the pixels in a window surrounding it, usually of size 3×3. The results of this improved algorithm are evaluated by entropy analysis and visual perception analysis.
The experimental results showed that the improved APE algorithm improved image quality: the target and the surrounding assistant targets could be identified easily, and the noise was not amplified much. For low quality images, the improved algorithm increases the information entropy and improves the aesthetic quality of the image or video stream, while for high quality images it does not degrade them.
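The core of plateau equalization can be sketched as follows; this is a minimal illustration of the clipping idea for 8-bit images, with a fixed plateau. The adaptive plateau selection, the inter-frame correction and the 3×3 windowed mapping discussed above are left out.

```python
import numpy as np

def plateau_equalization(img, plateau):
    """Minimal sketch of plateau equalization, the core of APE: the histogram
    is clipped at a plateau value before the cumulative mapping is built,
    limiting the contrast stretch that any heavily populated grey level
    (e.g. the dark background of a star image) can receive."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    clipped = np.minimum(hist, plateau)        # clip the histogram at the plateau
    cdf = np.cumsum(clipped)
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]                            # apply the mapping per pixel
```

With a low plateau the mapping approaches a uniform stretch of the occupied grey levels; with a high plateau it reduces to ordinary histogram equalization.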
NASA Astrophysics Data System (ADS)
Andreon, S.; Gargiulo, G.; Longo, G.; Tagliaferri, R.; Capuano, N.
2000-12-01
Astronomical wide-field imaging performed with new large-format CCD detectors poses data reduction problems of unprecedented scale, which are difficult to deal with using traditional interactive tools. We present here NExt (Neural Extractor), a new neural network (NN) based package capable of detecting objects and performing both deblending and star/galaxy classification in an automatic way. Traditionally, in astronomical images, objects are first distinguished from the noisy background by searching for sets of connected pixels having brightnesses above a given threshold; they are then classified as stars or as galaxies through diagnostic diagrams having variables chosen according to the astronomer's taste and experience. In the extraction step, assuming that images are well sampled, NExt requires only the simplest a priori definition of `what an object is' (i.e. it keeps all structures composed of more than one pixel) and performs the detection via an unsupervised NN, approaching detection as a clustering problem that has been thoroughly studied in the artificial intelligence literature. The first part of the NExt procedure consists of an optimal compression of the redundant information contained in the pixels via a mapping from pixel intensities to a subspace individualized through principal component analysis. At magnitudes fainter than the completeness limit, stars are usually almost indistinguishable from galaxies, and therefore the parameters characterizing the two classes do not lie in disconnected subspaces, thus preventing the use of unsupervised methods. We therefore adopted a supervised NN (i.e. a NN that first finds the rules to classify objects from examples and then applies them to the whole data set). In practice, each object is classified depending on its membership of the regions mapping the input feature space in the training set. 
In order to obtain an objective and reliable classification, instead of using an arbitrarily defined set of features we use a NN to select the most significant features among the large number of measured ones, and then we use these selected features to perform the classification task. In order to optimize the performance of the system, we implemented and tested several different models of NN. The comparison of the NExt performance with that of the best detection and classification package known to the authors (SExtractor) shows that NExt is at least as effective as the best traditional packages.
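The compression step described above, mapping redundant pixel (or feature) vectors onto their leading principal components, can be sketched with plain linear algebra; the function name and test data are illustrative, not part of NExt.

```python
import numpy as np

def pca_compress(features, n_components):
    """Sketch of PCA compression as used in the extraction step above: center
    the feature vectors, diagonalize their covariance matrix, and project onto
    the eigenvectors with the largest eigenvalues, discarding the redundant
    directions."""
    x = features - features.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)         # eigh returns ascending eigenvalues
    top = vecs[:, ::-1][:, :n_components]    # leading eigenvectors first
    return x @ top
```

By construction the first retained component carries at least as much variance as the second, which is easy to verify on correlated synthetic data.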
NASA Astrophysics Data System (ADS)
Bocchio, Marco
2014-09-01
The main goal of my PhD study is to understand the dust processing that occurs during the mixing between the galactic interstellar medium and the intracluster medium. This process is of particular interest in violent phenomena such as galaxy-galaxy interactions or the ``Ram Pressure Stripping'' due to the infall of a galaxy towards the cluster centre. Initially, I focus my attention on the problem of dust destruction and heating processes, revisiting the available models in the literature. I particularly stress the cases of extreme environments such as a hot coronal-type gas (e.g., IGM, ICM, HIM) and supernova-generated interstellar shocks. Under these conditions small grains are destroyed on short timescales and large grains are heated by collisions with fast electrons, making the dust spectral energy distribution very different from what is observed in the diffuse ISM. In order to test our models I apply them to the case of an interacting galaxy, NGC 4438. Herschel data of this galaxy indicate the presence of dust with a higher-than-expected temperature. With a multi-wavelength analysis on a pixel-by-pixel basis we show that this hot dust seems to be embedded in a hot ionised gas, therefore undergoing both collisional heating and small grain destruction. Furthermore, I focus on the long-standing conundrum of the dust destruction and dust formation timescales in the Milky Way. Based on the destruction efficiency in interstellar shocks, previous estimates led to a dust lifetime shorter than the typical timescale for dust formation in AGB stars. Using a recent dust model and an updated dust processing model we re-evaluate the dust lifetime in our Galaxy. Finally, I turn my attention to the phenomenon of ``Ram Pressure Stripping''. The galaxy ESO 137-001 represents one of the best cases to study this effect. Its long H2 tail embedded in a hot and ionised tail raises questions about its possible stripping from the galaxy or formation downstream in the tail.
Based on recent hydrodynamical numerical simulations, I show that the formation of H2 molecules on the surface of dust grains in the tail is a viable scenario.
NASA Astrophysics Data System (ADS)
Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.
2017-09-01
This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in MMTSAL: line balancing and model sequencing. In previous studies, many researchers considered these problems separately, and only a few studied them simultaneously, and then only for one-sided lines. In this study, the two problems are solved simultaneously to obtain a more efficient solution. A Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is generated by considering a variable launching interval and an assignment restriction constraint. The problem is analysed using small-size test cases to validate the integrated model. The numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the solver CPLEX. Experimental results indicate that integrating the model sequencing and line balancing problems helps to minimise the proposed objective functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kay, Randolph R; Campbell, David V; Shinde, Subhash L
A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. The desired pixel pitch across the enlarged pixel array is preserved by forming die stacks, with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.
A new 9T global shutter pixel with CDS technique
NASA Astrophysics Data System (ADS)
Liu, Yang; Ma, Cheng; Zhou, Quan; Wang, Xinyang
2015-04-01
Because they are free of motion blur, global shutter pixels are widely used in the design of CMOS image sensors for high speed applications such as machine vision, scientific inspection, etc. In global shutter sensors, all pixel signal information needs to be stored in the pixel first and then read out later. For higher frame rates, very fast operation of the pixel array is needed. There are basically two ways to store the signal in the pixel. One is in the charge domain, such as the one shown in [1], which requires a complicated process during pixel fabrication. The other is in the voltage domain; one example is the pixel in [2], which is based on 4T PPD technology, where the driving of the highly capacitive transfer gate normally limits the speed of array operation. In this paper we report a new 9T global shutter pixel based on 3T partially pinned photodiode (PPPD) technology. It incorporates three in-pixel storage capacitors allowing for correlated double sampling (CDS) and pipelined operation of the array (pixel exposure during the readout of the array). Only two control pulses are needed for all the pixels at the end of exposure, which allows high speed exposure control.
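The CDS principle the 9T pixel enables can be illustrated numerically; the frame values below are arbitrary assumptions, and the sign convention (signal level dropping below the reset level as light is collected) is the usual one for pinned-photodiode pixels.

```python
import numpy as np

def cds_frame(reset_frame, signal_frame):
    """Illustration of correlated double sampling: the output is the per-pixel
    difference between the stored reset sample and the stored signal sample,
    so the fixed per-pixel offset (and the reset noise common to both samples)
    cancels, leaving only the light-induced swing."""
    return reset_frame.astype(float) - signal_frame.astype(float)
```

In the example below, per-pixel offsets of 10-40 units vanish in the difference, recovering the light signal exactly.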
Incompressibility without tears - How to avoid restrictions of mixed formulation
NASA Technical Reports Server (NTRS)
Zienkiewicz, O. C.; Wu, J.
1991-01-01
Several time-stepping schemes for incompressibility problems are presented which can be solved directly for steady state or iteratively through the time domain. The difficulty of mixed interpolation is avoided by using these schemes. The schemes are applicable to problems of fluid and solid mechanics.
NASA Astrophysics Data System (ADS)
Filipenko, Mykhaylo; Gleixner, Thomas; Anton, Gisela; Durst, Jürgen; Michel, Thilo
2013-04-01
Many different experiments are being developed to explore the existence of the neutrinoless double beta decay (0νββ), since it would imply fundamental consequences for particle physics. In this work we present results on the evaluation of Timepix detectors with cadmium-telluride sensor material to search for 0νββ in 116Cd. This work was carried out with the COBRA collaboration and the Medipix collaboration. Due to the relatively small pixel dimension of 110×110×1000 μm³, the energy deposited by particles typically extends over several detector pixels, leading to a track in the pixel matrix. We investigated the separation power regarding different event types like α-particles, atmospheric muons, single electrons and electron-positron pairs produced at a single vertex. We achieved excellent classification power for α-particles and muons. In addition, we achieved good separation power between single electron and electron-positron pair production events. These separation abilities indicate a very good background reduction for the 0νββ search. Further, in order to distinguish between 2νββ and 0νββ, the energy resolution is of particular importance. We carried out simulations which demonstrate that an energy resolution of 0.43% is achievable at the Q-value for 0νββ of 116Cd at 2.814 MeV. We measured an energy resolution of 1.6% at a nominal energy of 1589 keV for electron-positron tracks, which is about two times worse than predicted by our simulations. This deviation is probably due to the problem of detector calibration at energies above 122 keV, which is discussed in this paper as well.
[The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].
Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang
2009-08-01
Computer simulation uses computer graphics to generate a realistic 3D structural scene of vegetation and to simulate the canopy radiation regime with the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are tall, complex structures with many branches, so building a realistic forest scene can require hundreds of thousands or even millions of facets, which is difficult for the radiosity method to compute. To make the radiosity method tractable for forest scenes at the pixel scale, the authors propose simplifying the structure of the forest crowns by abstracting them as ellipsoids. Based on the optical characteristics of the tree components and on the internal photon transport within a real crown, the authors assign optical properties to the ellipsoid surface facets. Following the idea of geometric-optics models, a gap model is incorporated into the simulation to obtain the forest canopy bidirectional reflectance at the pixel scale. The simulation results agree with the GOMS model and with Multi-angle Imaging SpectroRadiometer (MISR) multi-angle BRF observations, although some problems remain to be solved. The authors conclude that this study is valuable for the application of multi-angle remote sensing and for the inversion of vegetation canopy structural parameters.
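The gap model mentioned above is not specified in the abstract; a minimal sketch of the standard Beer-law gap probability, and of a pixel reflectance formed as a gap-weighted mix of crown and background, is given below. The leaf projection factor G = 0.5 (spherical leaf angle distribution) and the component reflectances are illustrative assumptions, not values from the paper:

```python
import math

def gap_fraction(lai, theta_deg, G=0.5):
    """Beer-law gap probability through the canopy at view zenith theta.

    lai: leaf area index; G: leaf projection factor (0.5 = spherical
    leaf angle distribution).
    """
    mu = math.cos(math.radians(theta_deg))
    return math.exp(-G * lai / mu)

def scene_brf(lai, theta_deg, r_crown=0.05, r_ground=0.12):
    """Pixel reflectance as a gap-weighted mix of crown and background.

    r_crown and r_ground are hypothetical red-band reflectances.
    """
    pg = gap_fraction(lai, theta_deg)
    return pg * r_ground + (1.0 - pg) * r_crown
```

This captures the qualitative behavior the gap model contributes: as the view zenith angle grows, the gap fraction shrinks and the pixel signal is increasingly dominated by the crowns.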
Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.
Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu
2015-05-18
We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system with real-time tracking of the viewer's eye positions. The system exploits a parallax barrier engineered to offer significantly improved three-dimensional image quality for a moving viewer without eyewear under dynamic eye tracking. The improvements include enhanced uniformity of image brightness, reduced point crosstalk, and the absence of pseudoscopic effects. We control the ratio between two parameters, the pixel size and the aperture of the parallax barrier slit, to improve the uniformity of image brightness in the viewing zone. Tracking the viewer's eye positions allows the pixel data control software to turn on only the pixels carrying view images near the viewer's eyes (all other pixels turned off), thus reducing point crosstalk. The eye-tracking-combined software delivers the correct image to each eye, thereby eliminating pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display without eye tracking. Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk in the viewing zone, to a level comparable to that of a commercial eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented here can largely resolve the point crosstalk problem, one of the critical factors that has prevented previous multiview autostereoscopic display technologies from replacing their eyewear-assisted counterparts.
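The paper's specific barrier engineering is not detailed in the abstract, but the basic parallax barrier geometry it builds on follows from similar triangles between the pixel plane, the barrier, and the viewer. The sketch below computes the barrier-to-pixel gap and barrier pitch for a hypothetical display; the pixel pitch, view count, and viewing distance are assumed example values, not the authors' parameters:

```python
def barrier_design(pixel_pitch_mm, n_views, view_dist_mm, eye_sep_mm=65.0):
    """Classic parallax barrier geometry from similar triangles.

    Returns (gap, barrier_pitch) in mm:
    - gap: distance between barrier and pixel plane so that adjacent
      view pixels map to points one eye separation apart at view_dist_mm
    - barrier_pitch: slit period, slightly below n_views * pixel pitch so
      that all slits converge on the same viewing zone
    """
    gap = pixel_pitch_mm * view_dist_mm / eye_sep_mm
    barrier_pitch = (n_views * pixel_pitch_mm * view_dist_mm
                     / (view_dist_mm + gap))
    return gap, barrier_pitch

# Hypothetical 4-view display: 0.1 mm pixel pitch, 600 mm viewing distance
gap, pitch = barrier_design(0.1, 4, 600.0)
```

The slight reduction of the barrier pitch below n_views times the pixel pitch is what makes all slits address a common viewing zone; the per-slit aperture within that pitch is the quantity the paper tunes against the pixel size for brightness uniformity.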
NASA Astrophysics Data System (ADS)
Vera, N. C.; GMMC
2013-05-01
In this paper we present results on macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed via a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated with mixed finite elements. For this application, a general three-dimensional domain is discretized with tetrahedra; the discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem with this methodology can, in principle, fully exploit the available computing hardware and deliver results in less time, two very important considerations in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
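The essence of the subdomain-with-interface-communication approach can be shown on a much smaller problem than the paper's 3D mixed formulation. The sketch below solves a 1D Darcy-type pressure equation with a two-subdomain overlapping alternating Schwarz iteration, using finite differences rather than mixed finite elements for brevity; the grid, overlap, and forcing are illustrative choices, not the authors' setup:

```python
import numpy as np

def solve_sub(p, lo, hi, h, f):
    """Solve -p'' = f on nodes lo..hi with p[lo], p[hi] as Dirichlet data."""
    m = hi - lo - 1                       # number of interior unknowns
    A = (2.0 * np.eye(m)
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1))   # standard tridiagonal Laplacian
    b = f * h**2 * np.ones(m)
    b[0] += p[lo]                         # interface / boundary contributions
    b[-1] += p[hi]
    p[lo + 1:hi] = np.linalg.solve(A, b)

n = 40
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
p = np.zeros(n + 1)                       # p(0) = p(1) = 0
m1, m2 = 16, 24                           # overlapping region: nodes 16..24

# Alternating Schwarz: each subdomain solve uses the other's latest
# interface value, mimicking communication through subdomain interfaces.
for _ in range(60):
    solve_sub(p, 0, m2, h, 1.0)           # left subdomain [0, 0.6]
    solve_sub(p, m1, n, h, 1.0)           # right subdomain [0.4, 1]
```

In the paper's setting each subdomain solve would itself be a mixed finite element problem for (velocity, pressure) and the solves would run concurrently on separate processors; the interface-exchange structure, however, is the same.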