Sample records for local pixel structures

  1. Local structure-based image decomposition for feature extraction with applications to face recognition.

    PubMed

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2013-09-01

    This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
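
    As a rough illustration of the decomposition step described above, the sketch below (Python/NumPy, with hypothetical names such as `local_structure_coefficients`, `patch`, and `lam`) solves the ridge regression that relates one central macro-pixel to its neighboring macro-pixels; the coefficient vector plays the role of the local structure feature at that pixel. This is a minimal reading of the abstract, not the authors' implementation.

```python
import numpy as np

def local_structure_coefficients(image, row, col, patch=3, radius=1, lam=1e-2):
    """Ridge-regression coefficients relating the central macro-pixel (patch)
    at (row, col) to its neighboring macro-pixels, sketched from the IDLS idea.
    Returns one coefficient per neighbor."""
    half = patch // 2

    def macro_pixel(r, c):
        # Flatten the patch centered at (r, c) into a vector.
        return image[r - half:r + half + 1, c - half:c + half + 1].reshape(-1)

    y = macro_pixel(row, col)                       # central macro-pixel
    neighbors = []
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if dr == 0 and dc == 0:
                continue
            neighbors.append(macro_pixel(row + dr, col + dc))
    X = np.stack(neighbors, axis=1)                 # patch_size^2 x n_neighbors

    # Ridge regression: (X^T X + lam I) a = X^T y
    n = X.shape[1]
    a = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    return a                                        # local structure feature vector

# toy usage
rng = np.random.default_rng(0)
img = rng.random((32, 32))
coeffs = local_structure_coefficients(img, 10, 10)
print(coeffs.shape)   # (8,) for an 8-neighborhood
```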

  2. Remote sensing image segmentation using local sparse structure constrained latent low rank representation

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping

    2016-09-01

    Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high-spatial-resolution remote sensing images leads to more severe interference among pixels in local neighborhoods, and LatLRR fails to capture the local complex structure information. Therefore, we present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes a local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use the local histogram transform to extract texture local histogram features (LHOG) at each pixel, which can efficiently capture complex and micro-texture patterns. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and the LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt the subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  3. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
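
    The sampling rule described above is easy to prototype. The following sketch uses hypothetical names and assumes a Gaussian falloff for the distance-dependent inclusion probability (the abstract does not specify the falloff law); each row of the binary measurement matrix selects a random center pixel and its nearby pixels with probability decreasing with distance.

```python
import numpy as np

def localized_random_matrix(height, width, n_measurements, sigma=2.0, rng=None):
    """Sketch of a localized random sampling matrix: each measurement picks a
    random center pixel and then samples nearby pixels with a probability that
    decays with distance from the center (Gaussian falloff assumed here)."""
    rng = np.random.default_rng() if rng is None else rng
    ys, xs = np.mgrid[0:height, 0:width]
    Phi = np.zeros((n_measurements, height * width))
    for m in range(n_measurements):
        cy, cx = rng.integers(0, height), rng.integers(0, width)
        dist2 = (ys - cy) ** 2 + (xs - cx) ** 2
        p_include = np.exp(-dist2 / (2.0 * sigma ** 2))      # inclusion probability
        mask = rng.random((height, width)) < p_include
        mask[cy, cx] = True                                   # always keep the center
        Phi[m] = mask.reshape(-1).astype(float)
    return Phi

Phi = localized_random_matrix(16, 16, n_measurements=64, rng=np.random.default_rng(1))
print(Phi.shape)   # (64, 256); y = Phi @ image.reshape(-1) gives the CS measurements
```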

  4. Fluorescence X-ray absorption spectroscopy using a Ge pixel array detector: application to high-temperature superconducting thin-film single crystals.

    PubMed

    Oyanagi, H; Tsukada, A; Naito, M; Saini, N L; Lampert, M O; Gutknecht, D; Dressler, P; Ogawa, S; Kasai, K; Mohamed, S; Fukano, A

    2006-07-01

    A Ge pixel array detector with 100 segments was applied to fluorescence X-ray absorption spectroscopy, probing the local structure of high-temperature superconducting thin-film single crystals (100 nm in thickness). Independent monitoring of pixel signals allows real-time inspection of artifacts owing to substrate diffractions. By optimizing the grazing-incidence angle theta and adjusting the azimuthal angle phi, smooth extended X-ray absorption fine structure (EXAFS) oscillations were obtained for strained (La,Sr)2CuO4 thin-film single crystals grown by molecular beam epitaxy. The results of EXAFS data analysis show that the local structure (CuO6 octahedron) in (La,Sr)2CuO4 thin films grown on LaSrAlO4 and SrTiO3 substrates is uniaxially distorted changing the tetragonality by approximately 5 × 10⁻³ in accordance with the crystallographic lattice mismatch. It is demonstrated that the local structure of thin-film single crystals can be probed with high accuracy at low temperature without interference from substrates.

  5. Optimization method of superpixel analysis for multi-contrast Jones matrix tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki

    2017-02-01

    Local statistics are widely utilized for quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have variable sizes and flexible shapes that preserve the tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). The new method forms superpixels by clustering image pixels in a 6-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features). All image pixels were clustered based on their spatial proximity and optical feature similarity. The optical features are scattering, OCT-A, birefringence and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and retinal pigment epithelium. Hence, a superpixel can be utilized as a local statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis. Since it reduces the number of pixels to be analyzed, it reduces the computational cost of such image processing.

  6. Orientation selectivity based structure for texture classification

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Lin, Weisi; Shi, Guangming; Zhang, Yazhong; Lu, Liu

    2014-10-01

    Local structure, e.g., the local binary pattern (LBP), is widely used in texture classification. However, LBP is too sensitive to disturbance. In this paper, we introduce a novel structure for texture classification. Research in cognitive neuroscience indicates that the primary visual cortex presents remarkable orientation selectivity for visual information extraction. Inspired by this, we investigate the orientation similarities among neighboring pixels, and propose an orientation selectivity based pattern for local structure description. Experimental results on texture classification demonstrate that the proposed structure descriptor is quite robust to disturbance.
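
    The abstract does not give the descriptor's exact definition, so the sketch below is only a plausible reading: each of the 8 neighbors contributes one bit that is set when its gradient orientation is close to that of the center pixel, giving an LBP-like but orientation-based code whose histogram can serve as a texture feature. Function and threshold names are hypothetical.

```python
import numpy as np
from scipy.ndimage import sobel

def orientation_pattern(image, angle_thresh=np.pi / 8):
    """Hypothetical orientation-selectivity pattern: each of the 8 neighbors sets
    a bit when its gradient orientation is close to the center pixel's orientation
    (the published descriptor may differ in detail)."""
    gx = sobel(image, axis=1)
    gy = sobel(image, axis=0)
    theta = np.arctan2(gy, gx)                      # gradient orientation per pixel

    h, w = image.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = theta[1:-1, 1:-1]
    for bit, (dr, dc) in enumerate(offsets):
        neigh = theta[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        diff = np.abs(np.angle(np.exp(1j * (neigh - center))))  # wrapped angle difference
        codes += (diff < angle_thresh).astype(np.int32) << bit
    return codes                                    # one 8-bit pattern per interior pixel

rng = np.random.default_rng(0)
img = rng.random((64, 64))
hist = np.bincount(orientation_pattern(img).ravel(), minlength=256)  # texture feature
```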

  7. Generation algorithm of craniofacial structure contour in cephalometric images

    NASA Astrophysics Data System (ADS)

    Mondal, Tanmoy; Jain, Ashish; Sardana, H. K.

    2010-02-01

    Anatomical structure tracing on cephalograms is a significant way to obtain cephalometric analysis. Computerized cephalometric analysis involves both manual and automatic approaches. The manual approach is limited in accuracy and repeatability. In this paper we have attempted to develop and test a novel method for automatic localization of craniofacial structure based on the edges detected in the region of interest. According to the grey-scale features of the different regions of cephalometric images, an algorithm for obtaining the tissue contour is put forward. Using edge detection with a specific threshold, an improved bidirectional contour-tracing approach is proposed: after interactive selection of the starting edge pixels, the tracking process repeatedly searches for an edge pixel in the neighborhood of the previously found edge pixel to segment the image, and the craniofacial structures are then obtained. The effectiveness of the algorithm is demonstrated by the preliminary experimental results obtained with the proposed method.

  8. Multiscale vector fields for image pattern recognition

    NASA Technical Reports Server (NTRS)

    Low, Kah-Chan; Coggins, James M.

    1990-01-01

    A uniform processing framework for low-level vision computing in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space is proposed. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation along with the strength of the filter's output determine the orientation and the length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and the magnitude of the vector indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
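
    A minimal sketch of the vector-sum idea is given below: a small bank of oriented first-derivative-of-Gaussian filters is applied, each filter contributes a vector whose direction is its preferred orientation and whose length is its response strength, and the resultant vector gives the local orientation and its strength. Doubled angles are used so that opposite orientations reinforce rather than cancel; that detail, the filter choice, and the names are assumptions rather than the paper's filter bank.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

def oriented_kernel(theta, sigma=2.0, size=9):
    """First-derivative-of-Gaussian kernel oriented along angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)        # coordinate along the preferred direction
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    k = -u / sigma ** 2 * g
    return k / np.abs(k).sum()

def local_orientation(image, n_orient=8, sigma=2.0):
    """Vector-sum estimate of local orientation from a bank of oriented filters.
    Doubled angles are used so that orientations theta and theta + pi reinforce
    rather than cancel (an assumption; the paper states the vector sum only)."""
    vx = np.zeros_like(image, dtype=float)
    vy = np.zeros_like(image, dtype=float)
    for k in range(n_orient):
        theta = k * np.pi / n_orient                 # preferred orientations in [0, pi)
        resp = np.abs(convolve(image.astype(float), oriented_kernel(theta, sigma)))
        vx += resp * np.cos(2 * theta)
        vy += resp * np.sin(2 * theta)
    orientation = 0.5 * np.arctan2(vy, vx)           # resultant orientation per pixel
    strength = np.hypot(vx, vy)                      # orientation-preference strength
    return orientation, strength

rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((64, 64)), 1.0)
ori, strength = local_orientation(img)
```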

  9. Mapping of the Culann-Tohil Region of Io

    NASA Technical Reports Server (NTRS)

    Turtle, E. P.; Keszthelyi, L. P.; Jaeger, W. L.; Radebaugh, J.; Milazzo, M. P.; McEwen, A. S.; Moore, J. M.; Schenk, P. M.; Lopes, R. M. C.

    2003-01-01

    The Galileo spacecraft completed its observations of Jupiter's volcanic moon Io in October 2001 with the orbit I32 flyby, during which new local (13-55 m/pixel) and regional (130-400 m/pixel) resolution images and spectroscopic data were returned of the antijovian hemisphere. We have combined an I32 regional mosaic (330 m/pixel) with lower-resolution C21 color data (1.4 km/pixel, Figure 1) and produced a geomorphologic map of the Culann-Tohil area of this hemisphere. Here we present the geologic features, map units, and structures in this region, and give preliminary conclusions about geologic activity for comparison with other regions to better understand Io's geologic evolution.

  10. The fundamentals of average local variance--Part I: Detecting regular patterns.

    PubMed

    Bøcher, Peder Klith; McCloy, Keith R

    2006-02-01

    The method of average local variance (ALV) computes the mean of the standard deviation values derived for a 3 x 3 moving window on a successively coarsened image to produce a function of ALV versus spatial resolution. In developing ALV, the authors used approximately a doubling of the pixel size at each coarsening of the image. They hypothesized that ALV is low when the pixel size is smaller than the size of scene objects, because the pixels on the object will have similar response values. When the pixels and objects are of similar size, they will tend to vary in response and the ALV values will increase. As the size of pixels increases further, more objects will be contained in a single pixel and ALV will decrease. The authors showed that various cover types produced single-peak ALV functions that inexplicably peaked when the pixel size was 1/2 to 3/4 of the object size. This paper reports on work done to explore the characteristics of the various forms of the ALV function and to understand the location of the peaks that occur in this function. The work was conducted using synthetically generated image data. The investigation showed that the hypothesis originally proposed is not adequate. A new hypothesis is proposed that the ALV function has peak locations that are related to the geometric size of pattern structures in the scene. These structures are not always the same as scene objects. Only in cases where the size of and separation between scene objects are equal does the ALV function detect the size of the objects. In situations where the distance between scene objects is larger than their size, the ALV function has a peak at the object separation, not at the object size. This work has also shown that multiple object structures of different sizes and distances in the image produce multiple peaks in the ALV function and that some of these structures are not implicitly recognized as such from our perspective. However, the magnitude of these peaks depends on the response mix in the structures, complicating their interpretation and analysis. The analysis of the ALV function is, thus, more complex than that generally reported in the literature.
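
    For concreteness, one simple way to reproduce an ALV curve of the kind described above is sketched below, assuming 2x2 block averaging as the coarsening step (which approximately doubles the pixel size per level; the original work may coarsen differently).

```python
import numpy as np
from scipy.ndimage import generic_filter

def average_local_variance(image, n_levels=5):
    """ALV curve: mean of the 3x3 local standard deviation, computed while the
    pixel size is roughly doubled at each level by 2x2 block averaging
    (an illustrative coarsening scheme)."""
    alv = []
    img = image.astype(float)
    for _ in range(n_levels):
        local_std = generic_filter(img, np.std, size=3)
        alv.append(local_std.mean())
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2   # crop to even size
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.array(alv)          # one ALV value per spatial resolution

rng = np.random.default_rng(0)
scene = rng.random((128, 128))
print(average_local_variance(scene))
```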

  11. Locality-constrained anomaly detection for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Liu, Jiabin; Li, Wei; Du, Qian; Liu, Kui

    2015-12-01

    Detecting a target with low-occurrence-probability from unknown background in a hyperspectral image, namely anomaly detection, is of practical significance. Reed-Xiaoli (RX) algorithm is considered as a classic anomaly detector, which calculates the Mahalanobis distance between local background and the pixel under test. Local RX, as an adaptive RX detector, employs a dual-window strategy to consider pixels within the frame between inner and outer windows as local background. However, the detector is sensitive if such a local region contains anomalous pixels (i.e., outliers). In this paper, a locality-constrained anomaly detector is proposed to remove outliers in the local background region before employing the RX algorithm. Specifically, a local linear representation is designed to exploit the internal relationship between linearly correlated pixels in the local background region and the pixel under test and its neighbors. Experimental results demonstrate that the proposed detector improves the original local RX algorithm.
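
    The dual-window local RX score described above can be written compactly; the sketch below computes the Mahalanobis distance of the pixel under test to the background ring between the inner and outer windows. Window sizes and the regularization term `eps` are illustrative, and the outlier-removal step proposed in the paper is not included.

```python
import numpy as np

def local_rx_score(cube, row, col, inner=3, outer=7, eps=1e-6):
    """Local RX detector score for the pixel under test at (row, col) in a
    hyperspectral cube of shape (H, W, B): Mahalanobis distance to the local
    background taken from the ring between the inner and outer dual windows."""
    hi, ho = inner // 2, outer // 2
    block = cube[row - ho:row + ho + 1, col - ho:col + ho + 1, :].reshape(-1, cube.shape[2])
    # Mask out the inner window so its (possibly anomalous) pixels are excluded.
    mask = np.ones((outer, outer), dtype=bool)
    mask[ho - hi:ho + hi + 1, ho - hi:ho + hi + 1] = False
    background = block[mask.reshape(-1)]

    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False) + eps * np.eye(cube.shape[2])
    d = cube[row, col] - mu
    return float(d @ np.linalg.solve(cov, d))       # squared Mahalanobis distance

rng = np.random.default_rng(0)
hsi = rng.random((32, 32, 10))
print(local_rx_score(hsi, 16, 16))
```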

Automatic bone outer contour extraction from B-mode ultrasound images based on local phase symmetry and quadratic polynomial fitting

    NASA Astrophysics Data System (ADS)

    Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery

    2017-06-01

    Analyzing ultrasound (US) images to get the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method to capture internal structures of the human body. However, bone segmentation of US images is still challenging because it is strongly influenced by speckle noise and the images have poor quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting methods to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step of three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning one pixel on the bone boundary in each column of the US images using a first-phase-feature searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole-filling method is then applied that utilizes the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that our proposed method produces an excellent result, with an average MSE before and after hole filling of 0.65.

  13. Low-dose CT reconstruction with patch based sparsity and similarity constraints

    NASA Astrophysics Data System (ADS)

    Xu, Qiong; Mou, Xuanqin

    2014-03-01

    With the rapid growth of CT-based medical applications, low-dose CT reconstruction becomes more and more important to human health. Compared with other methods, statistical iterative reconstruction (SIR) usually performs better in the low-dose case. However, the reconstructed image quality of SIR highly depends on the prior-based regularization, due to the insufficiency of low-dose data. The frequently used regularization is developed from pixel-based priors, such as the smoothness between adjacent pixels. This kind of pixel-based constraint cannot distinguish noise and structures effectively. Recently, patch-based methods, such as dictionary learning and non-local means filtering, have outperformed the conventional pixel-based methods. A patch is a small area of an image that expresses its structural information. In this paper, we propose to use patch-based constraints to improve the image quality of low-dose CT reconstruction. In the SIR framework, both patch-based sparsity and similarity are considered in the regularization term. On the one hand, patch-based sparsity is addressed by sparse representation and dictionary learning methods; on the other hand, patch-based similarity is addressed by the non-local means filtering method. We conducted a real-data experiment to evaluate the proposed method. The experimental results validate that this method can lead to better images with less noise and more detail than other methods in low-count and few-view cases.
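
    Of the two patch-based ingredients mentioned above, the similarity term is the simpler to illustrate. The sketch below shows a single-pixel non-local means estimate driven by patch similarity; it stands in for the similarity regularizer only and is not the authors' SIR implementation (parameter names and values are hypothetical).

```python
import numpy as np

def nonlocal_means_pixel(image, row, col, patch=5, search=11, h=0.1):
    """Single-pixel non-local means estimate: the pixel is replaced by a weighted
    average of pixels in a search window, with weights given by the similarity of
    the surrounding patches (the patch-similarity idea used as regularization)."""
    p, s = patch // 2, search // 2
    ref = image[row - p:row + p + 1, col - p:col + p + 1]
    weights, values = [], []
    for r in range(row - s, row + s + 1):
        for c in range(col - s, col + s + 1):
            cand = image[r - p:r + p + 1, c - p:c + p + 1]
            dist2 = np.mean((ref - cand) ** 2)              # patch distance
            weights.append(np.exp(-dist2 / (h ** 2)))
            values.append(image[r, c])
    w = np.array(weights)
    return float(np.sum(w * np.array(values)) / np.sum(w))

rng = np.random.default_rng(0)
noisy = rng.random((64, 64))
print(nonlocal_means_pixel(noisy, 32, 32))
```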

  14. Brain vascular image segmentation based on fuzzy local information C-means clustering

    NASA Astrophysics Data System (ADS)

    Hu, Chaoen; Liu, Xia; Liang, Xiao; Hui, Hui; Yang, Xin; Tian, Jie

    2017-02-01

    Light sheet fluorescence microscopy (LSFM) is a powerful optical-resolution fluorescence microscopy technique which enables observation of the mouse brain vascular network at cellular resolution. However, micro-vessel structures show intensity inhomogeneity in LSFM images, which makes it difficult to extract line structures. In this work, we developed a vascular image segmentation method by enhancing vessel details, which should be useful for estimating statistics like micro-vessel density. Since the eigenvalues of the Hessian matrix and their signs describe different geometric structures in images, enabling the construction of a vascular similarity function and the enhancement of line signals, the main idea of our method is to cluster the pixel values of the enhanced image. Our method contains three steps: 1) calculate the multiscale gradients and the differences between the eigenvalues of the Hessian matrix; 2) to generate the enhanced micro-vessel structures, train a feed-forward neural network on 2.26 million pixels to deal with the correlations between the multi-scale gradients and the eigenvalue differences; 3) use fuzzy local information c-means clustering (FLICM) to cluster the pixel values in the enhanced line signals. To verify the feasibility and effectiveness of this method, mouse brain vascular images were acquired by a commercial light-sheet microscope in our lab. The experiment on the segmentation method showed that the Dice similarity coefficient can reach up to 85%. The results illustrate that our approach to extracting line structures of blood vessels dramatically improves the vascular image and enables accurate extraction of blood vessels in LSFM images.
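
    Step 1 of the pipeline rests on the eigenvalues of the Gaussian-smoothed Hessian. The sketch below computes those eigenvalues per pixel and a simple line response from their difference; the neural-network enhancement and FLICM clustering stages of the paper are not reproduced, and the scale `sigma` and the response formula are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(image, sigma=2.0):
    """Eigenvalues of the Gaussian-smoothed Hessian at every pixel of a 2D image,
    sorted so that |lam1| <= |lam2|. Their signs and differences characterize
    line-like (vessel) structures."""
    img = image.astype(float)
    hxx = gaussian_filter(img, sigma, order=(0, 2))   # d2/dx2 (axis 1)
    hyy = gaussian_filter(img, sigma, order=(2, 0))   # d2/dy2 (axis 0)
    hxy = gaussian_filter(img, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 matrix [[hxx, hxy], [hxy, hyy]].
    tmp = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
    l1 = 0.5 * (hxx + hyy + tmp)
    l2 = 0.5 * (hxx + hyy - tmp)
    swap = np.abs(l1) > np.abs(l2)
    lam1 = np.where(swap, l2, l1)
    lam2 = np.where(swap, l1, l2)
    return lam1, lam2

rng = np.random.default_rng(0)
img = rng.random((64, 64))
lam1, lam2 = hessian_eigenvalues(img)
line_response = np.abs(lam2) - np.abs(lam1)   # simple line-likeness measure
```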

  15. Masking Strategies for Image Manifolds.

    PubMed

    Dadkhahi, Hamid; Duarte, Marco F

    2016-07-07

    We consider the problem of selecting an optimal mask for an image manifold, i.e., choosing a subset of the pixels of the image that preserves the manifold's geometric structure present in the original data. Such masking implements a form of compressive sensing through emerging imaging sensor platforms for which the power expense grows with the number of pixels acquired. Our goal is for the manifold learned from masked images to resemble its full image counterpart as closely as possible. More precisely, we show that one can indeed accurately learn an image manifold without having to consider a large majority of the image pixels. In doing so, we consider two masking methods that preserve the local and global geometric structure of the manifold, respectively. In each case, the process of finding the optimal masking pattern can be cast as a binary integer program, which is computationally expensive but can be approximated by a fast greedy algorithm. Numerical experiments show that the relevant manifold structure is preserved through the data-dependent masking process, even for modest mask sizes.

  16. Structure-Preserving Smoothing of Biomedical Images

    NASA Astrophysics Data System (ADS)

    Gil, Debora; Hernàndez-Sabaté, Aura; Burnat, Mireia; Jansen, Steven; Martínez-Villalta, Jordi

    Smoothing of biomedical images should preserve gray-level transitions between adjacent tissues, while restoring contours consistent with anatomical structures. Anisotropic diffusion operators are based on image appearance discontinuities (either local or contextual) and might fail at weak inter-tissue transitions. Meanwhile, the output of block-wise and morphological operations is prone to present a block structure due to the shape and size of the considered pixel neighborhood.

  17. Generation and optimization of superpixels as image processing kernels for Jones matrix optical coherence tomography

    PubMed Central

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa; Yasuno, Yoshiaki

    2017-01-01

    Jones matrix-based polarization sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of the optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels. However, this results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating the flexible kernels of local statistics. A superpixel is a cluster of image pixels that is formed by the pixels’ spatial and signal value proximities. An algorithm for superpixel generation specialized for JM-OCT and its optimization methods are presented in this paper. The spatial proximity is in two-dimensional cross-sectional space and the signal values are the four optical features. Hence, the superpixel method is a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and its optimization methods are evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to well preserve tissue structures, such as layer structures, sclera, vessels, and retinal pigment epithelium. And hence, they are more suitable for local statistics kernels than conventional uniform rectangular kernels. PMID:29082073
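
    The clustering idea can be sketched with plain k-means in the 6-D space of scaled spatial coordinates plus the four optical features. This is only a stand-in for the paper's optimized superpixel algorithm; the spatial weighting, initialization, and feature normalization below are assumptions.

```python
import numpy as np

def jones_superpixels(features, n_superpixels=100, spatial_weight=0.5,
                      n_iter=10, rng=None):
    """SLIC-style sketch of 6-D superpixel clustering for JM-OCT: pixels are
    clustered on (row, col) position plus four optical features (e.g. intensity,
    OCT-A, birefringence, DOPU), here with plain k-means iterations rather than
    the paper's exact algorithm. `features` has shape (H, W, 4)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = features.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # 6-D feature space: scaled spatial coordinates + normalized optical features.
    f = features.reshape(-1, 4)
    f = (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-12)
    xy = np.stack([ys.ravel() / h, xs.ravel() / w], axis=1) * spatial_weight
    data = np.hstack([xy, f])                                   # (H*W, 6)

    centers = data[rng.choice(len(data), n_superpixels, replace=False)]
    for _ in range(n_iter):
        # assign every pixel to its nearest cluster center in the 6-D space
        d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for k in range(n_superpixels):
            members = data[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return labels.reshape(h, w)

rng = np.random.default_rng(0)
feats = rng.random((40, 40, 4))            # stand-in for the four JM-OCT features
labels = jones_superpixels(feats, n_superpixels=20, rng=rng)
```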

  18. SU-E-J-114: A Practical Hybrid Method for Improving the Quality of CT-CBCT Deformable Image Registration for Head and Neck Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C; Kumarasiri, A; Chetvertkov, M

    2015-06-15

    Purpose: Accurate deformable image registration (DIR) between CT and CBCT in H&N is challenging. In this study, we propose a practical hybrid method that uses not only the pixel intensities but also organ physical properties, structure volume of interest (VOI), and interactive local registrations. Methods: Five oropharyngeal cancer patients were selected retrospectively. For each patient, the planning CT was registered to the last-fraction CBCT, where the anatomy difference was largest. A three-step registration strategy was tested: Step 1) DIR using pixel intensity only, Step 2) DIR with additional use of a structure VOI and rigidity penalty, and Step 3) interactive local correction. For Step 1, a public-domain open-source DIR algorithm was used (cubic B-spline, mutual information, steepest gradient optimization, and 4-level multi-resolution). For Step 2, a rigidity penalty was applied on bony anatomies and the brain, and a structure VOI was used to handle body truncation such as the shoulder cut-off on CBCT. Finally, in Step 3, the registrations were reviewed in our in-house developed software and the erroneous areas were corrected via a local registration using a level-set motion algorithm. Results: After Step 1, there was a considerable amount of registration error in soft tissues and unrealistic stretching posterior to the neck and near the shoulder due to body truncation. The brain was also found deformed to a measurable extent near the superior border of the CBCT. Such errors could be effectively removed by using a structure VOI and rigidity penalty. The rest of the local soft tissue error could be corrected using the interactive software tool. The estimated interactive correction time was approximately 5 minutes. Conclusion: The DIR using only the image pixel intensity was vulnerable to noise and body truncation. A corrective action was inevitable to achieve good quality of registration. We found the proposed three-step hybrid method efficient and practical for CT/CBCT registrations in H&N. My department receives grant support from Industrial partners: (a) Varian Medical Systems, Palo Alto, CA, and (b) Philips HealthCare, Best, Netherlands.

  19. Fluorescence XAS using Ge PAD: Application to High-Temperature Superconducting Thin Film Single Crystals

    NASA Astrophysics Data System (ADS)

    Oyanagi, H.; Tsukada, A.; Naito, M.; Saini, N. L.; Zhang, C.

    2007-02-01

    A Ge pixel array detector (PAD) with 100 segments was used in fluorescence x-ray absorption spectroscopy (XAS) study, probing local structure of high temperature superconducting thin film single crystals. Independent monitoring of individual pixel outputs allows real-time inspection of interference of substrates which has long been a major source of systematic error. By optimizing grazing-incidence angle and azimuthal orientation, smooth extended x-ray absorption fine structure (EXAFS) oscillations were obtained, demonstrating that strain effects can be studied using high-quality data for thin film single crystals grown by molecular beam epitaxy (MBE). The results of (La,Sr)2CuO4 thin film single crystals under strain are related to the strain dependence of the critical temperature of superconductivity.

  20. Patch-based automatic retinal vessel segmentation in global and local structural context.

    PubMed

    Cao, Shuoying; Bharath, Anil A; Parker, Kim H; Ng, Jeffrey

    2012-01-01

    In this paper, we extend our published work [1] and propose an automated system to segment retinal vessel bed in digital fundus images with enough adaptability to analyze images from fluorescein angiography. This approach takes into account both the global and local context and enables both vessel segmentation and microvascular centreline extraction. These tools should allow researchers and clinicians to estimate and assess vessel diameter, capillary blood volume and microvascular topology for early stage disease detection, monitoring and treatment. Global vessel bed segmentation is achieved by combining phase-invariant orientation fields with neighbourhood pixel intensities in a patch-based feature vector for supervised learning. This approach is evaluated against benchmarks on the DRIVE database [2]. Local microvascular centrelines within Regions-of-Interest (ROIs) are segmented by linking the phase-invariant orientation measures with phase-selective local structure features. Our global and local structural segmentation can be used to assess both pathological structural alterations and microemboli occurrence in non-invasive clinical settings in a longitudinal study.

  1. Novel Hyperspectral Anomaly Detection Methods Based on Unsupervised Nearest Regularized Subspace

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Chen, Y.; Tan, K.; Du, P.

    2018-04-01

    Anomaly detection has been of great interest in hyperspectral imagery analysis. Most conventional anomaly detectors merely take advantage of spectral and spatial information within neighboring pixels. In this paper, two methods, the Unsupervised Nearest Regularized Subspace-based with Outlier Removal Anomaly Detector (UNRSORAD) and the Local Summation UNRSORAD (LSUNRSORAD), are proposed, which are based on the concept that each pixel in the background can be approximately represented by its spatial neighborhood, while anomalies cannot. Using a dual window, an approximation of each testing pixel is a representation of surrounding data via a linear combination. The existence of outliers in the dual window will affect detection accuracy. The proposed detectors remove outlier pixels that are significantly different from the majority of pixels. In order to make full use of the various local spatial distribution information in the neighborhood of the pixel under test, we adopt the local summation dual-window sliding strategy. The residual image is constituted by subtracting the predicted background from the original hyperspectral imagery, and anomalies can be detected in the residual image. Experimental results show that the proposed methods greatly improve the detection accuracy compared with other traditional detection methods.

  2. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing

    PubMed Central

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-01-01

    Remote sensing technologies have been widely applied in urban environments’ monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the “salt and pepper” phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive. PMID:28604641

  3. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing.

    PubMed

    Liu, Jiayin; Tang, Zhenmin; Cui, Ying; Wu, Guoxing

    2017-06-12

    Remote sensing technologies have been widely applied in urban environments' monitoring, synthesis and modeling. Incorporating spatial information in perceptually coherent regions, superpixel-based approaches can effectively eliminate the "salt and pepper" phenomenon which is common in pixel-wise approaches. Compared with fixed-size windows, superpixels have adaptive sizes and shapes for different spatial structures. Moreover, superpixel-based algorithms can significantly improve computational efficiency owing to the greatly reduced number of image primitives. Hence, the superpixel algorithm, as a preprocessing technique, is more and more popularly used in remote sensing and many other fields. In this paper, we propose a superpixel segmentation algorithm called Superpixel Segmentation with Local Competition (SSLC), which utilizes a local competition mechanism to construct energy terms and label pixels. The local competition mechanism leads to energy terms locality and relativity, and thus, the proposed algorithm is less sensitive to the diversity of image content and scene layout. Consequently, SSLC could achieve consistent performance in different image regions. In addition, the Probability Density Function (PDF), which is estimated by Kernel Density Estimation (KDE) with the Gaussian kernel, is introduced to describe the color distribution of superpixels as a more sophisticated and accurate measure. To reduce computational complexity, a boundary optimization framework is introduced to only handle boundary pixels instead of the whole image. We conduct experiments to benchmark the proposed algorithm with the other state-of-the-art ones on the Berkeley Segmentation Dataset (BSD) and remote sensing images. Results demonstrate that the SSLC algorithm yields the best overall performance, while the computation time-efficiency is still competitive.

  4. Automated retinal nerve fiber layer defect detection using fundus imaging in glaucoma.

    PubMed

    Panda, Rashmi; Puhan, N B; Rao, Aparna; Padhy, Debananda; Panda, Ganapati

    2018-06-01

    Retinal nerve fiber layer defect (RNFLD) provides early objective evidence of structural changes in glaucoma. RNFLD detection is currently carried out using imaging modalities like OCT and GDx, which are expensive for routine practice. In this regard, we propose a novel automatic method for RNFLD detection and angular width quantification using cost-effective red-free fundus images, to be practically useful for computer-assisted glaucoma risk assessment. After blood vessel inpainting and CLAHE-based contrast enhancement, the initial boundary pixels are identified by local minima analysis of the 1-D intensity profiles on concentric circles. The true boundary pixels are classified using a random forest trained with the newly proposed cumulative zero count local binary pattern (CZC-LBP) and directional differential energy (DDE) along with Shannon entropy, Tsallis entropy and intensity features. Finally, the RNFLD angular width is obtained by random sample consensus (RANSAC) line fitting on the detected set of boundary pixels. The proposed method is found to achieve high RNFLD detection performance on a newly created dataset, with a sensitivity (SN) of 0.7821 at 0.2727 false positives per image (FPI) and an area under the curve (AUC) of 0.8733. Copyright © 2018 Elsevier Ltd. All rights reserved.
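
    The final RANSAC line-fitting step is generic enough to sketch directly: repeatedly fit a line through two random boundary pixels and keep the model with the most inliers. The tolerance and iteration count below are illustrative, and the CZC-LBP/DDE feature extraction is not shown.

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=2.0, rng=None):
    """Plain RANSAC line fit to a set of (x, y) boundary pixels: repeatedly fit a
    line through two random points and keep the model with the most inliers.
    Returns the line as (point, unit direction) and the inlier mask."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        d = d / norm
        # perpendicular distance of every point to the candidate line
        diff = points - p
        dist = np.abs(diff[:, 0] * d[1] - diff[:, 1] * d[0])
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p, d)
    return best_model, best_inliers

rng = np.random.default_rng(0)
t = np.linspace(0, 50, 60)
pts = np.stack([t, 0.5 * t + 3], axis=1) + rng.normal(0, 0.5, (60, 2))
pts[::10] += rng.normal(0, 15, (6, 2))              # a few gross outliers
(model_p, model_d), inliers = ransac_line(pts)
```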

  5. A neighbor pixel communication filtering structure for Dynamic Vision Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Liu, Shiqi; Lu, Hehui; Zhang, Zilong

    2017-02-01

    For Dynamic Vision Sensors (DVS), Background Activity (BA) induced by thermal noise and junction leakage current is the major cause of the deterioration of image quality. Inspired by the smoothing filtering principle of horizontal cells in the vertebrate retina, a DVS pixel with a Neighbor Pixel Communication (NPC) filtering structure is proposed to solve this issue. The NPC structure is designed to judge the validity of a pixel's activity through communication with its 4 adjacent pixels. The pixel's outputs will be suppressed if its activities are determined not to be real. The proposed pixel's area is 23.76 × 24.71 μm², and only 3 ns of output latency is introduced. In order to validate the effectiveness of the structure, a 5 × 5 pixel array has been implemented in a SMIC 0.13 μm CIS process. Three test cases of the array's behavioral model show that the NPC-DVS has the ability to filter the BA.
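
    A purely behavioral software model of the neighbor-communication idea is sketched below: an event is kept only when one of its 4 adjacent pixels has fired within a recent time window, so isolated background-activity events are suppressed. The time window and the exact validity rule are assumptions; the actual design operates at the circuit level in hardware.

```python
import numpy as np

def npc_filter(events, height, width, window=10e-3):
    """Behavioral sketch of neighbor-pixel-communication filtering: a DVS event is
    kept only if at least one of its 4 adjacent pixels fired within a recent time
    window, which suppresses isolated background-activity events.
    `events` is an iterable of (timestamp, row, col) tuples sorted by timestamp."""
    last_event = np.full((height, width), -np.inf)       # last event time per pixel
    kept = []
    for t, r, c in events:
        neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        supported = any(
            0 <= rr < height and 0 <= cc < width and t - last_event[rr, cc] <= window
            for rr, cc in neighbors
        )
        if supported:
            kept.append((t, r, c))
        last_event[r, c] = t
    return kept

# toy stream: two correlated events at adjacent pixels plus one isolated noise event
stream = [(0.000, 5, 5), (0.002, 5, 6), (0.050, 20, 20)]
print(npc_filter(stream, 32, 32))    # only the second, supported event survives
```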

  6. A novel fusion method of improved adaptive LTP and two-directional two-dimensional PCA for face feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Wang, Bo-yu; Zhang, Yi; Zhao, Li-ming

    2018-03-01

    In this paper, addressing the fact that under different illuminations and random noise the local texture features of a face image cannot be completely described because the threshold of the local ternary pattern (LTP) cannot be calculated adaptively, a local three-value model, the improved adaptive local ternary pattern (IALTP), is proposed. Firstly, the difference function between the center pixel and the neighborhood pixel weight is established to obtain the statistical characteristics of the central pixel and the neighborhood pixels. Secondly, an adaptive gradient-descent iterative function is established to calculate the difference coefficient, which is defined to be the threshold of the IALTP operator. Finally, the mean and standard deviation of the pixel weights of the local region are used as the coding mode of IALTP. In order to reflect the overall properties of the face and reduce the dimension of the features, two-directional two-dimensional PCA ((2D)2PCA) is adopted. The IALTP is used to extract local texture features of the eye and mouth areas. After combining the global features and local features, the fusion features (IALTP+) are obtained. The experimental results on the Extended Yale B and AR standard face databases indicate that under different illuminations and random noise the algorithm proposed in this paper is more robust than others, and the feature dimension is smaller. The shortest running time reaches 0.3296 s, and the highest recognition rate reaches 97.39%.

  7. A Data Field Method for Urban Remotely Sensed Imagery Classification Considering Spatial Correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Qin, K.; Zeng, C.; Zhang, E. B.; Yue, M. X.; Tong, X.

    2016-06-01

    Spatial correlation between pixels is important information for remotely sensed imagery classification. Data field method and spatial autocorrelation statistics have been utilized to describe and model spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey level change between the central pixel and the neighbourhood pixels results in exaggerating the contribution of the central pixel to the whole local window. Besides, Geary's C has also been proven to well characterise and quantify the spatial correlation between each pixel and its neighbourhood pixels. But the extracted object is badly delineated with the distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel to compute statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multi-features (e.g. the spectral feature and spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes inside. The results show that the developed method outperforms the traditional method in terms of classification accuracies.

  8. Thermal wake/vessel detection technique

    DOEpatents

    Roskovensky, John K [Albuquerque, NM; Nandy, Prabal [Albuquerque, NM; Post, Brian N [Albuquerque, NM

    2012-01-10

    A computer-automated method for detecting a vessel in water based on an image of a portion of Earth includes generating a thermal anomaly mask. The thermal anomaly mask flags each pixel of the image initially deemed to be a wake pixel based on a comparison of a thermal value of each pixel against other thermal values of other pixels localized about each pixel. Contiguous pixels flagged by the thermal anomaly mask are grouped into pixel clusters. A shape of each of the pixel clusters is analyzed to determine whether each of the pixel clusters represents a possible vessel detection event. The possible vessel detection events are represented visually within the image.
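
    The sketch below follows the logic of the claim in simplified form: flag pixels whose thermal value deviates from local neighborhood statistics, group contiguous flagged pixels into clusters, and score each cluster's shape by its elongation. The window size, threshold `k`, and elongation measure are illustrative, not the patent's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def wake_candidates(thermal, window=15, k=3.0, min_pixels=10):
    """Simplified thermal-anomaly sketch: flag pixels deviating from local
    statistics, group contiguous flagged pixels, and return per-cluster
    elongation as a crude shape score."""
    img = thermal.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    mask = np.abs(img - mean) > k * (std + 1e-6)          # thermal anomaly mask

    labels, n = label(mask)                                # contiguous pixel clusters
    clusters = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if len(ys) < min_pixels:
            continue
        # elongation from the eigenvalues of the coordinate covariance
        cov = np.cov(np.stack([ys, xs]))
        evals = np.sort(np.linalg.eigvalsh(cov))
        elongation = np.sqrt(evals[1] / (evals[0] + 1e-6))
        clusters.append((i, len(ys), elongation))
    return mask, clusters

rng = np.random.default_rng(0)
sea = rng.normal(20.0, 0.05, (128, 128))
sea[60, 30:90] += 1.0                                      # synthetic linear wake
mask, clusters = wake_candidates(sea)
print(clusters)                                            # elongated cluster flagged
```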

  9. Method and apparatus for detecting a desired behavior in digital image data

    DOEpatents

    Kegelmeyer, Jr., W. Philip

    1997-01-01

    A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.

  10. Method and apparatus for detecting a desired behavior in digital image data

    DOEpatents

    Kegelmeyer, Jr., W. Philip

    1997-01-01

    A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.

  11. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons which are locally connected with their local neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip of active dimensions 1380 micron x 746 micron in a 2-micron CMOS technology.

  12. Variable waveband infrared imager

    DOEpatents

    Hunter, Scott R.

    2013-06-11

    A waveband imager includes an imaging pixel that utilizes photon tunneling with a thermally actuated bimorph structure to convert infrared radiation to visible radiation. Infrared radiation passes through a transparent substrate and is absorbed by a bimorph structure formed with a pixel plate. The absorption generates heat which deflects the bimorph structure and pixel plate towards the substrate and into an evanescent electric field generated by light propagating through the substrate. Penetration of the bimorph structure and pixel plate into the evanescent electric field allows a portion of the visible wavelengths propagating through the substrate to tunnel through the substrate, bimorph structure, and/or pixel plate as visible radiation that is proportional to the intensity of the incident infrared radiation. This converted visible radiation may be superimposed over visible wavelengths passed through the imaging pixel.

  13. X-ray characterization of a multichannel smart-pixel array detector.

    PubMed

    Ross, Steve; Haji-Sheikh, Michael; Huntington, Andrew; Kline, David; Lee, Adam; Li, Yuelin; Rhee, Jehyuk; Tarpley, Mary; Walko, Donald A; Westberg, Gregg; Williams, George; Zou, Haifeng; Landahl, Eric

    2016-01-01

    The Voxtel VX-798 is a prototype X-ray pixel array detector (PAD) featuring a silicon sensor photodiode array of 48 × 48 pixels, each 130 µm × 130 µm × 520 µm thick, coupled to a CMOS readout application specific integrated circuit (ASIC). The first synchrotron X-ray characterization of this detector is presented, and its ability to selectively count individual X-rays within two independent arrival time windows, a programmable energy range, and localized to a single pixel is demonstrated. During our first trial run at Argonne National Laboratory's Advanced Photon Source, the detector achieved a 60 ns gating time and 700 eV full width at half-maximum energy resolution in agreement with design parameters. Each pixel of the PAD holds two independent digital counters, and the discriminator for X-ray energy features both an upper and lower threshold to window the energy of interest discarding unwanted background. This smart-pixel technology allows energy and time resolution to be set and optimized in software. It is found that the detector linearity follows an isolated dead-time model, implying that megahertz count rates should be possible in each pixel. Measurement of the line and point spread functions showed negligible spatial blurring. When combined with the timing structure of the synchrotron storage ring, it is demonstrated that the area detector can perform both picosecond time-resolved X-ray diffraction and fluorescence spectroscopy measurements.
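
    The isolated dead-time model mentioned above has a simple closed form, sketched below: with true rate n and dead time tau, the measured rate is m = n / (1 + n tau), which can be inverted to correct observed count rates. The dead-time value used here is only illustrative (the 60 ns figure in the abstract is the gating time, not necessarily the per-pixel dead time).

```python
# Minimal sketch of the isolated (non-paralyzable) dead-time model referred to in
# the abstract: with true rate n and per-pixel dead time tau, the measured rate is
# m = n / (1 + n * tau), which can be inverted to correct observed counts.
def measured_rate(true_rate_hz: float, dead_time_s: float) -> float:
    return true_rate_hz / (1.0 + true_rate_hz * dead_time_s)

def corrected_rate(measured_rate_hz: float, dead_time_s: float) -> float:
    return measured_rate_hz / (1.0 - measured_rate_hz * dead_time_s)

tau = 60e-9                     # illustrative dead time, on the order of the 60 ns gate
for n in (1e5, 1e6, 5e6):       # assumed true photon rates per pixel (Hz)
    m = measured_rate(n, tau)
    print(f"true {n:.0e} Hz -> measured {m:.3e} Hz -> corrected {corrected_rate(m, tau):.3e} Hz")
```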

  14. Humans make efficient use of natural image statistics when performing spatial interpolation.

    PubMed

    D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

    2013-12-16

    Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Steve; Haji-Sheikh, Michael; Huntington, Andrew

    The Voxtel VX-798 is a prototype X-ray pixel array detector (PAD) featuring a silicon sensor photodiode array of 48 × 48 pixels, each 130 µm × 130 µm × 520 µm thick, coupled to a CMOS readout application specific integrated circuit (ASIC). The first synchrotron X-ray characterization of this detector is presented, and its ability to selectively count individual X-rays within two independent arrival time windows, a programmable energy range, and localized to a single pixel is demonstrated. During our first trial run at Argonne National Laboratory's Advanced Photon Source, the detector achieved a 60 ns gating time and 700 eV full width at half-maximum energy resolution in agreement with design parameters. Each pixel of the PAD holds two independent digital counters, and the discriminator for X-ray energy features both an upper and lower threshold to window the energy of interest discarding unwanted background. This smart-pixel technology allows energy and time resolution to be set and optimized in software. It is found that the detector linearity follows an isolated dead-time model, implying that megahertz count rates should be possible in each pixel. Measurement of the line and point spread functions showed negligible spatial blurring. When combined with the timing structure of the synchrotron storage ring, it is demonstrated that the area detector can perform both picosecond time-resolved X-ray diffraction and fluorescence spectroscopy measurements.

  16. RGB-D depth-map restoration using smooth depth neighborhood supports

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Xue, Haoyang; Yu, Zhongjie; Wu, Qiang; Yang, Jie

    2015-05-01

    A method to restore the depth map of an RGB-D image using smooth depth neighborhood (SDN) supports is presented. The SDN supports are computed based on the corresponding color image of the depth map. Compared with the most widely used square supports, the proposed SDN supports can well-capture the local structure of the object. Only pixels with similar depth values are allowed to be included in the support. We combine our SDN supports with the joint bilateral filter (JBF) to form the SDN-JBF and use it to restore depth maps. Experimental results show that our SDN-JBF can not only rectify the misaligned depth pixels but also preserve sharp depth discontinuities.
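
    A simplified stand-in for the SDN-JBF idea is sketched below: one missing depth pixel is restored by a joint bilateral filter whose range weight comes from the registered color/intensity guide image and whose support is restricted to neighbors with similar depth. The median-based depth reference, tolerances, and NaN convention for invalid depths are assumptions, not the authors' exact SDN construction.

```python
import numpy as np

def sdn_joint_bilateral(depth, guide, row, col, radius=5,
                        sigma_s=3.0, sigma_r=0.1, depth_tol=0.05):
    """Restore one depth pixel with a joint bilateral filter whose support is
    restricted to neighbors of similar depth (a simplified stand-in for the SDN
    support). `guide` is the registered color/intensity image; invalid depths
    are marked with NaN."""
    h, w = depth.shape
    r0, r1 = max(0, row - radius), min(h, row + radius + 1)
    c0, c1 = max(0, col - radius), min(w, col + radius + 1)
    d_win = depth[r0:r1, c0:c1]
    g_win = guide[r0:r1, c0:c1]
    yy, xx = np.mgrid[r0:r1, c0:c1]

    valid = ~np.isnan(d_win)
    if not valid.any():
        return np.nan
    ref_depth = np.nanmedian(d_win)                    # rough local depth estimate
    support = valid & (np.abs(d_win - ref_depth) < depth_tol)   # SDN-like support

    w_s = np.exp(-((yy - row) ** 2 + (xx - col) ** 2) / (2 * sigma_s ** 2))
    w_r = np.exp(-((g_win - guide[row, col]) ** 2) / (2 * sigma_r ** 2))
    wgt = w_s * w_r * support
    if wgt.sum() == 0:
        return ref_depth
    return float(np.sum(wgt * np.nan_to_num(d_win)) / wgt.sum())

rng = np.random.default_rng(0)
guide = rng.random((32, 32))
depth = 1.0 + 0.01 * guide
depth[16, 16] = np.nan                                  # a hole to restore
print(sdn_joint_bilateral(depth, guide, 16, 16))
```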

  17. An enhanced structure tensor method for sea ice ridge detection from GF-3 SAR imagery

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Li, F.; Zhang, Y.; Zhang, S.; Spreen, G.; Dierking, W.; Heygster, G.

    2017-12-01

    In SAR imagery, ridges and leads appear as curvilinear features, and the proposed ridge detection method exploits these curvilinear shapes: bright curvilinear features are recognized as ridges, while dark curvilinear features are classified as leads. In the dual-polarization HH or HV channels of C-band SAR imagery, a bright curvilinear feature may be a false alarm, because frost flowers on young leads may appear as bright pixels associated with changes in surface salinity under calm surface conditions. Wind-roughened leads also increase the backscatter, which can be misclassified as ridges [1]. Thus a width limitation is considered in the proposed structure tensor method [2], since a method based on shape features alone is not enough for detecting ridges. The ridge detection algorithm is based on the hypothesis that the bright pixels are ridges with curvilinear shapes and that the ridge width is less than 30 meters. Benefiting from GF-3's high spatial resolution of 3 meters, we provide an enhanced structure tensor method for detecting significant ridges. The preprocessing procedures, including calibration and incidence angle normalization, are also investigated. The bright pixels have a strong response to the bandpass filtering. Ridge training samples are delineated from the SAR imagery, and Log-Gabor filter responses are used to construct the structure tensor. From the tensor, the dominant orientation of a pixel representing a ridge is determined by the dominant eigenvector. For the post-processing of the structure tensor, an elongated kernel is desired to enhance the curvilinear ridge shape. Since a ridge extends along a certain direction, the eigenvalue ratio of the structure tensor is used to measure the intensity of local anisotropy. A convolution filter applied to the constructed structure tensor is used to model spatial contextual information. Ridge detection results from GF-3 show that the proposed method performs better compared to a direct threshold method.
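
    The core structure-tensor computation referred to above is standard and is sketched below: smoothed products of image gradients give a 2x2 tensor per pixel, whose eigenvalues yield a coherence (anisotropy) measure and whose angles give the dominant orientation. The Log-Gabor filtering, elongated post-processing kernel, and width test of the proposed method are not included; filter choices and thresholds are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_ridges(image, sigma=2.0):
    """Structure-tensor sketch: smooth products of image gradients, take the
    per-pixel eigenvalues, and use their normalized difference (coherence) as a
    measure of local anisotropy, plus the dominant orientation. Thresholds on
    brightness, coherence, and width would follow in a full ridge detector."""
    img = image.astype(float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)

    tmp = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    lam1 = 0.5 * (jxx + jyy + tmp)                     # dominant eigenvalue
    lam2 = 0.5 * (jxx + jyy - tmp)
    coherence = (lam1 - lam2) / (lam1 + lam2 + 1e-12)  # 0 = isotropic, 1 = line-like
    orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy) # dominant gradient orientation
    return coherence, orientation

rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[30:32, :] += 2.0                                   # synthetic bright ridge
coh, ori = structure_tensor_ridges(img)
```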

  18. Geological History of the Tyre Region of Europa: A Regional Perspective on Europan Surface Features and Ice Thickness

    NASA Technical Reports Server (NTRS)

    Kadel, Steven D.; Chuang, Frank C.; Greeley, Ronald; Moore, Jeffrey M.

    2000-01-01

    Galileo images of the Tyre Macula region of Europa at regional (170 m/pixel) and local (approx. 40 m/pixel) scales allow mapping and understanding of surface processes and landforms. Ridged plains, doublet and complex ridges, shallow pits, domes, "chaos" areas, impact structures, tilted blocks and massifs, and young fracture systems indicate a complex history of surface deformation on Europa. Regional and local morphologies of the Tyre region of Europa suggest that an impactor penetrated through several kilometers of water ice to a mobile layer below. The surface morphology was initially dominated by formation of ridged plains, followed by development of ridge bands and doublet ridges, with chaos and fracture formation dominating the latter part of the geologic history of the Tyre region. Two distinct types of chaos have been identified which, along with upwarped dome materials, appear to represent a continuum of features (domes to platy chaos to knobby chaos) resulting from an increasing degree of surface disruption associated with local lithospheric heating and thinning. Local and regional stratigraphic relationships, block heights, and the morphology of the Tyre impact structure suggest the presence of low-viscosity ice or liquid water beneath a thin (several kilometers) surface ice shell at the time of the impact. The very low impact crater density on the surface of Europa suggests that this thin shell has either formed or been thoroughly resurfaced in the very recent past.

  19. Probing Cytoskeletal Structures by Coupling Optical Superresolution and AFM Techniques for a Correlative Approach

    PubMed Central

    Chacko, Jenu Varghese; Zanacchi, Francesca Cella; Diaspro, Alberto

    2013-01-01

    In this article, we describe and show the application of some of the most advanced fluorescence superresolution techniques, STED-AFM and STORM-AFM microscopy, to the imaging of cytoskeletal structures such as microtubule filaments. Mechanical and structural properties can play a relevant role in the investigation of cytoskeletal structures of interest, such as microtubules, that provide support to the cell structure. In fact, mechanical properties such as local stiffness and elasticity can be investigated by AFM force spectroscopy with a resolution of tens of nanometers. Force curves can be analyzed to obtain the local elasticity (with the Young's modulus calculated by fitting the force curves from every pixel of interest), and the combination with STED/STORM microscopy integrates the measurement with high specificity and yields superresolution structural information. This hybrid superresolution-AFM modality is a clear example of correlative multimodal microscopy. PMID:24027190

  20. CMOS Active Pixel Sensor Star Tracker with Regional Electronic Shutter

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly; Pain, Bedabrata; Staller, Craig; Clark, Christopher; Fossum, Eric

    1996-01-01

    The guidance system in a spacecraft determines spacecraft attitude by matching an observed star field to a star catalog....An APS (active pixel sensor)-based system can reduce mass and power consumption and radiation effects compared to a CCD (charge-coupled device)-based system...This paper reports an APS (active pixel sensor) with locally variable times, achieved through individual pixel reset (IPR).

  1. Characterization of pixel sensor designed in 180 nm SOI CMOS technology

    NASA Astrophysics Data System (ADS)

    Benka, T.; Havranek, M.; Hejtmanek, M.; Jakovenko, J.; Janoska, Z.; Marcisovska, M.; Marcisovsky, M.; Neue, G.; Tomasek, L.; Vrba, V.

    2018-01-01

    A new type of X-ray imaging Monolithic Active Pixel Sensor (MAPS), X-CHIP-02, was developed using a 180 nm deep submicron Silicon On Insulator (SOI) CMOS commercial technology. Two pixel matrices were integrated into the prototype chip, which differ by the pixel pitch of 50 μm and 100 μm. The X-CHIP-02 contains several test structures, which are useful for characterization of individual blocks. The sensitive part of the pixel integrated in the handle wafer is one of the key structures designed for testing. The purpose of this structure is to determine the capacitance of the sensitive part (diode in the MAPS pixel). The measured capacitance is 2.9 fF for 50 μm pixel pitch and 4.8 fF for 100 μm pixel pitch at -100 V (default operational voltage). This structure was used to measure the IV characteristics of the sensitive diode. In this work, we report on a circuit designed for precise determination of sensor capacitance and IV characteristics of both pixel types with respect to X-ray irradiation. The motivation for measurement of the sensor capacitance was its importance for the design of front-end amplifier circuits. The design of pixel elements, as well as circuit simulation and laboratory measurement techniques are described. The experimental results are of great importance for further development of MAPS sensors in this technology.

  2. Comparative study of various pixel photodiodes for digital radiography: Junction structure, corner shape and noble window opening

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Uk; Cho, Minsik; Lee, Dae Hee; Yoo, Hyunjun; Kim, Myung Soo; Bae, Jun Hyung; Kim, Hyoungtaek; Kim, Jongyul; Kim, Hyunduk; Cho, Gyuseong

    2012-05-01

    Recently, large-size 3-transistor (3-Tr) active pixel complementary metal-oxide-silicon (CMOS) image sensors have been used for medium-size digital X-ray radiography, such as dental computed tomography (CT), mammography and nondestructive testing (NDT) of consumer products. We designed and fabricated 50 µm × 50 µm 3-Tr test pixels having pixel photodiodes with various structures and shapes by using the TSMC 0.25-µm standard CMOS process to compare their optical characteristics. The pixel photodiode output was continuously sampled while a test pixel was continuously illuminated with 550-nm light at constant intensity. The measurement was repeated 300 times for each test pixel to obtain reliable results on the mean and the variance of the pixel output at each sampling time. The sampling rate was 50 kHz, and the reset period was 200 msec. To estimate the conversion gain, we used the mean-variance method. From the measured results, the n-well/p-substrate photodiode, among the 3 photodiode structures available in a standard CMOS process, showed the best performance at a low illumination equivalent to the typical X-ray signal range. The quantum efficiencies of the n+/p-well, n-well/p-substrate, and n+/p-substrate photodiodes were 18.5%, 62.1%, and 51.5%, respectively. From a comparison of pixels with rounded and rectangular corners, we found that a rounded corner structure could reduce the dark current in large-size pixels. A pixel with four rounded corners showed a dark current reduced by about 200 fA compared to a pixel with four rectangular corners in our pixel sample size. Photodiodes with round p-implant openings showed about 5% higher dark current, but about 34% higher sensitivity, than the conventional photodiodes.
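    The mean-variance (photon transfer) estimate of conversion gain mentioned above can be sketched as follows; the array layout, variable names and the linear shot-noise model with a read-noise intercept are assumptions for illustration, not details taken from the paper.

    ```python
    import numpy as np

    def conversion_gain_mean_variance(samples):
        # samples: array of shape (n_repeats, n_illumination_levels),
        # dark-corrected pixel outputs in DN at several signal levels
        means = samples.mean(axis=0)              # mean signal per level
        variances = samples.var(axis=0, ddof=1)   # temporal variance per level
        # For a shot-noise-limited signal, variance = g * mean + read-noise term,
        # with g the conversion gain in DN per electron; fit a straight line.
        g, offset = np.polyfit(means, variances, 1)
        return g, offset   # g in DN/e-, offset ~ read-noise variance
    ```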

  3. Photon event distribution sampling: an image formation technique for scanning microscopes that permits tracking of sub-diffraction particles with high spatial and temporal resolutions.

    PubMed

    Larkin, J D; Publicover, N G; Sutko, J L

    2011-01-01

    In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
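    A minimal sketch of the image-formation step is shown below, under the simplifying assumption of a fixed localization width: each photon's estimated position of origin contributes a normalized Gaussian density, and the densities are summed on an oversampled grid. In the method described above the width of each density is intensity-related rather than constant; the function name and parameters are illustrative.

    ```python
    import numpy as np

    def peds_image(photon_xy, sigma, shape, oversample=4):
        # photon_xy: iterable of (x, y) photon positions of origin (pixels)
        # sigma: localization uncertainty (pixels); shape: (height, width)
        h, w = shape
        img = np.zeros((h * oversample, w * oversample))
        yy, xx = np.mgrid[0:h * oversample, 0:w * oversample] / oversample
        for x, y in photon_xy:
            # Add one Gaussian probability density per detected photon
            img += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2.0 * sigma ** 2))
        return img / (2.0 * np.pi * sigma ** 2)   # sum of normalized densities
    ```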

  4. 1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors.

    PubMed

    Lu, Guo-Neng; Tournier, Arnaud; Roy, François; Deschamps, Benoît

    2009-01-01

    We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as both the photo-sensing device and the source-follower transistor, and can be controlled to store and evacuate charges. Our investigation of this 1T pixel structure includes modeling to obtain an analytical description of the conversion gain. Model validation has been done by comparing theoretical predictions and experimental results. On the other hand, the 1T pixel structure has been implemented in different configurations, including rectangular-gate and ring-gate designs, and with variations of the oxidation parameters of the fabrication process. The pixel characteristics are presented and discussed.

  5. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

    Water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach to urban areas remains challenging, because urban water bodies are mostly small and spectral confusion between water and the complex features of the urban environment is widespread. The water index (WI) is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing the urban environment at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) to develop an automatic technique for extracting mixed land-water pixels using a water index; (2) to derive the most representative water and land endmembers by utilizing neighboring water pixels and an adaptively, iteratively selected optimal neighboring land pixel, respectively; and (3) to apply a linear unmixing model to estimate the subpixel water fraction. Specifically, to extract land-water pixels automatically, locally weighted scatterplot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then used as a starting point for selecting land-water pixels, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the mixed land-water pixels to estimate the water fraction at the subpixel level. Under the assumption that the endmember signature of a target pixel should be more similar to those of adjacent pixels due to spatial dependence, the water and land endmembers are determined from neighboring pure land or pure water pixels within a given distance. To obtain the most representative endmembers for SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels: according to the spectral similarity within a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas and evaluated using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel and subpixel levels were chosen. The results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision, and that WISMA achieved the best performance in water mapping under a comprehensive analysis of different accuracy metrics (RMSE and SE).
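    The final fraction-estimation step reduces, for two endmembers, to a one-parameter least-squares problem; a minimal sketch is given below, assuming the pixel and endmember spectra are 1-D arrays of band reflectances. The endmember selection, water index and thresholding stages described above are not reproduced, and the function and variable names are illustrative.

    ```python
    import numpy as np

    def water_fraction(pixel, water_em, land_em):
        # Linear mixing model: pixel = f * water_em + (1 - f) * land_em + error
        # Closed-form least-squares solution for the single fraction f,
        # clipped to the physically meaningful range [0, 1].
        d = water_em - land_em
        f = np.dot(pixel - land_em, d) / np.dot(d, d)
        return float(np.clip(f, 0.0, 1.0))
    ```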

  6. Structural colour printing from a reusable generic nanosubstrate masked for the target image

    NASA Astrophysics Data System (ADS)

    Rezaei, M.; Jiang, H.; Kaminska, B.

    2016-02-01

    Structural colour printing has advantages over traditional pigment-based colour printing. However, the high fabrication cost has hindered its applications in printing large-area images because each image requires patterning structural pixels in nanoscale resolution. In this work, we present a novel strategy to print structural colour images from a pixelated substrate which is called a nanosubstrate. The nanosubstrate is fabricated only once using nanofabrication tools and can be reused for printing a large quantity of structural colour images. It contains closely packed arrays of nanostructures from which red, green, blue and infrared structural pixels can be imprinted. To print a target colour image, the nanosubstrate is first covered with a mask layer to block all the structural pixels. The mask layer is subsequently patterned according to the target colour image to make apertures of controllable sizes on top of the wanted primary colour pixels. The masked nanosubstrate is then used as a stamp to imprint the colour image onto a separate substrate surface using nanoimprint lithography. Different visual colours are achieved by properly mixing the red, green and blue primary colours into appropriate ratios controlled by the aperture sizes on the patterned mask layer. Such a strategy significantly reduces the cost and complexity of printing a structural colour image from lengthy nanoscale patterning into high throughput micro-patterning and makes it possible to apply structural colour printing in personalized security features and data storage. In this paper, nanocone array grating pixels were used as the structural pixels and the nanosubstrate contains structures to imprint the nanocone arrays. Laser lithography was implemented to pattern the mask layer with submicron resolution. The optical properties of the nanocone array gratings are studied in detail. Multiple printed structural colour images with embedded covert information are demonstrated.

  7. Bone geometry, structure and mineral distribution using Dual energy X ray Absorptiometry (DXA)

    NASA Technical Reports Server (NTRS)

    Whalen, Robert; Cleek, Tammy

    1993-01-01

    Dual energy x-ray absorptiometry (DXA) is currently the most widely used method of analyzing regional and whole body changes in bone mineral content (BMC) and areal (g/sq cm) bone mineral density (BMD). However, BMC and BMD do not provide direct measures of long bone geometry, structure, or strength nor do regional measurements detect localized changes in other regions of the same bone. The capabilities of DXA can be enhanced significantly by special processing of pixel BMC data which yields cross-sectional geometric and structural information. We have extended this method of analysis in order to develop non-uniform structural beam models of long bones.

  8. Development of n+-in-p planar pixel sensors for extremely high radiation environments, designed to retain high efficiency after irradiation

    NASA Astrophysics Data System (ADS)

    Unno, Y.; Kamada, S.; Yamamura, K.; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Takashima, R.; Tojo, J.; Kono, T.; Hanagaki, K.; Yajima, K.; Yamauchi, Y.; Hirose, M.; Homma, Y.; Jinnouchi, O.; Kimura, K.; Motohashi, K.; Sato, S.; Sawai, H.; Todome, K.; Yamaguchi, D.; Hara, K.; Sato, Kz.; Sato, Kj.; Hagihara, M.; Iwabuchi, S.

    2016-09-01

    We have developed n+-in-p pixel sensors to obtain highly radiation-tolerant sensors for extremely high radiation environments such as those found at the high-luminosity LHC. We have designed novel pixel structures to eliminate the sources of efficiency loss under the bias rails after irradiation by removing the bias rail from the boundary region and routing the bias resistors inside the area of the pixel electrodes. After irradiation by protons to a fluence of approximately 3 × 10^15 n_eq/cm^2, the pixel structure with the polysilicon bias resistor and the bias rails moved far away from the boundary shows an efficiency loss of < 0.5% per pixel at the boundary region, which is as efficient as the pixel structure without a biasing structure. The pixel structure with the bias rails at the boundary and widened p-stops underneath the bias rail also exhibits an improved loss of approximately 1% per pixel at the boundary region. We have elucidated the physical mechanisms behind the efficiency loss under the bias rail with TCAD simulations. The efficiency loss is due to the interplay of the bias rail acting as a charge-collecting electrode with the region of low electric field in the silicon near the surface at the boundary; the region acts as a "shield" for the electrode. After irradiation, the strong applied electric field nearly eliminates this region. The TCAD simulations have shown that a wide p-stop and a large Si-SiO2 interface charge (an inversion layer, specifically) act to shield the weighting potential. The pixel sensor of the old design irradiated by γ-rays to 2.4 MGy is confirmed to exhibit only a slight efficiency loss at the boundary.

  9. Technique for ship/wake detection

    DOEpatents

    Roskovensky, John K [Albuquerque, NM

    2012-05-01

    An automated ship detection technique includes accessing data associated with an image of a portion of Earth. The data includes reflectance values. A first portion of pixels within the image are masked with a cloud and land mask based on spectral flatness of the reflectance values associated with the pixels. A given pixel selected from the first portion of pixels is unmasked when a threshold number of localized pixels surrounding the given pixel are not masked by the cloud and land mask. A spatial variability image is generated based on spatial derivatives of the reflectance values of the pixels which remain unmasked by the cloud and land mask. The spatial variability image is thresholded to identify one or more regions within the image as possible ship detection regions.
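    A minimal sketch of the spatial-variability step is given below, with the gradient magnitude of the unmasked reflectance field standing in for the spatial derivatives of the patent and a user-supplied threshold; the masking and threshold-selection details are assumptions for illustration.

    ```python
    import numpy as np

    def spatial_variability(reflectance, mask, threshold):
        # reflectance: 2-D array of reflectance values
        # mask: boolean array, True where cloud/land masking applies
        r = np.where(mask, np.nan, reflectance.astype(float))
        dy, dx = np.gradient(r)                      # spatial derivatives
        variability = np.hypot(dx, dy)               # spatial variability image
        return np.nan_to_num(variability) > threshold  # candidate ship regions
    ```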

  10. Sub-pixel localization of highways in AVIRIS images

    NASA Technical Reports Server (NTRS)

    Salu, Yehuda

    1995-01-01

    Roads and highways show up clearly in many bands of AVIRIS images. A typical lane in the U.S. is 12 feet wide, and the total width of a four-lane highway, including 18 feet of paved shoulders, is 19.8 m. Such a highway will cover only a portion of any 20x20 m AVIRIS pixel that it traverses; the other portion of these pixels will usually be covered by vegetation. An interesting problem is to precisely determine the location of a highway within the AVIRIS pixels that it traverses. This information may be used for alignment and spatial calibration of AVIRIS images. Also, since the reflection properties of highway surfaces do not change with time, and they can be determined once and for all, such information can be of help in calculating and filtering out the atmospheric noise that contaminates AVIRIS measurements. The purpose of this report is to describe a method for sub-pixel localization of highways.

  11. Asymmetry in the Polar Mesosphere Revealed by the 2012 Venus Transit Aureole

    NASA Astrophysics Data System (ADS)

    Widemann, Thomas; Tanga, P.; Reardon, K. P.; Limaye, S.; Wilson, C.; Vandaele, A.; Wilquet, V.; Mahieux, A.; Robert, S.; Pasachoff, J. M.; Schneider, G.

    2012-10-01

    Close to ingress and egress phases, the fraction of Venus disk projected outside the solar photosphere appears outlined by an irregular thin arc of light called the "aureole." We have shown that the deviation due to refraction and the aureole intensity are related to the local density scale height and the altitude of the refraction layer (Tanga et al. 2012). Since the aureole brightness is the quantity that can be measured during the transit, an appropriate model allows us to determine both parameters. We now compare this model developed for the 2004 data to the first results of 2012 campaign. Ingress pictures of NASA's SDO/HMI observations, OP-OCA/VTE coronagraph observations at Haleakala and Lowell stations, and Dunn/IBIS observations at Sacramento Peak, NM, show latitudinal structure of the aureole during the ingress phase of the Venus transit. For the HMI data, the temporal cadence is 3.75 sec and the pixel scale is 0.5 arcsec/pixel. The polar region, significantly brighter in initial phases due to the larger scale height of the polar mesosphere, appears consistently offset toward morning terminator by about 15 deg. latitude, peaking at 75N at 6:00 local time. This result reflects local latitudinal structure in the polar mesosphere, either in temperature or aerosol altitude distribution. Relation with ESA / Venus Express / SOIR simultaneous measurements and dynamical interpretation will be discussed at the meeting. Tanga et al. 2012, Icarus 218, 207-219

  12. A Preliminary Work on Layout SLAM for Reconstruction of Indoor Corridor Environments

    NASA Astrophysics Data System (ADS)

    Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.

    2017-09-01

    We propose a real-time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption for indoor spaces and uses the detected single-image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real-time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in 3D space are successfully handled by matching vanishing directions of consecutive video frames on the Gaussian sphere. Using the single-image-based indoor layout features to initialize the system permits the proposed method to perform real-time layout estimation and camera localization in indoor corridor areas. For matching layout structural corner points, we adopted features that are invariant under scale, translation, and rotation. We propose a new feature matching cost function that considers both local and global context information. The cost function consists of a unary term, which measures pixel-to-pixel orientation differences of the matched corners, and a binary term, which measures the angle differences between directly connected layout corner features. We performed experiments on real scenes at York University campus buildings and on the available RAWSEEDS dataset. The results show that the proposed method performs robustly while producing very small position and orientation errors.

  13. Adaptive box filters for removal of random noise from digital images

    USGS Publications Warehouse

    Eliason, E.M.; McEwen, A.S.

    1990-01-01

    We have developed adaptive box-filtering algorithms to (1) remove random bit errors (pixel values with no relation to the image scene) and (2) smooth noisy data (pixels related to the image scene but with an additive or multiplicative component of noise). For both procedures, we use the standard deviation (??) of those pixels within a local box surrounding each pixel, hence they are adaptive filters. This technique effectively reduces speckle in radar images without eliminating fine details. -from Authors
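    A minimal sketch of the bit-error-removal filter is shown below: each pixel is compared against the mean of its local box, scaled by the local standard deviation, and replaced when it deviates too far. The window size and the k-sigma rule are illustrative assumptions, not the exact parameters of the published algorithm.

    ```python
    import numpy as np
    from scipy import ndimage

    def adaptive_despike(img, size=5, k=3.0):
        img = img.astype(float)
        mean = ndimage.uniform_filter(img, size)            # local box mean
        sq_mean = ndimage.uniform_filter(img ** 2, size)
        std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))  # local box std
        spikes = np.abs(img - mean) > k * std   # pixels unrelated to the scene
        return np.where(spikes, mean, img)      # replace only flagged pixels
    ```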

  14. Dye and pigment-free structural colors and angle-insensitive spectrum filters

    DOEpatents

    Guo, Lingjie Jay; Hollowell, Andrew E.; Wu, Yi-Kuei

    2017-01-17

    Optical spectrum filtering devices displaying minimal angle dependence or angle insensitivity are provided. The filter comprises a localized plasmonic nanoresonator assembly having a metal material layer defining at least one nanogroove and a dielectric material disposed adjacent to the metal material layer. The dielectric material is disposed within the nanogroove(s). The localized plasmonic nanoresonator assembly is configured to funnel and absorb a portion of an electromagnetic spectrum in the at least one nanogroove via localized plasmonic resonance to generate a filtered output having a predetermined range of wavelengths that displays angle insensitivity. Thus, flexible, high efficiency angle independent color filters having very small diffraction limits are provided that are particularly suitable for use as pixels for various display devices or for use in anti-counterfeiting and cryptography applications. The structures can also be used for colored print applications and the elements can be rendered as pigment-like particles.

  15. Pixel structures to compensate nonuniform threshold voltage and mobility of polycrystalline silicon thin-film transistors using subthreshold current for large-size active matrix organic light-emitting diode displays

    NASA Astrophysics Data System (ADS)

    Na, Jun-Seok; Kwon, Oh-Kyong

    2014-01-01

    We propose pixel structures for large-size and high-resolution active matrix organic light-emitting diode (AMOLED) displays using a polycrystalline silicon (poly-Si) thin-film transistor (TFT) backplane. The proposed pixel structures compensate the variations of the threshold voltage and mobility of the driving TFT using the subthreshold current. The simulated results show that the emission current error of the proposed pixel structure B ranges from -2.25 to 2.02 least significant bit (LSB) when the variations of the threshold voltage and mobility of the driving TFT are ±0.5 V and ±10%, respectively.

  16. Superpixel-based graph cuts for accurate stereo matching

    NASA Astrophysics Data System (ADS)

    Feng, Liting; Qin, Kaihuai

    2017-06-01

    Estimating the surface normal vector and disparity of a pixel simultaneously, also known as the three-dimensional label method, has been widely used in recent continuous stereo matching problems to achieve sub-pixel accuracy. However, due to the infinite label space, it is extremely hard to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating PatchMatch with graph cuts, to approach this critical computational problem. Besides, to obtain a robust and precise matching cost, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF-related methods, our method has several advantages: its sub-modular property ensures sub-problem optimality, which is easy to exploit in parallel; graph cuts can simultaneously update multiple pixels, avoiding local minima caused by sequential optimizers like belief propagation; it uses segmentation results for better local expansion moves; and local propagation and randomization can easily generate the initial solution without using external methods. Middlebury experiments show that our method achieves higher accuracy than other MRF-based algorithms.

  17. Framework for Detection and Localization of Extreme Climate Event with Pixel Recursive Super Resolution

    NASA Astrophysics Data System (ADS)

    Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.

    2017-12-01

    Deep learning techniques have been successfully applied to solve many problems in climate and geoscience using massive-scale observed and modeled data. For extreme climate event detection, several models based on deep neural networks have recently been proposed and attain superior performance that overshadows all previous handcrafted, expert-based methods. The issue arising, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two deep neural network models: (1) Convolutional Neural Networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high-resolution climate data from low-resolution climate data. Based on our preliminary work, we present two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model reconstructs the resolution of the input to the localization CNNs. We present the best-performing network using the pixel recursive super resolution model, which synthesizes details of tropical cyclones in ground truth data while enhancing their resolution. Therefore, this approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process used to increase the resolution of the data.

  18. Reflective coherent spatial light modulator

    DOEpatents

    Simpson, John T.; Richards, Roger K.; Hutchinson, Donald P.; Simpson, Marcus L.

    2003-04-22

    A reflective coherent spatial light modulator (RCSLM) includes a subwavelength resonant grating structure (SWS), the SWS including at least one subwavelength resonant grating layer (SWL) having a plurality of areas defining a plurality of pixels. Each pixel represents an area capable of individual control of its reflective response. A structure for modulating the resonant reflective response of at least one pixel is provided. The structure for modulating can include at least one electro-optic layer in optical contact with the SWS. The RCSLM is scalable in both pixel size and wavelength. A method for forming an RCSLM includes the steps of selecting a waveguide material and forming an SWS in the waveguide material, the SWS formed from at least one SWL, the SWL having a plurality of areas defining a plurality of pixels.

  19. Scene-based nonuniformity correction using local constant statistics.

    PubMed

    Zhang, Chao; Zhao, Wenyi

    2008-06-01

    In scene-based nonuniformity correction, the statistical approach assumes that all possible values of the true scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed pattern noise from spatial variations in the average image, which often causes "ghosting" artifacts in the corrected images, since the existing spatial variations are treated as noise. We introduce a new statistical method to reduce the ghosting artifacts. Our method proposes local constant statistics, which assumes that the temporal signal distribution is not constant across the image but is constant locally: the distribution is treated as constant in a local region around each pixel, while being allowed to vary on a larger scale. Under the assumption that the fixed pattern noise is concentrated in a higher spatial-frequency domain than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noise. We also apply the method to a challenging CCD video sequence and an LWIR sequence to show how effective it is in reducing noise and the ghosting artifacts.
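    For reference, the underlying constant-statistics correction can be sketched as below for a frame sequence: the per-pixel temporal mean and standard deviation provide offset and gain estimates. The local-constant-statistics refinement and the wavelet separation of fixed pattern noise described above are omitted; names and the normalization are illustrative.

    ```python
    import numpy as np

    def constant_statistics_nuc(seq, eps=1e-6):
        # seq: array of shape (n_frames, height, width)
        mu = seq.mean(axis=0)            # per-pixel temporal mean -> offset estimate
        sigma = seq.std(axis=0) + eps    # per-pixel temporal std  -> gain estimate
        gain = sigma.mean() / sigma      # normalize to unit average gain
        offset = -gain * mu + mu.mean()
        return gain * seq + offset       # corrected sequence
    ```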

  20. Investigation of skin structures based on infrared wave parameter indirect microscopic imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Liu, Xuefeng; Xiong, Jichuan; Zhou, Lijuan

    2017-02-01

    Detailed imaging and analysis of skin structures are becoming increasingly important in modern healthcare and clinical diagnosis. Nanometer-resolution imaging techniques such as SEM and AFM can cause harmful damage to the sample and cannot measure the whole skin structure from the very surface through the epidermis and dermis to the subcutaneous layer. Conventional optical microscopy has the highest imaging efficiency, flexibility in onsite applications, and lowest cost in manufacturing and usage, but its image resolution is too low to be accepted for biomedical analysis. Infrared parameter indirect microscopic imaging (PIMI) uses an infrared laser as the light source because of its high transmission in skin. The polarization of the optical wave passing through the skin sample is modulated while the variation of the optical field is observed at the imaging plane. The intensity variation curve of each pixel is fitted to extract the near-field polarization parameters that form the indirect images. During this through-skin light modulation and image retrieval process, the curve fitting removes the blurring scatter from neighboring pixels and keeps only the field variations related to local skin structures. By using infrared PIMI, we can break the diffraction limit and bring the wide-field optical image resolution to sub-200 nm, while taking advantage of the high transmission of infrared waves in skin structures.
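    The per-pixel curve fitting can be sketched as a linear least-squares fit of a cosine modulation model, assuming a stack of frames acquired at known modulation angles; the model form, array layout and names below are assumptions for illustration rather than the exact PIMI fitting procedure.

    ```python
    import numpy as np

    def fit_pimi_parameters(stack, angles):
        # stack: array of shape (n_angles, height, width), one frame per
        # polarization modulation angle; angles: 1-D array in radians.
        # Model per pixel: I(theta) = a + b*cos(2*theta - 2*phi)
        n, h, w = stack.shape
        A = np.column_stack([np.ones(n), np.cos(2 * angles), np.sin(2 * angles)])
        coeffs, *_ = np.linalg.lstsq(A, stack.reshape(n, -1), rcond=None)
        a, c, s = coeffs                              # per-pixel coefficients
        amplitude = np.hypot(c, s).reshape(h, w)      # modulation depth b
        phase = (0.5 * np.arctan2(s, c)).reshape(h, w)  # polarization phase phi
        return a.reshape(h, w), amplitude, phase
    ```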

  1. CMOS foveal image sensor chip

    NASA Technical Reports Server (NTRS)

    Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Bandera, Cesar (Inventor); Xia, Shu (Inventor)

    2002-01-01

    A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.

  2. Switching non-local median filter

    NASA Astrophysics Data System (ADS)

    Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji

    2015-06-01

    This paper describes a novel image filtering method for removal of random-valued impulse noise superimposed on grayscale images. Generally, it is well known that switching-type median filters are effective for impulse noise removal. In this paper, we propose a more sophisticated switching-type impulse noise removal method in terms of detail-preserving performance. Specifically, the noise detector of the proposed method finds out noise-corrupted pixels by focusing attention on the difference between the value of a pixel of interest (POI) and the median of its neighboring pixel values, and on the POI's isolation tendency from the surrounding pixels. Furthermore, the removal of the detected noise is performed by the newly proposed median filter based on non-local processing, which has superior detail-preservation capability compared to the conventional median filter. The effectiveness and the validity of the proposed method are verified by some experiments using natural grayscale images.
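    A minimal sketch of the switching idea is given below: pixels whose value differs strongly from the local median are flagged as impulse noise and only those pixels are replaced. The plain local median stands in for the non-local refinement proposed above, and the threshold and window size are illustrative.

    ```python
    import numpy as np
    from scipy import ndimage

    def switching_median(img, size=3, threshold=40):
        med = ndimage.median_filter(img.astype(float), size)
        # Noise detector: large deviation of the pixel of interest from the
        # median of its neighborhood marks it as impulse noise.
        noisy = np.abs(img - med) > threshold
        # Only detected pixels are replaced, preserving detail elsewhere.
        return np.where(noisy, med, img)
    ```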

  3. Local morphologic scale: application to segmenting tumor infiltrating lymphocytes in ovarian cancer TMAs

    NASA Astrophysics Data System (ADS)

    Janowczyk, Andrew; Chandran, Sharat; Feldman, Michael; Madabhushi, Anant

    2011-03-01

    In this paper we present the concept and associated methodological framework for a novel locally adaptive scale notion called local morphological scale (LMS). Broadly speaking, the LMS at every spatial location is defined as the set of spatial locations, with associated morphological descriptors, which characterize the local structure or heterogeneity of the location under consideration. More specifically, the LMS is obtained as the union of all pixels in the polygon obtained by linking the final locations of the trajectories of particles emanating from the location under consideration, where the path traveled by the originating particles is a function of the local gradients and heterogeneity that they encounter along the way. As these particles proceed on their trajectory away from the location under consideration, the velocity of each particle (i.e., whether the particles stop, slow down, or simply continue around the object) is modeled using a physics-based system. At some time point the particle velocity goes to zero (potentially on account of encountering (a) repeated obstructions, (b) an insurmountable image gradient, or (c) timing out) and the particle comes to a halt. By using a Monte-Carlo sampling technique, LMS is efficiently determined through parallelized computations. LMS differs from previous local-scale formulations in that it is (a) not a locally connected set of pixels satisfying some pre-defined intensity homogeneity criterion (generalized scale), nor is it (b) constrained by any prior shape criterion (ball scale, tensor scale). Shape descriptors quantifying the morphology of the particle paths are used to define a tensor LMS signature associated with every spatial image location. These features include the number of object collisions per particle, the average velocity of a particle, and the length of the individual particle paths. These features can be used in conjunction with a supervised classifier to correctly differentiate between two different object classes based on local structural properties. In this paper, we apply LMS to the specific problem of classifying regions of interest in ovarian cancer (OCa) histology images as either tumor or stroma. This approach is used to classify lymphocytes as either tumor infiltrating lymphocytes (TILs) or non-TILs, the presence of TILs having been identified as an important prognostic indicator for disease outcome in patients with OCa. We present preliminary results on the tumor/stroma classification of 11,000 randomly selected locations of interest across 11 images obtained from 6 patient studies. Using a Probabilistic Boosting Tree (PBT), our supervised classifier yielded an area under the receiver operating characteristic curve (AUC) of 0.8341 +/- 0.0059 over 5 runs of randomized cross validation. The average LMS computation time at every spatial location for an image patch comprising 2000 pixels with 24 particles at every location was only 18 s.

  4. Global and Local Translation Designs of Quantum Image Based on FRQI

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Tan, Canyun; Ian, Hou

    2017-04-01

    In this paper, two kinds of quantum image translation are designed based on FRQI: global translation and local translation. Firstly, global translation is realized by employing an adder modulo N, where all pixels in the image are moved, and the circuit for right translation is designed; left translation can also be implemented using right translation. Complexity analysis shows that the circuits for global translation in this paper have lower complexity and require fewer qubits. Secondly, local translation, consisting of single-column translation, multiple-column translation, and translation in a restricted area, is designed by adopting the Gray code. In local translation, any subset of pixels in the image can be translated while the other pixels remain unchanged. In order to lower the complexity when more than one column needs to be translated, multiple-column translation is proposed, which has approximately the same complexity as single-column translation. To perform multiple-column translation, three conditions must be satisfied. In addition, all translations in this paper are cyclic.

  5. Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers

    Treesearch

    Daniel L. Schmoldt; Jing He; A. Lynn Abbott

    1998-01-01

    Our current approach to automatically label features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...

  6. Interactive rendering of acquired materials on dynamic geometry using frequency analysis.

    PubMed

    Bagher, Mahdi Mohammad; Soler, Cyril; Subr, Kartic; Belcour, Laurent; Holzschuch, Nicolas

    2013-05-01

    Shading acquired materials with high-frequency illumination is computationally expensive. Estimating the shading integral requires multiple samples of the incident illumination. The number of samples required may vary across the image, and the image itself may have high- and low-frequency variations, depending on a combination of several factors. Adaptively distributing computational budget across the pixels for shading is a challenging problem. In this paper, we depict complex materials such as acquired reflectances, interactively, without any precomputation based on geometry. In each frame, we first estimate the frequencies in the local light field arriving at each pixel, as well as the variance of the shading integrand. Our frequency analysis accounts for combinations of a variety of factors: the reflectance of the object projecting to the pixel, the nature of the illumination, the local geometry and the camera position relative to the geometry and lighting. We then exploit this frequency information (bandwidth and variance) to adaptively sample for reconstruction and integration. For example, fewer pixels per unit area are shaded for pixels projecting onto diffuse objects, and fewer samples are used for integrating illumination incident on specular objects.

  7. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    NASA Astrophysics Data System (ADS)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
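    A minimal sketch of the block-wise thresholding with interpolation is given below, using Otsu's criterion per block as a stand-in for the saliency-weighted histogram separation described above; the block size, the constant-block guard and the bilinear interpolation via zoom are illustrative choices.

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu

    def local_threshold(img, block=64):
        h, w = img.shape
        by, bx = max(1, h // block), max(1, w // block)
        t = np.zeros((by, bx))
        for i in range(by):
            for j in range(bx):
                blk = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
                # Guard against constant blocks, where Otsu is undefined
                t[i, j] = threshold_otsu(blk) if blk.min() < blk.max() else blk.min()
        # Interpolate the block-wise thresholds to a per-pixel threshold image
        t_full = ndimage.zoom(t, (h / by, w / bx), order=1)
        return img > t_full[:h, :w]   # foreground mask
    ```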

  8. Pixel parallel localized driver design for a 128 x 256 pixel array 3D 1Gfps image sensor

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Dao, V. T. S.; Etoh, T. G.; Charbon, E.

    2017-02-01

    In this paper, a 3D 1Gfps BSI image sensor is proposed, where 128 × 256 pixels are located in the top-tier chip and a 32 × 32 localized driver array in the bottom-tier chip. Pixels are designed with Multiple Collection Gates (MCG), which collects photons selectively with different collection gates being active at intervals of 1ns to achieve 1Gfps. For the drivers, a global PLL is designed, which consists of a ring oscillator with 6-stage current starved differential inverters, achieving a wide frequency tuning range from 40MHz to 360MHz (20ps rms jitter). The drivers are the replicas of the ring oscillator that operates within a PLL. Together with level shifters and XNOR gates, continuous 3.3V pulses are generated with desired pulse width, which is 1/12 of the PLL clock period. The driver array is activated by a START signal, which propagates through a highly balanced clock tree, to activate all the pixels at the same time with virtually negligible skew.

  9. Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.

    PubMed

    Saadia, Ayesha; Rashdi, Adnan

    2016-12-01

    Ultrasound is widely used for imaging due to its cost effectiveness and safety feature. However, ultrasound images are inherently corrupted with speckle noise which severely affects the quality of these images and create difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two stage methodology using fuzzy weighted mean and fractional integration filter has been proposed in this research work. In stage-1, image pixels are processed by applying a 3 × 3 window around each pixel and fuzzy logic is used to assign weights to the pixels in each window, replacing central pixel of the window with weighted mean of all neighboring pixels present in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of an image. In stage-2, the resultant image is further improved by fractional order integration filter. Effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and Improved Fractional Order Differential (IFD) model. Comparison has been done on quantitative and qualitative basis. For quantitative analysis different metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results of artificially corrupted standard test images and two real Echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising of Echocardiographic images is effective in noise suppression/removal. It not only removes noise from an image but also preserves edges and other important structure. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
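    Stage 1 can be sketched as a fuzzy-weighted mean over a 3 x 3 window, with a simple triangular membership on the absolute difference from the central pixel standing in for the fuzzy rules of the paper; the spread parameter and function name are illustrative assumptions.

    ```python
    import numpy as np

    def fuzzy_weighted_mean(img, spread=30.0):
        img = img.astype(float)
        out = img.copy()
        padded = np.pad(img, 1, mode='reflect')
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                win = padded[y:y + 3, x:x + 3]
                # Triangular membership: weight falls off with the absolute
                # difference from the central pixel (central weight is 1).
                weights = np.maximum(0.0, 1.0 - np.abs(win - img[y, x]) / spread)
                out[y, x] = np.sum(weights * win) / weights.sum()
        return out
    ```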

  10. CMOS Active-Pixel Image Sensor With Simple Floating Gates

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Nakamura, Junichi; Kemeny, Sabrina E.

    1996-01-01

    Experimental complementary metal-oxide/semiconductor (CMOS) active-pixel image sensor integrated circuit features simple floating-gate structure, with metal-oxide/semiconductor field-effect transistor (MOSFET) as active circuit element in each pixel. Provides flexibility of readout modes, no kTC noise, and relatively simple structure suitable for high-density arrays. Features desirable for "smart sensor" applications.

  11. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    PubMed Central

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two-color printing, so an invoice font image can be seen as a binary image. To embed watermarks into an invoice image, pixels need to be flipped, and the larger the watermark, the more pixels need to be flipped. We propose a new pixel flipping method for invoice images with huge watermarking capacity. The pixel flipping method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity. PMID:25489606

  12. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    PubMed

    Khan, Khan Bahadar; Khaliq, Amir A; Jalil, Abdul; Shahid, Muhammad

    2018-01-01

    The exploration of retinal vessel structure is colossally important on account of numerous diseases, including stroke, Diabetic Retinopathy (DR) and coronary heart disease, which can damage the retinal vessel structure. The retinal vascular network is very hard to extract due to its spreading and diminishing geometry and the contrast variation in an image. The proposed technique consists of separate parallel processes for denoising and for extraction of blood vessels in retinal images. In the preprocessing section, adaptive histogram equalization enhances the dissimilarity between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula, optic disc, etc. To remove local noise, the difference image is computed from the top-hat filtered image and the high-boost filtered image. The Frangi filter is applied at multiple scales for the enhancement of vessels of diverse widths. Segmentation is performed by using improved Otsu thresholding on the high-boost filtered image and on Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster-to-vector transformation, and the postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE and HRF datasets.

  13. Introducing anisotropic Minkowski functionals and quantitative anisotropy measures for local structure analysis in biomedical imaging

    NASA Astrophysics Data System (ADS)

    Wismüller, Axel; De, Titas; Lochmüller, Eva; Eckstein, Felix; Nagarajan, Mahesh B.

    2013-03-01

    The ability of Minkowski Functionals to characterize local structure in different biological tissue types has been demonstrated in a variety of medical image processing tasks. We introduce anisotropic Minkowski Functionals (AMFs) as a novel variant that captures the inherent anisotropy of the underlying gray-level structures. To quantify the anisotropy characterized by our approach, we further introduce a method to compute a quantitative measure motivated by a technique utilized in MR diffusion tensor imaging, namely fractional anisotropy. We showcase the applicability of our method in the research context of characterizing the local structure properties of trabecular bone micro-architecture in the proximal femur as visualized on multi-detector CT. To this end, AMFs were computed locally for each pixel of ROIs extracted from the head, neck and trochanter regions. Fractional anisotropy was then used to quantify the local anisotropy of the trabecular structures found in these ROIs and to compare its distribution in different anatomical regions. Our results suggest a significantly greater concentration of anisotropic trabecular structures in the head and neck regions when compared to the trochanter region (p < 10-4). We also evaluated the ability of such AMFs to predict bone strength in the femoral head of proximal femur specimens obtained from 50 donors. Our results suggest that such AMFs, when used in conjunction with multi-regression models, can outperform more conventional features such as BMD in predicting failure load. We conclude that such anisotropic Minkowski Functionals can capture valuable information regarding directional attributes of local structure, which may be useful in a wide scope of biomedical imaging applications.
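    The fractional-anisotropy measure referred to above can be sketched directly from a set of local eigenvalues; the three-eigenvalue layout mirrors MR diffusion tensor imaging and is an assumption for illustration, as the anisotropic Minkowski Functionals themselves are not reproduced here.

    ```python
    import numpy as np

    def fractional_anisotropy(eigvals):
        # eigvals: array of shape (..., 3) holding the local eigenvalues
        lam = np.asarray(eigvals, dtype=float)
        mean = lam.mean(axis=-1, keepdims=True)
        # FA = sqrt(3/2) * ||lam - mean|| / ||lam||, in [0, 1]
        num = np.sqrt(1.5 * np.sum((lam - mean) ** 2, axis=-1))
        den = np.sqrt(np.sum(lam ** 2, axis=-1)) + 1e-12
        return num / den
    ```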

  14. Introducing Anisotropic Minkowski Functionals and Quantitative Anisotropy Measures for Local Structure Analysis in Biomedical Imaging

    PubMed Central

    Wismüller, Axel; De, Titas; Lochmüller, Eva; Eckstein, Felix; Nagarajan, Mahesh B.

    2017-01-01

    The ability of Minkowski Functionals to characterize local structure in different biological tissue types has been demonstrated in a variety of medical image processing tasks. We introduce anisotropic Minkowski Functionals (AMFs) as a novel variant that captures the inherent anisotropy of the underlying gray-level structures. To quantify the anisotropy characterized by our approach, we further introduce a method to compute a quantitative measure motivated by a technique utilized in MR diffusion tensor imaging, namely fractional anisotropy. We showcase the applicability of our method in the research context of characterizing the local structure properties of trabecular bone micro-architecture in the proximal femur as visualized on multi-detector CT. To this end, AMFs were computed locally for each pixel of ROIs extracted from the head, neck and trochanter regions. Fractional anisotropy was then used to quantify the local anisotropy of the trabecular structures found in these ROIs and to compare its distribution in different anatomical regions. Our results suggest a significantly greater concentration of anisotropic trabecular structures in the head and neck regions when compared to the trochanter region (p < 10−4). We also evaluated the ability of such AMFs to predict bone strength in the femoral head of proximal femur specimens obtained from 50 donors. Our results suggest that such AMFs, when used in conjunction with multi-regression models, can outperform more conventional features such as BMD in predicting failure load. We conclude that such anisotropic Minkowski Functionals can capture valuable information regarding directional attributes of local structure, which may be useful in a wide scope of biomedical imaging applications. PMID:29170580

  15. Adaptive Locally Optimum Processing for Interference Suppression from Communication and Undersea Surveillance Signals

    DTIC Science & Technology

    1994-07-01

    1993. "Analysis of the 1730-1732. Track - Before - Detect Approach to Target Detection using Pixel Statistics", to appear in IEEE Transactions Scholz, J...large surveillance arrays. One approach to combining energy in different spatial cells is track - before - detect . References to examples appear in the next... track - before - detect problem. The results obtained are not expected to depend strongly on model details. In particular, the structure of the tracking

  16. Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon

    1997-04-01

    A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted at real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines an Annealing Cellular Neural Network (ACNN) and a Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected with their local neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. The feasibility of a compact current-mode VLSI design of the ACNN+HSONN neural computer is demonstrated by a 3D 16×8×9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM-based avionics architecture for NASA's New Millennium Program is also described.

  17. Gland segmentation in prostate histopathological images

    PubMed Central

    Singh, Malay; Kalaw, Emarene Mationg; Giron, Danilo Medina; Chong, Kian-Tai; Tan, Chew Lim; Lee, Hwee Kuan

    2017-01-01

    Abstract. Glandular structural features are important for the tumor pathologist in the assessment of cancer malignancy of prostate tissue slides. The varying shapes and sizes of glands combined with the tedious manual observation task can result in inaccurate assessment. There are also discrepancies and low levels of agreement among pathologists, especially in cases of Gleason pattern 3 and pattern 4 prostate adenocarcinoma. An automated gland segmentation system can highlight various glandular shapes and structures for further analysis by the pathologist. These objectively highlighted patterns can help reduce the assessment variability. We propose an automated gland segmentation system. Forty-three hematoxylin and eosin-stained images were acquired from prostate cancer tissue slides and were manually annotated for gland, lumen, periacinar retraction clefting, and stroma regions. Our automated gland segmentation system was trained using these manual annotations. It identifies these regions using a combination of pixel- and object-level classifiers, incorporating local and spatial information to consolidate pixel-level classification results into object-level segmentation. Experimental results show that our method outperforms various texture and gland structure-based gland segmentation algorithms in the literature. Our method has good performance and can be a promising tool to help decrease interobserver variability among pathologists. PMID:28653016

  18. The spectral signature of cloud spatial structure in shortwave irradiance

    PubMed Central

    Song, Shi; Schmidt, K. Sebastian; Pilewskie, Peter; King, Michael D.; Heidinger, Andrew K.; Walther, Andi; Iwabuchi, Hironobu; Wind, Gala; Coddington, Odele M.

    2017-01-01

    In this paper, we used cloud imagery from a NASA field experiment in conjunction with three-dimensional radiative transfer calculations to show that cloud spatial structure manifests itself as a spectral signature in shortwave irradiance fields – specifically in transmittance and net horizontal photon transport in the visible and near-ultraviolet wavelength range. We found a robust correlation between the magnitude of net horizontal photon transport (H) and its spectral dependence (slope), which is scale-invariant and holds for the entire pixel population of a domain. This was surprising at first given the large degree of spatial inhomogeneity. We prove that the underlying physical mechanism for this phenomenon is molecular scattering in conjunction with cloud spatial structure. On this basis, we developed a simple parameterization through a single parameter ε, which quantifies the characteristic spectral signature of spatial inhomogeneities. In the case we studied, neglecting net horizontal photon transport leads to a local transmittance bias of ±12–19 %, even at the relatively coarse spatial resolution of 20 km. Since three-dimensional effects depend on the spatial context of a given pixel in a nontrivial way, the spectral dimension of this problem may emerge as the starting point for future bias corrections. PMID:28824698

  19. The spectral signature of cloud spatial structure in shortwave irradiance.

    PubMed

    Song, Shi; Schmidt, K Sebastian; Pilewskie, Peter; King, Michael D; Heidinger, Andrew K; Walther, Andi; Iwabuchi, Hironobu; Wind, Gala; Coddington, Odele M

    2016-11-08

    In this paper, we used cloud imagery from a NASA field experiment in conjunction with three-dimensional radiative transfer calculations to show that cloud spatial structure manifests itself as a spectral signature in shortwave irradiance fields - specifically in transmittance and net horizontal photon transport in the visible and near-ultraviolet wavelength range. We found a robust correlation between the magnitude of net horizontal photon transport (H) and its spectral dependence (slope), which is scale-invariant and holds for the entire pixel population of a domain. This was surprising at first given the large degree of spatial inhomogeneity. We prove that the underlying physical mechanism for this phenomenon is molecular scattering in conjunction with cloud spatial structure. On this basis, we developed a simple parameterization through a single parameter ε, which quantifies the characteristic spectral signature of spatial inhomogeneities. In the case we studied, neglecting net horizontal photon transport leads to a local transmittance bias of ±12-19%, even at the relatively coarse spatial resolution of 20 km. Since three-dimensional effects depend on the spatial context of a given pixel in a nontrivial way, the spectral dimension of this problem may emerge as the starting point for future bias corrections.

  20. WE-FG-207B-04: Noise Suppression for Energy-Resolved CT Via Variance Weighted Non-Local Filtration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harms, J; Zhu, L

    Purpose: The photon starvation problem is exacerbated in energy-resolved CT, since the detected photons are shared by multiple energy channels. Using pixel similarity-based non-local filtration, we aim to produce accurate and high-resolution energy-resolved CT images with significantly reduced noise. Methods: Averaging CT images reconstructed from different energy channels reduces noise at the price of losing spectral information, while conventional denoising techniques inevitably degrade image resolution. Inspired by the fact that CT images of the same object at different energies share the same structures, we aim to reduce noise of energy-resolved CT by averaging only pixels of similar materials - a non-local filtration technique. For each CT image, an empirical exponential model is used to calculate the material similarity between two pixels based on their CT values, and the similarity values are organized in a matrix form. A final similarity matrix is generated by averaging these similarity matrices, with weights inversely proportional to the estimated total noise variance in the sinogram of different energy channels. Noise suppression is achieved for each energy channel via multiplying the image vector by the similarity matrix. Results: Multiple scans on a tabletop CT system are used to simulate 6-channel energy-resolved CT, with energies ranging from 75 to 125 kVp. On a low-dose acquisition at 15 mA of the Catphan©600 phantom, our method achieves the same image spatial resolution as a high-dose scan at 80 mA with a noise standard deviation (STD) lower by a factor of >2. Compared with another non-local noise suppression algorithm (ndiNLM), the proposed algorithm obtains images with substantially improved resolution at the same level of noise reduction. Conclusion: We propose a noise-suppression method for energy-resolved CT. Our method takes full advantage of the additional structural information provided by energy-resolved CT and preserves image values at each energy level. Research reported in this publication was supported by the National Institute Of Biomedical Imaging And Bioengineering of the National Institutes of Health under Award Number R21EB019597. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
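
    As a rough illustration of the pipeline described above, the sketch below builds a per-channel exponential similarity matrix, averages the matrices with weights inversely proportional to each channel's estimated noise variance, and applies the resulting row-normalised matrix to every channel. The Gaussian kernel form and the bandwidth h are assumptions made for illustration, not the empirical model of the abstract, and real images would need patch-wise rather than full N x N matrices.

        import numpy as np

        def channel_similarity(img_vec, h):
            """Exponential similarity between all pixel pairs of one channel."""
            d = img_vec[:, None] - img_vec[None, :]
            return np.exp(-(d ** 2) / (h ** 2))

        def vw_nonlocal_filter(channels, noise_var, h=40.0):
            """channels: 1-D image vectors (one per energy bin);
            noise_var: estimated noise variance per channel."""
            w = 1.0 / np.asarray(noise_var, dtype=float)
            w /= w.sum()
            S = sum(wi * channel_similarity(c, h) for wi, c in zip(w, channels))
            S /= S.sum(axis=1, keepdims=True)        # row-normalise -> averaging operator
            return [S @ c for c in channels]         # the same matrix filters every channel

        # Tiny synthetic test: two "energy channels" of a 1-D edge plus noise.
        rng = np.random.default_rng(0)
        truth = np.repeat([0.0, 100.0], 50)
        ch1 = truth + rng.normal(0, 20, truth.size)
        ch2 = 0.8 * truth + rng.normal(0, 10, truth.size)
        f1, _ = vw_nonlocal_filter([ch1, ch2], noise_var=[400.0, 100.0])
        print(np.std(ch1 - truth), np.std(f1 - truth))   # residual noise drops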

  1. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen for a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters optimized for minimum false positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
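
    For reference, pixel duplication is just integer-factor nearest-neighbour replication; a minimal sketch (function name and values are illustrative):

        import numpy as np

        def pixel_duplicate(img, factor=2):
            """Enlarge an image by replicating each pixel into a factor x factor block,
            a simpler alternative to interpolation-based resampling."""
            return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

        small = np.array([[0, 255],
                          [128, 64]], dtype=np.uint8)
        print(pixel_duplicate(small, 2))   # each value becomes a 2x2 block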

  2. Virus based Full Colour Pixels using a Microheater

    NASA Astrophysics Data System (ADS)

    Kim, Won-Geun; Kim, Kyujung; Ha, Sung-Hun; Song, Hyerin; Yu, Hyun-Woo; Kim, Chuntae; Kim, Jong-Man; Oh, Jin-Woo

    2015-09-01

    Mimicking natural structures has received considerable attention, and there have been a few practical advances. Tremendous efforts based on self-assembly techniques have contributed to the development of novel photonic structures that mimic nature's inventions. We emulate the photonic structures underlying colour generation in mammalian skins and avian skin/feathers using M13 phage. The structures can generate a full range of RGB colours that can be sensitively switched by temperature and substrate materials. Consequently, we developed an M13 phage-based, temperature-dependent, actively controllable colour pixel platform on a microheater chip. Given the simplicity of the fabrication process, the low voltage requirements and the cycling stability, the virus colour pixels could substitute for conventional colour pixels in the development of various implantable, wearable and flexible devices in the future.

  3. Pixel-based dust-extinction mapping in nearby galaxies: A new approach to lifting the veil of dust

    NASA Astrophysics Data System (ADS)

    Tamura, Kazuyuki

    In the first part of this dissertation, I explore a new approach to mapping dust extinction in galaxies, using the observed and estimated dust-free flux ratios of optical V-band and mid-IR 3.6 micrometer emission. The inferred missing V-band flux is then converted into an estimate of dust extinction. While dust features are not clearly evident in the observed ground-based images of NGC 0959, the target of my pilot study, the dust map created with this method clearly traces the distribution of dust seen in higher resolution Hubble images. Stellar populations are then analyzed through various pixel Color-Magnitude Diagrams and pixel Color-Color Diagrams (pCCDs), both before and after extinction correction. The (B - 3.6 microns) versus (far-UV - U) pCCD proves particularly powerful to distinguish pixels that are dominated by different types of, or mixtures of, stellar populations. Mapping these pixel-groups onto a pixel-coordinate map shows that they are not distributed randomly, but follow genuine galactic structures, such as a previously unrecognized bar. I show that selecting pixel-groups is not meaningful when using uncorrected colors, and that pixel-based extinction correction is crucial to reveal the true spatial variations in stellar populations. This method is then applied to a sample of late-type galaxies to study the distribution of dust and stellar populations as a function of their morphological type and absolute magnitude. In each galaxy, I find that dust extinction is not simply decreasing radially, but that it is concentrated in localized clumps throughout a galaxy. I also find some cases where star-formation regions are not associated with dust. In the second part, I describe the application of astronomical image analysis tools for medical purposes. In particular, Source Extractor is used to detect nerve fibers in the basement membrane images of human skin-biopsies of obese subjects. While more development and testing is necessary for this kind of work, I show that computerized detection methods significantly increase the repeatability and reliability of the results. A patent on this work is pending.
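
    The conversion from a suppressed flux ratio to an extinction estimate follows the usual magnitude definition; a brief hedged sketch (the calibration of the intrinsic, dust-free ratio in the dissertation is not reproduced, and the function name is illustrative):

        import numpy as np

        def v_band_extinction(fv_obs, f36_obs, intrinsic_ratio):
            """Per-pixel V-band extinction (magnitudes) inferred from the drop of the
            observed V / 3.6 micron flux ratio below its assumed dust-free value,
            treating the 3.6 micron flux as essentially unextinguished."""
            ratio = np.asarray(fv_obs, float) / np.asarray(f36_obs, float)
            return -2.5 * np.log10(ratio / intrinsic_ratio)

        # A pixel whose V flux is half of what the dust-free ratio predicts: A_V ~ 0.75 mag.
        print(v_band_extinction(fv_obs=1.0, f36_obs=1.0, intrinsic_ratio=2.0))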

  4. Discrimination of isotrigon textures using the Rényi entropy of Allan variances.

    PubMed

    Gabarda, Salvador; Cristóbal, Gabriel

    2008-09-01

    We present a computational algorithm for isotrigon texture discrimination. The aim of this method is to discriminate isotrigon textures against a binary random background. The extension of the method to the problem of multitexture discrimination is considered as well. The method relies on the fact that the information content of time- or space-frequency representations of signals, including images, can be readily analyzed by means of generalized entropy measures. In such a scenario, the Rényi entropy appears as an effective tool, given that Rényi measures can be used to provide information about a local neighborhood within an image. Localization is essential for comparing images on a pixel-by-pixel basis. Discrimination is performed through a local Rényi entropy measurement applied to a spatially oriented 1-D pseudo-Wigner distribution (PWD) of the test image. The PWD is normalized so that it may be interpreted as a probability distribution. Prior to the calculation of the texture's PWD, a preprocessing filtering step replaces the original texture with its localized spatially oriented Allan variances. The anisotropic structure of the textures, as revealed by the Allan variances, turns out to be crucial for attaining high discrimination through the extraction of Rényi entropy measures. The method has been empirically evaluated with a family of isotrigon textures embedded in a binary random background. The extension to the case of multiple isotrigon mosaics has also been considered. Discrimination results are compared with those of other existing methods.
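
    The Rényi entropy itself is a standard quantity; a minimal sketch of its computation on a normalised distribution (such as a normalised pseudo-Wigner distribution) follows. The order alpha is left as a free parameter, and the PWD and Allan-variance preprocessing steps of the paper are not reproduced.

        import numpy as np

        def renyi_entropy(p, alpha=3.0):
            """Rényi entropy H_a = log2(sum_i p_i**a) / (1 - a) of a discrete
            distribution p (non-negative, summing to 1)."""
            p = np.asarray(p, dtype=float)
            p = p[p > 0]
            if np.isclose(alpha, 1.0):          # the limit a -> 1 is the Shannon entropy
                return float(-(p * np.log2(p)).sum())
            return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))

        flat = np.full(8, 1 / 8)                    # spread-out distribution
        peaked = np.array([0.93] + [0.01] * 7)      # concentrated distribution
        print(renyi_entropy(flat), renyi_entropy(peaked))   # 3.0 vs. a much smaller value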

  5. Realistic full wave modeling of focal plane array pixels

    DOE PAGES

    Campione, Salvatore; Warne, Larry K.; Jorgenson, Roy E.; ...

    2017-11-01

    Here, we investigate full-wave simulations of realistic implementations of multifunctional nanoantenna enabled detectors (NEDs). We focus on a 2x2 pixelated array structure that supports two wavelengths of operation. We design each resonating structure independently using full-wave simulations with periodic boundary conditions mimicking the whole infinite array. We then construct a supercell made of a 2x2 pixelated array with periodic boundary conditions mimicking the full NED; in this case, however, each pixel comprises 10-20 antennas per side. In this way, the cross-talk between contiguous pixels is accounted for in our simulations. We observe that, even though there are finite extent effects, the pixels work as designed, each responding at the respective wavelength of operation. This allows us to stress that realistic simulations of multifunctional NEDs need to be performed to verify the design functionality by taking into account finite extent and cross-talk effects.

  6. Which Photodiode to Use: A Comparison of CMOS-Compatible Structures

    PubMed Central

    Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert

    2010-01-01

    While great advances have been made in optimizing fabrication process technologies for solid state image sensors, the need remains to be able to fabricate high quality photosensors in standard CMOS processes. The quality metrics depend on both the pixel architecture and the photosensitive structure. This paper presents a comparison of three photodiode structures in terms of spectral sensitivity, noise and dark current. The three structures are n+/p-sub, n-well/p-sub and p+/n-well/p-sub. All structures were fabricated in a 0.5 μm 3-metal, 2-poly, n-well process and shared the same pixel and readout architectures. Two pixel structures were fabricated—the standard three transistor active pixel sensor, where the output depends on the photodiode capacitance, and one incorporating an in-pixel capacitive transimpedance amplifier where the output is dependent only on a designed feedback capacitor. The n-well/p-sub diode performed best in terms of sensitivity (an improvement of 3.5 × and 1.6 × over the n+/p-sub and p+/n-well/p-sub diodes, respectively) and signal-to-noise ratio (1.5 × and 1.2 × improvement over the n+/p-sub and p+/n-well/p-sub diodes, respectively) while the p+/n-well/p-sub diode had the minimum (33% compared to other two structures) dark current for a given sensitivity. PMID:20454596

  7. Which Photodiode to Use: A Comparison of CMOS-Compatible Structures.

    PubMed

    Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert

    2009-07-01

    While great advances have been made in optimizing fabrication process technologies for solid state image sensors, the need remains to be able to fabricate high quality photosensors in standard CMOS processes. The quality metrics depend on both the pixel architecture and the photosensitive structure. This paper presents a comparison of three photodiode structures in terms of spectral sensitivity, noise and dark current. The three structures are n+/p-sub, n-well/p-sub and p+/n-well/p-sub. All structures were fabricated in a 0.5 μm 3-metal, 2-poly, n-well process and shared the same pixel and readout architectures. Two pixel structures were fabricated - the standard three transistor active pixel sensor, where the output depends on the photodiode capacitance, and one incorporating an in-pixel capacitive transimpedance amplifier where the output is dependent only on a designed feedback capacitor. The n-well/p-sub diode performed best in terms of sensitivity (an improvement of 3.5× and 1.6× over the n+/p-sub and p+/n-well/p-sub diodes, respectively) and signal-to-noise ratio (1.5× and 1.2× improvement over the n+/p-sub and p+/n-well/p-sub diodes, respectively) while the p+/n-well/p-sub diode had the minimum (33% compared to the other two structures) dark current for a given sensitivity.

  8. Early science from the Pan-STARRS1 Optical Galaxy Survey (POGS): Maps of stellar mass and star formation rate surface density obtained from distributed-computing pixel-SED fitting

    NASA Astrophysics Data System (ADS)

    Thilker, David A.; Vinsen, K.; Galaxy Properties Key Project, PS1

    2014-01-01

    To measure resolved galactic physical properties unbiased by the mask of recent star formation and dust features, we are conducting a citizen-scientist enabled nearby galaxy survey based on the unprecedented optical (g,r,i,z,y) imaging from Pan-STARRS1 (PS1). The PS1 Optical Galaxy Survey (POGS) covers 3π steradians (75% of the sky), about twice the footprint of SDSS. Whenever possible we also incorporate ancillary multi-wavelength image data from the ultraviolet (GALEX) and infrared (WISE, Spitzer) spectral regimes. For each cataloged nearby galaxy with a reliable redshift estimate of z < 0.05 - 0.1 (dependent on donated CPU power), publicly-distributed computing is being harnessed to enable pixel-by-pixel spectral energy distribution (SED) fitting, which in turn provides maps of key physical parameters such as the local stellar mass surface density, crude star formation history, and dust attenuation. With pixel SED fitting output we will then constrain parametric models of galaxy structure in a more meaningful way than ordinarily achieved. In particular, we will fit multi-component (e.g. bulge, bar, disk) galaxy models directly to the distribution of stellar mass rather than surface brightness in a single band, which is often locally biased. We will also compute non-parametric measures of morphology such as concentration and asymmetry using the POGS stellar mass and SFR surface density images. We anticipate studying how galactic substructures evolve by comparing our results with simulations and against more distant imaging surveys, some of which will also be processed in the POGS pipeline. The reliance of our survey on citizen-scientist volunteers provides a world-wide opportunity for education. We developed an interactive interface which highlights the science being produced by each volunteer's own CPU cycles. The POGS project has already proven popular amongst the public, attracting about 5000 volunteers with nearly 12,000 participating computers, and is growing rapidly.

  9. Numerical simulation of crosstalk in reduced pitch HgCdTe photon-trapping structure pixel arrays.

    PubMed

    Schuster, Jonathan; Bellotti, Enrico

    2013-06-17

    We have investigated crosstalk in HgCdTe photovoltaic pixel arrays employing a photon trapping (PT) structure realized with a periodic array of pillars intended to provide broadband operation. We have found that, compared to non-PT pixel arrays with similar geometry, the array employing the PT structure has a slightly higher optical crosstalk. However, when the total crosstalk is evaluated, the presence of the PT region drastically reduces the total crosstalk; making the use of the PT structure not only useful to obtain broadband operation, but also desirable for reducing crosstalk in small pitch detector arrays.

  10. SAR image segmentation using skeleton-based fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Cao, Yun Yi; Chen, Yan Qiu

    2003-06-01

    SAR image segmentation can be converted to a clustering problem in which pixels or small patches are grouped together based on local feature information. In this paper, we present a novel framework for segmentation. The segmentation goal is achieved by unsupervised clustering upon characteristic descriptors extracted from local patches. A mixture model of the characteristic descriptor, which combines intensity and texture features, is investigated. The unsupervised algorithm is derived from the recently proposed Skeleton-Based Data Labeling method. Skeletons are constructed as prototypes of clusters to represent arbitrary latent structures in image data. Segmentation using Skeleton-Based Fuzzy Clustering is able to detect the types of surfaces appearing in SAR images automatically without any user input.

  11. Measurements and TCAD simulation of novel ATLAS planar pixel detector structures for the HL-LHC upgrade

    NASA Astrophysics Data System (ADS)

    Nellist, C.; Dinu, N.; Gkougkousis, E.; Lounis, A.

    2015-06-01

    The LHC accelerator complex will be upgraded between 2020-2022, to the High-Luminosity-LHC, to considerably increase statistics for the various physics analyses. To operate under these challenging new conditions, and maintain excellent performance in track reconstruction and vertex location, the ATLAS pixel detector must be substantially upgraded and a full replacement is expected. Processing techniques for novel pixel designs are optimised through characterisation of test structures in a clean room and also through simulations with Technology Computer Aided Design (TCAD). A method to study non-perpendicular tracks through a pixel device is discussed. Comparison of TCAD simulations with Secondary Ion Mass Spectrometry (SIMS) measurements to investigate the doping profile of structures and validate the simulation process is also presented.

  12. Joint spatial-spectral hyperspectral image clustering using block-diagonal amplified affinity matrix

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Messinger, David W.

    2018-03-01

    The large number of spectral channels in a hyperspectral image (HSI) produces a fine spectral resolution to differentiate between materials in a scene. However, difficult classes that have similar spectral signatures are often confused while merely exploiting information in the spectral domain. Therefore, in addition to spectral characteristics, the spatial relationships inherent in HSIs should also be considered for incorporation into classifiers. The growing availability of high spectral and spatial resolution of remote sensors provides rich information for image clustering. Besides the discriminating power in the rich spectrum, contextual information can be extracted from the spatial domain, such as the size and the shape of the structure to which one pixel belongs. In recent years, spectral clustering has gained popularity compared to other clustering methods due to the difficulty of accurate statistical modeling of data in high dimensional space. The joint spatial-spectral information could be effectively incorporated into the proximity graph for spectral clustering approach, which provides a better data representation by discovering the inherent lower dimensionality from the input space. We embedded both spectral and spatial information into our proposed local density adaptive affinity matrix, which is able to handle multiscale data by automatically selecting the scale of analysis for every pixel according to its neighborhood of the correlated pixels. Furthermore, we explored the "conductivity method," which aims at amplifying the block diagonal structure of the affinity matrix to further improve the performance of spectral clustering on HSI datasets.

  13. Lifting the Veil of Dust from NGC 0959: The Importance of a Pixel-based Two-dimensional Extinction Correction

    NASA Astrophysics Data System (ADS)

    Tamura, K.; Jansen, R. A.; Eskridge, P. B.; Cohen, S. H.; Windhorst, R. A.

    2010-06-01

    We present the results of a study of the late-type spiral galaxy NGC 0959, before and after application of the pixel-based dust extinction correction described in Tamura et al. (Paper I). Galaxy Evolution Explorer far-UV, and near-UV, ground-based Vatican Advanced Technology Telescope, UBVR, and Spitzer/Infrared Array Camera 3.6, 4.5, 5.8, and 8.0 μm images are studied through pixel color-magnitude diagrams and pixel color-color diagrams (pCCDs). We define groups of pixels based on their distribution in a pCCD of (B - 3.6 μm) versus (FUV - U) colors after extinction correction. In the same pCCD, we trace their locations before the extinction correction was applied. This shows that selecting pixel groups is not meaningful when using colors uncorrected for dust. We also trace the distribution of the pixel groups on a pixel coordinate map of the galaxy. We find that the pixel-based (two-dimensional) extinction correction is crucial for revealing the spatial variations in the dominant stellar population, averaged over each resolution element. Different types and mixtures of stellar populations, and galaxy structures such as a previously unrecognized bar, become readily discernible in the extinction-corrected pCCD and as coherent spatial structures in the pixel coordinate map.

  14. Daytime Water Detection Based on Sky Reflections

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Matthies, Larry; Bellutta, Paolo

    2011-01-01

    A water body's surface can be modeled as a horizontal mirror. Water detection based on sky reflections and color variation are complementary. A reflection coefficient model suggests sky reflections dominate the color of water at ranges > 12 meters. Water detection based on sky reflections: (1) geometrically locates the pixel in the sky that is reflecting on a candidate water pixel on the ground and (2) predicts if the ground pixel is water based on color similarity and local terrain features. Water detection has been integrated on XUVs.
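
    As a toy version of this idea, the sketch below assumes a level camera so that the sky pixel reflecting into a candidate ground pixel is approximately its vertical mirror about the horizon row, and then flags ground pixels whose colour is close to that mirrored sky pixel. The real method uses full reflection geometry and local terrain features; the function and threshold here are illustrative only.

        import numpy as np

        def water_candidates(img, horizon_row, color_thresh=30.0):
            """img: H x W x 3 float array; returns a water-likeness mask for the rows
            below the horizon, based on colour similarity to the mirrored sky pixel."""
            mask = np.zeros((img.shape[0] - horizon_row, img.shape[1]), dtype=bool)
            for dr in range(mask.shape[0]):
                sky_row = horizon_row - 1 - dr           # mirrored row above the horizon
                if sky_row < 0:
                    break
                diff = np.linalg.norm(img[horizon_row + dr] - img[sky_row], axis=-1)
                mask[dr] = diff < color_thresh           # similar colour -> water-like
            return mask

        rng = np.random.default_rng(1)
        frame = rng.uniform(0, 255, (60, 80, 3))
        frame[:30] = [135, 180, 235]                     # uniform "sky"
        frame[40:50] = [135, 180, 235]                   # sky-coloured band below the horizon
        print(water_candidates(frame, horizon_row=30)[10:20].all())   # True: band flagged as water-like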

  15. Dependence of optical phase modulation on anchoring strength of dielectric shield wall surfaces in small liquid crystal pixels

    NASA Astrophysics Data System (ADS)

    Isomae, Yoshitomo; Shibata, Yosei; Ishinabe, Takahiro; Fujikake, Hideo

    2018-03-01

    We demonstrated that uniform phase modulation in a pixel can be realized by optimizing the anchoring strength on the walls and the wall width in the dielectric shield wall structure, which is the pixel structure needed to realize a 1-µm-pitch optical phase modulator. The anchoring force degrades the uniformity of the phase modulation in ON-state pixels, but it also keeps liquid crystals from rotating against the leakage of an electric field. We clarified that the optimal wall width and anchoring strength are 250 nm and less than 10⁻⁴ J/m², respectively.

  16. Metrology Camera System Using Two-Color Interferometry

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Liebe, Carl Christian; Peters, Robert; Lay, Oliver

    2007-01-01

    A metrology system that contains no moving parts simultaneously measures the bearings and ranges of multiple reflective targets in its vicinity, enabling determination of the three-dimensional (3D) positions of the targets with submillimeter accuracy. The system combines a direction-measuring metrology camera and an interferometric range-finding subsystem. Because the system is based partly on a prior instrument denoted the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor and because of its 3D capability, the system is denoted the MSTAR3D. Developed for use in measuring the shape (for the purpose of compensating for distortion) of large structures like radar antennas, it can also be used to measure positions of multiple targets in the course of conventional terrestrial surveying. A diagram of the system is shown in the figure. One of the targets is a reference target having a known, constant distance with respect to the system. The system comprises a laser for generating local and target beams at a carrier frequency; a frequency-shifting unit to introduce a frequency offset between the target and local beams; a pair of high-speed modulators that apply modulation to the carrier frequency in the local and target beams to produce a series of modulation sidebands, the high-speed modulators having modulation frequencies of FL and FM; a target beam launcher that illuminates the targets with the target beam; optics and a multi-pixel photodetector; a local beam launcher that launches the local beam towards the multi-pixel photodetector; a mirror for projecting to the optics a portion of the target beam reflected from the targets, the optics being configured to focus the portion of the target beam at the multi-pixel photodetector; and a signal-processing unit connected to the photodetector. The portion of the target beam reflected from the targets produces spots on the multi-pixel photodetector corresponding to the targets, respectively, and the signal-processing unit centroids the spots to determine the bearings of the targets, respectively. As the spots oscillate in intensity because they are mixed with the local laser beam that is flood-illuminating the focal plane, the phase of oscillation of each spot is measured, the phase of the sidebands in the oscillation of each spot being proportional to the distance to the corresponding target relative to the reference target A.

  17. Limits in point to point resolution of MOS based pixels detector arrays

    NASA Astrophysics Data System (ADS)

    Fourches, N.; Desforge, D.; Kebbiri, M.; Kumar, V.; Serruys, Y.; Gutierrez, G.; Leprêtre, F.; Jomard, F.

    2018-01-01

    In high energy physics, point-to-point resolution is a key prerequisite for particle detector pixel arrays. Current and future experiments require the development of inner detectors able to resolve the tracks of particles down to the micron range. Present-day technologies, although not fully implemented in actual detectors, can reach a 5-μm limit, this limit being based on statistical measurements, with a pixel pitch in the 10 μm range. This paper is devoted to the evaluation of the building blocks for use in pixel arrays enabling accurate tracking of charged particles. Based on simulations, we make a quantitative evaluation of the physical and technological limits on pixel size. Attempts to design small pixels based on SOI technology are briefly recalled. A design based on CMOS-compatible technologies that allows a reduction of the pixel size below one micrometer is introduced here. Its physical principle relies on a buried carrier-localizing collecting gate. The fabrication process needed by this pixel design can be based on existing process steps used in silicon microelectronics. The pixel characteristics are discussed, as well as the design of pixel arrays. The existing bottlenecks and how to overcome them are discussed in the light of recent ion implantation and material characterization experiments.

  18. DeepSkeleton: Learning Multi-Task Scale-Associated Deep Side Outputs for Object Skeleton Extraction in Natural Images

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Zhao, Kai; Jiang, Yuan; Wang, Yan; Bai, Xiang; Yuille, Alan

    2017-11-01

    Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to be able to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the groundtruth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. Additionally, the usefulness of the obtained skeletons and scales (thickness) are verified on two object detection applications: Foreground object segmentation and object proposal detection.

  19. Classification of Global Urban Centers Using ASTER Data: Preliminary Results From the Urban Environmental Monitoring Program

    NASA Astrophysics Data System (ADS)

    Stefanov, W. L.; Stefanov, W. L.; Christensen, P. R.

    2001-05-01

    Land cover and land use changes associated with urbanization are important drivers of global ecologic and climatic change. Quantification and monitoring of these changes are part of the primary mission of the ASTER instrument, and comprise the fundamental research objective of the Urban Environmental Monitoring (UEM) Program. The UEM program will acquire day/night, visible through thermal infrared ASTER data twice per year for 100 global urban centers over the duration of the mission (6 years). Data are currently available for a number of these urban centers and allow for initial comparison of global city structure using spatial variance texture analysis of the 15 m/pixel visible to near infrared ASTER bands. Variance texture analysis highlights changes in pixel edge density as recorded by sharp transitions from bright to dark pixels. In human-dominated landscapes these brightness variations correlate well with urbanized vs. natural land cover and are useful for characterizing the geographic extent and internal structure of cities. Variance texture analysis was performed on twelve urban centers (Albuquerque, Baghdad, Baltimore, Chongqing, Istanbul, Johannesburg, Lisbon, Madrid, Phoenix, Puebla, Riyadh, Vancouver) for which cloud-free daytime ASTER data are available. Image transects through each urban center produce texture profiles that correspond to urban density. These profiles can be used to classify cities into centralized (ex. Baltimore), decentralized (ex. Phoenix), or intermediate (ex. Madrid) structural types. Image texture is one of the primary data inputs (with vegetation indices and visible to thermal infrared image spectra) to a knowledge-based land cover classifier currently under development for application to ASTER UEM data as it is acquired. Collaboration with local investigators is sought to both verify the accuracy of the knowledge-based system and to develop more sophisticated classification models.
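
    Spatial variance texture of the kind described here can be computed with a moving-window variance; a brief sketch (the window size and the synthetic scene are arbitrary choices for illustration):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def variance_texture(img, size=7):
            """Local variance in a size x size window: var = E[x^2] - E[x]^2.
            High values flag dense bright/dark transitions (e.g. built-up areas)."""
            img = img.astype(float)
            mean = uniform_filter(img, size)
            mean_sq = uniform_filter(img * img, size)
            return mean_sq - mean * mean

        # Smooth terrain on the left, a blocky "urban" checkerboard on the right.
        scene = np.zeros((64, 64))
        scene[:, 32:] = (np.indices((64, 32)).sum(axis=0) % 2) * 200.0
        tex = variance_texture(scene)
        print(tex[:, :24].mean(), tex[:, 40:].mean())    # low vs. high texture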

  20. Pitch Angles Of Artificially Redshifted Galaxies

    NASA Astrophysics Data System (ADS)

    Shields, Douglas W.; Davis, B.; Johns, L.; Berrier, J. C.; Kennefick, D.; Kennefick, J.; Seigar, M.

    2012-05-01

    We present the pitch angles of several galaxies that have been artificially redshifted using Barden et al.'s FERENGI software. The (central black hole mass)-(spiral arm pitch angle) relation has been used on a statistically complete sample of local galaxies to determine the black hole mass function of local spiral galaxies. We now measure the pitch angles at increasing redshifts by operating on the images pixel-by-pixel. The results will be compared to the pitch angle function as measured in the GOODS field. This research was funded in part by NASA/EPSCoR.

  1. Method for fabricating pixelated silicon device cells

    DOEpatents

    Nielson, Gregory N.; Okandan, Murat; Cruz-Campa, Jose Luis; Nelson, Jeffrey S.; Anderson, Benjamin John

    2015-08-18

    A method, apparatus and system for flexible, ultra-thin, and high efficiency pixelated silicon or other semiconductor photovoltaic solar cell array fabrication is disclosed. A structure and method of creation for a pixelated silicon or other semiconductor photovoltaic solar cell array with interconnects is described using a manufacturing method that is simplified compared to previous versions of pixelated silicon photovoltaic cells that require more microfabrication steps.

  2. Applying network theory to animal movements to identify properties of landscape space use.

    PubMed

    Bastille-Rousseau, Guillaume; Douglas-Hamilton, Iain; Blake, Stephen; Northrup, Joseph M; Wittemyer, George

    2018-04-01

    Network (graph) theory is a popular analytical framework to characterize the structure and dynamics among discrete objects and is particularly effective at identifying critical hubs and patterns of connectivity. The identification of such attributes is a fundamental objective of animal movement research, yet network theory has rarely been applied directly to animal relocation data. We develop an approach that allows the analysis of movement data using network theory by defining occupied pixels as nodes and connection among these pixels as edges. We first quantify node-level (local) metrics and graph-level (system) metrics on simulated movement trajectories to assess the ability of these metrics to pull out known properties in movement paths. We then apply our framework to empirical data from African elephants (Loxodonta africana), giant Galapagos tortoises (Chelonoidis spp.), and mule deer (Odocoileous hemionus). Our results indicate that certain node-level metrics, namely degree, weight, and betweenness, perform well in capturing local patterns of space use, such as the definition of core areas and paths used for inter-patch movement. These metrics were generally applicable across data sets, indicating their robustness to assumptions structuring analysis or strategies of movement. Other metrics capture local patterns effectively, but were sensitive to specified graph properties, indicating case specific applications. Our analysis indicates that graph-level metrics are unlikely to outperform other approaches for the categorization of general movement strategies (central place foraging, migration, nomadism). By identifying critical nodes, our approach provides a robust quantitative framework to identify local properties of space use that can be used to evaluate the effect of the loss of specific nodes on range wide connectivity. Our network approach is intuitive, and can be implemented across imperfectly sampled or large-scale data sets efficiently, providing a framework for conservationists to analyze movement data. Functions created for the analyses are available within the R package moveNT. © 2018 by the Ecological Society of America.
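
    In the spirit of the pixel-as-node construction described above, the sketch below turns a toy relocation track into a graph with networkx and computes two of the node-level metrics mentioned (degree and betweenness). It is an illustrative Python analogue, not the moveNT implementation.

        import networkx as nx

        def trajectory_graph(track, pixel=1.0):
            """Occupied grid pixels become nodes; consecutive moves between
            different pixels become weighted edges."""
            G = nx.Graph()
            cells = [(int(x // pixel), int(y // pixel)) for x, y in track]
            for a, b in zip(cells[:-1], cells[1:]):
                if a == b:
                    continue
                if G.has_edge(a, b):
                    G[a][b]["weight"] += 1
                else:
                    G.add_edge(a, b, weight=1)
            return G

        # Two "core areas" linked by a corridor pixel that is used repeatedly.
        track = [(0, 0), (0, 1), (1, 1), (0, 0), (0, 1), (5, 1), (9, 1), (9, 2),
                 (9, 1), (5, 1), (0, 1), (0, 0)]
        G = trajectory_graph(track)
        print(nx.degree_centrality(G))
        print(nx.betweenness_centrality(G))   # the corridor pixel (5, 1) scores high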

  3. Effect of Using 2 mm Voxels on Observer Performance for PET Lesion Detection

    NASA Astrophysics Data System (ADS)

    Morey, A. M.; Noo, Frédéric; Kadrmas, Dan J.

    2016-06-01

    Positron emission tomography (PET) images are typically reconstructed with an in-plane pixel size of approximately 4 mm for cancer imaging. The objective of this work was to evaluate the effect of using smaller pixels on general oncologic lesion-detection. A series of observer studies was performed using experimental phantom data from the Utah PET Lesion Detection Database, which modeled whole-body FDG PET cancer imaging of a 92 kg patient. The data comprised 24 scans over 4 days on a Biograph mCT time-of-flight (TOF) PET/CT scanner, with up to 23 lesions (diam. 6-16 mm) distributed throughout the phantom each day. Images were reconstructed with 2.036 mm and 4.073 mm pixels using ordered-subsets expectation-maximization (OSEM) both with and without point spread function (PSF) modeling and TOF. Detection performance was assessed using the channelized non-prewhitened numerical observer with localization receiver operating characteristic (LROC) analysis. Tumor localization performance and the area under the LROC curve were then analyzed as functions of the pixel size. In all cases, the images with 2 mm pixels provided higher detection performance than those with 4 mm pixels. The degree of improvement from the smaller pixels was larger than that offered by PSF modeling for these data, and provided roughly half the benefit of using TOF. Key results were confirmed by two human observers, who read subsets of the test data. This study suggests that a significant improvement in tumor detection performance for PET can be attained by using smaller voxel sizes than commonly used at many centers. The primary drawback is a 4-fold increase in reconstruction time and data storage requirements.

  4. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and least mean square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for accessing the pixels. Moreover, the proposed representation allows the use of a discrete cosine transform of simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on the triangular mesh. The results of the method's application are presented. It is shown that the advantage of the proposed method is a combination of the flexibility of image-adaptive irregular meshes with the simple form of pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The described method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).
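
    As a small, self-contained illustration of linear interpolation in local (barycentric) coordinates within one mesh triangle (the triangular-number indexing and equilateral subdivision of the paper are not reproduced):

        import numpy as np

        def barycentric(p, a, b, c):
            """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
            m = np.array([[b[0] - a[0], c[0] - a[0]],
                          [b[1] - a[1], c[1] - a[1]]], dtype=float)
            l1, l2 = np.linalg.solve(m, np.asarray(p, dtype=float) - np.asarray(a, dtype=float))
            return 1.0 - l1 - l2, l1, l2

        def linear_interp(p, tri, values):
            """Plane (linear) interpolation of the three vertex values at p."""
            return sum(w * v for w, v in zip(barycentric(p, *tri), values))

        tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
        print(linear_interp((1.0, 1.0), tri, values=(10.0, 50.0, 90.0)))   # 40.0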

  5. High Dynamic Range Pixel Array Detector for Scanning Transmission Electron Microscopy.

    PubMed

    Tate, Mark W; Purohit, Prafull; Chamberlain, Darol; Nguyen, Kayla X; Hovden, Robert; Chang, Celesta S; Deb, Pratiti; Turgut, Emrah; Heron, John T; Schlom, Darrell G; Ralph, Daniel C; Fuchs, Gregory D; Shanks, Katherine S; Philipp, Hugh T; Muller, David A; Gruner, Sol M

    2016-02-01

    We describe a hybrid pixel array detector (electron microscope pixel array detector, or EMPAD) adapted for use in electron microscope applications, especially as a universal detector for scanning transmission electron microscopy. The 128×128 pixel detector consists of a 500 µm thick silicon diode array bump-bonded pixel-by-pixel to an application-specific integrated circuit. The in-pixel circuitry provides a 1,000,000:1 dynamic range within a single frame, allowing the direct electron beam to be imaged while still maintaining single electron sensitivity. A 1.1 kHz framing rate enables rapid data collection and minimizes sample drift distortions while scanning. By capturing the entire unsaturated diffraction pattern in scanning mode, one can simultaneously capture bright field, dark field, and phase contrast information, as well as being able to analyze the full scattering distribution, allowing true center of mass imaging. The scattering is recorded on an absolute scale, so that information such as local sample thickness can be directly determined. This paper describes the detector architecture, data acquisition system, and preliminary results from experiments with 80-200 keV electron beams.

  6. Local gray level S-curve transformation - A generalized contrast enhancement technique for medical images.

    PubMed

    Gandhamal, Akash; Talbar, Sanjay; Gajre, Suhas; Hani, Ahmad Fadzil M; Kumar, Dileep

    2017-04-01

    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues resulting in poor segmentation and errors in classification of tissues. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies the gray-level S-curve transformation technique locally in medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation technique that results in a curve similar to a sigmoid function through a pixel-to-pixel transformation. This curve essentially increases the difference between minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely, edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images. Copyright © 2017 Elsevier Ltd. All rights reserved.
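
    The sketch below is one plausible local form of such an S-curve mapping: each pixel is pushed through a logistic curve centred on its neighbourhood mean, which stretches gray-level differences around edges. The window size and gain are illustrative assumptions, not the values or exact formulation used in the paper.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_s_curve(img, size=15, gain=20.0):
            """Logistic (S-curve) gray-level mapping applied locally around the
            neighbourhood mean of each pixel."""
            x = img.astype(float) / 255.0
            centre = uniform_filter(x, size)                   # local mean = curve midpoint
            y = 1.0 / (1.0 + np.exp(-gain * (x - centre)))     # sigmoid transformation
            return (255.0 * y).astype(np.uint8)

        # A blurred step edge: the gray-level difference across it is stretched.
        row = np.r_[np.full(28, 100.0), np.linspace(100, 160, 8), np.full(28, 160.0)]
        edge = np.tile(row, (64, 1))
        out = local_s_curve(edge)
        print(edge[0, 36] - edge[0, 27], int(out[0, 36]) - int(out[0, 27]))   # 60 vs. ~112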

  7. The Structure and Dynamics of the Upper Chromosphere and Lower Transition Region as Revealed by the Subarcsecond VAULT Observations

    DTIC Science & Technology

    2010-06-28

    average Quiet Sun radiance measured at Earth as we did for the first flight. For the VAULT Quiet Sun level we used the peak of the histogram of the...region, and considering the median value for each pixel in time (from Figure 1): Quiet Sun (blue line): We select a region around the lower right...prominences, while the high end reaches the plage levels. Scattered around this Quiet Sun we find several cases of localized brightenings which may be

  8. Adaptive removal of background and white space from document images using seam categorization

    NASA Astrophysics Data System (ADS)

    Fillion, Claude; Fan, Zhigang; Monga, Vishal

    2011-03-01

    Document images are obtained regularly by rasterization of document content and as scans of printed documents. Resizing via background and white space removal is often desired for better consumption of these images, whether on displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content from document images. Document images are different from pictorial images in structure. They typically contain objects (text letters, pictures and graphics) separated by uniform background, which include both white paper space and other uniform color background. Pixels in uniform background regions are excellent candidates for deletion if resizing is required, as they introduce less change in document content and style, compared with deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document structural information and image quality.

  9. Method to improve cancerous lesion detection sensitivity in a dedicated dual-head scintimammography system

    DOEpatents

    Kieper, Douglas Arthur [Seattle, WA; Majewski, Stanislaw [Morgantown, WV; Welch, Benjamin L [Hampton, VA

    2012-07-03

    An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.
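
    One plausible reading of steps 1-4, sketched in Python for illustration (the flip axis, the breast mask, and the final pixel-by-pixel product are assumptions made to keep the example concrete):

        import numpy as np

        def scinti_contrast_map(det1, det2, breast_mask):
            """Mirror the second detector image so it co-registers with the first,
            normalise each image by its mean count per pixel over the breast area,
            and multiply the normalised images pixel-by-pixel."""
            det2_reg = np.flip(det2, axis=1)                 # co-registration by mirroring
            n1 = det1 / det1[breast_mask].mean()
            n2 = det2_reg / det2_reg[breast_mask].mean()
            return n1 * n2

        # Toy example: a small lesion with twice the uptake, seen by both detectors.
        rng = np.random.default_rng(2)
        base = rng.poisson(50, (64, 64)).astype(float)
        base[30:34, 30:34] *= 2.0
        mask = np.ones_like(base, dtype=bool)
        cmap = scinti_contrast_map(base, np.flip(base, axis=1), mask)
        print(cmap[30:34, 30:34].mean() / np.median(cmap))   # roughly 4: the squared uptake ratio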

  10. Method to improve cancerous lesion detection sensitivity in a dedicated dual-head scintimammography system

    DOEpatents

    Kieper, Douglas Arthur [Newport News, VA; Majewski, Stanislaw [Yorktown, VA; Welch, Benjamin L [Hampton, VA

    2008-10-28

    An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.

  11. Realistic full wave modeling of focal plane array pixels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Jorgenson, Roy E.

    Here, we investigate full-wave simulations of realistic implementations of multifunctional nanoantenna enabled detectors (NEDs). We focus on a 2x2 pixelated array structure that supports two wavelengths of operation. We design each resonating structure independently using full-wave simulations with periodic boundary conditions mimicking the whole infinite array. We then construct a supercell made of a 2x2 pixelated array with periodic boundary conditions mimicking the full NED; in this case, however, each pixel comprises 10-20 antennas per side. In this way, the cross-talk between contiguous pixels is accounted for in our simulations. We observe that, even though there are finite extent effects, the pixels work as designed, each responding at the respective wavelength of operation. This allows us to stress that realistic simulations of multifunctional NEDs need to be performed to verify the design functionality by taking into account finite extent and cross-talk effects.

  12. Oil Motion Control by an Extra Pinning Structure in Electro-Fluidic Display.

    PubMed

    Dou, Yingying; Tang, Biao; Groenewold, Jan; Li, Fahong; Yue, Qiao; Zhou, Rui; Li, Hui; Shui, Lingling; Henzen, Alex; Zhou, Guofu

    2018-04-06

    Oil motion control is the key to the optical performance of electro-fluidic displays (EFD). In this paper, we introduced an extra pinning structure (EPS) into the EFD pixel for the first time to control the oil motion inside. The pinning structure can be fabricated together with the pixel wall by a one-step lithography process. The effect of the relative location of the EPS in pixels on the oil motion was studied by a series of optoelectronic measurements. The EPS showed good control of the oil rupture position. A properly located EPS effectively guided the oil contraction direction, significantly accelerated the switching-on process, and suppressed oil overflow, without a decline in aperture ratio. An asymmetrically designed EPS off the diagonal is recommended. This study provides a novel and facile way to control oil motion within an EFD pixel in both direction and timescale.

  13. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, Justin, E-mail: justin.solomon@duke.edu; Samei, Ehsan

    2014-09-15

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were acquired for each background type and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms with edge pixels showing higher noise magnitude compared to pixels in more homogenous regions. For pixels in uniform regions, noise magnitude was reduced by an average of 60% in SAFIRE images compared to FBP. However, for edge pixels, noise magnitude ranged from 20% higher to 40% lower in SAFIRE images compared to FBP. SAFIRE images of the lung phantom exhibited distinct regions with varying noise texture (i.e., noise autocorrelation/power spectra). Conclusions: Quantum noise properties observed in uniform phantoms may not be representative of those in actual patients for nonlinear reconstruction algorithms. Anatomical texture should be considered when evaluating the performance of CT systems that use such nonlinear algorithms.

  14. Using pixel intensity as a self-regulating threshold for deterministic image sampling in Milano Retinex: the T-Rex algorithm

    NASA Astrophysics Data System (ADS)

    Lecca, Michela; Modena, Carla Maria; Rizzi, Alessandro

    2018-01-01

    Milano Retinexes are spatial color algorithms, part of the Retinex family, usually employed for image enhancement. They modify the color of each pixel taking into account the surrounding colors and their positions, in this way capturing the local spatial color distribution relevant to image enhancement. We present T-Rex (from the words threshold and Retinex), an implementation of Milano Retinex whose main novelty is the use of the pixel intensity as a self-regulating threshold to deterministically sample local color information. The experiments, carried out on real-world pictures, show that the image enhancement performance of T-Rex is in line with that of the Milano Retinex family: T-Rex increases the brightness, the contrast, and the flatness of the channel distributions of the input image, making the content of pictures acquired under difficult light conditions more intelligible.

  15. Geology of the Icy Galilean Satellites: Understanding Crustal Processes and Geologic Histories Through the JIMO Mission

    NASA Technical Reports Server (NTRS)

    Figueredo, P. H.; Tanaka, K.; Senske, D.; Greeley, R.

    2003-01-01

    Knowledge of the geology, style, and time history of crustal processes on the icy Galilean satellites is necessary for understanding how these bodies formed and evolved. Data from the Galileo mission have provided a basis for detailed geologic and geophysical analysis. Due to constrained downlink, Galileo Solid State Imaging (SSI) data consisted of global coverage at ~1 km/pixel ground sampling and representative, widely spaced regional maps at ~200 m/pixel. These two data sets provide a general means to extrapolate units identified at higher resolution to lower resolution data. A sampling of key sites at much higher resolution (tens of m/pixel) allows evaluation of processes on local scales. We are currently producing the first global geological map of Europa using Galileo global and regional-scale data. This work is demonstrating the necessity and utility of planet-wide contiguous image coverage at global, regional, and local scales.

  16. Performance of a 512 x 512 Gated CMOS Imager with a 250 ps Exposure Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teruya, A T; Moody, J D; Hsing, W W

    2012-10-01

    We describe the performance of a 512x512 gated CMOS read out integrated circuit (ROIC) with a 250 ps exposure time. A low-skew, H-tree trigger distribution system is used to locally generate individual pixel gates in each 8x8 neighborhood of the ROIC. The temporal width of the gate is voltage controlled and user selectable via a precision potentiometer. The gating implementation was first validated in optical tests of a 64x64 pixel prototype ROIC developed as a proof-of-concept during the early phases of the development program. The layout of the H-Tree addresses each quadrant of the ROIC independently and admits operation of the ROIC in two modes. If “common mode” triggering is used, the camera provides a single 512x512 image. If independent triggers are used, the camera can provide up to four 256x256 images with a frame separation set by the trigger intervals. The ROIC design includes small (sub-pixel) optical photodiode structures to allow test and characterization of the ROIC using optical sources prior to bump bonding. Reported test results were obtained using short pulse, second harmonic Ti:Sapphire laser systems operating at λ ~ 400 nm at sub-ps pulse widths.

  17. Assessing the impact of background spectral graph construction techniques on the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.; Albano, James A.; Basener, William F.

    2012-06-01

    Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels whose material content is incongruous with the background material in the scene. Typically, the application involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing these algorithms is determining which pixels initially constitute the background material within an image. The topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological model of the background in the image scene, and uses codensity to measure deviation from this background. In TAD, the initial graph theory structure of the image data is created by connecting an edge between any two pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of proximity graph is among the most well-known approaches to building a geometric graph based on a given set of data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative test of the performance of TAD across four different constructs of the initial graph: mutual k-nearest neighbor graph, sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in TAD.

  18. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
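    The colormap-sorting idea above lends itself to a small illustration. The sketch below is only a hedged example, not the authors' algorithm: it reorders a random palette by luminance so that numerically adjacent indices point to similar colors, which restores pixel-to-pixel correlation in the index array for a predictive coder. All names and data here are hypothetical.

    ```python
    import numpy as np

    def sort_colormap_by_luminance(palette):
        """Reorder an (N, 3) RGB palette so adjacent indices have similar colors.

        Returns the sorted palette and a lookup table mapping old indices to new ones.
        Luminance ordering is just one simple heuristic; the paper studies colormap
        sorting in general.
        """
        luminance = palette @ np.array([0.299, 0.587, 0.114])   # ITU-R BT.601 weights
        order = np.argsort(luminance)                           # old index at each new position
        lut = np.empty_like(order)
        lut[order] = np.arange(len(order))                      # old index -> new index
        return palette[order], lut

    # Hypothetical color-mapped image: 8-bit indices into a 256-entry palette.
    rng = np.random.default_rng(0)
    palette = rng.integers(0, 256, size=(256, 3)).astype(float)
    indices = rng.integers(0, 256, size=(64, 64))

    sorted_palette, lut = sort_colormap_by_luminance(palette)
    remapped = lut[indices]          # image now references the sorted palette

    # Adjacent remapped indices that point to similar colors make simple
    # predictive coding (e.g., horizontal differences) more effective.
    residual = np.diff(remapped.astype(int), axis=1)
    ```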

  19. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    NASA Astrophysics Data System (ADS)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or `global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches and a dictionary corresponds to a collection of functions or `atoms' describing the slowness in each patch. These functions could for example be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely, but irregularly sampled synthetic seismic images.
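    As a rough illustration of the patch-level step described above (not the authors' implementation), the sketch below fits one slowness patch as a sparse combination of dictionary atoms using orthogonal matching pursuit; the dictionary here is a random placeholder rather than a learned one, and all sizes are arbitrary assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(1)

    # Hypothetical setup: 8x8 slowness patches flattened to length-64 vectors,
    # and an overcomplete dictionary of 128 atoms (columns).
    patch = rng.normal(size=64)                 # stand-in for a patch cut from the global image
    dictionary = rng.normal(size=(64, 128))
    dictionary /= np.linalg.norm(dictionary, axis=0)   # unit-norm atoms

    # Sparse fit: represent the patch with at most 5 atoms.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
    omp.fit(dictionary, patch)
    coefficients = omp.coef_                    # sparse coefficient vector (length 128)
    patch_estimate = dictionary @ coefficients

    # In the paper's loop, such patch estimates are averaged into a reference
    # image that regularizes the next global travel-time inversion.
    ```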

  20. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets, MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options to retrieve training image data: (1) PNG-formatted image files on local file system; (2) pushing pixel arrays from image files into a single HDF5 file on local file system; (3) in-memory arrays to hold the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge tree based key-value storage; and (5) loading the training data into LMDB, a B+tree based key-value storage. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value storage based storage systems. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements on training time, this study provides in-depth analysis on the cause of performance advantages/disadvantages of each back-end to train deep neural networks. We envision the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
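    To make the storage comparison concrete, here is a hedged sketch (paths, key layout, and image sizes are hypothetical, and this is not the Caffe code used in the study) of reading training samples back from an LMDB environment versus from individual files; the key-value route pays one open cost and then performs many cheap reads.

    ```python
    import os
    import lmdb          # B+tree based key-value store used as one of the back-ends
    import numpy as np

    DB_PATH = "train_lmdb"        # hypothetical database directory
    IMG_DIR = "train_png"         # hypothetical directory of individual image files

    def write_example_db(n=100, shape=(3, 32, 32)):
        """Store n random images as raw bytes under zero-padded integer keys."""
        env = lmdb.open(DB_PATH, map_size=1 << 30)
        with env.begin(write=True) as txn:
            for i in range(n):
                img = np.random.randint(0, 256, size=shape, dtype=np.uint8)
                txn.put(f"{i:08d}".encode(), img.tobytes())
        env.close()

    def read_all_from_lmdb(shape=(3, 32, 32)):
        """Sequentially scan the database; one open, many cheap reads."""
        env = lmdb.open(DB_PATH, readonly=True, lock=False)
        with env.begin() as txn:
            for key, value in txn.cursor():
                yield np.frombuffer(value, dtype=np.uint8).reshape(shape)
        env.close()

    def read_all_from_files():
        """File-per-image baseline: every sample costs an open/read/close."""
        for name in sorted(os.listdir(IMG_DIR)):
            with open(os.path.join(IMG_DIR, name), "rb") as f:
                yield f.read()

    write_example_db()
    n_loaded = sum(1 for _ in read_all_from_lmdb())
    ```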

  1. Design and Simulations of an Energy Harvesting Capable CMOS Pixel for Implantable Retinal Prosthesis

    NASA Astrophysics Data System (ADS)

    Ansaripour, Iman; Karami, Mohammad Azim

    2017-12-01

    A new pixel is designed with the capability of imaging and energy harvesting for the retinal prosthesis implant in a 0.18 µm standard Complementary Metal Oxide Semiconductor technology. The pixel conversion gain and dynamic range are 2.05 µV/e⁻ and 63.2 dB, respectively. The power consumption is 53.12 pW per pixel, while the energy harvesting performance is 3.87 nW per pixel at an illuminance of 60 klx. These results have been obtained using post-layout simulation. In the proposed pixel structure, the high power production capability in energy harvesting mode covers the required energy by using all available p-n junction photo-generated currents.

  2. Supervised pixel classification using a feature space derived from an artificial visual system

    NASA Technical Reports Server (NTRS)

    Baxter, Lisa C.; Coggins, James M.

    1991-01-01

    Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not on image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.

  3. Super-pixel extraction based on multi-channel pulse coupled neural network

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Super-pixel extraction techniques group pixels to form over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, image description based on super-pixels requires less computation and is easier to perceive, and it has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model, which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel feature information and its contextual spatial structural information. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm: the image is first divided into blocks of the same size; then, for each image block, the pixels adjacent to each seed with similar color are classified into a group, called a super-pixel; finally, post-processing is applied to those pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting parameters, and has good potential for super-pixel extraction.

  4. Pixel-based absorption correction for dual-tracer fluorescence imaging of receptor binding potential

    PubMed Central

    Kanick, Stephen C.; Tichauer, Kenneth M.; Gunn, Jason; Samkoe, Kimberley S.; Pogue, Brian W.

    2014-01-01

    Ratiometric approaches to quantifying molecular concentrations have been used for decades in microscopy, but have rarely been exploited in vivo until recently. One dual-tracer approach can utilize an untargeted reference tracer to account for non-specific uptake of a receptor-targeted tracer, and ultimately estimate receptor binding potential quantitatively. However, interpretation of the relative dynamic distribution kinetics is confounded by differences in local tissue absorption at the wavelengths used for each tracer. This study simulated the influence of absorption on fluorescence emission intensity and depth sensitivity at typical near-infrared fluorophore wavelength bands near 700 and 800 nm in mouse skin in order to correct for these tissue optical differences in signal detection. Changes in blood volume [1-3%] and hemoglobin oxygen saturation [0-100%] were demonstrated to introduce substantial distortions to receptor binding estimates (error > 30%), whereas sampled depth was relatively insensitive to wavelength (error < 6%). In response, a pixel-by-pixel normalization of tracer inputs immediately post-injection was found to account for spatial heterogeneities in local absorption properties. Application of the pixel-based normalization method to an in vivo imaging study demonstrated significant improvement, as compared with a reference tissue normalization approach. PMID:25360349

  5. Spatially resolved heat release rate measurements in turbulent premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayoola, B.O.; Kaminski, C.F.; Balachandran, R.

    Heat release rate is a fundamental property of great importance for the theoretical and experimental elucidation of unsteady flame behaviors such as combustion noise, combustion instabilities, and pulsed combustion. Investigations of such thermoacoustic interactions require a reliable indicator of heat release rate capable of resolving spatial structures in turbulent flames. Traditionally, heat release rate has been estimated via OH or CH radical chemiluminescence; however, chemiluminescence suffers from being a line-of-sight technique with limited capability for resolving small-scale structures. In this paper, we report spatially resolved two-dimensional measurements of a quantity closely related to heat release rate. The diagnostic technique uses simultaneous OH and CH2O planar laser-induced fluorescence (PLIF), and the pixel-by-pixel product of the OH and CH2O PLIF signals has previously been shown to correlate well with local heat release rates. Results from this diagnostic technique, which we refer to as heat release rate imaging (HR imaging), are compared with traditional OH chemiluminescence measurements in several flames. Studies were performed in lean premixed ethylene flames stabilized between opposed jets and with a bluff body. Correlations between bulk strain rates and local heat release rates were obtained and the effects of curvature on heat release rate were investigated. The results show that the heat release rate tends to increase with increasing negative curvature for the flames investigated, for which Lewis numbers are greater than unity. This correlation becomes more pronounced as the flame gets closer to global extinction.
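    The pixel-by-pixel product at the core of the HR-imaging diagnostic is simple to express. The sketch below is a generic illustration with synthetic arrays (no real PLIF calibration, registration, or background correction), not the authors' processing chain.

    ```python
    import numpy as np

    def heat_release_proxy(oh_plif, ch2o_plif):
        """Pixel-by-pixel product of OH and CH2O PLIF images.

        Both inputs are assumed to be background-subtracted, co-registered 2D arrays;
        their product has been reported to correlate with local heat release rate.
        """
        oh = np.asarray(oh_plif, dtype=float)
        ch2o = np.asarray(ch2o_plif, dtype=float)
        if oh.shape != ch2o.shape:
            raise ValueError("OH and CH2O images must be co-registered (same shape)")
        return oh * ch2o

    # Synthetic stand-ins for a pair of simultaneous PLIF frames.
    rng = np.random.default_rng(2)
    oh = rng.random((256, 256))
    ch2o = rng.random((256, 256))
    hr_image = heat_release_proxy(oh, ch2o)
    ```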

  6. Edge Probability and Pixel Relativity-Based Speckle Reducing Anisotropic Diffusion.

    PubMed

    Mishra, Deepak; Chaudhury, Santanu; Sarkar, Mukul; Soin, Arvinder Singh; Sharma, Vivek

    2018-02-01

    Anisotropic diffusion filters are one of the best choices for speckle reduction in ultrasound images. These filters control the diffusion flux flow using local image statistics and provide the desired speckle suppression. However, inefficient use of edge characteristics results in either an oversmoothed image or an image containing misinterpreted spurious edges. As a result, the diagnostic quality of the images becomes a concern. To alleviate such problems, a novel anisotropic diffusion-based speckle reducing filter is proposed in this paper. A probability density function of the edges, along with pixel relativity information, is used to control the diffusion flux flow. The probability density function helps in removing the spurious edges and the pixel relativity reduces the oversmoothing effects. Furthermore, the filtering is performed in the superpixel domain to reduce the execution time, wherein a minimum of 15% of the total number of image pixels can be used. For performance evaluation, 31 frames of three synthetic images and 40 real ultrasound images are used. In most of the experiments, the proposed filter shows better performance than the state-of-the-art filters in terms of the speckle region's signal-to-noise ratio and mean square error. It also shows comparable performance for the figure of merit and the structural similarity index measure. Furthermore, in the subjective evaluation, performed by expert radiologists, the proposed filter's outputs are preferred for the improved contrast and sharpness of the object boundaries. Hence, the proposed filtering framework is suitable for reducing unwanted speckle and improving the quality of ultrasound images.

  7. Text image authenticating algorithm based on MD5-hash function and Henon map

    NASA Astrophysics Data System (ADS)

    Wei, Jinqiao; Wang, Ying; Ma, Xiaoxue

    2017-07-01

    In order to meet the evidentiary requirements of text images, this paper proposes a fragile watermarking algorithm based on a hash function and the Henon map. The algorithm divides a text image into blocks, identifies the flippable and non-flippable pixels of every block according to the PSD, generates a watermark from the non-flippable pixels with MD5-Hash, encrypts the watermark with the Henon map, and selects the embedding blocks. The simulation results show that the algorithm, which has good tampering-localization ability, can be used to authenticate text images and to carry out forensic analysis of their authenticity and integrity.
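    A hedged sketch of the two ingredients named above, MD5-based watermark generation and Henon-map encryption, is given below; the block partitioning, PSD-based pixel selection, and embedding steps are omitted, and all names and parameters are illustrative rather than taken from the paper.

    ```python
    import hashlib
    import numpy as np

    def md5_watermark(block_pixels: np.ndarray) -> np.ndarray:
        """128 watermark bits derived from the non-flippable pixels of one block."""
        digest = hashlib.md5(block_pixels.tobytes()).digest()
        return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

    def henon_keystream(n_bits: int, x0=0.1, y0=0.3, a=1.4, b=0.3) -> np.ndarray:
        """Binary keystream from the Henon map x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n."""
        x, y = x0, y0
        bits = np.empty(n_bits, dtype=np.uint8)
        for i in range(n_bits):
            x, y = 1.0 - a * x * x + y, b * x     # both updates use the previous state
            bits[i] = 1 if x > 0 else 0           # simple thresholding of the chaotic orbit
        return bits

    # Generate and encrypt a watermark for one (hypothetical) binary text-image block.
    block = (np.random.default_rng(3).random((32, 32)) > 0.5).astype(np.uint8)
    watermark = md5_watermark(block)
    encrypted = watermark ^ henon_keystream(watermark.size)   # XOR with the chaotic keystream
    ```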

  8. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The"scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may spatially vary. In CMOS cameras, the charge to voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 to 1,000 photons / pixel per frame to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.

  9. High sensitivity, solid state neutron detector

    DOEpatents

    Stradins, Pauls; Branz, Howard M; Wang, Qi; McHugh, Harold R

    2015-05-12

    An apparatus (200) for detecting slow or thermal neutrons (160). The apparatus (200) includes an alpha particle-detecting layer (240) that is a hydrogenated amorphous silicon p-i-n diode structure. The apparatus includes a bottom metal contact (220) and a top metal contact (250), with the diode structure (240) positioned between the two contacts (220, 250) to facilitate detection of alpha particles (170). The apparatus (200) includes a neutron conversion layer (230) formed of a material containing boron-10 isotopes. The top contact (250) is pixelated, with each contact pixel extending to or proximate to an edge of the apparatus to facilitate electrical contacting. The contact pixels have elongated bodies to allow them to extend across the apparatus surface (242), with each pixel having a small surface area to match capacitance based upon the current spike detecting circuit or amplifier connected to each pixel. The neutron conversion layer (860) may be deposited on the contact pixels (830), such as by inkjet printing of nanoparticle ink.

  10. High sensitivity, solid state neutron detector

    DOEpatents

    Stradins, Pauls; Branz, Howard M.; Wang, Qi; McHugh, Harold R.

    2013-10-29

    An apparatus (200) for detecting slow or thermal neutrons (160), including an alpha particle-detecting layer (240) that is a hydrogenated amorphous silicon p-i-n diode structure. The apparatus includes a bottom metal contact (220) and a top metal contact (250), with the diode structure (240) positioned between the two contacts (220, 250) to facilitate detection of alpha particles (170). The apparatus (200) includes a neutron conversion layer (230) formed of a material containing boron-10 isotopes. The top contact (250) is pixelated, with each contact pixel extending to or proximate to an edge of the apparatus to facilitate electrical contacting. The contact pixels have elongated bodies to allow them to extend across the apparatus surface (242), with each pixel having a small surface area to match capacitance based upon the current spike detecting circuit or amplifier connected to each pixel. The neutron conversion layer (860) may be deposited on the contact pixels (830), such as by inkjet printing of nanoparticle ink.

  11. Security of fragile authentication watermarks with localization

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica

    2002-04-01

    In this paper, we study the security of fragile image authentication watermarks that can localize tampered areas. We start by comparing the goals, capabilities, and advantages of image authentication based on watermarking and cryptography. Then we point out some common security problems of current fragile authentication watermarks with localization and classify attacks on authentication watermarks into five categories. By investigating the attacks and vulnerabilities of current schemes, we propose a variation of the Wong scheme [18] that is fast, simple, cryptographically secure, and resistant to all known attacks, including the Holliman-Memon attack [9]. In the new scheme, a special symmetry structure in the logo is used to authenticate the block content, while the logo itself carries information about the block origin (block index, the image index or time stamp, author ID, etc.). Because the authentication of the content and its origin are separated, it is possible to easily identify swapped blocks between images and accurately detect cropped areas, while being able to accurately localize tampered pixels.

  12. Quantitative evaluation of software packages for single-molecule localization microscopy.

    PubMed

    Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael

    2015-08-01

    The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.

  13. Color filter array pattern identification using variance of color difference image

    NASA Astrophysics Data System (ADS)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel records only one color, since the image sensor can measure only one color per pixel. Therefore, the missing color values are filled in using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered, and this pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming that the color filter array pattern is known. We present a method for identifying the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, a color difference block is constructed to emphasize the difference between the original and the interpolated pixels. The variance of the color difference image is proposed as a measure for estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.

  14. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or for model correlation and updating of larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurements and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking. However, they typically require a speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for the structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little user supervision and calibration. First, a multi-scale image processing method is applied to the frames of the video of a vibrating structure to extract the local pixel phases that encode local structural vibration, establishing a full-field spatiotemporal motion matrix. Then a high-spatial-dimensional, yet low-modal-dimensional, over-complete model is used to represent the extracted full-field motion matrix by modal superposition, and this representation is physically interpreted and manipulated by a family of unsupervised learning models and techniques. Thus, the proposed method is able to blindly extract modal frequencies, damping ratios, and full-field (as many points as the pixel number of the video frame) mode shapes from line-of-sight video measurements of the structure. The method is validated by laboratory experiments on a bench-scale building structure and a cantilever beam. Its ability for output-only (video measurements) identification and visualization of the weakly excited mode is demonstrated, and several issues with its implementation are discussed.
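    A heavily simplified, hedged sketch of the final decomposition stage is shown below: given a full-field motion matrix (pixels x frames), PCA reduces the dimension and a blind source separation step (FastICA here, standing in for one of several possible unsupervised learning models) separates it into mode shapes and modal coordinates. The phase extraction from video frames, which the paper performs with multi-scale filtering, is not reproduced; all data are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    def blind_modal_decomposition(motion_matrix, n_modes):
        """Decompose a (n_pixels, n_frames) motion matrix into mode shapes and coordinates.

        Returns mode_shapes (n_pixels, n_modes) and modal_coords (n_modes, n_frames).
        This is a generic PCA+ICA pipeline illustrating blind output-only modal
        identification; it is not the paper's exact algorithm.
        """
        pca = PCA(n_components=n_modes)
        reduced = pca.fit_transform(motion_matrix.T)          # (n_frames, n_modes)
        ica = FastICA(n_components=n_modes, random_state=0)
        modal_coords = ica.fit_transform(reduced).T           # (n_modes, n_frames)
        # Mode shapes follow by least squares: motion ≈ shapes @ coords.
        shapes_t, *_ = np.linalg.lstsq(modal_coords.T, motion_matrix.T, rcond=None)
        return shapes_t.T, modal_coords

    # Synthetic two-mode structure observed at 500 "pixels" over 1024 frames.
    t = np.linspace(0, 4, 1024)
    shapes = np.random.default_rng(5).normal(size=(500, 2))
    coords = np.vstack([np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 7.5 * t)])
    motion = shapes @ coords + 0.01 * np.random.default_rng(6).normal(size=(500, 1024))

    est_shapes, est_coords = blind_modal_decomposition(motion, n_modes=2)
    # Modal frequencies can then be read from the spectra of est_coords, and
    # damping ratios estimated from their decay or half-power bandwidth.
    ```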

  15. Computational imaging with a single-pixel detector and a consumer video projector

    NASA Astrophysics Data System (ADS)

    Sych, D.; Aksenov, M.

    2018-02-01

    Single-pixel imaging is a novel, rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operating modular single-pixel imaging system. Light patterns in our setup are created with the help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. We also discuss the potential use of the single-pixel imaging system for quantum applications.
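    The measurement model behind single-pixel imaging is easy to simulate. The sketch below uses a generic correlation-based reconstruction, not the authors' algorithms: random binary patterns (as a DMD-based projector could display) are projected onto a synthetic scene, the total intensity is recorded as the "single-pixel" signal, and the image is recovered by correlating the measurements with the patterns.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical 32x32 scene and a set of random binary illumination patterns.
    size = 32
    scene = np.zeros((size, size))
    scene[8:24, 12:20] = 1.0                       # a simple bright rectangle

    n_patterns = 4000
    patterns = rng.integers(0, 2, size=(n_patterns, size, size)).astype(float)

    # Single-pixel (bucket) measurements: total light collected for each pattern.
    measurements = np.einsum("nij,ij->n", patterns, scene)

    # Correlation (differential ghost-imaging style) reconstruction.
    recon = np.einsum("n,nij->ij", measurements - measurements.mean(), patterns) / n_patterns
    recon -= recon.min()
    recon /= recon.max()                           # normalized estimate of the scene
    ```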

  16. The fundamentals of average local variance--Part II: Sampling simple regular patterns with optical imagery.

    PubMed

    Bøcher, Peder Klith; McCloy, Keith R

    2006-02-01

    In this investigation, the characteristics of the average local variance (ALV) function are investigated through the acquisition of images at different spatial resolutions of constructed scenes of regular patterns of black and white squares. It is shown that the ALV plot consistently peaks at a spatial resolution at which the pixels have a size corresponding to half the distance between scene objects, and that, under very specific conditions, it also peaks at a spatial resolution at which the pixel size corresponds to the whole distance between scene objects. It is argued that the peak at the object distance, when present, is an expression of the Nyquist sample rate. The presence of this peak is hence shown to be a function of the matching between the phase of the scene pattern and the phase of the sample grid, i.e., the image. When these phases match, a clear and distinct peak is produced on the ALV plot. The fact that the peak at half the distance consistently occurs in the ALV plot is linked to the circumstance that the sampling interval (distance between pixels) and the extent of the sampling unit (size of pixels) are equal. Hence, at twice the Nyquist sampling rate, each fundamental period of the pattern is covered by four pixels; therefore, at least one pixel is always completely embedded within one pattern element, regardless of the sample-scene phase. If the objects in the scene are scattered with a distance larger than their extent, the peak will be related to the size by a factor larger than 1/2. This is suggested to be the explanation for the results presented by others that the ALV plot is related to scene-object size by a factor of 1/2-3/4.

  17. Fiber pixelated image database

    NASA Astrophysics Data System (ADS)

    Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke

    2016-08-01

    Imaging of physically inaccessible parts of the body, such as the colon, at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on imaging fiber bundles are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited the usage and adaptability of their work. A database of pixelated images is therefore required to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, consisting of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help to researchers working on comb structure removal algorithms.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philipp, Hugh T., E-mail: htp2@cornell.edu; Tate, Mark W.; Purohit, Prafull

    Modern storage rings are readily capable of providing intense x-ray pulses, tens of picoseconds in duration, millions of times per second. Exploiting the temporal structure of these x-ray sources opens avenues for studying rapid structural changes in materials. Many processes (e.g. crack propagation, deformation on impact, turbulence, etc.) differ in detail from one sample trial to the next and would benefit from the ability to record successive x-ray images with single x-ray sensitivity while framing at 5 to 10 MHz rates. To this end, we have pursued the development of fast x-ray imaging detectors capable of collecting bursts of images that enable the isolation of single synchrotron bunches and/or bunch trains. The detector technology used is the hybrid pixel array detector (PAD) with a charge integrating front-end, and high-speed, in-pixel signal storage elements. A 384×256 pixel version, the Keck-PAD, with 150 µm × 150 µm pixels and 8 dedicated in-pixel storage elements is operational, has been tested at CHESS, and has collected data for compression wave studies. An updated version with 27 dedicated storage capacitors and identical pixel size has been fabricated.

  19. Multiparametric dynamic contrast-enhanced ultrasound imaging of prostate cancer.

    PubMed

    Wildeboer, Rogier R; Postema, Arnoud W; Demi, Libertario; Kuenen, Maarten P J; Wijkstra, Hessel; Mischi, Massimo

    2017-08-01

    The aim of this study is to improve the accuracy of dynamic contrast-enhanced ultrasound (DCE-US) for prostate cancer (PCa) localization by means of a multiparametric approach. Thirteen different parameters related to either perfusion or dispersion were extracted pixel-by-pixel from 45 DCE-US recordings in 19 patients referred for radical prostatectomy. Multiparametric maps were retrospectively produced using a Gaussian mixture model algorithm. These were subsequently evaluated on their pixel-wise performance in classifying 43 benign and 42 malignant histopathologically confirmed regions of interest, using a prostate-based leave-one-out procedure. The combination of the spatiotemporal correlation (r), mean transit time (μ), curve skewness (κ), and peak time (PT) yielded an accuracy of 81% ± 11%, which was higher than the best performing single parameters: r (73%), μ (72%), and wash-in time (72%). The negative predictive value increased to 83% ± 16% from 70%, 69% and 67%, respectively. Pixel inclusion based on the confidence level boosted these measures to 90% with half of the pixels excluded, but without disregarding any prostate or region. Our results suggest multiparametric DCE-US analysis might be a useful diagnostic tool for PCa, possibly supporting future targeting of biopsies or therapy. Application in other types of cancer can also be foreseen. • DCE-US can be used to extract both perfusion and dispersion-related parameters. • Multiparametric DCE-US performs better in detecting PCa than single-parametric DCE-US. • Multiparametric DCE-US might become a useful tool for PCa localization.
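    A hedged, minimal sketch of the multiparametric combination step is given below: pixel-wise parameter vectors (stand-ins for r, mu, kappa, and peak time) are clustered with a two-component Gaussian mixture model, mirroring the type of algorithm named in the abstract. The data are synthetic, and no claim is made about the study's actual feature preparation or validation procedure.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(8)

    # Synthetic per-pixel parameter maps for a small field of view:
    # four DCE-US features per pixel (hypothetical stand-ins for r, mu, kappa, PT).
    h, w, n_features = 40, 40, 4
    features = rng.normal(size=(h, w, n_features))
    features[10:25, 10:25] += 1.5                  # region with systematically shifted parameters

    X = features.reshape(-1, n_features)
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)

    labels = gmm.predict(X).reshape(h, w)           # benign-like vs. malignant-like cluster
    confidence = gmm.predict_proba(X).max(axis=1).reshape(h, w)

    # Low-confidence pixels can be excluded, analogous to the confidence-based
    # pixel inclusion discussed in the abstract above.
    mask = confidence >= 0.9
    ```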

  20. Nonlinear decoding of a complex movie from the mammalian retina

    PubMed Central

    Deny, Stéphane; Martius, Georg

    2018-01-01

    Retina is a paradigmatic system for studying sensory encoding: the transformation of light into spiking activity of ganglion cells. The inverse problem, where stimulus is reconstructed from spikes, has received less attention, especially for complex stimuli that should be reconstructed “pixel-by-pixel”. We recorded around a hundred neurons from a dense patch in a rat retina and decoded movies of multiple small randomly-moving discs. We constructed nonlinear (kernelized and neural network) decoders that improved significantly over linear results. An important contribution to this was the ability of nonlinear decoders to reliably separate between neural responses driven by locally fluctuating light signals, and responses at locally constant light driven by spontaneous-like activity. This improvement crucially depended on the precise, non-Poisson temporal structure of individual spike trains, which originated in the spike-history dependence of neural responses. We propose a general principle by which downstream circuitry could discriminate between spontaneous and stimulus-driven activity based solely on higher-order statistical structure in the incoming spike trains. PMID:29746463

  1. Stereoscopic determination of all-sky altitude map of aurora using two ground-based Nikon DSLR cameras

    NASA Astrophysics Data System (ADS)

    Kataoka, R.; Miyoshi, Y.; Shigematsu, K.; Hampton, D.; Mori, Y.; Kubo, T.; Yamashita, A.; Tanaka, M.; Takahei, T.; Nakai, T.; Miyahara, H.; Shiokawa, K.

    2013-09-01

    A new stereoscopic measurement technique is developed to obtain an all-sky altitude map of aurora using two ground-based digital single-lens reflex (DSLR) cameras. Two identical full-color all-sky cameras were set up with an 8 km separation across the Chatanika area in Alaska (Poker Flat Research Range and Aurora Borealis Lodge) to find the localized emission height that maximizes the correlation of the apparent patterns in the corresponding pixels, applying a geographical coordinate transform. It is found that a typical ray structure of discrete aurora shows a broad altitude distribution above 100 km, while a typical patchy structure of pulsating aurora shows a narrow altitude distribution of less than 100 km. Because of the portability and low cost of the DSLR camera systems, the new technique may open a unique opportunity not only for scientists but also for night-sky photographers to contribute to aurora science and potentially form a dense observation network.

  2. The lawful imprecision of human surface tilt estimation in natural scenes

    PubMed Central

    2018-01-01

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. PMID:29384477

  3. The lawful imprecision of human surface tilt estimation in natural scenes.

    PubMed

    Kim, Seha; Burge, Johannes

    2018-01-31

    Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.

  4. Design of measuring system for wire diameter based on sub-pixel edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yudong; Zhou, Wang

    2016-09-01

    The light projection method is often used in wire-diameter measuring systems; it has a relatively simple structure and low cost, but its measuring accuracy is limited by the pixel size of the CCD. Using a CCD with a smaller pixel size can improve the measuring accuracy, but increases the cost and fabrication difficulty. In this paper, after a comparative analysis of a variety of sub-pixel edge detection algorithms, a polynomial fitting method is applied to the data processing of the wire-diameter measuring system to improve the measuring accuracy and enhance noise immunity. In the system design, the light projection method with an orthogonal structure is used for the optical detection part, which effectively reduces the error caused by line jitter during measurement. For the electrical part, an ARM Cortex-M4 microprocessor is used as the core of the circuit module, which not only drives the dual-channel linear CCD but also completes the sampling, processing, and storage of the CCD video signal. In addition, the ARM microprocessor handles the high-speed operation of the whole wire-diameter measuring system without any additional chips. The experimental results show that the sub-pixel edge detection algorithm based on polynomial fitting can compensate for the limited pixel size and significantly improve the precision of the wire-diameter measuring system, without increasing the hardware complexity of the entire system.
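    To illustrate the kind of sub-pixel refinement discussed above (a generic sketch, not the paper's exact fitting procedure), the code below locates an edge in a 1D intensity profile by fitting a parabola to the gradient magnitude around its peak; the parabola vertex gives a sub-pixel edge position, and the wire diameter follows from the distance between the two shadow edges. The profile and split point are synthetic assumptions.

    ```python
    import numpy as np

    def subpixel_edge(profile):
        """Sub-pixel edge location from a 1D intensity profile.

        Finds the integer gradient-magnitude peak, then fits a parabola through the
        peak and its two neighbors; the vertex offset refines the location.
        """
        grad = np.abs(np.gradient(profile.astype(float)))
        i = int(np.argmax(grad[1:-1])) + 1                  # keep away from the borders
        g_m, g_0, g_p = grad[i - 1], grad[i], grad[i + 1]
        denom = g_m - 2.0 * g_0 + g_p
        delta = 0.0 if denom == 0 else 0.5 * (g_m - g_p) / denom
        return i + delta

    # Synthetic shadow profile of a back-lit wire: bright, dark across the wire, bright.
    x = np.arange(0, 200, 1.0)
    profile = 1.0 - (1.0 / (1.0 + np.exp(-(x - 60.3))) - 1.0 / (1.0 + np.exp(-(x - 141.7))))

    left = subpixel_edge(profile[:100])
    right = 100 + subpixel_edge(profile[100:])
    diameter_pixels = right - left        # convert to length using the optical magnification
    ```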

  5. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, it requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11 µm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 µm.

  6. Pollen Image Recognition Based on DGDB-LBP Descriptor

    NASA Astrophysics Data System (ADS)

    Han, L. P.; Xie, Y. H.

    2018-01-01

    In this paper, we propose DGDB-LBP, a local binary pattern descriptor based on pixel blocks in the dominant gradient direction. Differing from traditional LBP and its variants, DGDB-LBP encodes by comparing the main gradient magnitude of each block rather than a single pixel value or the average of pixel blocks; in doing so, it reduces the influence of noise on pollen images and eliminates redundant and non-informative features. In order to fully describe the texture features of pollen images and analyze them at multiple scales, we propose a new sampling strategy, which uses three types of operators to extract the radial, angular, and multiple texture features under different scales. Considering that pollen images have some degree of rotation under the microscope, we propose an adaptive encoding direction, which is determined by the texture distribution of the local region. Experimental results on the Pollenmonitor dataset show that the average correct recognition rate of our method is superior to that of other pollen recognition methods proposed in recent years.
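    For reference, the standard LBP operator that DGDB-LBP departs from can be computed in a few lines; the sketch below uses scikit-image's implementation to build a rotation-invariant uniform LBP histogram as a texture feature. It is only a baseline illustration, not the proposed descriptor, and the input patch is synthetic.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray_image, radius=2, n_points=16):
        """Rotation-invariant uniform LBP histogram of a grayscale image."""
        codes = local_binary_pattern(gray_image, n_points, radius, method="uniform")
        n_bins = n_points + 2                       # uniform patterns plus the "non-uniform" bin
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist

    # Synthetic grayscale "pollen" patch used only to exercise the function.
    rng = np.random.default_rng(9)
    patch = (rng.random((128, 128)) * 255).astype(np.uint8)
    feature = lbp_histogram(patch)
    ```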

  7. Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers

    NASA Astrophysics Data System (ADS)

    Jiang, Chufan; Li, Beiwen; Zhang, Song

    2017-04-01

    This paper presents a method that can recover absolute phase pixel by pixel without embedding markers in the three phase-shifted fringe patterns, acquiring additional images, or introducing additional hardware components. The proposed three-dimensional (3D) absolute shape measurement technique includes the following major steps: (1) segment the measured object into different regions using rough a priori knowledge of the surface geometry; (2) artificially create phase maps at different z planes using the geometric constraints of the structured light system; (3) unwrap the phase pixel by pixel for each region by properly referring to the artificially created phase map; and (4) merge the unwrapped phases from all regions into a complete absolute phase map for 3D reconstruction. We demonstrate that conventional three-step phase-shifted fringe patterns can be used to create an absolute phase map pixel by pixel even for objects with a large depth range. We have successfully implemented our proposed computational framework to achieve absolute 3D shape measurement at 40 Hz.
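    The wrapped-phase computation that underlies three-step phase shifting is compact enough to show directly; the sketch below is a textbook formulation assuming phase shifts of 2π/3 (not the paper's full absolute-phase pipeline) and recovers the wrapped phase pixel by pixel from three synthetic fringe images.

    ```python
    import numpy as np

    def wrapped_phase_three_step(i1, i2, i3):
        """Wrapped phase from three fringe images with phase shifts of -2*pi/3, 0, +2*pi/3.

        For I_k = A + B*cos(phi + delta_k), the wrapped phase is
        phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3), valued in (-pi, pi].
        """
        i1, i2, i3 = (np.asarray(i, dtype=float) for i in (i1, i2, i3))
        return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

    # Synthetic fringe images over a tilted plane, to check that the formula inverts them.
    x = np.linspace(0, 8 * np.pi, 512)
    phi_true = np.tile(x, (256, 1))
    deltas = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
    i1, i2, i3 = (0.5 + 0.4 * np.cos(phi_true + d) for d in deltas)

    phi_wrapped = wrapped_phase_three_step(i1, i2, i3)
    # phi_wrapped equals phi_true modulo 2*pi; absolute recovery additionally needs
    # the geometric-constraint unwrapping described in the abstract.
    ```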

  8. Remote stereoscopic video play platform for naked eyes based on the Android system

    NASA Astrophysics Data System (ADS)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As quality of life has improved significantly, traditional 2D video technology can no longer satisfy people's desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server is used for transmission of different video formats and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open source project which provides solutions for streaming media, such as the RTSP protocol, and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player. The Android player, which has all the basic functions of an ordinary player and can play normal 2D video, serves as the basic structure for further development; RTSP is also implemented in this structure for communication. In order to achieve stereoscopic display, we perform pixel rearrangement in the player's decoding part. The decoding part is native code called through the JNI interface, so that we can extract video frames more effectively. The video formats that we process are left-right, top-bottom, and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring, and JNI calls. By employing these key technologies, the design has been completed. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meet people's requirements.

  9. 670-GHz Schottky Diode-Based Subharmonic Mixer with CPW Circuits and 70-GHz IF

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Goutam; Schlecht, Erich T.; Lee, Choonsup; Lin, Robert H.; Gill, John J.; Mehdi, Imran; Sin, Seth; Deal, William; Loi, Kwok K.; Nam, Peta; hide

    2012-01-01

    GaAs-based, sub-harmonically pumped Schottky diode mixers offer a number of advantages for array implementation in a heterodyne receiver system. Since the radio frequency (RF) and local oscillator (LO) signals are far apart in frequency, system design becomes much simpler. A proprietary planar GaAs Schottky diode process was developed that yields very low-parasitic anodes with cutoff frequencies in the tens of terahertz. This technology enables robust implementation of monolithic mixer and frequency multiplier circuits well into the terahertz frequency range. Using optical and e-beam lithography, and conventional epitaxial layer design with innovative use of GaAs membranes and metal beam leads, high-performance terahertz circuits can be designed with high fidelity. All of these mixers use metal waveguide structures for housing. Machined metal structures for RF and LO coupling hamper the integration of these mixers into multi-pixel heterodyne array receivers for spectroscopic and imaging applications. Moreover, the recent development of terahertz transistors on InP substrates provides an opportunity, for the first time, to have integrated amplifiers followed by Schottky diode mixers in a heterodyne receiver at these frequencies. Since the amplifiers are developed in a planar architecture to facilitate multi-pixel array implementation, it is quite important to find an alternative architecture to waveguide-based mixers.

  10. Medical image classification using spatial adjacent histogram based on adaptive local binary patterns.

    PubMed

    Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling

    2016-05-01

    Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image with a local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns within a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships causes poor performance in capturing discriminative features for complex samples, such as medical images obtained by microscope. To address this problem, in this paper we propose a novel method to improve local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations is performed on four medical datasets, showing that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Conditional random fields for pattern recognition applied to structured data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Skurikhin, Alexei

    In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.

  12. Conditional random fields for pattern recognition applied to structured data

    DOE PAGES

    Burr, Tom; Skurikhin, Alexei

    2015-07-14

    In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
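    The kind of output-domain structure described above can be made concrete with a tiny pairwise energy. The sketch below is a generic Potts-style smoothness term on a label grid, not the CRF models developed in the paper: it scores a labeling by how well it fits per-patch evidence plus a penalty for disagreement between neighboring patches, and all data are synthetic.

    ```python
    import numpy as np

    def labeling_energy(unary, labels, pairwise_weight=1.0):
        """Energy of a label grid under unary costs plus a Potts smoothness term.

        unary:  (H, W, n_labels) per-patch costs, e.g. -log P(label | patch features).
        labels: (H, W) integer label grid (e.g. 0 = "natural", 1 = "manmade").
        Lower energy corresponds to higher conditional probability in a grid CRF
        with Potts pairwise potentials.
        """
        h, w = labels.shape
        data_term = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
        # Potts penalty: pay for every horizontally or vertically adjacent disagreement.
        disagreements = ((labels[:, 1:] != labels[:, :-1]).sum()
                         + (labels[1:, :] != labels[:-1, :]).sum())
        return data_term + pairwise_weight * disagreements

    # Toy example: noisy evidence for a square "manmade" region on a "natural" background.
    rng = np.random.default_rng(10)
    truth = np.zeros((20, 20), dtype=int)
    truth[5:15, 5:15] = 1
    evidence = truth + 0.8 * rng.normal(size=truth.shape)
    unary = np.stack([(evidence - 0.0) ** 2, (evidence - 1.0) ** 2], axis=-1)

    noisy_labels = (evidence > 0.5).astype(int)
    print(labeling_energy(unary, noisy_labels), labeling_energy(unary, truth))
    # The spatially coherent labeling typically scores a lower energy, which is
    # exactly the output-domain structure a CRF exploits during inference.
    ```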

  13. Image recovery by removing stochastic artefacts identified as local asymmetries

    NASA Astrophysics Data System (ADS)

    Osterloh, K.; Bücherl, T.; Zscherpel, U.; Ewert, U.

    2012-04-01

    Stochastic artefacts are frequently encountered in digital radiography and tomography with neutrons. Most obviously, they are caused by ubiquitous scattered radiation hitting the CCD sensor. They appear as scattered dots and, at a higher frequency of occurrence, they may obscure the image. Some of these dotted interferences vary with time; however, a large portion of them remains persistent, so the problem cannot be resolved by collecting stacks of images and merging them into a median image. The situation becomes even worse in computed tomography (CT), where each artefact causes a circular pattern in the reconstructed plane. Therefore, these stochastic artefacts have to be removed completely and automatically while leaving the original image content untouched. A simplified image acquisition and artefact removal tool was developed at BAM and is available to interested users. Furthermore, an algorithm complying with all the requirements mentioned above was developed that reliably removes artefacts that may even exceed the size of a single pixel without affecting other parts of the image. It consists of an iterative two-step algorithm adjusting pixel values within a 3 × 3 matrix inside a 5 × 5 kernel and the centre pixel only within a 3 × 3 kernel, respectively. It has been applied to thousands of images obtained from the NECTAR facility at the FRM II in Garching, Germany, without any need for visual control. In essence, the procedure consists of identifying and tackling asymmetric intensity distributions locally while recording each treatment of a pixel. Searching for the local asymmetry with subsequent correction, rather than replacing individually identified pixels, constitutes the basic idea of the algorithm. The efficiency of the proposed algorithm is demonstrated on a severely spoiled example of neutron radiography and tomography, compared with median filtering, the most convenient alternative approach, by visual check, histogram and power spectrum analysis.
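
    A minimal sketch of the general idea (identify locally asymmetric outliers and correct only those pixels), not the exact BAM two-step 3 × 3 / 5 × 5 procedure; the threshold `k` and the median-based asymmetry test are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def remove_spot_artefacts(img, n_iter=3, k=5.0):
        """Simplified sketch: iteratively flag pixels whose deviation from the
        local 5x5 median is large compared with the local spread, then replace
        only those pixels with the 3x3 median so that untouched image content
        is preserved."""
        out = img.astype(float).copy()
        for _ in range(n_iter):
            med5 = median_filter(out, size=5)
            spread = median_filter(np.abs(out - med5), size=5) + 1e-6
            outliers = np.abs(out - med5) > k * spread   # asymmetric local deviations
            if not outliers.any():
                break
            med3 = median_filter(out, size=3)
            out[outliers] = med3[outliers]               # correct only flagged pixels
        return out
    ```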

  14. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soufi, M; Asl, A Kamali; Geramifar, P

    2015-06-15

    Purpose: The objective of this study was to find the best seed localization parameters in random walk algorithm application to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, has reliable image noise robustness. Also, its fast computation and fast editing characteristics make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB code. The validation and verification of the algorithm have been done with a 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (with incremental steps of 10 mm) and different tumor to background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) has been applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages, at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, 5% increment) SUVmax for background seeds. Also, for investigation of algorithm performance on clinical data, 19 patients with lung tumors were studied. The resulting contours from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation have shown that the best segmentation results were obtained by selecting pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels have been obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation assure its application for radiation treatment planning and diagnosis.
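
    The seed-selection rule reported above can be illustrated with scikit-image's random walker; the `beta` value is a generic parameter choice, not one from the study.

    ```python
    import numpy as np
    from skimage.segmentation import random_walker

    def segment_pet_lesion(suv, fg_frac=0.70, bg_frac=0.30, beta=130):
        """Pixels at or above 70% of SUVmax become foreground seeds, pixels at
        or below 30% of SUVmax become background seeds, and the random walk
        algorithm labels the remaining pixels."""
        suv_max = suv.max()
        markers = np.zeros(suv.shape, dtype=np.int32)
        markers[suv >= fg_frac * suv_max] = 1   # foreground (tumor) seeds
        markers[suv <= bg_frac * suv_max] = 2   # background seeds
        labels = random_walker(suv, markers, beta=beta, mode='bf')
        return labels == 1                      # boolean tumor mask
    ```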

  15. Mapping of major volcanic structures on Pavonis Mons in Tharsis, Mars

    NASA Astrophysics Data System (ADS)

    Orlandi, Diana; Mazzarini, Francesco; Pagli, Carolina; Pozzobon, Riccardo

    2017-04-01

    Pavonis Mons, with its 300 km diameter and 14 km height, is one of the largest volcanoes on Mars. It rests on a topographic high called the Tharsis rise and is located in the centre of a SW-NE trending row of volcanoes that includes Arsia and Ascraeus Montes. In this study we mapped and analyzed the volcanic and tectonic structures of Pavonis Mons in order to understand its formation and the relationship between magmatic and tectonic activity. We used the ArcGIS mapping software and a vast set of high resolution topographic and multi-spectral images including CTX (6 m/pixel) as well as HRSC (12.5 m/pixel) and HiRISE (0.25 m/pixel) mosaic images. Furthermore, we used MOLA (463 m/pixel in the MOLA MEGDR gridded topographic data), THEMIS thermal inertia (IR-day, 100 m/pixel) and THEMIS (IR-night, 100 m/pixel) global image mosaics to map structures at the regional scale. We found a wide range of structures including ring dykes, wrinkle ridges, pit chains, lava flows, lava channels, fissures and depressions that we preliminarily interpreted as coalescent lava tubes. Many sinuous rilles have eroded Pavonis' slopes and culminate in lava aprons, similar to alluvial fans. South of Pavonis Mons we also identified a series of volcanic vents mainly aligned along a SW-NE trend. Displacements across recent crater rims and volcanic deposits (strike-slip faults and wrinkle ridges) have been documented, suggesting that, at least during the most recent volcanic phases, regional tectonics contributed to shaping the morphology of Pavonis. The kinematics of the mapped structures is consistent with an ENE-SSW direction of the maximum horizontal stress, suggesting a possible interaction with nearby Valles Marineris. Our study provides new morphometric analysis of volcano-tectonic features that can be used to depict an evolutionary history for the Pavonis volcano.

  16. Selecting good regions to deblur via relative total variation

    NASA Astrophysics Data System (ADS)

    Li, Lerenhan; Yan, Hao; Fan, Zhihua; Zheng, Hanqing; Gao, Changxin; Sang, Nong

    2018-03-01

    Image deblurring is to estimate the blur kernel and to restore the latent image. It is usually divided into two stages: kernel estimation and image restoration. In kernel estimation, selecting a good region that contains structure information helps the accuracy of the estimated kernel. A good region to deblur is usually chosen by an expert or found in a trial-and-error way. In this paper, we apply a metric named relative total variation (RTV) to discriminate structure regions from smooth and textured ones. Given a blurry image, we first calculate the RTV of each pixel to determine whether it is a pixel in a structure region, after which we sample the image in an overlapping way. At last, the sampled region that contains the most structure pixels is the best region to deblur. Both qualitative and quantitative experiments show that our proposed method can help to estimate the kernel accurately.
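
    A hedged sketch of an RTV-style structure score and overlapping patch selection; the exact measure, its sign convention and the threshold used in the paper may differ (the score below follows Xu et al.'s windowed/inherent variation ratio as an assumption).

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def structure_score(img, win=9, eps=1e-3):
        """Windowed total variation D sums gradient magnitudes; inherent
        variation L sums signed gradients.  Where gradients reinforce (true
        structure), L stays large relative to D; where they cancel (texture),
        L is small.  The ratio below is therefore used as a structure score."""
        gy, gx = np.gradient(img.astype(float))
        Dx, Dy = uniform_filter(np.abs(gx), win), uniform_filter(np.abs(gy), win)
        Lx, Ly = np.abs(uniform_filter(gx, win)), np.abs(uniform_filter(gy, win))
        return (Lx + Ly) / (Dx + Dy + eps)

    def best_region_to_deblur(img, patch=128, stride=64, thresh=0.5):
        """Pick the overlapping patch containing the most 'structure' pixels."""
        is_structure = structure_score(img) > thresh
        best, best_count = None, -1
        h, w = img.shape
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                count = is_structure[y:y + patch, x:x + patch].sum()
                if count > best_count:
                    best, best_count = (y, x, patch), count
        return best
    ```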

  17. Factorization-based texture segmentation

    DOE PAGES

    Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.

    2015-06-17

    This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features and the other containing the weights of the representative features at each pixel used for the linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, the experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
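
    A simplified sketch of the factorization idea, using plain local intensity histograms and scikit-learn's NMF in place of the paper's local spectral histograms and SVD-plus-NMF solver; window size, bin count and region count are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.decomposition import NMF

    def local_histogram_features(img, n_bins=8, win=11):
        """Stand-in for local spectral histograms: a local intensity histogram
        per pixel, computed with a box filter over per-bin indicator images."""
        img = img.astype(float)
        edges = np.linspace(img.min(), img.max() + 1e-6, n_bins + 1)
        feats = [uniform_filter(((img >= edges[b]) & (img < edges[b + 1])).astype(float), win)
                 for b in range(n_bins)]
        return np.stack(feats, axis=0)              # shape (M, H, W)

    def factorization_segmentation(img, n_regions=3, n_bins=8, win=11):
        """Build the M x N feature matrix and factor it into representative
        features times per-pixel weights; each pixel is assigned to the
        representative feature with the largest weight."""
        feats = local_histogram_features(img, n_bins, win)
        M, H, W = feats.shape
        Y = feats.reshape(M, H * W)                 # M x N feature matrix
        nmf = NMF(n_components=n_regions, init='nndsvda', max_iter=400)
        weights = nmf.fit_transform(Y.T)            # N x n_regions weights
        return weights.argmax(axis=1).reshape(H, W)
    ```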

  18. Weber-aware weighted mutual information evaluation for infrared-visible image fusion

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoyan; Wang, Shining; Yuan, Ding

    2016-10-01

    A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.

  19. Minimization of color shift generated in RGBW quad structure.

    NASA Astrophysics Data System (ADS)

    Kim, Hong Chul; Yun, Jae Kyeong; Baek, Heume-Il; Kim, Ki Duk; Oh, Eui Yeol; Chung, In Jae

    2005-03-01

    The purpose of RGBW Quad Structure Technology is to realize higher brightness than that of a normal panel (RGB stripe structure) by adding a white sub-pixel to the existing RGB stripe structure. However, there is a side effect called 'color shift' resulting from the increased brightness. This side effect degrades general color characteristics due to changes in 'Hue', 'Brightness' and 'Saturation' as compared with the existing RGB stripe structure. Especially, skin-tone colors show a tendency to get darker in contrast to the normal panel. We've tried to minimize 'color shift' through use of a LUT (Look Up Table) for linear arithmetic processing of input data, data bit expansion to 12-bit for minimizing arithmetic tolerance, and a brightness weight of the white sub-pixel on each R, G, B pixel. The objective of this study is to minimize and keep the Δu'v' value (commonly used to represent a color difference), a quantitative basis of the color difference between the RGB stripe structure and the RGBW quad structure, below the 0.01 level (existing 0.02 or higher) using the Macbeth ColorChecker, a general reference of color characteristics.

  20. Faxed document image restoration method based on local pixel patterns

    NASA Astrophysics Data System (ADS)

    Akiyama, Teruo; Miyamoto, Nobuo; Oguro, Masami; Ogura, Kenji

    1998-04-01

    A method for restoring degraded faxed document images using the patterns of pixels that construct small areas in a document is proposed. The method effectively restores faxed images that contain the halftone textures and/or high-density salt-and-pepper noise that degrade OCR system performance. In the halftone image restoration process, white-centered 3 × 3 pixel areas, in which black and white pixels alternate, are first identified as halftone textures using the distribution of the pixel values, and then the white center pixels are inverted to black. To remove high-density salt-and-pepper noise, it is assumed that the degradation is caused by ill-balanced bias and inappropriate thresholding of the sensor output, which results in the addition of random noise. The restored image can be estimated using an approximation that applies the inverse operation of the assumed original process. In order to process degraded faxed images, the algorithms mentioned above are combined. An experiment was conducted using 24 especially poor quality examples selected from data sets that exemplify what practical fax-based OCR systems cannot handle. The maximum recovery rate in terms of mean square error was 98.8 percent.
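
    A small sketch of the halftone step described above; the checkerboard test used here (black 4-neighbours, white diagonals) is one plausible reading of the alternating-pixel criterion, not the paper's exact rule.

    ```python
    import numpy as np

    def fill_halftone_centers(binary_img):
        """Find white center pixels whose 3x3 neighbourhood alternates
        black/white in a checkerboard pattern and invert them to black.
        `binary_img` is 0 (black) / 1 (white)."""
        img = binary_img.copy()
        h, w = img.shape
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if img[y, x] != 1:
                    continue
                four = [img[y - 1, x], img[y + 1, x], img[y, x - 1], img[y, x + 1]]
                diag = [img[y - 1, x - 1], img[y - 1, x + 1],
                        img[y + 1, x - 1], img[y + 1, x + 1]]
                if all(v == 0 for v in four) and all(v == 1 for v in diag):
                    img[y, x] = 0    # invert white centre to black
        return img
    ```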

  1. SVM Pixel Classification on Colour Image Segmentation

    NASA Astrophysics Data System (ADS)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

    The aim of image segmentation is to simplify the representation of an image with the help of clustered pixels into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, precisely to label every pixel in an image and give each pixel an independent identity. SVM pixel classification on colour image segmentation is the topic highlighted in this paper. It holds useful application in the fields of concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first we need to recognize the type of colour and the texture used as an input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. It is then trained by using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the ability of the SVM classifier undergo a sophisticated algorithm to form the final image. The method yields a well-developed segmented image and efficiency with respect to increased quality and faster processing of the segmented image compared with the other segmentation methods proposed earlier. One of the latest application results is the Light L16 camera.

  2. Switching non-local vector median filter

    NASA Astrophysics Data System (ADS)

    Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji

    2016-04-01

    This paper describes a novel image filtering method that removes random-valued impulse noise superimposed on a natural color image. In impulse noise removal, it is essential to employ a switching-type filtering method, as used in the well-known switching median filter, to preserve the detail of an original image with good quality. In color image filtering, it is generally preferable to deal with the red (R), green (G), and blue (B) components of each pixel of a color image as elements of a vectorized signal, as in the well-known vector median filter, rather than as component-wise signals to prevent a color shift after filtering. By taking these fundamentals into consideration, we propose a switching-type vector median filter with non-local processing that mainly consists of a noise detector and a noise removal filter. Concretely, we propose a noise detector that proactively detects noise-corrupted pixels by focusing attention on the isolation tendencies of pixels of interest not in an input image but in difference images between RGB components. Furthermore, as the noise removal filter, we propose an extended version of the non-local median filter that we previously proposed for grayscale image processing, named the non-local vector median filter, which is designed for color image processing. The proposed method realizes a superior balance between the preservation of detail and impulse noise removal by proactive noise detection and non-local switching vector median filtering, respectively. The effectiveness and validity of the proposed method are verified in a series of experiments using natural color images.
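
    A compact sketch of switching-type vector median filtering; the paper's detector is non-local and works on RGB difference images, whereas the local detector and threshold below are simplifying assumptions that only illustrate the switching idea.

    ```python
    import numpy as np

    def vector_median(window):
        """Vector median of an (n, 3) array of RGB vectors: the sample that
        minimizes the sum of Euclidean distances to all other samples."""
        d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
        return window[d.sum(axis=1).argmin()]

    def switching_vector_median(img, thresh=60.0, radius=1):
        """Replace a pixel by the window's vector median only if it is detected
        as an impulse (far from that vector median), so uncorrupted detail is
        left untouched."""
        h, w, _ = img.shape
        out = img.astype(float).copy()
        for y in range(radius, h - radius):
            for x in range(radius, w - radius):
                window = img[y - radius:y + radius + 1,
                             x - radius:x + radius + 1].reshape(-1, 3).astype(float)
                vm = vector_median(window)
                if np.linalg.norm(img[y, x].astype(float) - vm) > thresh:
                    out[y, x] = vm          # replace only detected impulses
        return out.astype(img.dtype)
    ```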

  3. Adaptive Electronic Camouflage Using Texture Synthesis

    DTIC Science & Technology

    2012-04-01

    The algorithm begins by computing the GLCMs, GIN and GOUT, of the input image (e.g., image of the local environment) and output image (randomly generated) ... respectively. The algorithm randomly selects a pixel from the output image and cycles its gray-level through all values. For each value, GOUT is updated ... The value of the selected pixel is permanently changed to the gray-level value that minimizes the error between GIN and GOUT. Without selecting a

  4. Processing Translational Motion Sequences.

    DTIC Science & Technology

    1982-10-01

    the initial ROADSIGN image using a (del)**2g mask with a width of 5 pixels. The distinctiveness values were computed using features which were 5x5 pixel ... the initial step size of the local search quite large. 4. EXPERIMENTS: The following experiments were performed using the roadsign and industrial ... the initial image of the sequence. The third experiment involves processing the roadsign image sequence using the features extracted at the positions

  5. 2D Fast Vessel Visualization Using a Vessel Wall Mask Guiding Fine Vessel Detection

    PubMed Central

    Raptis, Sotirios; Koutsouris, Dimitris

    2010-01-01

    The paper addresses the fine retinal-vessel detection issue that is faced in diagnostic applications and aims at assisting in better recognizing fine vessel anomalies in 2D. Our innovation lies in separating the key visual features that vessels exhibit in order to make the diagnosis of eventual retinopathologies easier. This allows focusing on vessel segments which present fine changes detectable at different sampling scales. We advocate that these changes can be addressed as subsequent stages of the same vessel detection procedure. We first carry out an initial estimate of the basic vessel-wall network, define the main wall-body, and then try to approach the ridges and branches of the vasculature using fine detection. Fine vessel screening looks into local structural inconsistencies in vessel properties, into noise, or into unexpected intensity variations observed inside pre-known vessel-body areas. The vessels are first modelled sufficiently but not precisely by their walls with a tubular model-structure that is the result of an initial segmentation. This provides a chart of likely Vessel Wall Pixels (VWPs), yielding a form of likelihood vessel map based mainly on gradient filter intensity and spatial arrangement parameters (e.g., linear consistency). Specific vessel parameters (centerline, width, location, fall-away rate, main orientation) are post-computed by convolving the image with a set of pre-tuned spatial filters called Matched Filters (MFs). These are easily computed as Gaussian-like 2D forms that use a limited range of sub-optimal parameters adjusted to the dominant vessel characteristics obtained by Spatial Grey Level Difference statistics, limiting the range of search to vessel widths of 16, 32, and 64 pixels. Sparse pixels are effectively eliminated by applying a limited-range Hough Transform (HT) or region growing. Major benefits are the limited range of parameters, the reduction of the post-convolution search space to only masked regions (representing almost 2% of the 2D volume), and a good speed versus accuracy/time trade-off. Results show the potential of our approach in terms of detection time, ROC analysis, and accuracy of vessel pixel (VP) detection. PMID:20706682

  6. Image Description with Local Patterns: An Application to Face Recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro

    In this paper, we propose a novel approach for presenting the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. In the first step, the gray values of pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Then, multi-scans are applied to capture different spatial information on the image, with the advantage of less computation than other traditional ways, such as Local Binary Patterns (LBP). The second step is encoding the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations besides preserving the appearance of images embedded in the original gray scale. In the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easier to extend. Next, we further formulate a boosted algorithm to extract the most discriminant local features. The evaluation results demonstrate that the proposed approach outperforms conventional approaches in terms of accuracy in applications of face recognition, gender estimation and facial expression.

  7. An investigation of signal performance enhancements achieved through innovative pixel design across several generations of indirect detection, active matrix, flat-panel arrays

    PubMed Central

    Antonuk, Larry E.; Zhao, Qihua; El-Mohri, Youcef; Du, Hong; Wang, Yi; Street, Robert A.; Ho, Jackson; Weisfield, Richard; Yao, William

    2009-01-01

    Active matrix flat-panel imager (AMFPI) technology is being employed for an increasing variety of imaging applications. An important element in the adoption of this technology has been significant ongoing improvements in optical signal collection achieved through innovations in indirect detection array pixel design. Such improvements have a particularly beneficial effect on performance in applications involving low exposures and∕or high spatial frequencies, where detective quantum efficiency is strongly reduced due to the relatively high level of additive electronic noise compared to signal levels of AMFPI devices. In this article, an examination of various signal properties, as determined through measurements and calculations related to novel array designs, is reported in the context of the evolution of AMFPI pixel design. For these studies, dark, optical, and radiation signal measurements were performed on prototype imagers incorporating a variety of increasingly sophisticated array designs, with pixel pitches ranging from 75 to 127 μm. For each design, detailed measurements of fundamental pixel-level properties conducted under radiographic and fluoroscopic operating conditions are reported and the results are compared. A series of 127 μm pitch arrays employing discrete photodiodes culminated in a novel design providing an optical fill factor of ∼80% (thereby assuring improved x-ray sensitivity), and demonstrating low dark current, very low charge trapping and charge release, and a large range of linear signal response. In two of the designs having 75 and 90 μm pitches, a novel continuous photodiode structure was found to provide fill factors that approach the theoretical maximum of 100%. Both sets of novel designs achieved large fill factors by employing architectures in which some, or all of the photodiode structure was elevated above the plane of the pixel addressing transistor. Generally, enhancement of the fill factor in either discrete or continuous photodiode arrays was observed to result in no degradation in MTF due to charge sharing between pixels. While the continuous designs exhibited relatively high levels of charge trapping and release, as well as shorter ranges of linearity, it is possible that these behaviors can be addressed through further refinements to pixel design. Both the continuous and the most recent discrete photodiode designs accommodate more sophisticated pixel circuitry than is present on conventional AMFPIs – such as a pixel clamp circuit, which is demonstrated to limit signal saturation under conditions corresponding to high exposures. It is anticipated that photodiode structures such as the ones reported in this study will enable the development of even more complex pixel circuitry, such as pixel-level amplifiers, that will lead to further significant improvements in imager performance. PMID:19673228

  8. Coherent multiscale image processing using dual-tree quaternion wavelets.

    PubMed

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.

  9. Neighborhood binary speckle pattern for deformation measurements insensitive to local illumination variation by digital image correlation.

    PubMed

    Zhao, Jian; Yang, Ping; Zhao, Yue

    2017-06-01

    Speckle pattern-based characteristics of digital image correlation (DIC) restrict its application in engineering fields and nonlaboratory environments, since serious decorrelation effect occurs due to localized sudden illumination variation. A simple and efficient speckle pattern adjusting and optimizing approach presented in this paper is aimed at providing a novel speckle pattern robust enough to resist local illumination variation. The new speckle pattern, called neighborhood binary speckle pattern, derived from original speckle pattern, is obtained by means of thresholding the pixels of a neighborhood at its central pixel value and considering the result as a binary number. The efficiency of the proposed speckle pattern is evaluated in six experimental scenarios. Experiment results indicate that the DIC measurements based on neighborhood binary speckle pattern are able to provide reliable and accurate results, even though local brightness and contrast of the deformed images have been seriously changed. It is expected that the new speckle pattern will have more potential value in engineering applications.

  10. Joint denoising and distortion correction of atomic scale scanning transmission electron microscopy images

    NASA Astrophysics Data System (ADS)

    Berkels, Benjamin; Wirth, Benedikt

    2017-09-01

    Nowadays, modern electron microscopes deliver images at atomic scale. The precise atomic structure encodes information about material properties. Thus, an important ingredient in the image analysis is to locate the centers of the atoms shown in micrographs as precisely as possible. Here, we consider scanning transmission electron microscopy (STEM), which acquires data in a rastering pattern, pixel by pixel. Due to this rastering combined with the magnification to atomic scale, movements of the specimen even at the nanometer scale lead to random image distortions that make precise atom localization difficult. Given a series of STEM images, we derive a Bayesian method that jointly estimates the distortion in each image and reconstructs the underlying atomic grid of the material by fitting the atom bumps with suitable bump functions. The resulting highly non-convex minimization problems are solved numerically with a trust region approach. Existence of minimizers and the model behavior for faster and faster rastering are investigated using variational techniques. The performance of the method is finally evaluated on both synthetic and real experimental data.

  11. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.

  12. Person-independent facial expression analysis by fusing multiscale cell features

    NASA Astrophysics Data System (ADS)

    Zhou, Lubing; Wang, Han

    2013-03-01

    Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, the multiscale cell local intensity increasing patterns (MC-LIIP), is presented to represent facial images and conduct person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out the comparison computation on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska directed emotional faces databases show the superiority of the proposed method.

  13. Correcting speckle contrast at small speckle size to enhance signal to noise ratio for laser speckle contrast imaging.

    PubMed

    Qiu, Jianjun; Li, Yangyang; Huang, Qin; Wang, Yang; Li, Pengcheng

    2013-11-18

    In laser speckle contrast imaging, it was usually suggested that speckle size should exceed two camera pixels to eliminate the spatial averaging effect. In this work, we show the benefit of enhancing signal to noise ratio by correcting the speckle contrast at small speckle size. Through simulations and experiments, we demonstrated that local speckle contrast, even at speckle size much smaller than one pixel size, can be corrected through dividing the original speckle contrast by the static speckle contrast. Moreover, we show a 50% higher signal to noise ratio of the speckle contrast image at speckle size below 0.5 pixel size than that at speckle size of two pixels. These results indicate the possibility of selecting a relatively large aperture to simultaneously ensure sufficient light intensity and high accuracy and signal to noise ratio, making the laser speckle contrast imaging more flexible.
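
    The correction described above (dividing the measured local contrast by the contrast of a static speckle image) can be sketched as follows; the window size is illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_speckle_contrast(img, win=7):
        """Spatial speckle contrast K = sigma / mean in a sliding window."""
        img = img.astype(float)
        mean = uniform_filter(img, win)
        mean_sq = uniform_filter(img * img, win)
        std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
        return std / (mean + 1e-9)

    def corrected_contrast(dynamic_frame, static_frame, win=7):
        """Divide the measured contrast by the contrast of a static (no-flow)
        speckle image so the spatial averaging from small speckles cancels."""
        K = local_speckle_contrast(dynamic_frame, win)
        K_static = local_speckle_contrast(static_frame, win)
        return K / (K_static + 1e-9)
    ```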

  14. Camouflaging in Digital Image for Secure Communication

    NASA Astrophysics Data System (ADS)

    Jindal, B.; Singh, A. P.

    2013-06-01

    The present paper reports on a new type of camouflaging in digital images for hiding crypto-data using moderate bit alteration in the pixel. In the proposed method, cryptography is combined with steganography to provide two-layer security for the hidden data. The novelty of the algorithm proposed in the present work lies in the fact that the information about the hidden bit is reflected by a parity condition in one part of the image pixel. The remaining part of the image pixel is used to perform local pixel adjustment to improve the visual perception of the cover image. In order to examine the effectiveness of the proposed method, image quality measuring parameters are computed. In addition to this, a security analysis is also carried out by comparing the histograms of the cover and stego images. This scheme provides higher security as well as robustness to intentional as well as unintentional attacks.
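
    A generic sketch of parity-based embedding; the paper's specific split of the pixel into a parity part and an adjustment part is not reproduced here, so the minimal ±1 adjustment below is an assumption.

    ```python
    import numpy as np

    def embed_bits_by_parity(cover, bits):
        """Store each secret bit as the parity of one cover pixel, adjusting
        that pixel by at most one gray level when the parity does not match."""
        stego = cover.astype(np.int32).copy().ravel()
        for i, bit in enumerate(bits):
            if stego[i] % 2 != bit:
                stego[i] += 1 if stego[i] < 255 else -1   # minimal-change adjustment
        return stego.reshape(cover.shape).astype(np.uint8)

    def extract_bits_by_parity(stego, n_bits):
        """Recover the hidden bits from the pixel parities."""
        return [int(v) % 2 for v in stego.ravel()[:n_bits]]
    ```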

  15. Theory and applications of structured light single pixel imaging

    NASA Astrophysics Data System (ADS)

    Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.

    2018-02-01

    Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, the methods share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and decreased acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.
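
    A toy sketch of the linear single-pixel measurement model y = A x and a regularized least-squares reconstruction; the pattern type, pattern count and ridge parameter are illustrative, and practical frame-theoretic reconstructions are more refined.

    ```python
    import numpy as np

    def single_pixel_measure(scene, patterns, noise_std=0.0):
        """Each measurement is the inner product of the scene with one
        structured illumination pattern, recorded by a single (bucket) detector."""
        A = patterns.reshape(len(patterns), -1)            # M x N sensing matrix
        y = A @ scene.ravel()
        return y + noise_std * np.random.randn(len(y)), A

    def reconstruct_least_squares(y, A, shape, ridge=1e-3):
        """Regularized least-squares inversion of y = A x."""
        N = A.shape[1]
        x = np.linalg.solve(A.T @ A + ridge * np.eye(N), A.T @ y)
        return x.reshape(shape)

    # usage on a toy 16x16 scene with random binary patterns
    rng = np.random.default_rng(0)
    scene = rng.random((16, 16))
    patterns = rng.integers(0, 2, size=(400, 16, 16)).astype(float)
    y, A = single_pixel_measure(scene, patterns, noise_std=0.01)
    recon = reconstruct_least_squares(y, A, scene.shape)
    ```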

  16. A Low-Noise X-ray Astronomical Silicon-On-Insulator Pixel Detector Using a Pinned Depleted Diode Structure

    PubMed Central

    Kamehama, Hiroki; Kawahito, Shoji; Shrestha, Sumeet; Nakanishi, Syunta; Yasutomi, Keita; Takeda, Ayaki; Tsuru, Takeshi Go

    2017-01-01

    This paper presents a novel full-depletion Si X-ray detector based on silicon-on-insulator pixel (SOIPIX) technology using a pinned depleted diode structure, named the SOIPIX-PDD. The SOIPIX-PDD greatly reduces stray capacitance at the charge sensing node, the dark current of the detector, and capacitive coupling between the sensing node and SOI circuits. These features of the SOIPIX-PDD lead to low read noise, resulting in high X-ray energy resolution and stable operation of the pixel. The back-gate surface pinning structure, using a neutralized p-well at the back-gate surface and a depleted n-well underneath the p-well for all the pixel area other than the charge sensing node, is also essential for preventing hole injection from the p-well by creating a potential barrier to holes, reducing dark current from the Si-SiO2 interface and creating a lateral drift field to gather signal electrons in the pixel area into the small charge sensing node. A prototype chip using 0.2 μm SOI technology shows very low readout noise of 11.0 e−rms, low dark current density of 56 pA/cm2 at −35 °C and an energy resolution of 200 eV (FWHM) at 5.9 keV and 280 eV (FWHM) at 13.95 keV. PMID:29295523

  17. A Low-Noise X-ray Astronomical Silicon-On-Insulator Pixel Detector Using a Pinned Depleted Diode Structure.

    PubMed

    Kamehama, Hiroki; Kawahito, Shoji; Shrestha, Sumeet; Nakanishi, Syunta; Yasutomi, Keita; Takeda, Ayaki; Tsuru, Takeshi Go; Arai, Yasuo

    2017-12-23

    This paper presents a novel full-depletion Si X-ray detector based on silicon-on-insulator pixel (SOIPIX) technology using a pinned depleted diode structure, named the SOIPIX-PDD. The SOIPIX-PDD greatly reduces stray capacitance at the charge sensing node, the dark current of the detector, and capacitive coupling between the sensing node and SOI circuits. These features of the SOIPIX-PDD lead to low read noise, resulting in high X-ray energy resolution and stable operation of the pixel. The back-gate surface pinning structure, using a neutralized p-well at the back-gate surface and a depleted n-well underneath the p-well for all the pixel area other than the charge sensing node, is also essential for preventing hole injection from the p-well by creating a potential barrier to holes, reducing dark current from the Si-SiO₂ interface and creating a lateral drift field to gather signal electrons in the pixel area into the small charge sensing node. A prototype chip using 0.2 μm SOI technology shows very low readout noise of 11.0 e− rms, low dark current density of 56 pA/cm² at -35 °C and an energy resolution of 200 eV (FWHM) at 5.9 keV and 280 eV (FWHM) at 13.95 keV.

  18. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimation, which is the average of the other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore the data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide DECT images and electron density maps as high in quality and accuracy as conventional two-full-scan DECT.
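
    A sketch of the SPIR-style similarity weights and regularizer described in the Methods section above; restricting similarities to a local neighbourhood and the bandwidth `h` are assumptions.

    ```python
    import numpy as np

    def similarity_weights(full_scan, y, x, radius=3, h=20.0):
        """Material-similarity weights from the full-scan FBP image: an
        exponential of pixel-value differences within a small neighbourhood."""
        patch = full_scan[y - radius:y + radius + 1, x - radius:x + radius + 1]
        w = np.exp(-((patch - full_scan[y, x]) ** 2) / (h * h))
        w[radius, radius] = 0.0                 # exclude the pixel itself
        return w / (w.sum() + 1e-12)

    def structure_preserving_penalty(second_scan, full_scan, radius=3, h=20.0):
        """L2 norm of the difference between each pixel of the sparse-view image
        and the similarity-weighted average of its neighbours, with weights
        taken from the full scan."""
        H, W = second_scan.shape
        penalty = 0.0
        for y in range(radius, H - radius):
            for x in range(radius, W - radius):
                w = similarity_weights(full_scan, y, x, radius, h)
                patch = second_scan[y - radius:y + radius + 1,
                                    x - radius:x + radius + 1]
                estimate = float((w * patch).sum())
                penalty += (second_scan[y, x] - estimate) ** 2
        return penalty
    ```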

  19. Geological Structures in the Walls of Vestan Craters

    NASA Technical Reports Server (NTRS)

    Mittlefehldt, David; Nathues, A.; Beck, A. W.; Hoffmann, M.; Schaefer, M.; Williams, D. A.

    2014-01-01

    A compelling case can be made that Vesta is the parent asteroid for the howardite, eucrite and diogenite (HED) meteorites [1], although this interpretation has been questioned [2]. Generalized models for the structure of the crust of Vesta have been developed based on petrologic studies of basaltic eucrites, cumulate eucrites and diogenites. These models use inferred cooling rates for different types of HEDs and compositional variations within the clan to posit that the lower crust is dominantly diogenitic in character, cumulate eucrites occur deep in the upper crust, and basaltic eucrites dominate the higher levels of the upper crust [3-5]. These models lack fine-scale resolution and thus do not allow for detailed predictions of crustal structure. Geophysical models predict dike and sill intrusions ought to be present, but their widths may be quite small [6]. The northern hemisphere of Vesta is heavily cratered, and the southern hemisphere is dominated by two 400-500 km diameter basins that excavated deep into the crust [7-8]. Physical modeling of regolith formation on 300 km diameter asteroids predicts that debris layers would reach a few km in thickness, while on asteroids of Vesta's diameter regolith thicknesses would be less [9]. This agrees well with the estimated ≈1 km thickness of local debris excavated by a 45 km diameter vestan crater [10]. Large craters and basins may have punched through the regolith/megaregolith and exposed primary vestan crustal structures. We will use Dawn Framing Camera (FC) [11] images and color ratio maps from the High Altitude and Low Altitude Mapping Orbits (HAMO, 65 m/pixel; LAMO, 20 m/pixel) to evaluate structures exposed on the walls of craters: two examples are discussed here.

  20. A discrete polar Stockwell transform for enhanced characterization of tissue structure using MRI.

    PubMed

    Pridham, Glen; Steenwijk, Martijn D; Geurts, Jeroen J G; Zhang, Yunyan

    2018-05-02

    The purpose of this study was to present an effective algorithm for computing the discrete polar Stockwell transform (PST), investigate its unique multiscale and multi-orientation features, and explore potentially new applications including denoising and tissue segmentation. We investigated PST responses using both synthetic and MR images. Moreover, we compared the features of PST with both Gabor and Morlet wavelet transforms, and compared the PST with two wavelet approaches for denoising using MRI. Using a synthetic image, we also tested the edge effect of PST through signal-padding. Then, we constructed a partially supervised classifier using radial, marginal PST spectra of T2-weighted MRI, acquired from postmortem brains with multiple sclerosis. The classification involved three histology-verified tissue types: normal appearing white matter (NAWM), lesion, or other, along with 5-fold cross-validation. The PST generated a series of images with varying orientations or rotation-invariant scales. Radial frequencies highlighted image structures of different size, and angular frequencies enhanced structures by orientation. Signal-padding helped suppress boundary artifacts but required attention to incidental artifacts. In comparison, the Gabor transform produced more redundant images and the wavelet spectra appeared less spatially smooth than the PST. In addition, the PST demonstrated lower root-mean-square errors than other transforms in denoising and achieved a 93% accuracy for NAWM pixels (296/317), and 88% accuracy for lesion pixels (165/188) in MRI segmentation. The PST is a unique local spectral density-assessing tool which is sensitive to both structure orientations and scales. This may facilitate multiple new applications including advanced characterization of tissue structure in standard MRI. © 2018 International Society for Magnetic Resonance in Medicine.

  1. Locality-preserving sparse representation-based classification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
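
    A sketch of the sparse-representation classification step described above; scikit-learn's Lasso stands in for whatever sparse coder is used, and the LPP projection is assumed to have already been applied to `train_X` and `test_pixel`.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(test_pixel, train_X, train_y, alpha=1e-3):
        """Code the (dimension-reduced) test pixel as a sparse combination of
        all training samples, then assign the class whose samples give the
        smallest reconstruction residual."""
        lasso = Lasso(alpha=alpha, max_iter=5000, fit_intercept=False)
        # each training sample acts as one regression feature
        lasso.fit(train_X.T, test_pixel)
        coef = lasso.coef_
        best_cls, best_res = None, np.inf
        for cls in np.unique(train_y):
            mask = (train_y == cls)
            recon = train_X[mask].T @ coef[mask]    # class-restricted reconstruction
            res = np.linalg.norm(test_pixel - recon)
            if res < best_res:
                best_cls, best_res = cls, res
        return best_cls
    ```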

  2. Sea-land segmentation for infrared remote sensing images based on superpixels and multi-scale features

    NASA Astrophysics Data System (ADS)

    Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei

    2018-06-01

    Sea-land segmentation is a key step for the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images to tackle the problem based on superpixels and multi-scale features. Considering the connectivity and local similarity of sea or land, we interpret the sea-land segmentation task in view of superpixels rather than pixels, where similar pixels are clustered and the local similarity are explored. Moreover, the multi-scale features are elaborately designed, comprising of gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method can obtain more accurate and more robust sea-land segmentation results than the traditional algorithms.

  3. Assessing land cover performance in Senegal, West Africa using 1-km integrated NDVI and local variance analysis

    USGS Publications Warehouse

    Budde, M.E.; Tappan, G.; Rowland, James; Lewis, J.; Tieszen, L.L.

    2004-01-01

    The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
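
    A sketch of a local-variance anomaly test in the spirit described above; the window size, threshold and z-score formulation are illustrative rather than the study's exact parameters.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_anomaly_map(ndvi, win=11, k=1.5):
        """Compare each pixel's seasonal integrated NDVI with the mean and
        spread of its surroundings and flag it as +1 (positive anomaly),
        -1 (negative anomaly) or 0 (normal)."""
        ndvi = ndvi.astype(float)
        mean = uniform_filter(ndvi, win)
        mean_sq = uniform_filter(ndvi * ndvi, win)
        std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0)) + 1e-9
        z = (ndvi - mean) / std
        return np.where(z > k, 1, np.where(z < -k, -1, 0))

    def anomaly_frequency(yearly_ndvi_stack):
        """Count, per pixel, how many years the pixel was flagged as anomalous."""
        flags = np.stack([local_anomaly_map(y) != 0 for y in yearly_ndvi_stack])
        return flags.sum(axis=0)
    ```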

  4. Up Scalable Full Colour Plasmonic Pixels with Controllable Hue, Brightness and Saturation.

    PubMed

    Mudachathi, Renilkumar; Tanaka, Takuo

    2017-04-26

    It has long been the interest of scientists to develop an ink-free colour printing technique using nanostructured materials, inspired by the brilliant colours found in many creatures like butterflies and peacocks. Recently, isolated metal nanostructures exhibiting preferential light absorption and scattering have been explored as a promising candidate for this emerging field. Applying such structures in practical use, however, demands the production of individual colours with distinct reflective peaks, tunable across the visible wavelength region, combined with controllable colour attributes and economically feasible fabrication. Herein, we present a simple yet efficient colour printing approach employing sub-micrometer scale plasmonic pixels of a single-constituent metal structure which supports near-unity broadband light absorption at two distinct wavelengths, facilitating the creation of saturated colours. The dependence of these resonances on two different parameters of the same pixel enables controllable colour attributes such as hue, brightness and saturation across the visible spectrum. The linear dependence of colour attributes on the pixel parameters eases automation, which, combined with the use of inexpensive and stable aluminum as the functional material, will make this colour design strategy relevant for use in various commercial applications like printing micro images for security purposes, consumer product colouration and functionalized decoration, to name a few.

  5. Measuring Filament Orientation: A New Quantitative, Local Approach

    NASA Astrophysics Data System (ADS)

    Green, C.-E.; Dawson, J. R.; Cunningham, M. R.; Jones, P. A.; Novak, G.; Fissel, L. M.

    2017-09-01

    The relative orientation between filamentary structures in molecular clouds and the ambient magnetic field provides insight into filament formation and stability. To calculate the relative orientation, a measurement of filament orientation is first required. We propose a new method to calculate the orientation of the one-pixel-wide filament skeleton that is output by filament identification algorithms such as filfinder. We derive the local filament orientation from the direction of the intensity gradient in the skeleton image using the Sobel filter and a few simple post-processing steps. We call this the “Sobel-gradient method.” The resulting filament orientation map can be compared quantitatively on a local scale with the magnetic field orientation map to then find the relative orientation of the filament with respect to the magnetic field at each point along the filament. It can also be used for constructing radial profiles for filament width fitting. The proposed method facilitates automation in analyses of filament skeletons, which is imperative in this era of “big data.”
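
    A hedged sketch of the Sobel-gradient idea; because the gradient vanishes exactly on a symmetric one-pixel ridge, the snippet takes the strongest gradient found in each skeleton pixel's 3 × 3 neighbourhood, which is an implementation assumption, and the paper's post-processing steps are omitted.

    ```python
    import numpy as np
    from scipy.ndimage import sobel

    def skeleton_orientation(skeleton):
        """Sobel gradients of the binary skeleton image point across the
        filament and are strongest just beside the one-pixel ridge, so each
        skeleton pixel takes the orientation perpendicular to the
        largest-magnitude gradient in its 3x3 neighbourhood."""
        sk = skeleton.astype(float)
        gy = sobel(sk, axis=0)
        gx = sobel(sk, axis=1)
        mag = np.hypot(gx, gy)
        grad_angle = np.mod(np.arctan2(gy, gx), np.pi)   # across-filament direction
        h, w = sk.shape
        out = np.full((h, w), np.nan)
        for y, x in zip(*np.nonzero(skeleton)):
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            j = np.argmax(mag[y0:y1, x0:x1])
            across = grad_angle[y0:y1, x0:x1].ravel()[j]
            out[y, x] = np.mod(across + np.pi / 2, np.pi)  # along-filament orientation
        return out
    ```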

  6. Quantification and visualization of relative local ventilation on dynamic chest radiographs

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Okazaki, Nobuo; Kobayashi, Takeshi; Nakayama, Kazuya; Matsui, Takeshi; Hayashi, Norio; Matsui, Osamu

    2006-03-01

    A recently developed dynamic flat-panel detector (FPD) with a large field of view makes it possible to obtain breathing chest radiographs, which provide information on respiratory kinetics. This study was performed to investigate the ability of dynamic chest radiography using an FPD to quantify relative ventilation according to respiratory physiology. We also report the results of a primary clinical study and describe the possibility of clinical use of our method. Dynamic chest radiographs of 12 subjects, including abnormal subjects, during respiration were obtained using a modified FPD system (30 frames in 10 seconds). Imaging was performed in three different positions (standing, and right and left decubitus positions) to change the distribution of local ventilation by changing the effect of the lung's own gravity in each area. The distance from the lung apex to the diaphragm (abbr. DLD) was measured by an edge detection technique for use as an index of respiratory phase. We measured pixel values in each lung area and calculated correlation coefficients with DLD. Differences in the pixel values between the maximum inspiratory and expiratory frames were calculated, and the trend of the distribution was evaluated by two-way analysis of variance. The pixel value in each lung area was strongly associated with respiratory phase, and its time variation and distribution were consistent with known properties of respiratory physiology. Dynamic chest radiography using an FPD combined with our computerized methods was capable of quantifying the relative amount of ventilation during respiration, and of detecting regional differences in ventilation. In the subjects with emphysema, areas with decreased respiratory changes in pixel value were consistent with the areas with air trapping. This method is expected to be a useful novel diagnostic imaging method for supporting the diagnosis and follow-up of pulmonary disease presenting with abnormalities in local ventilation.

  7. Directional x-ray dark-field imaging of strongly ordered systems

    NASA Astrophysics Data System (ADS)

    Jensen, Torben Haugaard; Bech, Martin; Zanette, Irene; Weitkamp, Timm; David, Christian; Deyhle, Hans; Rutishauser, Simon; Reznikova, Elena; Mohr, Jürgen; Feidenhans'L, Robert; Pfeiffer, Franz

    2010-12-01

    Recently a novel grating based x-ray imaging approach called directional x-ray dark-field imaging was introduced. Directional x-ray dark-field imaging yields information about the local texture of structures smaller than the pixel size of the imaging system. In this work we extend the theoretical description and data processing schemes for directional dark-field imaging to strongly scattering systems, which could not be described previously. We develop a simple scattering model to account for these recent observations and subsequently demonstrate the model using experimental data. The experimental data includes directional dark-field images of polypropylene fibers and a human tooth slice.

  8. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    PubMed

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.

  9. Machine learning to analyze images of shocked materials for precise and accurate measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dresselhaus-Cooper, Leora; Howard, Marylesa; Hock, Margaret C.

    A supervised machine learning algorithm, called locally adaptive discriminant analysis (LADA), has been developed to locate boundaries between identifiable image features that have varying intensities. LADA is an adaptation of image segmentation, which includes techniques that find the positions of image features (classes) using statistical intensity distributions for each class in the image. In order to place a pixel in the proper class, LADA considers the intensity at that pixel and the distribution of intensities in local (nearby) pixels. This paper presents the use of LADA to provide, with statistical uncertainties, the positions and shapes of features within ultrafast images of shock waves. We demonstrate the ability to locate image features including crystals, density changes associated with shock waves, and material jetting caused by shock waves. This algorithm can analyze images that exhibit a wide range of physical phenomena because it does not rely on comparison to a model. LADA enables analysis of images from shock physics with statistical rigor independent of underlying models or simulations.

  10. Unified framework for automated iris segmentation using distantly acquired face images.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2012-09-01

    Remote human identification using iris biometrics has important civilian and surveillance applications, and its success requires the development of robust segmentation algorithms to automatically extract the iris region. This paper presents a new iris segmentation framework which can robustly segment iris images acquired using near infrared or visible illumination. The proposed approach exploits multiple higher order local pixel dependencies to robustly classify the eye region pixels into iris or noniris regions. Face and eye detection modules have been incorporated in the unified framework to automatically provide the localized eye region from the facial image for iris segmentation. We develop a robust postprocessing algorithm to effectively mitigate the noisy pixels caused by misclassification. Experimental results presented in this paper suggest significant improvement in the average segmentation errors over the previously proposed approaches, i.e., 47.5%, 34.1%, and 32.6% on the UBIRIS.v2, FRGC, and CASIA.v4 at-a-distance databases, respectively. The usefulness of the proposed approach is also ascertained from recognition experiments on three different publicly available databases.

  11. Glue detection based on teaching points constraint and tracking model of pixel convolution

    NASA Astrophysics Data System (ADS)

    Geng, Lei; Ma, Xiao; Xiao, Zhitao; Wang, Wen

    2018-01-01

    On-line glue detection based on machine vision is significant for rust protection and strengthening in car production. Shadow stripes caused by reflected light and the unevenness of the inside front cover of the car reduce the accuracy of glue detection. In this paper, we propose an effective algorithm to distinguish the edges of the glue from the shadow stripes. Teaching points are utilized to calculate the slope between each pair of adjacent points. A tracking model based on pixel convolution along the motion direction is then designed to segment several local rectangular regions, whose height is set by a distance parameter. Pixel convolution along the motion direction is used to extract the edges of the glue in each local rectangular region. A dataset with varying illumination and stripes of differing shape complexity, comprising 500 thousand images captured from the camera of the glue gun machine, is used to evaluate the proposed method. Experimental results demonstrate that the proposed method detects the glue edges accurately and that the shadow stripes are distinguished and removed effectively. Our method achieves 99.9% accuracy on this image dataset.

  12. Learning Compact Binary Face Descriptor for Face Recognition.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie

    2015-10-01

    Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
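    The first step described above, extracting pixel difference vectors from local patches, can be sketched as follows; the window radius is an illustrative assumption, and the subsequent binary-code learning (variance maximization, quantization-loss minimization, and pooling) is omitted.

```python
import numpy as np

def pixel_difference_vectors(image, radius=1):
    """Extract pixel difference vectors (PDVs): for every interior pixel,
    the differences between that pixel and each neighbour in a
    (2*radius+1) x (2*radius+1) window, centre excluded.

    Returns an array of shape (num_pixels, num_neighbours).
    """
    img = image.astype(np.float64)
    h, w = img.shape
    offsets = [(dy, dx)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if not (dy == 0 and dx == 0)]
    pdvs = []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            centre = img[y, x]
            pdvs.append([img[y + dy, x + dx] - centre for dy, dx in offsets])
    return np.asarray(pdvs)

# Toy usage on a random 16x16 "face patch".
patch = np.random.rand(16, 16)
pdv = pixel_difference_vectors(patch)   # shape (196, 8) for radius=1
```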

  13. A new imaging technique on strength and phase of pulsatile tissue-motion in brightness-mode ultrasonogram

    NASA Astrophysics Data System (ADS)

    Fukuzawa, Masayuki; Yamada, Masayoshi; Nakamori, Nobuyuki; Kitsunezuka, Yoshiki

    2007-03-01

    A new imaging technique has been developed for observing both the strength and phase of pulsatile tissue-motion in a movie of brightness-mode ultrasonogram. The pulsatile tissue-motion is determined by evaluating the heartbeat-frequency component in the Fourier transform of the series of pixel values as a function of time at each pixel in a movie of the ultrasonogram (640×480 pixels/frame, 8 bit/pixel, 33 ms/frame) taken by a conventional ultrasonograph apparatus (ATL HDI5000). In order to visualize both the strength and the phase of the pulsatile tissue-motion, we propose a pulsatile-phase image that is obtained by superimposing, on the original ultrasonogram, a color gradation proportional to the motion phase only at pixels where the motion strength exceeds a proper threshold. The pulsatile-phase image obtained from a cranial ultrasonogram of a normal neonate clearly reveals that the motion region agrees well with the anatomical shape and position of the middle cerebral artery and the corpus callosum. The motion phase fluctuates along the shape of the arteries, revealing local obstruction of blood flow. The pulsatile-phase images of neonates with asphyxia at birth reveal a decrease of the motion region and an increase of the phase fluctuation due to the weakness and local disturbance of blood flow, which is useful for pediatric diagnosis.
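    A minimal sketch of the per-pixel Fourier analysis described above is given below: the image sequence is Fourier-transformed along time at every pixel, the bin nearest an assumed heartbeat frequency supplies the motion strength (magnitude) and phase (argument), and the phase is kept only where the strength exceeds a threshold. The frame interval, heartbeat frequency, and threshold are illustrative assumptions, not values from the study.

```python
import numpy as np

def pulsatile_phase_image(frames, frame_interval_s=0.033, heartbeat_hz=2.0,
                          strength_threshold=None):
    """Per-pixel strength and phase of the heartbeat-frequency component.

    frames: array of shape (n_frames, height, width).
    Returns (strength, phase, mask); phase is masked where strength is
    below the threshold (default: 95th percentile of strength).
    """
    n = frames.shape[0]
    spectrum = np.fft.rfft(frames.astype(np.float64), axis=0)
    freqs = np.fft.rfftfreq(n, d=frame_interval_s)
    k = np.argmin(np.abs(freqs - heartbeat_hz))   # bin nearest the heartbeat
    component = spectrum[k]
    strength = np.abs(component)
    phase = np.angle(component)
    if strength_threshold is None:
        strength_threshold = np.percentile(strength, 95)
    mask = strength > strength_threshold
    return strength, np.where(mask, phase, np.nan), mask

# Toy movie: 128 frames of 64x64 pixels with a blob pulsating at 2 Hz.
t = np.arange(128) * 0.033
movie = np.random.rand(128, 64, 64) * 5
movie[:, 20:30, 20:30] += 50 * np.sin(2 * np.pi * 2.0 * t)[:, None, None]
strength, phase, mask = pulsatile_phase_image(movie)
```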

  14. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.

  15. Local Oscillator Sub-Systems for Array Receivers in the 1-3 THz Range

    NASA Technical Reports Server (NTRS)

    Mehdi, Imran; Siles, Jose V.; Maestrini, Alain; Lin, Robert; Lee, Choonsup; Schlecht, Erich; Chattopadhyay, Goutam

    2012-01-01

    Recent results from the Heterodyne Instrument for the Far-Infrared (HIFI) on the Herschel Space Telescope have confirmed the usefulness of high resolution spectroscopic data for a better understanding of our Universe. This paper will explore the current status of tunable local oscillator sources with emphasis on building a multi-pixel LO subsystem for the scientifically important CII line around 1908 GHz. Recent results have shown that over 50 microwatts of output power at 1.9 THz are possible with an optimized single pixel LO chain. These power levels are now sufficient to pump array receivers in this frequency range. Further power enhancement can be obtained by cooling the chain to 120 K or by utilizing in-phase power combining technology.

  16. Experimental investigation on aero-optical aberration of shock wave/boundary layer interactions

    NASA Astrophysics Data System (ADS)

    Ding, Haolin; Yi, Shihe; Fu, Jia; He, Lin

    2016-10-01

    After passing through a flow field that includes expansions, shock waves, boundary layers, etc., an optical wave is distorted by fluctuations in the density field. Interactions between a laminar or turbulent boundary layer and a shock wave contain a large number of complex flow structures, which provide a setting for studying how the different structures of the complex flow field influence aero-optical aberrations. Interactions between laminar/turbulent boundary layers and a shock wave are investigated in a Mach 3.0 supersonic wind tunnel using a nanoparticle-tracer planar laser scattering (NPLS) system. Boundary layer separation and attachment, induced compression waves, the induced shock wave, the expansion fan and the boundary layer are captured in the NPLS images, with a spatial resolution of 44.15 μm/pixel and a time resolution of 6 ns. From the NPLS images, density fields with high spatial-temporal resolution are obtained by flow image calibration, and the optical path difference (OPD) fluctuations of an initially planar 532 nm wavefront are then calculated using ray-tracing theory. According to the different flow structures in the field, four regions are selected: (1) Y = 692-600 pixels; (2) Y = 600-400 pixels; (3) Y = 400-268 pixels; (4) Y = 268-0 pixels. The aero-optical effects of the different flow structures are quantitatively analyzed. The results indicate that compressive waves such as the incident shock wave and the induced shock wave raise the density and thereby lift the OPD curve; since this kind of shock is fixed in spatial position and intensity, the aero-optical distortion it induces can be regarded as constant. The induced shock waves are generated by the coherent large-scale vortex structures in the shock wave/turbulent boundary layer interaction, and their unsteady character follows from the unsteady character of those structures; the spatial position and intensity of the induced shock wave are fixed in the shock wave/turbulent boundary layer interaction. The boundary-layer aero-optical effects are likewise induced by the coherent large-scale vortex structures, which result in the fluctuation of the OPD.

  17. Spatial scaling of core and dominant forest cover in the Upper Mississippi and Illinois River floodplains, USA

    USGS Publications Warehouse

    De Jager, Nathan R.; Rohweder, Jason J.

    2011-01-01

    Different organisms respond to spatial structure in different terms and across different spatial scales. As a consequence, efforts to reverse habitat loss and fragmentation through strategic habitat restoration ought to account for the different habitat density and scale requirements of various taxonomic groups. Here, we estimated the local density of floodplain forest surrounding each of ~20 million 10-m forested pixels of the Upper Mississippi and Illinois River floodplains by using moving windows of multiple sizes (1–100 ha). We further identified forest pixels that met two local density thresholds: 'core' forest pixels were nested in a 100% (unfragmented) forested window and 'dominant' forest pixels were those nested in a >60% forested window. Finally, we fit two scaling functions to declines in the proportion of forest cover meeting these criteria with increasing window length for 107 management-relevant focal areas: a power function (i.e. self-similar, fractal-like scaling) and an exponential decay function (fractal dimension depends on scale). The exponential decay function consistently explained more variation in changes to the proportion of forest meeting both the 'core' and 'dominant' criteria with increasing window length than did the power function, suggesting that elevation, soil type, hydrology, and human land use constrain these forest types to a limited range of scales. To examine these scales, we transformed the decay constants to measures of the distance at which the probability of forest meeting the 'core' and 'dominant' criteria was cut in half (S1/2, in meters). S1/2 for core forest was typically between ~55 and ~95 m depending on location along the river, indicating that core forest cover is restricted to extremely fine scales. In contrast, half of all dominant forest cover was lost at scales that were typically between ~525 and 750 m, but S1/2 was as long as 1,800 m. S1/2 is a simple measure that (1) condenses information derived from multi-scale analyses, (2) allows for comparisons of the amount of forest habitat available to species with different habitat density and scale requirements, and (3) can be used as an index of the spatial continuity of habitat types that do not scale fractally.
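    A minimal sketch of the decay-constant analysis, under the assumption of a simple exponential form P(L) = P0·exp(−kL) for the proportion of pixels meeting a density criterion at window length L, is shown below; the data values are illustrative, and S1/2 is recovered as ln(2)/k.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(length_m, p0, k):
    """Proportion of forest meeting a density criterion vs. window length."""
    return p0 * np.exp(-k * length_m)

# Illustrative data: window side length (m) and proportion of 'core' pixels.
window_length = np.array([100, 200, 300, 500, 700, 1000], dtype=float)
proportion = np.array([0.60, 0.30, 0.16, 0.05, 0.015, 0.003])

(p0_hat, k_hat), _ = curve_fit(exp_decay, window_length, proportion,
                               p0=(1.0, 0.005))

# Distance at which the proportion is cut in half relative to p0.
s_half = np.log(2) / k_hat
print(f"decay constant k = {k_hat:.4f} 1/m, S_1/2 = {s_half:.0f} m")
```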

  18. Scattering property based contextual PolSAR speckle filter

    NASA Astrophysics Data System (ADS)

    Mullissa, Adugna G.; Tolpekin, Valentyn; Stein, Alfred

    2017-12-01

    Reliability of the scattering model based polarimetric SAR (PolSAR) speckle filter depends upon the accurate decomposition and classification of the scattering mechanisms. This paper presents an improved scattering property based contextual speckle filter based upon an iterative classification of the scattering mechanisms. It applies a Cloude-Pottier eigenvalue-eigenvector decomposition and a fuzzy H/α classification to determine the scattering mechanisms on a pre-estimate of the coherency matrix. The H/α classification identifies pixels with homogeneous scattering properties. A coarse pixel selection rule groups pixels that are either single bounce, double bounce or volume scatterers. A fine pixel selection rule is applied to pixels within each canonical scattering mechanism. We filter the PolSAR data and depending on the type of image scene (urban or rural) use either the coarse or fine pixel selection rule. Iterative refinement of the Wishart H/α classification reduces the speckle in the PolSAR data. Effectiveness of this new filter is demonstrated by using both simulated and real PolSAR data. It is compared with the refined Lee filter, the scattering model based filter and the non-local means filter. The study concludes that the proposed filter compares favorably with other polarimetric speckle filters in preserving polarimetric information, point scatterers and subtle features in PolSAR data.

  19. The Area Coverage of Geophysical Fields as a Function of Sensor Field-of-View

    NASA Technical Reports Server (NTRS)

    Key, Jeffrey R.

    1994-01-01

    In many remote sensing studies of geophysical fields such as clouds, land cover, or sea ice characteristics, the fractional area coverage of the field in an image is estimated as the proportion of pixels that have the characteristic of interest (i.e., are part of the field) as determined by some thresholding operation. The effect of sensor field-of-view on this estimate is examined by modeling the unknown distribution of subpixel area fraction with the beta distribution, whose two parameters depend upon the true fractional area coverage, the pixel size, and the spatial structure of the geophysical field. Since it is often not possible to relate digital number, reflectance, or temperature to subpixel area fraction, the statistical models described are used to determine the effect of pixel size and thresholding operations on the estimate of area fraction for hypothetical geophysical fields. Examples are given for simulated cumuliform clouds and linear openings in sea ice, whose spatial structures are described by an exponential autocovariance function. It is shown that the rate and direction of change in total area fraction with changing pixel size depends on the true area fraction, the spatial structure, and the thresholding operation used.
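    A small sketch of the statistical model described above: if the subpixel area fraction follows a Beta(a, b) distribution, the area fraction estimated by per-pixel thresholding is the probability that the fraction exceeds the threshold, which can differ from the true coverage a/(a+b). The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

def estimated_area_fraction(a, b, threshold=0.5):
    """Area fraction estimated by per-pixel thresholding when the subpixel
    area fraction follows a Beta(a, b) distribution.

    The estimate is the probability that a pixel's subpixel fraction
    exceeds the threshold; the true area fraction is the mean of the
    distribution, a / (a + b).
    """
    estimate = 1.0 - stats.beta.cdf(threshold, a, b)
    true_fraction = a / (a + b)
    return estimate, true_fraction

# Illustrative: the same true coverage (0.3), but increasingly coarse pixels
# push more probability mass toward intermediate fractions (smaller a + b).
for a, b in [(6.0, 14.0), (1.5, 3.5), (0.3, 0.7)]:
    est, true_f = estimated_area_fraction(a, b, threshold=0.5)
    print(f"Beta({a},{b}): true={true_f:.2f}  thresholded estimate={est:.2f}")
```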

  20. Development of Gentle Slope Light Guide Structure in a 3.4 μm Pixel Pitch Global Shutter CMOS Image Sensor with Multiple Accumulation Shutter Technology.

    PubMed

    Sekine, Hiroshi; Kobayashi, Masahiro; Onuki, Yusuke; Kawabata, Kazunari; Tsuboi, Toshiki; Matsuno, Yasushi; Takahashi, Hidekazu; Inoue, Shunsuke; Ichikawa, Takeshi

    2017-12-09

    CMOS image sensors (CISs) with global shutter (GS) function are strongly required in order to avoid image degradation. However, CISs with GS function have generally been inferior to the rolling shutter (RS) CIS in performance, because they have more components. This problem is remarkable at small pixel pitch. The newly developed 3.4 µm pitch GS CIS solves this problem by using multiple accumulation shutter technology and the gentle slope light guide structure. As a result, the developed GS pixel achieves 1.8 e⁻ temporal noise and 16,200 e⁻ full well capacity with charge domain memory in 120 fps operation. The sensitivity and parasitic light sensitivity are 28,000 e⁻/lx·s and -89 dB, respectively. Moreover, the incident light angle dependence of sensitivity and parasitic light sensitivity are improved by the gentle slope light guide structure.

  1. Generalized pixel profiling and comparative segmentation with application to arteriovenous malformation segmentation.

    PubMed

    Babin, D; Pižurica, A; Bellens, R; De Bock, J; Shang, Y; Goossens, B; Vansteenkiste, E; Philips, W

    2012-07-01

    Extraction of structural and geometric information from 3-D images of blood vessels is a well known and widely addressed segmentation problem. The segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, with a special application in diagnostics and surgery on arteriovenous malformations (AVM). However, techniques addressing the segmentation of the inner structure of AVMs are rare. In this work we present a novel method of pixel profiling with application to the segmentation of 3-D angiography AVM images. Our algorithm stands out in situations with low resolution images and high variability of pixel intensity. Another advantage of our method is that the parameters are set automatically, requiring little manual user intervention. The results on phantoms and real data demonstrate its effectiveness and potential for fine delineation of AVM structure. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Liquid-crystal projection image depixelization by spatial phase scrambling

    NASA Astrophysics Data System (ADS)

    Yang, Xiangyang; Jutamulia, Suganda; Li, Nan

    1996-08-01

    A technique that removes the pixel structure by scrambling the relative phases among multiple spatial spectra is described. Because of the pixel structure of the liquid-crystal-display (LCD) panel, multiple spectra are generated at the Fourier-spectrum plane (usually at the back focal plane of the imaging lens). A transparent phase mask is placed at the Fourier-spectrum plane such that each spectral order is modulated by one of the subareas of the phase mask, and the phase delay resulting from each pair of subareas is longer than the coherence length of the light source, which is approximately 1 μm for the wideband white light sources used in most LCDs. Such a phase-scrambling technique eliminates the coherence between different spectral orders; therefore, the reconstructed images from the multiple spectra superimpose incoherently, and the pixel structure is not observed in the projection image.

  3. Signal dependence of inter-pixel capacitance in hybridized HgCdTe H2RG arrays for use in James Webb space telescope's NIRcam

    NASA Astrophysics Data System (ADS)

    Donlon, Kevan; Ninkov, Zoran; Baum, Stefi

    2016-08-01

    Interpixel capacitance (IPC) is a deterministic electronic coupling by which signal generated in one pixel is measured in neighboring pixels. Examination of dark frames from test NIRcam arrays corroborates earlier results and simulations illustrating a signal dependent coupling. When the signal on an individual pixel is larger, the fractional coupling to nearest neighbors is smaller than when the signal is lower. Frames from test arrays indicate a drop in average coupling from approximately 1.0% at low signals down to approximately 0.65% at high signals, depending on the particular array in question. The photometric ramifications of this non-uniformity are not fully understood. This non-uniformity introduces a non-linearity in the current mathematical model for IPC coupling. IPC coupling has been mathematically formalized as convolution by a blur kernel. Signal dependence requires that the blur kernel be locally defined as a function of signal intensity. Through application of a signal dependent coupling kernel, the IPC coupling can be modeled computationally. This method allows for simultaneous knowledge of the intrinsic parameters of the image scene, the result of applying a constant IPC, and the result of a signal dependent IPC. In the age of sub-pixel precision in astronomy, these effects must be properly understood and accounted for in order for the data to accurately represent the object of observation. Implementation of this method is done through Python-scripted processing of images. The introduction of IPC into simulated frames is accomplished through convolution of the image with a blur kernel whose parameters are themselves locally defined functions of the image. These techniques can be used to enhance the data processing pipeline for NIRcam.
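    A hedged sketch of the convolution-style processing described above is given below. The linear dependence of the nearest-neighbour coupling fraction on signal (interpolating between roughly 1.0% at low signal and 0.65% near full well, the values quoted above) and the full-well level are assumptions for illustration only; the actual kernel model used for the NIRcam pipeline is not reproduced.

```python
import numpy as np

def coupling_fraction(signal, low=0.010, high=0.0065, full_well=80000.0):
    """Hypothetical signal-dependent nearest-neighbour coupling: 1.0% at
    zero signal falling linearly to 0.65% at an assumed full well."""
    return low + (high - low) * np.clip(signal / full_well, 0.0, 1.0)

def apply_signal_dependent_ipc(image):
    """Redistribute each pixel's signal with its own locally defined
    3x3 IPC kernel (nearest neighbours only)."""
    img = image.astype(np.float64)
    out = np.zeros_like(img)
    h, w = img.shape
    alpha = coupling_fraction(img)
    for y in range(h):
        for x in range(w):
            a = alpha[y, x]
            out[y, x] += img[y, x] * (1.0 - 4.0 * a)
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    out[ny, nx] += img[y, x] * a
    return out

# Point-source test: a bright and a faint pixel couple by different amounts.
frame = np.zeros((5, 5))
frame[1, 1], frame[3, 3] = 70000.0, 1000.0
coupled = apply_signal_dependent_ipc(frame)
```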

  4. Simultaneous measurement of cerebral blood flow and mRNA signals: pixel-based inter-modality correlational analysis.

    PubMed

    Zhao, W; Busto, R; Truettner, J; Ginsberg, M D

    2001-07-30

    The analysis of pixel-based relationships between local cerebral blood flow (LCBF) and mRNA expression can reveal important insights into brain function. Traditionally, LCBF and in situ hybridization studies for genes of interest have been analyzed in separate series. To overcome this limitation and to increase the power of statistical analysis, this study focused on developing a double-label method to measure local cerebral blood flow (LCBF) and gene expressions simultaneously by means of a dual-autoradiography procedure. A 14C-iodoantipyrine autoradiographic LCBF study was first performed. Serial brain sections (12 in this study) were obtained at multiple coronal levels and were processed in the conventional manner to yield quantitative LCBF images. Two replicate sections at each bregma level were then used for in situ hybridization. To eliminate the 14C-iodoantipyrine from these sections, a chloroform-washout procedure was first performed. The sections were then processed for in situ hybridization autoradiography for the probes of interest. This method was tested in Wistar rats subjected to 12 min of global forebrain ischemia by two-vessel occlusion plus hypotension, followed by 2 or 6 h of reperfusion (n=4-6 per group). LCBF and in situ hybridization images for heat shock protein 70 (HSP70) were generated for each rat, aligned by disparity analysis, and analyzed on a pixel-by-pixel basis. This method yielded detailed inter-modality correlation between LCBF and HSP70 mRNA expressions. The advantages of this method include reducing the number of experimental animals by one-half; and providing accurate pixel-based correlations between different modalities in the same animals, thus enabling paired statistical analyses. This method can be extended to permit correlation of LCBF with the expression of multiple genes of interest.

  5. Impulsive noise suppression in color images based on the geodesic digital paths

    NASA Astrophysics Data System (ADS)

    Smolka, Bogdan; Cyganek, Boguslaw

    2015-02-01

    In the paper a novel filtering design based on the concept of exploration of the pixel neighborhood by digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in the hybrid spatial-color space. Then, an optimal path of minimum total cost, leading from pixels of the window's boundary to its center is determined. The cost of an optimal path serves as a degree of similarity of the central pixel to the samples from the local processing window. If a pixel is an outlier, then all the paths starting from the window's boundary will have high costs and the minimum one will also be high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed using the information on the minimum cost assigned to each image pixel. So, first the costs of optimal paths are used to build a smoothed image and in the second step the minimum cost of the central pixel is utilized for construction of the weights of a soft-switching scheme. The experiments performed on a set of standard color images, revealed that the efficiency of the proposed algorithm is superior to the state-of-the-art filtering techniques in terms of the objective restoration quality measures, especially for high noise contamination ratios. The proposed filter, due to its low computational complexity, can be applied for real time image denoising and also for the enhancement of video streams.
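    A simplified sketch of the minimum-cost digital-path computation is shown below: Dijkstra's algorithm propagates costs from all boundary pixels of the window to its centre, with a transition cost combining colour distance and spatial distance. The cost weighting, window size, and the omission of the weighted-mean output stage are simplifying assumptions.

```python
import heapq
import numpy as np

def min_path_cost_to_centre(window, beta=1.0):
    """Minimum total cost of a digital path from the window boundary to the
    window centre.  window: (n, n, 3) colour patch, n odd.  The cost of a
    transition between 8-connected pixels is the Euclidean colour distance
    plus beta times the spatial distance (a simple hybrid spatial-colour
    cost)."""
    n = window.shape[0]
    c = n // 2
    win = window.astype(np.float64)
    dist = np.full((n, n), np.inf)
    heap = []
    # All boundary pixels are sources with zero starting cost.
    for y in range(n):
        for x in range(n):
            if y in (0, n - 1) or x in (0, n - 1):
                dist[y, x] = 0.0
                heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        if (y, x) == (c, c):
            return d
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < n and 0 <= nx < n:
                    step = (np.linalg.norm(win[ny, nx] - win[y, x])
                            + beta * np.hypot(dy, dx))
                    if d + step < dist[ny, nx]:
                        dist[ny, nx] = d + step
                        heapq.heappush(heap, (d + step, ny, nx))
    return dist[c, c]

# An impulse at the centre of a flat patch yields a high minimum cost.
patch = np.full((5, 5, 3), 128.0)
patch[2, 2] = (255.0, 0.0, 0.0)
outlier_score = min_path_cost_to_centre(patch)
```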

  6. Contrast Invariant Interest Point Detection by Zero-Norm LoG Filter.

    PubMed

    Zhenwei Miao; Xudong Jiang; Kim-Hui Yap

    2016-01-01

    The Laplacian of Gaussian (LoG) filter is widely used in interest point detection. However, low-contrast image structures, though stable and significant, are often submerged by the high-contrast ones in the response image of the LoG filter, and hence are difficult to detect. To solve this problem, we derive a generalized LoG filter, and propose a zero-norm LoG filter. The response of the zero-norm LoG filter is proportional to the weighted number of bright/dark pixels in a local region, which makes this filter invariant to the image contrast. Based on the zero-norm LoG filter, we develop an interest point detector to extract local structures from images. Compared with contrast dependent detectors, such as the popular scale invariant feature transform detector, the proposed detector is robust to illumination changes and abrupt variations of images. Experiments on benchmark databases demonstrate the superior performance of the proposed zero-norm LoG detector in terms of the repeatability and matching score of the detected points as well as the image recognition rate under different conditions.

  7. Simulation of ultrasonic pulse propagation, distortion, and attenuation in the human chest wall.

    PubMed

    Mast, T D; Hinkelman, L M; Metlay, L A; Orr, M J; Waag, R C

    1999-12-01

    A finite-difference time-domain model for ultrasonic pulse propagation through soft tissue has been extended to incorporate absorption effects as well as longitudinal-wave propagation in cartilage and bone. This extended model has been used to simulate ultrasonic propagation through anatomically detailed representations of chest wall structure. The inhomogeneous chest wall tissue is represented by two-dimensional maps determined by staining chest wall cross sections to distinguish between tissue types, digitally scanning the stained cross sections, and mapping each pixel of the scanned images to fat, muscle, connective tissue, cartilage, or bone. Each pixel of the tissue map is then assigned a sound speed, density, and absorption value determined from published measurements and assumed to be representative of the local tissue type. Computational results for energy level fluctuations and arrival time fluctuations show qualitative agreement with measurements performed on the same specimens, but show significantly less waveform distortion than measurements. Visualization of simulated tissue-ultrasound interactions in the chest wall shows possible mechanisms for image aberration in echocardiography, including effects associated with reflection and diffraction caused by rib structures. A comparison of distortion effects for varying pulse center frequencies shows that, for soft tissue paths through the chest wall, energy level and waveform distortion increase markedly with rising ultrasonic frequency and that arrival-time fluctuations increase to a lesser degree.

  8. Wavelength-scale light concentrator made by direct 3D laser writing of polymer metamaterials.

    PubMed

    Moughames, J; Jradi, S; Chan, T M; Akil, S; Battie, Y; Naciri, A En; Herro, Z; Guenneau, S; Enoch, S; Joly, L; Cousin, J; Bruyant, A

    2016-10-04

    We report on the realization of functional infrared light concentrators based on a thick layer of air-polymer metamaterial with controlled pore size gradients. The design features an optimum gradient index profile leading to light focusing in the Fresnel zone of the structures for two selected operating wavelength domains near 5.6 and 10.4 μm. The metamaterial, which consists of a thick polymer containing air holes with diameters ranging from λ/20 to λ/8, is made using a 3D lithography technique based on the two-photon polymerization of a homemade photopolymer. Infrared imaging of the structures reveals a tight focusing for both structures with a maximum local intensity increase by a factor of 2.5 for a concentrator volume of 1.5 λ³, slightly limited by the residual absorption of the selected polymer. Such porous and flat metamaterial structures offer interesting perspectives for increasing infrared detector performance at the pixel level for imaging or sensing applications.

  9. Wavelength-scale light concentrator made by direct 3D laser writing of polymer metamaterials

    PubMed Central

    Moughames, J.; Jradi, S.; Chan, T. M.; Akil, S.; Battie, Y.; Naciri, A. En; Herro, Z.; Guenneau, S.; Enoch, S.; Joly, L.; Cousin, J.; Bruyant, A.

    2016-01-01

    We report on the realization of functional infrared light concentrators based on a thick layer of air-polymer metamaterial with controlled pore size gradients. The design features an optimum gradient index profile leading to light focusing in the Fresnel zone of the structures for two selected operating wavelength domains near 5.6 and 10.4 μm. The metamaterial, which consists of a thick polymer containing air holes with diameters ranging from λ/20 to λ/8, is made using a 3D lithography technique based on the two-photon polymerization of a homemade photopolymer. Infrared imaging of the structures reveals a tight focusing for both structures with a maximum local intensity increase by a factor of 2.5 for a concentrator volume of 1.5 λ³, slightly limited by the residual absorption of the selected polymer. Such porous and flat metamaterial structures offer interesting perspectives for increasing infrared detector performance at the pixel level for imaging or sensing applications. PMID:27698476

  10. Novel Near-Lossless Compression Algorithm for Medical Sequence Images with Adaptive Block-Based Spatial Prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2016-12-01

    To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, this paper proposes a novel near-lossless compression algorithm based on adaptive spatial prediction for medical sequence images intended for possible diagnostic use. The proposed method employs adaptive block size-based spatial prediction to predict blocks directly in the spatial domain and a Lossless Hadamard Transform before quantization to improve the quality of reconstructed images. The block-based prediction breaks the pixel neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and the improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.

  11. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    PubMed

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instruction, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online.

  12. Use of local noise power spectrum and wavelet analysis in quantitative image quality assurance for EPIDs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Soyoung

    Purpose: To investigate the use of local noise power spectrum (NPS) to characterize image noise and wavelet analysis to isolate defective pixels and inter-subpanel flat-fielding artifacts for quantitative quality assurance (QA) of electronic portal imaging devices (EPIDs). Methods: A total of 93 image sets including custom-made bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Global quantitative metrics such as modulation transfer function (MTF), NPS, and detective quantum efficiency (DQE) were computed for each image set. Local NPS was also calculated for individual subpanels by sampling regions of interest within each subpanel of the EPID. The 1D NPS, obtained by radially averaging the 2D NPS, was fitted to a power-law function. The r-square value of the linear regression analysis was used as a singular metric to characterize the noise properties of individual subpanels of the EPID. The sensitivity of the local NPS was first compared with the global quantitative metrics using historical image sets. It was then compared with two commonly used commercial QA systems with images collected after applying two different EPID calibration methods (single-level gain and multilevel gain). To detect isolated defective pixels and inter-subpanel flat-fielding artifacts, the Haar wavelet transform was applied to the images. Results: Global quantitative metrics including MTF, NPS, and DQE showed little change over the period of data collection. On the contrary, a strong correlation between the local NPS (r-square values) and the variation of the EPID noise condition was observed. The local NPS analysis indicated image quality improvement, with the r-square values increasing from 0.80 ± 0.03 (before calibration) to 0.85 ± 0.03 (after single-level gain calibration) and to 0.96 ± 0.03 (after multilevel gain calibration), while the commercial QA systems failed to distinguish the image quality improvement between the two calibration methods. With wavelet analysis, defective pixels and inter-subpanel flat-fielding artifacts were clearly identified as spikes after thresholding the inversely transformed images. Conclusions: The proposed local NPS (r-square values) showed superior sensitivity to the noise level variations of individual subpanels compared with global quantitative metrics such as MTF, NPS, and DQE. Wavelet analysis was effective in detecting isolated defective pixels and inter-subpanel flat-fielding artifacts. The proposed methods are promising for the early detection of imaging artifacts of EPIDs.
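    One step of this analysis, radially averaging a 2D NPS and scoring how well the 1D curve follows a power law, can be sketched as below; the fit is done as a straight line in log-log space and its r-square is returned. The simulated ROI and the omission of per-subpanel sampling of the EPID are illustrative simplifications.

```python
import numpy as np

def nps_2d(roi, pixel_pitch=1.0):
    """2D noise power spectrum of a detrended ROI (flat-field patch)."""
    detrended = roi - roi.mean()
    ft = np.fft.fftshift(np.fft.fft2(detrended))
    return (np.abs(ft) ** 2) * pixel_pitch ** 2 / roi.size

def radial_average(nps):
    """Radially average a 2D NPS about its centre to get a 1D NPS."""
    h, w = nps.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=nps.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

def power_law_r_square(nps_1d):
    """R-square of a power-law fit NPS(f) = A * f**b, via a straight-line
    fit in log-log space (zero-frequency bin excluded)."""
    f = np.arange(1, len(nps_1d))
    logf, logp = np.log(f), np.log(nps_1d[1:])
    slope, intercept = np.polyfit(logf, logp, 1)
    residuals = logp - (slope * logf + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((logp - logp.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative subpanel ROI: white noise plus a low-frequency gain ripple.
roi = np.random.randn(128, 128) + 0.5 * np.sin(np.linspace(0, 8 * np.pi, 128))
r2 = power_law_r_square(radial_average(nps_2d(roi)))
```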

  13. Heterogeneity of Particle Deposition by Pixel Analysis of 2D Gamma Scintigraphy Images

    PubMed Central

    Xie, Miao; Zeman, Kirby; Hurd, Harry; Donaldson, Scott

    2015-01-01

    Background: Heterogeneity of inhaled particle deposition in airways disease may be a sensitive indicator of physiologic changes in the lungs. Using planar gamma scintigraphy, we developed new methods to locate and quantify regions of high (hot) and low (cold) particle deposition in the lungs. Methods: Initial deposition and 24 hour retention images were obtained from healthy (n=31) adult subjects and patients with mild cystic fibrosis lung disease (CF) (n=14) following inhalation of radiolabeled particles (Tc99m-sulfur colloid, 5.4 μm MMAD) under controlled breathing conditions. The initial deposition image of the right lung was normalized to (i.e., same median pixel value), and then divided by, a transmission (Tc99m) image in the same individual to obtain a pixel-by-pixel ratio image. Hot spots were defined where pixel values in the deposition image were greater than 2X those of the transmission, and cold spots as pixels where the deposition image was less than 0.5X of the transmission. The number ratio (NR) of the hot and cold pixels to total lung pixels, and the sum ratio (SR) of total counts in hot pixels to total lung counts were compared between healthy and CF subjects. Other traditional measures of regional particle deposition, nC/P and skew of the pixel count histogram distribution, were also compared. Results: The NR of cold spots was greater in mild CF, 0.221±0.047 (CF) vs. 0.186±0.038 (healthy) (p<0.005) and was significantly correlated with FEV1 %pred in the patients (R=−0.70). nC/P (central to peripheral count ratio), skew of the count histogram, and hot NR or SR were not different between the healthy and mild CF patients. Conclusions: These methods may provide more sensitive measures of airway function and localization of deposition that might be useful for assessing treatment efficacy in these patients. PMID:25393109
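    A minimal sketch of the hot/cold-spot metrics described above: the deposition image is normalized to the transmission image's median over the lung, the pixel-by-pixel ratio is thresholded at 2X (hot) and 0.5X (cold), and the number ratios and hot-spot sum ratio are computed. The toy images and mask are illustrative.

```python
import numpy as np

def hot_cold_metrics(deposition, transmission, lung_mask,
                     hot_factor=2.0, cold_factor=0.5):
    """Hot/cold-spot number ratios (NR) and hot-spot sum ratio (SR) from a
    deposition image and a transmission image over a lung mask."""
    dep = deposition.astype(np.float64)
    trans = transmission.astype(np.float64)
    # Normalize deposition so both images share the same median over the lung.
    dep = dep * (np.median(trans[lung_mask]) / np.median(dep[lung_mask]))
    ratio = np.where(trans > 0, dep / trans, 0.0)

    hot = lung_mask & (ratio > hot_factor)
    cold = lung_mask & (ratio < cold_factor)
    n_lung = lung_mask.sum()

    nr_hot = hot.sum() / n_lung
    nr_cold = cold.sum() / n_lung
    sr_hot = dep[hot].sum() / dep[lung_mask].sum()
    return nr_hot, nr_cold, sr_hot

# Toy example: uniform transmission, deposition with a hot and a cold patch.
trans = np.full((64, 64), 100.0)
dep = np.full((64, 64), 100.0)
dep[10:20, 10:20] = 400.0      # hot region
dep[40:50, 40:50] = 20.0       # cold region
mask = np.ones((64, 64), dtype=bool)
nr_hot, nr_cold, sr_hot = hot_cold_metrics(dep, trans, mask)
```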

  14. Development of high energy micro-tomography system at SPring-8

    NASA Astrophysics Data System (ADS)

    Uesugi, Kentaro; Hoshino, Masato

    2017-09-01

    A high energy X-ray micro-tomography system has been developed at BL20B2 in SPring-8. The available energy range is between 20 keV and 113 keV with a Si (511) double crystal monochromator. The system enables us to image large or heavy materials such as fossils and metals. The X-ray image detector consists of a visible-light conversion system and an sCMOS camera. The effective pixel size can be varied in discrete steps between 6.5 μm/pixel and 25.5 μm/pixel by changing a tandem lens. The format of the camera is 2048 × 2048 pixels. As a demonstration of the system, an alkaline battery and a nodule from Bolivia were imaged. The internal structure of the battery and a female mold of a trilobite were successfully imaged without breaking the specimens.

  15. Design and fabrication of AlGaInP-based micro-light-emitting-diode array devices

    NASA Astrophysics Data System (ADS)

    Bao, Xingzhen; Liang, Jingqiu; Liang, Zhongzhu; Wang, Weibiao; Tian, Chao; Qin, Yuxin; Lü, Jinguang

    2016-04-01

    An integrated high-resolution (individual pixel size 80 μm×80 μm) solid-state self-emissive active-matrix device with a programmed 320×240 micro-light-emitting-diode array structure was designed and fabricated on an AlGaInP semiconductor chip using micro-electro-mechanical systems, microstructure and semiconductor fabrication techniques. Pixels in a row share a p-electrode and pixels in a line share an n-electrode. We experimentally investigated how the GaAs substrate thickness affects the electrical and optical characteristics of the pixels. For a 150-μm-thick GaAs substrate, the single-pixel output power was 167.4 μW at 5 mA and increased to 326.4 μW when the current was increased to 10 mA. The device has potential applications in many fields.

  16. A low-noise CMOS pixel direct charge sensor, Topmetal-II-

    DOE PAGES

    An, Mangmang; Chen, Chufeng; Gao, Chaosong; ...

    2015-12-12

    In this paper, we report the design and characterization of a CMOS pixel direct charge sensor, Topmetal-II-, fabricated in a standard 0.35 μm CMOS Integrated Circuit process. The sensor utilizes exposed metal patches on top of each pixel to directly collect charge. Each pixel contains a low-noise charge-sensitive preamplifier to establish the analog signal and a discriminator with tunable threshold to generate hits. The analog signal from each pixel is accessible through time-shared multiplexing over the entire array. Hits are read out digitally through a column-based priority logic structure. Tests show that the sensor achieved a <15 e⁻ analog noise and a 200 e⁻ minimum threshold for digital readout per pixel. The sensor is capable of detecting both electrons and ions drifting in gas. Lastly, these characteristics enable its use as the charge readout device in future Time Projection Chambers without gaseous gain mechanism, which has unique advantages in low background and low rate-density experiments.

  17. A low-noise CMOS pixel direct charge sensor, Topmetal-II-

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Mangmang; Chen, Chufeng; Gao, Chaosong

    In this paper, we report the design and characterization of a CMOS pixel direct charge sensor, Topmetal-II-, fabricated in a standard 0.35 μm CMOS Integrated Circuit process. The sensor utilizes exposed metal patches on top of each pixel to directly collect charge. Each pixel contains a low-noise charge-sensitive preamplifier to establish the analog signal and a discriminator with tunable threshold to generate hits. The analog signal from each pixel is accessible through time-shared multiplexing over the entire array. Hits are read out digitally through a column-based priority logic structure. Tests show that the sensor achieved a <15 e⁻ analog noise and a 200 e⁻ minimum threshold for digital readout per pixel. The sensor is capable of detecting both electrons and ions drifting in gas. Lastly, these characteristics enable its use as the charge readout device in future Time Projection Chambers without gaseous gain mechanism, which has unique advantages in low background and low rate-density experiments.

  18. Investigating biofilm structure using x-ray microtomography and gratings-based phase contrast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; Xiao, Xianghui; Miller, Micah D.

    2012-10-17

    Direct examination of natural and engineered environments has revealed that the majority of microorganisms in these systems live in structured communities termed biofilms. To gain a better understanding of how biofilms function and interact with their local environment, fundamental capabilities for enhanced visualization, compositional analysis, and functional characterization of biofilms are needed. For pore-scale and community-scale analysis (100s of nm to 10s of microns), a variety of surface tools are available. However, understanding biofilm structure in complex three-dimensional (3-D) environments is considerably more difficult. X-ray microtomography can reveal a biofilm's internal structure, but obtaining sufficient contrast to image low-Z biological material against a higher-Z substrate makes detecting biofilms difficult. Here we present results imaging Shewanella oneidensis biofilms on a Hollow-fiber Membrane Biofilm Reactor (HfMBR), using the x-ray microtomography system at sector 2-BM of the Advanced Photon Source (APS), at energies ranging from 13-15.4 keV and pixel sizes of 0.7 and 1.3 μm/pixel. We examine the use of osmium (Os) as a contrast agent to enhance biofilm visibility and demonstrate that staining improves imaging of hydrated biofilms. We also present results using a Talbot interferometer to provide phase and scatter contrast information in addition to absorption. Talbot interferometry allows imaging of unstained hydrated biofilms with phase contrast, while absorption contrast primarily highlights edges and scatter contrast provides little information. However, the gratings used here limit the spatial resolution to no finer than 2 μm, which hinders the ability to detect small features. Future studies at higher resolution or higher Talbot order for greater sensitivity to density variations may improve imaging.

  19. Measuring the effective pixel positions for the HARPS3 CCD

    NASA Astrophysics Data System (ADS)

    Hall, Richard D.; Thompson, Samantha; Queloz, Didier

    2016-07-01

    We present preliminary results from an experiment designed to measure the effective pixel positions of a CCD to sub-pixel precision. This technique will be used to characterise the 4k x 4k CCD destined for the HARPS-3 spectrograph. The principle of coherent beam interference is used to create intensity fringes along one axis of the CCD. By sweeping the physical parameters of the experiment, the geometry of the fringes can be altered which is used to probe the pixel structure. We also present the limitations of the current experimental set-up and suggest what will be implemented in the future to vastly improve the precision of the measurements.

  20. An Unsupervised Deep Hyperspectral Anomaly Detector

    PubMed Central

    Ma, Ning; Peng, Yu; Wang, Shaojun

    2018-01-01

    Hyperspectral image (HSI) based detection has attracted considerable attention recently in agriculture, environmental protection and military applications as different wavelengths of light can be advantageously used to discriminate different types of objects. Unfortunately, estimating the background distribution and the detection of interesting local objects is not straightforward, and anomaly detectors may give false alarms. In this paper, a Deep Belief Network (DBN) based anomaly detector is proposed. The high-level features and reconstruction errors are learned through the network in a manner which is not affected by previous background distribution assumptions. To reduce contamination by local anomalies, adaptive weights are constructed from reconstruction errors and statistical information. By using the code image which is generated during the inference of the DBN and modified by adaptively updated weights, a local Euclidean distance between the pixels under test and their neighboring pixels is used to determine the anomaly targets. Experimental results on synthetic and recorded HSI datasets show that the proposed method outperforms the classic global Reed-Xiaoli detector (RXD), the local RX detector (LRXD) and the state-of-the-art Collaborative Representation detector (CRD). PMID:29495410
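    The final scoring step described above can be sketched with a simple dual-window local Euclidean distance, computed here directly on a code image; the DBN feature learning and the adaptively updated weights are omitted, and the window sizes are assumptions.

```python
import numpy as np

def local_distance_score(codes, outer=5, inner=1):
    """Per-pixel anomaly score: Euclidean distance between a pixel's code
    vector and the mean code of its neighbours in an outer window,
    excluding an inner guard window (a dual-window local detector)."""
    h, w, d = codes.shape
    scores = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - outer), min(h, y + outer + 1)
            x0, x1 = max(0, x - outer), min(w, x + outer + 1)
            block = codes[y0:y1, x0:x1].reshape(-1, d)
            guard = codes[max(0, y - inner):min(h, y + inner + 1),
                          max(0, x - inner):min(w, x + inner + 1)].reshape(-1, d)
            # Background mean = outer window minus the guard region.
            n_bg = block.shape[0] - guard.shape[0]
            bg_mean = (block.sum(axis=0) - guard.sum(axis=0)) / max(n_bg, 1)
            scores[y, x] = np.linalg.norm(codes[y, x] - bg_mean)
    return scores

# Toy code image: 3-dimensional codes with one anomalous pixel.
codes = np.random.rand(40, 40, 3) * 0.1
codes[20, 20] = (1.0, 1.0, 1.0)
anomaly_map = local_distance_score(codes)
```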

  1. Separation of metadata and pixel data to speed DICOM tag morphing.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2013-01-01

    The DICOM information model combines pixel data and metadata in a single DICOM object. It is not possible to access the metadata separately from the pixel data. There are use cases where only the metadata is accessed. The current DICOM object format increases the running time of those use cases. Tag morphing is one of those use cases. Tag morphing includes the deletion, insertion or manipulation of one or more of the metadata attributes. It is typically used for order reconciliation on study acquisition or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.

  2. Symbolic feature detection for image understanding

    NASA Astrophysics Data System (ADS)

    Aslan, Sinem; Akgül, Ceyhun Burak; Sankur, Bülent

    2014-03-01

    In this study we propose a model-driven codebook generation method used to assign probability scores to pixels in order to represent the underlying local shapes they reside in. In the first version of the symbol library we limited ourselves to photometric and similarity transformations applied to eight prototypical shapes: flat plateau, ramp, valley, ridge, and circular and elliptic pits and hills, and we used a randomized decision forest as the statistical classifier to compute the shape-class ambiguity of each pixel. We achieved 90% accuracy in the identification of known objects from alternate views; however, in the recognition of unknown objects we could outperform only the color-based method, not the texture or global and local shape methods. We present a plan for future work to improve the proposed approach further.

  3. Simultaneous Local Binary Feature Learning and Encoding for Homogeneous and Heterogeneous Face Recognition.

    PubMed

    Lu, Jiwen; Erin Liong, Venice; Zhou, Jie

    2017-08-09

    In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.

  4. Noise-gating to Clean Astrophysical Image Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeForest, C. E.

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.

  5. Noise-gating to Clean Astrophysical Image Data

    NASA Astrophysics Data System (ADS)

    DeForest, C. E.

    2017-04-01

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.
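    A much-simplified sketch of the noise-gating idea follows: the image is processed in overlapping apodized patches, Fourier components below a multiple of a local noise floor are zeroed, and the patches are recombined by overlap-add. The patch size, gate factor, and the use of the spectral median as the noise floor are illustrative assumptions; the shot-noise-specific statistics of the published method are not reproduced.

```python
import numpy as np

def noise_gate(image, patch=32, step=16, gate_factor=3.0):
    """Simplified Fourier-domain noise gate: suppress, patch by patch,
    spectral components below gate_factor times the local noise floor,
    then recombine the patches with Hann-window overlap-add."""
    img = image.astype(np.float64)
    h, w = img.shape
    win = np.outer(np.hanning(patch), np.hanning(patch))
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            tile = img[y:y + patch, x:x + patch] * win
            spec = np.fft.fft2(tile)
            mag = np.abs(spec)
            # Local noise floor: median magnitude of the patch spectrum.
            floor = np.median(mag)
            gate = mag > gate_factor * floor
            gated = np.real(np.fft.ifft2(spec * gate))
            out[y:y + patch, x:x + patch] += gated * win
            weight[y:y + patch, x:x + patch] += win ** 2
    return out / np.maximum(weight, 1e-12)

# Toy usage: a faint Gaussian blob buried in additive noise.
yy, xx = np.mgrid[0:128, 0:128]
clean = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 50.0)
noisy = clean + 0.3 * np.random.randn(128, 128)
denoised = noise_gate(noisy)
```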

  6. Laser pixelation of thick scintillators for medical imaging applications: x-ray studies

    NASA Astrophysics Data System (ADS)

    Sabet, Hamid; Kudrolli, Haris; Marton, Zsolt; Singh, Bipin; Nagarkar, Vivek V.

    2013-09-01

    To achieve the high spatial resolution required in nuclear imaging, the spread of scintillation light has to be controlled. This has traditionally been achieved by introducing structures in the bulk of scintillation materials, typically by mechanical pixelation of scintillators and filling the resulting inter-pixel gaps with reflective materials. Mechanical pixelation, however, is accompanied by various cost and complexity issues, especially for hard, brittle and hygroscopic materials. For example, LSO and LYSO, hard and brittle scintillators of interest to the medical imaging community, are known to crack under thermal and mechanical stress; the material yield drops quickly for large arrays with high-aspect-ratio pixels, and the cost of the pixelation process therefore increases. We are utilizing a novel technique named Laser Induced Optical Barriers (LIOB) for the pixelation of scintillators that overcomes the issues associated with mechanical pixelation. With this technique, we can introduce optical barriers within the bulk of scintillator crystals to form pixelated arrays with small pixel size and large thickness. We applied LIOB to LYSO using a high-frequency solid-state laser. Arrays with different crystal thicknesses (5 to 20 mm) and pixel sizes (0.8×0.8 to 1.5×1.5 mm²) were fabricated and tested. The width of the optical barriers was controlled by fine-tuning key parameters such as the lens focal spot size and the laser energy density. Here we report on the LIOB process, its optimization, and optical crosstalk measurements using X-rays. Many applications can potentially benefit from LIOB, including but not limited to clinical/pre-clinical PET and SPECT systems and photon-counting CT detectors.

  7. The bipolar silicon microstrip detector: A proposal for a novel precision tracking device

    NASA Astrophysics Data System (ADS)

    Horisberger, R.

    1990-03-01

    It is proposed to combine the technology of fully depleted silicon microstrip detectors fabricated on n-doped high-resistivity silicon with the concept of the bipolar transistor. This is done by adding an n⁺⁺-doped region inside the normal p⁺-implanted region of the reverse-biased p⁺n diode. The resulting structure has amplifying properties and is referred to as a bipolar pixel transistor. The simplest readout scheme for a bipolar pixel array, an aluminium strip bus, leads to the bipolar microstrip detector. The bipolar pixel structure is expected to give a better signal-to-noise performance for the detection of minimum-ionizing charged-particle tracks than the normal silicon diode strip detector and should therefore allow the fabrication of thinner silicon detectors for precision tracking in the future.

  8. First light from a very large area pixel array for high-throughput x-ray polarimetry

    NASA Astrophysics Data System (ADS)

    Bellazzini, R.; Spandre, G.; Minuti, M.; Baldini, L.; Brez, A.; Cavalca, F.; Latronico, L.; Omodei, N.; Massai, M. M.; Sgrò, C.; Costa, E.; Soffitta, P.; Krummenacher, F.; de Oliveira, R.

    2006-06-01

    We report on a large-active-area (15×15 mm2), high-channel-density (470 pixels/mm2), self-triggering CMOS analog chip that we have developed as the pixelized charge-collecting electrode of a Micropattern Gas Detector. This device, which represents a big step forward in terms of both size and performance, is the latest of three generations of custom ASICs of increasing complexity. The CMOS pixel array has the top metal layer patterned in a matrix of 105600 hexagonal pixels at 50 μm pitch. Each pixel is directly connected to the underlying full electronics chain, which has been realized in the remaining five metal and single poly-silicon layers of a standard 0.18 μm CMOS VLSI technology. The chip has customizable self-triggering capability and includes a signal pre-processing function for the automatic localization of the event coordinates. In this way it is possible to significantly reduce the readout time and the data volume by limiting the signal output to only those pixels belonging to the region of interest. The very small pixel area and the use of a deep sub-micron CMOS technology have brought the noise down to 50 electrons ENC. Results from in-depth tests of this device coupled to a fine-pitch (50 μm on a triangular pattern) Gas Electron Multiplier are presented. Matching the readout and gas amplification pitch yields optimal results. The application of this detector to astronomical X-ray polarimetry is discussed. The experimental detector response to polarized and unpolarized X-ray radiation when working with two gas mixtures and two different photon energies is shown. Results from a full Monte Carlo simulation for several galactic and extragalactic astronomical sources are also reported.

  9. Multi-resolution analysis using integrated microscopic configuration with local patterns for benign-malignant mass classification

    NASA Astrophysics Data System (ADS)

    Rabidas, Rinku; Midya, Abhishek; Chakraborty, Jayasree; Sadhu, Anup; Arif, Wasim

    2018-02-01

    In this paper, Curvelet-based local attributes, the Curvelet-Local configuration pattern (C-LCP), are introduced for the characterization of mammographic masses as benign or malignant. Among the different anomalies, such as micro-calcification, bilateral asymmetry, architectural distortion, and masses, mass lesions are targeted because their variation in shape, size, and margin makes the diagnosis a challenging task. The multi-resolution property of the Curvelet transform, known to be efficient for classification, is exploited, and local information is extracted from the coefficients of each subband using the Local configuration pattern (LCP). The microscopic measures, in concatenation with the local textural information, provide more discriminating capability than either alone. The measures embody the magnitude information along with the pixel-wise relationships among the neighboring pixels. The performance analysis is conducted with 200 mammograms of the DDSM database containing 100 benign and 100 malignant mass cases. The optimal set of features is acquired via a stepwise logistic regression method, and the classification is carried out with Fisher linear discriminant analysis. The best area under the receiver operating characteristic curve and accuracy of 0.95 and 87.55% are achieved with the proposed method, which is further compared with some state-of-the-art competing methods.

  10. Semantic image segmentation with fused CNN features

    NASA Astrophysics Data System (ADS)

    Geng, Hui-qiang; Zhang, Hua; Xue, Yan-bing; Zhou, Mian; Xu, Guang-ping; Gao, Zan

    2017-09-01

    Semantic image segmentation is the task of predicting a category label for every image pixel. Its key challenge is to design a strong feature representation. In this paper, we fuse hierarchical convolutional neural network (CNN) features and region-based features as the feature representation. The hierarchical features contain more global information, while the region-based features contain more local information, and the combination of these two kinds of features significantly enhances the representation. The fused features are then used to train a softmax classifier to produce a per-pixel label assignment probability, and a fully connected conditional random field (CRF) is used as a post-processing step to improve labeling consistency. We conduct experiments on the SIFT Flow dataset. The pixel accuracy and class accuracy are 84.4% and 34.86%, respectively.

  11. Spatial and Temporal Patterns of Land Loss in Mississippi River Delta

    NASA Astrophysics Data System (ADS)

    Roy, S.; Edmonds, D. A.; Robeson, S. M.; Ortiz, A. C.; Nienhuis, J.

    2017-12-01

    Land loss across the Louisiana coast is predicted to exceed 10,000 km2 by 2100. An estimated 18-24 billion tons of sediment is needed to offset land loss, but the available sediment supply from the Mississippi River falls short. As a result, coastal restoration plans must target certain areas, which highlights the importance of understanding the processes and patterns of land loss. In this study, we use remote sensing to investigate and quantify land loss patterns, as well as the corresponding morphology of the land segments that are lost. Using Google Earth Engine, we combined over 10,000 time-series Landsat images of the Mississippi River Delta to create twelve three-year composites from 1983 to 2016. We then spectrally unmixed each pixel into land and water percentages and created land-water binaries. Stratifying by hydrologic unit code boundaries and local subsidence rates, we analyzed the land loss pixels using landscape metrics. Our results show that the total loss from 1983-2016 for our area of interest was 908.02 km2 (a loss of 5.84%); the total land area was 6855.63 km2 (49.97% of the total area) in 2016 compared to 7763.65 km2 (44.13%) in 1983, consistent with previous estimates for our study area. Land loss pixels have a low patch density (mean of 4.80 patches/ha) and high aggregation indices (mean of 47.15), which indicates that land-loss pixels tend to clump together. The shape index of these clumped pixels is also low (mean of 2.32), which points towards long, narrow patches and edges. A local indicator of spatial autocorrelation (LISA) analysis was applied to determine areas of high positive autocorrelation within the loss pixels, which reinforced loss across edges. Based on the spatial metrics and a subsidence-grid analysis of the temporal pattern of land loss pixels, we find that (i) land change (both growth and loss pixels) occurs along marsh, lake and coastal edges rather than inland; (ii) subsidence, though positively correlated with land loss, is no longer the dominant process of land loss at rates greater than 8 mm/year; and (iii) a frequency analysis shows that 30.96% of land loss occurs gradually, by changing back and forth between water and land over the study period, whereas 69.04% of land loss is permanent and does not revert. Our findings provide new insight into pathways of land loss and the morphological evolution of deltaic systems.
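
    As a rough illustration of the kind of landscape-metric bookkeeping described above (loss pixels, patch counts, patch density), the sketch below compares two binary land masks with SciPy. It is a hypothetical stand-in for the authors' Google Earth Engine workflow; the 0.09 ha Landsat pixel area and the per-hectare patch-density definition are assumptions.

    import numpy as np
    from scipy import ndimage

    def loss_patch_metrics(land_1983, land_2016, pixel_area_ha=0.09):
        """Identify land-loss pixels between two binary land masks and compute
        simple patch metrics (illustrative sketch only).

        land_1983, land_2016 : boolean arrays, True = land
        pixel_area_ha        : area of one pixel in hectares (0.09 ha for 30 m Landsat)
        """
        loss = land_1983 & ~land_2016                  # land that became water
        labels, n_patches = ndimage.label(loss)        # 4-connected loss patches
        total_area_ha = loss.sum() * pixel_area_ha
        # Patch density is defined here as patches per hectare of loss area.
        patch_density = n_patches / total_area_ha if total_area_ha > 0 else 0.0
        sizes = ndimage.sum(loss, labels, index=range(1, n_patches + 1))
        return {
            "loss_pixels": int(loss.sum()),
            "n_patches": int(n_patches),
            "patch_density_per_ha": patch_density,
            "mean_patch_size_px": float(np.mean(sizes)) if n_patches else 0.0,
        }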

  12. Seismic zonation of Port-Au-Prince using pixel- and object-based imaging analysis methods on ASTER GDEM

    USGS Publications Warehouse

    Yong, Alan; Hough, Susan E.; Cox, Brady R.; Rathje, Ellen M.; Bachhuber, Jeff; Dulberg, Ranon; Hulslander, David; Christiansen, Lisa; and Abrams, Michael J.

    2011-01-01

    We report on a preliminary study to evaluate the use of semi-automated imaging analysis of a remotely-sensed DEM and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, VS30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods to the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans, and basins/near-shore regions. We assign NEHRP seismic site class ranges based on the available VS30 values. A comparison of results from the imagery-based methods with results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of remote sensing data provides reliable first-order site characterization results in the absence of local data and can be useful for refining detailed site maps with sparse local data.

  13. A unified tensor level set for image segmentation.

    PubMed

    Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2010-06-01

    This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a third-order tensor to comprehensively depict the features of pixels, e.g., the gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor data. The proposed model has four main advantages over the traditional representative method. First, by involving the Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, by considering local geometrical features such as orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at boundary locations. Third, due to the unified tensor representation of the pixels, the model segments images more accurately and naturally. Fourth, based on the weighted distance definition, the model can cope with data varying from scalar to vector to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set method.

  14. Phase unwrapping in digital holography based on non-subsampled contourlet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-01-01

    In the digital holographic measurement of complex surfaces, phase unwrapping is a critical step for accurate reconstruction. The phases of the complex amplitudes calculated from interferometric holograms are disturbed by speckle noise, so reliable unwrapping results are difficult to obtain. Most existing unwrapping algorithms first apply denoising operations to obtain noise-free phases and then unwrap the phase pixel by pixel. This approach is sensitive to spikes and prone to unreliable results in practice. In this paper, a robust unwrapping algorithm based on the non-subsampled contourlet transform (NSCT) is developed. The multiscale and directional decomposition of the NSCT enhances the boundaries between adjacent phase levels, and hence the influence of local noise can be eliminated in the transform domain. The wrapped phase map is segmented into several regions corresponding to different phase levels. Finally, an unwrapped phase map is obtained by elevating the phases of whole segments instead of individual pixels, avoiding unwrapping errors caused by local spikes. The algorithm is suitable for dealing with complex and noisy wavefronts. Its universality and superiority in digital holographic interferometry have been demonstrated by both numerical analysis and practical experiments.

  15. 1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.

    PubMed

    Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi

    2015-04-01

    Optical flow sensors have been a long-running theme in neuromorphic vision sensors, which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed at miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels with local gain control that adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using fewer than 5 k instruction cycles (12 instructions per pixel) per frame. At a 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.

  16. Spectral-spatial classification of hyperspectral imagery with cooperative game

    NASA Astrophysics Data System (ADS)

    Zhao, Ji; Zhong, Yanfei; Jia, Tianyi; Wang, Xinyu; Xu, Yao; Shu, Hong; Zhang, Liangpei

    2018-01-01

    Spectral-spatial classification is known to be an effective way to improve classification performance by integrating spectral information and spatial cues for hyperspectral imagery. In this paper, a game-theoretic spectral-spatial classification algorithm (GTA) using a conditional random field (CRF) model is presented, in which the CRF models the image by taking spatial contextual information into account and a cooperative game is designed to obtain the labels. The algorithm establishes a one-to-one correspondence between image classification and game theory: the pixels of the image are considered the players, and the labels are considered the strategies of the game. Similar to the idea of soft classification, uncertainty is taken into account to build the expected energy model in the first step. The local expected energy can be quickly calculated, based on a mixed strategy for the pixels, to establish the foundation for a cooperative game. Coalitions can then be formed by the designed merge rule based on the local expected energy, so that a majority game can be played to make a coalition decision and obtain the label of each pixel. The experimental results on three hyperspectral data sets demonstrate the effectiveness of the proposed classification algorithm.

  17. Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2018-04-01

    This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate an average pixel value that is identical for each pixel in that group. After these calculations, the groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary enhanced image. The final enhanced image is achieved by halving the sum of the original and preliminary enhanced image pixels. Quantitative and qualitative experiments were conducted, focusing on pairwise comparisons between original and enhanced images. The final enhanced images generally have the best diagnostic quality and give more detail about the visibility of vessels and structures in capsule endoscopy images.
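
    A simplified reading of the scheme described above is sketched below: each RGB channel is processed directly, every pixel is replaced by an equally ("half-unit") weighted mean of a 2×2 neighbourhood computed at overlapping positions, and the final image halves the sum of the original and this preliminary image. The exact grouping and overlap order in the paper may differ, so treat this purely as an illustration.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def half_unit_bilinear_enhance(rgb):
        """Simplified reading of the half-unit weighted bilinear scheme
        (illustrative only; not the published implementation)."""
        rgb = rgb.astype(float)
        preliminary = np.empty_like(rgb)
        for c in range(rgb.shape[2]):                    # process R, G, B directly
            # Mean of a 2x2 neighbourhood at every (overlapping) position
            preliminary[..., c] = uniform_filter(rgb[..., c], size=2)
        enhanced = 0.5 * (rgb + preliminary)             # halve the sum of both images
        return np.clip(enhanced, 0, 255).astype(np.uint8)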

  18. Active pixel sensor pixel having a photodetector whose output is coupled to an output transistor gate

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nakamura, Junichi (Inventor); Kemeny, Sabrina E. (Inventor)

    2005-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. A Simple Floating Gate (SFG) pixel structure could also be employed in the imager to provide a non-destructive readout and smaller pixel sizes.

  19. SU-C-304-05: Use of Local Noise Power Spectrum and Wavelets in Comprehensive EPID Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Gopal, A; Yan, G

    2015-06-15

    Purpose: As EPIDs are increasingly used for IMRT QA and real-time treatment verification, comprehensive quality assurance (QA) of EPIDs becomes critical. Current QA with phantoms such as the Las Vegas and PIPSpro™ can fail in the early detection of EPID artifacts. Beyond image quality assessment, we propose a quantitative methodology using the local noise power spectrum (NPS) to characterize image noise and the wavelet transform to identify bad pixels and inter-subpanel flat-fielding artifacts. Methods: A total of 93 image sets, including bar-pattern images and open exposure images, were collected from four iViewGT a-Si EPID systems over three years. Quantitative metrics such as the modulation transfer function (MTF), NPS and detective quantum efficiency (DQE) were computed for each image set. The local 2D NPS was calculated for each subpanel. A 1D NPS was obtained by radially averaging the 2D NPS and fitted to a power-law function. The R-square and slope of the linear regression analysis were used for panel performance assessment. A Haar wavelet transformation was employed to identify pixel defects and non-uniform gain correction across subpanels. Results: Overall image quality was assessed with DQE based on empirically derived area under curve (AUC) thresholds. Using linear regression analysis of the 1D NPS, panels with acceptable flat fielding were indicated by R-square values between 0.8 and 1 and slopes of −0.4 to −0.7. For panels requiring flat-fielding recalibration, however, R-square values less than 0.8 and slopes from +0.2 to −0.4 were observed. The wavelet transform successfully identified pixel defects and inter-subpanel flat-fielding artifacts. Standard QA with the Las Vegas and PIPSpro phantoms failed to detect these artifacts. Conclusion: The proposed QA methodology is promising for the early detection of imaging and dosimetric artifacts of EPIDs. The local NPS can accurately characterize the noise level within each subpanel, while the wavelet transform can detect bad pixels and inter-subpanel flat-fielding artifacts.
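
    The NPS part of the methodology can be sketched as follows: take an open-field subpanel, detrend it, compute the 2D noise power spectrum, radially average it into a 1D NPS, and fit a power law in log-log space to obtain the slope and R-square used as performance metrics. The normalisation, detrending and binning choices below are assumptions, not the authors' exact implementation.

    import numpy as np

    def local_nps_fit(subpanel, pixel_pitch=0.4, nbins=50):
        """Radially averaged NPS of one EPID subpanel plus a power-law fit
        (sketch; constants and detrending are assumptions).

        subpanel    : 2-D array from an open-field image (one subpanel)
        pixel_pitch : detector pixel pitch in mm
        """
        roi = subpanel - subpanel.mean()                    # simple detrend
        ny, nx = roi.shape
        nps2d = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
        nps2d *= pixel_pitch ** 2 / (nx * ny)               # standard NPS normalisation

        fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_pitch))
        fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_pitch))
        fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial frequency (1/mm)

        flat_f, flat_p = fr.ravel(), nps2d.ravel()
        keep = flat_f > 0                                   # drop the DC component
        bins = np.linspace(flat_f[keep].min(), flat_f[keep].max(), nbins + 1)
        idx = np.clip(np.digitize(flat_f[keep], bins), 1, nbins)
        nps1d = np.array([flat_p[keep][idx == i].mean() if np.any(idx == i) else np.nan
                          for i in range(1, nbins + 1)])
        freq = 0.5 * (bins[:-1] + bins[1:])

        ok = np.isfinite(nps1d) & (nps1d > 0)
        slope, intercept = np.polyfit(np.log10(freq[ok]), np.log10(nps1d[ok]), 1)
        resid = np.log10(nps1d[ok]) - (slope * np.log10(freq[ok]) + intercept)
        r_square = 1 - resid.var() / np.log10(nps1d[ok]).var()
        return freq, nps1d, slope, r_square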

  20. Near Wall measurement in Turbulent Flow over Rough Wall using microscopic HPIV

    NASA Astrophysics Data System (ADS)

    Talapatra, Siddharth; Hong, Jiarong; Katz, Joseph

    2009-11-01

    Using holographic PIV, 3D velocity measurements are being performed in a turbulent rough-wall channel flow. Our objective is to examine the contribution of coherent structures to the flow dynamics and to the momentum and energy fluxes in the roughness sublayer. The 0.45 mm high, pyramid-shaped roughness is uniformly distributed on the top and bottom surfaces of a 5×20 cm rectangular channel, where Reτ is 3400. To facilitate recording of holograms through a rough plate, the working fluid is a concentrated solution of NaI in water, whose optical refractive index is matched to that of the acrylic rough plates. The test section is illuminated by a collimated laser beam from the top, and the sample volume extends from the bottom wall up to 7 roughness heights. After passing through the sample volume, the in-line hologram is magnified and recorded on a 4864×3248-pixel camera at a resolution of 0.74 μm/pixel. The flow is locally seeded with 2 μm particles. Reconstruction, spatial filtering and particle tracking provide the 3D velocity field. This approach has recently been implemented successfully, as preliminary data demonstrate.

  1. Microradiography with Semiconductor Pixel Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakubek, Jan; Cejnarova, Andrea; Dammer, Jiri

    High-resolution radiography (with X-rays, neutrons, heavy charged particles, ...), often also exploited in tomographic mode to provide 3D images, is a powerful imaging technique for the instant and nondestructive visualization of the fine internal structure of objects. Novel types of semiconductor single-particle-counting pixel detectors offer many advantages for radiation imaging: high detection efficiency, energy discrimination or direct energy measurement, noiseless digital integration (counting), high frame rate and a virtually unlimited dynamic range. This article shows the application and potential of pixel detectors (such as Medipix2 or TimePix) in different fields of radiation imaging.

  2. Validating spatial structure in canopy water content using geostatistics

    NASA Technical Reports Server (NTRS)

    Sanderson, E. W.; Zhang, M. H.; Ustin, S. L.; Rejmankova, E.; Haxo, R. S.

    1995-01-01

    Heterogeneity in ecological phenomena is scale dependent and affects the hierarchical structure of image data. AVIRIS pixels average reflectance produced by complex absorption and scattering interactions between biogeochemical composition, canopy architecture, view and illumination angles, species distributions, and plant cover, as well as other factors. These scales affect validation of pixel reflectance, which is typically performed by relating pixel spectra to ground measurements acquired at scales of 1 m2 or less (e.g., field spectra, foliage and soil samples, etc.). As image analyses become more sophisticated, such as those for the detection of canopy chemistry, better validation becomes a critical problem. This paper presents a methodology for bridging between point measurements and pixels using geostatistics. Geostatistics have been used extensively in geological and hydrogeological studies but have received little application in ecological studies. The key criterion for kriging estimation is that the phenomenon varies in space and that an underlying controlling process produces spatial correlation between the measured data points. Ecological variation meets this requirement because communities vary along environmental gradients such as soil moisture, nutrient availability, or topography.
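
    The basic geostatistical tool behind this kind of methodology is the semivariogram, which quantifies how dissimilarity between measurements grows with lag distance. A generic sketch of an empirical (isotropic) semivariogram is shown below; the binning and maximum-lag choices are assumptions, and the kriging step itself is not reproduced.

    import numpy as np

    def empirical_semivariogram(coords, values, n_bins=15, max_lag=None):
        """Empirical (isotropic) semivariogram of point measurements, the basic
        quantity behind kriging (a generic sketch, not the paper's workflow).

        coords : (N, 2) array of sample locations (e.g., metres)
        values : (N,)   array of the measured variable (e.g., canopy water content)
        """
        coords = np.asarray(coords, float)
        values = np.asarray(values, float)
        # Pairwise lag distances and half squared differences (O(N^2); fine for modest N)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        g = 0.5 * (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)         # count each pair once
        lags, semiv = d[iu], g[iu]
        if max_lag is None:
            max_lag = lags.max() / 2                   # common rule of thumb
        edges = np.linspace(0, max_lag, n_bins + 1)
        idx = np.digitize(lags, edges)
        centers = 0.5 * (edges[:-1] + edges[1:])
        gamma = np.array([semiv[idx == i].mean() if np.any(idx == i) else np.nan
                          for i in range(1, n_bins + 1)])
        return centers, gamma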

  3. Fabrication of close-packed TES microcalorimeter arrays using superconducting molybdenum/gold transition-edge sensors

    NASA Astrophysics Data System (ADS)

    Finkbeiner, F. M.; Brekosky, R. P.; Chervenak, J. A.; Figueroa-Feliciano, E.; Li, M. J.; Lindeman, M. A.; Stahle, C. K.; Stahle, C. M.; Tralshawala, N.

    2002-02-01

    We present an overview of our efforts in fabricating Transition-Edge Sensor (TES) microcalorimeter arrays for use in astronomical X-ray spectroscopy. Two distinct types of array schemes are currently pursued: a 5×5 single-pixel TES array, where each pixel is a TES microcalorimeter, and a Position-Sensing TES (PoST) array. In the latter, a row of 7 or 15 thermally-linked absorber pixels is read out by two TESs at its ends. Both schemes employ superconducting Mo/Au bilayers as the TES. The TESs are placed on silicon nitride membranes for thermal isolation from the structural frame. The silicon nitride membranes are prepared by a Deep Reactive Ion Etch (DRIE) process into a silicon wafer. In order to realize closely packed arrays without decreasing their structural and functional integrity, we have already developed the technology to fabricate arrays of cantilevered pixel-sized absorbers and slit membranes in silicon nitride films. Furthermore, we have started to investigate ultra-low-resistance through-wafer micro-vias to bring the electrical contact out to the back of the wafer.

  4. Invalid-point removal based on epipolar constraint in the structured-light method

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-06-01

    In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. Because the retrieved phase of an invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which is then used for calculating the epipolar lines. Then, according to the retrieved phase map of the captured fringes, the PIC of each pixel is retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the degree to which the epipolar constraint is satisfied. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
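
    The invalidation criterion reduces to a point-to-line distance: given the fundamental matrix F, the epipolar line of a camera pixel x in the projector image is l' = Fx, and the retrieved projector image coordinate is flagged invalid when its distance to l' exceeds a threshold. A minimal sketch is given below, assuming F maps camera points to projector epipolar lines; the 1.5-pixel threshold in the usage note is illustrative only.

    import numpy as np

    def epipolar_distance(F, cam_pts, proj_pts):
        """Point-to-epipolar-line distances used as the invalidation criterion.

        F        : 3x3 fundamental matrix mapping camera points to epipolar
                   lines in the projector image (l' = F x), as assumed here
        cam_pts  : (N, 2) camera pixel coordinates
        proj_pts : (N, 2) retrieved projector image coordinates (PICs)
        """
        x = np.hstack([cam_pts, np.ones((len(cam_pts), 1))])    # homogeneous coords
        xp = np.hstack([proj_pts, np.ones((len(proj_pts), 1))])
        lines = x @ F.T                                          # epipolar lines l' = F x
        num = np.abs(np.sum(lines * xp, axis=1))                 # |x'^T F x|
        den = np.hypot(lines[:, 0], lines[:, 1])                 # sqrt(a^2 + b^2)
        return num / den

    # Pixels whose distance exceeds a chosen threshold (e.g. 1.5 px) are flagged invalid:
    # invalid = epipolar_distance(F, cam_pts, proj_pts) > 1.5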

  5. Simulation study of light transport in laser-processed LYSO:Ce detectors with single-side readout

    NASA Astrophysics Data System (ADS)

    Bläckberg, L.; El Fakhri, G.; Sabet, H.

    2017-11-01

    A tightly focused pulsed laser beam can locally modify the crystal structure inside the bulk of a scintillator. The result is the incorporation of so-called optical barriers with a refractive index different from that of the crystal bulk, which can be used to redirect the scintillation light and control the light spread in the detector. Here we systematically study the scintillation light transport in detectors fabricated using the laser-induced optical barrier technique and objectively compare their potential performance characteristics with those of the two mainstream detector types: monolithic and mechanically pixelated arrays. Among countless possible optical barrier patterns, we explore barriers arranged in a pixel-like pattern extending all the way or half-way through a 20 mm thick LYSO:Ce crystal. We analyze the performance of the detectors coupled to MPPC arrays in terms of light response functions, flood maps, line profiles, and light collection efficiency. Our results show that laser-processed detectors with both barrier patterns constitute a new detector category with a behavior between that of the two standard detector types. When the barrier-crystal interface is smooth, no DOI information can be obtained regardless of the barrier refractive index (RI); however, with a rough barrier-crystal interface we can extract multiple levels of DOI. A lower barrier RI results in larger light confinement, leading to better transverse resolution. Furthermore, we see that the laser-processed crystals have the potential to increase the light collection efficiency, which could lead to improved energy resolution and potentially better timing resolution due to higher signals. For a laser-processed detector with smooth barrier-crystal interfaces the light collection efficiency is simulated to be >42%, and for rough interfaces >73%. The corresponding numbers are 39% for a monolithic crystal with polished surfaces and 71% with rough surfaces, and for a mechanically pixelated array 35% with polished pixel surfaces and 59% with rough surfaces.

  6. Simulation study of light transport in laser-processed LYSO:Ce detectors with single-side readout.

    PubMed

    Bläckberg, L; El Fakhri, G; Sabet, H

    2017-10-19

    A tightly focused pulsed laser beam can locally modify the crystal structure inside the bulk of a scintillator. The result is the incorporation of so-called optical barriers with a refractive index different from that of the crystal bulk, which can be used to redirect the scintillation light and control the light spread in the detector. Here we systematically study the scintillation light transport in detectors fabricated using the laser-induced optical barrier technique and objectively compare their potential performance characteristics with those of the two mainstream detector types: monolithic and mechanically pixelated arrays. Among countless possible optical barrier patterns, we explore barriers arranged in a pixel-like pattern extending all the way or half-way through a 20 mm thick LYSO:Ce crystal. We analyze the performance of the detectors coupled to MPPC arrays in terms of light response functions, flood maps, line profiles, and light collection efficiency. Our results show that laser-processed detectors with both barrier patterns constitute a new detector category with a behavior between that of the two standard detector types. When the barrier-crystal interface is smooth, no DOI information can be obtained regardless of the barrier refractive index (RI); however, with a rough barrier-crystal interface we can extract multiple levels of DOI. A lower barrier RI results in larger light confinement, leading to better transverse resolution. Furthermore, we see that the laser-processed crystals have the potential to increase the light collection efficiency, which could lead to improved energy resolution and potentially better timing resolution due to higher signals. For a laser-processed detector with smooth barrier-crystal interfaces the light collection efficiency is simulated to be >42%, and for rough interfaces >73%. The corresponding numbers are 39% for a monolithic crystal with polished surfaces and 71% with rough surfaces, and for a mechanically pixelated array 35% with polished pixel surfaces and 59% with rough surfaces.

  7. Optical performances of the FM JEM-X masks

    NASA Astrophysics Data System (ADS)

    Reglero, V.; Rodrigo, J.; Velasco, T.; Gasent, J. L.; Chato, R.; Alamo, J.; Suso, J.; Blay, P.; Martínez, S.; Doñate, M.; Reina, M.; Sabau, D.; Ruiz-Urien, I.; Santos, I.; Zarauz, J.; Vázquez, J.

    2001-09-01

    The JEM-X Signal Multiplexing Systems are large HURA codes "written" in a pure tungsten plate 0.5 mm thick: 24,247 hexagonal pixels (25% open) are spread over a total area 535 mm in diameter. The tungsten plate is embedded in a mechanical structure formed by a Ti ring, a pretensioning system (Cu-Be) and an exoskeleton structure that provides the required stiffness. The JEM-X masks differ from the SPI and IBIS masks in the absence of a code support structure covering the mask assembly; open pixels are fully transparent to X-rays. The scope of this paper is to report the optical performance of the FM JEM-X masks, defined by the uncertainties on pixel location (centroid) and size arising from the manufacturing and assembly processes. The stability of the code elements under thermoelastic deformations is also discussed. As a general statement, the JEM-X mask optical properties are nearly one order of magnitude better than specified in 1994 during the ESA instrument selection.

  8. Laser printed plasmonic color metasurfaces (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kristensen, Anders; Zhu, Xiaolong; Højlund-Nielsen, Emil; Vannahme, Christoph; Mortensen, N. Asger

    2016-09-01

    This paper describes color printing on nanoimprinted plasmonic metasurfaces by laser post-writing, for flexible decoration of high-volume-manufactured plastic products. Laser pulses induce transient local heat generation that leads to melting and reshaping of the imprinted nanostructures. Different surface morphologies, supporting different plasmonic resonances and thereby different color appearances, are created by controlling the laser pulse energy density. All primary colors can be printed, at a speed of 1 ns per pixel, with a resolution of up to 127,000 dots per inch (DPI) and a power consumption down to 0.3 nJ per pixel.

  9. Applications of interferometrically derived terrain slopes: Normalization of SAR backscatter and the interferometric correlation coefficient

    NASA Technical Reports Server (NTRS)

    Werner, Charles L.; Wegmueller, Urs; Small, David L.; Rosen, Paul A.

    1994-01-01

    Terrain slopes, which can be measured with Synthetic Aperture Radar (SAR) interferometry either from a height map or from the interferometric phase gradient, were used to calculate the local incidence angle and the correct pixel area. Both are required for correct thematic interpretation of SAR data. The interferometric correlation depends on the pixel area projected on a plane perpendicular to the look vector and requires correction for slope effects. Methods for normalization of the backscatter and interferometric correlation for ERS-1 SAR are presented.

  10. SU-F-J-177: A Novel Image Analysis Technique (center Pixel Method) to Quantify End-To-End Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N; Chetty, I; Snyder, K

    Purpose: To implement a novel image analysis technique, the "center pixel method", to quantify the end-to-end accuracy of a frameless, image-guided stereotactic radiosurgery system. Methods: The localization accuracy was determined by delivering radiation to an end-to-end prototype phantom. The phantom was scanned with 0.8 mm slice thickness, and the treatment isocenter was placed at the center of the phantom. In the treatment room, CBCT images of the phantom (kVp=77, mAs=1022, slice thickness 1 mm) were acquired and registered to the reference CT images, and 6D couch corrections were applied based on the registration results. Electronic Portal Imaging Device (EPID)-based Winston-Lutz (WL) tests were performed to quantify the targeting accuracy of the system at 15 combinations of gantry, collimator and couch positions. The images were analyzed using two different methods: (a) the classic method, in which the deviation was calculated as the radial distance between the center of the central BB and the radiation field center defined by the full width at half maximum; and (b) the center pixel method, in which, since the imager projection offset from the treatment isocenter was known from the IsoCal calibration, the deviation was determined between the center of the BB and the central pixel of the imager panel. Results: Using the automatic registration method to localize the phantom and the classic method of measuring the deviation of the BB center, the mean and standard deviation of the radial distance were 0.44 ± 0.25, 0.47 ± 0.26, and 0.43 ± 0.13 mm for the jaw-, MLC- and cone-defined field sizes, respectively. When the center pixel method was used, the mean and standard deviation were 0.32 ± 0.18, 0.32 ± 0.17, and 0.32 ± 0.19 mm, respectively. Conclusion: Our results demonstrate that the center pixel method accurately analyzes the WL images to evaluate the targeting accuracy of the radiosurgery system. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE, from the American Cancer Society.

  11. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities; other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse, depth images. With the advent of deep learning there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure, while variations in image patches can be used to reconstruct local depth at high resolution. The main contribution of this paper is a supervised-learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information, and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset, we are able to learn a correspondence between local RGB information and local depth while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and on our own recordings using a low-cost camera and LiDAR setup.

  12. Extraction of object skeletons in multispectral imagery by the orthogonal regression fitting

    NASA Astrophysics Data System (ADS)

    Palenichka, Roman M.; Zaremba, Marek B.

    2003-03-01

    Accurate and automatic extraction of the skeletal shape of objects of interest from satellite images provides an efficient solution to such image analysis tasks as object detection, object identification, and shape description. The problem of skeletal shape extraction can be effectively solved in three basic steps: intensity clustering (i.e., segmentation) of objects, extraction of a structural graph of the object shape, and refinement of the structural graph by orthogonal regression fitting. The objects of interest are segmented from the background by a clustering transformation of primary features (spectral components) with respect to each pixel. The structural graph is composed of connected skeleton vertices and represents the topology of the skeleton; in the general case, it is a rather rough piecewise-linear representation of the object skeletons. The positions of the skeleton vertices on the image plane are adjusted by means of orthogonal regression fitting. This consists of changing the positions of existing vertices to minimize the mean orthogonal distances and, eventually, adding new vertices in between if a given accuracy is not yet satisfied. The vertices of the initial piecewise-linear skeletons are extracted using a multi-scale image relevance function. The relevance function is a local image operator that has local maxima at the centers of the objects of interest.
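
    The refinement step, orthogonal regression fitting, amounts to a total-least-squares line fit: for the pixels supporting one skeleton segment, find the line that minimises the mean squared orthogonal distance and use the residual to decide whether a new vertex must be inserted. A minimal sketch via the SVD is shown below; the vertex-splitting logic itself is only indicated in the trailing comment.

    import numpy as np

    def orthogonal_line_fit(points):
        """Fit a 2-D line minimising the mean squared orthogonal distance
        (total least squares).  Assumes at least two distinct points.

        points : (N, 2) array of pixel coordinates supporting one skeleton branch
        Returns the centroid, unit direction vector and RMS orthogonal distance.
        """
        pts = np.asarray(points, float)
        centroid = pts.mean(axis=0)
        # Principal direction of the centred points = first right singular vector
        _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
        direction = vt[0]                         # line direction (unit vector)
        normal = vt[1]                            # orthogonal direction
        resid = (pts - centroid) @ normal         # signed orthogonal distances
        rms = np.sqrt(np.mean(resid ** 2))
        return centroid, direction, rms

    # If rms exceeds the required accuracy, a new vertex can be inserted and the
    # segment split, mirroring the refinement loop described in the abstract.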

  13. Comparison of Filters Dedicated to Speckle Suppression in SAR Images

    NASA Astrophysics Data System (ADS)

    Kupidura, P.

    2016-06-01

    This paper presents the results of research on the effectiveness of different filtering methods dedicated to speckle suppression in SAR images. The tests were performed on RadarSat-2 images and on an artificial image treated with simulated speckle noise. The research analysed the performance of particular filters in terms of the effectiveness of speckle suppression and the ability to preserve image details and edges. Speckle is a phenomenon inherent to radar images: a deterministic noise connected with land cover type, but also one that causes significant changes in the digital numbers of pixels. As a result, it may affect interpretation, classification and other processes applied to radar images. Speckle, resembling "salt and pepper" noise, takes the form of relatively small groups of pixels whose values differ markedly from those of other pixels representing the same type of land cover. Suppression of this noise may also suppress small image details; therefore, the ability to preserve the important parts of an image was analysed as well. In the present study, filters dedicated particularly to speckle noise suppression (Frost, Gamma-MAP, Lee, Lee-Sigma, Local Region), general filtering methods that might be effective in this respect (Mean, Median), and morphological filters (alternate sequential filters with multiple structuring elements and by reconstruction) were tested. The analysis presented in this paper compared the effectiveness of these filtering methods. It proved that some of the dedicated radar filters are efficient tools for speckle suppression, but it also demonstrated a significant efficiency of the morphological approach, especially its ability to preserve image details.
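
    To make one of the compared filters concrete, the sketch below implements the classic Lee filter in its additive formulation: each pixel is pulled toward the local mean by a gain that grows with how much the local variance exceeds an assumed noise variance. The 7×7 window and the median-based noise estimate are assumptions for illustration, not the parameters used in the paper.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, win=7, noise_var=None):
        """Classic Lee speckle filter, additive formulation (illustrative sketch).

        img       : 2-D array of SAR intensity or amplitude values
        win       : side length of the square local window
        noise_var : assumed noise variance; estimated crudely if not given
        """
        img = img.astype(float)
        mean = uniform_filter(img, win)
        mean_sq = uniform_filter(img ** 2, win)
        var = np.maximum(mean_sq - mean ** 2, 0.0)        # local variance
        if noise_var is None:
            noise_var = np.median(var)                    # crude global noise estimate
        gain = var / (var + noise_var)                    # ~0 in flat areas, ~1 on edges
        return mean + gain * (img - mean)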

  14. An automated method for mapping human tissue permittivities by MRI in hyperthermia treatment planning.

    PubMed

    Farace, P; Pontalti, R; Cristoforetti, L; Antolini, R; Scarpa, M

    1997-11-01

    This paper presents an automatic method to obtain tissue complex permittivity values to be used as input data in the computer modelling for hyperthermia treatment planning. Magnetic resonance (MR) images were acquired and the tissue water content was calculated from the signal intensity of the image pixels. The tissue water content was converted into complex permittivity values by monotonic functions based on mixture theory. To obtain a water content map by MR imaging a gradient-echo pulse sequence was used and an experimental procedure was set up to correct for relaxation and radiofrequency field inhomogeneity effects on signal intensity. Two approaches were followed to assign the permittivity values to fat-rich tissues: (i) fat-rich tissue localization by a segmentation procedure followed by assignment of tabulated permittivity values; (ii) water content evaluation by chemical shift imaging followed by permittivity calculation. Tests were performed on phantoms of known water content to establish the reliability of the proposed method. MRI data were acquired and processed pixel-by-pixel according to the outlined procedure. The signal intensity in the phantom images correlated well with water content. Experiments were performed on volunteers' healthy tissue. In particular two anatomical structures were chosen to calculate permittivity maps: the head and the thigh. The water content and electric permittivity values were obtained from the MRI data and compared to others in the literature. A good agreement was found for muscle, cerebrospinal fluid (CSF) and white and grey matter. The advantages of the reported method are discussed in the light of possible application in hyperthermia treatment planning.

  15. k(+)-buffer: An Efficient, Memory-Friendly and Dynamic k-buffer Framework.

    PubMed

    Vasilakis, Andreas-Alexandros; Papaioannou, Georgios; Fudos, Ioannis

    2015-06-01

    Depth-sorted fragment determination is fundamental for a host of image-based techniques that simulate complex rendering effects. It is also a challenging task in terms of the time and space required when rasterizing scenes with high depth complexity. When low graphics memory requirements are of utmost importance, the k-buffer can be considered the preferred framework, as it ensures the correct depth order on a subset of all generated fragments. Although various alternatives have been introduced to partially or completely alleviate the noticeable quality artifacts produced by the initial k-buffer algorithm, at the expense of increased memory or degraded performance, appropriate tools to automatically and dynamically compute the most suitable value of k are still missing. To this end, we introduce the k(+)-buffer, a fast framework that accurately simulates the behavior of the k-buffer in a single rendering pass. Two memory-bounded data structures, (i) the max-array and (ii) the max-heap, are developed on the GPU to concurrently maintain the k foremost fragments per pixel by exploiting pixel synchronization and fragment culling. Memory-friendly strategies are further introduced to dynamically (a) lessen the wasteful memory allocation of individual pixels with low depth-complexity frequencies, (b) minimize the allocated size of the k-buffer according to different application goals and hardware limitations via a straightforward depth-histogram analysis, and (c) manage the local GPU cache with a fixed-memory depth-sorting mechanism. Finally, an extensive experimental evaluation is provided, demonstrating the advantages of our work over all prior k-buffer variants in terms of memory usage, performance cost and image quality.
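
    The per-pixel logic of the max-heap variant can be mimicked on the CPU: keep a bounded max-heap of the k front-most fragments and cull any incoming fragment that is deeper than the current k-th one. The sketch below is only an illustration of that bookkeeping; the paper's contribution lies in doing it per pixel on the GPU in a single rendering pass with pixel synchronization.

    import heapq

    def k_foremost(fragments, k):
        """Keep the k front-most fragments of one pixel using a bounded max-heap.

        fragments : iterable of (depth, payload) tuples for one pixel
        Returns the k fragments with the smallest depth, sorted front to back.
        """
        heap = []                                     # max-heap via negated depths
        for i, (depth, payload) in enumerate(fragments):
            item = (-depth, i, payload)               # index i breaks ties on equal depth
            if len(heap) < k:
                heapq.heappush(heap, item)
            elif depth < -heap[0][0]:                 # closer than current k-th: replace it
                heapq.heapreplace(heap, item)
            # else: the fragment is culled, since deeper fragments cannot enter the buffer
        return sorted(((-d, p) for d, _, p in heap), key=lambda f: f[0])

    # Example: keep the 4 nearest of 6 fragments for one pixel
    # k_foremost([(0.9, 'a'), (0.2, 'b'), (0.5, 'c'), (0.7, 'd'), (0.1, 'e'), (0.3, 'f')], 4)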

  16. Perceptually relevant grouping of image tokens on the basis of constraint propagation from local binary patterns

    NASA Astrophysics Data System (ADS)

    Behlim, Sadaf Iqbal; Syed, Tahir Qasim; Malik, Muhammad Yameen; Vigneron, Vincent

    2016-11-01

    Grouping image tokens is an intermediate step needed to arrive at meaningful image representation and summarization. Usually, perceptual cues, for instance gestalt properties, inform token grouping. However, they do not take into account structural continuities that could be derived from other tokens belonging to similar structures irrespective of their location. We propose an image representation that encodes structural constraints emerging from local binary patterns (LBP), which provide a long-distance measure of similarity in a structurally connected way. Our representation provides a grouping of pixels or larger image tokens that is free of numeric similarity measures and could therefore be extended to nonmetric spaces. The representation lends itself nicely to ubiquitous image processing applications such as connected component labeling and segmentation. We test the proposed representation on the perceptual grouping (segmentation) task on the popular Berkeley segmentation dataset (BSD500), where it achieves an average F-measure of 0.559 with respect to human-segmented images. Our algorithm achieves a high average recall of 0.787 and is therefore well suited to other applications such as object retrieval and category-independent object recognition. The proposed merging heuristic, based on levels of the singular tree component, has shown promising results on the BSD500 dataset and currently ranks 12th among all benchmarked algorithms, but contrary to the others, it requires no data-driven training or specialized preprocessing.
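
    The representation starts from ordinary local binary patterns; a minimal sketch of the basic 8-neighbour LBP code computation is given below for reference. The constraint-propagation and grouping steps that build on these codes are not reproduced here.

    import numpy as np

    def lbp8(image):
        """Basic 8-neighbour local binary pattern codes (border pixels ignored).

        Each pixel gets an 8-bit code whose bits record whether each of its
        eight neighbours is at least as bright as the centre pixel.
        """
        img = image.astype(float)
        center = img[1:-1, 1:-1]
        # Neighbour offsets in clockwise order starting at the top-left pixel
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros(center.shape, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                            1 + dx:img.shape[1] - 1 + dx]
            codes |= ((neighbour >= center).astype(np.uint8) << bit)
        return codes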

  17. Measuring Filament Orientation: A New Quantitative, Local Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, C.-E.; Cunningham, M. R.; Jones, P. A.

    The relative orientation between filamentary structures in molecular clouds and the ambient magnetic field provides insight into filament formation and stability. To calculate the relative orientation, a measurement of filament orientation is first required. We propose a new method to calculate the orientation of the one-pixel-wide filament skeleton that is output by filament identification algorithms such as filfinder. We derive the local filament orientation from the direction of the intensity gradient in the skeleton image using the Sobel filter and a few simple post-processing steps. We call this the "Sobel-gradient method." The resulting filament orientation map can be compared quantitatively, on a local scale, with the magnetic field orientation map to find the relative orientation of the filament with respect to the magnetic field at each point along the filament. It can also be used for constructing radial profiles for filament width fitting. The proposed method facilitates automation in analyses of filament skeletons, which is imperative in this era of "big data."
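
    The measurement concept can be sketched as follows. The paper reads the local orientation off the Sobel gradient of the skeleton image with a few post-processing steps; as a stand-in for those steps, the sketch below uses the closely related structure tensor (Gaussian-smoothed products of the Sobel derivatives), which remains well defined on the ridge centre where the raw gradient itself vanishes. It is an illustrative substitute, not the published Sobel-gradient method.

    import numpy as np
    from scipy import ndimage

    def skeleton_orientation(skeleton, sigma=1.5):
        """Local filament orientation on a one-pixel-wide skeleton image.

        skeleton : 2-D array, non-zero on skeleton pixels
        Returns orientation in degrees in [0, 180) on skeleton pixels, NaN elsewhere.
        """
        img = skeleton.astype(float)
        gy = ndimage.sobel(img, axis=0)
        gx = ndimage.sobel(img, axis=1)
        # Structure tensor components, smoothed over a small neighbourhood
        jxx = ndimage.gaussian_filter(gx * gx, sigma)
        jxy = ndimage.gaussian_filter(gx * gy, sigma)
        jyy = ndimage.gaussian_filter(gy * gy, sigma)
        # Direction of maximal intensity variation (perpendicular to the filament) ...
        theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
        # ... so the filament itself runs 90 degrees away from it.
        orientation = (np.degrees(theta) + 90.0) % 180.0
        return np.where(skeleton > 0, orientation, np.nan)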

  18. Localizing text in scene images by boundary clustering, stroke segmentation, and string fragment classification.

    PubMed

    Yi, Chucai; Tian, Yingli

    2012-09-01

    In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surface, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text stroke with color assignment and filter out background interferences. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed framework of text localization is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.

  19. Exploring three faint source detections methods for aperture synthesis radio images

    NASA Astrophysics Data System (ADS)

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity-to-noise ratio, these objects can easily be missed by automated detection methods, which have classically been based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms for increasing the detection rate of faint objects. The first technique combines wavelet decomposition with local thresholding. The second is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better at detecting faint sources in radio interferometric images than well-known state-of-the-art methods such as SEXTRACTOR, SAD and DUCHAMP.

  20. Brain blood vessel segmentation using line-shaped profiles

    NASA Astrophysics Data System (ADS)

    Babin, Danilo; Pižurica, Aleksandra; De Vylder, Jonas; Vansteenkiste, Ewout; Philips, Wilfried

    2013-11-01

    Segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, especially for the embolization of cerebral aneurysms and arteriovenous malformations (AVMs). In order to perform embolization of an AVM, structural and geometric information on the blood vessels from 3D images is of utmost importance. For this reason, in-depth segmentation of cerebral blood vessels is usually done as a fusion of different segmentation techniques, often requiring extensive user interaction. In this paper we introduce the idea of line-shaped profiling, with an application to brain blood vessel and AVM segmentation, that is efficient both in terms of resolving details and in terms of computation time. Our method takes into account both the local proximity and the wider neighbourhood of the processed pixel, which makes it efficient for segmenting large blood vessel tree structures as well as the fine structures of AVMs. Another advantage of our method is that it requires the selection of only one parameter to perform segmentation, yielding very little user interaction.

  1. Quality issues in blue noise halftoning

    NASA Astrophysics Data System (ADS)

    Yu, Qing; Parker, Kevin J.

    1998-01-01

    The blue noise mask (BNM) is a halftone screen that produces unstructured, visually pleasing dot patterns. The BNM combines the blue-noise characteristics of error diffusion with the simplicity of ordered dither. A BNM is constructed by designing a set of interdependent binary patterns for individual gray levels. In this paper, we investigate quality issues in blue-noise binary pattern design and mask generation, as well as in application to color reproduction. Using a global filtering technique and a local 'force' process for rearranging black and white pixels, we are able to generate a series of binary patterns, all representing a certain gray level, ranging from a white-noise pattern to a highly structured pattern. The quality of these individual patterns is studied in terms of low-frequency structure and graininess. Typically, the low-frequency structure (LF) is identified with a measurement of the energy around DC in the spatial frequency domain, while the graininess is quantified by a measurement of the average minimum distance (AMD) between minority dots as well as the kurtosis of the local kurtosis distribution (KLK) for minority pixels of the binary pattern. A set of partial BNMs is generated by using the different patterns as unique starting 'seeds.' In this way, we are able to study the quality of binary patterns over a range of gray levels. We observe that the optimality of a binary pattern for mask generation is related to its own quality metric values as well as the smoothness of the transition of those quality metric values over neighboring levels. Several schemes have been developed to apply blue-noise halftoning to color reproduction; different schemes generate halftone patterns with different textures. In a previous paper, a human visual system (HVS) model was used to study color halftone quality in terms of luminance and chrominance error in CIELAB color space. In this paper, a new series of psycho-visual experiments addresses the 'preferred' color rendering among four different blue-noise halftoning schemes. The experimental results are interpreted with respect to the proposed halftone quality metrics.
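
    One of the graininess measures mentioned above, the average minimum distance (AMD) between minority dots, is straightforward to compute; a sketch is given below. The low-frequency (LF) and KLK measures are not reproduced, and wrap-around (tiling) distances, which a full mask evaluation would normally use, are ignored here for simplicity.

    import numpy as np
    from scipy.spatial import cKDTree

    def average_minimum_distance(binary_pattern):
        """Average minimum distance (AMD) between minority dots of a halftone
        binary pattern (sketch; wrap-around distances are not considered).

        binary_pattern : 2-D array of 0s and 1s
        """
        # Minority pixels are whichever value (0 or 1) is less frequent
        minority_value = 1 if binary_pattern.mean() <= 0.5 else 0
        pts = np.argwhere(binary_pattern == minority_value).astype(float)
        if len(pts) < 2:
            return np.nan
        tree = cKDTree(pts)
        # k=2: the nearest neighbour of each point other than itself
        dist, _ = tree.query(pts, k=2)
        return dist[:, 1].mean()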

  2. Scanning X-ray diffraction on cardiac tissue: automatized data analysis and processing.

    PubMed

    Nicolas, Jan David; Bernhardt, Marten; Markus, Andrea; Alves, Frauke; Burghammer, Manfred; Salditt, Tim

    2017-11-01

    A scanning X-ray diffraction study of cardiac tissue has been performed, covering the entire cross section of a mouse heart slice. To this end, moderate focusing by compound refractive lenses to micrometer spot size, continuous scanning, data acquisition by a fast single-photon-counting pixel detector, and fully automated analysis scripts have been combined. It was shown that a surprising amount of structural data can be harvested from such a scan, evaluating the local scattering intensity, interfilament spacing of the muscle tissue, the filament orientation, and the degree of anisotropy. The workflow of data analysis is described and a data analysis toolbox with example data for general use is provided. Since many cardiomyopathies rely on the structural integrity of the sarcomere, the contractile unit of cardiac muscle cells, the present study can be easily extended to characterize tissue from a diseased heart.

  3. Superpixel-based structure classification for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Bodenstedt, Sebastian; Görtler, Jochen; Wagner, Martin; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie

    2016-03-01

    Minimally invasive interventions offer multiple benefits for patients, but they also entail drawbacks for the surgeon. The goal of context-aware assistance systems is to alleviate some of these difficulties. Localizing and identifying anatomical structures, malignant tissue and surgical instruments through endoscopic image analysis is paramount for an assistance system, making online measurements and augmented reality visualizations possible. Furthermore, such information can be used to assess the progress of an intervention, thereby allowing for context-aware assistance. In this work, we present an approach for such an analysis. First, a given laparoscopic image is divided into groups of connected pixels, so-called superpixels, using the SEEDS algorithm. The content of a given superpixel is then described using information regarding its color and texture. Using a Random Forest classifier, we determine the class label of each superpixel. We evaluated our approach on a publicly available dataset for laparoscopic instrument detection and achieved a DICE score of 0.69.

  4. Synchronous Phase-Resolving Flash Range Imaging

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Hancock, Bruce

    2007-01-01

    An apparatus, now undergoing development, for range imaging based on measurement of the round-trip phase delay of a pulsed laser beam is described. The apparatus would operate in a staring mode. A pulsed laser would illuminate a target. Laser light reflected from the target would be imaged on a very-large-scale integrated (VLSI) circuit image detector, each pixel of which would contain a photodetector and a phase-measuring circuit. The round-trip travel time for the reflected laser light incident on each pixel, and thus the distance to the portion of the target imaged in that pixel, would be measured in terms of the phase difference between (1) the photodetector output pulse and (2) a local-oscillator signal that would have a frequency between 10 and 20 MHz and that would be synchronized with the laser-pulse-triggering signal.
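
    The phase-to-range conversion implied above can be written down directly: for a local oscillator at frequency f, a round-trip phase difference of phi corresponds to a distance of c*phi/(4*pi*f), unambiguous up to c/(2f). A small sketch with an assumed 15 MHz oscillator:

        import numpy as np

        C = 299_792_458.0          # speed of light, m/s

        def phase_to_range(phase_rad, f_lo_hz):
            """Distance from round-trip phase delay at local-oscillator frequency f_lo.

            range = c * phase / (4 * pi * f_lo); unambiguous up to c / (2 * f_lo).
            """
            return C * phase_rad / (4.0 * np.pi * f_lo_hz)

        f_lo = 15e6                                  # assumed value in the 10-20 MHz band
        print(phase_to_range(np.pi / 2, f_lo))       # ~2.5 m
        print(C / (2 * f_lo))                        # unambiguous range, ~10 m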

  5. A novel segmentation method for uneven lighting image with noise injection based on non-local spatial information and intuitionistic fuzzy entropy

    NASA Astrophysics Data System (ADS)

    Yu, Haiyan; Fan, Jiulun

    2017-12-01

    Local thresholding methods for uneven lighting image segmentation always have the limitations that they are very sensitive to noise injection and that the performance relies largely upon the choice of the initial window size. This paper proposes a novel algorithm for segmenting uneven lighting images with strong noise injection based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, which is composed of many peaks and troughs, and these peaks and troughs can divide the image into many local sub-regions in different directions. Our algorithm computes the relative characteristic of each pixel located in the corresponding sub-region based on fuzzy membership function and uses it to replace its absolute characteristic (its gray level) to reduce the influence of uneven light on image segmentation. At the same time, the non-local adaptive spatial constraints of pixels are introduced to avoid noise interference with the search of local sub-regions and the computation of local characteristics. Moreover, edge information is also taken into account to avoid false peak and trough labeling. Finally, a global method based on intuitionistic fuzzy entropy is employed on the wave transformation image to obtain the segmented result. Experiments on several test images show that the proposed method has excellent capability of decreasing the influence of uneven illumination on images and noise injection and behaves more robustly than several classical global and local thresholding methods.

  6. Optimization of Focusing by Strip and Pixel Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, G J; White, D A; Thompson, C A

    Professor Kevin Webb and students at Purdue University have demonstrated the design of conducting strip and pixel arrays for focusing electromagnetic waves [1, 2]. Their key point was to design structures to focus waves in the near field using full wave modeling and optimization methods for design. Their designs included arrays of conducting strips optimized with a downhill search algorithm and arrays of conducting and dielectric pixels optimized with the iterative direct binary search method. They used a finite element code for modeling. This report documents our attempts to duplicate and verify their results. We have modeled 2D conducting strips and both conducting and dielectric pixel arrays with moment method and FDTD codes to compare with Webb's results. New designs for strip arrays were developed with optimization by the downhill simplex method with simulated annealing. Strip arrays were optimized to focus an incident plane wave at a point or at two separated points and to switch between focusing points with a change in frequency. We also tried putting a line current source at the focus point for the plane wave to see how it would work as a directive antenna. We have not tried optimizing the conducting or dielectric pixel arrays, but modeled the structures designed by Webb with the moment method and FDTD to compare with the Purdue results.

  7. Prediction of near-term breast cancer risk using local region-based bilateral asymmetry features in mammography

    NASA Astrophysics Data System (ADS)

    Li, Yane; Fan, Ming; Li, Lihua; Zheng, Bin

    2017-03-01

    This study proposed a near-term breast cancer risk assessment model based on local region bilateral asymmetry features in mammography. The database includes 566 cases who underwent at least two sequential FFDM examinations. The 'prior' examinations in the two series were all interpreted as negative (not recalled). In the "current" examination, 283 women were diagnosed with cancer and 283 remained negative. The ages of the cancer and negative cases were completely matched. These cases were divided into three subgroups according to age: 152 cases in the 37-49 age bracket, 220 cases in the 50-60 age bracket, and 194 cases in the 61-86 age bracket. For each image, two types of local regions, strip-based regions and difference-of-Gaussian basic element regions, were segmented. After that, structural variation features among pixel values and structural similarity features were computed for the strip regions. Meanwhile, positional features were extracted for the basic element regions. The absolute subtraction value was computed between each feature of the left and right local regions. Next, a multi-layer perceptron classifier was implemented to assess the performance of the features for prediction. Features were then selected according to stepwise regression analysis. The AUC reached 0.72, 0.75 and 0.71 for these three age-based subgroups, respectively. The maximum adjustable odds ratios were 12.4, 20.56 and 4.91 for these three groups, respectively. This study demonstrates that local region-based bilateral asymmetry features extracted from CC-view mammography could provide useful information to predict near-term breast cancer risk.
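
    Reduced to its essentials, the classification stage pairs absolute left-right feature differences with a multi-layer perceptron. The sketch below uses random placeholder feature vectors (the paper's strip-region and basic-element features are not reproduced) and scikit-learn; the hidden-layer size and cross-validation setup are illustrative assumptions.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        def asymmetry_features(left_feats, right_feats):
            """Absolute left-right difference for each local-region feature."""
            return np.abs(left_feats - right_feats)

        # Hypothetical data: per-case feature vectors from left and right breasts
        rng = np.random.default_rng(1)
        left = rng.random((152, 20))
        right = rng.random((152, 20))
        y = rng.integers(0, 2, 152)                  # 1 = cancer in the "current" exam

        X = asymmetry_features(left, right)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
        print(f"cross-validated AUC: {auc:.2f}")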

  8. Using Anisotropic 3D Minkowski Functionals for Trabecular Bone Characterization and Biomechanical Strength Prediction in Proximal Femur Specimens

    PubMed Central

    Nagarajan, Mahesh B.; De, Titas; Lochmüller, Eva-Maria; Eckstein, Felix; Wismüller, Axel

    2017-01-01

    The ability of Anisotropic Minkowski Functionals (AMFs) to capture local anisotropy while evaluating topological properties of the underlying gray-level structures has been previously demonstrated. We evaluate the ability of this approach to characterize local structure properties of trabecular bone micro-architecture in ex vivo proximal femur specimens, as visualized on multi-detector CT, for purposes of biomechanical bone strength prediction. To this end, volumetric AMFs were computed locally for each voxel of volumes of interest (VOI) extracted from the femoral head of 146 specimens. The local anisotropy captured by such AMFs was quantified using a fractional anisotropy measure; the magnitude and direction of anisotropy at every pixel were stored in histograms that served as feature vectors that characterized the VOIs. A linear multi-regression analysis algorithm was used to predict the failure load (FL) from the feature sets; the predicted FL was compared to the true FL determined through biomechanical testing. The prediction performance was measured by the root mean square error (RMSE) for each feature set. The best prediction performance was obtained from the fractional anisotropy histogram of the AMF Euler Characteristic (RMSE = 1.01 ± 0.13), which was significantly better than MDCT-derived mean BMD (RMSE = 1.12 ± 0.16, p<0.05). We conclude that such anisotropic Minkowski Functionals can capture valuable information regarding regional trabecular bone quality and contribute to improved bone strength prediction, which is important for improving the clinical assessment of osteoporotic fracture risk. PMID:29170581

  9. Topology-guided deformable registration with local importance preservation for biomedical images

    NASA Astrophysics Data System (ADS)

    Zheng, Chaojie; Wang, Xiuying; Zeng, Shan; Zhou, Jianlong; Yin, Yong; Feng, Dagan; Fulham, Michael

    2018-01-01

    The demons registration (DR) model is well recognized for its deformation capability. However, it might lead to misregistration due to erroneous diffusion direction when there are no overlaps between corresponding regions. We propose a novel registration energy function, introducing topology energy, and incorporating a local energy function into the DR in a progressive registration scheme, to address these shortcomings. The topology energy that is derived from the topological information of the images serves as a direction inference to guide diffusion transformation to retain the merits of DR. The local energy constrains the deformation disparity of neighbouring pixels to maintain important local texture and density features. The energy function is minimized in a progressive scheme steered by a topology tree graph and we refer to it as topology-guided deformable registration (TDR). We validated our TDR on 20 pairs of synthetic images with Gaussian noise, 20 phantom PET images with artificial deformations and 12 pairs of clinical PET-CT studies. We compared it to three methods: (1) free-form deformation registration method, (2) energy-based DR and (3) multi-resolution DR. The experimental results show that our TDR outperformed the other three methods in regard to structural correspondence and preservation of the local important information including texture and density, while retaining global correspondence.

  10. A High-Speed, Event-Driven, Active Pixel Sensor Readout for Photon-Counting Microchannel Plate Detectors

    NASA Technical Reports Server (NTRS)

    Kimble, Randy A.; Pain, Bedabrata; Norton, Timothy J.; Haas, J. Patrick; Oegerle, William R. (Technical Monitor)

    2002-01-01

    Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are not overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on using an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem, while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.

  11. Cellular image segmentation using n-agent cooperative game theory

    NASA Astrophysics Data System (ADS)

    Dimock, Ian B.; Wan, Justin W. L.

    2016-03-01

    Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at segmenting bright-field images are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.

  12. Robust image region descriptor using local derivative ordinal binary pattern

    NASA Astrophysics Data System (ADS)

    Shang, Jun; Chen, Chuanbo; Pei, Xiaobing; Liang, Hu; Tang, He; Sarem, Mudar

    2015-05-01

    Binary image descriptors have received a lot of attention in recent years, since they provide numerous advantages, such as low memory footprint and efficient matching strategy. However, they utilize intermediate representations and are generally less discriminative than floating-point descriptors. We propose an image region descriptor, namely local derivative ordinal binary pattern, for object recognition and image categorization. In order to preserve more local contrast and edge information, we quantize the intensity differences between the central pixels and their neighbors of the detected local affine covariant regions in an adaptive way. These differences are then sorted and mapped into binary codes and histogrammed with a weight of the sum of the absolute value of the differences. Furthermore, the gray level of the central pixel is quantized to further improve the discriminative ability. Finally, we combine them to form a joint histogram to represent the features of the image. We observe that our descriptor preserves more local brightness and edge information than traditional binary descriptors. Also, our descriptor is robust to rotation, illumination variations, and other geometric transformations. We conduct extensive experiments on the standard ETHZ and Kentucky datasets for object recognition and PASCAL for image classification. The experimental results show that our descriptor outperforms existing state-of-the-art methods.

  13. A summary of image segmentation techniques

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher level vision tasks. There is no theory on image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized in a number of different groups including local vs. global, parallel vs. sequential, contextual vs. noncontextual, interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because there are a number of survey papers available, we will not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview. We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available yet present enough details to facilitate implementation and experimentation.
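
    The pixel-based and region-based categories can be illustrated with two tiny routines: an Otsu threshold that classifies each pixel by its gray level alone, and a seed-grown region that adds 4-connected neighbours within a tolerance. This is a generic sketch of the two categories, not any specific method from the overview; the tolerance parameter is an assumption.

        import numpy as np
        from collections import deque
        from skimage.filters import threshold_otsu

        def pixel_based(image):
            """Pixel-based: classify each pixel by gray level alone (Otsu threshold)."""
            return image > threshold_otsu(image)

        def region_based(image, seed, tol=10):
            """Region-based: grow from a seed pixel, adding 4-neighbours within `tol` gray levels."""
            grown = np.zeros(image.shape, bool)
            queue = deque([seed])
            seed_val = float(image[seed])
            while queue:
                r, c = queue.popleft()
                if grown[r, c]:
                    continue
                grown[r, c] = True
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                            and not grown[rr, cc]
                            and abs(float(image[rr, cc]) - seed_val) <= tol):
                        queue.append((rr, cc))
            return grown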

  14. Mars Descent Imager (MARDI) on the Mars Polar Lander

    USGS Publications Warehouse

    Malin, M.C.; Caplinger, M.A.; Carr, M.H.; Squyres, S.; Thomas, P.; Veverka, J.

    2001-01-01

    The Mars Descent Imager, or MARDI, experiment on the Mars Polar Lander (MPL) consists of a camera characterized by small physical size and mass (∼6 × 6 × 12 cm, including baffle; <500 gm), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 × 1000 pixel, low noise). The intent of the investigation is to acquire nested images over a range of resolutions, from 8 m/pixel to better than 1 cm/pixel, during the roughly 2 min it takes the MPL to descend from 8 km to the surface under parachute and rocket-powered deceleration. Observational goals will include studies of (1) surface morphology (e.g., nature and distribution of landforms indicating past and present environmental processes); (2) local and regional geography (e.g., context for other lander instruments: precise location, detailed local relief); and (3) relationships to features seen in orbiter data. To accomplish these goals, MARDI will collect three types of images. Four small images (256 × 256 pixels) will be acquired on 0.5 s centers beginning 0.3 s before MPL's heatshield is jettisoned. Sixteen full-frame images (1024 × 1024, circularly edited) will be acquired on 5.3 s centers thereafter. Just after backshell jettison but prior to the start of powered descent, a "best final nonpowered descent image" will be acquired. Five seconds after the start of powered descent, the camera will begin acquiring images on 4 s centers. Storage for as many as ten 800 × 800 pixel images is available during terminal descent. A number of spacecraft factors are likely to impact the quality of MARDI images, including substantial motion blur resulting from large rates of attitude variation during parachute descent and substantial rocket-engine-induced vibration during powered descent. In addition, the mounting location of the camera places the exhaust plume of the hydrazine engines prominently in the field of view. Copyright 2001 by the American Geophysical Union.

  15. The DEPFET Sensor-Amplifier Structure: A Method to Beat 1/f Noise and Reach Sub-Electron Noise in Pixel Detectors

    PubMed Central

    Lutz, Gerhard; Porro, Matteo; Aschauer, Stefan; Wölfel, Stefan; Strüder, Lothar

    2016-01-01

    Depleted field effect transistors (DEPFET) are used to achieve very low noise signal charge readout with sub-electron measurement precision. This is accomplished by repeatedly reading an identical charge, thereby suppressing not only the white serial noise but also the usually constant 1/f noise. The repetitive non-destructive readout (RNDR) DEPFET is an ideal central element for an active pixel sensor (APS) pixel. The theory has been derived thoroughly and results have been verified on RNDR-DEPFET prototypes. A charge measurement precision of 0.18 electrons has been achieved. The device is well-suited for spectroscopic X-ray imaging and for optical photon counting in pixel sensors, even at high photon numbers in the same cell. PMID:27136549
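
    The gain from repetitive non-destructive readout can be illustrated with a toy simulation: averaging N reads of the same, unchanged charge packet reduces the white readout noise roughly as 1/sqrt(N). Only the white-noise part is modelled here (the 1/f suppression discussed above is not), and the numbers are illustrative.

        import numpy as np

        def rndr_estimate(true_charge_e, single_read_noise_e, n_reads, rng):
            """Average of n non-destructive reads of the same, unchanged charge packet."""
            reads = true_charge_e + rng.normal(0.0, single_read_noise_e, n_reads)
            return reads.mean()

        rng = np.random.default_rng(4)
        trials = np.array([rndr_estimate(5.0, 2.5, 200, rng) for _ in range(2000)])
        print(trials.std())      # ~2.5 / sqrt(200) ≈ 0.18 e- (white-noise contribution only)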

  16. Biological tissue imaging with a position and time sensitive pixelated detector.

    PubMed

    Jungmann, Julia H; Smith, Donald F; MacAleese, Luke; Klinkert, Ivo; Visser, Jan; Heeren, Ron M A

    2012-10-01

    We demonstrate the capabilities of a highly parallel, active pixel detector for large-area, mass spectrometric imaging of biological tissue sections. A bare Timepix assembly (512 × 512 pixels) is combined with chevron microchannel plates on an ion microscope matrix-assisted laser desorption time-of-flight mass spectrometer (MALDI TOF-MS). The detector assembly registers position- and time-resolved images of multiple m/z species in every measurement frame. We prove the applicability of the detection system to biomolecular mass spectrometry imaging on biologically relevant samples by mass-resolved images from Timepix measurements of a peptide-grid benchmark sample and mouse testis tissue slices. Mass-spectral and localization information of analytes at physiologic concentrations are measured in MALDI-TOF-MS imaging experiments. We show a high spatial resolution (pixel size down to 740 × 740 nm² on the sample surface) and a spatial resolving power of 6 μm with a microscope mode laser field of view of 100-335 μm. Automated, large-area imaging is demonstrated and the Timepix's potential for fast, large-area image acquisition is highlighted.

  17. Automated determination of arterial input function for DCE-MRI of the prostate

    NASA Astrophysics Data System (ADS)

    Zhu, Yingxuan; Chang, Ming-Ching; Gupta, Sandeep

    2011-03-01

    Prostate cancer is one of the commonest cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in literature, none are optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVF). First, we analytically compute bounds on GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel based on local time domain information, and eliminate the pixels with false estimated AIFs using the deduced upper and lower bounds. This makes the algorithm robust to signal inhomogeneity. After that, according to spatial information such as similarity and distance between pixels, we formulate the global AIF selection as an energy minimization problem and solve it using a message passing algorithm to further rule out the weak pixels and optimize the detected AIF. Our method is fully automated without training or a priori setting of parameters. Experimental results on clinical data have shown that our method obtained promising detection accuracy (all detected pixels inside major arteries), and a very good match with expert traced manual AIF.
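
    A per-pixel gamma variate fit of the kind described can be sketched with SciPy. The model C(t) = A*(t - t0)^alpha * exp(-(t - t0)/beta) for t > t0 is assumed, and the parameter bounds and initial guesses below are illustrative placeholders, not the analytically derived bounds of the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def gamma_variate(t, A, t0, alpha, beta):
            """Gamma variate: A * (t - t0)^alpha * exp(-(t - t0) / beta) for t > t0."""
            dt = np.clip(t - t0, 0.0, None)
            return A * dt ** alpha * np.exp(-dt / beta)

        def fit_pixel_curve(t, c):
            """Fit one pixel uptake curve; bounds and initial guesses are illustrative."""
            t0_guess = t[np.argmax(c > 0.1 * c.max())]       # first sample above 10% of peak
            p0 = (c.max() / 10.0, max(t0_guess - 5.0, 0.0), 2.0, 5.0)
            bounds = ((0.0, 0.0, 0.1, 0.1), (np.inf, t.max(), 10.0, 60.0))
            popt, _ = curve_fit(gamma_variate, t, c, p0=p0, bounds=bounds)
            return popt

        # Example: a noisy synthetic uptake curve sampled every 5 s
        t = np.arange(0, 120, 5.0)
        true = gamma_variate(t, 3.0, 10.0, 2.0, 8.0)
        noisy = true + np.random.default_rng(2).normal(0, 0.05, t.size)
        print(fit_pixel_curve(t, noisy))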

  18. Automatic Organ Segmentation for CT Scans Based on Super-Pixel and Convolutional Neural Networks.

    PubMed

    Liu, Xiaoming; Guo, Shuxu; Yang, Bingtao; Ma, Shuzhi; Zhang, Huimao; Li, Jing; Sun, Changjian; Jin, Lanyi; Li, Xueyan; Yang, Qi; Fu, Yu

    2018-04-20

    Accurate segmentation of specific organs from computed tomography (CT) scans is a basic and crucial task for accurate diagnosis and treatment. To avoid time-consuming manual optimization and to help physicians distinguish diseases, an automatic organ segmentation framework is presented. The framework utilizes convolutional neural networks (CNN) to classify pixels. To reduce the redundant inputs, the simple linear iterative clustering (SLIC) of super-pixels and the support vector machine (SVM) classifier are introduced. To establish the precise boundary of organs at the one-pixel level, the pixels need to be classified step-by-step. First, the SLIC is used to cut an image into grids and extract respective digital signatures. Next, the signature is classified by the SVM, and the rough edges are acquired. Finally, a precise boundary is obtained by the CNN, which is based on patches around each pixel point. The framework is applied to abdominal CT scans of livers and high-resolution computed tomography (HRCT) scans of lungs. The experimental CT scans are derived from two public datasets (Sliver 07 and a Chinese local dataset). Experimental results show that the proposed method can precisely and efficiently detect the organs. This method consumes 38 s/slice for liver segmentation. The Dice coefficient of the liver segmentation results reaches 97.43%. For lung segmentation, the Dice coefficient is 97.93%. This finding demonstrates that the proposed framework is a favorable method for lung segmentation of HRCT scans.
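
    The Dice coefficient used to report the liver and lung results is straightforward to compute from binary masks; a minimal sketch:

        import numpy as np

        def dice_coefficient(pred, truth):
            """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
            pred = pred.astype(bool)
            truth = truth.astype(bool)
            inter = np.logical_and(pred, truth).sum()
            denom = pred.sum() + truth.sum()
            return 2.0 * inter / denom if denom else 1.0

        # Example: two overlapping square masks
        a = np.zeros((100, 100), bool); a[20:60, 20:60] = True
        b = np.zeros((100, 100), bool); b[30:70, 30:70] = True
        print(dice_coefficient(a, b))    # 2*900 / (1600 + 1600) ≈ 0.56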

  19. Construction of pixel-level resolution DEMs from monocular images by shape and albedo from shading constrained with low-resolution DEM

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Liu, Wai Chung; Grumpe, Arne; Wöhler, Christian

    2018-06-01

    Lunar digital elevation models (DEMs) are important for successful lunar landing and exploration missions. Lunar DEMs are typically generated by photogrammetry or laser altimetry approaches. Photogrammetric methods require multiple stereo images of the region of interest and may not be applicable in cases where stereo coverage is not available. In contrast, reflectance-based shape reconstruction techniques, such as shape from shading (SfS) and shape and albedo from shading (SAfS), apply monocular images to generate DEMs with pixel-level resolution. We present a novel hierarchical SAfS method that refines a lower-resolution DEM to pixel-level resolution given a monocular image with a known light source. We also estimate the corresponding pixel-wise albedo map in the process and use it to regularize the pixel-level shape reconstruction constrained by the low-resolution DEM. In this study, a Lunar-Lambertian reflectance model is applied to estimate the albedo map. Experiments were carried out using monocular images from the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC), with a spatial resolution of 0.5-1.5 m per pixel, constrained by the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM), with a spatial resolution of 60 m. The results indicate that local details are well recovered by the proposed algorithm with plausible albedo estimation. The low-frequency topographic consistency depends on the quality of the low-resolution DEM and the resolution difference between the image and the low-resolution DEM.

  20. Low cost solution-based materials processing methods for large area OLEDs and OFETs

    NASA Astrophysics Data System (ADS)

    Jeong, Jonghwa

    In Part 1, we demonstrate the fabrication of organic light-emitting devices (OLEDs) with precisely patterned pixels by the spin-casting of Alq3 and rubrene thin films with dimensions as small as 10 μm. The solution-based patterning technique produces pixels via the segregation of organic molecules into microfabricated channels or wells. Segregation is controlled by a combination of weak adsorbing characteristics of aliphatic terminated self-assembled monolayers (SAMs) and by centrifugal force, which directs the organic solution into the channel or well. This novel patterning technique may resolve the limitations of pixel resolution in the method of thermal evaporation using shadow masks, and is applicable to the fabrication of large area displays. Furthermore, the patterning technique has the potential to produce pixel sizes down to the limitation of photolithography and micromachining techniques, thereby enabling the fabrication of high-resolution microdisplays. The patterned OLEDs, based upon a confined structure with low refractive index of SiO2, exhibited higher current density than an unpatterned OLED, which results in higher electroluminescence intensity and eventually more efficient device operation at low applied voltages. We discuss the patterning method and device fabrication, and characterize the morphological, optical, and electrical properties of the organic pixels. In Part 2, we demonstrate a new growth technique for organic single crystals based on solvent vapor assisted recrystallization. We show that, by controlling the polarity of the solvent vapor and the exposure time in a closed system, we obtain rubrene in orthorhombic to monoclinic crystal structures. This novel technique for growing single crystals can induce phase shifting and alteration of crystal structure and lattice parameters. The organic molecules showed structural change from orthorhombic to monoclinic, which also provided additional optical transition of hypsochromic shift from that of the orthorhombic form. An intermediate form of the crystal exhibits an optical transition to the lowest vibrational energy level that is otherwise disallowed in the single-crystal orthorhombic form. The monoclinic form exhibits entirely new optical transitions and showed a possible structural rearrangement for increasing charge carrier mobility, making it promising for organic devices. These phenomena can be explained and proved by the chemical structure and molecular packing of the monoclinic form, transformed from orthorhombic crystalline structure.

  1. A new method for CT dose estimation by determining patient water equivalent diameter from localizer radiographs: Geometric transformation and calibration methods using readily available phantoms.

    PubMed

    Zhang, Da; Mihai, Georgeta; Barbaras, Larry G; Brook, Olga R; Palmer, Matthew R

    2018-05-10

    Water equivalent diameter (Dw) reflects the patient's attenuation, is a sound descriptor of patient size, and is used to determine the size-specific dose estimate from a CT examination. Calculating Dw from CT localizer radiographs makes it possible to utilize Dw before actual scans and minimizes truncation errors due to limited reconstructed fields of view. One obstacle preventing the user community from implementing this useful tool is the necessity to calibrate localizer pixel values so as to represent water equivalent attenuation. We report a practical method to ease this calibration process. Dw is calculated from the water equivalent area (Aw), which is deduced from the average localizer pixel value (LPV) of the line(s) in the localizer radiograph that correspond(s) to the axial image. The calibration process is conducted to establish the relationship between Aw and LPV. Localizer and axial images were acquired from phantoms of different total attenuation. We developed a program that automates the geometrical association between axial images and localizer lines and manages the measurements of Dw and average pixel values. We tested the calibration method on three CT scanners: a GE CT750HD, a Siemens Definition AS, and a Toshiba Aquilion Prime80, for both posterior-anterior (PA) and lateral (LAT) localizer directions (for all CTs) and with different localizer filters (for the Toshiba CT). The computer program was able to correctly perform the geometrical association between corresponding axial images and localizer lines. Linear relationships between Aw and LPV were observed (with R² values all greater than 0.998) under all tested conditions, regardless of the direction and image filters used on the localizer radiographs. When comparing the LAT and PA directions with the same image filter and for the same scanner, the slope values were close (maximum difference of 0.02 mm), and the intercept values showed larger deviations (maximum difference of 2.8 mm). Water equivalent diameter estimation on phantoms and patients demonstrated the high accuracy of the calibration: the percentage difference between Dw from axial images and localizers was below 2%. For five clinical chest examinations and five abdominal-pelvic examinations of varying patient sizes, the maximum percentage difference was approximately 5%. Our study showed that Aw and LPV are highly correlated, providing enough evidence to allow for Dw determination once the experimental calibration process is established. © 2018 American Association of Physicists in Medicine.
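
    The core relations are simple: Dw = 2*sqrt(Aw/pi), and the calibration is a linear fit of Aw against the mean localizer pixel value. The sketch below uses made-up phantom calibration points purely for illustration; it is not the authors' program.

        import numpy as np

        def water_equivalent_diameter(aw_mm2):
            """Dw = 2 * sqrt(Aw / pi), with Aw in mm^2 and Dw in mm."""
            return 2.0 * np.sqrt(aw_mm2 / np.pi)

        def calibrate(lpv, aw_mm2):
            """Linear fit Aw = slope * LPV + intercept from phantom measurements.

            `lpv` are mean localizer pixel values of lines matched to axial slices
            whose true Aw was measured from the axial images (illustrative data).
            """
            slope, intercept = np.polyfit(lpv, aw_mm2, 1)
            return slope, intercept

        # Hypothetical phantom calibration points
        lpv = np.array([1200.0, 2500.0, 4100.0, 5600.0])
        aw = np.array([20000.0, 45000.0, 75000.0, 102000.0])     # mm^2
        slope, intercept = calibrate(lpv, aw)
        aw_patient = slope * 3300.0 + intercept
        print(water_equivalent_diameter(aw_patient))              # estimated Dw in mm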

  2. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An

    2017-02-01

    We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns, then by applying an inverse cosine transform to the spectra obtained from the single-pixel detector a full-colour image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
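
    The reconstruction step amounts to an inverse 2D discrete cosine transform of the measured coefficients. The sketch below simulates the measurement by taking the DCT of a random scene and then recovers a sub-Nyquist image from only the lowest spatial frequencies; the 25% coefficient budget is an arbitrary illustration, not the paper's setting.

        import numpy as np
        from scipy.fft import dctn, idctn

        # Simulate the single-pixel measurements: projecting DCT basis patterns onto
        # the scene and summing the reflected light yields one DCT coefficient per pattern.
        rng = np.random.default_rng(3)
        scene = rng.random((64, 64))
        coeffs = dctn(scene, norm="ortho")           # stand-in for the measured spectrum

        # Sub-Nyquist reconstruction: keep only the lowest 25% of spatial frequencies
        mask = np.zeros_like(coeffs)
        mask[:32, :32] = 1.0
        recovered = idctn(coeffs * mask, norm="ortho")
        print(np.abs(recovered - scene).mean())      # residual error of the truncated recovery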

  3. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    PubMed

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke-. Readout noise under the highest pixel gain condition is 1 e- with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.
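
    The merging of the two readouts into one linear signal can be sketched as a simple conditional blend: use the high-gain sample where it is unsaturated, otherwise fall back to the gain-ratio-scaled low-gain sample. The threshold, gain ratio and digital numbers below are illustrative assumptions, not the sensor's actual parameters.

        import numpy as np

        def merge_dual_gain(high, low, gain_ratio, high_sat=3500):
            """Merge two readouts of the same exposure into one linear signal.

            `high` : high-pixel-gain / high-analog-gain samples (best SNR, clips early)
            `low`  : low-gain samples, scaled by `gain_ratio` to the same units
            Pixels where the high-gain channel approaches saturation are taken
            from the scaled low-gain channel (thresholds are illustrative).
            """
            high = np.asarray(high, dtype=float)
            low_scaled = np.asarray(low, dtype=float) * gain_ratio
            return np.where(high < high_sat, high, low_scaled)

        # Example: a bright region clips the high-gain channel
        high = np.array([120.0, 900.0, 4095.0, 4095.0])
        low = np.array([4.0, 30.0, 250.0, 900.0])
        print(merge_dual_gain(high, low, gain_ratio=30.0))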

  4. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process †

    PubMed Central

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-01

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach. PMID:29329210

  5. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    PubMed

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation is based on computer graphics to generate a realistic 3D structural scene of vegetation, and to simulate the canopy regime using the radiosity method. In the present paper, the authors expand the computer simulation model to simulate forest canopy bidirectional reflectance at pixel scale. However, trees are usually complex structures that are tall and have many branches, so hundreds of thousands or even millions of facets are needed to build up a realistic structure scene for the forest. It is difficult for the radiosity method to compute so many facets. In order to enable the radiosity method to simulate the forest scene at pixel scale, the authors proposed to simplify the structure of the forest crowns by abstracting the crowns as ellipsoids. Based on the optical characteristics of the tree components and the characteristics of internal photon energy transport in a real crown, the authors assigned the optical characteristics of the ellipsoid surface facets. In the computer simulation of the forest, following the idea of the geometrical optics model, the gap model is considered to obtain the forest canopy bidirectional reflectance at pixel scale. Comparing the computer simulation results with the GOMS model and with Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, the simulation results are in agreement with the GOMS simulation results and the MISR BRF. Some problems remain to be solved, but the authors conclude that the study has important value for the application of multi-angle remote sensing and for the inversion of vegetation canopy structure parameters.

  6. Development of n-in-p pixel modules for the ATLAS upgrade at HL-LHC

    NASA Astrophysics Data System (ADS)

    Macchiolo, A.; Nisius, R.; Savic, N.; Terzo, S.

    2016-09-01

    Thin planar pixel modules are promising candidates to instrument the inner layers of the new ATLAS pixel detector for the HL-LHC, thanks to their reduced contribution to the material budget and their high charge collection efficiency after irradiation. 100-200 μm thick sensors, interconnected to FE-I4 read-out chips, have been characterized with radioactive sources and beam tests at the CERN-SPS and DESY. The results of these measurements are reported for devices before and after irradiation up to a fluence of 14 × 10¹⁵ neq/cm². The charge collection and tracking efficiency of the different sensor thicknesses are compared. The outlook for future planar pixel sensor production is discussed, with a focus on sensor designs with the pixel pitches (50×50 and 25×100 μm²) foreseen for the RD53 Collaboration read-out chip in 65 nm CMOS technology. An optimization of the biasing structures in the pixel cells is required to avoid the hit efficiency loss presently observed in the punch-through region after irradiation. For this purpose the performance of different layouts has been compared in FE-I4 compatible sensors at various fluence levels by using beam test data. Highly segmented sensors will represent a challenge for the tracking in the forward region of the pixel system at the HL-LHC. In order to reproduce the performance of 50×50 μm² pixels at high pseudo-rapidity values, FE-I4 compatible planar pixel sensors have been studied before and after irradiation in beam tests at high incidence angle (80°) with respect to the short pixel direction. Results on cluster shapes, charge collection and hit efficiency will be shown.

  7. Context-Aware Local Binary Feature Learning for Face Recognition.

    PubMed

    Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2018-05-01

    In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.

  8. Adaptive non-local means on local principle neighborhood for noise/artifacts reduction in low-dose CT images.

    PubMed

    Zhang, Yuanke; Lu, Hongbing; Rong, Junyan; Meng, Jing; Shang, Junliang; Ren, Pinghong; Zhang, Junying

    2017-09-01

    Low-dose CT (LDCT) techniques can reduce the x-ray radiation exposure to patients at the cost of degraded images with severe noise and artifacts. Non-local means (NLM) filtering has shown its potential in improving LDCT image quality. However, currently most NLM-based approaches employ a weighted average operation directly on all neighbor pixels with a fixed filtering parameter throughout the NLM filtering process, ignoring the non-stationary noise nature of LDCT images. In this paper, an adaptive NLM filtering scheme on local principle neighborhoods (PC-NLM) is proposed for structure-preserving noise/artifacts reduction in LDCT images. Instead of using neighboring patches directly, in the PC-NLM scheme, principal component analysis (PCA) is first applied on local neighboring patches of the target patch to decompose the local patches into uncorrelated principal components (PCs), then NLM filtering is used to regularize each PC of the target patch, and finally the regularized components are transformed back to obtain the target patch in the image domain. Especially, in the NLM scheme, the filtering parameter is estimated adaptively from the local noise level of the neighborhood as well as the signal-to-noise ratio (SNR) of the corresponding PC, which guarantees a "weaker" NLM filtering on PCs with higher SNR and a "stronger" filtering on PCs with lower SNR. The PC-NLM procedure is iteratively performed several times for better removal of the noise and artifacts, and an adaptive iteration strategy is developed to reduce the computational load by determining whether a patch should be processed or not in the next round of the PC-NLM filtering. The effectiveness of the presented PC-NLM algorithm is validated by experimental phantom studies and clinical studies. The results show that it can achieve promising gains over some state-of-the-art methods in terms of artifact suppression and structure preservation. With the use of PCA on local neighborhoods to extract principal structural components, as well as adaptive NLM filtering on PCs of the target patch using a filtering parameter estimated based on the local noise level and corresponding SNR, the proposed PC-NLM method shows its efficacy in preserving fine anatomical structures and suppressing noise/artifacts in LDCT images. © 2017 American Association of Physicists in Medicine.

  9. Active pixel imagers incorporating pixel-level amplifiers based on polycrystalline-silicon thin-film transistors

    PubMed Central

    El-Mohri, Youcef; Antonuk, Larry E.; Koniczek, Martin; Zhao, Qihua; Li, Yixin; Street, Robert A.; Lu, Jeng-Ping

    2009-01-01

    Active matrix, flat-panel imagers (AMFPIs) employing a 2D matrix of a-Si addressing TFTs have become ubiquitous in many x-ray imaging applications due to their numerous advantages. However, under conditions of low exposures and∕or high spatial resolution, their signal-to-noise performance is constrained by the modest system gain relative to the electronic additive noise. In this article, a strategy for overcoming this limitation through the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, using polycrystalline-silicon (poly-Si) TFTs is reported. Compared to a-Si, poly-Si offers substantially higher mobilities, enabling higher TFT currents and the possibility of sophisticated AP designs based on both n- and p-channel TFTs. Three prototype indirect detection arrays employing poly-Si TFTs and a continuous a-Si photodiode structure were characterized. The prototypes consist of an array (PSI-1) that employs a pixel architecture with a single TFT, as well as two arrays (PSI-2 and PSI-3) that employ AP architectures based on three and five TFTs, respectively. While PSI-1 serves as a reference with a design similar to that of conventional AMFPI arrays, PSI-2 and PSI-3 incorporate additional in-pixel amplification circuitry. Compared to PSI-1, results of x-ray sensitivity demonstrate signal gains of ∼10.7 and 20.9 for PSI-2 and PSI-3, respectively. These values are in reasonable agreement with design expectations, demonstrating that poly-Si AP circuits can be tailored to provide a desired level of signal gain. PSI-2 exhibits the same high levels of charge trapping as those observed for PSI-1 and other conventional arrays employing a continuous photodiode structure. For PSI-3, charge trapping was found to be significantly lower and largely independent of the bias voltage applied across the photodiode. MTF results indicate that the use of a continuous photodiode structure in PSI-1, PSI-2, and PSI-3 results in optical fill factors that are close to unity. In addition, the greater complexity of PSI-2 and PSI-3 pixel circuits, compared to that of PSI-1, has no observable effect on spatial resolution. Both PSI-2 and PSI-3 exhibit high levels of additive noise, resulting in no net improvement in the signal-to-noise performance of these early prototypes compared to conventional AMFPIs. However, faster readout rates, coupled with implementation of multiple sampling protocols allowed by the nondestructive nature of pixel readout, resulted in a significantly lower noise level of ∼560 e (rms) for PSI-3. PMID:19673229

  10. Active pixel imagers incorporating pixel-level amplifiers based on polycrystalline-silicon thin-film transistors.

    PubMed

    El-Mohri, Youcef; Antonuk, Larry E; Koniczek, Martin; Zhao, Qihua; Li, Yixin; Street, Robert A; Lu, Jeng-Ping

    2009-07-01

    Active matrix, flat-panel imagers (AMFPIs) employing a 2D matrix of a-Si addressing TFTs have become ubiquitous in many x-ray imaging applications due to their numerous advantages. However, under conditions of low exposures and/or high spatial resolution, their signal-to-noise performance is constrained by the modest system gain relative to the electronic additive noise. In this article, a strategy for overcoming this limitation through the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, using polycrystalline-silicon (poly-Si) TFTs is reported. Compared to a-Si, poly-Si offers substantially higher mobilities, enabling higher TFT currents and the possibility of sophisticated AP designs based on both n- and p-channel TFTs. Three prototype indirect detection arrays employing poly-Si TFTs and a continuous a-Si photodiode structure were characterized. The prototypes consist of an array (PSI-1) that employs a pixel architecture with a single TFT, as well as two arrays (PSI-2 and PSI-3) that employ AP architectures based on three and five TFTs, respectively. While PSI-1 serves as a reference with a design similar to that of conventional AMFPI arrays, PSI-2 and PSI-3 incorporate additional in-pixel amplification circuitry. Compared to PSI-1, results of x-ray sensitivity demonstrate signal gains of approximately 10.7 and 20.9 for PSI-2 and PSI-3, respectively. These values are in reasonable agreement with design expectations, demonstrating that poly-Si AP circuits can be tailored to provide a desired level of signal gain. PSI-2 exhibits the same high levels of charge trapping as those observed for PSI-1 and other conventional arrays employing a continuous photodiode structure. For PSI-3, charge trapping was found to be significantly lower and largely independent of the bias voltage applied across the photodiode. MTF results indicate that the use of a continuous photodiode structure in PSI-1, PSI-2, and PSI-3 results in optical fill factors that are close to unity. In addition, the greater complexity of PSI-2 and PSI-3 pixel circuits, compared to that of PSI-1, has no observable effect on spatial resolution. Both PSI-2 and PSI-3 exhibit high levels of additive noise, resulting in no net improvement in the signal-to-noise performance of these early prototypes compared to conventional AMFPIs. However, faster readout rates, coupled with implementation of multiple sampling protocols allowed by the nondestructive nature of pixel readout, resulted in a significantly lower noise level of approximately 560 e (rms) for PSI-3.

  11. CMOS image sensor with lateral electric field modulation pixels for fluorescence lifetime imaging with sub-nanosecond time response

    NASA Astrophysics Data System (ADS)

    Li, Zhuo; Seo, Min-Woong; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2016-04-01

    This paper presents the design and implementation of a time-resolved CMOS image sensor with a high-speed lateral electric field modulation (LEFM) gating structure for time domain fluorescence lifetime measurement. Time-windowed signal charge can be transferred from a pinned photodiode (PPD) to a pinned storage diode (PSD) by turning on a pair of transfer gates, which are situated beside the channel. Unwanted signal charge can be drained from the PPD to the drain by turning on another pair of gates. The pixel array contains 512 (V) × 310 (H) pixels with 5.6 × 5.6 µm2 pixel size. The imager chip was fabricated using 0.11 µm CMOS image sensor process technology. The prototype sensor has a time response of 150 ps at 374 nm. The fill factor of the pixels is 5.6%. The usefulness of the prototype sensor is demonstrated for fluorescence lifetime imaging through simulation and measurement results.

  12. Organic Light-Emitting Diode-on-Silicon Pixel Circuit Using the Source Follower Structure with Active Load for Microdisplays

    NASA Astrophysics Data System (ADS)

    Kwak, Bong-Choon; Lim, Han-Sin; Kwon, Oh-Kyong

    2011-03-01

    In this paper, we propose a pixel circuit immune to the electrical characteristic variation of organic light-emitting diodes (OLEDs) for organic light-emitting diode-on-silicon (OLEDoS) microdisplays with a 0.4 inch video graphics array (VGA) resolution and a 6-bit gray scale. The proposed pixel circuit is implemented using five p-channel metal oxide semiconductor field-effect transistors (MOSFETs) and one storage capacitor. The proposed pixel circuit has a source follower with a diode-connected transistor as an active load for improving the immunity against the electrical characteristic variation of OLEDs. The deviation in the measured emission current ranges from -0.165 to 0.212 least significant bit (LSB) among 11 samples while the anode voltage of OLED is 0 V. Also, the deviation in the measured emission current ranges from -0.262 to 0.272 LSB in pixel samples, while the anode voltage of OLED varies from 0 to 2.5 V owing to the electrical characteristic variation of OLEDs.

  13. Pixel-by-Pixel SED Fitting of Intermediate Redshift Galaxies

    NASA Astrophysics Data System (ADS)

    Cohen, Seth H.; Kim, Hwihyun; Petty, Sara M.; Farrah, Duncan

    2015-01-01

    We select intermediate redshift galaxies from the Hubble Space Telescope CANDELS and GOODS surveys to study their stellar populations on sub-kilo-parsec scales by fitting SED models on a pixel-by-pixel basis. Galaxies are chosen to have measured spectroscopic redshifts (z<1.5), to be bright (H_AB<21 mag), to be relatively face-on (b/a > 0.6), and have a minimum of ten individual resolution elements across the face of the galaxy, as defined by the broadest PSF (F160W-band) in the data. The sample contains ~200 galaxies with BViz(Y)JH band HST photometry. The main goal of the study is to better understand the effects of population blending when using a pixel-by-pixel SED fitting (pSED) approach. We outline our pSED fitting method which gives maps of stellar mass, age, star-formation rate, etc. Several examples of individual pSED-fit maps are presented in detail, as well as some preliminary results on the full sample. The pSED method is necessarily biased by the brightest population in a given pixel outshining the rest of the stars, and, therefore, we intend to study this apparent population blending in a set of artificially redshifted images of nearby galaxies, for which we have star-by-star measurements of their stellar populations. This local sample will be used to better interpret the measurements for the higher redshift galaxies.Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This archival research is associated with program #13241.

  14. A digital pixel cell for address event representation image convolution processing

    NASA Astrophysics Data System (ADS)

    Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivative of activities, contrast, motion, edges,...) generate more events per unit time, and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae,... Also, there has been a proposal for realizing programmable kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital implementation reference against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable kernel image convolution processing.
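
    Behaviourally, the digital convolution pixel is an accumulate-and-fire element: each incoming address event adds its kernel weight, and when the accumulator crosses a threshold the pixel emits an output event. The following toy model captures only that behaviour; the threshold, the weights and the subtract-on-fire policy are assumptions, not the chip's circuit.

        class DigitalConvolutionPixel:
            """Behavioural sketch of an event-accumulating AER pixel (not the chip design)."""

            def __init__(self, threshold=64):
                self.threshold = threshold
                self.acc = 0

            def receive_event(self, weight):
                # Each address event adds its (signed) kernel weight to the accumulator.
                self.acc += weight
                if self.acc >= self.threshold:
                    self.acc -= self.threshold
                    return True              # fire an output address event
                return False

        pixel = DigitalConvolutionPixel(threshold=64)
        fired = [pixel.receive_event(w) for w in [20, 20, 20, 20, -10, 60]]
        print(fired)                         # [False, False, False, True, False, True]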

  15. In vivo performance of photovoltaic subretinal prosthesis

    NASA Astrophysics Data System (ADS)

    Mandel, Yossi; Goetz, George; Lavinsky, Daniel; Huie, Phil; Mathieson, Keith; Wang, Lele; Kamins, Theodore; Manivanh, Richard; Harris, James; Palanker, Daniel

    2013-02-01

    We have developed a photovoltaic retinal prosthesis, in which camera-captured images are projected onto the retina using pulsed near-IR light. Each pixel in the subretinal implant directly converts pulsed light into local electric current to stimulate the nearby inner retinal neurons. 30 μm-thick implants with pixel sizes of 280, 140 and 70 μm were successfully implanted in the subretinal space of wild type (WT, Long-Evans) and degenerate (Royal College of Surgeons, RCS) rats. Optical Coherence Tomography and fluorescein angiography demonstrated normal retinal thickness and healthy vasculature above the implants upon 6 months follow-up. Stimulation with NIR pulses over the implant elicited robust visual evoked potentials (VEP) at safe irradiance levels. Thresholds increased with decreasing pulse duration and pixel size: with 10 ms pulses it went from 0.5 mW/mm2 on 280 μm pixels to 1.1 mW/mm2 on 140 μm pixels, to 2.1 mW/mm2 on 70 μm pixels. Latency of the implant-evoked VEP was at least 30 ms shorter than in response evoked by the visible light, due to lack of phototransduction. Like with the visible light stimulation in normal sighted animals, amplitude of the implant-induced VEP increased logarithmically with peak irradiance and pulse duration. It decreased with increasing frequency similar to the visible light response in the range of 2 - 10 Hz, but decreased slower than the visible light response at 20 - 40 Hz. Modular design of the photovoltaic arrays allows scalability to a large number of pixels, and combined with the ease of implantation, offers a promising approach to restoration of sight in patients blinded by retinal degenerative diseases.

  16. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on local statistics of the image produces better results than a linear filter. However, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set had an input image and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set of images. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences in image quality between images processed with the 5 and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between the image quality processed with the 5 and 7-pixel masks at P=0.00528. The identified optimal mask size to produce a good smooth image was found to be 7 pixels. The best mask size for the John-Sen Lee filter was found to be 7×7 pixels, which yielded 99mTc-methylene diphosphonate bone scan images with the highest acceptable smoothness.
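
    For illustration, a generic Lee-type local-statistics filter using the 7×7 mask identified above can be sketched as follows; the noise-variance estimate and the function name are assumptions, not the exact implementation used in the study.

        # Minimal Lee-type (local statistics) filter sketch, assuming additive noise.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def lee_filter(image, mask_size=7, noise_variance=None):
            img = image.astype(float)
            local_mean = uniform_filter(img, size=mask_size)
            local_sq_mean = uniform_filter(img ** 2, size=mask_size)
            local_var = local_sq_mean - local_mean ** 2
            if noise_variance is None:
                # crude global estimate; counting statistics would be better here
                noise_variance = np.mean(local_var)
            signal_var = np.clip(local_var - noise_variance, 0, None)
            weight = signal_var / (signal_var + noise_variance + 1e-12)
            return local_mean + weight * (img - local_mean)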

  17. An LOD with improved breakdown voltage in full-frame CCD devices

    NASA Astrophysics Data System (ADS)

    Banghart, Edmund K.; Stevens, Eric G.; Doan, Hung Q.; Shepherd, John P.; Meisenzahl, Eric J.

    2005-02-01

    In full-frame image sensors, lateral overflow drain (LOD) structures are typically formed along the vertical CCD shift registers to provide a means for preventing charge blooming in the imager pixels. In a conventional LOD structure, the n-type LOD implant is made through the thin gate dielectric stack in the device active area and adjacent to the thick field oxidation that isolates the vertical CCD columns of the imager. In this paper, a novel LOD structure is described in which the n-type LOD impurities are placed directly under the field oxidation and are, therefore, electrically isolated from the gate electrodes. By reducing the electrical fields that cause breakdown at the silicon surface, this new structure permits a larger amount of n-type impurities to be implanted for the purpose of increasing the LOD conductivity. As a consequence of the improved conductance, the LOD width can be significantly reduced, enabling the design of higher resolution imaging arrays without sacrificing charge capacity in the pixels. Numerical simulations with MEDICI of the LOD leakage current are presented that identify the breakdown mechanism, while three-dimensional solutions to Poisson's equation are used to determine the charge capacity as a function of pixel dimension.

  18. Image size invariant visual cryptography for general access structures subject to display quality constraints.

    PubMed

    Lee, Kai-Hui; Chiu, Pei-Ling

    2013-10-01

    Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that reported in previous papers.

  19. The structure of the mitotic spindle and nucleolus during mitosis in the amebo-flagellate Naegleria.

    PubMed

    Walsh, Charles J

    2012-01-01

    Mitosis in the amebo-flagellate Naegleria pringsheimi is acentrosomal and closed (the nuclear membrane does not break down). The large central nucleolus, which occupies about 20% of the nuclear volume, persists throughout the cell cycle. At mitosis, the nucleolus divides and moves to the poles in association with the chromosomes. The structure of the mitotic spindle and its relationship to the nucleolus are unknown. To identify the origin and structure of the mitotic spindle and its relationship to the nucleolus, and to further understand the influence of persistent nucleoli on cellular division in acentriolar organisms like Naegleria, three-dimensional reconstructions of the mitotic spindle and nucleolus were carried out using confocal microscopy. Monoclonal antibodies against three different nucleolar regions and α-tubulin were used to image the nucleolus and mitotic spindle. Microtubules were restricted to the nucleolus beginning with the earliest prophase spindle microtubules. Early spindle microtubules were seen as short rods on the surface of the nucleolus. Elongation of the spindle microtubules resulted in a rough cage of microtubules surrounding the nucleolus. At metaphase, the mitotic spindle formed a broad band completely embedded within the nucleolus. The nucleolus separated into two discrete masses connected by a dense band of microtubules as the spindle elongated. At telophase, the distal ends of the mitotic spindle were still completely embedded within the daughter nucleoli. Pixel-by-pixel comparison of tubulin and nucleolar protein fluorescence showed 70% or more of tubulin co-localized with nucleolar proteins by early prophase. These observations suggest a model in which specific nucleolar binding sites for microtubules allow mitotic spindle formation and attachment. The fact that a significant mass of nucleolar material precedes the chromosomes as the mitotic spindle elongates suggests that spindle elongation drives nucleolar division.

  20. Spiking cortical model based non-local means method for despeckling multiframe optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Gu, Yameng; Zhang, Xuming

    2017-05-01

    Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
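
    As a point of reference, the generic non-local-means restoration step looks like the sketch below; in the proposed method the distance would be computed between SCM-derived NMI feature vectors rather than raw patches, and the function and argument names here are illustrative assumptions.

        # Generic NLM weighted average over candidate pixels in the search window.
        import numpy as np

        def nlm_restore(pixel_values, feature_vectors, center_index, h):
            """pixel_values: (M,) candidate pixels gathered from the three frames;
            feature_vectors: (M, F) one feature vector per candidate patch;
            h: smoothing parameter controlling the weight decay."""
            d2 = np.sum((feature_vectors - feature_vectors[center_index]) ** 2, axis=1)
            w = np.exp(-d2 / (h ** 2))                 # similarity weight per candidate
            return np.sum(w * pixel_values) / np.sum(w)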

  1. Exploring the Hidden Structure of Astronomical Images: A "Pixelated" View of Solar System and Deep Space Features!

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Sienkiewicz, Frank; Sadler, Philip; Antonucci, Paul; Miller, Jaimie

    2013-01-01

    We describe activities created to help student participants in Project ITEAMS (Innovative Technology-Enabled Astronomy for Middle Schools) develop a deeper understanding of picture elements (pixels), image creation, and analysis of the recorded data. ITEAMS is an out-of-school time (OST) program funded by the National Science Foundation (NSF) with…

  2. ACS Internal Flat Fields

    NASA Astrophysics Data System (ADS)

    Borncamp, David

    2017-08-01

    The stability of the CCD flat fields will be monitored using the calibration lamps. One set of observations for all the filters and another at a different epoch for a subset of filters will be taken during this cycle. High signal observations will be used to assess the stability of the pixel-to-pixel flat field structure and to monitor the position of the dust motes.

  3. ACS Internal Flat Fields

    NASA Astrophysics Data System (ADS)

    Borncamp, David

    2016-10-01

    The stability of the CCD flat fields will be monitored using the calibration lamps. One set of observations for all the filters and another at a different epoch for a subset of filters will be taken during this cycle. High signal observations will be used to assess the stability of the pixel-to-pixel flat field structure and to monitor the position of the dust motes.

  4. Bioinspired architecture approach for a one-billion transistor smart CMOS camera chip

    NASA Astrophysics Data System (ADS)

    Fey, Dietmar; Komann, Marcus

    2007-05-01

    In this paper we present a massively parallel VLSI architecture for future smart CMOS camera chips with up to one billion transistors. Traditional parallel architectures oriented towards central structures and based on MIMD or SIMD approaches will fail to exploit efficiently the potential offered by future micro- or nanoelectronic devices. They require too long and too many global interconnects for the distribution of code or the access to common memory. On the other hand, nature has developed self-organising and emergent principles to successfully manage complex structures built from many interacting simple elements. We therefore developed a new emergent computing paradigm, denoted Marching Pixels, based on a mixture of bio-inspired computing models such as cellular automata and artificial ants. In the paper we present different Marching Pixels algorithms and the corresponding VLSI array architecture. A detailed synthesis result for a 0.18 μm CMOS process shows that a 256×256 pixel image is processed in less than 10 ms assuming a moderate 100 MHz clock rate for the processor array. Future higher integration densities and 3D chip stacking technology will allow the integration and processing of megapixel images within the same time, since our architecture is fully scalable.

  5. A Motion-Based Feature for Event-Based Pattern Recognition

    PubMed Central

    Clady, Xavier; Maro, Jean-Matthieu; Barré, Sébastien; Benosman, Ryad B.

    2017-01-01

    This paper introduces an event-based luminance-free feature derived from the output of asynchronous event-based neuromorphic retinas. The feature consists of mapping the distribution of the optical flow along the contours of the moving objects in the visual scene into a matrix. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating “spiking” events that encode relative changes in pixel illumination at high temporal resolutions. The optical flow is computed at each event, and is integrated locally or globally in a grid based on a speed-and-direction coordinate frame, using speed-tuned temporal kernels. The latter ensures that the resulting feature equitably represents the distribution of the normal motion along the current moving edges, whatever their respective dynamics. The usefulness and the generality of the proposed feature are demonstrated in pattern recognition applications: local corner detection and global gesture recognition. PMID:28101001

  6. Reliable two-dimensional phase unwrapping method using region growing and local linear estimation.

    PubMed

    Zhou, Kun; Zaitsev, Maxim; Bao, Shanglian

    2009-10-01

    In MRI, phase maps can provide useful information about parameters such as field inhomogeneity, velocity of blood flow, and the chemical shift between water and fat. As phase is defined in the (-π, π] range, however, phase wraps often occur, which complicates image analysis and interpretation. This work presents a two-dimensional phase unwrapping algorithm that uses quality-guided region growing and local linear estimation. The quality map employs the variance of the second-order partial derivatives of the phase as the quality criterion. Phase information from unwrapped neighboring pixels is used to predict the correct phase of the current pixel using a linear regression method. The algorithm was tested on both simulated and real data, and is shown to successfully unwrap phase images that are corrupted by noise and have rapidly changing phase. © 2009 Wiley-Liss, Inc.
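
    A minimal numpy sketch of the quality criterion named above (local variance of the wrapped second-order phase differences) is given below; the window size and the way the x and y contributions are combined are assumptions.

        # Quality map for quality-guided region growing: low values mark reliable
        # pixels, which would be unwrapped first.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_variance(a, window):
            m = uniform_filter(a, size=window)
            return uniform_filter(a ** 2, size=window) - m ** 2

        def wrap(p):
            return (p + np.pi) % (2 * np.pi) - np.pi

        def quality_map(phase, window=3):
            # wrapped second-order differences in x and y
            d2x = wrap(phase[:, 2:] - 2 * phase[:, 1:-1] + phase[:, :-2])
            d2y = wrap(phase[2:, :] - 2 * phase[1:-1, :] + phase[:-2, :])
            q = np.zeros_like(phase)
            q[:, 1:-1] += local_variance(d2x, window)
            q[1:-1, :] += local_variance(d2y, window)
            return q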

  7. Adaptive Optics Images of the Galactic Center: Using Empirical Noise-maps to Optimize Image Analysis

    NASA Astrophysics Data System (ADS)

    Albers, Saundra; Witzel, Gunther; Meyer, Leo; Sitarski, Breann; Boehle, Anna; Ghez, Andrea M.

    2015-01-01

    Adaptive Optics images are one of the most important tools in studying our Galactic Center. In-depth knowledge of the noise characteristics is crucial to optimally analyze this data. Empirical noise estimates - often represented by a constant value for the entire image - can be greatly improved by computing the local detector properties and photon noise contributions pixel by pixel. To comprehensively determine the noise, we create a noise model for each image using the three main contributors—photon noise of stellar sources, sky noise, and dark noise. We propagate the uncertainties through all reduction steps and analyze the resulting map using Starfinder. The estimation of local noise properties helps to eliminate fake detections while improving the detection limit of fainter sources. We predict that a rigorous understanding of noise allows a more robust investigation of the stellar dynamics in the center of our Galaxy.
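
    A simplified per-pixel noise model combining the three contributors listed above might look like the following; the gain, the dark-noise value and the unit conventions are placeholders for illustration, not the actual pipeline.

        # Per-pixel 1-sigma noise estimate: Poisson noise from the source and sky
        # plus a Gaussian detector (dark/read) term, all expressed in electrons.
        import numpy as np

        def noise_map(image_adu, sky_adu, gain_e_per_adu=1.0, dark_rms_e=10.0):
            source_e = np.clip(image_adu - sky_adu, 0, None) * gain_e_per_adu
            sky_e = np.clip(sky_adu, 0, None) * gain_e_per_adu
            variance = source_e + sky_e + dark_rms_e ** 2
            return np.sqrt(variance)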

  8. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    NASA Astrophysics Data System (ADS)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization. The parameter λ can adjust the weights of the pixel local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different experimental results show that the novel fuzzy C-means approach achieves efficient performance and computational time when segmenting images corrupted by different types of noise.

  9. Visibility enhancement of color images using Type-II fuzzy membership function

    NASA Astrophysics Data System (ADS)

    Singh, Harmandeep; Khehra, Baljit Singh

    2018-04-01

    Images taken in poor environmental conditions decrease the visibility and hidden information of digital images. Therefore, image enhancement techniques are necessary for improving the significant details of these images. An extensive review has shown that histogram-based enhancement techniques greatly suffer from over/under enhancement issues. Fuzzy-based enhancement techniques suffer from over/under saturated pixels problems. In this paper, a novel Type-II fuzzy-based image enhancement technique has been proposed for improving the visibility of images. The Type-II fuzzy logic can automatically extract the local atmospheric light and roughly eliminate the atmospheric veil in local detail enhancement. The proposed technique has been evaluated on 10 well-known weather degraded color images and is also compared with four well-known existing image enhancement techniques. The experimental results reveal that the proposed technique outperforms others regarding visible edge ratio, color gradients and number of saturated pixels.

  10. Investigation of SIS Up-Converters for Use in Multi-pixel Receivers

    NASA Astrophysics Data System (ADS)

    Uzawa, Yoshinori; Kojima, Takafumi; Shan, Wenlei; Gonzalez, Alvaro; Kroug, Matthias

    2018-02-01

    We propose the use of SIS junctions as a frequency up-converter based on quasiparticle mixing in frequency division multiplexing circuits for multi-pixel heterodyne receivers. Our theoretical calculation showed that SIS junctions have the potential to achieve positive gain and low-noise characteristics in the frequency up-conversion process at local oscillator (LO) frequencies larger than the voltage scale of the dc nonlinearity of the SIS junction. We experimentally observed up-conversion gain in a mixer with four-series Nb-based SIS junctions at the LO frequency of 105 GHz for the first time.

  11. Real-Time Symbol Extraction From Grey-Level Images

    NASA Astrophysics Data System (ADS)

    Massen, R.; Simnacher, M.; Rosch, J.; Herre, E.; Wuhrer, H. W.

    1988-04-01

    A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real time is presented. This 3-giga-operations-per-second processor uses large-kernel convolvers and new non-linear neighbourhood processing algorithms to compute true 1-pixel-wide, noise-free contours without thresholding, even from grey-level images with quite varying edge sharpness. The local edge orientation is used as an additional cue to compute a list of vectors describing the closed and open contours in real time and to dump a CAD-like symbolic image description into a symbol memory at pixel clock rate.

  12. Harbour surveillance with cameras calibrated with AIS data

    NASA Astrophysics Data System (ADS)

    Palmieri, F. A. N.; Castaldo, F.; Marino, G.

    The inexpensive availability of surveillance cameras, easily connected in network configurations, suggests the deployment of this additional sensor modality in port surveillance. Vessels appearing within the cameras' fields of view can be recognized and localized, providing fusion centers with information that can be added to data coming from Radar, Lidar, AIS, etc. Camera systems that are used as localizers, however, must be properly calibrated in changing scenarios where there is often limited choice of the position in which they are deployed. Automatic Identification System (AIS) data, which include position, course and vessel identity and are freely available through inexpensive receivers for some of the vessels appearing within the field of view, provide the opportunity to achieve proper camera calibration to be used for the localization of vessels not equipped with AIS transponders. In this paper we assume a pinhole model for the camera geometry and propose perspective matrix computation using AIS positional data. Images obtained from calibrated cameras are then matched and pixel association is utilized for the localization of other vessels. We report preliminary experimental results of calibration and localization using two cameras deployed on the Gulf of Naples coastline. The two cameras overlook a section of the harbour and record short video sequences that are synchronized offline with AIS positional information of easily identified passenger ships. Other small vessels, not equipped with AIS transponders, are localized using the camera matrices and pixel matching. Localization accuracy is experimentally evaluated as a function of target distance from the sensors.
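
    One standard way to compute a perspective (projection) matrix from AIS-derived 3D positions and their pixel coordinates is the direct linear transform, sketched below; this is an assumed formulation, not necessarily the authors' exact calibration procedure.

        # Direct linear transform: estimate the 3x4 matrix P from 3D-2D point pairs,
        # then reuse P to project other targets into the image.
        import numpy as np

        def estimate_projection_matrix(world_pts, image_pts):
            """world_pts: (N, 3) metric coordinates; image_pts: (N, 2) pixels; N >= 6."""
            rows = []
            for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
                rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
                rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
            A = np.asarray(rows)
            _, _, vt = np.linalg.svd(A)
            return vt[-1].reshape(3, 4)      # P, defined up to scale

        def project(P, world_pt):
            x = P @ np.append(world_pt, 1.0)
            return x[:2] / x[2]              # pixel coordinates (u, v)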

  13. Contrast-guided image interpolation.

    PubMed

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

  14. Direct reading of charge multipliers with a self-triggering CMOS analog chip with 105 k pixels at 50 μm pitch

    NASA Astrophysics Data System (ADS)

    Bellazzini, R.; Spandre, G.; Minuti, M.; Baldini, L.; Brez, A.; Cavalca, F.; Latronico, L.; Omodei, N.; Massai, M. M.; Sgro', C.; Costa, E.; Soffitta, P.; Krummenacher, F.; de Oliveira, R.

    2006-10-01

    We report on a large area (15×15 mm2), high channel density (470 pixel/mm2), self-triggering CMOS analog chip that we have developed as a pixelized charge collecting electrode of a Micropattern Gas Detector. This device represents a big step forward both in terms of size and performance, and is in fact the latest version of three generations of custom ASICs of increasing complexity. The top metal layer of the CMOS pixel array is patterned in a matrix of 105,600 hexagonal pixels with a 50 μm pitch. Each pixel is directly connected to the underlying full electronics chain, which has been realized in the remaining five metal and single poly-silicon layers of a 0.18 μm VLSI technology. The chip, which has customizable self-triggering capabilities, also includes a signal pre-processing function for the automatic localization of the event coordinates. Thanks to these advances it is possible to significantly reduce the read-out time and the data volume by limiting the signal output only to those pixels belonging to the region of interest. In addition to the reduced read-out time and data volume, the very small pixel area and the use of a deep sub-micron CMOS technology have allowed bringing the noise down to 50 electrons ENC. Results from in-depth tests of this device when coupled to a fine-pitch (50 μm on a triangular pattern) Gas Electron Multiplier are presented. It was found that matching the read-out and gas amplification pitch yields optimal results. The experimental detector response to polarized and unpolarized X-ray radiation when working with two gas mixtures and two different photon energies is shown, and the application of this detector for Astronomical X-ray Polarimetry is discussed. Results from a full Monte-Carlo simulation for several galactic and extragalactic astronomical sources are also reported.

  15. Seismic-zonation of Port-au-Prince using pixel- and object-based imaging analysis methods on ASTER GDEM

    USGS Publications Warehouse

    Yong, A.; Hough, S.E.; Cox, B.R.; Rathje, E.M.; Bachhuber, J.; Dulberg, R.; Hulslander, D.; Christiansen, L.; Abrams, M.J.

    2011-01-01

    We report on a preliminary study to evaluate the use of semi-automated imaging analysis of remotely sensed DEM and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, Vs30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods on the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available Vs30 values. A comparison of results from imagery-based methods to results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful to refine detailed site maps with sparse local data. © 2011 American Society for Photogrammetry and Remote Sensing.
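
    The Vs30-to-site-class assignment mentioned above follows the standard NEHRP boundaries, which can be written directly; the terrain-class-to-Vs30 mapping used in the study itself is not reproduced here.

        # Standard NEHRP site classes from Vs30 (m/s).
        def nehrp_site_class(vs30):
            if vs30 > 1500:
                return "A"   # hard rock
            if vs30 > 760:
                return "B"   # rock
            if vs30 > 360:
                return "C"   # very dense soil / soft rock
            if vs30 >= 180:
                return "D"   # stiff soil
            return "E"       # soft clay soil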

  16. Hi-fidelity multi-scale local processing for visually optimized far-infrared Herschel images

    NASA Astrophysics Data System (ADS)

    Li Causi, G.; Schisano, E.; Liu, S. J.; Molinari, S.; Di Giorgio, A.

    2016-07-01

    In the context of the "Hi-Gal" multi-band full-plane mapping program for the Galactic Plane, as imaged by the Herschel far-infrared satellite, we have developed a semi-automatic tool which produces high definition, high quality color maps optimized for visual perception of extended features, like bubbles and filaments, against the high background variations. We project the map tiles of three selected bands onto a 3-channel panorama, which spans the central 130 degrees of galactic longitude times 2.8 degrees of galactic latitude, at the pixel scale of 3.2", in cartesian galactic coordinates. Then we process this image piecewise, applying a custom multi-scale local stretching algorithm, enforced by a local multi-scale color balance. Finally, we apply an edge-preserving contrast enhancement to perform an artifact-free details sharpening. Thanks to this tool, we have thus produced a stunning giga-pixel color image of the far-infrared Galactic Plane that we made publicly available with the recent release of the Hi-Gal mosaics and compact source catalog.

  17. Fully 3D-Integrated Pixel Detectors for X-Rays

    DOE PAGES

    Deptuch, Grzegorz W.; Gabriella, Carini; Enquist, Paul; ...

    2016-01-01

    The vertically integrated photon imaging chip (VIPIC1) pixel detector is a stack consisting of a 500-μm-thick silicon sensor, a two-tier 34-μm-thick integrated circuit, and a host printed circuit board (PCB). The integrated circuit tiers were bonded using direct bonding technology with copper, and each tier features 1-μm-diameter through-silicon vias that were used for connections to the sensor on one side and to the host PCB on the other side. The 80-μm-pixel-pitch sensor was bonded to the integrated circuit using direct bonding technology with nickel. The stack was mounted on the board using Sn–Pb balls placed on a 320-μm pitch, yielding an entirely wire-bond-less structure. The analog front-end features a pulse response peaking at below 250 ns, and the power consumption per pixel is 25 μW. We successfully completed the 3-D integration, as reported here. Additionally, all pixels in the 64 × 64 matrix were responding on well-bonded devices. Correct operation of the sparsified readout, allowing a single 153-ns bunch timing resolution, was confirmed in tests on a synchrotron beam of 10-keV X-rays. An equivalent noise charge of 36.2 e- rms and a conversion gain of 69.5 μV/e-, with 2.6 e- rms and 2.7 μV/e- rms pixel-to-pixel variations, respectively, were measured.

  18. Observing Bridge Dynamic Deflection in Green Time by Information Technology

    NASA Astrophysics Data System (ADS)

    Yu, Chengxin; Zhang, Guojian; Zhao, Yongqian; Chen, Mingzhi

    2018-01-01

    As traditional surveying methods are limited in observing bridge dynamic deflection, information technology is adopted to observe bridge dynamic deflection in green time. Information technology in this study means that digital cameras photograph the bridge in red time to obtain a zero image. Then, a series of successive images is photographed in green time. Deformation point targets are identified and located by the Hough transform. With reference to the control points, the deformation values of these deformation points are obtained by differencing the successive images with the zero image. Results show that the average measurement accuracies of C0 are 0.46 pixels, 0.51 pixels and 0.74 pixels in the X, Z and comprehensive directions. The average measurement accuracies of C1 are 0.43 pixels, 0.43 pixels and 0.67 pixels in the X, Z and comprehensive directions in these tests. The maximal bridge deflection is 44.16 mm, which is less than 75 mm (the bridge deflection tolerance value). The information technology presented in this paper can monitor bridge dynamic deflection and depict deflection trend curves of the bridge in real time. It can provide data support for on-site decisions concerning bridge structural safety.
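
    A hedged sketch of the differencing step: locate a target by intensity-weighted centroid in the zero (red-time) image and in a green-time image, then convert the pixel shift to millimetres with an assumed scale factor. The Hough-transform target detection used in the paper is replaced by simple thresholding here, so names and parameters are illustrative only.

        import numpy as np

        def target_centroid(gray, threshold):
            ys, xs = np.nonzero(gray > threshold)          # pixels belonging to the target
            w = gray[ys, xs].astype(float)
            return np.array([np.sum(xs * w), np.sum(ys * w)]) / np.sum(w)

        def deflection_mm(zero_img, green_img, threshold, mm_per_pixel):
            shift_px = target_centroid(green_img, threshold) - target_centroid(zero_img, threshold)
            return shift_px * mm_per_pixel                 # (horizontal, vertical) deflection in mm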

  19. Advanced Demonstration of Motion Correction for Ship-to-Ship Passive Inspections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziock, Klaus-Peter; Boehnen, Chris Bensing; Ernst, Joseph

    2013-09-30

    Passive radiation detection is a key tool for detecting illicit nuclear materials. In maritime applications it is most effective against small vessels, where attenuation is of less concern. Passive imaging provides: discrimination between localized (threat) and distributed (non-threat) sources, removal of background fluctuations due to nearby shorelines and structures, source localization to an individual craft in crowded waters, and background-subtracted spectra. Unfortunately, imaging methods cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing sensitivity. This is particularly true for the smaller watercraft where passive inspections are most valuable. In this project we performed tests and improved the performance of an instrument (developed earlier under "Motion Correction for Ship-to-Ship Passive Inspections") that uses automated tracking of a target vessel in visible-light images to generate a 3D radiation map of the target vessel from data obtained using a gamma-ray imager.

  20. Smart CMOS image sensor for lightning detection and imaging.

    PubMed

    Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor

    2013-03-01

    We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed within the framework of the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing allowing efficient localization of a faint lightning pulse on the entire large-format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.
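
    The in-pixel detection principle (frame-to-frame difference compared against an adjustable threshold) can be modelled behaviourally as below; this is only an illustrative software model of the on-chip comparator, not the circuit.

        # Flag pixels whose brightness increased by more than the threshold
        # between two consecutive frames, returning their addresses.
        import numpy as np

        def detect_lightning(prev_frame, curr_frame, threshold):
            diff = curr_frame.astype(int) - prev_frame.astype(int)
            hits = diff > threshold              # per-pixel comparator decision
            return np.argwhere(hits)             # (row, col) addresses reported off-chip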

  1. Early breast tumor and late SARS detections using space-variant multispectral infrared imaging at a single pixel

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Buss, James R.; Kopriva, Ivica

    2004-04-01

    We proposed a physics approach to solve a physical inverse problem, namely to choose the unique equilibrium solution at the minimum free energy H = E - T0S, which includes the Wiener (least-mean-squares, min E) and ICA (max S) solutions as special cases. The "unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data alone, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing of a single pixel in the real-world cases of remote sensing, early tumor detection and SARS. The indeterminacy among the multiple solutions of the inverse problem is regulated or selected by means of the absolute minimum of isothermal free energy, as the ground truth of the local equilibrium condition at the single-pixel footprint.

  2. A Different Way to Visualize Solar Changes

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-07-01

    This time series of SDO images of an active region shows coronal dimming as well as flares. These images can be combined into a minimum-value persistence map (bottom panel) that better reveals the entire dimming region. [Adapted from Thompson & Young 2016] What if there were a better way to analyze a comet's tail, the dimming of the Sun's surface, or the path of material in a bright solar eruption? A recent study examines a new technique for looking at these evolving features.

    Mapping Evolving Features. Sometimes interesting advances in astronomy come from simple, creative new approaches to analyzing old data. Such is the case in a new study by Barbara Thompson and Alex Young (NASA Goddard Space Flight Center), which introduces a technique called persistence mapping to better examine solar phenomena whose dynamic natures make them difficult to analyze. What is a persistence map? Suppose you have a set of N images of the same spatial region, with each image taken at a different time. To create a persistence map of these images, you would combine this set of images by retaining only the most extreme (for example, the maximum) value for each pixel, throwing away the remaining N-1 values for each pixel. Persistence mapping is especially useful for bringing out rare or intermittent features that would often be washed out if the images were combined in a sum or average instead. Thompson and Young describe three example cases where persistence mapping brings something new to the table. [Figure caption: Top: single SDO image of Comet Lovejoy. Center: 17 minutes of SDO images combined in a persistence map; the structure of the tail is now clearly visible. Bottom: for comparison, the average pixel value for this sequence of images. Thompson & Young 2016]

    A Comet's Tail. As Comet Lovejoy passed through the solar corona in 2011, solar physicists analyzed extreme-ultraviolet images of its tail because the motion of the tail particles reveals information about the local coronal magnetic field. Past analyses have averaged or summed images of the comet in orbit to examine its tail. But a persistence map of the maximum pixel values far more clearly shows the striations within the tail that reveal the directions of the local magnetic field lines.

    Dimming of the Sun. Dimming of the Sun's corona near active regions tells us about the material that's evacuated during coronal mass ejections. This process can be complex: regions dim at different times, and flares sometimes hide the dimming, making it difficult to observe. But understanding the entire dimming region is necessary to infer the total mass loss and complete magnetic footprint of a gradual eruption from the Sun's surface. [Figure caption: SDO and STEREO-A images of a prominence eruption; tracking the falling material is difficult due to the complex background. Thompson & Young 2016] Creating a persistence map of minimum pixel values achieves this and also neatly sidesteps the problem of flares hiding the dimming regions, since the bright pixels are discarded. In the authors' example, a persistence map estimates 50% more mass loss for a coronal dimming event than the traditional image analysis method, and it reveals connections between dimming regions that were previously missed.

    An Erupting Prominence. The authors' final example is of falling prominence material after a solar eruption, seen in absorption against the bright corona. They show that you can construct a persistence map of minimum pixel values over the time the material falls (see the cover image), allowing the material's paths to be tracked despite the evolving background behind it. Tracing these trajectories provides information about the local magnetic field. Thompson and Young's examples indicate that persistence mapping clearly provides new information in some cases of intermittent or slowly evolving solar phenomena. It will be interesting to see where else this technique can be applied!

    Citation: B. J. Thompson and C. A. Young 2016 ApJ 825 27. doi:10.3847/0004-637X/825/1/27
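
    As described above, persistence mapping reduces to keeping the per-pixel extreme over a stack of co-registered images; a minimal numpy sketch:

        # Persistence map: retain only the extreme value of each pixel across N images.
        import numpy as np

        def persistence_map(stack, kind="max"):
            """stack: array of shape (N, H, W) holding N co-registered images."""
            return np.max(stack, axis=0) if kind == "max" else np.min(stack, axis=0)

        # A max-value map brings out the comet tail; a min-value map reveals coronal dimming.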

  3. On the possibility to use semiconductive hybrid pixel detectors for study of radiation belt of the Earth.

    NASA Astrophysics Data System (ADS)

    Guskov, A.; Shelkov, G.; Smolyanskiy, P.; Zhemchugov, A.

    2016-02-01

    The scientific apparatus GAMMA-400, designed for the study of the electromagnetic and hadron components of cosmic rays, will be launched into an elliptic orbit with an apogee of about 300 000 km and a perigee of about 500 km. Such a configuration of the orbit allows it to periodically cross the radiation belt and the outer part of the magnetosphere. We discuss the possibility of using hybrid pixel detectors based on the Timepix chip and semiconductive sensors on board the GAMMA-400 apparatus. Due to the high granularity of the sensor (55 μm pixel size) and the possibility of independently measuring the energy deposition in each pixel, such a compact and lightweight detector could be a unique instrument for studying the spatial, energy and time structure of the electron and proton components of the radiation belt.

  4. A 4MP high-dynamic-range, low-noise CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Ma, Cheng; Liu, Yang; Li, Jing; Zhou, Quan; Chang, Yuchun; Wang, Xinyang

    2015-03-01

    In this paper we present a 4-megapixel, high-dynamic-range, low-dark-noise and low-dark-current CMOS image sensor, which is ideal for high-end scientific and surveillance applications. The pixel design is based on a 4-T PPD structure. During readout of the pixel array, signals are first amplified and then fed to a low-power column-parallel ADC array, which was already presented in [1]. Measurement results show that the sensor achieves a dynamic range of 96 dB and a dark noise of 1.47 e- at 24 fps. The dark current is 0.15 e-/pixel/s at -20°C.

  5. Detection of thoracic vascular structures by electrical impedance tomography: a systematic assessment of prominence peak analysis of impedance changes.

    PubMed

    Wodack, K H; Buehler, S; Nishimoto, S A; Graessler, M F; Behem, C R; Waldmann, A D; Mueller, B; Böhm, S H; Kaniusas, E; Thürk, F; Maerz, A; Trepte, C J C; Reuter, D A

    2018-02-28

    Electrical impedance tomography (EIT) is a non-invasive and radiation-free bedside monitoring technology, primarily used to monitor lung function. First experimental data show that the descending aorta can be detected at different thoracic heights and might allow the assessment of central hemodynamics, i.e. stroke volume and pulse transit time. First, the feasibility of localizing small non-conductive objects within a saline phantom model was evaluated. Second, this result was utilized for the detection of the aorta by EIT in ten anesthetized pigs, with comparison to thoracic computed tomography (CT). Two EIT belts were placed at different thoracic positions and a bolus of hypertonic saline (10 ml, 20%) was administered into the ascending aorta while EIT data were recorded. EIT images were reconstructed using the GREIT model, based on the individual's thoracic contours. The resulting EIT images were analyzed pixel by pixel to identify the aortic pixel, in which the bolus caused the highest transient impedance peak in time. In the phantom, small objects could be located at each position with a maximal deviation of 0.71 cm. In vivo, no significant differences between the aorta position measured by EIT and the anatomical aorta location were obtained for both measurement planes if the search was restricted to the dorsal thoracic regions of interest (ROIs). It is possible to detect the descending aorta at different thoracic levels by EIT using an intra-aortic bolus of hypertonic saline. No significant differences in the position of the descending aorta on EIT images compared to CT images were obtained for both EIT belts.
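
    A hedged sketch of the pixel-wise peak analysis described above: scan each pixel's impedance-change time course inside a dorsal region of interest and keep the pixel with the most prominent bolus-induced peak. The sign convention and the use of scipy's peak prominence are assumptions, not the authors' exact analysis.

        import numpy as np
        from scipy.signal import find_peaks

        def find_aorta_pixel(eit_series, roi_mask):
            """eit_series: (T, H, W) impedance-change time courses; roi_mask: (H, W) boolean ROI.
            Returns the (row, col) of the pixel with the most prominent transient peak."""
            best_pixel, best_prominence = None, -np.inf
            for r, c in np.argwhere(roi_mask):
                signal = eit_series[:, r, c]     # flip the sign here if the bolus lowers impedance
                peaks, props = find_peaks(signal, prominence=0)
                if peaks.size and props["prominences"].max() > best_prominence:
                    best_prominence = props["prominences"].max()
                    best_pixel = (r, c)
            return best_pixel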

  6. Imaging tissues for biomedical research using the high-resolution micro-tomography system nanotom® m

    NASA Astrophysics Data System (ADS)

    Deyhle, Hans; Schulz, Georg; Khimchenko, Anna; Bikis, Christos; Hieber, Simone E.; Jaquiery, Claude; Kunz, Christoph; Müller-Gerbl, Magdalena; Höchel, Sebastian; Saxer, Till; Stalder, Anja K.; Ilgenstein, Bernd; Beckmann, Felix; Thalmann, Peter; Buscema, Marzia; Rohr, Nadja; Holme, Margaret N.; Müller, Bert

    2016-10-01

    Micro computed tomography (μCT) is well established in virtually all fields of biomedical research, allowing for the non-destructive volumetric visualization of tissue morphology. A variety of specimens can be investigated, ranging from soft to hard tissue to engineered structures like scaffolds. Similarly, the size of the objects of interest ranges from a fraction of a millimeter to several tens of centimeters. While synchrotron radiation-based μCT still offers unrivaled data quality, the ever-improving technology of X-ray tube-based machines offers a valuable and more accessible alternative. The Biomaterials Science Center of the University of Basel operates a nanotom® m (phoenix|x-ray, GE Sensing and Inspection Technologies GmbH, Wunstorf, Germany), with a 180 kV source and a minimal spot size of about 0.9 μm. Through the adjustable focus-specimen and focus-detector distances, the effective pixel size can be adjusted from below 500 nm to about 80 μm. On the high-resolution side, it is for example possible to visualize the tubular network in sub-millimeter-thin dentin specimens. It is then possible to locally extract parameters such as tubule diameter, density, or alignment, giving information on cell movements during tooth formation. On the other side, with a horizontal shift of the 3,072 × 2,400 pixel detector, specimens up to 35 cm in diameter can be scanned. It is possible, for example, to scan an entire human knee, albeit with inferior resolution. Lab-source μCT machines are thus a powerful and flexible tool for the advancement of biomedical research, and a valuable and more accessible alternative to synchrotron radiation facilities.
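
    The adjustable effective pixel size follows from the usual cone-beam geometric magnification; a small sketch, in which the detector pitch value is an assumed placeholder rather than the instrument's specification:

        # Effective pixel size = detector pitch / geometric magnification,
        # with magnification = focus-detector distance / focus-object distance.
        def effective_pixel_size(focus_object_dist_mm, focus_detector_dist_mm,
                                 detector_pitch_um=100.0):
            magnification = focus_detector_dist_mm / focus_object_dist_mm
            return detector_pitch_um / magnification   # micrometres at the object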

  7. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation

    DOE PAGES

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull; ...

    2016-01-28

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz has been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout, which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. Lastly, we detail the characteristics, operation, testing and application of the detector.

  8. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation

    PubMed Central

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull; Shanks, Katherine S.; Weiss, Joel T.; Gruner, Sol M.

    2016-01-01

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz has been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout, which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. The characteristics, operation, testing and application of the detector are detailed. PMID:26917125

  9. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation.

    PubMed

    Philipp, Hugh T; Tate, Mark W; Purohit, Prafull; Shanks, Katherine S; Weiss, Joel T; Gruner, Sol M

    2016-03-01

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz has been experimentally verified. The pixel design allows for up to 8-12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout, which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10-100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. The characteristics, operation, testing and application of the detector are detailed.

  10. Large Area Cd0.9Zn0.1Te Pixelated Detector: Fabrication and Characterization

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Sandeep K.; Nguyen, Khai; Pak, Rahmi O.; Matei, Liviu; Buliga, Vladimir; Groza, Michael; Burger, Arnold; Mandal, Krishna C.

    2014-04-01

    Cd0.9Zn0.1Te (CZT) based pixelated radiation detectors have been fabricated and characterized for gamma-ray detection. Large-area CZT single crystals have been grown using a tellurium solvent method. A 10 × 10 guarded pixelated detector has been fabricated on a 19.5 × 19.5 × 5 mm3 crystal cut from the grown ingot. The pixel dimensions were 1.3 × 1.3 mm2, with a 1.8 mm pitch. A guard grid was used to reduce interpixel/inter-electrode leakage. The crystal was characterized in planar configuration using electrical, optical and optoelectronic methods prior to the fabrication of the pixelated geometry. Current-voltage (I-V) measurements revealed a leakage current of 27 nA at an operating bias voltage of 1000 V and a resistivity of 3.1 × 10¹⁰ Ω-cm. Infrared transmission imaging revealed an average tellurium inclusion/precipitate size of less than 8 μm. Pockels measurements revealed a near-uniform depth-wise distribution of the internal electric field. The mobility-lifetime product in this crystal was calculated to be 6.2 × 10⁻³ cm2/V using an alpha-ray spectroscopic method. Gamma spectroscopy using a 137Cs source on the pixelated structure showed fully resolved 662 keV gamma peaks for all the pixels, with a percentage resolution (FWHM) as good as 1.8%.

  11. A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Goldberg, Hirsh; Nasrabadi, Nasser M.

    2007-04-01

    In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick, all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectrum and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation of the projections of the current test pixel spectrum and the OWR mean spectrum is greater than a certain threshold. Comparisons are made using receiver operating characteristic (ROC) curves.
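
    The classical RX detector referenced above scores a test pixel by its Mahalanobis distance from the background statistics; a compact numpy sketch using the outer-window pixels as the background (variable names are illustrative):

        import numpy as np

        def rx_score(test_spectrum, background_spectra):
            """background_spectra: (N, B) OWR pixel spectra; test_spectrum: (B,)."""
            mu = background_spectra.mean(axis=0)
            cov = np.cov(background_spectra, rowvar=False)
            diff = test_spectrum - mu
            # Mahalanobis distance; declare an anomaly if it exceeds a threshold
            return float(diff @ np.linalg.pinv(cov) @ diff)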

  12. Image-based metal artifact reduction in x-ray computed tomography utilizing local anatomical similarity

    NASA Astrophysics Data System (ADS)

    Dong, Xue; Yang, Xiaofeng; Rosenfield, Jonathan; Elder, Eric; Dhabaan, Anees

    2017-03-01

    X-ray computed tomography (CT) has been widely used in radiation therapy treatment planning in recent years. However, metal implants such as dental fillings and hip prostheses can cause severe bright and dark streaking artifacts in reconstructed CT images. These artifacts decrease image contrast and degrade HU accuracy, leading to inaccuracies in target delineation and dose calculation. In this work, a metal artifact reduction method is proposed based on the intrinsic anatomical similarity between neighboring CT slices. Neighboring CT slices from the same patient exhibit similar anatomical features. Exploiting this anatomical similarity, a gamma map is calculated as a weighted summation of relative HU error and distance error for each pixel in an artifact-corrupted CT image relative to a neighboring, artifact-free image. The minimum value in the gamma map for each pixel is used to identify an appropriate pixel from the artifact-free CT slice to replace the corresponding artifact-corrupted pixel. With the proposed method, the mean CT HU error was reduced from 360 HU and 460 HU to 24 HU and 34 HU on head and pelvis CT images, respectively. Dose calculation accuracy also improved, as the dose difference was reduced from greater than 20% to less than 4%. Using 3%/3mm criteria, the gamma analysis failure rate was reduced from 23.25% to 0.02%. An image-based metal artifact reduction method is proposed that replaces corrupted image pixels with pixels from neighboring CT slices free of metal artifacts. This method is shown to be capable of suppressing streaking artifacts, thereby improving HU and dose calculation accuracy.
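
    A hedged sketch of the gamma-map replacement idea for a single corrupted pixel; the tolerances, equal weighting and search radius below are illustrative assumptions rather than the paper's parameters.

        # Score candidate pixels in the artifact-free neighbouring slice by a
        # weighted combination of HU difference and spatial distance, and take
        # the candidate with the minimum gamma as the replacement value.
        import numpy as np

        def best_replacement(corrupt_slice, clean_slice, r, c, radius=5,
                             hu_tol=50.0, dist_tol=5.0):
            best_val, best_gamma = corrupt_slice[r, c], np.inf
            h, w = clean_slice.shape
            for rr in range(max(0, r - radius), min(h, r + radius + 1)):
                for cc in range(max(0, c - radius), min(w, c + radius + 1)):
                    hu_err = abs(float(clean_slice[rr, cc]) - float(corrupt_slice[r, c])) / hu_tol
                    dist_err = np.hypot(rr - r, cc - c) / dist_tol
                    gamma = hu_err + dist_err          # equal weights assumed
                    if gamma < best_gamma:
                        best_gamma, best_val = gamma, clean_slice[rr, cc]
            return best_val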

  13. Supervised classification of brain tissues through local multi-scale texture analysis by coupling DIR and FLAIR MR sequences

    NASA Astrophysics Data System (ADS)

    Poletti, Enea; Veronese, Elisa; Calabrese, Massimiliano; Bertoldo, Alessandra; Grisan, Enrico

    2012-02-01

    The automatic segmentation of brain tissues in magnetic resonance (MR) images is usually performed on T1-weighted images, due to their high spatial resolution. The T1w sequence, however, has some major downsides when brain lesions are present: the altered appearance of diseased tissues causes errors in tissue classification. In order to overcome these drawbacks, we employed two different MR sequences: fluid attenuated inversion recovery (FLAIR) and double inversion recovery (DIR). The former highlights both gray matter (GM) and white matter (WM), the latter highlights GM alone. We propose here a supervised classification scheme that does not require any anatomical a priori information to identify the 3 classes, "GM", "WM", and "background". Features are extracted by means of a local multi-scale texture analysis, computed for each pixel of the DIR and FLAIR sequences. The 9 textures considered are average, standard deviation, kurtosis, entropy, contrast, correlation, energy, homogeneity, and skewness, evaluated on neighborhoods of 3×3, 5×5, and 7×7 pixels. Hence, the total number of features associated with a pixel is 56 (9 textures × 3 scales × 2 sequences + 2 original pixel values). The classifier employed is a Support Vector Machine with a Radial Basis Function kernel. From each of the 4 brain volumes evaluated, a DIR and a FLAIR slice were selected and manually segmented by 2 expert neurologists, providing 1st and 2nd human reference observations which agree with an average accuracy of 99.03%. SVM performance was assessed with a 4-fold cross-validation, yielding an average classification accuracy of 98.79%.
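
    A reduced sketch of the per-pixel feature extraction and RBF-SVM classification scheme, using only a few of the listed textures at a single scale (the study uses 9 textures × 3 scales × 2 sequences); the function names and window size are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from sklearn.svm import SVC

        def pixel_features(img, size=3):
            m = uniform_filter(img, size=size)                        # local average
            v = uniform_filter(img ** 2, size=size) - m ** 2          # local variance
            return np.stack([img, m, np.sqrt(np.clip(v, 0, None))], axis=-1)

        def train_tissue_classifier(dir_img, flair_img, label_mask):
            feats = np.concatenate([pixel_features(dir_img), pixel_features(flair_img)], axis=-1)
            X = feats.reshape(-1, feats.shape[-1])
            y = label_mask.reshape(-1)          # GM / WM / background labels per pixel
            clf = SVC(kernel="rbf")             # in practice, fit on a labeled training subset
            clf.fit(X, y)
            return clf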

  14. A High-Speed, Event-Driven, Active Pixel Sensor Readout for Photon-Counting Microchannel Plate Detectors

    NASA Technical Reports Server (NTRS)

    Kimble, Randy A.; Pain, B.; Norton, T. J.; Haas, P.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution for the readout while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are no overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on using an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem, while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.

  15. Singular Stokes-polarimetry as new technique for metrology and inspection of polarized speckle fields

    NASA Astrophysics Data System (ADS)

    Soskin, Marat S.; Denisenko, Vladimir G.; Egorov, Roman I.

    2004-08-01

    Polarimetry is an effective technique for characterizing polarized light fields. It has been shown recently that the most complete "fingerprint" of a light field of arbitrary complexity is its network of polarization singularities: C points with circular polarization and L lines with variable azimuth. A new singular Stokes-polarimetry (SSP) technique was elaborated for such measurements. It allows the azimuth, eccentricity, and handedness of the elliptical vibrations to be determined in each pixel of the receiving CCD camera, over a range of megapixels. It is based on precise measurement of the full set of Stokes parameters with the help of high-quality analyzers and quarter-wave plates with λ/500 precision and 4" adjustment. The matrices of obtained data are processed on a PC by special programs to find the positions of polarization singularities and other needed topological features. The developed SSP technique was successfully validated by measurements of the topology of polarized speckle fields produced by multimode "photonic-crystal" fibers, double-side rubbed polymer films, and biomedical samples. Each singularity is localized with a precision of about +/- 1 pixel, compared with the 500-pixel dimensions of a typical speckle. It was confirmed that the network of topological features that appears in a polarized light field after its interaction with the specimen under inspection is an exact individual "passport" for its characterization. Therefore, SSP can be used for smart materials characterization. The presented data show that the SSP technique is promising for local analysis of properties and defects of thin films, liquid crystal cells, optical elements, biological samples, etc. It is able to discover heterogeneities and defects that essentially define the merits of specimens under inspection and cannot be checked by usual polarimetry methods. The detected extra-high sensitivity of the positions and network of polarization singularities to any change of sample position or deformation opens quite new possibilities for sensing deformations and displacements of inspected elements in the sub-micron range.
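    As a worked example of the per-pixel quantities mentioned above, the sketch below derives the ellipse azimuth, ellipticity, eccentricity, and handedness from Stokes-parameter images. It assumes fully polarized light, and the handedness sign convention is illustrative rather than taken from the paper.

```python
import numpy as np

def ellipse_params(S0, S1, S2, S3):
    """Per-pixel polarization-ellipse parameters from Stokes images.

    Inputs are 2-D arrays of the four Stokes parameters; returns the
    azimuth psi (radians), the ellipticity angle chi (radians, whose sign
    carries the handedness) and the eccentricity of the ellipse.
    """
    psi = 0.5 * np.arctan2(S2, S1)                 # azimuth of the major axis
    pol = np.sqrt(S1**2 + S2**2 + S3**2)           # fully polarized assumed
    chi = 0.5 * np.arcsin(np.clip(S3 / np.maximum(pol, 1e-12), -1.0, 1.0))
    b_over_a = np.abs(np.tan(chi))                 # minor/major axis ratio
    ecc = np.sqrt(1.0 - b_over_a**2)
    handedness = np.sign(S3)                       # sign convention-dependent
    return psi, chi, ecc, handedness

# C points (circular polarization) occur where S1 = S2 = 0;
# L lines (linear polarization) occur where S3 = 0.
```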

  16. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid state image sensors are used in many applications like mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration in the focal plane (or near the focal plane) of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, speed and performance of embedded processing, as well as low power consumption. To achieve these objectives, low-level analog processing allows extracting the useful information in the scene directly. For example, an edge detection step followed by a local maxima extraction will facilitate high-level processing like object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (like local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology and including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit. This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixels is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, like fast image acquisition (10K frames per second) and minima/maxima calculations in less than one ms.
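    A software analogue of the 2×2 minima/maxima operation is shown below as a plain NumPy sketch; the sensor itself, of course, performs this non-linear filtering in the analog domain inside each MMU.

```python
import numpy as np

def minmax_2x2(img):
    """Minimum and maximum over every 2x2 pixel neighbourhood of `img`
    (shape (H, W)); returns two (H-1, W-1) maps, one sliding window per
    interior 2x2 block."""
    stack = np.stack([img[:-1, :-1], img[:-1, 1:], img[1:, :-1], img[1:, 1:]])
    return stack.min(axis=0), stack.max(axis=0)
```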

  17. High-efficiency aperiodic two-dimensional high-contrast-grating hologram

    NASA Astrophysics Data System (ADS)

    Qiao, Pengfei; Zhu, Li; Chang-Hasnain, Connie J.

    2016-03-01

    High efficiency phase holograms are designed and implemented using aperiodic two-dimensional (2D) high-contrast gratings (HCGs). With our design algorithm and an in-house developed rigorous coupled-wave analysis (RCWA) package for periodic 2D HCGs, the structural parameters are obtained to achieve a full 360-degree phase-tuning range of the reflected or transmitted wave, while maintaining the power efficiency above 90%. For given far-field patterns or 3D objects to reconstruct, we can generate the near-field phase distribution through an iterative process. The aperiodic HCG phase plates we design for holograms are pixelated, and the local geometric parameters for each pixel to achieve the desired phase shift are extracted from our periodic HCG designs. Our aperiodic HCG holograms are simulated using the 3D finite-difference time-domain method. The simulation results confirm that the desired far-field patterns are successfully produced under illumination at the designed wavelength. The HCG holograms are implemented on quartz wafers, using amorphous silicon as the high-index material. We propose HCG designs at both visible and infrared wavelengths, and our simulation confirms the reconstruction of 3D objects. The high-contrast gratings allow us to realize low-cost, compact, flat, and integrable holograms with sub-micrometer thicknesses.

  18. A perceptive method for handwritten text segmentation

    NASA Astrophysics Data System (ADS)

    Lemaitre, Aurélie; Camillerapp, Jean; Coüasnon, Bertrand

    2011-01-01

    This paper presents a new method to address the problem of handwritten text segmentation into text lines and words. We propose a method based on the cooperation among points of view that enables the localization of the text lines in a low resolution image, and then associates the pixels at a higher level of resolution. Thanks to the combination of levels of vision, we can detect overlapping characters and re-segment the connected components during the analysis. Then, we propose a segmentation of lines into words based on the cooperation between digital data and symbolic knowledge. The digital data are obtained from distances inside a Delaunay graph, which gives a precise distance between connected components at the pixel level. We introduce structural rules in order to take into account some generic knowledge about the organization of a text page. This cooperation among sources of information gives greater expressive power and ensures the global coherence of the recognition. We validate this work using the metrics and the database proposed for the segmentation contest of ICDAR 2009, and we show that our method obtains very interesting results compared to the other methods in the literature. More precisely, we are able to deal with slope and curvature, overlapping text lines and varied kinds of writing, which are the main difficulties met by the other methods.
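    To make the Delaunay-based distance idea concrete, the sketch below triangulates the centroids of connected components and reports the length of each triangulation edge. This is only an illustration under simplifying assumptions: the paper measures distances between components at the pixel level, whereas this sketch uses centroid-to-centroid distances, and the word/line grouping rules are not shown.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass
from scipy.spatial import Delaunay

def component_gaps(binary_img):
    """Distances between neighbouring connected components of a binary
    text image, taken along the edges of a Delaunay triangulation of the
    component centroids.  Requires at least three components."""
    lab, n = label(binary_img)
    centroids = np.array(center_of_mass(binary_img, lab, range(1, n + 1)))
    tri = Delaunay(centroids)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
            edges.add((a, b))
    # A segmentation rule could then threshold these gaps into intra-word
    # and inter-word distances.
    return {(a, b): float(np.linalg.norm(centroids[a] - centroids[b]))
            for a, b in edges}
```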

  19. Status and performance of HST/Wide Field Camera 3

    NASA Astrophysics Data System (ADS)

    Kimble, Randy A.; MacKenty, John W.; O'Connell, Robert W.

    2006-06-01

    Wide Field Camera 3 (WFC3) is a powerful UV/visible/near-infrared camera currently in development for installation into the Hubble Space Telescope. WFC3 provides two imaging channels. The UVIS channel features a 4096 x 4096 pixel CCD focal plane covering 200 to 1000 nm wavelengths with a 160 x 160 arcsec field of view. The UVIS channel provides unprecedented sensitivity and field of view in the near ultraviolet for HST. It is particularly well suited for studies of the star formation history of local galaxies and clusters, searches for Lyman alpha dropouts at moderate redshift, and searches for low surface brightness structures against the dark UV sky background. The IR channel features a 1024 x 1024 pixel HgCdTe focal plane covering 800 to 1700 nm with a 139 x 123 arcsec field of view, providing a major advance in IR survey efficiency for HST. IR channel science goals include studies of dark energy, galaxy formation at high redshift, and star formation. The instrument is being prepared for launch as part of HST Servicing Mission 4, tentatively scheduled for late 2007, contingent upon formal approval of shuttle-based servicing after successful shuttle return-to-flight. We report here on the status and performance of WFC3.

  20. Photovoltaic retinal prosthesis for restoring sight to the blind: implant design and fabrication

    NASA Astrophysics Data System (ADS)

    Wang, Lele; Mathieson, Keith; Kamins, Theodore I.; Loudin, James; Galambos, Ludwig; Harris, James S.; Palanker, Daniel

    2012-03-01

    We have designed and fabricated a silicon photodiode array for use as a subretinal prosthesis aimed at restoring sight to patients who have lost photoreceptors due to retinal degeneration. The device operates in photovoltaic mode. Each pixel in the two-dimensional array independently converts pulsed infrared light into biphasic electric current to stimulate remaining retinal neurons without a wired power connection. To enhance the maximum voltage and charge injection levels, each pixel contains three photodiodes connected in series. Active and return electrodes in each pixel ensure localized current flow and are sputter coated with iridium oxide to provide high charge injection. The fabrication process consists of eight mask layers and includes deep reactive ion etching, oxidation, and a polysilicon trench refill for in-pixel photodiode separation and isolation of adjacent pixels. Simulation of design parameters included TSUPREM4 computation of doping profiles for n+ and p+ doped regions and MATLAB computation of the anti-reflection coating layer thicknesses. The main process steps are illustrated in detail, and problems encountered are discussed. The IV characterization of the device shows that the dark reverse current is on the order of 10-100 pA, negligible compared to the stimulation current; the reverse breakdown voltage is higher than 20 V. The measured photo-responsivity per photodiode is about 0.33 A/W at 880 nm.

  1. PIXSIC: A Pixellated Beta-Microprobe for Kinetic Measurements of Radiotracers on Awake and Freely Moving Small Animals

    NASA Astrophysics Data System (ADS)

    Godart, J.; Weiss, P.; Chantepie, B.; Clemens, J. C.; Delpierre, P.; Dinkespiler, B.; Janvier, B.; Jevaud, M.; Karkar, S.; Lefebvre, F.; Mastrippolito, R.; Menouni, M.; Pain, F.; Pangaud, P.; Pinot, L.; Morel, C.; Laniece, P.

    2010-06-01

    We present a design study of PIXSIC, a new β+ radiosensitive microprobe implantable in rodent brain dedicated to in vivo and autonomous measurements of local time activity curves of beta radiotracers in a small (a few mm3) volume of brain tissue. This project follows the initial β microprobe previously developed at IMNC, which has been validated in several neurobiological experiments. This first prototype has been extensively used on anesthetized animals, but presents some critical limits for utilization on awake and freely moving animals. Consequently, we propose to develop a wireless setup that can be worn by an animal without constraints upon its movements. To that aim, we have chosen a Silicon-based detector, highly β sensitive, which allows for the development of a compact pixellated probe (typically 600 × 200 × 1000 μm3), read out with miniaturized wireless electronics. Using Monte-Carlo simulations, we show that high resistive Silicon pixels are appropriate for this purpose, assuming that the pixel dimensions are adapted to our specific signals. More precisely, a tradeoff has to be found between the sensitivity to β+ particles and to the 511 keV γ background resulting from annihilations of β+ with electrons. We demonstrate that pixels with maximized surface and minimized thickness can lead to an optimization of their β+ sensitivity with a relative transparency to the annihilation background.

  2. Effect of spatial noise of medical grade Liquid Crystal Displays (LCD) on the detection of micro-calcification

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Fan, Jiahua; Dallas, William J.; Krupinski, Elizabeth A.; Johnson, Jeffrey

    2009-08-01

    This presentation describes work in progress that is the result of an NIH SBIR Phase 1 project addressing the widespread concern about the large number of breast cancers and cancer victims [1,2]. The primary goal of the project is to increase the detection rate of microcalcifications as a result of the decrease of spatial noise of the LCDs used to display the mammograms [3,4]. Noise reduction is to be accomplished with the aid of a high performance CCD camera and subsequent application of local-mean equalization and error diffusion [5,6]. A second goal of the project is the actual detection of breast cancer. Unlike standard full-field digital mammography (FFDM), where the mammograms typically have a pixel matrix of approximately 1900 x 2300 pixels, we will use only sections of mammograms with a pixel matrix of 256 x 256 pixels. This is because, at this time, reduction of spatial noise on an LCD can only be done on relatively small areas like 256 x 256 pixels. In addition, judging the efficacy for detection of breast cancer will be done using two methods: one is a conventional ROC study [7], the other is a vision model developed over several years, starting at the Sarnoff Research Center and continuing at Siemens Corporate Research in Princeton, NJ [8].

  3. Dynamic Black-Level Correction and Artifact Flagging in the Kepler Data Pipeline

    NASA Technical Reports Server (NTRS)

    Clarke, B. D.; Kolodziejczak, J. J.; Caldwell, D. A.

    2013-01-01

    Instrument-induced artifacts in the raw Kepler pixel data include time-varying crosstalk from the fine guidance sensor (FGS) clock signals and manifestations of drifting moiré pattern as locally correlated nonstationary noise and rolling bands in the images, which find their way into the calibrated pixel time series and ultimately into the calibrated target flux time series. Using a combination of raw science pixel data, full frame images, reverse-clocked pixel data and ancillary temperature data, the Kepler pipeline models and removes the FGS crosstalk artifacts by dynamically adjusting the black level correction. By examining the residuals to the model fits, the pipeline detects and flags spatial regions and time intervals of strong time-varying black level (rolling bands) on a per-row, per-cadence basis. These flags are made available to downstream users of the data, since the uncorrected rolling band artifacts could complicate processing or lead to misinterpretation of instrument behavior as stellar. This model fitting and artifact flagging is performed within the new stand-alone pipeline module called Dynablack. We discuss the implementation of Dynablack in the Kepler data pipeline and present results regarding the improvement in calibrated pixels and the expected improvement in cotrending performance as a result of including FGS corrections in the calibration. We also discuss the effectiveness of the rolling band flagging for downstream users and illustrate with some affected light curves.

  4. Pneumothorax detection in chest radiographs using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Aviel; Konen, Eli; Greenspan, Hayit

    2018-02-01

    This study presents a computer assisted diagnosis system for the detection of pneumothorax (PTX) in chest radiographs based on a convolutional neural network (CNN) for pixel classification. Using a pixel classification approach allows utilization of the texture information in the local environment of each pixel while training a CNN model on millions of training patches extracted from a relatively small dataset. The proposed system uses a pre-processing step of lung field segmentation to overcome the large variability in the input images coming from a variety of imaging sources and protocols. Using the CNN classification, suspected pixel candidates are extracted within each lung segment. A post-processing step follows to remove non-physiological suspected regions and noisy connected components. The overall percentage of suspected PTX area was used as a robust global decision for the presence of PTX in each lung. The system was trained on a set of 117 chest x-ray images with ground truth segmentations of the PTX regions. The system was tested on a set of 86 images and reached a diagnostic accuracy of AUC = 0.95. Overall, the preliminary results are promising and indicate the growing ability of CAD-based systems to detect findings in medical imaging with clinical-level accuracy.
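    A minimal sketch of patch-based pixel classification is given below, assuming a Keras/TensorFlow environment. The layer sizes, patch size, and training settings are illustrative only and are not the architecture described in the paper; the image is assumed to be normalized and padded before patch extraction.

```python
import numpy as np
import tensorflow as tf

def build_patch_cnn(patch=32):
    """Small CNN that labels the centre pixel of a patch as PTX / not PTX."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(patch, patch, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def extract_patches(img, coords, patch=32):
    """Gather patches centred on the given (row, col) pixels; the image is
    assumed to be padded so that every window stays inside the array."""
    h = patch // 2
    return np.stack([img[r - h:r + h, c - h:c + h, None] for r, c in coords])

# model = build_patch_cnn()
# model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
# model.fit(extract_patches(img, train_pixels), labels, epochs=5)
```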

  5. Artificial Structural Color Pixels: A Review

    PubMed Central

    Zhao, Yuqian; Zhao, Yong; Hu, Sheng; Lv, Jiangtao; Ying, Yu; Gervinskas, Gediminas; Si, Guangyuan

    2017-01-01

    Inspired by natural photonic structures (the Morpho butterfly, for instance), researchers have demonstrated a variety of artificial color display devices using different designs. Photonic-crystal/plasmonic color filters have drawn increasing attention most recently. In this review article, we show the developing trend of artificial structural color pixels from photonic crystals to plasmonic nanostructures. Such devices normally utilize the distinctive optical features of photonic/plasmon resonance, resulting in high compatibility with current display and imaging technologies. Moreover, dynamic color filtering devices are highly desirable because tunable optical components are critical for developing new optical platforms which can be integrated or combined with other existing imaging and display techniques. Thus, extensive promising potential applications have been triggered and enabled, including more abundant functionalities in integrated optics and nanophotonics. PMID:28805736

  6. The realization of an SVGA OLED-on-silicon microdisplay driving circuit

    NASA Astrophysics Data System (ADS)

    Bohua, Zhao; Ran, Huang; Fei, Ma; Guohua, Xie; Zhensong, Zhang; Huan, Du; Jiajun, Luo; Yi, Zhao

    2012-03-01

    An 800 × 600 pixel organic light-emitting diode-on-silicon (OLEDoS) driving circuit is proposed. The pixel cell circuit utilizes a subthreshold-voltage-scaling structure which can modulate the pixel current between 170 pA and 11.4 nA. In order to keep the voltage of the column bus at a relatively high level, the sample-and-hold circuits adopt a ping-pong operation. The driving circuit is fabricated in a commercially available 0.35 μm two-poly four-metal 3.3 V mixed-signal CMOS process. The pixel cell area is 15 × 15 μm2 and the total chip occupies 15.5 × 12.3 mm2. Experimental results show that the chip can work properly at a frame frequency of 60 Hz and displays 64 grayscale levels (monochrome). The total power consumption of the chip is about 85 mW with a 3.3 V supply voltage.

  7. Phase information contained in meter-scale SAR images

    NASA Astrophysics Data System (ADS)

    Datcu, Mihai; Schwarz, Gottfried; Soccorsi, Matteo; Chaabouni, Houda

    2007-10-01

    The properties of single look complex SAR satellite images have already been analyzed by many investigators. A common belief is that, apart from inverse SAR methods or polarimetric applications, no information can be gained from the phase of each pixel. This belief is based on the assumption that we obtain uniformly distributed random phases when a sufficient number of small-scale scatterers are mixed in each image pixel. However, the random phase assumption no longer holds for typical high resolution urban remote sensing scenes, when a limited number of prominent human-made scatterers with near-regular shape and sub-meter size lead to correlated phase patterns. If the pixel size shrinks to a critical threshold of about 1 meter, the reflectance of built-up urban scenes becomes dominated by typical metal reflectors, corner-like structures, and multiple scattering. The resulting phases are hard to model, but one can try to classify a scene based on the phase characteristics of neighboring image pixels. We provide a "cooking recipe" of how to analyze existing phase patterns that extend over neighboring pixels.

  8. Simultaneous fluorescence and quantitative phase microscopy with single-pixel detectors

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Suo, Jinli; Zhang, Yuanlong; Dai, Qionghai

    2018-02-01

    Multimodal microscopy offers high flexibility for biomedical observation and diagnosis. Conventional multimodal approaches either use multiple cameras or a single camera spatially multiplexing different modes. The former needs expertise-demanding alignment and the latter suffers from limited spatial resolution. Here, we report an alignment-free, full-resolution, simultaneous fluorescence and quantitative phase imaging approach using single-pixel detectors. By combining reference-free interferometry with single-pixel detection, we encode the phase and fluorescence of the sample in two detection arms at the same time. Then we employ structured illumination and the correlated measurements between the sample and the illuminations for reconstruction. The recovered fluorescence and phase images are inherently aligned thanks to single-pixel detection. To validate the proposed method, we built a proof-of-concept setup, first imaging the phase of etched glass with a depth of a few hundred nanometers and then imaging the fluorescence and phase of a quantum-dot droplet. This method holds great potential for multispectral fluorescence microscopy with additional single-pixel detectors or a spectrometer. Besides, this cost-efficient multimodal system might find broad applications in biomedical science and neuroscience.
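    The reconstruction from structured illumination and single-pixel measurements can be illustrated with a basic correlation (ghost-imaging style) estimator, sketched below. This is an assumption-laden stand-in for the paper's reconstruction: each detection arm would be reconstructed independently, and the actual algorithm and normalization may differ.

```python
import numpy as np

def single_pixel_recon(patterns, measurements):
    """Correlation reconstruction from single-pixel data.

    `patterns` has shape (M, H, W): the structured illumination patterns.
    `measurements` has shape (M,): the corresponding single-pixel detector
    readings for one modality (e.g. the fluorescence arm).
    """
    m = measurements - measurements.mean()
    p = patterns - patterns.mean(axis=0)
    # <dI * dP(x, y)> averaged over the M illumination patterns
    return np.tensordot(m, p, axes=1) / len(m)
```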

  9. A fast event preprocessor for the Simbol-X Low-Energy Detector

    NASA Astrophysics Data System (ADS)

    Schanz, T.; Tenzer, C.; Kendziorra, E.; Santangelo, A.

    2008-07-01

    The Simbol-X Low Energy Detector (LED), a 128 × 128 pixel DEPFET array, will be read out very fast (8000 frames/second). This requires very fast onboard preprocessing of the raw data. We present an FPGA-based Event Preprocessor (EPP) which can fulfill these requirements. The design is developed in the hardware description language VHDL and can later be ported to an ASIC technology. The EPP performs a pixel-related offset correction and can apply different energy thresholds to each pixel of the frame. It also provides a line-related common-mode correction to reduce noise that is unavoidably caused by the analog readout chip of the DEPFET. An integrated pattern detector can block all invalid pixel patterns. The EPP has an internal pipeline structure and can perform all operations in real time (< 2 μs per line of 64 pixels) with a base clock frequency of 100 MHz. It utilizes a fast median-value detection algorithm for common-mode correction and a new pattern scanning algorithm to select only valid events. Both new algorithms were developed during the last year at our institute.
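    A software model of the per-frame processing chain (offset correction, median-based common-mode correction per line, per-pixel energy thresholds) is sketched below as a hedged illustration; the on-chip pattern detector and the FPGA pipeline details are omitted, and array names are placeholders.

```python
import numpy as np

def preprocess_frame(raw, offsets, thresholds):
    """Offset and common-mode correction for one detector frame.

    `raw`, `offsets`, and `thresholds` are (rows, cols) arrays.  Each pixel
    is offset-corrected, each line is corrected by its median value (a
    simple common-mode estimate), and only pixels above their individual
    energy threshold are returned as event candidates.
    """
    corrected = raw.astype(float) - offsets
    corrected -= np.median(corrected, axis=1, keepdims=True)  # per-line common mode
    events = np.argwhere(corrected > thresholds)               # (row, col) of candidates
    return corrected, events
```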

  10. Ground-based Nighttime Cloud Detection Using a Commercial Digital Camera: Observations at Manila Observatory (14.64N, 121.07E)

    NASA Astrophysics Data System (ADS)

    Gacal, G. F. B.; Tan, F.; Antioquia, C. T.; Lagrosas, N.

    2014-12-01

    Cloud detection during nighttime poses a real problem to researchers because of a lack of optimum sensors that can specifically detect clouds during this time of the day. Hence, lidars and satellites are currently some of the instruments being utilized to determine cloud presence in the atmosphere. These clouds play a significant role in the nighttime weather system because they act as barriers to thermal radiation from the Earth, reflecting this radiation back toward the surface. This effectively lowers the rate at which the atmosphere cools at night. The objective of this study is to detect cloud occurrences at nighttime for the purpose of studying patterns of cloud occurrence and the effects of clouds on local weather. In this study, a commercial camera (Canon Powershot A2300) is operated continuously to capture nighttime clouds. The camera is situated inside a weather-proof box with a glass cover and is placed on the rooftop of the Manila Observatory building to gather pictures of the sky every 5 min to observe cloud dynamics and evolution in the atmosphere. To detect pixels with clouds, the pictures are converted from their native JPEG format to grayscale. The pixels are then screened for clouds by comparing the values of pixels with and without clouds: in grayscale format, pixels with clouds have greater pixel values than pixels without clouds. Based on the observations, a threshold of 0.34 of the maximum pixel value is enough to discern pixels with clouds from pixels without clouds. Figs. 1a and 1b are sample unprocessed pictures of cloudless (May 22-23, 2014) and cloudy skies (May 23-24, 2014), respectively. Figs. 1c and 1d show the percentage of occurrence of nighttime clouds on May 22-23 and May 23-24, 2014, respectively. The cloud occurrence in a pixel is defined as the ratio of the number of times the pixel has clouds to the total number of observations. Fig. 1c shows less than 50% cloud occurrence, while Fig. 1d shows higher cloud occurrence than Fig. 1c. These graphs show the capability of the camera to detect and measure cloud occurrence at nighttime. Continuous collection of nighttime pictures is currently being implemented. In regions where there is a dearth of scientific data, the measured nighttime cloud occurrence will serve as a baseline for future cloud studies in this part of the world.
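    The thresholding and occurrence statistics described above reduce to a few lines of Python. In this sketch the "maximum pixel value" is interpreted as the 8-bit maximum (255), which is an assumption; the paper may instead use the per-frame maximum.

```python
import numpy as np
from PIL import Image

def cloud_mask(jpeg_path, frac=0.34):
    """Night-time cloud mask from one camera frame: the JPEG is converted
    to grayscale and pixels brighter than `frac` of the 8-bit maximum are
    flagged as cloudy."""
    gray = np.asarray(Image.open(jpeg_path).convert("L"), dtype=float)
    return gray > frac * 255.0

def occurrence(masks):
    """Per-pixel cloud occurrence: fraction of frames in which each pixel
    was flagged as cloudy."""
    return np.mean(np.stack(masks), axis=0)
```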

  11. A coarse-to-fine approach for pericardial effusion localization and segmentation in chest CT scans

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Chellamuthu, Karthik; Lu, Le; Bagheri, Mohammadhadi; Summers, Ronald M.

    2018-02-01

    Pericardial effusion on CT scans demonstrates very high shape and volume variability and very low contrast to adjacent structures. This inhibits traditional automated segmentation methods from achieving high accuracy. Deep neural networks have been widely used for image segmentation in CT scans. In this work, we present a two-stage method for pericardial effusion localization and segmentation. In the first stage, we localize the pericardial area from the entire CT volume, providing a reliable bounding box for the more refined segmentation step. A coarse-scaled holistically-nested convolutional network (HNN) model is trained on the entire CT volume. The resulting HNN per-pixel probability maps are then thresholded to produce a bounding box covering the pericardial area. In the second stage, a fine-scaled HNN model is trained only on the bounding box region for effusion segmentation, to reduce background distraction. Quantitative evaluation is performed on a dataset of 25 CT scans of patients with pericardial effusion (1206 images). The segmentation accuracy of our two-stage method, measured by the Dice Similarity Coefficient (DSC), is 75.59 +/- 12.04%, which is significantly better than the segmentation accuracy (62.74 +/- 15.20%) obtained using only the coarse-scaled HNN model.

  12. Ultrahigh-frame CCD imagers

    NASA Astrophysics Data System (ADS)

    Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.

    2004-02-01

    This paper describes the architecture, process technology, and performance of a family of high burst rate CCDs. These imagers employ high speed, low lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10⁶ frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research and wind-tunnel-based gas dynamics research.

  13. Crossword: A Fully Automated Algorithm for the Segmentation and Quality Control of Protein Microarray Images

    PubMed Central

    2015-01-01

    Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579

  14. Detector Sampling of Optical/IR Spectra: How Many Pixels per FWHM?

    NASA Astrophysics Data System (ADS)

    Robertson, J. Gordon

    2017-08-01

    Most optical and IR spectra are now acquired using detectors with finite-width pixels in a square array. Each pixel records the received intensity integrated over its own area, and pixels are separated by the array pitch. This paper examines the effects of such pixellation, using computed simulations to illustrate the effects which most concern the astronomer end-user. It is shown that coarse sampling increases the random noise errors in wavelength by typically 10-20% at 2 pixels per Full Width at Half Maximum, but with wide variation depending on the functional form of the instrumental Line Spread Function (i.e. the instrumental response to a monochromatic input) and on the pixel phase. If line widths are determined, they are even more strongly affected at low sampling frequencies. However, the noise in fitted peak amplitudes is minimally affected by pixellation, with increases of less than about 5%. Pixellation has a substantial but complex effect on the ability to see a relative minimum between two closely spaced peaks (or a relative maximum between two absorption lines). The consistent scale of resolving power presented by Robertson to overcome the inadequacy of the Full Width at Half Maximum as a resolution measure is here extended to cover pixellated spectra. The systematic bias errors in wavelength introduced by pixellation, independent of signal/noise ratio, are examined. While they may be negligible for smooth, well-sampled, symmetric Line Spread Functions, they are very sensitive to asymmetry and high spatial frequency sub-structure. The Modulation Transfer Function for sampled data is shown to give a useful indication of the extent of improperly sampled signal in a Line Spread Function. The common maxim that 2 pixels per Full Width at Half Maximum is the Nyquist limit is incorrect, and most Line Spread Functions will exhibit some aliasing at this sample frequency. While 2 pixels per Full Width at Half Maximum is nevertheless often an acceptable minimum for moderate signal/noise work, it is preferable to carry out simulations for any actual or proposed Line Spread Function to find the effects of various sampling frequencies. Where spectrograph end-users have a choice of sampling frequencies, through on-chip binning and/or spectrograph configurations, it is desirable that the instrument user manual should include an examination of the effects of the various choices.
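    The kind of simulation recommended above is easy to set up: integrate an assumed Line Spread Function over finite-width pixels and study the fitted quantities as the pixel phase and sampling frequency change. The sketch below uses a Gaussian LSF purely as an example; it is not the paper's code, and the FWHM, pixel count, and phase are illustrative.

```python
import numpy as np
from scipy.special import erf

def pixel_sampled_lsf(center, fwhm, n_pix, pitch=1.0):
    """Integrate a Gaussian LSF over finite-width pixels.

    `center` is the line centre in pixel units, `fwhm` the Gaussian FWHM,
    `n_pix` the number of pixels; returns the pixel-integrated samples.
    Varying `center` modulo 1 explores pixel-phase dependence; adding
    noise and repeating shows how centroid errors grow at coarse sampling.
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    edges = (np.arange(n_pix + 1) - 0.5) * pitch          # pixel boundaries
    cdf = 0.5 * (1.0 + erf((edges - center) / (sigma * np.sqrt(2.0))))
    return np.diff(cdf)

samples = pixel_sampled_lsf(center=10.3, fwhm=2.0, n_pix=21)  # 2 px per FWHM
centroid = np.sum(np.arange(21) * samples) / np.sum(samples)  # compare with 10.3
```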

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz has been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout, which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. Lastly, we detail the characteristics, operation, testing and application of the detector.

  16. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, S.; Cole, D. M.; Hancock, B. R.; Smith, R. M.

    2008-01-01

    Electronic coupling effects such as Inter-Pixel Capacitance (IPC) affect the quantitative interpretation of image data from CMOS, hybrid visible, and infrared imagers alike. Existing methods of characterizing IPC do not provide a map of the spatial variation of IPC over all pixels. We demonstrate a deterministic method that provides a direct quantitative map of the crosstalk across an imager. The approach requires only the ability to reset single pixels to an arbitrary voltage, different from the rest of the imager. No illumination source is required. Mapping IPC independently for each pixel is also made practical by the greater S/N ratio achievable for an electrical stimulus than for an optical stimulus, which is subject to both Poisson statistics and diffusion effects of photo-generated charge. The data we present illustrate a more complex picture of IPC in Teledyne HgCdTe and HyViSi focal plane arrays than is presently understood, including the presence of a newly discovered, long range IPC in the HyViSi FPA that extends tens of pixels in distance, likely stemming from extended field effects in the fully depleted substrate. The sensitivity of the measurement approach has been shown to be good enough to distinguish spatial structure in IPC of the order of 0.1%.
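    Conceptually, the measurement reduces to differencing a frame in which a single pixel has been reset to a distinct voltage against a baseline frame and normalizing the local response. The sketch below is a hedged illustration of that analysis step only, with placeholder array names; calibration details and the larger windows needed to capture the long-range coupling are left out.

```python
import numpy as np

def ipc_kernel(reset_frame, baseline_frame, row, col, half=1):
    """Estimate the inter-pixel-capacitance coupling kernel around one pixel.

    A single pixel at (row, col) is reset to a different voltage; the
    difference image around it, normalised to unit sum, gives the fraction
    of signal coupled into each neighbour (a 3x3 kernel for half=1; use a
    larger `half` to look for long-range coupling).
    """
    diff = reset_frame.astype(float) - baseline_frame.astype(float)
    win = diff[row - half:row + half + 1, col - half:col + half + 1]
    return win / win.sum()
```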

  17. Reduced signal crosstalk multi neurotransmitter image sensor by microhole array structure

    NASA Astrophysics Data System (ADS)

    Ogaeri, Yuta; Lee, You-Na; Mitsudome, Masato; Iwata, Tatsuya; Takahashi, Kazuhiro; Sawada, Kazuaki

    2018-06-01

    A microhole array structure combined with an enzyme immobilization method using magnetic beads can enhance the target discernment capability of a multi neurotransmitter image sensor. Here we report the fabrication and evaluation of the H+-diffusion-preventing capability of the sensor with the array structure. The structure, made with an SU-8 photoresist, has holes with a size of 24.5 × 31.6 µm2. Sensors were prepared with the array structure at three different heights: 0, 15, and 60 µm. When the sensor has the structure of 60 µm height, a 48% reduction in output voltage is measured at a H+-sensitive null pixel located 75 µm from the acetylcholinesterase (AChE)-immobilized pixel, which is the starting point of H+ diffusion. The suppressed H+ migration is shown in a two-dimensional (2D) image in real time. The sensor parameters, such as the height of the array structure and the measuring time, are optimized experimentally. The sensor is expected to effectively distinguish various neurotransmitters in biological samples.

  18. Imaging Local Ca2+ Signals in Cultured Mammalian Cells

    PubMed Central

    Lock, Jeffrey T.; Ellefsen, Kyle L.; Settle, Bret; Parker, Ian; Smith, Ian F.

    2015-01-01

    Cytosolic Ca2+ ions regulate numerous aspects of cellular activity in almost all cell types, controlling processes as wide-ranging as gene transcription, electrical excitability and cell proliferation. The diversity and specificity of Ca2+ signaling derives from mechanisms by which Ca2+ signals are generated to act over different time and spatial scales, ranging from cell-wide oscillations and waves occurring over periods of minutes to local transient Ca2+ microdomains (Ca2+ puffs) lasting milliseconds. Recent advances in electron-multiplying CCD (EMCCD) cameras now allow for imaging of local Ca2+ signals with a 128 x 128 pixel spatial resolution at rates of >500 frames s⁻¹ (fps). This approach is highly parallel and enables the simultaneous monitoring of hundreds of channels or puff sites in a single experiment. However, the vast amounts of data generated (ca. 1 Gb per min) render visual identification and analysis of local Ca2+ events impracticable. Here we describe and demonstrate the procedures for the acquisition, detection, and analysis of local IP3-mediated Ca2+ signals in intact mammalian cells loaded with Ca2+ indicators using both wide-field epi-fluorescence (WF) and total internal reflection fluorescence (TIRF) microscopy. Furthermore, we describe an algorithm developed within the open-source software environment Python that automates the identification and analysis of these local Ca2+ signals. The algorithm localizes sites of Ca2+ release with sub-pixel resolution; allows user review of data; and outputs time sequences of fluorescence ratio signals together with amplitude and kinetic data in an Excel-compatible table. PMID:25867132
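    Since the published algorithm is itself written in Python, a crude stand-in for its detection stage is easy to sketch. The snippet below is only illustrative: baseline length, smoothing, and threshold are assumptions, and the sub-pixel localization, user review, and kinetic analysis described in the abstract are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def detect_local_events(stack, baseline_frames=100, z_thresh=4.0, sigma=(0, 1, 1)):
    """Crude detection of local Ca2+ transients in a (T, H, W) image stack.

    A per-pixel baseline F0 is taken from the first frames, the stack is
    converted to dF/F0, lightly smoothed in space, and voxels exceeding
    `z_thresh` standard deviations of the pixel's baseline noise are
    grouped into candidate events by 3-D connected-component labelling.
    """
    f0 = stack[:baseline_frames].mean(axis=0)
    dff = (stack - f0) / np.maximum(f0, 1e-6)
    noise = dff[:baseline_frames].std(axis=0)
    smooth = gaussian_filter(dff, sigma=sigma)
    events, n_events = label(smooth > z_thresh * noise)
    return events, n_events
```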

  19. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.

    PubMed

    Zhang, Xiangjun; Wu, Xiaolin

    2008-06-01

    The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.
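    The core of the adaptive model is the estimation, inside a moving low-resolution window, of autoregressive coefficients that relate each pixel to its neighbors. The sketch below fits such coefficients by ridge regression on the four diagonal neighbors, which is one common choice for interpolation-oriented AR models; the neighbor set, regularization, and the subsequent soft-decision estimation of the missing high-resolution pixels are all simplifications relative to the paper.

```python
import numpy as np

def fit_ar_params(window, lam=0.1):
    """Ridge-regression fit of a 2-D autoregressive model in a local window.

    Each interior pixel of `window` is regressed on its four diagonal
    neighbours; the returned 4-vector of coefficients characterises the
    local pixel structure and would drive the estimation of missing
    high-resolution pixels.
    """
    y, X = [], []
    for r in range(1, window.shape[0] - 1):
        for c in range(1, window.shape[1] - 1):
            y.append(window[r, c])
            X.append([window[r - 1, c - 1], window[r - 1, c + 1],
                      window[r + 1, c - 1], window[r + 1, c + 1]])
    X, y = np.asarray(X, float), np.asarray(y, float)
    return np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)
```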

  20. Ferrocene pixels by laser-induced forward transfer: towards flexible microelectrode printing

    NASA Astrophysics Data System (ADS)

    Mitu, B.; Matei, A.; Filipescu, M.; Palla Papavlu, A.; Bercea, A.; Lippert, T.; Dinescu, M.

    2017-03-01

    The aim of this work is to demonstrate the potential of laser-induced forward transfer (LIFT) as a printing technology, alternative to standard microfabrication techniques, in the area of flexible micro-electrode fabrication. First, ferrocene thin films are deposited onto fused silica and fused silica substrates previously coated with a photodegradable polymer film (triazene polymer) by matrix assisted pulsed laser evaporation (MAPLE). The morphology and chemical structure of the ferrocene thin films deposited by MAPLE have been investigated by atomic force microscopy and Fourier transform infrared spectroscopy, and no structural damage occurs as a result of the laser deposition. Second, LIFT is applied to print for the first time ferrocene pixels and lines onto flexible polydimethylsiloxane (PDMS) substrates. The ferrocene pixels and lines are flawlessly transferred onto the PDMS substrates in air at room temperature, without the need of additional conventional photolithography processes. We believe that these results are very promising for a variety of applications ranging from flexible electronics to lab-on-a-chip devices, MEMS, and medical implants.

  1. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
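    For readers unfamiliar with the prior, the quantity being penalized is x^T L x, where L = D - W is the Laplacian of a graph built on the pixels of a patch. The sketch below constructs one such regularizer with Gaussian edge weights over pixel coordinates and intensities; the bandwidth and the way spatial and intensity terms are combined are illustrative choices, not the optimal metric derived in the paper.

```python
import numpy as np

def graph_laplacian_regularizer(patch, h=0.1):
    """Graph Laplacian regularizer x^T L x for one vectorised pixel patch.

    Edge weights combine squared spatial distance and squared intensity
    difference through a Gaussian kernel of bandwidth `h`; L = D - W, and
    a smaller x^T L x means the patch is smoother with respect to the
    chosen graph.
    """
    x = patch.ravel().astype(float)
    rows, cols = np.unravel_index(np.arange(x.size), patch.shape)
    coords = np.stack([rows, cols], axis=1).astype(float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1) \
         + (x[:, None] - x[None, :]) ** 2
    W = np.exp(-d2 / (2.0 * h ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    return x @ L @ x
```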

  2. Opportunities for Live Cell FT-Infrared Imaging: Macromolecule Identification with 2D and 3D Localization

    PubMed Central

    Mattson, Eric C.; Aboualizadeh, Ebrahim; Barabas, Marie E.; Stucky, Cheryl L.; Hirschmugl, Carol J.

    2013-01-01

    Infrared (IR) spectromicroscopy, or chemical imaging, is an evolving technique that is poised to make significant contributions in the fields of biology and medicine. Recent developments in sources, detectors, measurement techniques and specimen holders have now made diffraction-limited Fourier transform infrared (FTIR) imaging of cellular chemistry in living cells a reality. The availability of bright, broadband IR sources and large area, pixelated detectors facilitate live cell imaging, which requires rapid measurements using non-destructive probes. In this work, we review advances in the field of FTIR spectromicroscopy that have contributed to live-cell two and three-dimensional IR imaging, and discuss several key examples that highlight the utility of this technique for studying the structure and chemistry of living cells. PMID:24256815

  3. Development of Time-Distance Helioseismology Data Analysis Pipeline for SDO/HMI

    NASA Technical Reports Server (NTRS)

    DuVall, T. L., Jr.; Zhao, J.; Couvidat, S.; Parchevsky, K. V.; Beck, J.; Kosovichev, A. G.; Scherrer, P. H.

    2008-01-01

    The Helioseismic and Magnetic Imager of SDO will provide uninterrupted 4k x 4k-pixel Doppler-shift images of the Sun with approximately 40 sec cadence. These data will have a unique potential for advancing local helioseismic diagnostics of the Sun's interior structure and dynamics. They will help to understand the basic mechanisms of solar activity and develop predictive capabilities for NASA's Living with a Star program. Because of the tremendous amount of data, the HMI team is developing a data analysis pipeline, which will provide maps of subsurface flows and sound-speed distributions inferred from the Doppler data by the time-distance technique. We discuss the development plan, methods, and algorithms, and present the status of the pipeline, testing results and examples of the data products.

  4. Photon Counting Energy Dispersive Detector Arrays for X-ray Imaging

    PubMed Central

    Iwanczyk, Jan S.; Nygård, Einar; Meirav, Oded; Arenson, Jerry; Barber, William C.; Hartsough, Neal E.; Malakhov, Nail; Wessel, Jan C.

    2009-01-01

    The development of an innovative detector technology for photon-counting in X-ray imaging is reported. This new generation of detectors, based on pixellated cadmium telluride (CdTe) and cadmium zinc telluride (CZT) detector arrays electrically connected to application specific integrated circuits (ASICs) for readout, will produce fast and highly efficient photon-counting and energy-dispersive X-ray imaging. There are a number of applications that can greatly benefit from these novel imagers including mammography, planar radiography, and computed tomography (CT). Systems based on this new detector technology can provide compositional analysis of tissue through spectroscopic X-ray imaging, significantly improve overall image quality, and may significantly reduce X-ray dose to the patient. A very high X-ray flux is utilized in many of these applications. For example, CT scanners can produce ~100 Mphotons/mm²/s in the unattenuated beam. High flux is required in order to collect sufficient photon statistics in the measurement of the transmitted flux (attenuated beam) during the very short time frame of a CT scan. This high count rate combined with a need for high detection efficiency requires the development of detector structures that can provide a response signal much faster than the transit time of carriers over the whole detector thickness. We have developed CdTe and CZT detector array structures which are 3 mm thick with 16×16 pixels and a 1 mm pixel pitch. These structures, in the two different implementations presented here, utilize either a small pixel effect or a drift phenomenon. An energy resolution of 4.75% at 122 keV has been obtained with a 30 ns peaking time using discrete electronics and a ⁵⁷Co source. An output rate of 6×10⁶ counts per second per individual pixel has been obtained with our ASIC readout electronics and a clinical CT X-ray tube. Additionally, the first clinical CT images, taken with several of our prototype photon-counting and energy-dispersive detector modules, are shown. PMID:19920884

  5. Photon Counting Energy Dispersive Detector Arrays for X-ray Imaging.

    PubMed

    Iwanczyk, Jan S; Nygård, Einar; Meirav, Oded; Arenson, Jerry; Barber, William C; Hartsough, Neal E; Malakhov, Nail; Wessel, Jan C

    2009-01-01

    The development of an innovative detector technology for photon-counting in X-ray imaging is reported. This new generation of detectors, based on pixellated cadmium telluride (CdTe) and cadmium zinc telluride (CZT) detector arrays electrically connected to application specific integrated circuits (ASICs) for readout, will produce fast and highly efficient photon-counting and energy-dispersive X-ray imaging. There are a number of applications that can greatly benefit from these novel imagers including mammography, planar radiography, and computed tomography (CT). Systems based on this new detector technology can provide compositional analysis of tissue through spectroscopic X-ray imaging, significantly improve overall image quality, and may significantly reduce X-ray dose to the patient. A very high X-ray flux is utilized in many of these applications. For example, CT scanners can produce ~100 Mphotons/mm²/s in the unattenuated beam. High flux is required in order to collect sufficient photon statistics in the measurement of the transmitted flux (attenuated beam) during the very short time frame of a CT scan. This high count rate combined with a need for high detection efficiency requires the development of detector structures that can provide a response signal much faster than the transit time of carriers over the whole detector thickness. We have developed CdTe and CZT detector array structures which are 3 mm thick with 16×16 pixels and a 1 mm pixel pitch. These structures, in the two different implementations presented here, utilize either a small pixel effect or a drift phenomenon. An energy resolution of 4.75% at 122 keV has been obtained with a 30 ns peaking time using discrete electronics and a ⁵⁷Co source. An output rate of 6×10⁶ counts per second per individual pixel has been obtained with our ASIC readout electronics and a clinical CT X-ray tube. Additionally, the first clinical CT images, taken with several of our prototype photon-counting and energy-dispersive detector modules, are shown.

  6. Micro-pixelation and color mixing in biological photonic structures (presentation video)

    NASA Astrophysics Data System (ADS)

    Bartl, Michael H.; Nagi, Ramneet K.

    2014-03-01

    The world of insects displays myriad hues of coloration effects produced by elaborate nano-scale architectures built into wings and exoskeleton. For example, we have recently found many weevils possess photonic architectures with cubic lattices. In this talk, we will present high-resolution three-dimensional reconstructions of weevil photonic structures with diamond and gyroid lattices. Moreover, by reconstructing entire scales we found arrays of single-crystalline domains, each oriented such that only selected crystal faces are visible to an observer. This pixel-like arrangement is key to the angle-independent coloration typical of weevils—a strategy that could enable a new generation of coating technologies.

  7. Surface-Micromachined Planar Arrays of Thermopiles

    NASA Technical Reports Server (NTRS)

    Foote, Marc C.

    2003-01-01

    Planar two-dimensional arrays of thermopiles intended for use as thermal-imaging detectors are to be fabricated by a process that includes surface micromachining. These thermopile arrays are designed to perform better than prior two-dimensional thermopile arrays. The lower performance of prior two-dimensional thermopile arrays is attributed to the following causes: the thermopiles are made from low-performance thermoelectric materials; the devices contain dielectric supporting structures, the thermal conductances of which give rise to parasitic losses of heat from detectors to substrates; the bulk-micromachining processes sometimes used to remove substrate material under the pixels make it difficult to incorporate low-noise readout electronic circuitry; and the thermoelectric lines are on the same level as the infrared absorbers, thereby reducing the fill factor. The improved pixel design of a thermopile array of the type under development is expected to afford enhanced performance by virtue of the following combination of features: surface-micromachined detectors are thermally isolated through suspension above the readout circuitry; the thermopiles are made of such high-performance thermoelectric materials as Bi-Te and Bi-Sb-Te alloys; and pixel structures are supported only by the thermoelectric materials, so there are no supporting dielectric structures that could leak heat by conduction to the substrate.

  8. Evaluation of a CdTe semiconductor based compact γ camera for sentinel lymph node imaging.

    PubMed

    Russo, Paolo; Curion, Assunta S; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caracò, Corradina; Aloj, Luigi; Lastoria, Secondo

    2011-03-01

    The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. The room-temperature CdTe pixel detector (1 mm thick) has 256 x 256 square pixels arranged with a 55 μm pitch (sensitive area 14.08 x 14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor 1:5. The detector is operated at a single low-energy threshold of about 20 keV. For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5 x 10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3 x 10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s with an injected activity of 26 MBq 99mTc and prior localization with standard gamma camera lymphoscintigraphy. The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter distances from the patient skin.

  9. An embedded face-classification system for infrared images on an FPGA

    NASA Astrophysics Data System (ADS)

    Soto, Javier E.; Figueroa, Miguel

    2014-10-01

    We present a face-classification architecture for long-wave infrared (IR) images implemented on a Field Programmable Gate Array (FPGA). The circuit is fast, compact and low power, can recognize faces in real time, and can be embedded in a larger image-processing and computer vision system operating locally on an IR camera. The algorithm uses Local Binary Patterns (LBP) to perform feature extraction on each IR image. First, each pixel in the image is represented as an LBP pattern that encodes the similarity between the pixel and its neighbors. Uniform LBP codes are then used to reduce the number of patterns to 59 while preserving more than 90% of the information contained in the original LBP representation. Then, the image is divided into 64 non-overlapping regions, and each region is represented as a 59-bin histogram of patterns. Finally, the algorithm concatenates all 64 regions to create a 3,776-bin spatially enhanced histogram. We reduce the dimensionality of this histogram using Linear Discriminant Analysis (LDA), which improves clustering and enables us to store an entire database of 53 subjects on-chip. During classification, the circuit applies LBP and LDA to each incoming IR image in real time, and compares the resulting feature vector to each pattern stored in the local database using the Manhattan distance. We implemented the circuit on a Xilinx Artix-7 XC7A100T FPGA and tested it with the UCHThermalFace database, which consists of 28 images (81 x 150 pixels) of each of 53 subjects in indoor and outdoor conditions. The circuit achieves a 98.6% hit ratio, trained with 16 images and tested with 12 images of each subject in the database. Using a 100 MHz clock, the circuit classifies 8,230 images per second and consumes only 309 mW.
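    The LBP and uniform-pattern steps are straightforward to prototype in software before committing them to hardware. The sketch below computes the standard 8-neighbour LBP code for every interior pixel and tests whether a code is "uniform" (at most two 0/1 transitions around the circle); the 59-bin histogram then keeps one bin per uniform code plus a single catch-all bin. Thresholding convention and bit ordering are assumptions of this sketch.

```python
import numpy as np

def lbp_image(img):
    """8-neighbour Local Binary Pattern code for every interior pixel of a
    2-D grayscale array; returns an (H-2, W-2) array of codes 0..255."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                  img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string has at most two
    0/1 transitions; 58 such codes plus one catch-all give the 59 bins."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```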

  10. CHOBS: Color Histogram of Block Statistics for Automatic Bleeding Detection in Wireless Capsule Endoscopy Video.

    PubMed

    Ghosh, Tonmoy; Fattah, Shaikh Anowarul; Wahid, Khan A

    2018-01-01

    Wireless capsule endoscopy (WCE) is the most advanced technology for visualizing the whole gastrointestinal (GI) tract in a non-invasive way. Its major drawback is the long review time, which is laborious because continuous manual intervention is necessary. In order to reduce the burden on the clinician, this paper proposes an automatic bleeding detection method for WCE video based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining local block features of the three color planes of RGB color space, an index value is defined. A color histogram extracted from those index values provides a distinguishable color-texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the already extracted local features, which adds no extra computational burden for feature extraction. From extensive experimentation on several WCE videos and 2300 images collected from a publicly available database, very satisfactory bleeding frame and zone detection performance is achieved in comparison with some of the existing methods. In the case of bleeding frame detection, the accuracy, sensitivity, and specificity obtained by the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and in the case of bleeding zone detection, a precision of 95.75% is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and it can effectively detect bleeding frames and zones in continuous WCE video data.
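
    The block-statistics idea can be sketched as follows; since the abstract does not give the exact index definition, the quantization of per-plane block means into a single index below is an assumption, and the PCA-based feature reduction and the classifier are omitted.

      import numpy as np

      def chobs_like_feature(rgb, block=5, levels=8):
          """Block means around every pixel in each RGB plane, quantized to a few
          levels and combined into a single index; the normalized histogram of the
          indices is the frame feature (index definition assumed, PCA step omitted)."""
          img = rgb.astype(float)
          h, w, _ = img.shape
          r = block // 2
          pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode='edge')
          means = np.zeros_like(img)
          for dy in range(block):                    # simple box filter over the block
              for dx in range(block):
                  means += pad[dy:dy + h, dx:dx + w]
          means /= block * block
          q = np.clip((means / 256.0 * levels).astype(int), 0, levels - 1)
          index = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
          hist = np.bincount(index.ravel(), minlength=levels ** 3).astype(float)
          return hist / hist.sum()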

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deptuch, Grzegorz W.; Carini, Gabriella; Enquist, Paul

    The vertically integrated photon imaging chip (VIPIC1) pixel detector is a stack consisting of a 500-μm-thick silicon sensor, a two-tier 34-μm-thick integrated circuit, and a host printed circuit board (PCB). The integrated circuit tiers were bonded using direct bonding technology with copper, and each tier features 1-μm-diameter through-silicon vias that were used for connections to the sensor on one side and to the host PCB on the other side. The 80-μm-pixel-pitch sensor was bonded to the integrated circuit using direct bonding technology with nickel. The stack was mounted on the board using Sn–Pb balls placed on a 320-μm pitch, yielding an entirely wire-bond-less structure. The analog front-end features a pulse response peaking below 250 ns, and the power consumption per pixel is 25 μW. We successfully completed the 3-D integration and report it here. Additionally, all pixels in the 64 × 64 matrix responded on well-bonded devices. Correct operation of the sparsified readout, allowing a single 153-ns bunch timing resolution, was confirmed in tests on a synchrotron beam of 10-keV X-rays. An equivalent noise charge of 36.2 e⁻ rms and a conversion gain of 69.5 μV/e⁻, with pixel-to-pixel variations of 2.6 e⁻ rms and 2.7 μV/e⁻ rms, respectively, were measured.

  12. Low-power priority Address-Encoder and Reset-Decoder data-driven readout for Monolithic Active Pixel Sensors for tracker system

    NASA Astrophysics Data System (ADS)

    Yang, P.; Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Gao, C.; Hillemanns, H.; Junique, A.; Kofarago, M.; Keil, M.; Kugathasan, T.; Kim, D.; Kim, J.; Lattuca, A.; Marin Tobon, C. A.; Marras, D.; Mager, M.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Siddhanta, S.; Usai, G.; van Hoorne, J. W.; Yi, J.

    2015-06-01

    Active Pixel Sensors used in high-energy particle physics require low power consumption to reduce the detector material budget, short integration time to reduce the possibility of pile-up, and fast readout to improve the detector data capability. To satisfy these requirements, a novel Address-Encoder and Reset-Decoder (AERD) asynchronous circuit for fast readout of a pixel matrix has been developed. The AERD data-driven readout architecture performs the address encoding and reset decoding based on an arbitration tree, and allows only the hit pixels to be read out. Compared to the traditional rolling-shutter readout structure in Monolithic Active Pixel Sensors (MAPS), AERD can achieve a low readout time and low power consumption, especially for low hit occupancies. The readout is controlled at the chip periphery with a signal synchronous with the clock, which allows good separation of digital and analogue signals in the matrix and a reduction of the power consumption. The AERD circuit has been implemented in the TowerJazz 180 nm CMOS Imaging Sensor (CIS) process with full complementary CMOS logic in the pixel. It works at 10 MHz with a matrix height of 15 mm. The energy consumed to read out one pixel is around 72 pJ. A scheme to boost the readout speed to 40 MHz is also discussed. The sensor chip equipped with AERD has been produced and characterised. Test results, including electrical and beam measurements, are presented.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng

    Purpose: In using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by changes in detector entrance exposure levels or beam hardening, and they are not accounted for by conventional flat field correction methods. In this work, the authors present a method to identify isolated pixel clusters that exhibit gain variations and propose a pixel gain correction (PGC) method to suppress both beam-hardening and exposure-level-dependent gain variations. Methods: To modulate both the beam spectrum and the entrance exposure, flood-field FPD projections were acquired using beam filters of varying thickness. “Ideal” pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratios of ideal to measured pixel values in the filtered images were utilized as pixel-specific gain correction factors, referred to as the PGC method, and were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels led to clustered image artifacts in flat field corrected projections, and 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of the PGC method was tested in a variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed-pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with the proposed artifact detection approach. When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of change in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variations.
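
    A minimal sketch of the look-up-table idea is shown below, assuming flood-field images acquired with several filter thicknesses; the polynomial order, LUT layout, and linear interpolation between table entries are assumptions rather than the authors' exact implementation.

      import numpy as np

      def smooth_ideal(image, order=4):
          """Estimate 'ideal' flood-field values with a low-order 2-D polynomial fit."""
          h, w = image.shape
          yy, xx = np.mgrid[0:h, 0:w]
          x, y = xx.ravel() / w, yy.ravel() / h
          terms = [(x ** i) * (y ** j) for i in range(order + 1) for j in range(order + 1 - i)]
          A = np.stack(terms, axis=1)
          coef, *_ = np.linalg.lstsq(A, image.ravel().astype(float), rcond=None)
          return (A @ coef).reshape(h, w)

      def build_gain_lut(flood_stack):
          """For each pixel, tabulate the gain factor (ideal / measured) against the
          measured value, one table row per beam-filter thickness."""
          n = flood_stack.shape[0]
          measured = flood_stack.reshape(n, -1).astype(float)      # (n_filters, n_pixels)
          ideal = np.stack([smooth_ideal(f) for f in flood_stack]).reshape(n, -1)
          gains = ideal / measured
          order = np.argsort(measured, axis=0)                     # sort each pixel's LUT
          return np.take_along_axis(measured, order, 0), np.take_along_axis(gains, order, 0)

      def apply_pgc(projection, lut_values, lut_gains):
          """Correct a projection by interpolating each pixel's gain from its own LUT."""
          flat = projection.ravel().astype(float)
          corrected = np.empty_like(flat)
          for p in range(flat.size):                               # slow reference loop
              corrected[p] = flat[p] * np.interp(flat[p], lut_values[:, p], lut_gains[:, p])
          return corrected.reshape(projection.shape)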

  14. Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications

    PubMed Central

    Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2016-01-01

    We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 μm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 μm in x,y-plane and ~0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 μm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 μm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods. PMID:23079763

  15. Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications.

    PubMed

    Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2012-11-21

    We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 µm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 µm in x,y-plane and ∼0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 µm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 µm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods.

  16. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods. 10; Chapter

    NASA Technical Reports Server (NTRS)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps for analyzing the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They range from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification steps. Within this chapter, each of the four approaches is described in terms of its scale and accuracy in classifying urban land use and land cover, and in terms of its range of urban applications. We present an overview of the four main classification groups in Figure 1, while Table 1 details the approaches with respect to classification requirements and procedures (e.g., reflectance conversion, steps before training sample selection, training samples, spatial approaches commonly used, classifiers, primary inputs for classification, output structures, number of output layers, and accuracy assessment). The chapter concludes with a brief summary of the methods reviewed and the challenges that remain in developing new classification methods for improving the efficiency and accuracy of mapping urban areas.
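
    As a toy illustration of the per-pixel approach, the snippet below labels each pixel independently from its NDVI value; the thresholds and class codes are purely illustrative and are not taken from the chapter.

      import numpy as np

      def ndvi(nir, red, eps=1e-6):
          """Per-pixel NDVI = (NIR - Red) / (NIR + Red)."""
          return (nir - red) / (nir + red + eps)

      def per_pixel_classes(nir, red, veg_thresh=0.3, water_thresh=0.0):
          """Label every pixel independently from its NDVI: 0 = water,
          1 = vegetation, 2 = built-up/bare. Thresholds are illustrative only."""
          index = ndvi(nir.astype(float), red.astype(float))
          classes = np.full(index.shape, 2, dtype=np.uint8)
          classes[index > veg_thresh] = 1
          classes[index < water_thresh] = 0
          return classes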

  17. Rolling Band Artifact Flagging in the Kepler Data Pipeline

    NASA Astrophysics Data System (ADS)

    Clarke, Bruce; Kolodziejczak, Jeffery J; Caldwell, Douglas A.

    2014-06-01

    Instrument-induced artifacts in the raw Kepler pixel data include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, manifestations of drifting moiré pattern as locally correlated nonstationary noise and rolling bands in the images. These systematics find their way into the calibrated pixel time series and ultimately into the target flux time series. The Kepler pipeline module Dynablack models the FGS crosstalk artifacts using a combination of raw science pixel data, full frame images, reverse-clocked pixel data and ancillary temperature data. The calibration module (CAL) uses the fitted Dynablack models to remove FGS crosstalk artifacts in the calibrated pixels by adjusting the black level correction per cadence. Dynablack also detects and flags spatial regions and time intervals of strong time-varying black-level. These rolling band artifact (RBA) flags are produced on a per row per cadence basis by searching for transit signatures in the Dynablack fit residuals. The Photometric Analysis module (PA) generates per target per cadence data quality flags based on the Dynablack RBA flags. Proposed future work includes using the target data quality flags as a basis for de-weighting in the Presearch Data Conditioning (PDC), Transiting Planet Search (TPS) and Data Validation (DV) pipeline modules. We discuss the effectiveness of RBA flagging for downstream users and illustrate with some affected light curves. We also discuss the implementation of Dynablack in the Kepler data pipeline and present results regarding the improvement in calibrated pixels and the expected improvement in cotrending performance as a result of including FGS corrections in the calibration. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.

  18. SU-C-206-03: Metal Artifact Reduction in X-Ray Computed Tomography Based On Local Anatomical Similarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Yang, X; Rosenfield, J

    Purpose: Metal implants such as orthopedic hardware and dental fillings cause severe bright and dark streaking in reconstructed CT images. These artifacts decrease image contrast and degrade HU accuracy, leading to inaccuracies in target delineation and dose calculation. Additionally, such artifacts negatively impact patient set-up in image guided radiation therapy (IGRT). In this work, we propose a novel method for metal artifact reduction which utilizes the anatomical similarity between neighboring CT slices. Methods: Neighboring CT slices show similar anatomy. Based on this anatomical similarity, the proposed method replaces corrupted CT pixels with pixels from adjacent, artifact-free slices. A gamma map, which is the weighted summation of relative HU error and distance error, is calculated for each pixel in the artifact-corrupted CT image. The minimum value in each pixel’s gamma map is used to identify a pixel from the adjacent CT slice to replace the corresponding artifact-corrupted pixel. This replacement only occurs if the minimum value in a particular pixel’s gamma map is larger than a threshold. The proposed method was evaluated with clinical images. Results: Highly attenuating dental fillings and hip implants cause severe streaking artifacts on CT images. The proposed method eliminates the dark and bright streaking and improves the implant delineation and visibility. In particular, the image non-uniformity in the central region of interest was reduced from 1.88 and 1.01 to 0.28 and 0.35, respectively. Further, the mean CT HU error was reduced from 328 HU and 460 HU to 60 HU and 36 HU, respectively. Conclusions: The proposed metal artifact reduction method replaces corrupted image pixels with pixels from neighboring slices that are free of metal artifacts. This method proved capable of suppressing streaking artifacts, improving HU accuracy and image detectability.
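
    The slice-similarity replacement can be sketched as below; the gamma weights, normalization constants, search window, and threshold are assumptions, and the reading that a large minimum gamma marks a corrupted pixel is an interpretation of the abstract rather than the authors' stated implementation.

      import numpy as np

      def gamma_map_replace(slice_cur, slice_adj, w_hu=0.5, w_dist=0.5, window=5,
                            hu_norm=200.0, dist_norm=5.0, gamma_thresh=1.0):
          """For every pixel, compare it against a small window of the adjacent
          artifact-free slice with a gamma value mixing relative HU error and
          distance error; if even the best (minimum) gamma exceeds the threshold,
          the pixel is treated as corrupted and replaced by that best-matching pixel."""
          out = slice_cur.astype(float).copy()
          h, w = slice_cur.shape
          r = window // 2
          for i in range(h):
              for j in range(w):
                  i0, i1 = max(0, i - r), min(h, i + r + 1)
                  j0, j1 = max(0, j - r), min(w, j + r + 1)
                  patch = slice_adj[i0:i1, j0:j1].astype(float)
                  ii, jj = np.mgrid[i0:i1, j0:j1]
                  dist = np.hypot(ii - i, jj - j)
                  gamma = (w_hu * np.abs(patch - slice_cur[i, j]) / hu_norm
                           + w_dist * dist / dist_norm)
                  best = np.unravel_index(np.argmin(gamma), gamma.shape)
                  if gamma[best] > gamma_thresh:
                      out[i, j] = patch[best]
          return out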

  19. Maps of the Martian Landing Sites and Rover Traverses: Viking 1 and 2, Mars Pathfinder, and Phoenix Landers, and the Mars Exploration Rovers.

    NASA Astrophysics Data System (ADS)

    Parker, T. J.; Calef, F. J., III; Deen, R. G.; Gengl, H.

    2016-12-01

    The traverse maps produced tactically for the MER and MSL rover missions are the first step in placing the observations made by each vehicle into a local and regional geologic context. For the MER, Phoenix, and MSL missions, 25 cm/pixel HiRISE data are available for accurately localizing the vehicles. Viking and Mars Pathfinder, however, relied on Viking Orbiter images of several tens of m/pixel to triangulate to horizon features visible both from the ground and from orbit. After Pathfinder, MGS MOC images became available for these landing sites, enabling much better correlations to horizon features and localization predictions, which were then corroborated with HiRISE images beginning 9 years ago. By combining topography data from MGS, Mars Express, and stereo processing of MRO CTX and HiRISE images into orthomosaics (ORRs) and digital elevation models (DEMs), it is possible to localize all the lander and rover positions to an accuracy of a few tens of meters with respect to the Mars global control net, and to better than half a meter with respect to other features within a HiRISE orthomosaic. JPL's MIPL produces point clouds of the MER Navcam stereo images that can be processed into 1 cm/pixel ORR/DEMs, which are then georeferenced to a HiRISE/CTX base map and DEM. This allows compilation of seamless mosaics of the lander and rover camera-based ORR/DEMs with the HiRISE ORR/DEM that can be viewed in three dimensions with GIS programs with that capability. We are re-processing the Viking Lander, Mars Pathfinder, and Phoenix lander data to allow similar ORR/DEM products to be made for those missions. For the fixed landers and Spirit, we will compile merged surface/CTX/HiRISE ORR/DEMs that will enable accurate local and regional mapping of these landing sites and allow comparisons of the results from these missions with current and future surface missions.

  20. A sea-land segmentation algorithm based on multi-feature fusion for a large-field remote sensing image

    NASA Astrophysics Data System (ADS)

    Li, Jing; Xie, Weixin; Pei, Jihong

    2018-03-01

    Sea-land segmentation is one of the key technologies of sea target detection in remote sensing images. Existing algorithms suffer from low accuracy, low universality, and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images that also removes islands. Firstly, the coastline data are extracted and all of the land area is labeled using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture, and local gradient mean) are extracted in the sea-land border area and combined into a 3-D feature vector. A multi-Gaussian model is then adopted to describe the 3-D feature vectors of the sea background near the coastline. Based on this multi-Gaussian sea-background model, the sea pixels and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain the accurate sea-land segmentation. Subjective visual comparison and analysis of the experimental results shows that the proposed method has high segmentation accuracy, wide applicability, and strong robustness to disturbance.
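
    A simplified sketch of the coastline-band refinement is given below, using scikit-learn's GaussianMixture as the multi-Gaussian sea-background model; the window size, number of components, likelihood threshold, and the texture proxy (local standard deviation) are assumptions.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def local_features(gray, win=9):
          """Local entropy, local standard deviation (texture proxy) and local
          gradient mean per pixel over a win x win neighbourhood (plain reference
          loops, border pixels left at zero)."""
          h, w = gray.shape
          r = win // 2
          gy, gx = np.gradient(gray.astype(float))
          grad = np.hypot(gx, gy)
          feats = np.zeros((h, w, 3))
          for i in range(r, h - r):
              for j in range(r, w - r):
                  patch = gray[i - r:i + r + 1, j - r:j + r + 1]
                  hist, _ = np.histogram(patch, bins=32, range=(0, 256))
                  p = hist[hist > 0] / hist.sum()
                  feats[i, j, 0] = -(p * np.log2(p)).sum()          # local entropy
                  feats[i, j, 1] = patch.std()                      # texture proxy
                  feats[i, j, 2] = grad[i - r:i + r + 1, j - r:j + r + 1].mean()
          return feats

      def refine_near_coast(gray, sea_mask_coarse, coast_band,
                            n_components=3, log_lik_thresh=-10.0):
          """Fit a multi-Gaussian model to sea-background feature vectors taken
          from the coarse sea mask, then relabel pixels inside the coastline band:
          a pixel stays 'sea' only if its features are likely under the sea model."""
          f = local_features(gray)
          gmm = GaussianMixture(n_components=n_components).fit(f[sea_mask_coarse])
          refined = sea_mask_coarse.copy()
          refined[coast_band] = gmm.score_samples(f[coast_band]) > log_lik_thresh
          return refined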

  1. Eulerian frequency analysis of structural vibrations from high-speed video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venanzoni, Andrea; Siemens Industry Software NV, Interleuvenlaan 68, B-3001 Leuven; De Ryck, Laurent

    An approach for the analysis of the frequency content of structural vibrations from high-speed video recordings is proposed. The techniques and tools proposed rely on an Eulerian approach, that is, using the time history of pixels independently to analyse structural motion, as opposed to Lagrangian approaches, where the motion of the structure is tracked in time. The starting point is an existing Eulerian motion magnification method, which consists in decomposing the video frames into a set of spatial scales through a so-called Laplacian pyramid [1]. Each scale — or level — can be amplified independently to reconstruct a magnified motion of the observed structure. The approach proposed here provides two analysis tools or pre-amplification steps. The first tool provides a representation of the global frequency content of a video per pyramid level. This may be further enhanced by applying an angular filter in the spatial frequency domain to each frame of the video before the Laplacian pyramid decomposition, which allows for the identification of the frequency content of the structural vibrations in a particular direction of space. This proposed tool complements the existing Eulerian magnification method by amplifying selectively the levels containing relevant motion information with respect to their frequency content. This magnifies the displacement while limiting the noise contribution. The second tool is a holographic representation of the frequency content of a vibrating structure, yielding a map of the predominant frequency components across the structure. In contrast to the global frequency content representation of the video, this tool provides a local analysis of the periodic gray scale intensity changes of the frame in order to identify the vibrating parts of the structure and their main frequencies. Validation cases are provided and the advantages and limits of the approaches are discussed. The first validation case consists of the frequency content retrieval of the tip of a shaker, excited at selected fixed frequencies. The goal of this setup is to retrieve the frequencies at which the tip is excited. The second validation case consists of two thin metal beams connected to a randomly excited bar. It is shown that the holographic representation visually highlights the predominant frequency content of each pixel and locates the global frequencies of the motion, thus retrieving the natural frequencies for each beam.
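
    The Eulerian idea of analysing pixel time histories independently can be reduced to a few lines; the sketch below skips the Laplacian-pyramid decomposition and angular filtering and simply maps the dominant temporal frequency of each raw pixel, as an illustration rather than the authors' full tool chain.

      import numpy as np

      def dominant_frequency_map(frames, fps):
          """Eulerian-style sketch: FFT each pixel's intensity time history and
          return (i) a map of the dominant frequency per pixel in Hz and (ii) the
          frame-averaged amplitude spectrum with its frequency axis."""
          video = np.asarray(frames, dtype=float)            # (n_frames, h, w)
          video = video - video.mean(axis=0, keepdims=True)  # remove the static scene
          spec = np.abs(np.fft.rfft(video, axis=0))          # per-pixel amplitude spectra
          freqs = np.fft.rfftfreq(video.shape[0], d=1.0 / fps)
          dominant = freqs[np.argmax(spec[1:], axis=0) + 1]  # skip the DC bin
          return dominant, spec.mean(axis=(1, 2)), freqs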

  2. Automated artery-venous classification of retinal blood vessels based on structural mapping method

    NASA Astrophysics Data System (ADS)

    Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.

    2012-03-01

    Retinal blood vessels show morphologic modifications in response to various retinopathies. However, the specific responses exhibited by arteries and veins may provide more precise diagnostic information; e.g., diabetic retinopathy may be detected more accurately from venous dilatation than from average vessel dilatation. In order to analyze vessel-type-specific morphologic modifications, classification of a vessel network into arteries and veins is required. We previously described a method for identification and separation of retinal vessel trees, i.e., structural mapping. We therefore propose artery-venous classification based on structural mapping and on the identification of color properties characteristic of the vessel types. The mean and standard deviation of the green channel intensity and of the hue channel intensity are analyzed in a region of interest around each centerline pixel of a vessel. Using the vector of color properties extracted from each centerline pixel, the pixel is classified into one of two clusters (artery and vein) obtained by fuzzy C-means clustering. According to the proportion of clustered centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each vessel is assigned a label of artery or vein. The classification results are compared with the manually annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus images, resulting in an accuracy of 88.28% correctly classified vessel pixels. The automated classification results match well with the gold standard, suggesting its potential in artery-venous classification and the respective morphology analysis.
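
    A compact fuzzy C-means sketch for the centerline colour features is given below; the cluster-to-artery assignment and the crossing-property refinement described in the abstract are not included, and the feature layout (one colour vector per centerline pixel) is an assumption.

      import numpy as np

      def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
          """Minimal fuzzy C-means: X holds one colour-feature vector per centerline
          pixel; returns the membership matrix U (n x c) and the cluster centres."""
          rng = np.random.default_rng(seed)
          U = rng.random((X.shape[0], c))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              Um = U ** m
              centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
              w = d ** (-2.0 / (m - 1.0))
              U = w / w.sum(axis=1, keepdims=True)
          return U, centres

      def label_vessel(U, centreline_rows):
          """Majority vote over a vessel's centerline pixels: returns the cluster
          index (0 or 1); deciding which cluster is 'artery' requires inspecting
          the cluster-centre colours."""
          votes = U[centreline_rows].argmax(axis=1)
          return int(np.round(votes.mean()))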

  3. Optical sectioning in wide-field microscopy obtained by dynamic structured light illumination and detection based on a smart pixel detector array.

    PubMed

    Mitić, Jelena; Anhut, Tiemo; Meier, Matthias; Ducros, Mathieu; Serov, Alexander; Lasser, Theo

    2003-05-01

    Optical sectioning in wide-field microscopy is achieved by illumination of the object with a continuously moving single-spatial-frequency pattern and detecting the image with a smart pixel detector array. This detector performs an on-chip electronic signal processing that extracts the optically sectioned image. The optically sectioned image is directly observed in real time without any additional postprocessing.

  4. Viewing-zone enlargement method for sampled hologram that uses high-order diffraction.

    PubMed

    Mishina, Tomoyuki; Okui, Makoto; Okano, Fumio

    2002-03-10

    We demonstrate a method of enlarging the viewing zone for holograms that have a pixel structure. First, the aliasing generated by the pixel sampling of a hologram is described. Next, the high-order diffracted beams reproduced from a hologram containing aliasing are explained. Finally, we show that the viewing zone can be enlarged by combining these high-order reconstructed beams from the hologram with aliasing.

  5. Bi-cubic interpolation for shift-free pan-sharpening

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano

    2013-12-01

    Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels with odd lengths that implement piecewise local polynomials, typically linear or cubic interpolation functions. Conversely, constant (i.e., nearest-neighbour) and quadratic kernels, implementing zero- and two-degree polynomials, respectively, introduce shifts in the magnified images that are sub-pixel in the case of interpolation by an even factor, as is most usual. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even lengths may be exploited to compensate for the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degrees are feasible with linear-phase kernels of even lengths. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
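
    The half-pixel-aware expansion can be illustrated in one dimension as below, using Keys' cubic convolution kernel; the border handling and the expansion ratio are illustrative, and the even-length filter bank of the paper is only mimicked by evaluating four taps around each shifted output position.

      import numpy as np

      def keys_cubic(x, a=-0.5):
          """Keys' cubic convolution kernel (the usual bicubic interpolant)."""
          x = np.abs(x)
          out = np.zeros_like(x)
          near, far = x <= 1, (x > 1) & (x < 2)
          out[near] = (a + 2) * x[near] ** 3 - (a + 3) * x[near] ** 2 + 1
          out[far] = a * (x[far] ** 3 - 5 * x[far] ** 2 + 8 * x[far] - 4)
          return out

      def expand_1d_centred(signal, ratio=4):
          """Cubic expansion by an even ratio on a grid whose output pixel centres
          are offset by half an output pixel, mimicking the MS-to-Pan alignment:
          every output sample falls strictly between input samples, so each one is
          built from 4 taps (an even number) and no half-pixel shift remains."""
          signal = np.asarray(signal, dtype=float)
          pos = (np.arange(signal.size * ratio) + 0.5) / ratio - 0.5  # input coords
          out = np.empty(pos.size)
          for i, p in enumerate(pos):
              k0 = int(np.floor(p)) - 1
              taps = np.arange(k0, k0 + 4)
              weights = keys_cubic(p - taps)
              taps = np.clip(taps, 0, signal.size - 1)   # replicate the borders
              out[i] = np.dot(weights, signal[taps])
          return out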

  6. New concept of a submillimetric pixellated Silicon detector for intracerebral application

    NASA Astrophysics Data System (ADS)

    Benoit, M.; Märk, J.; Weiss, P.; Benoit, D.; Clemens, J. C.; Fougeron, D.; Janvier, B.; Jevaud, M.; Karkar, S.; Menouni, M.; Pain, F.; Pinot, L.; Morel, C.; Laniece, P.

    2011-12-01

    A new beta+ radiosensitive microprobe, implantable in the rodent brain and dedicated to in vivo and autonomous measurements of local time-activity curves of beta radiotracers in a volume of brain tissue of a few mm³, has been developed recently. This project expands the concept of the previously designed beta microprobe, which has been validated extensively in neurobiological experiments performed on anesthetized animals. Because of its limitations for recordings on awake and freely moving animals, we have proposed to develop a wireless setup that can be worn by an animal without constraining its movements. To that aim, we have chosen a highly beta-sensitive silicon-based detector to devise a compact pixellated probe. Miniaturized wireless electronics is used to read out and transfer the measurement data. Initial Monte Carlo simulations showed that high-resistivity silicon pixels are appropriate for this purpose, with their dimensions to be adapted to our specific signals. More precisely, we demonstrated that 200 μm thick pixels with an area of 200 μm × 500 μm are optimized in terms of beta+ sensitivity versus relative transparency to the gamma background. Based on this theoretical study, we now present the development of the novel sensor, including system simulations with technology computer-assisted design (TCAD) to investigate specific configurations of guard rings and their potential to increase the electrical isolation and stabilization of the pixel, as well as the corresponding physical tests to validate the particular geometries of this new sensor.

  7. Surface topography of 1€ coin measured by stereo-PIXE

    NASA Astrophysics Data System (ADS)

    Gholami-Hatam, E.; Lamehi-Rachti, M.; Vavpetič, P.; Grlj, N.; Pelicon, P.

    2013-07-01

    We demonstrate the stereo-PIXE method by measuring the surface topography of the relief details on a 1€ coin. Two X-ray elemental maps were recorded simultaneously by two X-ray detectors positioned on the left and right sides of the proton microbeam. The asymmetry of the yields in the pixels of the two X-ray maps arises from the different photon attenuation along the exit paths of the characteristic X-rays from the point of emission through the sample to the two X-ray detectors. In order to calibrate the inclination angle against the X-ray asymmetry, a flat inclined-surface model was first applied to a sample with known matrix composition and depth elemental concentration profile. The yield asymmetry in each image pixel was then converted into the corresponding local inclination angle using the calculated dependence of the asymmetry on the surface inclination. Finally, the quantitative topography profile was obtained by integrating the local inclination angle over the lateral displacement of the probing beam.

  8. Operating organic light-emitting diodes imaged by super-resolution spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, John T.; Granick, Steve

    Super-resolution stimulated emission depletion (STED) microscopy is adapted here for materials characterization that would not otherwise be possible. With the example of organic light-emitting diodes (OLEDs), spectral imaging with pixel-by-pixel wavelength discrimination allows us to resolve local-chain environment encoded in the spectral response of the semi-conducting polymer, and correlate chain packing with local electroluminescence by using externally applied current as the excitation source. We observe nanoscopic defects that would be unresolvable by traditional microscopy. They are revealed in electroluminescence maps in operating OLEDs with 50 nm spatial resolution. We find that brightest emission comes from regions with more densely packed chains. Conventional microscopy of an operating OLED would lack the resolution needed to discriminate these features, while traditional methods to resolve nanoscale features generally cannot be performed when the device is operating. As a result, this points the way towards real-time analysis of materials design principles in devices as they actually operate.

  9. Operating organic light-emitting diodes imaged by super-resolution spectroscopy

    DOE PAGES

    King, John T.; Granick, Steve

    2016-06-21

    Super-resolution stimulated emission depletion (STED) microscopy is adapted here for materials characterization that would not otherwise be possible. With the example of organic light-emitting diodes (OLEDs), spectral imaging with pixel-by-pixel wavelength discrimination allows us to resolve local-chain environment encoded in the spectral response of the semi-conducting polymer, and correlate chain packing with local electroluminescence by using externally applied current as the excitation source. We observe nanoscopic defects that would be unresolvable by traditional microscopy. They are revealed in electroluminescence maps in operating OLEDs with 50 nm spatial resolution. We find that brightest emission comes from regions with more densely packed chains. Conventional microscopy of an operating OLED would lack the resolution needed to discriminate these features, while traditional methods to resolve nanoscale features generally cannot be performed when the device is operating. As a result, this points the way towards real-time analysis of materials design principles in devices as they actually operate.

  10. 3D Spatial and Spectral Fusion of Terrestrial Hyperspectral Imagery and Lidar for Hyperspectral Image Shadow Restoration Applied to a Geologic Outcrop

    NASA Astrophysics Data System (ADS)

    Hartzell, P. J.; Glennie, C. L.; Hauser, D. L.; Okyay, U.; Khan, S.; Finnegan, D. C.

    2016-12-01

    Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from an exclusively airborne technique to terrestrial modalities. This enables high resolution 3D spatial and spectral quantification of vertical geologic structures for applications such as virtual 3D rock outcrop models for hydrocarbon reservoir analog analysis and mineral quantification in open pit mining environments. In contrast to airborne observation geometry, the vertical surfaces observed by horizontal-viewing terrestrial HSI sensors are prone to extensive topography-induced solar shadowing, which leads to reduced pixel classification accuracy or outright removal of shadowed pixels from analysis tasks. Using a precisely calibrated and registered offset cylindrical linear array camera model, we demonstrate the use of 3D lidar data for sub-pixel HSI shadow detection and the restoration of the shadowed pixel spectra via empirical methods that utilize illuminated and shadowed pixels of similar material composition. We further introduce a new HSI shadow restoration technique that leverages collocated backscattered lidar intensity, which is resistant to solar conditions, obtained by projecting the 3D lidar points through the HSI camera model into HSI pixel space. Using ratios derived from the overlapping lidar laser and HSI wavelengths, restored shadow pixel spectra are approximated using a simple scale factor. Simulations of multiple lidar wavelengths, i.e., multi-spectral lidar, indicate the potential for robust HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance is quantified through HSI pixel classification consistency between full sun and partial sun exposures of a single geologic outcrop.
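
    The scale-factor restoration idea can be sketched as follows; the linear sunlit regression used to predict radiance at the lidar band from lidar intensity, and the band index itself, are assumptions standing in for the empirical mapping described above.

      import numpy as np

      def restore_shadow_spectra(hsi, shadow_mask, lidar_intensity, lidar_band):
          """Scale-factor restoration: lidar backscatter is insensitive to solar
          shadowing, so the ratio between the radiance a shadowed HSI pixel should
          have at the lidar wavelength (predicted from lidar intensity) and the
          radiance it actually has gives a per-pixel scale applied to the whole
          spectrum."""
          hsi = hsi.astype(float)
          sunlit = ~shadow_mask
          # Stand-in empirical mapping lidar intensity -> radiance at the lidar band,
          # fitted on sunlit pixels only (simple linear fit).
          a, b = np.polyfit(lidar_intensity[sunlit], hsi[sunlit, lidar_band], 1)
          expected = a * lidar_intensity + b
          scale = expected / np.maximum(hsi[..., lidar_band], 1e-6)
          out = hsi.copy()
          out[shadow_mask] *= scale[shadow_mask][:, None]
          return out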

  11. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Zhang, Zhibo; Ackerman, Steven A.; Maddux, Brent

    2012-01-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1-D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  12. Plasma-panel based detectors

    NASA Astrophysics Data System (ADS)

    Friedman, Peter

    2017-09-01

    The plasma panel sensor (PPS) is a novel micropattern gas detector inspired by plasma display panels (PDPs), the core component of plasma-TVs. A PDP comprises millions of discrete cells per square meter, each of which, when provided with a signal pulse, can initiate and sustain a plasma discharge. Configured as a detector, a pixel or cell is biased to discharge when a free electron is generated in the gas. The PPS consists of an array of small plasma discharge pixels, and can be configured to have either an ``open-cell'' or ``closed-cell'' structure, operating with high gain in the Geiger region. We describe both configurations and their application to particle physics. The open-cell PPS lends itself to ultra-low-mass, ultrathin structures, whereas the closed-cell microhexcavity PPS is capable of higher performance. For the ultrathin-PPS, we are fabricating 3-inch devices based on two types of extremely thin, inorganic, transparent substrate materials: one being 8-10 µm thick, and the other 25-27 µm thick. These gas-filled ultrathin devices are designed to operate in a beam-line vacuum environment, yet must be hermetically sealed and gas-filled in an ambient environment at atmospheric pressure. We have successfully fabricated high resolution, submillimeter pixel electrodes on both types of ultrathin substrates. We will also report on the fabrication, staging and operation of the first microhexcavity detectors (µH-PPS). The first µH-PPS prototype devices have a 16 by 16 matrix of close-packed hexagonal pixels, each having a 2 mm width. Initial tests of these detectors, conducted with Ne-based gases at atmospheric pressure, indicate that each pixel responds independently of its neighboring cells, producing volt-level pulse amplitudes in response to ionizing radiation. Results will include the hit rate response to a radioactive beta source, cosmic ray muons, the background from spontaneous discharge, pixel isolation and uniformity, and efficiency measurements. This work was funded in part by a DOE Office of Nuclear Physics SBIR Phase-II Grant.

  13. Structural geology of Amazonian-aged layered sedimentary deposits in southwest Candor Chasma, Mars

    USGS Publications Warehouse

    Okubo, C.H.

    2010-01-01

    The structural geology of an outcropping of layered sedimentary deposits in southwest Candor Chasma is mapped using two adjacent high-resolution (1 m/pixel) HiRISE digital elevation models and orthoimagery. Analysis of these structural data yields new insight into the depositional and deformational history of these deposits. Bedding in non-deformed areas generally dips toward the center of west Candor Chasma, suggesting that these deposits are basin-filling sediments. Numerous kilometer-scale faults and folds characterize the deformation here. Normal faults of the requisite orientation and length for chasma-related faulting are not observed, indicating that the local sediments accumulated after chasma formation had largely ceased in this area. The cause of the observed deformation is attributed to landsliding within these sedimentary deposits. Observed crosscutting relationships indicate that a population of sub-vertical joints are the youngest deformational structures in the area. The distribution of strain amongst these joints, and an apparently youthful infill of sediment, suggests that these fractures have been active in the recent past. The source of the driving stress acting on these joints has yet to be fully constrained, but the joint orientations are consistent with minor subsidence within west Candor Chasma.

  14. Improving Measurement of Forest Structural Parameters by Co-Registering of High Resolution Aerial Imagery and Low Density LiDAR Data

    PubMed Central

    Huang, Huabing; Gong, Peng; Cheng, Xiao; Clinton, Nick; Li, Zengyuan

    2009-01-01

    Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for the measurement of forest structural parameters; however, the accuracy of crown width extraction is not satisfactory when using low density LiDAR, especially in forests with high canopy cover. We used high resolution aerial imagery together with a low density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from the LiDAR data. The LiDAR camera image is matched to the aerial image with an automated keypoint search algorithm. As a result, a high registration accuracy of 0.5 pixels was obtained. A local maximum filter, watershed segmentation, and object-oriented image segmentation are used to obtain tree height and crown width. Results indicate that the camera data collected by the integrated LiDAR system play an important role in registration with the aerial imagery. The synthesis with aerial imagery increases the accuracy of forest structural parameter extraction compared to using the low density LiDAR data alone. PMID:22573971
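
    Treetop detection and crown delineation from a CHM along the lines described above can be sketched with SciPy/scikit-image; the smoothing, window size, and minimum-height parameters are illustrative.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.segmentation import watershed

      def crowns_from_chm(chm, min_height=2.0, window=5):
          """Treetop detection (local maxima of a smoothed CHM) followed by crown
          delineation (watershed on the inverted CHM). Returns the crown label
          image and the height of each detected tree."""
          smooth = ndi.gaussian_filter(chm.astype(float), sigma=1)
          tops = (smooth == ndi.maximum_filter(smooth, size=window)) & (smooth > min_height)
          markers, n_trees = ndi.label(tops)
          crowns = watershed(-smooth, markers, mask=smooth > min_height)
          heights = ndi.maximum(smooth, labels=crowns, index=np.arange(1, n_trees + 1))
          return crowns, heights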

  15. Parcels versus pixels: modeling agricultural land use across broad geographic regions using parcel-based field boundaries

    USGS Publications Warehouse

    Sohl, Terry L.; Dornbierer, Jordan; Wika, Steve; Sayler, Kristi L.; Quenzer, Robert

    2017-01-01

    Land use and land cover (LULC) change occurs at a local level within contiguous ownership and management units (parcels), yet LULC models primarily use pixel-based spatial frameworks. The few parcel-based models in use overwhelmingly focus on small geographic areas, limiting the ability to assess LULC change impacts at regional to national scales. We developed a modified version of the Forecasting Scenarios of land use change model to project parcel-based agricultural change across a large region in the United States Great Plains. An agricultural biofuel scenario was modeled from 2012 to 2030, using real parcel boundaries based on contiguous ownership and land management units. The resulting LULC projection provides a vastly improved representation of landscape pattern over existing pixel-based models, while simultaneously providing an unprecedented combination of thematic detail and broad geographic extent. The conceptual approach is practical and scalable, with potential use for national-scale projections.

  16. Controlling bridging and pinching with pixel-based mask for inverse lithography

    NASA Astrophysics Data System (ADS)

    Kobelkov, Sergey; Tritchkov, Alexander; Han, JiWan

    2016-03-01

    Inverse Lithography Technology (ILT) has become a viable computational lithography candidate in recent years, as it can produce mask output that results in process latitude and CD control in the fab that are hard to match with conventional OPC/SRAF insertion approaches. An approach to solving the inverse lithography problem as a nonlinear, constrained minimization problem over a domain of mask pixels was suggested in the 2006 paper by Y. Granik, "Fast pixel-based mask optimization for inverse lithography". The present paper extends this method to satisfy bridging and pinching constraints imposed on print contours. Specifically, objective functions expressing a penalty for constraint violations are proposed, and their minimization with gradient-descent methods is considered. This approach has been tested with an ILT-based Local Printability Enhancement (LPTM) tool in an automated flow to eliminate hotspots that can be present on the full chip after conventional SRAF placement/OPC, and it has been applied in 14 nm and 10 nm node production, single- and multiple-patterning flows.

  17. High dynamic range bio-molecular ion microscopy with the Timepix detector.

    PubMed

    Jungmann, Julia H; MacAleese, Luke; Visser, Jan; Vrakking, Marc J J; Heeren, Ron M A

    2011-10-15

    Highly parallel, active pixel detectors enable novel detection capabilities for large biomolecules in time-of-flight (TOF) based mass spectrometry imaging (MSI). In this work, a 512 × 512 pixel, bare Timepix assembly combined with chevron microchannel plates (MCP) captures time-resolved images of several m/z species in a single measurement. Mass-resolved ion images from Timepix measurements of peptide and protein standards demonstrate the capability to return both mass-spectral and localization information of biologically relevant analytes from matrix-assisted laser desorption ionization (MALDI) on a commercial ion microscope. The use of a MCP-Timepix assembly delivers an increased dynamic range of several orders of magnitude. The Timepix returns defined mass spectra already at subsaturation MCP gains, which prolongs the MCP lifetime and allows the gain to be optimized for image quality. The Timepix peak resolution is only limited by the resolution of the in-pixel measurement clock. Oligomers of the protein ubiquitin were measured up to 78 kDa. © 2011 American Chemical Society

  18. Note: A disposable x-ray camera based on mass produced complementary metal-oxide-semiconductor sensors and single-board computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoidn, Oliver R.; Seidler, Gerald T., E-mail: seidler@uw.edu

    We have integrated mass-produced commercial complementary metal-oxide-semiconductor (CMOS) image sensors and off-the-shelf single-board computers into an x-ray camera platform optimized for acquisition of x-ray spectra and radiographs at energies of 2–6 keV. The CMOS sensor and single-board computer are complemented by custom mounting and interface hardware that can be easily acquired from rapid prototyping services. For single-pixel detection events, i.e., events where the deposited energy from one photon is substantially localized in a single pixel, we establish ∼20% quantum efficiency at 2.6 keV with ∼190 eV resolution and a 100 kHz maximum detection rate. The detector platform’s useful intrinsic energy resolution, 5-μm pixel size, ease of use, and obvious potential for parallelization make it a promising candidate for many applications at synchrotron facilities, in laser-heating plasma physics studies, and in laboratory-based x-ray spectrometry.

  19. Heterogeneity Measurement Based on Distance Measure for Polarimetric SAR Data

    NASA Astrophysics Data System (ADS)

    Xing, Xiaoli; Chen, Qihao; Liu, Xiuguo

    2018-04-01

    To effectively assess scene heterogeneity in polarimetric synthetic aperture radar (PolSAR) data, this paper introduces a distance measure that exploits the similarity between a sample and its pixels. Moreover, to account for the data distribution and for texture modeling, a K distance measure is derived following the Wishart distance measure. Specifically, the average of the pixels in the local window replaces the class-center coherency or covariance matrix, and the Wishart and K distance measures are calculated between this average matrix and the pixels. Then, the ratio of the standard deviation to the mean is established for the Wishart and K distance measures, and these two features are defined and applied to reflect the complexity of the scene. The proposed heterogeneity measure is obtained by integrating the two features using the Pauli basis. Experiments conducted on single-look and multilook PolSAR data demonstrate the effectiveness of the proposed method for the detection of scene heterogeneity.
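
    The heterogeneity feature can be sketched per local window as below; a commonly used form of the Wishart distance, ln|Σ| + tr(Σ⁻¹C), is assumed, while the K-distance variant and the Pauli-basis integration are omitted.

      import numpy as np

      def wishart_distance(C, sigma):
          """Commonly used Wishart distance between a pixel covariance/coherency
          matrix C and a reference matrix sigma: ln|sigma| + tr(sigma^-1 C)."""
          _, logdet = np.linalg.slogdet(sigma)
          return logdet + np.trace(np.linalg.solve(sigma, C)).real

      def heterogeneity_cv(window_matrices):
          """Coefficient of variation (std / mean) of the Wishart distances between
          every pixel matrix in a local window and the window-average matrix, used
          as the heterogeneity feature; the K-distance variant is analogous."""
          sigma = np.mean(window_matrices, axis=0)       # local average matrix
          d = np.array([wishart_distance(C, sigma) for C in window_matrices])
          return d.std() / d.mean()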

  20. Cloud cover analysis with Arctic Advanced Very High Resolution Radiometer data. II - Classification with spectral and textural measures

    NASA Technical Reports Server (NTRS)

    Key, J.

    1990-01-01

    The spectral and textural characteristics of polar clouds and surfaces for a 7-day summer series of AVHRR data in two Arctic locations are examined, and the results used in the development of a cloud classification procedure for polar satellite data. Since spatial coherence and texture sensitivity tests indicate that a joint spectral-textural analysis based on the same cell size is inappropriate, cloud detection with AVHRR data and surface identification with passive microwave data are first done on the pixel level as described by Key and Barry (1989). Next, cloud patterns within 250-sq-km regions are described, then the spectral and local textural characteristics of cloud patterns in the image are determined and each cloud pixel is classified by statistical methods. Results indicate that both spectral and textural features can be utilized in the classification of cloudy pixels, although spectral features are most useful for the discrimination between cloud classes.

  1. Video-rate nanoscopy enabled by sCMOS camera-specific single-molecule localization algorithms

    PubMed Central

    Huang, Fang; Hartwich, Tobias M. P.; Rivera-Molina, Felix E.; Lin, Yu; Duim, Whitney C.; Long, Jane J.; Uchil, Pradeep D.; Myers, Jordan R.; Baird, Michelle A.; Mothes, Walther; Davidson, Michael W.; Toomre, Derek; Bewersdorf, Joerg

    2013-01-01

    Newly developed scientific complementary metal–oxide–semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition in single-molecule switching nanoscopy (SMSN) while simultaneously increasing the effective quantum efficiency. However, sCMOS-intrinsic pixel-dependent readout noise substantially reduces the localization precision and introduces localization artifacts. Here we present algorithms that overcome these limitations and provide unbiased, precise localization of single molecules at the theoretical limit. In combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at up to 32 reconstructed images/second (recorded at 1,600–3,200 camera frames/second) in both fixed and living cells. PMID:23708387

  2. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Zhang, Z.; Ackerman, S. A.; Maddux, B. C.

    2012-12-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges (defined by immediate adjacency to "clear" MOD/MYD35 pixels) as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  3. Automatic detection and segmentation of vascular structures in dermoscopy images using a novel vesselness measure based on pixel redness and tubularness

    NASA Astrophysics Data System (ADS)

    Kharazmi, Pegah; Lui, Harvey; Stoecker, William V.; Lee, Tim

    2015-03-01

    Vascular structures are one of the most important features in the diagnosis and assessment of skin disorders. The presence and clinical appearance of vascular structures in skin lesions is a discriminating factor among different skin diseases. In this paper, we address the problem of segmentation of vascular patterns in dermoscopy images. Our proposed method is composed of three parts. First, based on biological properties of human skin, we decompose the skin image into melanin and hemoglobin components using independent component analysis of skin color images. The relative quantities and pure color densities of each component are then estimated. Subsequently, we obtain three reference vectors of the mean RGB values for normal skin, pigmented skin and blood vessels from the hemoglobin component by averaging over 100000 pixels of each group outlined by an expert. Based on Euclidean distance thresholding, we generate a mask image that extracts the red regions of the skin. The Frangi vesselness measure is then applied to the extracted red areas to enhance tubular structures, and finally Otsu's thresholding is applied to segment the vascular structures and obtain a binary vessel mask image. The algorithm was implemented on a set of 50 dermoscopy images. In order to evaluate the performance of our method, we artificially extended some of the existing vessels in our dermoscopy data set and evaluated the performance of the algorithm in segmenting the newly added vessel pixels. A sensitivity of 95% and specificity of 87% were achieved.
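
    As a rough illustration of the last two steps only (tubularness filtering followed by Otsu thresholding), the sketch below uses scikit-image's Frangi filter. The array names hemoglobin_gray and red_mask are hypothetical placeholders for the hemoglobin component and the extracted red-region mask, and parameters such as ridge polarity are assumptions rather than the authors' settings.

    ```python
    # Hedged sketch: Frangi vesselness on the extracted red regions, then Otsu.
    import numpy as np
    from skimage.filters import frangi, threshold_otsu

    def segment_vessels(hemoglobin_gray, red_mask):
        """Return a binary vessel mask from a grayscale hemoglobin image."""
        masked = np.where(red_mask, hemoglobin_gray, 0.0)
        # black_ridges=False assumes vessels appear brighter than their surroundings.
        vesselness = frangi(masked, black_ridges=False)
        if not red_mask.any():
            return np.zeros_like(red_mask)
        thresh = threshold_otsu(vesselness[red_mask])
        return (vesselness > thresh) & red_mask
    ```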

  4. Localization, Localization, Localization

    NASA Technical Reports Server (NTRS)

    Parker, T.; Malin, M.; Golombek, M.; Duxbury, T.; Johnson, A.; Guinn, J.; McElrath, T.; Kirk, R.; Archinal, B.; Soderblom, L.

    2004-01-01

    Localization of the two Mars Exploration Rovers involved three independent approaches to place the landers with respect to the surface of Mars and to refine the location of those points on the surface with the Mars control net: 1) Track the spacecraft through entry, descent, and landing, then refine the final roll stop position by radio tracking and comparison to images taken during descent; 2) Locate features on the horizon imaged by the two rovers and compare them to the MOC and THEMIS VIS images, and the DIMES images on the two MER landers; and 3) 'Check' and refine locations by acquisition of MOC 1.5 meter and 50 cm/pixel images.

  5. Mercuric iodide room-temperature array detectors for gamma-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patt, B.

    Significant progress has been made recently in the development of mercuric iodide detector arrays for gamma-ray imaging, making real the possibility of constructing high-performance small, light-weight, portable gamma-ray imaging systems. New techniques have been applied in detector fabrication and then low noise electronics which have produced pixel arrays with high-energy resolution, high spatial resolution, high gamma stopping efficiency. Measurements of the energy resolution capability have been made on a 19-element protypical array. Pixel energy resolutions of 2.98% fwhm and 3.88% fwhm were obtained at 59 keV (241-Am) and 140-keV (99m-Tc), respectively. The pixel spectra for a 14-element section of themore » data is shown together with the composition of the overlapped individual pixel spectra. These techniques are now being applied to fabricate much larger arrays with thousands of pixels. Extension of these principles to imaging scenarios involving gamma-ray energies up to several hundred keV is also possible. This would enable imaging of the 208 keV and 375-414 keV 239-Pu and 240-Pu structures, as well as the 186 keV line of 235-U.« less

  6. Single-pixel imaging based on compressive sensing with spectral-domain optical mixing

    NASA Astrophysics Data System (ADS)

    Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin

    2017-11-01

    In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.
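
    A minimal, purely numerical sketch of the single-pixel principle described above: a pseudorandom binary matrix stands in for the SLM spectrum-shaper patterns, the measurement vector stands in for the photodiode energies, and orthogonal matching pursuit plays the role of the sparse recovery algorithm. Image size, number of measurements and sparsity level are illustrative assumptions; the optical spectral-domain mixing itself is not modeled.

    ```python
    # Compressive-sensing toy example: PRBS measurements + sparse recovery (OMP).
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n = 16 * 16                      # number of image pixels (assumed)
    m = 80                           # number of single-pixel measurements
    x = np.zeros(n)
    x[rng.choice(n, 10, replace=False)] = 1.0             # sparse test image

    Phi = rng.integers(0, 2, size=(m, n)).astype(float)   # PRBS patterns
    y = Phi @ x                                           # "photodiode" readings

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10).fit(Phi, y)
    x_rec = omp.coef_.reshape(16, 16)                     # reconstructed image
    ```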

  7. Efficiency of the spectral-spatial classification of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Borzov, S. M.; Potaturkin, O. I.

    2017-01-01

    The efficiency of spectral-spatial classification methods that take into account local neighborhoods of the analyzed image pixels is studied experimentally for similar-looking vegetation types in hyperspectral Earth remote sensing data. Algorithms that involve spatial pre-processing of the raw data and post-processing of pixel-based spectral classification maps are considered. Results obtained both for a large-size hyperspectral image and for a test fragment of it, with different methods of training set construction, are reported. The classification accuracy is in all cases estimated by comparing ground-truth data with the classification maps produced by the compared methods. The reasons for the differences in these estimates are discussed.
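
    One common form of the post-processing mentioned above is a majority (mode) filter applied to the pixel-based classification map. The sketch below shows only this generic step; it is not taken from the paper, whose exact spatial pre- and post-processing may differ.

    ```python
    # Majority-vote smoothing of a label map within a sliding window.
    import numpy as np
    from scipy.ndimage import generic_filter

    def majority_filter(label_map, size=3):
        """Replace each label by the most frequent label in its size x size window."""
        def mode(values):
            vals, counts = np.unique(values.astype(int), return_counts=True)
            return vals[np.argmax(counts)]
        return generic_filter(label_map, mode, size=size, mode="nearest")
    ```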

  8. A Fast Superpixel Segmentation Algorithm for PolSAR Images Based on Edge Refinement and Revised Wishart Distance

    PubMed Central

    Zhang, Yue; Zou, Huanxin; Luo, Tiancheng; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-01-01

    The superpixel segmentation algorithm, as a preprocessing technique, should offer fast segmentation, accurate boundary adherence and homogeneous regularity. A fast superpixel segmentation algorithm based on iterative edge refinement (IER) works well on optical images. However, it may generate poor superpixels for polarimetric synthetic aperture radar (PolSAR) images due to the influence of strong speckle noise and many small-sized or slim regions. To solve these problems, we utilized a fast revised Wishart distance instead of the Euclidean distance in the local relabeling of unstable pixels, and in the initialization step we initialized the unstable pixels as all the pixels substituted for the initial grid edge pixels. Then, postprocessing with the dissimilarity measure is employed to remove the generated small isolated regions as well as to preserve strong point targets. Finally, the superiority of the proposed algorithm is validated with extensive experiments on four simulated and two real-world PolSAR images from Experimental Synthetic Aperture Radar (ESAR) and Airborne Synthetic Aperture Radar (AirSAR) data sets, which demonstrate that the proposed method shows better performance with respect to several commonly used evaluation measures, with about nine times higher computational efficiency, as well as fine boundary adherence and strong point-target preservation, compared with three state-of-the-art methods. PMID:27754385
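
    For orientation only, the sketch below computes the classical Wishart dissimilarity between a pixel's coherency matrix and a cluster (or superpixel) mean, a standard quantity in PolSAR processing; the paper's revised, faster variant is not reproduced here, so treat this exact form as an assumption.

    ```python
    # Classical Wishart dissimilarity d(T, Tm) = ln|Tm| + Tr(Tm^-1 T).
    import numpy as np

    def wishart_distance(T, Tm):
        """T, Tm: Hermitian positive-definite coherency (or covariance) matrices."""
        _, logdet = np.linalg.slogdet(Tm)            # ln|Tm| (real for Hermitian PD)
        return float(np.real(logdet + np.trace(np.linalg.solve(Tm, T))))
    ```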

  9. Line fitting based feature extraction for object recognition

    NASA Astrophysics Data System (ADS)

    Li, Bing

    2014-06-01

    Image feature extraction plays a significant role in image-based pattern recognition applications. In this paper, we propose a new approach to generate hierarchical features. This new approach applies line fitting to adaptively divide regions based upon the amount of information and creates line-fitting features for each subsequent region. It overcomes the feature-wasting drawback of the wavelet-based approach and demonstrates high performance in real applications. For gray-scale images, we propose a diffusion-equation approach that maps information-rich pixels (pixels near edges and ridge pixels) to high values and pixels in homogeneous regions to small values near zero, forming energy map images. After the energy map images are generated, we apply a line-fitting approach to divide regions recursively and create features for each region simultaneously. This new feature extraction approach is similar to wavelet-based hierarchical feature extraction, in which high-layer features represent global characteristics and low-layer features represent local characteristics. However, the new approach uses line fitting to adaptively focus on information-rich regions, so that the feature-waste problem of the wavelet approach in homogeneous regions is avoided. Finally, experiments on handwriting word recognition show that the new method provides higher performance than the regular handwriting word recognition approach.

  10. Local heterogeneities in early batches of EBT2 film: a suggested solution.

    PubMed

    Kairn, T; Aland, T; Kenny, J

    2010-08-07

    To enhance the utility of radiochromic films for high-resolution dosimetry of small and modulated radiotherapy fields, we propose a means to negate the effects of heterogeneities in EBT2 (and other) films. The results of using our simple procedure for evaluating radiation dose in EBT2 film are compared with the results of using the manufacturer's recommended procedure as well as a procedure previously established for evaluating dose in older EBT film. It is shown that Newton's ring-like scanning artefacts can be avoided through the use of a plastic frame, to elevate the film above the scanner's surface. The effects of film heterogeneity can be minimized by evaluating net optical density, pixelwise, as the logarithm of the ratio of the red-channel pixel value in each pixel of each irradiated film to the red-channel pixel value in the same pixel in the same film prior to irradiation. The application of a blue-channel correction was found to result in increased noise. It is recommended that, when using EBT2 film for radiotherapy quality assurance, the films should be scanned before and after irradiation and analysed using the method proposed herein, without the use of the blue-channel correction, in order to produce dose images with minimal film heterogeneity effects.
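
    A minimal sketch of the pixelwise net optical density evaluation described above. The convention shown (pre-irradiation over post-irradiation red-channel value) is the common one that yields positive netOD as the film darkens; the sign convention and any further corrections should be checked against the paper.

    ```python
    # Pixel-wise net optical density from registered pre- and post-irradiation scans.
    import numpy as np

    def net_optical_density(red_pre, red_post, eps=1e-6):
        """red_pre, red_post: float arrays of red-channel pixel values of the same film."""
        return np.log10((red_pre + eps) / (red_post + eps))
    ```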

  11. Novel approach for image skeleton and distance transformation parallel algorithms

    NASA Astrophysics Data System (ADS)

    Qing, Kent P.; Means, Robert W.

    1994-05-01

    Image Understanding is more important in medical imaging than ever, particularly where real-time automatic inspection, screening and classification systems are installed. Skeleton and distance transformations are among the common operations that extract useful information from binary images and aid in Image Understanding. The distance transformation describes the objects in an image by labeling every pixel in each object with the distance to its nearest boundary. The skeleton algorithm starts from the distance transformation and finds the set of pixels that have a locally maximum label. The distance algorithm has to scan the entire image several times, depending on the object width. For each pixel, the algorithm must access the neighboring pixels and find the maximum distance from the nearest boundary. It is a computationally and memory-access-intensive procedure. In this paper, we propose a novel parallel approach to the distance transform and skeleton algorithms using the latest VLSI high-speed convolutional chips such as HNC's ViP. The algorithm speed depends on the object's width and takes (k + [(k-1)/3]) * 7 milliseconds for a 512 x 512 image, with k being the maximum distance of the largest object. All objects in the image are skeletonized at the same time in parallel.
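
    The sketch below restates the two operations in their plain serial form, using SciPy's Euclidean distance transform and a local-maximum test on the distance labels; it is meant only to make the definitions concrete and has nothing to do with the parallel VLSI implementation proposed in the paper.

    ```python
    # Distance transform of a binary image and a crude ridge/skeleton estimate.
    import numpy as np
    from scipy.ndimage import distance_transform_edt, maximum_filter

    def distance_and_ridge(binary):
        dist = distance_transform_edt(binary)        # distance to nearest boundary
        ridge = (dist == maximum_filter(dist, size=3)) & (dist > 0)  # local maxima
        return dist, ridge
    ```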

  12. The trigger system of the JEM-EUSO Project

    NASA Astrophysics Data System (ADS)

    Bertaina, M.; Ebisuzaki, T.; Hamada, T.; Ikeda, H.; Kawasai, Y.; Sawabe, T.; Takahashi, Y.; JEM-EUSO Collaboration

    The trigger system of JEM-EUSO must address several major challenges: a) cope with the limited down-link transmission rate from the ISS to Earth by performing a severe on-board, real-time data reduction; b) use very fast, low-power, radiation-hard electronics; c) have high signal-to-noise performance and flexibility in order to lower the energy threshold of the detector as much as possible, adjust the system to a variable nightglow background, and trigger on different categories of events (images persisting on the same pixels or crossing large portions of the entire focal surface). Based on the above stringent requirements, the main ingredients of the trigger logic are: the Gate Time Unit (GTU); the minimum number Nthresh of photo-electrons piling up in a GTU for a pixel to be fired; the persistency level Npers over which fired pixels stay above threshold; and the localization and correlation in space and time of the fired pixels, which distinguish a real EAS from an accidental background enhancement. The core of the trigger logic is the Track Trigger Algorithm, which has been specifically developed for this purpose. Its characteristics, preliminary performance and possible implementation on FPGA or DSP are discussed, together with a general overview of the architecture of the JEM-EUSO triggering system.
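
    As a toy illustration of the pixel-level ingredients named above (a photo-electron threshold per GTU and a persistency requirement), the sketch below flags pixels that stay over threshold for Npers consecutive GTUs. The array layout and the reading of Npers as "consecutive GTUs" are assumptions for illustration, not the actual JEM-EUSO trigger logic.

    ```python
    # Flag pixels exceeding n_thresh photo-electrons for n_pers consecutive GTUs.
    import numpy as np

    def persistent_pixels(counts, n_thresh, n_pers):
        """counts: array of shape (n_gtu, ny, nx) of photo-electron counts."""
        over = counts >= n_thresh
        run = np.zeros(counts.shape[1:], dtype=int)      # current run length per pixel
        flagged = np.zeros(counts.shape[1:], dtype=bool)
        for frame in over:                               # loop over GTUs
            run = np.where(frame, run + 1, 0)
            flagged |= run >= n_pers
        return flagged
    ```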

  13. Mesoscale variability of the Upper Colorado River snowpack

    USGS Publications Warehouse

    Ling, C.-H.; Josberger, E.G.; Thorndike, A.S.

    1996-01-01

    In the mountainous regions of the Upper Colorado River Basin, snow course observations give local measurements of snow water equivalent, which can be used to estimate regional averages of snow conditions. We develop a statistical technique to estimate the mesoscale average snow accumulation, using 8 years of snow course observations. For each of three major snow accumulation regions in the Upper Colorado River Basin - the Colorado Rocky Mountains, Colorado, the Uinta Mountains, Utah, and the Wind River Range, Wyoming - the snow course observations yield a correlation length scale of 38 km, 46 km, and 116 km respectively. This is the scale for which the snow course data at different sites are correlated with 70 per cent correlation. This correlation of snow accumulation over large distances allows for the estimation of the snow water equivalent on a mesoscale basis. With the snow course data binned into 1/4° latitude by 1/4° longitude pixels, an error analysis shows the following: for no snow course data in a given pixel, the uncertainty in the water equivalent estimate reaches 50 cm; that is, the climatological variability. However, as the number of snow courses in a pixel increases the uncertainty decreases, and approaches 5-10 cm when there are five snow courses in a pixel.

  14. Predictable Programming on a Precision Timed Architecture

    DTIC Science & Technology

    2008-04-18

    Application: A Video Game. Inspired by an example game supplied with the Hydra development board [17...], we implemented a simple video game in C targeted to our PRET architecture. Our example centers on rendering graphics and is otherwise fairly simple... background image. ... Ultimately, each displayed pixel is one of only four colors, but the pixels in... (Figure 6: Structure of the Video Game Example; Figure 10: A Screen Dump From Our Video Game.)

  15. A patch-based convolutional neural network for remote sensing image classification.

    PubMed

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

    Availability of accurate land cover information over large areas is essential to global environmental sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, the low accuracy of existing per-pixel classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to the lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, it computes patch-based samples from multidimensional top-of-atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed system outperformed a pixel-based neural network, a pixel-based CNN and a patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN with the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas.
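
    The distinctive step is the patch-based sampling: for each labelled pixel, a small spatial window of multi-band reflectance centred on that pixel becomes one CNN input sample. The sketch below shows only this sampling step; the patch size, band count and padding choice are illustrative assumptions, and the CNN itself is omitted.

    ```python
    # Build patch samples centred on selected pixels of a (bands, H, W) reflectance cube.
    import numpy as np

    def extract_patches(reflectance, pixel_rows, pixel_cols, half=2):
        """Return an array of shape (n_samples, bands, 2*half+1, 2*half+1)."""
        padded = np.pad(reflectance, ((0, 0), (half, half), (half, half)), mode="reflect")
        patches = [padded[:, r:r + 2 * half + 1, c:c + 2 * half + 1]
                   for r, c in zip(pixel_rows, pixel_cols)]
        return np.stack(patches)
    ```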

  16. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements compared with our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
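
    A minimal sketch of the re-ordering idea: the band order used to predict the current pixel's spectrum is taken from the ordering of its already-decoded left neighbour, so no band-order side information has to be transmitted. The simple "previous band in the inferred order" predictor below is an illustrative stand-in for the paper's predictor.

    ```python
    # Per-pixel spectral re-ordering driven by the left neighbour, then simple prediction.
    import numpy as np

    def reorder_residuals(cube):
        """cube: (H, W, bands) array; returns prediction residuals of the same shape."""
        H, W, B = cube.shape
        res = np.zeros((H, W, B), dtype=float)
        res[:, 0, :] = cube[:, 0, :]                 # first column: no neighbour, store raw
        for r in range(H):
            for c in range(1, W):
                order = np.argsort(cube[r, c - 1])   # ordering inferred from neighbour
                spec = cube[r, c][order]
                res[r, c][order[0]] = spec[0]
                res[r, c][order[1:]] = spec[1:] - spec[:-1]
        return res
    ```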

  17. Optical and x-ray characterization of two novel CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Bohndiek, Sarah E.; Arvanitis, Costas D.; Venanzi, Cristian; Royle, Gary J.; Clark, Andy T.; Crooks, Jamie P.; Prydderch, Mark L.; Turchetta, Renato; Blue, Andrew; Speller, Robert D.

    2007-02-01

    A UK consortium (MI3) has been founded to develop advanced CMOS pixel designs for scientific applications. Vanilla, a 520x520 array of 25 μm pixels, benefits from flushed reset circuitry for low noise and random pixel access for region of interest (ROI) readout. OPIC, a 64x72 test structure array of 30 μm digital pixels, has thresholding capabilities for sparse readout at 3,700 fps. Characterization is performed with both optical illumination and x-ray exposure via a scintillator. Vanilla exhibits 34±3 e- read noise, interactive quantum efficiency of 54% at 500 nm and can read a 6x6 ROI at 24,395 fps. OPIC has 46±3 e- read noise and a wide dynamic range of 65 dB due to high full well capacity. Based on these characterization studies, Vanilla could be utilized in applications where demands include high spectral response and high-speed region of interest readout, while OPIC could be used for high-speed, high-dynamic-range imaging.

  18. Simulations of radiation-damaged 3D detectors for the Super-LHC

    NASA Astrophysics Data System (ADS)

    Pennicard, D.; Pellegrini, G.; Fleta, C.; Bates, R.; O'Shea, V.; Parkes, C.; Tartoni, N.

    2008-07-01

    Future high-luminosity colliders, such as the Super-LHC at CERN, will require pixel detectors capable of withstanding extremely high radiation damage. In this article, the performances of various 3D detector structures are simulated with up to 1×10^16 1-MeV n_eq/cm^2 radiation damage. The simulations show that 3D detectors have higher collection efficiency and lower depletion voltages than planar detectors due to their small electrode spacing. When designing a 3D detector with a large pixel size, such as an ATLAS sensor, different electrode column layouts are possible. Using a small number of n+ readout electrodes per pixel leads to higher depletion voltages and lower collection efficiency, due to the larger electrode spacing. Conversely, using more electrodes increases both the insensitive volume occupied by the electrode columns and the capacitive noise. Overall, the best performance after 1×10^16 1-MeV n_eq/cm^2 damage is achieved by using 4-6 n+ electrodes per pixel.

  19. Polymer-stabilized liquid crystalline topological defect network for micro-pixelated optical devices

    NASA Astrophysics Data System (ADS)

    Araoka, Fumito; Le, Khoa V.; Fujii, Shuji; Orihara, Hiroshi; Sasaki, Yuji

    2018-02-01

    Spatially and temporally controlled topological defects in nematic liquid crystals (NLCs) are promising for optical applications. Utilization of self-organization is key to fabricating complex micro- and nano-structures that are often difficult to obtain with conventional lithographic tools. Using a photo-polymerization technique, we show here a polymer-stabilized NLC with a micro-pixelated structure of regularly ordered umbilical defects induced by an electric field. Due to the formation of the polymer network, the self-organized pattern is kept stable without deterioration. Moreover, the polymer network allows other LCs to be templated, whose optical properties can be tuned with external stimuli such as temperature and electric fields.

  20. Electron crystallography with the EIGER detector

    PubMed Central

    Tinti, Gemma; Fröjdh, Erik; van Genderen, Eric; Gruene, Tim; Schmitt, Bernd; de Winter, D. A. Matthijs; Weckhuysen, Bert M.; Abrahams, Jan Pieter

    2018-01-01

    Electron crystallography is a discipline that currently attracts much attention as a method for inorganic, organic and macromolecular structure solution. EIGER, a direct-detection hybrid pixel detector developed at the Paul Scherrer Institut, Switzerland, has been tested for electron diffraction in a transmission electron microscope. EIGER features a pixel pitch of 75 × 75 µm2, frame rates up to 23 kHz and a dead time between frames as low as 3 µs. Cluster sizes and modulation transfer functions of the detector at 100, 200 and 300 keV electron energies are reported, and the data quality is demonstrated by structure determination of a SAPO-34 zeotype from electron diffraction data. PMID:29765609

  1. Design and Development of 256x256 Linear Mode Low-Noise Avalanche Photodiode Arrays

    NASA Technical Reports Server (NTRS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Boisvert, Joseph; McDonald, Paul; Chang, James

    2011-01-01

    A larger-format photodiode array is always desirable for many LADAR imaging applications. However, as the array format increases, the laser power or the lens aperture has to increase to maintain the same flux per pixel, thus increasing the size, weight and power of the imaging system. In order to avoid this negative impact, it is essential to improve the pixel sensitivity. The sensitivity of a short-wavelength infrared linear-mode avalanche photodiode (APD) is a delicate balance of quantum efficiency, usable gain, excess noise factor, capacitance, and dark current of the APD, as well as the input-equivalent noise of the amplifier. By using InAlAs as the multiplication layer in an InP-based APD, the ionization coefficient ratio, k, is reduced from 0.40 (InP) to 0.22, and the excess noise is reduced by about 50%. An additional improvement in excess noise of 25% was achieved by employing an impact-ionization-engineering structure with a k value of 0.15. Compared with the traditional InP structure, about a 30% reduction in the noise-equivalent power with the following amplifier can be achieved. Spectrolab demonstrated 30-um mesa APD pixels with a dark current less than 10 nA and a capacitance of 60 fF at a gain of 10. APD gain uniformity determines the usable gain of most pixels in an array, which is critical to focal plane array sensitivity. By fine-tuning the material growth and device process, a breakdown-voltage standard deviation of 0.1 V and a gain of 30 on individual pixels were demonstrated in our 256x256 linear-mode APD arrays.

  2. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
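
    A hedged sketch of the block-wise feature extraction described above: the ROI is cut into 8x8 blocks, a type-II DCT is applied to each block, and a zigzag scan keeps the first few low/medium-frequency coefficients, which are concatenated into the feature vector. The number of retained coefficients is an illustrative assumption.

    ```python
    # Block DCT-II features with JPEG-style zigzag coefficient selection.
    import numpy as np
    from scipy.fft import dctn

    def zigzag_indices(n):
        """(row, col) pairs of an n x n block in JPEG-style zigzag order."""
        return sorted(((r, c) for r in range(n) for c in range(n)),
                      key=lambda rc: (rc[0] + rc[1],
                                      rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

    def palmprint_features(roi, block=8, keep=10):
        order = zigzag_indices(block)[:keep]
        feats = []
        for r in range(0, roi.shape[0] - block + 1, block):
            for c in range(0, roi.shape[1] - block + 1, block):
                coeffs = dctn(roi[r:r + block, c:c + block], type=2, norm="ortho")
                feats.extend(coeffs[rc] for rc in order)
        return np.asarray(feats)
    ```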

  3. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    NASA Astrophysics Data System (ADS)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) that are very popular in both photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
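
    As a readily available reference point for one of the compared families, the snippet below runs OpenCV's semi-global block matcher; the file names and parameter values are illustrative and are not those used in the paper's tests.

    ```python
    # Semi-global (block) matching with OpenCV on a rectified stereo pair.
    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # hypothetical file names
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                                 P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
    disparity = sgbm.compute(left, right).astype(float) / 16.0   # fixed-point -> pixels
    ```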

  4. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    NASA Astrophysics Data System (ADS)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data - low-cost, timely, and of moderate/low spatial resolution - over the North China Plain (NCP) study region were first used to carry out mixed-pixel spectral decomposition and extract a useful regionalized indicator parameter (RIP) from the initially selected indicators, namely the fraction (percentage) of winter wheat planting area in each pixel, which served as the regionalized indicator variable (RIV) for spatial sampling. The RIV values were then analyzed spatially to obtain the spatial structure characteristics (spatial correlation and variation) of the NCP, which were further processed into scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, building on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing this a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed as a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, optimal local spatial prediction and gridded extrapolation of the results were used to implement an adaptive reporting pattern of spatial sampling in accordance with the report-covering units, so as to satisfy the actual needs of sampling surveys.

  5. Reduced As Components in Highly Oxidized Environments: Evidence from Full Spectral XANES Imaging using the Maia Massively Parallel Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etschmann, B.; Ryan, C; Brugger, J

    2010-01-01

    Synchrotron X-ray fluorescence (SXRF) and X-ray absorption spectroscopy (XAS) have become standard tools to measure element concentration, distribution at micrometer- to nanometer-scale, and speciation (e.g., nature of host phase; oxidation state) in inhomogeneous geomaterials. The new Maia X-ray detector system provides a quantum leap for the method in terms of data acquisition rate. It is now possible to rapidly collect fully quantitative maps of the distribution of major and trace elements at micrometer spatial resolution over areas as large as 1 x 5 cm2. Fast data acquisition rates also open the way to X-ray absorption near-edge structure (XANES) imaging, in which spectroscopic information is available at each pixel in the map. These capabilities are critical for studying inhomogeneous Earth materials. Using a 96-element prototype Maia detector, we imaged thin sections of an oxidized pisolitic regolith (2 x 4.5 mm2 at 2.5 x 2.5 µm2 pixel size) and a metamorphosed, sedimentary exhalative Mn-Fe ore (3.3 x 4 mm2 at 1.25 x 5 µm2). In both cases, As K-edge XANES imaging reveals localized occurrence of reduced As in parts of these oxidized samples, which would have been difficult to recognize using traditional approaches.

  6. Lizard-Skin Surface Texture

    NASA Technical Reports Server (NTRS)

    2007-01-01

    [Figure 1 removed for brevity; see original site]

    The south polar region of Mars is covered seasonally with translucent carbon dioxide ice. In the spring gas subliming (evaporating) from the underside of the seasonal layer of ice bursts through weak spots, carrying dust from below with it, to form numerous dust fans aligned in the direction of the prevailing wind.

    The dust gets trapped in the shallow grooves on the surface, helping to define the small-scale structure of the surface. The surface texture is reminiscent of lizard skin (figure 1).

    Observation Geometry Image PSP_003730_0945 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on 14-May-2007. The complete image is centered at -85.2 degrees latitude, 181.5 degrees East longitude. The range to the target site was 248.5 km (155.3 miles). At this distance the image scale is 24.9 cm/pixel (with 1 x 1 binning) so objects 75 cm across are resolved. The image shown here has been map-projected to 25 cm/pixel. The image was taken at a local Mars time of 06:04 PM and the scene is illuminated from the west with a solar incidence angle of 69 degrees, thus the sun was about 21 degrees above the horizon. At a solar longitude of 237.5 degrees, the season on Mars is Northern Autumn.

  7. Fractal assembly of micrometre-scale DNA origami arrays with arbitrary patterns.

    PubMed

    Tikhomirov, Grigory; Petersen, Philip; Qian, Lulu

    2017-12-06

    Self-assembled DNA nanostructures enable nanometre-precise patterning that can be used to create programmable molecular machines and arrays of functional materials. DNA origami is particularly versatile in this context because each DNA strand in the origami nanostructure occupies a unique position and can serve as a uniquely addressable pixel. However, the scale of such structures has been limited to about 0.05 square micrometres, hindering applications that demand a larger layout and integration with more conventional patterning methods. Hierarchical multistage assembly of simple sets of tiles can in principle overcome this limitation, but so far has not been sufficiently robust to enable successful implementation of larger structures using DNA origami tiles. Here we show that by using simple local assembly rules that are modified and applied recursively throughout a hierarchical, multistage assembly process, a small and constant set of unique DNA strands can be used to create DNA origami arrays of increasing size and with arbitrary patterns. We illustrate this method, which we term 'fractal assembly', by producing DNA origami arrays with sizes of up to 0.5 square micrometres and with up to 8,704 pixels, allowing us to render images such as the Mona Lisa and a rooster. We find that self-assembly of the tiles into arrays is unaffected by changes in surface patterns on the tiles, and that the yield of the fractal assembly process corresponds to about 0.95^(m-1) for arrays containing m tiles. When used in conjunction with a software tool that we developed that converts an arbitrary pattern into DNA sequences and experimental protocols, our assembly method is readily accessible and will facilitate the construction of sophisticated materials and devices with sizes similar to that of a bacterium using DNA nanostructures.

  8. Fractal assembly of micrometre-scale DNA origami arrays with arbitrary patterns

    NASA Astrophysics Data System (ADS)

    Tikhomirov, Grigory; Petersen, Philip; Qian, Lulu

    2017-12-01

    Self-assembled DNA nanostructures enable nanometre-precise patterning that can be used to create programmable molecular machines and arrays of functional materials. DNA origami is particularly versatile in this context because each DNA strand in the origami nanostructure occupies a unique position and can serve as a uniquely addressable pixel. However, the scale of such structures has been limited to about 0.05 square micrometres, hindering applications that demand a larger layout and integration with more conventional patterning methods. Hierarchical multistage assembly of simple sets of tiles can in principle overcome this limitation, but so far has not been sufficiently robust to enable successful implementation of larger structures using DNA origami tiles. Here we show that by using simple local assembly rules that are modified and applied recursively throughout a hierarchical, multistage assembly process, a small and constant set of unique DNA strands can be used to create DNA origami arrays of increasing size and with arbitrary patterns. We illustrate this method, which we term ‘fractal assembly’, by producing DNA origami arrays with sizes of up to 0.5 square micrometres and with up to 8,704 pixels, allowing us to render images such as the Mona Lisa and a rooster. We find that self-assembly of the tiles into arrays is unaffected by changes in surface patterns on the tiles, and that the yield of the fractal assembly process corresponds to about 0.95^(m-1) for arrays containing m tiles. When used in conjunction with a software tool that we developed that converts an arbitrary pattern into DNA sequences and experimental protocols, our assembly method is readily accessible and will facilitate the construction of sophisticated materials and devices with sizes similar to that of a bacterium using DNA nanostructures.

  9. Automatic optic disc segmentation based on image brightness and contrast

    NASA Astrophysics Data System (ADS)

    Lu, Shijian; Liu, Jiang; Lim, Joo Hwee; Zhang, Zhuo; Tan, Ngan Meng; Wong, Wing Kee; Li, Huiqi; Wong, Tien Yin

    2010-03-01

    Untreated glaucoma leads to permanent damage of the optic nerve and resultant visual field loss, which can progress to blindness. As glaucoma often produces additional pathological cupping of the optic disc (OD), the cup-disc ratio is one measure that is widely used for glaucoma diagnosis. This paper presents an OD localization method that automatically segments the OD and so can be applied to cup-disc-ratio-based glaucoma diagnosis. The proposed OD segmentation method is based on the observation that the OD is normally much brighter and at the same time has smoother texture characteristics compared with other regions within retinal images. Given a retinal image, we first capture the OD's smooth texture characteristic by a contrast image that is constructed from the local maximum and minimum pixel lightness within a small neighborhood window. The centre of the OD can then be determined according to the density of the candidate OD pixels, which are detected as the retinal image pixels with the lowest contrast. After that, an OD region is approximately determined by a pair of morphological operations, and the OD boundary is finally determined by an ellipse fitted to the convex hull of the detected OD region. Experiments over 71 retinal images of different qualities show that the OD region overlap between the boundary ellipses determined by our proposed method and those manually plotted by an ophthalmologist reaches up to 90.37%.
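
    A hedged sketch of the contrast image described above: per-pixel contrast is taken as the difference between the maximum and minimum lightness in a small window, after which the lowest-contrast pixels become OD candidates. The window size and the candidate percentile are illustrative assumptions, and the subsequent density analysis, morphology and ellipse fitting are omitted.

    ```python
    # Local contrast map and low-contrast candidate pixels for optic disc localization.
    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def od_candidates(lightness, win=15, contrast_pct=5):
        contrast = maximum_filter(lightness, size=win) - minimum_filter(lightness, size=win)
        return contrast <= np.percentile(contrast, contrast_pct)
    ```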

  10. High-resolution mapping of Martian water ice clouds using Mars Express OMEGA observations - Derivation of the diurnal cloud life cycle

    NASA Astrophysics Data System (ADS)

    Szantai, Andre; Audouard, Joachim; Madeleine, Jean-Baptiste; Forget, Francois; Pottier, Alizée; Millour, Ehouarn; Gondet, Brigitte; Langevin, Yves; Bibring, Jean-Pierre

    2016-10-01

    The mapping in space and time of water ice clouds can help to explain the Martian water cycle and atmospheric circulation. For this purpose, an ice cloud index (ICI) corresponding to the depth of a water ice absorption band at 3.4 microns is derived from a series of OMEGA images (spectels) covering 5 Martian years. The ICI values for the corresponding pixels are then binned on a high-resolution regular grid (1° longitude x 1° latitude x 5° Ls x 1 h local time) and averaged. Inside each bin, the cloud cover is calculated by dividing the number of pixels considered cloudy (after comparison to a threshold) by the number of all (valid) pixels. We compare the maps of clouds obtained around local time 14:00 with collocated TES cloud observations (which were only obtained around this time of the day), and a good agreement is found. Averaged ICI values compared to the water ice column variable from the Martian Climate Database (MCD) show a fair correlation (~0.5), which increases when the comparison is limited to the tropics. The number of gridpoints containing ICI values is small (~1%), but by aggregating several neighboring gridpoints over longer periods, we can observe a cloud life cycle during daytime. An example in the tropics, around the northern summer solstice, shows a decrease of cloudiness in the morning followed by an increase in the afternoon.

  11. Ab initio powder X-ray diffraction and PIXEL energy calculations on thiophene-derived 1,4-dihydropyridine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthikeyan, N., E-mail: karthin10@gmail.com; Sivakumar, K.; Pachamuthu, M. P.

    We focus on the application of powder diffraction data to achieve ab initio crystal structure determination of a thiophene-derived 1,4-DHP prepared by a cyclocondensation method using a solid catalyst. The crystal structure of the compound has been solved by a direct-space approach based on a Monte Carlo search in parallel tempering mode using the FOX program. Initial atomic coordinates were derived using the Gaussian 09W quantum chemistry software in a semi-empirical approach, and Rietveld refinement was carried out using the GSAS program. The crystal structure of the compound is stabilized by one N-H…O and three C-H…O hydrogen bonds. A PIXEL lattice energy calculation was carried out to understand the physical nature of the intermolecular interactions in the crystal packing, for which the total lattice energy is partitioned into Coulombic, polarization, dispersion, and repulsion energies.

  12. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy.

    PubMed

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-03-11

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility.
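
    As a related illustration (not the authors' least-square image matching), the snippet below estimates the sub-pixel displacement of a target patch between two frames with upsampled phase cross-correlation from scikit-image; the patches are synthetic placeholders.

    ```python
    # Sub-pixel displacement of a target patch via upsampled phase cross-correlation.
    import numpy as np
    from skimage.registration import phase_cross_correlation

    rng = np.random.default_rng(1)
    patch_ref = rng.random((64, 64))                           # hypothetical target patch
    patch_cur = np.roll(patch_ref, shift=(2, 3), axis=(0, 1))  # same patch, shifted

    shift, error, _ = phase_cross_correlation(patch_ref, patch_cur, upsample_factor=100)
    dy, dx = shift   # sub-pixel estimate of the (2, 3) offset, up to sign convention
    ```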

  13. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy

    PubMed Central

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366

  14. Toward one Giga frames per second--evolution of in situ storage image sensors.

    PubMed

    Etoh, Takeharu G; Son, Dao V T; Yamada, Tetsuo; Charbon, Edoardo

    2013-04-08

    The ISIS is an ultra-fast image sensor with in-pixel storage. The evolution of the ISIS in the past and in the near future is reviewed and forecasted. Because the storage area must be covered with a light shield, the conventional frontside-illuminated ISIS has a limited fill factor. To achieve higher sensitivity, a BSI ISIS was developed. To avoid direct intrusion of light and migration of signal electrons to the storage area on the frontside, a cross-sectional sensor structure with thick pnpn layers was developed, and named the "Tetratified structure". By folding and looping the in-pixel storage CCDs, an image signal accumulation sensor, the ISAS, is proposed. The ISAS has a new function, in-pixel signal accumulation, in addition to ultra-high-speed imaging. To achieve a much higher frame rate, a multi-collection-gate (MCG) BSI image sensor architecture is proposed. The photoreceptive area forms a honeycomb-like shape. The performance of a hexagonal CCD-type MCG BSI sensor is examined by simulations. The highest frame rate is theoretically more than 1 Gfps. For the near future, a stacked hybrid CCD/CMOS MCG image sensor seems most promising. The associated problems are discussed. A fine TSV process is the key technology to realize the structure.

  15. A novel ship CFAR detection algorithm based on adaptive parameter enhancement and wake-aided detection in SAR images

    NASA Astrophysics Data System (ADS)

    Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun

    2018-03-01

    Synthetic aperture radar (SAR) is an indispensable and useful method for marine monitoring. With the increasing number of SAR sensors, high-resolution images can be acquired that contain more target structure information, such as finer spatial details. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets. The whole method operates on values in the APT domain. Firstly, the image is mapped to the new transform domain by the algorithm. Secondly, the false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels, and the enhanced image is then processed by the Niblack algorithm to obtain a binary wake image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image, as a verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform does enhance the target structure and improve the contrast of the image. The algorithm shows good performance in ship and ship wake detection.
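
    For context, the sketch below implements a generic cell-averaging CFAR test in an image domain: each pixel is compared with a multiple of the mean of its local background, excluding a guard region. The window sizes and threshold factor are illustrative, and this is not the paper's APT-domain detector.

    ```python
    # Cell-averaging CFAR detection map over a 2-D intensity image.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ca_cfar(img, guard=2, background=8, scale=3.0):
        """Return a boolean map of pixels exceeding scale * local clutter mean."""
        big = 2 * (guard + background) + 1
        small = 2 * guard + 1
        sum_big = uniform_filter(img, big) * big * big
        sum_small = uniform_filter(img, small) * small * small
        clutter_mean = (sum_big - sum_small) / (big * big - small * small)
        return img > scale * clutter_mean
    ```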

  16. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity-discrete structured light illumination systems project a series of patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the border of the fringes that are commonly present when using intensity-discrete patterns, and it provides robustness in case of severe measurement errors (even burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light, as well as in safety-critical applications such as monitoring deformations of components in nuclear power plants, where high reliability is ensured even in the case of short measurement disruptions. A special form of burst error is the so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
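
    As a toy example in the same spirit (not necessarily the code proposed in the paper), the temporal bit sequence observed at one pixel can be treated as a Hamming(7,4) codeword, so that a single corrupted frame is corrected from the syndrome.

    ```python
    # Single-error correction of a 7-bit per-pixel sequence with the (7,4) Hamming code.
    import numpy as np

    # Parity-check matrix: column j is the binary representation of j (1..7).
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])

    def correct_codeword(bits):
        """bits: length-7 0/1 integer array observed at one pixel over 7 patterns."""
        syndrome = H @ bits % 2
        pos = int(syndrome[0] * 4 + syndrome[1] * 2 + syndrome[2])   # 0 means no error
        fixed = bits.copy()
        if pos:
            fixed[pos - 1] ^= 1          # flip the single corrupted bit
        return fixed
    ```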

  17. Applications of a pnCCD detector coupled to columnar structure CsI(Tl) scintillator system in ultra high energy X-ray Laue diffraction

    NASA Astrophysics Data System (ADS)

    Shokr, M.; Schlosser, D.; Abboud, A.; Algashi, A.; Tosson, A.; Conka, T.; Hartmann, R.; Klaus, M.; Genzel, C.; Strüder, L.; Pietsch, U.

    2017-12-01

    Most charge coupled devices (CCDs) are made of silicon (Si) with typical active layer thicknesses of several microns. In the case of a pnCCD detector the sensitive Si thickness is 450 μm. However, for silicon-based detectors the quantum efficiency for hard X-rays drops significantly for photon energies above 10 keV. This drawback can be overcome by combining a pixelated silicon-based detector system with a columnar scintillator. Here we report on the characterization of a low-noise, fully depleted 128×128 pixel pnCCD detector with 75×75 μm2 pixel size, coupled to a 700 μm thick columnar CsI(Tl) scintillator, in the photon energy range from 1 keV to 130 keV. The excellent performance of the detection system in the hard X-ray range is demonstrated in a Laue-type X-ray diffraction experiment performed at the EDDI beamline of the BESSY II synchrotron on a set of several GaAs single crystals irradiated by white synchrotron radiation. With the columnar structure of the scintillator, the position resolution of the whole system reaches a value of less than one pixel. Using the presented detector system and considering the functional relation between indirect and direct photon events, Laue diffraction peaks with X-ray energies up to 120 keV were efficiently detected. As one possible application of the combined system, we demonstrate that the accuracy of X-ray structure factors extracted from Laue diffraction peaks in the hard X-ray range can be significantly improved using the combined CsI(Tl)-pnCCD system compared to a bare pnCCD.

  18. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structural similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
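
    The sketch below shows the core of a least-square predictor for one region: each pixel value is modeled as a linear combination of its causal neighbours (left, top, top-left) with weights fitted on that region's pixels. The neighbour set and the per-region handling are illustrative simplifications of the paper's design.

    ```python
    # Per-region least-square prediction from causal neighbours.
    import numpy as np

    def ls_predict_region(img, rows, cols):
        """img: 2-D float array; rows, cols: integer arrays (>=1) of one region's pixels."""
        X = np.stack([img[rows, cols - 1],          # left neighbour
                      img[rows - 1, cols],          # top neighbour
                      img[rows - 1, cols - 1]],     # top-left neighbour
                     axis=1)
        y = img[rows, cols]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)   # fitted predictor weights
        return w, y - X @ w                         # weights and prediction residuals
    ```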

  19. Looking Forward - A Next Generation of Thermal Infrared Planetary Instruments

    NASA Astrophysics Data System (ADS)

    Christensen, P. R.; Hamilton, V. E.; Edwards, C. S.; Spencer, J. R.

    2017-12-01

    Thermal infrared measurements have provided important information about the physical properties of planetary surfaces beginning with the initial Mariner spacecraft in the early 1960s. These infrared measurements will continue into the future with a series of instruments, now on their way or in development, that will explore a suite of asteroids, Europa, and Mars. These instruments are being developed at Arizona State University and are next-generation versions of the TES, Mini-TES, and THEMIS infrared spectrometers and imagers. The OTES instrument on OSIRIS-REx, which was launched in Sept. 2016, will map the surface of the asteroid Bennu down to a resolution of 40 m/pixel at seven times of day. This multiple time-of-day coverage will be used to produce global thermal inertia maps, which will be used to determine the particle size distribution, which will in turn help select a safe and appropriate sample site. The EMIRS instrument, which is being built in partnership with the UAE's MBRSC for the Emirates Mars Mission, will measure Martian surface temperatures at 200-300 km/pixel scales over the full diurnal cycle - the first time the full diurnal temperature cycle will have been observed since the Viking mission. The E-THEMIS instrument on the Europa Clipper mission will provide global mapping at 5-10 km/pixel scale at multiple times of day, and local observations down to resolutions of 50 m/pixel. These measurements will have a precision of 0.2 K for a 90 K scene, and will be used to map the thermal inertia and block abundances across Europa and to identify areas of localized endogenic heat. These observations will be used to investigate the physical processes of surface formation and evolution and to help select the landing site of a future Europa lander. Finally, the LTES instrument on the Lucy mission will measure temperatures on the day and night sides of the target Trojan asteroids, again providing insights into their surface properties and evolution processes.

  20. Intercomparison of four remote-sensing-based energy balance methods to retrieve surface evapotranspiration and water stress of irrigated fields in semi-arid climate

    NASA Astrophysics Data System (ADS)

    Chirouze, J.; Boulet, G.; Jarlan, L.; Fieuzal, R.; Rodriguez, J. C.; Ezzahar, J.; Er-Raki, S.; Bigeard, G.; Merlin, O.; Garatuza-Payan, J.; Watts, C.; Chehbouni, G.

    2014-03-01

    Instantaneous evapotranspiration rates and surface water stress levels can be deduced from remotely sensed surface temperature data through the surface energy budget. Two families of methods can be defined: contextual methods, where stress levels are scaled on a given image between hot/dry and cool/wet pixels for a particular vegetation cover, and single-pixel methods, which evaluate latent heat as the residual of the surface energy balance for one pixel independently from the others. Four models, two contextual (S-SEBI and a modified triangle method, named VIT) and two single-pixel (TSEB, SEBS), are applied over one growing season (December-May) for a 4 km × 4 km irrigated agricultural area in semi-arid northern Mexico. Their performances, from both local and spatial standpoints, are compared to energy balance data acquired at seven locations within the area, as well as to an uncalibrated soil-vegetation-atmosphere transfer (SVAT) model forced with local in situ data including observed irrigation and rainfall amounts. Stress levels are not always well retrieved by most models, but S-SEBI as well as TSEB, although slightly biased, show good performance. A drop in model performance is observed for all models when vegetation is senescent, mostly due to a poor partitioning both between the turbulent fluxes and between the soil/plant components of the latent heat flux and the available energy. As expected, contextual methods perform well when contrasted soil moisture and vegetation conditions are encountered in the same image (therefore, especially in spring and early summer), while they tend to exaggerate the spread in water status in more homogeneous conditions (especially in winter). Surface energy balance models run with available remotely sensed products prove to be nearly as accurate as the uncalibrated SVAT model forced with in situ data.

  1. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
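
    A hedged sketch of the interpolation step only: intermediary cross-sections between acquired frames are generated with a natural cubic spline along the pullback (slice) axis. Here the spline is applied directly to voxel intensities for brevity, whereas the paper applies it within a shape-based representation, so this is an illustration of the spline machinery rather than the authors' algorithm.

    ```python
    # Natural cubic-spline interpolation of intermediary slices along the pullback axis.
    import numpy as np
    from scipy.interpolate import CubicSpline

    def interpolate_slices(frames, upsample=4):
        """frames: (n_slices, H, W) array; returns a stack including intermediary slices."""
        z = np.arange(frames.shape[0])
        spline = CubicSpline(z, frames, axis=0, bc_type="natural")
        z_fine = np.linspace(z[0], z[-1], (len(z) - 1) * upsample + 1)
        return spline(z_fine)
    ```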

  2. Curiosity's Mars Hand Lens Imager (MAHLI): Initial Observations and Activities

    NASA Technical Reports Server (NTRS)

    Edgett, K. S.; Yingst, R. A.; Minitti, M. E.; Robinson, M. L.; Kennedy, M. R.; Lipkaman, L. J.; Jensen, E. H.; Anderson, R. C.; Bean, K. M.; Beegle, L. W.; hide

    2013-01-01

    MAHLI (Mars Hand Lens Imager) is a 2-megapixel focusable macro lens color camera on the turret on Curiosity's robotic arm. The investigation centers on stratigraphy, grain-scale texture, structure, mineralogy, and morphology of geologic materials at Curiosity's Gale robotic field site. MAHLI acquires focused images at working distances of 2.1 cm to infinity; for reference, at 2.1 cm the scale is 14 microns/pixel; at 6.9 cm it is 31 microns/pixel, like the Spirit and Opportunity Microscopic Imager (MI) cameras.

  3. Direct Integration of Dynamic Emissive Displays into Knitted Fabric Structures

    NASA Astrophysics Data System (ADS)

    Bellingham, Alyssa

    Smart textiles are revolutionizing the textile industry by combining technology into fabric to give clothing new abilities including communication, transformation, and energy conduction. The advent of electroluminescent fibers, which emit light in response to an applied electric field, has opened the door for fabric-integrated emissive displays in textiles. This thesis focuses on the development of a flexible and scalable emissive fabric display with individually addressable pixels disposed within a fabric matrix. The pixels are formed in areas where a fiber supporting the dielectric and phosphor layers of an electroluminescent structure contacts a conductive surface. This conductive surface can be an external conductive fiber, yarn or wire, or a translucent conductive material layer deposited at set points along the electroluminescent fibers. Different contacting methods are introduced and the different ways the EL yarns can be incorporated into the knitted fabric are discussed. EL fibers were fabricated using a single yarn coating system with a custom, adjustable 3D printed slot die coater for even distribution of material onto the supporting fiber substrates. These fibers are mechanically characterized inside of and outside of a knitted fabric matrix to determine their potential for various applications, including wearables. A 4-pixel dynamic emissive display prototype is fabricated and characterized. This is the first demonstration of an all-knit emissive display with individually controllable pixels. The prototype is composed of a grid of fibers supporting the dielectric and phosphor layers of an electroluminescent (EL) device structure, called EL fibers, and conductive fibers acting as the top electrode. This grid is integrated into a biaxial weft knit structure where the EL fibers make up the rows and conductive fibers make up the columns of the reinforcement yarns inside the supporting weft knit. The pixels exist as individual segments of electroluminescence that occur where the conductive fibers contact the EL fibers. A passive matrix addressing scheme was used to apply a voltage to each pixel individually, creating a display capable of dynamically communicating information. Optical measurements of the intensity and color of emitted light were used to quantify the performance of the display and compare it to state-of-the-art display technologies. The charge-voltage (Q-V) electrical characterization technique is used to gain information about the ACPEL fiber device operation, and mechanical tests were performed to determine the effect everyday wear and tear would have on the performance of the display. The presented textile display structure and method of producing fibers with individual sections of electroluminescence addresses the shortcomings in existing textile display technology and provides a route to directly integrated communicative textiles for applications ranging from biomedical research and monitoring to fashion. An extensive discussion of the materials and methods of production needed to scale this textile display technology and incorporate it into wearable applications is presented.
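
    The passive-matrix addressing described above can be sketched as a row-scan loop in which one EL-fiber row is driven per time slot while the conductive-fiber columns select which crossings light up; the drive functions below are stand-ins for the actual electronics, and all names and timings are assumptions.

```python
import time

# Minimal sketch of passive-matrix addressing for a 2x2 fiber grid:
# one EL-fiber row is energized at a time, and the conductive-fiber
# columns select which crossing points (pixels) emit during that slot.
frame = [[1, 0],          # desired on/off state of the 2x2 display
         [0, 1]]

def scan_frame(frame, drive_row, set_column, dwell_s=0.001):
    for r, row_state in enumerate(frame):
        drive_row(r, on=True)                 # energize EL fiber r
        for c, px in enumerate(row_state):
            set_column(c, grounded=bool(px))  # grounded column -> lit pixel
        time.sleep(dwell_s)                   # dwell time per row
        drive_row(r, on=False)

# Stand-in drivers that just log what real drive electronics would do
scan_frame(frame,
           drive_row=lambda r, on: print(f"row {r} {'ON' if on else 'off'}"),
           set_column=lambda c, grounded: print(f"  col {c} grounded={grounded}"))
```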

  4. Spatiotemporal Characteristics of Visual Localization. Phase 2.

    DTIC Science & Technology

    1987-09-30

    [OCR-garbled excerpt; recoverable fragments only.] Stimuli were presented on two Wonrac 2900 C19 black-and-white monitors with 512 pixels; the delay between presentations gave the observer time to saccade from one display to the other. The experiments investigate the role of high spatial frequencies in the localization of spectrally filtered stimuli; the remaining text is residue of Fig. 4 (Fourier transforms of the stimuli used), whose panels include Gaussian-modulated high-frequency bars and black-and-white bars.

  5. High Accuracy 3D Processing of Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Gruen, A.; Zhang, L.; Kocaman, S.

    2007-01-01

    Automatic DSM/DTM generation reproduces not only general features but also detailed features of the terrain relief, with a height accuracy of around 1 pixel in cooperative terrain: RMSE values of 1.3-1.5 m (1.0-2.0 pixels) for IKONOS and 2.9-4.6 m (0.5-1.0 pixels) for SPOT5 HRS. For 3D city modeling, the manual and semi-automatic feature extraction capability of SAT-PP provides a good basis. The tools of SAT-PP allowed stereo-measurement of points on the roofs in order to generate a 3D city model with CCM. The results show that building models with main roof structures can be successfully extracted from HRSI. As expected, more details are visible with Quickbird.

  6. Micro-valve pump light valve display

    DOEpatents

    Yeechun Lee.

    1993-01-19

    A flat panel display incorporates a plurality of micro-pump light valves (MLV's) to form pixels for recreating an image. Each MLV consists of a dielectric drop sandwiched between substrates, at least one of which is transparent, a holding electrode for maintaining the drop outside a viewing area, and a switching electrode for accelerating the drop from a location within the holding electrode to a location within the viewing area. The substrates may further define non-wetting surface areas to create potential energy barriers to assist in controlling movement of the drop. The forces acting on the drop are quadratic in nature to provide a nonlinear response for increased image contrast. A crossed electrode structure can be used to activate the pixels whereby a large flat panel display is formed without active driver components at each pixel.

  7. Micro-valve pump light valve display

    DOEpatents

    Lee, Yee-Chun

    1993-01-01

    A flat panel display incorporates a plurality of micro-pump light valves (MLV's) to form pixels for recreating an image. Each MLV consists of a dielectric drop sandwiched between substrates, at least one of which is transparent, a holding electrode for maintaining the drop outside a viewing area, and a switching electrode for accelerating the drop from a location within the holding electrode to a location within the viewing area. The substrates may further define non-wetting surface areas to create potential energy barriers to assist in controlling movement of the drop. The forces acting on the drop are quadratic in nature to provide a nonlinear response for increased image contrast. A crossed electrode structure can be used to activate the pixels whereby a large flat panel display is formed without active driver components at each pixel.

  8. City of Flagstaff Project: Ground Water Resource Evaluation, Remote Sensing Component

    USGS Publications Warehouse

    Chavez, Pat S.; Velasco, Miguel G.; Bowell, Jo-Ann; Sides, Stuart C.; Gonzalez, Rosendo R.; Soltesz, Deborah L.

    1996-01-01

    Many regions, cities, and towns in the Western United States need new or expanded water resources because of both population growth and increased development. Any tools or data that can help in the evaluation of an area's potential water resources must be considered for this increasingly critical need. Remotely sensed satellite images and subsequent digital image processing have been under-utilized in ground water resource evaluation and exploration. Satellite images can be helpful in detecting and mapping an area's regional structural patterns, including major fracture and fault systems, two important geologic settings for an area's surface to ground water relations. Within the United States Geological Survey's (USGS) Flagstaff Field Center, expertise and capabilities in remote sensing and digital image processing have been developed over the past 25 years through various programs. For the City of Flagstaff project, this expertise and these capabilities were combined with traditional geologic field mapping to help evaluate ground water resources in the Flagstaff area. Various enhancement and manipulation procedures were applied to the digital satellite images; the results, in both digital and hardcopy format, were used for field mapping and analyzing the regional structure. Relative to surface sampling, remotely sensed satellite and airborne images have improved spatial coverage that can help study, map, and monitor the earth surface at local and/or regional scales. Advantages offered by remotely sensed satellite image data include: 1. a synoptic/regional view compared to both aerial photographs and ground sampling, 2. cost effectiveness, 3. high spatial resolution and coverage compared to ground sampling, and 4. relatively high temporal coverage on a long term basis. Remotely sensed images contain both spectral and spatial information. The spectral information provides various properties and characteristics about the surface cover at a given location or pixel (that is, vegetation and/or soil type). The spatial information gives the distribution, variation, and topographic relief of the cover types from pixel to pixel. Therefore, the main characteristics that determine a pixel's brightness/reflectance and, consequently, the digital number (DN) assigned to the pixel, are the physical properties of the surface and near surface, the cover type, and the topographic slope. In this application, the ability to detect and map lineaments, especially those related to fractures and faults, is critical. Therefore, the extraction of spatial information from the digital images was of prime interest in this project. The spatial information varies among the different spectral bands available; in particular, a near infrared spectral band is better than a visible band when extracting spatial information in highly vegetated areas. In this study, both visible and near infrared bands were analyzed and used to extract the desired spatial information from the images. The wide swath coverage of remotely sensed satellite digital images makes them ideal for regional analysis and mapping. Since locating and mapping highly fractured and faulted areas is a major requirement for ground water resource evaluation and exploration this aspect of satellite images was considered critical; it allowed us to stand back (actually up about 440 miles), look at, and map the regional structural setting of the area. 
The main focus of the remote sensing and digital image processing component of this project was to use both remotely sensed digital satellite images and a Digital Elevation Model (DEM) to extract spatial information related to the structural and topographic patterns in the area. The data types used were digital satellite images collected by the United States' Landsat Thematic Mapper (TM) and French Systeme Probatoire d'Observation de laTerre (SPOT) imaging systems, along with a DEM of the Flagstaff region. The USGS Mini Image Processing Sy
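
    One common spatial-enhancement step of the kind described, directional hillshading of the DEM to emphasize structural lineaments, can be sketched as follows; this is a generic illustration with assumed parameters, not the USGS processing system used in the project.

```python
import numpy as np

def hillshade(dem, cell_size, azimuth_deg=315.0, altitude_deg=45.0):
    """Directional hillshade of a DEM; varying the illumination azimuth
    emphasizes lineaments (fractures/faults) with different strike directions."""
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Toy DEM with a diagonally trending scarp to illustrate the effect
x, y = np.meshgrid(np.arange(200), np.arange(200))
dem = 10.0 * np.tanh((x - y) / 20.0)
dem += np.random.default_rng(1).normal(0.0, 0.1, dem.shape)
print(hillshade(dem, cell_size=30.0).mean())
```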

  9. The Phase-II ATLAS ITk pixel upgrade

    NASA Astrophysics Data System (ADS)

    Terzo, S.

    2017-07-01

    The entire tracking system of the ATLAS experiment will be replaced during the LHC Phase-II shutdown (foreseen to take place around 2025) by an all-silicon detector called the "ITk" (Inner Tracker). The innermost portion of ITk will consist of a pixel detector with five layers in the barrel region and ring-shaped supports in the end-cap regions. It will be instrumented with new sensor and readout electronics technologies to improve the tracking performance and cope with the HL-LHC environment, which will be severe in terms of occupancy and radiation levels. The new pixel system could include up to 14 m2 of silicon, depending on the final layout, which is expected to be decided in 2017. Several layout options are being investigated at the moment, including some with novel inclined support structures in the barrel end-cap overlap region and others with very long innermost barrel layers. Forward coverage could be as high as |η| < 4. Supporting structures will be based on low mass, highly stable and highly thermally conductive carbon-based materials cooled by evaporative carbon dioxide circulated in thin-walled titanium pipes embedded in the structures. Planar, 3D, and CMOS sensors are being investigated to identify the optimal technology, which may be different for the various layers. The RD53 Collaboration is developing the new readout chip. The pixel off-detector readout electronics will be implemented in the framework of the general ATLAS trigger and DAQ system. A readout speed of up to 5 Gb/s per data link will be needed in the innermost layers going down to 640 Mb/s for the outermost. Because of the very high radiation level inside the detector, the first part of the transmission has to be implemented electrically, with signals converted for optical transmission at larger radii. Extensive tests are being carried out to prove the feasibility of implementing serial powering, which has been chosen as the baseline for the ITk pixel system due to the reduced material in the servicing cables foreseen for this option.

  10. The Europa Imaging System (EIS): High-Resolution, 3-D Insight into Europa's Geology, Ice Shell, and Potential for Current Activity

    NASA Astrophysics Data System (ADS)

    Turtle, E. P.; McEwen, A. S.; Collins, G. C.; Fletcher, L. N.; Hansen, C. J.; Hayes, A.; Hurford, T., Jr.; Kirk, R. L.; Barr, A.; Nimmo, F.; Patterson, G.; Quick, L. C.; Soderblom, J. M.; Thomas, N.

    2015-12-01

    The Europa Imaging System will transform our understanding of Europa through global decameter-scale coverage, three-dimensional maps, and unprecedented meter-scale imaging. EIS combines narrow-angle and wide-angle cameras (NAC and WAC) designed to address high-priority Europa science and reconnaissance goals. It will: (A) Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar; (B) Constrain formation processes of surface features and the potential for current activity by characterizing endogenic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure, and by searching for evidence of recent activity, including potential plumes; and (C) Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. The NAC provides very high-resolution, stereo reconnaissance, generating 2-km-wide swaths at 0.5-m pixel scale from 50-km altitude, and uses a gimbal to enable independent targeting. NAC observations also include: near-global (>95%) mapping of Europa at ≤50-m pixel scale (to date, only ~14% of Europa has been imaged at ≤500 m/pixel, with best pixel scale 6 m); regional and high-resolution stereo imaging at <1-m/pixel; and high-phase-angle observations for plume searches. The WAC is designed to acquire pushbroom stereo swaths along flyby ground-tracks, generating digital topographic models with 32-m spatial scale and 4-m vertical precision from 50-km altitude. These data support characterization of cross-track clutter for radar sounding. The WAC also performs pushbroom color imaging with 6 broadband filters (350-1050 nm) to map surface units and correlations with geologic features and topography. EIS will provide comprehensive data sets essential to fulfilling the goal of exploring Europa to investigate its habitability and perform collaborative science with other investigations, including cartographic and geologic maps, regional and high-resolution digital topography, GIS products, color and photometric data products, a geodetic control network tied to radar altimetry, and a database of plume-search observations.

  11. Spatiotemporal characteristics of retinal response to network-mediated photovoltaic stimulation.

    PubMed

    Ho, Elton; Smith, Richard; Goetz, Georges; Lei, Xin; Galambos, Ludwig; Kamins, Theodore I; Harris, James; Mathieson, Keith; Palanker, Daniel; Sher, Alexander

    2018-02-01

    Subretinal prostheses aim at restoring sight to patients blinded by photoreceptor degeneration using electrical activation of the surviving inner retinal neurons. Today, such implants deliver visual information with low-frequency stimulation, resulting in discontinuous visual percepts. We measured retinal responses to complex visual stimuli delivered at video rate via a photovoltaic subretinal implant and by visible light. Using a multielectrode array to record from retinal ganglion cells (RGCs) in the healthy and degenerated rat retina ex vivo, we estimated their spatiotemporal properties from the spike-triggered average responses to photovoltaic binary white noise stimulus with 70-μm pixel size at 20-Hz frame rate. The average photovoltaic receptive field size was 194 ± 3 μm (mean ± SE), similar to that of visual responses (221 ± 4 μm), but response latency was significantly shorter with photovoltaic stimulation. Both visual and photovoltaic receptive fields had an opposing center-surround structure. In the healthy retina, ON RGCs had photovoltaic OFF responses, and vice versa. This reversal is consistent with depolarization of photoreceptors by electrical pulses, as opposed to their hyperpolarization under increasing light, although alternative mechanisms cannot be excluded. In degenerate retina, both ON and OFF photovoltaic responses were observed, but in the absence of visual responses, it is not clear what functional RGC types they correspond to. Degenerate retina maintained the antagonistic center-surround organization of receptive fields. These fast and spatially localized network-mediated ON and OFF responses to subretinal stimulation via photovoltaic pixels with local return electrodes raise confidence in the possibility of providing more functional prosthetic vision. NEW & NOTEWORTHY Retinal prostheses currently in clinical use have struggled to deliver visual information at naturalistic frequencies, resulting in discontinuous percepts. We demonstrate modulation of the retinal ganglion cells (RGC) activity using complex spatiotemporal stimuli delivered via subretinal photovoltaic implant at 20 Hz in healthy and in degenerate retina. RGCs exhibit fast and localized ON and OFF network-mediated responses, with antagonistic center-surround organization of their receptive fields.
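
    The spike-triggered average used here to estimate spatiotemporal receptive fields can be sketched as follows; the array shapes, frame depth, and toy data are assumptions, not the study's recordings.

```python
import numpy as np

def spike_triggered_average(stimulus, spike_counts, depth=10):
    """Estimate a spatiotemporal receptive field as the spike-weighted average
    of the `depth` stimulus frames preceding each spike.

    stimulus     : (n_frames, ny, nx) binary white-noise frames
    spike_counts : (n_frames,) spikes of one RGC binned at the frame rate
    """
    n_frames = stimulus.shape[0]
    sta = np.zeros((depth,) + stimulus.shape[1:])
    total = 0
    for t in range(depth, n_frames):
        if spike_counts[t] > 0:
            sta += spike_counts[t] * stimulus[t - depth:t]
            total += spike_counts[t]
    return sta / max(total, 1)

# Toy data: binary frames flickering at the frame rate, Poisson spike counts
rng = np.random.default_rng(0)
stim = rng.integers(0, 2, size=(2000, 16, 16)).astype(float)
spikes = rng.poisson(0.5, size=2000)
print(spike_triggered_average(stim, spikes).shape)   # (10, 16, 16)
```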

  12. VK-phantom male with 583 structures and female with 459 structures, based on the sectioned images of a male and a female, for computational dosimetry

    PubMed Central

    Park, Jin Seo; Jung, Yong Wook; Choi, Hyung-Do; Lee, Ae-Kyoung

    2018-01-01

    Abstract The anatomical structures in most phantoms are classified according to tissue properties rather than according to their detailed structures, because the tissue properties, not the detailed structures, are what is considered important. However, if a phantom does not have detailed structures, the phantom will be unreliable because different tissues can be regarded as the same. Thus, we produced the Visible Korean (VK) -phantoms with detailed structures (male, 583 structures; female, 459 structures) based on segmented images of the whole male body (interval, 1.0 mm; pixel size, 1.0 mm2) and the whole female body (interval, 1.0 mm; pixel size, 1.0 mm2), using house-developed software to analyze the text string and voxel information for each of the structures. The density of each structure in the VK-phantom was calculated based on Virtual Population and a publication of the International Commission on Radiological Protection. In the future, we will standardize the size of each structure in the VK-phantoms. If the VK-phantoms are standardized and the mass density of each structure is precisely known, researchers will be able to measure the exact absorption rate of electromagnetic radiation in specific organs and tissues of the whole body. PMID:29659988

  13. VK-phantom male with 583 structures and female with 459 structures, based on the sectioned images of a male and a female, for computational dosimetry.

    PubMed

    Park, Jin Seo; Jung, Yong Wook; Choi, Hyung-Do; Lee, Ae-Kyoung

    2018-05-01

    The anatomical structures in most phantoms are classified according to tissue properties rather than according to their detailed structures, because the tissue properties, not the detailed structures, are what is considered important. However, if a phantom does not have detailed structures, the phantom will be unreliable because different tissues can be regarded as the same. Thus, we produced the Visible Korean (VK) -phantoms with detailed structures (male, 583 structures; female, 459 structures) based on segmented images of the whole male body (interval, 1.0 mm; pixel size, 1.0 mm2) and the whole female body (interval, 1.0 mm; pixel size, 1.0 mm2), using house-developed software to analyze the text string and voxel information for each of the structures. The density of each structure in the VK-phantom was calculated based on Virtual Population and a publication of the International Commission on Radiological Protection. In the future, we will standardize the size of each structure in the VK-phantoms. If the VK-phantoms are standardized and the mass density of each structure is precisely known, researchers will be able to measure the exact absorption rate of electromagnetic radiation in specific organs and tissues of the whole body.

  14. Numerical simulation of the modulation transfer function (MTF) in infrared focal plane arrays: simulation methodology and MTF optimization

    NASA Astrophysics Data System (ADS)

    Schuster, J.

    2018-02-01

    Military requirements demand both single and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC) with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing sophistication of the pixel arrays in these FPAs, sophisticated modeling techniques are required to predict, understand, and benchmark the pixel array MTF that contributes to the total imaging system MTF. To model the pixel array MTF, computationally exhaustive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk) as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF will be given and the simulation approach will be discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) will be discussed.
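
    The decomposition idea can be illustrated with a minimal sketch in which the detector MTF is treated as the product of the geometric pixel-aperture MTF and a lateral-diffusion (crosstalk) term; the Gaussian diffusion model and all parameter values below are assumptions for illustration, not the paper's numerical simulation.

```python
import numpy as np

pitch = 15e-3                                 # pixel pitch in mm (assumed)
f = np.linspace(0.0, 1.0 / (2 * pitch), 200)  # up to the Nyquist frequency (cy/mm)

# Geometric MTF of a 100% fill-factor square pixel aperture: |sinc(f * p)|
mtf_aperture = np.abs(np.sinc(f * pitch))     # np.sinc(x) = sin(pi x) / (pi x)

# Illustrative lateral-diffusion (crosstalk) term modeled as a Gaussian roll-off
L_diff = 8e-3                                 # effective diffusion length in mm (assumed)
mtf_diffusion = np.exp(-2.0 * (np.pi * f * L_diff) ** 2)

# Constituent contributions multiply to give the total detector MTF
mtf_total = mtf_aperture * mtf_diffusion
print(f"MTF at Nyquist: {mtf_total[-1]:.3f} "
      f"(aperture {mtf_aperture[-1]:.3f} x diffusion {mtf_diffusion[-1]:.3f})")
```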

  15. Amorphous selenium direct detection CMOS digital x-ray imager with 25 micron pixel pitch

    NASA Astrophysics Data System (ADS)

    Scott, Christopher C.; Abbaszadeh, Shiva; Ghanbarzadeh, Sina; Allan, Gary; Farrier, Michael; Cunningham, Ian A.; Karim, Karim S.

    2014-03-01

    We have developed a high resolution amorphous selenium (a-Se) direct detection imager using a large-area compatible back-end fabrication process on top of a CMOS active pixel sensor having 25 micron pixel pitch. Integration of a-Se with CMOS technology requires overcoming CMOS/a-Se interfacial strain, which initiates nucleation of crystalline selenium and results in high detector dark currents. A CMOS-compatible polyimide buffer layer was used to planarize the backplane and provide a low stress and thermally stable surface for a-Se. The buffer layer inhibits crystallization and provides detector stability that is not only a performance factor but also critical for favorable long term cost-benefit considerations in the application of CMOS digital x-ray imagers in medical practice. The detector structure is comprised of a polyimide (PI) buffer layer, the a-Se layer, and a gold (Au) top electrode. The PI layer is applied by spin-coating and is patterned using dry etching to open the backplane bond pads for wire bonding. Thermal evaporation is used to deposit the a-Se and Au layers, and the detector is operated in hole collection mode (i.e. a positive bias on the Au top electrode). High resolution a-Se diagnostic systems typically use 70 to 100 μm pixel pitch and have a pre-sampling modulation transfer function (MTF) that is significantly limited by the pixel aperture. Our results confirm that, for a densely integrated 25 μm pixel pitch CMOS array, the MTF approaches the fundamental material limit, i.e. where the MTF begins to be limited by the a-Se material properties and not the pixel aperture. Preliminary images demonstrating high spatial resolution have been obtained from a first prototype imager.

  16. Digital radiography using amorphous selenium: photoconductively activated switch (PAS) readout system.

    PubMed

    Reznik, Nikita; Komljenovic, Philip T; Germann, Stephen; Rowlands, John A

    2008-03-01

    A new amorphous selenium (a-Se) digital radiography detector is introduced. The proposed detector generates a charge image in the a-Se layer in a conventional manner, which is stored on electrode pixels at the surface of the a-Se layer. A novel method, called photoconductively activated switch (PAS), is used to read out the latent x-ray charge image. The PAS readout method uses lateral photoconduction at the a-Se surface which is a revolutionary modification of the bulk photoinduced discharge (PID) methods. The PAS method addresses and eliminates the fundamental weaknesses of the PID methods--long readout times and high readout noise--while maintaining the structural simplicity and high resolution for which PID optical readout systems are noted. The photoconduction properties of the a-Se surface were investigated and the geometrical design for the electrode pixels for a PAS radiography system was determined. This design was implemented in a single pixel PAS evaluation system. The results show that the PAS x-ray induced output charge signal was reproducible and depended linearly on the x-ray exposure in the diagnostic exposure range. Furthermore, the readout was reasonably rapid (10 ms for pixel discharge). The proposed detector allows readout of half a pixel row at a time (odd pixels followed by even pixels), thus permitting the readout of a complete image in 30 s for a 40 cm x 40 cm detector with the potential of reducing that time by using greater readout light intensity. This demonstrates that a-Se based x-ray detectors using photoconductively activated switches could form a basis for a practical integrated digital radiography system.

  17. Superpixel-Augmented Endmember Detection for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Castano, Rebecca; Gilmore, Martha

    2011-01-01

    Superpixels are homogeneous image regions comprised of several contiguous pixels. They are produced by shattering the image into contiguous, homogeneous regions that each cover between 20 and 100 image pixels. The segmentation aims for a many-to-one mapping from superpixels to image features; each image feature could contain several superpixels, but each superpixel occupies no more than one image feature. This conservative segmentation is relatively easy to automate in a robust fashion. Superpixel processing is related to the more general idea of improving hyperspectral analysis through spatial constraints, which can recognize subtle features at or below the level of noise by exploiting the fact that their spectral signatures are found in neighboring pixels. Recent work has explored spatial constraints for endmember extraction, showing significant advantages over techniques that ignore pixels' relative positions. Methods such as AMEE (automated morphological endmember extraction) express spatial influence using fixed isometric relationships such as a local square window or Euclidean distance in pixel coordinates. In other words, two pixels' covariances are based on their spatial proximity, but are independent of their absolute location in the scene. These isometric spatial constraints are most appropriate when spectral variation is smooth and constant over the image. Superpixels are simple to implement, efficient to compute, and are empirically effective. They can be used as a preprocessing step with any desired endmember extraction technique. Superpixels also have a solid theoretical basis in the hyperspectral linear mixing model, making them a principled approach for improving endmember extraction. Unlike existing approaches, superpixels can accommodate non-isometric covariance between image pixels (characteristic of discrete image features separated by step discontinuities). These kinds of image features are common in natural scenes. Analysts can substitute superpixels for image pixels during endmember analysis that leverages the spatial contiguity of scene features to enhance subtle spectral features. Superpixels define populations of image pixels that are independent samples from each image feature, permitting robust estimation of spectral properties, and reducing measurement noise in proportion to the area of the superpixel. This permits improved endmember extraction, and enables automated search for novel and constituent minerals in very noisy, hyperspatial images. This innovation begins with a graph-based segmentation based on the work of Felzenszwalb et al., but then expands their approach to the hyperspectral image domain with a Euclidean distance metric. Then, the mean spectrum of each segment is computed, and the resulting data cloud is used as input into sequential maximum angle convex cone (SMACC) endmember extraction.
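
    A minimal sketch of substituting superpixels for pixels before endmember extraction: a Felzenszwalb-style graph-based segmentation of a band composite, followed by the mean spectrum of each segment. The SMACC step itself is not shown, and the composite and parameter choices are assumptions.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def superpixel_mean_spectra(cube, scale=50, sigma=0.8, min_size=20):
    """cube : (rows, cols, bands) hyperspectral image.
    Returns (labels, mean_spectra): each superpixel is represented by the
    mean spectrum of its member pixels before endmember extraction."""
    # Segment a 3-band composite (first three bands here, purely for illustration)
    composite = cube[..., :3] / cube[..., :3].max()
    labels = felzenszwalb(composite, scale=scale, sigma=sigma, min_size=min_size)
    n_seg = labels.max() + 1
    flat = cube.reshape(-1, cube.shape[-1])
    lab = labels.ravel()
    means = np.zeros((n_seg, cube.shape[-1]))
    for s in range(n_seg):
        means[s] = flat[lab == s].mean(axis=0)
    return labels, means

cube = np.random.default_rng(0).random((60, 60, 50))
labels, spectra = superpixel_mean_spectra(cube)
print(labels.max() + 1, spectra.shape)
```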

  18. Acquisition of STEM Images by Adaptive Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash

    Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of “most informative” pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of Peak Signal-to-Noise Ratio (PSNR) using these three different active learning methods at different percentages of sampled pixels. At the 20% level, all three active learning methods underperform the original CS without active learning. However, they all beat the original CS as more of the “most informative” pixels are sampled. One can also argue that CS equipped with active learning requires fewer sampled pixels to achieve the same value of PSNR than CS with pixels randomly sampled, since all three PSNR curves with active learning grow at a faster pace than that without active learning. For this particular STEM image, by observing the reconstructed images and the sensing masks, we find that while the method based on the RBF kernel acquires samples more uniformly, the one based on entropy samples more areas of significant change, thus less uniformly. The KL-divergence method performs the best in terms of reconstruction error (PSNR) for this example [8].
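
    One of the three selection metrics, local variance of the partial reconstruction, can be sketched as follows for choosing the next batch of pixels to acquire; the BPFA reconstruction itself is not shown, and the window size and batch fraction are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def next_pixels_by_variance(reconstruction, measured_mask, batch_frac=0.01, win=5):
    """Rank unmeasured pixels by the local variance of the current partial
    reconstruction and return a mask of the next batch to acquire."""
    mean = uniform_filter(reconstruction, size=win)
    mean_sq = uniform_filter(reconstruction ** 2, size=win)
    local_var = np.maximum(mean_sq - mean ** 2, 0.0)
    local_var[measured_mask] = -np.inf            # never re-measure a pixel
    n_new = int(batch_frac * reconstruction.size)
    idx = np.argpartition(local_var.ravel(), -n_new)[-n_new:]
    new_mask = np.zeros_like(measured_mask)
    new_mask.ravel()[idx] = True
    return new_mask

recon = np.random.default_rng(0).random((128, 128))
mask = np.random.default_rng(1).random((128, 128)) < 0.10   # initial 10% sample
print(next_pixels_by_variance(recon, mask).sum())           # ~1% of pixels selected
```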

  19. Indium-bump-free antimonide superlattice membrane detectors on silicon substrates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamiri, M., E-mail: mzamiri@chtm.unm.edu, E-mail: skrishna@chtm.unm.edu; Klein, B.; Schuler-Sandy, T.

    2016-02-29

    We present an approach to realize antimonide superlattices on silicon substrates without using conventional Indium-bump hybridization. In this approach, PIN superlattices are grown on top of a 60 nm Al0.6Ga0.4Sb sacrificial layer on a GaSb host substrate. Following the growth, the individual pixels are transferred using our epitaxial-lift off technique, which consists of a wet-etch to undercut the pixels followed by a dry-stamp process to transfer the pixels to a silicon substrate prepared with a gold layer. Structural and optical characterization of the transferred pixels was done using an optical microscope, scanning electron microscopy, and photoluminescence. The interface between the transferred pixels and the new substrate was abrupt, and no significant degradation in the optical quality was observed. An Indium-bump-free membrane detector was then fabricated using this approach. Spectral response measurements provided a 100% cut-off wavelength of 4.3 μm at 77 K. The performance of the membrane detector was compared to a control detector on the as-grown substrate. The membrane detector was limited by surface leakage current. The proposed approach could pave the way for wafer-level integration of photonic detectors on silicon substrates, which could dramatically reduce the cost of these detectors.

  20. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout †

    PubMed Central

    Ni, Yang

    2018-01-01

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout. PMID:29443903

  1. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout.

    PubMed

    Ni, Yang

    2018-02-14

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout.

  2. The NUC and blind pixel eliminating in the DTDI application

    NASA Astrophysics Data System (ADS)

    Su, Xiao Feng; Chen, Fan Sheng; Pan, Sheng Da; Gong, Xue Yi; Dong, Yu Cui

    2013-12-01

    Because infrared CMOS digital TDI (time delay and integration) has a simple structure, excellent performance, and flexible operation, it has been adopted in a growing number of applications. Owing to limitations of the production process, the focal plane array of the infrared detector exhibits large non-uniformity (NU) and a certain blind-pixel rate; both raise the noise and degrade TDI performance. In this paper, the elements with the largest impact on system performance are analyzed: the NU of the optical system, the NU of the plane array, and the blind pixels in the plane array. A practical algorithm that accounts for background removal and the linear response model of the infrared detector is used for the non-uniformity correction (NUC) when the detector array operates as a digital TDI. To eliminate the impact of blind pixels, a surplus-pixel method is introduced; with this method the SNR (signal-to-noise ratio) is improved while the spatial and temporal resolution remain unchanged. Finally, an MWIR (medium-wave infrared) detector is used in an experiment, and the results demonstrate the effectiveness of the method.
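
    A minimal sketch of a two-point (gain/offset) non-uniformity correction under the linear response model, followed by blind-pixel replacement with a neighborhood median; the calibration flux levels and the blind-pixel criterion are assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter

def two_point_nuc(frame, low_ref, high_ref, target_low, target_high):
    """Per-pixel gain/offset correction derived from two uniform calibration
    frames (linear detector response model)."""
    gain = (target_high - target_low) / (high_ref - low_ref)
    offset = target_low - gain * low_ref
    return gain * frame + offset

def replace_blind_pixels(frame, blind_mask):
    """Substitute blind pixels with the median of their 3x3 neighbourhood."""
    return np.where(blind_mask, median_filter(frame, size=3), frame)

# Synthetic detector with per-pixel gain/offset non-uniformity
rng = np.random.default_rng(0)
gain_true = rng.normal(1.0, 0.05, (64, 64))
offset_true = rng.normal(0.0, 20.0, (64, 64))
scene = 500.0 + 50.0 * rng.random((64, 64))
raw = gain_true * scene + offset_true

low_ref = gain_true * 300.0 + offset_true     # uniform source at low flux level
high_ref = gain_true * 800.0 + offset_true    # uniform source at high flux level
corrected = two_point_nuc(raw, low_ref, high_ref, 300.0, 800.0)

blind = np.abs(gain_true - 1.0) > 0.12        # crude blind-pixel criterion (assumed)
clean = replace_blind_pixels(corrected, blind)
print(np.abs(clean - scene).mean())           # residual error after NUC
```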

  3. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256x256 square pixels arranged with a 55 μm pitch (sensitive area 14.08x14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5 × 10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3 × 10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s with an injected activity of 26 MBq 99mTc and prior localization with standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter distances from the patient skin.

  4. Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique

    NASA Technical Reports Server (NTRS)

    Wargo, M. J.; Witt, A. F.

    1992-01-01

    A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.

  5. Multi-frame partially saturated images blind deconvolution

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-12-01

    When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we handle image regions affected by saturated pixels separately by modeling a weighting matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that the restored images have richer details and fewer negative effects compared to state-of-the-art methods.

  6. Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes

    PubMed Central

    Berhane, Tedros M.; Lane, Charles R.; Wu, Qiusheng; Anenkhonov, Oleg A.; Chepinoga, Victor V.; Autrey, Bradley C.; Liu, Hongxing

    2018-01-01

    Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar’s chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection—which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes. PMID:29707381
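
    The pixel-based random forest step can be sketched as follows on a three-layer stack; the layers mirror those named in the abstract, but the data, class count, and training labels here are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Three-layer stack per pixel: e.g. Quickbird band 3, WRI, and mean texture
rows, cols = 100, 100
stack = rng.random((rows, cols, 3))

# Field-based regions of interest provide labelled training pixels (synthetic here)
train_idx = rng.choice(rows * cols, size=500, replace=False)
X_train = stack.reshape(-1, 3)[train_idx]
y_train = rng.integers(0, 5, size=500)          # 5 wetland/upland classes (assumed)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

# Classify every pixel and reshape back to the image grid
class_map = rf.predict(stack.reshape(-1, 3)).reshape(rows, cols)
print(class_map.shape, np.bincount(class_map.ravel()))
```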

  7. Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes.

    PubMed

    Berhane, Tedros M; Lane, Charles R; Wu, Qiusheng; Anenkhonov, Oleg A; Chepinoga, Victor V; Autrey, Bradley C; Liu, Hongxing

    2018-01-01

    Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km 2 ) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar's chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection-which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes.

  8. Discovery of Finely Structured Dynamic Solar Corona Observed in the Hi-C Telescope

    NASA Technical Reports Server (NTRS)

    Winebarger, A.; Cirtain, J.; Golub, L.; DeLuca, E.; Savage, S.; Alexander, C.; Schuler, T.

    2014-01-01

    In the summer of 2012, the High-resolution Coronal Imager (Hi-C) flew aboard a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore examine how the intensity scales from AIA resolution to Hi-C resolution. For each low-resolution pixel, we calculate the standard deviation in the contributing high-resolution pixel intensities and compare that to the expected standard deviation calculated from the noise. If these numbers are approximately equal, the corona can be assumed to be smoothly varying, i.e. have no evidence of substructure in the Hi-C image to within Hi-C's ability to measure it given its throughput and readout noise. A standard deviation much larger than the noise value indicates the presence of substructure. We calculate these values for each low-resolution pixel for each frame of the Hi-C data. On average, 70 percent of the pixels in each Hi-C image show no evidence of substructure. The locations where substructure is prevalent are the moss regions and regions of sheared magnetic field. We also find that the level of substructure varies significantly over the roughly 160 s of the Hi-C data analyzed here. This result indicates that the finely structured corona is concentrated in regions of heating and is highly time dependent.
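
    The substructure test can be sketched as follows: for each low-resolution pixel, compare the standard deviation of the contributing high-resolution pixels with the standard deviation expected from noise alone. The binning factor, noise model, and threshold below are assumptions for illustration.

```python
import numpy as np

def substructure_mask(hi_res, bin_factor=6, read_noise=20.0, gain=1.0, k=2.0):
    """Flag low-resolution pixels whose high-resolution scatter exceeds the
    noise-only expectation by a factor k (evidence of unresolved substructure)."""
    ny, nx = hi_res.shape
    blocks = hi_res[:ny - ny % bin_factor, :nx - nx % bin_factor].reshape(
        ny // bin_factor, bin_factor, nx // bin_factor, bin_factor)
    measured_std = blocks.std(axis=(1, 3))
    mean_signal = blocks.mean(axis=(1, 3))
    expected_std = np.sqrt(gain * mean_signal + read_noise ** 2)   # shot + readout
    return measured_std > k * expected_std

# Pure-noise test image: almost no pixels should be flagged as substructured
img = np.random.default_rng(0).poisson(400.0, size=(600, 600)).astype(float)
mask = substructure_mask(img)
print(f"{100 * mask.mean():.1f}% of low-res pixels show substructure")
```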

  9. DISCOVERY OF FINELY STRUCTURED DYNAMIC SOLAR CORONA OBSERVED IN THE Hi-C TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winebarger, Amy R.; Cirtain, Jonathan; Savage, Sabrina

    In the summer of 2012, the High-resolution Coronal Imager (Hi-C) flew on board a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore examine how the intensity scales from AIA resolution to Hi-C resolution. For each low-resolution pixel, we calculate the standard deviation in the contributing high-resolution pixel intensities and compare that to the expected standard deviation calculated from the noise. If these numbers are approximately equal, the corona can be assumed to be smoothly varying, i.e., have no evidence of substructure in the Hi-C image to within Hi-C's ability to measure it given its throughput and readout noise. A standard deviation much larger than the noise value indicates the presence of substructure. We calculate these values for each low-resolution pixel for each frame of the Hi-C data. On average, 70% of the pixels in each Hi-C image show no evidence of substructure. The locations where substructure is prevalent are the moss regions and regions of sheared magnetic field. We also find that the level of substructure varies significantly over the roughly 160 s of the Hi-C data analyzed here. This result indicates that the finely structured corona is concentrated in regions of heating and is highly time dependent.

  10. Physically-based parameterization of spatially variable soil and vegetation using satellite multispectral data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1989-01-01

    A stochastic-geometric landsurface reflectance model is formulated and tested for the parameterization of spatially variable vegetation and soil at subpixel scales using satellite multispectral images without ground truth. Landscapes are conceptualized as 3-D Lambertian reflecting surfaces consisting of plant canopies, represented by solid geometric figures, superposed on a flat soil background. A computer simulation program is developed to investigate image characteristics at various spatial aggregations representative of satellite observational scales, or pixels. The evolution of the shape and structure of the red-infrared space, or scattergram, of typical semivegetated scenes is investigated by sequentially introducing model variables into the simulation. The analytical moments of the total pixel reflectance, including the mean, variance, spatial covariance, and cross-spectral covariance, are derived in terms of the moments of the individual fractional cover and reflectance components. The moments are applied to the solution of the inverse problem: The estimation of subpixel landscape properties on a pixel-by-pixel basis, given only one multispectral image and limited assumptions on the structure of the landscape. The landsurface reflectance model and inversion technique are tested using actual aerial radiometric data collected over regularly spaced pecan trees, and using both aerial and LANDSAT Thematic Mapper data obtained over discontinuous, randomly spaced conifer canopies in a natural forested watershed. Different amounts of solar backscattered diffuse radiation are assumed and the sensitivity of the estimated landsurface parameters to those amounts is examined.
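
    The inverse problem for a single pixel can be illustrated with a two-component linear mixing sketch in red/NIR space; the endmember reflectances are assumptions, and the paper's full stochastic-geometric treatment (canopy geometry, shadowing, diffuse radiation) is not captured here.

```python
import numpy as np

# Assumed endmember reflectances in (red, NIR): bare soil and full canopy
R_soil = np.array([0.25, 0.30])
R_veg = np.array([0.05, 0.45])

def vegetation_fraction(pixel_reflectance):
    """Least-squares estimate of subpixel canopy cover f from
    R_pixel = f * R_veg + (1 - f) * R_soil."""
    d = R_veg - R_soil
    f = np.dot(pixel_reflectance - R_soil, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

# A pixel that is 40% canopy by construction is recovered correctly
mixed = 0.4 * R_veg + 0.6 * R_soil
print(vegetation_fraction(mixed))   # ~0.4
```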

  11. Pixelated filters for spatial imaging

    NASA Astrophysics Data System (ADS)

    Mathieu, Karine; Lequime, Michel; Lumeau, Julien; Abel-Tiberini, Laetitia; Savin De Larclause, Isabelle; Berthon, Jacques

    2015-10-01

    Small satellites are often used by space agencies to meet scientific space mission requirements. Their payloads are composed of various instruments that collect an increasing amount of data while respecting growing constraints on volume and mass, so small integrated cameras have taken a favored place among these instruments. To provide scene-specific color information, pixelated filters appear more attractive than filter wheels. The work presented here, in collaboration with Institut Fresnel, deals with the manufacturing of this kind of component, based on thin-film technologies and photolithography processes. CCD detectors with a pixel pitch of about 30 μm were considered. In the configuration where the matrix filters are positioned closest to the detector, the matrix filters are composed of 2x2 macro-pixels (i.e., 4 filters). These 4 filters have a bandwidth of about 40 nm and are centered at 550, 700, 770 and 840 nm, respectively, with a specified rejection rate defined over the visible spectral range [500-900 nm]. After an intensive design step, 4 thin-film structures were elaborated with a maximum thickness of 5 μm. A series of tests allowed us to choose the optimal micro-structuring parameters. The 100x100 matrix filter prototypes were successfully manufactured with lift-off and ion-assisted deposition processes. Detailed spatial and spectral characterization with a dedicated metrology bench showed that the initial specifications and simulations were globally met. These excellent performances remove the technological barriers to high-end integrated specific multispectral imaging.

  12. Comparisons of evening and morning SMOS passes over the Midwest United States

    USDA-ARS?s Scientific Manuscript database

    This study investigates differences in the soil moisture product and brightness temperatures between the 6 pm and 6 am local solar time SMOS passes for a region in north-central Iowa. This region consists of 69 SMOS pixels and has uniform land cover, consisting of maize and soybean row crops. ...

  13. Automated Ki-67 Quantification of Immunohistochemical Staining Image of Human Nasopharyngeal Carcinoma Xenografts.

    PubMed

    Shi, Peng; Zhong, Jing; Hong, Jinsheng; Huang, Rongfang; Wang, Kaijun; Chen, Yunbin

    2016-08-26

    Nasopharyngeal carcinoma (NPC) is a malignant neoplasm with high incidence in China and Southeast Asia. The Ki-67 protein is closely associated with cell proliferation and the degree of malignancy. Cells with higher Ki-67 expression are sensitive to chemotherapy and radiotherapy, so its assessment is beneficial to NPC treatment. Automatically analyzing immunohistochemical Ki-67-stained nasopharyngeal carcinoma images remains challenging because of the uneven color distributions across different cell types. To address this problem, an automated image processing pipeline based on clustering of local correlation features is proposed in this paper. Unlike traditional morphology-based methods, our algorithm segments cells by classifying image pixels on the basis of local pixel correlations in specifically selected color spaces, and then characterizes cells with a set of grading criteria for reference in pathological analysis. Experimental results showed high accuracy and robustness in nucleus segmentation despite variance in the image data. The quantitative indicators obtained in this study provide reliable evidence for the analysis of Ki-67-stained nasopharyngeal carcinoma microscopic images, which should be helpful in related histopathological research.
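
    The pipeline's core step, classifying pixels by clustering local correlation features, might be sketched as follows (a sketch only: the specific correlation feature, the chosen colour spaces, the window size and the number of clusters are assumptions, since the abstract does not specify them; scikit-learn and SciPy are used for convenience).

      import numpy as np
      from scipy.ndimage import uniform_filter
      from sklearn.cluster import KMeans

      def local_correlation_feature(channel, size=5):
          """Local Pearson-like correlation between a pixel's neighbourhood and
          the same neighbourhood shifted by one pixel; a simple stand-in for
          the paper's local correlation features (edges wrap around)."""
          shifted = np.roll(channel, 1, axis=0)
          mean_a = uniform_filter(channel, size)
          mean_b = uniform_filter(shifted, size)
          cov = uniform_filter(channel * shifted, size) - mean_a * mean_b
          var_a = uniform_filter(channel ** 2, size) - mean_a ** 2
          var_b = uniform_filter(shifted ** 2, size) - mean_b ** 2
          return cov / np.sqrt(np.clip(var_a * var_b, 1e-8, None))

      def cluster_pixels(image_channels, n_clusters=3):
          """Cluster pixels (e.g. into DAB-positive nuclei, other nuclei and
          background) from stacked local-correlation features.
          image_channels: list of 2-D float arrays (selected colour components)."""
          feats = np.stack([local_correlation_feature(c) for c in image_channels],
                           axis=-1)
          labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
              feats.reshape(-1, feats.shape[-1]))
          return labels.reshape(image_channels[0].shape)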

  14. Remote sensing image stitch using modified structure deformation

    NASA Astrophysics Data System (ADS)

    Pan, Ke-cheng; Chen, Jin-wei; Chen, Yueting; Feng, Huajun

    2012-10-01

    To stitch remote sensing images seamlessly without producing the visual artifacts caused by severe intensity discrepancy and structure misalignment, we modify the original structure-deformation-based stitching algorithm, which has two main problems. First, using the Poisson equation to propagate deformation vectors changes the topological relationship between the key points and their surrounding pixels, which may introduce incorrect image characteristics. Second, the diffusion area of the sparse matrix is too limited to rectify the global intensity discrepancy. To solve the first problem, we adopt a spring-mass model and introduce an external force to preserve the topological relationship between key points and their surrounding pixels. To solve the second problem, we apply a tensor voting algorithm to obtain the global intensity correspondence curve of the two images. Both simulated and experimental results show that our algorithm is faster and achieves better results than the original algorithm.

  15. Star sub-pixel centroid calculation based on multi-step minimum energy difference method

    NASA Astrophysics Data System (ADS)

    Wang, Duo; Han, YanLi; Sun, Tengfei

    2013-09-01

    The star centroid plays a vital role in celestial navigation. Star images acquired during daytime have a low SNR because of the strong sky background, and the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods such as the moment method and the weighted centroid method are simple but have large errors, especially at low SNR; the Gaussian method has high positioning accuracy but is computationally expensive. Based on an analysis of the energy distribution in star images, a localization method for star target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the candidate centroid area and subdivides the pixels in that area with a certain number of interpolations. It then exploits the symmetry of the stellar energy distribution: each candidate pixel is tentatively assumed to be the star centroid, the difference between the sums of energy on either side of it is computed over an equal step length (which can be chosen according to conditions; this paper uses 9) along each symmetric direction (here the transverse and longitudinal directions), and the centroid position along that direction is taken where the minimum difference occurs. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it performs well at low SNR. The method was also applied to a star map acquired at a fixed observation site during daytime in the near-infrared band; comparing the results with the known positions of the stars shows that the multi-step minimum energy difference method achieves good performance.
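
    A minimal Python sketch of the symmetric energy-difference search described above (pixel-level only; the ROI-based formulation, the use of numpy and all names are assumptions). Sub-pixel accuracy would be obtained, as in the paper, by first interpolating the region of interest onto a finer grid (for example with scipy.ndimage.zoom) and running the same search on the up-sampled profiles.

      import numpy as np

      def min_energy_difference_centroid(roi, step=9):
          """Locate a star centroid by finding, along each axis, the position
          where the energy summed over `step` pixels on either side of the
          candidate is most nearly symmetric."""
          def axis_centroid(profile):
              best, best_diff = None, np.inf
              for i in range(step, len(profile) - step):
                  left = profile[i - step:i].sum()
                  right = profile[i + 1:i + 1 + step].sum()
                  diff = abs(left - right)
                  if diff < best_diff:
                      best, best_diff = i, diff
              return best
          col_profile = roi.sum(axis=0)   # energy summed over rows -> x profile
          row_profile = roi.sum(axis=1)   # energy summed over columns -> y profile
          return axis_centroid(row_profile), axis_centroid(col_profile)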

  16. An approach of surface coal fire detection from ASTER and Landsat-8 thermal data: Jharia coal field, India

    NASA Astrophysics Data System (ADS)

    Roy, Priyom; Guha, Arindam; Kumar, K. Vinod

    2015-07-01

    Radiant temperature images from thermal remote sensing sensors are used to delineate surface coal fires by deriving a cut-off temperature that separates coal-fire from non-fire pixels. The temperature contrast between coal fires and background elements (rocks, vegetation, etc.) controls this cut-off temperature. This contrast varies across the coal field, as it is influenced by the variability of the associated rock types, the proportion of vegetation cover, and the intensity of the coal fires. We delineated coal fires from the background based on the separation of data clusters in a maximum versus mean radiant temperature scatter-plot (band 13 of ASTER and band 10 of Landsat-8), derived using randomly distributed homogeneous pixel blocks (9 × 9 pixels for ASTER and 27 × 27 pixels for Landsat-8) covering the entire coal-bearing geological formation. For both datasets, the overall temperature variability of background and fires can be addressed using this regional cut-off. However, the summer-time ASTER data could not delineate fire pixels for one specific mine (Bhulanbararee), whereas the winter-time Landsat-8 data could. For this mine, the radiant temperature contrast between fire and background terrain elements differs from the regional contrast during summer, because stronger solar heating of the background rocky outcrops reduces their temperature contrast with the fires. A mine-specific cut-off temperature, derived by reducing the pixel-block size of the temperature data, was therefore determined to extract this fire; it differs from the regional cut-off. In summary, the summer-time ASTER image is useful for fire detection but requires additional processing to determine a local threshold alongside the regional threshold in order to capture all the fires, whereas the winter Landsat-8 data performed better for fire detection with a regional threshold alone.
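
    A minimal numpy sketch of the pixel-block statistics used to build the maximum-versus-mean radiant temperature scatter-plot (a regular tiling replaces the random placement of blocks, and the simple thresholding helper and all names are assumptions):

      import numpy as np

      def block_max_mean(temperature, block=9):
          """Maximum and mean radiant temperature of non-overlapping
          block x block pixel blocks (9 x 9 for ASTER, 27 x 27 for Landsat-8
          in the paper)."""
          rows = (temperature.shape[0] // block) * block
          cols = (temperature.shape[1] // block) * block
          t = temperature[:rows, :cols].reshape(rows // block, block,
                                                cols // block, block)
          return t.max(axis=(1, 3)), t.mean(axis=(1, 3))

      def fire_mask(temperature, cutoff):
          """Pixels warmer than the regional (or, where needed, local) cut-off."""
          return temperature > cutoff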

  17. Energy-selective Neutron Imaging for Three-dimensional Non-destructive Probing of Crystalline Structures

    NASA Astrophysics Data System (ADS)

    Peetermans, S.; Bopp, M.; Vontobel, P.; Lehmann, E. H.

    Common neutron imaging uses the full polychromatic neutron beam spectrum to reveal the material distribution in a non-destructive way. Performing it with a reduced energy band, i.e. energy-selective neutron imaging, allows access to local variation in sample crystallographic properties. Two sample categories can be discerned with different energy responses. Polycrystalline materials have an energy-dependent cross-section featuring Bragg edges. Energy-selective neutron imaging can be used to distinguish between crystallographic phases, increase material sensitivity or penetration, improve quantification etc. An example of the latter is shown by the examination of copper discs prior to machining them into linear accelerator cavity structures. The cross-section of single crystals features distinct Bragg peaks. Based on their pattern, one can determine the orientation of the crystal, as in a Laue pattern, but with the tremendous advantage that the operation can be performed for each pixel, yielding crystal orientation maps at high spatial resolution. A wholly different method to investigate such samples is also introduced: neutron diffraction imaging. It is based on projections formed by neutrons diffracted from the crystal lattice out of the direct beam. The position of these projections on the detector gives information on the crystal orientation. The projection itself can be used to reconstruct the crystal shape. A three-dimensional mapping of local Bragg reflectivity or a grain orientation mapping can thus be obtained.

  18. Weighted image de-fogging using luminance dark prior

    NASA Astrophysics Data System (ADS)

    Kansal, Isha; Kasana, Singara Singh

    2017-10-01

    In this work, the weighted image de-fogging process based on the dark channel prior is modified by using a luminance dark prior. The dark channel prior estimates the transmission using all three colour channels, whereas the luminance dark prior does the same using only the Y component of the YUV colour space. For each pixel in a patch of size ?, the luminance dark prior uses ? pixels rather than the ? pixels used in the DCP technique, which speeds up the de-fogging process. To estimate the transmission map, a weighted approach based on a difference prior is used, which mitigates halo artefacts during transmission estimation. The major drawback of the weighted technique is that it does not maintain the constancy of the transmission in a local patch even when there are no significant depth disruptions, as a result of which the de-fogged image looks over-smoothed and has low contrast. In addition, in some images the weighted transmission still carries faintly visible halo artefacts. Therefore, a Gaussian filter is used to blur the estimated weighted transmission map, which enhances the contrast of the de-fogged images. Furthermore, a novel approach is proposed to remove the pixels belonging to bright light sources during the atmospheric light estimation process, based on the histogram of the YUV colour space. To show its effectiveness, the proposed technique is compared with existing techniques; this comparison shows that the proposed technique performs better than the existing ones.
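
    A small Python sketch of the luminance dark prior and a coarse transmission estimate (the difference-prior weighting itself is omitted; the patch size, omega, the Gaussian sigma and the function names are assumptions, and the relation t = 1 - omega * dark/A follows the usual dark-channel formulation):

      import numpy as np
      from scipy.ndimage import minimum_filter, gaussian_filter

      def luminance_dark_prior(y_channel, patch=15):
          """Dark prior computed on the Y (luminance) channel only: the minimum
          value in a patch around each pixel (the classical dark channel prior
          instead takes the minimum over all three colour channels)."""
          return minimum_filter(y_channel, size=patch)

      def estimate_transmission(y_channel, airlight, patch=15, omega=0.95, sigma=8):
          """Coarse transmission estimate t = 1 - omega * dark/A, smoothed with
          a Gaussian filter as described above to soften sharp transitions."""
          dark = luminance_dark_prior(y_channel / airlight, patch)
          return gaussian_filter(1.0 - omega * dark, sigma)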

  19. Scalable gamma-ray camera for wide-area search based on silicon photomultipliers array

    NASA Astrophysics Data System (ADS)

    Jeong, Manhee; Van, Benjamin; Wells, Byron T.; D'Aries, Lawrence J.; Hammig, Mark D.

    2018-03-01

    Portable coded-aperture imaging systems based on scintillators and semiconductors have found use in a variety of radiological applications. For stand-off detection of weakly emitting materials, large-volume detectors can facilitate the rapid localization of emitting materials. We describe a scalable coded-aperture imaging system based on 5.02 × 5.02 cm^2 CsI(Tl) scintillator modules, each partitioned into 4 × 4 × 20 mm^3 pixels that are optically coupled to 12 × 12 pixel silicon photomultiplier (SiPM) arrays. The 144 pixels per module are read out with a resistor-based charge-division circuit that reduces the readout outputs from 144 to four signals per module, from which the interaction position and total deposited energy can be extracted. All 144 CsI(Tl) pixels are readily distinguishable, with an average energy resolution at 662 keV of 13.7% FWHM, a peak-to-valley ratio of 8.2, and a peak-to-Compton ratio of 2.9. The detector module is composed of a SiPM array coupled with a 2 cm thick scintillator and a modified uniformly redundant array mask. For image reconstruction, cross-correlation and maximum-likelihood expectation-maximization methods are used. The system has a field of view of 45° and an angular resolution of 4.7° FWHM.
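
    The charge-division readout reduces the 144 pixel signals of a module to four signals; assuming an Anger-logic style ratio (the exact network and scaling used by the authors are not given in the abstract), the interaction position might be recovered as in this sketch:

      def charge_division_position(a, b, c, d):
          """Estimate the normalized (x, y) interaction position and the total
          deposited energy from the four corner signals of a resistive
          charge-division network; the mapping to the 12 x 12 pixel grid is
          detector specific."""
          total = a + b + c + d
          x = ((b + d) - (a + c)) / total
          y = ((a + b) - (c + d)) / total
          return x, y, total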

  20. Tracking brain motion during the cardiac cycle using spiral cine-DENSE MRI

    PubMed Central

    Zhong, Xiaodong; Meyer, Craig H.; Schlesinger, David J.; Sheehan, Jason P.; Epstein, Frederick H.; Larner, James M.; Benedict, Stanley H.; Read, Paul W.; Sheng, Ke; Cai, Jing

    2009-01-01

    Cardiac-synchronized brain motion is well documented, but the accurate measurement of such motion on a pixel-by-pixel basis has been hampered by the lack of a suitable imaging technique. In this article, the authors present the implementation of an autotracking spiral cine displacement-encoded stimulated echo (DENSE) magnetic resonance imaging (MRI) technique for the measurement of pulsatile brain motion during the cardiac cycle. Displacement-encoded dynamic MR images of three healthy volunteers were acquired throughout the cardiac cycle using the spiral cine-DENSE pulse sequence gated to the R wave of an electrocardiogram. Pixelwise Lagrangian displacement maps were computed, and 2D displacement as a function of time was determined for selected regions of interest. Different intracranial structures exhibited characteristic motion amplitude, direction, and pattern throughout the cardiac cycle. Time-resolved displacement curves revealed the pathway of pulsatile motion from the brain stem to the peripheral brain lobes. These preliminary results demonstrated that the spiral cine-DENSE MRI technique can be used to measure cardiac-synchronized pulsatile brain motion on a pixel-by-pixel basis with high temporal/spatial resolution and sensitivity. PMID:19746774

  1. Fast Readout Architectures for Large Arrays of Digital Pixels: Examples and Applications

    PubMed Central

    Gabrielli, A.

    2014-01-01

    Modern pixel detectors, particularly those designed and constructed for applications and experiments in high-energy physics, are commonly built with general readout architectures that are not specifically optimized for speed. High-energy physics experiments use two-dimensional matrices of sensitive elements located on a silicon die. The sensors are read out via other integrated circuits bump-bonded onto the sensor dies. The speed of the readout electronics can significantly increase the overall performance of the system, so novel forms of readout architectures are studied and described here. These circuits have been investigated in terms of speed and are particularly suited for large monolithic, low-pitch pixel detectors. The idea is to have a small, simple structure that can be expanded to fit large matrices without affecting the layout complexity of the chip, while maintaining a reasonably high readout speed. The solutions may be applied not only to devices for physics applications but also to general-purpose pixel detectors whenever online fast data sparsification is required. The paper also presents simulations of the efficiencies of the systems as proof of concept for the proposed ideas. PMID:24778588

  2. Charge collection properties in an irradiated pixel sensor built in a thick-film HV-SOI process

    NASA Astrophysics Data System (ADS)

    Hiti, B.; Cindro, V.; Gorišek, A.; Hemperek, T.; Kishishita, T.; Kramberger, G.; Krüger, H.; Mandić, I.; Mikuž, M.; Wermes, N.; Zavrtanik, M.

    2017-10-01

    Investigation of HV-CMOS sensors for use as a tracking detector in the ATLAS experiment at the upgraded LHC (HL-LHC) has recently been an active field of research. A potential candidate for a pixel detector built in Silicon-On-Insulator (SOI) technology has already been characterized in terms of radiation hardness to TID (Total Ionizing Dose) and charge collection after a moderate neutron irradiation. In this article we present results of an extensive irradiation hardness study with neutrons up to a fluence of 1 × 10^16 neq/cm^2. Charge collection in a passive pixelated structure was measured by the Edge Transient Current Technique (E-TCT). The evolution of the effective space charge concentration was found to be compliant with the acceptor removal model, with the minimum of the space charge concentration being reached after 5 × 10^14 neq/cm^2. An investigation of the in-pixel uniformity of the detector response revealed parasitic charge collection by the epitaxial silicon layer characteristic of the SOI design. The results were backed by a numerical simulation of charge collection in an equivalent detector layout.
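
    The acceptor removal model referred to above is commonly parameterized as an exponential removal of the initial acceptors plus a linear introduction of deep acceptors with fluence; a small Python sketch of that common parameterization (the numerical values are illustrative placeholders, not the fitted parameters of this study):

      import numpy as np

      def n_eff(phi, n_eff0, removable_fraction, c_removal, g_intro):
          """Effective space charge concentration versus 1 MeV neutron-equivalent
          fluence phi: N_eff(phi) = N_eff0 - N_r*(1 - exp(-c*phi)) + g_c*phi,
          with N_r the removable (initial) acceptor concentration."""
          n_r = removable_fraction * n_eff0
          return n_eff0 - n_r * (1.0 - np.exp(-c_removal * phi)) + g_intro * phi

      # Illustrative values only (chosen so the minimum of N_eff falls near the
      # 5 x 10^14 neq/cm^2 reported above); they are not the fitted parameters.
      phi = np.logspace(13, 16, 200)                # neq/cm^2
      curve = n_eff(phi, n_eff0=3e13, removable_fraction=0.9,
                    c_removal=2e-15, g_intro=0.02)  # cm^-3, -, cm^2, cm^-1
      print("minimum near %.1e neq/cm^2" % phi[np.argmin(curve)])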

  3. Modulate chopper technique used in pyroelectric uncooled focal plane array thermal imager

    NASA Astrophysics Data System (ADS)

    He, Yuqing; Jin, Weiqi; Liu, Guangrong; Gao, Zhiyun; Wang, Xia; Wang, Lingxue

    2002-09-01

    The pyroelectric uncooled focal plane array (FPA) thermal imager has the advantages of low cost, small size, and high responsivity, and can work at room temperature, so it has seen great progress in recent years. As a companion technique, the modulating chopper has become one of the key techniques in uncooled FPA thermal imaging systems. At present, the Archimedes-spiral chopper is most commonly used: as it rotates, the chopper blade sweeps across the detector's pixel array, so that the pixels are exposed continuously. This paper simulates the shape of this kind of chopper, analyses the exposure time of every detector pixel, and analyses the exposure sequence of the whole pixel array. The analysis shows that the parameters of the Archimedes spiral, the detector's thermal time constant, the detector's geometrical dimensions, and the position of the detector relative to the chopper's spiral are important system parameters that affect the chopper's exposure efficiency and uniformity. The chopper's parameters should therefore be designed according to the practical requirements to achieve an appropriate chopper structure.

  4. Shade images of forested areas obtained from LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1989-01-01

    The pixel size in present-day remote sensing systems is large enough to include different types of land cover. Depending on the target area, several components may be present within a pixel. In forested areas, three main components are generally present: tree canopy, soil (understory), and shadow. The objective is to generate a shade (shadow) image of forested areas from multispectral measurements of LANDSAT MSS (Multispectral Scanner) data by implementing a linear mixing model in which shadow is considered one of the primary components of a pixel. The shade images are related to the observed variation in forest structure, i.e., the proportion of inferred shadow in a pixel is related to different forest ages, forest types, and tree crown cover. The Constrained Least Squares (CLS) method is used to generate shade images for eucalyptus forest and cerrado vegetation using LANDSAT MSS imagery over the Itapeva study area in Brazil. The resulting shade images may explain the differences in age for the eucalyptus forest and the differences in tree crown cover for the cerrado vegetation.

  5. CHOBS: Color Histogram of Block Statistics for Automatic Bleeding Detection in Wireless Capsule Endoscopy Video

    PubMed Central

    Ghosh, Tonmoy; Wahid, Khan A.

    2018-01-01

    Wireless capsule endoscopy (WCE) is the most advanced technology for visualizing the whole gastrointestinal (GI) tract in a non-invasive way. Its major disadvantage, however, is the long reviewing time, which is very laborious because continuous manual intervention is necessary. In order to reduce the burden on the clinician, this paper proposes an automatic bleeding detection method for WCE video based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining the local block features of the three color planes of the RGB color space, an index value is defined. A color histogram, extracted from those index values, provides a distinguishable color texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the already extracted local features, which adds no extra computational burden for feature extraction. Extensive experimentation on several WCE videos and 2300 images collected from a publicly available database shows very satisfactory bleeding frame and zone detection performance in comparison with some existing methods. For bleeding frame detection, the accuracy, sensitivity, and specificity obtained with the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and for bleeding zone detection a precision of 95.75% is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and can effectively detect bleeding frames and zones in continuous WCE video data. PMID:29468094

  6. Characterizing the spatial structure of endangered species habitat using geostatistical analysis of IKONOS imagery

    USGS Publications Warehouse

    Wallace, C.S.A.; Marsh, S.E.

    2005-01-01

    Our study used geostatistics to extract measures that characterize the spatial structure of vegetated landscapes from satellite imagery for mapping endangered Sonoran pronghorn habitat. Fine spatial resolution IKONOS data provided information at the scale of individual trees or shrubs that permitted analysis of vegetation structure and pattern. We derived images of landscape structure by calculating local estimates of the nugget, sill, and range variogram parameters within 25 × 25-m image windows. These variogram parameters, which describe the spatial autocorrelation of the 1-m image pixels, are shown in previous studies to discriminate between different species-specific vegetation associations. We constructed two independent models of pronghorn landscape preference by coupling the derived measures with Sonoran pronghorn sighting data: a distribution-based model and a cluster-based model. The distribution-based model used the descriptive statistics for variogram measures at pronghorn sightings, whereas the cluster-based model used the distribution of pronghorn sightings within clusters of an unsupervised classification of derived images. Both models define similar landscapes, and validation results confirm they effectively predict the locations of an independent set of pronghorn sightings. Such information, although not a substitute for field-based knowledge of the landscape and associated ecological processes, can provide valuable reconnaissance information to guide natural resource management efforts. © 2005 Taylor & Francis Group Ltd.
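
    A Python sketch of how local variogram parameters might be estimated for one image window (the axis-wise empirical semivariogram, the spherical model, the maximum lag and the initial guesses are assumptions; the study does not state which variogram model was fitted):

      import numpy as np
      from scipy.optimize import curve_fit

      def empirical_variogram(window, max_lag=8):
          """Isotropic empirical semivariogram of a small image window
          (e.g. the 1-m pixels inside a 25 x 25-m cell), computed from
          row- and column-wise pixel pairs."""
          gamma, lags = [], np.arange(1, max_lag + 1)
          for h in lags:
              dx = window[:, h:] - window[:, :-h]
              dy = window[h:, :] - window[:-h, :]
              gamma.append(0.5 * np.mean(np.concatenate([dx.ravel(), dy.ravel()]) ** 2))
          return lags, np.array(gamma)

      def spherical(h, nugget, sill, rng):
          # Spherical model: rises to the sill at the range, constant beyond it.
          h = np.minimum(h / rng, 1.0)
          return nugget + (sill - nugget) * (1.5 * h - 0.5 * h ** 3)

      def local_variogram_parameters(window):
          """Fit nugget, sill and range to the window's empirical variogram."""
          lags, gamma = empirical_variogram(window)
          p0 = [gamma[0], gamma[-1], lags[-1] / 2]
          params, _ = curve_fit(spherical, lags, gamma, p0=p0, maxfev=5000)
          return params  # nugget, sill, range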

  7. Chandra ACIS Sub-pixel Resolution

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.

    2011-05-01

    We investigate how to achieve the best possible ACIS spatial resolution by binning in ACIS sub-pixels and applying an event repositioning algorithm after removing the pixel randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point source sizes and (2) detecting faint point sources. The size of a bright (but not piled-up), on-axis point source can be reduced by about 20-30%. With the improved resolution, we detect 20% more faint sources embedded in extended, diffuse emission in a crowded field. We further discuss the false-source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure through aliasing for dithered observations and does not worsen the positional accuracy.

  8. Application of low-noise CID imagers in scientific instrumentation cameras

    NASA Astrophysics Data System (ADS)

    Carbone, Joseph; Hutton, J.; Arnold, Frank S.; Zarnowski, Jeffrey J.; Vangorden, Steven; Pilon, Michael J.; Wadsworth, Mark V.

    1991-07-01

    CIDTEC has developed a PC-based instrumentation camera incorporating a preamplifier-per-row CID imager and a microprocessor/LCA camera controller. The camera takes advantage of CID X-Y addressability to randomly read individual pixels and potentially overlapping pixel subsets in true nondestructive (NDRO) as well as destructive readout modes. Using an oxynitride-fabricated CID and the NDRO readout technique, pixel full-well and noise levels of approximately 1 × 10^6 and 40 electrons, respectively, were measured. Data taken from test structures indicate that the noise levels (which appear to be 1/f limited) can be reduced by a factor of two by eliminating the nitride under the preamplifier gate. Due to its software programmability, versatile readout capabilities, wide dynamic range, and extended UV/IR capability, this camera appears to be ideally suited for use in spectroscopy and other scientific applications.

  9. Investigating error structure of shuttle radar topography mission elevation data product

    NASA Astrophysics Data System (ADS)

    Becek, Kazimierz

    2008-08-01

    An attempt was made to experimentally assess the instrumental component of the error of the C-band Shuttle Radar Topography Mission (SRTM) data product. This was achieved by comparing elevation data of 302 runways from airports all over the world with the SRTM data product. It was found that the rms of the instrumental error is about +/-1.55 m. Modeling of the remaining SRTM error sources, including terrain relief and pixel size, shows that downsampling from 30 m to 90 m (1 to 3 arc-sec pixels) worsened the SRTM vertical accuracy threefold. It is suspected that the proximity of large metallic objects is a source of large SRTM errors. The achieved error estimates allow a pixel-based accuracy assessment of the SRTM elevation data product to be constructed. Vegetation-induced errors were not considered in this work.

  10. Solution processed integrated pixel element for an imaging device

    NASA Astrophysics Data System (ADS)

    Swathi, K.; Narayan, K. S.

    2016-09-01

    We demonstrate the implementation of a solid-state circuit/structure comprising a high-performing polymer field-effect transistor (PFET), which uses an oxide layer in conjunction with a self-assembled monolayer (SAM) as the dielectric, and a bulk-heterostructure-based organic photodiode as a CMOS-like pixel element for an imaging sensor. Practical use of functional organic photon detectors requires on-chip components for image capture and signal transfer, as in the CMOS/CCD architecture, rather than simple photodiode arrays, in order to increase the speed and sensitivity of the sensor. The availability of high-performing PFETs with low operating voltage and photodiodes with high sensitivity provides the necessary prerequisites to implement a CMOS-type image sensing device structure based on organic electronic devices. Solution processing routes in organic electronics offer relatively facile procedures to integrate these components, combined with the unique features of large area, form factor and multiple optical attributes. We utilize the inherent property of a binary mixture in a blend to phase-separate vertically and create a graded junction for effective photocurrent response. The implemented design enables photocharge generation along with on-chip charge-to-voltage conversion, with performance parameters comparable to traditional counterparts. Charge integration analysis for the passive pixel element using 2D TCAD simulations is also presented to evaluate the different processes that take place in the monolithic structure.

  11. Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie

    2017-03-01

    Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all the cerebrovascular patterns, including arteries and capillaries, filter-based methods are commonly used to segment vessels. However, the design of accurate and robust vessel segmentation algorithms is still challenging, due to the variety and complexity of the images, especially in cerebral blood vessel segmentation. In this work, we addressed the problem of automatic and robust segmentation of cerebral micro-vessel structures in mouse cerebrovascular images acquired with a light-sheet microscope. To segment micro-vessels in large-scale image data, we proposed a convolutional neural network (CNN) architecture trained on 1.58 million manually labeled pixels. Three convolutional layers and one fully connected layer were used in the CNN model. We extracted patches of size 32x32 pixels from each acquired brain vessel image as the training data to feed into the CNN for classification. The network was trained to output the probability that the center pixel of an input patch belongs to a vessel structure. To build the CNN architecture, a series of mouse brain vascular images acquired with a commercial light-sheet fluorescence microscopy (LSFM) system were used to train the model. The experimental results demonstrated that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with vessel-dense, non-uniform gray-level and long-scale contrast regions.
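
    A minimal PyTorch sketch of a patch classifier with the stated input size, three convolutional layers and one fully connected layer (kernel sizes, channel counts, pooling and the sigmoid output are assumptions not given in the abstract):

      import torch
      import torch.nn as nn

      class VesselPatchCNN(nn.Module):
          """Three convolutional layers plus one fully connected layer; maps a
          32x32 patch to the probability that its centre pixel is a vessel."""
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
              )
              self.classifier = nn.Linear(64 * 4 * 4, 1)

          def forward(self, x):                      # x: (N, 1, 32, 32) patches
              h = self.features(x).flatten(1)
              return torch.sigmoid(self.classifier(h))

      # model = VesselPatchCNN(); p = model(torch.rand(8, 1, 32, 32))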

  12. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.

    PubMed

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Hara, Takeshi; Fujita, Hiroshi

    2017-10-01

    We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. We simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment. The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. We propose a single network based on pixel-to-label deep learning to address the challenging issue of anatomical structure segmentation in 3D CT cases. The novelty of this work is the policy of deep learning of the different 2D sectional appearances of 3D anatomical structures for CT cases and the majority voting of the 3D segmentation results from multiple crossed 2D sections to achieve availability and reliability with better efficiency, generality, and flexibility than conventional segmentation methods, which must be guided by human expertise. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
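
    The voting step can be sketched compactly: given one 3D label volume per 2D sectional viewpoint (each obtained by stacking the FCN's slice-wise segmentations back onto the CT grid), the fused segmentation is the per-voxel majority label. A minimal numpy sketch under that assumption:

      import numpy as np

      def majority_vote(label_volumes):
          """Fuse per-view 3D label volumes (e.g. axial/coronal/sagittal FCN
          outputs resampled to the CT grid) by per-voxel majority voting."""
          stack = np.stack(label_volumes, axis=0)           # (n_views, Z, Y, X)
          n_labels = stack.max() + 1
          counts = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)],
                            axis=0)
          return counts.argmax(axis=0)                      # (Z, Y, X) fused labels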

  13. Segmentation via fusion of edge and needle map

    NASA Astrophysics Data System (ADS)

    Ahn, Hong-Young; Tou, Julius T.

    1991-03-01

    This paper presents an integrated image segmentation method using edge and needle maps, which compensates for the deficiencies of using either an edge-based or a region-based approach alone. Segmentation of an image is the first and most difficult step toward the symbolic transformation of a raw image, which is essential in image understanding. In industrial applications, the task is further complicated by the ubiquitous presence of specularity on most industrial parts. Three images taken under three different illumination directions were used to separate the specular and Lambertian components in the images. A needle map is generated from the Lambertian component images using the photometric stereo technique. In one channel, edges are extracted and linked from the averaged Lambertian images, providing one source of segmentation. In the other channel, Gaussian and mean curvature values are estimated at each pixel from a least-squares local surface fit of the needle map. A labeled surface-type image is then generated using the signs of the Gaussian and mean curvatures, with one of ten surface types assigned to each pixel. Connected regions of pixels with identical surface type provide the first-level grouping, a rough initial segmentation. The edge information and the initial surface-type segmentation are fed to an integration module which interprets the edges and regions in a consistent way. During interpretation, regions are merged or split and edges are discarded or generated, depending on the global surface fit error and consistency with neighboring regions. The output of the integrated segmentation is an explicit description of the surface type and contours of each region, which facilitates recognition, localization and attitude determination of objects in the image.
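
    A small numpy sketch of the curvature-sign labelling step (the sign threshold eps and the 3 x 3 sign-table encoding are assumptions; the paper's ten-type labelling may differ in detail from this classical H-K sign map):

      import numpy as np

      def surface_type_label(gaussian_k, mean_h, eps=1e-4):
          """Map the signs of the Gaussian (K) and mean (H) curvatures to a
          surface-type label (peak, pit, ridge, valley, saddle, flat, ...),
          encoded here as 1..9, one label per (sign(H), sign(K)) pair."""
          sk = np.where(gaussian_k > eps, 1, np.where(gaussian_k < -eps, -1, 0))
          sh = np.where(mean_h > eps, 1, np.where(mean_h < -eps, -1, 0))
          return 3 * (sh + 1) + (sk + 1) + 1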

  14. A novel high electrode count spike recording array using an 81,920 pixel transimpedance amplifier-based imaging chip.

    PubMed

    Johnson, Lee J; Cohen, Ethan; Ilg, Doug; Klein, Richard; Skeath, Perry; Scribner, Dean A

    2012-04-15

    Microelectrode recording arrays of 60-100 electrodes are commonly used to record neuronal biopotentials, and these have aided our understanding of brain function, development and pathology. However, higher density microelectrode recording arrays of larger area are needed to study neuronal function over broader brain regions such as in cerebral cortex or hippocampal slices. Here, we present a novel design of a high electrode count picocurrent imaging array (PIA), based on an 81,920 pixel Indigo ISC9809 readout integrated circuit camera chip. While originally developed for interfacing to infrared photodetector arrays, we have adapted the chip for neuron recording by bonding it to microwire glass resulting in an array with an inter-electrode pixel spacing of 30 μm. In a high density electrode array, the ability to selectively record neural regions at high speed and with good signal to noise ratio are both functionally important. A critical feature of our PIA is that each pixel contains a dedicated low noise transimpedance amplifier (∼0.32 pA rms) which allows recording high signal to noise ratio biocurrents comparable to single electrode voltage amplifier recordings. Using selective sampling of 256 pixel subarray regions, we recorded the extracellular biocurrents of rabbit retinal ganglion cell spikes at sampling rates up to 7.2 kHz. Full array local electroretinogram currents could also be recorded at frame rates up to 100 Hz. A PIA with a full complement of 4 readout circuits would span 1cm and could acquire simultaneous data from selected regions of 1024 electrodes at sampling rates up to 9.3 kHz. Published by Elsevier B.V.

  15. Mechanical studies towards a silicon micro-strip super module for the ATLAS inner detector upgrade at the high luminosity LHC

    NASA Astrophysics Data System (ADS)

    Barbier, G.; Cadoux, F.; Clark, A.; Endo, M.; Favre, Y.; Ferrere, D.; Gonzalez-Sevilla, S.; Hanagaki, K.; Hara, K.; Iacobucci, G.; Ikegami, Y.; Jinnouchi, O.; La Marra, D.; Nakamura, K.; Nishimura, R.; Perrin, E.; Seez, W.; Takubo, Y.; Takashima, R.; Terada, S.; Todome, K.; Unno, Y.; Weber, M.

    2014-04-01

    It is expected that after several years of data-taking, the Large Hadron Collider (LHC) physics programme will be extended to the so-called High-Luminosity LHC, where the instantaneous luminosity will be increased up to 5 × 10^34 cm^-2 s^-1. For the general-purpose ATLAS experiment at the LHC, a complete replacement of its internal tracking detector will be necessary, as the existing detector will not provide the required performance due to the cumulated radiation damage and the increase in the detector occupancy. The baseline layout for the new ATLAS tracker is an all-silicon-based detector, with pixel sensors in the inner layers and silicon micro-strip detectors at intermediate and outer radii. The super-module (SM) is an integration concept proposed for the barrel strip region of the future ATLAS tracker, where double-sided stereo silicon micro-strip modules (DSM) are assembled into a low-mass local support (LS) structure. Mechanical aspects of the proposed LS structure are described.

  16. Geometry Of Discrete Sets With Applications To Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Sinha, Divyendu

    1990-03-01

    In this paper we present a new framework for discrete black-and-white images that employs only integer arithmetic. This framework is shown to retain the essential characteristics of the framework for Euclidean images. We propose two norms and, based on them, define the permissible geometric operations on images. The basic invariants of our geometry are line images, the structure of an image, and the corresponding local property of strong attachment of pixels. The permissible operations also preserve 3x3 neighborhoods, area, and perpendicularity. The structure, patterns, and inter-pattern gaps in a discrete image are shown to be conserved by the magnification and contraction processes. Our notions of approximate congruence, similarity and symmetry are similar in character to the corresponding notions for Euclidean images [1]. We mention two discrete pattern recognition algorithms that work purely with integers and fit into our framework. Their performance has been shown to be on par with that of traditional geometric schemes. Moreover, all the undesired effects of finite-length registers in fixed-point arithmetic that plague traditional algorithms are absent in this family of algorithms.

  17. MAMA detector systems - A status report

    NASA Technical Reports Server (NTRS)

    Timothy, J. Gethyn; Morgan, Jeffrey S.; Slater, David C.; Kasle, David B.; Bybee, Richard L.

    1989-01-01

    Third-generation, 224 x 960 and 360 x 1024-pixel multi-anode microchannel array (MAMA) detectors are under development for satellite-borne FUV and EUV observations, using pixel dimensions of 25 x 25 microns. An account is given of the configurations, modes of operation, and recent performance data of these systems. At UV and visible wavelengths, these MAMAs employ a semitransparent, proximity-focused photocathode structure. At FUV and EUV wavelengths below about 1500 A, opaque alkali-halide photocathodes deposited directly on the front surface of the MCP furnish the best detective quantum efficiencies.

  18. dada - a web-based 2D detector analysis tool

    NASA Astrophysics Data System (ADS)

    Osterhoff, Markus

    2017-06-01

    The data daemon, dada, is a server backend for unified access to 2D pixel detector image data stored by different detectors, in different file formats, and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines, ranging from pixel binning through azimuthal integration to raster scan processing. Users commonly interact with dada through a web frontend, but all parameters for an analysis are encoded into a Uniform Resource Identifier (URI), which can also be written by hand or by scripts for batch processing.

  19. Vertical waveguides integrated with silicon photodetectors: Towards high efficiency and low cross-talk image sensors

    NASA Astrophysics Data System (ADS)

    Tut, Turgut; Dan, Yaping; Duane, Peter; Yu, Young; Wober, Munib; Crozier, Kenneth B.

    2012-01-01

    We describe the experimental realization of vertical silicon nitride waveguides integrated with silicon photodetectors. The waveguides are embedded in a silicon dioxide layer. Scanning photocurrent microscopy is performed on a device containing a waveguide, and on a device containing the silicon dioxide layer, but without the waveguide. The results confirm the waveguide's ability to guide light onto the photodetector with high efficiency. We anticipate that the use of these structures in image sensors, with one waveguide per pixel, would greatly improve efficiency and significantly reduce inter-pixel crosstalk.

  20. The least-squares mixing models to generate fraction images derived from remote sensing multispectral data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1991-01-01

    Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment was performed considering three components within the pixels: eucalyptus, soil (understory), and shade. The shade fraction images generated by the two methods were compared in terms of performance and computation time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in a pixel is related to different eucalyptus ages.
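
    A Python sketch of one common way to realise a constrained least-squares unmixing for fraction images (not the authors' implementation; non-negativity via SciPy's NNLS, the heavily weighted sum-to-one row, and all names are assumptions):

      import numpy as np
      from scipy.optimize import nnls

      def unmix_pixel(pixel, endmembers):
          """Estimate per-pixel fractions of the components (e.g. eucalyptus,
          soil, shade) from a multispectral pixel and an endmember matrix of
          shape (n_bands, n_components). Non-negativity is enforced with NNLS;
          the sum-to-one constraint is imposed by a heavily weighted extra row."""
          n_bands, n_comp = endmembers.shape
          a = np.vstack([endmembers, 100.0 * np.ones((1, n_comp))])
          b = np.concatenate([pixel, [100.0]])
          fractions, _ = nnls(a, b)
          return fractions

      def shade_image(image, endmembers, shade_index):
          """Fraction image for the shade component of a (rows, cols, bands) cube."""
          rows, cols, _ = image.shape
          out = np.zeros((rows, cols))
          for i in range(rows):
              for j in range(cols):
                  out[i, j] = unmix_pixel(image[i, j], endmembers)[shade_index]
          return out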
