Sample records for original pixel structure

  1. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected, and classification parameters are optimized for minimum false positive detection in the original and enlarged retinal images. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
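    A minimal sketch of the enlargement step described above, assuming a NumPy/SciPy workflow (function names are illustrative, not from the paper): pixel duplication simply repeats each pixel value, after which a smoothing (spatial averaging) filter can be compared on the original and the enlarged image.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def duplicate_pixels(image, factor=2):
    """Enlarge an image by integer pixel duplication (nearest-neighbour repeat)."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

def smooth(image, size=3):
    """Spatial averaging (smoothing) filter of the kind analysed in the paper."""
    return uniform_filter(image.astype(float), size=size)

# Example: compare smoothing applied to the original and to the enlarged image.
original = np.random.rand(64, 64)          # stand-in for a retinal image patch
enlarged = duplicate_pixels(original, 2)   # pixel-duplicated version
smoothed_original = smooth(original)
smoothed_enlarged = smooth(enlarged)
```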

  2. Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2018-04-01

    This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, which is identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted focusing on pairwise comparisons between original and enhanced images. The final-enhanced images generally had the best diagnostic quality and revealed more detail about the visibility of vessels and structures in capsule endoscopy images.
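    The description above can be condensed into the following simplified sketch (NumPy, with illustrative names; the exact overlapping scheme of the paper may differ): each 2 x 2 group of adjacent pixels is replaced by its average (half-unit weights in the bilinear weighting function reduce to a plain mean), and the final image is the mean of the original and the preliminary-enhanced image.

```python
import numpy as np

def half_unit_bilinear_enhance(channel):
    """Simplified half-unit weighted bilinear enhancement of one RGB channel."""
    h, w = channel.shape
    prelim = channel.astype(float).copy()
    # Average each 2x2 group of adjacent pixels and assign the identical
    # value back to every pixel of that group.
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            prelim[i:i + 2, j:j + 2] = channel[i:i + 2, j:j + 2].mean()
    # Final enhanced image: halve the sum of original and preliminary images.
    return 0.5 * (channel + prelim)

rgb = np.random.randint(0, 256, (128, 128, 3)).astype(float)
enhanced = np.dstack([half_unit_bilinear_enhance(rgb[..., c]) for c in range(3)])
```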

  3. Virus based Full Colour Pixels using a Microheater

    NASA Astrophysics Data System (ADS)

    Kim, Won-Geun; Kim, Kyujung; Ha, Sung-Hun; Song, Hyerin; Yu, Hyun-Woo; Kim, Chuntae; Kim, Jong-Man; Oh, Jin-Woo

    2015-09-01

    Mimicking natural structures has received considerable attention, and there have been a few practical advances. Tremendous efforts based on self-assembly techniques have been devoted to the development of novel photonic structures that mimic nature's inventions. We emulate the photonic structures behind the colour generation of mammalian skins and avian skin/feathers using the M13 phage. The structures can generate a full range of RGB colours that can be sensitively switched by temperature and substrate materials. Consequently, we developed an M13 phage-based, temperature-dependent, actively controllable colour pixel platform on a microheater chip. Given the simplicity of the fabrication process, the low voltage requirements and the cycling stability, the virus colour pixels could substitute for conventional colour pixels in the development of various implantable, wearable and flexible devices in the future.

  4. Remote sensing image stitch using modified structure deformation

    NASA Astrophysics Data System (ADS)

    Pan, Ke-cheng; Chen, Jin-wei; Chen, Yueting; Feng, Huajun

    2012-10-01

    To stitch remote sensing images seamlessly without producing the visual artifacts caused by severe intensity discrepancy and structure misalignment, we modify the original structure-deformation-based stitching algorithm, which has two main problems. Firstly, using the Poisson equation to propagate deformation vectors changes the topological relationship between the key points and their surrounding pixels, which may introduce incorrect image characteristics. Secondly, the diffusion area of the sparse matrix is too limited to rectify the global intensity discrepancy. To solve the first problem, we adopt a spring-mass model and introduce an external force to keep the topological relationship between key points and their surrounding pixels. To solve the second problem, we apply a tensor voting algorithm to obtain the global intensity correspondence curve of the two images. Both simulated and experimental results show that our algorithm is faster and achieves better results than the original algorithm.

  5. To BG or not to BG: Background Subtraction for EIT Coronal Loops

    NASA Astrophysics Data System (ADS)

    Beene, J. E.; Schmelz, J. T.

    2003-05-01

    One of the few observational tests for various coronal heating models is to determine the temperature profile along coronal loops. Since loops are such an abundant coronal feature, this method originally seemed quite promising - that the coronal heating problem might actually be solved by determining the temperature as a function of arc length and comparing these observations with predictions made by different models. But there are many instruments currently available to study loops, as well as various techniques used to determine their temperature characteristics. Consequently, there are many different, mostly conflicting temperature results. We chose data for ten coronal loops observed with the Extreme ultraviolet Imaging Telescope (EIT), and chose specific pixels along each loop, as well as corresponding nearby background pixels where the loop emission was not present. Temperature analysis from the 171-to-195 and 195-to-284 angstrom image ratios was then performed on three forms of the data: the original data alone, the original data with a uniform background subtraction, and the original data with a pixel-by-pixel background subtraction. The original results show loops of constant temperature, as other authors have found before us, but the 171-to-195 and 195-to-284 results are significantly different. Background subtraction does not change the constant-temperature result or the value of the temperature itself. This does not mean that loops are isothermal, however, because the background pixels, which are not part of any contiguous structure, also produce a constant-temperature result with the same value as the loop pixels. These results indicate that EIT temperature analysis should not be trusted, and the isothermal loops that result from EIT (and TRACE) analysis may be an artifact of the analysis process. Solar physics research at the University of Memphis is supported by NASA grants NAG5-9783 and NAG5-12096.

  6. Masking Strategies for Image Manifolds.

    PubMed

    Dadkhahi, Hamid; Duarte, Marco F

    2016-07-07

    We consider the problem of selecting an optimal mask for an image manifold, i.e., choosing a subset of the pixels of the image that preserves the manifold's geometric structure present in the original data. Such masking implements a form of compressive sensing through emerging imaging sensor platforms for which the power expense grows with the number of pixels acquired. Our goal is for the manifold learned from masked images to resemble its full-image counterpart as closely as possible. More precisely, we show that one can indeed accurately learn an image manifold without having to consider a large majority of the image pixels. In doing so, we consider two masking methods that preserve the local and global geometric structure of the manifold, respectively. In each case, the process of finding the optimal masking pattern can be cast as a binary integer program, which is computationally expensive but can be approximated by a fast greedy algorithm. Numerical experiments show that the relevant manifold structure is preserved through the data-dependent masking process, even for modest mask sizes.
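    A hedged sketch of the greedy alternative to the binary integer program mentioned above (the selection criterion here, pairwise-distance preservation, is an illustrative surrogate, not the authors' exact objective): pixels are added one at a time, each time keeping the pixel whose inclusion best preserves the distances between the masked images.

```python
import numpy as np

def greedy_mask(images, mask_size):
    """Greedily pick pixel indices so masked images preserve pairwise distances.

    images: (n_images, n_pixels) array; returns the selected pixel indices.
    """
    n_images, n_pixels = images.shape
    full_dist = np.linalg.norm(images[:, None, :] - images[None, :, :], axis=2)
    selected = []
    for _ in range(mask_size):
        best_pix, best_err = None, np.inf
        for p in range(n_pixels):
            if p in selected:
                continue
            cand = selected + [p]
            sub = images[:, cand]
            d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=2)
            err = np.linalg.norm(full_dist - d)   # distance-preservation error
            if err < best_err:
                best_err, best_pix = err, p
        selected.append(best_pix)
    return selected

mask = greedy_mask(np.random.rand(20, 64), mask_size=8)
```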

  7. Experimental investigation on aero-optical aberration of shock wave/boundary layer interactions

    NASA Astrophysics Data System (ADS)

    Ding, Haolin; Yi, Shihe; Fu, Jia; He, Lin

    2016-10-01

    After passing through a flow field that includes expansion fans, shock waves, boundary layers, etc., an optical wave is distorted by fluctuations in the density field. Interactions between a laminar/turbulent boundary layer and a shock wave contain a large number of complex flow structures, which provide a setting for studying the influence that the different structures of such a complex flow field have on aero-optical aberrations. Interactions between laminar/turbulent boundary layers and shock waves were investigated in a Mach 3.0 supersonic wind tunnel using a nanoparticle-tracer planar laser scattering (NPLS) system. Boundary layer separation/attachment, induced compression waves, the induced shock wave, the expansion fan and the boundary layer are shown in the NPLS images, which have a spatial resolution of 44.15 μm/pixel and a time resolution of 6 ns. Based on the NPLS images, density fields with high spatial-temporal resolution were obtained by flow image calibration, and the optical path difference (OPD) fluctuations of an originally planar 532 nm wavefront were then calculated using ray-tracing theory. According to the different flow structures in the flow field, four regions were selected: (1) Y = 692-600 pixel; (2) Y = 600-400 pixel; (3) Y = 400-268 pixel; (4) Y = 268-0 pixel. The aero-optical effects of the different flow structures were quantitatively analyzed. The results indicate that compressive waves such as the incident shock wave and the induced shock wave raise the density and thereby lift the OPD curve; because these shocks are fixed in spatial position and intensity, the aero-optics they induce can be regarded as constant. The induced shock waves are generated by the large-scale coherent vortex structures in the turbulent boundary-layer interaction, and their unsteady character determines the unsteady character of the induced waves; the spatial position and intensity of the induced shock wave are otherwise fixed in the turbulent boundary-layer interaction. The boundary-layer aero-optics are induced by the large-scale coherent vortex structures, which result in the fluctuation of the OPD.
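    For reference, OPD extraction from a density field conventionally rests on the Gladstone-Dale relation, n = 1 + K_GD * rho. A minimal sketch under simplifying assumptions (rays taken parallel to the propagation axis, constant and array names illustrative, not the paper's processing chain):

```python
import numpy as np

K_GD = 2.27e-4          # Gladstone-Dale constant for air, m^3/kg (approximate)

def opd_from_density(rho, dy, rho_ref=0.0):
    """OPD fluctuation across a planar wavefront from a 2-D density field rho[y, x].

    Each ray is assumed to travel straight along the y (propagation) axis, so
    OPD(x) = integral of (n - n_ref) dy = K_GD * integral of (rho - rho_ref) dy.
    """
    n_minus_nref = K_GD * (rho - rho_ref)
    opd = n_minus_nref.sum(axis=0) * dy        # integrate along each ray path
    return opd - opd.mean()                    # fluctuation about the mean

rho = np.random.rand(700, 512) * 0.5           # stand-in density field, kg/m^3
opd = opd_from_density(rho, dy=44.15e-6)       # dy matches the 44.15 um/pixel scale
```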

  8. The Geology of Comet 19/P Borrelly

    NASA Technical Reports Server (NTRS)

    Britt, D. T.; Boice, D. C; Buratti, B. J.; Hicks, M. D.; Nelson, R. M.; Oberst, J.; Sandel, B. R.; Soderblom, L. A.; Stern, S. A.; Thomas, N.

    2002-01-01

    The Deep Space One spacecraft flew by Comet 19P/Borrelly on September 22, 2001 and returned a rich array of imagery with resolutions of up to 48 m/pixel. These images provide a window into the surface structure, processes, and geological history of a comet. Additional information is contained in the original extended abstract.

  9. Methods in quantitative image analysis.

    PubMed

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber, which converts the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, called a pixel. The information is stored in bits; eight bits are summarised in one byte. Therefore, grey values can take one of 256 (2^8) values, between 0 and 255. The human eye seems to be quite content with a display of 64 different grey values. In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination over the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image]. The brightness of an image, represented by its grey values, can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value histogram of an existing image (input image) into a new grey value histogram (output image) are most quickly handled by a look-up table (LUT). The histogram of an image can be influenced by the gain, offset and gamma of the camera. Gain defines the voltage range, offset defines the reference voltage and gamma the slope of the regression line between the light intensity and the voltage of the camera. A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix. The distance between the pixels (the original pixel and its neighbouring pixel) can influence the various parameters calculated from the co-occurrence matrix. The main goals of image enhancement are the elimination of surface roughness in an image (smoothing), correction of defects (e.g. noise), extraction of edges, identification of points, strengthening of texture elements and improvement of contrast. In enhancement, two types of operations can be distinguished: pixel-based (point operations) and neighbourhood-based (matrix operations). The most important pixel-based operations are linear stretching of grey values, application of pre-stored LUTs and histogram equalisation. The neighbourhood-based operations work with so-called filters. These are organising elements with an original or initial point in their centre. Filters can be used to accentuate or to suppress specific structures within the image. Filters can work either in the spatial or in the frequency domain. The method used for analysing alterations of grey value intensities in the frequency domain is the Hartley transform. Filter operations in the spatial domain can be based on averaging or ranking the grey values occurring in the organising element. The most important filters in common use are the Gaussian filter and the Laplace filter (both averaging filters), and the median filter, the top-hat filter and the range operator (all ranking filters). Segmentation of objects is traditionally based on threshold grey values.
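    Two of the basic pixel-based corrections described above, shading correction and linear contrast stretching, reduce to a few lines; a sketch assuming 8-bit grey-value images stored as NumPy arrays:

```python
import numpy as np

def shading_correction(image, background, mode="subtract"):
    """Correct non-uniform illumination using a background (white) image."""
    img = image.astype(float)
    bg = background.astype(float)
    if mode == "subtract":          # subtraction, pixel per pixel
        corrected = img - bg + bg.mean()
    else:                           # division of the image by the background image
        corrected = img / np.maximum(bg, 1) * bg.mean()
    return np.clip(corrected, 0, 255)

def linear_stretch(image):
    """Expand the grey-value histogram to the full 0..255 range."""
    img = image.astype(float)
    lo, hi = img.min(), img.max()
    return np.clip((img - lo) / max(hi - lo, 1e-9) * 255.0, 0, 255)

noisy = np.random.randint(40, 120, (256, 256))
flatfield = np.random.randint(200, 256, (256, 256))
clean = linear_stretch(shading_correction(noisy, flatfield))
```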

  10. The fundamentals of average local variance--Part I: Detecting regular patterns.

    PubMed

    Bøcher, Peder Klith; McCloy, Keith R

    2006-02-01

    The method of average local variance (ALV) computes the mean of the standard deviation values derived for a 3 x 3 moving window on a successively coarsened image to produce a function of ALV versus spatial resolution. In developing ALV, the authors used approximately a doubling of the pixel size at each coarsening of the image. They hypothesized that ALV is low when the pixel size is smaller than the size of scene objects, because the pixels on an object will have similar response values. When the pixels and objects are of similar size, they will tend to vary in response and the ALV values will increase. As the pixel size increases further, more objects will be contained in a single pixel and ALV will decrease. The authors showed that various cover types produced single-peak ALV functions that inexplicably peaked when the pixel size was 1/2 to 3/4 of the object size. This paper reports on work done to explore the characteristics of the various forms of the ALV function and to understand the location of the peaks that occur in this function. The work was conducted using synthetically generated image data. The investigation showed that the hypothesis as originally proposed is not adequate. A new hypothesis is proposed: the ALV function has peak locations that are related to the geometric size of pattern structures in the scene. These structures are not always the same as scene objects. Only in cases where the size of and separation between scene objects are equal does the ALV function detect the size of the objects. In situations where the distance between scene objects is larger than their size, the ALV function has a peak at the object separation, not at the object size. This work has also shown that multiple object structures of different sizes and distances in the image produce multiple peaks in the ALV function, and that some of these structures are not readily recognized as such from our perspective. However, the magnitude of these peaks depends on the response mix in the structures, complicating their interpretation and analysis. The analysis of the ALV function is, thus, more complex than generally reported in the literature.
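    The ALV computation described above is straightforward to sketch (NumPy/SciPy; the coarsening used here is a simple 2 x 2 block average, one of several possible aggregation choices):

```python
import numpy as np
from scipy.ndimage import generic_filter

def local_std(image, window=3):
    """Standard deviation in a moving window (3 x 3 by default)."""
    return generic_filter(image.astype(float), np.std, size=window)

def coarsen(image, factor=2):
    """Aggregate pixels by block averaging, roughly doubling the pixel size."""
    h = (image.shape[0] // factor) * factor
    w = (image.shape[1] // factor) * factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def alv_function(image, levels=5):
    """Average local variance (mean of local std) versus spatial resolution."""
    alv = []
    current = image.astype(float)
    for _ in range(levels):
        alv.append(local_std(current).mean())
        current = coarsen(current)
    return alv   # one ALV value per (successively doubled) pixel size

alv_curve = alv_function(np.random.rand(128, 128))
```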

  11. a Data Field Method for Urban Remotely Sensed Imagery Classification Considering Spatial Correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Qin, K.; Zeng, C.; Zhang, E. B.; Yue, M. X.; Tong, X.

    2016-06-01

    Spatial correlation between pixels is important information for the classification of remotely sensed imagery. The data field method and spatial autocorrelation statistics have been used to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey-level change between the central pixel and the neighbourhood pixels exaggerates the contribution of the central pixel to the whole local window. Geary's C has also been shown to characterise and quantify well the spatial correlation between each pixel and its neighbourhood pixels, but the extracted objects are badly delineated, with a distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel when computing statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of the multiple features (e.g. the spectral feature and the spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracy.
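    As a reference point for the spatial-autocorrelation ingredient mentioned above, Geary's C over a small image window with 4-adjacency weights can be sketched as follows (illustrative only; the paper's data field formulation is not reproduced here):

```python
import numpy as np

def local_gearys_c(window):
    """Geary's C inside a small image window with binary 4-adjacency weights.

    C = (N - 1) * sum_ij w_ij (x_i - x_j)^2 / (2 * W * sum_i (x_i - xbar)^2)
    """
    x = window.astype(float)
    n_rows, n_cols = x.shape
    n = x.size
    denom = ((x - x.mean()) ** 2).sum()
    if denom == 0:
        return 0.0
    num, w_total = 0.0, 0.0
    for r in range(n_rows):
        for c in range(n_cols):
            for dr, dc in ((0, 1), (1, 0)):        # each 4-adjacent pair counted once
                rr, cc = r + dr, c + dc
                if rr < n_rows and cc < n_cols:
                    num += (x[r, c] - x[rr, cc]) ** 2
                    w_total += 1.0
    return (n - 1) * num / (2.0 * w_total * denom)

print(local_gearys_c(np.random.rand(5, 5)))        # ~1 for spatially random data
```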

  12. Impact of nonrigid motion correction technique on pixel-wise pharmacokinetic analysis of free-breathing pulmonary dynamic contrast-enhanced MR imaging.

    PubMed

    Tokuda, Junichi; Mamata, Hatsuho; Gill, Ritu R; Hata, Nobuhiko; Kikinis, Ron; Padera, Robert F; Lenkinski, Robert E; Sugarbaker, David J; Hatabu, Hiroto

    2011-04-01

    To investigate the impact of nonrigid motion correction on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in patients with solitary pulmonary nodules (SPNs). Misalignment of focal lesions due to respiratory motion in free-breathing dynamic contrast-enhanced MRI (DCE-MRI) precludes obtaining reliable time-intensity curves, which are crucial for pharmacokinetic analysis for tissue characterization. Single-slice 2D DCE-MRI was obtained in 15 patients. Misalignments of SPNs were corrected using nonrigid B-spline image registration. The pixel-wise pharmacokinetic parameters K^trans, v_e and k_ep were estimated from both the original and the motion-corrected DCE-MRI by fitting the two-compartment pharmacokinetic model to the time-intensity curve obtained in each pixel. The goodness-of-fit was tested with a χ²-test on a pixel-by-pixel basis to evaluate the reliability of the parameters. The percentages of reliable pixels within the SPNs were compared between the original and motion-corrected DCE-MRI. In addition, the parameters obtained from benign and malignant SPNs were compared. The percentage of reliable pixels in the motion-corrected DCE-MRI was significantly larger than in the original DCE-MRI (P = 4 × 10^-7). Both K^trans and k_ep derived from the motion-corrected DCE-MRI showed significant differences between benign and malignant SPNs (P = 0.024, 0.015). The study demonstrated the impact of the nonrigid motion correction technique on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in SPNs. Copyright © 2011 Wiley-Liss, Inc.
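    For context, the standard Tofts two-compartment model fitted in each pixel has the form C_t(t) = K^trans * integral of C_p(tau) * exp(-k_ep (t - tau)) dtau, with v_e = K^trans / k_ep. A hedged per-pixel fitting sketch with SciPy (the arterial input function and the conversion from signal intensity to concentration are assumed to be available; all names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 300, 60)                 # acquisition times, s (illustrative)
cp = np.exp(-t / 120.0) * (t > 10)          # stand-in arterial input function

def tofts(t, ktrans, kep):
    """Standard Tofts model: Ct(t) = Ktrans * conv(Cp, exp(-kep * t))."""
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

def fit_pixel(ct):
    """Fit Ktrans and kep for one pixel's concentration-time curve."""
    popt, _ = curve_fit(tofts, t, ct, p0=(0.1, 0.5), bounds=(0, np.inf))
    ktrans, kep = popt
    ve = ktrans / kep                        # ve follows from Ktrans and kep
    return ktrans, ve, kep

ct_observed = tofts(t, 0.25, 1.2) + np.random.normal(0, 0.01, t.size)
print(fit_pixel(ct_observed))
```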

  13. Acquisition of STEM Images by Adaptive Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash

    Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of pixels at random. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of "most informative" pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high-quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of peak signal-to-noise ratio (PSNR) for these three active learning methods at different percentages of sampled pixels. At the 20% level, all three active learning methods underperform the original CS without active learning. However, they all beat the original CS as more of the "most informative" pixels are sampled. One can also argue that CS equipped with active learning requires fewer sampled pixels to achieve the same PSNR than CS with randomly sampled pixels, since all three PSNR curves with active learning grow at a faster pace than the curve without active learning. For this particular STEM image, by observing the reconstructed images and the sensing masks, we find that while the method based on the RBF kernel acquires samples more uniformly, the one based on entropy samples more areas of significant change, and thus less uniformly. The KL-divergence method performs best in terms of reconstruction error (PSNR) for this example [8].
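    A hedged sketch of the adaptive-selection loop described above, using the variance metric; the BPFA reconstruction itself is replaced here by a placeholder ensemble of partial reconstructions, and all names are illustrative:

```python
import numpy as np

def select_next_pixels(reconstructions, measured_mask, budget):
    """Pick the unmeasured pixels where an ensemble of reconstructions disagrees most.

    reconstructions: (n_draws, H, W) array of partial reconstructions (e.g. posterior
    draws from a dictionary-learning model); measured_mask: boolean (H, W) array.
    """
    pixel_var = reconstructions.var(axis=0)          # variance metric per pixel
    pixel_var[measured_mask] = -np.inf               # never re-select measured pixels
    flat = np.argsort(pixel_var.ravel())[::-1][:budget]
    return np.unravel_index(flat, measured_mask.shape)

# Illustrative usage: start from 10% random pixels, then add the 1% "most informative".
H, W = 128, 128
mask = np.random.rand(H, W) < 0.10
fake_draws = np.random.rand(8, H, W)                 # placeholder reconstructions
rows, cols = select_next_pixels(fake_draws, mask, budget=int(0.01 * H * W))
mask[rows, cols] = True
```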

  14. Spatial clustering of pixels of a multispectral image

    DOEpatents

    Conger, James Lynn

    2014-08-19

    A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and its most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
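    A hedged sketch of the maximum-spectral-similarity step described in the patent abstract (cosine similarity is used here as the illustrative similarity score; image edges wrap for brevity):

```python
import numpy as np

def max_similarity_map(cube):
    """Maximum spectral similarity between each pixel and its 8 neighbours.

    cube: (H, W, bands) multispectral image; returns an (H, W) array of scores.
    """
    H, W, _ = cube.shape
    norm = np.linalg.norm(cube, axis=2) + 1e-12
    unit = cube / norm[..., None]
    best = np.full((H, W), -np.inf)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            shifted = np.roll(np.roll(unit, dr, axis=0), dc, axis=1)
            sim = (unit * shifted).sum(axis=2)       # cosine similarity to this neighbour
            best = np.maximum(best, sim)
    return best

cube = np.random.rand(64, 64, 6)
scores = max_similarity_map(cube)
cluster_candidates = scores > 0.95                   # filtering criterion: minimum threshold
```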

  15. Integrated parabolic nanolenses on MicroLED color pixels

    NASA Astrophysics Data System (ADS)

    Demory, Brandon; Chung, Kunook; Katcher, Adam; Sui, Jingyang; Deng, Hui; Ku, Pei-Cheng

    2018-04-01

    A parabolic nanolens array coupled to the emission of a nanopillar micro-light emitting diode (LED) color pixel is shown to reduce the far field divergence. For a blue wavelength LED, the total emission is 95% collimated within a 0.5 numerical aperture zone, a 3.5x improvement over the same LED without a lens structure. This corresponds to a half-width at half-maximum (HWHM) line width reduction of 2.85 times. Using a resist reflow and etchback procedure, the nanolens array dimensions and parabolic shape are formed. Experimental measurement of the far field emission shows a HWHM linewidth reduction by a factor of 2x, reducing the divergence over the original LED.

  16. Integrated parabolic nanolenses on MicroLED color pixels.

    PubMed

    Demory, Brandon; Chung, Kunook; Katcher, Adam; Sui, Jingyang; Deng, Hui; Ku, Pei-Cheng

    2018-04-20

    A parabolic nanolens array coupled to the emission of a nanopillar micro-light emitting diode (LED) color pixel is shown to reduce the far field divergence. For a blue wavelength LED, the total emission is 95% collimated within a 0.5 numerical aperture zone, a 3.5x improvement over the same LED without a lens structure. This corresponds to a half-width at half-maximum (HWHM) line width reduction of 2.85 times. Using a resist reflow and etchback procedure, the nanolens array dimensions and parabolic shape are formed. Experimental measurement of the far field emission shows a HWHM linewidth reduction by a factor of 2x, reducing the divergence over the original LED.

  17. Study of cluster shapes in a monolithic active pixel detector

    NASA Astrophysics Data System (ADS)

    Mączewski, Ł.; Adamus, M.; Ciborowski, J.; Grzelak, G.; Łużniak, P.; Nieżurawski, P.; Żarnecki, A. F.

    2009-11-01

    Beamstrahlung will constitute an important source of background in a pixel vertex detector at the future International Linear Collider. Electron and positron tracks of this origin impact the pixel planes at angles generally larger than those of secondary hadrons and the corresponding clusters are elongated. We report studies of cluster characteristics using test beam electron tracks incident at various angles on a MIMOSA-5 monolithic active pixel sensor matrix.

  18. A neighbor pixel communication filtering structure for Dynamic Vision Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Liu, Shiqi; Lu, Hehui; Zhang, Zilong

    2017-02-01

    For Dynamic Vision Sensors (DVS), Background Activity (BA) induced by thermal noise and junction leakage current is the major cause of deteriorated image quality. Inspired by the smoothing-filtering principle of horizontal cells in the vertebrate retina, a DVS pixel with a Neighbor Pixel Communication (NPC) filtering structure is proposed to solve this issue. The NPC structure judges the validity of a pixel's activity through communication with its four adjacent pixels. The pixel's outputs are suppressed if its activity is determined not to be real. The proposed pixel's area is 23.76 × 24.71 μm², and only 3 ns of output latency is introduced. To validate the effectiveness of the structure, a 5 × 5 pixel array has been implemented in the SMIC 0.13 μm CIS process. Three test cases of the array's behavioral model show that the NPC-DVS is able to filter the BA.
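    The filtering rule described above resembles the common neighbour-support background-activity filter for event cameras; a hedged software sketch is given below (the actual NPC structure is an analog circuit operating on the four adjacent pixels, and the time window here is illustrative):

```python
import numpy as np

def npc_like_filter(events, shape, window_us=1000):
    """Keep an event only if a 4-adjacent pixel produced an event within window_us.

    events: iterable of (timestamp_us, x, y, polarity) tuples in timestamp order.
    """
    last_ts = np.full(shape, -np.inf)                # last event time per pixel
    kept = []
    for ts, x, y, pol in events:
        neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
        supported = any(
            0 <= nx < shape[0] and 0 <= ny < shape[1] and ts - last_ts[nx, ny] <= window_us
            for nx, ny in neighbours
        )
        if supported:
            kept.append((ts, x, y, pol))             # activity judged real
        last_ts[x, y] = ts                           # suppressed events still update timing
    return kept

events = [(0, 10, 10, 1), (500, 11, 10, 1), (20000, 40, 40, 0)]
print(npc_like_filter(events, shape=(128, 128)))     # only the supported event survives
```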

  19. Color filter array pattern identification using variance of color difference image

    NASA Astrophysics Data System (ADS)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel. Therefore, empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming a known color filter array pattern. We present a method for identifying the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, a color difference block is constructed to emphasize the difference between the original pixels and the interpolated pixels. The variance of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
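    A much-simplified sketch of the underlying idea, not the authors' exact estimator: after removing the local mean, interpolated positions are smoother than directly sampled ones, so the green lattice with the larger high-frequency variance is taken to be the originally sampled one. This only distinguishes the two possible green lattices, not all four Bayer configurations.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

BAYER_GREEN_LATTICES = {
    # Parity of (row + col) at the green samples for the two diagonal lattices:
    "RGGB/BGGR": lambda r, c: (r + c) % 2 == 1,
    "GRBG/GBRG": lambda r, c: (r + c) % 2 == 0,
}

def identify_green_lattice(green):
    """Guess which diagonal lattice held the original green samples (heuristic)."""
    g = green.astype(float) - uniform_filter(green.astype(float), 7)  # remove local mean
    detail = laplace(g)                                               # high-frequency content
    r, c = np.indices(green.shape)
    scores = {name: detail[on_lattice(r, c)].var()
              for name, on_lattice in BAYER_GREEN_LATTICES.items()}
    return max(scores, key=scores.get), scores
```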

  20. Variable waveband infrared imager

    DOEpatents

    Hunter, Scott R.

    2013-06-11

    A waveband imager includes an imaging pixel that utilizes photon tunneling with a thermally actuated bimorph structure to convert infrared radiation to visible radiation. Infrared radiation passes through a transparent substrate and is absorbed by a bimorph structure formed with a pixel plate. The absorption generates heat which deflects the bimorph structure and pixel plate towards the substrate and into an evanescent electric field generated by light propagating through the substrate. Penetration of the bimorph structure and pixel plate into the evanescent electric field allows a portion of the visible wavelengths propagating through the substrate to tunnel through the substrate, bimorph structure, and/or pixel plate as visible radiation that is proportional to the intensity of the incident infrared radiation. This converted visible radiation may be superimposed over visible wavelengths passed through the imaging pixel.

  1. Fast Fiber-Coupled Imaging Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas

    HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full-scale 1024-pixel, 100 Megaframes/s fiber-coupled camera with 12 or 14 bits and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber-optically-coupled imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100-pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority than increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. The cost per channel was $53.31, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first-generation prototype system. We experimentally observed backlit high-speed fan blades in initial camera testing and then followed that with full movies and streak images of free-flowing high-speed plasma jets (at 30-50 km/s). Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques is inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next-generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024-channel camera at its own facility, and a second plasma-community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full-scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.

  2. Method and apparatus for determining the coordinates of an object

    DOEpatents

    Pedersen, Paul S; Sebring, Robert

    2003-01-01

    A method and apparatus is described for determining the coordinates on the surface of an object which is illuminated by a beam having pixels which have been modulated according to predetermined mathematical relationships with pixel position within the modulator. The reflected illumination is registered by an image sensor at a known location which registers the intensity of the pixels as received. Computations on the intensity, which relate the pixel intensities received to the pixel intensities transmitted at the modulator, yield the proportional loss of intensity and planar position of the originating pixels. The proportional loss and position information can then be utilized within triangulation equations to resolve the coordinates of associated surface locations on the object.

  3. Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.

    PubMed

    Haoliang Yuan; Yuan Yan Tang

    2017-04-01

    Classification of the pixels in a hyperspectral image (HSI) is an important task and has been widely applied in many practical applications. Its major challenge is the high-dimensionality, small-sample-size problem. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize our proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed method outperforms many SL methods.

  4. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background: Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods: We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results: In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions: This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
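    A hedged sketch of the shape-based ingredient: instead of interpolating pixel grey values between two IVUS slices, each lumen contour is represented by its radius as a function of angle and intermediate contours are generated with a natural cubic spline along the pullback direction (contour extraction and backscatter handling are omitted; names and numbers are illustrative).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_contours(slice_positions, radii_per_slice, new_positions):
    """Shape-based interpolation of lumen contours between IVUS slices.

    radii_per_slice: (n_slices, n_angles) array of contour radii sampled at fixed
    angles; a natural cubic spline is fitted per angle along the pullback axis.
    """
    spline = CubicSpline(slice_positions, radii_per_slice, axis=0, bc_type="natural")
    return spline(new_positions)          # (n_new_slices, n_angles)

# Illustrative usage: four original slices 1 mm apart, interpolated to 13 slices.
angles = np.linspace(0, 2 * np.pi, 90, endpoint=False)
slices = np.array([0.0, 1.0, 2.0, 3.0])
radii = 2.0 + 0.2 * np.sin(angles)[None, :] * (1 + slices)[:, None]
intermediate = interpolate_contours(slices, radii, np.linspace(0.0, 3.0, 13))
```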

  5. Characterization of pixel sensor designed in 180 nm SOI CMOS technology

    NASA Astrophysics Data System (ADS)

    Benka, T.; Havranek, M.; Hejtmanek, M.; Jakovenko, J.; Janoska, Z.; Marcisovska, M.; Marcisovsky, M.; Neue, G.; Tomasek, L.; Vrba, V.

    2018-01-01

    A new type of X-ray imaging Monolithic Active Pixel Sensor (MAPS), X-CHIP-02, was developed using a 180 nm deep submicron Silicon On Insulator (SOI) CMOS commercial technology. Two pixel matrices were integrated into the prototype chip, which differ by the pixel pitch of 50 μm and 100 μm. The X-CHIP-02 contains several test structures, which are useful for characterization of individual blocks. The sensitive part of the pixel integrated in the handle wafer is one of the key structures designed for testing. The purpose of this structure is to determine the capacitance of the sensitive part (diode in the MAPS pixel). The measured capacitance is 2.9 fF for 50 μm pixel pitch and 4.8 fF for 100 μm pixel pitch at -100 V (default operational voltage). This structure was used to measure the IV characteristics of the sensitive diode. In this work, we report on a circuit designed for precise determination of sensor capacitance and IV characteristics of both pixel types with respect to X-ray irradiation. The motivation for measurement of the sensor capacitance was its importance for the design of front-end amplifier circuits. The design of pixel elements, as well as circuit simulation and laboratory measurement techniques are described. The experimental results are of great importance for further development of MAPS sensors in this technology.

  6. Comparative study of various pixel photodiodes for digital radiography: Junction structure, corner shape and noble window opening

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Uk; Cho, Minsik; Lee, Dae Hee; Yoo, Hyunjun; Kim, Myung Soo; Bae, Jun Hyung; Kim, Hyoungtaek; Kim, Jongyul; Kim, Hyunduk; Cho, Gyuseong

    2012-05-01

    Recently, large-size 3-transistor (3-Tr) active pixel complementary metal-oxide-silicon (CMOS) image sensors have been used for medium-size digital X-ray radiography, such as dental computed tomography (CT), mammography and nondestructive testing (NDT) of consumer products. We designed and fabricated 50 µm × 50 µm 3-Tr test pixels having a pixel photodiode with various structures and shapes using the TSMC 0.25-µm standard CMOS process to compare their optical characteristics. The pixel photodiode output was continuously sampled while a test pixel was continuously illuminated by 550-nm light at constant intensity. The measurement was repeated 300 times for each test pixel to obtain reliable results on the mean and the variance of the pixel output at each sampling time. The sampling rate was 50 kHz, and the reset period was 200 ms. To estimate the conversion gain, we used the mean-variance method. From the measured results, the n-well/p-substrate photodiode, among the 3 photodiode structures available in a standard CMOS process, showed the best performance at low illumination equivalent to the typical X-ray signal range. The quantum efficiencies of the n+/p-well, n-well/p-substrate, and n+/p-substrate photodiodes were 18.5%, 62.1%, and 51.5%, respectively. From a comparison of pixels with rounded and rectangular corners, we found that a rounded-corner structure could reduce the dark current in large-size pixels. A pixel with four rounded corners showed a reduced dark current, of about 200 fA, compared with a pixel with four rectangular corners, for our pixel sample size. Photodiodes with round p-implant openings showed about 5% higher dark current, but about 34% higher sensitivity, than the conventional photodiodes.

  7. Dynamic Janus Metasurfaces in the Visible Spectral Region.

    PubMed

    Yu, Ping; Li, Jianxiong; Zhang, Shuang; Jin, Zhongwei; Schütz, Gisela; Qiu, Cheng-Wei; Hirscher, Michael; Liu, Na

    2018-06-27

    Janus monolayers have long been a popular notion for breaking in-plane and out-of-plane structural symmetry. Originating from chemistry and materials science, the concept of Janus functions has recently been extended to ultrathin metasurfaces by arranging meta-atoms asymmetrically with respect to the propagation or polarization direction of the incident light. However, such metasurfaces are intrinsically static, and the information they carry can be straightforwardly decrypted by scanning the incident light directions and polarization states once the devices are fabricated. In this Letter, we present a dynamic Janus metasurface scheme in the visible spectral region. In each super unit cell, three plasmonic pixels are categorized into two sets. One set contains a magnesium nanorod and a gold nanorod that are oriented orthogonally to each other, working as counter pixels. The other set contains only a magnesium nanorod. The effective pixels on the Janus metasurface can be reversibly regulated by hydrogenation/dehydrogenation of the magnesium nanorods. Such dynamic controllability at visible frequencies allows for flat optical elements with novel functionalities including beam steering, bifocal lensing, holographic encryption, and dual optical function switching.

  8. 1T Pixel Using Floating-Body MOSFET for CMOS Image Sensors.

    PubMed

    Lu, Guo-Neng; Tournier, Arnaud; Roy, François; Deschamps, Benoît

    2009-01-01

    We present a single-transistor pixel for CMOS image sensors (CIS). It is a floating-body MOSFET structure, which is used as both the photo-sensing device and the source-follower transistor, and can be controlled to store and evacuate charges. Our investigation into this 1T pixel structure includes modeling to obtain an analytical description of the conversion gain. Model validation has been done by comparing theoretical predictions with experimental results. The 1T pixel structure has also been implemented in different configurations, including rectangular-gate and ring-gate designs, and with variations of the oxidation parameters of the fabrication process. The pixel characteristics are presented and discussed.

  9. Algorithm for Detecting a Bright Spot in an Image

    NASA Technical Reports Server (NTRS)

    2009-01-01

    An algorithm processes the pixel intensities of a digitized image to detect and locate a circular bright spot, the approximate size of which is known in advance. The algorithm is used to find images of the Sun in cameras aboard the Mars Exploration Rovers. (The images are used in estimating orientations of the Rovers relative to the direction to the Sun.) The algorithm can also be adapted to tracking of circular bright targets in other diverse applications. The first step in the algorithm is to calculate a dark-current ramp, a correction necessitated by the scheme that governs the readout of pixel charges in the charge-coupled-device camera in the original Mars Exploration Rover application. In this scheme, the fraction of each frame period during which dark current is accumulated in a given pixel (and, hence, the dark-current contribution to the pixel image-intensity reading) is proportional to the pixel row number. For the purpose of the algorithm, the dark-current contribution to the intensity reading from each pixel is assumed to equal the average of intensity readings from all pixels in the same row, and the factor of proportionality is estimated on the basis of this assumption. Then the product of the row number and the factor of proportionality is subtracted from the reading from each pixel to obtain a dark-current-corrected intensity reading. The next step in the algorithm is to determine the best location, within the overall image, for a window of N × N pixels (where N is an odd number) large enough to contain the bright spot of interest plus a small margin. (In the original application, the overall image contains 1,024 by 1,024 pixels, the image of the Sun is about 22 pixels in diameter, and N is chosen to be 29.)
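    A hedged sketch of the two steps described above: a row-proportional dark-current correction, followed by an exhaustive search for the brightest N × N window. The proportionality factor here is estimated by a least-squares fit of row means versus row number, which is one simple way to realize the stated assumption; names and the example image are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_dark_current_ramp(image):
    """Remove a dark-current contribution proportional to the pixel row number."""
    img = image.astype(float)
    row_means = img.mean(axis=1)
    rows = np.arange(img.shape[0])
    slope = np.polyfit(rows, row_means, 1)[0]        # factor of proportionality
    return img - slope * rows[:, None]

def find_bright_spot(image, n=29):
    """Return the centre (row, col) of the N x N window with the largest mean."""
    window_mean = uniform_filter(image, size=n, mode="constant")
    half = n // 2
    interior = window_mean[half:-half, half:-half]   # windows fully inside the image
    r, c = np.unravel_index(np.argmax(interior), interior.shape)
    return r + half, c + half

img = np.random.rand(256, 256) * 50
img[100:105, 150:155] += 500                         # synthetic bright spot
print(find_bright_spot(correct_dark_current_ramp(img)))
```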

  10. Clays of Ladon Basin

    NASA Image and Video Library

    2018-01-23

    Ladon Basin was a large impact structure that was filled in by the deposits from Ladon Valles, a major ancient river on Mars as seen in this image from NASA's Mars Reconnaissance Orbiter (MRO). These wet sediments were altered into minerals such as various clay minerals. Clays imply chemistry that may have been favorable for life on ancient Mars, if anything lived there, so this could be a good spot for future exploration by rovers and perhaps return of samples to Earth. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.1 centimeters (20.5 inches) per pixel (with 2 x 2 binning); objects on the order of 156 centimeters (61.4 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22183

  11. Impact of sensor's point spread function on land cover characterization: Assessment and deconvolution

    USGS Publications Warehouse

    Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.

    2002-01-01

    Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on the average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their dimensional pixel size. © 2002 Elsevier Science Inc. All rights reserved.

  12. Structural colour printing from a reusable generic nanosubstrate masked for the target image

    NASA Astrophysics Data System (ADS)

    Rezaei, M.; Jiang, H.; Kaminska, B.

    2016-02-01

    Structural colour printing has advantages over traditional pigment-based colour printing. However, the high fabrication cost has hindered its applications in printing large-area images because each image requires patterning structural pixels in nanoscale resolution. In this work, we present a novel strategy to print structural colour images from a pixelated substrate which is called a nanosubstrate. The nanosubstrate is fabricated only once using nanofabrication tools and can be reused for printing a large quantity of structural colour images. It contains closely packed arrays of nanostructures from which red, green, blue and infrared structural pixels can be imprinted. To print a target colour image, the nanosubstrate is first covered with a mask layer to block all the structural pixels. The mask layer is subsequently patterned according to the target colour image to make apertures of controllable sizes on top of the wanted primary colour pixels. The masked nanosubstrate is then used as a stamp to imprint the colour image onto a separate substrate surface using nanoimprint lithography. Different visual colours are achieved by properly mixing the red, green and blue primary colours into appropriate ratios controlled by the aperture sizes on the patterned mask layer. Such a strategy significantly reduces the cost and complexity of printing a structural colour image from lengthy nanoscale patterning into high throughput micro-patterning and makes it possible to apply structural colour printing in personalized security features and data storage. In this paper, nanocone array grating pixels were used as the structural pixels and the nanosubstrate contains structures to imprint the nanocone arrays. Laser lithography was implemented to pattern the mask layer with submicron resolution. The optical properties of the nanocone array gratings are studied in detail. Multiple printed structural colour images with embedded covert information are demonstrated.

  13. Development of n+-in-p planar pixel sensors for extremely high radiation environments, designed to retain high efficiency after irradiation

    NASA Astrophysics Data System (ADS)

    Unno, Y.; Kamada, S.; Yamamura, K.; Ikegami, Y.; Nakamura, K.; Takubo, Y.; Takashima, R.; Tojo, J.; Kono, T.; Hanagaki, K.; Yajima, K.; Yamauchi, Y.; Hirose, M.; Homma, Y.; Jinnouchi, O.; Kimura, K.; Motohashi, K.; Sato, S.; Sawai, H.; Todome, K.; Yamaguchi, D.; Hara, K.; Sato, Kz.; Sato, Kj.; Hagihara, M.; Iwabuchi, S.

    2016-09-01

    We have developed n+-in-p pixel sensors to obtain highly radiation-tolerant sensors for extremely high radiation environments such as those found at the high-luminosity LHC. We have designed novel pixel structures to eliminate the sources of efficiency loss under the bias rails after irradiation, by removing the bias rail out of the boundary region and routing the bias resistors inside the area of the pixel electrodes. After irradiation by protons with a fluence of approximately 3 × 10^15 n_eq/cm^2, the pixel structure with the polysilicon bias resistor and the bias rails moved far away from the boundary shows an efficiency loss of < 0.5% per pixel at the boundary region, which is as efficient as the pixel structure without a biasing structure. The pixel structure with the bias rails at the boundary and widened p-stops underneath the bias rail also exhibits an improved loss of approximately 1% per pixel at the boundary region. We have elucidated the physical mechanisms behind the efficiency loss under the bias rail with TCAD simulations. The efficiency loss is due to the interplay of the bias rail acting as a charge-collecting electrode with the region of low electric field in the silicon near the surface at the boundary. The region acts as a "shield" for the electrode. After irradiation, the strong applied electric field nearly eliminates the region. The TCAD simulations have shown that a wide p-stop and a large Si-SiO2 interface charge (an inversion layer, specifically) act to shield the weighting potential. A pixel sensor of the old design irradiated by γ-rays at 2.4 MGy is confirmed to exhibit only a slight efficiency loss at the boundary.

  14. Communication system analysis for manned space flight

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1977-01-01

    One- and two-dimensional adaptive delta modulator (ADM) algorithms are discussed and compared. Results are shown for bit rates of two bits/pixel, one bit/pixel and 0.5 bits/pixel. Pictures showing the difference between the encoded-decoded pictures and the original pictures are presented. The effect of channel errors on the reconstructed picture is illustrated. A two-dimensional ADM using interframe encoding is also presented. This system operates at the rate of two bits/pixel and produces excellent quality pictures when there is little motion. The effect of large amounts of motion on the reconstructed picture is described.
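    For reference, a minimal one-dimensional adaptive delta modulator along the lines discussed above; this is a textbook-style sketch, and the step-adaptation constants are illustrative rather than those of the paper:

```python
import numpy as np

def adm_encode(samples, step0=4.0, grow=1.5, shrink=0.66):
    """One-bit-per-sample adaptive delta modulation of a pixel scan line."""
    bits, est, step, prev_bit = [], 0.0, step0, 0
    for s in samples:
        bit = 1 if s >= est else 0
        # Adapt the step: grow on consecutive equal bits (slope overload),
        # shrink when the bit alternates (granular region).
        step = step * grow if bit == prev_bit else step * shrink
        est += step if bit else -step
        bits.append(bit)
        prev_bit = bit
    return bits

def adm_decode(bits, step0=4.0, grow=1.5, shrink=0.66):
    """Reconstruct the scan line from the one-bit stream (mirrors the encoder)."""
    out, est, step, prev_bit = [], 0.0, step0, 0
    for bit in bits:
        step = step * grow if bit == prev_bit else step * shrink
        est += step if bit else -step
        out.append(est)
        prev_bit = bit
    return np.array(out)

line = 128 + 60 * np.sin(np.linspace(0, 6, 256))      # stand-in pixel scan line
decoded = adm_decode(adm_encode(line))                # 1 bit/pixel reconstruction
```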

  15. Pixel structures to compensate nonuniform threshold voltage and mobility of polycrystalline silicon thin-film transistors using subthreshold current for large-size active matrix organic light-emitting diode displays

    NASA Astrophysics Data System (ADS)

    Na, Jun-Seok; Kwon, Oh-Kyong

    2014-01-01

    We propose pixel structures for large-size and high-resolution active matrix organic light-emitting diode (AMOLED) displays using a polycrystalline silicon (poly-Si) thin-film transistor (TFT) backplane. The proposed pixel structures compensate for the variations of the threshold voltage and mobility of the driving TFT using the subthreshold current. The simulated results show that the emission current error of the proposed pixel structure B ranges from -2.25 to 2.02 least significant bits (LSB) when the variations of the threshold voltage and mobility of the driving TFT are ±0.5 V and ±10%, respectively.

  16. Color extended visual cryptography using error diffusion.

    PubMed

    Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu

    2011-01-01

    Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or gray-scale VC schemes; however, they are not sufficient to be applied directly to color shares due to different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of the original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
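    As context for the error-diffusion component mentioned above, a standard Floyd-Steinberg halftoning pass is sketched below; this illustrates error diffusion itself, not the VIP-synchronized share construction of the paper:

```python
import numpy as np

def floyd_steinberg(channel):
    """Binary halftone of one colour channel by Floyd-Steinberg error diffusion."""
    img = channel.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error to the unprocessed neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

halftone = floyd_steinberg(np.random.randint(0, 256, (64, 64)))
```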

  17. HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere

    NASA Technical Reports Server (NTRS)

    Gorski, K. M.; Hivon, Eric; Banday, A. J.; Wandelt, Benjamin D.; Hansen, Frode K.; Reinecke, Martin; Bartelmann, Matthias

    2005-01-01

    HEALPix, the Hierarchical Equal Area isoLatitude Pixelization, is a versatile structure for the pixelization of data on the sphere. An associated library of computational algorithms and visualization software supports fast scientific applications executable directly on discretized spherical maps generated from very large volumes of astronomical data. Originally developed to address the data processing and analysis needs of the present generation of cosmic microwave background experiments (e.g., BOOMERANG, WMAP), HEALPix can be expanded to meet many of the profound challenges that will arise in confrontation with the observational output of future missions and experiments, including, e.g., Planck, Herschel, SAFIR, and the Beyond Einstein inflation probe. In this paper we consider the requirements and implementation constraints on a framework that simultaneously enables an efficient discretization with associated hierarchical indexation and fast analysis/synthesis of functions defined on the sphere. We demonstrate how these are explicitly satisfied by HEALPix.
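    The hierarchical, equal-area indexing can be exercised directly with the healpy package (an independent implementation of the HEALPix scheme, assumed to be installed; it is not part of the paper itself). The resolution parameter below is arbitrary.

```python
import numpy as np
import healpy as hp  # reference Python implementation of the HEALPix scheme

nside = 64                       # resolution parameter (a power of 2)
npix = hp.nside2npix(nside)      # 12 * nside**2 equal-area pixels
print(npix, hp.nside2resol(nside, arcmin=True))  # pixel count and angular size

# Map a sky direction (colatitude theta, longitude phi, in radians)
# to its pixel index in the NESTED hierarchical ordering.
theta, phi = np.radians(30.0), np.radians(45.0)
ipix = hp.ang2pix(nside, theta, phi, nest=True)

# In the nested scheme, the parent pixel at half the resolution is obtained
# by dropping the two least-significant bits of the index.
parent = ipix // 4
print(ipix, parent)
```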

  18. Optical proximity correction (OPC) in near-field lithography with pixel-based field sectioning time modulation

    NASA Astrophysics Data System (ADS)

    Oh, Seonghyeon; Han, Dandan; Shim, Hyeon Bo; Hahn, Jae W.

    2018-01-01

    Subwavelength features have been successfully demonstrated in near-field lithography. In this study, the point spread function (PSF) of a near-field beam spot from a plasmonic ridge nanoaperture is discussed with regard to the complex decaying characteristic of a non-propagating wave and the asymmetry of the field distribution for pattern design. We relaxed the shape complexity of the field distribution with pixel-based optical proximity correction (OPC) for simplifying the pattern image distortion. To enhance the pattern fidelity for a variety of arbitrary patterns, field-sectioning structures are formulated via convolutions with a time-modulation function and a transient PSF along the near-field dominant direction. The sharpness of corners and edges, and line shortening can be improved by modifying the original target pattern shape using the proposed approach by considering both the pattern geometry and directionality of the field decay for OPC in near-field lithography.

  19. Optical proximity correction (OPC) in near-field lithography with pixel-based field sectioning time modulation.

    PubMed

    Oh, Seonghyeon; Han, Dandan; Shim, Hyeon Bo; Hahn, Jae W

    2018-01-26

    Subwavelength features have been successfully demonstrated in near-field lithography. In this study, the point spread function (PSF) of a near-field beam spot from a plasmonic ridge nanoaperture is discussed with regard to the complex decaying characteristic of a non-propagating wave and the asymmetry of the field distribution for pattern design. We relaxed the shape complexity of the field distribution with pixel-based optical proximity correction (OPC) for simplifying the pattern image distortion. To enhance the pattern fidelity for a variety of arbitrary patterns, field-sectioning structures are formulated via convolutions with a time-modulation function and a transient PSF along the near-field dominant direction. The sharpness of corners and edges, and line shortening can be improved by modifying the original target pattern shape using the proposed approach by considering both the pattern geometry and directionality of the field decay for OPC in near-field lithography.

  20. Reflective coherent spatial light modulator

    DOEpatents

    Simpson, John T.; Richards, Roger K.; Hutchinson, Donald P.; Simpson, Marcus L.

    2003-04-22

    A reflective coherent spatial light modulator (RCSLM) includes a subwavelength resonant grating structure (SWS), the SWS including at least one subwavelength resonant grating layer (SWL) having a plurality of areas defining a plurality of pixels. Each pixel represents an area capable of individual control of its reflective response. A structure for modulating the resonant reflective response of at least one pixel is provided. The structure for modulating can include at least one electro-optic layer in optical contact with the SWS. The RCSLM is scalable in both pixel size and wavelength. A method for forming a RCSLM includes the steps of selecting a waveguide material and forming a SWS in the waveguide material, the SWS formed from at least one SWL, the SWL having a plurality of areas defining a plurality of pixels.

  1. A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.

    2018-05-01

    Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
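    A small Monte Carlo sketch of the starting premise: if two endmembers are each drawn from a Gaussian mixture and combined linearly with fixed abundances, the mixed pixel follows a GMM whose component means are the abundance-weighted endmember means. The component counts, means, covariances, and abundances below are made-up illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, n = 5, 20000

def sample_gmm(weights, means, covs, size):
    """Draw samples from a Gaussian mixture with the given parameters."""
    comps = rng.choice(len(weights), size=size, p=weights)
    out = np.empty((size, means.shape[1]))
    for k in range(len(weights)):
        idx = comps == k
        out[idx] = rng.multivariate_normal(means[k], covs[k], idx.sum())
    return out

# Two illustrative endmember distributions, each a 2-component GMM.
m1 = rng.normal(size=(2, bands))
m2 = rng.normal(size=(2, bands)) + 3
cov = np.stack([np.eye(bands) * 0.05] * 2)
e1 = sample_gmm([0.6, 0.4], m1, cov, n)
e2 = sample_gmm([0.5, 0.5], m2, cov, n)

# Linear mixing with fixed abundances a = (0.7, 0.3).
a = np.array([0.7, 0.3])
pixels = a[0] * e1 + a[1] * e2

# Analytically, the mixed pixel is a 4-component GMM with component means
# a0*m1_i + a1*m2_j and weights w1_i*w2_j; check the overall mean agrees.
pred_means = np.array([a[0] * mi + a[1] * mj for mi in m1 for mj in m2])
w = np.array([wi * wj for wi in [0.6, 0.4] for wj in [0.5, 0.5]])
print(pixels.mean(axis=0))
print((w[:, None] * pred_means).sum(axis=0))  # should agree closely
```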

  2. Pixel Paradise

    NASA Technical Reports Server (NTRS)

    1998-01-01

    PixelVision, Inc., has developed a series of integrated imaging engines capable of high-resolution image capture at dynamic speeds. This technology was used originally at Jet Propulsion Laboratory in a series of imaging engines for a NASA mission to Pluto. By producing this integrated package, Charge-Coupled Device (CCD) technology has been made accessible to a wide range of users.

  3. CMOS Active-Pixel Image Sensor With Simple Floating Gates

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Nakamura, Junichi; Kemeny, Sabrina E.

    1996-01-01

    Experimental complementary metal-oxide/semiconductor (CMOS) active-pixel image sensor integrated circuit features simple floating-gate structure, with metal-oxide/semiconductor field-effect transistor (MOSFET) as active circuit element in each pixel. Provides flexibility of readout modes, no kTC noise, and relatively simple structure suitable for high-density arrays. Features desirable for "smart sensor" applications.

  4. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    PubMed Central

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped; the larger the watermark, the more pixels must be flipped. We propose a new pixel-flipping method for invoice images with a large watermarking capacity. The pixel-flipping method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity. PMID:25489606

  5. Exploring space-time structure of human mobility in urban space

    NASA Astrophysics Data System (ADS)

    Sun, J. B.; Yuan, J.; Wang, Y.; Si, H. B.; Shan, X. M.

    2011-03-01

    Understanding of human mobility in urban space benefits the planning and provision of municipal facilities and services. Due to the high penetration of cell phones, mobile cellular networks provide information for urban dynamics with a large spatial extent and continuous temporal coverage in comparison with traditional approaches. The original data investigated in this paper were collected by cellular networks in a southern city of China, recording the population distribution by dividing the city into thousands of pixels. The space-time structure of urban dynamics is explored by applying Principal Component Analysis (PCA) to the original data, from temporal and spatial perspectives between which there is a dual relation. Based on the results of the analysis, we have discovered four underlying rules of urban dynamics: low intrinsic dimensionality, three categories of common patterns, dominance of periodic trends, and temporal stability. It implies that the space-time structure can be captured well by remarkably few temporal or spatial predictable periodic patterns, and the structure unearthed by PCA evolves stably over time. All these features play a critical role in the applications of forecasting and anomaly detection.
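    A minimal sketch of the kind of decomposition described above: arrange the data as a pixels-by-time matrix and apply PCA via the SVD, so that the leading temporal components and their spatial loadings expose the low intrinsic dimensionality and the dominant periodic trends. The synthetic "population" data below merely stands in for the cellular-network counts.

```python
import numpy as np

# Synthetic stand-in: population counts for P spatial pixels over T hours.
rng = np.random.default_rng(1)
P, T = 500, 24 * 14
t = np.arange(T)
daily = np.sin(2 * np.pi * t / 24)                  # dominant periodic trend
X = np.outer(rng.gamma(2.0, size=P), daily) + rng.normal(0, 0.1, (P, T))

# PCA via SVD of the mean-centred pixels-by-time matrix. Rows of Vt are
# temporal patterns ("eigen-days"); columns of U are their spatial loadings.
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[:5])   # a few components capture most of the variance,
                       # reflecting the low intrinsic dimensionality
```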

  6. Status and Performance Updates for the Cosmic Origins Spectrograph

    NASA Astrophysics Data System (ADS)

    Snyder, Elaine M.; De Rosa, Gisella; Fischer, William J.; Fix, Mees; Fox, Andrew; Indriolo, Nick; James, Bethan; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Rafelski, Marc; Roman-Duval, Julia; Sahnow, David J.; Sankrit, Ravi; Taylor, Joanna M.; White, James

    2018-01-01

    The Hubble Space Telescope's Cosmic Origins Spectrograph (COS) moved the spectra on the FUV detector from Lifetime Position 3 (LP3) to a new pristine location, LP4, in October 2017. The spectra were shifted in the cross-dispersion direction by -2.5" (roughly -31 pixels) from LP3, or -5" (roughly -62 pixels) from the original LP1. This move mitigates the adverse effects of gain sag on the spectral quality and accuracy of COS FUV observations. Here, we present updates regarding the calibration of FUV data at LP4, including the flat fields, flux calibrations, and spectral resolution. We also present updates on the time-dependent sensitivities and dark rates of both the NUV and FUV detectors.

  7. Data processing for soft X-ray diagnostics based on GEM detector measurements for fusion plasma imaging

    NASA Astrophysics Data System (ADS)

    Czarski, T.; Chernyshova, M.; Pozniak, K. T.; Kasprowicz, G.; Byszuk, A.; Juszczyk, B.; Wojenski, A.; Zabolotny, W.; Zienkiewicz, P.

    2015-12-01

    A measurement system based on a GEM (Gas Electron Multiplier) detector is being developed for X-ray diagnostics of magnetic confinement fusion plasmas. The Triple Gas Electron Multiplier (T-GEM) is presented as a soft X-ray (SXR) energy- and position-sensitive detector. The paper focuses on the measurement aspects and describes the fundamental data processing needed to obtain reliable characteristics (histograms) useful for physicists; it thus covers the software layer of the project, between the electronic hardware and the physics applications, developed originally by the authors. The multi-channel measurement system and the essential data processing for X-ray energy and position recognition are considered. Several modes of data acquisition, determined by hardware and software processing, are introduced. Typical measurement issues are discussed with a view to enhancing data quality. The primary version, based on a 1-D GEM detector, was applied to the high-resolution X-ray crystal spectrometer KX1 in the JET tokamak. The current version considers 2-D detector structures, initially for investigation purposes. Two detector structures, with single-pixel sensors and multi-pixel (directional) sensors, are considered for two-dimensional X-ray imaging. Fundamental output characteristics are presented for one- and two-dimensional detector structures. Representative results for a reference source and tokamak plasma are demonstrated.

  8. Method and apparatus for determining the coordinates of an object

    DOEpatents

    Pedersen, Paul S.

    2002-01-01

    A simplified method and related apparatus are described for determining the location of points on the surface of an object by varying, in accordance with a unique sequence, the intensity of each illuminated pixel directed to the object surface, and detecting at known detector pixel locations the intensity sequence of reflected illumination from the surface of the object whereby the identity and location of the originating illuminated pixel can be determined. The coordinates of points on the surface of the object are then determined by conventional triangulation methods.
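    A hedged sketch of the underlying idea: give every projector column a unique on/off intensity sequence (plain binary coding here, which is only one possible "unique sequence", not necessarily the patented one) and recover the originating column at each camera pixel from the observed sequence; triangulation then proceeds from the known projector-camera geometry.

```python
import numpy as np

def binary_patterns(n_cols, n_bits):
    """One binary on/off image per bit: column c is lit in frame b when
    bit b of c is set, so each projector column gets a unique sequence."""
    cols = np.arange(n_cols)
    return np.array([(cols >> b) & 1 for b in range(n_bits)])  # (n_bits, n_cols)

def decode(observed):
    """Recover the originating projector column at every camera pixel from
    the thresholded per-frame observations, shaped (n_bits, H, W)."""
    n_bits = observed.shape[0]
    weights = (1 << np.arange(n_bits)).reshape(-1, 1, 1)
    return (observed * weights).sum(axis=0)

# Simulate a camera that sees a shifted copy of the projected column index.
n_cols, n_bits, H, W = 256, 8, 4, 256
patterns = binary_patterns(n_cols, n_bits)
true_cols = (np.arange(W) + 17) % n_cols          # toy scene geometry
frames = patterns[:, true_cols][:, None, :].repeat(H, axis=1)
assert np.array_equal(decode(frames)[0], true_cols)
# With the projector column known at each camera pixel, surface coordinates
# follow from conventional triangulation of the projector-camera ray pair.
```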

  9. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    NASA Astrophysics Data System (ADS)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

    We propose a new multiple image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. The scrambled images are encrypted into phase information using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and the synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, the pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.

  10. Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa

    2018-01-01

    A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines the advantages of a randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties, and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels and to purify the previously constructed randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complementary subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally exactly located. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods both in detection performance and in computational time.

  11. Solar-blind ultraviolet optical system design for missile warning

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Huo, Furong; Zheng, Liqin

    2015-03-01

    The solar-blind region of the ultraviolet (UV) spectrum has very important applications in the military field. The spectral range is from 240 nm to 280 nm, which can be used to detect the tail flame of an approaching missile. A solar-blind UV optical system, which is an energy system, is designed to detect the UV radiation. The iKon-L 936 from the ANDOR company is selected as the UV detector, which has a pixel size of 13.5 μm x 13.5 μm and an active image area of 27.6 mm x 27.6 mm. CaF2 and F_silica are the chosen materials. The original structure is composed of 6 elements. To simplify the system structure and improve image quality, two aspheric surfaces and one diffractive optical element are adopted in this paper. After optimization and normalization, the designed system is composed of five elements with a maximum spot size of 11.988 μm, which is less than the pixel size of the selected CCD detector. The application of the aspheric surfaces and the diffractive optical element gives each FOV a similar spot size, which shows that the system nearly satisfies the isoplanatic condition. If the focal length can be decreased, the FOV of the system can be enlarged further.

  12. On Adapting the Tensor Voting Framework to Robust Color Image Denoising

    NASA Astrophysics Data System (ADS)

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Julià, Carme

    This paper presents an adaptation of the tensor voting framework for color image denoising, while preserving edges. Tensors are used in order to encode the CIELAB color channels, the uniformity and the edginess of image pixels. A specific voting process is proposed in order to propagate color from a pixel to its neighbors by considering the distance between pixels, the perceptual color difference (by using an optimized version of CIEDE2000), a uniformity measurement and the likelihood of the pixels being impulse noise. The original colors are corrected with those encoded by the tensors obtained after the voting process. Peak signal-to-noise ratios and visual inspection show that the proposed methodology has a better performance than state-of-the-art techniques.

  13. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
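    A rough sketch of the fill step, assuming a plain Jacobi relaxation with periodic image borders in place of the patent's multigrid solver; only the idea of relaxing non-edge pixels toward the average of their neighbours while holding the edge-pixel values fixed is kept.

```python
import numpy as np

def fill_laplace(values, known_mask, n_iter=2000):
    """Fill unknown pixels by iteratively solving Laplace's equation.

    `values` holds the edge-pixel intensities and `known_mask` marks them.
    Unknown pixels relax toward the average of their 4 neighbours (simple
    Jacobi sweeps with periodic borders; the patent uses a multigrid solver)."""
    f = values.astype(float).copy()
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                      + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f = np.where(known_mask, values, avg)
    return f

# Toy example: keep a sparse set of "edge" pixels and fill in the rest.
img = np.fromfunction(lambda y, x: x + 2 * y, (64, 64))
mask = np.zeros_like(img, dtype=bool)
mask[::8, ::8] = True
filled = fill_laplace(np.where(mask, img, 0), mask)
difference = img - filled          # residual to be compressed separately
print(np.abs(difference).mean())
```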

  14. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  15. Realistic full wave modeling of focal plane array pixels

    DOE PAGES

    Campione, Salvatore; Warne, Larry K.; Jorgenson, Roy E.; ...

    2017-11-01

    Here, we investigate full-wave simulations of realistic implementations of multifunctional nanoantenna enabled detectors (NEDs). We focus on a 2x2 pixelated array structure that supports two wavelengths of operation. We design each resonating structure independently using full-wave simulations with periodic boundary conditions mimicking the whole infinite array. We then construct a supercell made of a 2x2 pixelated array with periodic boundary conditions mimicking the full NED; in this case, however, each pixel comprises 10-20 antennas per side. In this way, the cross-talk between contiguous pixels is accounted for in our simulations. We observe that, even though there are finite extent effects, the pixels work as designed, each responding at the respective wavelength of operation. This allows us to stress that realistic simulations of multifunctional NEDs need to be performed to verify the design functionality by taking into account finite extent and cross-talk effects.

  16. Locality-constrained anomaly detection for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Liu, Jiabin; Li, Wei; Du, Qian; Liu, Kui

    2015-12-01

    Detecting a target with low occurrence probability against an unknown background in a hyperspectral image, namely anomaly detection, is of practical significance. The Reed-Xiaoli (RX) algorithm is considered a classic anomaly detector; it calculates the Mahalanobis distance between the local background and the pixel under test. Local RX, as an adaptive RX detector, employs a dual-window strategy and considers the pixels within the frame between the inner and outer windows as the local background. However, the detector is sensitive if such a local region contains anomalous pixels (i.e., outliers). In this paper, a locality-constrained anomaly detector is proposed to remove outliers in the local background region before employing the RX algorithm. Specifically, a local linear representation is designed to exploit the internal relationship between linearly correlated pixels in the local background region and the pixel under test and its neighbors. Experimental results demonstrate that the proposed detector improves the original local RX algorithm.
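    For reference, a compact sketch of the baseline dual-window local RX detector that the paper improves upon; the locality-constrained outlier removal itself is not implemented here, and the window sizes and synthetic cube are arbitrary.

```python
import numpy as np

def local_rx(cube, inner=3, outer=9, eps=1e-6):
    """Dual-window local RX: Mahalanobis distance of each pixel to the
    statistics of the ring between the inner and outer windows.
    (Baseline only; the paper additionally removes outliers from the ring.)"""
    H, W, B = cube.shape
    r_out, r_in = outer // 2, inner // 2
    # mask of positions inside the outer window but outside the inner one
    yy, xx = np.mgrid[-r_out:r_out + 1, -r_out:r_out + 1]
    ring_mask = (np.abs(yy) > r_in) | (np.abs(xx) > r_in)
    scores = np.zeros((H, W))
    for y in range(r_out, H - r_out):
        for x in range(r_out, W - r_out):
            block = cube[y - r_out:y + r_out + 1, x - r_out:x + r_out + 1]
            bg = block[ring_mask]                       # local background ring
            mu = bg.mean(axis=0)
            cov = np.cov(bg, rowvar=False) + eps * np.eye(B)
            d = cube[y, x] - mu
            scores[y, x] = d @ np.linalg.solve(cov, d)  # Mahalanobis distance
    return scores

# Synthetic cube with one anomalous pixel.
rng = np.random.default_rng(2)
cube = rng.normal(0, 1, (40, 40, 10))
cube[20, 20] += 6
print(np.unravel_index(np.argmax(local_rx(cube)), (40, 40)))  # expected (20, 20)
```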

  17. Diffraction-Based Optical Switching with MEMS

    DOE PAGES

    Blanche, Pierre-Alexandre; LaComb, Lloyd; Wang, Youmin; ...

    2017-04-19

    In this article, we are presenting an overview of MEMS-based (Micro-Electro-Mechanical System) optical switch technology starting from the reflective two-dimensional (2D) and three-dimensional (3D) MEMS implementations. To further increase the speed of the MEMS from these devices, the mirror size needs to be reduced. Small mirror size prevents efficient reflection but favors a diffraction-based approach. Two implementations have been demonstrated, one using the Texas Instruments DLP (Digital Light Processing), and the other an LCoS-based (Liquid Crystal on Silicon) SLM (Spatial Light Modulator). These switches demonstrated the benefit of diffraction, by independently achieving high speed, efficiency, and high number of ports. We also demonstrated for the first time that PSK (Phase Shift Keying) modulation format can be used with diffraction-based devices. To be truly effective in diffraction mode, the MEMS pixels should modulate the phase of the incident light. We are presenting our past and current efforts to manufacture a new type of MEMS where the pixels are moving in the vertical direction. The original structure is a 32 x 32 phase modulator array with high contrast grating pixels, and we are introducing a new sub-wavelength linear array capable of a 310 kHz modulation rate.

  18. Diffraction-Based Optical Switching with MEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanche, Pierre-Alexandre; LaComb, Lloyd; Wang, Youmin

    In this article, we are presenting an overview of MEMS-based (Micro-Electro-Mechanical System) optical switch technology starting from the reflective two-dimensional (2D) and three-dimensional (3D) MEMS implementations. To further increase the speed of the MEMS from these devices, the mirror size needs to be reduced. Small mirror size prevents efficient reflection but favors a diffraction-based approach. Two implementations have been demonstrated, one using the Texas Instruments DLP (Digital Light Processing), and the other an LCoS-based (Liquid Crystal on Silicon) SLM (Spatial Light Modulator). These switches demonstrated the benefit of diffraction, by independently achieving high speed, efficiency, and high number of ports. We also demonstrated for the first time that PSK (Phase Shift Keying) modulation format can be used with diffraction-based devices. To be truly effective in diffraction mode, the MEMS pixels should modulate the phase of the incident light. We are presenting our past and current efforts to manufacture a new type of MEMS where the pixels are moving in the vertical direction. The original structure is a 32 x 32 phase modulator array with high contrast grating pixels, and we are introducing a new sub-wavelength linear array capable of a 310 kHz modulation rate.

  19. Assessing the impact of background spectral graph construction techniques on the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.; Albano, James A.; Basener, William F.

    2012-06-01

    Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels whose material content is incongruous with the background material in the scene. Typically, the application involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing these algorithms is determining which pixels initially constitute the background material within an image. The topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological model of the background in the image scene, and uses codensity to measure deviation from this background. In TAD, the initial graph theory structure of the image data is created by connecting an edge between any two pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of proximity graph is among the most well-known approaches to building a geometric graph based on a given set of data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative test of the performance of TAD across four different constructs of the initial graph: the mutual k-nearest neighbor graph, the sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in TAD.
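    Two of the compared graph constructions can be sketched with scikit-learn's neighbor-graph helpers (assumed available); the sigma-local graph would need a custom construction and is omitted here. The point set below is a synthetic stand-in for image pixels in spectral space.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph, radius_neighbors_graph

# Pixels as points in spectral space (synthetic stand-in for an HSI scene).
rng = np.random.default_rng(3)
pixels = rng.normal(size=(1000, 20))

# Proximity graph used by the original TAD: connect x and y if ||x - y|| < r.
prox = radius_neighbors_graph(pixels, radius=6.0, mode='connectivity')

# Mutual k-NN graph: keep an edge only if each vertex is among the other's
# k nearest neighbours (elementwise minimum of the graph and its transpose).
knn = kneighbors_graph(pixels, n_neighbors=10, mode='connectivity')
mutual = knn.minimum(knn.T.tocsr())

print(prox.nnz, knn.nnz, mutual.nnz)
```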

  20. Saturn's Hexagon as Summer Solstice Approaches

    NASA Image and Video Library

    2017-05-24

    These natural color views from NASA's Cassini spacecraft compare the appearance of Saturn's north-polar region in June 2013 and April 2017. In both views, Saturn's polar hexagon dominates the scene. The comparison shows how clearly the color of the region changed in the interval between the two views, which represents the latter half of Saturn's northern hemisphere spring. In 2013, the entire interior of the hexagon appeared blue. By 2017, most of the hexagon's interior was covered in yellowish haze, and only the center of the polar vortex retained the blue color. The seasonal arrival of the sun's ultraviolet light triggers the formation of photochemical aerosols, leading to haze formation. The general yellowing of the polar region is believed to be caused by smog particles produced by increasing solar radiation shining on the polar region as Saturn approached the northern summer solstice on May 24, 2017. Scientists are considering several ideas to explain why the center of the polar vortex remains blue while the rest of the polar region has turned yellow. One idea is that, because the atmosphere in the vortex's interior is the last place in the northern hemisphere to be exposed to spring and summer sunlight, smog particles have not yet changed the color of the region. A second explanation hypothesizes that the polar vortex may have an internal circulation similar to hurricanes on Earth. If the Saturnian polar vortex indeed has an analogous structure to terrestrial hurricanes, the circulation should be downward in the eye of the vortex. The downward circulation should keep the atmosphere clear of the photochemical smog particles, and may explain the blue color. Images captured with Cassini's wide-angle camera using red, green and blue spectral filters were combined to create these natural-color views. The 2013 view (left in the combined view) was captured on June 25, 2013, when the spacecraft was about 430,000 miles (700,000 kilometers) away from Saturn. The original versions of these images, as sent by the spacecraft, have a size of 512 by 512 pixels and an image scale of about 52 miles (80 kilometers) per pixel; the images have been mapped in polar stereographic projection to a resolution of approximately 16 miles (25 kilometers) per pixel. The second and third frames in the animation were taken approximately 130 and 260 minutes after the first image. The 2017 sequence (right in the combined view) was captured on April 25, 2017, just before Cassini made its first dive between Saturn and its rings. During the imaging sequence, the spacecraft's distance from the center of the planet changed from 450,000 miles (725,000 kilometers) to 143,000 miles (230,000 kilometers). The original versions of these images, as sent by the spacecraft, have a size of 512 by 512 pixels. The resolution of the original images changed from about 52 miles (80 kilometers) per pixel at the beginning to about 9 miles (14 kilometers) per pixel at the end. The images have been mapped in polar stereographic projection to a resolution of approximately 16 miles (25 kilometers) per pixel. The average interval between the frames in the movie sequence is 230 minutes. Corresponding animated movie sequences are available at https://photojournal.jpl.nasa.gov/catalog/PIA21611

  1. A history of hybrid pixel detectors, from high energy physics to medical imaging

    NASA Astrophysics Data System (ADS)

    Delpierre, P.

    2014-05-01

    The aim of this paper is to describe the development of hybrid pixel detectors from their origin to their application in medical imaging. We recall the need for fast 2D detectors in high energy physics experiments and follow the different pixel electronic circuits created to satisfy this demand. The adaptation of these circuits for X-rays will be presented, as well as their industrialization. Today, a number of applications are open for these cameras, particularly in biomedical imaging. Some developments for clinical CT will also be shown.

  2. Which Photodiode to Use: A Comparison of CMOS-Compatible Structures

    PubMed Central

    Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert

    2010-01-01

    While great advances have been made in optimizing fabrication process technologies for solid state image sensors, the need remains to be able to fabricate high quality photosensors in standard CMOS processes. The quality metrics depend on both the pixel architecture and the photosensitive structure. This paper presents a comparison of three photodiode structures in terms of spectral sensitivity, noise and dark current. The three structures are n+/p-sub, n-well/p-sub and p+/n-well/p-sub. All structures were fabricated in a 0.5 μm 3-metal, 2-poly, n-well process and shared the same pixel and readout architectures. Two pixel structures were fabricated—the standard three transistor active pixel sensor, where the output depends on the photodiode capacitance, and one incorporating an in-pixel capacitive transimpedance amplifier where the output is dependent only on a designed feedback capacitor. The n-well/p-sub diode performed best in terms of sensitivity (an improvement of 3.5 × and 1.6 × over the n+/p-sub and p+/n-well/p-sub diodes, respectively) and signal-to-noise ratio (1.5 × and 1.2 × improvement over the n+/p-sub and p+/n-well/p-sub diodes, respectively) while the p+/n-well/p-sub diode had the minimum (33% compared to other two structures) dark current for a given sensitivity. PMID:20454596

  3. Which Photodiode to Use: A Comparison of CMOS-Compatible Structures.

    PubMed

    Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert

    2009-07-01

    While great advances have been made in optimizing fabrication process technologies for solid state image sensors, the need remains to be able to fabricate high quality photosensors in standard CMOS processes. The quality metrics depend on both the pixel architecture and the photosensitive structure. This paper presents a comparison of three photodiode structures in terms of spectral sensitivity, noise and dark current. The three structures are n+/p-sub, n-well/p-sub and p+/n-well/p-sub. All structures were fabricated in a 0.5 μm 3-metal, 2-poly, n-well process and shared the same pixel and readout architectures. Two pixel structures were fabricated: the standard three transistor active pixel sensor, where the output depends on the photodiode capacitance, and one incorporating an in-pixel capacitive transimpedance amplifier where the output is dependent only on a designed feedback capacitor. The n-well/p-sub diode performed best in terms of sensitivity (an improvement of 3.5 × and 1.6 × over the n+/p-sub and p+/n-well/p-sub diodes, respectively) and signal-to-noise ratio (1.5 × and 1.2 × improvement over the n+/p-sub and p+/n-well/p-sub diodes, respectively) while the p+/n-well/p-sub diode had the minimum (33% compared to other two structures) dark current for a given sensitivity.

  4. To Great Depths

    NASA Image and Video Library

    2017-03-22

    Hellas is an ancient impact structure and is the deepest and broadest enclosed basin on Mars. It measures about 2,300 kilometers across and the floor of the basin, Hellas Planitia, contains the lowest elevations on Mars. The Hellas region can often be difficult to view from orbit due to seasonal frost, water-ice clouds and dust storms, yet this region is intriguing because of its diverse, and oftentimes bizarre, landforms. This image from eastern Hellas Planitia shows some of the unusual features on the basin floor. These relatively flat-lying "cells" appear to have concentric layers or bands, similar to a honeycomb. This "honeycomb" terrain exists elsewhere in Hellas, but the geologic process responsible for creating these features remains unresolved. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.2 centimeters (20.6 inches) per pixel (with 2 x 2 binning); objects on the order of 157 centimeters (61.8 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21570

  5. Fine structure of Galactic foreground ISM towards high-redshift AGN - utilizing Herschel PACS and SPIRE data

    NASA Astrophysics Data System (ADS)

    Perger, K.; Pinter, S.; Frey, S.; Tóth, L. V.

    2018-05-01

    One of the most certain ways to determine the star formation rate in galaxies is based on far-infrared (FIR) measurements. To decide the origin of the observed FIR emission, subtracting the Galactic foreground is a crucial step. We utilized Herschel photometric data to determine the hydrogen column densities in three galactic latitude regions, at b = 27°, 50° and -80°. We applied a pixel-by-pixel fit to the spectral energy distribution (SED) for the images acquired from parallel PACS-SPIRE observations in all three sky areas. We determined the column densities at resolutions of 45'' and 6', and compared the results with values estimated from the IRAS dust maps. Column densities at 27° and 50° galactic latitudes determined from the Herschel data are in good agreement with the literature values. However, at the highest galactic latitude we found that the column densities from the Herschel data exceed those derived from the IRAS dust map.
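    A hedged sketch of a pixel-by-pixel SED fit using an optically thin modified blackbody, which is a common choice for Herschel photometry; the fixed emissivity index, band set, normalization, and synthetic map below are assumptions for illustration, not the values used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8
beta = 1.8                       # assumed fixed dust emissivity index

def greybody(nu, logN, T):
    """Optically thin modified blackbody: I_nu proportional to N * nu^beta * B_nu(T).
    The proportionality constant is absorbed into logN, so the fitted logN is
    only a relative column density (illustrative, not a calibrated value)."""
    B = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
    return 10**logN * nu**beta * B

# PACS/SPIRE band centres: 160, 250, 350, 500 micron, converted to Hz.
nu = c / (np.array([160, 250, 350, 500]) * 1e-6)

def fit_pixel(intensities):
    """Fit one pixel's SED; returns (logN, T)."""
    popt, _ = curve_fit(greybody, nu, intensities, p0=[-10.0, 20.0],
                        bounds=([-30, 5], [10, 60]))
    return popt

# Pixel-by-pixel loop over a small synthetic map of noisy SEDs.
truth = greybody(nu, -12.0, 18.0)
noise = 1 + 0.02 * np.random.default_rng(4).normal(size=(8, 8, 4))
cube = truth[None, None, :] * noise
params = np.array([[fit_pixel(cube[i, j]) for j in range(8)] for i in range(8)])
print(params[..., 1].mean())     # recovered temperatures should be near 18 K
```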

  6. Multiresolution texture analysis applied to road surface inspection

    NASA Astrophysics Data System (ADS)

    Paquis, Stephane; Legeay, Vincent; Konik, Hubert; Charrier, Jean

    1999-03-01

    Technological advances now provide the opportunity to automate pavement distress assessment. This paper deals with an approach to achieving an automatic vision system for road surface classification. Road surfaces are composed of aggregates, which have a particular grain size distribution, and a mortar matrix. From various physical properties and visual aspects, four road families are generated. We present here a tool using a pyramidal process, under the assumption that regions or objects in an image stand out because of their uniform texture. Note that the aim is not to compute another statistical parameter but to include the usual criteria in our method. In fact, the road surface classification uses a multiresolution cooccurrence matrix and a hierarchical process through an original intensity pyramid, where a father pixel takes the minimum gray level value of its directly linked children pixels. More precisely, only the matrix diagonal is taken into account and analyzed along the pyramidal structure, which allows the classification to be made.
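    A small sketch of the intensity pyramid described above, assuming each father pixel is linked to a 2x2 block of children (the exact linkage used in the paper may differ); cooccurrence statistics would then be computed per pyramid level.

```python
import numpy as np

def min_pyramid(image, levels):
    """Intensity pyramid in which each father pixel takes the minimum gray
    level of its 2x2 children, as described for the road-texture analysis."""
    pyr = [np.asarray(image)]
    for _ in range(levels):
        child = pyr[-1]
        h, w = (child.shape[0] // 2) * 2, (child.shape[1] // 2) * 2
        blocks = child[:h, :w].reshape(h // 2, 2, w // 2, 2)
        pyr.append(blocks.min(axis=(1, 3)))
    return pyr

# Cooccurrence statistics (e.g. the matrix diagonal) can then be evaluated
# at each level to characterise the aggregate grain-size distribution.
road = np.random.default_rng(5).integers(0, 256, size=(256, 256))
for level in min_pyramid(road, 3):
    print(level.shape, level.mean())
```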

  7. Numerical simulation of crosstalk in reduced pitch HgCdTe photon-trapping structure pixel arrays.

    PubMed

    Schuster, Jonathan; Bellotti, Enrico

    2013-06-17

    We have investigated crosstalk in HgCdTe photovoltaic pixel arrays employing a photon trapping (PT) structure realized with a periodic array of pillars intended to provide broadband operation. We have found that, compared to non-PT pixel arrays with similar geometry, the array employing the PT structure has a slightly higher optical crosstalk. However, when the total crosstalk is evaluated, the presence of the PT region drastically reduces the total crosstalk; making the use of the PT structure not only useful to obtain broadband operation, but also desirable for reducing crosstalk in small pitch detector arrays.

  8. Measurements and TCAD simulation of novel ATLAS planar pixel detector structures for the HL-LHC upgrade

    NASA Astrophysics Data System (ADS)

    Nellist, C.; Dinu, N.; Gkougkousis, E.; Lounis, A.

    2015-06-01

    The LHC accelerator complex will be upgraded between 2020-2022, to the High-Luminosity-LHC, to considerably increase statistics for the various physics analyses. To operate under these challenging new conditions, and maintain excellent performance in track reconstruction and vertex location, the ATLAS pixel detector must be substantially upgraded and a full replacement is expected. Processing techniques for novel pixel designs are optimised through characterisation of test structures in a clean room and also through simulations with Technology Computer Aided Design (TCAD). A method to study non-perpendicular tracks through a pixel device is discussed. Comparison of TCAD simulations with Secondary Ion Mass Spectrometry (SIMS) measurements to investigate the doping profile of structures and validate the simulation process is also presented.

  9. Lifting the Veil of Dust from NGC 0959: The Importance of a Pixel-based Two-dimensional Extinction Correction

    NASA Astrophysics Data System (ADS)

    Tamura, K.; Jansen, R. A.; Eskridge, P. B.; Cohen, S. H.; Windhorst, R. A.

    2010-06-01

    We present the results of a study of the late-type spiral galaxy NGC 0959, before and after application of the pixel-based dust extinction correction described in Tamura et al. (Paper I). Galaxy Evolution Explorer far-UV and near-UV, ground-based Vatican Advanced Technology Telescope UBVR, and Spitzer/Infrared Array Camera 3.6, 4.5, 5.8, and 8.0 μm images are studied through pixel color-magnitude diagrams and pixel color-color diagrams (pCCDs). We define groups of pixels based on their distribution in a pCCD of (B - 3.6 μm) versus (FUV - U) colors after extinction correction. In the same pCCD, we trace their locations before the extinction correction was applied. This shows that selecting pixel groups is not meaningful when using colors uncorrected for dust. We also trace the distribution of the pixel groups on a pixel coordinate map of the galaxy. We find that the pixel-based (two-dimensional) extinction correction is crucial for revealing the spatial variations in the dominant stellar population, averaged over each resolution element. Different types and mixtures of stellar populations, and galaxy structures such as a previously unrecognized bar, become readily discernible in the extinction-corrected pCCD and as coherent spatial structures in the pixel coordinate map.

  10. Dependence of optical phase modulation on anchoring strength of dielectric shield wall surfaces in small liquid crystal pixels

    NASA Astrophysics Data System (ADS)

    Isomae, Yoshitomo; Shibata, Yosei; Ishinabe, Takahiro; Fujikake, Hideo

    2018-03-01

    We demonstrated that uniform phase modulation in a pixel can be realized by optimizing the anchoring strength on the walls and the wall width in the dielectric shield wall structure, which is the pixel structure needed to realize a 1-µm-pitch optical phase modulator. The anchoring force degrades the uniformity of the phase modulation in ON-state pixels, but it also keeps the liquid crystals from rotating in response to the leakage of the electric field. We clarified that the optimal wall width and anchoring strength are 250 nm and less than 10^-4 J/m^2, respectively.

  11. Positive visualization of implanted devices with susceptibility gradient mapping using the original resolution.

    PubMed

    Varma, Gopal; Clough, Rachel E; Acher, Peter; Sénégas, Julien; Dahnke, Hannes; Keevil, Stephen F; Schaeffter, Tobias

    2011-05-01

    In magnetic resonance imaging, implantable devices are usually visualized with a negative contrast. Recently, positive contrast techniques have been proposed, such as susceptibility gradient mapping (SGM). However, SGM reduces the spatial resolution making positive visualization of small structures difficult. Here, a development of SGM using the original resolution (SUMO) is presented. For this, a filter is applied in k-space and the signal amplitude is analyzed in the image domain to determine quantitatively the susceptibility gradient for each pixel. It is shown in simulations and experiments that SUMO results in a better visualization of small structures in comparison to SGM. SUMO is applied to patient datasets for visualization of stent and prostate brachytherapy seeds. In addition, SUMO also provides quantitative information about the number of prostate brachytherapy seeds. The method might be extended to application for visualization of other interventional devices, and, like SGM, it might also be used to visualize magnetically labelled cells. Copyright © 2010 Wiley-Liss, Inc.

  12. Steganalysis based on JPEG compatibility

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2001-11-01

    In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression with a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding using images that have been originally stored in the JPEG format as cover images for spatial-domain steganography.
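    A simplified sketch of the compatibility test on a single 8x8 block: a block that really came from JPEG decompression reproduces itself when its DCT is re-quantized with the same matrix, while flipping one LSB breaks that consistency. The flat quantization table and the orthonormal scipy DCT convention are illustrative simplifications of the actual JPEG pipeline, not the paper's exact procedure.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_compatible(block, q_matrix, tol=0.5):
    """Check whether an 8x8 block is compatible with JPEG decompression under
    the given quantization matrix: re-quantizing its DCT and decompressing
    again must reproduce the block within rounding error."""
    d = dctn(block.astype(float) - 128.0, norm='ortho')
    rec = idctn(np.round(d / q_matrix) * q_matrix, norm='ortho') + 128.0
    return np.abs(np.round(rec) - block).max() <= tol

# Build a block that really went through JPEG-style quantization ...
q = np.full((8, 8), 16.0)                       # illustrative flat table
raw = np.random.default_rng(6).integers(80, 176, (8, 8)).astype(float)
coeffs = np.round(dctn(raw - 128.0, norm='ortho') / q) * q
decompressed = np.round(idctn(coeffs, norm='ortho') + 128.0)

# ... then flip the LSB of one pixel, as spatial-domain steganography would.
tampered = decompressed.copy()
tampered[0, 0] += 1
print(jpeg_compatible(decompressed, q), jpeg_compatible(tampered, q))  # True False
```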

  13. Measurements and simulations of MAPS (Monolithic Active Pixel Sensors) response to charged particles - a study towards a vertex detector at the ILC

    NASA Astrophysics Data System (ADS)

    Maczewski, Lukasz

    2010-05-01

    The International Linear Collider (ILC) is a project for an electron-positron (e+e-) linear collider with a centre-of-mass energy of 200-500 GeV. Monolithic Active Pixel Sensors (MAPS) are one of the proposed silicon pixel detector concepts for the ILC vertex detector (VTX). Basic characteristics of two MAPS pixel matrices, MIMOSA-5 (17 μm pixel pitch) and MIMOSA-18 (10 μm pixel pitch), are studied and compared (pedestals, noise, calibration of the ADC-to-electron conversion gain, detector efficiency and charge collection properties). The e+e- collisions at the ILC will be accompanied by an intense beamstrahlung background of electrons and positrons hitting the inner planes of the vertex detector. Tracks of this origin leave elongated clusters, in contrast to those of secondary hadrons. Cluster characteristics and orientation with respect to the pixel grid are studied for perpendicular and inclined tracks. The elongation and the precision of determining the cluster orientation as a function of the angle of incidence were measured. A simple model of signal formation (based on charge diffusion) is proposed and tested using the collected data.

  14. Novel Hyperspectral Anomaly Detection Methods Based on Unsupervised Nearest Regularized Subspace

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Chen, Y.; Tan, K.; Du, P.

    2018-04-01

    Anomaly detection has been of great interest in hyperspectral imagery analysis. Most conventional anomaly detectors merely take advantage of spectral and spatial information within neighboring pixels. In this paper, two methods, the Unsupervised Nearest Regularized Subspace-based with Outlier Removal Anomaly Detector (UNRSORAD) and the Local Summation UNRSORAD (LSUNRSORAD), are proposed; they are based on the concept that each pixel in the background can be approximately represented by its spatial neighborhood, while anomalies cannot. Using a dual window, each pixel under test is approximated as a linear combination of the surrounding data. The existence of outliers in the dual window affects detection accuracy, so the proposed detectors remove outlier pixels that are significantly different from the majority of pixels. In order to make full use of the various local spatial distributions of the pixels neighboring the pixel under test, we adopt a local summation dual-window sliding strategy. The residual image is constituted by subtracting the predicted background from the original hyperspectral imagery, and anomalies can be detected in the residual image. Experimental results show that the proposed methods greatly improve the detection accuracy compared with other traditional detection methods.

  15. Photon event distribution sampling: an image formation technique for scanning microscopes that permits tracking of sub-diffraction particles with high spatial and temporal resolutions.

    PubMed

    Larkin, J D; Publicover, N G; Sutko, J L

    2011-01-01

    In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
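    A minimal sketch of the image-formation idea: accumulate one Gaussian probability density per detected photon, centred on its maximum-likelihood position, rather than binning photons into pixels. The constant positional uncertainty and the synthetic photon positions are assumptions for illustration; in the method itself the uncertainty is intensity-related.

```python
import numpy as np

def peds_image(xs, ys, shape, sigma=1.5):
    """Form an image by summing one Gaussian probability density per photon,
    centred on its maximum-likelihood position, instead of binning photons.
    A constant positional uncertainty sigma (in pixels) is assumed here."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    img = np.zeros(shape)
    for x, y in zip(xs, ys):
        img += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    return img / (2 * np.pi * sigma ** 2)

# ~200 photons from a point emitter at (20.3, 14.7) with measurement noise.
rng = np.random.default_rng(7)
true_x, true_y = 20.3, 14.7
xs = true_x + rng.normal(0, 1.0, 200)
ys = true_y + rng.normal(0, 1.0, 200)
img = peds_image(xs, ys, (32, 48))

# Localize the particle directly from the raw photon positions (cf. the
# multi-variate normal estimate): the sample mean is the ML centre estimate.
print(xs.mean(), ys.mean())
```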

  16. Method for fabricating pixelated silicon device cells

    DOEpatents

    Nielson, Gregory N.; Okandan, Murat; Cruz-Campa, Jose Luis; Nelson, Jeffrey S.; Anderson, Benjamin John

    2015-08-18

    A method, apparatus and system for flexible, ultra-thin, and high efficiency pixelated silicon or other semiconductor photovoltaic solar cell array fabrication is disclosed. A structure and method of creation for a pixelated silicon or other semiconductor photovoltaic solar cell array with interconnects is described using a manufacturing method that is simplified compared to previous versions of pixelated silicon photovoltaic cells that require more microfabrication steps.

  17. Low temperature performance of a commercially available InGaAs image sensor

    NASA Astrophysics Data System (ADS)

    Nakaya, Hidehiko; Komiyama, Yutaka; Kashikawa, Nobunari; Uchida, Tomohisa; Nagayama, Takahiro; Yoshida, Michitoshi

    2016-08-01

    We report the evaluation results of a commercially available InGaAs image sensor manufactured by Hamamatsu Photonics K. K., which has sensitivity between 0.95 μm and 1.7 μm at room temperature. The sensor format was 128×128 pixels with a 20 μm pitch. It was tested with our original readout electronics and cooled down to 80 K by a mechanical cooler to minimize the dark current. Although the readout noise and dark current were 200 e- and 20 e-/sec/pixel, respectively, we found no serious problems with the linearity, wavelength response, or intra-pixel response.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Jorgenson, Roy E.

    Here, we investigate full-wave simulations of realistic implementations of multifunctional nanoantenna enabled detectors (NEDs). We focus on a 2x2 pixelated array structure that supports two wavelengths of operation. We design each resonating structure independently using full-wave simulations with periodic boundary conditions mimicking the whole infinite array. We then construct a supercell made of a 2x2 pixelated array with periodic boundary conditions mimicking the full NED; in this case, however, each pixel comprises 10-20 antennas per side. In this way, the cross-talk between contiguous pixels is accounted for in our simulations. We observe that, even though there are finite extent effects, the pixels work as designed, each responding at the respective wavelength of operation. This allows us to stress that realistic simulations of multifunctional NEDs need to be performed to verify the design functionality by taking into account finite extent and cross-talk effects.

  19. Oil Motion Control by an Extra Pinning Structure in Electro-Fluidic Display.

    PubMed

    Dou, Yingying; Tang, Biao; Groenewold, Jan; Li, Fahong; Yue, Qiao; Zhou, Rui; Li, Hui; Shui, Lingling; Henzen, Alex; Zhou, Guofu

    2018-04-06

    Oil motion control is the key to the optical performance of electro-fluidic displays (EFD). In this paper, we introduced an extra pinning structure (EPS) into the EFD pixel, for the first time, to control the oil motion inside it. The pinning structure can be fabricated together with the pixel wall in a one-step lithography process. The effect of the relative location of the EPS in pixels on the oil motion was studied by a series of optoelectronic measurements. The EPS showed good control of the oil rupture position. A properly located EPS effectively guided the oil contraction direction, significantly accelerated the switching-on process, and suppressed oil overflow, without a decline in aperture ratio. An asymmetrically designed EPS off the diagonal is recommended. This study provides a novel and facile way to control oil motion within an EFD pixel in both direction and timescale.

  20. Study on pixel matching method of the multi-angle observation from airborne AMPR measurements

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Qie, Lili; Li, Zhengqiang; Sun, Xiaobing; Hong, Jin; Chen, Xingfeng; Xu, Hua; Sun, Bin; Wang, Han

    2015-10-01

    For the along-track scanning mode, the same place along the ground track can be detected by the Advanced Multi-angular Polarized Radiometer (AMPR) at several different scanning angles from -55 to 55 degrees, which provides a possible means of obtaining multi-angular detections from nearby pixels. However, due to the ground sample spacing and the spatial footprint of the detection, the different footprint sizes cannot guarantee the spatial matching of partly overlapping pixels, which becomes a bottleneck for the effective use of the multi-angular information from AMPR to study aerosol and surface polarized properties. Based on our definition and calculation of the pixel coincidence rate for multi-angular detection, an effective pixel matching method for multi-angle observations is presented to solve the spatial matching problem for the airborne AMPR. The shape of each AMPR pixel is assumed to be an ellipse, whose major and minor axes depend on the flight attitude and on each scanning angle. By defining the coordinate system and the origin of coordinates, latitude and longitude can be transformed into Euclidean distances, and the pixel coincidence rate of two nearby ellipses can be calculated. Via a traversal of each ground pixel, the pixels with a high coincidence rate can be selected and merged, and with further quality control of the observation data, a dataset of ground pixels with multi-angular detections can be obtained and analyzed, providing support for the multi-angular and polarized retrieval algorithm research in the next study.
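    A rough sketch of one way to evaluate such a pixel coincidence rate: rasterize the two elliptical footprints on a fine local grid and take the overlap area relative to the smaller footprint. The normalization, grid resolution, and footprint parameters below are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def ellipse_mask(xx, yy, cx, cy, a, b, theta):
    """Boolean mask of grid points inside an ellipse with centre (cx, cy),
    semi-axes a, b and orientation theta (footprint of one AMPR pixel)."""
    dx, dy = xx - cx, yy - cy
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

def coincidence_rate(p1, p2, resolution=0.01):
    """Overlap area of two elliptical footprints divided by the smaller
    footprint area, evaluated on a fine grid (illustrative normalization)."""
    xs = np.arange(-5, 5, resolution)
    xx, yy = np.meshgrid(xs, xs)
    m1 = ellipse_mask(xx, yy, *p1)
    m2 = ellipse_mask(xx, yy, *p2)
    return (m1 & m2).sum() / min(m1.sum(), m2.sum())

# Two nearby footprints in kilometres, after projecting lat/lon onto a local
# Cartesian frame; the numbers are made up for illustration.
pix_a = (0.0, 0.0, 1.2, 0.8, np.radians(10))
pix_b = (0.4, 0.1, 1.3, 0.8, np.radians(15))
print(coincidence_rate(pix_a, pix_b))   # keep the pair if this is high enough
```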

  1. Small target detection using bilateral filter and temporal cross product in infrared images

    NASA Astrophysics Data System (ADS)

    Bae, Tae-Wuk

    2011-09-01

    We introduce a spatial and temporal target detection method using a spatial bilateral filter (BF) and the temporal cross product (TCP) of temporal pixels in infrared (IR) image sequences. First, the TCP is presented to extract the characteristics of temporal pixels by using the temporal profile at the respective spatial coordinates of pixels. The TCP represents the cross product values of the gray-level distance vector between a current temporal pixel and the adjacent temporal pixel, and the horizontal distance vector between the current temporal pixel and a temporal pixel corresponding to a potential target center. The summation of the TCP values of temporal pixels at each spatial coordinate yields the temporal target image (TTI), which represents the temporal target information of temporal pixels in spatial coordinates. Then the proposed BF is used to extract the spatial target information. In order to predict the background without targets, the proposed BF uses standard deviations obtained by an exponential mapping of the TCP value corresponding to the coordinate of the pixel being processed spatially. The spatial target image (STI) is made by subtracting the predicted image from the original image. Thus, the spatial and temporal target image (STTI) is obtained by multiplying the STI and the TTI, and targets are finally detected in the STTI. In the experimental results, receiver operating characteristic (ROC) curves were computed to compare objective performance. The results show that the proposed algorithm offers better discrimination between targets and clutter and lower false alarm rates than existing target detection methods.

  2. High-resolution CCD imaging alternatives

    NASA Astrophysics Data System (ADS)

    Brown, D. L.; Acker, D. E.

    1992-08-01

    High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further, in still image systems, electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. 2. EXTENDING CCD TECHNOLOGY BEYOND BROADCAST. Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, and the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an inter-lined CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.

  3. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate — hampering the widespread utility of the technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s) — more than four times lower than the originally required readout rate (20 GSa/s) — and is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936

  4. Adaptive pseudo-color enhancement method of weld radiographic images based on HSI color space and self-transformation of pixels.

    PubMed

    Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong

    2017-06-01

    The radiographic testing (RT) images of a steam turbine manufacturing enterprise are characterized by low gray level, low contrast, and blurriness, which lead to substandard image quality and make it difficult for human eyes to detect and evaluate defects. This study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels to solve these problems. First, the pixel self-transformation is applied to the pixel values of the original RT image. The function values after the pixel self-transformation are assigned to the HSI components in the HSI color space. Thereafter, the average intensity of the enhanced image is adaptively adjusted to 0.5 according to the intensity of the original image. Moreover, the hue range and interval can be adjusted according to personal habits. Finally, the HSI components after the adaptive adjustment can be transformed for display in the red, green, and blue color space. Numerous weld radiographic images from a steam turbine manufacturing enterprise are used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method can improve image definition and make the target and background areas distinct in weld radiographic images. The enhanced images will be more conducive to defect recognition. Moreover, images enhanced using the proposed method conform to the visual properties of the human eye, and the effectiveness of defect recognition and evaluation can be ensured.

  5. Adaptive pseudo-color enhancement method of weld radiographic images based on HSI color space and self-transformation of pixels

    NASA Astrophysics Data System (ADS)

    Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong

    2017-06-01

    The radiographic testing (RT) images of a steam turbine manufacturing enterprise are characterized by low gray level, low contrast, and blurriness, which lead to substandard image quality and make it difficult for human eyes to detect and evaluate defects. This study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels to solve these problems. First, the pixel self-transformation is applied to the pixel values of the original RT image. The function values after the pixel self-transformation are assigned to the HSI components in the HSI color space. Thereafter, the average intensity of the enhanced image is adaptively adjusted to 0.5 according to the intensity of the original image. Moreover, the hue range and interval can be adjusted according to personal habits. Finally, the HSI components after the adaptive adjustment can be transformed for display in the red, green, and blue color space. Numerous weld radiographic images from a steam turbine manufacturing enterprise are used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method can improve image definition and make the target and background areas distinct in weld radiographic images. The enhanced images will be more conducive to defect recognition. Moreover, images enhanced using the proposed method conform to the visual properties of the human eye, and the effectiveness of defect recognition and evaluation can be ensured.

  6. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
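
    As an illustration of the idea, the following Python sketch reorders a colormap by luminance and remaps the indexed image so that numerically adjacent indices point to similar colors; the luminance criterion is only one possible sorting strategy and is an assumption here, not necessarily the ordering studied in the paper.

        import numpy as np

        def sort_colormap_by_luminance(colormap, indexed_image):
            # colormap: (N, 3) array of RGB entries; indexed_image: 2-D array
            # of indices into the colormap.
            luminance = colormap @ np.array([0.299, 0.587, 0.114])
            order = np.argsort(luminance)          # new position -> old index
            sorted_map = colormap[order]

            # Inverse permutation: old index -> new index.
            remap = np.empty_like(order)
            remap[order] = np.arange(len(order))
            remapped_image = remap[indexed_image]

            # After sorting, adjacent pixel indices tend to be numerically close,
            # so a simple predictor (e.g., previous-pixel DPCM) yields small residuals.
            return sorted_map, remapped_image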

  7. Design and Simulations of an Energy Harvesting Capable CMOS Pixel for Implantable Retinal Prosthesis

    NASA Astrophysics Data System (ADS)

    Ansaripour, Iman; Karami, Mohammad Azim

    2017-12-01

    A new pixel is designed with the capability of imaging and energy harvesting for a retinal prosthesis implant in a 0.18 µm standard Complementary Metal Oxide Semiconductor technology. The pixel conversion gain and dynamic range are 2.05 µV/e- and 63.2 dB, respectively. The power consumption is 53.12 pW per pixel, while the energy harvesting performance is 3.87 nW per pixel at an illuminance of 60 klx. These results have been obtained using post-layout simulation. In the proposed pixel structure, the high power production capability in energy harvesting mode covers the demanded energy by using all available p-n junction photo-generated currents.

  8. Faxed document image restoration method based on local pixel patterns

    NASA Astrophysics Data System (ADS)

    Akiyama, Teruo; Miyamoto, Nobuo; Oguro, Masami; Ogura, Kenji

    1998-04-01

    A method for restoring degraded faxed document images using the patterns of pixels that construct small areas in a document is proposed. The method effectively restores faxed images that contain the halftone textures and/or high-density salt-and-pepper noise that degrade OCR system performance. In the halftone image restoration process, white-centered 3 x 3 pixel patterns in which black and white pixels alternate are first identified as halftone textures using the distribution of pixel values, and the white center pixels are then inverted to black. To remove high-density salt-and-pepper noise, it is assumed that the degradation is caused by ill-balanced bias and inappropriate thresholding of the sensor output, which results in the addition of random noise. The restored image can then be estimated by approximately inverting the assumed degradation process. To process degraded faxed images, the algorithms mentioned above are combined. An experiment was conducted using 24 especially poor quality examples selected from data sets that exemplify what practical fax-based OCR systems cannot handle. The maximum recovery rate in terms of mean square error was 98.8 percent.

  9. Supervised pixel classification using a feature space derived from an artificial visual system

    NASA Technical Reports Server (NTRS)

    Baxter, Lisa C.; Coggins, James M.

    1991-01-01

    Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated, and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not by image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.

  10. Super-pixel extraction based on multi-channel pulse coupled neural network

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Super-pixel extraction techniques group pixels into over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, super-pixel-based image description requires less computation, is easier to interpret, and has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model, which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel feature information and its spatial context information. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-division idea of the SLIC algorithm: the image is first divided into blocks of equal size. Then, within each block, adjacent pixels with colors similar to each seed are grouped together into a super-pixel. Finally, post-processing is applied to those pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting its parameters, and has good potential for super-pixel extraction.

  11. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on a cascaded iterative Fourier transform and a public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending each pixel into 2×2 pixels, and each element in M1 and M2' was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the corresponding 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying the public-key encryption algorithm, key distribution was facilitated; moreover, compared with the image hiding method based on optical interference, the proposed method may achieve higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
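
    The embedding stage described above can be sketched roughly as follows in Python; the superimposition coefficient alpha and the choice of which elements of each 2x2 block receive M1 and M2' are illustrative assumptions.

        import numpy as np

        def embed_masks(host, m1, m2_enc, alpha=0.05):
            # host, m1, m2_enc: 2-D arrays of identical shape (an assumption).
            # Each host pixel is expanded into a 2x2 block of the stego-image.
            h, w = host.shape
            stego = np.repeat(np.repeat(host, 2, axis=0), 2, axis=1).astype(float)

            # Add +alpha*M1 to the top-right element and subtract alpha*M2' from
            # the bottom-left element of every 2x2 block (positions are assumptions).
            stego[0::2, 1::2] += alpha * m1
            stego[1::2, 0::2] -= alpha * m2_enc
            return stego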

  12. Correcting speckle contrast at small speckle size to enhance signal to noise ratio for laser speckle contrast imaging.

    PubMed

    Qiu, Jianjun; Li, Yangyang; Huang, Qin; Wang, Yang; Li, Pengcheng

    2013-11-18

    In laser speckle contrast imaging, it was usually suggested that speckle size should exceed two camera pixels to eliminate the spatial averaging effect. In this work, we show the benefit of enhancing signal to noise ratio by correcting the speckle contrast at small speckle size. Through simulations and experiments, we demonstrated that local speckle contrast, even at speckle size much smaller than one pixel size, can be corrected through dividing the original speckle contrast by the static speckle contrast. Moreover, we show a 50% higher signal to noise ratio of the speckle contrast image at speckle size below 0.5 pixel size than that at speckle size of two pixels. These results indicate the possibility of selecting a relatively large aperture to simultaneously ensure sufficient light intensity and high accuracy and signal to noise ratio, making the laser speckle contrast imaging more flexible.
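
    A minimal Python sketch of the correction described above is given below, assuming a simple sliding-window spatial contrast; the window size and the static reference measurement are assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_speckle_contrast(img, win=7):
            # Spatial speckle contrast K = sigma / mean over a sliding window.
            img = img.astype(float)
            mean = uniform_filter(img, win)
            mean_sq = uniform_filter(img ** 2, win)
            std = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))
            return std / (mean + 1e-12)

        def corrected_contrast(dynamic_img, static_img, win=7):
            # Correct the spatial-averaging loss at small speckle size by dividing
            # the raw contrast by the contrast of a static scattering sample.
            k_raw = local_speckle_contrast(dynamic_img, win)
            k_static = local_speckle_contrast(static_img, win)
            return k_raw / (k_static + 1e-12)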

  13. Generalized procrustean image deformation for subtraction of mammograms

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Zheng, Bin; Chang, Yuan-Hsiang; Wang, Xiao Hui; Maitz, Glenn S.

    1999-05-01

    This project is a preliminary evaluation of two simple fully automatic nonlinear transformations which can map any mammographic image onto a reference image while guaranteeing registration of specific features. The first method automatically identifies skin lines, after which each pixel is given coordinates in the range [0,1] X [0,1], where the actual value of a coordinate is the fractional distance of the pixel between tissue boundaries in either the horizontal or vertical direction. This insures that skin lines are put in registration. The second method, which is the method of primary interest, automatically detects pectoral muscles, skin lines and nipple locations. For each image, a polar coordinate system is established with its origin at the intersection of the nipple axes line (NAL) and a line indicating the pectoral muscle. Points within a mammogram are identified by the angle of their position vector, relative to the NAL, and by their fractional distance between the origin and the skin line. This deforms mammograms in such a way that their pectoral lines, NALs and skin lines are all in registration. After images are deformed, their grayscales are adjusted by applying linear regression to pixel value pairs for corresponding tissue pixels. In a comparison of these methods to a previously reported 'translation/rotation' technique, evaluation of difference images clearly indicates that the polar coordinates method results in the most accurate registration of the transformations considered.

  14. Fifteen Years of the Hubble Space Telescope's Advanced Camera for Surveys: Calibration Update

    NASA Astrophysics Data System (ADS)

    Grogin, Norman A.; Advanced Camera for Surveys Instrument Team

    2017-06-01

    The Advanced Camera for Surveys (ACS) has been a workhorse HST imager for over fifteen years, subsequent to its Servicing Mission 3B installation in 2002. The once defunct ACS Wide Field Channel (WFC) has now been operating almost twice as long (>8 yrs) since its Servicing Mission 4 (SM4) repair than it had originally operated prior to its 2007 failure. Despite the accumulating radiation damage to the WFC CCDs during their long stay in low Earth orbit, ACS continues to be heavily exploited by the HST community as both a prime and a parallel detector. The past year has seen several advancements in ACS data acquisition and calibration capabilities: the most widespread changes since shortly after SM4. We review these recent developments that enable the continued high performance of this instrument, including both the WFC and the Solar Blind Channel (SBC). Highlights include: 1) implementation of new WFC subarray modes to allow for more consistent high-fidelity calibration; 2) a thorough modernization of the original pixel-based correction of WFC charge-transfer efficiency decline; 3) "save the pixels" initiatives resulting in much less WFC bad-pixel flagging via hot-pixel stability analyses and readout-dark modeling; and 4) a new initiative to provide improved PSF estimates via empirical fitting to the full ACS archive of nearly 200,000 images.

  15. A semi-automatic method for quantification and classification of erythrocytes infected with malaria parasites in microscopic images.

    PubMed

    Díaz, Gloria; González, Fabio A; Romero, Eduardo

    2009-04-01

    Visual quantification of parasitemia in thin blood films is a very tedious, subjective and time-consuming task. This study presents an original method for quantification and classification of erythrocytes in stained thin blood films infected with Plasmodium falciparum. The proposed approach is composed of three main phases: a preprocessing step, which corrects luminance differences; a segmentation step, which uses the normalized RGB color space to classify pixels as either erythrocyte or background, followed by an Inclusion-Tree representation that structures the pixel information into objects, from which erythrocytes are found; and a final two-step classification process that identifies infected erythrocytes and differentiates the infection stage using a trained bank of classifiers. Additionally, user intervention is allowed when the approach cannot make a proper decision. Four hundred fifty malaria images were used for training and evaluating the method. Automatic identification of infected erythrocytes showed a specificity of 99.7% and a sensitivity of 94%. The infection stage was determined with an average sensitivity of 78.8% and average specificity of 91.2%.
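
    The pixel-level segmentation step can be sketched as follows in Python, assuming a generic scikit-learn-style classifier in place of the paper's trained classifier bank; the normalized RGB conversion follows the description above, while the classifier and feature layout are placeholders.

        import numpy as np

        def normalized_rgb(image):
            # Convert an RGB image to normalized rgb chromaticity coordinates,
            # which reduces sensitivity to illumination differences.
            image = image.astype(float)
            total = image.sum(axis=2, keepdims=True) + 1e-12
            return image / total

        def classify_pixels(image, clf):
            # Label every pixel as erythrocyte (1) or background (0) with a trained
            # classifier; clf is any object exposing a predict() method (an
            # assumption; the paper's classifier bank is not reproduced here).
            features = normalized_rgb(image).reshape(-1, 3)
            labels = clf.predict(features)
            return labels.reshape(image.shape[:2])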

  16. Color matrix display simulation based upon luminance and chromatic contrast sensitivity of early vision

    NASA Technical Reports Server (NTRS)

    Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.

    1992-01-01

    This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.

  17. Serial data acquisition for the X-ray plasma diagnostics with selected GEM detector structures

    NASA Astrophysics Data System (ADS)

    Czarski, T.; Chernyshova, M.; Pozniak, K. T.; Kasprowicz, G.; Zabolotny, W.; Kolasinski, P.; Krawczyk, R.; Wojenski, A.; Zienkiewicz, P.

    2015-10-01

    The measurement system based on the GEM (Gas Electron Multiplier) detector is developed for X-ray diagnostics of magnetic confinement tokamak plasmas. The paper is focused on the measurement subject and describes the fundamental data processing needed to obtain reliable characteristics (histograms) useful for physicists. The required data processing has two steps: (1) processing in the time domain, i.e. event selection for bunches of coinciding clusters, and (2) processing in the planar space domain, i.e. cluster identification for the given detector structure. It is thus the software part of the project between the electronic hardware and the physics applications. The whole project is original and was developed by the paper's authors. The previous version, based on a 1-D GEM detector, was applied to the high-resolution X-ray crystal spectrometer KX1 in the JET tokamak. The current version considers 2-D detector structures for the new data acquisition system. The fast and accurate mode of data acquisition implemented in the hardware in real time can be applied to dynamic plasma diagnostics. Several detector structures with single-pixel sensors and multi-pixel (directional) sensors are considered for two-dimensional X-ray imaging. Final data processing is presented by histograms for a selected range of position, time interval and cluster charge values. Exemplary radiation source properties are measured by the basic cumulative characteristics: the cluster position distribution and the cluster charge value distribution corresponding to the energy spectra. A shorter version of this contribution is due to be published in PoS at: 1st EPS conference on Plasma Diagnostics

  18. A kind of color image segmentation algorithm based on super-pixel and PCNN

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, there are still many problems. The PCNN (Pulse Coupled Neural Network) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but due to the dynamic properties of PCNN, many unconnected neurons will pulse at the same time, so it is necessary to identify different regions for further processing. The existing PCNN image segmentation algorithm based on region growing is used for grayscale image segmentation and cannot be directly used for color image segmentation. In addition, super-pixels can better preserve the edges of images and, at the same time, reduce the influence of individual differences between pixels on image segmentation. Therefore, on the basis of super-pixels, the original PCNN algorithm based on region growing is improved in this paper. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Then, the algorithm determines whether to stop growing by comparing the average of each color channel of all the pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed algorithm for color image segmentation is fast and effective, and has a certain effect and accuracy.

  19. Digital replication of chest radiographs without altering diagnostic observer performance

    NASA Astrophysics Data System (ADS)

    Flynn, Michael J.; Davies, Eric; Spizarny, David; Beute, Gordon H.; Peterson, Edward; Eyler, William R.; Gross, Barry; Chen, Ji

    1991-05-01

    A study to test the ability of a high-fidelity system to digitize chest radiographs, store the data in a computer, and reprint the film without altering diagnostic observer performance is reported. Two hundred and fifty-two (252) chest films with subtle image features indicative of interstitial disease, pulmonary nodule, or pneumothorax, along with 36 normal chest films were used in the study. Films were selected from a key word search on a computerized report archive and were graded by two experienced radiologists. Each film was digitized with 86 micron pixels and stored in 4000 X 5000 arrays using a research instrument. Replicates were printed using a commercial laser film printer (Eastman Kodak Company) having 80 micron pixels. Originals and replicates were observed separately by two different experienced radiologists. Each indicated a graded response for the three possible pathologies. The agreement of observers between responses for replicates and originals was described by the kappa statistic and compared to the agreement when rereading the original film. The final result of this study supports a hypothesis that the replicate is indistinguishable from the original.

  20. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    PubMed

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.

  1. High sensitivity, solid state neutron detector

    DOEpatents

    Stradins, Pauls; Branz, Howard M; Wang, Qi; McHugh, Harold R

    2015-05-12

    An apparatus (200) for detecting slow or thermal neutrons (160). The apparatus (200) includes an alpha particle-detecting layer (240) that is a hydrogenated amorphous silicon p-i-n diode structure. The apparatus includes a bottom metal contact (220) and a top metal contact (250) with the diode structure (240) positioned between the two contacts (220, 250) to facilitate detection of alpha particles (170). The apparatus (200) includes a neutron conversion layer (230) formed of a material containing boron-10 isotopes. The top contact (250) is pixilated with each contact pixel extending to or proximate to an edge of the apparatus to facilitate electrical contacting. The contact pixels have elongated bodies to allow them to extend across the apparatus surface (242) with each pixel having a small surface area to match capacitance based upon a current spike detecting circuit or amplifier connected to each pixel. The neutron conversion layer (860) may be deposited on the contact pixels (830) such as with use of inkjet printing of nanoparticle ink.

  2. High sensitivity, solid state neutron detector

    DOEpatents

    Stradins, Pauls; Branz, Howard M.; Wang, Qi; McHugh, Harold R.

    2013-10-29

    An apparatus (200) for detecting slow or thermal neutrons (160) including an alpha particle-detecting layer (240) that is a hydrogenated amorphous silicon p-i-n diode structure. The apparatus includes a bottom metal contact (220) and a top metal contact (250) with the diode structure (240) positioned between the two contacts (220, 250) to facilitate detection of alpha particles (170). The apparatus (200) includes a neutron conversion layer (230) formed of a material containing boron-10 isotopes. The top contact (250) is pixilated with each contact pixel extending to or proximate to an edge of the apparatus to facilitate electrical contacting. The contact pixels have elongated bodies to allow them to extend across the apparatus surface (242) with each pixel having a small surface area to match capacitance based upon a current spike detecting circuit or amplifier connected to each pixel. The neutron conversion layer (860) may be deposited on the contact pixels (830) such as with use of inkjet printing of nanoparticle ink.

  3. A CMOS pixel sensor prototype for the outer layers of linear collider vertex detector

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Morel, F.; Hu-Guo, C.; Himmi, A.; Dorokhov, A.; Hu, Y.

    2015-01-01

    The International Linear Collider (ILC) expresses a stringent requirement for high precision vertex detectors (VXD). CMOS pixel sensors (CPS) have been considered as an option for the VXD of the International Large Detector (ILD), one of the detector concepts proposed for the ILC. MIMOSA-31 developed at IPHC-Strasbourg is the first CPS integrated with 4-bit column-level ADC for the outer layers of the VXD, adapted to an original concept minimizing the power consumption. It is composed of a matrix of 64 rows and 48 columns. The pixel concept combines in-pixel amplification with a correlated double sampling (CDS) operation in order to reduce the temporal noise and fixed pattern noise (FPN). At the bottom of the pixel array, each column is terminated with a self-triggered analog-to-digital converter (ADC). The ADC design was optimized for power saving at a sampling frequency of 6.25 MS/s. The prototype chip is fabricated in a 0.35 μm CMOS technology. This paper presents the details of the prototype chip and its test results.

  4. Three-pass protocol scheme for bitmap image security by using vernam cipher algorithm

    NASA Astrophysics Data System (ADS)

    Rachmawati, D.; Budiman, M. A.; Aulya, L.

    2018-02-01

    Confidentiality, integrity, and efficiency are crucial aspects of data security. Among digital data, image data is especially prone to abuse, such as duplication and modification. There are several data security techniques; one of them is cryptography. The security of the Vernam Cipher cryptography algorithm is heavily dependent on the key exchange process. If the key is leaked, the security of this algorithm collapses. Therefore, a method that minimizes key leakage during the exchange of messages is required. The method used here is known as the Three-Pass Protocol. This protocol enables the message delivery process without a key exchange, so messages can reach the receiver safely without fear of key leakage. The system is built using the Java programming language. The materials used for system testing are images of size 200×200, 300×300, 500×500, 800×800 and 1000×1000 pixels. The experimental results show that the Vernam Cipher algorithm in the Three-Pass Protocol scheme can restore the original image.
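
    Because the Vernam cipher is a bytewise XOR and XOR operations commute, the three-pass exchange can be sketched in a few lines of Python; the key lengths, variable names, and the stand-in for the bitmap data below are illustrative assumptions.

        import os

        def xor_bytes(data: bytes, key: bytes) -> bytes:
            # Vernam (one-time-pad style) cipher: bytewise XOR with an equal-length key.
            return bytes(d ^ k for d, k in zip(data, key))

        image_bytes = os.urandom(16)            # stand-in for the bitmap pixel data
        key_a = os.urandom(len(image_bytes))    # sender's secret key, never transmitted
        key_b = os.urandom(len(image_bytes))    # receiver's secret key, never transmitted

        pass1 = xor_bytes(image_bytes, key_a)   # sender -> receiver
        pass2 = xor_bytes(pass1, key_b)         # receiver -> sender
        pass3 = xor_bytes(pass2, key_a)         # sender removes own key, sends again

        recovered = xor_bytes(pass3, key_b)     # receiver removes own key
        assert recovered == image_bytes         # original bitmap restored, no key exchanged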

  5. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    NASA Astrophysics Data System (ADS)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
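
    A rough Python sketch of the block-wise thresholding and per-pixel interpolation stages is shown below; the per-block threshold rule (a simple block mean) is a placeholder for the saliency-weighted histogram criterion described above, and the block size is an assumption.

        import numpy as np
        from scipy.ndimage import zoom

        def blockwise_threshold_map(image, block=64):
            # One threshold per block, then bilinear interpolation of the block
            # grid back to full resolution so every pixel gets its own threshold.
            h, w = image.shape
            nby, nbx = int(np.ceil(h / block)), int(np.ceil(w / block))
            grid = np.zeros((nby, nbx))
            for by in range(nby):
                for bx in range(nbx):
                    patch = image[by*block:(by+1)*block, bx*block:(bx+1)*block]
                    grid[by, bx] = patch.mean()   # placeholder threshold rule
            thresh = zoom(grid, (h / nby, w / nbx), order=1)
            return thresh[:h, :w]

        def foreground_mask(image, block=64):
            # The comparison direction depends on whether the foreground is
            # brighter or darker than the background.
            return image > blockwise_threshold_map(image, block)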

  6. Computational imaging with a single-pixel detector and a consumer video projector

    NASA Astrophysics Data System (ADS)

    Sych, D.; Aksenov, M.

    2018-02-01

    Single-pixel imaging is a novel rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operating modular single-pixel imaging system. Light patterns in our setup are created with help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. Also, we discuss the potential use of the single-pixel imaging system for quantum applications.
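
    A minimal Python sketch of single-pixel reconstruction with an orthogonal Hadamard pattern basis is given below; the idealized noiseless bucket-detector model and the choice of Hadamard patterns are assumptions and do not reproduce the authors' projector-specific algorithms.

        import numpy as np
        from scipy.linalg import hadamard

        def simulate_and_reconstruct(scene):
            # Each measurement is the bucket-detector value for one projected
            # pattern; the image is recovered by correlating measurements with
            # the patterns (H is orthogonal up to a factor of n).
            n = scene.size                      # number of pixels (a power of 2 here)
            H = hadamard(n).astype(float)       # each row, reshaped, is one pattern
            x = scene.reshape(-1)

            y = H @ x                           # single-pixel (bucket) measurements
            x_rec = (H.T @ y) / n               # inverse via the Hadamard orthogonality
            return x_rec.reshape(scene.shape)

        # Example: a 16x16 test scene (256 pixels, so a Hadamard matrix exists).
        scene = np.zeros((16, 16))
        scene[4:10, 6:12] = 1.0
        recon = simulate_and_reconstruct(scene)
        assert np.allclose(recon, scene)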

  7. Modification of the random forest algorithm to avoid statistical dependence problems when classifying remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando

    2017-06-01

    Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so called out-of-bag cross-validation method. It is usually assumed that such estimation is not biased and may be used instead of validation based on an external data-set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting whole patch, rather than pixels/objects, in one or the other set would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data-set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
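
    The bias and one work-around can be illustrated with stock tools: the sketch below contrasts the pixel-level out-of-bag estimate with a group-aware cross-validation that keeps whole patches on one side of each split. It uses scikit-learn's GroupKFold as a stand-in for the authors' modified random forest, and the data arrays are synthetic placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import GroupKFold, cross_val_score

        # X: per-pixel (or per-object) feature vectors, y: class labels,
        # groups: an id identifying the training patch each sample came from.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 8))
        y = rng.integers(0, 3, size=600)
        groups = np.repeat(np.arange(60), 10)   # 60 patches, 10 pixels each

        clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
        clf.fit(X, y)
        print("OOB accuracy (pixel-level bootstrap):", clf.oob_score_)

        # Group-aware cross-validation keeps all pixels of a patch on the same
        # side of every split, avoiding the optimistic bias discussed above.
        scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=groups)
        print("Patch-level CV accuracy:", scores.mean())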

  8. Mapping Diffusion in a Living Cell via the Phasor Approach

    PubMed Central

    Ranjit, Suman; Lanzano, Luca; Gratton, Enrico

    2014-01-01

    Diffusion of a fluorescent protein within a cell has been measured using either fluctuation-based techniques (fluorescence correlation spectroscopy (FCS) or raster-scan image correlation spectroscopy) or particle tracking. However, none of these methods enables us to measure the diffusion of the fluorescent particle at each pixel of the image. Measurement using conventional single-point FCS at every individual pixel results in continuous long exposure of the cell to the laser and eventual bleaching of the sample. To overcome this limitation, we have developed what we believe to be a new method of scanning with simultaneous construction of a fluorescent image of the cell. In this believed new method of modified raster scanning, as it acquires the image, the laser scans each individual line multiple times before moving to the next line. This continues until the entire area is scanned. This is different from the original raster-scan image correlation spectroscopy approach, where data are acquired by scanning each frame once and then scanning the image multiple times. The total time of data acquisition needed for this method is much shorter than the time required for traditional FCS analysis at each pixel. However, at a single pixel, the acquired intensity time sequence is short; requiring nonconventional analysis of the correlation function to extract information about the diffusion. These correlation data have been analyzed using the phasor approach, a fit-free method that was originally developed for analysis of FLIM images. Analysis using this method results in an estimation of the average diffusion coefficient of the fluorescent species at each pixel of an image, and thus, a detailed diffusion map of the cell can be created. PMID:25517145

  9. Fiber pixelated image database

    NASA Astrophysics Data System (ADS)

    Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke

    2016-08-01

    Imaging of physically inaccessible parts of the body, such as the colon, at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on imaging fiber bundles are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited the usage and adaptability of these techniques. A database of pixelated images is therefore required to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help to researchers working on comb structure removal algorithms.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philipp, Hugh T., E-mail: htp2@cornell.edu; Tate, Mark W.; Purohit, Prafull

    Modern storage rings are readily capable of providing intense x-ray pulses, tens of picoseconds in duration, millions of times per second. Exploiting the temporal structure of these x-ray sources opens avenues for studying rapid structural changes in materials. Many processes (e.g. crack propagation, deformation on impact, turbulence, etc.) differ in detail from one sample trial to the next and would benefit from the ability to record successive x-ray images with single x-ray sensitivity while framing at 5 to 10 MHz rates. To this end, we have pursued the development of fast x-ray imaging detectors capable of collecting bursts of images that enable the isolation of single synchrotron bunches and/or bunch trains. The detector technology used is the hybrid pixel array detector (PAD) with a charge integrating front-end, and high-speed, in-pixel signal storage elements. A 384×256 pixel version, the Keck-PAD, with 150 µm × 150 µm pixels and 8 dedicated in-pixel storage elements is operational, has been tested at CHESS, and has collected data for compression wave studies. An updated version with 27 dedicated storage capacitors and identical pixel size has been fabricated.

  11. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  12. BigView Image Viewing on Tiled Displays

    NASA Technical Reports Server (NTRS)

    Sandstrom, Timothy

    2007-01-01

    BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running Linux. Additionally, it can work in a multi-screen environment where multiple PCs cooperate to view a single, large image. Using this software, one can explore on relatively modest machines images such as the Mars Orbiter Camera mosaic [92,160 x 33,280 pixels]. The images must be first converted into paged format, where the image is stored in 256 x 256 pages to allow rapid movement of pixels into texture memory. The format contains an image pyramid: a set of scaled versions of the original image. Each scaled image is 1/2 the size of the previous, starting with the original down to the smallest, which fits into a single 256 x 256 page.
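
    The paged-pyramid idea can be sketched in Python as follows; the 2x2 block-averaging downsampler and the zero-padding of edge tiles are implementation assumptions, not necessarily BigView's exact choices.

        import numpy as np

        PAGE = 256

        def build_pyramid(image):
            # Each level is half the size of the previous, down to a level that
            # fits in a single PAGE x PAGE tile.
            levels = [image.astype(float)]
            while max(levels[-1].shape) > PAGE:
                img = levels[-1]
                h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
                half = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
                levels.append(half)
            return levels

        def paginate(level):
            # Cut one pyramid level into PAGE x PAGE tiles (edge tiles zero-padded)
            # so any viewport can be served by loading only the tiles it covers.
            h, w = level.shape
            tiles = {}
            for ty in range(0, h, PAGE):
                for tx in range(0, w, PAGE):
                    tile = np.zeros((PAGE, PAGE))
                    patch = level[ty:ty+PAGE, tx:tx+PAGE]
                    tile[:patch.shape[0], :patch.shape[1]] = patch
                    tiles[(ty // PAGE, tx // PAGE)] = tile
            return tiles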

  13. A New Quantum Gray-Scale Image Encoding Scheme

    NASA Astrophysics Data System (ADS)

    Naseri, Mosayeb; Abdolmaleky, Mona; Parandin, Fariborz; Fatahi, Negin; Farouk, Ahmed; Nazari, Reza

    2018-02-01

    In this paper, a new quantum image encoding scheme is proposed. The proposed scheme mainly consists of four different encoding algorithms. The idea behind the scheme is a binary key generated randomly for each pixel of the original image. Afterwards, the employed encoding algorithm is selected according to the qubit pair of the generated randomized binary key. The security analysis of the proposed scheme proved its enhancement through both the randomization of the generated binary image key and the altering of the gray-scale values of the image pixels using the qubits of the randomized binary key. The simulation of the proposed scheme assures that the final encoded image cannot be recognized visually. Moreover, the histogram of the encoded image is flatter than that of the original one. The Shannon entropies of the final encoded images are significantly higher than that of the original one, which indicates that an attacker cannot gain any information about the encoded images. Supported by Kermanshah Branch, Islamic Azad University, Kermanshah, IRAN

  14. Design of measuring system for wire diameter based on sub-pixel edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yudong; Zhou, Wang

    2016-09-01

    The light projection method is often used in wire-diameter measuring systems; it has a relatively simple structure and low cost, but its measuring accuracy is limited by the pixel size of the CCD. Using a CCD with a smaller pixel size can improve the measuring accuracy, but increases the cost and the difficulty of fabrication. In this paper, through a comparative analysis of a variety of sub-pixel edge detection algorithms, a polynomial fitting method is applied to the data processing of the wire-diameter measuring system to improve the measuring accuracy and enhance noise immunity. In the system structure, a light projection method with an orthogonal structure is used for the optical detection part, which can effectively reduce the error caused by line jitter in the measuring process. For the electrical part, an ARM Cortex-M4 microprocessor is used as the core of the circuit module, which can not only drive the dual-channel linear CCD but also complete the sampling, processing and storage of the CCD video signal. In addition, the ARM microprocessor can run the whole wire-diameter measuring system at high speed without any additional chip. The experimental results show that the sub-pixel edge detection algorithm based on polynomial fitting can compensate for the limitation of the pixel size and significantly improve the precision of the wire-diameter measuring system, without increasing the hardware complexity of the entire system.
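
    A minimal Python sketch of sub-pixel edge localization by polynomial fitting is shown below; the parabola fitted to the gradient magnitude over a five-sample window is one simple variant of the approach, and the window size is an assumption.

        import numpy as np

        def subpixel_edge(profile):
            # Locate an edge with sub-pixel precision from a 1-D intensity profile
            # across the wire shadow: fit a parabola to the gradient magnitude
            # around its strongest sample and take the analytic vertex.
            grad = np.abs(np.gradient(profile.astype(float)))
            k = int(np.argmax(grad))
            lo, hi = max(k - 2, 0), min(k + 3, len(grad))
            x = np.arange(lo, hi)
            a, b, c = np.polyfit(x, grad[lo:hi], 2)        # grad ~ a*x^2 + b*x + c
            return -b / (2 * a) if a != 0 else float(k)    # vertex = sub-pixel edge position

        # The wire diameter then follows from the distance between the two opposite
        # edges, scaled by the optical magnification and the CCD pixel pitch.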

  15. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, it requires a high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager, combined with simple optics, realizes a streak camera without any vacuum tube. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using a 0.11 um CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 um.

  16. Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers

    NASA Astrophysics Data System (ADS)

    Jiang, Chufan; Li, Beiwen; Zhang, Song

    2017-04-01

    This paper presents a method that can recover absolute phase pixel by pixel without embedding markers on three phase-shifted fringe patterns, acquiring additional images, or introducing additional hardware component(s). The proposed three-dimensional (3D) absolute shape measurement technique includes the following major steps: (1) segment the measured object into different regions using rough priori knowledge of surface geometry; (2) artificially create phase maps at different z planes using geometric constraints of structured light system; (3) unwrap the phase pixel by pixel for each region by properly referring to the artificially created phase map; and (4) merge unwrapped phases from all regions into a complete absolute phase map for 3D reconstruction. We demonstrate that conventional three-step phase-shifted fringe patterns can be used to create absolute phase map pixel by pixel even for large depth range objects. We have successfully implemented our proposed computational framework to achieve absolute 3D shape measurement at 40 Hz.
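
    Steps (2) and (3) can be sketched as follows in Python, assuming the standard three-step phase-shifting formula and a precomputed artificial reference phase map; the generation of that reference map from the calibrated system geometry is not reproduced here.

        import numpy as np

        def wrapped_phase(i1, i2, i3):
            # Wrapped phase from three fringe images with phase shifts of
            # -2*pi/3, 0, +2*pi/3 (the standard three-step formula).
            return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

        def unwrap_with_reference(phi_wrapped, phi_reference):
            # Pixel-by-pixel unwrapping against an artificial reference phase map
            # (e.g., the phase of a virtual plane at a known z): choose the fringe
            # order that brings the wrapped phase closest to the reference.
            k = np.round((phi_reference - phi_wrapped) / (2.0 * np.pi))
            return phi_wrapped + 2.0 * np.pi * k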

  17. Change of spatial information under rescaling: A case study using multi-resolution image series

    NASA Astrophysics Data System (ADS)

    Chen, Weirong; Henebry, Geoffrey M.

    Spatial structure in imagery depends on a complicated interaction between the observational regime and the types and arrangements of entities within the scene that the image portrays. Although block averaging of pixels has commonly been used to simulate coarser resolution imagery, relatively little attention has been focused on the effects of simple rescaling on spatial structure, on their explanation, or on possible remedies. Yet, if there are significant differences in spatial variance between rescaled and observed images, it may affect the reliability of retrieved biogeophysical quantities. To investigate these issues, a nested series of high spatial resolution digital imagery was collected at a research site in eastern Nebraska in 2001. An airborne Kodak DCS420IR camera acquired imagery at three altitudes, yielding nominal spatial resolutions ranging from 0.187 m to 1 m. The red and near infrared (NIR) bands of the co-registered image series were normalized using pseudo-invariant features, and the normalized difference vegetation index (NDVI) was calculated. Plots of grain sorghum planted in orthogonal crop row orientations were extracted from the image series. The finest spatial resolution data were then rescaled by averaging blocks of pixels to produce a rescaled image series that closely matched the spatial resolution of the observed image series. Spatial structures of the observed and rescaled image series were characterized using semivariogram analysis. Results for NDVI and its component bands show, as expected, that decreasing spatial resolution leads to decreasing spatial variability and increasing spatial dependence. However, compared to the observed data, the rescaled images contain more persistent spatial structure that exhibits limited variation in both spatial dependence and spatial heterogeneity. Rescaling via simple block averaging fails to consider the effect of scene object shape and extent on spatial information. As the features portrayed by pixels are equally weighted regardless of the shape and extent of the underlying scene objects, the rescaled image retains more of the original spatial information than would occur through direct observation at a coarser sensor spatial resolution. In contrast, for the observed images, due to the effect of the modulation transfer function (MTF) of the imaging system, high frequency features like edges are blurred or lost as the pixel size increases, resulting in greater variation in spatial structure. Successive applications of a low-pass spatial convolution filter are shown to mimic an MTF. Accordingly, it is recommended that such a procedure be applied prior to rescaling by simple block averaging, if insufficient image metadata exist to replicate the net MTF of the imaging system, as might be expected in land cover change analysis studies using historical imagery.
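
    A minimal Python sketch of the recommended procedure, combining a few passes of a low-pass convolution with simple block averaging, is given below; the 3x3 uniform kernel and the number of passes are illustrative assumptions standing in for the net MTF of a particular imaging system.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def rescale_with_mtf(image, factor, smoothing_passes=2):
            # Apply successive low-pass convolutions to mimic the net modulation
            # transfer function of a coarser sensor, then block-average.
            img = image.astype(float)
            for _ in range(smoothing_passes):
                img = uniform_filter(img, size=3)
            h = (img.shape[0] // factor) * factor
            w = (img.shape[1] // factor) * factor
            blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
            return blocks.mean(axis=(1, 3))      # simple block averaging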

  18. Flexible ultrathin-body single-photon avalanche diode sensors and CMOS integration.

    PubMed

    Sun, Pengfei; Ishihara, Ryoichi; Charbon, Edoardo

    2016-02-22

    We proposed the world's first flexible ultrathin-body single-photon avalanche diode (SPAD) as a photon counting device, providing a suitable solution for advanced implantable bio-compatible chronic medical monitoring, diagnostics and other applications. In this paper, we investigate the Geiger-mode performance of this flexible ultrathin-body SPAD comprehensively, and we extend this work to the first flexible SPAD image sensor with in-pixel and off-pixel electronics integrated in CMOS. Experimental results show that the dark count rate (DCR) due to band-to-band tunneling can be reduced by optimizing the multiplication doping. The DCR due to trap-assisted avalanche, which is believed to originate from the trench etching process, could be further reduced, resulting in a DCR density of tens to hundreds of Hertz per square micrometer at cryogenic temperature. The influence of the trench etching process on DCR is also proved by comparison with planar ultrathin-body SPAD structures without a trench. The photon detection probability (PDP) can be improved by wider depletion and drift regions and by carefully optimizing the body thickness. PDP in frontside- (FSI) and backside-illumination (BSI) are comparable, thus making this technology suitable for both modes of illumination. Afterpulsing and crosstalk are negligible at 2 µs dead time, while it has been proved, for the first time, that a CMOS SPAD pixel of this kind can work in a cryogenic environment. By an appropriate choice of substrate, this technology is amenable to implantation for biocompatible photon-counting applications and wherever bent imaging sensors are essential.

  19. Conditional random fields for pattern recognition applied to structured data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Skurikhin, Alexei

    In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; the label for a nearby pixel patch is then more likely to be “manmade”, so there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.

  20. Conditional random fields for pattern recognition applied to structured data

    DOE PAGES

    Burr, Tom; Skurikhin, Alexei

    2015-07-14

    In pattern recognition, measurements gathered from an input domain, X, are used to predict labels from an output domain, Y. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to also be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.

  1. Mapping of major volcanic structures on Pavonis Mons in Tharsis, Mars

    NASA Astrophysics Data System (ADS)

    Orlandi, Diana; Mazzarini, Francesco; Pagli, Carolina; Pozzobon, Riccardo

    2017-04-01

    Pavonis Mons, with its 300 km of diameter and 14 km of height, is one of the largest volcanoes of Mars. It rests on a topographic high called the Tharsis rise and is located in the centre of a SW-NE trending row of volcanoes, including Arsia and Ascraeus Montes. In this study we mapped and analyzed the volcanic and tectonic structures of Pavonis Mons in order to understand its formation and the relationship between magmatic and tectonic activity. We used the ArcGIS mapping software and a vast set of high-resolution topographic and multi-spectral images, including CTX (6 m/pixel) as well as HRSC (12.5 m/pixel) and HiRISE (0.25 m/pixel) mosaic images. Furthermore, we used MOLA (463 m/pixel in the MOLA MEGDR gridded topographic data), THEMIS thermal inertia (IR-day, 100 m/pixel) and THEMIS (IR-night, 100 m/pixel) global image mosaics to map structures at the regional scale. We found a wide range of structures including ring dykes, wrinkle ridges, pit chains, lava flows, lava channels, fissures and depressions that we preliminarily interpreted as coalescent lava tubes. Many sinuous rilles have eroded Pavonis' slopes and culminate in lava aprons, similar to alluvial fans. South of Pavonis Mons we also identify a series of volcanic vents mainly aligned along a SW-NE trend. Displacements across recent crater rims and volcanic deposits (strike-slip faults and wrinkle ridges) have been documented, suggesting that, at least during the most recent volcanic phases, regional tectonics has contributed to shaping the morphology of Pavonis. The kinematics of the mapped structures is consistent with an ENE-SSW direction of the maximum horizontal stress, suggesting a possible interaction with nearby Valles Marineris. Our study provides new morphometric analyses of volcano-tectonic features that can be used to depict an evolutionary history for the Pavonis volcano.

  2. Characterization of a hybrid energy-resolving photon-counting detector

    NASA Astrophysics Data System (ADS)

    Zang, A.; Pelzer, G.; Anton, G.; Ballabriga Sune, R.; Bisello, F.; Campbell, M.; Fauler, A.; Fiederle, M.; Llopart Cudie, X.; Ritter, I.; Tennert, F.; Wölfel, S.; Wong, W. S.; Michel, T.

    2014-03-01

    Photon-counting detectors in medical x-ray imaging provide a higher dose efficiency than integrating detectors. Further possibilities for imaging applications arise if the energy of each counted photon is measured, for example K-edge imaging or optimizing image quality by applying energy weighting factors. In this contribution, we show results of the characterization of the Dosepix detector. This hybrid photon-counting pixel detector allows energy-resolved measurements with a novel concept of energy binning included in the pixel electronics. Based on ideas of the Medipix detector family, it provides three different modes of operation: an integration mode, a photon-counting mode, and an energy-binning mode. In energy-binning mode, it is possible to set 16 energy thresholds in each pixel individually to derive a binned energy spectrum in every pixel in one acquisition. The hybrid setup allows using different sensor materials. For the measurements, 300 μm Si and 1 mm CdTe were used. The detector matrix consists of 16 x 16 square pixels for CdTe (16 x 12 for Si) with a pixel pitch of 220 μm. The Dosepix was originally intended for applications in the field of radiation measurement and is therefore not optimized towards medical imaging. The detector concept itself still promises potential as an imaging detector. We present spectra measured in one single pixel as well as in the whole pixel matrix in energy-binning mode with a conventional x-ray tube. In addition, results concerning the count rate linearity for the different sensor materials are shown as well as measurements regarding energy resolution.

  3. Generation algorithm of craniofacial structure contour in cephalometric images

    NASA Astrophysics Data System (ADS)

    Mondal, Tanmoy; Jain, Ashish; Sardana, H. K.

    2010-02-01

    Anatomical structure tracing on cephalograms is an essential step in cephalometric analysis. Computerized cephalometric analysis involves both manual and automatic approaches; the manual approach is limited in accuracy and repeatability. In this paper we have attempted to develop and test a novel method for automatic localization of craniofacial structures based on edges detected in the region of interest. Based on the grey-scale features of the different regions of cephalometric images, an algorithm for obtaining tissue contours is put forward. Using edge detection with a specific threshold, an improved bidirectional contour tracing approach is proposed: after interactive selection of the starting edge pixels, the tracking process repeatedly searches for an edge pixel in the neighborhood of the previously found edge pixel to segment the image, and the craniofacial structures are then obtained. The effectiveness of the algorithm is demonstrated by the preliminary experimental results obtained with the proposed method.
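
    A minimal sketch of the kind of seed-based contour following the abstract describes (single direction shown; the paper's approach is bidirectional and includes its own thresholding). The 8-neighbourhood search order and step limit are assumptions.

      import numpy as np

      def trace_contour(edges, seed, max_steps=10000):
          """Follow a contour on a binary edge map, starting from an interactively
          selected seed edge pixel and repeatedly moving to an unvisited edge pixel
          in the 8-neighbourhood of the current pixel."""
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
          h, w = edges.shape
          path, visited, current = [seed], {seed}, seed
          for _ in range(max_steps):
              r, c = current
              nxt = None
              for dr, dc in offsets:
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < h and 0 <= cc < w and edges[rr, cc] and (rr, cc) not in visited:
                      nxt = (rr, cc)
                      break
              if nxt is None:        # dead end: no unvisited edge neighbour remains
                  break
              visited.add(nxt)
              path.append(nxt)
              current = nxt
          return path                # ordered list of (row, col) contour pixels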

  4. Selecting good regions to deblur via relative total variation

    NASA Astrophysics Data System (ADS)

    Li, Lerenhan; Yan, Hao; Fan, Zhihua; Zheng, Hanqing; Gao, Changxin; Sang, Nong

    2018-03-01

    Image deblurring aims to estimate the blur kernel and restore the latent image. It is usually divided into two stages: kernel estimation and image restoration. In kernel estimation, selecting a good region that contains structure information is helpful for the accuracy of the estimated kernel. A good region to deblur is usually chosen by experts or found by trial and error. In this paper, we apply a metric named relative total variation (RTV) to discriminate structure regions from smooth and textured ones. Given a blurry image, we first calculate the RTV of each pixel to determine whether it belongs to a structure region, after which we sample the image with overlapping windows. Finally, the sampled region containing the most structure pixels is selected as the best region to deblur. Both qualitative and quantitative experiments show that our proposed method can help to estimate the kernel accurately.
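
    A sketch of an RTV-style per-pixel measure, following the usual "windowed total variation over windowed inherent variation" construction; the window size, epsilon and the decision rule below are assumptions, not the paper's exact settings.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def rtv_map(img, win=5, eps=1e-3):
          """Relative-total-variation style measure per pixel.
          D: windowed total variation (sum of absolute gradients),
          L: windowed inherent variation (absolute value of summed gradients).
          Texture gives D >> L, coherent structure gives D close to L,
          and smooth areas give both close to zero."""
          gx = np.gradient(img.astype(float), axis=1)
          gy = np.gradient(img.astype(float), axis=0)
          D = uniform_filter(np.abs(gx), win) + uniform_filter(np.abs(gy), win)
          L = np.abs(uniform_filter(gx, win)) + np.abs(uniform_filter(gy, win))
          return D / (L + eps), D, L

      # Pixels with a large inherent variation L (strong, coherent edges) and a
      # D/L ratio near 1 can be flagged as structure pixels; the overlapping
      # window containing the most such pixels is then chosen as the region to deblur.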

  5. Hexagonal Pixels and Indexing Scheme for Binary Images

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G.

    2004-01-01

    A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid and an associated tree-structured pixel-indexing scheme keyed to the level of resolution have been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches of line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30°, 90°, and 150° with respect to, say, the horizontal rectangular coordinate axis. Optionally, one can then rotate the rectangular image by 90°, then again sample onto the hexagonal grid and check for line segments at angles of 0°, 60°, and 120° to the original horizontal coordinate axis. The net result is that one has checked for line segments at angular intervals of 30°. For even finer angular resolution, one could, for example, then rotate the rectangular-grid image ±45° before sampling to perform checking for line segments at angular intervals of 15°.
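
    A rough sketch of one way to resample a rectangular-grid binary image onto a hexagonal lattice by nearest-neighbour sampling at hexagon centres; the staggered-row layout and the (row, column) hexagonal indexing used here are illustrative assumptions, not the tree-structured indexing scheme of the record.

      import numpy as np

      def resample_to_hex(img, hex_width):
          """Sample a binary image (2-D numpy array) at the centres of a regular
          hexagonal grid. Per the text, hex_width should be <= 1/3 of the
          rectangular cell width so that no binary-image detail is lost."""
          h, w = img.shape
          dx = hex_width                      # horizontal centre spacing
          dy = hex_width * np.sqrt(3) / 2     # vertical spacing between staggered rows
          samples = []
          row, y = 0, 0.0
          while y < h - 1:
              x = 0.5 * dx if row % 2 else 0.0    # odd rows offset by half a cell
              while x < w - 1:
                  samples.append(((row, int(round(x / dx))),
                                  bool(img[int(round(y)), int(round(x))])))
                  x += dx
              y += dy
              row += 1
          return samples      # list of ((hex_row, hex_col), binary value) pairs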

  6. Weber-aware weighted mutual information evaluation for infrared-visible image fusion

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoyan; Wang, Shining; Yuan, Ding

    2016-10-01

    A performance metric for infrared and visible image fusion is proposed based on Weber's law. To indicate the stimulus of source images, two Weber components are provided. One is differential excitation to reflect the spectral signal of visible and infrared images, and the other is orientation to capture the scene structure feature. By comparing the corresponding Weber component in infrared and visible images, the source pixels can be marked with different dominant properties in intensity or structure. If the pixels have the same dominant property label, the pixels are grouped to calculate the mutual information (MI) on the corresponding Weber components between dominant source and fused images. Then, the final fusion metric is obtained via weighting the group-wise MI values according to the number of pixels in different groups. Experimental results demonstrate that the proposed metric performs well on popular image fusion cases and outperforms other image fusion metrics.
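
    A sketch of per-pixel Weber-style components, following the common Weber Local Descriptor definitions (differential excitation from the 8-neighbour relative differences, orientation from the gradient direction); the paper's exact formulation may differ in detail.

      import numpy as np
      from scipy.ndimage import convolve

      def weber_components(img, eps=1e-6):
          """Return (differential excitation, orientation) maps for an image."""
          img = img.astype(float)
          # sum over the 8 neighbours of (x_i - x_c), via a single convolution kernel
          k = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], float)
          excitation = np.arctan(convolve(img, k, mode='nearest') / (img + eps))
          gx = convolve(img, np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], float), mode='nearest')
          gy = convolve(img, np.array([[0, -1, 0], [0, 0, 0], [0, 1, 0]], float), mode='nearest')
          orientation = np.arctan2(gy, gx + eps)
          return excitation, orientation

      # Comparing the corresponding component in the infrared and visible images
      # labels each source pixel as intensity- or structure-dominant; pixels with
      # the same label are grouped, mutual information is computed per group, and
      # the group-wise values are weighted by group size to form the final metric.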

  7. Minimization of color shift generated in RGBW quad structure.

    NASA Astrophysics Data System (ADS)

    Kim, Hong Chul; Yun, Jae Kyeong; Baek, Heume-Il; Kim, Ki Duk; Oh, Eui Yeol; Chung, In Jae

    2005-03-01

    The purpose of RGBW quad structure technology is to realize higher brightness than that of a normal panel (RGB stripe structure) by adding a white sub-pixel to the existing RGB stripe structure. However, there is a side effect called 'color shift' that results from the increased brightness. This side effect degrades general color characteristics due to changes of 'Hue', 'Brightness' and 'Saturation' as compared with the existing RGB stripe structure. In particular, skin-tone colors tend to get darker in contrast to the normal panel. We've tried to minimize the color shift through use of a LUT (look-up table) for linear arithmetic processing of input data, data bit expansion to 12 bits to minimize arithmetic tolerance, and a brightness weight of the white sub-pixel on each R, G, B pixel. The objective of this study is to keep the Δu'v' value (commonly used to represent a color difference), the quantitative basis of the color difference between the RGB stripe structure and the RGBW quad structure, below the 0.01 level (from an existing 0.02 or higher), using the Macbeth ColorChecker as a general reference of color characteristics.
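
    The Δu'v' figure used here is the Euclidean distance in CIE 1976 u'v' chromaticity coordinates; a minimal sketch follows (the sample coordinates are purely illustrative).

      import numpy as np

      def delta_uv_prime(u1, v1, u2, v2):
          """CIE 1976 u'v' chromaticity difference between two colours."""
          return float(np.hypot(u1 - u2, v1 - v2))

      # e.g. the same Macbeth patch rendered on the RGB-stripe and RGBW-quad panels
      print(delta_uv_prime(0.2105, 0.4737, 0.2160, 0.4790))   # target: <= 0.01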

  8. Relation between one- and two-dimensional noise power spectra of magnetic resonance images.

    PubMed

    Ichinoseki, Yuki; Machida, Yoshio

    2017-06-01

    Our purpose in this study was to elucidate the relation between the one-dimensional (1D) and two-dimensional (2D) noise power spectra (NPSs) in magnetic resonance imaging (MRI). We measured the 1D NPSs using the slit method and the radial frequency method. In the slit method, numerical slits 1 pixel wide and L pixels long were placed on a noise image (128 × 128 pixels) and scanned in the MR image domain. We obtained the 1D NPS using the slit method (1D NPS_Slit) and the 2D NPS of the noise region scanned by the slit (2D NPS_Slit). We also obtained 1D NPS using the radial frequency method (1D NPS_Radial) by averaging the NPS values on the circumference of a circle centered at the origin of the original 2D NPS. The properties of the 1D NPS_Slits varied with L and the scanning direction in PROPELLER MRI. The 2D NPS_Slit shapes matched that of the original 2D NPS, but were compressed by L/128. The central line profiles of the 2D NPS_Slits and the 1D NPS_Slits matched exactly. Therefore, the 1D NPS_Slits reflected not only the NPS values on the central axis of the original 2D NPS, but also the NPS values around the central axis. Moreover, the measurement precisions of the 1D NPS_Slits were lower than those of the 1D NPS_Radial. Consequently, it is necessary to select the approach applied for 1D NPS measurements according to the data acquisition method and the purpose of the noise evaluation.
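
    A sketch of the radial frequency method mentioned in the abstract: 1D NPS values are obtained by averaging the 2D NPS over annuli centred at the zero-frequency origin. Bin count and normalisation are assumptions; nps2d stands for an already computed, fftshifted 2D NPS.

      import numpy as np

      def radial_average_nps(nps2d, n_bins=64):
          """1D NPS by averaging 2D NPS values on circles about the origin."""
          h, w = nps2d.shape
          fy = np.fft.fftshift(np.fft.fftfreq(h))
          fx = np.fft.fftshift(np.fft.fftfreq(w))
          r = np.hypot(*np.meshgrid(fy, fx, indexing='ij'))     # radial frequency per sample
          bins = np.linspace(0, r.max(), n_bins + 1)
          idx = np.digitize(r.ravel(), bins) - 1
          vals = nps2d.ravel()
          nps1d = np.array([vals[idx == b].mean() if np.any(idx == b) else np.nan
                            for b in range(n_bins)])
          return 0.5 * (bins[:-1] + bins[1:]), nps1d            # (bin centres, 1D NPS)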

  9. Comparative Tectonics of Europa and Ganymede

    NASA Astrophysics Data System (ADS)

    Pappalardo, R. T.; Collins, G. C.; Prockter, L. M.; Head, J. W.

    2000-10-01

    Europa and Ganymede are sibling satellites with tectonic similarities and differences. Ganymede's ancient dark terrain is crossed by furrows, probably related to ancient large impacts, and has been normal faulted to various degrees. Bright grooved terrain is pervasively deformed at multiple scales and is locally highly strained, consistent with normal faulting of an ice-rich lithosphere above a ductile asthenosphere, along with minor horizontal shear. Little evidence has been identified for compressional structures. The relative roles of tectonism and icy cryovolcanism in creating bright grooved terrain remain an outstanding issue. Some ridge and trough structures within Europa's bands show tectonic similarities to Ganymede's grooved terrain, specifically sawtooth structures resembling normal fault blocks. Small-scale troughs are consistent with widened tension fractures. Shearing has produced transtensional and transpressional structures in Europan bands. Large-scale folds are recognized on Europa, with synclinal small-scale ridges and scarps probably representing folds and/or thrust blocks. Europa's ubiquitous double ridges may have originated as warm ice upwelled along tidally heated fracture zones. The morphological variety of ridges and troughs on Europa implies that care must be taken in inferring their origin. The relative youth of Europa's surface means that the satellite has preserved near-pristine morphologies of many structures, though sputter erosion could have altered the morphology of older topography. Moderate-resolution imaging has revealed lesser apparent diversity in Ganymede's ridge and trough types. Galileo's 28th orbit has brought new 20 m/pixel imaging of Ganymede, allowing direct comparison to Europa's small-scale structures.

  10. An investigation of signal performance enhancements achieved through innovative pixel design across several generations of indirect detection, active matrix, flat-panel arrays

    PubMed Central

    Antonuk, Larry E.; Zhao, Qihua; El-Mohri, Youcef; Du, Hong; Wang, Yi; Street, Robert A.; Ho, Jackson; Weisfield, Richard; Yao, William

    2009-01-01

    Active matrix flat-panel imager (AMFPI) technology is being employed for an increasing variety of imaging applications. An important element in the adoption of this technology has been significant ongoing improvements in optical signal collection achieved through innovations in indirect detection array pixel design. Such improvements have a particularly beneficial effect on performance in applications involving low exposures and∕or high spatial frequencies, where detective quantum efficiency is strongly reduced due to the relatively high level of additive electronic noise compared to signal levels of AMFPI devices. In this article, an examination of various signal properties, as determined through measurements and calculations related to novel array designs, is reported in the context of the evolution of AMFPI pixel design. For these studies, dark, optical, and radiation signal measurements were performed on prototype imagers incorporating a variety of increasingly sophisticated array designs, with pixel pitches ranging from 75 to 127 μm. For each design, detailed measurements of fundamental pixel-level properties conducted under radiographic and fluoroscopic operating conditions are reported and the results are compared. A series of 127 μm pitch arrays employing discrete photodiodes culminated in a novel design providing an optical fill factor of ∼80% (thereby assuring improved x-ray sensitivity), and demonstrating low dark current, very low charge trapping and charge release, and a large range of linear signal response. In two of the designs having 75 and 90 μm pitches, a novel continuous photodiode structure was found to provide fill factors that approach the theoretical maximum of 100%. Both sets of novel designs achieved large fill factors by employing architectures in which some, or all of the photodiode structure was elevated above the plane of the pixel addressing transistor. Generally, enhancement of the fill factor in either discrete or continuous photodiode arrays was observed to result in no degradation in MTF due to charge sharing between pixels. While the continuous designs exhibited relatively high levels of charge trapping and release, as well as shorter ranges of linearity, it is possible that these behaviors can be addressed through further refinements to pixel design. Both the continuous and the most recent discrete photodiode designs accommodate more sophisticated pixel circuitry than is present on conventional AMFPIs – such as a pixel clamp circuit, which is demonstrated to limit signal saturation under conditions corresponding to high exposures. It is anticipated that photodiode structures such as the ones reported in this study will enable the development of even more complex pixel circuitry, such as pixel-level amplifiers, that will lead to further significant improvements in imager performance. PMID:19673228

  11. Plains South of Valles Marineris

    NASA Image and Video Library

    2017-03-28

    This enhanced-color sample reveals the incredible diversity of landforms on some Martian plains that appear bland and uniform at larger scales. Here we see layers, small channels suggesting water flow, craters, and indurated sand dunes. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.7 centimeters (10.1 inches) per pixel (with 1 x 1 binning); objects on the order of 77 centimeters (30.3 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21573

  12. Theory and applications of structured light single pixel imaging

    NASA Astrophysics Data System (ADS)

    Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.

    2018-02-01

    Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and decreased acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.
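
    A toy sketch of single-pixel acquisition and reconstruction viewed through the frame lens: each measurement is the inner product of the scene with one pattern (frame vector), and the image is recovered by least squares. A complete Hadamard pattern set is used here purely as a convenient orthogonal example; the paper's treatment covers more general frames.

      import numpy as np
      from scipy.linalg import hadamard

      def single_pixel_measure(scene, patterns):
          """One bucket-detector value per structured-illumination pattern."""
          return patterns @ scene.ravel()

      def reconstruct(measurements, patterns, shape):
          """Least-squares reconstruction from the measurement frame."""
          img, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
          return img.reshape(shape)

      n = 16                                    # 16 x 16 pixel scene
      scene = np.random.rand(n, n)
      patterns = hadamard(n * n) / (n * n)      # orthogonal +/-1 patterns (idealized)
      y = single_pixel_measure(scene, patterns)
      print(np.allclose(reconstruct(y, patterns, (n, n)), scene))   # True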

  13. Simulation of mirror surfaces for virtual estimation of visibility lines for 3D motor vehicle collision reconstruction.

    PubMed

    Leipner, Anja; Dobler, Erika; Braun, Marcel; Sieberth, Till; Ebert, Lars

    2017-10-01

    3D reconstructions of motor vehicle collisions are used to identify the causes of these events and to identify potential violations of traffic regulations. Thus far, the reconstruction of mirrors has been a problem, since mirror models are often based on approximations or inaccurate data. Our aim with this paper was to confirm that structured light scans of a mirror improve the accuracy of simulating the field of view of mirrors. We analyzed the performance of virtual mirror surfaces based on structured light scans, using real mirror surfaces and their reflections as references. We used an ATOS GOM III scanner to scan the mirrors and processed the 3D data using Geomagic Wrap. For scene reconstruction and to generate virtual images, we used 3ds Max. We compared the simulated virtual images and photographs of real scenes using Adobe Photoshop. Our results showed that we achieved clear and even mirror results and that the mirrors behaved as expected. The greatest measured deviation between an original photo and the corresponding virtual image was 20 pixels in the transverse direction for an image width of 4256 pixels. We discussed the influences of data processing and alignment of the 3D models on the results. The study was limited to a distance of 1.6 m, and the method was not able to simulate an interior mirror. In conclusion, structured light scans of mirror surfaces can be used to simulate virtual mirror surfaces with regard to 3D motor vehicle collision reconstruction.

  14. A Low-Noise X-ray Astronomical Silicon-On-Insulator Pixel Detector Using a Pinned Depleted Diode Structure

    PubMed Central

    Kamehama, Hiroki; Kawahito, Shoji; Shrestha, Sumeet; Nakanishi, Syunta; Yasutomi, Keita; Takeda, Ayaki; Tsuru, Takeshi Go

    2017-01-01

    This paper presents a novel full-depletion Si X-ray detector based on silicon-on-insulator pixel (SOIPIX) technology using a pinned depleted diode structure, named the SOIPIX-PDD. The SOIPIX-PDD greatly reduces stray capacitance at the charge sensing node, the dark current of the detector, and capacitive coupling between the sensing node and SOI circuits. These features of the SOIPIX-PDD lead to low read noise, resulting in high X-ray energy resolution and stable operation of the pixel. The back-gate surface pinning structure, using a neutralized p-well at the back-gate surface and a depleted n-well underneath the p-well for all the pixel area other than the charge sensing node, is also essential for preventing hole injection from the p-well by creating a potential barrier to holes, reducing dark current from the Si-SiO2 interface, and creating a lateral drift field to gather signal electrons in the pixel area into the small charge sensing node. A prototype chip using 0.2 μm SOI technology shows a very low readout noise of 11.0 e− rms, a low dark current density of 56 pA/cm2 at −35 °C, and energy resolutions of 200 eV (FWHM) at 5.9 keV and 280 eV (FWHM) at 13.95 keV. PMID:29295523

  15. A Low-Noise X-ray Astronomical Silicon-On-Insulator Pixel Detector Using a Pinned Depleted Diode Structure.

    PubMed

    Kamehama, Hiroki; Kawahito, Shoji; Shrestha, Sumeet; Nakanishi, Syunta; Yasutomi, Keita; Takeda, Ayaki; Tsuru, Takeshi Go; Arai, Yasuo

    2017-12-23

    This paper presents a novel full-depletion Si X-ray detector based on silicon-on-insulator pixel (SOIPIX) technology using a pinned depleted diode structure, named the SOIPIX-PDD. The SOIPIX-PDD greatly reduces stray capacitance at the charge sensing node, the dark current of the detector, and capacitive coupling between the sensing node and SOI circuits. These features of the SOIPIX-PDD lead to low read noise, resulting in high X-ray energy resolution and stable operation of the pixel. The back-gate surface pinning structure, using a neutralized p-well at the back-gate surface and a depleted n-well underneath the p-well for all the pixel area other than the charge sensing node, is also essential for preventing hole injection from the p-well by creating a potential barrier to holes, reducing dark current from the Si-SiO₂ interface, and creating a lateral drift field to gather signal electrons in the pixel area into the small charge sensing node. A prototype chip using 0.2 μm SOI technology shows a very low readout noise of 11.0 e⁻ rms, a low dark current density of 56 pA/cm² at -35 °C, and energy resolutions of 200 eV (FWHM) at 5.9 keV and 280 eV (FWHM) at 13.95 keV.

  16. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimate, which is the average of the other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with the conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide DECT images and electron density maps as accurate as conventional two-full-scan DECT.
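
    A small sketch of the two ingredients the abstract describes: exponential similarity weights computed from the full-scan FBP image, and the structure-preserving penalty on the sparse-view image (the L2 norm of each pixel minus its similarity-weighted average). The bandwidth h and the dense weight matrix over a small ROI are assumptions for illustration.

      import numpy as np

      def similarity_weights(fullscan_roi, h=30.0):
          """Row-normalised pairwise similarity weights from the full-scan image ROI."""
          x = fullscan_roi.ravel().astype(float)
          w = np.exp(-((x[:, None] - x[None, :]) / h) ** 2)   # exponential of value differences
          np.fill_diagonal(w, 0.0)
          return w / w.sum(axis=1, keepdims=True)

      def structure_penalty(second_roi, weights):
          """SPIR-style regulariser: ||y - W y||^2 for the second-energy image,
          minimised together with the data-fidelity term of the 10-view scan."""
          y = second_roi.ravel().astype(float)
          return float(np.sum((y - weights @ y) ** 2))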

  17. Local structure-based image decomposition for feature extraction with applications to face recognition.

    PubMed

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2013-09-01

    This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
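
    A sketch of the core IDLS step at one pixel: the central macro-pixel (patch) is represented as a linear combination of its neighbouring patches, with coefficients obtained by ridge regression. The patch layout and regularisation value are assumptions.

      import numpy as np

      def local_structure_coefficients(patches, lam=1.0):
          """patches: array of shape (k+1, p); row 0 is the flattened central patch,
          rows 1..k are the k neighbouring patches. Returns one coefficient per
          neighbour from the ridge-regularised normal equations."""
          center, neighbors = patches[0], patches[1:]
          A = neighbors @ neighbors.T + lam * np.eye(len(neighbors))
          b = neighbors @ center
          return np.linalg.solve(A, b)

      # Repeating this at every pixel and collecting the i-th coefficient across the
      # image yields the i-th "structure image"; the down-sampled structure images are
      # concatenated into a super-vector and passed to Fisher linear discriminant analysis.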

  18. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio

    2008-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.

  19. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E.; Moran, Emilio

    2009-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin. PMID:19789716

  20. Small pixel cross-talk MTF and its impact on MWIR sensor performance

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.; Willers, Cornelius J.

    2017-05-01

    As pixel sizes reduce in the development of modern High Definition (HD) Mid Wave Infrared (MWIR) detectors, inter-pixel cross-talk becomes increasingly difficult to regulate. The diffusion lengths required to achieve the quantum efficiency and sensitivity of MWIR detectors are typically longer than the pixel pitch dimension, and the probability of inter-pixel cross-talk increases as the pixel pitch/diffusion length fraction decreases. Inter-pixel cross-talk is most conveniently quantified by the focal plane array sampling Modulation Transfer Function (MTF). Cross-talk MTF will reduce the ideal sinc square-pixel MTF that is commonly used when modelling sensor performance. However, cross-talk MTF data is not always readily available from detector suppliers, and since the origins of inter-pixel cross-talk are uniquely device and manufacturing process specific, no generic MTF models appear to satisfy the needs of sensor designers and analysts. In this paper, cross-talk MTF data has been collected from recent publications and the development of a generic cross-talk MTF model to fit these data is investigated. The resulting cross-talk MTF model is then included in an MWIR sensor model and the impact on sensor performance is evaluated in terms of the National Imagery Interpretability Rating Scale (NIIRS) General Image Quality Equation (GIQE) metric for a range of f-number/detector pitch (Fλ/d) configurations and operating environments. By applying non-linear boost transfer functions in the signal processing chain, the contrast losses due to cross-talk may be compensated for. Boost transfer functions, however, also reduce the signal-to-noise ratio of the sensor. In this paper, boost function limits are investigated and included in the sensor performance assessments.
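
    A sketch of how a cross-talk term degrades the ideal square-pixel sinc MTF. The Gaussian roll-off used here is only a generic stand-in for a cross-talk MTF model (the paper fits its own model to published data); the pitch and width values are illustrative.

      import numpy as np

      def detector_mtf(xi, pitch, crosstalk_sigma):
          """Sampling MTF of a square pixel (|sinc|) multiplied by an assumed
          Gaussian cross-talk roll-off. xi: spatial frequency in cycles/mm,
          pitch: pixel pitch in mm, crosstalk_sigma: roll-off width in cycles/mm."""
          pixel = np.abs(np.sinc(xi * pitch))            # np.sinc(x) = sin(pi x)/(pi x)
          crosstalk = np.exp(-0.5 * (xi / crosstalk_sigma) ** 2)
          return pixel * crosstalk

      pitch = 0.008                                      # e.g. an 8 um HD MWIR pixel, in mm
      xi = np.linspace(0, 1.0 / (2 * pitch), 200)        # up to the Nyquist frequency
      print(detector_mtf(xi, pitch, crosstalk_sigma=40.0)[-1])   # MTF at Nyquist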

  1. Up Scalable Full Colour Plasmonic Pixels with Controllable Hue, Brightness and Saturation.

    PubMed

    Mudachathi, Renilkumar; Tanaka, Takuo

    2017-04-26

    It has long been the interest of scientists to develop ink-free colour printing techniques using nanostructured materials, inspired by the brilliant colours found in many creatures such as butterflies and peacocks. Recently, isolated metal nanostructures exhibiting preferential light absorption and scattering have been explored as promising candidates for this emerging field. Applying such structures in practice, however, demands the production of individual colours with distinct reflection peaks tunable across the visible wavelength region, combined with controllable colour attributes and economically feasible fabrication. Herein, we present a simple yet efficient colour printing approach employing sub-micrometer scale plasmonic pixels of a single constituent metal structure that supports near-unity broadband light absorption at two distinct wavelengths, facilitating the creation of saturated colours. The dependence of these resonances on two different parameters of the same pixel enables controllable colour attributes such as hue, brightness and saturation across the visible spectrum. The linear dependence of the colour attributes on the pixel parameters eases automation, which, combined with the use of inexpensive and stable aluminum as the functional material, will make this colour design strategy relevant for various commercial applications such as printing micro-images for security purposes, consumer product colouration and functionalized decoration, to name a few.

  2. Optical Double Image Hiding in the Fractional Hartley Transform Using Structured Phase Filter and Arnold Transform

    NASA Astrophysics Data System (ADS)

    Yadav, Poonam Lata; Singh, Hukum

    2018-06-01

    To maintain the security of image encryption and to protect the image from intruders, a new asymmetric cryptosystem based on the fractional Hartley transform (FrHT) and the Arnold transform (AT) is proposed. AT is a pixel-scrambling operation in which the pixels of the image are reorganized. In this cryptosystem we use AT to spread the information content of the two original images over the encrypted images, increasing the security of the encoded images. We also use a structured phase mask (SPM) and a hybrid mask (HM) as encryption keys. The original image is first multiplied by the SPM and HM and then transformed with the direct and inverse fractional Hartley transforms to obtain the encrypted image. The fractional orders of the FrHT and the parameters of the AT serve as the keys of the encryption and decryption methods. Only if both keys are used correctly can the original image be retrieved. The recommended method strengthens the security of DRPE by enlarging the key space and the number of parameters, and it is robust against various attacks. Using MATLAB 8.3.0.52 (R2014a), we evaluate the strength of the recommended cryptosystem. A set of simulated results shows the power of the proposed asymmetric cryptosystem.
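
    For reference, the Arnold (cat-map) scrambling of a square N x N image moves the pixel at (x, y) to ((x + y) mod N, (x + 2y) mod N); the number of iterations acts as part of the key. A minimal sketch (not the authors' code) is shown below.

      import numpy as np

      def arnold_transform(img, iterations=1):
          """Arnold cat-map scrambling of a square image. The map is a bijection
          (determinant 1 mod N), so the image can be recovered by the inverse map
          or by iterating up to the map's period."""
          n = img.shape[0]
          assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
          xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
          out = img.copy()
          for _ in range(iterations):
              nxt = np.empty_like(out)
              nxt[(xs + ys) % n, (xs + 2 * ys) % n] = out   # scatter each pixel to its new position
              out = nxt
          return out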

  3. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

    Water body is a fundamental element in urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging, because urban water bodies are mostly small and spectral confusion between water and complex urban features is widespread. The water index (WI) is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed for analyzing the urban environment at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research consist of: (1) developing an automatic technique for extracting mixed land-water pixels using a water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive iterative selection of the optimal neighboring land pixel, respectively; (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatterplot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then derived as the starting point for selecting land-water pixels based on the histogram of the WI image, with the land and water thresholds determined from the slopes of the histogram curve. Based on this pixel-level process, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Then SMA is applied to the mixed land-water pixels for water fraction estimation at the subpixel level. Under the assumption that the endmember signature of a target pixel should be more similar to adjacent pixels due to spatial dependence, the endmembers of water and land are determined from neighboring pure water or pure land pixels within a given distance. To obtain the most representative endmembers in SMA, we designed an adaptive iterative endmember selection method based on the spatial similarity of adjacent pixels. According to the spectral similarity in a spatially adjacent region, the land endmember spectrum is determined by selecting the most representative land pixel in a local window, and the water endmember spectrum is determined by averaging the water pixels in the local window. The proposed hierarchical processing method based on WI and SMA (WISMA) is applied to urban areas for reliability evaluation using Landsat-8 Operational Land Imager (OLI) images. For comparison, four methods at the pixel and subpixel levels were chosen. Results indicate that the water maps generated by the proposed method correspond closely with the reference water maps at subpixel precision. The results also showed that WISMA achieved the best performance in water mapping under a comprehensive analysis of different accuracy evaluation indexes (RMSE and SE).
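
    A sketch of the final step, two-endmember linear unmixing of a mixed land-water pixel: solve pixel ≈ f·water + (1−f)·land for the water fraction f in a least-squares sense. The endmember reflectance values below are purely illustrative; in the paper they come from neighbouring pure pixels.

      import numpy as np

      def water_fraction(pixel, water_end, land_end):
          """Least-squares water fraction for one pixel, clipped to [0, 1].
          pixel, water_end, land_end: spectra over the OLI bands used."""
          d = water_end - land_end
          f = np.dot(pixel - land_end, d) / np.dot(d, d)
          return float(np.clip(f, 0.0, 1.0))

      water = np.array([0.06, 0.05, 0.04, 0.02])   # illustrative water endmember
      land  = np.array([0.10, 0.12, 0.15, 0.30])   # illustrative land endmember
      mixed = 0.4 * water + 0.6 * land
      print(water_fraction(mixed, water, land))    # ~0.4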

  4. Tritium autoradiography with thinned and back-side illuminated monolithic active pixel sensor device

    NASA Astrophysics Data System (ADS)

    Deptuch, G.

    2005-05-01

    The first autoradiographic results of a tritium (³H)-marked source obtained with monolithic active pixel sensors are presented. The detector is a high-resolution, back-side illuminated imager, developed within the SUCIMA collaboration for low-energy (<30 keV) electron detection. The sensitivity to these energies is obtained by thinning the detector, originally fabricated in the form of a standard VLSI chip, down to the thickness of the epitaxial layer. The detector used is the 1×10⁶ pixel, thinned MIMOSA V chip. The low noise performance and the thin (~160 nm) entrance window give the device sensitivity to energies as low as ~4 keV. A polymer tritium source was placed directly atop the detector in open-air conditions. A real-time image of the source was obtained.

  5. The Area Coverage of Geophysical Fields as a Function of Sensor Field-of-View

    NASA Technical Reports Server (NTRS)

    Key, Jeffrey R.

    1994-01-01

    In many remote sensing studies of geophysical fields such as clouds, land cover, or sea ice characteristics, the fractional area coverage of the field in an image is estimated as the proportion of pixels that have the characteristic of interest (i.e., are part of the field) as determined by some thresholding operation. The effect of sensor field-of-view on this estimate is examined by modeling the unknown distribution of subpixel area fraction with the beta distribution, whose two parameters depend upon the true fractional area coverage, the pixel size, and the spatial structure of the geophysical field. Since it is often not possible to relate digital number, reflectance, or temperature to subpixel area fraction, the statistical models described are used to determine the effect of pixel size and thresholding operations on the estimate of area fraction for hypothetical geophysical fields. Examples are given for simulated cumuliform clouds and linear openings in sea ice, whose spatial structures are described by an exponential autocovariance function. It is shown that the rate and direction of change in total area fraction with changing pixel size depends on the true area fraction, the spatial structure, and the thresholding operation used.
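
    A Monte-Carlo sketch of the idea described above: subpixel area fractions are modelled as Beta-distributed, a pixel is labelled "covered" when its fraction exceeds a threshold, and the scene fraction is estimated as the proportion of covered pixels. The shape parameters and thresholds are illustrative; in the model they depend on the true coverage, pixel size and spatial structure.

      import numpy as np

      def estimated_area_fraction(a, b, threshold, n_pixels=100_000, rng=None):
          """Return (true Beta-mean coverage, threshold-based pixel-count estimate)."""
          rng = rng or np.random.default_rng(0)
          frac = rng.beta(a, b, n_pixels)            # subpixel area fractions
          return a / (a + b), float(np.mean(frac > threshold))

      print(estimated_area_fraction(2.0, 2.0, 0.5))  # symmetric case: estimate ~ truth
      print(estimated_area_fraction(0.2, 0.2, 0.5))  # U-shaped (nearly pure pixels)
      print(estimated_area_fraction(2.0, 2.0, 0.3))  # low threshold: estimate biased high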

  6. An effective approach for gap-filling continental scale remotely sensed time-series

    PubMed Central

    Weiss, Daniel J.; Atkinson, Peter M.; Bhatt, Samir; Mappin, Bonnie; Hay, Simon I.; Gething, Peter W.

    2014-01-01

    The archives of imagery and modeled data products derived from remote sensing programs with high temporal resolution provide powerful resources for characterizing inter- and intra-annual environmental dynamics. The impressive depth of available time-series from such missions (e.g., MODIS and AVHRR) affords new opportunities for improving data usability by leveraging spatial and temporal information inherent to longitudinal geospatial datasets. In this research we develop an approach for filling gaps in imagery time-series that result primarily from cloud cover, which is particularly problematic in forested equatorial regions. Our approach consists of two, complementary gap-filling algorithms and a variety of run-time options that allow users to balance competing demands of model accuracy and processing time. We applied the gap-filling methodology to MODIS Enhanced Vegetation Index (EVI) and daytime and nighttime Land Surface Temperature (LST) datasets for the African continent for 2000–2012, with a 1 km spatial resolution, and an 8-day temporal resolution. We validated the method by introducing and filling artificial gaps, and then comparing the original data with model predictions. Our approach achieved R2 values above 0.87 even for pixels within 500 km wide introduced gaps. Furthermore, the structure of our approach allows estimation of the error associated with each gap-filled pixel based on the distance to the non-gap pixels used to model its fill value, thus providing a mechanism for including uncertainty associated with the gap-filling process in downstream applications of the resulting datasets. PMID:25642100

  7. Development of Gentle Slope Light Guide Structure in a 3.4 μm Pixel Pitch Global Shutter CMOS Image Sensor with Multiple Accumulation Shutter Technology.

    PubMed

    Sekine, Hiroshi; Kobayashi, Masahiro; Onuki, Yusuke; Kawabata, Kazunari; Tsuboi, Toshiki; Matsuno, Yasushi; Takahashi, Hidekazu; Inoue, Shunsuke; Ichikawa, Takeshi

    2017-12-09

    CMOS image sensors (CISs) with global shutter (GS) function are strongly required in order to avoid image degradation. However, CISs with GS function have generally been inferior to rolling shutter (RS) CISs in performance, because they have more components. This problem is particularly pronounced at small pixel pitches. The newly developed 3.4 µm pitch GS CIS solves this problem by using multiple accumulation shutter technology and the gentle slope light guide structure. As a result, the developed GS pixel achieves 1.8 e⁻ temporal noise and 16,200 e⁻ full well capacity with charge domain memory in 120 fps operation. The sensitivity and parasitic light sensitivity are 28,000 e⁻/lx·s and −89 dB, respectively. Moreover, the incident light angle dependence of sensitivity and parasitic light sensitivity are improved by the gentle slope light guide structure.

  8. Generalized pixel profiling and comparative segmentation with application to arteriovenous malformation segmentation.

    PubMed

    Babin, D; Pižurica, A; Bellens, R; De Bock, J; Shang, Y; Goossens, B; Vansteenkiste, E; Philips, W

    2012-07-01

    Extraction of structural and geometric information from 3-D images of blood vessels is a well known and widely addressed segmentation problem. The segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, with a special application in diagnostics and surgery on arteriovenous malformations (AVM). However, the techniques addressing the problem of the AVM inner structure segmentation are rare. In this work we present a novel method of pixel profiling with the application to segmentation of the 3-D angiography AVM images. Our algorithm stands out in situations with low resolution images and high variability of pixel intensity. Another advantage of our method is that the parameters are set automatically, which yields little manual user intervention. The results on phantoms and real data demonstrate its effectiveness and potentials for fine delineation of AVM structure.

  9. Liquid-crystal projection image depixelization by spatial phase scrambling

    NASA Astrophysics Data System (ADS)

    Yang, Xiangyang; Jutamulia, Suganda; Li, Nan

    1996-08-01

    A technique that removes the pixel structure by scrambling the relative phases among multiple spatial spectra is described. Because of the pixel structure of the liquid-crystal-display (LCD) panel, multiple spectra are generated at the Fourier-spectrum plane (usually at the back focal plane of the imaging lens). A transparent phase mask is placed at the Fourier-spectrum plane such that each spectral order is modulated by one of the subareas of the phase mask, and the phase delay resulting from each pair of subareas is longer than the coherence length of the light source, which is approximately 1 μm for the wideband white light sources used in most LCDs. Such a phase-scrambling technique eliminates the coherence between different spectral orders; therefore, the reconstructed images from the multiple spectra will superimpose incoherently, and the pixel structure will not be observed in the projection image.

  10. Digital simulation of staining in histopathology multispectral images: enhancement and linear transformation of spectral transmittance.

    PubMed

    Bautista, Pinky A; Yagi, Yukako

    2012-05-01

    Hematoxylin and eosin (H&E) stain is currently the most popular for routine histopathology staining. Special and/or immuno-histochemical (IHC) staining is often requested to further corroborate the initial diagnosis on H&E stained tissue sections. Digital simulation of staining (or digital staining) can be a very valuable tool to produce the desired stained images from the H&E stained tissue sections instantaneously. We present an approach to digital staining of histopathology multispectral images by combining the effects of spectral enhancement and spectral transformation. Spectral enhancement is accomplished by shifting the N-band original spectrum of the multispectral pixel with the weighted difference between the pixel's original and estimated spectrum; the spectrum is estimated using M < N principal component (PC) vectors. The pixel's enhanced spectrum is transformed to the spectral configuration associated to its reaction to a specific stain by utilizing an N × N transformation matrix, which is derived through application of least mean squares method to the enhanced and target spectral transmittance samples of the different tissue components found in the image. Results of our experiments on the digital conversion of an H&E stained multispectral image to its Masson's trichrome stained equivalent show the viability of the method.

  11. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak Poyneer (2003); Löfdahl (2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors Sjödahl (1994). These errors are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed by using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola Poyneer (2003); quadratic polynomial Löfdahl (2010); threshold center of gravity Bailey (2003); Gaussian Nobach & Honkanen (2005) and Pyramid Bailey (2003). The systematic error study reveals that the pyramid fit is the most robust to pixel locking effects. The RMS error analysis study reveals that the threshold centre of gravity behaves better in low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both the systematic and the RMS error reduction. To overcome the above problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed using a sub-pixel level grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The generation of these sub-pixel-grid region-of-interest images is achieved with bi-cubic interpolation. Correlation matching with a sub-pixel grid technique was previously reported in electronic speckle photography Sjödahl (1994). This technique is applied here to solar wavefront sensing. A large dynamic range and a better accuracy in the measurements are achieved with the combination of the original pixel grid based correlation matching in a large field of view and a sub-pixel interpolated image grid based correlation matching within a small field of view. The results revealed that the proposed method outperforms all the different peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5 times improved image sampling was used. This measurement is achieved at the expense of twice the computational cost. With the 5 times improved image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing an appropriate increment of image sampling as a trade-off between the computational speed limitation and the aimed sub-pixel image shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source; a laser guide star; a Galactic Center extended scene). The results are planned to be submitted to the Optical Express journal.
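
    A sketch of the two-step idea: an integer-pixel shift from the cross-correlation maximum, followed by a repeat of the matching on bicubically upsampled images to reach sub-pixel precision. For brevity this sketch correlates the full upsampled images rather than restricting the search to a 4 × 4 pixel window around the first estimate, as the summary describes; the upsample factor is an assumption.

      import numpy as np
      from scipy.ndimage import zoom
      from scipy.signal import correlate2d

      def integer_shift(ref, target):
          """Step 1: integer-pixel shift from the cross-correlation peak location."""
          cc = correlate2d(target - target.mean(), ref - ref.mean(), mode='same')
          peak = np.array(np.unravel_index(np.argmax(cc), cc.shape))
          return peak - np.array(cc.shape) // 2          # offset of the peak from zero lag

      def refined_shift(ref, target, upsample=5):
          """Step 2: repeat the matching on a finer, bicubically interpolated grid."""
          ref_u = zoom(ref, upsample, order=3)            # order=3 -> bicubic interpolation
          tgt_u = zoom(target, upsample, order=3)
          return integer_shift(ref_u, tgt_u) / upsample   # shift in original-pixel units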

  12. Security of fragile authentication watermarks with localization

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica

    2002-04-01

    In this paper, we study the security of fragile image authentication watermarks that can localize tampered areas. We start by comparing the goals, capabilities, and advantages of image authentication based on watermarking and cryptography. Then we point out some common security problems of current fragile authentication watermarks with localization and classify attacks on authentication watermarks into five categories. By investigating the attacks and vulnerabilities of current schemes, we propose a variation of the Wong scheme [18] that is fast, simple, cryptographically secure, and resistant to all known attacks, including the Holliman-Memon attack [9]. In the new scheme, a special symmetry structure in the logo is used to authenticate the block content, while the logo itself carries information about the block origin (block index, the image index or time stamp, author ID, etc.). Because the authentication of the content and its origin are separated, it is possible to easily identify swapped blocks between images and accurately detect cropped areas, while being able to accurately localize tampered pixels.

  13. Modulation transfer function measurement of microbolometer focal plane array by Lloyd's mirror method

    NASA Astrophysics Data System (ADS)

    Druart, Guillaume; Rommeluere, Sylvain; Viale, Thibault; Guerineau, Nicolas; Ribet-Mohamed, Isabelle; Crastes, Arnaud; Durand, Alain; Taboury, Jean

    2014-05-01

    Today, both military and civilian applications require miniaturized and cheap optical systems. One way to follow this trend is to decrease the pixel pitch of focal plane arrays (FPAs). In order to evaluate the performance of the overall optical system, it is necessary to measure the modulation transfer function (MTF) of these pixels. However, small pixels lead to higher cut-off frequencies, and therefore original MTF measurement methods able to reach these high cut-off frequencies are needed. In this paper, we present a way to extract the 1D MTF at high frequencies by projecting fringes on the FPA. The device uses a Lloyd mirror placed near and perpendicular to the focal plane array. Consequently, an interference pattern of fringes can be projected on the detector. By varying the angle of incidence of the light beam, we can tune the period of the interference fringes and thus explore a wide range of spatial frequencies, particularly around the cut-off frequency of the pixel, which is one of the most interesting regions. The method is illustrated on a 640×480 microbolometer focal plane array with a pixel pitch of 17 µm in the LWIR spectral region.

  14. Precise color images a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems have been used in many fields of science and engineering. Although high-speed camera systems have reached high performance, most applications only acquire high-speed motion pictures. However, in some fields of science and technology it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas and molten materials, and recent digital high-speed video imaging technology should be able to extract such information from these objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels with 256 (8 bit) intensity levels for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was reduced to at most 0.2 pixels by this method.

  15. Development of high energy micro-tomography system at SPring-8

    NASA Astrophysics Data System (ADS)

    Uesugi, Kentaro; Hoshino, Masato

    2017-09-01

    A high energy X-ray micro-tomography system has been developed at BL20B2 in SPring-8. The available energy range is between 20 keV and 113 keV with a Si (511) double crystal monochromator. The system enables us to image large or heavy objects such as fossils and metals. The X-ray image detector consists of a visible-light conversion system and an sCMOS camera. The effective pixel size can be changed discretely between 6.5 μm/pixel and 25.5 μm/pixel with a tandem lens. The format of the camera is 2048 pixels x 2048 pixels. As a demonstration of the system, an alkaline battery and a nodule from Bolivia were imaged. Details of the internal structure of the battery and of a female mold trilobite were successfully imaged without breaking the specimens.

  16. Design and fabrication of AlGaInP-based micro-light-emitting-diode array devices

    NASA Astrophysics Data System (ADS)

    Bao, Xingzhen; Liang, Jingqiu; Liang, Zhongzhu; Wang, Weibiao; Tian, Chao; Qin, Yuxin; Lü, Jinguang

    2016-04-01

    An integrated high-resolution (individual pixel size 80 μm×80 μm), solid-state, self-emissive active-matrix structure comprising 320×240 micro-light-emitting-diode arrays was designed and fabricated on an AlGaInP semiconductor chip using micro-electro-mechanical systems, microstructure and semiconductor fabrication techniques. Row pixels share a p-electrode and line pixels share an n-electrode. We experimentally investigated how the GaAs substrate thickness affects the electrical and optical characteristics of the pixels. For a 150-μm-thick GaAs substrate, the single-pixel output power was 167.4 μW at 5 mA and increased to 326.4 μW when the current was increased to 10 mA. The device investigated can potentially play an important role in many fields.

  17. A low-noise CMOS pixel direct charge sensor, Topmetal-II-

    DOE PAGES

    An, Mangmang; Chen, Chufeng; Gao, Chaosong; ...

    2015-12-12

    In this paper, we report the design and characterization of a CMOS pixel direct charge sensor, Topmetal-II-, fabricated in a standard 0.35 μm CMOS Integrated Circuit process. The sensor utilizes exposed metal patches on top of each pixel to directly collect charge. Each pixel contains a low-noise charge-sensitive preamplifier to establish the analog signal and a discriminator with tunable threshold to generate hits. The analog signal from each pixel is accessible through time-shared multiplexing over the entire array. Hits are read out digitally through a column-based priority logic structure. Tests show that the sensor achieved a <15 e⁻ analog noise and a 200 e⁻ minimum threshold for digital readout per pixel. The sensor is capable of detecting both electrons and ions drifting in gas. Lastly, these characteristics enable its use as the charge readout device in future Time Projection Chambers without a gaseous gain mechanism, which has unique advantages in low-background and low-rate-density experiments.

  18. A low-noise CMOS pixel direct charge sensor, Topmetal-II-

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Mangmang; Chen, Chufeng; Gao, Chaosong

    In this paper, we report the design and characterization of a CMOS pixel direct charge sensor, Topmetal-II-, fabricated in a standard 0.35 μm CMOS Integrated Circuit process. The sensor utilizes exposed metal patches on top of each pixel to directly collect charge. Each pixel contains a low-noise charge-sensitive preamplifier to establish the analog signal and a discriminator with tunable threshold to generate hits. The analog signal from each pixel is accessible through time-shared multiplexing over the entire array. Hits are read out digitally through a column-based priority logic structure. Tests show that the sensor achieved a <15 e⁻ analog noise and a 200 e⁻ minimum threshold for digital readout per pixel. The sensor is capable of detecting both electrons and ions drifting in gas. Lastly, these characteristics enable its use as the charge readout device in future Time Projection Chambers without a gaseous gain mechanism, which has unique advantages in low-background and low-rate-density experiments.

  19. Measuring the effective pixel positions for the HARPS3 CCD

    NASA Astrophysics Data System (ADS)

    Hall, Richard D.; Thompson, Samantha; Queloz, Didier

    2016-07-01

    We present preliminary results from an experiment designed to measure the effective pixel positions of a CCD to sub-pixel precision. This technique will be used to characterise the 4k x 4k CCD destined for the HARPS-3 spectrograph. The principle of coherent beam interference is used to create intensity fringes along one axis of the CCD. By sweeping the physical parameters of the experiment, the geometry of the fringes can be altered which is used to probe the pixel structure. We also present the limitations of the current experimental set-up and suggest what will be implemented in the future to vastly improve the precision of the measurements.

  20. Mapping of the Culann-Tohil Region of Io

    NASA Technical Reports Server (NTRS)

    Turtle, E. P.; Keszthelyi, L. P.; Jaeger, W. L.; Radebaugh, J.; Milazzo, M. P.; McEwen, A. S.; Moore, J. M.; Schenk, P. M.; Lopes, R. M. C.

    2003-01-01

    The Galileo spacecraft completed its observations of Jupiter's volcanic moon Io in October 2001 with the orbit I32 flyby, during which new local (13-55 m/pixel) and regional (130-400 m/pixel) resolution images and spectroscopic data were returned for the antijovian hemisphere. We have combined an I32 regional mosaic (330 m/pixel) with lower-resolution C21 color data (1.4 km/pixel, Figure 1) and produced a geomorphologic map of the Culann-Tohil area of this hemisphere. Here we present the geologic features, map units, and structures in this region, and give preliminary conclusions about geologic activity for comparison with other regions to better understand Io's geologic evolution.

  1. Laser pixelation of thick scintillators for medical imaging applications: x-ray studies

    NASA Astrophysics Data System (ADS)

    Sabet, Hamid; Kudrolli, Haris; Marton, Zsolt; Singh, Bipin; Nagarkar, Vivek V.

    2013-09-01

    To achieve the high spatial resolution required in nuclear imaging, the scintillation light spread has to be controlled. This has traditionally been achieved by introducing structures in the bulk of scintillation materials, typically by mechanically pixelating the scintillator and filling the resultant inter-pixel gaps with reflective materials. Mechanical pixelation, however, is accompanied by various cost and complexity issues, especially for hard, brittle and hygroscopic materials. For example, LSO and LYSO, hard and brittle scintillators of interest to the medical imaging community, are known to crack under thermal and mechanical stress; the material yield drops quickly for large arrays with high-aspect-ratio pixels, and the cost of the pixelation process therefore increases. We are utilizing a novel technique named Laser Induced Optical Barriers (LIOB) for the pixelation of scintillators that overcomes the issues associated with mechanical pixelation. With this technique, we can introduce optical barriers within the bulk of scintillator crystals to form pixelated arrays with small pixel size and large thickness. We applied LIOB to LYSO using a high-frequency solid-state laser. Arrays with different crystal thicknesses (5 to 20 mm) and pixel sizes (0.8×0.8 to 1.5×1.5 mm²) were fabricated and tested. The width of the optical barriers was controlled by fine-tuning key parameters such as the lens focal spot size and the laser energy density. Here we report on the LIOB process, its optimization, and optical crosstalk measurements using X-rays. Many applications can potentially benefit from LIOB, including but not limited to clinical/pre-clinical PET and SPECT systems and photon-counting CT detectors.

  2. A simple and effective method for filling gaps in Landsat ETM+ SLC-off images

    USGS Publications Warehouse

    Chen, Jin; Zhu, Xiaolin; Vogelmann, James E.; Gao, Feng; Jin, Suming

    2011-01-01

    The scan-line corrector (SLC) of the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor failed in 2003, resulting in about 22% of the pixels per scene not being scanned. The SLC failure has seriously limited the scientific applications of ETM+ data. While there have been a number of methods developed to fill in the data gaps, each method has shortcomings, especially for heterogeneous landscapes. Based on the assumption that the same-class neighboring pixels around the un-scanned pixels have similar spectral characteristics, and that these neighboring and un-scanned pixels exhibit similar patterns of spectral differences between dates, we developed a simple and effective method to interpolate the values of the pixels within the gaps. We refer to this method as the Neighborhood Similar Pixel Interpolator (NSPI). Simulated and actual SLC-off ETM+ images were used to assess the performance of the NSPI. Results indicate that NSPI can restore the value of un-scanned pixels very accurately, and that it works especially well in heterogeneous regions. In addition, it can work well even if there is a relatively long time interval or significant spectral changes between the input and target image. The filled images appear reasonably spatially continuous without obvious striping patterns. Supervised classification using the maximum likelihood algorithm was done on both gap-filled simulated SLC-off data and the original "gap free" data set, and it was found that classification results, including accuracies, were very comparable. This indicates that gap-filled products generated by NSPI will have relevance to the user community for various land cover applications. In addition, the simple principle and high computational efficiency of NSPI will enable processing large volumes of SLC-off ETM+ data.
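
    The neighborhood-similarity idea can be sketched in a few lines; the following single-band simplification (hypothetical function name, NumPy assumed) is not the published NSPI code and omits its class constraints, window growing and distance weighting, but shows how similar valid neighbours and their between-date differences fill a gap pixel:

      import numpy as np

      def nspi_fill_band(target, reference, gap_mask, win=7, n_similar=10):
          # For each gap pixel, pick the valid neighbours whose reference-image
          # values are most similar to the gap pixel's reference value, and
          # transfer their mean reference-to-target change.
          filled = target.astype(float)
          half = win // 2
          rows, cols = target.shape
          for r, c in zip(*np.where(gap_mask)):
              r0, r1 = max(0, r - half), min(rows, r + half + 1)
              c0, c1 = max(0, c - half), min(cols, c + half + 1)
              ref_win = reference[r0:r1, c0:c1]
              tgt_win = target[r0:r1, c0:c1]
              ok = ~gap_mask[r0:r1, c0:c1]
              if not ok.any():
                  continue                      # would enlarge the window in practice
              order = np.argsort(np.abs(ref_win[ok] - reference[r, c]))[:n_similar]
              neigh_ref = ref_win[ok][order].astype(float)
              neigh_tgt = tgt_win[ok][order].astype(float)
              filled[r, c] = reference[r, c] + np.mean(neigh_tgt - neigh_ref)
          return filled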

  3. The structure of the mitotic spindle and nucleolus during mitosis in the amebo-flagellate Naegleria.

    PubMed

    Walsh, Charles J

    2012-01-01

    Mitosis in the amebo-flagellate Naegleria pringsheimi is acentrosomal and closed (the nuclear membrane does not break down). The large central nucleolus, which occupies about 20% of the nuclear volume, persists throughout the cell cycle. At mitosis, the nucleolus divides and moves to the poles in association with the chromosomes. The structure of the mitotic spindle and its relationship to the nucleolus are unknown. To identify the origin and structure of the mitotic spindle and its relationship to the nucleolus, and to further understand the influence of persistent nucleoli on cellular division in acentriolar organisms like Naegleria, three-dimensional reconstructions of the mitotic spindle and nucleolus were carried out using confocal microscopy. Monoclonal antibodies against three different nucleolar regions and α-tubulin were used to image the nucleolus and mitotic spindle. Microtubules were restricted to the nucleolus beginning with the earliest prophase spindle microtubules. Early spindle microtubules were seen as short rods on the surface of the nucleolus. Elongation of the spindle microtubules resulted in a rough cage of microtubules surrounding the nucleolus. At metaphase, the mitotic spindle formed a broad band completely embedded within the nucleolus. The nucleolus separated into two discrete masses connected by a dense band of microtubules as the spindle elongated. At telophase, the distal ends of the mitotic spindle were still completely embedded within the daughter nucleoli. Pixel-by-pixel comparison of tubulin and nucleolar protein fluorescence showed 70% or more of tubulin co-localized with nucleolar proteins by early prophase. These observations suggest a model in which specific nucleolar binding sites for microtubules allow mitotic spindle formation and attachment. The fact that a significant mass of nucleolar material precedes the chromosomes as the mitotic spindle elongates suggests that spindle elongation drives nucleolar division.

  4. The bipolar silicon microstrip detector: A proposal for a novel precision tracking device

    NASA Astrophysics Data System (ADS)

    Horisberger, R.

    1990-03-01

    It is proposed to combine the technology of fully depleted silicon microstrip detectors fabricated on n-doped high-resistivity silicon with the concept of the bipolar transistor. This is done by adding an n++ doped region inside the normal p+ implanted region of the reverse-biased p+n diode. The resulting structure has amplifying properties and is referred to as a bipolar pixel transistor. The simplest readout scheme of a bipolar pixel array, by an aluminium strip bus, leads to the bipolar microstrip detector. The bipolar pixel structure is expected to give a better signal-to-noise performance for the detection of minimum ionizing charged particle tracks than the normal silicon diode strip detector and should therefore allow the future fabrication of thinner silicon detectors for precision tracking.

  5. Active pixel sensor pixel having a photodetector whose output is coupled to an output transistor gate

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nakamura, Junichi (Inventor); Kemeny, Sabrina E. (Inventor)

    2005-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. A Simple Floating Gate (SFG) pixel structure could also be employed in the imager to provide a non-destructive readout and smaller pixel sizes.

  6. Fluorescence X-ray absorption spectroscopy using a Ge pixel array detector: application to high-temperature superconducting thin-film single crystals.

    PubMed

    Oyanagi, H; Tsukada, A; Naito, M; Saini, N L; Lampert, M O; Gutknecht, D; Dressler, P; Ogawa, S; Kasai, K; Mohamed, S; Fukano, A

    2006-07-01

    A Ge pixel array detector with 100 segments was applied to fluorescence X-ray absorption spectroscopy, probing the local structure of high-temperature superconducting thin-film single crystals (100 nm in thickness). Independent monitoring of pixel signals allows real-time inspection of artifacts owing to substrate diffractions. By optimizing the grazing-incidence angle theta and adjusting the azimuthal angle phi, smooth extended X-ray absorption fine structure (EXAFS) oscillations were obtained for strained (La,Sr)2CuO4 thin-film single crystals grown by molecular beam epitaxy. The results of EXAFS data analysis show that the local structure (CuO6 octahedron) in (La,Sr)2CuO4 thin films grown on LaSrAlO4 and SrTiO3 substrates is uniaxially distorted, changing the tetragonality by approximately 5 × 10⁻³ in accordance with the crystallographic lattice mismatch. It is demonstrated that the local structure of thin-film single crystals can be probed with high accuracy at low temperature without interference from substrates.

  7. Digital Evolution. Pixel Palette.

    ERIC Educational Resources Information Center

    Fionda, Robert

    2000-01-01

    Describes a project for high school students that introduces them to photographic software. Students design a new species of animal by reassembling parts of at least four animals into a "believable" new form. Students also write a commentary on their digital animal's habitat and origin. (CMK)

  8. Painting Patterns with Pixels.

    ERIC Educational Resources Information Center

    Yoerg, Kim

    2002-01-01

    Describes an art unit for middle school students where they created their own original pattern through the use of "ClarisWorks Paint." Discusses the procedure for the project and the evaluation used at the end of the unit. Emphasizes the importance of learning about computers. (CMK)

  9. Microradiography with Semiconductor Pixel Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakubek, Jan; Cejnarova, Andrea; Dammer, Jiri

    High resolution radiography (with X-rays, neutrons, heavy charged particles, ...) often exploited also in tomographic mode to provide 3D images stands as a powerful imaging technique for instant and nondestructive visualization of fine internal structure of objects. Novel types of semiconductor single particle counting pixel detectors offer many advantages for radiation imaging: high detection efficiency, energy discrimination or direct energy measurement, noiseless digital integration (counting), high frame rate and virtually unlimited dynamic range. This article shows the application and potential of pixel detectors (such as Medipix2 or TimePix) in different fields of radiation imaging.

  10. Efficient reversible data hiding in encrypted image with public key cryptosystem

    NASA Astrophysics Data System (ADS)

    Xiang, Shijun; Luo, Xinrong

    2017-12-01

    This paper proposes a new reversible data hiding scheme for encrypted images that uses the homomorphic and probabilistic properties of the Paillier cryptosystem. The proposed method can embed additional data directly into an encrypted image without any preprocessing operations on the original image. By selecting two pixels as a group for encryption, the data hider can retrieve the absolute differences of the pixel pairs by employing a modular multiplicative inverse method. Additional data can be embedded into the encrypted image by shifting the histogram of the absolute differences using the homomorphic property in the encrypted domain. On the receiver side, a legitimate user can extract the marked histogram in the encrypted domain in the same way as in the data hiding procedure. Then, the hidden data can be extracted from the marked histogram and the encrypted version of the original image can be restored by inverse histogram shifting operations. In addition, the marked absolute differences can be computed after decryption for extraction of the additional data and restoration of the original image. Compared with previous state-of-the-art works, the proposed scheme avoids preprocessing operations before encryption and can efficiently embed and extract data in the encrypted domain. Experiments on standard test images also confirm the effectiveness of the proposed scheme.
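
    The additive homomorphism that the scheme relies on can be demonstrated with a toy Paillier implementation (the primes below are far too small for real security, and the helper names are hypothetical); the product of two ciphertexts decrypts to the sum of the plaintexts, which is what allows histograms of differences to be manipulated in the encrypted domain:

      from math import gcd
      import secrets

      def lcm(a, b):
          return a * b // gcd(a, b)

      def paillier_keygen(p=2147483647, q=2147483629):
          # Toy primes only -- far too small for real use.  With g = n + 1 the
          # decryption constant mu simplifies to the inverse of lambda mod n.
          n = p * q
          lam = lcm(p - 1, q - 1)
          mu = pow(lam, -1, n)
          return (n, n + 1), (lam, mu)

      def encrypt(pub, m):
          n, g = pub
          r = secrets.randbelow(n - 1) + 1
          return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

      def decrypt(pub, priv, c):
          n, _ = pub
          lam, mu = priv
          return ((pow(c, lam, n * n) - 1) // n * mu) % n

      pub, priv = paillier_keygen()
      c1, c2 = encrypt(pub, 120), encrypt(pub, 87)       # two pixel values
      c_sum = (c1 * c2) % (pub[0] ** 2)                  # homomorphic addition
      assert decrypt(pub, priv, c_sum) == 120 + 87       # decrypts to the sum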

  11. Validating spatial structure in canopy water content using geostatistics

    NASA Technical Reports Server (NTRS)

    Sanderson, E. W.; Zhang, M. H.; Ustin, S. L.; Rejmankova, E.; Haxo, R. S.

    1995-01-01

    Heterogeneity in ecological phenomena is scale dependent and affects the hierarchical structure of image data. AVIRIS pixels average reflectance produced by complex absorption and scattering interactions between biogeochemical composition, canopy architecture, view and illumination angles, species distributions, and plant cover, as well as other factors. These scales affect validation of pixel reflectance, typically performed by relating pixel spectra to ground measurements acquired at scales of 1 m² or less (e.g., field spectra, foliage and soil samples, etc.). As image analyses become more sophisticated, such as those for detection of canopy chemistry, validation becomes a critical problem. This paper presents a methodology for bridging between point measurements and pixels using geostatistics. Geostatistics have been extensively used in geological or hydrogeological studies but have received little application in ecological studies. The key criterion for kriging estimation is that the phenomenon varies in space and that an underlying controlling process produces spatial correlation between the measured data points. Ecological variation meets this requirement because communities vary along environmental gradients like soil moisture, nutrient availability, or topography.

  12. Fabrication of close-packed TES microcalorimeter arrays using superconducting molybdenum/gold transition-edge sensors

    NASA Astrophysics Data System (ADS)

    Finkbeiner, F. M.; Brekosky, R. P.; Chervenak, J. A.; Figueroa-Feliciano, E.; Li, M. J.; Lindeman, M. A.; Stahle, C. K.; Stahle, C. M.; Tralshawala, N.

    2002-02-01

    We present an overview of our efforts in fabricating Transition-Edge Sensor (TES) microcalorimeter arrays for use in astronomical x-ray spectroscopy. Two distinct types of array schemes are currently pursued: a 5×5 single-pixel TES array, where each pixel is a TES microcalorimeter, and a Position-Sensing TES (PoST) array. In the latter, a row of 7 or 15 thermally-linked absorber pixels is read out by two TES at its ends. Both schemes employ superconducting Mo/Au bilayers as the TES. The TES are placed on silicon nitride membranes for thermal isolation from the structural frame. The silicon nitride membranes are prepared by a Deep Reactive Ion Etch (DRIE) process into a silicon wafer. In order to achieve closely packed arrays without decreasing their structural and functional integrity, we have developed the technology to fabricate arrays of cantilevered pixel-sized absorbers and slit membranes in silicon nitride films. Furthermore, we have started to investigate ultra-low-resistance through-wafer micro-vias to bring the electrical contact out to the back of a wafer.

  13. Invalid-point removal based on epipolar constraint in the structured-light method

    NASA Astrophysics Data System (ADS)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-06-01

    In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
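
    The invalidation criterion reduces to a point-to-line distance test; a minimal sketch (hypothetical function names, NumPy assumed, and assuming the fundamental matrix maps homogeneous camera pixels to epipolar lines in the projector image plane) is:

      import numpy as np

      def epipolar_distances(F, cam_pts, proj_pts):
          # cam_pts, proj_pts: N x 2 arrays of camera pixels and the retrieved
          # projector image coordinates (PICs).  Each epipolar line l = F x is
          # stored as (a, b, c); the distance is |a u + b v + c| / sqrt(a^2 + b^2).
          cam_h = np.hstack([cam_pts, np.ones((len(cam_pts), 1))])
          lines = cam_h @ F.T
          num = np.abs(np.sum(lines[:, :2] * proj_pts, axis=1) + lines[:, 2])
          den = np.hypot(lines[:, 0], lines[:, 1])
          return num / den

      def valid_point_mask(F, cam_pts, proj_pts, threshold=1.0):
          # Keep only points whose PIC lies within `threshold` pixels of its
          # epipolar line; the rest are removed as invalid points.
          return epipolar_distances(F, cam_pts, proj_pts) <= threshold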

  14. Optical performances of the FM JEM-X masks

    NASA Astrophysics Data System (ADS)

    Reglero, V.; Rodrigo, J.; Velasco, T.; Gasent, J. L.; Chato, R.; Alamo, J.; Suso, J.; Blay, P.; Martínez, S.; Doñate, M.; Reina, M.; Sabau, D.; Ruiz-Urien, I.; Santos, I.; Zarauz, J.; Vázquez, J.

    2001-09-01

    The JEM-X Signal Multiplexing Systems are large HURA codes "written" in a pure tungsten plate 0.5 mm thick. 24,247 hexagonal pixels (25% open) are spread over a total area of 535 mm diameter. The tungsten plate is embedded in a mechanical structure formed by a Ti ring, a pretensioning system (Cu-Be) and an exoskeleton structure that provides the required stiffness. The JEM-X masks differ from the SPI and IBIS masks in the absence of a code support structure covering the mask assembly: open pixels are fully transparent to X-rays. The scope of this paper is to report the optical performance of the FM JEM-X masks, defined by the uncertainties on pixel location (centroid) and size arising from the manufacturing and assembly processes. The stability of the code elements under thermoelastic deformations is also discussed. As a general statement, the JEM-X mask optical properties are nearly one order of magnitude better than specified in 1994 during the ESA instrument selection.

  15. Phoebe: A Surface Dominated by Water

    NASA Astrophysics Data System (ADS)

    Fraser, Wesley C.; Brown, Michael E.

    2018-07-01

    The Saturnian irregular satellite, Phoebe, can be broadly described as a water-rich rock. This object, which presumably originated from the same primordial population shared by the dynamically excited Kuiper Belt Objects (KBOs), has received high-resolution spectral imaging during the Cassini flyby. We present a new analysis of the Visual Infrared Mapping Spectrometer observations of Phoebe, which critically, includes a geometry correction routine that enables pixel-by-pixel mapping of visible and infrared spectral cubes directly onto the Phoebe shape model, even when an image exhibits significant trailing errors. The result of our re-analysis is a successful match of 46 images, producing spectral maps covering the majority of Phoebe’s surface, roughly a third of which is imaged by high-resolution observations (<22 km per pixel resolution). There is no spot on Phoebe’s surface that is absent of water absorption. The regions richest in water are clearly associated with the Jason and south pole impact basins. Phoebe exhibits only three spectral types, and a water–ice concentration that correlates with physical depth and visible albedo. The water-rich and water-poor regions exhibit significantly different crater size frequency distributions and different large crater morphologies. We propose that Phoebe once had a water-poor surface whose water–ice concentration was enhanced by basin-forming impacts that exposed richer subsurface layers. The range of Phoebe’s water–ice absorption spans the same range exhibited by dynamically excited KBOs. The common water–ice absorption depths and primordial origins, and the association of Phoebe’s water-rich regions with its impact basins, suggests the plausible idea that KBOs also originated with water-poor surfaces that were enhanced through stochastic collisional modification.

  16. BOREAS TE-18, 60-m, Radiometrically Rectified Landsat TM Imagery

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team used a radiometric rectification process to produce standardized DN values for a series of Landsat TM images of the BOREAS SSA and NSA in order to compare images that were collected under different atmospheric conditions. The images for each study area were referenced to an image that had very clear atmospheric qualities. The reference image for the SSA was collected on 02-Sep-1994, while the reference image for the NSA was collected on 21-Jun-1995. The 23 rectified images cover the period of 07-Jul-1985 to 18-Sep-1994 in the SSA and 22-Jun-1984 to 09-Jun-1994 in the NSA. Each of the reference scenes had coincident atmospheric optical thickness measurements made by RSS-11. The radiometric rectification process is described in more detail by Hall et al. (1991). The original Landsat TM data were received from CCRS for use in the BOREAS project. Due to the nature of the radiometric rectification process and copyright issues, the full-resolution (30-m) images may not be publicly distributed. However, this spatially degraded 60-m resolution version of the images may be openly distributed and is available on the BOREAS CD-ROM series. After the radiometric rectification processing, the original data were degraded to a 60-m pixel size from the original 30-m pixel size by averaging the data over a 2- by 2-pixel window. The data are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
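
    The 30-m to 60-m degradation described above is a plain 2-by-2 block average, e.g. (NumPy assumed; a hypothetical helper that trims odd-sized edges):

      import numpy as np

      def degrade_2x2(band):
          # Average non-overlapping 2x2 windows: 30-m pixels -> 60-m pixels.
          h, w = band.shape
          band = band[:h - h % 2, :w - w % 2]          # trim odd edges if any
          return band.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))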

  17. Color image encryption based on hybrid hyper-chaotic system and cellular automata

    NASA Astrophysics Data System (ADS)

    Yaghouti Niyat, Abolfazl; Moattar, Mohammad Hossein; Niazi Torshiz, Masood

    2017-03-01

    This paper proposes an image encryption scheme based on Cellular Automata (CA). A CA is a self-organizing structure with a set of cells in which each cell is updated by certain rules that depend on a limited number of neighboring cells. The major disadvantages of cellular automata in cryptography are the limited number of reversible rules and the inability to produce long sequences of states with these rules. In this paper, a non-uniform cellular automata framework is proposed to solve this problem. The proposed scheme consists of confusion and diffusion steps. In the confusion step, the positions of the original image pixels are scrambled by a chaotic map. A key image is created using non-uniform cellular automata, and hyper-chaotic mapping is then used to select random numbers from the key image for encryption. The main contribution of the paper is the application of hyper-chaotic functions and non-uniform CA for robust key image generation. Security analysis and experimental results show that the proposed method has a very large key space and is resistant to noise and attacks. The correlation between adjacent pixels in the encrypted image is reduced, and the entropy is 7.9991, very close to the ideal value of 8.
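
    A generic confusion-diffusion round of this kind can be sketched as follows (a deliberately simplified stand-in: a single logistic map replaces the paper's non-uniform CA key image and hyper-chaotic selection, and the key parameters shown are arbitrary):

      import numpy as np

      def logistic_sequence(x0, r, n):
          # Iterate the logistic map x <- r*x*(1-x); a stand-in keystream
          # generator (the paper uses hyper-chaos plus cellular automata).
          seq = np.empty(n)
          x = x0
          for i in range(n):
              x = r * x * (1.0 - x)
              seq[i] = x
          return seq

      def encrypt(image, x0=0.3456, r=3.99):
          # image: uint8 array; (x0, r) play the role of the secret key.
          flat = image.flatten()
          n = flat.size
          chaos = logistic_sequence(x0, r, 2 * n)
          perm = np.argsort(chaos[:n])                    # confusion: permutation
          keystream = (chaos[n:] * 256).astype(np.uint8)  # diffusion: XOR keystream
          return (flat[perm] ^ keystream).reshape(image.shape)

      def decrypt(cipher, x0=0.3456, r=3.99):
          flat = cipher.flatten()
          n = flat.size
          chaos = logistic_sequence(x0, r, 2 * n)
          perm = np.argsort(chaos[:n])
          keystream = (chaos[n:] * 256).astype(np.uint8)
          plain = np.empty_like(flat)
          plain[perm] = flat ^ keystream                  # undo XOR, then permutation
          return plain.reshape(cipher.shape)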

  18. A fast fully constrained geometric unmixing of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Li, Xiao-run; Cui, Jian-tao; Zhao, Liao-ying; Zheng, Jun-peng

    2014-11-01

    A great challenge in hyperspectral image analysis is decomposing a mixed pixel into a collection of endmembers and their corresponding abundance fractions. This paper presents an improved implementation of the barycentric coordinate approach to unmix hyperspectral images, integrated with the most-negative-remove projection method to meet the abundance sum-to-one constraint (ASC) and the abundance non-negativity constraint (ANC). The original barycentric coordinate approach interprets the endmember unmixing problem as a simplex volume ratio problem, solved by calculating the determinants of two augmented matrices: one consists of all the endmembers, and the other consists of the to-be-unmixed pixel and all the endmembers except the one corresponding to the specific abundance being estimated. In this paper, we first modify the barycentric coordinate algorithm by bringing in the matrix determinant lemma to simplify the unmixing process, so that the calculation contains only linear matrix and vector operations; the per-pixel matrix determinant calculation required by the original algorithm is thus avoided. At the end of this step, the estimated abundances meet the ASC. Then, the most-negative-remove projection method is used to make the abundance fractions meet the full constraints. The algorithm is demonstrated on both synthetic and real images. It yields abundance maps similar to those obtained by FCLS, while the runtime is better thanks to its computational simplicity.
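
    The barycentric view says that the abundances of a pixel are its barycentric coordinates with respect to the endmember simplex; a minimal sketch (hypothetical function name, NumPy assumed) obtains them from a least-squares solve with the sum-to-one row appended, without the matrix-determinant-lemma speed-up or the most-negative-remove projection:

      import numpy as np

      def barycentric_abundances(pixel, endmembers):
          # pixel: length-B spectrum; endmembers: B x p matrix (one column per
          # endmember).  The appended row of ones expresses the ASC; it is
          # enforced here only in the least-squares sense, and ANC is not
          # enforced at all (negative abundances are what the most-negative-
          # remove projection in the paper deals with).
          B, p = endmembers.shape
          A = np.vstack([endmembers, np.ones((1, p))])
          b = np.append(np.asarray(pixel, dtype=float), 1.0)
          abundances, *_ = np.linalg.lstsq(A, b, rcond=None)
          return abundances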

  19. Superimpose methods for uncooled infrared camera applied to the micro-scale thermal characterization of composite materials

    NASA Astrophysics Data System (ADS)

    Morikawa, Junko

    2015-05-01

    The mobile-type apparatus for quantitative micro-scale thermography using a micro-bolometer was developed based on our original techniques, such as an achromatic lens design to capture micro-scale images in the long-wave infrared, video signal superimposing for real-time emissivity correction, and pseudo-acceleration of the time frame. The total size of the instrument was designed so that it fits in a 17 cm x 28 cm x 26 cm carrying box. The video signal synthesizer enables direct recording of the digital signal of the monitored temperature or positioning data. The encoded digital signal embedded in each image is decoded for read-out; the protocol to encode/decode the measured data was originally defined. The mixed signals of the IR camera and the superimposed data were applied to pixel-by-pixel emissivity corrections and to the pseudo-acceleration of periodic thermal phenomena. Because the emissivity of industrial materials and biological tissues is usually inhomogeneous, each pixel has a different temperature dependence. The time-scale resolution for periodic thermal events was improved with the "pseudo-acceleration" algorithm, which reduces noise by integrating multiple image frames while keeping the time resolution. The anisotropic thermal properties of composite materials, such as thermally insulating cellular plastics and biometric composite materials, were analyzed using these techniques.

  20. Isidis Basin Ejecta

    NASA Image and Video Library

    2017-03-02

    This scene is a jumbled mess. There are blocks and smears of many different rocks types that appear to have been dumped into a pile. That's probably about what happened, as ejecta from the Isidis impact basin to the east. This pile of old rocks is an island surrounded by younger lava flows from Syrtis Major. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 27.4 centimeters (10.8 inches) per pixel (with 1 x 1 binning); objects on the order of 82 centimeters (32.2 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21553

  1. Spatio-Temporal Mining of PolSAR Satellite Image Time Series

    NASA Astrophysics Data System (ADS)

    Julea, A.; Meger, N.; Trouve, E.; Bolon, Ph.; Rigotti, C.; Fallourd, R.; Nicolas, J.-M.; Vasile, G.; Gay, M.; Harant, O.; Ferro-Famil, L.

    2010-12-01

    This paper presents an original data mining approach for describing Satellite Image Time Series (SITS) spatially and temporally. It relies on pixel-based evolution and sub-evolution extraction. These evolutions, namely the frequent grouped sequential patterns, are required to cover a minimum surface and to affect pixels that are sufficiently connected. These spatial constraints are actively used to face large data volumes and to select evolutions making sense for end-users. In this paper, a specific application to fully polarimetric SAR image time series is presented. Preliminary experiments performed on a RADARSAT-2 SITS covering the Chamonix Mont-Blanc test-site are used to illustrate the proposed approach.

  2. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound medical (US) imaging non-invasively pictures the inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle effectively but also blurs the object of interest. The second-fold process then restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet-coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are therefore named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a clear visual quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.

  3. Toward Unified Satellite Climatology of Aerosol Properties. 3. MODIS Versus MISR Versus AERONET

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Liu, Li; Geogdzhayev, Igor V.; Travis, Larry D.; Cairns, Brian; Lacis, Andrew A.

    2010-01-01

    We use the full duration of collocated pixel-level MODIS-Terra and MISR aerosol optical thickness (AOT) retrievals and level 2 cloud-screened quality-assured AERONET measurements to evaluate the likely individual MODIS and MISR retrieval accuracies globally over oceans and land. We show that the use of quality-assured MODIS AOTs as opposed to the use of all MODIS AOTs has little effect on the resulting accuracy. The MODIS and MISR relative standard deviations (RSTDs) with respect to AERONET are remarkably stable over the entire measurement record and reveal nearly identical overall AOT performances of MODIS and MISR over the entire suite of AERONET sites. This result is used to evaluate the likely pixel-level MODIS and MISR performances on the global basis with respect to the (unknown) actual AOTs. For this purpose, we use only fully compatible MISR and MODIS aerosol pixels. We conclude that the likely RSTDs for this subset of MODIS and MISR AOTs are 73% over land and 30% over oceans. The average RSTDs for the combined [AOT(MODIS)+AOT(MISR)]/2 pixel-level product are close to 66% and 27%, respectively, which allows us to recommend this simple blend as a better alternative to the original MODIS and MISR data. These accuracy estimates still do not represent the totality of MISR and quality-assured MODIS pixel-level AOTs since an unaccounted for and potentially significant source of errors is imperfect cloud screening. Furthermore, the many collocated pixels for which one of the datasets reports a retrieval while the other one does not may also be problematic.

  4. A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Goldberg, Hirsh; Nasrabadi, Nasser M.

    2007-04-01

    In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectra and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation of the projection of the current test pixel spectra and the OWR mean spectra are greater than a certain threshold. Comparisons are made using receiver operating characteristics (ROC) curves.
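
    The baseline Reed-Xiaoli detector also implemented in the paper can be sketched in a few lines (NumPy assumed; the dual-window projection-based variants and their kernelized versions are not reproduced here):

      import numpy as np

      def rx_scores(cube):
          # Global Reed-Xiaoli score for each pixel of an (rows, cols, bands)
          # cube: Mahalanobis distance to the scene mean under the scene
          # covariance (a pseudo-inverse is used for numerical stability).
          rows, cols, bands = cube.shape
          X = cube.reshape(-1, bands).astype(float)
          mu = X.mean(axis=0)
          cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
          D = X - mu
          scores = np.einsum('ij,jk,ik->i', D, cov_inv, D)
          return scores.reshape(rows, cols)

      def detect_anomalies(cube, quantile=0.999):
          # Flag pixels whose RX score exceeds a high quantile of the scene.
          s = rx_scores(cube)
          return s > np.quantile(s, quantile)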

  5. High-End CMOS Active Pixel Sensors For Space-Borne Imaging Instruments

    DTIC Science & Technology

    2005-07-13

    Distribution/Availability Statement: Approved for public release, distribution unlimited. Supplementary Notes: See also ADM001791, Potentially Disruptive ... Technologies and Their Impact in Space Programs, held in Marseille, France, on 4-6 July 2005. The original document contains color images.

  6. Salt-and-pepper noise removal using modified mean filter and total variation minimization

    NASA Astrophysics Data System (ADS)

    Aghajarian, Mickael; McInroy, John E.; Wright, Cameron H. G.

    2018-01-01

    The search for effective noise removal algorithms is still a real challenge in the field of image processing. An efficient image denoising method is proposed for images that are corrupted by salt-and-pepper noise. Salt-and-pepper noise takes either the minimum or the maximum intensity, so the proposed method restores the image by processing only the pixels whose values are either 0 or 255 (assuming an 8-bit/pixel image). For low levels of noise corruption (less than or equal to 50% noise density), the method employs the modified mean filter (MMF), while for heavy noise corruption, noisy pixel values are replaced by a weighted average of the MMF output and the total variation of the corrupted pixels, which is minimized using convex optimization. Two fuzzy systems are used to determine the weights for the averaging. To evaluate the performance of the algorithm, several test images with different noise levels are restored, and the results are quantitatively measured by peak signal-to-noise ratio and mean absolute error. The results show that the proposed scheme gives considerable noise suppression up to a noise density of 90%, while almost completely maintaining the edges and fine details of the original image.
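
    The low-density branch can be illustrated with a short sketch (hypothetical function name, NumPy assumed): only pixels at 0 or 255 are treated as noise candidates and each is replaced by the mean of the non-candidate pixels in its window, in the spirit of a modified mean filter; the fuzzy-weighted total-variation branch for heavy corruption is not reproduced:

      import numpy as np

      def modified_mean_filter(img, win=3):
          # Replace salt-and-pepper candidates (0 or 255 in an 8-bit image) with
          # the mean of the non-candidate pixels in a win x win window; pixels
          # with no clean neighbours are left unchanged (a larger window would
          # be used in practice).
          out = img.astype(float)
          noisy = (img == 0) | (img == 255)
          half = win // 2
          rows, cols = img.shape
          for r, c in zip(*np.where(noisy)):
              r0, r1 = max(0, r - half), min(rows, r + half + 1)
              c0, c1 = max(0, c - half), min(cols, c + half + 1)
              window = img[r0:r1, c0:c1]
              clean = window[(window != 0) & (window != 255)]
              if clean.size:
                  out[r, c] = clean.mean()
          return np.clip(np.rint(out), 0, 255).astype(np.uint8)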

  7. The discriminant pixel approach: a new tool for the rational interpretation of GCxGC-MS chromatograms.

    PubMed

    Vial, Jérôme; Pezous, Benoît; Thiébaut, Didier; Sassiat, Patrick; Teillet, Béatrice; Cahours, Xavier; Rivals, Isabelle

    2011-01-30

    GCxGC is now recognized as the analytical technique best suited to the characterization of complex mixtures of volatile compounds; it is implemented worldwide in academic and industrial laboratories. However, in the framework of comprehensive analysis of non-target analytes, going beyond the visual examination of the color plots remains challenging for most users. We propose a strategy that aims at classifying chromatograms according to the chemical composition of the samples while determining the origin of the discrimination between different classes of samples: the discriminant pixel approach. After data pre-processing and time alignment, the discriminatory power of each chromatogram pixel for a given class was defined as its correlation with the membership to this class. Using a peak-finding algorithm, the most discriminant pixels were then linked to chromatographic peaks. Finally, crosschecking with mass spectrometry data made it possible to establish relationships with compounds that could consequently be considered candidate class markers. This strategy was applied to a large experimental data set of 145 GCxGC-MS chromatograms of tobacco extracts corresponding to three distinct classes of tobacco.
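
    The per-pixel discriminatory power is just a correlation with a class-membership indicator across the aligned chromatograms; a minimal sketch (hypothetical function name, NumPy assumed) is:

      import numpy as np

      def discriminant_pixels(chromatograms, in_class):
          # chromatograms: aligned stack of shape (n_samples, ny, nx);
          # in_class: binary class-membership vector of length n_samples.
          # Returns the per-pixel Pearson correlation with class membership;
          # high |r| marks candidate discriminant pixels.
          X = chromatograms.reshape(len(chromatograms), -1).astype(float)
          y = np.asarray(in_class, dtype=float)
          Xc = X - X.mean(axis=0)
          yc = y - y.mean()
          num = Xc.T @ yc
          den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
          return (num / den).reshape(chromatograms.shape[1:])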

  8. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines

    PubMed Central

    Press, William H.

    2006-01-01

    Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude. PMID:17159155

  9. Discrete Radon transform has an exact, fast inverse and generalizes to operations other than sums along lines.

    PubMed

    Press, William H

    2006-12-19

    Götz, Druckmüller, and, independently, Brady have defined a discrete Radon transform (DRT) that sums an image's pixel values along a set of aptly chosen discrete lines, complete in slope and intercept. The transform is fast, O(N² log N) for an N × N image; it uses only addition, not multiplication or interpolation, and it admits a fast, exact algorithm for the adjoint operation, namely backprojection. This paper shows that the transform additionally has a fast, exact (although iterative) inverse. The inverse reproduces to machine accuracy the pixel-by-pixel values of the original image from its DRT, without artifacts or a finite point-spread function. Fourier or fast Fourier transform methods are not used. The inverse can also be calculated from sampled sinograms and is well conditioned in the presence of noise. Also introduced are generalizations of the DRT that combine pixel values along lines by operations other than addition. For example, there is a fast transform that calculates median values along all discrete lines and is able to detect linear features at low signal-to-noise ratios in the presence of pointlike clutter features of arbitrarily large amplitude.
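
    What the transform accumulates can be illustrated with a direct O(N³) sum along nearest-integer digital lines for one quadrant of slopes (a hypothetical illustration only; it is neither the recursive O(N² log N) algorithm nor its exact inverse, and the DRT's recursively defined lines are approximated here by rounding):

      import numpy as np

      def naive_drt_quadrant(img):
          # Sum pixel values along digital lines y = h - round(x * s / (N - 1))
          # for slope indices s = 0..N-1 (slopes in [0, 1]) and intercepts
          # h = 0..N-1; samples falling outside the image are simply dropped.
          N = img.shape[0]
          assert img.shape == (N, N)
          out = np.zeros((N, N))                  # (slope index, intercept index)
          for s in range(N):
              slope = s / (N - 1) if N > 1 else 0.0
              acc = np.zeros(N)
              for x in range(N):
                  shift = int(round(x * slope))
                  acc[shift:] += img[:N - shift, x]
              out[s] = acc
          return out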

  10. Noise-gating to Clean Astrophysical Image Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeForest, C. E.

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.

  11. Noise-gating to Clean Astrophysical Image Data

    NASA Astrophysics Data System (ADS)

    DeForest, C. E.

    2017-04-01

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.
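
    The adaptive-filter idea can be sketched as a tile-by-tile Fourier gate (a simplified stand-in, not the released noise-gating code: additive Gaussian noise, a fixed Hann apodization, and a single amplitude threshold per tile are assumed):

      import numpy as np

      def noise_gate(image, tile=32, step=16, gate_factor=3.0, noise_sigma=None):
          # Tile-by-tile Fourier "noise gate": suppress Fourier components whose
          # amplitude is below gate_factor times the expected noise floor, then
          # reassemble the apodized tiles (image borders not fully covered by
          # tiles are left near zero in this sketch).
          image = image.astype(float)
          if noise_sigma is None:
              # crude additive-noise estimate from horizontal pixel differences
              noise_sigma = np.median(np.abs(np.diff(image, axis=1))) / 0.6745 / np.sqrt(2)
          w = np.hanning(tile)
          win = np.outer(w, w)
          # expected amplitude of white noise in one Fourier bin of a windowed tile
          floor = noise_sigma * np.sqrt((win ** 2).sum())
          out = np.zeros_like(image)
          weight = np.zeros_like(image)
          for r in range(0, image.shape[0] - tile + 1, step):
              for c in range(0, image.shape[1] - tile + 1, step):
                  F = np.fft.fft2(image[r:r + tile, c:c + tile] * win)
                  gate = np.abs(F) > gate_factor * floor
                  gate[0, 0] = True                      # always keep the tile mean
                  cleaned = np.fft.ifft2(F * gate).real
                  out[r:r + tile, c:c + tile] += cleaned * win
                  weight[r:r + tile, c:c + tile] += win ** 2
          return out / np.maximum(weight, 1e-12)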

  12. Remote sensing image segmentation using local sparse structure constrained latent low rank representation

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping

    2016-09-01

    Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high-spatial-resolution remote sensing images leads to more severe interference between pixels in local neighborhoods, and LatLRR fails to capture local complex structure information. Therefore, we present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes a local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold-structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use a local histogram transform to extract texture local histogram features (LHOG) at each pixel, which can efficiently capture complex and micro-texture patterns. In the second stage, a local sparse structure (LSS) formulation is established on the LHOG features, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and the LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt a subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  13. A Glimpse of Atlas

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Saturn's little moon Atlas orbits Saturn between the outer edge of the A ring and the fascinating, twisted F ring. This image just barely resolves the disk of Atlas, and also shows some of the knotted structure for which the F ring is known. Atlas is 32 kilometers (20 miles) across.

    The bright outer edge of the A ring is overexposed here, but farther down the image several bright ring features can be seen.

    The image was taken in visible light with the Cassini spacecraft narrow-angle camera on April 25, 2005, at a distance of approximately 2.4 million kilometers (1.5 million miles) from Atlas and at a Sun-Atlas-spacecraft, or phase, angle of 60 degrees. Resolution in the original image was 14 kilometers (9 miles) per pixel.

  14. Weak-lensing shear estimates with general adaptive moments, and studies of bias by pixellation, PSF distortions, and noise

    NASA Astrophysics Data System (ADS)

    Simon, Patrick; Schneider, Peter

    2017-08-01

    In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We have employed general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. The GLAM ellipticity has useful properties for any chosen weight profile: the weighted ellipticity is identical to that of isophotes of elliptical images, and in the absence of noise and pixellation it is always an unbiased estimator of reduced shear. We show that moment-based techniques, adaptive or unweighted, are similar to a model-based approach in the sense that they can be seen as an imperfect fit of an elliptical profile to the image. Due to residuals in the fit, moment-based estimates of ellipticities are prone to underfitting bias when inferred from observed images. The estimation is fundamentally limited mainly by pixellation, which destroys information on the original, pre-seeing image. We give an optimised estimator for the pre-seeing GLAM ellipticity and quantify its bias for noise-free images. To deal with images where pixel noise is prominent, we consider a Bayesian approach to infer GLAM ellipticity where, similar to the noise-free case, the ellipticity posterior can be inconsistent with the true ellipticity if we do not properly account for our ignorance about fit residuals. This underfitting bias, quantified in the paper, does not vary with the overall noise level but changes with the pre-seeing brightness profile and the correlation or heterogeneity of pixel noise over the image. Furthermore, when inferring a constant ellipticity or, more relevantly, constant shear from a source sample with a distribution of intrinsic properties (sizes, centroid positions, intrinsic shapes), an additional, now noise-dependent bias arises towards low signal-to-noise if incorrect prior densities for the intrinsic properties are used. We discuss the origin of this prior bias. With regard to a fully Bayesian lensing analysis, we point out that passing tests with source samples subject to constant shear may not be sufficient for an analysis of sources with varying shear.
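
    Weighted quadrupole moments and an ellipticity built from them can be sketched as follows (a simplified stand-in: a fixed circular Gaussian weight is used instead of the adaptive weight of GLAM, and the chi = (Qxx - Qyy + 2i Qxy) / (Qxx + Qyy) convention is only one of several in use):

      import numpy as np

      def weighted_ellipticity(img, sigma_w=5.0):
          # Ellipticity from weighted second brightness moments with a fixed
          # circular Gaussian weight centred on the weighted centroid (adaptive-
          # moment methods instead iterate the weight to match the image).
          ny, nx = img.shape
          y, x = np.mgrid[0:ny, 0:nx].astype(float)
          tot = img.sum()
          xc, yc = (img * x).sum() / tot, (img * y).sum() / tot   # first guess
          w = np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2.0 * sigma_w ** 2))
          wi = w * img
          norm = wi.sum()
          xc, yc = (wi * x).sum() / norm, (wi * y).sum() / norm   # weighted centroid
          dx, dy = x - xc, y - yc
          Qxx = (wi * dx * dx).sum() / norm
          Qyy = (wi * dy * dy).sum() / norm
          Qxy = (wi * dx * dy).sum() / norm
          return (Qxx - Qyy + 2j * Qxy) / (Qxx + Qyy)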

  15. Optimization of Focusing by Strip and Pixel Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, G J; White, D A; Thompson, C A

    Professor Kevin Webb and students at Purdue University have demonstrated the design of conducting strip and pixel arrays for focusing electromagnetic waves [1, 2]. Their key point was to design structures to focus waves in the near field using full wave modeling and optimization methods for design. Their designs included arrays of conducting strips optimized with a downhill search algorithm and arrays of conducting and dielectric pixels optimized with the iterative direct binary search method. They used a finite element code for modeling. This report documents our attempts to duplicate and verify their results. We have modeled 2D conducting strips and both conducting and dielectric pixel arrays with moment method and FDTD codes to compare with Webb's results. New designs for strip arrays were developed with optimization by the downhill simplex method with simulated annealing. Strip arrays were optimized to focus an incident plane wave at a point or at two separated points and to switch between focusing points with a change in frequency. We also tried putting a line current source at the focus point for the plane wave to see how it would work as a directive antenna. We have not tried optimizing the conducting or dielectric pixel arrays, but modeled the structures designed by Webb with the moment method and FDTD to compare with the Purdue results.
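
    As a concrete, hedged illustration of the iterative direct binary search idea mentioned above, the sketch below greedily flips one pixel of a binary on/off array at a time and keeps a flip only if a user-supplied figure of merit improves; the toy objective at the end is a placeholder, not the full-wave focusing metric used in the report.

        import numpy as np

        def direct_binary_search(n_pixels, figure_of_merit, n_sweeps=5, rng=None):
            """Greedy direct binary search: flip one pixel at a time, keep improvements."""
            rng = np.random.default_rng(rng)
            state = rng.integers(0, 2, size=n_pixels)      # random initial on/off pattern
            best = figure_of_merit(state)
            for _ in range(n_sweeps):
                improved = False
                for i in rng.permutation(n_pixels):         # visit pixels in random order
                    state[i] ^= 1                           # trial flip
                    trial = figure_of_merit(state)
                    if trial > best:
                        best = trial                        # keep the flip
                        improved = True
                    else:
                        state[i] ^= 1                       # revert
                if not improved:
                    break                                   # converged: no single flip helps
            return state, best

        # Toy placeholder objective: reward patterns with roughly half the pixels "on".
        toy_fom = lambda s: -abs(s.sum() - s.size / 2)
        pattern, score = direct_binary_search(64, toy_fom)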

  16. Lights All Askew: Systematics in Galaxy Images from Megaparsecs to Microns

    NASA Astrophysics Data System (ADS)

    Bradshaw, Andrew Kenneth

    The stars and galaxies are not where they seem. In the process of imaging and measurement, the light from distant objects is distorted, blurred, and skewed by several physical effects on scales from megaparsecs to microns. Charge-coupled devices (CCDs) provide sensitive detection of this light, but introduce their own problems in the form of systematic biases. Images of these stars and galaxies are formed in CCDs when incoming light generates photoelectrons which are then collected in a pixel's potential well and measured as signal. However, these signal electrons can be diverted from purely parallel paths toward the pixel wells by transverse fields sourced by structural elements of the CCD, accidental imperfections in fabrication, or dynamic electric fields induced by other collected charges. These charge transport anomalies lead to measurable systematic errors in the images which bias cosmological inferences based on them. The physics of imaging therefore deserves thorough investigation, which is performed in the laboratory using a unique optical beam simulator and in computer simulations of charge transport. On top of detector systematics, there are often biases in the mathematical analysis of pixelized images; in particular, the location, shape, and orientation of stars and galaxies. Using elliptical Gaussians as a toy model for galaxies, it is demonstrated how small biases in the computed image moments lead to observable orientation patterns in modern survey data. Also presented are examples of the reduction of data and fitting of optical aberrations of images in the lab and on the sky which are modeled by physically or mathematically-motivated methods. Finally, end-to-end analysis of the weak gravitational lensing signal is presented using deep sky data as well as in N-body simulations. It is demonstrated how measured weak lens shear can be transformed by signal matched filters which aid in the detection of mass overdensities and separate signal from noise. A commonly-used decomposition of shear into two components, E- and B-modes, is thoroughly tested and both modes are shown to be useful in the detection of large scale structure. We find several astrophysical sources of B-mode and explain their apparent origin. The methods presented therefore offer an optimal way to filter weak gravitational shear into maps of large scale structure through the process of cosmic mass cartography.
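
    As a concrete version of the elliptical-Gaussian toy model mentioned above, the sketch below (an assumed, generic implementation, not the dissertation's code) pixelizes an elliptical Gaussian and estimates its ellipticity from unweighted second moments; shrinking sigma_x and sigma_y shows how pixelization biases the recovered moments.

        import numpy as np

        def elliptical_gaussian(n, sigma_x, sigma_y, theta):
            """Render an elliptical Gaussian on an n x n pixel grid."""
            y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
            c, s = np.cos(theta), np.sin(theta)
            u, v = c * x + s * y, -s * x + c * y            # rotate into principal axes
            return np.exp(-0.5 * ((u / sigma_x) ** 2 + (v / sigma_y) ** 2))

        def moment_ellipticity(img):
            """Unweighted second moments and the (e1, e2) ellipticity they imply."""
            y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
            flux = img.sum()
            xbar, ybar = (img * x).sum() / flux, (img * y).sum() / flux
            qxx = (img * (x - xbar) ** 2).sum() / flux
            qyy = (img * (y - ybar) ** 2).sum() / flux
            qxy = (img * (x - xbar) * (y - ybar)).sum() / flux
            return (qxx - qyy) / (qxx + qyy), 2 * qxy / (qxx + qyy)

        img = elliptical_gaussian(64, sigma_x=6.0, sigma_y=4.0, theta=np.deg2rad(30))
        print(moment_ellipticity(img))   # pixelization biases grow as the sigmas shrink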

  17. The DEPFET Sensor-Amplifier Structure: A Method to Beat 1/f Noise and Reach Sub-Electron Noise in Pixel Detectors

    PubMed Central

    Lutz, Gerhard; Porro, Matteo; Aschauer, Stefan; Wölfel, Stefan; Strüder, Lothar

    2016-01-01

    Depleted field effect transistors (DEPFET) are used to achieve very low noise signal charge readout with sub-electron measurement precision. This is accomplished by repeatedly reading an identical charge, thereby suppressing not only the white serial noise but also the usually constant 1/f noise. The repetitive non-destructive readout (RNDR) DEPFET is an ideal central element for an active pixel sensor (APS) pixel. The theory has been derived thoroughly and results have been verified on RNDR-DEPFET prototypes. A charge measurement precision of 0.18 electrons has been achieved. The device is well-suited for spectroscopic X-ray imaging and for optical photon counting in pixel sensors, even at high photon numbers in the same cell. PMID:27136549
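
    The noise advantage of repeatedly reading the same charge follows, for the white-noise component, from simple averaging of N independent non-destructive reads; the numbers below are purely illustrative and are not taken from the paper.

        \sigma_N = \frac{\sigma_1}{\sqrt{N}} \quad\Longrightarrow\quad N \approx \left(\frac{\sigma_1}{\sigma_N}\right)^{2},

    so, for example, a hypothetical single-read noise of 2 e⁻ rms would require on the order of (2/0.18)² ≈ 123 reads to reach the quoted 0.18-electron precision.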

  18. Jupiter's Moons: Family Portrait

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This montage shows the best views of Jupiter's four large and diverse 'Galilean' satellites as seen by the Long Range Reconnaissance Imager (LORRI) on the New Horizons spacecraft during its flyby of Jupiter in late February 2007. The four moons are, from left to right: Io, Europa, Ganymede and Callisto. The images have been scaled to represent the true relative sizes of the four moons and are arranged in their order from Jupiter.

    Io, 3,640 kilometers (2,260 miles) in diameter, was imaged at 03:50 Universal Time on February 28 from a range of 2.7 million kilometers (1.7 million miles). The original image scale was 13 kilometers per pixel, and the image is centered at Io coordinates 6 degrees south, 22 degrees west. Io is notable for its active volcanism, which New Horizons has studied extensively.

    Europa, 3,120 kilometers (1,938 miles) in diameter, was imaged at 01:28 Universal Time on February 28 from a range of 3 million kilometers (1.8 million miles). The original image scale was 15 kilometers per pixel, and the image is centered at Europa coordinates 6 degrees south, 347 degrees west. Europa's smooth, icy surface likely conceals an ocean of liquid water. New Horizons obtained data on Europa's surface composition and imaged subtle surface features, and analysis of these data may provide new information about the ocean and the icy shell that covers it.

    New Horizons spied Ganymede, 5,262 kilometers (3,268 miles) in diameter, at 10:01 Universal Time on February 27 from 3.5 million kilometers (2.2 million miles) away. The original scale was 17 kilometers per pixel, and the image is centered at Ganymede coordinates 6 degrees south, 38 degrees west. Ganymede, the largest moon in the solar system, has a dirty ice surface cut by fractures and peppered by impact craters. New Horizons' infrared observations may provide insight into the composition of the moon's surface and interior.

    Callisto, 4,820 kilometers (2,995 miles) in diameter, was imaged at 03:50 Universal Time on February 28 from a range of 4.2 million kilometers (2.6 million miles). The original image scale was 21 kilometers per pixel, and the image is centered at Callisto coordinates 4 degrees south, 356 degrees west. Scientists are using the infrared spectra New Horizons gathered of Callisto's ancient, cratered surface to calibrate spectral analysis techniques that will help them to understand the surfaces of Pluto and its moon Charon when New Horizons passes them in 2015.

  19. Low cost solution-based materials processing methods for large area OLEDs and OFETs

    NASA Astrophysics Data System (ADS)

    Jeong, Jonghwa

    In Part 1, we demonstrate the fabrication of organic light-emitting devices (OLEDs) with precisely patterned pixels by the spin-casting of Alq3 and rubrene thin films with dimensions as small as 10 μm. The solution-based patterning technique produces pixels via the segregation of organic molecules into microfabricated channels or wells. Segregation is controlled by a combination of weak adsorbing characteristics of aliphatic terminated self-assembled monolayers (SAMs) and by centrifugal force, which directs the organic solution into the channel or well. This novel patterning technique may resolve the limitations of pixel resolution in the method of thermal evaporation using shadow masks, and is applicable to the fabrication of large area displays. Furthermore, the patterning technique has the potential to produce pixel sizes down to the limitation of photolithography and micromachining techniques, thereby enabling the fabrication of high-resolution microdisplays. The patterned OLEDs, based upon a confined structure with low refractive index of SiO2, exhibited higher current density than an unpatterned OLED, which results in higher electroluminescence intensity and eventually more efficient device operation at low applied voltages. We discuss the patterning method and device fabrication, and characterize the morphological, optical, and electrical properties of the organic pixels. In Part 2, we demonstrate a new growth technique for organic single crystals based on solvent vapor assisted recrystallization. We show that, by controlling the polarity of the solvent vapor and the exposure time in a closed system, we obtain rubrene in orthorhombic to monoclinic crystal structures. This novel technique for growing single crystals can induce phase shifting and alteration of crystal structure and lattice parameters. The organic molecules showed structural change from orthorhombic to monoclinic, which also provided additional optical transition of hypsochromic shift from that of the orthorhombic form. An intermediate form of the crystal exhibits an optical transition to the lowest vibrational energy level that is otherwise disallowed in the single-crystal orthorhombic form. The monoclinic form exhibits entirely new optical transitions and showed a possible structural rearrangement for increasing charge carrier mobility, making it promising for organic devices. These phenomena can be explained and proved by the chemical structure and molecular packing of the monoclinic form, transformed from orthorhombic crystalline structure.

  20. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An

    2017-02-01

    We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns, then by applying an inverse cosine transform to the spectra obtained from the single-pixel detector a full-colour image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
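
    A minimal numerical sketch of the measurement-and-reconstruction principle (single channel, noise-free, fully sampled; a real system projects offset, non-negative patterns and handles the colour channels separately):

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_basis_pattern(n, u, v):
            """Orthonormal 2D DCT-II basis pattern for spatial frequency (u, v)."""
            delta = np.zeros((n, n))
            delta[u, v] = 1.0
            return idctn(delta, norm="ortho")

        rng = np.random.default_rng(0)
        scene = rng.random((16, 16))                     # stand-in for the unknown scene
        n = scene.shape[0]

        # Single-pixel measurements: one inner product per projected basis pattern.
        spectrum = np.zeros((n, n))
        for u in range(n):
            for v in range(n):
                spectrum[u, v] = np.sum(scene * dct_basis_pattern(n, u, v))

        # The measurements form the 2D DCT spectrum; the inverse DCT retrieves the image.
        assert np.allclose(spectrum, dctn(scene, norm="ortho"))
        recovered = idctn(spectrum, norm="ortho")
        assert np.allclose(recovered, scene)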

  1. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    PubMed

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.
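
    The quoted intra-scene dynamic range is consistent with the ratio of the stated linear full well to the read noise under the highest gain condition:

        \mathrm{DR} = 20\log_{10}\!\left(\frac{N_{\mathrm{full\ well}}}{\sigma_{\mathrm{read}}}\right)
                    = 20\log_{10}\!\left(\frac{40\,000\ \mathrm{e}^-}{1\ \mathrm{e}^-}\right) \approx 92\ \mathrm{dB}.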

  2. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process †

    PubMed Central

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-01

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach. PMID:29329210

  3. New Subarray Readout Patterns for the ACS Wide Field Channel

    NASA Astrophysics Data System (ADS)

    Golimowski, D.; Anderson, J.; Arslanian, S.; Chiaberge, M.; Grogin, N.; Lim, Pey Lian; Lupie, O.; McMaster, M.; Reinhart, M.; Schiffer, F.; Serrano, B.; Van Marshall, M.; Welty, A.

    2017-04-01

    At the start of Cycle 24, the original CCD-readout timing patterns used to generate ACS Wide Field Channel (WFC) subarray images were replaced with new patterns adapted from the four-quadrant readout pattern used to generate full-frame WFC images. The primary motivation for this replacement was a substantial reduction of observatory and staff resources needed to support WFC subarray bias calibration, which became a new and challenging obligation after the installation of the ACS CCD Electronics Box Replacement during Servicing Mission 4. The new readout patterns also improve the overall efficiency of observing with WFC subarrays and enable the processing of subarray images through stages of the ACS data calibration pipeline (calacs) that were previously restricted to full-frame WFC images. The new readout patterns replace the original 512×512, 1024×1024, and 2048×2046-pixel subarrays with subarrays having 2048 columns and 512, 1024, and 2048 rows, respectively. Whereas the original square subarrays were limited to certain WFC quadrants, the new rectangular subarrays are available in all four quadrants. The underlying bias structure of the new subarrays now conforms with those of the corresponding regions of the full-frame image, which allows raw frames in all image formats to be calibrated using one contemporaneous full-frame "superbias" reference image. The original subarrays remain available for scientific use, but calibration of these image formats is no longer supported by STScI.

  4. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    PubMed

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation uses computer graphics to generate a realistic 3D structure scene of vegetation and to simulate the canopy radiation regime with the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are complex structures, tall and with many branches, so hundreds of thousands or even millions of facets are typically needed to build a realistic structure scene for a forest, and it is difficult for the radiosity method to compute so many facets. To make the radiosity method applicable to forest scenes at the pixel scale, the authors propose to simplify the structure of the forest crowns by abstracting them as ellipsoids. Based on the optical characteristics of the tree components and on the internal energy transfer of photons within a real crown, the authors assign the optical characteristics of the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometric optics models, a gap model is incorporated to obtain the forest canopy bidirectional reflectance at the pixel scale. Comparing the computer simulation results with the GOMS model and with Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, the simulation results agree with the GOMS simulation results and the MISR BRF. Some problems remain to be solved, but the authors conclude that the study has important value for the application of multi-angle remote sensing and for the inversion of vegetation canopy structure parameters.

  5. Development of n-in-p pixel modules for the ATLAS upgrade at HL-LHC

    NASA Astrophysics Data System (ADS)

    Macchiolo, A.; Nisius, R.; Savic, N.; Terzo, S.

    2016-09-01

    Thin planar pixel modules are promising candidates to instrument the inner layers of the new ATLAS pixel detector for HL-LHC, thanks to the reduced contribution to the material budget and their high charge collection efficiency after irradiation. 100-200 μm thick sensors, interconnected to FE-I4 read-out chips, have been characterized with radioactive sources and beam tests at the CERN-SPS and DESY. The results of these measurements are reported for devices before and after irradiation up to a fluence of 14 × 10^15 n_eq/cm^2. The charge collection and tracking efficiency of the different sensor thicknesses are compared. The outlook for future planar pixel sensor production is discussed, with a focus on sensor designs with the pixel pitches (50×50 and 25×100 μm2) foreseen for the RD53 Collaboration read-out chip in 65 nm CMOS technology. An optimization of the biasing structures in the pixel cells is required to avoid the hit efficiency loss presently observed in the punch-through region after irradiation. For this purpose the performance of different layouts has been compared in FE-I4 compatible sensors at various fluence levels by using beam test data. Highly segmented sensors will represent a challenge for the tracking in the forward region of the pixel system at HL-LHC. In order to reproduce the performance of 50×50 μm2 pixels at high pseudo-rapidity values, FE-I4 compatible planar pixel sensors have been studied before and after irradiation in beam tests at high incidence angle (80°) with respect to the short pixel direction. Results on cluster shapes, charge collection and hit efficiency will be shown.

  6. Nonlinear decoding of a complex movie from the mammalian retina

    PubMed Central

    Deny, Stéphane; Martius, Georg

    2018-01-01

    Retina is a paradigmatic system for studying sensory encoding: the transformation of light into spiking activity of ganglion cells. The inverse problem, where stimulus is reconstructed from spikes, has received less attention, especially for complex stimuli that should be reconstructed “pixel-by-pixel”. We recorded around a hundred neurons from a dense patch in a rat retina and decoded movies of multiple small randomly-moving discs. We constructed nonlinear (kernelized and neural network) decoders that improved significantly over linear results. An important contribution to this was the ability of nonlinear decoders to reliably separate between neural responses driven by locally fluctuating light signals, and responses at locally constant light driven by spontaneous-like activity. This improvement crucially depended on the precise, non-Poisson temporal structure of individual spike trains, which originated in the spike-history dependence of neural responses. We propose a general principle by which downstream circuitry could discriminate between spontaneous and stimulus-driven activity based solely on higher-order statistical structure in the incoming spike trains. PMID:29746463

  7. Active pixel imagers incorporating pixel-level amplifiers based on polycrystalline-silicon thin-film transistors

    PubMed Central

    El-Mohri, Youcef; Antonuk, Larry E.; Koniczek, Martin; Zhao, Qihua; Li, Yixin; Street, Robert A.; Lu, Jeng-Ping

    2009-01-01

    Active matrix, flat-panel imagers (AMFPIs) employing a 2D matrix of a-Si addressing TFTs have become ubiquitous in many x-ray imaging applications due to their numerous advantages. However, under conditions of low exposures and∕or high spatial resolution, their signal-to-noise performance is constrained by the modest system gain relative to the electronic additive noise. In this article, a strategy for overcoming this limitation through the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, using polycrystalline-silicon (poly-Si) TFTs is reported. Compared to a-Si, poly-Si offers substantially higher mobilities, enabling higher TFT currents and the possibility of sophisticated AP designs based on both n- and p-channel TFTs. Three prototype indirect detection arrays employing poly-Si TFTs and a continuous a-Si photodiode structure were characterized. The prototypes consist of an array (PSI-1) that employs a pixel architecture with a single TFT, as well as two arrays (PSI-2 and PSI-3) that employ AP architectures based on three and five TFTs, respectively. While PSI-1 serves as a reference with a design similar to that of conventional AMFPI arrays, PSI-2 and PSI-3 incorporate additional in-pixel amplification circuitry. Compared to PSI-1, results of x-ray sensitivity demonstrate signal gains of ∼10.7 and 20.9 for PSI-2 and PSI-3, respectively. These values are in reasonable agreement with design expectations, demonstrating that poly-Si AP circuits can be tailored to provide a desired level of signal gain. PSI-2 exhibits the same high levels of charge trapping as those observed for PSI-1 and other conventional arrays employing a continuous photodiode structure. For PSI-3, charge trapping was found to be significantly lower and largely independent of the bias voltage applied across the photodiode. MTF results indicate that the use of a continuous photodiode structure in PSI-1, PSI-2, and PSI-3 results in optical fill factors that are close to unity. In addition, the greater complexity of PSI-2 and PSI-3 pixel circuits, compared to that of PSI-1, has no observable effect on spatial resolution. Both PSI-2 and PSI-3 exhibit high levels of additive noise, resulting in no net improvement in the signal-to-noise performance of these early prototypes compared to conventional AMFPIs. However, faster readout rates, coupled with implementation of multiple sampling protocols allowed by the nondestructive nature of pixel readout, resulted in a significantly lower noise level of ∼560 e (rms) for PSI-3. PMID:19673229

  8. Active pixel imagers incorporating pixel-level amplifiers based on polycrystalline-silicon thin-film transistors.

    PubMed

    El-Mohri, Youcef; Antonuk, Larry E; Koniczek, Martin; Zhao, Qihua; Li, Yixin; Street, Robert A; Lu, Jeng-Ping

    2009-07-01

    Active matrix, flat-panel imagers (AMFPIs) employing a 2D matrix of a-Si addressing TFTs have become ubiquitous in many x-ray imaging applications due to their numerous advantages. However, under conditions of low exposures and/or high spatial resolution, their signal-to-noise performance is constrained by the modest system gain relative to the electronic additive noise. In this article, a strategy for overcoming this limitation through the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, using polycrystalline-silicon (poly-Si) TFTs is reported. Compared to a-Si, poly-Si offers substantially higher mobilities, enabling higher TFT currents and the possibility of sophisticated AP designs based on both n- and p-channel TFTs. Three prototype indirect detection arrays employing poly-Si TFTs and a continuous a-Si photodiode structure were characterized. The prototypes consist of an array (PSI-1) that employs a pixel architecture with a single TFT, as well as two arrays (PSI-2 and PSI-3) that employ AP architectures based on three and five TFTs, respectively. While PSI-1 serves as a reference with a design similar to that of conventional AMFPI arrays, PSI-2 and PSI-3 incorporate additional in-pixel amplification circuitry. Compared to PSI-1, results of x-ray sensitivity demonstrate signal gains of approximately 10.7 and 20.9 for PSI-2 and PSI-3, respectively. These values are in reasonable agreement with design expectations, demonstrating that poly-Si AP circuits can be tailored to provide a desired level of signal gain. PSI-2 exhibits the same high levels of charge trapping as those observed for PSI-1 and other conventional arrays employing a continuous photodiode structure. For PSI-3, charge trapping was found to be significantly lower and largely independent of the bias voltage applied across the photodiode. MTF results indicate that the use of a continuous photodiode structure in PSI-1, PSI-2, and PSI-3 results in optical fill factors that are close to unity. In addition, the greater complexity of PSI-2 and PSI-3 pixel circuits, compared to that of PSI-1, has no observable effect on spatial resolution. Both PSI-2 and PSI-3 exhibit high levels of additive noise, resulting in no net improvement in the signal-to-noise performance of these early prototypes compared to conventional AMFPIs. However, faster readout rates, coupled with implementation of multiple sampling protocols allowed by the nondestructive nature of pixel readout, resulted in a significantly lower noise level of approximately 560 e (rms) for PSI-3.

  9. Enhanced Resolution Maps of Energetic Neutral Atoms from IBEX

    NASA Astrophysics Data System (ADS)

    Teodoro, L. A.; Elphic, R. C.; Janzen, P.; Reisenfeld, D.; Wilson, J. T.

    2017-12-01

    The discovery by the Interstellar Boundary Explorer (IBEX) of a "Ribbon" in the measurements of energetic neutral atoms (ENAs) was a major surprise that led to a re-thinking of the physics underpinning the dynamics of the boundary between the heliosphere and the interstellar medium. Several physical models have been proposed and tested for their ability to mimic the IBEX observations. The IBEX ENA maps include the following features: 1) the presence of fine structure within the ribbon suggests that its physical properties exhibit small-scale spatial structure and possibly rapid small-scale variations; 2) the ribbon is a fairly narrow feature at low energies and broadens with increasing energy. The IBEX detectors were designed to maximize count rate by incorporating wide angular and broad energy acceptance. Thus far, the existing mapping software used by the IBEX Science Operation Center has not been designed with the "Ribbon" (∼20° wide) in mind: the current generation of maps is binned in 6° longitude by 6° latitude pixels (so the pixels are all of the same angular size and are quite "blocky"). Furthermore, the instrumental point spread function has not been deconvolved, making any potentially narrow features appear broader than they are. An improvement in the spatial resolution of the IBEX maps would foster a better understanding of the Ribbon and its substructure, and thus help answer some of the basic and profound questions related to its origin, the nature of the outer boundaries of our solar system, and the surrounding interstellar medium. Here we report on the application of the Bayesian image reconstruction algorithm "Speedy Pixons" to the ENA data with the aim of sharpening the IBEX ENA maps. A preliminary application allows us to conclude that the peaks in the count rate appear more enhanced in the reconstruction, the reconstruction is clearly denoised, and the "Ribbon" is better defined in the reconstruction. We are currently studying the implications of our preliminary results for the current generation of models. Potentially, our results can also be used in the design and planning of future missions whose aim is to produce higher-resolution maps of the interstellar medium (e.g., IMAP).

  10. CMOS image sensor with lateral electric field modulation pixels for fluorescence lifetime imaging with sub-nanosecond time response

    NASA Astrophysics Data System (ADS)

    Li, Zhuo; Seo, Min-Woong; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2016-04-01

    This paper presents the design and implementation of a time-resolved CMOS image sensor with a high-speed lateral electric field modulation (LEFM) gating structure for time domain fluorescence lifetime measurement. Time-windowed signal charge can be transferred from a pinned photodiode (PPD) to a pinned storage diode (PSD) by turning on a pair of transfer gates, which are situated beside the channel. Unwanted signal charge can be drained from the PPD to the drain by turning on another pair of gates. The pixel array contains 512 (V) × 310 (H) pixels with 5.6 × 5.6 µm2 pixel size. The imager chip was fabricated using 0.11 µm CMOS image sensor process technology. The prototype sensor has a time response of 150 ps at 374 nm. The fill factor of the pixels is 5.6%. The usefulness of the prototype sensor is demonstrated for fluorescence lifetime imaging through simulation and measurement results.

  11. Organic Light-Emitting Diode-on-Silicon Pixel Circuit Using the Source Follower Structure with Active Load for Microdisplays

    NASA Astrophysics Data System (ADS)

    Kwak, Bong-Choon; Lim, Han-Sin; Kwon, Oh-Kyong

    2011-03-01

    In this paper, we propose a pixel circuit immune to the electrical characteristic variation of organic light-emitting diodes (OLEDs) for organic light-emitting diode-on-silicon (OLEDoS) microdisplays with a 0.4 inch video graphics array (VGA) resolution and a 6-bit gray scale. The proposed pixel circuit is implemented using five p-channel metal oxide semiconductor field-effect transistors (MOSFETs) and one storage capacitor. The proposed pixel circuit has a source follower with a diode-connected transistor as an active load for improving the immunity against the electrical characteristic variation of OLEDs. The deviation in the measured emission current ranges from -0.165 to 0.212 least significant bit (LSB) among 11 samples while the anode voltage of OLED is 0 V. Also, the deviation in the measured emission current ranges from -0.262 to 0.272 LSB in pixel samples, while the anode voltage of OLED varies from 0 to 2.5 V owing to the electrical characteristic variation of OLEDs.

  12. An LOD with improved breakdown voltage in full-frame CCD devices

    NASA Astrophysics Data System (ADS)

    Banghart, Edmund K.; Stevens, Eric G.; Doan, Hung Q.; Shepherd, John P.; Meisenzahl, Eric J.

    2005-02-01

    In full-frame image sensors, lateral overflow drain (LOD) structures are typically formed along the vertical CCD shift registers to provide a means for preventing charge blooming in the imager pixels. In a conventional LOD structure, the n-type LOD implant is made through the thin gate dielectric stack in the device active area and adjacent to the thick field oxidation that isolates the vertical CCD columns of the imager. In this paper, a novel LOD structure is described in which the n-type LOD impurities are placed directly under the field oxidation and are, therefore, electrically isolated from the gate electrodes. By reducing the electrical fields that cause breakdown at the silicon surface, this new structure permits a larger amount of n-type impurities to be implanted for the purpose of increasing the LOD conductivity. As a consequence of the improved conductance, the LOD width can be significantly reduced, enabling the design of higher resolution imaging arrays without sacrificing charge capacity in the pixels. Numerical simulations with MEDICI of the LOD leakage current are presented that identify the breakdown mechanism, while three-dimensional solutions to Poisson's equation are used to determine the charge capacity as a function of pixel dimension.

  13. Image size invariant visual cryptography for general access structures subject to display quality constraints.

    PubMed

    Lee, Kai-Hui; Chiu, Pei-Ling

    2013-10-01

    Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematic model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous papers.

  14. Fast Fourier single-pixel imaging via binary illumination.

    PubMed

    Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang

    2017-09-20

    Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refreshing rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
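
    One way to realise the binarization step is nearest-neighbour upsampling followed by error-diffusion dithering; the sketch below uses the classic Floyd-Steinberg kernel, which may differ in detail from the dithering actually used in the paper.

        import numpy as np

        def floyd_steinberg(gray):
            """Binarize a [0, 1] grayscale pattern by error-diffusion dithering."""
            img = gray.astype(float).copy()
            h, w = img.shape
            out = np.zeros_like(img)
            for y in range(h):
                for x in range(w):
                    new = 1.0 if img[y, x] >= 0.5 else 0.0
                    err = img[y, x] - new
                    out[y, x] = new
                    # Diffuse the quantization error to unvisited neighbours.
                    if x + 1 < w:
                        img[y, x + 1] += err * 7 / 16
                    if y + 1 < h and x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:
                        img[y + 1, x] += err * 5 / 16
                    if y + 1 < h and x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
            return out

        # Grayscale Fourier fringe (one spatial frequency), upsampled 4x before dithering.
        n, fx, fy = 64, 3, 2
        y, x = np.mgrid[:n, :n] / n
        fringe = 0.5 + 0.5 * np.cos(2 * np.pi * (fx * x + fy * y))
        upsampled = np.kron(fringe, np.ones((4, 4)))      # nearest-neighbour upsampling
        binary_pattern = floyd_steinberg(upsampled)       # DMD-compatible 1-bit pattern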

  15. Enhanced encrypted reversible data hiding algorithm with minimum distortion through homomorphic encryption

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Rupali

    2018-03-01

    Reversible data hiding means embedding a secret message in a cover image in such a manner that, upon extraction of the secret message, both the cover image and the secret message are recovered without error. The goal of most reversible data hiding algorithms is to improve the embedding rate and to enhance the visual quality of the stego image. This paper employs an improved encrypted-domain reversible data hiding algorithm that embeds two binary bits in each gray pixel of the original cover image with minimum distortion of the stego pixels. Highlights of the proposed algorithm are minimal distortion of pixel values, elimination of the underflow and overflow problems, and equivalence of the stego image and cover image with a PSNR of ∞ (for the Lena, Goldhill, and Barbara images). The experimental outcomes reveal that, in terms of average PSNR and embedding rate on natural images, the proposed algorithm performs better than other conventional ones.

  16. Image Format Conversion to DICOM and Lookup Table Conversion to Presentation Value of the Japanese Society of Radiological Technology (JSRT) Standard Digital Image Database.

    PubMed

    Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki

    2016-01-01

    The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in many state-of-the-art studies. However, the pixel values of all the images were simply digitized as relative density values using a film digitizer. As a result, the pixel values are completely different from the standardized display system input value of Digital Imaging and Communications in Medicine (DICOM), called the presentation value (P-value), which maintains visual consistency when images are observed on displays with different luminance. We therefore converted all the images in the JSRT standard digital image database to DICOM format and then converted the pixel values to P-values using a program we developed. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of images is maintained across displays with different luminance.

  17. Exploring the Hidden Structure of Astronomical Images: A "Pixelated" View of Solar System and Deep Space Features!

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Sienkiewicz, Frank; Sadler, Philip; Antonucci, Paul; Miller, Jaimie

    2013-01-01

    We describe activities created to help student participants in Project ITEAMS (Innovative Technology-Enabled Astronomy for Middle Schools) develop a deeper understanding of picture elements (pixels), image creation, and analysis of the recorded data. ITEAMS is an out-of-school time (OST) program funded by the National Science Foundation (NSF) with…

  18. ACS Internal Flat Fields

    NASA Astrophysics Data System (ADS)

    Borncamp, David

    2017-08-01

    The stability of the CCD flat fields will be monitored using the calibration lamps. One set of observations for all the filters and another at a different epoch for a subset of filters will be taken during this cycle. High signal observations will be used to assess the stability of the pixel-to-pixel flat field structure and to monitor the position of the dust motes.

  19. ACS Internal Flat Fields

    NASA Astrophysics Data System (ADS)

    Borncamp, David

    2016-10-01

    The stability of the CCD flat fields will be monitored using the calibration lamps. One set of observations for all the filters and another at a different epoch for a subset of filters will be taken during this cycle. High signal observations will be used to assess the stability of the pixel-to-pixel flat field structure and to monitor the position of the dust motes.

  20. Bioinspired architecture approach for a one-billion transistor smart CMOS camera chip

    NASA Astrophysics Data System (ADS)

    Fey, Dietmar; Komann, Marcus

    2007-05-01

    In this paper we present a massively parallel VLSI architecture for future smart CMOS camera chips with up to one billion transistors. Traditional parallel architectures oriented toward central structures and based on MIMD or SIMD approaches will fail to exploit efficiently the potential offered by future micro- or nanoelectronic devices: they require too many, and too long, global interconnects for the distribution of code and the access to common memory. Nature, on the other hand, has developed self-organising and emergent principles to successfully manage complex structures built from many interacting simple elements. We therefore developed a new emergent computing paradigm, denoted Marching Pixels, based on a mixture of bio-inspired computing models such as cellular automata and artificial ants. In the paper we present different Marching Pixels algorithms and the corresponding VLSI array architecture. A detailed synthesis result for a 0.18 μm CMOS process shows that a 256×256 pixel image is processed in less than 10 ms, assuming a moderate 100 MHz clock rate for the processor array. Future higher integration densities and 3D chip-stacking technology will allow megapixel images to be processed in the same time, since our architecture is fully scalable.

  1. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated like it is an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  2. A BLIND METHOD TO DETREND INSTRUMENTAL SYSTEMATICS IN EXOPLANETARY LIGHT CURVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morello, G., E-mail: giuseppe.morello.11@ucl.ac.uk

    2015-07-20

    The study of the atmospheres of transiting exoplanets requires a photometric precision, and repeatability, of one part in ∼10^4. This is beyond the original calibration plans of current observatories, hence the necessity to disentangle the instrumental systematics from the astrophysical signals in raw data sets. Most methods used in the literature are based on an approximate instrument model. The choice of parameters of the model and their functional forms can sometimes be subjective, causing controversies in the literature. Recently, Morello et al. (2014, 2015) have developed a non-parametric detrending method that gave coherent and repeatable results when applied to Spitzer/IRAC data sets that were debated in the literature. Said method is based on independent component analysis (ICA) of individual pixel time-series, hereafter “pixel-ICA”. The main purpose of this paper is to investigate the limits and advantages of pixel-ICA on a series of simulated data sets with different instrument properties, and a range of jitter timescales and shapes, non-stationarity, sudden change points, etc. The performance of pixel-ICA is compared against that of other methods, in particular the polynomial centroid division and pixel-level decorrelation methods. We find that in simulated cases pixel-ICA performs as well as or better than other methods, and it also guarantees a higher degree of objectivity, because of its purely statistical foundation with no prior information on the instrument systematics. The results of this paper, together with previous analyses of Spitzer/IRAC data sets, suggest that photometric precision and repeatability of one part in 10^4 can be achieved with current infrared space instruments.
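
    A schematic of the pixel-ICA idea on a toy data set, using scikit-learn's FastICA as a stand-in for the ICA implementation of the original papers; the light curves, mixing weights, and the way the astrophysical component is selected afterwards are all placeholders.

        import numpy as np
        from sklearn.decomposition import FastICA

        # Toy "raw" data: shape (n_frames, n_pixels), one light curve per detector pixel.
        rng = np.random.default_rng(1)
        n_frames, n_pixels = 2000, 9
        transit = 1.0 - 0.01 * (np.abs(np.arange(n_frames) - 1000) < 150)   # toy transit
        jitter = 0.005 * np.sin(np.arange(n_frames) / 37.0)                 # toy systematic
        weights = rng.random((2, n_pixels))
        pixel_timeseries = (np.c_[transit, jitter] @ weights
                            + 0.001 * rng.standard_normal((n_frames, n_pixels)))

        ica = FastICA(n_components=3, random_state=0)
        components = ica.fit_transform(pixel_timeseries)   # statistically independent sources
        mixing = ica.mixing_                               # per-pixel weight of each source
        # One component tracks the transit, the others the instrumental systematics;
        # detrending then keeps the astrophysical component and discards the rest.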

  3. Dark Materials on Olympus Mons

    NASA Image and Video Library

    2018-01-23

    This image from NASA's Mars Reconnaissance Orbiter (MRO) shows blocks of layered terrain within the Olympus Mons aureole. The aureole is a giant apron of chaotic material around the volcano, perhaps formed by enormous landslides off the flanks of the giant volcano. These blocks of layered material have been eroded by the wind into the scenic landscape we see here. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 28.3 centimeters (11.1 inches) per pixel (with 1 x 1 binning); objects on the order of 85 centimeters (33.5 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22181

  4. Recent X-ray hybrid CMOS detector developments and measurements

    NASA Astrophysics Data System (ADS)

    Hull, Samuel V.; Falcone, Abraham D.; Burrows, David N.; Wages, Mitchell; Chattopadhyay, Tanmoy; McQuaide, Maria; Bray, Evan; Kern, Matthew

    2017-08-01

    The Penn State X-ray detector lab, in collaboration with Teledyne Imaging Sensors (TIS), has progressed its efforts to improve soft X-ray Hybrid CMOS detector (HCD) technology on multiple fronts. Having newly acquired a Teledyne cryogenic SIDECAR™ ASIC for use with HxRG devices, the lab performed measurements with an H2RG HCD and the cooled SIDECAR™. We report new energy resolution and read noise measurements, which show a significant improvement over room temperature SIDECAR™ operation. Further, in order to meet the demands of future high-throughput and high spatial resolution X-ray observatories, detectors with fast readout and small pixel sizes are being developed. We report on characteristics of new X-ray HCDs with 12.5 micron pitch that include in-pixel CDS circuitry and crosstalk-eliminating CTIA amplifiers. In addition, PSU and TIS are developing a new large-scale array Speedster-EXD device. The original 64 × 64 pixel Speedster-EXD prototype used comparators in each pixel to enable event-driven readout with order-of-magnitude higher effective readout rates, which will now be implemented in a 550 × 550 pixel device. Finally, the detector lab is involved in a sounding rocket mission that is slated to fly in 2018 with an off-plane reflection grating array and an H2RG X-ray HCD. We report on the planned detector configuration for this mission, which will increase the NASA technology readiness level of X-ray HCDs to TRL 9.

  5. New DTM Extraction Approach from Airborne Images Derived Dsm

    NASA Astrophysics Data System (ADS)

    Mousa, Y. A.; Helmholz, P.; Belton, D.

    2017-05-01

    In this work, a new filtering approach is proposed for fully automatic Digital Terrain Model (DTM) extraction from Digital Surface Models (DSMs) derived from very high resolution airborne images. Our approach is an enhancement of the existing Multi-directional and Slope Dependent (MSD) DTM extraction algorithm, proposing parameters that are more reliable for the selection of ground pixels and for the pixelwise classification. To achieve this, four main steps are implemented. Firstly, 8 well-distributed scanlines are used to search for minima as ground points within a pre-defined filtering window size. These selected ground points are stored with their positions on a 2D surface to create a network of ground points. Then, an initial DTM is created using an interpolation method to fill the gaps in the 2D surface. Afterwards, a pixel-to-pixel comparison between the initial DTM and the original DSM is performed, classifying ground and non-ground pixels by applying a vertical height threshold. Finally, the pixels classified as non-ground are removed and the remaining holes are filled. The approach is evaluated using the Vaihingen benchmark dataset provided by the ISPRS working group III/4. The evaluation includes the comparison of our approach, denoted as the Network of Ground Points (NGPs) algorithm, with the DTM created based on MSD as well as with a reference DTM generated from LiDAR data. The results show that our proposed approach outperforms the MSD approach.
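
    A simplified sketch of the interpolation and pixelwise-classification steps described above (the scanline-based selection of the ground-point network itself is not reproduced, and the interpolation method and height threshold are placeholders):

        import numpy as np
        from scipy.interpolate import griddata

        def classify_ground(dsm, ground_rows, ground_cols, height_threshold=1.0):
            """Label DSM pixels as ground/non-ground against an interpolated initial DTM."""
            rows, cols = np.mgrid[:dsm.shape[0], :dsm.shape[1]]
            points = np.column_stack([ground_rows, ground_cols])
            values = dsm[ground_rows, ground_cols]
            # Initial DTM: interpolate the sparse ground-point network over the full grid.
            initial_dtm = griddata(points, values, (rows, cols), method="linear")
            initial_dtm = np.where(np.isnan(initial_dtm),
                                   griddata(points, values, (rows, cols), method="nearest"),
                                   initial_dtm)
            # Pixelwise comparison: anything sufficiently above the initial DTM is non-ground.
            non_ground = (dsm - initial_dtm) > height_threshold
            return non_ground, initial_dtm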

  6. Monte Carlo Modeling of VLWIR HgCdTe Interdigitated Pixel Response

    NASA Astrophysics Data System (ADS)

    D'Souza, A. I.; Stapelbroek, M. G.; Wijewarnasuriya, P. S.

    2010-07-01

    Increasing very long-wave infrared (VLWIR, λ_c ≈ 15 μm) pixel operability was approached by subdividing each pixel into four interdigitated subpixels. High response is maintained across the pixel, even if one or two interdigitated subpixels are deselected (turned off), because interdigitation provides that the preponderance of minority carriers photogenerated in the pixel are collected by the selected subpixels. Monte Carlo modeling of the photoresponse of the interdigitated subpixel simulates minority-carrier diffusion from carrier creation to recombination. Each carrier generated at an appropriately weighted random location is assigned an exponentially distributed random lifetime τ_i, where ⟨τ_i⟩ is the bulk minority-carrier lifetime. The minority carrier is allowed to diffuse for a short time dτ, and the fate of the carrier is decided from its present position and the boundary conditions, i.e., whether the carrier is absorbed in a junction, recombined at a surface, reflected from a surface, or recombined in the bulk because it lived for its designated lifetime. If nothing happens, the process is then repeated until one of the boundary conditions is attained. The next step is to go on to the next carrier and repeat the procedure for all the launches of minority carriers. For each minority carrier launched, the original location and boundary condition at fatality are recorded. An example of the results from Monte Carlo modeling is that, for a 20-μm diffusion length, the calculated quantum efficiency (QE) changed from 85% with no subpixels deselected, to 78% with one subpixel deselected, 67% with two subpixels deselected, and 48% with three subpixels deselected. Demonstration of the interdigitated pixel concept and verification of the Monte Carlo modeling utilized λ_c(60 K) ≈ 15 μm HgCdTe pixels in a 96 × 96 array format. The measured collection efficiency for one, two, and three subelements selected, divided by the collection efficiency for all four subelements selected, matched that calculated using Monte Carlo modeling.
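
    A stripped-down sketch of the random-walk bookkeeping described above, reduced to one dimension with a single absorbing junction plane and a reflecting back surface; the real model tracks full 3D geometry, surface recombination, and the interdigitated subpixel layout, and all parameter values below are illustrative only.

        import numpy as np

        def walk_one_carrier(z0, lifetime_mean, diffusion_length, junction_z, dt, rng):
            """Diffuse one carrier until it is absorbed at the junction or recombines."""
            diffusivity = diffusion_length**2 / lifetime_mean        # L = sqrt(D * tau)
            lifetime = rng.exponential(lifetime_mean)                # assigned random lifetime
            z, t = z0, 0.0
            while t < lifetime:
                z += rng.normal(0.0, np.sqrt(2.0 * diffusivity * dt))  # 1D diffusion step
                t += dt
                if z >= junction_z:
                    return True        # collected by the junction
                if z < 0.0:
                    z = -z             # reflecting back surface
            return False               # recombined in the bulk

        rng = np.random.default_rng(0)
        n = 5000
        collected = sum(walk_one_carrier(z0=rng.uniform(0.0, 10.0), lifetime_mean=1.0,
                                         diffusion_length=20.0, junction_z=10.0,
                                         dt=0.01, rng=rng)
                        for _ in range(n))
        print(f"estimated collection efficiency: {collected / n:.2f}")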

  7. Development of the Next Generation of Multi-chroic Antenna-Coupled Transition Edge Sensor Detectors for CMB Polarimetry

    NASA Astrophysics Data System (ADS)

    Westbrook, B.; Cukierman, A.; Lee, A.; Suzuki, A.; Raum, C.; Holzapfel, W.

    2016-07-01

    We present the development of the next generation of multi-chroic sinuous antenna-coupled transition edge sensor (TES) bolometers optimized for precision measurements of polarization of the cosmic microwave background (CMB) and cosmic foreground. These devices employ a polarization sensitive broadband self-complementary sinuous antenna to feed on-chip band defining filters before delivering the power to load resistors coupled to a TES on a released bolometer island. This technology was originally developed by UC Berkeley and will be deployed by POLARBEAR-2 and SPT-3G in the next year and a half. In addition, it is a candidate detector for the LiteBIRD mission, which will make all-sky CMB and cosmic foreground polarization observations from a satellite platform in the early 2020's. This work focuses on expanding both the bandwidth and band count per pixel of this technology in order to meet the needs of future CMB missions, and demonstrates that these devices are well suited for observations between 20 and 380 GHz. This proceeding describes the design, fabrication, and characterization of three new pixel types: a low-frequency triplexing pixel (LFTP) with bands centered on 40, 60, and 90 GHz, a high-frequency triplexing pixel (HFTP) with bands centered on 220, 280, and 350 GHz, and a mid-frequency tetraplexing pixel (MFTP) with bands centered on 90, 150, 220, and 280 GHz. The average fractional bandwidths of these pixel designs were 36.7%, 34.5%, and 31.4%, respectively. In addition, we found that the polarization modulation efficiency of each band was between 1% and 3%, which is consistent with the polarization efficiency of the wire grid used to take the measurement. Finally, we find that the beams have ~1% ellipticity for each pixel type. The thermal properties of the bolometers were tuned for characterization in our lab, so we do not report on G and noise values as they would be unsuitable for modern CMB experiments.

  8. Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure

    NASA Astrophysics Data System (ADS)

    Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.

    2017-05-01

    Different combinations of input parameters to filament identification algorithms, such as disperse and filfinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, a given skeleton may not be as good of a representation as another. Previously, there has been no mathematical “goodness-of-fit” measure to compare output skeletons to the input image. Thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or “best,” skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of “big data.”
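
    A minimal sketch of using the mean SSIM as such a goodness-of-fit score; skimage's structural_similarity already returns the SSIM averaged over local windows, and the choice to soften the one-pixel-wide skeleton before comparison (and how) is an assumption of this sketch rather than a prescription from the paper.

        from scipy.ndimage import gaussian_filter
        from skimage.metrics import structural_similarity

        def skeleton_mssim(input_map, skeleton, sigma_pixels=2.0):
            """Mean SSIM between the input image and a softened rendering of a skeleton."""
            # Render the one-pixel-wide skeleton as a smooth model image before comparison.
            model = gaussian_filter(skeleton.astype(float), sigma_pixels)
            model *= input_map.max() / (model.max() + 1e-12)     # crude intensity matching
            data_range = float(input_map.max() - input_map.min())
            return structural_similarity(input_map, model, data_range=data_range)

        # Ranking candidate skeletons then reduces to picking the one with the highest score.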

  9. Fully 3D-Integrated Pixel Detectors for X-Rays

    DOE PAGES

    Deptuch, Grzegorz W.; Gabriella, Carini; Enquist, Paul; ...

    2016-01-01

    The vertically integrated photon imaging chip (VIPIC1) pixel detector is a stack consisting of a 500-μm-thick silicon sensor, a two-tier 34-μm-thick integrated circuit, and a host printed circuit board (PCB). The integrated circuit tiers were bonded using the direct bonding technology with copper, and each tier features 1-μm-diameter through-silicon vias that were used for connections to the sensor on one side, and to the host PCB on the other side. The 80-μm-pixel-pitch sensor was bonded to the integrated circuit using the direct bonding technology with nickel. The stack was mounted on the board using Sn–Pb balls placed on a 320-μm pitch, yielding an entirely wire-bond-less structure. The analog front-end features a pulse response peaking at below 250 ns, and the power consumption per pixel is 25 μW. We successfully completed the 3-D integration, as reported here. Additionally, all pixels in the matrix of 64 × 64 pixels were responsive on well-bonded devices. Correct operation of the sparsified readout, allowing a single 153-ns bunch timing resolution, was confirmed in tests on a synchrotron beam of 10-keV X-rays. An equivalent noise charge of 36.2 e− rms and a conversion gain of 69.5 μV/e−, with 2.6 e− rms and 2.7 μV/e− rms pixel-to-pixel variations, respectively, were measured.

  10. Observing Bridge Dynamic Deflection in Green Time by Information Technology

    NASA Astrophysics Data System (ADS)

    Yu, Chengxin; Zhang, Guojian; Zhao, Yongqian; Chen, Mingzhi

    2018-01-01

    As traditional surveying methods are limited in their ability to observe bridge dynamic deflection, information technology is adopted to observe bridge dynamic deflection in green time. Information technology in this study means that digital cameras photograph the bridge in red time to obtain a zero image. Then, a series of successive images are photographed in green time. Deformation point targets are identified and located by the Hough transform. With reference to the control points, the deformation values of these deformation points are obtained by differencing the successive images with the zero image. Results show that the average measurement accuracies of C0 are 0.46 pixels, 0.51 pixels and 0.74 pixels in the X, Z and comprehensive directions. The average measurement accuracies of C1 are 0.43 pixels, 0.43 pixels and 0.67 pixels in the X, Z and comprehensive directions in these tests. The maximal bridge deflection is 44.16 mm, which is less than the bridge deflection tolerance value of 75 mm. The information technology described in this paper can monitor bridge dynamic deflection and depict deflection trend curves of the bridge in real time. It can provide data to support on-site decisions regarding bridge structural safety.
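
    As an illustration of the zero-image/green-time differencing idea (not the authors' code), the sketch below locates circular targets with OpenCV's Hough transform and converts the centre shift to millimetres; the synthetic frames and the mm-per-pixel calibration constant are made-up placeholders.

        import cv2
        import numpy as np

        MM_PER_PIXEL = 0.9   # hypothetical calibration, e.g. from a target of known size

        def find_targets(gray):
            """Centres (x, y) of circular targets found by the Hough transform."""
            blur = cv2.medianBlur(gray, 5)
            circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                                       param1=100, param2=20, minRadius=5, maxRadius=40)
            return np.empty((0, 2)) if circles is None else circles[0, :, :2]

        # Synthetic stand-ins for the red-time (zero) image and one green-time image.
        zero_img  = np.zeros((200, 200), np.uint8)
        green_img = np.zeros((200, 200), np.uint8)
        cv2.circle(zero_img, (100, 100), 12, 255, -1)
        cv2.circle(green_img, (100, 103), 12, 255, -1)       # target moved 3 px down

        deflection_mm = (find_targets(green_img) - find_targets(zero_img)) * MM_PER_PIXEL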

  11. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC compatible computer. Modules were developed for the tasks of image compression and image analysis. Supporting software to perform image processing for visual display and interpretation of the compressed/classified images was also developed.
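
    The vector-quantization step can be sketched as follows, assuming that each vector stacks one pixel's values across all channels; the codebook size, image size and random data are illustrative rather than the paper's settings, and scikit-learn's k-means stands in for the codebook training.

        import numpy as np
        from sklearn.cluster import KMeans

        channels, rows, cols, codebook_size = 7, 64, 64, 256
        cube = np.random.default_rng(1).integers(0, 256, (channels, rows, cols)).astype(float)

        vectors = cube.reshape(channels, -1).T                       # one row per pixel
        km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(vectors)

        indices   = km.labels_                                       # these get entropy-coded
        restored  = km.cluster_centers_[indices].T.reshape(cube.shape)
        rms_error = np.sqrt(np.mean((restored - cube) ** 2))         # reconstruction quality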

  12. Optical transmission testing based on asynchronous sampling techniques

    NASA Astrophysics Data System (ADS)

    Mrozek, T.; Perlicki, K.; Wilczewski, G.

    2016-09-01

    This paper presents a method of analysis of images obtained with the Asynchronous Delay Tap Sampling technique, which is used for simultaneous monitoring of a number of phenomena in the physical layer of an optical network. This method allows visualization of results in the form of an optical signal's waveform (characteristics depicting phase portraits). Depending on the specific phenomenon being observed (i.e., chromatic dispersion, polarization mode dispersion or ASE noise), the shape of the waveform changes. The original waveforms presented herein were acquired using the OptSim 4.0 simulation package. After specific simulation testing, the obtained numerical data were transformed into image form and further subjected to analysis using the authors' custom algorithms. These algorithms apply various pixel operations and generate reports that characterize each image. Each individual report shows the number of black pixels present in a specific image segment. The generated reports are then compared with each other across the original-impaired relationship. A differential report is created, consisting of a "binary key" that shows the increase in the number of pixels in each particular segment. The ultimate aim of this work is to find the correlation between the generated binary keys and the common phenomenon being observed, allowing identification of the type of interference occurring. In the further course of the work, their respective values will also need to be determined. The presented work delivers the first objective: the ability to recognize interference.
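
    A small sketch of the report/"binary key" construction described above: count black pixels per image segment for the original and impaired phase portraits, then flag the segments whose count increased. The grid size, threshold and random test images are assumptions made only for illustration.

        import numpy as np

        def segment_report(binary_img, grid=(8, 8)):
            """Number of black (True) pixels in each segment of a regular grid."""
            h, w = binary_img.shape
            gh, gw = grid
            report = np.zeros(grid, dtype=int)
            for i in range(gh):
                for j in range(gw):
                    tile = binary_img[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
                    report[i, j] = int(tile.sum())
            return report

        rng = np.random.default_rng(2)
        original = rng.random((256, 256)) < 0.10                       # clean phase portrait
        impaired = original | (rng.random((256, 256)) < 0.02)          # impairment adds pixels

        binary_key = (segment_report(impaired) > segment_report(original)).astype(int)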

  13. Applying reconfigurable hardware to the analysis of multispectral and hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Leeser, Miriam E.; Belanovic, Pavle; Estlick, Michael; Gokhale, Maya; Szymanski, John J.; Theiler, James P.

    2002-01-01

    Unsupervised clustering is a powerful technique for processing multispectral and hyperspectral images. Last year, we reported on an implementation of k-means clustering for multispectral images. Our implementation in reconfigurable hardware processed 10-channel multispectral images two orders of magnitude faster than a software implementation of the same algorithm. The advantage of using reconfigurable hardware to accelerate k-means clustering is clear; the disadvantage is that the hardware implementation worked for only one specific dataset. It is a non-trivial task to change this implementation to handle a dataset with a different number of spectral channels, bits per spectral channel, or number of pixels, or to change the number of clusters. These changes required knowledge of the hardware design process and could take several days of a designer's time. Since multispectral data sets come in many shapes and sizes, being able to easily change the k-means implementation for these different data sets is important. For this reason, we have developed a parameterized implementation of the k-means algorithm. Our design is parameterized by the number of pixels in an image, the number of channels per pixel, and the number of bits per channel, as well as the number of clusters. These parameters can easily be changed in a few minutes by someone not familiar with the design process. The resulting implementation is very close in performance to the original hardware implementation. It has the added advantage that the parameterized design compiles approximately three times faster than the original.
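
    A software analogue of that parameterization is sketched below: a plain NumPy k-means whose behaviour is fixed entirely by the number of pixels, channels, bits per channel and clusters. This is only a reference model; the hardware streams pixels through fixed-point logic and may use a cheaper distance metric than the Euclidean one used here.

        import numpy as np

        def kmeans(pixels, n_clusters, n_iter=20, seed=0):
            """pixels: (n_pixels, n_channels) array; returns labels and cluster centres."""
            rng = np.random.default_rng(seed)
            centres = pixels[rng.choice(len(pixels), n_clusters, replace=False)].copy()
            for _ in range(n_iter):
                dist = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
                labels = dist.argmin(axis=1)
                for k in range(n_clusters):
                    if np.any(labels == k):
                        centres[k] = pixels[labels == k].mean(axis=0)
            return labels, centres

        n_pixels, n_channels, bits, n_clusters = 1024, 10, 12, 8       # the design parameters
        data = np.random.default_rng(3).integers(0, 2**bits, (n_pixels, n_channels)).astype(float)
        labels, centres = kmeans(data, n_clusters)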

  14. Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features

    PubMed Central

    Zhu, Ningning; Jia, Yonghong; Ji, Shunping

    2018-01-01

    We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431

  15. On the possibility to use semiconductive hybrid pixel detectors for study of radiation belt of the Earth.

    NASA Astrophysics Data System (ADS)

    Guskov, A.; Shelkov, G.; Smolyanskiy, P.; Zhemchugov, A.

    2016-02-01

    The scientific apparatus GAMMA-400, designed for the study of the electromagnetic and hadron components of cosmic rays, will be launched into an elliptical orbit with an apogee of about 300 000 km and a perigee of about 500 km. Such a configuration of the orbit allows it to cross the radiation belt and the outer part of the magnetosphere periodically. We discuss the possibility of using hybrid pixel detectors based on the Timepix chip and semiconductive sensors on board the GAMMA-400 apparatus. Due to the high granularity of the sensor (pixel size of 55 μm) and the possibility of measuring the energy deposition in each pixel independently, such a compact and lightweight detector could be a unique instrument for studying the spatial, energy and time structure of the electron and proton components of the radiation belt.

  16. Optimization method of superpixel analysis for multi-contrast Jones matrix tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki

    2017-02-01

    Local statistics are widely utilized for quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes that preserve the tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). This new method forms the superpixels by clustering image pixels in a 6-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features). All image pixels were clustered based on their spatial proximity and optical feature similarity. The optical features are scattering, OCT-A, birefringence and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and the retinal pigment epithelium. Hence, a superpixel can be utilized as a local statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis. Since it reduces the number of pixels to be analyzed, it reduces the computational cost of such image processing.
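
    A rough sketch of the 6-D clustering idea, with a plain k-means standing in for the authors' superpixel algorithm: two normalized spatial coordinates are concatenated with four normalized optical features, and a spatial weight trades compactness against feature similarity. The feature array, segment count and weight are illustrative only.

        import numpy as np
        from sklearn.cluster import KMeans

        def superpixels(features, n_segments=200, spatial_weight=0.5):
            """features: (H, W, 4) array of scattering/OCT-A/birefringence/DOPU values."""
            h, w, _ = features.shape
            yy, xx = np.mgrid[0:h, 0:w]
            feat = features.reshape(-1, 4)
            feat = (feat - feat.mean(0)) / (feat.std(0) + 1e-12)       # normalize features
            space = np.c_[yy.ravel() / h, xx.ravel() / w] * spatial_weight
            labels = KMeans(n_clusters=n_segments, n_init=3,
                            random_state=0).fit_predict(np.c_[space, feat])
            return labels.reshape(h, w)

        demo = np.random.default_rng(4).random((64, 64, 4))
        segment_map = superpixels(demo, n_segments=50)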

  17. Tracking Detectors in the STAR Experiment at RHIC

    NASA Astrophysics Data System (ADS)

    Wieman, Howard

    2015-04-01

    The STAR experiment at RHIC is designed to measure and identify the thousands of particles produced in 200 GeV/nucleon Au on Au collisions. This talk will focus on the design and construction of two of the main tracking detectors in the experiment, the TPC and the Heavy Flavor Tracker (HFT) pixel detector. The TPC is a solenoidal gas-filled detector 4 meters in diameter and 4.2 meters long. It provides precise, continuous tracking and the rate of energy loss in the gas (dE/dx) for particles within ±1 unit of pseudorapidity. Tracking in a half-tesla magnetic field measures momentum, and dE/dx provides particle identification. To detect short-lived particles, tracking close to the point of interaction is required. The HFT pixel detector is a two-layered, high-resolution vertex detector located at a radius of a few centimeters from the collision point. It determines the origins of the tracks to a few tens of microns for the purpose of extracting displaced vertices, allowing the identification of D mesons and other short-lived particles. The HFT pixel detector uses detector chips developed by the IPHC group at Strasbourg that are based on standard IC Complementary Metal-Oxide-Semiconductor (CMOS) technology. This is the first time that CMOS pixel chips have been incorporated in a collider application.

  18. Change detection from synthetic aperture radar images based on neighborhood-based ratio and extreme learning machine

    NASA Astrophysics Data System (ADS)

    Gao, Feng; Dong, Junyu; Li, Bo; Xu, Qizhi; Xie, Cui

    2016-10-01

    Change detection is of high practical value to hazard assessment, crop growth monitoring, and urban sprawl detection. A synthetic aperture radar (SAR) image is an ideal information source for performing change detection since it is independent of atmospheric and sunlight conditions. Existing SAR image change detection methods usually generate a difference image (DI) first and use clustering methods to classify the pixels of the DI into changed and unchanged classes. Some useful information may get lost in the DI generation process. This paper proposes an SAR image change detection method based on the neighborhood-based ratio (NR) and the extreme learning machine (ELM). The NR operator is utilized to obtain pixels of interest that have a high probability of being changed or unchanged. Then, image patches centered at these pixels are generated, and an ELM is employed to train a model using these patches. Finally, pixels in both original SAR images are classified by the pretrained ELM model. The preclassification result and the ELM classification result are combined to form the final change map. The experimental results obtained on three real SAR image datasets and one simulated dataset show that the proposed method is robust to speckle noise and is effective at detecting change information among multitemporal SAR images.
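
    A minimal extreme learning machine, the classifier named above, can be written in a few lines: random hidden-layer weights and a single least-squares solve for the output weights. The neighborhood-based ratio preclassification and patch extraction are omitted, and the toy patches below are random stand-ins.

        import numpy as np

        class ELM:
            def __init__(self, n_hidden=200, seed=0):
                self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)            # random, untrained hidden layer
                self.beta = np.linalg.pinv(H) @ y           # closed-form output weights
                return self

            def predict(self, X):
                return np.tanh(X @ self.W + self.b) @ self.beta

        # Toy usage: flattened 5x5 patches, labels 1 = changed, 0 = unchanged.
        rng = np.random.default_rng(5)
        patches = rng.random((500, 25))
        labels = rng.integers(0, 2, 500).astype(float)
        scores = ELM().fit(patches, labels).predict(patches)   # threshold at 0.5 for a change map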

  19. First full dynamic range calibration of the JUNGFRAU photon detector

    NASA Astrophysics Data System (ADS)

    Redford, S.; Andrä, M.; Barten, R.; Bergamaschi, A.; Brückner, M.; Dinapoli, R.; Fröjdh, E.; Greiffenberg, D.; Lopez-Cuenca, C.; Mezza, D.; Mozzanica, A.; Ramilli, M.; Ruat, M.; Ruder, C.; Schmitt, B.; Shi, X.; Thattil, D.; Tinti, G.; Vetter, S.; Zhang, J.

    2018-01-01

    The JUNGFRAU detector is a charge integrating hybrid silicon pixel detector developed at the Paul Scherrer Institut for photon science applications, in particular for the upcoming free electron laser SwissFEL. With a high dynamic range, analogue readout, low noise and three automatically switching gains, JUNGFRAU promises excellent performance not only at XFELs but also at synchrotrons in areas such as protein crystallography, ptychography, pump-probe and time resolved measurements. To achieve its full potential, the detector must be calibrated on a pixel-by-pixel basis. This contribution presents the current status of the JUNGFRAU calibration project, in which a variety of input charge sources are used to parametrise the energy response of the detector across four orders of magnitude of dynamic range. Building on preliminary studies, the first full calibration procedure of a JUNGFRAU 0.5 Mpixel module is described. The calibration is validated using alternative sources of charge deposition, including laboratory experiments and measurements at ESRF and LCLS. The findings from these measurements are presented. Calibrated modules have already been used in proof-of-principle style protein crystallography experiments at the SLS. A first look at selected results is shown. Aspects such as the conversion of charge to number of photons, treatment of multi-size pixels and the origin of non-linear response are also discussed.
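
    The kind of per-pixel conversion that such a calibration enables can be sketched schematically as below, where each pixel carries a pedestal and a gain for every gain stage and the stage actually used accompanies each sample; the array names and numbers are toy values, not the detector software or real calibration constants.

        import numpy as np

        npix = 256 * 256
        rng = np.random.default_rng(6)
        adu   = rng.integers(0, 16384, npix).astype(float)        # raw ADC values
        stage = rng.integers(0, 3, npix)                          # gain stage used per sample
        pedestal = np.full((3, npix), 1000.0)                     # per-pixel, per-stage pedestals
        gain     = np.array([[40.0], [4.0], [0.4]]) * np.ones((3, npix))   # ADU per keV (toy)

        idx = np.arange(npix)
        energy_kev = (adu - pedestal[stage, idx]) / gain[stage, idx]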

  20. A 4MP high-dynamic-range, low-noise CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Ma, Cheng; Liu, Yang; Li, Jing; Zhou, Quan; Chang, Yuchun; Wang, Xinyang

    2015-03-01

    In this paper we present a 4-megapixel, high dynamic range, low dark noise and low dark current CMOS image sensor, which is ideal for high-end scientific and surveillance applications. The pixel design is based on a 4-T PPD structure. During the readout of the pixel array, signals are first amplified and then fed to a low-power column-parallel ADC array, which has already been presented in [1]. Measurement results show that the sensor achieves a dynamic range of 96 dB and a dark noise of 1.47 e- at 24 fps. The dark current is 0.15 e-/pixel/s at -20 °C.

  1. An enhanced structure tensor method for sea ice ridge detection from GF-3 SAR imagery

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Li, F.; Zhang, Y.; Zhang, S.; Spreen, G.; Dierking, W.; Heygster, G.

    2017-12-01

    In SAR imagery, ridges and leads appear as curvilinear features, and the proposed ridge detection method exploits these curvilinear shapes. Bright curvilinear features are recognized as ridges, while dark curvilinear features are classified as leads. In the HH or HV channel of dual-polarization C-band SAR imagery, a bright curvilinear feature may be a false alarm, because frost flowers on young leads may appear as bright pixels associated with changes in surface salinity under calm surface conditions. Wind-roughened leads also increase the backscatter and can be misclassified as ridges [1]. Thus a width limitation is considered in the proposed structure tensor method [2], since a method based on shape features alone is not sufficient for detecting ridges. The ridge detection algorithm is based on the hypothesis that the bright pixels are ridges with curvilinear shapes and that the ridge width is less than 30 meters. Benefiting from the high spatial resolution of GF-3 (3 meters), we provide an enhanced structure tensor method for detecting significant ridges. The preprocessing procedures, including calibration and incidence angle normalization, are also investigated. Bright pixels have a strong response to the bandpass filtering. Ridge training samples are delineated from the SAR imagery, and Log-Gabor filter responses are used to construct the structure tensor. From the tensor, the dominant orientation of a pixel representing a ridge is determined by the dominant eigenvector. For the post-processing of the structure tensor, an elongated kernel is used to enhance the curvilinear ridge shape. Since a ridge extends along a certain direction, the eigenvalue ratio is used to measure the intensity of local anisotropy. A convolution filter applied to the constructed structure tensor is used to model spatial contextual information. Ridge detection results from GF-3 show that the proposed method performs better than the direct threshold method.
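
    The core structure-tensor computation can be sketched with Gaussian-smoothed gradient products, as below; the Log-Gabor prefiltering, incidence-angle normalization and width test of the proposed method are not reproduced, and the thresholds and scales are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def structure_tensor_anisotropy(img, sigma=2.0):
            gx, gy = sobel(img, axis=1), sobel(img, axis=0)
            Jxx = gaussian_filter(gx * gx, sigma)
            Jxy = gaussian_filter(gx * gy, sigma)
            Jyy = gaussian_filter(gy * gy, sigma)
            tmp = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
            lam1 = 0.5 * (Jxx + Jyy + tmp)                 # dominant eigenvalue
            lam2 = 0.5 * (Jxx + Jyy - tmp)
            orientation = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)   # dominant orientation
            anisotropy = (lam1 - lam2) / (lam1 + lam2 + 1e-12)
            return anisotropy, orientation

        sar = np.random.default_rng(8).random((128, 128))
        aniso, theta = structure_tensor_anisotropy(sar)
        ridge_candidates = (sar > np.percentile(sar, 95)) & (aniso > 0.6)   # bright and elongated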

  2. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation

    DOE PAGES

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull; ...

    2016-01-28

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz has been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout, which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. Lastly, we detail the characteristics, operation, testing and application of the detector.

  3. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation

    PubMed Central

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull; Shanks, Katherine S.; Weiss, Joel T.; Gruner, Sol M.

    2016-01-01

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz has been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout, which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. The characteristics, operation, testing and application of the detector are detailed. PMID:26917125

  4. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation.

    PubMed

    Philipp, Hugh T; Tate, Mark W; Purohit, Prafull; Shanks, Katherine S; Weiss, Joel T; Gruner, Sol M

    2016-03-01

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz has been experimentally verified. The pixel design allows for up to 8-12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout, which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10-100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. The characteristics, operation, testing and application of the detector are detailed.

  5. Large Area Cd0.9Zn0.1Te Pixelated Detector: Fabrication and Characterization

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Sandeep K.; Nguyen, Khai; Pak, Rahmi O.; Matei, Liviu; Buliga, Vladimir; Groza, Michael; Burger, Arnold; Mandal, Krishna C.

    2014-04-01

    Cd0.9Zn0.1Te (CZT) based pixelated radiation detectors have been fabricated and characterized for gamma ray detection. Large-area CZT single crystals have been grown using a tellurium solvent method. A 10 × 10 guarded pixelated detector has been fabricated on a 19.5 × 19.5 × 5 mm3 crystal cut from the grown ingot. The pixel dimensions were 1.3 × 1.3 mm2 with a pitch of 1.8 mm. A guard grid was used to reduce interpixel/inter-electrode leakage. The crystal was characterized in planar configuration using electrical, optical and optoelectronic methods prior to the fabrication of the pixelated geometry. Current-voltage (I-V) measurements revealed a leakage current of 27 nA at an operating bias voltage of 1000 V and a resistivity of 3.1 × 10^10 Ω-cm. Infrared transmission imaging revealed an average tellurium inclusion/precipitate size of less than 8 μm. Pockels measurements revealed a near-uniform depth-wise distribution of the internal electric field. The mobility-lifetime product in this crystal was calculated to be 6.2 × 10^-3 cm2/V using the alpha-ray spectroscopic method. Gamma spectroscopy using a 137Cs source on the pixelated structure showed fully resolved 662 keV gamma peaks for all the pixels, with a percentage resolution (FWHM) as good as 1.8%.

  6. Mapping the northern plains of Mars: origins, evolution and response to climate change - a new overview of the recent ice-related landforms in Utopia Planitia

    NASA Astrophysics Data System (ADS)

    Costard, Francois; Sejourne, Antoine; Losiak, Ania; Swirad, Zusanna; Balm, Matthew; Conway, Susan; Gallagher, Colman; van-Gassel, Stephan; Hauber, Ernst; Johnsson, Andreas; Kereszturi, Akos; Platz, Thomas; Ramsdale, Jason; Reiss, Dennis; Skinner, James

    2015-04-01

    An ISSI (International Space Science Institute) international team has been convened to study the northern plains of Mars. The northern plains of Mars are extensive, geologically young, low-lying areas that contrast in age and relief with Mars' older, heavily cratered, southern highlands. Mars' northern plains are characterised by a wealth of landforms and landscapes that have been inferred to be related to the presence of ice or ice-rich material. Such landforms include 'scalloped' pits and depressions, polygonally-patterned grounds, and viscous flow features similar in form to terrestrial glacial or ice-sheet landforms. Furthermore, new (within the last few years) impact craters have exposed ice in the northern plains, and spectral data from orbiting instruments have revealed the presence of tens of percent by weight of water within the uppermost ~50 cm of the martian surface at high latitudes. Western Utopia Planitia contains numerous relatively young ice-related landforms (< 10 Ma). Among them are scalloped depressions, spatially-associated polygons and polygon-junction pits. There is agreement within the community that they are periglacial in origin and, derivatively, indicate the presence of an ice-rich permafrost. However, these landforms have been studied individually, and many questions remain about their formation, evolution and climatic significance. In contrast, we conducted a geomorphological study of all landforms in Utopia Planitia along a long strip from ~30N to ~80N latitude and about 250 km wide. The goals are to: (i) map the geographical distribution of the ice-related landforms; (ii) identify their association with subtly-expressed geological units; and (iii) discuss the climatic modifications of the ice-rich permafrost in UP. Our work combines a study with CTX (5-6 m/pixel) and HRSC (~12.5-50 m/pixel) images, supported by higher resolution HiRISE (25 cm/pixel) and MOC (~2 m/pixel) images, and a comparison with analogous landforms on Earth.

  7. The SLD VXD3 detector and its initial performance

    NASA Astrophysics Data System (ADS)

    Abe, K.; Arodzero, A.; Baltay, C.; Brau, J.; Breidenbach, M.; Burrows, P. N.; Chou, A.; Crawford, G.; Damerell, C.; Dervan, P.; Dong, D.; Emmet, W.; English, R.; Etzion, E.; Foss, M.; Frey, R.; Haller, G.; Hasuko, K.; Hertzbach, S.; Hoeflich, J.; Huber, J.; Huffer, M.; Jackson, D.; Jaros, J.; Kelsy, J.; Kendall, H.; Lee, I.; Lia, V.; Lintern, L.; Liu, M.; Manly, S.; Masuda, H.; Moore, T.; Nagamine, T.; Ohishi, N.; Osborne, L.; Ross, D.; Russell, J.; Serbo, V.; Sinev, N.; Sinnott, J.; Skarpaas, K. Viii; Smy, M.; Snyder, J.; Strauss, M.; Dong, S.; Suekane, F.; Taylor, F.; Trandafir, A.; Usher, T.; Verdier, R.; Watts, S.; Weiss, E.; Yashima, J.; Yuta, H.; Zapalac, G.

    1997-02-01

    The SLD collaboration completed construction of a new CCD vertex detector (VXD3) in January 1996 and started data taking in April 1996 with the new system. VXD3 is an upgrade of the original CCD vertex detector, VXD2, which had successfully operated in SLD for three years. VXD3 consists of 96 large area CCDs, each having 3.2 million 20 μm × 20 μm pixels. By reducing the detector material and lengthening the lever arm, VXD3 is expected to improve secondary vertex resolution by about a factor of two compared with VXD2. The new three-layered structure enables stand-alone tracking without any ambiguity and its extended size along the beam direction improves the polar-angle coverage to |cos θ| < 0.85. An overview of this detector system and its initial performance are described.

  8. Yardangs: Nature's Weathervanes

    NASA Image and Video Library

    2017-11-28

    The prominent tear-shaped features in this image from NASA's Mars Reconnaissance Orbiter (MRO) are erosional features called yardangs. Yardangs are composed of sand grains that have clumped together and have become more resistant to erosion than their surrounding materials. As the winds of Mars blow and erode away at the landscape, the more cohesive rock is left behind as a standing feature. (This Context Camera image shows several examples of yardangs that overlie the darker iron-rich material that makes up the lava plains in the southern portion of Elysium Planitia.) Resistant as they may be, the yardangs are not permanent, and will eventually be eroded away by the persistence of the Martian winds. For scientists observing the Red Planet, yardangs serve as a useful indicator of regional prevailing wind direction. The sandy structures are slowly eroded down and carved into elongated shapes that point in the downwind direction, like giant weathervanes. In this instance, the yardangs are all aligned, pointing towards north-northwest. This shows that the winds in this area generally gust in that direction. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 55.8 centimeters (21 inches) per pixel (with 2 x 2 binning); objects on the order of 167 centimeters (65.7 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22119

  9. Mapping Vesta Mid-Latitude Quadrangle V-12EW: Mapping the Edge of the South Polar Structure

    NASA Astrophysics Data System (ADS)

    Hoogenboom, T.; Schenk, P.; Williams, D. A.; Hiesinger, H.; Garry, W. B.; Yingst, R.; Buczkowski, D.; McCord, T. B.; Jaumann, R.; Pieters, C. M.; Gaskell, R. W.; Neukum, G.; Schmedemann, N.; Marchi, S.; Nathues, A.; Le Corre, L.; Roatsch, T.; Preusker, F.; White, O. L.; DeSanctis, C.; Filacchione, G.; Raymond, C. A.; Russell, C. T.

    2011-12-01

    NASA's Dawn spacecraft arrived at the asteroid 4 Vesta on July 15, 2011, and is now collecting imaging, spectroscopic, and elemental abundance data during its one-year orbital mission. As part of the geological analysis of the surface, a series of 15 quadrangle maps is being produced based on Framing Camera images (FC: spatial resolution ~65 m/pixel) along with Visible & Infrared Spectrometer data (VIR: spatial resolution ~180 m/pixel) obtained during the High-Altitude Mapping Orbit (HAMO). This poster presentation concentrates on our geologic analysis and mapping of quadrangle V-12EW. This quadrangle is dominated by the arcuate edge of the large 460+ km diameter south polar topographic feature first observed by HST (Thomas et al., 1997). Sparsely cratered, the portion of this feature covered in V-12EW is characterized by arcuate ridges and troughs forming a generalized arcuate pattern. Mapping of this terrain and the transition to areas to the north will be used to test whether this feature has an impact or other (e.g., internal) origin. We are also using FC stereo and VIR images to assess whether there are any compositional differences between this terrain and areas further to the north, and image data to evaluate the distribution and age of young impact craters within the map area. The authors acknowledge the support of the Dawn Science, Instrument and Operations Teams.

  10. Lizard-Skin Surface Texture

    NASA Technical Reports Server (NTRS)

    2007-01-01

    [figure removed for brevity, see original site] Figure 1

    The south polar region of Mars is covered seasonally with translucent carbon dioxide ice. In the spring gas subliming (evaporating) from the underside of the seasonal layer of ice bursts through weak spots, carrying dust from below with it, to form numerous dust fans aligned in the direction of the prevailing wind.

    The dust gets trapped in the shallow grooves on the surface, helping to define the small-scale structure of the surface. The surface texture is reminiscent of lizard skin (figure 1).

    Observation Geometry Image PSP_003730_0945 was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter spacecraft on 14-May-2007. The complete image is centered at -85.2 degrees latitude, 181.5 degrees East longitude. The range to the target site was 248.5 km (155.3 miles). At this distance the image scale is 24.9 cm/pixel (with 1 x 1 binning), so objects 75 cm across are resolved. The image shown here has been map-projected to 25 cm/pixel. The image was taken at a local Mars time of 06:04 PM and the scene is illuminated from the west with a solar incidence angle of 69 degrees; thus the sun was about 21 degrees above the horizon. At a solar longitude of 237.5 degrees, the season on Mars is Northern Autumn.

  11. Natural pixel decomposition for computational tomographic reconstruction from interferometric projection: algorithms and comparison

    NASA Astrophysics Data System (ADS)

    Cha, Don J.; Cha, Soyoung S.

    1995-09-01

    A computational tomographic technique, termed the variable grid method (VGM), has been developed for improving interferometric reconstruction of flow fields under ill-posed data conditions of restricted scanning and incomplete projection. The technique is based on natural pixel decomposition, that is, division of a field into variable grid elements. The performances of two algorithms, the original and revised versions, are compared to investigate the effects of the data redundancy criteria and seed element forming schemes. Tests of the VGMs are conducted through computer simulation of experiments and reconstruction of fields with a limited view angle of 90 degrees. The temperature fields at two horizontal sections of a thermal plume of two interacting isothermal cubes, produced by a finite numerical code, are analyzed as test fields. The computer simulation demonstrates the superiority of the revised VGM over both the conventional fixed grid method and the original VGM. Both the maximum and average reconstruction errors are reduced appreciably. The reconstruction shows substantial improvement in the regions with dense scanning by probing rays. These regions are usually of interest in engineering applications.

  12. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
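
    The image-quality metrics used for that comparison are simple to state in code; the sketch below computes pixel-wise bias, variance and MSE over an ensemble of reconstructions of the same phantom, with synthetic arrays standing in for the Monte Carlo output.

        import numpy as np

        rng = np.random.default_rng(9)
        truth  = rng.random((64, 64))                                  # reference phantom
        recons = truth + rng.normal(0.02, 0.1, size=(25, 64, 64))      # 25 noisy reconstructions

        mean_recon = recons.mean(axis=0)
        bias     = mean_recon - truth
        variance = recons.var(axis=0)
        mse      = ((recons - truth) ** 2).mean(axis=0)   # equals bias**2 + variance per pixel
        avg_mse  = mse.mean()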

  13. Automatic cell detection and segmentation from H and E stained pathology slides using colorspace decorrelation stretching

    NASA Astrophysics Data System (ADS)

    Peikari, Mohammad; Martel, Anne L.

    2016-03-01

    Purpose: Automatic cell segmentation plays an important role in reliable diagnosis and prognosis of patients. Most state-of-the-art cell detection and segmentation techniques focus on complicated methods to subtract foreground cells from the background. In this study, we introduce a preprocessing method which leads to better detection and segmentation results compared to a well-known state-of-the-art work. Method: We transform the original red-green-blue (RGB) space into a new space defined by the top eigenvectors of the RGB space. Stretching is done by manipulating the contrast of each pixel value to equalize the color variances. New pixel values are then inverse transformed to the original RGB space. This altered RGB image is then used to segment cells. Result: The validation of our method against a well-known state-of-the-art technique revealed a statistically significant improvement on an identical validation set. We achieved a mean F1-score of 0.901. Conclusion: Preprocessing steps to decorrelate colorspaces may improve cell segmentation performance.
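
    A minimal decorrelation-stretch sketch in the spirit of the described preprocessing: project the RGB pixels onto the eigenvectors of their covariance, equalize the variances, and transform back. The target variance and clipping follow common practice and are not taken from the paper.

        import numpy as np

        def decorrelation_stretch(rgb):
            """rgb: (H, W, 3) float array; returns the color-decorrelated, stretched image."""
            h, w, _ = rgb.shape
            flat = rgb.reshape(-1, 3).astype(float)
            mean = flat.mean(axis=0)
            cov = np.cov(flat - mean, rowvar=False)
            evals, evecs = np.linalg.eigh(cov)             # top eigenvectors of the RGB space
            scores = (flat - mean) @ evecs
            scores *= flat.std(axis=0).mean() / np.sqrt(evals + 1e-12)   # equalize variances
            out = scores @ evecs.T + mean                  # inverse transform to RGB
            return np.clip(out, 0, 255).reshape(h, w, 3)

        stained = np.random.default_rng(10).integers(0, 256, (64, 64, 3)).astype(float)
        stretched = decorrelation_stretch(stained)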

  14. Computational polarization difference underwater imaging based on image fusion

    NASA Astrophysics Data System (ADS)

    Han, Hongwei; Zhang, Xiaohui; Guan, Feng

    2016-01-01

    Polarization difference imaging can improve the quality of images acquired underwater, whether the background and veiling light are unpolarized or partially polarized. The computational polarization difference imaging technique, which replaces the mechanical rotation of the polarization analyzer and shortens the time spent selecting the optimum orthogonal ǁ and ⊥ axes, is an improvement over conventional PDI. However, it originally obtains the output image by manually setting the weight coefficient to an identical constant for all pixels. In this paper, an algorithm is proposed to combine the Q and U parameters of the Stokes vector through pixel-level image fusion based on the non-subsampled contourlet transform. An experimental system, consisting of a green LED array with a polarizer to illuminate a flat target immersed in water and a CCD with a polarization analyzer to obtain target images at different analyzer angles, is used to verify the effect of the proposed algorithm. The results show that the output processed by our algorithm reveals more details of the flat target and has higher contrast compared to the original computational polarization difference imaging.
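
    For orientation, a toy version of the computational PDI quantities is sketched below from analyzer angles 0°, 45°, 90° and 135°; the paper's NSCT-based fusion of Q and U is replaced by a simple per-pixel weighted combination purely for illustration, and the intensity images are random stand-ins.

        import numpy as np

        rng = np.random.default_rng(11)
        I0, I45, I90, I135 = (rng.random((128, 128)) for _ in range(4))   # analyzer intensities

        Q = I0 - I90            # Stokes Q: the conventional polarization difference
        U = I45 - I135          # Stokes U
        weight = np.abs(Q) / (np.abs(Q) + np.abs(U) + 1e-12)   # stand-in for the fusion rule
        pdi = weight * Q + (1.0 - weight) * U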

  15. Artificial Structural Color Pixels: A Review

    PubMed Central

    Zhao, Yuqian; Zhao, Yong; Hu, Sheng; Lv, Jiangtao; Ying, Yu; Gervinskas, Gediminas; Si, Guangyuan

    2017-01-01

    Inspired by natural photonic structures (Morpho butterfly, for instance), researchers have demonstrated varying artificial color display devices using different designs. Photonic-crystal/plasmonic color filters have drawn increasing attention most recently. In this review article, we show the developing trend of artificial structural color pixels from photonic crystals to plasmonic nanostructures. Such devices normally utilize the distinctive optical features of photonic/plasmon resonance, resulting in high compatibility with current display and imaging technologies. Moreover, dynamical color filtering devices are highly desirable because tunable optical components are critical for developing new optical platforms which can be integrated or combined with other existing imaging and display techniques. Thus, extensive promising potential applications have been triggered and enabled including more abundant functionalities in integrated optics and nanophotonics. PMID:28805736

  16. The realization of an SVGA OLED-on-silicon microdisplay driving circuit

    NASA Astrophysics Data System (ADS)

    Bohua, Zhao; Ran, Huang; Fei, Ma; Guohua, Xie; Zhensong, Zhang; Huan, Du; Jiajun, Luo; Yi, Zhao

    2012-03-01

    An 800 × 600 pixel organic light-emitting diode-on-silicon (OLEDoS) driving circuit is proposed. The pixel cell circuit utilizes a subthreshold-voltage-scaling structure which can modulate the pixel current between 170 pA and 11.4 nA. In order to keep the voltage of the column bus at a relatively high level, the sample-and-hold circuits adopt a ping-pong operation. The driving circuit is fabricated in a commercially available 0.35 μm two-poly four-metal 3.3 V mixed-signal CMOS process. The pixel cell area is 15 × 15 μm2 and the total chip occupies 15.5 × 12.3 mm2. Experimental results show that the chip can work properly at a frame frequency of 60 Hz and provides a 64-level grayscale (monochrome) display. The total power consumption of the chip is about 85 mW with a 3.3 V supply voltage.

  17. Phase information contained in meter-scale SAR images

    NASA Astrophysics Data System (ADS)

    Datcu, Mihai; Schwarz, Gottfried; Soccorsi, Matteo; Chaabouni, Houda

    2007-10-01

    The properties of single look complex SAR satellite images have already been analyzed by many investigators. A common belief is that, apart from inverse SAR methods or polarimetric applications, no information can be gained from the phase of each pixel. This belief is based on the assumption that we obtain uniformly distributed random phases when a sufficient number of small-scale scatterers are mixed in each image pixel. However, the random phase assumption no longer holds for typical high resolution urban remote sensing scenes, where a limited number of prominent human-made scatterers with near-regular shape and sub-meter size lead to correlated phase patterns. If the pixel size shrinks to a critical threshold of about 1 meter, the reflectance of built-up urban scenes becomes dominated by typical metal reflectors, corner-like structures, and multiple scattering. The resulting phases are hard to model, but one can try to classify a scene based on the phase characteristics of neighboring image pixels. We provide a "cooking recipe" for how to analyze existing phase patterns that extend over neighboring pixels.

  18. Simultaneous fluorescence and quantitative phase microscopy with single-pixel detectors

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Suo, Jinli; Zhang, Yuanlong; Dai, Qionghai

    2018-02-01

    Multimodal microscopy offers high flexibility for biomedical observation and diagnosis. Conventional multimodal approaches either use multiple cameras or a single camera spatially multiplexing different modes. The former needs expertise-demanding alignment and the latter suffers from limited spatial resolution. Here, we report an alignment-free, full-resolution, simultaneous fluorescence and quantitative phase imaging approach using single-pixel detectors. By combining reference-free interferometry with single-pixel detection, we encode the phase and fluorescence of the sample in two detection arms at the same time. Then we employ structured illumination and the correlated measurements between the sample and the illuminations for reconstruction. The recovered fluorescence and phase images are inherently aligned thanks to single-pixel detection. To validate the proposed method, we built a proof-of-concept setup, first imaging the phase of etched glass with a depth of a few hundred nanometers and then imaging the fluorescence and phase of a quantum dot drop. This method holds great potential for multispectral fluorescence microscopy with additional single-pixel detectors or a spectrometer. Besides, this cost-efficient multimodal system might find broad applications in biomedical science and neuroscience.
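
    The correlation-based recovery from single-pixel readings can be illustrated with a toy differential ghost-imaging style estimate, as below; the interferometric phase arm and the actual illumination patterns of the paper are not modelled, and the scene is synthetic.

        import numpy as np

        rng = np.random.default_rng(12)
        h = w = 32
        scene = np.zeros((h, w)); scene[10:22, 12:20] = 1.0        # unknown object

        patterns = rng.random((4000, h, w))                        # structured illumination
        readings = np.einsum('nij,ij->n', patterns, scene)         # single-pixel measurements

        # Correlating mean-subtracted readings with the patterns recovers the scene up to scale.
        estimate = np.einsum('n,nij->ij', readings - readings.mean(), patterns) / len(readings)
        estimate = (estimate - estimate.min()) / (estimate.max() - estimate.min())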

  19. A fast event preprocessor for the Simbol-X Low-Energy Detector

    NASA Astrophysics Data System (ADS)

    Schanz, T.; Tenzer, C.; Kendziorra, E.; Santangelo, A.

    2008-07-01

    The Simbol-X Low Energy Detector (LED), a 128 × 128 pixel DEPFET array, will be read out very fast (8000 frames/second). This requires very fast onboard preprocessing of the raw data. We present an FPGA-based Event Preprocessor (EPP) which can fulfill these requirements. The design is developed in the hardware description language VHDL and can later be ported to an ASIC technology. The EPP performs a pixel-related offset correction and can apply different energy thresholds to each pixel of the frame. It also provides a line-related common-mode correction to reduce noise that is unavoidably caused by the analog readout chip of the DEPFET. An integrated pattern detector can block all invalid pixel patterns. The EPP has an internal pipeline structure and can perform all operations in real time (< 2 μs per line of 64 pixels) with a base clock frequency of 100 MHz. It utilizes a fast median-value detection algorithm for the common-mode correction and a new pattern scanning algorithm to select only valid events. Both new algorithms were developed during the last year at our institute.
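
    A floating-point software model of that per-line processing chain might look as follows (per-pixel offset subtraction, median-based common-mode correction per line, per-pixel thresholding); the frame, offset and threshold arrays are synthetic, and the FPGA performs the corresponding steps in fixed-point arithmetic.

        import numpy as np

        rng = np.random.default_rng(13)
        frame      = rng.normal(100.0, 2.0, (128, 128))      # raw DEPFET frame
        offsets    = np.full((128, 128), 100.0)              # per-pixel offset map
        thresholds = np.full((128, 128), 5.0)                # per-pixel energy thresholds

        corrected = frame - offsets
        common_mode = np.median(corrected, axis=1, keepdims=True)   # one value per line
        corrected -= common_mode
        events = corrected > thresholds                      # candidates passed to pattern analysis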

  20. Features of the normal choriocapillaris with OCT-angiography: Density estimation and textural properties.

    PubMed

    Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo

    2017-01-01

    The main objective of our work is to perform an in-depth analysis of the structural features of the normal choriocapillaris imaged with OCT Angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density, and we characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT Angiography we performed simulated measurements of the subfoveal choriocapillaris density with circular ROIs of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure with different ROI radii. We then characterized the textural features of the choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density, respectively. Using the estimated parameters to synthesize new random textures via simulation, we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.
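
    The optimal-radius search lends itself to a short sketch: measure the white-pixel density inside circular ROIs of increasing radius, with small random displacements of the centre, and keep the smallest radius whose density standard deviation drops below 0.01. The binary map, centre and radius grid below are synthetic placeholders.

        import numpy as np

        rng = np.random.default_rng(14)
        binary = rng.random((512, 512)) < 0.52                 # binarized choriocapillaris map
        yy, xx = np.mgrid[0:512, 0:512]
        center = np.array([256.0, 256.0])                      # FAZ centre, in pixels

        def roi_density(c, r):
            mask = (yy - c[0]) ** 2 + (xx - c[1]) ** 2 <= r ** 2
            return binary[mask].mean()

        optimal_radius = None
        for r in range(20, 300, 10):                           # candidate radii (pixels)
            densities = [roi_density(center + rng.uniform(-5, 5, 2), r) for _ in range(50)]
            if np.std(densities) < 0.01:
                optimal_radius = r
                break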

  1. APOLLO_NG - a probabilistic interpretation of the APOLLO legacy for AVHRR heritage channels

    NASA Astrophysics Data System (ADS)

    Klüser, L.; Killius, N.; Gesell, G.

    2015-10-01

    The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds, Land and Ocean) has been in use for cloud detection and cloud property retrieval since the late 1980s. The physics of the APOLLO scheme still form the backbone of a range of cloud detection algorithms for AVHRR (Advanced Very High Resolution Radiometer) heritage instruments. The APOLLO_NG (APOLLO_NextGeneration) cloud processing scheme is a probabilistic interpretation of the original APOLLO method. It builds upon the physical principles that have served well in the original APOLLO scheme. Nevertheless, a couple of additional variables have been introduced in APOLLO_NG. Cloud detection is no longer performed as a binary yes/no decision based on these physical principles. It is instead expressed as a cloud probability for each satellite pixel. Consequently, the outcome of the algorithm can be tuned from reliably identifying clear pixels to reliably identifying definitely cloudy pixels, depending on the purpose. The probabilistic approach allows retrieving not only the cloud properties (optical depth, effective radius, cloud top temperature and cloud water path) but also their uncertainties. APOLLO_NG is designed as a standalone cloud retrieval method robust enough for operational near-real-time use and for application to large amounts of historical satellite data. The radiative transfer solution is approximated by the same two-stream approach that was also used for the original APOLLO. This allows the algorithm to be applied to a wide range of sensors without the necessity of sensor-specific tuning. Moreover, it allows for online calculation of the radiative transfer (i.e., within the retrieval algorithm), giving rise to a detailed probabilistic treatment of cloud variables. This study presents the algorithm for cloud detection and cloud property retrieval together with the physical principles from the APOLLO legacy on which it is based. Furthermore, a few example results from NOAA-18 are presented.

  2. Layered Mantling Deposits in the Northern Mid-Latitudes

    NASA Image and Video Library

    2017-02-22

    Ice-rich mantling deposits accumulate from the atmosphere in the Martian mid-latitudes in cycles during periods of high obliquity (axial tilt), as recently as several million years ago. These deposits accumulate over cycles in layers, and here in the southern mid-latitudes, where the deposits have mostly eroded away due to warmer temperatures, small patches of the remnant layered deposits can still be observed. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 29.5 centimeters (11.6 inches) per pixel (with 1 x 1 binning); objects on the order of 89 centimeters (35 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21462

  3. Bedrock Outcrops in Kaiser Crater

    NASA Image and Video Library

    2017-03-13

    This enhanced-color image from NASA Mars Reconnaissance Orbiter shows a patch of well-exposed bedrock on the floor of Kaiser Crater. The wind has stripped off the overlying soil, and created grooves and scallops in the bedrock. The narrow linear ridges are fractures that have been indurated, probably by precipitation of cementing minerals from groundwater flow. The rippled dark blue patches consist of sand. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.3 centimeters (9.9 inches) per pixel (with 1 x 1 binning); objects on the order of 76 centimeters (29.9 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21559

  4. Crossword: A Fully Automated Algorithm for the Segmentation and Quality Control of Protein Microarray Images

    PubMed Central

    2015-01-01

    Biological assays formatted as microarrays have become a critical tool for the generation of the comprehensive data sets required for systems-level understanding of biological processes. Manual annotation of data extracted from images of microarrays, however, remains a significant bottleneck, particularly for protein microarrays due to the sensitivity of this technology to weak artifact signal. In order to automate the extraction and curation of data from protein microarrays, we describe an algorithm called Crossword that logically combines information from multiple approaches to fully automate microarray segmentation. Automated artifact removal is also accomplished by segregating structured pixels from the background noise using iterative clustering and pixel connectivity. Correlation of the location of structured pixels across image channels is used to identify and remove artifact pixels from the image prior to data extraction. This component improves the accuracy of data sets while reducing the requirement for time-consuming visual inspection of the data. Crossword enables a fully automated protocol that is robust to significant spatial and intensity aberrations. Overall, the average amount of user intervention is reduced by an order of magnitude and the data quality is increased through artifact removal and reduced user variability. The increase in throughput should aid the further implementation of microarray technologies in clinical studies. PMID:24417579
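
    A loose sketch of the "iterative clustering and pixel connectivity" idea for segregating structured pixels is given below, using a two-class k-means on intensity followed by removal of tiny connected components; the cross-channel artifact correlation step of Crossword is not reproduced, and all parameters are illustrative.

        import numpy as np
        from scipy.ndimage import label
        from sklearn.cluster import KMeans

        def structured_pixel_mask(img, min_size=20):
            intensities = img.reshape(-1, 1).astype(float)
            cl = KMeans(n_clusters=2, n_init=3, random_state=0).fit_predict(intensities)
            fg = int(np.argmax([intensities[cl == k].mean() for k in (0, 1)]))   # brighter cluster
            components, _ = label((cl == fg).reshape(img.shape))
            sizes = np.bincount(components.ravel())
            keep_ids = np.where(sizes >= min_size)[0]
            keep_ids = keep_ids[keep_ids != 0]               # label 0 is the background
            return np.isin(components, keep_ids)

        channel = np.random.default_rng(17).random((128, 128))
        mask = structured_pixel_mask(channel)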

  5. Detector Sampling of Optical/IR Spectra: How Many Pixels per FWHM?

    NASA Astrophysics Data System (ADS)

    Robertson, J. Gordon

    2017-08-01

    Most optical and IR spectra are now acquired using detectors with finite-width pixels in a square array. Each pixel records the received intensity integrated over its own area, and pixels are separated by the array pitch. This paper examines the effects of such pixellation, using computed simulations to illustrate the effects which most concern the astronomer end-user. It is shown that coarse sampling increases the random noise errors in wavelength by typically 10-20% at 2 pixels per Full Width at Half Maximum, but with wide variation depending on the functional form of the instrumental Line Spread Function (i.e. the instrumental response to a monochromatic input) and on the pixel phase. If line widths are determined, they are even more strongly affected at low sampling frequencies. However, the noise in fitted peak amplitudes is minimally affected by pixellation, with increases of less than about 5%. Pixellation has a substantial but complex effect on the ability to see a relative minimum between two closely spaced peaks (or a relative maximum between two absorption lines). The consistent scale of resolving power presented by Robertson to overcome the inadequacy of the Full Width at Half Maximum as a resolution measure is here extended to cover pixellated spectra. The systematic bias errors in wavelength introduced by pixellation, independent of signal/noise ratio, are examined. While they may be negligible for smooth, well-sampled, symmetric Line Spread Functions, they are very sensitive to asymmetry and high spatial frequency sub-structure. The Modulation Transfer Function for sampled data is shown to give a useful indication of the extent of improperly sampled signal in a Line Spread Function. The common maxim that 2 pixels per Full Width at Half Maximum is the Nyquist limit is incorrect, and most Line Spread Functions will exhibit some aliasing at this sample frequency. While 2 pixels per Full Width at Half Maximum is nevertheless often an acceptable minimum for moderate signal/noise work, it is preferable to carry out simulations for any actual or proposed Line Spread Function to find the effects of various sampling frequencies. Where spectrograph end-users have a choice of sampling frequencies, through on-chip binning and/or spectrograph configurations, it is desirable that the instrument user manual include an examination of the effects of the various choices.
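
    A small simulation in the same spirit, assuming a Gaussian Line Spread Function sampled at a chosen number of pixels per FWHM (and, for brevity, neglecting integration over the finite pixel area): add noise, fit the centre, and look at the scatter of the fitted position. All numbers are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(x, amp, mu, sigma):
            return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        def centroid_scatter(pixels_per_fwhm, n_trials=300, snr=30.0, seed=15):
            rng = np.random.default_rng(seed)
            sigma = pixels_per_fwhm / 2.3548            # pixel pitch = 1, so FWHM is in pixels
            x = np.arange(-15, 16, 1.0)                 # pixel centres
            errors = []
            for _ in range(n_trials):
                true_mu = rng.uniform(-0.5, 0.5)        # random pixel phase
                data = gaussian(x, 1.0, true_mu, sigma) + rng.normal(0, 1.0 / snr, x.size)
                popt, _ = curve_fit(gaussian, x, data, p0=(1.0, 0.0, sigma))
                errors.append(popt[1] - true_mu)
            return np.std(errors)

        coarse, fine = centroid_scatter(2.0), centroid_scatter(4.0)   # compare two samplings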

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz has been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout, which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. Lastly, we detail the characteristics, operation, testing and application of the detector.

  7. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, S.; Cole, D. M.; Hancock, B. R.; Smith, R. M.

    2008-01-01

    Electronic coupling effects such as Inter-Pixel Capacitance (IPC) affect the quantitative interpretation of image data from CMOS, hybrid visible and infrared imagers alike. Existing methods of characterizing IPC do not provide a map of the spatial variation of IPC over all pixels. We demonstrate a deterministic method that provides a direct quantitative map of the crosstalk across an imager. The approach requires only the ability to reset single pixels to an arbitrary voltage, different from the rest of the imager. No illumination source is required. Mapping IPC independently for each pixel is also made practical by the greater S/N ratio achievable for an electrical stimulus than for an optical stimulus, which is subject to both Poisson statistics and diffusion effects of photo-generated charge. The data we present illustrates a more complex picture of IPC in Teledyne HgCdTe and HyViSi focal plane arrays than is presently understood, including the presence of a newly discovered, long range IPC in the HyViSi FPA that extends tens of pixels in distance, likely stemming from extended field effects in the fully depleted substrate. The sensitivity of the measurement approach has been shown to be good enough to distinguish spatial structure in IPC of the order of 0.1%.
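
    A sketch of the kind of per-pixel crosstalk map this method yields is given below: given a baseline-subtracted frame in which a single pixel has been reset to a different voltage, the local coupling kernel is just the normalized window around that pixel. The window size, the normalization convention, and the synthetic 2% nearest-neighbour example are assumptions for illustration, not values from the measurements reported.

```python
import numpy as np

def ipc_kernel(frame, reset_rc, half=2):
    """Estimate a local inter-pixel coupling kernel around one reset pixel:
    cut a (2*half+1)^2 window out of the baseline-subtracted frame, centred
    on the reset pixel, and normalise it to unit sum.  Signal appearing in
    the neighbours is crosstalk."""
    r, c = reset_rc
    win = frame[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    return win / win.sum()

# Toy example: a synthetic frame with 2% nearest-neighbour coupling.
frame = np.zeros((32, 32))
frame[16, 16] = 0.92
for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    frame[16 + dr, 16 + dc] = 0.02

k = ipc_kernel(frame, (16, 16))
print(k[1:4, 1:4])                                  # 3x3 core of the kernel
print("centre:", k[2, 2], "nearest neighbour:", k[2, 3])
```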

  8. Reduced signal crosstalk multi neurotransmitter image sensor by microhole array structure

    NASA Astrophysics Data System (ADS)

    Ogaeri, Yuta; Lee, You-Na; Mitsudome, Masato; Iwata, Tatsuya; Takahashi, Kazuhiro; Sawada, Kazuaki

    2018-06-01

    A microhole array structure combined with an enzyme immobilization method using magnetic beads can enhance the target discernment capability of a multi-neurotransmitter image sensor. Here we report the fabrication and evaluation of the H+-diffusion-preventing capability of the sensor with the array structure. The structure, formed in SU-8 photoresist, has holes with a size of 24.5 × 31.6 µm^2. Sensors were prepared with array structures of three different heights: 0, 15, and 60 µm. When the sensor has the 60 µm-high structure, the output voltage measured at an H+-sensitive null pixel located 75 µm from the acetylcholinesterase (AChE)-immobilized pixel, which is the starting point of H+ diffusion, is reduced by 48%. The suppressed H+ migration is shown in a two-dimensional (2D) image in real time. The sensor parameters, such as the height of the array structure and the measuring time, are optimized experimentally. The sensor is expected to effectively distinguish various neurotransmitters in biological samples.

  9. Comparison of individual and composite field analysis using array detector for Intensity Modulated Radiotherapy dose verification.

    PubMed

    Saminathan, Sathiyan; Chandraraj, Varatharaj; Sridhar, C H; Manickam, Ravikumar

    2012-01-01

    To compare the measured and calculated individual and composite field planar dose distributions of Intensity Modulated Radiotherapy plans. The measurements were performed on a Clinac DHX linear accelerator with 6 MV photons using a Matrixx device and a solid water phantom. Twenty brain tumor patients were selected for this study. IMRT plans were carried out for all the patients using the Eclipse treatment planning system. A verification plan was produced for every original plan using a CT scan of the Matrixx embedded in the phantom. Every verification field was measured by the Matrixx. The TPS-calculated and measured dose distributions were compared for individual and composite fields. The percentage of gamma pixel match for the dose distribution patterns was evaluated using a gamma histogram. The gamma pixel match was 95-98% for 41 fields (39%) and 98% for 59 fields (61%) with individual fields. The percentage of gamma pixel match was 95-98% for 5 patients and 98% for the other 12 patients with composite fields. Three patients showed a gamma pixel match of less than 95%. The comparison of percentage gamma pixel match for individual and composite fields showed more than 2.5% variation for 6 patients and more than 1% variation for 4 patients, while the remaining 10 patients showed less than 1% variation. The individual and composite field measurements showed good agreement with the TPS-calculated dose distributions for the studied patients. Because measurement and data analysis for individual fields is a time-consuming process, composite field analysis may be sufficient for small-field dose distribution analysis with array detectors.
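
    The "percentage of gamma pixel match" quoted above comes from a gamma-index comparison of measured and calculated dose planes. The brute-force sketch below illustrates that comparison in its simplest global form; the 3%/3 mm criteria, the 10% low-dose threshold, and the 1 mm grid pitch are common illustrative choices, not necessarily those used in this study.

```python
import numpy as np

def gamma_pass_rate(ref, meas, pitch_mm=1.0, dose_tol=0.03, dist_tol_mm=3.0,
                    threshold=0.10):
    """Brute-force global 2-D gamma analysis (dose tolerance relative to the
    reference maximum).  Returns the fraction of evaluated pixels with
    gamma <= 1; pixels below `threshold` of the max dose are ignored."""
    ref = np.asarray(ref, float)
    meas = np.asarray(meas, float)
    dmax = ref.max()
    search = int(np.ceil(dist_tol_mm / pitch_mm)) + 1
    rows, cols = ref.shape
    passed = total = 0
    for i in range(rows):
        for j in range(cols):
            if meas[i, j] < threshold * dmax:
                continue
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < rows and 0 <= jj < cols):
                        continue
                    dd = (meas[i, j] - ref[ii, jj]) / (dose_tol * dmax)
                    dr = pitch_mm * np.hypot(di, dj) / dist_tol_mm
                    best = min(best, dd * dd + dr * dr)   # squared gamma
            total += 1
            passed += (best <= 1.0)
    return passed / max(total, 1)
```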

  10. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.

    PubMed

    Zhang, Xiangjun; Wu, Xiaolin

    2008-06-01

    The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.
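
    A minimal sketch of the core idea, under stated assumptions: fit a small autoregressive model to the available low-resolution pixels in a local window by least squares, then use the fitted coefficients to predict a missing high-resolution pixel from its diagonal neighbours. The four-tap diagonal model and the patch conventions are illustrative; the paper's soft-decision step, which estimates whole blocks of pixels jointly, is omitted here.

```python
import numpy as np

def fit_ar_params(window):
    """Least-squares fit of a 4-tap autoregressive model in a local window:
    each interior pixel is modelled as a weighted sum of its four diagonal
    neighbours.  Returns the 4 AR coefficients."""
    rows, cols = window.shape
    A, b = [], []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            A.append([window[i - 1, j - 1], window[i - 1, j + 1],
                      window[i + 1, j - 1], window[i + 1, j + 1]])
            b.append(window[i, j])
    coef, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return coef

def predict_missing(lr_patch_2x2, coef):
    """Predict the missing pixel at the centre of a 2x2 low-resolution
    neighbourhood using the fitted diagonal AR coefficients."""
    neighbours = np.array([lr_patch_2x2[0, 0], lr_patch_2x2[0, 1],
                           lr_patch_2x2[1, 0], lr_patch_2x2[1, 1]], float)
    return float(coef @ neighbours)
```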

  11. Ferrocene pixels by laser-induced forward transfer: towards flexible microelectrode printing

    NASA Astrophysics Data System (ADS)

    Mitu, B.; Matei, A.; Filipescu, M.; Palla Papavlu, A.; Bercea, A.; Lippert, T.; Dinescu, M.

    2017-03-01

    The aim of this work is to demonstrate the potential of laser-induced forward transfer (LIFT) as a printing technology, an alternative to standard microfabrication techniques, in the area of flexible micro-electrode fabrication. First, ferrocene thin films are deposited by matrix-assisted pulsed laser evaporation (MAPLE) onto fused silica substrates, both bare and previously coated with a photodegradable polymer film (triazene polymer). The morphology and chemical structure of the ferrocene thin films deposited by MAPLE have been investigated by atomic force microscopy and Fourier transform infrared spectroscopy, and no structural damage occurs as a result of the laser deposition. Second, LIFT is applied to print for the first time ferrocene pixels and lines onto flexible polydimethylsiloxane (PDMS) substrates. The ferrocene pixels and lines are flawlessly transferred onto the PDMS substrates in air at room temperature, without the need for additional conventional photolithography processes. We believe that these results are very promising for a variety of applications ranging from flexible electronics to lab-on-a-chip devices, MEMS, and medical implants.

  12. Fully depleted CMOS pixel sensor development and potential applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudot, J.; Kachel, M.; CNRS, UMR7178, 67037 Strasbourg

    CMOS pixel sensors are often contrasted with hybrid pixel sensors because of their very different sensitive layers. In standard CMOS imaging processes, a thin (about 20 μm) low-resistivity epitaxial layer acts as the sensitive volume and charge collection is mostly driven by thermal agitation. In contrast, the so-called hybrid pixel technology exploits a thick (typically 300 μm) silicon sensor with high resistivity allowing for depletion of this volume, hence charges drift toward the collecting electrodes. But this difference is fading away with the recent availability of some CMOS imaging processes based on a relatively thick (about 50 μm) high-resistivity epitaxial layer which allows for full depletion. This evolution extends the range of applications for CMOS pixel sensors where their known assets, high sensitivity and granularity combined with embedded signal treatment, could potentially foster breakthroughs in detection performance for specific scientific instruments. One such domain is X-ray detection at soft energies, typically below 10 keV, where the thin sensitive layer was previously severely impeding CMOS sensor usage. Another application becoming realistic for CMOS sensors is detection in environments with a high fluence of non-ionizing radiation, such as hadron colliders. However, for highly demanding applications it still has to be proven that the micro-circuits required to uniformly deplete the sensor at the pixel level do not compromise the required sensitivity and efficiency. Prototype sensors in two different technologies, with resistivity higher than 1 kΩ·cm, a sensitive layer between 40 and 50 μm, and pixel pitches in the range 25 to 50 μm, have been designed and fabricated. Various biasing architectures were adopted to reach full depletion with only a few volts. Laboratory investigations with three types of sources (X-rays, β-rays and infrared light) demonstrated the validity of the approach with respect to depletion, while keeping a low noise figure. In particular, an energy resolution of about 400 eV for 5 keV X-rays was obtained for single pixels. The prototypes were then exposed to gradually increased fluences of neutrons, from 10^13 to 5×10^14 neq/cm^2. Laboratory tests again allowed the persistence of the signal over noise to be evaluated for the different pixel designs implemented. Currently our development mostly targets the detection of soft X-rays, with the ambition to develop a pixel sensor matching the counting rates affordable with hybrid pixel sensors, but with extended sensitivity to low energies and a finer pixel pitch of about 25 × 25 μm^2. The original readout architecture proposed relies on a two-tier chip. The first tier consists of a sensor with a modest dynamic range in order to ensure the low-noise performance required for sensitivity. The interconnected second-tier chip enhances the readout speed by introducing massive parallelization. The performance reachable with this strategy, combining counting and integration, will be detailed. (authors)
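
    A back-of-the-envelope check of why high resistivity matters here (an illustrative textbook calculation, not from the report): for a one-sided abrupt junction the depletion depth scales as W = sqrt(2·ε·μ·ρ·V), so kΩ·cm-class material lets a few volts deplete several tens of microns. The electron mobility value and the simple formula are standard assumptions.

```python
import math

EPS_SI = 11.7 * 8.854e-14      # F/cm, permittivity of silicon
MU_N = 1350.0                  # cm^2/(V*s), electron mobility (n-type assumed)

def depletion_depth_um(resistivity_ohm_cm, bias_v):
    """Approximate depletion depth W = sqrt(2*eps*mu*rho*V) for a one-sided
    abrupt junction; returns microns."""
    w_cm = math.sqrt(2.0 * EPS_SI * MU_N * resistivity_ohm_cm * bias_v)
    return w_cm * 1e4

for v in (1, 3, 6):
    print(f"{v} V on 1 kOhm*cm material -> ~{depletion_depth_um(1000, v):.0f} um depleted")
```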

  13. Photon Counting Energy Dispersive Detector Arrays for X-ray Imaging

    PubMed Central

    Iwanczyk, Jan S.; Nygård, Einar; Meirav, Oded; Arenson, Jerry; Barber, William C.; Hartsough, Neal E.; Malakhov, Nail; Wessel, Jan C.

    2009-01-01

    The development of an innovative detector technology for photon-counting in X-ray imaging is reported. This new generation of detectors, based on pixellated cadmium telluride (CdTe) and cadmium zinc telluride (CZT) detector arrays electrically connected to application specific integrated circuits (ASICs) for readout, will produce fast and highly efficient photon-counting and energy-dispersive X-ray imaging. There are a number of applications that can greatly benefit from these novel imagers including mammography, planar radiography, and computed tomography (CT). Systems based on this new detector technology can provide compositional analysis of tissue through spectroscopic X-ray imaging, significantly improve overall image quality, and may significantly reduce X-ray dose to the patient. A very high X-ray flux is utilized in many of these applications. For example, CT scanners can produce ~100 Mphotons/mm^2/s in the unattenuated beam. High flux is required in order to collect sufficient photon statistics in the measurement of the transmitted flux (attenuated beam) during the very short time frame of a CT scan. This high count rate combined with a need for high detection efficiency requires the development of detector structures that can provide a response signal much faster than the transit time of carriers over the whole detector thickness. We have developed CdTe and CZT detector array structures which are 3 mm thick with 16×16 pixels and a 1 mm pixel pitch. These structures, in the two different implementations presented here, utilize either a small pixel effect or a drift phenomenon. An energy resolution of 4.75% at 122 keV has been obtained with a 30 ns peaking time using discrete electronics and a 57Co source. An output rate of 6×10^6 counts per second per individual pixel has been obtained with our ASIC readout electronics and a clinical CT X-ray tube. Additionally, the first clinical CT images, taken with several of our prototype photon-counting and energy-dispersive detector modules, are shown. PMID:19920884

  14. Photon Counting Energy Dispersive Detector Arrays for X-ray Imaging.

    PubMed

    Iwanczyk, Jan S; Nygård, Einar; Meirav, Oded; Arenson, Jerry; Barber, William C; Hartsough, Neal E; Malakhov, Nail; Wessel, Jan C

    2009-01-01

    The development of an innovative detector technology for photon-counting in X-ray imaging is reported. This new generation of detectors, based on pixellated cadmium telluride (CdTe) and cadmium zinc telluride (CZT) detector arrays electrically connected to application specific integrated circuits (ASICs) for readout, will produce fast and highly efficient photon-counting and energy-dispersive X-ray imaging. There are a number of applications that can greatly benefit from these novel imagers including mammography, planar radiography, and computed tomography (CT). Systems based on this new detector technology can provide compositional analysis of tissue through spectroscopic X-ray imaging, significantly improve overall image quality, and may significantly reduce X-ray dose to the patient. A very high X-ray flux is utilized in many of these applications. For example, CT scanners can produce ~100 Mphotons/mm(2)/s in the unattenuated beam. High flux is required in order to collect sufficient photon statistics in the measurement of the transmitted flux (attenuated beam) during the very short time frame of a CT scan. This high count rate combined with a need for high detection efficiency requires the development of detector structures that can provide a response signal much faster than the transit time of carriers over the whole detector thickness. We have developed CdTe and CZT detector array structures which are 3 mm thick with 16×16 pixels and a 1 mm pixel pitch. These structures, in the two different implementations presented here, utilize either a small pixel effect or a drift phenomenon. An energy resolution of 4.75% at 122 keV has been obtained with a 30 ns peaking time using discrete electronics and a (57)Co source. An output rate of 6×10(6) counts per second per individual pixel has been obtained with our ASIC readout electronics and a clinical CT X-ray tube. Additionally, the first clinical CT images, taken with several of our prototype photon-counting and energy-dispersive detector modules, are shown.

  15. Micro-pixelation and color mixing in biological photonic structures (presentation video)

    NASA Astrophysics Data System (ADS)

    Bartl, Michael H.; Nagi, Ramneet K.

    2014-03-01

    The world of insects displays myriad hues of coloration effects produced by elaborate nano-scale architectures built into wings and exoskeleton. For example, we have recently found many weevils possess photonic architectures with cubic lattices. In this talk, we will present high-resolution three-dimensional reconstructions of weevil photonic structures with diamond and gyroid lattices. Moreover, by reconstructing entire scales we found arrays of single-crystalline domains, each oriented such that only selected crystal faces are visible to an observer. This pixel-like arrangement is key to the angle-independent coloration typical of weevils—a strategy that could enable a new generation of coating technologies.

  16. Surface-Micromachined Planar Arrays of Thermopiles

    NASA Technical Reports Server (NTRS)

    Foote, Marc C.

    2003-01-01

    Planar two-dimensional arrays of thermopiles intended for use as thermal-imaging detectors are to be fabricated by a process that includes surface micromachining. These thermopile arrays are designed to perform better than prior two-dimensional thermopile arrays. The lower performance of prior two-dimensional thermopile arrays is attributed to the following causes: the thermopiles are made from low-performance thermoelectric materials; the devices contain dielectric supporting structures, the thermal conductances of which give rise to parasitic losses of heat from detectors to substrates; the bulk-micromachining processes sometimes used to remove substrate material under the pixels make it difficult to incorporate low-noise readout electronic circuitry; and the thermoelectric lines are on the same level as the infrared absorbers, thereby reducing the fill factor. The improved pixel design of a thermopile array of the type under development is expected to afford enhanced performance by virtue of the following combination of features: surface-micromachined detectors are thermally isolated through suspension above the readout circuitry; the thermopiles are made of such high-performance thermoelectric materials as Bi-Te and Bi-Sb-Te alloys; and pixel structures are supported only by the thermoelectric materials, so there are no supporting dielectric structures that could leak heat by conduction to the substrate.

  17. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
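
    The sampling scheme is easy to prototype. The sketch below builds a binary measurement mask in the spirit described: random centre pixels are chosen, then nearby pixels are sampled with a probability that decays with distance from the centre. The Gaussian fall-off and all parameter values are illustrative assumptions; the paper explores the parameter space rather than fixing one choice.

```python
import numpy as np

def localized_random_mask(shape, n_centers, sigma, samples_per_center, seed=None):
    """Binary sampling mask: pick random centre pixels, then sample nearby
    pixels with probability falling off as a Gaussian of the distance from
    the centre (the fall-off law and parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    rows, cols = shape
    mask = np.zeros(shape, dtype=bool)
    yy, xx = np.mgrid[0:rows, 0:cols]
    for _ in range(n_centers):
        cy, cx = rng.integers(rows), rng.integers(cols)
        mask[cy, cx] = True
        prob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
        prob[cy, cx] = 0.0                       # centre is already sampled
        prob /= prob.sum()
        picks = rng.choice(rows * cols, size=samples_per_center, replace=False,
                           p=prob.ravel())
        mask.ravel()[picks] = True
    return mask

mask = localized_random_mask((64, 64), n_centers=40, sigma=2.5, samples_per_center=8)
print("sampled fraction:", mask.mean())
```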

  18. Origin of coloration in beetle scales: An optical and structural investigation

    NASA Astrophysics Data System (ADS)

    Nagi, Ramneet Kaur

    In this thesis the origin of angle-independent yellowish-green coloration of the exoskeleton of a beetle was studied. The beetle chosen was a weevil with the Latin name Eupholus chevrolati. The origin of this weevil's coloration was investigated by optical and structural characterization techniques, including optical microscopy, scanning electron microscopy imaging and focused ion beam milling, combined with three-dimensional modeling and photonic band structure calculations. Furthermore, using color theory the pixel-like coloring of the weevil's exoskeleton was investigated and an interesting additive color mixing scheme was discovered. For optical studies, a microreflectance microscopy/spectroscopy set-up was optimized. This set-up allowed not only for imaging of individual colored exoskeleton domains with sizes ~2-10 μm, but also for obtaining reflection spectra of these micrometer-sized domains. Spectra were analyzed in terms of reflection intensity and wavelength position and shape of the reflection features. To find the origin of these colored exoskeleton spots, a combination of focused ion beam milling and scanning electron microscopy imaging was employed. A three-dimensional photonic crystal in the form of a face-centered cubic lattice of ABC-stacked air cylinders in a biopolymeric cuticle matrix was discovered. Our photonic band structure calculations revealed the existence of different sets of stop-gaps for the lattice constant of 360, 380 and 400 nm in the main lattice directions, Γ-L, Γ-X, Γ-U, Γ-W and Γ-K. In addition, scanning electron microscopy images were compared to the specific directional-cuts through the constructed face-centered cubic lattice-based model and the optical micrographs of individual domains to determine the photonic structure corresponding to the different lattice directions. The three-dimensional model revealed stop-gaps in the Γ-L, Γ-W and Γ-K directions. Finally, the coloration of the weevil as perceived by an unaided human eye was represented (mathematically) on the xy-chromaticity diagram, based on the XYZ color space developed by CIE (Commission Internationale de l'Eclairage), using the micro-reflectance spectroscopy measurements. The results confirmed the additive mixing of various colors produced by differently oriented photonic crystal domains present in the weevil's exoskeleton scales, as a reason for the angle-independent dull yellowish-green coloration of the weevil E. chevrolati.

  19. X-ray characterization of a multichannel smart-pixel array detector.

    PubMed

    Ross, Steve; Haji-Sheikh, Michael; Huntington, Andrew; Kline, David; Lee, Adam; Li, Yuelin; Rhee, Jehyuk; Tarpley, Mary; Walko, Donald A; Westberg, Gregg; Williams, George; Zou, Haifeng; Landahl, Eric

    2016-01-01

    The Voxtel VX-798 is a prototype X-ray pixel array detector (PAD) featuring a silicon sensor photodiode array of 48 × 48 pixels, each 130 µm × 130 µm × 520 µm thick, coupled to a CMOS readout application specific integrated circuit (ASIC). The first synchrotron X-ray characterization of this detector is presented, and its ability to selectively count individual X-rays within two independent arrival time windows, a programmable energy range, and localized to a single pixel is demonstrated. During our first trial run at Argonne National Laboratory's Advanced Photon Source, the detector achieved a 60 ns gating time and 700 eV full width at half-maximum energy resolution in agreement with design parameters. Each pixel of the PAD holds two independent digital counters, and the discriminator for X-ray energy features both an upper and lower threshold to window the energy of interest, discarding unwanted background. This smart-pixel technology allows energy and time resolution to be set and optimized in software. It is found that the detector linearity follows an isolated dead-time model, implying that megahertz count rates should be possible in each pixel. Measurement of the line and point spread functions showed negligible spatial blurring. When combined with the timing structure of the synchrotron storage ring, it is demonstrated that the area detector can perform both picosecond time-resolved X-ray diffraction and fluorescence spectroscopy measurements.
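
    The isolated (non-paralyzable) dead-time model mentioned above has a simple closed form, sketched below; the 60 ns value is simply the reported gating time reused as an illustrative dead time, not a measured detector parameter.

```python
def measured_rate(true_rate, dead_time_s):
    """Non-paralyzable ('isolated') dead-time model: m = n / (1 + n*tau)."""
    return true_rate / (1.0 + true_rate * dead_time_s)

def corrected_rate(measured, dead_time_s):
    """Invert the model to recover the true per-pixel count rate."""
    return measured / (1.0 - measured * dead_time_s)

tau = 60e-9          # illustrative dead time, reusing the reported 60 ns gate
for n in (1e5, 1e6, 5e6):
    m = measured_rate(n, tau)
    print(f"true {n:9.0f} cps -> measured {m:9.0f} cps -> corrected {corrected_rate(m, tau):9.0f} cps")
```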

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deptuch, Grzegorz W.; Gabriella, Carini; Enquist, Paul

    The vertically integrated photon imaging chip (VIPIC1) pixel detector is a stack consisting of a 500-μm-thick silicon sensor, a two-tier 34-μm-thick integrated circuit, and a host printed circuit board (PCB). The integrated circuit tiers were bonded using direct bonding technology with copper, and each tier features 1-μm-diameter through-silicon vias that were used for connections to the sensor on one side and to the host PCB on the other side. The 80-μm-pixel-pitch sensor was bonded to the integrated circuit using direct bonding technology with nickel. The stack was mounted on the board using Sn–Pb balls placed on a 320-μm pitch, yielding an entirely wire-bond-less structure. The analog front-end features a pulse response peaking at below 250 ns, and the power consumption per pixel is 25 μW. We successfully completed the 3-D integration, as reported here. Additionally, all pixels in the 64 × 64 matrix were responding on well-bonded devices. Correct operation of the sparsified readout, allowing a single 153-ns bunch timing resolution, was confirmed in tests on a synchrotron beam of 10-keV X-rays. An equivalent noise charge of 36.2 e- rms and a conversion gain of 69.5 μV/e- with 2.6 e- rms and 2.7 μV/e- rms pixel-to-pixel variations, respectively, were measured.

  1. Low-power priority Address-Encoder and Reset-Decoder data-driven readout for Monolithic Active Pixel Sensors for tracker system

    NASA Astrophysics Data System (ADS)

    Yang, P.; Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Gao, C.; Hillemanns, H.; Junique, A.; Kofarago, M.; Keil, M.; Kugathasan, T.; Kim, D.; Kim, J.; Lattuca, A.; Marin Tobon, C. A.; Marras, D.; Mager, M.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Siddhanta, S.; Usai, G.; van Hoorne, J. W.; Yi, J.

    2015-06-01

    Active Pixel Sensors used in High Energy Particle Physics require low power consumption to reduce the detector material budget, low integration time to reduce the possibility of pile-up, and fast readout to improve the detector data capability. To satisfy these requirements, a novel Address-Encoder and Reset-Decoder (AERD) asynchronous circuit for fast readout of a pixel matrix has been developed. The AERD data-driven readout architecture operates the address encoding and reset decoding based on an arbitration tree, and allows us to read out only the hit pixels. Compared to the traditional readout structure of the rolling shutter scheme in Monolithic Active Pixel Sensors (MAPS), AERD can achieve a low readout time and a low power consumption, especially for low hit occupancies. The readout is controlled at the chip periphery with a signal synchronous with the clock, which allows good separation of digital and analogue signals in the matrix and a reduction of the power consumption. The AERD circuit has been implemented in the TowerJazz 180 nm CMOS Imaging Sensor (CIS) process with full complementary CMOS logic in the pixel. It works at 10 MHz with a matrix height of 15 mm. The energy consumed to read out one pixel is around 72 pJ. A scheme to boost the readout speed to 40 MHz is also discussed. The sensor chip equipped with AERD has been produced and characterised. Test results, including electrical and beam measurements, are presented.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng

    Purpose: In using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by change in detector entrance exposure levels or beam hardening, and they are not accounted for by conventional flat field correction methods. In this work, the authors presented a method to identify isolated pixel clusters that exhibit gain variations and proposed a pixel gain correction (PGC) method to suppress both beam hardening and exposure level dependent gain variations. Methods: To modulate both beam spectrum and entrance exposure, flood field FPD projections were acquired using beam filters with varying thicknesses. “Ideal” pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratio of ideal to measured pixel values in filtered images was utilized as a pixel-specific gain correction factor, referred to as the PGC method, and these factors were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels lead to clustered image artifacts in flat field corrected projections, where 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of the PGC method was tested in a variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach. When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of change in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.
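
    A sketch of the correction step described in the Methods, under stated assumptions: for one pixel, the ratio of fitted "ideal" to measured values from the filtered flood-field series is tabulated against the measured pixel value and then applied as a value-dependent gain. The bin count, the interpolation of empty bins, and the function names are illustrative, not the authors' implementation.

```python
import numpy as np

def build_pgc_lut(measured, ideal, n_bins=64):
    """Tabulate a pixel-specific gain correction factor (ideal/measured) as a
    function of the measured pixel value.  `measured` and `ideal` hold the
    flood-field series for one pixel, shape (n_frames,)."""
    ratio = ideal / measured
    bins = np.linspace(measured.min(), measured.max(), n_bins + 1)
    idx = np.clip(np.digitize(measured, bins) - 1, 0, n_bins - 1)
    lut = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            lut[b] = ratio[sel].mean()
    # fill empty bins by interpolation so the LUT is defined everywhere
    centres = 0.5 * (bins[:-1] + bins[1:])
    valid = ~np.isnan(lut)
    lut = np.interp(centres, centres[valid], lut[valid])
    return bins, lut

def apply_pgc(value, bins, lut):
    """Correct a raw projection value for this pixel using its LUT."""
    b = np.clip(np.digitize(value, bins) - 1, 0, len(lut) - 1)
    return value * lut[b]
```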

  3. Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications

    PubMed Central

    Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2016-01-01

    We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 μm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 μm in x,y-plane and ~0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 μm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 μm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods. PMID:23079763

  4. Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications.

    PubMed

    Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2012-11-21

    We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 µm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 µm in x,y-plane and ∼0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 µm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 µm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods.

  5. Noise and spectroscopic performance of DEPMOSFET matrix devices for XEUS

    NASA Astrophysics Data System (ADS)

    Treis, J.; Fischer, P.; Hälker, O.; Herrmann, S.; Kohrs, R.; Krüger, H.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Strüder, L.; Trimpl, M.; Wermes, N.; Wölfel, S.

    2005-08-01

    DEPMOSFET based Active Pixel Sensor (APS) matrix devices, originally developed to cope with the challenging requirements of the XEUS Wide Field Imager, have proven to be a promising new imager concept for a variety of future X-ray imaging and spectroscopy missions like Simbol-X. The devices combine excellent energy resolution, high speed readout and low power consumption with the attractive feature of random accessibility of pixels. A production of sensor prototypes with 64 x 64 pixels with a size of 75 μm x 75 μm each has recently been finished at the MPI semiconductor laboratory in Munich. The devices are built for row-wise readout and require dedicated control and signal processing electronics of the CAMEX type, which is integrated together with the sensor onto a readout hybrid. A number of hybrids incorporating the most promising sensor design variants has been built, and their performance has been studied in detail. A spectroscopic resolution of 131 eV has been measured, the readout noise is as low as 3.5 e- ENC. Here, the dependence of readout noise and spectroscopic resolution on the device temperature is presented.
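
    For context on the numbers quoted above, the expected spectroscopic FWHM of a silicon detector is the quadrature sum of Fano statistics and electronic noise. The worked example below assumes the 131 eV figure refers to a 5.9 keV Mn-Kα line (the abstract does not state the line) and uses textbook values for the pair-creation energy and Fano factor; with the reported 3.5 e- ENC it lands close to, and slightly below, the measured value.

```python
import math

W_SI = 3.65        # eV per electron-hole pair in silicon (textbook value)
FANO = 0.115       # Fano factor for silicon (typical literature value)

def fwhm_ev(energy_ev, enc_electrons):
    """Expected FWHM = 2.355 * w * sqrt(F*E/w + ENC^2), i.e. Fano statistics
    added in quadrature with electronic noise."""
    return 2.355 * W_SI * math.sqrt(FANO * energy_ev / W_SI + enc_electrons ** 2)

print(round(fwhm_ev(5900, 3.5), 1), "eV FWHM at 5.9 keV")   # roughly 121 eV
```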

  6. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods. 10; Chapter

    NASA Technical Reports Server (NTRS)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference though is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification steps. Within this chapter, each of the four approaches is described in terms of scale and accuracy in classifying urban land use and urban land cover, and for its range of urban applications. We demonstrate the overview of four main classification groups in Figure 1 while Table 1 details the approaches with respect to classification requirements and procedures (e.g., reflectance conversion, steps before training sample selection, training samples, spatial approaches commonly used, classifiers, primary inputs for classification, output structures, number of output layers, and accuracy assessment). The chapter concludes with a brief summary of the methods reviewed and the challenges that remain in developing new classification methods for improving the efficiency and accuracy of mapping urban areas.
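
    As a concrete instance of the per-pixel indices mentioned above, the sketch below computes NDVI from near-infrared and red reflectance bands; the toy 2×2 arrays are made-up values purely for illustration.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Per-pixel Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 example: bright-NIR vegetated pixels score near +1.
nir = np.array([[0.60, 0.55], [0.20, 0.10]])
red = np.array([[0.10, 0.12], [0.18, 0.09]])
print(ndvi(nir, red))
```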

  7. Self-Organizing-Map Program for Analyzing Multivariate Data

    NASA Technical Reports Server (NTRS)

    Li, P. Peggy; Jacob, Joseph C.; Block, Gary L.; Braverman, Amy J.

    2005-01-01

    SOM_VIS is a computer program for analysis and display of multidimensional sets of Earth-image data typified by the data acquired by the Multi-angle Imaging Spectro-Radiometer [MISR (a spaceborne instrument)]. In SOM_VIS, an enhanced self-organizing-map (SOM) algorithm is first used to project a multidimensional set of data into a nonuniform three-dimensional lattice structure. The lattice structure is mapped to a color space to obtain a color map for an image. The Voronoi cell-refinement algorithm is used to map the SOM lattice structure to various levels of color resolution. The final result is a false-color image in which similar colors represent similar characteristics across all its data dimensions. SOM_VIS provides a control panel for selection of a subset of suitably preprocessed MISR radiance data, and a control panel for choosing parameters to run SOM training. SOM_VIS also includes a component for displaying the false-color SOM image, a color map for the trained SOM lattice, a plot showing an original input vector in 36 dimensions of a selected pixel from the SOM image, the SOM vector that represents the input vector, and the Euclidean distance between the two vectors.
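
    For readers unfamiliar with the underlying algorithm, the sketch below is a minimal classic self-organizing map trained on multidimensional pixel vectors with a 3-D lattice, echoing the three-dimensional lattice used by SOM_VIS; the lattice size, learning-rate schedule, and neighbourhood function are generic textbook choices, and the program's enhanced SOM and Voronoi cell-refinement steps are not reproduced here.

```python
import numpy as np

def train_som(data, grid=(8, 8, 8), iters=5000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map.  `data` has shape (n_samples, n_dims);
    the lattice is a 3-D grid of codebook vectors."""
    rng = np.random.default_rng(seed)
    n_dims = data.shape[1]
    nodes = np.stack(np.meshgrid(*[np.arange(g) for g in grid], indexing="ij"), -1)
    nodes = nodes.reshape(-1, 3).astype(float)              # lattice coordinates
    weights = rng.normal(size=(nodes.shape[0], n_dims))     # codebook vectors
    for t in range(iters):
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(1))        # best-matching unit
        d2 = ((nodes - nodes[bmu]) ** 2).sum(1)
        h = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood function
        weights += lr * h[:, None] * (x - weights)
    return nodes, weights
```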

  8. Automated artery-venous classification of retinal blood vessels based on structural mapping method

    NASA Astrophysics Data System (ADS)

    Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.

    2012-03-01

    Retinal blood vessels show morphologic modifications in response to various retinopathies. However, the specific responses exhibited by arteries and veins may provide precise diagnostic information; i.e., diabetic retinopathy may be detected more accurately using venous dilatation instead of average vessel dilatation. In order to analyze the vessel-type-specific morphologic modifications, the classification of a vessel network into arteries and veins is required. We previously described a method for identification and separation of retinal vessel trees, i.e. structural mapping. Therefore, we propose an artery-venous classification based on structural mapping and identification of color properties prominent to the vessel types. The mean and standard deviation of the green channel intensity and of the hue channel intensity are analyzed in a region of interest around each centerline pixel of a vessel. Using the vector of color properties extracted from each centerline pixel, the pixel is classified into one of two clusters (artery and vein) obtained by fuzzy C-means clustering. According to the proportion of clustered centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each vessel is assigned a label of artery or vein. The classification results are compared with the manually annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus images, resulting in an accuracy of 88.28% correctly classified vessel pixels. The automated classification results match well with the gold standard, suggesting its potential in artery-venous classification and the respective morphology analysis.

  9. Optical sectioning in wide-field microscopy obtained by dynamic structured light illumination and detection based on a smart pixel detector array.

    PubMed

    Mitić, Jelena; Anhut, Tiemo; Meier, Matthias; Ducros, Mathieu; Serov, Alexander; Lasser, Theo

    2003-05-01

    Optical sectioning in wide-field microscopy is achieved by illumination of the object with a continuously moving single-spatial-frequency pattern and detecting the image with a smart pixel detector array. This detector performs an on-chip electronic signal processing that extracts the optically sectioned image. The optically sectioned image is directly observed in real time without any additional postprocessing.

  10. Viewing-zone enlargement method for sampled hologram that uses high-order diffraction.

    PubMed

    Mishina, Tomoyuki; Okui, Makoto; Okano, Fumio

    2002-03-10

    We demonstrate a method of enlarging the viewing zone for holography with holograms that have a pixel structure. First, the aliasing generated by pixel-based sampling of a hologram is described. Next, the high-order diffracted beams reproduced from a hologram that contains aliasing are explained. Finally, we show that the viewing zone can be enlarged by combining these high-order reconstructed beams from the hologram with aliasing.

  11. Clock Scan Protocol for Image Analysis: ImageJ Plugins.

    PubMed

    Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen

    2017-06-19

    The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border of, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. This protocol was originally developed in 2006 as a Visual Basic 6 script, but as such it had limited distribution. To address this problem and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored and freely available image analysis programs like ImageJ or Fiji ImageJ. Furthermore, these plugins have several new functions, further expanding the range of capabilities of the original protocol, such as analysis of multiple regions of interest and image stacks. The latter feature of the program is especially useful in applications in which it is important to determine changes related to time and location. Thus, the clock scan analysis of stacks of biological images may potentially be applied to spreading of Na+ or Ca++ within a single cell, as well as to the analysis of spreading activity (e.g., Ca++ waves) in populations of synaptically-connected or gap junction-coupled cells. Here, we describe these new clock scan plugins and show some examples of their applications in image analysis.
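
    The sketch below illustrates the general idea of a clock-scan-style measurement: intensities are sampled along rays swept around a centre point (like a clock hand) and averaged over angle into a radial profile. It is a simplified circular version; the actual protocol normalizes each ray to the border of the region of interest, uses the plugins' own ROI handling, and is implemented in Java, none of which is reproduced here.

```python
import numpy as np

def radial_profile(image, center, n_radii=50, n_angles=360, r_max=None):
    """Average pixel intensity along rays swept around `center`, returned as
    an integral radial profile.  Nearest-neighbour sampling for simplicity."""
    cy, cx = center
    if r_max is None:
        r_max = min(cy, cx, image.shape[0] - cy - 1, image.shape[1] - cx - 1)
    radii = np.linspace(0, r_max, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    profile = np.zeros(n_radii)
    for k, r in enumerate(radii):
        ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
        profile[k] = image[ys, xs].mean()
    return radii, profile
```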

  12. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, in particular, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonance scanners in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that consists of the minimization of a cost function; this minimization was achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, ours is a parallel Jacobi-class algorithm with alternating minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel at the same iteration since they are independent. Similarly, black pixels can be updated in parallel in an alternating iteration. We present parallel implementations of our algorithm for different parallel architectures such as multicore CPU, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, our parallel algorithm outperforms the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
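
    The chessboard scheme itself is easy to illustrate. The sketch below applies it to a generic least-squares phase-unwrapping relaxation (not the accumulation-of-residual-maps cost function of the paper): pixels of one parity depend only on pixels of the other parity, so each half of the grid can be updated simultaneously, which is what makes the Jacobi-style parallelization possible. The iteration count and the unweighted cost are illustrative choices.

```python
import numpy as np

def wrap(a):
    """Wrap phase differences into [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def unwrap_redblack(psi, iters=500):
    """Least-squares phase unwrapping by chessboard (red-black) relaxation."""
    rows, cols = psi.shape
    gy = wrap(np.diff(psi, axis=0))          # wrapped vertical gradients
    gx = wrap(np.diff(psi, axis=1))          # wrapped horizontal gradients
    u = psi.astype(float).copy()
    ii, jj = np.mgrid[0:rows, 0:cols]
    for _ in range(iters):
        for parity in (0, 1):                # red sweep, then black sweep
            num = np.zeros_like(u)
            den = np.zeros_like(u)
            # contributions from the pixel above, below, left and right
            num[1:, :] += u[:-1, :] + gy;   den[1:, :] += 1
            num[:-1, :] += u[1:, :] - gy;   den[:-1, :] += 1
            num[:, 1:] += u[:, :-1] + gx;   den[:, 1:] += 1
            num[:, :-1] += u[:, 1:] - gx;   den[:, :-1] += 1
            sel = ((ii + jj) % 2) == parity  # one parity updated per sweep
            u[sel] = (num / den)[sel]
    return u
```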

  13. 3D Spatial and Spectral Fusion of Terrestrial Hyperspectral Imagery and Lidar for Hyperspectral Image Shadow Restoration Applied to a Geologic Outcrop

    NASA Astrophysics Data System (ADS)

    Hartzell, P. J.; Glennie, C. L.; Hauser, D. L.; Okyay, U.; Khan, S.; Finnegan, D. C.

    2016-12-01

    Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from an exclusively airborne technique to terrestrial modalities. This enables high resolution 3D spatial and spectral quantification of vertical geologic structures for applications such as virtual 3D rock outcrop models for hydrocarbon reservoir analog analysis and mineral quantification in open pit mining environments. In contrast to airborne observation geometry, the vertical surfaces observed by horizontal-viewing terrestrial HSI sensors are prone to extensive topography-induced solar shadowing, which leads to reduced pixel classification accuracy or outright removal of shadowed pixels from analysis tasks. Using a precisely calibrated and registered offset cylindrical linear array camera model, we demonstrate the use of 3D lidar data for sub-pixel HSI shadow detection and the restoration of the shadowed pixel spectra via empirical methods that utilize illuminated and shadowed pixels of similar material composition. We further introduce a new HSI shadow restoration technique that leverages collocated backscattered lidar intensity, which is resistant to solar conditions, obtained by projecting the 3D lidar points through the HSI camera model into HSI pixel space. Using ratios derived from the overlapping lidar laser and HSI wavelengths, restored shadow pixel spectra are approximated using a simple scale factor. Simulations of multiple lidar wavelengths, i.e., multi-spectral lidar, indicate the potential for robust HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance is quantified through HSI pixel classification consistency between full sun and partial sun exposures of a single geologic outcrop.

  14. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Zhang, Zhibo; Ackerman, Steven A.; Maddux, Brent

    2012-01-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1 km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 µm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges as well as ocean pixels with partly cloudy elements in the 250 m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250 m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1-D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  15. Plasma-panel based detectors

    NASA Astrophysics Data System (ADS)

    Friedman, Peter

    2017-09-01

    The plasma panel sensor (PPS) is a novel micropattern gas detector inspired by plasma display panels (PDPs), the core component of plasma-TVs. A PDP comprises millions of discrete cells per square meter, each of which, when provided with a signal pulse, can initiate and sustain a plasma discharge. Configured as a detector, a pixel or cell is biased to discharge when a free-electron is generated in the gas. The PPS consists of an array of small plasma discharge pixels, and can be configured to have either an "open-cell" or "closed-cell" structure, operating with high gain in the Geiger region. We describe both configurations and their application to particle physics. The open-cell PPS lends itself to ultra-low-mass, ultrathin structures, whereas the closed-cell microhexcavity PPS is capable of higher performance. For the ultrathin-PPS, we are fabricating 3-inch devices based on two types of extremely thin, inorganic, transparent, substrate materials: one being 8-10 µm thick, and the other 25-27 µm thick. These gas-filled ultrathin devices are designed to operate in a beam-line vacuum environment, yet must be hermetically-sealed and gas-filled in an ambient environment at atmospheric pressure. We have successfully fabricated high resolution, submillimeter pixel electrodes on both types of ultrathin substrates. We will also report on the fabrication, staging and operation of the first microhexcavity detectors (µH-PPS). The first µH-PPS prototype devices have a 16 by 16 matrix of closed packed hexagon pixels, each having a 2 mm width. Initial tests of these detectors, conducted with Ne based gases at atmospheric pressure, indicate that each pixel responds independent of its neighboring cells, producing volt level pulse amplitudes in response to ionizing radiation. Results will include the hit rate response to a radioactive beta source, cosmic ray muons, the background from spontaneous discharge, pixel isolation and uniformity, and efficiency measurements. This work was funded in part by a DOE Office of Nuclear Physics SBIR Phase-II Grant.

  16. Learning Compact Binary Face Descriptor for Face Recognition.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie

    2015-10-01

    Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
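
    The feature-extraction front end described above is simple to sketch. Below, pixel difference vectors are computed for a local patch and hashed to binary codes with a projection matrix; in CBFD that projection is learned to satisfy the three criteria listed, whereas here a random matrix is used purely as a stand-in, and the clustering/pooling into histogram features is omitted.

```python
import numpy as np

def pixel_difference_vectors(patch):
    """Extract pixel difference vectors (PDVs): for each interior pixel of a
    local patch, the differences between its 8 neighbours and itself."""
    rows, cols = patch.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    pdvs = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            pdvs.append([patch[i + di, j + dj] - patch[i, j] for di, dj in offsets])
    return np.array(pdvs, dtype=float)

def binarize(pdvs, projection):
    """Project PDVs and threshold at zero to obtain binary codes; CBFD learns
    this projection from data, a random matrix is used here as a stand-in."""
    return (pdvs @ projection > 0).astype(np.uint8)

patch = np.arange(25, dtype=float).reshape(5, 5)
proj = np.random.default_rng(0).normal(size=(8, 16))
codes = binarize(pixel_difference_vectors(patch), proj)
print(codes.shape)     # (9, 16): one 16-bit code per interior pixel of the patch
```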

  17. Towards simultaneous Talbot bands based optical coherence tomography and scanning laser ophthalmoscopy imaging.

    PubMed

    Marques, Manuel J; Bradu, Adrian; Podoleanu, Adrian Gh

    2014-05-01

    We report a Talbot bands-based optical coherence tomography (OCT) system capable of producing longitudinal B-scan OCT images and en-face scanning laser ophthalmoscopy (SLO) images of the human retina in-vivo. The OCT channel employs a broadband optical source and a spectrometer. A gap is created between the sample and reference beams while on their way towards the spectrometer's dispersive element to create Talbot bands. The spatial separation of the two beams facilitates collection by an SLO channel of optical power originating exclusively from the retina, deprived from any contribution from the reference beam. Three different modes of operation are presented, constrained by the minimum integration time of the camera used in the spectrometer and by the galvo-scanners' scanning rate: (i) a simultaneous acquisition mode over the two channels, useful for small size imaging, that conserves the pixel-to-pixel correspondence between them; (ii) a hybrid sequential mode, where the system switches itself between the two regimes and (iii) a sequential "on-demand" mode, where the system can be used in either OCT or SLO regimes for as long as required. The two sequential modes present varying degrees of trade-off between pixel-to-pixel correspondence and independent full control of parameters within each channel. Images of the optic nerve and fovea regions obtained in the simultaneous (i) and in the hybrid sequential mode (ii) are presented.

  18. A new imaging technique on strength and phase of pulsatile tissue-motion in brightness-mode ultrasonogram

    NASA Astrophysics Data System (ADS)

    Fukuzawa, Masayuki; Yamada, Masayoshi; Nakamori, Nobuyuki; Kitsunezuka, Yoshiki

    2007-03-01

    A new imaging technique has been developed for observing both the strength and the phase of pulsatile tissue-motion in a movie of brightness-mode ultrasonogram. The pulsatile tissue-motion is determined by evaluating the heartbeat-frequency component in the Fourier transform of the pixel-value time series at each pixel in a movie of ultrasonogram (640×480 pixels/frame, 8 bits/pixel, 33 ms/frame) taken by a conventional ultrasonograph apparatus (ATL HDI5000). In order to visualize both the strength and the phase of the pulsatile tissue-motion, we propose a pulsatile-phase image that is obtained by superimposing a color gradation proportional to the motion phase on the original ultrasonogram, only at pixels where the motion strength exceeds a proper threshold. The pulsatile-phase image obtained from a cranial ultrasonogram of a normal neonate clearly reveals that the motion region gives good agreement with the anatomical shape and position of the middle cerebral artery and the corpus callosum. The motion phase fluctuates along the shape of the arteries, revealing local obstruction of blood flow. The pulsatile-phase images in the neonates with asphyxia at birth reveal decreases of the motion region and increases of the phase fluctuation due to the weakness and local disturbance of blood flow, which is useful for pediatric diagnosis.
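
    A minimal sketch of the per-pixel analysis described above, under stated assumptions: the movie is Fourier-transformed along time at every pixel, the bin nearest the heartbeat frequency gives the motion strength and phase, and the phase is kept only where the strength passes a threshold. The relative threshold and the mean-subtraction step are illustrative choices, not the authors' exact processing.

```python
import numpy as np

def pulsatile_phase_map(frames, frame_interval_s, heart_rate_hz, rel_threshold=0.25):
    """Per-pixel temporal Fourier analysis of a B-mode movie.
    `frames` has shape (t, rows, cols); returns (strength, masked phase)."""
    t = frames.shape[0]
    spectrum = np.fft.rfft(frames - frames.mean(axis=0), axis=0)
    freqs = np.fft.rfftfreq(t, d=frame_interval_s)
    k = int(np.argmin(np.abs(freqs - heart_rate_hz)))   # heartbeat-frequency bin
    strength = np.abs(spectrum[k])
    phase = np.angle(spectrum[k])
    mask = strength > rel_threshold * strength.max()
    phase_masked = np.where(mask, phase, np.nan)         # NaN where motion is weak
    return strength, phase_masked
```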

  19. Towards simultaneous Talbot bands based optical coherence tomography and scanning laser ophthalmoscopy imaging

    PubMed Central

    Marques, Manuel J.; Bradu, Adrian; Podoleanu, Adrian Gh.

    2014-01-01

    We report a Talbot bands-based optical coherence tomography (OCT) system capable of producing longitudinal B-scan OCT images and en-face scanning laser ophthalmoscopy (SLO) images of the human retina in-vivo. The OCT channel employs a broadband optical source and a spectrometer. A gap is created between the sample and reference beams while on their way towards the spectrometer’s dispersive element to create Talbot bands. The spatial separation of the two beams facilitates collection by an SLO channel of optical power originating exclusively from the retina, deprived from any contribution from the reference beam. Three different modes of operation are presented, constrained by the minimum integration time of the camera used in the spectrometer and by the galvo-scanners’ scanning rate: (i) a simultaneous acquisition mode over the two channels, useful for small size imaging, that conserves the pixel-to-pixel correspondence between them; (ii) a hybrid sequential mode, where the system switches itself between the two regimes and (iii) a sequential “on-demand” mode, where the system can be used in either OCT or SLO regimes for as long as required. The two sequential modes present varying degrees of trade-off between pixel-to-pixel correspondence and independent full control of parameters within each channel. Images of the optic nerve and fovea regions obtained in the simultaneous (i) and in the hybrid sequential mode (ii) are presented. PMID:24877006

  20. Validating Phasing and Geometry of Large Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Standley, Shaun P.; Gautier, Thomas N.; Caldwell, Douglas A.; Rabbette, Maura

    2011-01-01

    The Kepler Mission is designed to survey our region of the Milky Way galaxy to discover hundreds of Earth-sized and smaller planets in or near the habitable zone. The Kepler photometer is an array of 42 CCDs (charge-coupled devices) in the focal plane of a 95-cm Schmidt camera onboard the Kepler spacecraft. Each 50x25-mm CCD has 2,200 x 1,024 pixels. The CCDs accumulate photons and are read out every six seconds to prevent saturation. The data is integrated for 30 minutes, and then the pixel data is transferred to onboard storage. The data is subsequently encoded and transmitted to the ground. During End-to-End Information System (EEIS) testing of the Kepler Mission System (KMS), there was a need to verify that the pixels requested by the science team operationally were correctly collected, encoded, compressed, stored, and transmitted by the FS, and subsequently received, decoded, uncompressed, and displayed by the Ground Segment (GS) without the outputs of any CCD modules being flipped, mirrored, or otherwise corrupted during the extensive FS and GS processing. This would normally be done by projecting an image on the focal plane array (FPA), collecting the data in a flight-like way, and making a comparison between the original data and the data reconstructed by the science data system. Projecting a focused image onto the FPA through the telescope would normally involve using a collimator suspended over the telescope opening. There were several problems with this approach: the collimation equipment is elaborate and expensive; as conceived, it could only illuminate a limited section of the FPA (.25 percent) during a given test; the telescope cover would have to be deployed during testing to allow the image to be projected into the telescope; the equipment was bulky and difficult to situate in temperature-controlled environments; and given all the above, test setup, execution, and repeatability were significant concerns. Instead of using this complicated approach of projecting an optical image on the FPA, the Kepler project developed a method using known defect features in the CCDs to verify proper collection and reassembly of the pixels, thereby avoiding the costs and risks of the optical projection approach. The CCDs composing the Kepler FPA, as all CCDs, had minor defects. At ambient temperature, some pixels look far brighter than they should. These "hot" pixels have a higher rate of charge leakage than the others due to manufacturing variations. They are usually stable over time, and appear at temperatures above 5 °C. The hot pixels on the Kepler FPA were mapped before photometer assembly during module testing. Selected hot pixels were used as target "stars" for the purposes of EEIS testing. "Dead" pixels are permanently off, producing a permanently black pixel. These can also be used if there is some illumination of the FPA. During EEIS testing, Dark Current Full Frame Images (FFIs) taken at room temperature were used to create the hot pixel maps for all 84 Kepler photometer CCD channels. Data from two separate nights were used to create two hot pixel maps per channel, which were cross-correlated to remove cosmic ray events which appear to be hot pixels. These hot pixel maps obtained during EEIS testing were compared to the maps made during module testing to verify that the end-to-end data flow was correct.
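
    The bookkeeping described above can be sketched in a few lines (array names and the thresholding rule are my assumptions, not the Kepler pipeline): hot-pixel maps from two nights are intersected to reject cosmic-ray hits, and the surviving map is compared against the reference map from module testing.

        import numpy as np

        def hot_pixel_map(dark_ffi, threshold):
            """Boolean map of pixels whose dark signal exceeds a chosen threshold."""
            return dark_ffi > threshold

        def verify_channel(dark_night1, dark_night2, reference_map, threshold):
            # a real hot pixel is hot on both nights; a cosmic-ray hit normally is not
            candidate = hot_pixel_map(dark_night1, threshold) & hot_pixel_map(dark_night2, threshold)
            # fraction of reference hot pixels recovered at the expected locations;
            # a low value would hint at flipped, mirrored, or corrupted channel output
            recovered = np.logical_and(candidate, reference_map).sum()
            return recovered / max(int(reference_map.sum()), 1)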

  1. Changes to the Spectral Extraction Algorithm at the Third COS FUV Lifetime Position

    NASA Astrophysics Data System (ADS)

    Taylor, Joanna M.; Azalee Bostroem, K.; Debes, John H.; Ely, Justin; Hernandez, Svea; Hodge, Philip E.; Jedrzejewski, Robert I.; Lindsay, Kevin; Lockwood, Sean A.; Massa, Derck; Oliveira, Cristina M.; Penton, Steven V.; Proffitt, Charles R.; Roman-Duval, Julia; Sahnow, David J.; Sana, Hugues; Sonnentrucker, Paule

    2015-01-01

    Due to the effects of gain sag on flux on the COS FUV microchannel plate detector, the COS FUV spectra will be moved in February 2015 to a pristine location on the detector, from Lifetime Position 2 (LP2) to LP3. The spectra will be shifted in the cross-dispersion (XD) direction by -2.5", about -31 pixels, from the original LP1. In contrast, LP2 was shifted by +3.5", about 41 pixels, from LP1. By reducing the LP3-LP1 separation compared to the LP2-LP1 separation, we achieve maximal spectral resolution at LP3 while preserving more detector area for future lifetime positions. In the current version of the COS boxcar extraction algorithm, flux is summed within a box of fixed height that is larger than the PSF. Bad pixels located anywhere within the extraction box cause the entire column to be discarded. At the new LP3 position the current extraction box will overlap with LP1 regions of low gain (pixels which have lost >5% of their sensitivity). As a result, large portions of spectra will be discarded, even though these flagged pixels will be located in the wings of the profiles and contain a negligible fraction of the total source flux. To avoid unnecessarily discarding columns affected by such pixels, an algorithm is needed that can judge whether the effects of gain-sagged pixels on the extracted flux are significant. The "two-zone" solution adopted for pipeline use was tailored specifically for the COS FUV data characteristics: First, using a library of 1-D spectral centroid ("trace") locations, residual geometric distortions in the XD direction are removed. Next, 2-D template profiles are aligned with the observed spectral image. Encircled energy contours are calculated and an inner zone that contains 80% of the flux is defined, as well as an outer zone that contains 99% of the flux. With this approach, only pixels flagged as bad in the inner 80% zone will cause columns to be discarded while flagged pixels in the outer zones do not affect extraction. Finally, all good columns are summed in the XD direction to obtain a 1-D extracted spectrum. We present examples of the trace and profile libraries that are used in the two-zone extraction and compare the performance of the two-zone and boxcar algorithms.
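
    A schematic version of the two-zone idea (not the CalCOS implementation; the zone masks and data-quality array are assumed as inputs): a column is rejected only when a flagged pixel falls inside the inner ~80% encircled-energy zone, and the remaining columns are summed in the cross-dispersion direction.

        import numpy as np

        def two_zone_extract(image, dq_bad, inner_zone, outer_zone):
            """image, dq_bad, inner_zone, outer_zone: 2-D arrays (XD x dispersion);
            inner_zone/outer_zone are boolean masks for the 80%/99% flux zones."""
            # a column is unusable only if a bad pixel lies inside the inner zone
            bad_columns = np.any(dq_bad & inner_zone, axis=0)
            # sum the flux over the outer (99%) zone along the XD direction
            spectrum = np.where(outer_zone, image, 0.0).sum(axis=0)
            spectrum[bad_columns] = np.nan   # discarded columns
            return spectrum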

  2. Multitemporal and Multiscaled Fractal Analysis of Landsat Satellite Data Using the Image Characterization and Modeling System (ICAMS)

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.

    1997-01-01

    The Image Characterization And Modeling System (ICAMS) is a public domain software package that is designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution, are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30 meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for the two dates at various pixel resolutions show that the D values in the August image indicate increasing complexity as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. The fractal dimension of a difference image computed for the May and August dates increases with pixel size up to a resolution of 120 meters, and then declines with increasing pixel size. This means that the greatest complexity in the difference image occurs around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute differences between the two dates.
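
    For concreteness, a small sketch of the preprocessing steps mentioned above (band handling and the aggregation rule are my assumptions, not ICAMS internals): NDVI is computed from red and near-infrared reflectance, and block averaging resamples the image to coarser pixel sizes before the fractal dimension is estimated.

        import numpy as np

        def ndvi(red, nir):
            return (nir - red) / (nir + red + 1e-12)

        def block_average(image, factor):
            """Resample to a coarser grid by averaging factor x factor blocks
            (e.g. factor=2 takes 30 m pixels to 60 m)."""
            H, W = image.shape
            H, W = H - H % factor, W - W % factor
            blocks = image[:H, :W].reshape(H // factor, factor, W // factor, factor)
            return blocks.mean(axis=(1, 3))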

  3. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.

  4. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Zhang, Z.; Ackerman, S. A.; Maddux, B. C.

    2012-12-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges (defined by immediate adjacency to "clear" MOD/MYD35 pixels) as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  5. Brain vascular image segmentation based on fuzzy local information C-means clustering

    NASA Astrophysics Data System (ADS)

    Hu, Chaoen; Liu, Xia; Liang, Xiao; Hui, Hui; Yang, Xin; Tian, Jie

    2017-02-01

    Light sheet fluorescence microscopy (LSFM) is a powerful optical resolution fluorescence microscopy technique which enables observation of the mouse brain vascular network at cellular resolution. However, micro-vessel structures show intensity inhomogeneity in LSFM images, which makes it difficult to extract line structures. In this work, we developed a vascular image segmentation method by enhancing vessel details, which should be useful for estimating statistics like micro-vessel density. Since the eigenvalues of the Hessian matrix and their signs describe different geometric structures in images, enabling the construction of a vascular similarity function and the enhancement of line signals, the main idea of our method is to cluster the pixel values of the enhanced image. Our method contains three steps: 1) calculate the multiscale gradients and the differences between the eigenvalues of the Hessian matrix. 2) To generate the enhanced micro-vessel structures, a feed-forward neural network was trained on 2.26 million pixels to model the correlations between the multiscale gradients and the eigenvalue differences. 3) Fuzzy local information c-means clustering (FLICM) was used to cluster the pixel values of the enhanced line signals. To verify the feasibility and effectiveness of this method, mouse brain vascular images have been acquired by a commercial light-sheet microscope in our lab. Experiments with the segmentation method showed that the Dice similarity coefficient reaches up to 85%. The results illustrate that our line-structure enhancement dramatically improves the vascular image and enables accurate extraction of blood vessels in LSFM images.
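
    As a rough stand-in for the Hessian-eigenvalue step (the authors additionally use multiscale gradients and a trained feed-forward network, which are not reproduced here), the sketch below computes a Frangi-style 2-D vesselness from the eigenvalues of the Gaussian-smoothed Hessian; the parameter values are arbitrary.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
            image = np.asarray(image, dtype=float)
            # second derivatives of a Gaussian-smoothed image (Hessian entries)
            Hxx = gaussian_filter(image, sigma, order=(0, 2))
            Hyy = gaussian_filter(image, sigma, order=(2, 0))
            Hxy = gaussian_filter(image, sigma, order=(1, 1))
            # eigenvalues of the 2x2 Hessian at every pixel
            tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy ** 2)
            l1 = 0.5 * (Hxx + Hyy + tmp)
            l2 = 0.5 * (Hxx + Hyy - tmp)
            # order so that |l1| <= |l2|
            swap = np.abs(l1) > np.abs(l2)
            l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
            Rb = np.abs(l1) / (np.abs(l2) + 1e-12)      # blob-versus-line measure
            S = np.sqrt(l1 ** 2 + l2 ** 2)              # second-order "structureness"
            v = np.exp(-Rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-S ** 2 / (2 * c ** 2)))
            return np.where(l2 < 0, v, 0.0)             # bright vessels on a dark background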

  6. Automatic detection and segmentation of vascular structures in dermoscopy images using a novel vesselness measure based on pixel redness and tubularness

    NASA Astrophysics Data System (ADS)

    Kharazmi, Pegah; Lui, Harvey; Stoecker, William V.; Lee, Tim

    2015-03-01

    Vascular structures are one of the most important features in the diagnosis and assessment of skin disorders. The presence and clinical appearance of vascular structures in skin lesions is a discriminating factor among different skin diseases. In this paper, we address the problem of segmentation of vascular patterns in dermoscopy images. Our proposed method is composed of three parts. First, based on biological properties of human skin, we decompose the skin into melanin and hemoglobin components using independent component analysis of skin color images. The relative quantities and pure color densities of each component are then estimated. Subsequently, we obtain three reference vectors of the mean RGB values for normal skin, pigmented skin and blood vessels from the hemoglobin component by averaging over 100000 pixels of each group outlined by an expert. Based on Euclidean distance thresholding, we generate a mask image that extracts the red regions of the skin. The Frangi measure is then applied to the extracted red areas to segment the tubular structures. Finally, Otsu's thresholding is applied to segment the vascular structures and obtain a binary vessel mask image. The algorithm was implemented on a set of 50 dermoscopy images. In order to evaluate the performance of our method, we have artificially extended some of the existing vessels in our dermoscopy data set and evaluated the performance of the algorithm in segmenting the newly added vessel pixels. A sensitivity of 95% and specificity of 87% were achieved.

  7. Mercuric iodide room-temperature array detectors for gamma-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patt, B.

    Significant progress has been made recently in the development of mercuric iodide detector arrays for gamma-ray imaging, making real the possibility of constructing high-performance, small, light-weight, portable gamma-ray imaging systems. New techniques have been applied in detector fabrication and in low-noise electronics, producing pixel arrays with high energy resolution, high spatial resolution, and high gamma stopping efficiency. Measurements of the energy resolution capability have been made on a 19-element prototypical array. Pixel energy resolutions of 2.98% FWHM and 3.88% FWHM were obtained at 59 keV (241-Am) and 140 keV (99m-Tc), respectively. The pixel spectra for a 14-element section of the data are shown together with the composition of the overlapped individual pixel spectra. These techniques are now being applied to fabricate much larger arrays with thousands of pixels. Extension of these principles to imaging scenarios involving gamma-ray energies up to several hundred keV is also possible. This would enable imaging of the 208 keV and 375-414 keV structures of 239-Pu and 240-Pu, as well as the 186 keV line of 235-U.

  8. A back-illuminated megapixel CMOS image sensor

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Cunningham, Thomas; Nikzad, Shouleh; Hoenk, Michael; Jones, Todd; Wrigley, Chris; Hancock, Bruce

    2005-01-01

    In this paper, we present the test and characterization results for a back-illuminated megapixel CMOS imager. The imager pixel consists of a standard junction photodiode coupled to a three transistor-per-pixel switched source-follower readout [1]. The imager also includes integrated timing, control, and bias generation circuits, and provides analog output. The analog column-scan circuits were implemented in such a way that the imager could be configured to run in off-chip correlated double-sampling (CDS) mode. The imager was originally designed for normal front-illuminated operation, and was fabricated in a commercially available 0.5 μm triple-metal CMOS-imager-compatible process. For backside illumination, the imager was thinned by etching away the substrate in a post-fabrication processing step.

  9. Coincidence detection of spatially correlated photon pairs with a monolithic time-resolving detector array.

    PubMed

    Unternährer, Manuel; Bessire, Bänz; Gasparini, Leonardo; Stoppa, David; Stefanov, André

    2016-12-12

    We demonstrate coincidence measurements of spatially entangled photons by means of a multi-pixel based detection array. The sensor, originally developed for positron emission tomography applications, is a fully digital 8×16 silicon photomultiplier array allowing not only photon counting but also per-pixel time stamping of the arrived photons with an effective resolution of 265 ps. Together with a frame rate of 500 kfps, this property exceeds the capabilities of conventional charge-coupled device cameras which have become of growing interest for the detection of transversely correlated photon pairs. The sensor is used to measure a second-order correlation function for various non-collinear configurations of entangled photons generated by spontaneous parametric down-conversion. The experimental results are compared to theory.

  10. Separation of specular and diffuse components using tensor voting in color images.

    PubMed

    Nguyen, Tam; Vo, Quang Nhat; Yang, Hyung-Jeong; Kim, Soo-Hyung; Lee, Guee-Sang

    2014-11-20

    Most methods for the detection and removal of specular reflections suffer from nonuniform highlight regions and/or nonconverged artifacts induced by discontinuities in the surface colors, especially when dealing with highly textured, multicolored images. In this paper, a novel noniterative and predefined constraint-free method based on tensor voting is proposed to detect and remove the highlight components of a single color image. The distribution of diffuse and specular pixels in the original image is determined using the tensors' saliency analysis, instead of comparing color information among neighboring pixels. The resulting diffuse reflectance distribution is used to remove the specularity components. The proposed method is evaluated quantitatively and qualitatively over a dataset of highly textured, multicolor images. The experimental results show that our method outperforms other state-of-the-art techniques.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Steve; Haji-Sheikh, Michael; Huntington, Andrew

    The Voxtel VX-798 is a prototype X-ray pixel array detector (PAD) featuring a silicon sensor photodiode array of 48 x 48 pixels, each 130 μm x 130 μm x 520 μm thick, coupled to a CMOS readout application specific integrated circuit (ASIC). The first synchrotron X-ray characterization of this detector is presented, and its ability to selectively count individual X-rays within two independent arrival time windows, a programmable energy range, and localized to a single pixel is demonstrated. During our first trial run at Argonne National Laboratory's Advanced Photon Source, the detector achieved a 60 ns gating time and 700 eV full width at half-maximum energy resolution, in agreement with design parameters. Each pixel of the PAD holds two independent digital counters, and the discriminator for X-ray energy features both an upper and a lower threshold to window the energy of interest, discarding unwanted background. This smart-pixel technology allows energy and time resolution to be set and optimized in software. It is found that the detector linearity follows an isolated dead-time model, implying that megahertz count rates should be possible in each pixel. Measurement of the line and point spread functions showed negligible spatial blurring. When combined with the timing structure of the synchrotron storage ring, it is demonstrated that the area detector can perform both picosecond time-resolved X-ray diffraction and fluorescence spectroscopy measurements.
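
    For reference, the isolated (non-paralyzable) dead-time model mentioned above relates the measured rate m to the true rate n as m = n / (1 + n*tau); the small sketch below inverts that relation (reusing the 60 ns figure purely as an illustrative dead time, which is my assumption).

        def true_rate_nonparalyzable(measured_rate_hz, dead_time_s):
            """Invert m = n / (1 + n*tau) to recover the true rate n."""
            return measured_rate_hz / (1.0 - measured_rate_hz * dead_time_s)

        # e.g. a measured 1 MHz per-pixel rate with an assumed 60 ns dead time
        # corresponds to roughly 1.06 MHz true rate:
        # true_rate_nonparalyzable(1e6, 60e-9)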

  12. Single-pixel imaging based on compressive sensing with spectral-domain optical mixing

    NASA Astrophysics Data System (ADS)

    Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin

    2017-11-01

    In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.
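
    A toy numerical sketch of the recovery step (generic compressive sensing with a pseudorandom ±1 sensing matrix and a plain ISTA solver; this is not the authors' spectral-domain setup, and all sizes and the sparsity level are arbitrary assumptions).

        import numpy as np

        def ista(Phi, y, lam=0.05, n_iter=500):
            """Minimize 0.5*||Phi x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
            L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
            x = np.zeros(Phi.shape[1])
            for _ in range(n_iter):
                grad = Phi.T @ (Phi @ x - y)
                x = x - grad / L
                x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
            return x

        # toy example: a sparse "scene" measured with PRBS-like +/-1 patterns
        rng = np.random.default_rng(0)
        n, m = 256, 100                               # 256-pixel scene, 100 measurements
        x_true = np.zeros(n)
        x_true[rng.choice(n, 8, replace=False)] = 1.0
        Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
        y = Phi @ x_true                              # single-pixel measurements
        x_hat = ista(Phi, y)                          # sparse reconstruction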

  13. Combined statistical analysis of landslide release and propagation

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay

    2016-04-01

    Statistical methods - often coupled with stochastic concepts - are commonly employed to relate areas affected by landslides with environmental layers, and to estimate spatial landslide probabilities by applying these relationships. However, such methods only concern the release of landslides, disregarding their motion. Conceptual models for mass flow routing are used for estimating landslide travel distances and possible impact areas. Automated approaches combining release and impact probabilities are rare. The present work attempts to fill this gap by a fully automated procedure combining statistical and stochastic elements, building on the open source GRASS GIS software: (1) The landslide inventory is subset into release and deposition zones. (2) We employ a traditional statistical approach to estimate the spatial release probability of landslides. (3) We back-calculate the probability distribution of the angle of reach of the observed landslides, employing the software tool r.randomwalk. One set of random walks is routed downslope from each pixel defined as release area. Each random walk stops when leaving the observed impact area of the landslide. (4) The cumulative probability function (cdf) derived in (3) is used as input to route a set of random walks downslope from each pixel in the study area through the DEM, assigning the probability gained from the cdf to each pixel along the path (impact probability). The impact probability of a pixel is defined as the average impact probability of all sets of random walks impacting a pixel. Further, the average release probabilities of the release pixels of all sets of random walks impacting a given pixel are stored along with the area of the possible release zone. (5) We compute the zonal release probability by increasing the release probability according to the size of the release zone - the larger the zone, the larger the probability that a landslide will originate from at least one pixel within this zone. We quantify this relationship by a set of empirical curves. (6) Finally, we multiply the zonal release probability with the impact probability in order to estimate the combined impact probability for each pixel. We demonstrate the model with a 167 km² study area in Taiwan, using an inventory of landslides triggered by the typhoon Morakot. Analyzing the model results leads us to a set of key conclusions: (i) The average composite impact probability over the entire study area corresponds well to the density of observed landslide pixels. Therefore we conclude that the method is valid in general, even though the concept of the zonal release probability bears some conceptual issues that have to be kept in mind. (ii) The parameters used as predictors cannot fully explain the observed distribution of landslides. The size of the release zone influences the composite impact probability to a larger degree than the pixel-based release probability. (iii) The prediction rate increases considerably when excluding the largest, deep-seated, landslides from the analysis. We conclude that such landslides are mainly related to geological features hardly reflected in the predictor layers used.

  14. Predictable Programming on a Precision Timed Architecture

    DTIC Science & Technology

    2008-04-18

    Application: A Video Game. [Figure 6: Structure of the Video Game Example] Inspired by an example game supplied with the Hydra development board [17] ... we implemented a simple video game in C targeted to our PRET architecture. Our example centers on rendering graphics and is otherwise fairly simple ... background image. [Figure 10: A Screen Dump From Our Video Game] Ultimately, each displayed pixel is one of only four colors, but the pixels in

  15. Low-dose CT reconstruction with patch based sparsity and similarity constraints

    NASA Astrophysics Data System (ADS)

    Xu, Qiong; Mou, Xuanqin

    2014-03-01

    With the rapid growth of CT-based medical applications, low-dose CT reconstruction is becoming increasingly important to human health. Compared with other methods, statistical iterative reconstruction (SIR) usually performs better in the low-dose case. However, the reconstructed image quality of SIR depends strongly on the prior-based regularization because low-dose data alone are insufficient. The frequently used regularization is derived from pixel-based priors, such as smoothness between adjacent pixels. This kind of pixel-based constraint cannot distinguish noise from structure effectively. Recently, patch-based methods, such as dictionary learning and non-local means filtering, have outperformed conventional pixel-based methods. A patch is a small area of the image that expresses its structural information. In this paper, we propose to use patch-based constraints to improve the image quality of low-dose CT reconstruction. In the SIR framework, both patch-based sparsity and patch-based similarity are considered in the regularization term: sparsity is addressed by sparse representation and dictionary learning methods, while similarity is addressed by non-local means filtering. We conducted a real-data experiment to evaluate the proposed method. The experimental results confirm that this method yields images with less noise and more detail than other methods in low-count and few-view cases.
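
    As an illustration of the patch-similarity ingredient only (not the authors' reconstruction code; patch size, search window, and the smoothing parameter h are assumptions), the sketch below computes non-local-means-style weights between a reference patch and nearby patches and returns their weighted average for one pixel.

        import numpy as np

        def nlm_estimate(image, y, x, patch=3, search=7, h=0.1):
            """Non-local-means estimate of pixel (y, x) from similar patches nearby.
            Assumes (y, x) lies far enough from the image border."""
            r, s = patch // 2, search // 2
            ref = image[y - r:y + r + 1, x - r:x + r + 1]
            weights, values = [], []
            for dy in range(-s, s + 1):
                for dx in range(-s, s + 1):
                    yy, xx = y + dy, x + dx
                    cand = image[yy - r:yy + r + 1, xx - r:xx + r + 1]
                    d2 = np.mean((ref - cand) ** 2)          # patch distance
                    weights.append(np.exp(-d2 / h ** 2))     # similarity weight
                    values.append(image[yy, xx])
            weights = np.asarray(weights)
            return np.dot(weights, values) / weights.sum()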

  16. A patch-based convolutional neural network for remote sensing image classification.

    PubMed

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

    Availability of accurate land cover information over large areas is essential to the global environment sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, low accuracy of existing per-pixel based classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top of atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed new system has outperformed pixel-based neural network, pixel-based CNN and patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
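
    A minimal sketch of the patch-based sampling idea (patch size and label conventions are my assumptions; the CNN itself is omitted): each labelled pixel contributes its surrounding reflectance neighbourhood, so the classifier sees spatial context rather than a single pixel.

        import numpy as np

        def extract_patches(reflectance, labels, patch=5):
            """reflectance: (H, W, B) top-of-atmosphere reflectance cube,
            labels: (H, W) integer land-cover labels (-1 = unlabelled).
            Returns (N, patch, patch, B) samples and their N labels."""
            r = patch // 2
            samples, targets = [], []
            H, W, _ = reflectance.shape
            for y in range(r, H - r):
                for x in range(r, W - r):
                    if labels[y, x] < 0:
                        continue
                    samples.append(reflectance[y - r:y + r + 1, x - r:x + r + 1, :])
                    targets.append(labels[y, x])
            return np.asarray(samples), np.asarray(targets)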

  17. A novel high electrode count spike recording array using an 81,920 pixel transimpedance amplifier-based imaging chip.

    PubMed

    Johnson, Lee J; Cohen, Ethan; Ilg, Doug; Klein, Richard; Skeath, Perry; Scribner, Dean A

    2012-04-15

    Microelectrode recording arrays of 60-100 electrodes are commonly used to record neuronal biopotentials, and these have aided our understanding of brain function, development and pathology. However, higher density microelectrode recording arrays of larger area are needed to study neuronal function over broader brain regions such as in cerebral cortex or hippocampal slices. Here, we present a novel design of a high electrode count picocurrent imaging array (PIA), based on an 81,920 pixel Indigo ISC9809 readout integrated circuit camera chip. While originally developed for interfacing to infrared photodetector arrays, we have adapted the chip for neuron recording by bonding it to microwire glass resulting in an array with an inter-electrode pixel spacing of 30 μm. In a high density electrode array, the ability to selectively record neural regions at high speed and with good signal to noise ratio are both functionally important. A critical feature of our PIA is that each pixel contains a dedicated low noise transimpedance amplifier (∼0.32 pA rms) which allows recording high signal to noise ratio biocurrents comparable to single electrode voltage amplifier recordings. Using selective sampling of 256 pixel subarray regions, we recorded the extracellular biocurrents of rabbit retinal ganglion cell spikes at sampling rates up to 7.2 kHz. Full array local electroretinogram currents could also be recorded at frame rates up to 100 Hz. A PIA with a full complement of 4 readout circuits would span 1cm and could acquire simultaneous data from selected regions of 1024 electrodes at sampling rates up to 9.3 kHz. Published by Elsevier B.V.

  18. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.

  19. Optical and x-ray characterization of two novel CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Bohndiek, Sarah E.; Arvanitis, Costas D.; Venanzi, Cristian; Royle, Gary J.; Clark, Andy T.; Crooks, Jamie P.; Prydderch, Mark L.; Turchetta, Renato; Blue, Andrew; Speller, Robert D.

    2007-02-01

    A UK consortium (MI3) has been founded to develop advanced CMOS pixel designs for scientific applications. Vanilla, a 520x520 array of 25 μm pixels, benefits from flushed reset circuitry for low noise and random pixel access for region of interest (ROI) readout. OPIC, a 64x72 test structure array of 30 μm digital pixels, has thresholding capabilities for sparse readout at 3,700 fps. Characterization is performed with both optical illumination and x-ray exposure via a scintillator. Vanilla exhibits 34±3 e- read noise, interactive quantum efficiency of 54% at 500 nm and can read a 6x6 ROI at 24,395 fps. OPIC has 46±3 e- read noise and a wide dynamic range of 65 dB due to high full well capacity. Based on these characterization studies, Vanilla could be utilized in applications where demands include high spectral response and high speed region of interest readout while OPIC could be used for high speed, high dynamic range imaging.

  20. Simulations of radiation-damaged 3D detectors for the Super-LHC

    NASA Astrophysics Data System (ADS)

    Pennicard, D.; Pellegrini, G.; Fleta, C.; Bates, R.; O'Shea, V.; Parkes, C.; Tartoni, N.

    2008-07-01

    Future high-luminosity colliders, such as the Super-LHC at CERN, will require pixel detectors capable of withstanding extremely high radiation damage. In this article, the performances of various 3D detector structures are simulated with up to 1×10¹⁶ 1 MeV n_eq/cm² radiation damage. The simulations show that 3D detectors have higher collection efficiency and lower depletion voltages than planar detectors due to their small electrode spacing. When designing a 3D detector with a large pixel size, such as an ATLAS sensor, different electrode column layouts are possible. Using a small number of n+ readout electrodes per pixel leads to higher depletion voltages and lower collection efficiency, due to the larger electrode spacing. Conversely, using more electrodes increases both the insensitive volume occupied by the electrode columns and the capacitive noise. Overall, the best performance after 1×10¹⁶ 1 MeV n_eq/cm² damage is achieved by using 4-6 n+ electrodes per pixel.

  1. Polymer-stabilized liquid crystalline topological defect network for micro-pixelated optical devices

    NASA Astrophysics Data System (ADS)

    Araoka, Fumito; Le, Khoa V.; Fujii, Shuji; Orihara, Hiroshi; Sasaki, Yuji

    2018-02-01

    Spatially and temporally controlled topological defects in nematic liquid crystals (NLCs) are promising for optical applications. Utilizing self-organization is key to fabricating complex micro- and nano-structures that are often difficult to obtain with conventional lithographic tools. Using a photo-polymerization technique, we demonstrate a polymer-stabilized NLC with a micro-pixelated structure of regularly ordered umbilical defects induced by an electric field. Due to the formation of the polymer network, the self-organized pattern remains stable without deterioration. Moreover, the polymer network allows templating of other LCs whose optical properties can be tuned with external stimuli such as temperature and electric fields.

  2. Electron crystallography with the EIGER detector

    PubMed Central

    Tinti, Gemma; Fröjdh, Erik; van Genderen, Eric; Gruene, Tim; Schmitt, Bernd; de Winter, D. A. Matthijs; Weckhuysen, Bert M.; Abrahams, Jan Pieter

    2018-01-01

    Electron crystallography is a discipline that currently attracts much attention as method for inorganic, organic and macromolecular structure solution. EIGER, a direct-detection hybrid pixel detector developed at the Paul Scherrer Institut, Switzerland, has been tested for electron diffraction in a transmission electron microscope. EIGER features a pixel pitch of 75 × 75 µm2, frame rates up to 23 kHz and a dead time between frames as low as 3 µs. Cluster size and modulation transfer functions of the detector at 100, 200 and 300 keV electron energies are reported and the data quality is demonstrated by structure determination of a SAPO-34 zeotype from electron diffraction data. PMID:29765609

  3. Design and Development of 256x256 Linear Mode Low-Noise Avalanche Photodiode Arrays

    NASA Technical Reports Server (NTRS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Boisvert, Joseph; McDonald, Paul; Chang, James

    2011-01-01

    A larger format photodiode array is always desirable for many LADAR imaging applications. However, as the array format increases, the laser power or the lens aperture has to increase to maintain the same flux per pixel, thus increasing the size, weight and power of the imaging system. In order to avoid this negative impact, it is essential to improve the pixel sensitivity. The sensitivity of a short wavelength infrared linear-mode avalanche photodiode (APD) is a delicate balance of quantum efficiency, usable gain, excess noise factor, capacitance, and dark current of the APD as well as the input equivalent noise of the amplifier. By using InAlAs as a multiplication layer in an InP-based APD, the ionization coefficient ratio, k, is reduced from 0.40 (InP) to 0.22, and the excess noise is reduced by about 50%. An additional improvement in excess noise of 25% was achieved by employing an impact-ionization-engineering structure with a k value of 0.15. Compared with the traditional InP structure, about 30% reduction in the noise-equivalent power with the following amplifier can be achieved. Spectrolab demonstrated 30-μm mesa APD pixels with a dark current less than 10 nA and a capacitance of 60 fF at a gain of 10. APD gain uniformity determines the usable gain of most pixels in an array, which is critical to focal plane array sensitivity. By fine tuning the material growth and device process, a breakdown-voltage standard deviation of 0.1 V and a gain of 30 on individual pixels were demonstrated in our 256x256 linear-mode APD arrays.

  4. A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.

    PubMed

    Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin

    2015-09-16

    In order to eliminate the fixed-pattern noise (FPN) in the output image of time-delay-integration CMOS image sensor (TDI-CIS), a FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated based on the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN are corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN are corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in the image captured under uniform illumination with the proposed method, the standard-deviation of row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard-deviation of column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in the real images captured by TDI-CIS are eliminated effectively with the proposed method.
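
    A compact sketch of the compensation scheme described above (array shapes and, in particular, the sign conventions of the estimated RFPN/CFPN values are my assumptions): the fixed-pattern terms are estimated from the row and column means of many uniformly illuminated frames and then applied row-wise and column-wise to a raw frame.

        import numpy as np

        def estimate_fpn(flat_frames):
            """flat_frames: (N, H, W) images captured under uniform illumination."""
            mean_frame = flat_frames.mean(axis=0)
            row_means = mean_frame.mean(axis=1)      # H-element row-mean vector
            col_means = mean_frame.mean(axis=0)      # W-element column-mean vector
            global_mean = mean_frame.mean()
            rfpn = global_mean - row_means           # value to ADD per row (sign: assumption)
            cfpn = col_means - global_mean           # value to SUBTRACT per column (assumption)
            return rfpn, cfpn

        def correct_frame(frame, rfpn, cfpn):
            return frame + rfpn[:, None] - cfpn[None, :]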

  5. Where do the 3.5 keV photons come from? A morphological study of the Galactic Center and of Perseus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Eric; Jeltema, Tesla; Profumo, Stefano, E-mail: erccarls@ucsc.edu, E-mail: tesla@ucsc.edu, E-mail: profumo@ucsc.edu

    We test the origin of the 3.5 keV line photons by analyzing the morphology of the emission at that energy from the Galactic Center and from the Perseus cluster of galaxies. We employ a variety of different templates to model the continuum emission and analyze the resulting radial and azimuthal distribution of the residual emission. We then perform a pixel-by-pixel binned likelihood analysis including line emission templates and dark matter templates and assess the correlation of the 3.5 keV emission with these templates. We conclude that the radial and azimuthal distribution of the residual emission is incompatible with a dark matter origin for both the Galactic center and Perseus; the Galactic center 3.5 keV line photons trace the morphology of lines at comparable energy, while the Perseus 3.5 keV photons are highly correlated with the cluster's cool core, and exhibit a morphology incompatible with dark matter decay. The template analysis additionally allows us to set the most stringent constraints to date on lines in the 3.5 keV range from dark matter decay.

  6. Updated Status and Performance at the Fourth HST COS FUV Lifetime Position

    NASA Astrophysics Data System (ADS)

    Taylor, Joanna M.; De Rosa, Gisella; Fix, Mees B.; Fox, Andrew; Indriolo, Nick; James, Bethan; Jedrzejewski, Robert I.; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Proffitt, Charles R.; Rafelski, Marc; Roman-Duval, Julia; Sahnow, David J.; Snyder, Elaine M.; Sonnentrucker, Paule; White, James

    2017-06-01

    To mitigate the adverse effects of gain sag on the spectral quality and accuracy of Hubble Space Telescope’s Cosmic Origins Spectrograph FUV observations, COS FUV spectra will be moved from Lifetime Position 3 (LP3) to a new pristine location on the detectors at LP4 in July 2017. To achieve maximal spectral resolution while preserving detector area, the spectra will be shifted in the cross-dispersion (XD) direction by -2.5" (about -31 pixels) from LP3, or -5” (about -62 pixels) from the original LP1. At LP4, the wavelength calibration lamp spectrum can overlap with the previously gain-sagged LP2 PSA spectrum location. If lamp lines fall in the gain sag holes from LP2, it can cause line ratios to change and the wavelength calibration to fail. As a result, we have updated the Wavecal Parameters Reference Table and CalCOS to address this issue. Additionally, it was necessary to extend the current geometric correction in order to encompass the entire LP4 location. Here we present 2-D template profiles and 1-D spectral trace centroids derived at LP4 as well as LP4-related updates to the wavelength calibration and geometric correction.

  7. Discrimination of isotrigon textures using the Rényi entropy of Allan variances.

    PubMed

    Gabarda, Salvador; Cristóbal, Gabriel

    2008-09-01

    We present a computational algorithm for isotrigon texture discrimination. The aim of this method consists in discriminating isotrigon textures against a binary random background. The extension of the method to the problem of multitexture discrimination is considered as well. The method relies on the fact that the information content of time or space-frequency representations of signals, including images, can be readily analyzed by means of generalized entropy measures. In such a scenario, the Rényi entropy appears as an effective tool, given that Rényi measures can be used to provide information about a local neighborhood within an image. Localization is essential for comparing images on a pixel-by-pixel basis. Discrimination is performed through a local Rényi entropy measurement applied on a spatially oriented 1-D pseudo-Wigner distribution (PWD) of the test image. The PWD is normalized so that it may be interpreted as a probability distribution. Prior to the calculation of the texture's PWD, a preprocessing filtering step replaces the original texture with its localized spatially oriented Allan variances. The anisotropic structure of the textures, as revealed by the Allan variances, turns out to be crucial later to attain a high discrimination by the extraction of Rényi entropy measures. The method has been empirically evaluated with a family of isotrigon textures embedded in a binary random background. The extension to the case of multiple isotrigon mosaics has also been considered. Discrimination results are compared with other existing methods.
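
    For concreteness, a small sketch of the local measurement (a generic Rényi entropy of a normalized distribution; the oriented pseudo-Wigner distribution and the Allan-variance preprocessing of the paper are not reproduced here, and alpha=3 is an arbitrary choice).

        import numpy as np

        def renyi_entropy(p, alpha=3.0):
            """Rényi entropy (base 2) of a normalized distribution p, for alpha != 1."""
            p = np.asarray(p, dtype=float)
            p = p / p.sum()      # interpret the local PWD window as a probability distribution
            return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)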

  8. The OSIRIS-REx Laser Altimeter (OLA) Investigation and Instrument

    NASA Astrophysics Data System (ADS)

    Daly, M. G.; Barnouin, O. S.; Dickinson, C.; Seabrook, J.; Johnson, C. L.; Cunningham, G.; Haltigin, T.; Gaudreau, D.; Brunet, C.; Aslam, I.; Taylor, A.; Bierhaus, E. B.; Boynton, W.; Nolan, M.; Lauretta, D. S.

    2017-10-01

    The Canadian Space Agency (CSA) has contributed to the Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) spacecraft the OSIRIS-REx Laser Altimeter (OLA). The OSIRIS-REx mission will sample asteroid 101955 Bennu, the first B-type asteroid to be visited by a spacecraft. Bennu is thought to be primitive, carbonaceous, and spectrally most closely related to CI and/or CM meteorites. As a scanning laser altimeter, the OLA instrument will measure the range between the OSIRIS-REx spacecraft and the surface of Bennu to produce digital terrain maps of unprecedented spatial scales for a planetary mission. The digital terrain maps produced will measure ˜7 cm per pixel globally, and ˜3 cm per pixel at specific sample sites. In addition, OLA data will be used to constrain and refine the spacecraft trajectories. Global maps and highly accurate spacecraft trajectory estimates are critical to infer the internal structure of the asteroid. The global and regional maps also are key to gain new insights into the surface processes acting across Bennu, which inform the selection of the OSIRIS-REx sample site. These, in turn, are essential for understanding the provenance of the regolith sample collected by the OSIRIS-REx spacecraft. The OLA data also are important for quantifying any hazards near the selected OSIRIS-REx sample site and for evaluating the range of tilts at the sampling site for comparison against the capabilities of the sample acquisition device.

  9. Abinitio powder x-ray diffraction and PIXEL energy calculations on thiophene derived 1,4 dihydropyridine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthikeyan, N., E-mail: karthin10@gmail.com; Sivakumar, K.; Pachamuthu, M. P.

    We focus on the application of powder diffraction data to ab initio crystal structure determination of a thiophene-derived 1,4-DHP prepared by a cyclocondensation method using a solid catalyst. The crystal structure of the compound has been solved by a direct-space approach based on a Monte Carlo search in parallel tempering mode using the FOX program. Initial atomic coordinates were derived using the Gaussian 09W quantum chemistry software in a semi-empirical approach, and Rietveld refinement was carried out using the GSAS program. The crystal structure of the compound is stabilized by one N-H…O and three C-H…O hydrogen bonds. A PIXEL lattice energy calculation was carried out to understand the physical nature of the intermolecular interactions in the crystal packing, in which the total lattice energy is partitioned into Coulombic, polarization, dispersion, and repulsion energies.

  10. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy.

    PubMed

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-03-11

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines the correlation and the least-square image matching so that the sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility.

  11. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy

    PubMed Central

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines the correlation and the least-square image matching so that the sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366

  12. Toward one Giga frames per second--evolution of in situ storage image sensors.

    PubMed

    Etoh, Takeharu G; Son, Dao V T; Yamada, Tetsuo; Charbon, Edoardo

    2013-04-08

    The ISIS is an ultra-fast image sensor with in-pixel storage. The evolution of the ISIS, past and near-future, is reviewed and forecast. Because the storage area must be covered with a light shield, the conventional frontside-illuminated ISIS has a limited fill factor. To achieve higher sensitivity, a backside-illuminated (BSI) ISIS was developed. To avoid direct intrusion of light and migration of signal electrons into the storage area on the frontside, a cross-sectional sensor structure with thick pnpn layers was developed and named the "Tetratified structure". By folding and looping the in-pixel storage CCDs, an image signal accumulation sensor, the ISAS, is proposed. The ISAS offers a new function, in-pixel signal accumulation, in addition to ultra-high-speed imaging. To achieve a much higher frame rate, a multi-collection-gate (MCG) BSI image sensor architecture is proposed, in which the photoreceptive area forms a honeycomb-like shape. The performance of a hexagonal CCD-type MCG BSI sensor is examined by simulations; the highest frame rate is theoretically more than 1 Gfps. For the near future, a stacked hybrid CCD/CMOS MCG image sensor seems most promising, and the associated problems are discussed. A fine TSV process is the key technology to realize this structure.

  13. A novel ship CFAR detection algorithm based on adaptive parameter enhancement and wake-aided detection in SAR images

    NASA Astrophysics Data System (ADS)

    Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun

    2018-03-01

    Synthetic aperture radar (SAR) is an indispensable and useful method for marine monitoring. With the increasing number of SAR sensors, high-resolution images can be acquired that contain more target structure information, such as finer spatial detail. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets. The whole method operates on the APT-domain values. First, the image is mapped into the new transform domain. Second, false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Third, the ship pixels are replaced by homogeneous sea pixels, and the enhanced image is processed by the Niblack algorithm to obtain a binary wake image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image as verification of the presence of ships. Experiments on real SAR images validate that the proposed transform enhances the target structure and improves the contrast of the image. The algorithm performs well in ship and ship-wake detection.
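
    For illustration, a plain cell-averaging CFAR over a local window is sketched below (a generic scheme, not the APT-domain detector proposed in the paper; guard/training sizes and the threshold factor are arbitrary assumptions).

        import numpy as np

        def ca_cfar(image, guard=2, train=8, scale=3.0):
            """Flag pixels whose value exceeds scale x (mean of the training ring)."""
            H, W = image.shape
            detections = np.zeros((H, W), dtype=bool)
            r = guard + train
            for y in range(r, H - r):
                for x in range(r, W - r):
                    window = image[y - r:y + r + 1, x - r:x + r + 1]
                    guard_cells = image[y - guard:y + guard + 1, x - guard:x + guard + 1]
                    # clutter estimate from the training ring (window minus guard area)
                    clutter = (window.sum() - guard_cells.sum()) / (window.size - guard_cells.size)
                    detections[y, x] = image[y, x] > scale * clutter
            return detections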

  14. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity-discrete structured light illumination systems project a series of patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near fringe borders that commonly appear when using intensity-discrete patterns, and it provides robustness against severe measurement errors (even burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, e.g. monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in case of short measurement disruptions. A special form of burst error is the so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
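
    As a concrete, deliberately small example of pixel-wise error correction, the sketch below protects a 4-bit fringe-order index with a (7,4) Hamming code, so that a single corrupted frame in a pixel's temporal bit sequence is corrected. The specific code, matrices, and function names are assumptions for illustration, not the code used by the authors.

```python
import numpy as np

# Generator and parity-check matrices of the systematic (7,4) Hamming code over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode4(bits4):
    """Encode 4 data bits (e.g. a fringe-order index) into a 7-bit codeword."""
    return (np.array(bits4) @ G) % 2

def decode7(word7):
    """Correct any single flipped bit and return the 4 data bits."""
    r = np.array(word7) % 2
    s = (H @ r) % 2
    if s.any():
        # the syndrome equals the H-column of the flipped bit position
        err = int(np.where((H.T == s).all(axis=1))[0][0])
        r[err] ^= 1
    return r[:4]  # systematic form: the first 4 bits carry the data

# Example: one flipped bit (a corrupted frame) is recovered:
# c = encode4([1, 0, 1, 1]); c[5] ^= 1; decode7(c) -> [1, 0, 1, 1]
```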

  15. Southern Auroras Over Saturn

    NASA Image and Video Library

    2017-07-28

    Cassini gazed toward high southern latitudes near Saturn's south pole to observe ghostly curtains of dancing light -- Saturn's southern auroras, or southern lights. These natural light displays at the planet's poles are created by charged particles raining down into the upper atmosphere, making gases there glow. The dark area at the top of this scene is Saturn's night side. The auroras rotate from left to right, curving around the planet as Saturn rotates over about 70 minutes, compressed here into a movie sequence of about five seconds. Background stars are seen sliding behind the planet. Cassini was moving around Saturn during the observation, keeping its gaze fixed on a particular spot on the planet, which causes a shift in the distant background over the course of the observation. Some of the stars seem to make a slight turn to the right just before disappearing. This effect is due to refraction -- the starlight gets bent as it passes through the atmosphere, which acts as a lens. Random bright specks and streaks appearing from frame to frame are due to charged particles and cosmic rays hitting the camera detector. The aim of this observation was to observe seasonal changes in the brightness of Saturn's auroras, and to compare with the simultaneous observations made by Cassini's infrared and ultraviolet imaging spectrometers. The original images in this movie sequence have a size of 256x256 pixels; both the original size and a version enlarged to 500x500 pixels are available here. The small image size is the result of a setting on the camera that allows for shorter exposure times than full-size (1024x1024 pixel) images. This enabled Cassini to take more frames in a short time and still capture enough photons from the auroras for them to be visible. The images were taken in visible light using the Cassini spacecraft narrow-angle camera on July 20, 2017, at a distance of about 620,000 miles (1 million kilometers) from Saturn. The views look toward 74 degrees south latitude on Saturn. Image scale is about 0.9 mile (1.4 kilometers) per pixel on Saturn. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21623

  16. Applications of a pnCCD detector coupled to columnar structure CsI(Tl) scintillator system in ultra high energy X-ray Laue diffraction

    NASA Astrophysics Data System (ADS)

    Shokr, M.; Schlosser, D.; Abboud, A.; Algashi, A.; Tosson, A.; Conka, T.; Hartmann, R.; Klaus, M.; Genzel, C.; Strüder, L.; Pietsch, U.

    2017-12-01

    Most charge-coupled devices (CCDs) are made of silicon (Si) with typical active layer thicknesses of several microns. In the case of a pnCCD detector the sensitive Si thickness is 450 μm. However, for silicon-based detectors the quantum efficiency for hard X-rays drops significantly for photon energies above 10 keV. This drawback can be overcome by combining a pixelated silicon-based detector system with a columnar scintillator. Here we report on the characterization of a low-noise, fully depleted 128×128 pixel pnCCD detector with 75×75 μm2 pixel size coupled to a 700 μm thick columnar CsI(Tl) scintillator in the photon energy range from 1 keV to 130 keV. The excellent performance of the detection system in the hard X-ray range is demonstrated in a Laue-type X-ray diffraction experiment performed at the EDDI beamline of the BESSY II synchrotron on a set of several GaAs single crystals irradiated by white synchrotron radiation. With the columnar structure of the scintillator, the position resolution of the whole system reaches a value of less than one pixel. Using the presented detector system and considering the functional relation between indirect and direct photon events, Laue diffraction peaks with X-ray energies up to 120 keV were efficiently detected. As one possible application of the combined CsI-pnCCD system, we demonstrate that the accuracy of X-ray structure factors extracted from Laue diffraction peaks in the hard X-ray range can be significantly improved using the combined CsI(Tl)-pnCCD system compared to a bare pnCCD.

  17. Volume and tissue composition preserving deformation of breast CT images to simulate breast compression in mammographic imaging

    NASA Astrophysics Data System (ADS)

    Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.

    2009-02-01

    Images of mastectomy breast specimens have been acquired with a bench-top experimental cone beam CT (CBCT) system. The resulting images have been segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast into those for a compressed breast without altering the breast volume or regional breast density. With this technique, 3D breast deformation is separated into two 2D deformations in the coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged and resampling with the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view. The image data were first deformed by distorting the voxels with a uniform distortion ratio. These deformed data were then deformed again using distortion ratios varying with the breast thickness and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest-wall-to-nipple direction while shrinking it in the mediolateral direction, after which the data were re-sampled and converted into uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the same volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.
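
    The area-preserving resampling idea (stretch one axis, shrink the other, resample bilinearly) can be sketched for a single 2D view as below. This is a simplified illustration, not the authors' code: a single uniform distortion ratio is used, and bilinear resampling only approximately preserves the regional tissue composition that the full method preserves by construction.

```python
import numpy as np

def stretch_area_preserving(img, s):
    """Stretch an image by factor s vertically and 1/s horizontally with
    bilinear resampling, so the total pixel area (and hence, approximately,
    the total 'mass' of a density-like image) is unchanged."""
    h, w = img.shape
    new_h, new_w = int(round(h * s)), int(round(w / s))
    out = np.empty((new_h, new_w), dtype=float)
    for i in range(new_h):
        for j in range(new_w):
            # map the output pixel centre back to input coordinates
            y = (i + 0.5) / s - 0.5
            x = (j + 0.5) * s - 0.5
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            dy, dx = y - y0, x - x0
            y0c, y1c = np.clip([y0, y0 + 1], 0, h - 1)
            x0c, x1c = np.clip([x0, x0 + 1], 0, w - 1)
            out[i, j] = ((1 - dy) * (1 - dx) * img[y0c, x0c]
                         + (1 - dy) * dx * img[y0c, x1c]
                         + dy * (1 - dx) * img[y1c, x0c]
                         + dy * dx * img[y1c, x1c])
    return out
```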

  18. Curiosity's Mars Hand Lens Imager (MAHLI): Initial Observations and Activities

    NASA Technical Reports Server (NTRS)

    Edgett, K. S.; Yingst, R. A.; Minitti, M. E.; Robinson, M. L.; Kennedy, M. R.; Lipkaman, L. J.; Jensen, E. H.; Anderson, R. C.; Bean, K. M.; Beegle, L. W.

    2013-01-01

    MAHLI (Mars Hand Lens Imager) is a 2-megapixel focusable macro lens color camera on the turret on Curiosity's robotic arm. The investigation centers on stratigraphy, grain-scale texture, structure, mineralogy, and morphology of geologic materials at Curiosity's Gale robotic field site. MAHLI acquires focused images at working distances of 2.1 cm to infinity; for reference, at 2.1 cm the scale is 14 microns/pixel; at 6.9 cm it is 31 microns/pixel, like the Spirit and Opportunity Microscopic Imager (MI) cameras.

  19. Direct Integration of Dynamic Emissive Displays into Knitted Fabric Structures

    NASA Astrophysics Data System (ADS)

    Bellingham, Alyssa

    Smart textiles are revolutionizing the textile industry by combining technology into fabric to give clothing new abilities including communication, transformation, and energy conduction. The advent of electroluminescent fibers, which emit light in response to an applied electric field, has opened the door for fabric-integrated emissive displays in textiles. This thesis focuses on the development of a flexible and scalable emissive fabric display with individually addressable pixels disposed within a fabric matrix. The pixels are formed in areas where a fiber supporting the dielectric and phosphor layers of an electroluminescent structure contacts a conductive surface. This conductive surface can be an external conductive fiber, yarn or wire, or a translucent conductive material layer deposited at set points along the electroluminescent fibers. Different contacting methods are introduced and the different ways the EL yarns can be incorporated into the knitted fabric are discussed. EL fibers were fabricated using a single yarn coating system with a custom, adjustable 3D printed slot die coater for even distribution of material onto the supporting fiber substrates. These fibers are mechanically characterized inside of and outside of a knitted fabric matrix to determine their potential for various applications, including wearables. A 4-pixel dynamic emissive display prototype is fabricated and characterized. This is the first demonstration of an all-knit emissive display with individually controllable pixels. The prototype is composed of a grid of fibers supporting the dielectric and phosphor layers of an electroluminescent (EL) device structure, called EL fibers, and conductive fibers acting as the top electrode. This grid is integrated into a biaxial weft knit structure where the EL fibers make up the rows and conductive fibers make up the columns of the reinforcement yarns inside the supporting weft knit. The pixels exist as individual segments of electroluminescence that occur where the conductive fibers contact the EL fibers. A passive matrix addressing scheme was used to apply a voltage to each pixel individually, creating a display capable of dynamically communicating information. Optical measurements of the intensity and color of emitted light were used to quantify the performance of the display and compare it to state-of-the-art display technologies. The charge-voltage (Q-V) electrical characterization technique is used to gain information about the ACPEL fiber device operation, and mechanical tests were performed to determine the effect everyday wear and tear would have on the performance of the display. The presented textile display structure and method of producing fibers with individual sections of electroluminescence addresses the shortcomings in existing textile display technology and provides a route to directly integrated communicative textiles for applications ranging from biomedical research and monitoring to fashion. An extensive discussion of the materials and methods of production needed to scale this textile display technology and incorporate it into wearable applications is presented.

  20. Noise suppression for dual-energy CT via penalized weighted least-square optimization with similarity-based regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harms, Joseph; Wang, Tonghe; Petrongolo, Michael

    Purpose: Dual-energy CT (DECT) expands applications of CT imaging in its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. The authors' group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve method performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign a high/low similarity value to a neighboring pixel if its CT value is close to/far from the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix with the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient. Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains a spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise standard deviation (STD). Similar performance on spatial resolution is observed on an anthropomorphic head phantom. In addition, results of PWLS-SBR show substantially improved image quality due to preservation of the image NPS. On the Catphan©600 phantom, the NPS using PWLS-SBR has a correlation of 93% with that via direct matrix inversion, while the correlation drops to −52% for PWLS-EPR. Electron density measurement studies indicate high accuracy of PWLS-SBR. On seven different materials, the measured electron densities calculated from the decomposed material images using PWLS-SBR have a root-mean-square error (RMSE) of 1.20%, while the results of PWLS-EPR have an RMSE of 2.21%. In the study on a head-and-neck patient, PWLS-SBR is shown to reduce noise STD by a factor of 3 on material images with image quality comparable to CT images, whereas fine structures are lost in the PWLS-EPR result. Additionally, PWLS-SBR better preserves low contrast in the tissue image. Conclusions: The authors propose improvements to the regularization term of an optimization framework which performs iterative image-domain decomposition for DECT with noise suppression. The regularization term avoids calculation of the image gradient and is based on pixel similarity. The proposed method not only achieves high decomposition accuracy, but also improves over the previous algorithm on NPS as well as spatial resolution.

  1. Noise suppression for dual-energy CT via penalized weighted least-square optimization with similarity-based regularization

    PubMed Central

    Harms, Joseph; Wang, Tonghe; Petrongolo, Michael; Niu, Tianye; Zhu, Lei

    2016-01-01

    Purpose: Dual-energy CT (DECT) expands applications of CT imaging in its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. The authors' group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve method performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign a high/low similarity value to a neighboring pixel if its CT value is close to/far from the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix with the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient. Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains a spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise standard deviation (STD). Similar performance on spatial resolution is observed on an anthropomorphic head phantom. In addition, results of PWLS-SBR show substantially improved image quality due to preservation of the image NPS. On the Catphan©600 phantom, the NPS using PWLS-SBR has a correlation of 93% with that via direct matrix inversion, while the correlation drops to −52% for PWLS-EPR. Electron density measurement studies indicate high accuracy of PWLS-SBR. On seven different materials, the measured electron densities calculated from the decomposed material images using PWLS-SBR have a root-mean-square error (RMSE) of 1.20%, while the results of PWLS-EPR have an RMSE of 2.21%. In the study on a head-and-neck patient, PWLS-SBR is shown to reduce noise STD by a factor of 3 on material images with image quality comparable to CT images, whereas fine structures are lost in the PWLS-EPR result. Additionally, PWLS-SBR better preserves low contrast in the tissue image. Conclusions: The authors propose improvements to the regularization term of an optimization framework which performs iterative image-domain decomposition for DECT with noise suppression. The regularization term avoids calculation of the image gradient and is based on pixel similarity. The proposed method not only achieves high decomposition accuracy, but also improves over the previous algorithm on NPS as well as spatial resolution. PMID:27147376
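
    The core similarity weighting can be sketched as a range-weighted neighbourhood average, essentially one application of a row of the similarity matrix to the image vector. The snippet below is an illustration only: the actual PWLS-SBR method builds the (averaged high/low-energy) similarity matrices into an iterative penalized weighted least-square optimization, and the radius and Gaussian width used here are assumed values.

```python
import numpy as np

def similarity_filter(ct, radius=3, sigma=30.0):
    """For each pixel, average its neighbours weighted by a Gaussian of the
    CT-value difference (similar materials get high weight). This mimics the
    action of one row of the similarity matrix on the image vector."""
    h, w = ct.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = ct[y0:y1, x0:x1].astype(float)
            wgt = np.exp(-((patch - ct[y, x]) ** 2) / (2.0 * sigma ** 2))
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```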

  2. The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites

    NASA Astrophysics Data System (ADS)

    Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.

    2016-07-01

    The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. As the areas with contiguous coverage by stereo data are increasingly abundant, we also present original data related to the analysis of image blocks and address methodology aspects of newly established procedures for the generation of multi-orbit DTMs and image mosaics. The current results suggest that multi-orbit DTMs with grid spacing of 50 m can be feasible for large parts of the surface, as well as brightness-adjusted image mosaics with co-registration accuracy of adjacent strips on the order of one pixel, and at the highest image resolution available. These characteristics are demonstrated by regional multi-orbit data products covering the MC-11 (East) quadrangle of Mars, representing the first prototype of a new HRSC data product level.

  3. High Accuracy 3D Processing of Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Gruen, A.; Zhang, L.; Kocaman, S.

    2007-01-01

    Automatic DSM/DTM generation reproduces not only the general features but also the detailed features of the terrain relief, with a height accuracy of around 1 pixel in cooperative terrain: RMSE values of 1.3-1.5 m (1.0-2.0 pixels) for IKONOS and 2.9-4.6 m (0.5-1.0 pixels) for SPOT5 HRS. For 3D city modeling, the manual and semi-automatic feature extraction capability of SAT-PP provides a good basis. The tools of SAT-PP allowed stereo-measurements of points on the roofs in order to generate a 3D city model with CCM. The results show that building models with main roof structures can be successfully extracted from HRSI. As expected, more details are visible with Quickbird.

  4. Micro-valve pump light valve display

    DOEpatents

    Yeechun Lee.

    1993-01-19

    A flat panel display incorporates a plurality of micro-pump light valves (MLVs) to form pixels for recreating an image. Each MLV consists of a dielectric drop sandwiched between substrates, at least one of which is transparent, a holding electrode for maintaining the drop outside a viewing area, and a switching electrode for accelerating the drop from a location within the holding electrode to a location within the viewing area. The substrates may further define non-wetting surface areas to create potential energy barriers to assist in controlling movement of the drop. The forces acting on the drop are quadratic in nature, providing a nonlinear response for increased image contrast. A crossed electrode structure can be used to activate the pixels, whereby a large flat panel display is formed without active driver components at each pixel.

  5. Micro-valve pump light valve display

    DOEpatents

    Lee, Yee-Chun

    1993-01-01

    A flat panel display incorporates a plurality of micro-pump light valves (MLVs) to form pixels for recreating an image. Each MLV consists of a dielectric drop sandwiched between substrates, at least one of which is transparent, a holding electrode for maintaining the drop outside a viewing area, and a switching electrode for accelerating the drop from a location within the holding electrode to a location within the viewing area. The substrates may further define non-wetting surface areas to create potential energy barriers to assist in controlling movement of the drop. The forces acting on the drop are quadratic in nature, providing a nonlinear response for increased image contrast. A crossed electrode structure can be used to activate the pixels, whereby a large flat panel display is formed without active driver components at each pixel.

  6. Reconstruction of Missing Pixels in Satellite Images Using the Data Interpolating Empirical Orthogonal Function (DINEOF)

    NASA Astrophysics Data System (ADS)

    Liu, X.; Wang, M.

    2016-02-01

    For coastal and inland waters, spatially complete and frequent satellite measurements are important for monitoring and understanding coastal biological and ecological processes and phenomena, such as diurnal variations. High-frequency images of the water diffuse attenuation coefficient at the wavelength of 490 nm (Kd(490)) derived from the Korean Geostationary Ocean Color Imager (GOCI) provide a unique opportunity to study the diurnal variation of water turbidity in the coastal regions of the Bohai Sea, Yellow Sea, and East China Sea. However, there are many missing pixels in the original GOCI-derived Kd(490) images due to clouds and various other reasons. The Data Interpolating Empirical Orthogonal Function (DINEOF) is a method to reconstruct missing data in geophysical datasets based on Empirical Orthogonal Functions (EOFs). In this study, DINEOF is applied to GOCI-derived Kd(490) data in the Yangtze River mouth and Yellow River mouth regions, the DINEOF-reconstructed Kd(490) data are used to fill in the missing pixels, and the spatial patterns and temporal functions of the first three EOF modes are used to investigate the sub-diurnal variation due to tidal forcing. In addition, the DINEOF method is also applied to data from the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (SNPP) satellite to reconstruct missing pixels in the daily Kd(490) and chlorophyll-a concentration images, and some application examples in the Chesapeake Bay and the Gulf of Mexico will be presented.
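
    A minimal DINEOF-style gap filler can be written in a few lines: missing values are initialised to zero anomalies and iteratively replaced by a truncated EOF (SVD) reconstruction until convergence. This sketch omits the cross-validated choice of the number of modes used in the full DINEOF procedure and assumes the data are arranged as a space-by-time matrix with at least some valid observations per pixel.

```python
import numpy as np

def dineof(data, n_modes=3, n_iter=50, tol=1e-6):
    """Minimal DINEOF-style gap filling: 'data' is a 2-D (space x time)
    float array with NaNs at missing pixels.  Missing entries are filled
    with a truncated EOF (SVD) reconstruction, iterated to convergence."""
    mask = np.isnan(data)
    col_mean = np.nanmean(data, axis=1, keepdims=True)  # per-pixel time mean
    anom = data - col_mean                              # work with anomalies
    anom[mask] = 0.0
    prev = anom[mask].copy()
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(anom, full_matrices=False)
        recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
        anom[mask] = recon[mask]                        # update only the gaps
        change = np.sqrt(np.mean((anom[mask] - prev) ** 2))
        if change < tol:
            break
        prev = anom[mask].copy()
    return anom + col_mean
```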

  7. The Phase-II ATLAS ITk pixel upgrade

    NASA Astrophysics Data System (ADS)

    Terzo, S.

    2017-07-01

    The entire tracking system of the ATLAS experiment will be replaced during the LHC Phase-II shutdown (foreseen to take place around 2025) by an all-silicon detector called the "ITk" (Inner Tracker). The innermost portion of the ITk will consist of a pixel detector with five layers in the barrel region and ring-shaped supports in the end-cap regions. It will be instrumented with new sensor and readout electronics technologies to improve the tracking performance and cope with the HL-LHC environment, which will be severe in terms of occupancy and radiation levels. The new pixel system could include up to 14 m2 of silicon, depending on the final layout, which is expected to be decided in 2017. Several layout options are being investigated at the moment, including some with novel inclined support structures in the barrel end-cap overlap region and others with very long innermost barrel layers. Forward coverage could be as high as |η| < 4. Supporting structures will be based on low-mass, highly stable and highly thermally conductive carbon-based materials cooled by evaporative carbon dioxide circulated in thin-walled titanium pipes embedded in the structures. Planar, 3D, and CMOS sensors are being investigated to identify the optimal technology, which may be different for the various layers. The RD53 Collaboration is developing the new readout chip. The pixel off-detector readout electronics will be implemented in the framework of the general ATLAS trigger and DAQ system. A readout speed of up to 5 Gb/s per data link will be needed in the innermost layers, going down to 640 Mb/s for the outermost. Because of the very high radiation level inside the detector, the first part of the transmission has to be implemented electrically, with signals converted for optical transmission at larger radii. Extensive tests are being carried out to prove the feasibility of implementing serial powering, which has been chosen as the baseline for the ITk pixel system due to the reduced material in the servicing cables foreseen for this option.

  8. The Europa Imaging System (EIS): High-Resolution, 3-D Insight into Europa's Geology, Ice Shell, and Potential for Current Activity

    NASA Astrophysics Data System (ADS)

    Turtle, E. P.; McEwen, A. S.; Collins, G. C.; Fletcher, L. N.; Hansen, C. J.; Hayes, A.; Hurford, T., Jr.; Kirk, R. L.; Barr, A.; Nimmo, F.; Patterson, G.; Quick, L. C.; Soderblom, J. M.; Thomas, N.

    2015-12-01

    The Europa Imaging System will transform our understanding of Europa through global decameter-scale coverage, three-dimensional maps, and unprecedented meter-scale imaging. EIS combines narrow-angle and wide-angle cameras (NAC and WAC) designed to address high-priority Europa science and reconnaissance goals. It will: (A) Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar; (B) Constrain formation processes of surface features and the potential for current activity by characterizing endogenic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure, and by searching for evidence of recent activity, including potential plumes; and (C) Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. The NAC provides very high-resolution, stereo reconnaissance, generating 2-km-wide swaths at 0.5-m pixel scale from 50-km altitude, and uses a gimbal to enable independent targeting. NAC observations also include: near-global (>95%) mapping of Europa at ≤50-m pixel scale (to date, only ~14% of Europa has been imaged at ≤500 m/pixel, with best pixel scale 6 m); regional and high-resolution stereo imaging at <1-m/pixel; and high-phase-angle observations for plume searches. The WAC is designed to acquire pushbroom stereo swaths along flyby ground-tracks, generating digital topographic models with 32-m spatial scale and 4-m vertical precision from 50-km altitude. These data support characterization of cross-track clutter for radar sounding. The WAC also performs pushbroom color imaging with 6 broadband filters (350-1050 nm) to map surface units and correlations with geologic features and topography. EIS will provide comprehensive data sets essential to fulfilling the goal of exploring Europa to investigate its habitability and perform collaborative science with other investigations, including cartographic and geologic maps, regional and high-resolution digital topography, GIS products, color and photometric data products, a geodetic control network tied to radar altimetry, and a database of plume-search observations.

  9. Influence of the Pixel Sizes of Reference Computed Tomography on Single-photon Emission Computed Tomography Image Reconstruction Using Conjugate-gradient Algorithm.

    PubMed

    Okuda, Kyohei; Sakimoto, Shota; Fujii, Susumu; Ida, Tomonobu; Moriyama, Shigeru

    The use of a frame-of-reference in the computed tomography (CT) coordinate system for single-photon emission computed tomography (SPECT) reconstruction is one of the advanced characteristics of the xSPECT reconstruction system. The aim of this study was to reveal the influence of this high-resolution frame-of-reference on the xSPECT reconstruction. A 99mTc line-source phantom and a National Electrical Manufacturers Association (NEMA) image quality phantom were scanned using the SPECT/CT system. xSPECT reconstructions were performed with reference CT images of different display field-of-view (DFOV) and pixel sizes. The pixel sizes of the reconstructed xSPECT images were close to 2.4 mm, the size at which the projection data were originally acquired, even when the reference CT resolution was varied. The full width at half maximum (FWHM) of the line source, the absolute recovery coefficient, and the background variability of the image quality phantom were independent of the DFOV size of the reference CT images. The results of this study revealed that the image quality of the reconstructed xSPECT images is not influenced by the resolution of the frame-of-reference used in the SPECT reconstruction.

  10. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers, which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
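
    The quantize-then-dither idea can be sketched as follows. This is an illustration of the principle rather than the exact FITS tiled-image convention: the quantization step is taken as a fraction of the measured noise, a uniform dither decorrelates the rounding error, and the resulting integers would then be Rice-compressed (not shown). The function names and the q parameter are assumptions.

```python
import numpy as np

def quantize_with_dither(pixels, noise_sigma, q=4.0, seed=0):
    """Convert float pixels to scaled integers for lossy tile compression.
    The step is noise_sigma / q (roughly q levels per noise sigma); the
    dither randomizes the rounding so photometry is not systematically biased."""
    rng = np.random.default_rng(seed)
    scale = noise_sigma / q
    dither = rng.random(pixels.shape)                 # uniform in [0, 1)
    ints = np.floor(pixels / scale + dither).astype(np.int32)
    return ints, scale, dither

def dequantize(ints, scale, dither):
    """Invert the mapping; in practice the dither stream would be regenerated
    from a stored random seed rather than stored alongside the data."""
    return (ints - dither + 0.5) * scale
```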

  11. Saturnian Hexagon Collage

    NASA Image and Video Library

    2016-12-06

    This collage of images from NASA's Cassini spacecraft shows Saturn's northern hemisphere and rings as viewed with four different spectral filters. Each filter is sensitive to different wavelengths of light and reveals clouds and hazes at different altitudes. Clockwise from top left, the filters used are sensitive to violet (420 nanometers), red (648 nanometers), near-infrared (728 nanometers) and infrared (939 nanometers) light. The image was taken with the Cassini spacecraft wide-angle camera on Dec. 2, 2016, at a distance of about 400,000 miles (640,000 kilometers) from Saturn. Image scale is 95 miles (153 kilometers) per pixel. The images have been enlarged by a factor of two. The original versions of these images, as sent by the spacecraft, have a size of 256 pixels by 256 pixels. Cassini's images are sometimes planned to be compressed to smaller sizes due to data storage limitations on the spacecraft, or to allow a larger number of images to be taken than would otherwise be possible. These images were obtained about two days before its first close pass by the outer edges of Saturn's main rings during its penultimate mission phase. http://photojournal.jpl.nasa.gov/catalog/PIA21053

  12. On edge-aware path-based color spatial sampling for Retinex: from Termite Retinex to Light Energy-driven Termite Retinex

    NASA Astrophysics Data System (ADS)

    Simone, Gabriele; Cordone, Roberto; Serapioni, Raul Paolo; Lecca, Michela

    2017-05-01

    Retinex theory estimates the human color sensation at any observed point by correcting its color based on the spatial arrangement of the colors in proximate regions. We revise two recent path-based, edge-aware Retinex implementations: Termite Retinex (TR) and Energy-driven Termite Retinex (ETR). Like the original Retinex implementation, TR and ETR scan the neighborhood of each image pixel along paths and rescale its chromatic intensities by intensity levels computed by reworking the colors of the pixels on the paths. Our interest in TR and ETR is due to their unique, content-based scanning scheme, which uses the image edges to define the paths and exploits a swarm intelligence model for guiding the spatial exploration of the image. The exploration scheme of ETR has been shown to be particularly effective: its paths are local minima of an energy functional designed to favor the sampling of image pixels highly relevant to color sensation. Nevertheless, since its computational complexity makes ETR hardly practicable, here we present a light version of it, named Light Energy-driven TR, obtained from ETR by implementing a modified, optimized minimization procedure and by exploiting parallel computing.
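
    The per-pixel, path-based character of these Retinex variants can be conveyed with a much simpler random-walk sketch: each pixel's lightness is the average, over several plain random paths, of its intensity divided by the maximum intensity met along the path (the classic ratio-reset outcome). TR and ETR instead use edge-aware, swarm-guided paths; the function and parameters below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def path_retinex(channel, n_paths=30, path_len=100, eps=1e-6, seed=0):
    """Random-path Retinex sketch for one color channel (float array in [0, 1]).
    For each pixel, several random walks are launched and the pixel's value is
    rescaled by the maximum intensity encountered on each walk."""
    rng = np.random.default_rng(seed)
    h, w = channel.shape
    out = np.zeros((h, w))
    steps = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for _ in range(n_paths):
                cy, cx, vmax = y, x, channel[y, x]
                for dy, dx in steps[rng.integers(0, 4, size=path_len)]:
                    cy = min(max(cy + dy, 0), h - 1)   # reflectless clamp at borders
                    cx = min(max(cx + dx, 0), w - 1)
                    vmax = max(vmax, channel[cy, cx])
                acc += channel[y, x] / (vmax + eps)
            out[y, x] = acc / n_paths
    return out
```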

  13. When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.

    PubMed

    Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui

    2018-05-01

    In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground-region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both the disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding two shortest paths that originate from the vanishing point and end at two pixels in the last row of the image. The proposed approach has been implemented and tested on 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions very accurately and robustly, and it achieves promising performance.
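
    The shortest-path step can be sketched with a standard Dijkstra search on a 4-connected pixel grid. The per-pixel cost below is a generic placeholder for the paper's combination of disparity and gray-image gradient terms; running it from the vanishing point to candidate pixels in the last image row yields the road-border paths described above. Function names are illustrative.

```python
import heapq
import numpy as np

def dijkstra_path(cost, src, dst):
    """Dijkstra shortest path on a 4-connected pixel grid.
    cost[y, x] is the per-pixel traversal cost; src and dst are (y, x) tuples."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == dst:
            break
        if d > dist[y, x]:
            continue                      # stale heap entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    # backtrack from dst to src
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1]

# e.g. left_border = dijkstra_path(cost, vanishing_point, (h - 1, x_left))
```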

  14. VK-phantom male with 583 structures and female with 459 structures, based on the sectioned images of a male and a female, for computational dosimetry

    PubMed Central

    Park, Jin Seo; Jung, Yong Wook; Choi, Hyung-Do; Lee, Ae-Kyoung

    2018-01-01

    The anatomical structures in most phantoms are classified according to tissue properties rather than according to their detailed structures, because the tissue properties, not the detailed structures, are what is considered important. However, if a phantom does not have detailed structures, the phantom will be unreliable because different tissues can be regarded as the same. Thus, we produced the Visible Korean (VK)-phantoms with detailed structures (male, 583 structures; female, 459 structures) based on segmented images of the whole male body (interval, 1.0 mm; pixel size, 1.0 mm2) and the whole female body (interval, 1.0 mm; pixel size, 1.0 mm2), using house-developed software to analyze the text string and voxel information for each of the structures. The density of each structure in the VK-phantom was calculated based on Virtual Population and a publication of the International Commission on Radiological Protection. In the future, we will standardize the size of each structure in the VK-phantoms. If the VK-phantoms are standardized and the mass density of each structure is precisely known, researchers will be able to measure the exact absorption rate of electromagnetic radiation in specific organs and tissues of the whole body. PMID:29659988

  15. VK-phantom male with 583 structures and female with 459 structures, based on the sectioned images of a male and a female, for computational dosimetry.

    PubMed

    Park, Jin Seo; Jung, Yong Wook; Choi, Hyung-Do; Lee, Ae-Kyoung

    2018-05-01

    The anatomical structures in most phantoms are classified according to tissue properties rather than according to their detailed structures, because the tissue properties, not the detailed structures, are what is considered important. However, if a phantom does not have detailed structures, the phantom will be unreliable because different tissues can be regarded as the same. Thus, we produced the Visible Korean (VK)-phantoms with detailed structures (male, 583 structures; female, 459 structures) based on segmented images of the whole male body (interval, 1.0 mm; pixel size, 1.0 mm2) and the whole female body (interval, 1.0 mm; pixel size, 1.0 mm2), using house-developed software to analyze the text string and voxel information for each of the structures. The density of each structure in the VK-phantom was calculated based on Virtual Population and a publication of the International Commission on Radiological Protection. In the future, we will standardize the size of each structure in the VK-phantoms. If the VK-phantoms are standardized and the mass density of each structure is precisely known, researchers will be able to measure the exact absorption rate of electromagnetic radiation in specific organs and tissues of the whole body.

  16. Numerical simulation of the modulation transfer function (MTF) in infrared focal plane arrays: simulation methodology and MTF optimization

    NASA Astrophysics Data System (ADS)

    Schuster, J.

    2018-02-01

    Military requirements demand both single- and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC), with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing sophistication of the pixel arrays in these FPAs, sophisticated modeling techniques are required to predict, understand, and benchmark the pixel array MTF that contributes to the total imaging system MTF. To model the pixel array MTF, computationally intensive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk), as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF is given and the simulation approach is discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) are discussed.
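
    How a detector MTF decomposes into constituent factors can be illustrated with a one-dimensional toy model: a pixel-aperture box convolved with a Gaussian charge-diffusion kernel gives a line spread function whose Fourier magnitude is the combined (sinc times Gaussian) MTF. This is far simpler than the 2D/3D device simulations discussed above, and the pitch and diffusion values are assumed for illustration.

```python
import numpy as np

def detector_mtf(pitch_um=15.0, diffusion_sigma_um=5.0, oversample=64, n=2048):
    """1-D detector MTF computed numerically from the line spread function:
    the product of a pixel-aperture (box -> sinc) term and a Gaussian
    charge-diffusion term."""
    dx = pitch_um / oversample                              # sample spacing, microns
    x = (np.arange(n) - n // 2) * dx
    aperture = (np.abs(x) <= pitch_um / 2).astype(float)    # one pixel wide box
    diffusion = np.exp(-x ** 2 / (2 * diffusion_sigma_um ** 2))
    lsf = np.convolve(aperture, diffusion, mode="same")
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                           # normalize MTF(0) = 1
    freqs = np.fft.rfftfreq(n, d=dx)                        # cycles per micron
    return freqs, mtf

freqs, mtf = detector_mtf()
nyquist = 1.0 / (2 * 15.0)                                  # Nyquist for a 15 um pitch
print("MTF at Nyquist ~", np.interp(nyquist, freqs, mtf))
```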

  17. Amorphous selenium direct detection CMOS digital x-ray imager with 25 micron pixel pitch

    NASA Astrophysics Data System (ADS)

    Scott, Christopher C.; Abbaszadeh, Shiva; Ghanbarzadeh, Sina; Allan, Gary; Farrier, Michael; Cunningham, Ian A.; Karim, Karim S.

    2014-03-01

    We have developed a high-resolution amorphous selenium (a-Se) direct detection imager using a large-area compatible back-end fabrication process on top of a CMOS active pixel sensor with 25 micron pixel pitch. Integration of a-Se with CMOS technology requires overcoming CMOS/a-Se interfacial strain, which initiates nucleation of crystalline selenium and results in high detector dark currents. A CMOS-compatible polyimide buffer layer was used to planarize the backplane and provide a low-stress and thermally stable surface for the a-Se. The buffer layer inhibits crystallization and provides detector stability, which is not only a performance factor but also critical for favorable long-term cost-benefit considerations in the application of CMOS digital x-ray imagers in medical practice. The detector structure is comprised of a polyimide (PI) buffer layer, the a-Se layer, and a gold (Au) top electrode. The PI layer is applied by spin-coating and is patterned using dry etching to open the backplane bond pads for wire bonding. Thermal evaporation is used to deposit the a-Se and Au layers, and the detector is operated in hole collection mode (i.e. with a positive bias on the Au top electrode). High-resolution a-Se diagnostic systems typically use 70 to 100 μm pixel pitch and have a pre-sampling modulation transfer function (MTF) that is significantly limited by the pixel aperture. Our results confirm that, for a densely integrated 25 μm pixel pitch CMOS array, the MTF approaches the fundamental material limit, i.e. where the MTF begins to be limited by the a-Se material properties and not the pixel aperture. Preliminary images demonstrating high spatial resolution have been obtained from a first prototype imager.

  18. Digital radiography using amorphous selenium: photoconductively activated switch (PAS) readout system.

    PubMed

    Reznik, Nikita; Komljenovic, Philip T; Germann, Stephen; Rowlands, John A

    2008-03-01

    A new amorphous selenium (a-Se) digital radiography detector is introduced. The proposed detector generates a charge image in the a-Se layer in a conventional manner, which is stored on electrode pixels at the surface of the a-Se layer. A novel method, called photoconductively activated switch (PAS), is used to read out the latent x-ray charge image. The PAS readout method uses lateral photoconduction at the a-Se surface which is a revolutionary modification of the bulk photoinduced discharge (PID) methods. The PAS method addresses and eliminates the fundamental weaknesses of the PID methods--long readout times and high readout noise--while maintaining the structural simplicity and high resolution for which PID optical readout systems are noted. The photoconduction properties of the a-Se surface were investigated and the geometrical design for the electrode pixels for a PAS radiography system was determined. This design was implemented in a single pixel PAS evaluation system. The results show that the PAS x-ray induced output charge signal was reproducible and depended linearly on the x-ray exposure in the diagnostic exposure range. Furthermore, the readout was reasonably rapid (10 ms for pixel discharge). The proposed detector allows readout of half a pixel row at a time (odd pixels followed by even pixels), thus permitting the readout of a complete image in 30 s for a 40 cm x 40 cm detector with the potential of reducing that time by using greater readout light intensity. This demonstrates that a-Se based x-ray detectors using photoconductively activated switches could form a basis for a practical integrated digital radiography system.

  19. Hexagons in Icy Terrain

    NASA Image and Video Library

    2018-01-23

    Ice-cemented ground covers the high latitudes of Mars, much as it does in Earth's cold climates. A common landform that occurs in icy terrain is polygons, as shown in this image from NASA's Mars Reconnaissance Orbiter (MRO). Polygonal patterns form by winter cooling and contraction cracking of the frozen ground. Over time these thin cracks develop and coalesce into a honeycomb network, with a few meters spacing between neighboring cracks. Shallow troughs mark the locations of the underground cracks, which are clearly visible from orbit. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 30.2 centimeters (11.9 inches) per pixel (with 1 x 1 binning); objects on the order of 91 centimeters (35.8 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22180

  20. Image processing on the image with pixel noise bits removed

    NASA Astrophysics Data System (ADS)

    Chuang, Keh-Shih; Wu, Christine

    1992-06-01

    Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of the noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
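
    The signal-bits/noise-bits separation can be reproduced with a short sketch: zero the least-significant bits and compare a Sobel edge map before and after. The number of noise bits and the helper names below are illustrative; the original studies determined the number of noise bits statistically for each modality.

```python
import numpy as np

def remove_noise_bits(img, n_noise_bits):
    """Zero the n least-significant (noise) bits of an integer image."""
    mask = ~((1 << n_noise_bits) - 1)
    return img.astype(np.int32) & mask

def sobel_magnitude(img):
    """Plain Sobel gradient magnitude (no external dependencies)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    f = img.astype(float)
    h, w = f.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = f[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot((win * kx).sum(), (win * ky).sum())
    return out

# Comparing sobel_magnitude(img) with sobel_magnitude(remove_noise_bits(img, 3))
# reproduces the kind of check described above; 3 noise bits is an example value.
```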

  1. Charge-coupled device for low background observations

    NASA Technical Reports Server (NTRS)

    Loh, Edwin D. (Inventor); Cheng, Edward S. (Inventor)

    2002-01-01

    A charge-coupled device with a low-emissivity metal layer located between the sensing layer and the substrate provides a reduction in ghost images. In a typical charge-coupled device consisting of a silicon sensing layer, a silicon dioxide insulating layer, a glass substrate and a metal carrier layer, a near-infrared photon that is not absorbed in the first pass enters the glass substrate and reflects from the metal carrier, thereby returning far from the original pixel on its entry path. The placement of a low-emissivity metal layer between the glass substrate and the sensing layer reflects near-infrared photons before they reach the substrate, so that they are absorbed in the silicon nearer the pixel of their point of entry and the reflected ghost image is coincident with the primary image for a sharper, brighter image.

  2. Pixel-level plasmonic microcavity infrared photodetector

    PubMed Central

    Jing, You Liang; Li, Zhi Feng; Li, Qian; Chen, Xiao Shuang; Chen, Ping Ping; Wang, Han; Li, Meng Yao; Li, Ning; Lu, Wei

    2016-01-01

    Recently, plasmonics has been central to the manipulation of photons on the subwavelength scale, and superior infrared imagers have opened novel applications in many fields. Here, we demonstrate the first pixel-level plasmonic microcavity infrared photodetector with a single quantum well integrated between metal patches and a reflection layer. Greater than one order of magnitude enhancement of the peak responsivity has been observed. The significant improvement originates from the highly confined optical mode in the cavity, leading to a strong coupling between photons and the quantum well, resulting in the enhanced photo-electric conversion process. Such strong coupling from the localized surface plasmon mode inside the cavity is independent of incident angles, offering a unique solution to high-performance focal plane array devices. This demonstration paves the way for important infrared optoelectronic devices for sensing and imaging. PMID:27181111

  3. Tradeoff between picture element dimensions and noncoherent averaging in side-looking airborne radar

    NASA Technical Reports Server (NTRS)

    Moore, R. K.

    1979-01-01

    An experiment was performed in which three synthetic-aperture images and one real-aperture image were successively degraded in spatial resolution, in one case retaining the same number of independent samples per pixel and in the other using the spatial degradation to allow averaging of different numbers of independent samples within each pixel. The original and degraded images were provided to three interpreters familiar with both aerial photographs and radar images. The interpreters were asked to grade each image in terms of their ability to interpret various specified features on the image. The numerical interpretability grades were then used as a quantitative measure of the utility of the different kinds of image processing and different resolutions. The experiment demonstrated empirically that the interpretability is related exponentially to the SGL volume, which is the product of the azimuth, range, and gray-level resolutions.

  4. Indium-bump-free antimonide superlattice membrane detectors on silicon substrates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamiri, M., E-mail: mzamiri@chtm.unm.edu, E-mail: skrishna@chtm.unm.edu; Klein, B.; Schuler-Sandy, T.

    2016-02-29

    We present an approach to realize antimonide superlattices on silicon substrates without using conventional indium-bump hybridization. In this approach, PIN superlattices are grown on top of a 60 nm Al0.6Ga0.4Sb sacrificial layer on a GaSb host substrate. Following the growth, the individual pixels are transferred using our epitaxial lift-off technique, which consists of a wet etch to undercut the pixels followed by a dry-stamp process to transfer the pixels to a silicon substrate prepared with a gold layer. Structural and optical characterization of the transferred pixels was done using an optical microscope, scanning electron microscopy, and photoluminescence. The interface between the transferred pixels and the new substrate was abrupt, and no significant degradation in the optical quality was observed. An indium-bump-free membrane detector was then fabricated using this approach. Spectral response measurements provided a 100% cut-off wavelength of 4.3 μm at 77 K. The performance of the membrane detector was compared to a control detector on the as-grown substrate. The membrane detector was limited by surface leakage current. The proposed approach could pave the way for wafer-level integration of photonic detectors on silicon substrates, which could dramatically reduce the cost of these detectors.

  5. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout †

    PubMed Central

    Ni, Yang

    2018-01-01

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout. PMID:29443903

  6. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout.

    PubMed

    Ni, Yang

    2018-02-14

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout.

  7. The NUC and blind pixel eliminating in the DTDI application

    NASA Astrophysics Data System (ADS)

    Su, Xiao Feng; Chen, Fan Sheng; Pan, Sheng Da; Gong, Xue Yi; Dong, Yu Cui

    2013-12-01

    Because infrared CMOS digital TDI (time delay and integration) offers a simple structure, excellent performance, and flexible operation, it has been adopted in a growing number of applications. Owing to limitations of the fabrication process, the focal plane array of an infrared detector exhibits significant non-uniformity (NU) and a certain blind pixel rate; both raise the noise and degrade TDI performance. In this paper, the elements with the greatest impact on system performance are analyzed: the NU of the optical system, the NU of the plane array, and the blind pixels in the plane array. An algorithm that combines background removal with a linear response model of the infrared detector is used to perform non-uniformity correction (NUC) when the detector array operates as a digital TDI. To eliminate the impact of blind pixels, a surplus pixel method is introduced; with this method the SNR (signal-to-noise ratio) is improved while the spatial and temporal resolution remains unchanged. Finally, an MWIR (mid-wave infrared) detector is used in an experiment, and the result demonstrates the effectiveness of the method.
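
    As a rough illustration of the linear-response NUC with background removal and blind-pixel handling described above, the sketch below applies a generic two-point (gain/offset) correction and replaces blind pixels by the mean of their valid neighbors; the calibration-frame approach and the neighbor averaging are assumptions made for illustration and do not reproduce the authors' surplus pixel method.

```python
import numpy as np

def two_point_nuc(raw, low_ref, high_ref, background=None):
    """Generic linear-response non-uniformity correction (a sketch).

    low_ref/high_ref: mean calibration frames at two uniform irradiance levels.
    background: optional dark/background frame subtracted first.
    Dead pixels (zero response) should be handled by the blind-pixel step."""
    if background is not None:
        raw = raw - background
        low_ref = low_ref - background
        high_ref = high_ref - background
    gain = (high_ref - low_ref).mean() / (high_ref - low_ref)   # per-pixel gain
    offset = low_ref.mean() - gain * low_ref                     # per-pixel offset
    return gain * raw + offset

def replace_blind_pixels(frame, blind_mask):
    """Replace blind pixels by the mean of their valid 3x3 neighbours."""
    out = frame.copy()
    for r, c in zip(*np.where(blind_mask)):
        r0, r1 = max(r - 1, 0), min(r + 2, frame.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, frame.shape[1])
        patch = frame[r0:r1, c0:c1]
        valid = ~blind_mask[r0:r1, c0:c1]
        if valid.any():
            out[r, c] = patch[valid].mean()
    return out
```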

  8. Fifty Years of Mars Imaging: from Mariner 4 to HiRISE

    NASA Image and Video Library

    2017-11-20

    This image from NASA's Mars Reconnaissance Orbiter (MRO) shows Mars' surface in detail. Mars has captured the imagination of astronomers for thousands of years, but it wasn't until the last half a century that we were able to capture images of its surface in detail. This particular site on Mars was first imaged in 1965 by the Mariner 4 spacecraft during the first successful fly-by mission to Mars. From an altitude of around 10,000 kilometers, this image (the ninth frame taken) achieved a resolution of approximately 1.25 kilometers per pixel. Since then, this location has been observed by six other visible cameras producing images with varying resolutions and sizes. This includes HiRISE (highlighted in yellow), which is the highest-resolution and has the smallest "footprint." This compilation, spanning Mariner 4 to HiRISE, shows each image at full-resolution. Beginning with Viking 1 and ending with our HiRISE image, this animation documents the historic imaging of a particular site on another world. In 1976, the Viking 1 orbiter began imaging Mars in unprecedented detail, and by 1980 had successfully mosaicked the planet at approximately 230 meters per pixel. In 1999, the Mars Orbiter Camera onboard the Mars Global Surveyor (1996) also imaged this site with its Wide Angle lens, at around 236 meters per pixel. This was followed by the Thermal Emission Imaging System on Mars Odyssey (2001), which also provided a visible camera producing the image we see here at 17 meters per pixel. Later in 2012, the High-Resolution Stereo Camera on the Mars Express orbiter (2003) captured this image of the surface at 25 meters per pixel. In 2010, the Context Camera on the Mars Reconnaissance Orbiter (2005) imaged this site at about 5 meters per pixel. Finally, in 2017, HiRISE acquired the highest resolution image of this location to date at 50 centimeters per pixel. When seen at this unprecedented scale, we can discern a crater floor strewn with small rocky deposits, boulders several meters across, and wind-blown deposits in the floors of small craters and depressions. This compilation of Mars images spanning over 50 years gives us a visual appreciation of the evolution of orbital Mars imaging over a single site. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 52.2 centimeters (20.6 inches) per pixel (with 2 x 2 binning); objects on the order of 156 centimeters (61.4 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA22115

  9. Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes

    PubMed Central

    Berhane, Tedros M.; Lane, Charles R.; Wu, Qiusheng; Anenkhonov, Oleg A.; Chepinoga, Victor V.; Autrey, Bradley C.; Liu, Hongxing

    2018-01-01

    Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar’s chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection—which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes. PMID:29707381
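
    A minimal sketch of the pixel-based random forest workflow described above follows; the file names, the 70/30 split, and the forest size are placeholders, while the three-layer feature stack (Quickbird band 3, WRI, mean texture) follows the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assume three co-registered rasters (Quickbird band 3, WRI, mean texture)
# stacked into shape (rows, cols, 3), plus an ROI label raster where 0 = unlabeled.
stack = np.load("three_layer_stack.npy")        # hypothetical file
labels = np.load("roi_labels.npy")              # hypothetical file

mask = labels > 0
X = stack[mask]                                  # (n_pixels, 3) feature matrix
y = labels[mask]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)
print("Overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# Classify every pixel in the scene with the trained forest
classified = rf.predict(stack.reshape(-1, 3)).reshape(stack.shape[:2])
```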

  10. Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes.

    PubMed

    Berhane, Tedros M; Lane, Charles R; Wu, Qiusheng; Anenkhonov, Oleg A; Chepinoga, Victor V; Autrey, Bradley C; Liu, Hongxing

    2018-01-01

    Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km 2 ) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar's chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection-which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes.

  11. Geological Interpretations of the Topography of Selected Regions of Venus from Arecibo to Goldstone Radar Interferometry

    NASA Technical Reports Server (NTRS)

    Jurgens, R. F.; Margot, J-L.; Simons, M.; Pritchard, M. E.; Slade, M. A.

    2002-01-01

    Radar interferometry using Arecibo to transmit and three antennas at Goldstone to receive was conducted on 14 dates in spring 2001. These data have so far been used to generate DEMs (digital elevation models) for several of the dates, with a pixel resolution of 0.5-1.0 km. Additional information is contained in the original extended abstract.

  12. High resolution as a key feature to perform accurate ELISPOT measurements using Zeiss KS ELISPOT readers.

    PubMed

    Malkusch, Wolf

    2005-01-01

    The enzyme-linked immunospot (ELISPOT) assay was originally developed for the detection of individual antibody-secreting B-cells. Since then, the method has been improved, and ELISPOT is used for the determination of the production of tumor necrosis factor (TNF)-alpha, interferon (IFN)-gamma, or various interleukins (IL-4, IL-5). ELISPOT measurements are performed in 96-well plates with nitrocellulose membranes either visually or by means of image analysis. Image analysis offers various procedures to overcome variable background intensity problems and separate true from false spots. ELISPOT readers offer a complete solution for precise and automatic evaluation of ELISPOT assays. Number, size, and intensity of each single spot can be determined, printed, or saved for further statistical evaluation. Cytokine spots are always round, but because of floating edges with the background, they have a nonsmooth borderline. Resolution is a key feature for precise ELISPOT detection. In standard applications, shape and edge steepness are essential parameters in addition to size and color for accurate spot recognition. These parameters need a minimum spot diameter of 6 pixels. Collecting one single image per well with a standard color camera with 750 x 560 pixels will result in a resolution much too low to get all of the spots in a specimen. IFN-gamma spots may have diameters of only 25 microm, and TNF-alpha spots just 15 microm. A 750 x 560 pixel image of a 6-mm well has a pixel size of 12 microm, resulting in only 1 or 2 pixels per spot. Using a precise microscope optic in combination with a high-resolution (1300 x 1030 pixel) integrating digital color camera, and collecting at least 2 x 2 images per well, results in a pixel size of 2.5 microm and, as a minimum, a 6-pixel diameter per spot. New approaches try to detect two cytokines per cell at the same time (e.g., IFN-gamma and IL-5). Standard staining procedures produce brownish spots (horseradish peroxidase) and blue spots (alkaline phosphatase). Problems may occur with color overlaps from cells producing both cytokines, resulting in violet spots. The latest experiments therefore try to use fluorescence labels as markers. Fluorescein isothiocyanate results in green spots and Rhodamine in red spots. Cells producing both cytokines appear yellow. These colors can be separated much more easily than the violet, red, and blue, especially at high resolution.
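
    The resolution argument above reduces to simple arithmetic; the short check below uses the pixel sizes (12 and 2.5 microns) and spot diameters (15 and 25 microns) quoted in the abstract.

```python
# Spot sampling arithmetic from the abstract: diameter in pixels = spot size / pixel size.
# Pixel sizes and spot sizes are taken directly from the text.

pixel_sizes_um = {"single 750x560 image per well": 12.0,
                  "2x2 tiles, 1300x1030 camera": 2.5}
spot_sizes_um = {"TNF-alpha": 15.0, "IFN-gamma": 25.0}

for setup, px in pixel_sizes_um.items():
    for cytokine, spot in spot_sizes_um.items():
        print(f"{setup}: {cytokine} spot spans {spot / px:.1f} pixels")

# The 6-pixel minimum diameter needed for shape/edge analysis is met only
# in the high-resolution configuration.
```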

  13. Resolution Enhancement of MODIS-derived Water Indices for Studying Persistent Flooding

    NASA Astrophysics Data System (ADS)

    Underwood, L. W.; Kalcic, M. T.; Fletcher, R. M.

    2012-12-01

    Monitoring coastal marshes for persistent flooding and salinity stress is a high priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offer timely and relatively accurate information for aiding wetland vegetation management. Monitoring accuracy is often limited by mixed pixels, which occur when the area represented by a pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within a 250m MODIS pixel. Fusion of MODIS and Landsat can improve both spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, a method for estimating sub-pixel values from a MODIS time series of a Normalized Difference Water Index (NDWI), based on temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily, NDWI computed from daily 250m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30-m data. Each MODIS pixel was disaggregated into a mixture of the eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. A new daily time series was produced using both the reconstructed 250-m MODIS, with enhanced features, and the approximated daily 30-m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike, to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering flooding events from Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High-resolution images were formed for all days in 2008 between the first cloud-free Landsat scene and the last cloud-free Landsat scene. To refine and validate flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels adjusted to marsh to optimize thresholds for MODIS-derived time series of NDWI. Seasonal fluctuations were adjusted by subtracting the ten-year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and a combination of indices were compared. Flooding persistence that was mapped with higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is that improved mapping of extent and duration of inundation can be provided.
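
    A minimal sketch of the temporal-weighting reconstruction step, as read from the abstract, is shown below for a single MODIS pixel; the exponential weight form is stated in the text, but the e-folding scale, the function names, and the example numbers are assumptions.

```python
import numpy as np

def reconstruct_modis_ndwi(ndwi_before, ndwi_after, days_before, days_after,
                           cover_fractions, decay_days=8.0):
    """Sketch of the sub-pixel NDWI reconstruction for one MODIS pixel.

    ndwi_before/ndwi_after: per-cover-type mean Landsat NDWI inside the MODIS
        pixel, from the Landsat scenes preceding/succeeding the MODIS date.
    days_before/days_after: time separation (days) to each Landsat scene.
    cover_fractions: fraction of the MODIS pixel occupied by each cover type.
    decay_days: e-folding scale of the exponential temporal weighting (assumed)."""
    w_before = np.exp(-days_before / decay_days)
    w_after = np.exp(-days_after / decay_days)
    per_cover = (w_before * ndwi_before + w_after * ndwi_after) / (w_before + w_after)
    # Reconstructed MODIS value = sum over cover types of fraction * estimated NDWI
    return np.sum(cover_fractions * per_cover)

# Example: one MODIS pixel with three of the eight cover types present (illustrative)
frac = np.array([0.5, 0.3, 0.2])
print(reconstruct_modis_ndwi(np.array([0.10, 0.60, 0.00]),
                             np.array([0.20, 0.70, 0.10]),
                             days_before=4, days_after=12, cover_fractions=frac))
```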

  14. Resolution Enhancement of MODIS-Derived Water Indices for Studying Persistent Flooding

    NASA Technical Reports Server (NTRS)

    Underwood, L. W.; Kalcic, Maria; Fletcher, Rose

    2012-01-01

    Monitoring coastal marshes for persistent flooding and salinity stress is a high priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offer timely and relatively accurate information for aiding wetland vegetation management. Monitoring activity accuracy is often limited by mixed pixels which occur when areas represented by the pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within 250m MODIS pixel. Fusion of MODIS and Landsat can improve both spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, using a method for estimating sub-pixel values from a MODIS time series of a Normalized Difference Water Index (NDWI), using temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily NDWI computed from daily 250m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30-m data. Each MODIS pixel was disaggregated into a mixture of the eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. A new daily time series was produced using both the reconstructed 250-m MODIS, with enhanced features, and the approximated daily 30-m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering flooding events by Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High resolution images were formed for all days in 2008 between the first cloud free Landsat scene and the last cloud-free Landsat scene. To refine and validate flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels adjusted to marsh to optimize thresholds for MODIS-derived time series of NDWI. Seasonal fluctuations were adjusted by subtracting ten year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and a combination of indices were compared. Flooding persistence that was mapped with higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is that improved mapping of extent and duration of inundation can be provided.

  15. Discovery of Finely Structured Dynamic Solar Corona Observed in the Hi-C Telescope

    NASA Technical Reports Server (NTRS)

    Winebarger, A.; Cirtain, J.; Golub, L.; DeLuca, E.; Savage, S.; Alexander, C.; Schuler, T.

    2014-01-01

    In the summer of 2012, the High-resolution Coronal Imager (Hi-C) flew aboard a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore examine how the intensity scales from AIA resolution to Hi-C resolution. For each low-resolution pixel, we calculate the standard deviation in the contributing high-resolution pixel intensities and compare that to the expected standard deviation calculated from the noise. If these numbers are approximately equal, the corona can be assumed to be smoothly varying, i.e., to have no evidence of substructure in the Hi-C image to within Hi-C's ability to measure it given its throughput and readout noise. A standard deviation much larger than the noise value indicates the presence of substructure. We calculate these values for each low-resolution pixel for each frame of the Hi-C data. On average, 70 percent of the pixels in each Hi-C image show no evidence of substructure. The locations where substructure is prevalent are the moss regions and regions of sheared magnetic field. We also find that the level of substructure varies significantly over the roughly 160 s of the Hi-C data analyzed here. This result indicates that the finely structured corona is concentrated in regions of heating and is highly time dependent.
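
    The substructure test described above can be sketched as a per-block comparison of the measured scatter against an expected noise level; the shot-plus-read-noise model, the block size, and the exceedance factor below are generic assumptions rather than the authors' calibration.

```python
import numpy as np

def substructure_map(hi_res, block=5, read_noise=20.0, gain=1.0, factor=2.0):
    """Flag low-resolution pixels whose sub-pixel scatter exceeds the noise.

    hi_res: Hi-C-like image (DN); block: high-res pixels per low-res pixel
    along each axis. The shot-plus-read-noise model, gain, and exceedance
    factor are generic assumptions, not the authors' calibration."""
    ny, nx = hi_res.shape[0] // block, hi_res.shape[1] // block
    blocks = hi_res[:ny * block, :nx * block].reshape(ny, block, nx, block)
    measured_std = blocks.std(axis=(1, 3))
    mean_signal = blocks.mean(axis=(1, 3))
    expected_std = np.sqrt(np.maximum(mean_signal, 0.0) / gain + read_noise ** 2)
    return measured_std > factor * expected_std   # True where substructure is indicated

# Fraction of low-resolution pixels flagged in a synthetic, shot-noise-only frame:
frame = np.random.poisson(200.0, size=(500, 500)).astype(float)
print(substructure_map(frame).mean())
```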

  16. DISCOVERY OF FINELY STRUCTURED DYNAMIC SOLAR CORONA OBSERVED IN THE Hi-C TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winebarger, Amy R.; Cirtain, Jonathan; Savage, Sabrina

    In the Summer of 2012, the High-resolution Coronal Imager (Hi-C) flew on board a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore examine how the intensity scales from AIA resolution to Hi-C resolution. For each low-resolution pixel, we calculate the standard deviation in the contributing high-resolution pixel intensities and compare that to the expected standard deviation calculated from the noise. If these numbers are approximately equal, the corona can be assumed to be smoothly varying, i.e., have no evidence of substructure in the Hi-C image to within Hi-C's ability to measure it given its throughput and readout noise. A standard deviation much larger than the noise value indicates the presence of substructure. We calculate these values for each low-resolution pixel for each frame of the Hi-C data. On average, 70% of the pixels in each Hi-C image show no evidence of substructure. The locations where substructure is prevalent are the moss regions and regions of sheared magnetic field. We also find that the level of substructure varies significantly over the roughly 160 s of the Hi-C data analyzed here. This result indicates that the finely structured corona is concentrated in regions of heating and is highly time dependent.

  17. Physically-based parameterization of spatially variable soil and vegetation using satellite multispectral data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1989-01-01

    A stochastic-geometric landsurface reflectance model is formulated and tested for the parameterization of spatially variable vegetation and soil at subpixel scales using satellite multispectral images without ground truth. Landscapes are conceptualized as 3-D Lambertian reflecting surfaces consisting of plant canopies, represented by solid geometric figures, superposed on a flat soil background. A computer simulation program is developed to investigate image characteristics at various spatial aggregations representative of satellite observational scales, or pixels. The evolution of the shape and structure of the red-infrared space, or scattergram, of typical semivegetated scenes is investigated by sequentially introducing model variables into the simulation. The analytical moments of the total pixel reflectance, including the mean, variance, spatial covariance, and cross-spectral covariance, are derived in terms of the moments of the individual fractional cover and reflectance components. The moments are applied to the solution of the inverse problem: The estimation of subpixel landscape properties on a pixel-by-pixel basis, given only one multispectral image and limited assumptions on the structure of the landscape. The landsurface reflectance model and inversion technique are tested using actual aerial radiometric data collected over regularly spaced pecan trees, and using both aerial and LANDSAT Thematic Mapper data obtained over discontinuous, randomly spaced conifer canopies in a natural forested watershed. Different amounts of solar backscattered diffuse radiation are assumed and the sensitivity of the estimated landsurface parameters to those amounts is examined.

  18. Pixelated filters for spatial imaging

    NASA Astrophysics Data System (ADS)

    Mathieu, Karine; Lequime, Michel; Lumeau, Julien; Abel-Tiberini, Laetitia; Savin De Larclause, Isabelle; Berthon, Jacques

    2015-10-01

    Small satellites are often used by space agencies to meet scientific mission requirements. Their payloads are composed of various instruments that collect an increasing amount of data while respecting growing constraints on volume and mass, so small integrated cameras have taken a favored place among these instruments. To provide scene-specific color information, pixelated filters appear more attractive than filter wheels. The work presented here, in collaboration with Institut Fresnel, deals with the manufacturing of this kind of component based on thin-film technologies and photolithography processes. CCD detectors with a pixel pitch of about 30 μm were considered. In the configuration where the matrix filters are positioned closest to the detector, they are composed of 2x2 macro-pixels (i.e., 4 filters). These 4 filters have a bandwidth of about 40 nm and are centered at 550, 700, 770, and 840 nm, respectively, with a specific rejection rate defined over the visible spectral range [500-900 nm]. After an intensive design step, 4 thin-film structures were developed with a maximum thickness of 5 μm. A run of tests allowed us to choose the optimal micro-structuring parameters. The 100x100 matrix filter prototypes have been successfully manufactured with lift-off and ion-assisted deposition processes. Spatial and spectral characterization with a dedicated metrology bench showed that the initial specifications and simulations were globally met. These performances remove the technological barriers to high-end integrated multispectral imaging.

  19. Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.

    PubMed

    Saadia, Ayesha; Rashdi, Adnan

    2016-12-01

    Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects the quality of these images and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter has been proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel; fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of an image. In stage 2, the resultant image is further improved by a fractional order integration filter. The effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and for real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, different metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results for artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising of echocardiographic images is effective in noise suppression/removal. It not only removes noise from an image but also preserves edges and other important structures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
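
    A sketch of the stage-1 filtering step is given below; a Gaussian similarity function stands in for the paper's fuzzy membership rules, and stage 2 (fractional order integration) is omitted.

```python
import numpy as np

def fuzzy_weighted_mean(image, sigma=20.0):
    """Stage-1 style speckle smoothing: replace each pixel by a weighted mean
    of its 3x3 neighbourhood, with weights that fall off for neighbours whose
    value differs strongly from the centre pixel (a Gaussian similarity
    function is used here as a stand-in for the paper's fuzzy rules)."""
    padded = np.pad(image.astype(float), 1, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + 3, j:j + 3]
            center = padded[i + 1, j + 1]
            weights = np.exp(-((window - center) ** 2) / (2.0 * sigma ** 2))
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

# Usage: dissimilar neighbours get low weight, so edges are largely preserved
noisy = np.random.rand(64, 64) * 255.0
smoothed = fuzzy_weighted_mean(noisy)
```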

  20. Tracking brain motion during the cardiac cycle using spiral cine-DENSE MRI

    PubMed Central

    Zhong, Xiaodong; Meyer, Craig H.; Schlesinger, David J.; Sheehan, Jason P.; Epstein, Frederick H.; Larner, James M.; Benedict, Stanley H.; Read, Paul W.; Sheng, Ke; Cai, Jing

    2009-01-01

    Cardiac-synchronized brain motion is well documented, but the accurate measurement of such motion on a pixel-by-pixel basis has been hampered by the lack of a proper imaging technique. In this article, the authors present the implementation of an autotracking spiral cine displacement-encoded stimulated echo (DENSE) magnetic resonance imaging (MRI) technique for the measurement of pulsatile brain motion during the cardiac cycle. Displacement-encoded dynamic MR images of three healthy volunteers were acquired throughout the cardiac cycle using the spiral cine-DENSE pulse sequence gated to the R wave of an electrocardiogram. Pixelwise Lagrangian displacement maps were computed, and 2D displacement as a function of time was determined for selected regions of interest. Different intracranial structures exhibited characteristic motion amplitude, direction, and pattern throughout the cardiac cycle. Time-resolved displacement curves revealed the pathway of pulsatile motion from brain stem to peripheral brain lobes. These preliminary results demonstrated that the spiral cine-DENSE MRI technique can be used to measure cardiac-synchronized pulsatile brain motion on a pixel-by-pixel basis with high temporal/spatial resolution and sensitivity. PMID:19746774

  1. Fast Readout Architectures for Large Arrays of Digital Pixels: Examples and Applications

    PubMed Central

    Gabrielli, A.

    2014-01-01

    Modern pixel detectors, particularly those designed and constructed for applications and experiments for high-energy physics, are commonly built implementing general readout architectures, not specifically optimized in terms of speed. High-energy physics experiments use bidimensional matrices of sensitive elements located on a silicon die. Sensors are read out via other integrated circuits bump bonded over the sensor dies. The speed of the readout electronics can significantly increase the overall performance of the system, and so here novel forms of readout architectures are studied and described. These circuits have been investigated in terms of speed and are particularly suited for large monolithic, low-pitch pixel detectors. The idea is to have a small simple structure that may be expanded to fit large matrices without affecting the layout complexity of the chip, while maintaining a reasonably high readout speed. The solutions might be applied to devices for applications not only in physics but also to general-purpose pixel detectors whenever online fast data sparsification is required. The paper presents also simulations on the efficiencies of the systems as proof of concept for the proposed ideas. PMID:24778588

  2. Charge collection properties in an irradiated pixel sensor built in a thick-film HV-SOI process

    NASA Astrophysics Data System (ADS)

    Hiti, B.; Cindro, V.; Gorišek, A.; Hemperek, T.; Kishishita, T.; Kramberger, G.; Krüger, H.; Mandić, I.; Mikuž, M.; Wermes, N.; Zavrtanik, M.

    2017-10-01

    Investigation of HV-CMOS sensors for use as a tracking detector in the ATLAS experiment at the upgraded LHC (HL-LHC) has recently been an active field of research. A potential candidate for a pixel detector built in Silicon-On-Insulator (SOI) technology has already been characterized in terms of radiation hardness to TID (Total Ionizing Dose) and charge collection after a moderate neutron irradiation. In this article we present results of an extensive irradiation hardness study with neutrons up to a fluence of 1×10^16 neq/cm^2. Charge collection in a passive pixelated structure was measured by Edge Transient Current Technique (E-TCT). The evolution of the effective space charge concentration was found to be compliant with the acceptor removal model, with the minimum of the space charge concentration being reached after 5×10^14 neq/cm^2. An investigation of the in-pixel uniformity of the detector response revealed parasitic charge collection by the epitaxial silicon layer characteristic for the SOI design. The results were backed by a numerical simulation of charge collection in an equivalent detector layout.
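
    For reference, the acceptor removal model mentioned above is commonly written in the form below; the exact parameterization used by the authors is not given in the record, so this is the standard form with my choice of symbols.

```latex
% Standard parameterization of the acceptor removal model (N_{eff,0}: initial
% effective doping, N_c: removable acceptor concentration, c: removal constant,
% g_c: stable deep-acceptor introduction rate, \Phi_{eq}: 1 MeV neutron
% equivalent fluence):
\[
  N_{\mathrm{eff}}(\Phi_{\mathrm{eq}}) \;=\;
  N_{\mathrm{eff},0} \,-\, N_c\left(1 - e^{-c\,\Phi_{\mathrm{eq}}}\right)
  \,+\, g_c\,\Phi_{\mathrm{eq}} .
\]
% The competition between the exponential removal term and the linear
% introduction term produces a minimum in the space charge concentration,
% consistent with the minimum reported near 5x10^14 neq/cm^2.
```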

  3. Modulate chopper technique used in pyroelectric uncooled focal plane array thermal imager

    NASA Astrophysics Data System (ADS)

    He, Yuqing; Jin, Weiqi; Liu, Guangrong; Gao, Zhiyun; Wang, Xia; Wang, Lingxue

    2002-09-01

    Pyroelectric uncooled focal plane array (FPA) thermal imagers have the advantages of low cost, small size, and high responsivity, and they can operate at room temperature, so they have progressed greatly in recent years. As a companion technique, the modulating chopper has become one of the key techniques in uncooled FPA thermal imaging systems. At present, the Archimedes spiral chopper is most commonly used: as it rotates, its edge sweeps across the detector's pixel array so that the pixels are exposed in sequence. This paper simulates the shape of such a chopper, analyzes the exposure time of every detector pixel, and analyzes the exposure sequence of the whole array. The analysis shows that the Archimedes spiral parameters, the detector's thermal time constant, the detector's geometrical dimensions, and the relative position of the detector with respect to the spiral edge are the important system parameters; they affect the chopper's exposure efficiency and uniformity. The chopper parameters should therefore be designed according to the practical requirements to achieve an appropriate chopper structure.
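
    For orientation, the geometry of an Archimedean spiral chopper edge can be written as below; the symbols are illustrative and are not taken from the paper.

```latex
% Archimedean spiral chopper edge in polar form (r_0: inner radius, b: radial
% growth per radian; both symbols are illustrative, not values from the paper):
\[
  r(\theta) = r_0 + b\,\theta .
\]
% For a blade rotating at angular speed \omega, the edge reaches radius r at
% time t(r) = (r - r_0)/(b\,\omega), so pixels at increasing radius are
% uncovered at the constant radial speed b\omega, i.e. exposed in sequence.
```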

  4. Shade images of forested areas obtained from LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1989-01-01

    The pixel size in present-day remote sensing systems is large enough to include different types of land cover. Depending upon the target area, several components may be present within the pixel. In forested areas, three main components are generally present: tree canopy, soil (understory), and shadow. The objective is to generate a shade (shadow) image of forested areas from multispectral measurements of LANDSAT MSS (Multispectral Scanner) data by implementing a linear mixing model, where shadow is considered one of the primary components in a pixel. The shade images are related to the observed variation in forest structure, i.e., the proportion of inferred shadow in a pixel is related to different forest ages, forest types, and tree crown cover. The Constrained Least Squares (CLS) method is used to generate shade images for eucalyptus forest and cerrado vegetation using LANDSAT MSS imagery over the Itapeva study area in Brazil. The resulting shade images may explain the differences in age for the eucalyptus forest and the differences in tree crown cover for the cerrado vegetation.
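
    A minimal sketch of constrained least-squares unmixing for the three-component (canopy, soil, shade) model is shown below; the endmember spectra are made-up numbers, and a generic SLSQP solver stands in for the CLS implementation used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_unmix(pixel, endmembers):
    """Solve pixel ~= endmembers @ f for fractions f >= 0 that sum to 1
    (constrained least squares; SLSQP is used here for simplicity)."""
    n = endmembers.shape[1]
    objective = lambda f: np.sum((endmembers @ f - pixel) ** 2)
    constraints = ({"type": "eq", "fun": lambda f: np.sum(f) - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, x0=np.full(n, 1.0 / n), method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x

# Illustrative 4-band endmember spectra (columns: canopy, soil, shade) -- made up
M = np.array([[0.05, 0.20, 0.02],
              [0.08, 0.25, 0.02],
              [0.45, 0.30, 0.03],
              [0.30, 0.35, 0.02]])
pixel = 0.6 * M[:, 0] + 0.1 * M[:, 1] + 0.3 * M[:, 2]
print(constrained_unmix(pixel, M))   # ~ [0.6, 0.1, 0.3]; the shade fraction forms the shade image
```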

  5. High-speed reconstruction of compressed images

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described showing that error-free reconstruction of the original 10-bit CR images can be achieved.

  6. Mapping shorelines to subpixel accuracy using Landsat imagery

    NASA Astrophysics Data System (ADS)

    Abileah, Ron; Vignudelli, Stefano; Scozzari, Andrea

    2013-04-01

    A promising method to accurately map the shoreline of oceans, lakes, reservoirs, and rivers is proposed and verified in this work. The method is applied to multispectral satellite imagery in two stages. The first stage is a classification of each image pixel into land/water categories using the conventional 'dark pixel' method. The approach presented here makes use of a single shortwave IR (SWIR) image band, if available. It is well known that SWIR has the least water-leaving radiance and relatively little sensitivity to water pollutants and suspended sediments. It is generally the darkest (over water) and most reliable single band for land-water discrimination. The boundary of the water cover map determined in stage 1 underestimates the water cover and often misses the true shoreline by up to one pixel. A more accurate shoreline would be obtained by connecting the center points of pixels with an exact 50-50 mix of water and land; stage 2 finds these 50-50 mix points. In the proposed method, the image data are interpolated and up-sampled to ten times the original resolution. The local gradient in radiance is used to find the direction to the shore, and the search then proceeds along that path for the interpolated pixel closest to a 50-50 mix. Landsat images with 30m resolution, processed by this method, may thus provide the shoreline accurate to 3m. Compared to similar approaches available in the literature, the proposed method discriminates sub-pixels crossed by the shoreline by using a criterion based on the absolute value of radiance, rather than its gradient. Preliminary experimentation with the algorithm shows that 10m accuracy is easily achieved and is often better than 5m. The proposed method can be used to study long-term shoreline changes by exploiting the 30 years of archived, world-wide coverage Landsat imagery. Landsat imagery is free and easily accessible for downloading. Some applications that exploit the Landsat dataset and the new method are discussed in the companion poster: "Case-studies of potential applications for highly resolved shorelines."
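
    A rough sketch of the stage-2 refinement, as read from the abstract, is given below; the choice of the 50-50 radiance level, the bicubic upsampling, and the step-limited gradient search are assumptions made to produce a runnable example rather than the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage

def subpixel_shoreline(swir, water_threshold, upsample=10, max_steps=15):
    """Stage-2 sketch: refine a dark-pixel shoreline to sub-pixel accuracy.

    swir: single SWIR band; water_threshold: dark-pixel land/water cutoff,
    also used here as the assumed 50-50 mix radiance level."""
    hi = ndimage.zoom(swir.astype(float), upsample, order=3)   # bicubic upsample
    water = hi < water_threshold
    boundary = water ^ ndimage.binary_erosion(water)           # water-side edge pixels
    gy, gx = np.gradient(hi)
    mix_level = water_threshold
    points = []
    for r, c in zip(*np.nonzero(boundary)):
        dr, dc = gy[r, c], gx[r, c]
        norm = np.hypot(dr, dc) or 1.0
        best, best_diff = (r, c), abs(hi[r, c] - mix_level)
        for step in range(1, max_steps):
            rr = int(round(r + step * dr / norm))
            cc = int(round(c + step * dc / norm))
            if not (0 <= rr < hi.shape[0] and 0 <= cc < hi.shape[1]):
                break
            diff = abs(hi[rr, cc] - mix_level)
            if diff < best_diff:
                best, best_diff = (rr, cc), diff
        points.append(best)
    return np.array(points) / upsample   # shoreline points in original pixel units
```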

  7. TU-FG-209-03: Exploring the Maximum Count Rate Capabilities of Photon Counting Arrays Based On Polycrystalline Silicon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, A K; Koniczek, M; Antonuk, L E

    Purpose: Photon counting arrays (PCAs) offer several advantages over conventional, fluence-integrating x-ray imagers, such as improved contrast by means of energy windowing. For that reason, we are exploring the feasibility and performance of PCA pixel circuitry based on polycrystalline silicon. This material, unlike the crystalline silicon commonly used in photon counting detectors, lends itself toward the economic manufacture of radiation tolerant, monolithic large area (e.g., ∼43×43 cm2) devices. In this presentation, exploration of maximum count rate, a critical performance parameter for such devices, is reported. Methods: Count rate performance for a variety of pixel circuit designs was explored through detailed circuit simulations over a wide range of parameters (including pixel pitch and operating conditions) with the additional goal of preserving good energy resolution. The count rate simulations assume input events corresponding to a 72 kVp x-ray spectrum with 20 mm Al filtration interacting with a CZT detector at various input flux rates. Output count rates are determined at various photon energy threshold levels, and the percentage of counts lost (e.g., due to deadtime or pile-up) is calculated from the ratio of output to input counts. The energy resolution simulations involve thermal and flicker noise originating from each circuit element in a design. Results: Circuit designs compatible with pixel pitches ranging from 250 to 1000 µm that allow count rates over a megacount per second per pixel appear feasible. Such rates are expected to be suitable for radiographic and fluoroscopic imaging. Results for the analog front-end circuitry of the pixels show that acceptable energy resolution can also be achieved. Conclusion: PCAs created using polycrystalline silicon have the potential to offer monolithic large-area detectors with count rate performance comparable to those of crystalline silicon detectors. Further improvement through detailed circuit simulations and prototyping is expected. This work was partially supported by NIH grant no. R01-EB000558.
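
    The counts-lost quantity defined above follows from standard dead-time models, written here in their usual form (the record does not state which model the simulations assume):

```latex
% Conventional dead-time models relating the true input rate n to the recorded
% output rate m for an effective per-pixel dead time \tau:
\[
  m_{\text{non-paralyzable}} = \frac{n}{1 + n\tau}, \qquad
  m_{\text{paralyzable}} = n\,e^{-n\tau}, \qquad
  \text{fraction of counts lost} = 1 - \frac{m}{n}.
\]
```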

  8. Chandra ACIS Sub-pixel Resolution

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.

    2011-05-01

    We investigate how to achieve the best possible ACIS spatial resolution by binning in ACIS sub-pixels and applying an event repositioning algorithm after removing pixel randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point source sizes and (2) detecting faint point sources. The size of a bright (but not piled-up), on-axis point source can be reduced by about 20-30%. With the improved resolution, we detect 20% more faint sources embedded in the extended, diffuse emission of a crowded field. We further discuss the false source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure through aliasing effects for dithered observations and does not worsen the positional accuracy.

  9. Application of low-noise CID imagers in scientific instrumentation cameras

    NASA Astrophysics Data System (ADS)

    Carbone, Joseph; Hutton, J.; Arnold, Frank S.; Zarnowski, Jeffrey J.; Vangorden, Steven; Pilon, Michael J.; Wadsworth, Mark V.

    1991-07-01

    CIDTEC has developed a PC-based instrumentation camera incorporating a preamplifier per row CID imager and a microprocessor/LCA camera controller. The camera takes advantage of CID X-Y addressability to randomly read individual pixels and potentially overlapping pixel subsets in true nondestructive (NDRO) as well as destructive readout modes. Using an oxy-nitride fabricated CID and the NDRO readout technique, pixel full well and noise levels of approximately 1×10^6 and 40 electrons, respectively, were measured. Data taken from test structures indicates noise levels (which appear to be 1/f limited) can be reduced by a factor of two by eliminating the nitride under the preamplifier gate. Due to software programmability, versatile readout capabilities, wide dynamic range, and extended UV/IR capability, this camera appears to be ideally suited for use in spectroscopy and other scientific applications.

  10. Investigating error structure of shuttle radar topography mission elevation data product

    NASA Astrophysics Data System (ADS)

    Becek, Kazimierz

    2008-08-01

    An attempt was made to experimentally assess the instrumental component of error of the C-band Shuttle Radar Topography Mission (SRTM) elevation data. This was achieved by comparing elevation data for 302 runways from airports all over the world with the SRTM data product. It was found that the rms of the instrumental error is about ±1.55 m. Modeling of the remaining SRTM error sources, including terrain relief and pixel size, shows that downsampling from 30 m to 90 m (1 to 3 arc-sec pixels) worsened the SRTM vertical accuracy threefold. It is suspected that the proximity of large metallic objects is a source of large SRTM errors. The achieved error estimates allow a pixel-based accuracy assessment of the SRTM elevation data product to be constructed. Vegetation-induced errors were not considered in this work.

  11. Solution processed integrated pixel element for an imaging device

    NASA Astrophysics Data System (ADS)

    Swathi, K.; Narayan, K. S.

    2016-09-01

    We demonstrate the implementation of a solid state circuit/structure comprising of a high performing polymer field effect transistor (PFET) utilizing an oxide layer in conjunction with a self-assembled monolayer (SAM) as the dielectric and a bulk-heterostructure based organic photodiode as a CMOS-like pixel element for an imaging sensor. Practical usage of functional organic photon detectors requires on chip components for image capture and signal transfer as in the CMOS/CCD architecture rather than simple photodiode arrays in order to increase speed and sensitivity of the sensor. The availability of high performing PFETs with low operating voltage and photodiodes with high sensitivity provides the necessary prerequisite to implement a CMOS type image sensing device structure based on organic electronic devices. Solution processing routes in organic electronics offers relatively facile procedures to integrate these components, combined with unique features of large-area, form factor and multiple optical attributes. We utilize the inherent property of a binary mixture in a blend to phase-separate vertically and create a graded junction for effective photocurrent response. The implemented design enables photocharge generation along with on chip charge to voltage conversion with performance parameters comparable to traditional counterparts. Charge integration analysis for the passive pixel element using 2D TCAD simulations is also presented to evaluate the different processes that take place in the monolithic structure.

  12. Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie

    2017-03-01

    Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all cerebrovascular patterns, including arteries and capillaries, filter-based methods are often used to segment vessels. However, the design of accurate and robust vessel segmentation algorithms is still challenging, due to the variety and complexity of images, especially in cerebral blood vessel segmentation. In this work, we address the problem of automatic and robust segmentation of cerebral micro-vessel structures in mouse cerebrovascular images acquired with a light-sheet microscope. To segment micro-vessels in large-scale image data, we propose a convolutional neural network (CNN) architecture trained on 1.58 million manually labeled pixels. Three convolutional layers and one fully connected layer were used in the CNN model. We extracted 32x32-pixel patches from each acquired brain vessel image as the training data set fed into the CNN for classification. The network was trained to output the probability that the center pixel of an input patch belongs to a vessel structure. To build the CNN architecture, a series of mouse brain vascular images acquired from a commercial light sheet fluorescence microscopy (LSFM) system was used for training the model. The experimental results demonstrate that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with vessel-dense, nonuniform gray-level and long-scale contrast regions.
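
    A minimal PyTorch sketch matching the stated topology (three convolutional layers plus one fully connected layer, 32x32 patches in, a vessel probability for the center pixel out) is shown below; the channel counts, kernel sizes, and pooling are illustrative choices, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

class VesselPatchNet(nn.Module):
    """3 conv layers + 1 fully connected layer; single-channel 32x32 patches in,
    probability that the centre pixel belongs to a vessel out. Channel counts,
    kernel sizes, and pooling are illustrative, not the paper's exact choices."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
        )
        self.classifier = nn.Linear(64 * 4 * 4, 1)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return torch.sigmoid(self.classifier(x))   # vessel probability for the centre pixel

# Example: a batch of 8 random 32x32 patches
patches = torch.randn(8, 1, 32, 32)
print(VesselPatchNet()(patches).shape)   # torch.Size([8, 1])
```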

  13. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.

    PubMed

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Hara, Takeshi; Fujita, Hiroshi

    2017-10-01

    We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. We simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment. The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. We propose a single network based on pixel-to-label deep learning to address the challenging issue of anatomical structure segmentation in 3D CT cases. The novelty of this work is the policy of deep learning of the different 2D sectional appearances of 3D anatomical structures for CT cases and the majority voting of the 3D segmentation results from multiple crossed 2D sections to achieve availability and reliability with better efficiency, generality, and flexibility than conventional segmentation methods, which must be guided by human expertise. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
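
    The voting step described above can be sketched as follows: run a 2D segmenter slice-by-slice along each of the three orthogonal axes and take a per-voxel majority vote. The segment_slice function below is a placeholder for the trained FCN, and the vote bookkeeping is a generic implementation rather than the authors' code.

```python
import numpy as np

def majority_vote_3d(volume, segment_slice, num_classes):
    """Label a 3D CT volume by majority voting over 2D segmentations taken
    along the three orthogonal directions.

    segment_slice(slice_2d) -> 2D integer label map (stand-in for the FCN)."""
    votes = np.zeros(volume.shape + (num_classes,), dtype=np.int32)
    for axis in range(3):
        for idx in range(volume.shape[axis]):
            sl = [slice(None)] * 3
            sl[axis] = idx
            labels = segment_slice(volume[tuple(sl)])
            vote_slice = votes[tuple(sl)]          # view into the vote array
            for c in range(num_classes):
                vote_slice[..., c] += (labels == c)
    return votes.argmax(axis=-1)

# Usage sketch with a trivial threshold "segmenter" standing in for the trained FCN:
dummy = lambda s: (s > 0).astype(np.int64)
vol = np.random.randn(16, 16, 16)
print(majority_vote_3d(vol, dummy, num_classes=2).shape)   # (16, 16, 16)
```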

  14. MAMA detector systems - A status report

    NASA Technical Reports Server (NTRS)

    Timothy, J. Gethyn; Morgan, Jeffrey S.; Slater, David C.; Kasle, David B.; Bybee, Richard L.

    1989-01-01

    Third-generation, 224 x 960 and 360 x 1024-pixel multi-anode microchannel array (MAMA) detectors are under development for satellite-borne FUV and EUV observations, using pixel dimensions of 25 x 25 microns. An account is presently given of the configurations, modes of operation, and recent performance data of these systems. At UV and visible wavelengths, these MAMAs employ a semitransparent, proximity-focused photocathode structure. At FUV and EUV wavelengths below about 1500 A, opaque alkali-halide photocathodes deposited directly on the front surface of the MCP furnish the best detective quantum efficiencies.

  15. dada - a web-based 2D detector analysis tool

    NASA Astrophysics Data System (ADS)

    Osterhoff, Markus

    2017-06-01

    The data daemon, dada, is a server backend for unified access to 2D pixel detector image data stored with different detectors, file formats and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines from pixel binning over azimuthal integration to raster scan processing. Common user interactions with dada are by a web frontend, but all parameters for an analysis are encoded into a Uniform Resource Identifier (URI) which can also be written by hand or scripts for batch processing.

  16. Vertical waveguides integrated with silicon photodetectors: Towards high efficiency and low cross-talk image sensors

    NASA Astrophysics Data System (ADS)

    Tut, Turgut; Dan, Yaping; Duane, Peter; Yu, Young; Wober, Munib; Crozier, Kenneth B.

    2012-01-01

    We describe the experimental realization of vertical silicon nitride waveguides integrated with silicon photodetectors. The waveguides are embedded in a silicon dioxide layer. Scanning photocurrent microscopy is performed on a device containing a waveguide, and on a device containing the silicon dioxide layer, but without the waveguide. The results confirm the waveguide's ability to guide light onto the photodetector with high efficiency. We anticipate that the use of these structures in image sensors, with one waveguide per pixel, would greatly improve efficiency and significantly reduce inter-pixel crosstalk.

  17. The least-squares mixing models to generate fraction images derived from remote sensing multispectral data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1991-01-01

    Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment considering three components within the pixels, namely eucalyptus, soil (understory), and shade, was performed. The generated fraction images for shade (shade image) derived from these two methods were compared by considering the performance and computer time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in the pixel is related to different eucalyptus ages.

  18. Ultrathin phase-change coatings on metals for electrothermally tunable colors

    NASA Astrophysics Data System (ADS)

    Bakan, Gokhan; Ayas, Sencer; Saidzoda, Tohir; Celebi, Kemal; Dana, Aykutlu

    2016-08-01

    Metal surfaces coated with ultrathin lossy dielectrics enable color generation through strong interferences in the visible spectrum. Using a phase-change thin film as the coating layer offers tuning the generated color by crystallization or re-amorphization. Here, we study the optical response of surfaces consisting of thin (5-40 nm) phase-changing Ge2Sb2Te5 (GST) films on metal, primarily Al, layers. A color scale ranging from yellow to red to blue that is obtained using different thicknesses of as-deposited amorphous GST layers turns dim gray upon annealing-induced crystallization of the GST. Moreover, when a relatively thick (>100 nm) and lossless dielectric film is introduced between the GST and Al layers, optical cavity modes are observed, offering a rich color gamut at the expense of the angle independent optical response. Finally, a color pixel structure is proposed for ultrahigh resolution (pixel size: 5 × 5 μm2), non-volatile displays, where the metal layer acting like a mirror is used as a heater element. The electrothermal simulations of such a pixel structure suggest that crystallization and re-amorphization of the GST layer using electrical pulses are possible for electrothermal color tuning.

  19. Global map and spectroscopic analyses of Martian fluvial systems: paleoclimatic implications

    NASA Astrophysics Data System (ADS)

    Alemanno, Giulia; Orofino, Vincenzo; Mancarella, Francesca; Fonti, Sergio

    2017-04-01

    Currently, environmental conditions on Mars do not allow the presence of liquid water on its surface for long periods of time. However, there is abundant evidence for past water flow at its surface. In fact, the ancient terrains of Mars are covered with fluvial and lacustrine features such as valley networks, longitudinal valleys, and basin lakes. There is no doubt that the Martian valleys were formed by water flow, which led many researchers to think that, at the time of their formation, the conditions of atmospheric pressure and surface temperature were probably different from the present[1]. To infer the climate history of Mars from valley networks, a global approach is necessary. We produced a global map of Martian valleys. We manually mapped all the valleys (longer than 20 km) as vector-based polylines within the QGIS software, using THEMIS daytime IR (100 m/pixel) and, where possible, CTX images (up to 6 m/pixel), plus topographic MOLA data (~500 m/pixel). With respect to previous manual maps[1,2], data of higher image quality (the new THEMIS mosaic) and topographic information allow us to identify new structures and more tributaries for a large number of systems. We also used the geologic map of Mars[3] in order to determine the valleys' age distribution. Most valleys are too small for age determination from superposition of impact craters, so we have assumed that a valley is as old as the terrain into which it has been carved[1]. Furthermore, we are currently analyzing spectroscopic data from the CRISM instrument (Compact Reconnaissance Imaging Spectrometer for Mars) onboard the Mars Reconnaissance Orbiter, concerning the mapped valleys or associated basin lakes, with the aim of assessing the mineralogy of these structures. Our attention is especially focused on the possible detection of hydrated minerals (e.g., phyllosilicates, hydrated silica) or evaporites (e.g., carbonates, sulfates, chlorides). Phyllosilicate-bearing rocks are considered an ideal place on Mars for prebiotic chemistry and the possible development of life[4]. Using spectral parameters[5], applied to the images to highlight the presence of different aqueous alteration minerals, we have found deposits of possible hydrated minerals in some of these structures. References: [1] Hynek B.M., Hoke M.R.T., Beach M.: 2010, J. Geophys. Res., 115, doi:10.1029/2009JE003548. [2] Carr M.H.: 1995, J. Geophys. Res., 100, 7479, doi:10.1029/95JE00260. [3] Tanaka K.L. et al.: 2014, Planet. Space Sci., 95, 11. [4] Bishop et al.: 2013, Planet. Space Sci., 86, 130. [5] Viviano-Beck C.E. et al.: 2014, J. Geophys. Res., 119, doi:10.1002/2014JE004627.

  20. Large-Scale Structure of the Molecular Gas in Taurus Revealed by High Spatial Dynamic Range Spectral Line Mapping

    NASA Technical Reports Server (NTRS)

    Goldsmith, Paul F.

    2008-01-01

    Viewgraph topics include: optical image of Taurus; dust extinction in the IR has provided a new tool for probing cloud morphology; observations of the gas can contribute critical information on gas temperature, gas column density and distribution, mass, and kinematics; the Taurus molecular cloud complex; average spectra in each mask region; mask 2 data; dealing with mask 1 data; behavior of mask 1 pixels; distribution of CO column densities; conversion to H2 column density; variable CO/H2 ratio with values much less than 10^-4 at low N indicated by UV results; histogram of the N(H2) distribution; H2 column density distribution in Taurus; cumulative distribution of mass and area; lower CO fractional abundance in the mask 0 and 1 regions greatly increases the mass determined in the analysis; masses determined with variable X(CO) and including diffuse regions agree well with that found from L(CO); distribution of young stars as a function of molecular column density; star formation efficiency; star formation rate and gas depletion; and enlarged images of some of the regions with numerous young stars. Additional slides examine the origin of the Taurus molecular cloud, evolution from HI gas, kinematics as a clue to its origin, and its relationship to star formation.

  1. Two-dimensional wavelet analysis based classification of gas chromatogram differential mobility spectrometry signals.

    PubMed

    Zhao, Weixiang; Sankaran, Shankar; Ibáñez, Ana M; Dandekar, Abhaya M; Davis, Cristina E

    2009-08-04

    This study introduces two-dimensional (2-D) wavelet analysis for the classification of gas chromatogram differential mobility spectrometry (GC/DMS) data, which are composed of retention time, compensation voltage, and the corresponding intensities. One reported way to process such large data sets is to convert the 2-D signals to 1-D signals by summing intensities across either retention time or compensation voltage, but this can lose important signal information in one data dimension. A 2-D wavelet analysis approach keeps the 2-D structure of the original signals while significantly reducing the data size. We applied this feature extraction method to 2-D GC/DMS signals measured from control and disordered fruit and then employed two typical classification algorithms to test the effect of the resulting features on chemical pattern recognition. Yielding a 93.3% accuracy in separating data from control and disordered fruit samples, 2-D wavelet analysis not only proves the feasibility of extracting features from the original 2-D signals but also shows its superiority over conventional feature extraction methods, including converting 2-D to 1-D and selecting distinguishable pixels from the training set. Furthermore, this process does not require coupling with specific pattern recognition methods, which may help ensure wide applicability of this method to 2-D spectrometry data.
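
    A minimal sketch of this kind of 2-D wavelet feature extraction, using PyWavelets; the wavelet family, decomposition level, and the choice to keep only the coarsest approximation band are illustrative assumptions, not the settings used in the study.

```python
# Sketch: reduce a 2-D GC/DMS matrix (retention time x compensation voltage)
# to a compact feature vector while keeping its 2-D structure.
import numpy as np
import pywt

def wavelet_features(spectrum_2d: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """2-D discrete wavelet transform, keeping only the coarsest approximation."""
    coeffs = pywt.wavedec2(spectrum_2d, wavelet=wavelet, level=level)
    approx = coeffs[0]          # low-frequency approximation at the coarsest level
    return approx.ravel()       # flatten only after the 2-D transform

# Example: a synthetic 2000 x 500 intensity matrix shrinks to a short vector.
rng = np.random.default_rng(0)
signal = rng.random((2000, 500))
features = wavelet_features(signal)
print(signal.size, "->", features.size)
```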

  2. Phototaxis and the origin of visual eyes

    PubMed Central

    Randel, Nadine

    2016-01-01

    Vision allows animals to detect spatial differences in environmental light levels. High-resolution image-forming eyes evolved from low-resolution eyes via increases in photoreceptor cell number, improvements in optics and changes in the neural circuits that process spatially resolved photoreceptor input. However, the evolutionary origins of the first low-resolution visual systems have been unclear. We propose that the lowest resolving (two-pixel) visual systems could initially have functioned in visual phototaxis. During visual phototaxis, such elementary visual systems compare light on either side of the body to regulate phototactic turns. Another, even simpler and non-visual strategy is characteristic of helical phototaxis, mediated by sensory–motor eyespots. The recent mapping of the complete neural circuitry (connectome) of an elementary visual system in the larva of the annelid Platynereis dumerilii sheds new light on the possible paths from non-visual to visual phototaxis and to image-forming vision. We outline an evolutionary scenario focusing on the neuronal circuitry to account for these transitions. We also present a comprehensive review of the structure of phototactic eyes in invertebrate larvae and assign them to the non-visual and visual categories. We propose that non-visual systems may have preceded visual phototactic systems in evolution that in turn may have repeatedly served as intermediates during the evolution of image-forming eyes. PMID:26598725

  3. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
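
    The slice-per-CPU idea can be sketched schematically as follows (this is not the JPL software); warp_slice is a hypothetical stand-in for the camera-model-based warping described above.

```python
# Sketch: the output mosaic is split into horizontal slices, each worker
# fills its slice, and the results are stitched back together.
import numpy as np
from multiprocessing import Pool

HEIGHT, WIDTH, N_SLICES = 1024, 4096, 8

def warp_slice(args):
    """Fill one horizontal band of the mosaic; placeholder for real warping."""
    row_start, row_stop = args
    band = np.zeros((row_stop - row_start, WIDTH), dtype=np.float32)
    # ... for every output pixel, look up its best source pixel(s) here ...
    return row_start, band

if __name__ == "__main__":
    bounds = np.linspace(0, HEIGHT, N_SLICES + 1, dtype=int)
    jobs = list(zip(bounds[:-1], bounds[1:]))
    mosaic = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    with Pool(processes=N_SLICES) as pool:
        for row_start, band in pool.map(warp_slice, jobs):
            mosaic[row_start:row_start + band.shape[0]] = band
```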

  4. Conceptual Study of A Hetrodyne Receiver for the Origins Space Telescope

    NASA Astrophysics Data System (ADS)

    Wiedner, Martina

    2018-01-01

    The Origins Space Telescope (OST) is a mission concept for an extremely versatile observatory with 5 science instruments, of which the HEterodyne Receivers for OST (HERO) is one. HERO's main targets are high spectral resolution observations (Δλ/λ up to 10^7, or Δv = 0.03 km/s) of water, to follow its trail from cores to YSOs, as well as H2O and HDO observations of comets. HERO will probe all neutral ISM phases using cooling lines ([CII], [OI]) and hydrides as probes of CO-dark H2 (CH, HF). HERO will reveal how molecular clouds and filaments form, from the local ISM up to nearby galaxies. In order to achieve these observational goals, HERO will cover an extremely wide frequency range from 468 to 2700 GHz plus a window around the [OI] line at 4563 to 4752 GHz. It will consist of very large focal plane arrays of 128 pixels between 900 and 2700 GHz and at 4.7 THz, and 32 pixels for the 468 to 900 GHz range. The instrument exploits Herschel/HIFI heritage. HERO's large arrays require low-dissipation, low-power components. The HERO concept makes use of the latest cryogenic SiGe amplifier technology, as well as CMOS technology for backends with 2 orders of magnitude lower power.

  5. From the First to the Last

    NASA Image and Video Library

    2015-04-30

    On March 18, 2011, MESSENGER made history by becoming the first spacecraft ever to orbit Mercury. Eleven days later, the spacecraft captured the first image ever obtained from Mercury orbit, shown here on the left. Originally planned as a one-year orbital mission, the MESSENGER spacecraft orbited Mercury for more than four years, accomplishing technological firsts and making new scientific discoveries about the origin and evolution of the Solar System's innermost planet. Check out the Top 10 Science Results. Dates acquired: March 29, 2011; April 30, 2015 Image IDs: 65056, 8422953 Instrument: Mercury Dual Imaging System (MDIS) Left Image Center Latitude: -53.3° Left Image Center Longitude: 13.0° E Left Image Resolution: 2.7 kilometers/pixel Left Image Scale: The rayed crater Debussy has a diameter of 80 kilometers (50 miles) Right Image Center Latitude: 72.0° Right Image Center Longitude: 223.8° E Right Image Resolution: 2.1 meters/pixel Right Image Scale: This image is about 1 kilometer (0.6 miles) across. On April 30, 2015, MESSENGER again made history, becoming the first spacecraft to impact the planet. In total, MESSENGER acquired and returned to Earth more than 277,000 images from orbit about Mercury. The last of those images is shown here on the right. http://photojournal.jpl.nasa.gov/catalog/PIA19449

  6. Improved signal to noise ratio and sensitivity of an infrared imaging video bolometer on large helical device by using an infrared periscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Shwetang N., E-mail: pandya.shwetang@LHD.nifs.ac.jp; Sano, Ryuichi; Peterson, Byron J.

    An Infrared imaging Video Bolometer (IRVB) diagnostic is currently being used in the Large Helical Device (LHD) to study the localization of radiation structures near the magnetic island and helical divertor X-points during plasma detachment, and for 3D tomography. This research demands a high signal-to-noise ratio (SNR) and high sensitivity, to improve the temporal resolution for studying the evolution of radiation structures during plasma detachment, and a wide IRVB field of view (FoV) for tomography. Introduction of an infrared periscope allows a higher SNR and higher sensitivity to be achieved, which in turn permits a twofold improvement in the temporal resolution of the diagnostic. Higher SNR along with a wide FoV is achieved simultaneously by reducing the separation of the IRVB detector (metal foil) from the bolometer's aperture and the LHD plasma. Altering the distances to meet the aforesaid requirements results in an increased separation between the foil and the IR camera, which degrades the diagnostic's sensitivity by a factor of 1.5. Using an infrared periscope to image the IRVB foil results in a 7.5-fold increase in the number of IR camera pixels imaging the foil. This improves the IRVB sensitivity, which depends on the square root of the number of IR camera pixels averaged per bolometer channel. Despite the slower f-number (f/# = 1.35) and reduced transmission (τ0 = 89%, due to an increased number of lens elements) of the periscope, the diagnostic with the infrared periscope operational on LHD has improved in sensitivity and SNR by factors of 1.4 and 4.5, respectively, compared to the original diagnostic without a periscope (i.e., the IRVB foil imaged directly by the IR camera through conventional optics). The bolometer's field of view has also doubled. The paper discusses these improvements in detail.

  7. Crystallography Without Crystals: Determining the Structure of Individual Biological Molecules and Nanoparticles

    ScienceCinema

    Ourmazd, Abbas [University of Wisconsin, Milwaukee, Wisconsin, USA

    2017-12-09

    Ever shattered a valuable vase into 10^6 pieces and tried to reassemble it under a light providing a mean photon count of 10^-2 per detector pixel, with shot noise? If you can do that, you can do single-molecule crystallography. This talk will outline how this can be done in principle. In more technical terms, the talk will describe how the combination of scattering physics and Bayesian algorithms can be used to reconstruct the 3-D diffracted intensity distribution from a collection of individual 2-D diffraction patterns down to a mean photon count of 10^-2 per pixel, the signal level anticipated from the Linac Coherent Light Source, and hence determine the structure of individual macromolecules and nanoparticles.

  8. 640 x 512 Pixels Long-Wavelength Infrared (LWIR) Quantum-Dot Infrared Photodetector (QDIP) Imaging Focal Plane Array

    NASA Technical Reports Server (NTRS)

    Gunapala, Sarath D.; Bandara, Sumith V.; Hill, Cory J.; Ting, David Z.; Liu, John K.; Rafol, Sir B.; Blazejewski, Edward R.; Mumolo, Jason M.; Keo, Sam A.; Krishna, Sanjay; hide

    2007-01-01

    Epitaxially grown, self-assembled InAs-InGaAs-GaAs quantum dots (QDs) are exploited for the development of large-format long-wavelength infrared focal plane arrays (FPAs). The dot-in-a-well (DWELL) structures were experimentally shown to absorb both 45-degree and normal-incidence light; therefore, a reflection grating structure was used to enhance the quantum efficiency. The devices exhibit peak responsivity out to 8.1 micrometers, with peak detectivity reaching approximately 1 × 10^10 Jones at 77 K. The devices were fabricated into the first long-wavelength 640 x 512 pixel QD infrared photodetector imaging FPA, which has produced excellent infrared imagery with a noise-equivalent temperature difference of 40 mK at a 60 K operating temperature.

  9. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    PubMed

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
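
    A NumPy sketch of the frame-differencing idea under simplifying assumptions: the bulk-tissue-motion shift is estimated by a brute-force search over small axial/lateral pixel shifts that minimizes the summed squared difference, and the angiographic signal is the squared difference at that shift. The shift range is arbitrary and the GPU/parallel aspects of the actual system are omitted.

```python
# Sketch: intensity-based angiography from two sequential structural frames
# with a simple bulk-tissue-motion (BTM) correction.
import numpy as np

def angiogram(frame_a: np.ndarray, frame_b: np.ndarray, max_shift: int = 3):
    best = None
    for dz in range(-max_shift, max_shift + 1):        # axial shift
        for dx in range(-max_shift, max_shift + 1):    # lateral shift
            shifted = np.roll(frame_b, (dz, dx), axis=(0, 1))
            diff = (frame_a - shifted) ** 2
            score = diff.sum()                          # minimize summed squared difference
            if best is None or score < best[0]:
                best = (score, dz, dx, diff)
    _, dz, dx, diff = best
    return diff, (dz, dx)                               # angiogram and estimated BTM shift
```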

  10. An iterative approach to region growing using associative memories

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Cowart, A.

    1983-01-01

    Region growing is often cited as a classical example of the recursive control structures used in image processing, which are awkward to implement in hardware when the intent is segmentation of an image at raster-scan rates. It is addressed here in light of the postulate that any computation which can be performed recursively can be performed easily and efficiently by iteration coupled with association. Attention is given to an algorithm and hardware structure able to perform region labeling iteratively at scan rates. Every pixel is individually labeled with an identifier that signifies the region to which it belongs. Difficulties that would otherwise require recursion are handled by maintaining an equivalence table in hardware, transparent to the computer, which reads the labeled pixels. A simulation of the associative memory has demonstrated its effectiveness.
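
    The iteration-plus-equivalence-table idea translates naturally into software; the sketch below is a conventional two-pass connected-component labeling with a union-find array standing in for the hardware equivalence table, and a binary input image as a simplification.

```python
# Sketch: raster-scan region labeling; label conflicts are recorded in an
# equivalence table instead of being resolved recursively.
import numpy as np

def label_regions(binary: np.ndarray) -> np.ndarray:
    labels = np.zeros(binary.shape, dtype=int)
    parent = [0]                                   # parent[i] = equivalence-table entry for label i

    def find(i):                                   # follow the table to the representative label
        while parent[i] != i:
            i = parent[i]
        return i

    next_label = 1
    h, w = binary.shape
    for r in range(h):
        for c in range(w):
            if not binary[r, c]:
                continue
            up = labels[r - 1, c] if r else 0
            left = labels[r, c - 1] if c else 0
            if up == 0 and left == 0:              # start a new region
                parent.append(next_label)
                labels[r, c] = next_label
                next_label += 1
            else:
                labels[r, c] = max(up, left) if min(up, left) == 0 else min(up, left)
                if up and left and up != left:     # record the equivalence, do not recurse
                    parent[find(max(up, left))] = find(min(up, left))
    for r in range(h):                             # second pass: resolve equivalences
        for c in range(w):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```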

  11. Fluorescence XAS using Ge PAD: Application to High-Temperature Superconducting Thin Film Single Crystals

    NASA Astrophysics Data System (ADS)

    Oyanagi, H.; Tsukada, A.; Naito, M.; Saini, N. L.; Zhang, C.

    2007-02-01

    A Ge pixel array detector (PAD) with 100 segments was used in fluorescence x-ray absorption spectroscopy (XAS) study, probing local structure of high temperature superconducting thin film single crystals. Independent monitoring of individual pixel outputs allows real-time inspection of interference of substrates which has long been a major source of systematic error. By optimizing grazing-incidence angle and azimuthal orientation, smooth extended x-ray absorption fine structure (EXAFS) oscillations were obtained, demonstrating that strain effects can be studied using high-quality data for thin film single crystals grown by molecular beam epitaxy (MBE). The results of (La,Sr)2CuO4 thin film single crystals under strain are related to the strain dependence of the critical temperature of superconductivity.

  12. Compressed normalized block difference for object tracking

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers have been based on compressed Haar-like features, and how to compress other high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in a high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR and Precision.
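
    An illustrative sketch of the CNBD construction under stated assumptions: normalized differences of block means form a high-dimensional feature, which is then projected with a sparse random Gaussian matrix; the block size, feature dimension and sparsity level are placeholders.

```python
# Sketch: normalized block difference feature plus compressive projection.
import numpy as np

rng = np.random.default_rng(1)

def block_mean(img, top, left, size):
    return img[top:top + size, left:left + size].mean()

def nbd_feature(img, pairs, size=4):
    """Normalized block difference (a - b) / (a + b) for each block pair."""
    feats = []
    for (r1, c1), (r2, c2) in pairs:
        a, b = block_mean(img, r1, c1, size), block_mean(img, r2, c2, size)
        feats.append(0.0 if a + b == 0 else (a - b) / (a + b))
    return np.array(feats)

def sparse_gaussian_matrix(m, n, density=0.1):
    """Sparse random Gaussian measurement matrix for the compression step."""
    mask = rng.random((m, n)) < density
    return mask * rng.standard_normal((m, n))

img = rng.random((64, 64))
pairs = [((rng.integers(60), rng.integers(60)), (rng.integers(60), rng.integers(60)))
         for _ in range(500)]
phi = sparse_gaussian_matrix(50, 500)
compressed = phi @ nbd_feature(img, pairs)      # 500-D feature -> 50-D compressed feature
```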

  13. Sources of Gullies in Hale Crater

    NASA Image and Video Library

    2017-04-12

    Color from the High Resolution Imaging Science Experiment (HiRISE) instrument onboard NASA's Mars Reconnaissance Orbiter can show mineralogical differences due to the near-infrared filter. The sources of channels on the north rim of Hale Crater show fresh blue, green, purple and light-toned exposures under the overlying reddish dust. The causes and timing of activity in channels and gullies on Mars remain an active area of research. Geologists infer the timing of different events based on what are called "superposition relationships" between different landforms. Areas like this are a puzzle. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.2 centimeters (9.9 inches) per pixel (with 1 x 1 binning); objects on the order of 76 centimeters (29.9 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21586

  14. Locality-preserving sparse representation-based classification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
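
    A rough sketch of the sparse-representation classification step (the LPP projection is omitted and orthogonal matching pursuit is used as a stand-in for the sparse solver): a test pixel is coded over the dictionary of training spectra and assigned to the class with the smallest reconstruction residual.

```python
# Sketch: SRC-style classification of one hyperspectral pixel.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(test_pixel, train_spectra, train_labels, n_nonzero=10):
    """train_spectra: (n_bands, n_train) dictionary whose columns are training samples;
    train_labels: (n_train,) integer class labels as a NumPy array."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(train_spectra, test_pixel)                  # sparse coding over all training samples
    coef = omp.coef_
    residuals = {}
    for cls in np.unique(train_labels):
        mask = train_labels == cls
        recon = train_spectra[:, mask] @ coef[mask]     # reconstruction from this class only
        residuals[cls] = np.linalg.norm(test_pixel - recon)
    return min(residuals, key=residuals.get)            # class with minimum approximation error
```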

  15. Gullies in Winter Shadow

    NASA Image and Video Library

    2017-03-21

    This is an odd-looking image. It shows gullies during the winter while entirely in the shadow of the crater wall. Illumination comes only from the winter skylight. We acquire such images because gullies on Mars actively form in the winter when there is carbon dioxide frost on the ground, so we image them in the winter, even though not well illuminated, to look for signs of activity. The dark streaks might be signs of current activity, removing the frost, but further analysis is needed. NB: North is down in the cutout, and the terrain slopes towards the bottom of the image. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 62.3 centimeters (24.5 inches) per pixel (with 2 x 2 binning); objects on the order of 187 centimeters (73.6 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21568

  16. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute and memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt, then the pixels of the two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing the circulant matrices and controlling the original row vectors of the circulant matrices with logistic map. And the random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness, security of the proposed algorithm and the acceptable compression performance.
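
    A sketch of the key-controlled measurement matrix idea: a logistic map seeded by a small key generates the defining vector of a circulant matrix, a subset of whose rows serves as the compressive-sensing measurement matrix; the mapping of chaotic values to matrix entries and the normalization are assumptions.

```python
# Sketch: build a circulant measurement matrix whose "key" is just (x0, mu).
import numpy as np
from scipy.linalg import circulant

def logistic_sequence(x0: float, mu: float, n: int) -> np.ndarray:
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)          # logistic map iteration
        seq[i] = x
    return seq

def measurement_matrix(key=(0.37, 3.99), n=256, m=64) -> np.ndarray:
    x0, mu = key
    col = 2.0 * logistic_sequence(x0, mu, n) - 1.0   # map (0, 1) chaos to (-1, 1) entries
    return circulant(col)[:m, :] / np.sqrt(m)        # keep m rows, modest normalization

phi = measurement_matrix()
block = np.random.default_rng(2).random(256)         # a flattened image block
y = phi @ block                                      # compressed measurements
```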

  17. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.

    PubMed

    Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L

    2011-08-01

    In this paper we suggest an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) that is capable of computing high-quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART in reducing the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved, with this reduction being especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to the original DART, in both clean and noisy environments.

  18. Clinoforms in Melas Chasma

    NASA Image and Video Library

    2017-04-10

    In this image from NASA's Mars Reconnaissance Orbiter, a group of steeply inclined light-toned layers is bounded above and below by unconformities (sudden or irregular changes from one deposit to another) that indicate a "break" where erosion of pre-existing layers was taking place at a higher rate than deposition of new materials. The layered deposits in Melas Basin may have been deposited during the growth of a delta complex. This depositional sequence likely represents a period where materials were being deposited on the floor of a lake or running river. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 28.9 centimeters (11.4 inches) per pixel (with 1 x 1 binning); objects on the order of 87 centimeters (34.2 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21580

  19. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given to within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually described in three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, or mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4 m), Landsat-7/ETM+ (30 m), MODIS (500 m), and SeaWiFS (1000 m).
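
    One of the similarity measures named above, mutual information, can be computed from a joint histogram as in the following sketch; in a registration loop this value would be maximized over the transform parameters, and the 64-bin choice is arbitrary.

```python
# Sketch: mutual information between two co-registered image bands.
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 64) -> float:
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint probability
    px = pxy.sum(axis=1, keepdims=True)             # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)             # marginal of image B
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```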

  20. Contemplative Janus

    NASA Image and Video Library

    2015-01-19

    Janus (111 miles or 179 kilometers across) seems to almost stare off into the distance, contemplating deep, moonish thoughts as the F ring stands by at the bottom of this image. From this image, it is easy to distinguish Janus' shape from that of a sphere. Many of Saturn's smaller moons have similarly irregular shapes that scientists believe may give clues to their origins and internal structure. Models combining the dynamics of this moon with its shape imply the existence of mass inhomogeneities within Janus. This would be a surprising result for a body the size of Janus. By studying more images of Janus, scientists may be able to confirm this finding and determine just how complicated the internal structure of this small body is. This image is roughly centered on the side of Janus which faces away from Saturn. North on Janus is up and rotated 3 degrees to the right. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on March 28, 2012. The view was obtained at a distance of approximately 54,000 miles (87,000 kilometers) from Janus. Image scale is 1,700 feet (520 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18299

  1. Pixel-based CTE Correction of ACS/WFC: Modifications To The ACS Calibration Pipeline (CALACS)

    NASA Astrophysics Data System (ADS)

    Smith, Linda J.; Anderson, J.; Armstrong, A.; Avila, R.; Bedin, L.; Chiaberge, M.; Davis, M.; Ferguson, B.; Fruchter, A.; Golimowski, D.; Grogin, N.; Hack, W.; Lim, P. L.; Lucas, R.; Maybhate, A.; McMaster, M.; Ogaz, S.; Suchkov, A.; Ubeda, L.

    2012-01-01

    The Advanced Camera for Surveys (ACS) was installed on the Hubble Space Telescope (HST) nearly ten years ago. Over the last decade, continuous exposure to the harsh radiation environment has degraded the charge transfer efficiency (CTE) of the CCDs. The worsening CTE impacts the science that can be obtained by altering the photometric, astrometric and morphological characteristics of sources, particularly those farthest from the readout amplifiers. To ameliorate these effects, Anderson & Bedin (2010, PASP, 122, 1035) developed a pixel-based empirical approach to correcting ACS data by characterizing the CTE profiles of trails behind warm pixels in dark exposures. The success of this technique means that it is now possible to correct full-frame ACS/WFC images for CTE degradation in the standard data calibration and reduction pipeline CALACS. Over the past year, the ACS team at STScI has developed, refined and tested the new software. The details of this work are described in separate posters. The new code is more effective at low flux levels (< 50 electrons) than the original Anderson & Bedin code, and employs a more accurate time and temperature dependence for CTE. The new CALACS includes the automatic removal of low-level bias stripes (produced by the post-repair ACS electronics) and pixel-based CTE correction. In addition to the standard cosmic ray corrected, flat-fielded and drizzled data products (crj, flt and drz files) there are three new equivalent files (crc, flc and drc) which contain the CTE-corrected data products. The user community will be able to choose whether to use the standard or CTE-corrected products.

  2. A New Serial-direction Trail Effect in CCD Images of the Lunar-based Ultraviolet Telescope

    NASA Astrophysics Data System (ADS)

    Wu, C.; Deng, J. S.; Guyonnet, A.; Antilogus, P.; Cao, L.; Cai, H. B.; Meng, X. M.; Han, X. H.; Qiu, Y. L.; Wang, J.; Wang, S.; Wei, J. Y.; Xin, L. P.; Li, G. W.

    2016-10-01

    Unexpected trails have been seen following relatively bright sources in astronomical images taken with the CCD camera of the Lunar-based Ultraviolet Telescope (LUT) since its first light on the Moon's surface. The trails appear only in the serial direction of the CCD readout, unlike the image trails of radiation-damaged space-borne CCDs, which usually appear in the parallel-readout direction. After analyzing the same trail defects following warm pixels (WPs) in dark frames, we found that the relative intensity profile of the LUT CCD trails can be expressed as an exponential function of the distance i (in number of pixels) of the trailing pixel from the original source (or WP), i.e., exp(αi + β). The parameters α and β appear to be independent of the CCD temperature, the intensity of the source (or WP), and its position in the CCD frame. The main trail characteristics evolve at a rate of increase of ~(7.3 ± 3.6) × 10^-4 over the first two years of operation. The trails affect the consistency of the profiles of sources of different brightness, causing larger extra systematic errors in small-aperture photometry. The astrometric uncertainty caused by the trails is small enough to be acceptable given the LUT requirements for astrometric accuracy. Based on the empirical profile model, a correction method has been developed for LUT images that works well in restoring the flux of astronomical sources that is lost to trailing pixels.
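
    A simplified, row-wise sketch of how such an empirical exponential trail model could be applied as a correction; the parameter values, threshold, and trail length below are placeholders, not the calibrated LUT values.

```python
# Sketch: remove serial-direction trails modeled as exp(alpha*i + beta)
# of the source flux, and return the removed charge to the source pixel.
import numpy as np

def correct_serial_trails(row, alpha=-0.9, beta=-1.5, threshold=500.0, length=8):
    corrected = row.astype(float).copy()
    frac = np.exp(alpha * np.arange(1, length + 1) + beta)   # relative trail profile
    for j in np.where(corrected > threshold)[0]:             # bright pixels leave trails
        trail = corrected[j] * frac
        stop = min(j + length, corrected.size - 1)
        n = stop - j
        corrected[j + 1:stop + 1] -= trail[:n]               # remove trailed charge
        corrected[j] += trail[:n].sum()                      # restore it to the source
    return corrected
```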

  3. Local gray level S-curve transformation - A generalized contrast enhancement technique for medical images.

    PubMed

    Gandhamal, Akash; Talbar, Sanjay; Gajre, Suhas; Hani, Ahmad Fadzil M; Kumar, Dileep

    2017-04-01

    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in the classification of tissues. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation locally to medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation that yields a curve similar to a sigmoid function through a pixel-to-pixel transformation. This curve essentially increases the difference between the minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is assessed by measuring several parameters, namely edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities such as ultrasound, mammograms, fluorescence images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images.
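
    A minimal sketch of a locally applied S-curve (sigmoid) gray-level transformation, assuming a simple tile-by-tile application; the tile size and gain are illustrative and the published method's exact parameterization is not reproduced.

```python
# Sketch: stretch each tile around its own mean with a sigmoid, increasing
# the local min-max spread and hence the local gradient.
import numpy as np

def local_s_curve(img: np.ndarray, tile: int = 64, gain: float = 8.0) -> np.ndarray:
    out = np.empty_like(img, dtype=float)
    for r in range(0, img.shape[0], tile):
        for c in range(0, img.shape[1], tile):
            patch = img[r:r + tile, c:c + tile].astype(float)
            lo, hi = patch.min(), patch.max()
            if hi == lo:                                   # flat patch: nothing to stretch
                out[r:r + tile, c:c + tile] = patch
                continue
            x = (patch - lo) / (hi - lo)                   # normalize to [0, 1]
            s = 1.0 / (1.0 + np.exp(-gain * (x - x.mean())))   # sigmoid about the local mean
            out[r:r + tile, c:c + tile] = lo + (hi - lo) * s
    return out
```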

  4. Automatic concrete cracks detection and mapping of terrestrial laser scan data

    NASA Astrophysics Data System (ADS)

    Rabah, Mostafa; Elhattab, Ahmed; Fayad, Atef

    2013-12-01

    Terrestrial laser scanning has become one of the standard technologies for object acquisition in surveying engineering. The high spatial resolution of imaging and the excellent capability of measuring 3D space with laser scanning hold great potential when combined for both data acquisition and data compilation. Automatic crack detection from concrete surface images is very effective for nondestructive testing. The crack information can be used to decide the appropriate rehabilitation method to fix cracked structures and prevent catastrophic failure. In practice, cracks on concrete surfaces are traced manually for diagnosis; automatic crack detection is therefore highly desirable for efficient and objective crack assessment. This paper presents a method for automatic concrete crack detection and mapping from data obtained during a laser scanning survey. Crack detection and mapping is achieved in three steps: shading correction of the original image, crack detection, and crack mapping and processing. The detected crack is defined in a pixel coordinate system. To remap the crack into the reference coordinate system, a reverse-engineering approach is used, based on a hybrid concept of terrestrial laser-scanner point clouds and the corresponding camera image, i.e. a conversion from the pixel coordinate system to the terrestrial laser-scanner or global coordinate system. The results of the experiment show that the mean differences between the terrestrial laser scan and the total station are about 30.5, 16.4 and 14.3 mm in the x, y and z directions, respectively.

  5. Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.

    PubMed

    Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan

    2018-06-01

    Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. Development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for such a system. However, the complex cellular structure, the presence of imaging artifacts and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted-feature-based classifiers built on popular classifiers such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image data set of ninety (90) images is developed and used for this research. The segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted-feature-based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease.
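
    A sketch of the superpixel-then-classify pipeline, assuming scikit-image SLIC and a simple mean-color feature per superpixel with an SVM (one of the baselines mentioned above); the feature design and hyperparameters are placeholders.

```python
# Sketch: SLIC superpixels + per-superpixel classification painted back to pixels.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_features(image: np.ndarray, n_segments: int = 400):
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    feats = np.array([image[segments == s].mean(axis=0) for s in np.unique(segments)])
    return segments, feats                     # feats: one mean-RGB row per superpixel

def classify_superpixels(image: np.ndarray, clf: SVC):
    """clf is assumed to be pre-trained on the same mean-color features."""
    segments, feats = superpixel_features(image)
    labels = clf.predict(feats)                # one class per superpixel
    return labels[segments]                    # paint the class map back onto pixels
```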

  6. A conceptual model for quantifying connectivity using graph theory and cellular (per-pixel) approach

    NASA Astrophysics Data System (ADS)

    Singh, Manudeo; Sinha, Rajiv; Tandon, Sampat K.

    2016-04-01

    The concept of connectivity is being increasingly used for understanding hydro-geomorphic processes at all spatio-temporal scales. Connectivity is defined as the potential for energy and material flux (water, sediments, nutrients, heat, etc.) to navigate within or between landscape systems, and has two components: structural connectivity and dynamic connectivity. Structural connectivity is defined by the spatially connected features (physical linkages) through which energy and materials flow. Dynamic connectivity is a process-defined connectivity component. These two components also interact with each other, forming a feedback system. This study explores a method to quantify structural and dynamic connectivity. In fluvial transport systems, sediment and water can flow either in a diffused manner or in a channelized way. At all scales, hydrological and sediment fluxes can be tracked using a cellular (per-pixel) approach and quantified using a graphical approach. The material flux, slope and LULC (Land Use Land Cover) weightage factors of a pixel together determine whether it will contribute towards the connectivity of the landscape/system. In the graphical approach, each contributing pixel forms a node at its centroid, and this node is connected to the next 'down-node' via a directed edge along the 'least cost path'. The length of the edge depends on the desired spatial scale, and its direction depends on the traversed pixel's slope and LULC (weightage) factors. The weightage factors lie between 0 and 1. This value approaches 1 for LULC factors which promote connectivity. For example, in terms of sediment connectivity, the weightage could be the RUSLE (Revised Universal Soil Loss Equation) C-factors, with bare unconsolidated surfaces having values close to 1. This method is best suited for areas with low slopes, where LULC can be a deciding as well as a dominant factor. The degree of connectivity and its pathways will change under different LULC conditions even if the slope remains the same. The graphical approach provides the statistics of connected and disconnected graph elements (edges, nodes) and graph components, thereby allowing the quantification of structural connectivity. This approach also quantifies dynamic connectivity by allowing the measurement of fluxes (e.g. via hydrographs or sedimentographs) at any node as well as at any system outlet. The contribution of any sub-system can be understood by removing the remaining sub-systems, which can be conveniently achieved by masking the associated graph elements.
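
    A toy illustration of the per-pixel graph idea: each contributing cell becomes a node with a directed edge to its steepest-descent neighbour, weighted by slope and an LULC factor in [0, 1]; the exact weighting scheme is an assumption, not the scheme proposed in the study.

```python
# Sketch: build a directed connectivity graph from a DEM and an LULC weight raster.
import numpy as np
import networkx as nx

def connectivity_graph(dem: np.ndarray, lulc_weight: np.ndarray) -> nx.DiGraph:
    g = nx.DiGraph()
    h, w = dem.shape
    for r in range(h):
        for c in range(w):
            g.add_node((r, c))
            # pick the lowest of the 8 neighbours as the downstream node
            nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc) and 0 <= r + dr < h and 0 <= c + dc < w]
            down = min(nbrs, key=lambda p: dem[p])
            if dem[down] < dem[r, c]:
                slope = dem[r, c] - dem[down]
                g.add_edge((r, c), down, weight=slope * lulc_weight[r, c])
    return g

rng = np.random.default_rng(3)
g = connectivity_graph(rng.random((20, 20)), rng.random((20, 20)))
print(nx.number_weakly_connected_components(g), "components")   # structural-connectivity statistic
```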

  7. Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology

    NASA Astrophysics Data System (ADS)

    Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki

    2017-03-01

    Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time-consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images at multiple scales. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, that models interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method on renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. Implications of the utility of our method extend to fields such as oncology, genomics, and non-biological problems.
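
    Two small pieces of the approach, sketched for concreteness: the Cantor pairing function for folding node-index pairs into single keys, and a Potts-style energy in which neighbouring pixels assigned to different segments pay a penalty that grows with their similarity. The interaction term is a schematic choice, not the authors' exact Hamiltonian.

```python
# Sketch: Cantor pairing and a simple Potts-style labeling energy.
import numpy as np

def cantor_pair(a: int, b: int) -> int:
    """Map a pair of non-negative integers to a unique non-negative integer."""
    return (a + b) * (a + b + 1) // 2 + b

def potts_energy(image: np.ndarray, labels: np.ndarray, coupling: float = 1.0) -> float:
    img = image.astype(float)
    energy = 0.0
    # right-neighbour pairs: penalize different labels, weighted by pixel similarity
    diff_lbl = labels[:, 1:] != labels[:, :-1]
    energy += coupling * np.sum(np.exp(-np.abs(img[:, 1:] - img[:, :-1])) * diff_lbl)
    # down-neighbour pairs
    diff_lbl = labels[1:, :] != labels[:-1, :]
    energy += coupling * np.sum(np.exp(-np.abs(img[1:, :] - img[:-1, :])) * diff_lbl)
    return float(energy)
```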

  8. Image recovery by removing stochastic artefacts identified as local asymmetries

    NASA Astrophysics Data System (ADS)

    Osterloh, K.; Bücherl, T.; Zscherpel, U.; Ewert, U.

    2012-04-01

    Stochastic artefacts are frequently encountered in digital radiography and tomography with neutrons. Most obviously, they are caused by ubiquitous scattered radiation hitting the CCD sensor. They appear as scattered dots and, at higher frequencies of occurrence, they may obscure the image. Some of these dotted interferences vary with time; however, a large portion of them remains persistent, so the problem cannot be resolved by collecting stacks of images and merging them into a median image. The situation becomes even worse in computed tomography (CT), where each artefact causes a circular pattern in the reconstructed plane. Therefore, these stochastic artefacts have to be removed completely and automatically while leaving the original image content untouched. A simplified image acquisition and artefact removal tool was developed at BAM and is available to interested users. Furthermore, an algorithm complying with all the requirements mentioned above was developed that reliably removes artefacts that may even exceed the size of a single pixel without affecting other parts of the image. It consists of an iterative two-step algorithm adjusting pixel values within a 3 × 3 matrix inside a 5 × 5 kernel and the centre pixel only within a 3 × 3 kernel, respectively. It has been applied to thousands of images obtained from the NECTAR facility at FRM II in Garching, Germany, without any need for visual control. In essence, the procedure consists of identifying and tackling asymmetric intensity distributions locally, with each treatment of a pixel being recorded. Searching for local asymmetry with subsequent correction, rather than replacing individually identified pixels, constitutes the basic idea of the algorithm. The efficiency of the proposed algorithm is demonstrated on a severely spoiled example of neutron radiography and tomography, compared with median filtering, the most convenient alternative approach, using visual checks, histograms and power spectrum analysis.
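
    A loose sketch in the spirit of this approach, not the BAM implementation: pixels that deviate strongly from a robust local estimate inside a 5 × 5 window are replaced by the local median, and iterating lets spots larger than one pixel shrink from their edges inward; the thresholds are arbitrary placeholders and the true algorithm's asymmetry test is only approximated.

```python
# Sketch: iterative removal of dotted artefacts using robust local statistics.
import numpy as np
from scipy.ndimage import median_filter

def remove_spots(img: np.ndarray, n_iter: int = 3, k: float = 5.0) -> np.ndarray:
    out = img.astype(float).copy()
    for _ in range(n_iter):
        med = median_filter(out, size=5)
        mad = median_filter(np.abs(out - med), size=5) + 1e-6   # robust local spread
        flagged = np.abs(out - med) > k * mad                   # pixels far outside the local spread
        out[flagged] = med[flagged]                             # correct only flagged pixels
    return out
```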

  9. A summary of image segmentation techniques

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher level vision tasks. There is no theory on image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized in a number of different groups including local vs. global, parallel vs. sequential, contextual vs. noncontextual, interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because there are a number of survey papers available, we will not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview. We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available yet present enough details to facilitate implementation and experimentation.

  10. Detection of diluted contaminants on chicken carcasses using a two-dimensional scatter plot based on a two-dimensional hyperspectral correlation spectrum.

    PubMed

    Wu, Wei; Chen, Gui-Yun; Wu, Ming-Qing; Yu, Zhen-Wei; Chen, Kun-Jie

    2017-03-20

    A two-dimensional (2D) scatter plot method based on the 2D hyperspectral correlation spectrum is proposed to detect diluted blood, bile, and feces from the cecum and duodenum on chicken carcasses. First, from the collected hyperspectral data, a set of uncontaminated regions of interest (ROIs) and four sets of contaminated ROIs were selected, whose average spectra were treated as the original spectrum and influenced spectra, respectively. Then, the difference spectra were obtained and used to conduct correlation analysis, from which the 2D hyperspectral correlation spectrum was constructed using the analogy method of 2D IR correlation spectroscopy. Two maximum auto-peaks and a pair of cross peaks appeared at 656 and 474 nm. Therefore, 656 and 474 nm were selected as the characteristic bands because they were most sensitive to the spectral change induced by the contaminants. The 2D scatter plots of the contaminants, clean skin, and background in the 474- and 656-nm space were used to distinguish the contaminants from the clean skin and background. The threshold values of the 474- and 656-nm bands were determined by receiver operating characteristic (ROC) analysis. According to the ROC results, a pixel whose relative reflectance at 656 nm was greater than 0.5 and relative reflectance at 474 nm was lower than 0.3 was judged as a contaminated pixel. A region with more than 50 pixels identified was marked in the detection graph. This detection method achieved a recognition rate of up to 95.03% at the region level and 31.84% at the pixel level. The false-positive rate was only 0.82% at the pixel level. The results of this study confirm that the 2D scatter plot method based on the 2D hyperspectral correlation spectrum is an effective method for detecting diluted contaminants on chicken carcasses.
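
    The quoted decision rule, written out as code; scikit-image is used here for the connected-component step, and the thresholds are those reported above.

```python
# Sketch: flag pixels with R(656 nm) > 0.5 and R(474 nm) < 0.3, then keep
# only connected regions larger than 50 pixels.
import numpy as np
from skimage.measure import label

def detect_contaminants(r656: np.ndarray, r474: np.ndarray, min_pixels: int = 50):
    flagged = (r656 > 0.5) & (r474 < 0.3)            # per-pixel ROC-derived thresholds
    regions = label(flagged)
    keep = np.zeros_like(flagged)
    for region_id in range(1, regions.max() + 1):
        mask = regions == region_id
        if mask.sum() > min_pixels:                  # discard small spurious detections
            keep |= mask
    return keep
```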

  11. Qualitative and quantitative ultrasound attributes of maternal-foetal structures in pregnant ewes.

    PubMed

    da Silva, Pda; Uscategui, Rar; Santos, Vjc; Taira, A R; Mariano, Rsg; Rodrigues, Mgk; Simões, Apr; Maronezi, M C; Avante, M L; Vicente, Wrr; Feliciano, Mar

    2018-06-01

    The aim of this study was to examine foetal organs and placental tissue to establish a correlation between the changes in the composition of these structures associated with their maturation and the ultrasonographic characteristics of the images. Twenty-four pregnant ewes were included in the study. Ultrasonographic assessments were performed in B-mode from the ninth gestational week until parturition. The lungs, liver and kidneys of foetuses and the placentomes were located in transverse and longitudinal sections to evaluate the echogenicity (hypoechoic, isoechoic, hyperechoic or mixed) and echotexture (homogeneous or heterogeneous) of the tissues of interest. For quantitative evaluation of the ultrasonographic characteristics, a computerized image analysis was performed using commercial software (Image ProPlus®). Mean numerical pixel values (NPVs), pixel heterogeneity (standard deviation of NPVs) and minimum and maximum pixel values were measured by selecting five circular regions of interest in each assessed tissue. All evaluated tissues presented significant variations in the NPVs, except for the liver. Pulmonary NPVmean, NPVmin and NPVmax decreased gradually through the gestational weeks. The renal parameters gradually decreased with the advancement of the gestational weeks until the 17th week and later stabilized. The placentome NPVmean, NPVmin and NPVmax decreased gradually over the course of the weeks. The hepatic tissue did not show echogenicity or echotexture variations and presented medium echogenicity and homogeneous echotexture throughout the experimental period. It was concluded that numerical pixel evaluation of maternal-foetal tissues is applicable and allowed the identification of quantitative ultrasonographic characteristics showing changes in echogenicity related to gestational age.

  12. Automatic bone outer contour extraction from B-modes ultrasound images based on local phase symmetry and quadratic polynomial fitting

    NASA Astrophysics Data System (ADS)

    Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery

    2017-06-01

    Analyzing ultrasound (US) images to obtain the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method for capturing internal structures of the human body. However, bone segmentation in US images is still challenging because it is strongly influenced by speckle noise and the images have poor quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images, as an initial step towards three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning for one pixel on the bone boundary in each column of the US image using a first-phase-feature searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole-filling method is then applied, utilizing the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that our proposed method produces excellent results, with an average MSE of 0.65 before and after hole filling.
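
    A sketch of the gap-filling step under the assumption of a single global quadratic fit: detected contour rows (NaN where detection failed) are fitted with numpy.polyfit, and the fitted curve supplies the missing positions; a piecewise or local fit would follow the same pattern.

```python
# Sketch: fill gaps in a column-wise bone contour with a quadratic polynomial fit.
import numpy as np

def fill_contour_gaps(rows: np.ndarray) -> np.ndarray:
    """rows[i] = detected contour row in column i, or NaN if detection failed."""
    cols = np.arange(rows.size)
    valid = ~np.isnan(rows)
    coeffs = np.polyfit(cols[valid], rows[valid], deg=2)   # quadratic fit to detected points
    filled = rows.copy()
    filled[~valid] = np.polyval(coeffs, cols[~valid])      # estimate the missing positions
    return filled

contour = np.array([10.0, 10.5, np.nan, 11.8, np.nan, 13.9, 15.4])
print(fill_contour_gaps(contour))
```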

  13. Great Expectations: The New Horizons Imaging and Composition Pre-Encounter Plans and Contemplations of 2014 MU69

    NASA Astrophysics Data System (ADS)

    Moore, J. M.; Grundy, W. M.; Spencer, J. R.; McKinnon, W. B.; Cruikshank, D. P.; White, O. L.; Umurhan, O. M.; Beyer, R. A.; Singer, K. N.; Schenk, P.; Stern, A.; Weaver, H. A., Jr.; Olkin, C.

    2017-12-01

    With the New Horizons encounter on 1 January 2019, 2014 MU69 will be the first small Kuiper belt object to be studied in detail from a spacecraft. The prospect that the cold classical population, which includes 2014 MU69, may represent a primordial, in situ population is exciting. Indeed, as we have learned just how complex and dynamic the early Solar System was, the cold classical population of the Kuiper belt has emerged as a singular candidate for a fundamentally unaltered original planetesimal population. MU69 in particular provides a unique opportunity to explore the disk processes and chemistry of the primordial solar nebula. As such, compositional measurements during the NH flyby are of paramount importance. So is high-resolution imaging of shape and structure, as the intermediate size of MU69 (much smaller than Pluto but much larger than a typical comet) may show signs of its accretion from much smaller bodies (layers, pebbles, lobes, etc., in the manner of 67P/C-G), or alternatively, derivation via the collisional fragmentation of a larger body if KBOs are "born big". MU69 may also be big enough to show signs of internal evolution driven by radiogenic heat from 26Al decay, if it accreted early enough and fast enough. The size of MU69 (20 - 40 km) places it in a class that has the potential to harbor unusual, and in some cases possibly active, surface geological processes: several small satellites of similar size, including Helene and Epimetheus, display what appears to be fine-grained material covering large portions of their surfaces, and the surface of Phobos displays an unusual system of parallel grooves. Invariably, these intriguing surface features are only clearly defined at imaging resolutions of at least tens of meters per pixel. The best images of MU69 are planned to have resolutions of 20 - 40 m/pixel at a phase angle range of 40 - 70°. We also plan color imaging in 4 channels at 0.4 to 1 µm at 200 - 500 m/pixel, and 256-channel spectroscopy from 1.25 to 2.5 µm at 1 - 4 km/pixel. Ices such as H2O, NH3, CO2, and CH3OH would be stable and can be detected and mapped if they are exposed at the surface. It will be especially instructive to compare with Cassini VIMS spectra of Phoebe, thought to be a captured outer solar system planetesimal that formed in a nebular environment related to where MU69 formed.

  14. Mitigating Satellite-Based Fire Sampling Limitations in Deriving Biomass Burning Emission Rates: Application to WRF-Chem Model Over the Northern sub-Saharan African Region

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Yue, Yun; Wang, Yi; Ichoku, Charles; Ellison, Luke; Zeng, Jing

    2018-01-01

    Largely used in several independent estimates of fire emissions, fire products based on the MODIS sensors aboard the Terra and Aqua polar-orbiting satellites have a number of inherent limitations, including (a) inability to detect fires below clouds, (b) significant decrease of detection sensitivity at the edge of the scan, where pixel sizes are much larger than at nadir, and (c) gaps between adjacent swaths in tropical regions. To remedy these limitations, an empirical method is developed here and applied to correct fire emission estimates based on MODIS pixel-level fire radiative power measurements and emission coefficients from the Fire Energetics and Emissions Research (FEER) biomass burning emission inventory. The analysis was performed for January 2010 over the northern sub-Saharan African region. Simulations from the WRF-Chem model using the original and adjusted emissions are compared with aerosol optical depth (AOD) products from MODIS and AERONET as well as aerosol vertical profiles from CALIOP data. The comparison confirmed a 30-50% improvement in model simulation performance (in terms of correlation, bias, and spatial pattern of AOD with respect to observations) with the adjusted emissions, which not only increase the original emission amount by a factor of two but also yield spatially continuous estimates of instantaneous fire emissions at daily time scales. Such improvement cannot be achieved by simply scaling the original emissions across the study domain. Even with this improvement, a factor-of-two underestimation still exists in the modeled AOD, which is within the current uncertainty envelope of global fire emissions.

  15. Multidisciplinary Analysis of the NEXUS Precursor Space Telescope

    NASA Astrophysics Data System (ADS)

    de Weck, Olivier L.; Miller, David W.; Mosier, Gary E.

    2002-12-01

    A multidisciplinary analysis is demonstrated for the NEXUS space telescope precursor mission. This mission was originally designed as an in-space technology testbed for the Next Generation Space Telescope (NGST). One of the main challenges is to achieve a very tight pointing accuracy, with a sub-pixel line-of-sight (LOS) jitter budget and a root-mean-square (RMS) wavefront error smaller than λ/50, despite the presence of electronic and mechanical disturbance sources. The analysis starts with an assessment of the performance of an initial design, which turns out not to meet the requirements. Twenty-five design parameters from structures, optics, dynamics, and controls are then examined in a sensitivity and isoperformance analysis in search of better designs. Isoperformance makes it possible to find an acceptable design that is well "balanced" and does not place an undue burden on a single subsystem. An error budget analysis shows the contributions of individual disturbance sources. This paper may be helpful in analyzing similar, innovative space telescope systems in the future.

  16. Skeletonization of gray-scale images by gray weighted distance transform

    NASA Astrophysics Data System (ADS)

    Qian, Kai; Cao, Siqi; Bhattacharya, Prabir

    1997-07-01

    In pattern recognition, thinning algorithms are often a useful tool to represent a digital pattern by means of a skeletonized image consisting of a set of one-pixel-width lines that highlight its significant features. There has been interest in applying thinning directly to gray-scale images, motivated by the desire to process images whose meaningful information is distributed over different levels of gray intensity. In this paper, a new algorithm is presented that can skeletonize both black-and-white and gray-scale pictures. The algorithm is based on the gray-weighted distance transform, can process gray-scale pictures whose intensity is not uniformly distributed, and preserves the topology of the original picture. The process includes a preliminary investigation of the 'hollows' in the gray-scale image; these hollows are treated as topological constraints for the skeleton structure or not, depending on whether their depth is statistically significant. The algorithm can also be executed on a parallel machine, since all operations are local. Some examples are discussed to illustrate the algorithm.
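
    The gray-weighted distance transform underlying this kind of skeletonization is not spelled out in the abstract; as a rough illustration only, the following sketch computes one with Dijkstra's algorithm, treating each pixel's gray value as the cost of stepping onto it (the function name, 4-connectivity, and seed convention are assumptions, not the authors' implementation).

      import heapq
      import numpy as np

      def gray_weighted_distance(image, seeds):
          """Gray-weighted distance transform via Dijkstra's algorithm.

          The cost of stepping onto a pixel is its gray value, so bright
          ridges accumulate distance quickly while dark 'hollows' stay cheap.
          `seeds` is a boolean mask marking the zero-distance starting set.
          """
          dist = np.full(image.shape, np.inf)
          heap = []
          for r, c in zip(*np.nonzero(seeds)):
              dist[r, c] = 0.0
              heapq.heappush(heap, (0.0, r, c))

          neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-connectivity
          while heap:
              d, r, c = heapq.heappop(heap)
              if d > dist[r, c]:
                  continue  # stale heap entry
              for dr, dc in neighbours:
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]:
                      nd = d + float(image[nr, nc])
                      if nd < dist[nr, nc]:
                          dist[nr, nc] = nd
                          heapq.heappush(heap, (nd, nr, nc))
          return dist

      # Toy usage: distances grow fastest across the bright (value 9) ridge.
      img = np.array([[1, 1, 9, 1],
                      [1, 9, 9, 1],
                      [1, 1, 1, 1]], dtype=float)
      seeds = np.zeros(img.shape, dtype=bool)
      seeds[0, 0] = True
      print(gray_weighted_distance(img, seeds))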

  17. Blind decomposition of Herschel-HIFI spectral maps of the NGC 7023 nebula

    NASA Astrophysics Data System (ADS)

    Berné, O.; Joblin, C.; Deville, Y.; Pilleri, P.; Pety, J.; Teyssier, D.; Gerin, M.; Fuente, A.

    2012-12-01

    Large spatial-spectral surveys are increasingly common in astronomy, which calls for new methods to analyze such mega- to giga-pixel data cubes. In this paper we present a method to decompose such observations into a limited and comprehensive set of components. The original data can then be interpreted in terms of linear combinations of these components. The method uses non-negative matrix factorization (NMF) to extract latent spectral end-members in the data. The number of needed end-members is estimated based on the level of noise in the data. A Monte-Carlo scheme is adopted to estimate the optimal end-members and their standard deviations. Finally, the maps of linear coefficients are reconstructed using non-negative least squares. We apply this method to a set of hyperspectral data of the NGC 7023 nebula, obtained recently with the HIFI instrument on board the Herschel Space Observatory, and provide a first interpretation of the results in terms of the 3-dimensional dynamical structure of the region.
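
    A minimal sketch of the end-member / coefficient-map pipeline described above, under several assumptions not taken from the paper (a synthetic cube, a known number of components, and no Monte-Carlo error estimation or noise-based selection of the number of end-members):

      import numpy as np
      from scipy.optimize import nnls
      from sklearn.decomposition import NMF

      # Synthetic stand-in for a spectral cube: ny x nx pixels, nchan channels.
      rng = np.random.default_rng(0)
      ny, nx, nchan, k = 8, 8, 64, 3
      true_H = rng.random((k, nchan))                # latent end-member spectra
      true_W = rng.random((ny * nx, k))              # per-pixel abundances
      cube = (true_W @ true_H).reshape(ny, nx, nchan) + 0.01 * rng.random((ny, nx, nchan))

      # 1. Flatten the cube into a non-negative (pixels x channels) matrix.
      X = cube.reshape(-1, nchan)

      # 2. NMF extracts k latent spectral end-members (the rows of H).
      model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
      W = model.fit_transform(X)      # coarse abundances, not reused below
      H = model.components_           # (k x nchan) end-member spectra

      # 3. Re-fit each pixel's coefficients with non-negative least squares,
      #    then fold them back into one spatial map per end-member.
      coeffs = np.array([nnls(H.T, spectrum)[0] for spectrum in X])
      maps = coeffs.reshape(ny, nx, k)
      print(maps.shape)               # (8, 8, 3)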

  18. Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.

    2014-01-01

    We propose a new method for joint segmentation of monotonously growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem on the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.
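
    A toy sketch of the growth-constrained cut, not the authors' implementation: four pixels tracked over three frames, with made-up unary costs and networkx's min-cut standing in for a production graph-cut solver. The directed temporal links forbid a pixel that is foreground in one frame from turning background in the next.

      import networkx as nx

      # Unary costs of labeling each of 4 pixels foreground / background per frame.
      # In frame 2, pixel 1 is noisy (its data slightly prefer background), but the
      # growth constraint keeps it foreground because frame 1 claims it strongly.
      cost_fg = [[1, 9, 9, 9],
                 [1, 1, 9, 9],
                 [1, 6, 1, 9]]
      cost_bg = [[9, 1, 1, 1],
                 [9, 9, 1, 1],
                 [9, 5, 9, 1]]

      G = nx.DiGraph()
      n_frames, n_pixels = len(cost_fg), len(cost_fg[0])
      for f in range(n_frames):
          for p in range(n_pixels):
              v = (f, p)
              # t-links: the cut pays the cost of the label the pixel receives
              # (source side = foreground, sink side = background).
              G.add_edge("s", v, capacity=cost_bg[f][p])
              G.add_edge(v, "t", capacity=cost_fg[f][p])
              # Monodirectional "infinite" temporal link: omitting the capacity
              # attribute makes the edge uncuttable, so a pixel that is foreground
              # in frame f can never be background in frame f + 1.
              if f + 1 < n_frames:
                  G.add_edge(v, (f + 1, p))

      cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
      for f in range(n_frames):
          print([int((f, p) in source_side) for p in range(n_pixels)])
      # Prints a monotonically growing segmentation:
      # [1, 0, 0, 0] / [1, 1, 0, 0] / [1, 1, 1, 0]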

  19. Performance of the first HAWAII 4RG-15 arrays in the laboratory and at the telescope

    NASA Astrophysics Data System (ADS)

    Hall, Donald N. B.; Atkinson, Dani; Beletic, James W.; Blank, Richard; Farris, Mark; Hodapp, Klaus W.; Jacobson, Shane M.; Loose, Markus; Luppino, Gerard

    2012-07-01

    The primary goal of the HAWAII 4RG-15 (H4RG-15) development is to provide a 16 megapixel 4096x4096 format at a significantly reduced price per pixel while maintaining the superb low-background performance of the HAWAII 2RG (H2RG). The H4RG-15 design incorporates several new features, notably clocked reference output and interleaved reference pixel readout, that promise to significantly improve noise performance. The reduction in pixel pitch from 18 to 15 microns should improve transimpedance gain, although at the expense of some degradation in full well and crosstalk. During the Phase-1 development, Teledyne has produced and screen-tested six hybrid arrays. In preparation for Phase-2, the most promising of these are being extensively characterized in the University of Hawaii’s (UH) ULBCam test facility originally developed for the JWST H2RG program. The end-to-end performance of the most promising array has been directly established through astronomical imaging observations at the UH 88-inch telescope on Mauna Kea. We report the performance of these Phase-1 H4RG-15s within the context of established H2RG performance for key parameters (primarily CDS read noise), also highlighting the improvements from the new readout modes.

  20. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2015-04-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as they share the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.
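
    A loose illustration of why separating metadata from bulk pixel data speeds up metadata-only work (this is generic pydicom usage on a sample file bundled with the library, not the MSD implementation itself): the header is parsed without ever loading the pixel data, and de-identification edits touch only that header.

      import pydicom
      from pydicom.data import get_testdata_file

      path = get_testdata_file("CT_small.dcm")   # small example study shipped with pydicom

      # Parse the metadata only; reading stops before the (7FE0,0010) PixelData element.
      hdr = pydicom.dcmread(path, stop_before_pixels=True)
      print("PixelData" in hdr)                  # False: the bulk data was never loaded

      # De-identify a few attributes in memory. In an MSD-like layout the rewritten
      # metadata object could keep pointing at the same shared bulk-data object.
      hdr.PatientName = "ANONYMOUS"
      hdr.PatientID = "000000"
      for keyword in ("PatientBirthDate", "PatientAddress"):
          if keyword in hdr:
              delattr(hdr, keyword)
      print(hdr.PatientName, hdr.StudyInstanceUID)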

  1. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    PubMed Central

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies’ metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as they share the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata. PMID:26158117

  2. Using triple gamma coincidences with a pixelated semiconductor Compton-PET scanner: a simulation study

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; Chmeissani, M.

    2016-01-01

    The Voxel Imaging PET (VIP) Pathfinder project presents a novel design using pixelated semiconductor detectors for nuclear medicine applications to achieve the intrinsic image quality limits set by physics. The conceptual design can be extended to a Compton gamma camera. The use of a pixelated CdTe detector with voxel sizes of 1 × 1 × 2 mm3 guarantees optimal energy and spatial resolution. However, the limited time resolution of semiconductor detectors makes it impossible to use Time Of Flight (TOF) with VIP PET. TOF is used to improve the signal-to-noise ratio (SNR) by using only the most probable portion of the Line-Of-Response (LOR) instead of its entire length. To overcome the limitation of the CdTe time resolution, we present in this article a simulation study using β+-γ emitting isotopes with a Compton-PET scanner. When the β+ annihilates with an electron, it produces two gammas that define a LOR in the PET scanner, while the additional gamma, when scattered in the scatter detector, provides a Compton cone that intersects the aforementioned LOR. The intersection indicates, within a few mm of uncertainty along the LOR, the origin of the β+-γ decay. Hence, one can limit the part of the LOR used by the image reconstruction algorithm.
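
    As a rough geometric illustration of how the Compton cone localizes the decay along the LOR (the abstract gives no geometry; the apex position, 30° half-angle, and LOR below are invented, and the infinite double cone ignores detector effects):

      import numpy as np

      def cone_line_intersection(apex, axis, half_angle, p0, d):
          """Points where the line p(t) = p0 + t*d meets an infinite double cone.

          The cone has apex `apex`, unit axis `axis` and opening half-angle
          `half_angle`; points p on it satisfy
          ((p - apex) . axis)^2 = |p - apex|^2 * cos^2(half_angle).
          """
          axis = axis / np.linalg.norm(axis)
          d = d / np.linalg.norm(d)
          w = p0 - apex
          c2 = np.cos(half_angle) ** 2
          a = np.dot(d, axis) ** 2 - c2
          b = 2.0 * (np.dot(d, axis) * np.dot(w, axis) - c2 * np.dot(d, w))
          c = np.dot(w, axis) ** 2 - c2 * np.dot(w, w)
          disc = b * b - 4.0 * a * c
          if disc < 0:
              return []                               # the LOR misses the cone
          roots = [(-b + s * np.sqrt(disc)) / (2.0 * a) for s in (+1.0, -1.0)]
          return [p0 + t * d for t in roots]

      # Invented geometry (cm): scatter point 10 cm above the origin, cone
      # opening back toward the source, LOR running along the x axis.
      apex = np.array([0.0, 10.0, 0.0])
      axis = np.array([0.0, -1.0, 0.0])
      p0, d = np.array([-20.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
      for pt in cone_line_intersection(apex, axis, np.deg2rad(30.0), p0, d):
          print(np.round(pt, 2))                      # candidate decay positions on the LOR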

  3. Pixel-based dust-extinction mapping in nearby galaxies: A new approach to lifting the veil of dust

    NASA Astrophysics Data System (ADS)

    Tamura, Kazuyuki

    In the first part of this dissertation, I explore a new approach to mapping dust extinction in galaxies, using the observed and estimated dust-free flux ratios of optical V-band and mid-IR 3.6 micron emission. The inferred missing V-band flux is then converted into an estimate of dust extinction. While dust features are not clearly evident in the observed ground-based images of NGC 0959, the target of my pilot study, the dust map created with this method clearly traces the distribution of dust seen in higher resolution Hubble images. Stellar populations are then analyzed through various pixel Color-Magnitude Diagrams and pixel Color-Color Diagrams (pCCDs), both before and after extinction correction. The (B - 3.6 micron) versus (far-UV - U) pCCD proves particularly powerful for distinguishing pixels that are dominated by different types or mixtures of stellar populations. Mapping these pixel groups onto a pixel-coordinate map shows that they are not distributed randomly, but follow genuine galactic structures, such as a previously unrecognized bar. I show that selecting pixel groups is not meaningful when using uncorrected colors, and that pixel-based extinction correction is crucial to reveal the true spatial variations in stellar populations. This method is then applied to a sample of late-type galaxies to study the distribution of dust and stellar populations as a function of morphological type and absolute magnitude. In each galaxy, I find that dust extinction does not simply decrease radially, but is concentrated in localized clumps throughout the galaxy. I also find some cases where star-formation regions are not associated with dust. In the second part, I describe the application of astronomical image analysis tools for medical purposes. In particular, Source Extractor is used to detect nerve fibers in the basement membrane images of human skin biopsies of obese subjects. While more development and testing is necessary for this kind of work, I show that computerized detection methods significantly increase the repeatability and reliability of the results. A patent on this work is pending.
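
    A minimal sketch of the flux-ratio idea, under assumptions not taken from the dissertation (an adopted intrinsic, dust-free V / 3.6 micron flux ratio and the simplification that the 3.6 micron image is unaffected by dust): the missing V-band flux in each pixel is converted directly into magnitudes of extinction.

      import numpy as np

      def v_band_extinction(f_v_obs, f_36_obs, intrinsic_ratio):
          """Per-pixel V-band extinction A_V in magnitudes (illustrative only).

          `intrinsic_ratio` is an assumed dust-free V / 3.6 micron flux ratio;
          the shortfall of the observed V-band flux relative to the flux
          predicted from the 3.6 micron image is attributed to extinction.
          """
          f_v_expected = intrinsic_ratio * f_36_obs
          return -2.5 * np.log10(f_v_obs / f_v_expected)

      # Toy pixels: identical 3.6 micron flux, increasingly suppressed V-band flux.
      f_36 = np.array([1.0, 1.0, 1.0])
      f_v = np.array([2.0, 1.0, 0.5])
      print(v_band_extinction(f_v, f_36, intrinsic_ratio=2.0))  # ~[0.0, 0.75, 1.51] mag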

  4. Highly Reflective Multi-stable Electrofluidic Display Pixels

    NASA Astrophysics Data System (ADS)

    Yang, Shu

    Electronic papers (E-papers) refer to displays that mimic the appearance of printed paper while still offering the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays is that reading on paper causes the least eye fatigue, owing to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy of any form is needed to sustain the displayed image. To achieve the visual effect of a paper print, an ideal E-paper has to be highly reflective, with a good contrast ratio and full-color capability. To sustain the image with zero power consumption, the display pixels need to be bistable, which means that the "on" and "off" states are both lowest-energy states; a pixel can change its state only when sufficient external energy is supplied. Many emerging technologies are competing to demonstrate the first ideal E-paper device, but none is able to achieve a satisfactory visual effect, bistability, and video speed at the same time. The challenges come from either the inherent physical/chemical properties or the fabrication process. The electrofluidic display is one of the most promising E-paper technologies. It has successfully demonstrated high reflectivity, brilliant color, and video-speed operation by moving a colored pigment dispersion between visible and hidden locations with the electrowetting force. However, the pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that are able to sustain grayscale levels without any power consumption, while keeping the favorable features of the previous-generation electrofluidic display. The pixel design, a fabrication method using multiple-layer dry-film photoresist lamination, and physical/optical characterizations are discussed in detail. Based on the pixel structure, preliminary results of a simplified design and fabrication method are demonstrated. As advanced research topics regarding the device's optical performance, an optical model for evaluating reflective displays' light out-coupling efficiency is first established to guide the pixel design; furthermore, aluminum surface diffusers are analytically modeled and then fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. The achieved results establish the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for larger-scale signage applications.

  5. The NORDA MC&G Map Data Formatting Facility: Development of a Digital Map Data Base

    DTIC Science & Technology

    1989-12-01

    Lempel-Ziv compression ... extract such features as roads, water, urban areas, and text from the scanned ... Also investigated were various transform encoding ... Compression ratios: the scanned maps revealed a small number of color classes and large homogeneous regions. The original 24-bit pixel ... Various high-performance, lossless compression techniques were tried ... Table 6: Compression ratios for VQ classification followed by Lempel-Ziv.

  6. Convolutional Neural Networks for 1-D Many-Channel Data

    DTIC Science & Technology

    Deep convolutional neural networks (CNNs) represent the state of the art in image recognition. The same properties that led to their success in that ... crack detection (8,000 data points, 72 channels). Though the model's predictive ability is limited to fitting the trend, its partial success suggests that ... originally written to classify digits in the MNIST database (28 x 28 pixels, 1 channel), for use on 1-D acoustic data taken from experiments focused on ...
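
    The snippet above describes adapting an image CNN to one-dimensional, many-channel input; the report's actual architecture is not given, so the sketch below only shows the generic shape of such an adaptation (layer sizes and the single-output head are guesses), with 72 input channels and 8,000 samples per trace instead of 28 x 28 single-channel images.

      import torch
      import torch.nn as nn

      # A small 1-D CNN: Conv1d treats the 72 sensor channels the way an image
      # CNN treats color channels, and convolves along the 8,000-sample axis.
      model = nn.Sequential(
          nn.Conv1d(in_channels=72, out_channels=32, kernel_size=9, stride=2),
          nn.ReLU(),
          nn.Conv1d(32, 64, kernel_size=9, stride=2),
          nn.ReLU(),
          nn.AdaptiveAvgPool1d(1),   # collapse the remaining time axis
          nn.Flatten(),
          nn.Linear(64, 1),          # e.g. a single crack-detection score
      )

      x = torch.randn(4, 72, 8000)   # batch of 4 traces, 72 channels, 8,000 samples each
      print(model(x).shape)          # torch.Size([4, 1])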

  7. Study of the material of the ATLAS inner detector for Run 2 of the LHC

    DOE PAGES

    Aaboud, M.; Aad, G.; Abbott, B.; ...

    2017-12-07

    The ATLAS inner detector comprises three different sub-detectors: the pixel detector, the silicon strip tracker, and the transition-radiation drift-tube tracker. The Insertable B-Layer, a new innermost pixel layer, was installed during the shutdown period in 2014, together with modifications to the layout of the cables and support structures of the existing pixel detector. The material in the inner detector is studied with several methods, using a low-luminosity √s=13 TeV pp collision sample corresponding to around 2.0 nb-1 collected in 2015 with the ATLAS experiment at the LHC. In this paper, the material within the innermost barrel region is studied using reconstructed hadronic interaction and photon conversion vertices. For the forward rapidity region, the material is probed by a measurement of the efficiency with which single tracks reconstructed from pixel detector hits alone can be extended with hits on the track in the strip layers. The results of these studies have been taken into account in an improved description of the material in the ATLAS inner detector simulation, resulting in a reduction in the uncertainties associated with the charged-particle reconstruction efficiency determined from simulation.

  8. Study of the material of the ATLAS inner detector for Run 2 of the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaboud, M.; Aad, G.; Abbott, B.

    The ATLAS inner detector comprises three different sub-detectors: the pixel detector, the silicon strip tracker, and the transition-radiation drift-tube tracker. The Insertable B-Layer, a new innermost pixel layer, was installed during the shutdown period in 2014, together with modifications to the layout of the cables and support structures of the existing pixel detector. The material in the inner detector is studied with several methods, using a low-luminosity √s=13 TeV pp collision sample corresponding to around 2.0 nb-1 collected in 2015 with the ATLAS experiment at the LHC. In this paper, the material within the innermost barrel region is studied using reconstructed hadronic interaction and photon conversion vertices. For the forward rapidity region, the material is probed by a measurement of the efficiency with which single tracks reconstructed from pixel detector hits alone can be extended with hits on the track in the strip layers. The results of these studies have been taken into account in an improved description of the material in the ATLAS inner detector simulation, resulting in a reduction in the uncertainties associated with the charged-particle reconstruction efficiency determined from simulation.

  10. Study of the material of the ATLAS inner detector for Run 2 of the LHC

    NASA Astrophysics Data System (ADS)

    Aaboud, M.; Aad, G.; Abbott, B.; Abdallah, J.; Abdinov, O.; Abeloos, B.; Abidi, S. H.; AbouZeid, O. S.; Abraham, N. L.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adachi, S.; Adamczyk, L.; Adelman, J.; Adersberger, M.; Adye, T.; Affolder, A. A.; Agatonovic-Jovin, T.; Agheorghiesei, C.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akatsuka, S.; Akerstedt, H.; Åkesson, T. P. A.; Akilli, E.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albicocco, P.; Alconada Verzini, M. J.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Ali, B.; Aliev, M.; Alimonti, G.; Alison, J.; Alkire, S. P.; Allbrooke, B. M. M.; Allen, B. W.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Alshehri, A. A.; Alstaty, M.; Alvarez Gonzalez, B.; Álvarez Piqueras, D.; Alviggi, M. G.; Amadio, B. T.; Amaral Coutinho, Y.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amorim, A.; Amoroso, S.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Angerami, A.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antel, C.; Antonelli, M.; Antonov, A.; Antrim, D. J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Arabidze, G.; Arai, Y.; Araque, J. P.; Araujo Ferraz, V.; Arce, A. T. H.; Ardell, R. E.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Armitage, L. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Artz, S.; Asai, S.; Asbah, N.; Ashkenazi, A.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Avolio, G.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baas, A. E.; Baca, M. J.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Bagnaia, P.; Bahrasemani, H.; Baines, J. T.; Bajic, M.; Baker, O. K.; Baldin, E. M.; Balek, P.; Balli, F.; Balunas, W. K.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisits, M.-S.; Barkeloo, J. T.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska-Blenessy, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barranco Navarro, L.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, M.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bedognetti, M.; Bee, C. P.; Beermann, T. A.; Begalli, M.; Begel, M.; Behr, J. K.; Bell, A. S.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Belyaev, N. L.; Benary, O.; Benchekroun, D.; Bender, M.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez, J.; Benjamin, D. P.; Benoit, M.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Beringer, J.; Berlendis, S.; Bernard, N. R.; Bernardi, G.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertram, I. A.; Bertsche, C.; Bertsche, D.; Besjes, G. J.; Bessidskaia Bylund, O.; Bessner, M.; Besson, N.; Betancourt, C.; Bethani, A.; Bethke, S.; Bevan, A. J.; Beyer, J.; Bianchi, R. M.; Biebel, O.; Biedermann, D.; Bielski, R.; Biesuz, N. V.; Biglietti, M.; Bilbao De Mendizabal, J.; Billoud, T. R. 
V.; Bilokon, H.; Bindi, M.; Bingul, A.; Bini, C.; Biondi, S.; Bisanz, T.; Bittrich, C.; Bjergaard, D. M.; Black, C. W.; Black, J. E.; Black, K. M.; Blair, R. E.; Blazek, T.; Bloch, I.; Blocker, C.; Blue, A.; Blum, W.; Blumenschein, U.; Blunier, S.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boehler, M.; Boerner, D.; Bogavac, D.; Bogdanchikov, A. G.; Bohm, C.; Boisvert, V.; Bokan, P.; Bold, T.; Boldyrev, A. S.; Bolz, A. E.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Bortfeldt, J.; Bortoletto, D.; Bortolotto, V.; Boscherini, D.; Bosman, M.; Bossio Sola, J. D.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Breaden Madden, W. D.; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Briglin, D. L.; Bristow, T. M.; Britton, D.; Britzger, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Broughton, J. H.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruni, A.; Bruni, G.; Bruni, L. S.; Brunt, BH; Bruschi, M.; Bruscino, N.; Bryant, P.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Buehrer, F.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burch, T. J.; Burckhart, H.; Burdin, S.; Burgard, C. D.; Burger, A. M.; Burghgrave, B.; Burka, K.; Burke, S.; Burmeister, I.; Burr, J. T. P.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Butler, J. M.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Cabrera Urbán, S.; Caforio, D.; Cairo, V. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Callea, G.; Caloba, L. P.; Calvente Lopez, S.; Calvet, D.; Calvet, S.; Calvet, T. P.; Camacho Toro, R.; Camarda, S.; Camarri, P.; Cameron, D.; Caminal Armadans, R.; Camincher, C.; Campana, S.; Campanelli, M.; Camplani, A.; Campoverde, A.; Canale, V.; Cano Bret, M.; Cantero, J.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, I.; Carli, T.; Carlino, G.; Carlson, B. T.; Carminati, L.; Carney, R. M. D.; Caron, S.; Carquin, E.; Carrá, S.; Carrillo-Montoya, G. D.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Casper, D. W.; Castelijn, R.; Castillo Gimenez, V.; Castro, N. F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavallaro, E.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Celebi, E.; Ceradini, F.; Cerda Alberich, L.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chan, S. K.; Chan, W. S.; Chan, Y. L.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chau, C. C.; Chavez Barajas, C. A.; Che, S.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, H. J.; Cheplakov, A.; Cheremushkina, E.; Cherkaoui El Moursli, R.; Chernyatin, V.; Cheu, E.; Cheung, K.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chitan, A.; Chiu, Y. H.; Chizhov, M. V.; Choi, K.; Chomont, A. R.; Chouridou, S.; Christodoulou, V.; Chromek-Burckhart, D.; Chu, M. C.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Ciftci, A. K.; Cinca, D.; Cindro, V.; Cioara, I. A.; Ciocca, C.; Ciocio, A.; Cirotto, F.; Citron, Z. 
H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, B. L.; Clark, M. R.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Colasurdo, L.; Cole, B.; Colijn, A. P.; Collot, J.; Colombo, T.; Conde Muiño, P.; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Constantinescu, S.; Conti, G.; Conventi, F.; Cooke, M.; Cooper-Sarkar, A. M.; Cormier, F.; Cormier, K. J. R.; Corradi, M.; Corriveau, F.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Crawley, S. J.; Creager, R. A.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cueto, A.; Cuhadar Donszelmann, T.; Cukierman, A. R.; Cummings, J.; Curatolo, M.; Cúth, J.; Czirr, H.; Czodrowski, P.; D'amen, G.; D'Auria, S.; D'eramo, L.; D'Onofrio, M.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dado, T.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Daneri, M. F.; Dang, N. P.; Daniells, A. C.; Dann, N. S.; Danninger, M.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Daubney, T.; Davey, W.; David, C.; Davidek, T.; Davies, M.; Davis, D. R.; Davison, P.; Dawe, E.; Dawson, I.; De, K.; de Asmundis, R.; De Benedetti, A.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Maria, A.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vasconcelos Corga, K.; De Vivie De Regie, J. B.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dedovich, D. V.; Dehghanian, N.; Deigaard, I.; Del Gaudio, M.; Del Peso, J.; Del Prete, T.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delporte, C.; Delsart, P. A.; DeMarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Denysiuk, D.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Dette, K.; Devesa, M. R.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Bello, F. A.; Di Ciaccio, A.; Di Ciaccio, L.; Di Clemente, W. K.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Micco, B.; Di Nardo, R.; Di Petrillo, K. F.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Díez Cornell, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Dobos, D.; Dobre, M.; Doglioni, C.; Dolejsi, J.; Dolezal, Z.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dris, M.; Du, Y.; Duarte-Campderros, J.; Dubreuil, A.; Duchovni, E.; Duckeck, G.; Ducourthial, A.; Ducu, O. A.; Duda, D.; Dudarev, A.; Dudder, A. Chr.; Duffield, E. M.; Duflot, L.; Dührssen, M.; Dumancic, M.; Dumitriu, A. E.; Duncan, A. K.; Dunford, M.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Dyndal, M.; Dziedzic, B. S.; Eckardt, C.; Ecker, K. M.; Edgar, R. C.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; El Kosseifi, R.; Ellajosyula, V.; Ellert, M.; Elles, S.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Ennis, J. S.; Erdmann, J.; Ereditato, A.; Ernis, G.; Ernst, M.; Errede, S.; Escalier, M.; Escobar, C.; Esposito, B.; Estrada Pastor, O.; Etienvre, A. 
I.; Etzion, E.; Evans, H.; Ezhilov, A.; Ezzi, M.; Fabbri, F.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farina, C.; Farina, E. M.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Faucci Giannelli, M.; Favareto, A.; Fawcett, W. J.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenton, M. J.; Fenyuk, A. B.; Feremenga, L.; Fernandez Martinez, P.; Fernandez Perez, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Fischer, A.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, R. R. M.; Flick, T.; Flierl, B. M.; Flores Castillo, L. R.; Flowerdew, M. J.; Forcolin, G. T.; Formica, A.; Förster, F. A.; Forti, A.; Foster, A. G.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Franchino, S.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; Fressard-Batraneanu, S. M.; Freund, B.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fusayasu, T.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, L. G.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Ganguly, S.; Gao, Y.; Gao, Y. S.; Garay Walls, F. M.; García, C.; García Navarro, J. E.; García Pascual, J. A.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gascon Bravo, A.; Gasnikova, K.; Gatti, C.; Gaudiello, A.; Gaudio, G.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Gee, C. N. P.; Geisen, J.; Geisen, M.; Geisler, M. P.; Gellerstedt, K.; Gemme, C.; Genest, M. H.; Geng, C.; Gentile, S.; Gentsos, C.; George, S.; Gerbaudo, D.; Gershon, A.; Geßner, G.; Ghasemi, S.; Ghneimat, M.; Giacobbe, B.; Giagu, S.; Giannetti, P.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugni, D.; Giuli, F.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gkountoumis, P.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Goncalves Gama, R.; Goncalves Pinto Firmino Da Costa, J.; Gonella, G.; Gonella, L.; Gongadze, A.; González de la Hoz, S.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Gottardo, C. A.; Goudet, C. R.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Gozani, E.; Graber, L.; Grabowska-Bold, I.; Gradin, P. O. J.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gravila, P. M.; Gray, C.; Gray, H. M.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Grevtsov, K.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J.-F.; Groh, S.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. 
J.; Grummer, A.; Guan, L.; Guan, W.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Gui, B.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, W.; Guo, Y.; Gupta, R.; Gupta, S.; Gustavino, G.; Gutierrez, P.; Gutierrez Ortiz, N. G.; Gutschow, C.; Guyot, C.; Guzik, M. P.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Hadef, A.; Hageböck, S.; Hagihara, M.; Hakobyan, H.; Haleem, M.; Haley, J.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Han, S.; Hanagaki, K.; Hanawa, K.; Hance, M.; Haney, B.; Hanke, P.; Hansen, J. B.; Hansen, J. D.; Hansen, M. C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harrington, R. D.; Harrison, P. F.; Hartmann, N. M.; Hasegawa, M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havener, L. B.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hayakawa, D.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heidegger, K. K.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, J. J.; Heinrich, L.; Heinz, C.; Hejbal, J.; Helary, L.; Held, A.; Hellman, S.; Helsens, C.; Henderson, R. C. W.; Heng, Y.; Henkelmann, S.; Henriques Correia, A. M.; Henrot-Versille, S.; Herbert, G. H.; Herde, H.; Herget, V.; Hernández Jiménez, Y.; Herr, H.; Herten, G.; Hertenberger, R.; Hervas, L.; Herwig, T. C.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Higashino, S.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hils, M.; Hinchliffe, I.; Hirose, M.; Hirschbuehl, D.; Hiti, B.; Hladik, O.; Hoad, X.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohn, D.; Holmes, T. R.; Homann, M.; Honda, S.; Honda, T.; Hong, T. M.; Hooberman, B. H.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howarth, J.; Hoya, J.; Hrabovsky, M.; Hrdinka, J.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, P. J.; Hsu, S.-C.; Hu, Q.; Hu, S.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Huo, P.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Introzzi, G.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Isacson, M. F.; Ishijima, N.; Ishino, M.; Ishitsuka, M.; Issever, C.; Istin, S.; Ito, F.; Ponce, J. M. Iturbe; Iuppa, R.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, P.; Jacobs, R. M.; Jain, V.; Jakobi, K. B.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jamin, D. O.; Jana, D. K.; Jansky, R.; Janssen, J.; Janus, M.; Janus, P. A.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Javurkova, M.; Jeanneau, F.; Jeanty, L.; Jejelava, J.; Jelinskas, A.; Jenni, P.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, H.; Jiang, Y.; Jiang, Z.; Jiggins, S.; Jimenez Pena, J.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Jivan, H.; Johansson, P.; Johns, K. A.; Johnson, C. A.; Johnson, W. J.; Jon-And, K.; Jones, R. W. L.; Jones, S. D.; Jones, S.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Jovicevic, J.; Ju, X.; Juste Rozas, A.; Köhler, M. K.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kaji, T.; Kajomovitz, E.; Kalderon, C. W.; Kaluza, A.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kanjir, L.; Kantserov, V. 
A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kar, D.; Karakostas, K.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawade, K.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kay, E. F.; Kazanin, V. F.; Keeler, R.; Kehoe, R.; Keller, J. S.; Kempster, J. J.; Kendrick, J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khader, M.; Khalil-zada, F.; Khanov, A.; Kharlamov, A. G.; Kharlamova, T.; Khodinov, A.; Khoo, T. J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kilby, C. R.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. M.; King, B. T.; Kirchmeier, D.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kitali, V.; Kiuchi, K.; Kivernyk, O.; Kladiva, E.; Klapdor-Kleingrothaus, T.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klingl, T.; Klioutchnikova, T.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Köhler, N. M.; Koi, T.; Kolb, M.; Koletsou, I.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotwal, A.; Koulouris, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kourlitis, E.; Kouskoura, V.; Kowalewska, A. B.; Kowalewski, R.; Kowalski, T. Z.; Kozakai, C.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Krauss, D.; Kremer, J. A.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, M. C.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuechler, J. T.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kulinich, Y. P.; Kuna, M.; Kunigo, T.; Kupco, A.; Kupfer, T.; Kuprash, O.; Kurashige, H.; Kurchaninov, L. L.; Kurochkin, Y. A.; Kurth, M. G.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; Kyriazopoulos, D.; La Rosa, A.; La Rosa Navarro, J. L.; La Rotonda, L.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lammers, S.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lanfermann, M. C.; Lang, V. S.; Lange, J. C.; Langenberg, R. J.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Lapertosa, A.; Laplace, S.; Laporte, J. F.; Lari, T.; Lasagni Manghi, F.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Lazovich, T.; Lazzaroni, M.; Le, B.; Le Dortz, O.; Le Guirriec, E.; Le Quilleuc, E. P.; LeBlanc, M.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, G. R.; Lee, S. C.; Lee, L.; Lefebvre, B.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmann Miotto, G.; Lei, X.; Leight, W. A.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Lerner, G.; Leroy, C.; Lesage, A. A. J.; Lester, C. G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. 
J.; Levy, M.; Lewis, D.; Li, B.; Li, Changqiao; Li, H.; Li, L.; Li, Q.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liberti, B.; Liblong, A.; Lie, K.; Liebal, J.; Liebig, W.; Limosani, A.; Lin, S. C.; Lin, T. H.; Lindquist, B. E.; Lionti, A. E.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lister, A.; Litke, A. M.; Liu, B.; Liu, H.; Liu, H.; Liu, J. K. K.; Liu, J.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, Y. L.; Liu, Y.; Livan, M.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo, C. Y.; Lo Sterzo, F.; Lobodzinska, E. M.; Loch, P.; Loebinger, F. K.; Loesle, A.; Loew, K. M.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Longo, L.; Looper, K. A.; Lopez, J. A.; Lopez Mateos, D.; Lopez Paz, I.; Lopez Solis, A.; Lorenz, J.; Martinez, N. Lorenzo; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, H.; Lu, N.; Lu, Y. J.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Luzi, P. M.; Lynn, D.; Lysak, R.; Lytken, E.; Lyubushkin, V.; Ma, H.; Ma, L. L.; Ma, Y.; Maccarrone, G.; Macchiolo, A.; Macdonald, C. M.; Maček, B.; Machado Miguens, J.; Madaffari, D.; Madar, R.; Mader, W. F.; Madsen, A.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A. S.; Magradze, E.; Mahlstedt, J.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maier, T.; Maio, A.; Majersky, O.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandelli, L.; Mandić, I.; Maneira, J.; Filho, L. Manhaes de Andrade; Manjarres Ramos, J.; Mann, A.; Manousos, A.; Mansoulie, B.; Mansour, J. D.; Mantifel, R.; Mantoani, M.; Manzoni, S.; Mapelli, L.; Marceca, G.; March, L.; Marchese, L.; Marchiori, G.; Marcisovsky, M.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Martensson, M. U. F.; Marti-Garcia, S.; Martin, C. B.; Martin, T. A.; Martin, V. J.; dit Latour, B. Martin; Martinez, M.; Martinez Outschoorn, V. I.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Maznas, I.; Mazza, S. M.; McFadden, N. C.; McGoldrick, G.; McKee, S. P.; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McClymont, L. I.; McDonald, E. F.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McNamara, P. C.; McPherson, R. A.; Meehan, S.; Megy, T. J.; Mehlhase, S.; Mehta, A.; Meideck, T.; Meier, K.; Meirose, B.; Melini, D.; Mellado Garcia, B. R.; Mellenthin, J. D.; Melo, M.; Meloni, F.; Menary, S. B.; Meng, L.; Meng, X. T.; Mengarelli, A.; Menke, S.; Meoni, E.; Mergelmeyer, S.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Theenhausen, H. Meyer Zu; Miano, F.; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Minegishi, Y.; Ming, Y.; Mir, L. M.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mizukami, A.; Mjörnmark, J. 
U.; Mkrtchyan, T.; Mlynarikova, M.; Moa, T.; Mochizuki, K.; Mogg, P.; Mohapatra, S.; Molander, S.; Moles-Valls, R.; Monden, R.; Mondragon, M. C.; Mönig, K.; Monk, J.; Monnier, E.; Montalbano, A.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morgenstern, S.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Morvaj, L.; Moschovakos, P.; Mosidze, M.; Moss, H. J.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Muenstermann, D.; Mullen, P.; Mullier, G. A.; Munoz Sanchez, F. J.; Murray, W. J.; Musheghyan, H.; Muškinja, M.; Myagkov, A. G.; Myska, M.; Nachman, B. P.; Nackenhorst, O.; Nagai, K.; Nagai, R.; Nagano, K.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Naranjo Garcia, R. F.; Narayan, R.; Narrias Villar, D. I.; Naryshkin, I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nelson, M. E.; Nemecek, S.; Nemethy, P.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Newman, P. R.; Ng, T. Y.; Nguyen Manh, T.; Nickerson, R. B.; Nicolaidou, R.; Nielsen, J.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, J. K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nishu, N.; Nisius, R.; Nitsche, I.; Nitta, T.; Nobe, T.; Noguchi, Y.; Nomachi, M.; Nomidis, I.; Nomura, M. A.; Nooney, T.; Nordberg, M.; Norjoharuddeen, N.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nurse, E.; Nuti, F.; O'connor, K.; O'Neil, D. C.; O'Rourke, A. A.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Oleiro Seabra, L. F.; Olivares Pino, S. A.; Oliveira Damazio, D.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Orr, R. S.; Osculati, B.; Ospanov, R.; Garzon, G. Otero y.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Pacheco Rodriguez, L.; Padilla Aranda, C.; Pagan Griso, S.; Paganini, M.; Paige, F.; Palacino, G.; Palazzo, S.; Palestini, S.; Palka, M.; Pallin, D.; Panagiotopoulou, E. St.; Panagoulias, I.; Pandini, C. E.; Panduro Vazquez, J. G.; Pani, P.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parker, A. J.; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pascuzzi, V. R.; Pasner, J. M.; Pasqualucci, E.; Passaggio, S.; Pastore, Fr.; Pataraia, S.; Pater, J. R.; Pauly, T.; Pearson, B.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Penc, O.; Peng, C.; Peng, H.; Penwell, J.; Peralva, B. S.; Perego, M. M.; Perepelitsa, D. V.; Peri, F.; Perini, L.; Pernegger, H.; Perrella, S.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrov, M.; Petrucci, F.; Pettersson, N. E.; Peyaud, A.; Pezoa, R.; Phillips, F. H.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Pickering, M. A.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. 
D.; Pin, A. W. J.; Pinamonti, M.; Pinfold, J. L.; Pirumov, H.; Pitt, M.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Pluth, D.; Podberezko, P.; Poettgen, R.; Poggi, R.; Poggioli, L.; Pohl, D.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Ponomarenko, D.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Poppleton, A.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Poulard, G.; Poulsen, T.; Poveda, J.; Pozo Astigarraga, M. E.; Pralavorio, P.; Pranko, A.; Prell, S.; Price, D.; Price, L. E.; Primavera, M.; Prince, S.; Proklova, N.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Puri, A.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Raine, J. A.; Rajagopalan, S.; Rangel-Smith, C.; Rashid, T.; Raspopov, S.; Ratti, M. G.; Rauch, D. M.; Rauscher, F.; Rave, S.; Ravinovich, I.; Rawling, J. H.; Raymond, M.; Read, A. L.; Readioff, N. P.; Reale, M.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reed, R. G.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reiss, A.; Rembser, C.; Ren, H.; Rescigno, M.; Resconi, S.; Resseguie, E. D.; Rettie, S.; Reynolds, E.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rimoldi, M.; Rinaldi, L.; Ripellino, G.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Rizzi, C.; Roberts, R. T.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Rocco, E.; Roda, C.; Rodina, Y.; Rodriguez Bosca, S.; Rodriguez Perez, A.; Rodriguez Rodriguez, D.; Roe, S.; Rogan, C. S.; RØhne, O.; Roloff, J.; Romaniouk, A.; Romano, M.; Romano Saez, S. M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Rosati, S.; Rosbach, K.; Rose, P.; Rosien, N.-A.; Rossi, E.; Rossi, L. P.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryu, S.; Ryzhov, A.; Rzehorz, G. F.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salazar Loyola, J. E.; Salek, D.; Sales De Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sampsonidou, D.; Sánchez, J.; Sanchez Martinez, V.; Sanchez Pineda, A.; Sandaker, H.; Sandbach, R. L.; Sander, C. O.; Sandhoff, M.; Sandoval, C.; Sankey, D. P. C.; Sannino, M.; Sano, Y.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Santoyo Castillo, I.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sato, K.; Sauvan, E.; Savage, G.; Savard, P.; Savic, N.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schachtner, B. M.; Schaefer, D.; Schaefer, L.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Schiavi, C.; Schier, S.; Schildgen, L. 
K.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt-Sommerfeld, K. R.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitz, S.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schott, M.; Schouwenberg, J. F. P.; Schovancova, J.; Schramm, S.; Schuh, N.; Schulte, A.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwartzman, A.; Schwarz, T. A.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Sciandra, A.; Sciolla, G.; Scuri, F.; Scutti, F.; Searcy, J.; Seema, P.; Seidel, S. C.; Seiden, A.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Semprini-Cesari, N.; Senkin, S.; Serfon, C.; Serin, L.; Serkin, L.; Sessa, M.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shaikh, N. W.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Shen, Y.; Sherafati, N.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shipsey, I. P. J.; Shirabe, S.; Shiyakova, M.; Shlomi, J.; Shmeleva, A.; Shoaleh Saadi, D.; Shochet, M. J.; Shojaii, S.; Shope, D. R.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Sicho, P.; Sickles, A. M.; Sidebo, P. E.; Sideras Haddad, E.; Sidiropoulou, O.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silverstein, S. B.; Simak, V.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, M.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Siral, I.; Sivoklokov, S. Yu.; Sjölin, J.; Skinner, M. B.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Slovak, R.; Smakhtin, V.; Smart, B. H.; Smiesko, J.; Smirnov, N.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, J. W.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snyder, I. M.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Sokhrannyi, G.; Solans Sanchez, C. A.; Solar, M.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Son, H.; Sopczak, A.; Sosa, D.; Sotiropoulou, C. L.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Sperlich, D.; Spettel, F.; Spieker, T. M.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; St. Denis, R. D.; Stabile, A.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanitzki, M. M.; Stapf, B. S.; Stapnes, S.; Starchenko, E. A.; Stark, G. H.; Stark, J.; Stark, S. H.; Staroba, P.; Starovoitov, P.; Stärz, S.; Staszewski, R.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stewart, G. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Suchek, S.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultan, DMS; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Suruliz, K.; Suster, C. J. E.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Swift, S. P.; Sykora, I.; Sykora, T.; Ta, D.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takasugi, E. H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tanaka, J.; Tanaka, M.; Tanaka, R.; Tanaka, S.; Tanioka, R.; Tannenwald, B. B.; Tapia Araya, S.; Tapprogge, S.; Tarem, S.; Tartarelli, G. 
F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, A. C.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teixeira-Dias, P.; Temple, D.; Ten Kate, H.; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Theveneaux-Pelzer, T.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, P. D.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Tibbetts, M. J.; Ticse Torres, R. E.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tipton, P.; Tisserant, S.; Todome, K.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Tong, B.; Tornambe, P.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Treado, C. J.; Trefzger, T.; Tresoldi, F.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Trofymov, A.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tsang, K. W.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsui, K. M.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tu, Y.; Tudorache, A.; Tudorache, V.; Tulbure, T. T.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turgeman, D.; Turk Cakir, I.; Turra, R.; Tuts, P. M.; Ucchielli, G.; Ueda, I.; Ughetto, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usui, J.; Vacavant, L.; Vacek, V.; Vachon, B.; Vaidya, A.; Valderanis, C.; Valdes Santurio, E.; Valentinetti, S.; Valero, A.; Valéry, L.; Valkar, S.; Vallier, A.; Valls Ferrer, J. A.; Van Den Wollenberg, W.; van der Graaf, H.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vaniachine, A.; Vankov, P.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varni, C.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vasquez, J. G.; Vasquez, G. A.; Vazeille, F.; Vazquez Schroeder, T.; Veatch, J.; Veeraraghavan, V.; Veloce, L. M.; Veloso, F.; Veneziano, S.; Ventura, A.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, A. T.; Vermeulen, J. C.; Vetterli, M. C.; Viaux Maira, N.; Viazlo, O.; Vichou, I.; Vickey, T.; Boeriu, O. E. Vickey; Viehhauser, G. H. A.; Viel, S.; Vigani, L.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vishwakarma, A.; Vittori, C.; Vivarelli, I.; Vlachos, S.; Vogel, M.; Vokac, P.; Volpi, G.; von der Schmitt, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Wagner, P.; Wagner, W.; Wagner-Kuhr, J.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wallangen, V.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, Q.; Wang, R.; Wang, S. M.; Wang, T.; Wang, W.; Wang, W.; Wang, Z.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, A. F.; Webb, S.; Weber, M. S.; Weber, S. W.; Weber, S. A.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weirich, M.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M. D.; Werner, P.; Wessels, M.; Whalen, K.; Whallon, N. L.; Wharton, A. M.; White, A. 
S.; White, A.; White, M. J.; White, R.; Whiteson, D.; Whitmore, B. W.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilk, F.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, J. A.; Wingerter-Seez, I.; Winkels, E.; Winklmeier, F.; Winston, O. J.; Winter, B. T.; Wittgen, M.; Wobisch, M.; Wolf, T. M. H.; Wolff, R.; Wolter, M. W.; Wolters, H.; Wong, V. W. S.; Worm, S. D.; Wosiek, B. K.; Wotschack, J.; Wozniak, K. W.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xi, Z.; Xia, L.; Xu, D.; Xu, L.; Xu, T.; Yabsley, B.; Yacoob, S.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamatani, M.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yigitbasi, E.; Yildirim, E.; Yorita, K.; Yoshihara, K.; Young, C.; Young, C. J. S.; Yu, J.; Yu, J.; Yuen, S. P. Y.; Yusuff, I.; Zabinski, B.; Zacharis, G.; Zaidan, R.; Zaitsev, A. M.; Zakharchuk, N.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanzi, D.; Zeitnitz, C.; Zemaityte, G.; Zemla, A.; Zeng, J. C.; Zeng, Q.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, L.; Zhang, M.; Zhang, P.; Zhang, R.; Zhang, R.; Zhang, X.; Zhang, Y.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, M.; Zhou, M.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; Zou, R.; zur Nedden, M.; Zwalinski, L.

    2017-12-01

    The ATLAS inner detector comprises three different sub-detectors: the pixel detector, the silicon strip tracker, and the transition-radiation drift-tube tracker. The Insertable B-Layer, a new innermost pixel layer, was installed during the shutdown period in 2014, together with modifications to the layout of the cables and support structures of the existing pixel detector. The material in the inner detector is studied with several methods, using a low-luminosity √s=13 TeV pp collision sample corresponding to around 2.0 nb-1 collected in 2015 with the ATLAS experiment at the LHC. In this paper, the material within the innermost barrel region is studied using reconstructed hadronic interaction and photon conversion vertices. For the forward rapidity region, the material is probed by a measurement of the efficiency with which single tracks reconstructed from pixel detector hits alone can be extended with hits on the track in the strip layers. The results of these studies have been taken into account in an improved description of the material in the ATLAS inner detector simulation, resulting in a reduction in the uncertainties associated with the charged-particle reconstruction efficiency determined from simulation.

  11. Study of the material of the ATLAS inner detector for Run 2 of the LHC

    DOE PAGES

    Aaboud, M.; Aad, G.; Abbott, B.; ...

    2017-12-07

    The ATLAS inner detector comprises three different sub-detectors: the pixel detector, the silicon strip tracker, and the transition-radiation drift-tube tracker. The Insertable B-Layer, a new innermost pixel layer, was installed during the shutdown period in 2014, together with modifications to the layout of the cables and support structures of the existing pixel detector. The material in the inner detector is studied with several methods, using a low-luminosity √s = 13 TeV pp collision sample corresponding to around 2.0 nb-1 collected in 2015 with the ATLAS experiment at the LHC. In this paper, the material within the innermost barrel region is studied using reconstructed hadronic interaction and photon conversion vertices. For the forward rapidity region, the material is probed by a measurement of the efficiency with which single tracks reconstructed from pixel detector hits alone can be extended with hits on the track in the strip layers. The results of these studies have been taken into account in an improved description of the material in the ATLAS inner detector simulation, resulting in a reduction in the uncertainties associated with the charged-particle reconstruction efficiency determined from simulation.

  12. Toward Gleasonian landscape ecology: From communities to species, from patches to pixels

    Treesearch

    Samuel A. Cushman; Jeffrey S. Evans; Kevin McGarigal; Joseph M. Kiesecker

    2010-01-01

    The fusion of individualistic community ecology with the Hutchinsonian niche concept enabled a broad integration of ecological theory, spanning all the way from the niche characteristics of individual species, to the composition, structure, and dynamics of ecological communities. Landscape ecology has been variously described as the study of the structure, function,...

  13. Two cloud-based cues for estimating scene structure and camera calibration.

    PubMed

    Jacobs, Nathan; Abrams, Austin; Pless, Robert

    2013-10-01

    We describe algorithms that use cloud shadows as a form of stochastically structured light to support 3D scene geometry estimation. Taking video captured from a static outdoor camera as input, we use the relationship of the time series of intensity values between pairs of pixels as the primary input to our algorithms. We describe two cues that relate the 3D distance between a pair of points to the pair of intensity time series. The first cue results from the fact that two pixels that are nearby in the world are more likely to be under a cloud at the same time than two distant points. We describe methods for using this cue to estimate focal length and scene structure. The second cue is based on the motion of cloud shadows across the scene; this cue results in a set of linear constraints on scene structure. These constraints have an inherent ambiguity, which we show how to overcome by combining the cloud motion cue with the spatial cue. We evaluate our method on several time lapses of real outdoor scenes.
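
    As a rough illustration of the spatial cue described above, the sketch below (with hypothetical array names and synthetic data) treats each pixel's intensity time series as a vector and uses their correlation as a proxy for how likely two scene points are to be under the same cloud at the same time, and hence for their 3D proximity.

```python
import numpy as np

# Hypothetical input: intensity time series for each pixel of a static camera,
# shaped (num_frames, height, width).
rng = np.random.default_rng(0)
video = rng.random((500, 32, 32))

def temporal_correlation(video, p, q):
    """Correlation between the intensity time series of pixels p and q.

    Under the spatial cue, pixels that are close in the world tend to be
    shadowed by the same cloud at the same time, so their correlation is high.
    """
    a = video[:, p[0], p[1]]
    b = video[:, q[0], q[1]]
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Higher correlation -> smaller inferred 3D distance between the scene points.
print(temporal_correlation(video, (3, 4), (3, 5)))    # neighbouring pixels
print(temporal_correlation(video, (3, 4), (30, 31)))  # distant pixels
```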

  14. Defect analysis and detection of micro nano structured optical thin film

    NASA Astrophysics Data System (ADS)

    Xu, Chang; Shi, Nuo; Zhou, Lang; Shi, Qinfeng; Yang, Yang; Li, Zhuo

    2017-10-01

    This paper focuses on developing an automated method for detecting defects on our wavelength conversion thin film. We analyze the operating principle of our wavelength conversion micro/nano thin film, which absorbs visible light and emits infrared radiation, describe the relationship between the pixel pattern and the radiation of the thin film, and establish the principle for defining blind pixels and their categories based on the calculated and experimental results. An effective method for automated detection based on wavelet transform and template matching is presented. The results reveal that this method achieves the desired accuracy and processing speed.

  15. Antenna-coupled Superconducting Bolometers for Observations of the Cosmic Microwave Background Polarization

    NASA Astrophysics Data System (ADS)

    Myers, Michael James

    We describe the development of a novel millimeter-wave cryogenic detector. The device integrates a planar antenna, superconducting transmission line, bandpass filter, and bolometer onto a single silicon wafer. The bolometer uses a superconducting Transition-Edge Sensor (TES) thermistor, which provides substantial advantages over conventional semiconductor bolometers. The detector chip is fabricated using standard micro-fabrication techniques. This highly-integrated detector architecture is particularly well-suited for use in the development of polarization-sensitive cryogenic receivers with thousands of pixels. Such receivers are needed to meet the sensitivity requirements of next-generation cosmic microwave background polarization experiments. The design, fabrication, and testing of prototype array pixels are described. Preliminary considerations for a full array design are also discussed. A set of on-chip millimeter-wave test structures was developed to help understand the performance of our millimeter-wave microstrip circuits. These test structures produce a calibrated transmission measurement for an arbitrary two-port circuit using optical techniques, rather than a network analyzer. Some results of fabricated test structures are presented.

  16. Pixel-wise deblurring imaging system based on active vision for structural health monitoring at a speed of 100 km/h

    NASA Astrophysics Data System (ADS)

    Hayakawa, Tomohiko; Moko, Yushi; Morishita, Kenta; Ishikawa, Masatoshi

    2018-04-01

    In this paper, we propose a pixel-wise deblurring imaging (PDI) system based on active vision for compensation of the blur caused by high-speed one-dimensional motion between a camera and a target. The optical axis is controlled by back-and-forth motion of a galvanometer mirror to compensate for the motion. The high-spatial-resolution images captured by our system during high-speed motion are useful for efficient and precise visual inspection, such as visually judging abnormal parts of a tunnel surface to prevent accidents; hence, we applied the PDI system to structural health monitoring. By mounting the system onto a vehicle in a tunnel, we confirmed significant improvement in image quality for submillimeter black-and-white stripes and real tunnel-surface cracks at a speed of 100 km/h.

  17. Ionizing radiation effects on CMOS imagers manufactured in deep submicron process

    NASA Astrophysics Data System (ADS)

    Goiffon, Vincent; Magnan, Pierre; Bernard, Frédéric; Rolland, Guy; Saint-Pé, Olivier; Huger, Nicolas; Corbière, Franck

    2008-02-01

    We present here a study on both CMOS sensors and elementary structures (photodiodes and in-pixel MOSFETs) manufactured in a deep submicron process dedicated to imaging. We designed a test chip made of one 128×128-3T-pixel array with 10 μm pitch and more than 120 isolated test structures including photodiodes and MOSFETs with various implants and different sizes. All these devices were exposed to ionizing radiation up to 100 krad and their responses were correlated to identify the CMOS sensor weaknesses. Characterizations in darkness and under illumination demonstrated that dark current increase is the major sensor degradation. Shallow trench isolation was identified to be responsible for this degradation as it increases the number of generation centers in photodiode depletion regions. Consequences on hardness assurance and hardening-by-design are discussed.

  18. Fast and low-cost structured light pattern sequence projection.

    PubMed

    Wissmann, Patrick; Forster, Frank; Schmitt, Robert

    2011-11-21

    We present a high-speed and low-cost approach for structured light pattern sequence projection. Using a fast rotating binary spatial light modulator, our method is potentially capable of projection frequencies in the kHz domain, while enabling pattern rasterization as low as 2 μm pixel size and inherently linear grayscale reproduction quantized at 12 bits/pixel or better. Due to the circular arrangement of the projected fringe patterns, we extend the widely used ray-plane triangulation method to ray-cone triangulation and provide a detailed description of the optical calibration procedure. Using the proposed projection concept in conjunction with the recently published coded phase shift (CPS) pattern sequence, we demonstrate high accuracy 3-D measurement at 200 Hz projection frequency and 20 Hz 3-D reconstruction rate. © 2011 Optical Society of America

  19. Regional shape-based feature space for segmenting biomedical images using neural networks

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Gopal; Hoford, John D.; Hoffman, Eric A.

    1993-07-01

    In biomedical images, structures of interest, particularly soft tissue structures such as the heart, airways, and bronchial and arterial trees, often have gray-scale and textural characteristics similar to other structures in the image, making it difficult to segment them using only gray-scale and texture information. However, these objects can be visually recognized by their unique shapes and sizes. In this paper we discuss, what we believe to be, a novel, simple scheme for extracting features based on regional shapes. To test the effectiveness of these features for image segmentation (classification), we use an artificial neural network and a statistical cluster analysis technique. The proposed shape-based feature extraction algorithm computes regional shape vectors (RSVs) for all pixels that meet a certain threshold criterion. The distance from each such pixel to a boundary is computed in 8 directions (or in 26 directions for a 3-D image). Together, these 8 (or 26) values represent the pixel's (or voxel's) RSV. All RSVs from an image are used to train a multi-layer perceptron neural network, which uses these features to 'learn' a suitable classification strategy. To clearly distinguish the desired object from other objects within an image, several examples from inside and outside the desired object are used for training. Several examples are presented to illustrate the strengths and weaknesses of our algorithm. Both synthetic and actual biomedical images are considered. Future extensions to this algorithm are also discussed.
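
    A minimal sketch of the regional shape vector (RSV) idea for a 2-D binary mask follows: for each above-threshold pixel, walk in 8 directions until the object boundary is reached and record the 8 distances. The function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

# 8 directions: E, NE, N, NW, W, SW, S, SE (row, col steps).
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def regional_shape_vector(mask, row, col):
    """Distances from (row, col) to the object boundary in 8 directions.

    `mask` is a boolean array where True marks pixels above the threshold.
    """
    assert mask[row, col], "RSVs are only defined for foreground pixels"
    rsv = []
    for dr, dc in DIRECTIONS:
        r, c, steps = row, col, 0
        # Walk until we would leave the object or the image.
        while (0 <= r + dr < mask.shape[0] and 0 <= c + dc < mask.shape[1]
               and mask[r + dr, c + dc]):
            r, c, steps = r + dr, c + dc, steps + 1
        rsv.append(steps)
    return np.array(rsv)

# Tiny example: a 5x7 rectangle of foreground pixels.
mask = np.zeros((7, 9), dtype=bool)
mask[1:6, 1:8] = True
print(regional_shape_vector(mask, 3, 4))  # -> [3 2 2 2 3 2 2 2]
```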

  20. Structure-Preserving Smoothing of Biomedical Images

    NASA Astrophysics Data System (ADS)

    Gil, Debora; Hernàndez-Sabaté, Aura; Burnat, Mireia; Jansen, Steven; Martínez-Villalta, Jordi

    Smoothing of biomedical images should preserve gray-level transitions between adjacent tissues, while restoring contours consistent with anatomical structures. Anisotropic diffusion operators are based on image appearance discontinuities (either local or contextual) and might fail at weak inter-tissue transitions. Meanwhile, the output of block-wise and morphological operations is prone to present a block structure due to the shape and size of the considered pixel neighborhood.

  1. An Adaptive Spectrally Weighted Structure Tensor Applied to Tensor Anisotropic Nonlinear Diffusion for Hyperspectral Images

    ERIC Educational Resources Information Center

    Marin Quintero, Maider J.

    2013-01-01

    The structure tensor for vector valued images is most often defined as the average of the scalar structure tensors in each band. The problem with this definition is the assumption that all bands provide the same amount of edge information giving them the same weights. As a result non-edge pixels can be reinforced and edges can be weakened…

  2. DETECTION OF SMALL-SCALE GRANULAR STRUCTURES IN THE QUIET SUN WITH THE NEW SOLAR TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abramenko, V. I.; Yurchyshyn, V. B.; Goode, P. R.

    2012-09-10

    Results of a statistical analysis of solar granulation are presented. A data set of 36 images of a quiet-Sun area on the solar disk center was used. The data were obtained with the 1.6 m clear aperture New Solar Telescope at Big Bear Solar Observatory and with a broadband filter centered at the TiO (705.7 nm) spectral line. The very high spatial resolution of the data (diffraction limit of 77 km and pixel scale of 0.''0375) augmented by the very high image contrast (15.5% ± 0.6%) allowed us to detect for the first time a distinct subpopulation of mini-granular structures. These structures are dominant on spatial scales below 600 km. Their size is distributed as a power law with an index of -1.8 (which is close to Kolmogorov's -5/3 law) and no predominant scale. The regular granules display a Gaussian (normal) size distribution with a mean diameter of 1050 km. Mini-granular structures contribute significantly to the total granular area. They are predominantly confined to the wide dark lanes between regular granules and often form chains and clusters, but different from magnetic bright points. A multi-fractality test reveals that the structures smaller than 600 km represent a multi-fractal, whereas on larger scales the granulation pattern shows no multi-fractality and can be considered as a Gaussian random field. The origin, properties, and role of the population of mini-granular structures in the solar magnetoconvection are yet to be explored.

  3. Satellite image maps of Pakistan

    USGS Publications Warehouse

    ,

    1997-01-01

    Georeferenced Landsat satellite image maps of Pakistan are now being made available for purchase from the U.S. Geological Survey (USGS). The first maps to be released are a series of Multi-Spectral Scanner (MSS) color image maps compiled from Landsat scenes taken before 1979. The Pakistan image maps were originally developed by USGS as an aid for geologic and general terrain mapping in support of the Coal Resource Exploration and Development Program in Pakistan (COALREAP). COALREAP, a cooperative program between the USGS, the United States Agency for International Development, and the Geological Survey of Pakistan, was in effect from 1985 through 1994. The Pakistan MSS image maps (bands 1, 2, and 4) are available as a full-country mosaic of 72 Landsat scenes at a scale of 1:2,000,000, and in 7 regional sheets covering various portions of the entire country at a scale of 1:500,000. The scenes used to compile the maps were selected from imagery available at the EROS Data Center (EDC), Sioux Falls, S. Dak. Where possible, preference was given to cloud-free and snow-free scenes that displayed similar stages of seasonal vegetation development. The data for the MSS scenes were resampled from the original 80-meter resolution to 50-meter picture elements (pixels) and digitally transformed to a geometrically corrected Lambert conformal conic projection. The cubic convolution algorithm was used during rotation and resampling. The 50-meter pixel size allows for such data to be imaged at a scale of 1:250,000 without degradation; for cost and convenience considerations, however, the maps were printed at 1:500,000 scale. The seven regional sheets have been named according to the main province or area covered. The 50-meter data were averaged to 150-meter pixels to generate the country image on a single sheet at 1:2,000,000 scale.

  4. QWIP from 4μm up to 18μm

    NASA Astrophysics Data System (ADS)

    Costard, Eric; Truffer, Jean P.; Huet, Odile; Dua, Lydie; Nedelcu, Alexandre; Robo, J. A.; Marcadet, Xavier; Brèire de l'Isle, Nadia; Bois, Philippe

    2006-09-01

    Standard GaAs/AlGaAs Quantum Well Infrared Photodetectors (QWIP) are considered as a technological choice for 3rd generation thermal imagers [1], [2]. Since 2001, the THALES Group has been manufacturing sensitive arrays using GaAs-based QWIP technology at the THALES Research and Technology Laboratory. This QWIP technology allows the realization of large staring arrays for Thermal Imagers (TI) working in the infrared region of the spectrum. The main advantage of this GaAs detector technology is that it is also used for other commercial devices. The GaAs industry has made important improvements over the last ten years and has now reached an undeniable level of maturity. As a result, the key parameters for high production yield, namely large substrates and good uniformity characteristics, have already been achieved. Considering defective pixels, the usual figures are a high operability (> 99.9%) and a low number of clusters having a maximum of 4 dead pixels. Another advantage of this III-V technology is the versatility of the design and processing phases. It allows customizing both the quantum structure and the pixel architecture in order to fulfill the requirements of any specific application. The spectral response of QWIPs is intrinsically resonant, but the quantum structure can be designed for a given detection wavelength window ranging from the MWIR and LWIR to the VLWIR.

  5. Direct imaging detectors for electron microscopy

    NASA Astrophysics Data System (ADS)

    Faruqi, A. R.; McMullan, G.

    2018-01-01

    Electronic detectors used for imaging in electron microscopy are reviewed in this paper. Much of the detector technology is based on the developments in microelectronics, which have allowed the design of direct detectors with fine pixels, fast readout and which are sufficiently radiation hard for practical use. Detectors included in this review are hybrid pixel detectors, monolithic active pixel sensors based on CMOS technology and pnCCDs, which share one important feature: they are all direct imaging detectors, relying on directly converting energy in a semiconductor. Traditional methods of recording images in the electron microscope, such as film and CCDs, are mentioned briefly along with a more detailed description of direct electronic detectors. Many applications benefit from the use of direct electron detectors and a few examples are mentioned in the text. In recent years one of the most dramatic advances in structural biology has been in the deployment of the new backthinned CMOS direct detectors to attain near-atomic resolution molecular structures with electron cryo-microscopy (cryo-EM). The development of direct detectors, along with a number of other parallel advances, has seen a very significant amount of new information being recorded in the images, which was not previously possible, and this forms the main emphasis of the review.

  6. Non-Destructive Study of Bulk Crystallinity and Elemental Composition of Natural Gold Single Crystal Samples by Energy-Resolved Neutron Imaging

    PubMed Central

    Tremsin, Anton S.; Rakovan, John; Shinohara, Takenao; Kockelmann, Winfried; Losko, Adrian S.; Vogel, Sven C.

    2017-01-01

    Energy-resolved neutron imaging enables non-destructive analyses of bulk structure and elemental composition, which can be resolved with high spatial resolution at bright pulsed spallation neutron sources due to recent developments and improvements of neutron counting detectors. This technique, suitable for many applications, is demonstrated here with a specific study of ~5–10 mm thick natural gold samples. Through the analysis of neutron absorption resonances the spatial distribution of palladium (with average elemental concentration of ~0.4 atom% and ~5 atom%) is mapped within the gold samples. At the same time, the analysis of coherent neutron scattering in the thermal and cold energy regimes reveals which samples have a single-crystalline bulk structure through the entire sample volume. A spatially resolved analysis is possible because neutron transmission spectra are measured simultaneously on each detector pixel in the epithermal, thermal and cold energy ranges. With a pixel size of 55 μm and a detector-area of 512 by 512 pixels, a total of 262,144 neutron transmission spectra are measured concurrently. The results of our experiments indicate that high resolution energy-resolved neutron imaging is a very attractive analytical technique in cases where other conventional non-destructive methods are ineffective due to sample opacity. PMID:28102285

  7. Precision tracking with a single gaseous pixel detector

    NASA Astrophysics Data System (ADS)

    Tsigaridas, S.; van Bakel, N.; Bilevych, Y.; Gromov, V.; Hartjes, F.; Hessey, N. P.; de Jong, P.; Kluit, R.

    2015-09-01

    The importance of micro-pattern gaseous detectors has grown over the past few years after successful usage in a large number of applications in physics experiments and medicine. We develop gaseous pixel detectors using micromegas-based amplification structures on top of CMOS pixel readout chips. Using wafer post-processing we add a spark-protection layer and a grid to create an amplification region above the chip, allowing individual electrons released above the grid by the passage of ionising radiation to be recorded. The electron creation point is measured in 3D, using the pixel position for (x, y) and the drift time for z. The track can be reconstructed by fitting a straight line to these points. In this work we have used a pixel readout chip which is a small-scale prototype of the Timepix3 chip (designed for both silicon and gaseous detection media). This prototype chip has several advantages over the existing Timepix chip, including a faster front-end (pre-amplifier and discriminator) and a faster TDC, which reduce timewalk's contribution to the z position error. Although the chip is very small (sensitive area of 0.88 × 0.88 mm2), we have built it into a detector with a short drift gap (1.3 mm) and measured its tracking performance in an electron beam at DESY. We present the results obtained, which show a significant improvement in resolution with respect to Timepix-based detectors.
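
    As a rough illustration of the final reconstruction step only, the straight-line track fit to the 3D electron positions can be expressed as two independent least-squares fits, x(z) and y(z), over the drift coordinate. The data below are synthetic; this is not the detector's actual reconstruction code.

```python
import numpy as np

# Synthetic 3D points: (x, y) from the pixel grid, z from the drift time.
rng = np.random.default_rng(1)
z = np.linspace(0.0, 1.3, 40)                      # 1.3 mm drift gap
x = 0.2 * z + 0.1 + rng.normal(0, 0.01, z.size)    # noisy straight track
y = -0.5 * z + 0.4 + rng.normal(0, 0.01, z.size)

# Fit x(z) and y(z) separately; together they define the straight track.
ax, bx = np.polyfit(z, x, 1)
ay, by = np.polyfit(z, y, 1)
print(f"track: x = {ax:.3f} z + {bx:.3f}, y = {ay:.3f} z + {by:.3f}")
```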

  8. Recent developments in OLED-based chemical and biological sensors

    NASA Astrophysics Data System (ADS)

    Shinar, Joseph; Zhou, Zhaoqun; Cai, Yuankun; Shinar, Ruth

    2007-09-01

    Recent developments in the structurally integrated OLED-based platform of luminescent chemical and biological sensors are reviewed. In this platform, an array of OLED pixels, which is structurally integrated with the sensing elements, is used as the photoluminescence (PL) excitation source. The structural integration is achieved by fabricating the OLED array and the sensing element on opposite sides of a common glass substrate or on two glass substrates that are attached back-to-back. As it does not require optical fibers, lenses, or mirrors, it results in a uniquely simple, low-cost, and potentially rugged geometry. The recent developments on this platform include the following: (1) Enhancing the performance of gas-phase and dissolved oxygen sensors. This is achieved by (a) incorporating high-dielectric TiO2 nanoparticles in the oxygen-sensitive Pt and Pd octaethylporphyrin (PtOEP and PdOEP, respectively)-doped polystyrene (PS) sensor films, and (b) embedding the oxygen-sensitive dyes in a matrix of polymer blends such as PS:polydimethylsiloxane (PDMS). (2) Developing sensor arrays for simultaneous detection of multiple serum analytes, including oxygen, glucose, lactate, and alcohol. The sensing element for each analyte consists of a PtOEP-doped PS oxygen sensor, and a solution containing the oxidase enzyme specific to the analyte. Each sensing element is coupled to two individually addressable OLED pixels and a Si photodiode photodetector (PD). (3) Enhancing the integration of the platform, whereby a PD array is also structurally integrated with the OLED array and sensing elements. This enhanced integration is achieved by fabricating an array of amorphous or nanocrystalline Si-based PDs, followed by fabrication of the OLED pixels in the gaps between these Si PDs.

  9. Supervised classification of brain tissues through local multi-scale texture analysis by coupling DIR and FLAIR MR sequences

    NASA Astrophysics Data System (ADS)

    Poletti, Enea; Veronese, Elisa; Calabrese, Massimiliano; Bertoldo, Alessandra; Grisan, Enrico

    2012-02-01

    The automatic segmentation of brain tissues in magnetic resonance (MR) is usually performed on T1-weighted images, due to their high spatial resolution. The T1w sequence, however, has some major downsides when brain lesions are present: the altered appearance of diseased tissues causes errors in tissue classification. In order to overcome these drawbacks, we employed two different MR sequences: fluid attenuated inversion recovery (FLAIR) and double inversion recovery (DIR). The former highlights both gray matter (GM) and white matter (WM), the latter highlights GM alone. We propose here a supervised classification scheme that does not require any anatomical a priori information to identify the 3 classes, "GM", "WM", and "background". Features are extracted by means of a local multi-scale texture analysis, computed for each pixel of the DIR and FLAIR sequences. The 9 textures considered are average, standard deviation, kurtosis, entropy, contrast, correlation, energy, homogeneity, and skewness, evaluated on neighborhoods of 3×3, 5×5, and 7×7 pixels. Hence, the total number of features associated with a pixel is 56 (9 textures × 3 scales × 2 sequences + 2 original pixel values). The classifier employed is a Support Vector Machine with Radial Basis Function as kernel. From each of the 4 brain volumes evaluated, a DIR and a FLAIR slice have been selected and manually segmented by 2 expert neurologists, providing 1st and 2nd human reference observations which agree with an average accuracy of 99.03%. SVM performances have been assessed with a 4-fold cross-validation, yielding an average classification accuracy of 98.79%.

  10. An intermediate significant bit (ISB) watermarking technique using neural networks.

    PubMed

    Zeki, Akram; Abubakar, Adamu; Chiroma, Haruna

    2016-01-01

    Prior research studies have shown that the peak signal to noise ratio (PSNR) is the most frequent watermarked image quality metric used for determining the levels of strength and weakness of watermarking algorithms. Conversely, normalised cross correlation (NCC) is the most common metric used after attacks were applied to a watermarked image to verify the strength of the algorithm used. Many researchers have used these approaches to evaluate their algorithms. These strategies have been used for a long time; however, this unfortunately limits the value of PSNR and NCC in reflecting the strength and weakness of watermarking algorithms. This paper considers this issue to determine the threshold values of these two parameters in reflecting the amount of strength and weakness of the watermarking algorithms. We used our novel watermarking technique for embedding four watermarks in intermediate significant bits (ISB) of six image files one by one, replacing the image pixels with new pixels while keeping the new pixels very close to the original pixels. This approach gains improved robustness based on the PSNR and NCC values that were gathered. A neural network model was built that uses the image quality metric (PSNR and NCC) values obtained from the watermarking of six grey-scale images that use ISB as the desired output and that are trained for each watermarked image's PSNR and NCC. The neural network predicts the watermarked image's PSNR together with NCC after attacks when a portion of the output of the same or different types of image quality metrics (PSNR and NCC) is obtained. The results indicate that the NCC metric fluctuates before the PSNR values deteriorate.
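
    Because the discussion hinges on PSNR and NCC, a small sketch of one common way to compute both metrics is given below; the 8-bit peak value, the mean-subtracted form of NCC, and the toy images are assumptions, not the authors' exact definitions.

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized grey images."""
    mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ncc(original, extracted):
    """One common definition of normalised cross-correlation (mean-subtracted)."""
    a = original.astype(np.float64).ravel()
    b = extracted.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example: a watermarked copy that differs slightly from the original.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64))
marked = np.clip(img + rng.integers(-2, 3, img.shape), 0, 255)
print(psnr(img, marked), ncc(img, marked))
```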

  11. Frequency-domain cascading microwave superconducting quantum interference device multiplexers; beyond limitations originating from room-temperature electronics

    NASA Astrophysics Data System (ADS)

    Kohjiro, Satoshi; Hirayama, Fuminori

    2018-07-01

    A novel approach, frequency-domain cascading of microwave multiplexers (MW-Mux), has been proposed and its basic operation has been demonstrated to increase the number of pixels U multiplexed in a readout line of an MW-Mux for superconducting detector arrays. This method is an alternative to the challenging development of wideband, large-power, and spurious-free room-temperature (300 K) electronics. The readout system for U pixels consists of four main parts: (1) multiplexer chips connected in series that contain U superconducting resonators in total; (2) a cryogenic high-electron-mobility transistor amplifier (HEMT); (3) a 300 K microwave frequency comb generator based on N (≡ U/M) parallel units of digital-to-analog converters (DAC); and (4) N parallel units of 300 K analog-to-digital converters (ADC). Here, M is the number of tones each DAC produces and each ADC handles. The output signal of the U detectors multiplexed at the cryogenic stage is transmitted through a cable to room temperature and divided among N processors, each handling M pixels. Due to the reduction factor of 1/N, U is no longer limited by the 300 K electronics but can be increased up to the potential value determined by either the bandwidth or the spurious-free power of the HEMT. Based on experimental results on the prototype system with N = 2 and M = 3, neither excess inter-pixel crosstalk nor excess noise has been observed in comparison with a conventional MW-Mux. This indicates that the frequency-domain cascading MW-Mux provides full (100%) usage of the HEMT band by assigning N 300 K bands on the frequency axis without inter-band gaps.

  12. Autonomous Visual Navigation of an Indoor Environment Using a Parsimonious, Insect Inspired Familiarity Algorithm

    PubMed Central

    Brayfield, Brad P.

    2016-01-01

    The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects’ brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path’s end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery. PMID:27119720
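
    A minimal sketch of the scene-familiarity step, under simplifying assumptions (panoramic views as 2-D arrays, rotation as a column roll, sum of absolute pixel differences as the familiarity cost): the agent scans candidate headings and keeps the one whose current view best matches any stored training snapshot.

```python
import numpy as np

def most_familiar_heading(current_view, stored_views, num_headings=72):
    """Return the column shift (heading) that minimises scene unfamiliarity.

    `current_view` and each stored view are panoramic images of identical
    shape; rotating the agent corresponds to rolling the image columns.
    """
    width = current_view.shape[1]
    best_shift, best_cost = 0, np.inf
    for shift in np.linspace(0, width, num_headings, endpoint=False).astype(int):
        rotated = np.roll(current_view, shift, axis=1)
        # Familiarity = smallest pixel-by-pixel difference to any stored snapshot.
        cost = min(np.abs(rotated - v).sum() for v in stored_views)
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift, best_cost

# Toy usage with random panoramas standing in for the grid of snapshots.
rng = np.random.default_rng(3)
stored = [rng.random((20, 90)) for _ in range(5)]
current = np.roll(stored[2], 15, axis=1)   # the agent has turned by 15 columns
print(most_familiar_heading(current, stored))
```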

  13. Pixelized Measurement of (99m)Tc-HDP Micro Particles Formed in Gamma Correction Phantom Pinhole Scan: a Reference Study.

    PubMed

    Jung, Joo-Young; Cheon, Gi Jeong; Lee, Yun-Sang; Ha, Seunggyun; Chae, Mi-Hye; Chung, Yong-An; Yoon, Do Kyun; Bahk, Yong-Whee

    2016-09-01

    Currently, traumatic bone diseases are diagnosed by assessing the micro (99m)Tc-hydroxymethylene diphosphonate (HDP) uptake in injured trabeculae with ongoing osteoneogenesis demonstrated by gamma correction pinhole scan (GCPS). However, mathematical size quantification of micro-uptake is not yet available. We designed and performed this phantom-based study to set up an in-vitro model for the mathematical calculation of micro-uptake by pixelized measurement. The micro (99m)Tc-HDP deposits used in this study were spontaneously formed both in a large standard flood phantom and in a small house-made dish phantom. The processing was as follows: first, phantoms were flooded with distilled water and (99m)Tc-HDP was injected therein to induce micro (99m)Tc-HDP deposition; second, the deposits were scanned using parallel-hole and pinhole collimators to generally survey the (99m)Tc-HDP deposition pattern; and third, the scans underwent gamma correction (GC) to discern individual deposits for size measurement. In the original naïve scans, tracer distribution was simply nebulous in appearance and, hence, could not be measured. Impressively, however, GCPS could discern individual micro deposits so that they could be calculated by pixelized measurement. Phantoms naturally formed micro (99m)Tc-HDP deposits that are analogous to (99m)Tc-HDP uptake on in-vivo bone scans. The smallest one we measured was 0.414 mm. Flooded phantoms and the (99m)Tc-HDP injected therein form nebulous micro (99m)Tc-HDP deposits that are rendered discernible by GCPS and precisely calculable using pixelized measurement. This method can be used for precise quantitative and qualitative diagnosis of bone and joint diseases at the trabecular level.

  14. A Regional View of the Libya Montes

    NASA Technical Reports Server (NTRS)

    2000-01-01

    [figure removed for brevity, see original site]

    The Libya Montes are a ring of mountains up-lifted by the giant impact that created the Isidis basin to the north. During 1999, this region became one of the top two that were being considered for the now-canceled Mars Surveyor 2001 Lander. The Isidis basin is very, very ancient. Thus, the mountains that form its rims would contain some of the oldest rocks available at the Martian surface, and a landing in this region might potentially provide information about conditions on early Mars. In May 1999, the wide angle cameras of the Mars Global Surveyor Mars Orbiter Camera system were used in what was called the 'Geodesy Campaign' to obtain nearly global maps of the planet in color and in stereo at resolutions of 240 m/pixel (787 ft/pixel) for the red camera and 480 m/pixel (1575 ft/pixel) for the blue. Shown here are color and stereo views constructed from mosaics of the Geodesy Campaign images for the Libya Montes region of Mars. After they formed by giant impact, the Libya Mountains and valleys were subsequently modified and eroded by other processes, including wind, impact cratering, and flow of liquid water to make the many small valleys that can be seen running northward in the scene. The pictures shown here cover nearly 122,000 square kilometers (47,000 square miles) between latitudes 0.1°N and 4.0°N, longitudes 271.5°W and 279.9°W. The mosaics are about 518 km (322 mi) wide by 235 km (146 mi) high. Red-blue '3-D' glasses are needed to view the stereo image.

  15. Digital shaded relief image of a carbonate platform (northern Great Bahama Bank): Scenery seen and unseen

    NASA Astrophysics Data System (ADS)

    Boss, Stephen K.

    1996-11-01

    A mosaic image of the northern Great Bahama Bank was created from separate gray-scale Landsat images using photo-editing and image analysis software that is commercially available for desktop computers. Measurements of pixel gray levels (relative scale from 0 to 255 referred to as digital number, DN) on the mosaic image were compared to bank-top bathymetry (determined from a network of single-channel, high-resolution seismic profiles), bottom type (coarse sand, sandy mud, barren rock, or reef determined from seismic profiles and diver observations), and vegetative cover (presence and/or absence and relative density of the marine angiosperm Thalassia testudinum determined from diver observations). Results of these analyses indicate that bank-top bathymetry is a primary control on observed pixel DN, bottom type is a secondary control on pixel DN, and vegetative cover is a tertiary influence on pixel DN. Consequently, processing of the gray-scale Landsat mosaic with a directional gradient edge-detection filter generated a physiographic shaded relief image resembling bank-top bathymetric patterns related to submerged physiographic features across the platform. The visibility of submerged karst landforms, Pleistocene eolianite ridges, islands, and possible paleo-drainage patterns created during sea-level lowstands is significantly enhanced on processed images relative to the original mosaic. Bank-margin ooid shoals, platform interior sand bodies, reef edifices, and bidirectional sand waves are features resulting from Holocene carbonate deposition that are also more clearly visible on the new physiographic images. Combined with observational data (single-channel, high-resolution seismic profiles, bottom observations by SCUBA divers, sediment and rock cores) across the northern Great Bahama Bank, these physiographic images facilitate comprehension of areal relations among antecedent platform topography, physical processes, and ensuing depositional patterns during sea-level rise.
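
    The directional gradient edge-detection step can be illustrated with a standard Sobel derivative (scipy's kernel here stands in for the authors' exact filter); the toy mosaic below is only a placeholder for the Landsat gray-level mosaic.

```python
import numpy as np
from scipy import ndimage

def shaded_relief(gray, azimuth_axis=1):
    """Directional gradient 'edge' image that mimics shaded relief.

    `gray` is a 2-D array of pixel digital numbers (DN); differentiating
    along one axis highlights physiographic/bathymetric edges.
    """
    grad = ndimage.sobel(gray.astype(np.float64), axis=azimuth_axis)
    # Rescale to 0-255 for display.
    grad = grad - grad.min()
    return (255 * grad / grad.max()).astype(np.uint8)

# Toy mosaic standing in for the gray-scale Landsat mosaic.
rng = np.random.default_rng(4)
mosaic = rng.integers(0, 256, (128, 128)).astype(np.uint8)
relief = shaded_relief(mosaic)
print(relief.shape, relief.dtype)
```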

  16. GDP Spatialization and Economic Differences in South China Based on NPP-VIIRS Nighttime Light Imagery

    NASA Astrophysics Data System (ADS)

    Zhao, M.

    2017-12-01

    Accurate data on gross domestic product (GDP) at pixel level are needed to understand the dynamics of regional economies. GDP spatialization is the basis of quantitative analysis on economic diversities of different administrative divisions and areas with different natural or humanistic attributes. Data from the Visible Infrared Imaging Radiometer Suite (VIIRS), carried by the Suomi National Polar-orbiting Partnership (NPP) satellite, are capable of estimating GDP, but few studies have been conducted for mapping GDP at pixel level and further pattern analysis of economic differences in different regions using the VIIRS data. This paper produced a pixel-level (500 m × 500 m) GDP map for South China in 2014 and quantitatively analyzed economic differences among diverse geomorphological types. Based on a regression analysis, the total nighttime light (TNL) of corrected VIIRS data were found to exhibit R2 values of 0.8935 and 0.9243 for prefecture GDP and county GDP, respectively. This demonstrated that TNL showed a more significant capability in reflecting economic status (R2 > 0.88) than other nighttime light indices (R2 < 0.52), and showed quadratic polynomial relationships with GDP rather than simple linear correlations at both prefecture and county levels. The corrected NPP-VIIRS data showed a better fit than the original data, and the estimation at the county level was better than at the prefecture level. The pixel-level GDP map indicated that: (a) economic development in coastal areas was higher than that in inland areas; (b) low altitude plains were the most developed areas, followed by low altitude platforms and low altitude hills; and (c) economic development in middle altitude areas, and low altitude hills and mountains remained to be strengthened.
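
    The quadratic relationship reported between total nighttime light (TNL) and GDP can be illustrated with a simple degree-2 polynomial fit and an R2 check; the numbers below are synthetic placeholders, not the paper's data.

```python
import numpy as np

# Synthetic county-level data: total nighttime light (TNL) and GDP.
rng = np.random.default_rng(5)
tnl = rng.uniform(1e3, 1e5, 200)
gdp = 2e-6 * tnl**2 + 0.5 * tnl + rng.normal(0, 5e3, tnl.size)

# Quadratic polynomial fit GDP = a*TNL^2 + b*TNL + c, plus an R^2 check.
a, b, c = np.polyfit(tnl, gdp, 2)
pred = a * tnl**2 + b * tnl + c
r2 = 1 - np.sum((gdp - pred) ** 2) / np.sum((gdp - gdp.mean()) ** 2)
print(f"GDP = {a:.2e}*TNL^2 + {b:.2f}*TNL + {c:.1f}, R2 = {r2:.4f}")
```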

  17. High-resolution photography of clouds from the surface: Retrieval of optical depth of thin clouds down to centimeter scales: High-Resolution Photography of Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz, Stephen E.; Huang, Dong; Vladutescu, Daniela Viviana

    This article describes the approach and presents initial results, for a period of several minutes in north central Oklahoma, of an examination of clouds by high resolution digital photography from the surface looking vertically upward. A commercially available camera having 35-mm equivalent focal length up to 1200 mm (nominal resolution as fine as 6 µrad, which corresponds to 9 mm for cloud height 1.5 km) is used to obtain a measure of zenith radiance of a 30 m × 30 m domain as a two-dimensional image consisting of 3456 × 3456 pixels (12 million pixels). Downwelling zenith radiance varies substantially within single images and between successive images obtained at 4-s intervals. Variation in zenith radiance found on scales down to about 10 cm is attributed to variation in cloud optical depth (COD). Attention here is directed primarily to optically thin clouds, COD less than about 2. A radiation transfer model used to relate downwelling zenith radiance to COD and to relate the counts in the camera image to zenith radiance, permits determination of COD on a pixel-by-pixel basis. COD for thin clouds determined in this way exhibits considerable variation, for example, an order of magnitude within 15 m, a factor of 2 within 4 m, and 25% (0.12 to 0.15) over 14 cm. In conclusion, this approach, which examines cloud structure on scales 3 to 5 orders of magnitude finer than satellite products, opens new avenues for examination of cloud structure and evolution.

  18. High-resolution photography of clouds from the surface: Retrieval of optical depth of thin clouds down to centimeter scales: High-Resolution Photography of Clouds

    DOE PAGES

    Schwartz, Stephen E.; Huang, Dong; Vladutescu, Daniela Viviana

    2017-03-08

    This article describes the approach and presents initial results, for a period of several minutes in north central Oklahoma, of an examination of clouds by high resolution digital photography from the surface looking vertically upward. A commercially available camera having 35-mm equivalent focal length up to 1200 mm (nominal resolution as fine as 6 µrad, which corresponds to 9 mm for cloud height 1.5 km) is used to obtain a measure of zenith radiance of a 30 m × 30 m domain as a two-dimensional image consisting of 3456 × 3456 pixels (12 million pixels). Downwelling zenith radiance varies substantially within single images and between successive images obtained at 4-s intervals. Variation in zenith radiance found on scales down to about 10 cm is attributed to variation in cloud optical depth (COD). Attention here is directed primarily to optically thin clouds, COD less than about 2. A radiation transfer model used to relate downwelling zenith radiance to COD and to relate the counts in the camera image to zenith radiance, permits determination of COD on a pixel-by-pixel basis. COD for thin clouds determined in this way exhibits considerable variation, for example, an order of magnitude within 15 m, a factor of 2 within 4 m, and 25% (0.12 to 0.15) over 14 cm. In conclusion, this approach, which examines cloud structure on scales 3 to 5 orders of magnitude finer than satellite products, opens new avenues for examination of cloud structure and evolution.

  19. A novel radiation hard pixel design for space applications

    NASA Astrophysics Data System (ADS)

    Aurora, A. M.; Marochkin, V. V.; Tuuva, T.

    2017-11-01

    We have developed a novel radiation hard photon detector concept based on the Modified Internal Gate Field Effect Transistor (MIGFET), wherein a buried Modified Internal Gate (MIG) is implanted underneath the channel of a FET. In between the MIG and the channel of the FET there is depleted semiconductor material forming a potential barrier between charges in the channel and similar-type signal charges located in the MIG. The signal charges in the MIG have a measurable effect on the conductance of the channel. In this paper a radiation hard double MIGFET pixel comprising two MIGFETs is investigated. By transferring the signal charges between the two MIGs, Non-Destructive Correlated Double Sampling Readout (NDCDSR) is enabled. The radiation hardness of the proposed double MIGFET structure stems from the fact that interface-related issues can be considerably mitigated. The reason for this is, first of all, that interface-generated dark noise can be completely avoided and, secondly, that interface-generated 1/f noise can be considerably reduced due to a deep buried channel readout configuration. Electrical parameters of the double MIGFET pixel have been evaluated by a 3D TCAD simulation study. Simulation results show the absence of interface-generated dark noise, significantly reduced interface-generated 1/f noise, well-performing NDCDSR operation, and blooming protection due to an inherent vertical anti-blooming structure. In addition, the backside-illuminated, thick, fully depleted pixel design results in low crosstalk due to lack of diffusion and good quantum efficiency from visible to Near Infra-Red (NIR) light. These facts result in an excellent Signal-to-Noise Ratio (SNR) and very low crosstalk, enabling excellent image quality. The simulation demonstrates the charge-to-current conversion gain for source current read-out to be 1.4 nA/e.

  20. Using false colors to protect visual privacy of sensitive content

    NASA Astrophysics Data System (ADS)

    Ćiftçi, Serdar; Korshunov, Pavel; Akyüz, Ahmet O.; Ebrahimi, Touradj

    2015-03-01

    Many privacy protection tools have been proposed for preserving privacy. Tools for protection of visual privacy available today lack either all or some of the important properties that are expected from such tools. Therefore, in this paper, we propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image into a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require any prior face detection or other sensitive region detection and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure as the look-up tables can be encrypted, reversible as table look-ups can be inverted, flexible as it is independent of format or encoding, adjustable as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation, less distracting as it does not create visually unpleasant artifacts, and selective as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, one of which the proposed method relies on, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online-based study) assessments using faces from the FERET public dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while compression functions add little to the strength of privacy protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
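
    A compact sketch of the false-color idea follows, assuming a gamma-like compressive point transform, an arbitrary 256-entry RGB look-up table, and linear interpolation with the original for adjustability; none of these specific choices are claimed to be the ones used in the paper.

```python
import numpy as np

def false_color(gray, lut, alpha=0.0):
    """Map an 8-bit grey image through a compressive transform and a colour LUT.

    alpha in [0, 1] interpolates the false-colour result with the original
    (0 = fully protected, 1 = fully revealed), mirroring the adjustability property.
    """
    # Compressive point transform (a simple gamma curve, assumed here).
    compressed = (255 * (gray / 255.0) ** 0.4).astype(np.uint8)
    colored = lut[compressed]                            # (H, W, 3) false-colour image
    original_rgb = np.repeat(gray[..., None], 3, axis=2)
    return (alpha * original_rgb + (1 - alpha) * colored).astype(np.uint8)

# A hypothetical 256-entry RGB look-up table (each channel is a permutation
# of the grey levels, so the mapping can be inverted if the table is known).
levels = np.arange(256)
lut = np.stack([255 - levels, (levels * 37) % 256, (levels * 101) % 256],
               axis=1).astype(np.uint8)

rng = np.random.default_rng(6)
face = rng.integers(0, 256, (64, 64)).astype(np.uint8)
print(false_color(face, lut, alpha=0.25).shape)          # (64, 64, 3)
```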

  1. Orientation selectivity based structure for texture classification

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Lin, Weisi; Shi, Guangming; Zhang, Yazhong; Lu, Liu

    2014-10-01

    Local structure, e.g., the local binary pattern (LBP), is widely used in texture classification. However, LBP is too sensitive to disturbance. In this paper, we introduce a novel structure for texture classification. Research in cognitive neuroscience indicates that the primary visual cortex presents remarkable orientation selectivity for visual information extraction. Inspired by this, we investigate the orientation similarities among neighboring pixels, and propose an orientation selectivity based pattern for local structure description. Experimental results on texture classification demonstrate that the proposed structure descriptor is quite robust to disturbance.
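
    A rough sketch in the spirit of the proposed descriptor (not the authors' exact pattern): compute gradient orientations, then encode for each pixel which of its 8 neighbours has a similar orientation, yielding an 8-bit local code whose histogram can serve as a texture feature.

```python
import numpy as np
from scipy import ndimage

def orientation_pattern(gray, angle_tol=np.pi / 8):
    """8-bit local pattern: bit k is set when neighbour k's gradient
    orientation lies within `angle_tol` of the centre pixel's orientation."""
    gray = gray.astype(np.float64)
    gy = ndimage.sobel(gray, axis=0)
    gx = ndimage.sobel(gray, axis=1)
    theta = np.arctan2(gy, gx)

    pattern = np.zeros(gray.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dr, dc) in enumerate(offsets):
        neighbour = np.roll(np.roll(theta, -dr, axis=0), -dc, axis=1)
        # Wrap the angular difference into [0, pi] before thresholding.
        diff = np.abs(np.angle(np.exp(1j * (theta - neighbour))))
        pattern |= (diff < angle_tol).astype(np.uint8) << bit
    return pattern

rng = np.random.default_rng(7)
texture = rng.integers(0, 256, (32, 32)).astype(np.uint8)
hist, _ = np.histogram(orientation_pattern(texture), bins=256, range=(0, 256))
print(hist.sum())  # one 8-bit code per pixel -> 32*32 = 1024
```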

  2. Intelligent Mobile Autonomous System (IMAS).

    DTIC Science & Technology

    1987-01-01

    the "tile" of tesselation at a level (grain, discrete, pixel, or voxel of the space). These terms can be used intermittently , and each of them...search on the original level of traversability space is not fast enough to be considered for actual control application. Alternatives to limit the...0 (b) It must be concise and easy to "compute". In other words there must exist simple, fast procedures for instantiating the "words" or "sentences

  3. Modifications to Improve Data Acquisition and Analysis for Camouflage Design

    DTIC Science & Technology

    1983-01-01

    terrains into facsimiles of the original scenes in 3, 4, or 5 colors in CIELAB notation. Tasks that were addressed included optimization of the...a histogram algorithm (HIST) was used as a first step in the clustering of the CIELAB values of the scene pixels. This algorithm is highly efficient...however, an optimal process and the CIELAB coordinates of the final color domains can be influenced by the color coordinate increments used in the

  4. REBL: design progress toward 16 nm half-pitch maskless projection electron beam lithography

    NASA Astrophysics Data System (ADS)

    McCord, Mark A.; Petric, Paul; Ummethala, Upendra; Carroll, Allen; Kojima, Shinichi; Grella, Luca; Shriyan, Sameet; Rettner, Charles T.; Bevis, Chris F.

    2012-03-01

    REBL (Reflective Electron Beam Lithography) is a novel concept for high speed maskless projection electron beam lithography. Originally targeting 45 nm HP (half pitch) under a DARPA funded contract, we are now working on optimizing the optics and architecture for the commercial silicon integrated circuit fabrication market at the equivalent of 16 nm HP. The shift to smaller features requires innovation in most major subsystems of the tool, including optics, stage, and metrology. We also require better simulation and understanding of the exposure process. In order to meet blur requirements for 16 nm lithography, we are both shrinking the pixel size and reducing the beam current. Throughput will be maintained by increasing the number of columns as well as other design optimizations. In consequence, the maximum stage speed required to meet wafer throughput targets at 16 nm will be much less than originally planned for at 45 nm. As a result, we are changing the stage architecture from a rotary design to a linear design that can still meet the throughput requirements but with more conventional technology that entails less technical risk. The linear concept also allows for simplifications in the datapath, primarily from being able to reuse pattern data across dies and columns. Finally, we are now able to demonstrate working dynamic pattern generator (DPG) chips, CMOS chips with microfabricated lenslets on top to prevent crosstalk between pixels.

  5. A multi-scale tensor voting approach for small retinal vessel segmentation in high resolution fundus images.

    PubMed

    Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida

    2016-09-01

    Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large to medium diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion detection counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high resolution fundus image database including healthy and diabetic subjects using pixel-based as well as perceptual-based measures. The proposed method achieves 85.06% sensitivity rate, while the original multi-scale line detection method achieves 81.06% sensitivity rate for the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% against the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Region-based multifocus image fusion for the precise acquisition of Pap smear images.

    PubMed

    Tello-Mijares, Santiago; Bescós, Jesús

    2018-05-01

    A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou (Pap smear) source images is presented. These images, captured each in a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of the original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimum modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
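
    A simplified sketch of the region-wise focus selection step is shown below, using variance of the Laplacian as the focus measure over pre-computed label regions; the paper's actual focus metric and its mean-shift segmentation are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def fuse_by_region(stack, labels):
    """Pick, for every labelled region, the pixels from the stack slice
    where that region is sharpest (highest variance of the Laplacian)."""
    fused = np.zeros_like(stack[0], dtype=np.float64)
    for region in np.unique(labels):
        mask = labels == region
        # Focus measure of this region in every slice of the focal stack.
        focus = [ndimage.laplace(img.astype(np.float64))[mask].var() for img in stack]
        fused[mask] = stack[int(np.argmax(focus))][mask]
    return fused

# Toy focal stack (3 shots) and a 2-region segmentation standing in for mean shift.
rng = np.random.default_rng(8)
stack = [rng.integers(0, 256, (64, 64)).astype(np.uint8) for _ in range(3)]
labels = np.zeros((64, 64), dtype=int)
labels[:, 32:] = 1
print(fuse_by_region(stack, labels).shape)
```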

  7. Cloud-Induced Uncertainty for Visual Navigation

    DTIC Science & Technology

    2014-12-26

    images at the pixel level. The result is a method that can overlay clouds with various structures on top of any desired image to produce realistic...cloud-shaped structures. The primary contribution of this research, however, is to investigate and quantify the errors in features due to clouds. The...of cloud types, this method does not emulate the true structure of clouds. An alternative popular modern method of creating synthetic clouds is known

  8. Video Image Tracking Engine

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Bryan, Thomas C. (Inventor); Book, Michael L. (Inventor)

    2004-01-01

    A method and system for processing an image, including capturing an image and storing it as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data, and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
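
    A minimal sketch of the segment encoding described above (not the patented implementation): threshold the image, then for each row keep only the first and last pixel of each above-threshold run, plus the run's pixel sum and x-weighted sum. The function name and output layout are assumptions.

    import numpy as np

    def encode_segments(image, threshold):
        """Return a compact per-row run encoding of the above-threshold pixels."""
        segments = []
        for y, row in enumerate(np.asarray(image)):
            above = row > threshold
            # run boundaries: indices where the above-threshold mask turns on/off
            padded = np.concatenate(([False], above, [False])).astype(int)
            edges = np.flatnonzero(np.diff(padded))
            for start, stop in zip(edges[::2], edges[1::2]):
                values = row[start:stop]
                segments.append({
                    "y": y,
                    "first_x": int(start),
                    "last_x": int(stop - 1),
                    "sum": float(values.sum()),
                    "weighted_sum": float((values * np.arange(start, stop)).sum()),
                })
        return segments   # spot centroids follow from weighted_sum / sum across segments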

  9. New false color mapping for image fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Walraven, Jan

    1996-03-01

    A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
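
    The steps above map directly onto a few array operations. The sketch below follows them literally, with one stated assumption: the abstract does not specify how the common component is computed, so the pixel-wise minimum is used as a simple illustrative choice.

    import numpy as np

    def false_color_fuse(thermal, visual):
        """Both inputs: gray-level images in [0, 1], same shape.
        Returns an (H, W, 3) false-color image (red/green channels, blue left at zero)."""
        a = np.clip(np.asarray(thermal, float), 0, 1)
        b = np.clip(np.asarray(visual, float), 0, 1)
        common = np.minimum(a, b)          # assumed definition of the common component
        unique_a = a - common              # sensor-specific detail of each modality
        unique_b = b - common
        red = np.clip(a - unique_b, 0, 1)  # each modality minus the other's unique part
        green = np.clip(b - unique_a, 0, 1)
        return np.dstack([red, green, np.zeros_like(red)])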

  10. Pixel-based image fusion with false color mapping

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Mao, Shiyi

    2003-06-01

    In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images representing different sensor modalities, or acquired at different frequencies, and produces a fused false-color image. The resulting image has a higher information content than each of the original images, and objects in the fused color image are easier to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two original images; second, compute the generalized high-boost filtering images between the fused gray-level image and the two source images, respectively; third, generate the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image. The fused gray-level image provides better detail than the two original images while reducing noise. However, the fused gray-level image cannot contain all of the detail information present in the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a fused color image is necessary. In order to create color variation and enhance details in the final fused image, we produce three generalized high-boost filtering images, which are displayed through the red, green, and blue channels, respectively, to produce the final fused color image. The method is used to fuse two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details. The resolution of the final false-color image is the same as the resolution of the input images.
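
    The sketch below illustrates only the first step, a hybrid averaging-and-selection rule for the gray-level fusion; the exact rule and the generalized high-boost operator are not fully specified in the abstract, so the local-variance salience measure and the match threshold are assumptions for illustration.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def average_select_fuse(img1, img2, window=7, match_threshold=0.75):
        """Average the sources where they agree locally; otherwise keep the more active one."""
        a, b = np.asarray(img1, float), np.asarray(img2, float)
        # local "salience" (activity) of each source: local variance
        var_a = uniform_filter(a**2, window) - uniform_filter(a, window)**2
        var_b = uniform_filter(b**2, window) - uniform_filter(b, window)**2
        # local match measure between the two sources (normalized covariance)
        cov = uniform_filter(a*b, window) - uniform_filter(a, window)*uniform_filter(b, window)
        match = 2*cov / (var_a + var_b + 1e-12)
        select = np.where(var_a >= var_b, a, b)          # keep the locally more active source
        return np.where(match > match_threshold, 0.5*(a + b), select)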

  11. Research on Image Encryption Based on DNA Sequence and Chaos Theory

    NASA Astrophysics Data System (ADS)

    Tian Zhang, Tian; Yan, Shan Jun; Gu, Cheng Yan; Ren, Ran; Liao, Kai Xin

    2018-04-01

    Encryption is a common technique for protecting image data from unauthorized access. In recent years, many researchers have proposed encryption algorithms based on DNA sequences, providing a new direction for the design of image encryption algorithms. A new image encryption method based on DNA computing is therefore proposed in this paper, in which the original image is encrypted using DNA coding and a 1-D logistic chaotic map. First, the algorithm uses two modules as the encryption key: the first module is derived from a real DNA sequence, and the second is generated by a one-dimensional logistic chaotic map. Second, the algorithm encodes the original image using DNA complementary rules and uses the key together with DNA computing operations to transform each pixel value, thereby encrypting the whole image. Simulation results show that the algorithm provides a good encryption effect and security.
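
    The toy sketch below illustrates the chaotic component of such a scheme: a per-pixel keystream generated by the 1-D logistic map and combined with the image. The paper's second key module (a real DNA sequence) and its DNA complementary-rule coding are only noted in comments, not reproduced, and the parameters shown are illustrative.

    import numpy as np

    def logistic_keystream(length, x0=0.3141592, r=3.99):
        """Iterate x_{n+1} = r * x_n * (1 - x_n) and quantise each state to one byte."""
        x = x0
        out = np.empty(length, dtype=np.uint8)
        for i in range(length):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256
        return out

    def encrypt(image, x0=0.3141592, r=3.99):
        """XOR every pixel with the chaotic keystream; decryption is the same operation.
        In the paper, pixels and key bytes are additionally mapped to DNA bases
        (two bits per base) and combined under DNA complementary rules."""
        flat = np.asarray(image, dtype=np.uint8).ravel()
        key = logistic_keystream(flat.size, x0, r)
        return (flat ^ key).reshape(np.asarray(image).shape)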

  12. EDITORIAL: Micro-pixellated LEDs for science and instrumentation

    NASA Astrophysics Data System (ADS)

    Dawson, Martin D.; Neil, Mark A. A.

    2008-05-01

    This Cluster Issue of Journal of Physics D: Applied Physics highlights micro-pixellated gallium nitride light-emitting diodes, or `micro-LEDs', an emerging technology offering considerable attractions for a broad range of scientific and instrumentation applications. It showcases the results of a Research Councils UK (RCUK) Basic Technology Research programme (http://bt-onethousand.photonics.ac.uk), running from 2004 to 2008, which has drawn together a multi-disciplinary and multi-institutional research partnership to develop these devices and explore their potential. [Figure: Examples of GaN micro-pixel LEDs in operation; images supplied courtesy of the Guest Editors.] The partnership of physicists, engineers and chemists, drawn from the University of Strathclyde, Heriot-Watt University, the University of Sheffield and Imperial College London, has sought to move beyond the established mass-market uses of gallium nitride LEDs in illumination and lighting. Instead, it focuses on specialised solid-state micro-projection devices the size of a match-head, containing up to several thousand individually-addressable micro-pixel elements emitting light in the ultraviolet or visible regions of the spectrum. Such sources are pattern-programmable under computer control and can project fixed or high-frame-rate optical images, or spatially controllable patterns of nanosecond excitation pulses, into materials. These materials can be as diverse as biological cells and tissues, biopolymers, photoresists and organic semiconductors, leading to new developments in optical microscopy, bio-sensing and chemical sensing, mask-free lithography and direct writing, and organic electronics. Particular areas of interest are multi-modal microscopy, integrated forms of organic semiconductor lasers, lab-on-a-chip, GaN/Si optoelectronics and hybrid inorganic/organic semiconductor structures. This Cluster Issue contains four invited papers and ten contributed papers. The invited papers serve to set the work in an international context. Fan et al, who introduced the original forms of these devices in 2000, give a historical perspective as well as illustrating some recent trends in their work. Xu et al, another of the main international groups in this area, concentrate on biological imaging and detection applications. One of the most exciting prospects for this technology is its compatibility with CMOS, and Charbon reviews recent results with single-photon detection arrays which, in conjunction with the micro-LEDs, facilitate integrated optical lab-on-chip devices. Belton et al, from within the project partnership, give an overview of the hybrid inorganic/organic semiconductor structures achieved by combining gallium nitride optoelectronics with organic semiconductor materials. The contributed papers cover many other aspects related to the devices themselves and their integration with polymers and CMOS, and also cover several associated developments such as UV-emitting nitride materials, new polymers, and the broader use of LEDs in microscopy. [Figure: Emission patterns generated at the end of a multicore image fibre 600 μm in diameter, from article 094013 by H Xu et al of Brown University.] We would like to thank Paul French for suggesting this special issue, the staff of IOP Publishing for their help and support, Dr Caroline Vance for her administration of the programme, and EPSRC (particularly Dr Lindsey Weston) for organizational and financial support.

  13. Limit characteristics of digital optoelectronic processor

    NASA Astrophysics Data System (ADS)

    Kolobrodov, V. G.; Tymchik, G. S.; Kolobrodov, M. S.

    2018-01-01

    In this article, the limiting characteristics of a digital optoelectronic processor are explored. The limits are set by diffraction effects and by the matrix structure of the devices used for the input and output of optical signals. The purpose of the present research is to optimize the parameters of the processor's components. The developed physical and mathematical model of the DOEP made it possible to establish the limiting characteristics of the processor, restricted by diffraction effects and by the array structure of the input and output equipment, and to optimize the parameters of the processor's components. The diameter of the entrance pupil of the Fourier lens is determined by the size of the SLM and the pixel size of the modulator. To determine the spectral resolution, it is proposed to use the concept of an optimum phase, at which the resolved diffraction maxima coincide with the pixel centers of the radiation detector.
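
    As a back-of-the-envelope illustration of the pupil-size statement above, the sketch below applies the standard Fourier-optics argument that an SLM with pixel pitch p diffracts its signal band into a half-angle of about arcsin(λ/2p), so the Fourier-lens pupil must cover the SLM aperture plus the resulting spread over one focal length. The numerical values are assumptions, not the parameters used in the article.

    import math

    wavelength = 633e-9      # He-Ne laser wavelength, m (assumed)
    pixel_pitch = 8e-6       # SLM pixel pitch, m (assumed)
    d_slm = 15.36e-3         # SLM width, m (assumed: 1920 px * 8 um)
    focal_length = 300e-3    # Fourier-lens focal length, m (assumed)

    theta = math.asin(wavelength / (2 * pixel_pitch))        # half-angle of the signal band
    pupil = d_slm + 2 * focal_length * math.tan(theta)       # required entrance-pupil diameter
    print(f"diffraction half-angle: {math.degrees(theta):.2f} deg, pupil >= {pupil*1e3:.1f} mm")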

  14. Moiré-reduction method for slanted-lenticular-based quasi-three-dimensional displays

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhenfeng; Surman, Phil; Zhang, Lei; Rawat, Rahul; Wang, Shizheng; Zheng, Yuanjin; Sun, Xiao Wei

    2016-12-01

    In this paper we present a method for determining the preferred slant angle of a lenticular film that minimizes moiré patterns in quasi-three-dimensional (Q3D) displays. We evaluate the preferred slant angles of the lenticular film for a stripe-type sub-pixel-structure liquid crystal display (LCD) panel. Additionally, a sub-pixel mapping algorithm for the chosen angle is proposed to assign the images to either the right- or left-eye channel. A Q3D display prototype is built. Compared with a conventional slanted lenticular film (SLF), the newly implemented Q3D display not only eliminates moiré patterns but also provides 3D images in both portrait and landscape orientations. It is demonstrated that the developed SLF provides satisfactory 3D images with a compact structure, minimal moiré patterns and stabilized 3D contrast.

  15. Multiscale vector fields for image pattern recognition

    NASA Technical Reports Server (NTRS)

    Low, Kah-Chan; Coggins, James M.

    1990-01-01

    A uniform processing framework for low-level vision computing is proposed in which a bank of spatial filters maps the image intensity structure at each pixel into an abstract feature space. Some properties of the filters and the feature space are described. Local orientation is measured by a vector sum in the feature space as follows: each filter's preferred orientation, along with the strength of the filter's output, determines the orientation and length of a vector in the feature space; the vectors for all filters are summed to yield a resultant vector for a particular pixel and scale. The orientation of the resultant vector indicates the local orientation, and its magnitude indicates the strength of the local orientation preference. Limitations of the vector sum method are discussed. Investigations show that the processing framework provides a useful, redundant representation of image structure across orientation and scale.
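
    The sketch below implements the vector-sum rule as stated, using a small bank of oriented Gabor filters as an assumed filter bank (the article does not prescribe these particular filters or parameters). A comment notes the ambiguity that motivates the limitations discussed by the authors.

    import numpy as np
    from scipy.ndimage import convolve

    def gabor_kernel(theta, sigma=3.0, wavelength=8.0, size=21):
        """Real-valued Gabor kernel tuned to orientation `theta` (radians)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
        k = envelope * np.cos(2 * np.pi * xr / wavelength)
        return k - k.mean()                              # zero-mean, so flat regions give 0

    def local_orientation(image, thetas=np.deg2rad([0, 45, 90, 135])):
        """Vector sum over the filter bank: each filter contributes a vector whose
        angle is its preferred orientation and whose length is its response strength."""
        vx = np.zeros(np.shape(image), dtype=float)
        vy = np.zeros(np.shape(image), dtype=float)
        for theta in thetas:
            strength = np.abs(convolve(np.asarray(image, float), gabor_kernel(theta)))
            vx += strength * np.cos(theta)
            vy += strength * np.sin(theta)
        orientation = np.arctan2(vy, vx)      # dominant local orientation per pixel
        confidence = np.hypot(vx, vy)         # magnitude = strength of orientation preference
        # Note: a plain vector sum is ambiguous when orthogonal filters respond equally;
        # doubling the angles before summing is a common remedy.
        return orientation, confidence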

  16. Study of the properties of new SPM detectors

    NASA Astrophysics Data System (ADS)

    Stewart, A. G.; Greene-O'Sullivan, E.; Herbert, D. J.; Saveliev, V.; Quinlan, F.; Wall, L.; Hughes, P. J.; Mathewson, A.; Jackson, J. C.

    2006-02-01

    The operation and performance of multi-pixel, Geiger-mode APD structures referred to as Silicon Photomultipliers (SPMs) are reported. The SPM is a solid-state device that has emerged over the last decade as a promising alternative to vacuum PMTs, owing to its comparable performance together with lower bias operation and power consumption, insensitivity to magnetic fields and ambient light, smaller size and ruggedness. Applications for these detectors are numerous and include life sciences, nuclear medicine, particle physics, microscopy and general instrumentation. With SPM devices, many geometrical and device parameters can be adjusted to optimize performance for a particular application. In this paper, Monte Carlo simulations and experimental results for 1 mm2 SPM structures are reported. In addition, the trade-offs involved in optimizing the SPM in terms of the number and size of pixels for a given light intensity, and their effect on the dynamic range, are discussed.

  17. Laser deposition and direct-writing of thermoelectric misfit cobaltite thin films

    NASA Astrophysics Data System (ADS)

    Chen, Jikun; Palla-Papavlu, Alexandra; Li, Yulong; Chen, Lidong; Shi, Xun; Döbeli, Max; Stender, Dieter; Populoh, Sascha; Xie, Wenjie; Weidenkaff, Anke; Schneider, Christof W.; Wokaun, Alexander; Lippert, Thomas

    2014-06-01

    A two-step process, combining pulsed laser deposition of calcium cobaltite thin films with a subsequent laser-induced forward transfer of micro-pixels, is demonstrated as a direct-writing approach for micro-scale thin-film structures with potential applications in thermoelectric micro-devices. To achieve the desired thermoelectric properties of the cobaltite thin film, the laser-induced plasma properties were characterized using plasma mass spectrometry, establishing a direct correlation with the corresponding film composition and structure. The introduction of a platinum sacrificial layer when growing the oxide thin film enables a damage-free laser transfer of calcium cobaltite, preserving the film composition and crystallinity as well as the shape integrity of the as-transferred pixels. The demonstrated direct-writing approach simplifies the fabrication of micro-devices and provides a large degree of flexibility in designing and fabricating fully functional thermoelectric micro-devices.

  18. Non-Invasive Survey of Old Paintings Using Vnir Hyperspectral Sensor

    NASA Astrophysics Data System (ADS)

    Matouskova, E.; Pavelka, K.; Svadlenkova, Z.

    2013-07-01

    Hyperspectral imaging is a relatively new method developed primarily for military applications, such as the detection of possible chemical weapons, and as an efficient aid for geological surveys. The method is based on recording a spectral profile over many hundreds of narrow spectral bands. The technique gives the full spectral curve of each explored pixel, which is a distinctive signature of the pixel's material. Spectral signatures can then be compared with pre-defined spectral libraries, or application-specific libraries can be created. A new project named "New Modern Methods of Non-invasive Survey of Historical Site Objects" started at CTU in Prague at the beginning of the year. The project is designed for 4 years and is funded by the Ministry of Culture of the Czech Republic. It is focused on material and chemical composition, damage diagnostics, condition description of paintings, images, construction components and whole-structure object analysis in the cultural heritage domain. This paper shows the first results of the project in the field of painting documentation, as well as the instrument used. A Hyperspec VNIR by Headwall Photonics, operating in the spectral range between 400 and 1000 nm, was used for this analysis. A comparison with infrared photography is discussed. The goal of this contribution is a non-destructive, in-depth exploration of specific paintings. Two original 17th-century paintings by the Flemish painters Thomas van Apshoven ("On the Road") and David Teniers the Younger ("The Interior of a Mill") were chosen for the first analysis, with the kind permission of the academic painter Mr. M. Martan. Both paintings are oil on wooden panel, a combination chosen because it allows visualization of the underdrawing and is considered the least complicated painting combination for this type of analysis.
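
    As an illustration of how a pixel's spectral curve can be matched against a spectral library, the sketch below uses the spectral angle, a generic comparison metric; it is not necessarily the method used in the project, and the function and library layout are assumptions.

    import numpy as np

    def match_spectrum(pixel_spectrum, library):
        """pixel_spectrum: (B,) reflectance over B bands; library: dict name -> (B,) spectrum.
        Returns the library entry with the smallest spectral angle (in radians)."""
        p = np.asarray(pixel_spectrum, float)
        best_name, best_angle = None, np.inf
        for name, ref in library.items():
            r = np.asarray(ref, float)
            cosang = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r) + 1e-12)
            angle = np.arccos(np.clip(cosang, -1.0, 1.0))   # small angle = similar material
            if angle < best_angle:
                best_name, best_angle = name, angle
        return best_name, best_angle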

  19. High-resolution pulse-counting array detectors for imaging and spectroscopy at ultraviolet wavelengths

    NASA Technical Reports Server (NTRS)

    Timothy, J. Gethyn; Bybee, Richard L.

    1986-01-01

    The performance characteristics of multianode microchannel array (MAMA) detector systems which have formats as large as 256 x 1024 pixels and which have application to imaging and spectroscopy at UV wavelengths are evaluated. Sealed and open-structure MAMA detector tubes with opaque CsI photocathodes can determine the arrival time of the detected photon to an accuracy of 100 ns or better. Very large format MAMA detectors with CsI and Cs2Te photocathodes and active areas of 52 x 52 mm (2048 x 2048 pixels) will be used as the UV solar blind detectors for the NASA STIS.

  20. Spectral X-Ray Diffraction using a 6 Megapixel Photon Counting Array Detector.

    PubMed

    Muir, Ryan D; Pogranichniy, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J

    2015-03-12

    Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to separate dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.
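
    The sketch below shows one way such a dual-energy separation can be posed once a per-pixel calibration is available: a 2x2 response matrix per pixel relating incident monochromatic photons to counts in two readout channels, inverted pixel by pixel. The data layout and the two-channel model are assumptions for illustration, not the published pipeline.

    import numpy as np

    def unmix_dual_energy(counts, response):
        """
        counts:   (2, H, W) measured counts in two threshold/readout channels per pixel.
        response: (H, W, 2, 2) per-pixel calibration matrices, response[y, x][i, j] =
                  expected counts in channel i per incident photon of energy j.
        returns:  (2, H, W) estimated monochromatic photon counts at the two energies.
        """
        measured = counts.transpose(1, 2, 0)[..., None]       # (H, W, 2, 1)
        estimated = np.linalg.solve(response, measured)       # per-pixel 2x2 solve
        return np.clip(estimated[..., 0].transpose(2, 0, 1), 0, None)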
