Science.gov

Sample records for pixel space convolution

  1. FAST PIXEL SPACE CONVOLUTION FOR COSMIC MICROWAVE BACKGROUND SURVEYS WITH ASYMMETRIC BEAMS AND COMPLEX SCAN STRATEGIES: FEBeCoP

    SciTech Connect

    Mitra, S.; Rocha, G.; Gorski, K. M.; Lawrence, C. R.; Huffenberger, K. M.; Eriksen, H. K.; Ashdown, M. A. J. E-mail: graca@caltech.edu E-mail: Charles.R.Lawrence@jpl.nasa.gov E-mail: h.k.k.eriksen@astro.uio.no

    2011-03-15

    Precise measurement of the angular power spectrum of the cosmic microwave background (CMB) temperature and polarization anisotropy can tightly constrain many cosmological models and parameters. However, accurate measurements can only be realized in practice provided all major systematic effects have been taken into account. Beam asymmetry, coupled with the scan strategy, is a major source of systematic error in scanning CMB experiments such as Planck, the focus of our current interest. We envision Monte Carlo methods to rigorously study and account for the systematic effect of beams in CMB analysis. Toward that goal, we have developed a fast pixel space convolution method that can simulate sky maps observed by a scanning instrument, taking into account real beam shapes and scan strategy. The essence is to pre-compute the 'effective beams' using a computer code, 'Fast Effective Beam Convolution in Pixel space' (FEBeCoP), that we have developed for the Planck mission. The code computes effective beams given the focal plane beam characteristics of the Planck instrument and the full history of actual satellite pointing, and performs very fast convolution of sky signals using the effective beams. In this paper, we describe the algorithm and the computational scheme that has been implemented. We also outline a few applications of the effective beams in the precision analysis of Planck data, for characterizing the CMB anisotropy and for detecting and measuring properties of point sources.
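
    The essence of the method is that each observed map pixel is a weighted sum of the true sky over a small neighborhood, with weights given by the precomputed effective beam for that pixel. The sketch below is a minimal illustration of this pixel-space convolution, not the FEBeCoP code itself; the toy ring geometry, pixel count and weights are assumptions made purely for illustration.

    ```python
    # Minimal sketch: convolve a sky vector with precomputed per-pixel "effective beams",
    # each stored as a pair (neighbor pixel indices, weights). Not the FEBeCoP implementation.
    import numpy as np

    def effective_beam_convolve(sky, eff_beams):
        """sky: 1-D array of true sky values, one per pixel.
        eff_beams: one (indices, weights) pair per pixel; weights sum to 1."""
        observed = np.empty_like(sky)
        for p, (idx, w) in enumerate(eff_beams):
            observed[p] = np.dot(w, sky[idx])   # local weighted average = pixel-space convolution
        return observed

    # Toy example: 12 pixels on a ring, each effective beam couples a pixel to its two neighbors.
    npix = 12
    sky = np.random.randn(npix)
    eff_beams = [(np.array([(p - 1) % npix, p, (p + 1) % npix]), np.array([0.25, 0.5, 0.25]))
                 for p in range(npix)]
    print(effective_beam_convolve(sky, eff_beams))
    ```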

  2. Active pixel array devices in space missions

    NASA Astrophysics Data System (ADS)

    Hopkinson, G. R.; Purll, D. J.; Abbey, A. F.; Short, A.; Watson, D. J.; Wells, A.

    2003-11-01

    The X-ray Telescope for NASA's Swift mission incorporates a Telescope Alignment Monitor (TAM) to measure thermo-elastic misalignments between the telescope and the spacecraft star tracker. An LED in the X-ray focal plane is imaged onto a position-sensitive detector via two paths, directly and after reflection from the star tracker alignment cube. The separation of the two spots of light on the detector is determined with sub-pixel accuracy using a centroiding algorithm. The active element of the TAM is a miniature camera supplied by Sira Electro-Optics Ltd, using an Active Pixel Sensor (APS). The camera was based on similar pointing sensors developed on European Space Agency programmes, such as acquisition sensors for optical inter-satellite links and miniaturized star trackers. The paper gives the background to APS-based pointing sensors, describes the Swift TAM system, and presents test results from the instrument development programme.

  3. Fast convolution with free-space Green's functions

    NASA Astrophysics Data System (ADS)

    Vico, Felipe; Greengard, Leslie; Ferrando, Miguel

    2016-10-01

    We introduce a fast algorithm for computing volume potentials - that is, the convolution of a translation invariant, free-space Green's function with a compactly supported source distribution defined on a uniform grid. The algorithm relies on regularizing the Fourier transform of the Green's function by cutting off the interaction in physical space beyond the domain of interest. This permits the straightforward application of trapezoidal quadrature and the standard FFT, with superalgebraic convergence for smooth data. Moreover, the method can be interpreted as employing a Nystrom discretization of the corresponding integral operator, with matrix entries which can be obtained explicitly and rapidly. This is of use in the design of preconditioners or fast direct solvers for a variety of volume integral equations. The method proposed permits the computation of any derivative of the potential, at the cost of an additional FFT.
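
    As a concrete (and much simplified) illustration of the approach, the sketch below convolves a compactly supported source on a uniform grid with a truncated free-space Green's function using FFTs; here the 2-D Laplace kernel is sampled and cut off directly in physical space, whereas the paper regularizes the Fourier transform of the Green's function analytically. Grid size, cutoff radius and source are illustrative assumptions.

    ```python
    # Simplified sketch of a volume potential u = G * f computed by FFT convolution with a
    # Green's function truncated beyond the domain of interest (illustration only).
    import numpy as np
    from scipy.signal import fftconvolve

    n, h = 128, 1.0 / 128                          # uniform grid on the unit square
    c = (np.arange(2 * n) - n) * h                 # kernel coordinates, centered on zero
    X, Y = np.meshgrid(c, c, indexing="ij")
    r = np.hypot(X, Y)
    G = -np.log(np.maximum(r, h)) / (2 * np.pi)    # 2-D Laplace Green's function, mollified at r = 0
    G[r > 1.0] = 0.0                               # cut off the interaction beyond the domain of interest

    f = np.zeros((n, n))
    f[n // 3: 2 * n // 3, n // 3: 2 * n // 3] = 1.0    # compactly supported source

    u = fftconvolve(f, G, mode="same") * h * h     # quadrature-weighted FFT convolution
    print(u.shape, u.max())
    ```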

  4. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
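
    The sketch below illustrates the core idea under stated assumptions: transform a dense, slowly varying blur matrix into an orthonormal wavelet basis, discard the small entries (the lossy "coding" step), and apply the resulting sparse matrix instead of the dense one. It is a simplified stand-in for the paper's matrix source coding, with a hand-rolled Haar transform and an arbitrary threshold; a production version would use fast wavelet transforms and sparse storage.

    ```python
    # Sketch of the compression idea: represent a dense space-varying blur matrix in a Haar
    # wavelet basis, threshold the small entries, and apply the sparser matrix instead.
    import numpy as np

    def haar_matrix(n):
        """Orthonormal Haar transform matrix for n a power of two (recursive construction)."""
        if n == 1:
            return np.array([[1.0]])
        h = haar_matrix(n // 2)
        top = np.kron(h, [1.0, 1.0])                    # scaling (average) rows
        bottom = np.kron(np.eye(n // 2), [1.0, -1.0])   # detail (difference) rows
        return np.vstack([top, bottom]) / np.sqrt(2.0)

    n = 256
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    A = np.exp(-0.5 * ((i - j) / (4.0 + 0.03 * i)) ** 2)   # dense blur with slow spatial variation
    A /= A.sum(axis=1, keepdims=True)

    W = haar_matrix(n)
    B = W @ A @ W.T                                     # the operator in the wavelet domain
    B[np.abs(B) < 1e-3 * np.abs(B).max()] = 0.0         # lossy coding: keep only significant entries
    print("kept fraction of entries:", np.count_nonzero(B) / B.size)

    x = np.random.default_rng(0).standard_normal(n)
    y_exact = A @ x
    y_fast = W.T @ (B @ (W @ x))    # in practice: fast wavelet transforms and a scipy.sparse matrix
    print(np.linalg.norm(y_exact - y_fast) / np.linalg.norm(y_exact))
    ```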

  5. Convolution theorems: partitioning the space of integral transforms

    NASA Astrophysics Data System (ADS)

    Lindsey, Alan R.; Suter, Bruce W.

    1999-03-01

    Investigating a number of different integral transforms uncovers distinct patterns in the type of translation convolution theorems afforded by each. It is shown that transforms based on separable kernels (e.g., Fourier, Laplace and their relatives) have a form of the convolution theorem providing for a transform-domain product of the convolved functions. However, transforms based on kernels not separable in the function and transform variables mandate a convolution theorem of a different type: in the transform domain, the convolution becomes another convolution, of one function with the transform of the other.
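
    For the separable-kernel case the theorem can be checked numerically in a few lines: the DFT of a circular convolution equals the product of the DFTs. The sketch below is a self-contained verification, not code from the paper.

    ```python
    # Numerical check of the separable-kernel convolution theorem for the DFT:
    # DFT(f ⊛ g) = DFT(f) · DFT(g), where ⊛ denotes circular convolution.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    f, g = rng.standard_normal(N), rng.standard_normal(N)

    via_theorem = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
    direct = np.array([sum(f[m] * g[(k - m) % N] for m in range(N)) for k in range(N)])

    print(np.allclose(via_theorem, direct))   # True
    ```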

  6. Classification of Urban Aerial Data Based on Pixel Labelling with Deep Convolutional Neural Networks and Logistic Regression

    NASA Astrophysics Data System (ADS)

    Yao, W.; Poleswki, P.; Krzystek, P.

    2016-06-01

    The recent success of deep convolutional neural networks (CNN) on a large number of applications can be attributed to large amounts of available training data and increasing computing power. In this paper, a semantic pixel labelling scheme for urban areas using multi-resolution CNN and hand-crafted spatial-spectral features of airborne remotely sensed data is presented. Both CNN and hand-crafted features are applied to image/DSM patches to produce per-pixel class probabilities with an L1-norm regularized logistic regression classifier. Evidence theory infers a degree of belief for pixel labelling from the different sources to smooth regions, handling the conflicts present in both classifiers while reducing the uncertainty. The aerial data used in this study were provided by ISPRS as benchmark datasets for 2D semantic labelling tasks in urban areas, and consist of two data sources: LiDAR and a color infrared camera. The test sites are parts of a city in Germany, assumed to consist of typical object classes including impervious surfaces, trees, buildings, low vegetation, vehicles and clutter. The evaluation is based on the computation of pixel-based confusion matrices by random sampling. The performance of the strategy with respect to scene characteristics and method combination strategies is analyzed and discussed. The competitive classification accuracy can be explained not only by the nature of the input data sources (e.g., the above-ground height of the nDSM highlights the vertical dimension of houses, trees and even cars, and the near-infrared spectrum indicates vegetation), but also by the decision-level fusion of the CNN's texture-based approach with multichannel spatial-spectral hand-crafted features based on evidence combination theory.

  7. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    PubMed

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
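
    The identity the method rests on can be demonstrated directly: multiplying an image by a spatial phase term is equivalent to convolving its k-space data with the Fourier transform of that term. The sketch below verifies this with a toy phase map and FFT-based circular convolution; it is not the ORACLE calibration procedure itself.

    ```python
    # Check: image-space phase modulation  <=>  k-space convolution with the kernel
    # given by the Fourier transform of the phase term (up to the DFT normalization).
    import numpy as np

    rng = np.random.default_rng(1)
    N = 32
    img = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    phi = 2 * np.pi * 0.1 * np.add.outer(np.arange(N), np.arange(N)) / N   # toy off-resonance phase map

    # Route 1: apply the phase modulation in image space, then transform to k-space.
    k_modulated = np.fft.fft2(img * np.exp(1j * phi))

    # Route 2: circularly convolve the original k-space data with the kernel.
    kspace = np.fft.fft2(img)
    kernel = np.fft.fft2(np.exp(1j * phi))
    k_convolved = np.fft.ifft2(np.fft.fft2(kspace) * np.fft.fft2(kernel)) / (N * N)

    print(np.allclose(k_modulated, k_convolved))   # True
    ```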

  8. convolve_image.pro: Common-Resolution Convolution Kernels for Space- and Ground-Based Telescopes

    NASA Astrophysics Data System (ADS)

    Aniano, Gonzalo J.

    2014-01-01

    The IDL package convolve_image.pro transforms images between different instrumental point spread functions (PSFs). It can load an image file and corresponding kernel and return the convolved image, thus preserving the colors of the astronomical sources. Convolution kernels are available for images from Spitzer (IRAC, MIPS), Herschel (PACS, SPIRE), GALEX (FUV, NUV), WISE (W1-W4), optical PSFs (multi-Gaussian and Moffat functions), and Gaussian PSFs; they allow the study of the Spectral Energy Distribution (SED) of extended objects and preserve the characteristic SED in each pixel.
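
    A rough Python equivalent of the basic operation (the package itself is IDL) is to convolve the image with a PSF-matching kernel normalized to unit sum, so that total flux, and hence the per-pixel SED, is preserved. The Gaussian kernel below is an illustrative assumption, not one of the distributed kernels.

    ```python
    # Sketch of flux-preserving PSF matching: convolve an image with a unit-sum kernel.
    import numpy as np
    from scipy.signal import fftconvolve

    def match_psf(image, kernel):
        kernel = kernel / kernel.sum()                 # unit-sum kernel preserves flux / source colors
        return fftconvolve(image, kernel, mode="same")

    # Toy example: degrade a point source to a broader Gaussian PSF.
    y, x = np.mgrid[-16:17, -16:17]
    kernel = np.exp(-(x**2 + y**2) / (2 * 3.0**2))     # assumed Gaussian matching kernel
    image = np.zeros((128, 128)); image[64, 64] = 100.0
    blurred = match_psf(image, kernel)
    print(image.sum(), blurred.sum())                  # totals agree: flux is preserved
    ```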

  9. A semiconductor radiation imaging pixel detector for space radiation dosimetry.

    PubMed

    Kroupa, Martin; Bahadori, Amir; Campbell-Ricketts, Thomas; Empl, Anton; Hoang, Son Minh; Idarraga-Munoz, John; Rios, Ryan; Semones, Edward; Stoffle, Nicholas; Tlustos, Lukas; Turecek, Daniel; Pinsky, Lawrence

    2015-07-01

    Progress in the development of high-performance semiconductor radiation imaging pixel detectors based on technologies developed for use in high-energy physics applications has enabled the development of a completely new generation of compact low-power active dosimeters and area monitors for use in space radiation environments. Such detectors can provide real-time information concerning radiation exposure, along with detailed analysis of the individual particles incident on the active medium. Recent results from the deployment of detectors based on the Timepix from the CERN-based Medipix2 Collaboration on the International Space Station (ISS) are reviewed, along with a glimpse of developments to come. Preliminary results from Orion MPCV Exploration Flight Test 1 are also presented. Copyright © 2015 The Committee on Space Research (COSPAR). All rights reserved.

  10. Partial fourier reconstruction through data fitting and convolution in k-space.

    PubMed

    Huang, Feng; Lin, Wei; Li, Yu

    2009-11-01

    A partial Fourier acquisition scheme has been widely adopted for fast imaging. There are two problems associated with the existing techniques. First, the majority of the existing techniques demodulate the phase information and cannot provide improved phase information over zero-padding. Second, serious artifacts can be observed in reconstruction when the phase changes rapidly because the low-resolution phase estimate in the image space is prone to error. To tackle these two problems, a novel and robust method is introduced for partial Fourier reconstruction, using k-space convolution. In this method, the phase information is implicitly estimated in k-space through data fitting; the approximated phase information is applied to recover the unacquired k-space data through Hermitian operation and convolution in k-space. In both spin echo and gradient echo imaging experiments, the proposed method consistently produced images with the lowest error level when compared to Cuppen's algorithm, projection onto convex sets-based iterative algorithm, and Homodyne algorithm. Significant improvements are observed in images with rapid phase change. Besides the improvement on magnitude, the phase map of the images reconstructed by the proposed method also has significantly lower error level than conventional methods. (c) 2009 Wiley-Liss, Inc.
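
    The Hermitian-symmetry step that underlies partial-Fourier methods is easy to state: for a (nearly) real object, S(-k) = conj(S(k)), so the unacquired half of k-space can be filled from the conjugate of the acquired half. The sketch below shows only this basic filling step in 1-D; the paper's contribution, estimating the phase by data fitting and applying it through k-space convolution, is omitted.

    ```python
    # Basic Hermitian-symmetry filling of unacquired k-space for a real-valued test object.
    import numpy as np

    N = 64
    obj = np.zeros(N); obj[20:40] = 1.0              # real-valued test object
    kspace = np.fft.fft(obj)

    acquired = kspace.copy()
    acquired[N // 2 + 4:] = 0.0                      # keep roughly half of k-space plus a few extra lines

    filled = acquired.copy()
    for k in range(N // 2 + 4, N):
        filled[k] = np.conj(acquired[(N - k) % N])   # Hermitian partner of the missing sample

    print(np.max(np.abs(np.fft.ifft(filled).real - obj)))   # ~0: exact for a real object
    ```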

  11. Pixel detectors for x-ray imaging spectroscopy in space

    NASA Astrophysics Data System (ADS)

    Treis, J.; Andritschke, R.; Hartmann, R.; Herrmann, S.; Holl, P.; Lauf, T.; Lechner, P.; Lutz, G.; Meidinger, N.; Porro, M.; Richter, R. H.; Schopper, F.; Soltau, H.; Strüder, L.

    2009-03-01

    Pixelated semiconductor detectors for X-ray imaging spectroscopy are foreseen as key components of the payload of various future space missions exploring the X-ray sky. Located on the platform of the new Spectrum-Roentgen-Gamma satellite, the eROSITA (extended Roentgen Survey with an Imaging Telescope Array) instrument will perform an imaging all-sky survey up to an X-ray energy of 10 keV with unprecedented spectral and angular resolution. The instrument will consist of seven parallel-oriented mirror modules, each having its own pnCCD camera in the focus. The satellite-borne X-ray observatory SIMBOL-X will be the first mission to use formation-flying techniques to implement an X-ray telescope with an unprecedented focal length of around 20 m. The detector instrumentation consists of separate high- and low-energy detectors, a monolithic 128 × 128 DEPFET macropixel array and a pixellated CdZnTe detector, respectively, making the energy band between 0.5 and 80 keV accessible. A similar concept is proposed for the next-generation X-ray observatory IXO. Finally, the MIXS (Mercury Imaging X-ray Spectrometer) instrument on the European Mercury exploration mission BepiColombo will use DEPFET macropixel arrays together with a small X-ray telescope to perform a spatially resolved planetary XRF analysis of Mercury's crust. Here, the mission concepts and their scientific targets are briefly discussed, and the resulting requirements on the detector devices together with the implementation strategies are shown.

  12. Thin Film on CMOS Active Pixel Sensor for Space Applications.

    PubMed

    Schulze Spuentrup, Jan Dirk; Burghartz, Joachim N; Graf, Heinz-Gerd; Harendt, Christine; Hutter, Franz; Nicke, Markus; Schmidt, Uwe; Schubert, Markus; Sterzel, Juergen

    2008-10-13

    A 664 x 664 element Active Pixel image Sensor (APS) with integrated analog signal processing, full frame synchronous shutter and random access for applications in star sensors is presented and discussed. A thick vertical diode array in Thin Film on CMOS (TFC) technology is explored to achieve radiation hardness and maximum fill factor.

  13. Compressed convolution

    NASA Astrophysics Data System (ADS)

    Elsner, Franz; Wandelt, Benjamin D.

    2014-01-01

    We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal eigenbasis. The final result can then be decompressed in constant time for each desired convolved output. The method is fully general and suitable for a wide variety of problems. We give explicit examples in the context of simulation challenges for upcoming multi-kilo-detector cosmic microwave background (CMB) missions. For a CMB experiment with detectors with similar beam properties, we demonstrate that the algorithm can decrease the costs of beam convolution by two to three orders of magnitude with negligible loss of accuracy. Likewise, it has the potential to allow the reduction of disk space required to store signal simulations by a similar amount. Applications in other areas of astrophysics and beyond are optimal searches for a large number of templates in noisy data, e.g. from a parametrized family of gravitational wave templates; or calculating convolutions with highly overcomplete wavelet dictionaries, e.g. in methods designed to uncover sparse signal representations.
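
    A minimal version of the idea can be written down directly: stack the kernels, compress them into a few eigen-kernels with an SVD, convolve the data once per eigen-kernel, and reconstruct any individual convolved output as a short linear combination. The kernel family, sizes and truncation rank below are illustrative assumptions.

    ```python
    # Sketch of compressed convolution: many similar kernels -> few eigen-kernels.
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(2)
    n_kernels, ksize = 200, 15
    data = rng.standard_normal((256, 256))

    # A family of similar (non-orthogonal) Gaussian kernels of slowly varying width.
    y, x = np.mgrid[-7:8, -7:8]
    widths = np.linspace(2.0, 3.0, n_kernels)
    kernels = np.stack([np.exp(-(x**2 + y**2) / (2 * w**2)) for w in widths])

    # Linear compression of the kernel collection into a small eigenbasis.
    U, s, Vt = np.linalg.svd(kernels.reshape(n_kernels, -1), full_matrices=False)
    n_eig = 4
    coeffs = U[:, :n_eig] * s[:n_eig]
    eig_kernels = Vt[:n_eig].reshape(n_eig, ksize, ksize)

    # One convolution per eigen-kernel instead of one per original kernel.
    eig_maps = np.stack([fftconvolve(data, ek, mode="same") for ek in eig_kernels])

    # Decompress the result for, say, kernel 123 in constant time.
    approx = np.tensordot(coeffs[123], eig_maps, axes=1)
    exact = fftconvolve(data, kernels[123], mode="same")
    print(np.abs(approx - exact).max() / np.abs(exact).max())   # small relative error
    ```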

  14. Supervised pixel classification using a feature space derived from an artificial visual system

    NASA Technical Reports Server (NTRS)

    Baxter, Lisa C.; Coggins, James M.

    1991-01-01

    Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated, and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not on image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.

  15. Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy

    NASA Astrophysics Data System (ADS)

    Greenbaum, Alon; Luo, Wei; Khademhosseinieh, Bahar; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan

    2013-04-01

    Pixel-size limitation of lensfree on-chip microscopy can be circumvented by utilizing pixel-super-resolution techniques to synthesize a smaller effective pixel, improving the resolution. Here we report that by using the two-dimensional pixel-function of an image sensor-array as an input to lensfree image reconstruction, pixel-super-resolution can improve the numerical aperture of the reconstructed image by ~3 fold compared to a raw lensfree image. This improvement was confirmed using two different sensor-arrays that significantly vary in their pixel-sizes, circuit architectures and digital/optical readout mechanisms, empirically pointing to roughly the same space-bandwidth improvement factor regardless of the sensor-array employed in our set-up. Furthermore, such a pixel-count increase also renders our on-chip microscope into a Giga-pixel imager, where an effective pixel count of ~1.6-2.5 billion can be obtained with different sensors. Finally, using an ultra-violet light-emitting-diode, this platform resolves 225 nm grating lines and can be useful for wide-field on-chip imaging of nano-scale objects, e.g., multi-walled-carbon-nanotubes.

  16. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons which are locally connected with their local neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip of active dimensions 1380 micron x 746 micron in a 2-micron CMOS technology.

  17. Noise and interpixel dead space studies of GaAs pixellated detectors

    NASA Astrophysics Data System (ADS)

    Abate, L.; Bertolucci, E.; Conti, M.; Mettivier, G.; Montesi, M. C.; Russo, P.

    2001-02-01

    In the framework of the development of a digital radiography/autoradiography system using solid state detectors, we studied the performance of GaAs pixellated detectors regarding noise level and the detection behavior of the interpixel space. The detector is a 64×64 pixel array of 200 μm thick GaAs, with 150 μm contact size and 20 μm interpixel space. Studies involve I-V curves, detector behavior under long-period biasing, noise as a function of temperature, and possible detection efficiency loss due to interpixel dead spaces.

  18. HUBBLE SPACE TELESCOPE PIXEL ANALYSIS OF THE INTERACTING S0 GALAXY NGC 5195 (M51B)

    SciTech Connect

    Lee, Joon Hyeop; Kim, Sang Chul; Ree, Chang Hee; Kim, Minjin; Jeong, Hyunjin; Lee, Jong Chul; Kyeong, Jaemann E-mail: sckim@kasi.re.kr E-mail: mkim@kasi.re.kr E-mail: jclee@kasi.re.kr

    2012-08-01

    We report the properties of the interacting S0 galaxy NGC 5195 (M51B), revealed in a pixel analysis using the Hubble Space Telescope/Advanced Camera for Surveys images in the F435W, F555W, and F814W (BVI) bands. We analyze the pixel color-magnitude diagram (pCMD) of NGC 5195, focusing on the properties of its red and blue pixel sequences and the difference from the pCMD of NGC 5194 (M51A; the spiral galaxy interacting with NGC 5195). The red pixel sequence of NGC 5195 is redder than that of NGC 5194, which corresponds to a difference in dust optical depth of 2 < Δτ_V < 4 at fixed age and metallicity. The blue pixel sequence of NGC 5195 is very weak and spatially corresponds to the tidal bridge between the two interacting galaxies. This implies that the blue pixel sequence is not an ordinary feature in the pCMD of an early-type galaxy, but that it is a transient feature of star formation caused by the galaxy-galaxy interaction. We also find a difference in the shapes of the red pixel sequences on the pixel color-color diagrams (pCCDs) of NGC 5194 and NGC 5195. We investigate the spatial distributions of the pCCD-based pixel stellar populations. The young population fraction in the tidal bridge area is larger than that in other areas by a factor >15. Along the tidal bridge, young populations seem to be clumped particularly at the middle point of the bridge. On the other hand, the dusty population shows a relatively wide distribution between the tidal bridge and the center of NGC 5195.

  19. Autonomous Sub-Pixel Satellite Track Endpoint Determination for Space Based Images

    SciTech Connect

    Simms, L M

    2011-03-07

    An algorithm for determining satellite track endpoints with sub-pixel resolution in space-based images is presented. The algorithm allows for significant curvature in the imaged track due to rotation of the spacecraft capturing the image. The motivation behind the sub-pixel endpoint determination is first presented, followed by a description of the methodology used. Results from running the algorithm on real ground-based and simulated space-based images are shown to highlight its effectiveness.

  1. H4RG Near-IR Detectors with 10 micron pixels for WFIRST and Space Astrophysics

    NASA Astrophysics Data System (ADS)

    Kruk, Jeffrey W.; Rauscher, B. J.

    2014-01-01

    Hybrid sensor chip assemblies (SCAs) employing HgCdTe photo-diode arrays integrated with CMOS read-out integrated circuits (ROICs) have become the detector of choice for many cutting-edge ground-based and space-based astronomical instruments operating at near infrared wavelengths. 2Kx2K arrays of 18-micron pixels are in use at many ground-based observatories and will fly on JWST and Euclid later this decade. The Wide-Field Infra-Red Survey Telescope (WFIRST) mission, which will survey large areas of the sky with reasonably-fine sampling, is extending these prior designs by developing 4Kx4K HgCdTe NIR hybrid detectors with 10 micron pixels. These will provide four times as many pixels as the current 2Kx2K detectors in a package that is only slightly larger. Four prototype 4Kx4K devices with conservative pixel designs were produced in 2011; these devices met many though not all WFIRST performance requirements. A Strategic Astrophysics Technology proposal was submitted to further the development of these detectors. This poster describes the technology development plan, progress made in the first year of the program, and plans for the future.

  2. Soccer player recognition by pixel classification in a hybrid color space

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, Nicolas; Macaire, Ludovic; Postaire, Jack-Gerard

    1997-08-01

    Soccer is a very popular sport all over the world. Coaches and sports commentators need accurate information about soccer games, especially about player behavior. This information can be gathered by inspectors who watch the match and manually report the actions of the players involved in the principal phases of the game. Generally, these inspectors focus their attention on the few players standing near the ball and do not report on the motion of all the other players, so it seems desirable to design a system which automatically tracks all the players in real time. We therefore propose to automatically track each player through the successive color images of sequences acquired by a fixed color camera. Each player present in the image is modeled by an active contour model, or snake. When, during the match, a player is hidden by another, the snakes tracking these two players merge, and it becomes impossible to track the players unless the snakes are interactively re-initialized. Fortunately, in most cases the two players do not belong to the same team. We therefore present an algorithm which recognizes the teams of the players by pixel classification. Pixels representing the soccer ground must first be withdrawn before considering the players themselves; to eliminate these pixels, the color characteristics of the ground are determined interactively. In a second step, dealing with windows containing only one player of one team, the color features which yield the best discrimination between the two teams are selected. Thanks to these color features, the pixels associated with the players of the two teams form two separate clusters in a color space. In fact, there are many color representation systems, and it is interesting to evaluate the features which provide the best separation between the two classes of pixels according to the players' soccer suits. Finally, the classification process for image segmentation is based on the three most

  3. Verification of Dosimetry Measurements with Timepix Pixel Detectors for Space Applications

    NASA Technical Reports Server (NTRS)

    Kroupa, M.; Pinsky, L. S.; Idarraga-Munoz, J.; Hoang, S. M.; Semones, E.; Bahadori, A.; Stoffle, N.; Rios, R.; Vykydal, Z.; Jakubek, J.; hide

    2014-01-01

    The current capabilities of modern pixel-detector technology have provided the possibility to design a new generation of compact radiation monitors. Timepix detectors are semiconductor pixel detectors based on a hybrid configuration; as such, the read-out chip can be used with different types and thicknesses of sensors. For space radiation dosimetry applications, Timepix devices with 300 and 500 micron thick silicon sensors have been used by a collaboration between NASA and the University of Houston to explore their performance. For that purpose, an extensive evaluation of the response of Timepix for such applications has been performed. Timepix-based devices were tested in many different environments, both at ground-based accelerator facilities such as HIMAC (Heavy Ion Medical Accelerator in Chiba, Japan) and NSRL (NASA Space Radiation Laboratory at Brookhaven National Laboratory in Upton, NY), and in space on board the International Space Station (ISS). These tests have covered a wide range of particle types and energies, from protons through iron nuclei. The results have been compared both with other devices and with theoretical values. This effort has demonstrated that Timepix-based detectors are exceptionally capable of providing accurate dosimetry measurements in this application, as verified by correspondence with other accepted techniques.

  4. Non-equal spacing CMOS sensor impact on response between even and odd pixels

    NASA Astrophysics Data System (ADS)

    Liu, Cynthia; Chen, Nai-Yu

    2010-10-01

    With the self-developed CMOS imaging sensors in the instrument Focal Plane Assembly (FPA), there is flexibility to trade off the CMOS sensor specifications for systematic study. The criteria considered for the optimization are MTF and SNR, and the CMOS imaging sensor considered has a TDI (time delay integration) feature. Among the specifications, fill factor is a key item. It affects not only the window effect in the static FPA MTF, but also the smearing effect in the dynamic MTF, especially in the satellite along-track direction. Considering different fill factors, mirror-type and non-mirror-type pixel layouts were studied for estimating the system MTF. Another concern, from the image user's point of view, is that a mirror-type pixel layout may cause a different response between even and odd pixels. This work presents analysis results based on the reconstruction of the non-equally spaced signal via the Whittaker-Shannon interpolation formula, and further presents analysis results on the fill factor and the TDI stage number of the CMOS sensor. The results can serve as a reference for the FPA design specification.
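
    For reference, the Whittaker-Shannon formula evaluates a band-limited signal at arbitrary positions from its uniform samples, x(t) = Σ_n x[n] sinc((t - nT)/T), which is the tool used here to construct the non-equally spaced even/odd pixel responses. The sketch below is a generic illustration; the pixel offsets are assumed values, not the studied sensor layout.

    ```python
    # Whittaker-Shannon interpolation of a uniformly sampled, band-limited signal,
    # evaluated at non-equally spaced even/odd pixel positions (offsets are illustrative).
    import numpy as np

    T = 1.0                                            # nominal pixel pitch
    n = np.arange(64)
    samples = np.sin(2 * np.pi * 0.08 * n * T)         # band-limited test signal, uniform samples

    def whittaker_shannon(t, samples, T=1.0):
        k = np.arange(len(samples))
        return np.sum(samples * np.sinc((t - k * T) / T))

    even_pos = np.arange(10, 20, 2) * T + 0.1          # even pixels shifted by +0.1 pitch (assumed)
    odd_pos = np.arange(11, 21, 2) * T - 0.1           # odd pixels shifted by -0.1 pitch (assumed)
    positions = np.sort(np.concatenate([even_pos, odd_pos]))
    print([round(whittaker_shannon(t, samples, T), 4) for t in positions])
    ```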

  5. Carotenoid pixels characterization under color space tests and RGB formulas for mesocarp of mango's fruits cultivars

    NASA Astrophysics Data System (ADS)

    Hammad, Ahmed Yahya; Kassim, Farid Saad Eid Saad

    2010-01-01

    This study examined the pulp (mesocarp) of healthy, ripe fruits of fourteen mango cultivars (Mangifera indica L.) selected after picking, namely Taimour [Ta], Dabsha [Da], Aromanis [Ar], Zebda [Ze], Fagri Kelan [Fa], Alphonse [Al], Bulbek heart [Bu], Hindi-Sinnara [Hi], Compania [Co], Langra [La], Mestikawi [Me], Ewais [Ew], Montakhab El Kanater [Mo] and Mabroka [Ma]. Seven color space tests were used: (RGB: Red, Green and Blue), (CMY: Cyan, Magenta and Yellow), (HSL: Hue, Saturation and Lightness), (CMYK%: Cyan%, Magenta%, Yellow% and Black%), (HSV: Hue, Saturation and Value), (HºSB%: Hueº, Saturation% and Brightness%) and (Lab). In addition, nine color space formulas (sRGB 0÷1, CMY, CMYK, XYZ, CIE-L*ab, CIE-L*CH, CIE-L*uv, Yxy and Hunter-Lab), an (RGB 0÷FF/hex triplet) representation and a Carotenoid Pixels Scale were applied. Digital color photographs were used as the tool to obtain the natural color information for each cultivar, and the results were then interpreted alongside chemical pigment estimations. The study focuses on the visual yellow-to-orange range of the visible electromagnetic spectrum, with wavelengths between ~570 and 620 nm and frequencies between ~480 and 530 THz. The results show that carotene has a very strong influence in the Red band, while chlorophyll (a and b) is much lower; consequently, the values in the Green band are depressed. Meanwhile, the overall percentages of carotenoid pixels in the Red, Green and Blue bands were approximately 50%, 39% and 11%, respectively, whereas the percentages for carotene, chlorophyll a and chlorophyll b were approximately 63%, 22% and 16%. Accordingly, the pigments influence all color space tests and RGB formulas. The Yellow% band in the (CMYK%) color test serves as a signature

  6. HUBBLE SPACE TELESCOPE PIXEL ANALYSIS OF THE INTERACTING FACE-ON SPIRAL GALAXY NGC 5194 (M51A)

    SciTech Connect

    Lee, Joon Hyeop; Kim, Sang Chul; Park, Hong Soo; Ree, Chang Hee; Kyeong, Jaemann; Chung, Jiwon E-mail: sckim@kasi.re.kr E-mail: chr@kasi.re.kr E-mail: jiwon@kasi.re.kr

    2011-10-10

    A pixel analysis is carried out on the interacting face-on spiral galaxy NGC 5194 (M51A), using the Hubble Space Telescope (HST)/Advanced Camera for Surveys (ACS) images in the F435W, F555W, and F814W (BVI) bands. After 4 x 4 binning of the HST/ACS images to secure a sufficient signal-to-noise ratio for each pixel, we derive several quantities describing the pixel color-magnitude diagram (pCMD) of NGC 5194: blue/red color cut, red pixel sequence parameters, blue pixel sequence parameters, and blue-to-red pixel ratio. The red sequence pixels are mostly older than 1 Gyr, while the blue sequence pixels are mostly younger than 1 Gyr, in their luminosity-weighted mean stellar ages. The color variation in the red pixel sequence from V = 20 mag arcsec^-2 to V = 17 mag arcsec^-2 corresponds to a metallicity variation of Δ[Fe/H] ≈ 2 or an optical depth variation of Δτ_V ≈ 4 by dust, but the actual sequence is thought to originate from the combination of those two effects. At V < 20 mag arcsec^-2, the color variation in the blue pixel sequence corresponds to an age variation from 5 Myr to 300 Myr under the assumption of solar metallicity and τ_V = 1. To investigate the spatial distributions of stellar populations, we divide pixel stellar populations using the pixel color-color diagram and population synthesis models. As a result, we find that the pixel population distributions across the spiral arms agree with a compressing process by spiral density waves: dense dust → newly formed stars. The tidal interaction between NGC 5194 and NGC 5195 appears to enhance the star formation at the tidal bridge connecting the two galaxies. We find that the pixels corresponding to the central active galactic nucleus (AGN) area of NGC 5194 show a tight sequence at the bright end of the pCMD, which lie in the region of R ≈ 100 pc and may be a photometric indicator of AGN properties.

  7. From Single Pixels to Many Megapixels: Progress in Astronomical Infrared Imaging from Space-borne Telescopes

    NASA Astrophysics Data System (ADS)

    Pipher, Judith

    2017-01-01

    In the 1960s, rocket infrared astronomy was in its infancy. The Cornell group planned a succession of rocket launches of a small cryogenically cooled telescope above much of the atmosphere. Cornell graduate students were tasked with hand-making single pixel detectors for the focal plane at wavelengths ranging from ~5 microns to just short of 1 mm. “Images” could only be constructed from scans of objects such as HII regions/giant molecular clouds, the galactic center, and of diffuse radiation from the various IR backgrounds. IRAS and COBE, followed by the KAO utilized ever more sensitive single IR detectors, and revolutionized our understanding of the Universe. The first IR arrays came onto the scene in the early 1970s - and in 1983 several experiments for the space mission SIRTF (later named Spitzer Space Telescope following launch 20 years later) were selected, all boasting (relatively small) arrays. Europe’s ISO and Herschel also employed arrays to good advantage, as has SOFIA, and now, many-megapixel IR arrays are sufficiently well-developed for upcoming space missions.

  8. A kilo-pixel imaging system for future space based far-infrared observatories using microwave kinetic inductance detectors

    NASA Astrophysics Data System (ADS)

    Baselmans, J. J. A.; Bueno, J.; Yates, S. J. C.; Yurduseven, O.; Llombart, N.; Karatsu, K.; Baryshev, A. M.; Ferrari, L.; Endo, A.; Thoen, D. J.; de Visser, P. J.; Janssen, R. M. J.; Murugesan, V.; Driessen, E. F. C.; Coiffard, G.; Martin-Pintado, J.; Hargrave, P.; Griffin, M.

    2017-05-01

    Aims: Future astrophysics and cosmic microwave background space missions operating in the far-infrared to millimetre part of the spectrum will require very large arrays of ultra-sensitive detectors in combination with high multiplexing factors and efficient low-noise and low-power readout systems. We have developed a demonstrator system suitable for such applications. Methods: The system combines a 961 pixel imaging array based upon Microwave Kinetic Inductance Detectors (MKIDs) with a readout system capable of reading out all pixels simultaneously with only one readout cable pair and a single cryogenic amplifier. We evaluate, in a representative environment, the system performance in terms of sensitivity, dynamic range, optical efficiency, cosmic ray rejection, pixel-pixel crosstalk and overall yield at an observation centre frequency of 850 GHz and 20% fractional bandwidth. Results: The overall system has an excellent sensitivity, with an average detector sensitivity <NEP_det> = 3×10^-19 W/√Hz measured using a thermal calibration source. At a loading power per pixel of 50 fW we demonstrate white, photon-noise-limited detector noise down to 300 mHz. The dynamic range would allow the detection of 1 Jy bright sources within the field of view without tuning the readout of the detectors. The expected dead time due to cosmic ray interactions, when operated in an L2 or a similar far-Earth orbit, is found to be <4%. Additionally, the achieved pixel yield is 83% and the crosstalk between the pixels is <-30 dB. Conclusions: This demonstrates that MKID technology can provide multiplexing ratios on the order of 1000 with state-of-the-art single pixel performance, and that the technology is now mature enough to be considered for future space based observatories and experiments.

  9. Signal dependence of inter-pixel capacitance in hybridized HgCdTe H2RG arrays for use in James Webb space telescope's NIRcam

    NASA Astrophysics Data System (ADS)

    Donlon, Kevan; Ninkov, Zoran; Baum, Stefi

    2016-08-01

    Interpixel capacitance (IPC) is a deterministic electronic coupling by which signal generated in one pixel is measured in neighboring pixels. Examination of dark frames from test NIRcam arrays corroborates earlier results and simulations illustrating a signal-dependent coupling. When the signal on an individual pixel is larger, the fractional coupling to its nearest neighbors is smaller than when the signal is lower. Frames from test arrays indicate a drop in average coupling from approximately 1.0% at low signals down to approximately 0.65% at high signals, depending on the particular array in question. The photometric ramifications of this non-uniformity are not fully understood. This non-uniformity introduces a non-linearity in the current mathematical model for IPC coupling. IPC coupling has been mathematically formalized as convolution by a blur kernel. Signal dependence requires that the blur kernel be locally defined as a function of signal intensity. Through application of a signal-dependent coupling kernel, the IPC coupling can be modeled computationally. This method allows for simultaneous knowledge of the intrinsic parameters of the image scene, the result of applying a constant IPC, and the result of a signal-dependent IPC. In the age of sub-pixel precision in astronomy, these effects must be properly understood and accounted for in order for the data to accurately represent the object of observation. Implementation of this method is done through Python-scripted processing of images. The introduction of IPC into simulated frames is accomplished through convolution of the image with a blur kernel whose parameters are themselves locally defined functions of the image. These techniques can be used to enhance the data processing pipeline for NIRcam.
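
    A minimal way to model this computationally is to distribute each pixel's charge into its four nearest neighbors with a coupling fraction that depends on that pixel's signal. In the sketch below, the linear alpha(signal) relation and the full-well value are illustrative placeholders, not the measured NIRcam behavior.

    ```python
    # Sketch of signal-dependent IPC: a locally defined 3x3 coupling applied per pixel.
    import numpy as np

    def alpha_of_signal(signal, a_low=0.010, a_high=0.0065, full_well=1e5):
        frac = np.clip(signal / full_well, 0.0, 1.0)
        return a_low + (a_high - a_low) * frac           # coupling drops from ~1.0% to ~0.65% (assumed form)

    def apply_ipc(image):
        a = alpha_of_signal(image)
        out = (1.0 - 4.0 * a) * image                    # charge retained by the central pixel
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            out = out + np.roll(a * image, shift, axis=axis)   # charge coupled into each nearest neighbor
        return out

    frame = np.zeros((64, 64)); frame[32, 32] = 8e4      # bright isolated pixel
    coupled = apply_ipc(frame)
    print(coupled[32, 33] / coupled[32, 32])             # fractional coupling to one neighbor
    ```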

  10. Early breast tumor and late SARS detections using space-variant multispectral infrared imaging at a single pixel

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Buss, James R.; Kopriva, Ivica

    2004-04-01

    We propose a physics approach to solve a physical inverse problem, namely to choose the unique equilibrium solution at the minimum free energy H = E - T0S, which includes the Wiener solution (minimum least-mean-square energy E) and ICA (maximum entropy S) as special cases. "Unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data alone, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing at a single pixel in the real-world cases of remote sensing, early tumor detection, and SARS. The indeterminacy among the multiple solutions of the inverse problem is regulated, or selected, by means of the absolute minimum of the isothermal free energy, as the ground truth of the local equilibrium condition at the single-pixel footprint.

  11. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  12. Analysis and Optimization of the Performance of a Convolutionally Encoded Deep-Space Link in the Presence of Spacecraft Oscillator Phase Noise

    NASA Astrophysics Data System (ADS)

    Shambayati, S.

    1999-10-01

    In order to reduce the cost of deep-space missions, NASA is exploring the possibility of using new, cheaper technologies. Among these is the possibility of replacing ultra-stable oscillators (USOs) onboard the spacecraft with oscillators with measurable phase noise. In addition, it is proposed that these spacecraft use higher 32-GHz (Ka-band) radio frequencies in order to save mass. In this article, the performance of a convolutionally encoded deep-space link using non-USO-type oscillators onboard the spacecraft at Ka-band is analyzed. It is shown that the ground-receiver tracking-loop bandwidth settings need to be optimized and that, by selecting an oscillator with good phase-noise characteristics, the minimum required power onboard the spacecraft could be reduced by as much as 10 dB.

  13. Infimal Convolution Regularisation Functionals of BV and [Formula: see text] Spaces: Part I: The Finite [Formula: see text] Case.

    PubMed

    Burger, Martin; Papafitsoros, Konstantinos; Papoutsellis, Evangelos; Schönlieb, Carola-Bibiane

    We study a general class of infimal convolution type regularisation functionals suitable for applications in image processing. These functionals incorporate a combination of the total variation seminorm and [Formula: see text] norms. A unified well-posedness analysis is presented and a detailed study of the one-dimensional model is performed, by computing exact solutions for the corresponding denoising problem and the case [Formula: see text]. Furthermore, the dependency of the regularisation properties of this infimal convolution approach to the choice of p is studied. It turns out that in the case [Formula: see text] this regulariser is equivalent to the Huber-type variant of total variation regularisation. We provide numerical examples for image decomposition as well as for image denoising. We show that our model is capable of eliminating the staircasing effect, a well-known disadvantage of total variation regularisation. Moreover as p increases we obtain almost piecewise affine reconstructions, leading also to a better preservation of hat-like structures.

  14. High-End CMOS Active Pixel Sensors For Space-Borne Imaging Instruments

    DTIC Science & Technology

    2005-07-13

    Integrated optical sensors are used in space applications across a wide range of missions. Many of these still rely on CCD technology, whereas CMOS active pixel sensors (APS) have numerous advantages for space-borne applications. This publication presents today's high-performance CMOS sensors and highlights their advantages over their CCD counterparts.

  15. Convolution of Two Series

    ERIC Educational Resources Information Center

    Umar, A.; Yusau, B.; Ghandi, B. M.

    2007-01-01

    In this note, we introduce and discuss convolutions of two series. The idea is simple and can be introduced to higher secondary school classes, and has the potential of providing a good background for the well-known convolution of functions.
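
    A one-line computational example: the convolution (Cauchy product) of two series with coefficient sequences a and b has coefficients c_n = Σ_{k=0..n} a_k b_{n-k}, which is exactly what numpy.convolve computes for finite sequences.

    ```python
    # Cauchy product of (1 + x + x^2 + x^3) and (1 - x): coefficients convolve.
    import numpy as np

    a = [1, 1, 1, 1]          # 1 + x + x^2 + x^3
    b = [1, -1]               # 1 - x
    print(np.convolve(a, b))  # [ 1  0  0  0 -1 ]  ->  1 - x^4
    ```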

  16. Convolution in Convolution for Network in Network.

    PubMed

    Pang, Yanwei; Sun, Manli; Jiang, Xiaoheng; Li, Xuelong

    2017-03-16

    Network in network (NiN) is an effective instance and an important extension of deep convolutional neural networks, consisting of alternating convolutional layers and pooling layers. Instead of using a linear filter for convolution, NiN utilizes a shallow multilayer perceptron (MLP), a nonlinear function, to replace the linear filter. Because of the power of the MLP and of 1 x 1 convolutions in the spatial domain, NiN has a stronger ability of feature representation and hence results in better recognition performance. However, the MLP itself consists of fully connected layers that give rise to a large number of parameters. In this paper, we propose to replace the dense shallow MLP with a sparse shallow MLP. One or more layers of the sparse shallow MLP are sparsely connected in the channel dimension or channel-spatial domain. The proposed method is implemented by applying unshared convolution across the channel dimension and applying shared convolution across the spatial dimension in some computational layers. The proposed method is called convolution in convolution (CiC). The experimental results on the CIFAR10 data set, augmented CIFAR10 data set, and CIFAR100 data set demonstrate the effectiveness of the proposed CiC method.

  17. Distal Convoluted Tubule

    PubMed Central

    Ellison, David H.

    2014-01-01

    The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283

  18. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, to find good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.
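
    For readers unfamiliar with the encoder itself, the sketch below implements a generic rate-1/2 binary convolutional encoder. For illustration it uses the widely used NASA-standard constraint-length-7 generators (171, 133 in octal) rather than the memory-14 code found in the paper, and the tap-ordering convention is a simplifying assumption.

    ```python
    # Rate-1/2 binary convolutional encoder: two parity streams from one shift register.
    def conv_encode(bits, g1=0o171, g2=0o133, K=7):
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & ((1 << K) - 1)      # shift the new bit into the register
            out.append(bin(state & g1).count("1") % 2)       # parity of the taps selected by g1
            out.append(bin(state & g2).count("1") % 2)       # parity of the taps selected by g2
        return out

    print(conv_encode([1, 0, 1, 1, 0, 0]))   # two coded bits per information bit (rate 1/2)
    ```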

  19. Human Parsing with Contextualized Convolutional Neural Network.

    PubMed

    Liang, Xiaodan; Xu, Chunyan; Shen, Xiaohui; Yang, Jianchao; Tang, Jinhui; Lin, Liang; Yan, Shuicheng

    2016-03-02

    In this work, we address the human parsing task with a novel Contextualized Convolutional Neural Network (Co-CNN) architecture, which well integrates the cross-layer context, global image-level context, semantic edge context, within-super-pixel context and cross-super-pixel neighborhood context into a unified network. Given an input human image, Co-CNN produces the pixel-wise categorization in an end-to-end way. First, the cross-layer context is captured by our basic local-to-global-to-local structure, which hierarchically combines the global semantic information and the local fine details across different convolutional layers. Second, the global image-level label prediction is used as an auxiliary objective in the intermediate layer of the Co-CNN, and its outputs are further used for guiding the feature learning in subsequent convolutional layers to leverage the global image-level context. Third, semantic edge context is further incorporated into Co-CNN, where the high-level semantic boundaries are leveraged to guide pixel-wise labeling. Finally, to further utilize the local super-pixel contexts, the within-super-pixel smoothing and cross-super-pixel neighbourhood voting are formulated as natural sub-components of the Co-CNN to achieve the local label consistency in both training and testing process. Comprehensive evaluations on two public datasets well demonstrate the significant superiority of our Co-CNN over other state-of-the-art methods for human parsing. In particular, the F-1 score on the large dataset [1] reaches 81.72% by Co-CNN, significantly higher than 62.81% and 64.38% by the state-of-the-art algorithms, MCNN [2] and ATR [1], respectively. By utilizing our newly collected large dataset for training, our Co-CNN can achieve 85.36% in F-1 score.

  20. ENGage: The use of space and pixel art for increasing primary school children's interest in science, technology, engineering and mathematics

    NASA Astrophysics Data System (ADS)

    Roberts, Simon J.

    2014-01-01

    The Faculty of Engineering at The University of Nottingham, UK, has developed interdisciplinary, hands-on workshops for primary schools that introduce space technology, its relevance to everyday life and the importance of science, technology, engineering and maths. The workshop activities for 7-11 year olds highlight the roles that space and satellite technology play in observing and monitoring the Earth's biosphere as well as being vital to communications in the modern digital world. The programme also provides links to 'how science works', the environment and citizenship and uses pixel art through the medium of digital photography to demonstrate the importance of maths in a novel and unconventional manner. The interactive programme of activities provides learners with an opportunity to meet 'real' scientists and engineers, with one of the key messages from the day being that anyone can become involved in science and engineering whatever their ability or subject of interest. The methodology introduces the role of scientists and engineers using space technology themes, but it could easily be adapted for use with any inspirational topic. Analysis of learners' perceptions of science, technology, engineering and maths before and after participating in ENGage showed very positive and significant changes in their attitudes to these subjects and an increase in the number of children thinking they would be interested and capable in pursuing a career in science and engineering. This paper provides an overview of the activities, the methodology, the evaluation process and results.

  1. Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions

    NASA Astrophysics Data System (ADS)

    Sutter, P. M.; Wandelt, B. D.; Elsner, F.

    2015-06-01

    We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially-symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10^-5 in the power spectrum of the output map.
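
    A 1-D toy version of the splitting is shown below: the kernel is written as a compact real-space piece plus a smooth remainder, the two pieces are convolved separately (here both with FFTs for brevity, whereas the paper assigns them to CPU and GPU), and their sum reproduces the full convolution exactly because convolution is linear. The kernel shape and cutoff are illustrative assumptions.

    ```python
    # 1-D analogue of the kernel split: compact real-space piece + smooth remainder.
    import numpy as np

    N = 256
    x = np.arange(N) - N // 2
    kernel = np.exp(-np.abs(x) / 10.0)
    kernel /= kernel.sum()

    cut = 8
    real_part = np.where(np.abs(x) <= cut, kernel, 0.0)   # compact piece: cheap direct convolution
    smooth_part = kernel - real_part                      # remainder: handled in harmonic/Fourier space

    signal = np.random.default_rng(3).standard_normal(N)

    def circ_conv(s, k):
        return np.fft.ifft(np.fft.fft(s) * np.fft.fft(np.fft.ifftshift(k))).real

    full = circ_conv(signal, kernel)
    split = circ_conv(signal, real_part) + circ_conv(signal, smooth_part)
    print(np.allclose(full, split))   # True: the two pieces add back to the full convolution
    ```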

  2. Correction of defective pixels for medical and space imagers based on Ising Theory

    NASA Astrophysics Data System (ADS)

    Cohen, Eliahu; Shnitser, Moriel; Avraham, Tsvika; Hadar, Ofer

    2014-09-01

    We propose novel models for image restoration based on statistical physics. We investigate the affinity between these fields and describe a framework from which interesting denoising algorithms can be derived: Ising-like models and simulated annealing techniques. When combined with known predictors such as Median and LOCO-I, these models become even more effective. To further examine the proposed models, we apply them to two important problems: (i) digital cameras in space damaged by cosmic radiation; and (ii) ultrasonic medical devices degraded by speckle noise. The results, as well as benchmarks and comparisons, suggest in most cases a significant gain in PSNR and SSIM compared to other filters.
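
    As a flavor of the Ising-like models referred to above, the sketch below denoises a binary (+1/-1) image with an Ising smoothness prior using iterated conditional modes, a greedy zero-temperature stand-in for the simulated annealing the paper discusses; the weights and noise level are illustrative assumptions.

    ```python
    # Minimal Ising-prior denoiser using iterated conditional modes (ICM).
    import numpy as np

    def icm_denoise(noisy, beta=1.5, eta=2.0, sweeps=5):
        """noisy: array of +1/-1 pixels; beta weights neighbor agreement, eta data fidelity."""
        x = noisy.copy()
        for _ in range(sweeps):
            for i in range(1, x.shape[0] - 1):
                for j in range(1, x.shape[1] - 1):
                    neigh = x[i - 1, j] + x[i + 1, j] + x[i, j - 1] + x[i, j + 1]
                    # pick the label that minimizes the local Ising + data energy
                    x[i, j] = 1 if beta * neigh + eta * noisy[i, j] > 0 else -1
        return x

    rng = np.random.default_rng(4)
    clean = np.ones((64, 64)); clean[:, :32] = -1                      # two flat regions
    noisy = np.where(rng.random(clean.shape) < 0.1, -clean, clean)     # 10% of pixels flipped
    restored = icm_denoise(noisy)
    print((restored != clean).mean())                                  # residual error rate
    ```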

  3. Development of a pixel sensor with fine space-time resolution based on SOI technology for the ILC vertex detector

    NASA Astrophysics Data System (ADS)

    Ono, Shun; Togawa, Manabu; Tsuji, Ryoji; Mori, Teppei; Yamada, Miho; Arai, Yasuo; Tsuboyama, Toru; Hanagaki, Kazunori

    2017-02-01

    We have been developing a new monolithic pixel sensor with silicon-on-insulator (SOI) technology for the International Linear Collider (ILC) vertex detector system. The SOI monolithic pixel detector is realized using standard CMOS circuits fabricated on a fully depleted sensor layer. The new SOI sensor SOFIST can store both the position and timing information of charged particles in each 20×20 μm² pixel. The position resolution is further improved by weighting the hit position with the charge spread over multiple pixels. The pixel also records the hit timing with an embedded time-stamp circuit. The sensor chip has column-parallel analog-to-digital conversion (ADC) circuits and zero-suppression logic for high-speed data readout. We are designing and evaluating some prototype sensor chips for optimizing and minimizing the pixel circuit.
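
    The charge-weighted position estimate mentioned in the abstract can be sketched in a few lines; the helper below is illustrative only (the 20 μm pitch matches the quoted pixel size, everything else is assumed).

        # Charge-weighted centroid of a pixel cluster, in micrometres.
        import numpy as np

        def centroid(charges, pitch_um=20.0):
            """charges: 2-D array of collected charge per pixel within a cluster."""
            q = np.asarray(charges, dtype=float)
            ys, xs = np.indices(q.shape)
            x_um = (xs * q).sum() / q.sum() * pitch_um
            y_um = (ys * q).sum() / q.sum() * pitch_um
            return x_um, y_um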

  4. 4K×4K format 10μm pixel pitch H4RG-10 hybrid CMOS silicon visible focal plane array for space astronomy

    NASA Astrophysics Data System (ADS)

    Bai, Yibin; Tennant, William; Anglin, Selmer; Wong, Andre; Farris, Mark; Xu, Min; Holland, Eric; Cooper, Donald; Hosack, Joseph; Ho, Kenneth; Sprafke, Thomas; Kopp, Robert; Starr, Brian; Blank, Richard; Beletic, James W.; Luppino, Gerard A.

    2012-07-01

    Teledyne’s silicon hybrid CMOS focal plane array technology has matured into a viable, high performance and high-TRL alternative to scientific CCD sensors for space-based applications in the UV-visible-NIR wavelengths. This paper presents the latest results from Teledyne’s low noise silicon hybrid CMOS visible focal plane array produced in 4K×4K format with 10 μm pixel pitch. The H4RG-10 readout circuit retains all of the CMOS functionality (windowing, guide mode, reference pixels) and heritage of its highly successful predecessor (H2RG) developed for JWST, with additional features for improved performance. Combined with a silicon PIN detector layer, this technology is termed HyViSI™ (Hybrid Visible Silicon Imager). H4RG-10 HyViSI™ arrays achieve high pixel interconnectivity (>99.99%), low readout noise (<10 e- rms single CDS), low dark current (<0.5 e-/pixel/s at 193K), high quantum efficiency (>90% broadband), and large dynamic range (>13 bits). Pixel crosstalk and interpixel capacitance (IPC) have been predicted using detailed models of the hybrid structure and these predictions have been confirmed by measurements with Fe-55 X-ray events and the single pixel reset technique. For a 100-micron thick detector, IPC of less than 3% and total pixel crosstalk of less than 7% have been achieved for the HyViSI™ H4RG-10. The H4RG-10 array is mounted on a lightweight silicon carbide (SiC) package and has been qualified to Technology Readiness Level 6 (TRL-6). As part of space qualification, the HyViSI™ H4RG-10 array passed radiation testing for the low Earth orbit (LEO) environment.

  5. Adaptive pseudo-color enhancement method of weld radiographic images based on HSI color space and self-transformation of pixels

    NASA Astrophysics Data System (ADS)

    Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong

    2017-06-01

    The radiographic testing (RT) image of a steam turbine manufacturing enterprise has the characteristics of low gray level, low contrast, and blurriness, which lead to substandard image quality and make it difficult for human eyes to detect and evaluate defects. This study proposes an adaptive pseudo-color enhancement method for weld radiographic images based on the hue, saturation, and intensity (HSI) color space and the self-transformation of pixels to solve these problems. First, a self-transformation is applied to the pixel values of the original RT image, and the transformed values are assigned to the components of the HSI color space. Thereafter, the average intensity of the enhanced image is adaptively adjusted to 0.5 according to the intensity of the original image. Moreover, the hue range and interval can be adjusted according to personal habits. Finally, the HSI components after the adaptive adjustment can be transformed to display in the red, green, and blue color space. Numerous weld radiographic images from a steam turbine manufacturing enterprise are used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement method can improve image definition and make the target and background areas distinct in weld radiographic images. The enhanced images are more conducive to defect recognition. Moreover, the image enhanced using the proposed method conforms to the visual properties of the human eye, and the effectiveness of defect recognition and evaluation can be ensured.

  6. Non-Uniform Object-Space Pixelation (NUOP) for Penalized Maximum-Likelihood Image Reconstruction for a Single Photon Emission Microscope System

    PubMed Central

    Meng, L. J.; Li, Nan

    2016-01-01

    This paper presents a non-uniform object-space pixelation (NUOP) approach for image reconstruction using the penalized maximum likelihood methods. This method was developed for use with a single photon emission microscope (SPEM) system that offers an ultrahigh spatial resolution for a targeted local region inside mouse brain. In this approach, the object-space is divided with non-uniform pixel sizes, which are chosen adaptively based on object-dependent criteria. These include (a) some known characteristics of a target-region, (b) the associated Fisher Information that measures the weighted correlation between the responses of the system to gamma ray emissions occurred at different spatial locations, and (c) the linear distance from a given location to the target-region. In order to quantify the impact of this non-uniform pixelation approach on image quality, we used the Modified Uniform Cramer-Rao bound (MUCRB) to evaluate the local resolution-variance and bias-variance tradeoffs achievable with different pixelation strategies. As demonstrated in this paper, an efficient object-space pixelation could improve the speed of computation by 1–2 orders of magnitude, whilst maintaining an excellent reconstruction for the target-region. This improvement is crucial for making the SPEM system a practical imaging tool for mouse brain studies. The proposed method also allows rapid computation of the first and second order statistics of reconstructed images using analytical approximations, which is the key for the evaluation of several analytical system performance indices for system design and optimization.

  7. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  8. Spatial-Spectral Classification Based on the Unsupervised Convolutional Sparse Auto-Encoder for Hyperspectral Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Han, Xiaobing; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Current hyperspectral remote sensing imagery spatial-spectral classification methods mainly consider concatenating the spectral information vectors and spatial information vectors together. However, the combined spatial-spectral information vectors may cause information loss and concatenation deficiency for the classification task. To efficiently represent the spatial-spectral feature information around the central pixel within a neighbourhood window, the unsupervised convolutional sparse auto-encoder (UCSAE) with a window-in-window selection strategy is proposed in this paper. The window-in-window selection strategy selects sub-window spatial-spectral information for feature learning and extraction with the sparse auto-encoder (SAE). A convolution step is then applied to the SAE features over the larger outer window. The UCSAE algorithm was validated on two common hyperspectral imagery (HSI) datasets - the Pavia University dataset and the Kennedy Space Centre (KSC) dataset - and shows an improvement over traditional hyperspectral spatial-spectral classification methods.

  9. The time-space relationship of the data point (Pixels) of the thematic mapper and multispectral scanner or the myth of simultaneity

    NASA Technical Reports Server (NTRS)

    Gordon, F., Jr.

    1980-01-01

    A simplified explanation of the time-space relationships among scanner pixels is presented. The examples of the multispectral scanner (MSS) on Landsats 1, 2, and 3 and the thematic mapper (TM) of Landsat D are used to describe the concept and degree of nonsimultaneity of scanning system data. The time aspects of scanner data acquisition and those parts of the MSS and TM systems related to those phenomena are addressed.

  10. Optoelectronic Systems Based on InGaAs Complementary-Metal-Oxide-Semiconductor Smart-Pixel Arrays and Free-Space Optical Interconnects

    NASA Astrophysics Data System (ADS)

    Walker, Andrew C.; Yang, Tsung-Yi; Gourlay, James; Dines, Julian A. B.; Forbes, Mark G.; Prince, Simon M.; Baillie, Douglas A.; Neilson, David T.; Williams, Rhys; Wilkinson, Lucy C.; Smith, George R.; Desmulliez, Mark P. Y.; Buller, Gerald S.; Taghizadeh, Mohammad R.; Waddie, Andrew; Underwood, Ian; Stanley, Colin R.; Pottier, Francois; Vögele, Brigitte; Sibbett, Wilson

    1998-05-01

    Free-space optical interconnects have been identified as a potentially important technology for future massively parallel-computing systems. The development of optoelectronic smart pixels based on InGaAs/AlGaAs multiple-quantum-well modulators and detectors flip-chip solder-bump bonded onto complementary-metal-oxide-semiconductor (CMOS) circuits and the design and construction of an experimental processor in which the devices are linked by free-space optical interconnects are described. For demonstrating the capabilities of the technology, a parallel data-sorting system has been identified as an effective demonstrator. By use of Batcher's bitonic sorting algorithm and exploitation of a perfect-shuffle optical interconnection, the system has the potential to perform a full sort on 1024, 16-bit words in less than 16 s. We describe the design, testing, and characterization of the smart-pixel devices and free-space optical components. InGaAs/CMOS smart-pixel chip-to-chip communication has been demonstrated at 50 Mbit/s. It is shown that the initial system specifications can be met by the component technologies.

  11. PIXEL PUSHER

    NASA Technical Reports Server (NTRS)

    Stanfill, D. F.

    1994-01-01

    Pixel Pusher is a Macintosh application used for viewing and performing minor enhancements on imagery. It will read image files in JPL's two primary image formats- VICAR and PDS - as well as the Macintosh PICT format. VICAR (NPO-18076) handles an array of image processing capabilities which may be used for a variety of applications including biomedical image processing, cartography, earth resources, and geological exploration. Pixel Pusher can also import VICAR format color lookup tables for viewing images in pseudocolor (256 colors). This program currently supports only eight bit images but will work on monitors with any number of colors. Arbitrarily large image files may be viewed in a normal Macintosh window. Color and contrast enhancement can be performed with a graphical "stretch" editor (as in contrast stretch). In addition, VICAR images may be saved as Macintosh PICT files for exporting into other Macintosh programs, and individual pixels can be queried to determine their locations and actual data values. Pixel Pusher is written in Symantec's Think C and was developed for use on a Macintosh SE30, LC, or II series computer running System Software 6.0.3 or later and 32 bit QuickDraw. Pixel Pusher will only run on a Macintosh which supports color (whether a color monitor is being used or not). The standard distribution medium for this program is a set of three 3.5 inch Macintosh format diskettes. The program price includes documentation. Pixel Pusher was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Think C is a trademark of Symantec Corporation. Macintosh is a registered trademark of Apple Computer, Inc.

  13. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
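
    The computationally critical step the abstract refers to, the frequency-domain linear solve inside each ADMM iteration, can be sketched for a single-channel 1-D case as below; this is a hedged illustration using the Sherman-Morrison identity, not the patented implementation.

        # Frequency-domain x-update: solve (D^H D + rho I) X = D^H S + rho W per frequency.
        import numpy as np

        def fourier_x_step(D, S, W, rho):
            """D: (M, N) FFTs of the dictionary filters; S: (N,) FFT of the signal;
            W: (M, N) FFTs of (z_m - u_m); returns FFTs of the updated coefficient maps."""
            b = np.conj(D) * S + rho * W                    # right-hand side, per frequency
            c = (D * b).sum(axis=0) / (rho + (np.abs(D) ** 2).sum(axis=0))
            return (b - np.conj(D) * c) / rho               # Sherman-Morrison solution, O(MN)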

  14. Exploring the Hidden Structure of Astronomical Images: A "Pixelated" View of Solar System and Deep Space Features!

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Sienkiewicz, Frank; Sadler, Philip; Antonucci, Paul; Miller, Jaimie

    2013-01-01

    We describe activities created to help student participants in Project ITEAMS (Innovative Technology-Enabled Astronomy for Middle Schools) develop a deeper understanding of picture elements (pixels), image creation, and analysis of the recorded data. ITEAMS is an out-of-school time (OST) program funded by the National Science Foundation (NSF) with…

  16. Astronomical Image Subtraction by Cross-Convolution

    NASA Astrophysics Data System (ADS)

    Yuan, Fang; Akerlof, Carl W.

    2008-04-01

    In recent years, there has been a proliferation of wide-field sky surveys to search for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated to make a test image and a reference image separately transform to match as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.
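
    A toy version of the cross-convolution step can be written down directly; the sketch below uses Gaussian PSF estimates as stand-ins for the fitted kernels (the paper fits general kernels on uniformly spaced subimages), so it is illustrative only.

        # Cross-convolution: blur each image toward a common effective PSF, then difference.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def cross_convolve_subtract(test_img, ref_img, sigma_test, sigma_ref):
            t = gaussian_filter(test_img, sigma_ref)   # test image blurred by the reference PSF
            r = gaussian_filter(ref_img, sigma_test)   # reference image blurred by the test PSF
            return t - r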

  17. Effect of Pixel's Spatial Characteristics on Recognition of Isolated Pixelized Chinese Character.

    PubMed

    Yang, Kun; Liu, Shuang; Wang, Hong; Liu, Wei; Wu, Yaowei

    2015-01-01

    The influence of pixel spatial characteristics on the recognition of isolated Chinese characters was investigated using simulated prosthetic vision. The accuracy of Chinese character recognition with 4 pixel numbers (6*6, 8*8, 10*10, and 12*12 pixel arrays), 3 pixel shapes (Square, Dot and Gaussian) and different pixel spacings was tested through a head-mounted display (HMD). Captured images of Chinese characters in the Hei font style were pixelized with Square, Dot and Gaussian pixels. Results showed that pixel number was the most important factor affecting the recognition of isolated pixelized Chinese characters, and the accuracy of recognition increased with pixel number. A 10*10 pixel array could provide enough information for people to recognize an isolated Chinese character. At low resolution (6*6 and 8*8 pixel arrays), there was little difference in recognition accuracy between different pixel shapes and different pixel spacings, while at high resolution (10*10 and 12*12 pixel arrays), variations in pixel shape and pixel spacing did not affect the recognition of isolated pixelized Chinese characters.

  18. Fiber pixelated image database

    NASA Astrophysics Data System (ADS)

    Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke

    2016-08-01

    Imaging of physically inaccessible parts of the body such as the colon at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on the imaging fiber bundle are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer's perception of the captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own set of images without making the source data available, which has limited the usage and adaptability of their work. A database of pixelated images is the current requirement to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help to researchers working on comb-structure removal algorithms.

  19. NUCLEI SEGMENTATION VIA SPARSITY CONSTRAINED CONVOLUTIONAL REGRESSION

    PubMed Central

    Zhou, Yin; Chang, Hang; Barner, Kenneth E.; Parvin, Bahram

    2017-01-01

    Automated profiling of nuclear architecture, in histology sections, can potentially help predict the clinical outcomes. However, the task is challenging as a result of nuclear pleomorphism and cellular states (e.g., cell fate, cell cycle), which are compounded by the batch effect (e.g., variations in fixation and staining). Present methods, for nuclear segmentation, are based on human-designed features that may not effectively capture intrinsic nuclear architecture. In this paper, we propose a novel approach, called sparsity constrained convolutional regression (SCCR), for nuclei segmentation. Specifically, given raw image patches and the corresponding annotated binary masks, our algorithm jointly learns a bank of convolutional filters and a sparse linear regressor, where the former is used for feature extraction, and the latter aims to produce a likelihood for each pixel being nuclear region or background. During classification, the pixel label is simply determined by a thresholding operation applied on the likelihood map. The method has been evaluated using the benchmark dataset collected from The Cancer Genome Atlas (TCGA). Experimental results demonstrate that our method outperforms traditional nuclei segmentation algorithms and is able to achieve competitive performance compared to the state-of-the-art algorithm built upon human-designed features with biological prior knowledge. PMID:28101301
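
    The classification stage described above can be sketched compactly; the filter bank, regression weights, and the squashing/threshold choices below are assumptions for illustration, not the authors' trained SCCR model.

        # Convolutional feature extraction + linear regression -> per-pixel likelihood -> threshold.
        import numpy as np
        from scipy.ndimage import convolve

        def likelihood_map(image, filters, weights, bias):
            """filters: list of 2-D kernels; weights: one scalar per filter."""
            feats = np.stack([convolve(image, f, mode="reflect") for f in filters])
            lin = np.tensordot(weights, feats, axes=1) + bias
            return 1.0 / (1.0 + np.exp(-lin))           # squash to (0, 1)

        def segment(image, filters, weights, bias, threshold=0.5):
            return likelihood_map(image, filters, weights, bias) > threshold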

  20. Understanding deep convolutional networks

    PubMed Central

    Mallat, Stéphane

    2016-01-01

    Deep convolutional networks provide state-of-the-art classifications and regressions results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed. PMID:26953183

  1. Two dimensional convolute integers for machine vision and image recognition

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression generated, integer valued, zero phase shifting, convoluting, frequency sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators show frequency-sensitive feature selection and scale-invariant properties. Such tasks as boundary/edge enhancement and noise or small size pixel disturbance removal can readily be accomplished. For feature selection, tight band pass operators are essential. Results from test cases are given.
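
    A minimal illustration of applying such an operator non-recursively as a classical convolution is given below; the integer kernel values are an example only and are not taken from the paper.

        # Integer-valued, zero-phase 3x3 low pass operator applied as a classical convolution.
        import numpy as np
        from scipy.ndimage import convolve

        low_pass = np.array([[1, 2, 1],
                             [2, 4, 2],
                             [1, 2, 1]], dtype=float)
        low_pass /= low_pass.sum()                  # integer kernel, normalised when applied

        def smooth(image):
            return convolve(image, low_pass, mode="nearest")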

  3. Pixelation Effects in Weak Lensing

    NASA Technical Reports Server (NTRS)

    High, F. William; Rhodes, Jason; Massey, Richard; Ellis, Richard

    2007-01-01

    Weak gravitational lensing can be used to investigate both dark matter and dark energy but requires accurate measurements of the shapes of faint, distant galaxies. Such measurements are hindered by the finite resolution and pixel scale of digital cameras. We investigate the optimum choice of pixel scale for a space-based mission, using the engineering model and survey strategy of the proposed Supernova Acceleration Probe as a baseline. We do this by simulating realistic astronomical images containing a known input shear signal and then attempting to recover the signal using the Rhodes, Refregier, and Groth algorithm. We find that the quality of shear measurement is always improved by smaller pixels. However, in practice, telescopes are usually limited to a finite number of pixels and operational life span, so the total area of a survey increases with pixel size. We therefore fix the survey lifetime and the number of pixels in the focal plane while varying the pixel scale, thereby effectively varying the survey size. In a pure trade-off for image resolution versus survey area, we find that measurements of the matter power spectrum would have minimum statistical error with a pixel scale of 0.09 arcsec for a 0.14 arcsec FWHM point-spread function (PSF). The pixel scale could be increased to 0.16 arcsec if images dithered by exactly half-pixel offsets were always available. Some of our results do depend on our adopted shape measurement method and should be regarded as an upper limit: future pipelines may require smaller pixels to overcome systematic floors not yet accessible, and, in certain circumstances, measuring the shape of the PSF might be more difficult than those of galaxies. However, the relative trends in our analysis are robust, especially those of the surface density of resolved galaxies. Our approach thus provides a snapshot of potential in available technology, and a practical counterpart to analytic studies of pixelation, which necessarily assume an idealized shape

  4. Convolution of degrees of coherence.

    PubMed

    Korotkova, Olga; Mei, Zhangrong

    2015-07-01

    The conditions under which convolution of two degrees of coherence represents a novel legitimate degree of coherence are established for wide-sense statistically stationary Schell-model beam-like optical fields. Several examples are given to illustrate how convolution can be used for generation of a far field being a modulated version of another one. Practically, the convolutions of the degrees of coherence can be achieved by programming the liquid crystal spatial light modulators.

  5. The effect of whitening transformation on pooling operations in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua

    2015-12-01

    Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation to reduce the resolution of feature maps and achieve spatial invariance in convolutional neural networks. Conventionally, pooling methods are mainly determined empirically in most previous work. Therefore, our main purpose is to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concepts of information entropy to test the effect of whitening on pooling in different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.
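
    The whitening pre-processing step discussed above is commonly realised as ZCA whitening; the sketch below shows one such variant with an illustrative regularisation constant, not necessarily the authors' exact choice.

        # ZCA whitening of flattened image patches.
        import numpy as np

        def zca_whiten(patches, eps=1e-2):
            """patches: (num_patches, num_pixels) array of flattened patches."""
            x = patches - patches.mean(axis=0)
            cov = x.T @ x / x.shape[0]
            vals, vecs = np.linalg.eigh(cov)
            zca = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
            return x @ zca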

  6. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  7. Silicon-Gas Pixel Detector

    NASA Astrophysics Data System (ADS)

    Bashindzhagyan, G.; Korotkova, N.; Romaniouk, A.; Sinev, N.; Tikhomirov, V.

    2017-01-01

    The proposed Silicon-Gas Pixel Detector (SGPD) combines the advantages of silicon and gas-pixel detectors (GPD). A spatial resolution of 7 microns and angular measurements down to 0.2 degrees in both angles are achieved within a detector only 10 mm thick and with a very low material budget. Silicon pixels implemented directly in the electronic chip structure make it possible to know the exact time at which a particle crossed the detector and to use the SGPD as a completely self-triggered device. Binary readout and advanced data collection and analysis at the hardware level allow all the information to be obtained in less than 1 microsecond and the SGPD to be used for fast trigger generation.

  8. Compressed imaging by sparse random convolution.

    PubMed

    Marcos, Diego; Lasser, Theo; López, Antonio; Bourquard, Aurélien

    2016-01-25

    The theory of compressed sensing (CS) shows that signals can be acquired at sub-Nyquist rates if they are sufficiently sparse or compressible. Since many images bear this property, several acquisition models have been proposed for optical CS. An interesting approach is random convolution (RC). In contrast with single-pixel CS approaches, RC allows for the parallel capture of visual information on a sensor array as in conventional imaging approaches. Unfortunately, the RC strategy is difficult to implement as is in practical settings due to important contrast-to-noise-ratio (CNR) limitations. In this paper, we introduce a modified RC model circumventing such difficulties by considering measurement matrices involving sparse non-negative entries. We then implement this model based on a slightly modified microscopy setup using incoherent light. Our experiments demonstrate the suitability of this approach for dealing with distinct CS scenarios, including 1-bit CS.
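
    The modified acquisition model can be illustrated schematically: convolve the scene with a sparse, non-negative random kernel and keep a random subset of samples. The sketch below is a 1-D toy with assumed parameters, not the authors' optical implementation.

        # Compressive measurements by sparse random convolution followed by subsampling.
        import numpy as np

        def sparse_random_convolution_measurements(x, density=0.05, keep=0.25, seed=0):
            rng = np.random.default_rng(seed)
            n = x.size
            h = (rng.random(n) < density) * rng.random(n)          # sparse, non-negative kernel
            y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n)   # circular convolution
            idx = rng.choice(n, size=int(keep * n), replace=False) # keep a random subset
            return y[idx], idx, h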

  9. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.

  10. Pixel Perfect

    SciTech Connect

    Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.; Sowa, Marianne B.

    2005-09-01

    cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.

  11. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows any kind of probabilistic regression problem based on imaging data to be solved, such as estimating metallicity or star formation rate in galaxies.

  12. Convolutional neural network regression for short-axis left ventricle segmentation in cardiac cine MR sequences.

    PubMed

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; McLaughlin, Robert A

    2017-07-01

    Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints. We have benchmarked our approach primarily against the publicly-available left ventricle segmentation challenge (LVSC) dataset, which consists of 100 training and 100 validation cardiac MRI cases representing a heterogeneous mix of cardiac pathologies and imaging parameters across multiple centers. Our approach attained a .77 Jaccard index, which is the highest published overall result in comparison to other automated algorithms. To test general applicability, we also evaluated against the Kaggle Second Annual Data Science Bowl, where the evaluation metric was the indirect clinical measures of LV volume rather than direct myocardial contours. Our approach attained a Continuous Ranked Probability Score (CRPS) of .0124, which would have ranked tenth in the original challenge. With this we demonstrate the effectiveness of convolutional neural network regression paired with domain-specific features in clinical segmentation. Copyright © 2017 Elsevier B.V. All rights reserved.
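
    The radial-distance parameterisation described above maps straightforwardly back to contour points; the helper below is an illustrative sketch of that inverse mapping (names and the angular sampling are assumptions, not the authors' code).

        # Convert per-angle radial distances from the LV centre into contour coordinates.
        import numpy as np

        def radii_to_contour(center, radii):
            """center: (x, y) in pixels; radii: distances sampled at equally spaced angles."""
            angles = np.linspace(0.0, 2.0 * np.pi, radii.size, endpoint=False)
            xs = center[0] + radii * np.cos(angles)
            ys = center[1] + radii * np.sin(angles)
            return np.stack([xs, ys], axis=1)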

  13. Dealiased convolutions for pseudospectral simulations

    NASA Astrophysics Data System (ADS)

    Roberts, Malcolm; Bowman, John C.

    2011-12-01

    Efficient algorithms have recently been developed for calculating dealiased linear convolution sums without the expense of conventional zero-padding or phase-shift techniques. For one-dimensional in-place convolutions, the memory requirements are identical with the zero-padding technique, with the important distinction that the additional work memory need not be contiguous with the input data. This decoupling of data and work arrays dramatically reduces the memory and computation time required to evaluate higher-dimensional in-place convolutions. The memory savings are achieved by computing the in-place Fourier transform of the data in blocks, rather than all at once. The technique also allows one to dealias the n-ary convolutions that arise on Fourier transforming cubic and higher powers. Implicitly dealiased convolutions can be built on top of state-of-the-art adaptive fast Fourier transform libraries like FFTW. Vectorized multidimensional implementations for the complex and centered Hermitian (pseudospectral) cases have already been implemented in the open-source software FFTW++. With the advent of this library, writing a high-performance dealiased pseudospectral code for solving nonlinear partial differential equations has now become a relatively straightforward exercise. New theoretical estimates of computational complexity and memory use are provided, including corrected timing results for 3D pruned convolutions and further consideration of higher-order convolutions.
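
    For reference, the conventional explicitly zero-padded dealiased convolution that the implicit method improves upon can be written as below; this is the baseline technique, not the paper's algorithm.

        # Dealiased linear convolution of two length-n spectra via explicit 2n zero padding.
        import numpy as np

        def dealiased_convolution(f_hat, g_hat):
            """f_hat, g_hat: Fourier coefficients indexed 0..n-1."""
            n = f_hat.size
            m = 2 * n                              # padded length (the memory cost that
            F = np.fft.ifft(f_hat, m)              # implicit dealiasing avoids)
            G = np.fft.ifft(g_hat, m)
            return (m * np.fft.fft(F * G))[:n]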

  14. Determinate-state convolutional codes

    NASA Technical Reports Server (NTRS)

    Collins, O.; Hizlan, M.

    1991-01-01

    A determinate state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity is analyzed along with free distances of these new codes, and extensive simulation results are provided for their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.

  15. Machine learning approach to OAM beam demultiplexing via convolutional neural networks.

    PubMed

    Doster, Timothy; Watnik, Abbie T

    2017-04-20

    Orbital angular momentum (OAM) beams allow for increased channel capacity in free-space optical communication. Conventionally, these OAM beams are multiplexed together at a transmitter and then propagated through the atmosphere to a receiver where, due to their orthogonality properties, they are demultiplexed. We propose a technique to demultiplex these OAM-carrying beams by capturing an image of the unique multiplexing intensity pattern and training a convolutional neural network (CNN) as a classifier. This CNN-based demultiplexing method allows for simplicity of operation as alignment is unnecessary, orthogonality constraints are loosened, and costly optical hardware is not required. We test our CNN-based technique against a traditional demultiplexing method, conjugate mode sorting, with various OAM mode sets and levels of simulated atmospheric turbulence in a laboratory setting. Furthermore, we examine our CNN-based technique with respect to added sensor noise, number of photon detections, number of pixels, unknown levels of turbulence, and training set size. Results show that the CNN-based demultiplexing method is able to demultiplex combinatorially multiplexed OAM modes from a fixed set with >99% accuracy for high levels of turbulence, well exceeding the conjugate mode demultiplexing method. We also show that this new method is robust to added sensor noise, number of photon detections, number of pixels, unknown levels of turbulence, and training set size.

  16. [An improvement on the two-dimensional convolution method of image reconstruction and its application to SPECT].

    PubMed

    Suzuki, S; Arai, H

    1990-04-01

    In single-photon emission computed tomography (SPECT) and X-ray CT, the one-dimensional (1-D) convolution method is used for image reconstruction from projections. The method performs 1-D convolution filtering on projection data with a 1-D filter in the space domain, and back-projects the filtered data for reconstruction. Images can also be reconstructed by first forming the 2-D backprojection images from projections and then convoluting them with a 2-D space-domain filter. This is reconstruction by the 2-D convolution method, and its reconstruction process is the reverse of the 1-D convolution method. Since the 2-D convolution method is inferior to the 1-D convolution method in reconstruction speed, it has had no practical use. In the actual reconstruction by the 2-D convolution method, convolution is performed over a finite plane called the convolution window. A convolution window of size N X N needs a 2-D discrete filter of the same size. If better reconstructions are achieved with small convolution windows, the reconstruction time for the 2-D convolution method can be reduced. For this purpose, 2-D filters of a simple function form are proposed which can give good reconstructions with small convolution windows. They are here defined on a finite plane, depending on the window size used, although a filter function is usually defined on the infinite plane. They are however set so that they better approximate the property of a 2-D filter function defined on the infinite plane. Filters of size N X N are thus determined. Their values vary with window size. The filters are applied to image reconstructions in SPECT. (ABSTRACT TRUNCATED AT 250 WORDS)

  17. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation in time FFTs) and that it is easy to implement.
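
    The zero-fill baseline referred to above amounts to appending zeros to the sampled record before the FFT, which samples the same underlying spectrum on a finer grid; a minimal sketch:

        # Zero-fill interpolation: pad the record with zeros before transforming.
        import numpy as np

        def zero_fill_spectrum(samples, factor=4):
            n = samples.size
            return np.fft.rfft(samples, n * factor)   # FFT of the zero-padded record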

  18. Convolution formulations for non-negative intensity.

    PubMed

    Williams, Earl G

    2013-08-01

    Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived that use measured pressure or normal velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that produce radiation to the far-field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas called the hybrid-intensity formulas are also derived which yield a bipolar intensity using a different spatial convolution operator, again using either the measured pressure or velocity. It is demonstrated from the experiment results that the velocity formula yields the classical active intensity and the pressure formula an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space.

  19. General logarithmic image processing convolution.

    PubMed

    Palomares, Jose M; González, Jesús; Ros, Eduardo; Prieto, Alberto

    2006-11-01

    The logarithmic image processing model (LIP) is a robust mathematical framework, which, among other benefits, behaves invariantly to illumination changes. This paper presents, for the first time, two general formulations of the 2-D convolution of separable kernels under the LIP paradigm. Although both formulations are mathematically equivalent, one of them has been designed to avoid the operations that are computationally expensive on current computers. Therefore, this fast LIP convolution method allows significant speedups to be obtained and is more adequate for real-time processing. In order to support these statements, some experimental results are shown in Section V.
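
    One standard route to a LIP-domain convolution is through the classical LIP isomorphism phi(f) = -M ln(1 - f/M); the sketch below takes that route and is illustrative only (the paper derives direct formulations for separable kernels, one of them optimised for speed).

        # LIP convolution via the isomorphism: map to the iso-space, convolve, map back.
        import numpy as np
        from scipy.ndimage import convolve

        def lip_convolve(image, kernel, M=256.0):
            f = np.clip(image, 0, M - 1e-6)
            phi = -M * np.log(1.0 - f / M)             # gray tones -> LIP isomorphic space
            out = convolve(phi, kernel, mode="nearest")
            return M * (1.0 - np.exp(-out / M))        # back to the gray-tone range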

  20. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.

  1. Pixelated gamma detector

    SciTech Connect

    Dolinsky, Sergei Ivanovich; Yanoff, Brian David; Guida, Renato; Ivan, Adrian

    2016-12-27

    A pixelated gamma detector includes a scintillator column assembly having scintillator crystals and optical transparent elements alternating along a longitudinal axis, a collimator assembly having longitudinal walls separated by collimator septa, the collimator septa spaced apart to form collimator channels, the scintillator column assembly positioned adjacent to the collimator assembly so that respective ones of the scintillator crystals are positioned adjacent to respective ones of the collimator channels, respective ones of the optical transparent elements are positioned adjacent to respective ones of the collimator septa, and a first photosensor and a second photosensor, the first and the second photosensor each connected to an opposing end of the scintillator column assembly. A system and a method for inspecting and/or detecting defects in an interior of an object are also disclosed.

  2. Simplified Convolution Codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.

    1985-01-01

    Simple recursive algorithm efficiently calculates minimum-weight error vectors using Diophantine equations. Recursive algorithm uses general solution of polynomial linear Diophantine equation to determine minimum-weight error polynomial vector in equation in polynomial space.

  3. ATLAS IBL Pixel Upgrade

    NASA Astrophysics Data System (ADS)

    La Rosa, A.; Atlas Ibl Collaboration

    2011-06-01

    The upgrade of the ATLAS detector will proceed in different phases towards the super-LHC. The first upgrade of the Pixel detector will consist of the construction of a new pixel layer, which will be installed during the first shutdown of the LHC machine (LHC phase-I upgrade). The new detector, called the Insertable B-Layer (IBL), will be inserted between the existing pixel detector and a new (smaller radius) beam-pipe at a radius of 3.3 cm. The IBL will require the development of several new technologies to cope with the increase in radiation and pixel occupancy, and also to improve the physics performance, which will be achieved by reducing the pixel size and the material budget. Three different promising sensor technologies (planar-Si, 3D-Si and diamond) are currently under investigation for the pixel detector. An overview of the project, with particular emphasis on the pixel module, is presented in this paper.

  4. Inhibitor Discovery by Convolution ABPP.

    PubMed

    Chandrasekar, Balakumaran; Hong, Tram Ngoc; van der Hoorn, Renier A L

    2017-01-01

    Activity-based protein profiling (ABPP) has emerged as a powerful proteomic approach to study the active proteins in their native environment by using chemical probes that label active site residues in proteins. Traditionally, ABPP is classified as either comparative or competitive ABPP. In this protocol, we describe a simple method called convolution ABPP, which draws on the benefits of both competitive and comparative ABPP. Convolution ABPP allows one to detect whether a reduced signal observed during comparative ABPP could be due to the presence of inhibitors. In convolution ABPP, the proteomes are analyzed by comparing labeling intensities in two mixed proteomes that were labeled either before or after mixing. A reduction of labeling in the mix-and-label sample when compared to the label-and-mix sample indicates the presence of an inhibitor excess in one of the proteomes. This method is broadly applicable for detecting inhibitors against any proteome containing protein activities of interest. As a proof of concept, we applied convolution ABPP to analyze secreted proteomes from Pseudomonas syringae-infected Nicotiana benthamiana leaves to reveal the presence of a beta-galactosidase inhibitor.

  5. Bad pixel mapping

    NASA Astrophysics Data System (ADS)

    Smith, Roger M.; Hale, David; Wizinowich, Peter

    2014-07-01

    Bad pixels are generally treated as a loss of useable area and then excluded from averaged performance metrics. The definition and detection of "bad pixels" or "cosmetic defects" are seldom discussed, perhaps because they are considered self-evident or of minor consequence for any scientific grade detector; however, the ramifications can be more serious than generally appreciated. While the definition of pixel performance is generally understood, the classification of pixels as useable is highly application-specific, as are the consequences of ignoring or interpolating over such pixels. CMOS sensors (including NIR detectors) exhibit less compact distributions of pixel properties than CCDs. The extended tails in these distributions result in a steeper increase in bad pixel counts as performance thresholds are tightened, which comes as a surprise to many users. To illustrate how some applications are much more sensitive to bad pixels than others, we present a bad pixel mapping exercise for the Teledyne H2RG used as the NIR tip-tilt sensor in the Keck-1 Adaptive Optics system. We use this example to illustrate the wide range of metrics by which a pixel might be judged inadequate. These include pixel bump bond connectivity, vignetting, addressing faults in the mux, severe sensitivity deficiency of some pixels, nonlinearity, poor signal linearity, low full well, poor mean-variance linearity, excessive noise and high dark current. Some pixels appear bad by multiple metrics. We also discuss the importance of distinguishing true performance outliers from measurement errors. We note how the complexity of these issues has ramifications for sensor procurement and acceptance testing strategies.

  6. PixelLearn

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph

    2006-01-01

    PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.

  7. High stroke pixel for a deformable mirror

    DOEpatents

    Miles, Robin R.; Papavasiliou, Alexandros P.

    2005-09-20

    A mirror pixel that can be fabricated using standard MEMS methods for a deformable mirror. The pixel is electrostatically actuated and is capable of the high deflections needed for space-based mirror applications. In one embodiment, the mirror comprises three layers, a top or mirror layer, a middle layer which consists of flexures, and a comb drive layer, with the flexures of the middle layer attached to the mirror layer and to the comb drive layer. The comb drives are attached to a frame via spring flexures. A number of these mirror pixels can be used to construct a large mirror assembly. The actuator for the mirror pixel may be configured as a crenellated beam with one end fixedly secured, or configured as a scissor jack. The mirror pixels may be used in various applications requiring high stroke adaptive optics.

  8. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  9. Local dynamic range compensation for scanning electron microscope imaging system by sub-blocking multiple peak HE with convolution.

    PubMed

    Sim, K S; Teh, V; Tey, Y C; Kho, T K

    2016-11-01

    This paper introduces a new technique to improve scanning electron microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with convolution operator. Results show that this modified MPHE performs better than the original MPHE. In addition, the sub-blocking method includes a convolution operator that helps to remove the blocking effect in SEM images after the new technique is applied. Hence, the convolution operator effectively removes the blocking effect by properly distributing suitable pixel values across the whole image. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.
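
    The overall flow, tile-wise histogram equalization followed by a smoothing convolution to suppress block seams, can be sketched as below; the equalization variant, block size and filter are illustrative assumptions, not the authors' MPHE, and an 8-bit grayscale image is assumed.

        # Per-block histogram equalization, then a smoothing convolution to hide block seams.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def equalize(block):
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            cdf = hist.cumsum() / block.size
            return cdf[block.astype(np.uint8)] * 255.0

        def sub_block_equalize(image, block=64, smooth=5):
            out = np.zeros(image.shape, dtype=float)
            h, w = image.shape
            for i in range(0, h, block):
                for j in range(0, w, block):
                    out[i:i + block, j:j + block] = equalize(image[i:i + block, j:j + block])
            return uniform_filter(out, size=smooth)    # convolution step removes block seams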

  10. The Convolution Method in Neutrino Physics Searches

    SciTech Connect

    Tsakstara, V.; Kosmas, T. S.; Chasioti, V. C.; Divari, P. C.; Sinatkas, J.

    2007-12-26

    We concentrate on the convolution method used in nuclear and astro-nuclear physics studies and, in particular, in the investigation of the nuclear response of various neutrino detection targets to the energy-spectra of specific neutrino sources. Since the reaction cross sections of the neutrinos with nuclear detectors employed in experiments are extremely small, very fine and fast convolution techniques are required. Furthermore, sophisticated de-convolution methods are also needed whenever a comparison between calculated unfolded cross sections and existing convoluted results is necessary.
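
    In its simplest form, the folding is a single quadrature of the energy-dependent cross section against the source spectrum; a minimal sketch, assuming both are tabulated on a common energy grid (units left abstract):

        import numpy as np

        def folded_event_rate(energies, cross_section, flux):
            """Fold sigma(E) with a neutrino energy spectrum f(E):
            R = integral of sigma(E) * f(E) dE, via the trapezoidal rule."""
            y = cross_section * flux
            return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(energies))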

  11. Data convolution and combination operation (COCOA) for motion ghost artifacts reduction.

    PubMed

    Huang, Feng; Lin, Wei; Börnert, Peter; Li, Yu; Reykowski, Arne

    2010-07-01

    A novel method, data convolution and combination operation, is introduced for the reduction of ghost artifacts due to motion or flow during data acquisition. Since neighboring k-space data points from different coil elements have strong correlations, a new "synthetic" k-space with dispersed motion artifacts can be generated through convolution for each coil. The corresponding convolution kernel can be self-calibrated using the acquired k-space data. The synthetic and the acquired data sets can be checked for consistency to identify k-space areas that are motion corrupted. Subsequently, these two data sets can be combined appropriately to produce a k-space data set showing a reduced level of motion induced error. If the acquired k-space contains isolated error, the error can be completely eliminated through data convolution and combination operation. If the acquired k-space data contain widespread errors, the application of the convolution also significantly reduces the overall error. Results with simulated and in vivo data demonstrate that this self-calibrated method robustly reduces ghost artifacts due to swallowing, breathing, or blood flow, with a minimum impact on the image signal-to-noise ratio. (c) 2010 Wiley-Liss, Inc.
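
    The synthesize/compare/combine idea can be sketched compactly; the block below is a simplified, hypothetical illustration (a single 1-D readout, a small symmetric neighborhood kernel, and a hard consistency threshold, none of which are the published parameters), not the actual COCOA implementation.

        import numpy as np

        def cocoa_like_sketch(kspace, half=2, thresh=3.0):
            """kspace: complex array of shape (ncoil, npts) for one readout line.
            Fit a self-calibrated kernel that predicts each sample from its
            neighbors across all coils, synthesize a second k-space, flag
            samples where the two disagree, and combine."""
            ncoil, npts = kspace.shape
            offsets = [o for o in range(-half, half + 1) if o != 0]
            rows, targets = [], []
            for i in range(half, npts - half):
                rows.append(np.concatenate([kspace[:, i + o] for o in offsets]))
                targets.append(kspace[:, i])
            A = np.array(rows)                         # (nsamples, ncoil * len(offsets))
            B = np.array(targets)                      # (nsamples, ncoil)
            W, *_ = np.linalg.lstsq(A, B, rcond=None)  # self-calibrated kernel weights
            synth = kspace.copy()
            synth[:, half:npts - half] = (A @ W).T
            err = np.abs(kspace - synth)
            bad = err > thresh * np.median(err)        # samples flagged as motion corrupted
            return np.where(bad, synth, kspace)        # simple hard combination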

  12. Simplified Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.

    1986-01-01

    Some complicated intermediate steps shortened or eliminated. Decoding of convolutional error-correcting digital codes simplified by new errortrellis syndrome technique. In new technique, syndrome vector not computed. Instead, advantage taken of newly-derived mathematical identities simplify decision tree, folding it back on itself into form called "error trellis." This trellis graph of all path solutions of syndrome equations. Each path through trellis corresponds to specific set of decisions as to received digits. Existing decoding algorithms combined with new mathematical identities reduce number of combinations of errors considered and enable computation of correction vector directly from data and check bits as received.

  13. Scene text detection via extremal region based double threshold convolutional network classification

    PubMed Central

    Zhu, Wei; Lou, Jing; Chen, Longtao; Xia, Qingyuan

    2017-01-01

    In this paper, we present a robust text detection approach in natural images which is based on region proposal mechanism. A powerful low-level detector named saliency enhanced-MSER extended from the widely-used MSER is proposed by incorporating saliency detection methods, which ensures a high recall rate. Given a natural image, character candidates are extracted from three channels in a perception-based illumination invariant color space by saliency-enhanced MSER algorithm. A discriminative convolutional neural network (CNN) is jointly trained with multi-level information including pixel-level and character-level information as character candidate classifier. Each image patch is classified as strong text, weak text and non-text by double threshold filtering instead of conventional one-step classification, leveraging confident scores obtained via CNN. To further prune non-text regions, we develop a recursive neighborhood search algorithm to track credible texts from weak text set. Finally, characters are grouped into text lines using heuristic features such as spatial location, size, color, and stroke width. We compare our approach with several state-of-the-art methods, and experiments show that our method achieves competitive performance on public datasets ICDAR 2011 and ICDAR 2013. PMID:28820891

  14. Scene text detection via extremal region based double threshold convolutional network classification.

    PubMed

    Zhu, Wei; Lou, Jing; Chen, Longtao; Xia, Qingyuan; Ren, Mingwu

    2017-01-01

    In this paper, we present a robust text detection approach in natural images which is based on region proposal mechanism. A powerful low-level detector named saliency enhanced-MSER extended from the widely-used MSER is proposed by incorporating saliency detection methods, which ensures a high recall rate. Given a natural image, character candidates are extracted from three channels in a perception-based illumination invariant color space by saliency-enhanced MSER algorithm. A discriminative convolutional neural network (CNN) is jointly trained with multi-level information including pixel-level and character-level information as character candidate classifier. Each image patch is classified as strong text, weak text and non-text by double threshold filtering instead of conventional one-step classification, leveraging confident scores obtained via CNN. To further prune non-text regions, we develop a recursive neighborhood search algorithm to track credible texts from weak text set. Finally, characters are grouped into text lines using heuristic features such as spatial location, size, color, and stroke width. We compare our approach with several state-of-the-art methods, and experiments show that our method achieves competitive performance on public datasets ICDAR 2011 and ICDAR 2013.
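
    The double-threshold step followed by the recursive neighborhood search can be illustrated with a small sketch; the threshold values, the score format, and the adjacency representation below are assumptions made for illustration rather than the authors' implementation.

        def double_threshold_tracking(scores, adjacency, t_low=0.3, t_high=0.7):
            """scores[i]: CNN text-confidence of candidate region i.
            adjacency: dict mapping a region index to its neighboring indices.
            Regions above t_high are strong text, regions between the two
            thresholds are weak text; weak regions are kept only if they can
            be reached from a strong region through the neighborhood graph."""
            strong = {i for i, s in enumerate(scores) if s >= t_high}
            weak = {i for i, s in enumerate(scores) if t_low <= s < t_high}
            kept = set(strong)
            frontier = list(strong)
            while frontier:
                i = frontier.pop()
                for j in adjacency.get(i, []):
                    if j in weak and j not in kept:
                        kept.add(j)
                        frontier.append(j)
            return kept

        # Example: region 3 is weak but connected to strong region 0 via region 1.
        # double_threshold_tracking([0.9, 0.5, 0.2, 0.6], {0: [1], 1: [0, 3], 3: [1]})
        # -> {0, 1, 3}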

  15. High density pixel array

    NASA Technical Reports Server (NTRS)

    Wiener-Avnear, Eliezer (Inventor); McFall, James Earl (Inventor)

    2004-01-01

    A pixel array device is fabricated by a laser micro-milling method under strict process control conditions. The device has an array of pixels bonded together with an adhesive filling the grooves between adjacent pixels. The array is fabricated by moving a substrate relative to a laser beam of predetermined intensity at a controlled, constant velocity along a predetermined path defining a set of grooves between adjacent pixels so that a predetermined laser flux per unit area is applied to the material, and repeating the movement for a plurality of passes of the laser beam until the grooves are ablated to a desired depth. The substrate is of an ultrasonic transducer material in one example for fabrication of a 2D ultrasonic phase array transducer. A substrate of phosphor material is used to fabricate an X-ray focal plane array detector.

  16. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  17. Convolution-deconvolution in DIGES

    SciTech Connect

    Philippacopoulos, A.J.; Simos, N.

    1995-05-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. The non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, both in terms of deterministic as well as probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.

  18. Modified cubic convolution resampling for Landsat

    NASA Technical Reports Server (NTRS)

    Prakash, A.; Mckee, B.

    1985-01-01

    An overview is given of Landsat Thematic Mapper (TM) resampling techniques, including a modification of the well-known cubic convolution interpolator used to provide geometric correction for TM data. Post-launch study has shown that the modified cubic convolution interpolator can selectively enhance or suppress frequency bands in the output image. This selectivity is demonstrated on TM Band 3 imagery.
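
    For reference, the classical one-dimensional cubic convolution (Keys) kernel that such interpolators build on can be written out directly; the sketch below assumes the standard kernel with parameter a = -0.5 and simple edge replication, not the modified TM interpolator discussed above.

        import numpy as np

        def keys_kernel(x, a=-0.5):
            """Piecewise-cubic convolution kernel; a = -0.5 is the classical choice."""
            x = np.abs(np.asarray(x, dtype=float))
            out = np.zeros_like(x)
            m1 = x <= 1
            m2 = (x > 1) & (x < 2)
            out[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
            out[m2] = a * (x[m2]**3 - 5 * x[m2]**2 + 8 * x[m2] - 4)
            return out

        def resample_1d(samples, positions, a=-0.5):
            """Interpolate uniformly spaced samples at fractional positions."""
            samples = np.asarray(samples, dtype=float)
            result = np.zeros(len(positions))
            for k, xq in enumerate(positions):
                base = int(np.floor(xq))
                for j in range(base - 1, base + 3):            # 4-point neighborhood
                    jj = int(np.clip(j, 0, len(samples) - 1))  # replicate edge samples
                    result[k] += samples[jj] * keys_kernel(np.array([xq - j]), a)[0]
            return result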

  19. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  20. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm.

    PubMed

    Abbasi, Mahdi; Naghsh-Nilchi, Ahmad-Reza

    2012-06-20

    Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of the human organs such as lungs, heart, brain and breast. Each practical EIT reconstruction algorithm should be efficient enough in terms of convergence rate, and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in D-bar framework. At the first step, synthetic and experimental data were used to compute an intermediate object named scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via sinc-convolution algorithm to find the square root of the conductivity for each pixel of image. For the purpose of comparison, multigrid and NOSER algorithms were implemented under a similar setting. Quality of reconstructions of synthetic models was tested against GREIT approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Evaluation of synthetic reconstructions shows that the quality of sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape-deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the least degree of relative errors and the most degree of truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by sinc-convolution algorithm. Parametric evaluation demonstrates the efficiency of sinc-convolution to reconstruct accurate conductivity images from experimental data. Excellent results in phantom and clinical reconstructions using sinc-convolution support parametric assessment results

  1. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm

    PubMed Central

    2012-01-01

    Background Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of the human organs such as lungs, heart, brain and breast. Each practical EIT reconstruction algorithm should be efficient enough in terms of convergence rate, and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in D-bar framework. Methods At the first step, synthetic and experimental data were used to compute an intermediate object named scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via sinc-convolution algorithm to find the square root of the conductivity for each pixel of image. For the purpose of comparison, multigrid and NOSER algorithms were implemented under a similar setting. Quality of reconstructions of synthetic models was tested against GREIT approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results Evaluation of synthetic reconstructions shows that the quality of sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape-deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the least degree of relative errors and the most degree of truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by sinc-convolution algorithm. Conclusions Parametric evaluation demonstrates the efficiency of sinc-convolution to reconstruct accurate conductivity images from experimental data. Excellent results in phantom and clinical reconstructions using sinc-convolution support parametric assessment results.

  2. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  3. Effect of Pixel’s Spatial Characteristics on Recognition of Isolated Pixelized Chinese Character

    PubMed Central

    Yang, Kun; Liu, Shuang; Wang, Hong; Liu, Wei; Wu, Yaowei

    2015-01-01

    The influence of a pixel's spatial characteristics on the recognition of isolated Chinese characters was investigated using simulated prosthetic vision. The accuracy of Chinese character recognition with 4 pixel numbers (6*6, 8*8, 10*10, and 12*12 pixel arrays), 3 pixel shapes (Square, Dot, and Gaussian), and different pixel spacings was tested through a head-mounted display (HMD). A captured image of Chinese characters in the Hei font style was pixelized with Square, Dot, and Gaussian pixels. Results showed that pixel number was the most important factor affecting the recognition of isolated pixelized Chinese characters, and recognition accuracy increased with pixel number. A 10*10 pixel array could provide enough information for people to recognize an isolated Chinese character. At low resolution (6*6 and 8*8 pixel arrays), there was little difference in recognition accuracy between different pixel shapes and pixel spacings, while at high resolution (10*10 and 12*12 pixel arrays), variations of pixel shape and pixel spacing did not affect the recognition of isolated pixelized Chinese characters. PMID:26628934
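
    A toy version of this kind of pixelized (phosphene-array) presentation can be generated in a few lines; the cell size, spacing model, and Gaussian width below are illustrative assumptions, not the parameters of the study.

        import numpy as np

        def pixelize(image, n=10, shape="square", spacing=1.0, dot_sigma=0.3, cell=20):
            """Render a grayscale character image as an n-by-n array of square or
            Gaussian 'pixels', each drawn in a cell of `cell` x `cell` screen pixels."""
            h, w = image.shape
            ys = np.linspace(0, h, n + 1).astype(int)
            xs = np.linspace(0, w, n + 1).astype(int)
            # Block-average the character image down to n x n gray levels.
            levels = np.array([[image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                                for j in range(n)] for i in range(n)])
            canvas = np.zeros((n * cell, n * cell))
            radius = max(1, int(cell * (1.0 - 0.2 * spacing) / 2))   # crude spacing model
            yy, xx = np.mgrid[-cell // 2:cell // 2, -cell // 2:cell // 2]
            gauss = np.exp(-(xx**2 + yy**2) / (2.0 * (dot_sigma * cell)**2))
            for i in range(n):
                for j in range(n):
                    block = canvas[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
                    if shape == "square":
                        c = cell // 2
                        block[c - radius:c + radius, c - radius:c + radius] = levels[i, j]
                    else:                              # "dot" or "gaussian" phosphene
                        block += levels[i, j] * gauss
            return canvas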

  4. Deep convolutional neural network for prostate MR segmentation

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, Lizhi; Fei, Baowei

    2017-03-01

    Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inference for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3%+/-3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model could yield satisfactory segmentation of the prostate.
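
    The evaluation metric, the Dice similarity coefficient, is simply twice the overlap of the two masks divided by the sum of their sizes; a minimal sketch:

        import numpy as np

        def dice_coefficient(prediction, ground_truth):
            """Dice similarity coefficient between two binary segmentation masks."""
            p = prediction.astype(bool)
            g = ground_truth.astype(bool)
            return 2.0 * np.logical_and(p, g).sum() / (p.sum() + g.sum())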

  5. Learning Building Extraction in Aerial Scenes with Convolutional Networks.

    PubMed

    Yuan, Jiangye

    2017-09-11

    Extracting buildings from aerial scene images is an important task with many applications. However, this task is highly difficult to automate due to extremely large variations of building appearances, and still heavily relies on manual work. To attack this problem, we design a deep convolutional network with a simple structure that integrates activation from multiple layers for pixel-wise prediction, and introduce the signed distance function of building boundaries as the output representation, which has an enhanced representation power. To train the network, we leverage abundant building footprint data from geographic information systems (GIS) to generate large amounts of labeled data. The trained model achieves a superior performance on datasets that are significantly larger and more complex than those used in prior work, demonstrating that the proposed method provides a promising and scalable solution for automating this labor-intensive task.
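
    The signed distance output representation can be derived from binary footprint masks with a standard Euclidean distance transform; the sketch below is one plausible construction (the sign convention and truncation value are assumptions, not taken from the paper).

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def signed_distance(footprint_mask, truncate=20):
            """Signed distance to the building boundary: positive inside the
            footprint, negative outside, clipped to +/- `truncate` pixels."""
            mask = footprint_mask.astype(bool)
            inside = distance_transform_edt(mask)      # distance to nearest background pixel
            outside = distance_transform_edt(~mask)    # distance to nearest building pixel
            return np.clip(inside - outside, -truncate, truncate)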

  6. Star-galaxy classification using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Kim, Edward J.; Brunner, Robert J.

    2017-02-01

    Most existing star-galaxy classifiers use the reduced summary information from catalogues, requiring careful feature extraction and selection. The latest advances in machine learning that use deep convolutional neural networks (ConvNets) allow a machine to automatically learn the features directly from the data, minimizing the need for input from human experts. We present a star-galaxy classification framework that uses deep ConvNets directly on the reduced, calibrated pixel values. Using data from the Sloan Digital Sky Survey and the Canada-France-Hawaii Telescope Lensing Survey, we demonstrate that ConvNets are able to produce accurate and well-calibrated probabilistic classifications that are competitive with conventional machine learning techniques. Future advances in deep learning may bring more success with current and forthcoming photometric surveys, such as the Dark Energy Survey and the Large Synoptic Survey Telescope, because deep neural networks require very little manual feature engineering.

  7. Convolutional neural network features based change detection in satellite images

    NASA Astrophysics Data System (ADS)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) avoid this problem by learning hierarchical representations in an unsupervised manner directly from data without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep Convolutional Neural Network (CNN) features based HR satellite image change detection method is proposed. The main guideline is to produce a change detection map directly from two images using a pretrained CNN. This method avoids the limited performance of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, a concatenation step is evaluated after a normalization step, resulting in a unique higher dimensional feature map. Finally, a change map is computed using pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images according to qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
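
    The final step, a pixel-wise Euclidean distance between the normalized deep features of the two dates, is straightforward; a minimal sketch, assuming the feature maps have already been extracted and resampled to a common height x width x channels grid:

        import numpy as np

        def feature_change_map(features_t1, features_t2, eps=1e-8):
            """Per-pixel Euclidean distance between L2-normalized CNN feature maps."""
            f1 = features_t1 / (np.linalg.norm(features_t1, axis=-1, keepdims=True) + eps)
            f2 = features_t2 / (np.linalg.norm(features_t2, axis=-1, keepdims=True) + eps)
            return np.linalg.norm(f1 - f2, axis=-1)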

  8. Fovea detection in optical coherence tomography using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Liefers, Bart; Venhuizen, Freerk G.; Theelen, Thomas; Hoyng, Carel; van Ginneken, Bram; Sánchez, Clara I.

    2017-02-01

    The fovea is an important clinical landmark that is used as a reference for assessing various quantitative measures, such as central retinal thickness or drusen count. In this paper we propose a novel method for automatic detection of the foveal center in Optical Coherence Tomography (OCT) scans. Although the clinician will generally aim to center the OCT scan on the fovea, post-acquisition image processing will give a more accurate estimate of the true location of the foveal center. A Convolutional Neural Network (CNN) was trained on a set of 781 OCT scans that classifies each pixel in the OCT B-scan with a probability of belonging to the fovea. Dilated convolutions were used to obtain a large receptive field, while maintaining pixel-level accuracy. In order to train the network more effectively, negative patches were sampled selectively after each epoch. After CNN classification of the entire OCT volume, the predicted foveal center was chosen as the voxel with maximum output probability, after applying an optimized three-dimensional Gaussian blurring. We evaluate the performance of our method on a data set of 99 OCT scans presenting different stages of Age-related Macular Degeneration (AMD). The fovea was correctly detected in 96.9% of the cases, with a mean distance error of 73 μm (+/-112 μm). This result was comparable to the performance of a second human observer who obtained a mean distance error of 69 μm (+/-94 μm). Experiments showed that the proposed method is accurate and robust even in retinas heavily affected by pathology.
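
    The last step, smoothing the CNN probability volume and taking the location of the maximum, can be written in two lines; the Gaussian widths below are placeholders rather than the optimized values from the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def foveal_center(probability_volume, sigma=(2, 6, 6)):
            """Return the (slice, row, column) index of the most probable fovea voxel
            after 3-D Gaussian smoothing of the per-pixel CNN output."""
            smoothed = gaussian_filter(probability_volume, sigma=sigma)
            return np.unravel_index(np.argmax(smoothed), smoothed.shape)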

  9. Two-dimensional cubic convolution.

    PubMed

    Reichenbach, Stephen E; Geng, Frank

    2003-01-01

    The paper develops two-dimensional (2D), nonseparable, piecewise cubic convolution (PCC) for image interpolation. Traditionally, PCC has been implemented based on a one-dimensional (1D) derivation with a separable generalization to two dimensions. However, typical scenes and imaging systems are not separable, so the traditional approach is suboptimal. We develop a closed-form derivation for a two-parameter, 2D PCC kernel with support [-2,2] x [-2,2] that is constrained for continuity, smoothness, symmetry, and flat-field response. Our analyses, using several image models, including Markov random fields, demonstrate that the 2D PCC yields small improvements in interpolation fidelity over the traditional, separable approach. The constraints on the derivation can be relaxed to provide greater flexibility and performance.

  10. Selecting Pixels for Kepler Downlink

    NASA Technical Reports Server (NTRS)

    Bryson, Stephen T.; Jenkins, Jon M.; Klaus, Todd C.; Cote, Miles T.; Quintana, Elisa V.; Hall, Jennifer R.; Ibrahim, Khadeejah; Chandrasekaran, Hema; Caldwell, Douglas A.; Van Cleve, Jeffrey E.

    2010-01-01

    The Kepler mission monitors > 100,000 stellar targets using 42 2200 x 1024 pixel CCDs. Bandwidth constraints prevent the downlink of all 96 million pixels per 30-minute cadence, so the Kepler spacecraft downlinks a specified collection of pixels for each target. These pixels are selected by considering the object brightness, background and the signal-to-noise of each pixel, and are optimized to maximize the signal-to-noise ratio of the target. This paper describes pixel selection, creation of spacecraft apertures that efficiently capture selected pixels, and aperture assignment to a target. Diagnostic apertures, short-cadence targets and custom specified shapes are discussed.
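
    One common way to realize this kind of SNR-driven selection is a greedy sweep: add pixels in order of decreasing per-pixel SNR and stop once the aggregate SNR of the aperture no longer improves. The sketch below illustrates that idea under simplified assumptions (independent noise, static per-pixel signal and noise estimates); it is not the Kepler pipeline code.

        import numpy as np

        def select_pixels(signal, noise):
            """signal, noise: 2-D arrays of expected target flux and noise per pixel.
            Returns the chosen pixel coordinates and the aggregate SNR achieved."""
            sig = signal.ravel()
            sigma = noise.ravel()
            order = np.argsort(sig / sigma)[::-1]      # highest per-pixel SNR first
            best_snr, chosen = -np.inf, []
            sig_sum, var_sum = 0.0, 0.0
            for idx in order:
                snr = (sig_sum + sig[idx]) / np.sqrt(var_sum + sigma[idx] ** 2)
                if snr <= best_snr:
                    break                              # fainter pixels only dilute the SNR
                sig_sum += sig[idx]
                var_sum += sigma[idx] ** 2
                best_snr = snr
                chosen.append(np.unravel_index(idx, signal.shape))
            return chosen, best_snr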

  11. Illuminant spectrum estimation at a pixel.

    PubMed

    Ratnasingam, Sivalogeswaran; Hernández-Andrés, Javier

    2011-04-01

    In this paper, an algorithm is proposed to estimate the spectral power distribution of a light source at a pixel. The first step of the algorithm is forming a two-dimensional illuminant invariant chromaticity space. In estimating the illuminant spectrum, generalized inverse estimation and Wiener estimation methods were applied. The chromaticity space was divided into small grids and a weight matrix was used to estimate the illuminant spectrum illuminating the pixels that fall within a grid. The algorithm was tested using a different number of sensor responses to determine the optimum number of sensors for accurate colorimetric and spectral reproduction. To investigate the performance of the algorithm realistically, the responses were multiplied with Gaussian noise and then quantized to 10 bits. The algorithm was tested with standard and measured data. Based on the results presented, the algorithm can be used with six sensors to obtain a colorimetrically good estimate of the illuminant spectrum at a pixel.
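
    Wiener estimation of a spectrum from a handful of sensor responses has a standard closed form, S_hat = Cs A^T (A Cs A^T + Cn)^(-1) r; the sketch below assumes the prior correlation matrix Cs is built from a set of training spectra and that the sensor noise is white.

        import numpy as np

        def wiener_estimate(responses, A, training_spectra, noise_var=1e-4):
            """responses        : (n_sensors,) measured, noisy sensor outputs r
            A                : (n_sensors, n_wavelengths) sensor spectral sensitivities
            training_spectra : (n_training, n_wavelengths) spectra used for the prior"""
            Cs = training_spectra.T @ training_spectra / len(training_spectra)
            Cn = noise_var * np.eye(A.shape[0])
            W = Cs @ A.T @ np.linalg.inv(A @ Cs @ A.T + Cn)
            return W @ responses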

  12. Constructing Parton Convolution in Effective Field Theory

    SciTech Connect

    Chen, Jiunn-Wei; Ji, Xiangdong

    2001-10-08

    Parton convolution models have been used extensively in describing the sea quarks in the nucleon and explaining quark distributions in nuclei (the EMC effect). From the effective field theory point of view, we construct the parton convolution formalism which has been the underlying conception of all convolution models. We explain the significance of scheme and scale dependence of the auxiliary quantities such as the pion distributions in a nucleon. As an application, we calculate the complete leading nonanalytic chiral contribution to the isovector component of the nucleon sea.

  13. The multipoint de la Vallee-Poussin problem for a convolution operator

    SciTech Connect

    Napalkov, Valentin V; Nuyatov, Andrey A

    2012-02-28

    Conditions are discovered which ensure that the space of entire functions can be represented as the sum of an ideal in the space of entire functions and the kernel of a convolution operator. In this way conditions for the multipoint de la Vallee-Poussin problem to have a solution are found. Bibliography: 14 titles.

  14. Small pixel oversampled IR focal plane arrays

    NASA Astrophysics Data System (ADS)

    Caulfield, John; Curzan, Jon; Lewis, Jay; Dhar, Nibir

    2015-06-01

    We report on a new high definition, high charge capacity 2.1 Mpixel MWIR Infrared Focal Plane Array. This high definition (HD) FPA utilizes a small 5 um pitch pixel size which is below the Nyquist limit imposed by the optical system's Point Spread Function (PSF). These smaller sub-diffraction-limited pixels allow spatial oversampling of the image. We show that oversampling IRFPAs enables improved fidelity in imaging including resolution improvements, advanced pixel correlation processing to reduce false alarm rates, improved detection ranges, and an improved ability to track closely spaced objects. Small pixel HD arrays are viewed as the key component enabling lower size, power and weight of the IR Sensor System. Small pixels enable a reduction in the size of the system's components from the smaller detector and ROIC array, the reduced optics focal length and overall lens size, resulting in an overall compactness in the sensor package, cooling and associated electronics. The highly sensitive MWIR small pixel HD FPA has the capability to detect dimmer signals at longer ranges than previously demonstrated.

  15. Fast vision through frameless event-based sensing and convolutional processing: application to texture recognition.

    PubMed

    Perez-Carrasco, Jose Antonio; Acha, Begona; Serrano, Carmen; Camunas-Mesa, Luis; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2010-04-01

    Address-event representation (AER) is an emergent hardware technology which shows a high potential for providing in the near future a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level event-based frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for sensing full frames. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in a near future we may witness the appearance of large scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large scale networks using a custom made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.

  16. NRZ Data Asymmetry Corrector and Convolutional Encoder

    NASA Technical Reports Server (NTRS)

    Pfiffner, H. J.

    1983-01-01

    Circuit compensates for timing, amplitude and symmetry perturbations. Data asymmetry corrector and convolutional encoder regenerate data and clock signals in spite of signal variations such as data or clock asymmetry, phase errors, and amplitude variations, then encode data for transmission.

  17. Parallel architectures for computing cyclic convolutions

    NASA Technical Reports Server (NTRS)

    Yeh, C.-S.; Reed, I. S.; Truong, T. K.

    1983-01-01

    In the paper two parallel architectural structures are developed to compute one-dimensional cyclic convolutions. The first structure is based on the Chinese remainder theorem and Kung's pipelined array. The second structure is a direct mapping from the mathematical definition of a cyclic convolution to a computational architecture. To compute a d-point cyclic convolution the first structure needs d/2 inner product cells, while the second structure and Kung's linear array require d cells. However, to compute a cyclic convolution, the second structure requires less time than both the first structure and Kung's linear array. Another application of the second structure is to multiply a Toeplitz matrix by a vector. A table is listed to compare these two structures and Kung's linear array. Both structures are simple and regular and are therefore suitable for VLSI implementation.
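
    The defining sum that the second architecture maps directly into hardware, and its equivalent circulant (Toeplitz-like) matrix-vector form, are easy to state; a small numerical sketch:

        import numpy as np

        def cyclic_convolution_direct(x, h):
            """d-point cyclic convolution straight from the definition:
            y[i] = sum_j x[j] * h[(i - j) mod d]."""
            d = len(x)
            return np.array([sum(x[j] * h[(i - j) % d] for j in range(d))
                             for i in range(d)])

        def cyclic_convolution_circulant(x, h):
            """The same result written as a circulant matrix times a vector,
            i.e., the Toeplitz-matrix-by-vector product mentioned above."""
            d = len(x)
            C = np.array([[h[(i - j) % d] for j in range(d)] for i in range(d)])
            return C @ np.asarray(x)

        # Both agree with the FFT identity ifft(fft(x) * fft(h)) for real or complex data.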

  19. Utilization of low-redundancy convolutional codes

    NASA Technical Reports Server (NTRS)

    Cain, J. B.

    1973-01-01

    This paper suggests guidelines for the utilization of low-redundancy convolutional codes with emphasis on providing a quick look capability (no decoding) and a moderate amount of coding gain. The performance and implementation complexity of threshold, Viterbi, and sequential decoding when used with low-redundancy, systematic, convolutional codes is discussed. An extensive list of optimum, short constraint length codes is found for use with Viterbi decoding, and several good, long constraint length codes are found for use with sequential decoding.

  20. A note on cubic convolution interpolation.

    PubMed

    Meijering, Erik; Unser, Michael

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.

  1. Classification of breast cancer cytological specimen using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman

    2017-01-01

    The paper presents a deep learning approach for automatic classification of breast tumors based on fine needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. The experiment was carried out on cytological samples derived from 50 patients (25 benign cases + 25 malignant cases) diagnosed at the Regional Hospital in Zielona Góra. To classify microscopic images, we used convolutional neural networks (CNN) of two types: GoogLeNet and AlexNet. Due to the very large size of images of cytological specimens (on average 200000 × 100000 pixels), they were divided into smaller patches of size 256 × 256 pixels. Breast cancer classification is usually based on morphometric features of nuclei. Therefore, training and validation patches were selected using a Support Vector Machine (SVM) so that a suitable amount of cell material was depicted. Neural classifiers were tuned using a GPU accelerated implementation of the gradient descent algorithm. Training error was defined as a cross-entropy classification loss. Classification accuracy was defined as the percentage ratio of successfully classified validation patches to the total number of validation patches. The best accuracy rate of 83% was obtained by the GoogLeNet model. We observed that more misclassified patches belong to malignant cases.

  2. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  3. Frequency domain convolution for SCANSAR

    NASA Astrophysics Data System (ADS)

    Cantraine, Guy; Dendal, Didier

    1994-12-01

    Starting from basic signal expressions, the rigorous formulation of frequency domain convolution is demonstrated, in general and impulse terms, including antenna patterns and squint angle. The major differences with conventional algorithms are discussed and theoretical concepts clarified. In a second part, the philosophy of advanced SAR algorithms is compared with that of a SCANSAR observation (several subswaths). It is proved that a general impulse response can always be written as the product of three factors, i.e., a phasor, an antenna coefficient, and a migration expression, and that the details of antenna effects can be ignored in the usual SAR system, but not the range migration (the situation is reversed in a SCANSAR reconstruction scheme). In a next step, some possible inverse filter kernels (the matched filter, the true inverse filter, ...) for general SAR or SCANSAR mode reconstructions are compared. By adopting a noise corrupted model of data, we get the corresponding Wiener filter, the major interest of which is to avoid all divergence risk. Afterwards, the vocable 'a class of filter' is introduced and summarized by a parametric formulation. Lastly, the homogeneity of the reconstruction, with a noncyclic fast Fourier transform deconvolution, is studied by comparing peak responses according to the burst location. The more homogeneous sensitivity of the Wiener filter, with a steeper fall when the target begins to go outside the antenna pattern, is confirmed. A linear optimal merging of adjacent looks (in azimuth) minimizing the rms noise is also presented, as well as considerations about squint ambiguity.
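
    The Wiener inverse kernel referred to here has the familiar form H* / (|H|^2 + 1/SNR), which never divides by zero and so avoids the divergence risk of the true inverse filter; the block below is a generic one-dimensional illustration (the SNR value is a placeholder), not the full SCANSAR formulation.

        import numpy as np

        def wiener_reconstruct(data, impulse_response, snr=100.0):
            """Frequency-domain reconstruction of `data` with a Wiener-type
            inverse of `impulse_response`, assuming a flat signal-to-noise ratio."""
            H = np.fft.fft(impulse_response, n=len(data))
            W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
            return np.fft.ifft(np.fft.fft(data) * W)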

  4. Pixel-to-Pixel Flat Field Changes on the WFC

    NASA Astrophysics Data System (ADS)

    Gilliland, R. L.; Bohlin, R.

    2007-01-01

    The pixel-to-pixel flat field changes noted by Bohlin and Mack (2005) for the WFC are further quantified. During each period between anneals, a population of pixels with lowered sensitivity develops which is largely reset by the next anneal. The sensitivity deficits are twice as large in the blue as in the red. The pixels with lowered sensitivity appear to be a unique set each anneal cycle, rather than a subset that ‘telegraph’ on and off. The low QE pixels recover 90% of their losses on a time scale of a few monthly anneals, but never return fully. Some evidence for spontaneous recovery of the low QE pixels between anneal cycles is developed, but is not conclusive. The number of low pixels would become a large source of error in the absence of performing anneals on a frequent basis. Prior to cooldown in July 2006, the flat field changes that arise continuously within anneal cycles are larger than cumulative persistent changes in the pixel-to-pixel flats. The pre-cooldown reference flat field remained excellent. Post-cooldown, the number of persistent deviant pixels, although still modest in number, may have reached a level justifying delivery of new pixel-to-pixel flats, although providing such will require acquisition of further data.

  5. Pixel super resolution using wavelength scanning

    DTIC Science & Technology

    2016-04-08

    Journal article in Light: Science & Applications (Nature Publishing Group). Pixel super-resolution using wavelength scanning enhances the resolution of a wide-field imaging system and significantly increases its space-bandwidth product. The effectiveness of this new technique was confirmed by improving the resolution of lens-free imaging.

  6. K2flix: Kepler pixel data visualizer

    NASA Astrophysics Data System (ADS)

    Barentsen, Geert

    2015-03-01

    K2flix makes it easy to inspect the CCD pixel data obtained by NASA's Kepler space telescope. The two-wheeled extended Kepler mission, K2, is affected by new sources of systematics, including pointing jitter and foreground asteroids, that are easier to spot by eye than by algorithm. The code takes Kepler's Target Pixel Files (TPF) as input and turns them into contrast-stretched animated gifs or MPEG-4 movies. K2flix can be used both as a command-line tool or using its Python API.

  7. Nonlinear Pixel Replacement Estimation.

    DTIC Science & Technology

    1986-04-01

    As described, this method does not replace array elements with computed values, but rather replaces them with one of the nine original pixel values.

  8. Gallium arsenide pixel detectors

    NASA Astrophysics Data System (ADS)

    Bates, R.; Campbell, M.; Cantatore, E.; D'Auria, S.; da Vià, C.; del Papa, C.; Heijne, E. M.; Middelkamp, P.; O'Shea, V.; Raine, C.; Ropotar, I.; Scharfetter, L.; Smith, K.; Snoeys, W.

    1998-02-01

    GaAs detectors can be fabricated with bidimensional single-sided electrode segmentation. They have been successfully bonded using flip-chip technology to the Omega-3 silicon read-out chip. We present here the design features of the GaAs pixel detectors and results from a test performed at the CERN SpS with a 120 GeV π- beam. The detection efficiency was 99.2% with a nominal threshold of 5000 e-.

  9. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsymmetric (2, 1) CC.

  11. Infrared astronomy - Pixels to spare

    SciTech Connect

    Mccaughrean, M.

    1991-07-01

    An infrared CCD camera containing an array with 311,040 pixels arranged in 486 rows of 640 each is tested. The array is a chip of platinum silicide (PtSi), sensitive to photons with wavelengths between 1 and 6 microns. Observations of the Hubble Space Telescope, Mars, Pluto and moon are reported. It is noted that the satellite's twin solar-cell arrays, at an apparent separation of about 1 1/4 arc second, are well resolved. Some two dozen video frames were stacked to make each presented image of Mars at 1.6 microns; at this wavelength Mars appears much as it does in visible light. A stack of 11 images at a wavelength of 1.6 microns is used for an image of Jupiter with its Great Red Spot and moons Io and Europa.

  12. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Eppenhof, Koen A. J.; Pluim, Josien P. W.

    2017-02-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.

  13. Cerebral vessels segmentation for light-sheet microscopy image using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hu, Chaoen; Hui, Hui; Wang, Shuo; Dong, Di; Liu, Xia; Yang, Xin; Tian, Jie

    2017-03-01

    Cerebral vessel segmentation is an important step in image analysis for brain function and brain disease studies. To extract all the cerebrovascular patterns, including arteries and capillaries, some filter-based methods are used to segment vessels. However, the design of accurate and robust vessel segmentation algorithms is still challenging, due to the variety and complexity of images, especially in cerebral blood vessel segmentation. In this work, we addressed the problem of automatic and robust segmentation of cerebral micro-vessel structures in cerebrovascular images of mouse brain acquired by a light-sheet microscope. To segment micro-vessels in large-scale image data, we proposed a convolutional neural network (CNN) architecture trained on 1.58 million manually labeled pixels. Three convolutional layers and one fully connected layer were used in the CNN model. We extracted patches of size 32x32 pixels from each acquired brain vessel image as the training data set fed into the CNN for classification. The network was trained to output the probability that the center pixel of an input patch belongs to vessel structures. To build the CNN architecture, a series of mouse brain vascular images acquired from a commercial light sheet fluorescence microscopy (LSFM) system were used for training the model. The experimental results demonstrated that our approach is a promising method for effectively segmenting micro-vessel structures in cerebrovascular images with vessel-dense, nonuniform gray-level and long-scale contrast regions.

  14. The kilopixel array pathfinder project (KAPPa), a 16-pixel integrated heterodyne focal plane array: characterization of the single pixel prototype

    NASA Astrophysics Data System (ADS)

    Wheeler, Caleb H.; Groppi, Christopher E.; Mani, Hamdi; McGarey, Patrick; Kuenzi, Linda; Weinreb, Sander; Russell, Damon S.; Kooi, Jacob W.; Lichtenberger, Arthur W.; Walker, Christopher K.; Kulesa, Craig

    2014-07-01

    We report on the laboratory testing of KAPPa, a 16-pixel proof-of-concept array to enable the creation of THz imaging spectrometers with ~1000 pixels. Creating an array an order of magnitude larger than the existing state of the art of 64 pixels requires a simple and robust design as well as improvements to mixer selection, testing, and assembly. Our testing employs a single pixel test bench where a novel 2D array architecture is tested. The minimum size of the footprint is dictated by the diameter of the drilled feedhorn aperture. In the adjoining detector block, a 6mm × 6mm footprint houses the SIS mixer, LNA, matching and bias networks, and permanent magnet. We present an initial characterization of the single pixel prototype using a computer controlled test bench to determine Y-factors for a parameter space of LO power, LO frequency, IF bandwidth, magnetic field strength, and SIS bias voltage. To reduce the need to replace poorly performing pixels that are already mounted in a large format array, we show techniques to improve SIS mixer selection prior to mounting in the detector block. The 2D integrated 16-pixel array design has evolved as we investigated the properties of the single pixel prototype. Careful design of the prototype has allowed single pixel design improvements to be easily incorporated into the 16-pixel model.

  15. Pixelated neutron image plates

    NASA Astrophysics Data System (ADS)

    Schlapp, M.; Conrad, H.; von Seggern, H.

    2004-09-01

    Neutron image plates (NIPs) have found widespread application as neutron detectors for single-crystal and powder diffraction, small-angle scattering and tomography. After neutron exposure, the image plate can be read out by scanning with a laser. Commercially available NIPs consist of a powder mixture of BaFBr : Eu2+ and Gd2O3 dispersed in a polymer matrix and supported by a flexible polymer sheet. Since BaFBr : Eu2+ is an excellent x-ray storage phosphor, these NIPs are particularly sensitive to γ-radiation, which is always present as a background radiation in neutron experiments. In this work we present results on NIPs consisting of KCl : Eu2+ and LiF that were fabricated into ceramic image plates in which the alkali halides act as a self-supporting matrix without the necessity for using a polymeric binder. An advantage of this type of NIP is the significantly reduced γ-sensitivity. However, the much lower neutron absorption cross section of LiF compared with Gd2O3 demands a thicker image plate for obtaining comparable neutron absorption. The greater thickness of the NIP inevitably leads to a loss in spatial resolution of the image plate. However, this reduction in resolution can be restricted by a novel image plate concept in which a ceramic structure with square cells (referred to as a 'honeycomb') is embedded in the NIP, resulting in a pixelated image plate. In such a NIP the read-out light is confined to the particular illuminated pixel, decoupling the spatial resolution from the optical properties of the image plate material and morphology. In this work, a comparison of experimentally determined and simulated spatial resolutions of pixelated and unstructured image plates for a fixed read-out laser intensity is presented, as well as simulations of the properties of these NIPs at higher laser powers.

  16. Interpolation by two-dimensional cubic convolution

    NASA Astrophysics Data System (ADS)

    Shi, Jiazheng; Reichenbach, Stephen E.

    2003-08-01

    This paper presents results of image interpolation with an improved method for two-dimensional cubic convolution. Convolution with a piecewise cubic is one of the most popular methods for image reconstruction, but the traditional approach uses a separable two-dimensional convolution kernel that is based on a one-dimensional derivation. The traditional, separable method is sub-optimal for the usual case of non-separable images. The improved method in this paper implements the most general non-separable, two-dimensional, piecewise-cubic interpolator with constraints for symmetry, continuity, and smoothness. The improved method of two-dimensional cubic convolution has three parameters that can be tuned to yield maximal fidelity for specific scene ensembles characterized by autocorrelation or power-spectrum. This paper illustrates examples for several scene models (a circular disk of parametric size, a square pulse with parametric rotation, and a Markov random field with parametric spatial detail) and actual images -- presenting the optimal parameters and the resulting fidelity for each model. In these examples, improved two-dimensional cubic convolution is superior to several other popular small-kernel interpolation methods.
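
    As background for the record above, the following sketch implements the standard separable two-dimensional cubic convolution interpolator built from the one-parameter (Keys-type) kernel; the record's improved method is non-separable and three-parameter, so this shows only the baseline it generalizes, and the function names and test image are illustrative.

        import numpy as np

        def cubic_kernel(s, a=-0.5):
            # One-parameter piecewise-cubic convolution kernel; a = -0.5 is a common choice.
            s = abs(s)
            if s <= 1:
                return (a + 2) * s**3 - (a + 3) * s**2 + 1
            if s < 2:
                return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
            return 0.0

        def interp2_separable(img, y, x, a=-0.5):
            # Separable 2-D cubic convolution: product of 1-D kernel weights over a 4x4 neighbourhood.
            iy, ix = int(np.floor(y)), int(np.floor(x))
            value = 0.0
            for m in range(-1, 3):
                for n in range(-1, 3):
                    yy = min(max(iy + m, 0), img.shape[0] - 1)   # clamp at image borders
                    xx = min(max(ix + n, 0), img.shape[1] - 1)
                    value += cubic_kernel(y - (iy + m), a) * cubic_kernel(x - (ix + n), a) * img[yy, xx]
            return value

        img = np.arange(36, dtype=float).reshape(6, 6)
        print(interp2_separable(img, 2.3, 3.7))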

  17. The ALICE Pixel Detector

    NASA Astrophysics Data System (ADS)

    Mercado-Perez, Jorge

    2002-07-01

    The present document is a brief summary of the activities performed during the 2001 Summer Student Programme at CERN under the Scientific Summer at Foreign Laboratories Program organized by the Particles and Fields Division of the Mexican Physical Society (Sociedad Mexicana de Fisica). In this case, the activities were related to the ALICE Pixel Group of the EP-AIT Division, under the supervision of Jeroen van Hunen, a research fellow in this group. First, I give an introduction to and overview of the ALICE experiment, followed by a description of wafer probing. A brief summary of the test beam that we had from July 13th to July 25th is given as well.

  18. Fast space-variant elliptical filtering using box splines.

    PubMed

    Chaudhury, Kunal Narayan; Munoz-Barrutia, Arrate; Unser, Michael

    2010-09-01

    The efficient realization of linear space-variant (non-convolution) filters is a challenging computational problem in image processing. In this paper, we demonstrate that it is possible to filter an image with a Gaussian-like elliptic window of varying size, elongation and orientation using a fixed number of computations per pixel. The associated algorithm, which is based upon a family of smooth compactly supported piecewise polynomials, the radially-uniform box splines, is realized using preintegration and local finite-differences. The radially-uniform box splines are constructed through the repeated convolution of a fixed number of box distributions, which have been suitably scaled and distributed radially in a uniform fashion. The attractive features of these box splines are their asymptotic behavior, their simple covariance structure, and their quasi-separability. They converge to Gaussians with the increase of their order, and are used to approximate anisotropic Gaussians of varying covariance simply by controlling the scales of the constituent box distributions. Based upon the second feature, we develop a technique for continuously controlling the size, elongation and orientation of these Gaussian-like functions. Finally, the quasi-separable structure, along with a certain scaling property of box distributions, is used to efficiently realize the associated space-variant elliptical filtering, which requires O(1) computations per pixel irrespective of the shape and size of the filter.
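
    The preintegration and local finite-difference idea above can be illustrated, in the simplest one-dimensional axis-aligned case, by a box average whose width varies per sample yet costs O(1) per output: a single running sum is computed once, and each output is a difference of two of its samples. This is only a toy analogue of the radially-uniform box-spline filters of the record; names and data are illustrative.

        import numpy as np

        def variable_width_box_average(signal, half_widths):
            # Pre-integrate once, then each output is a finite difference of two prefix sums,
            # so the per-sample cost does not depend on the (space-variant) window width.
            signal = np.asarray(signal, dtype=float)
            prefix = np.concatenate(([0.0], np.cumsum(signal)))
            n = len(signal)
            out = np.empty(n)
            for i, h in enumerate(half_widths):
                lo, hi = max(i - h, 0), min(i + h + 1, n)
                out[i] = (prefix[hi] - prefix[lo]) / (hi - lo)
            return out

        x = np.sin(np.linspace(0, 6, 50)) + 0.1 * np.random.randn(50)
        widths = np.linspace(1, 8, 50).astype(int)   # window grows across the signal
        print(variable_width_box_average(x, widths)[:5])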

  19. Analog pixel array detectors.

    PubMed

    Ercan, A; Tate, M W; Gruner, S M

    2006-03-01

    X-ray pixel array detectors (PADs) are generally thought of as either digital photon counters (DPADs) or X-ray analog-integrating pixel array detectors (APADs). Experiences with APADs, which are especially well suited for X-ray imaging experiments where transient or high instantaneous flux events must be recorded, are reported. The design, characterization and experimental applications of several APAD designs developed at Cornell University are discussed. The simplest design is a 'flash' architecture, wherein successive integrated X-ray images, as short as several hundred nanoseconds in duration, are stored in the detector chips for later off-chip digitization. Radiography experiments using a prototype flash APAD are summarized. Another design has been implemented that combines flash capability with the ability to continuously stream X-ray images at slower (e.g. millisecond) rates. Progress is described towards radiation-hardened APADs that can be tiled to cover a large area. A mixed-mode PAD design, combining many of the attractive features of both APADs and DPADs, is also described.

  20. Uncertainty estimation by convolution using spatial statistics.

    PubMed

    Sanchez-Brea, Luis Miguel; Bernabeu, Eusebio

    2006-10-01

    Kriging has proven to be a useful tool in image processing since it behaves, under regular sampling, as a convolution. Convolution kernels obtained with kriging allow noise filtering and include the effects of the random fluctuations of the experimental data and the resolution of the measuring devices. The uncertainty at each location of the image can also be determined using kriging. However, this procedure is slow since, currently, only matrix methods are available. In this work, we compare the way kriging performs the uncertainty estimation with the standard statistical technique for magnitudes without spatial dependence. As a result, we propose a much faster technique, based on the variogram, to determine the uncertainty using a convolutional procedure. We check the validity of this approach by applying it to one-dimensional images obtained in diffractometry and two-dimensional images obtained by shadow moire.

  1. Molecular graph convolutions: moving beyond fingerprints

    PubMed Central

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-01-01

    Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503

  2. Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2013-01-01

    We give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. For proper étale groupoids, Tu and Xu (Adv Math 207(2):455-483, 2006) provide a map between the periodic cyclic cohomology of a gerbe-twisted convolution algebra and twisted cohomology groups which is similar to the construction of Mathai and Stevenson (Adv Math 200(2):303-335, 2006). When the groupoid is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial techniques to construct a simplicial curvature 3-form representing the class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial curvature 3-form to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  3. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  4. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  5. Image reconstruction by parametric cubic convolution

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Schowengerdt, R. A.

    1983-01-01

    Cubic convolution, which has been discussed by Rifman and McKinnon (1974), was originally developed for the reconstruction of Landsat digital images. In the present investigation, the reconstruction properties of the one-parameter family of cubic convolution interpolation functions are considered, and the image degradation associated with reasonable choices of this parameter is analyzed. With the aid of an analysis in the frequency domain it is demonstrated that in an image-independent sense there is an optimal value for this parameter. The optimal value is not the standard value commonly referenced in the literature. It is also demonstrated that in an image-dependent sense, cubic convolution can be adapted to any class of images characterized by a common energy spectrum.

  6. On the twisted convolution product and the Weyl transformation of tempered distributions

    NASA Astrophysics Data System (ADS)

    Maillard, J. M.

    It is well known that the Weyl transformation on a phase space R^{2l} transforms the elements of L^1(R^{2l}) into trace-class operators and the elements of L^2(R^{2l}) into Hilbert-Schmidt operators on the Hilbert space L^2(R^l); this fact leads to a general method of quantization suggested by E. Wigner and J.E. Moyal and developed by M. Flato, A. Lichnerowicz, C. Fronsdal, D. Sternheimer and F. Bayen for an arbitrary symplectic manifold, known under the name of the star-product method. In this context, it is important to study the Weyl transforms of the tempered distributions on the phase space and those of the star-exponentials which give the spectrum in this process of quantization. We analyze here the relations between the star product, the twisted convolution product and the Weyl transformation of tempered distributions. We introduce symplectic differential operators which permit us to study the structure of the space O'_λ, λ ≠ 0 (similar to the space O'_C) of the left (twisted) convolution operators of L^1(R^{2l}), which permits one to define the twisted convolution product in the space L^1(R^{2l}), and the structure of the admissible symbols for the Weyl transformation (i.e. the domain of the Weyl transformation). We prove that the bounded operators on L^2(R^l) are exactly the Weyl transforms of the bounded (twisted) convolution operators of L^2(R^{2l}). We give an expression of the integral formula of the star product in terms of twisted convolution products which is valid in the most general case. The unitary representations of the Heisenberg group play an important role here.

  7. Architectural style classification of Mexican historical buildings using deep convolutional neural networks and sparse features

    NASA Astrophysics Data System (ADS)

    Obeso, Abraham Montoya; Benois-Pineau, Jenny; Acosta, Alejandro Álvaro Ramirez; Vázquez, Mireya Saraí García

    2017-01-01

    We propose a convolutional neural network to classify images of buildings using sparse features at the network's input in conjunction with primary color pixel values. As a result, a trained neural model is obtained that classifies Mexican buildings into three classes according to architectural style: prehispanic, colonial, and modern, with an accuracy of 88.01%. We face the problem of limited training data due to the unequal availability of cultural material, and propose a data augmentation and oversampling method to address it. The results are encouraging and allow for prefiltering of the content in search tasks.

  8. Multihop optical network with convolutional coding

    NASA Astrophysics Data System (ADS)

    Chien, Sufong; Takahashi, Kenzo; Prasad Majumder, Satya

    2002-01-01

    We evaluate the bit-error-rate (BER) performance of a multihop optical ShuffleNet with and without convolutional coding. Computed results show that coding yields a considerable improvement in network performance, in terms of an increased number of traversable hops for a given transmitter power at a given BER. For a rate-1/2 convolutional code with constraint length K = 9 at BER = 10^-9, the hop gains are found to be 20 hops for hot-potato routing and 7 hops for single-buffer routing at a transmitter power of 0 dBm. The hop gain can be further increased by increasing the transmitter power.

  9. Fast convolution algorithms for SAR processing

    NASA Astrophysics Data System (ADS)

    Dall, Jorgen

    Most high resolution SAR processors apply the Fast Fourier Transform (FFT) to implement convolution by a matched filter impulse response. However, a lower computational complexity is attainable with other algorithms which accordingly have the potential of offering faster and/or simpler processors. Thirteen different fast transform and convolution algorithms are presented, and their characteristics are compared with the fundamental requirements imposed on the algorithms by various SAR processing schemes. The most promising algorithm is based on a Fermat Number Transform (FNT). SAR-580 and SEASAT SAR images have been successfully processed with the FNT, and in this connection the range curvature correction, noise properties and processing speed are discussed.

  10. A fast convolution-based methodology to simulate 2-D/3-D cardiac ultrasound images.

    PubMed

    Gao, Hang; Choi, Hon Fai; Claus, Piet; Boonen, Steven; Jaecques, Siegfried; Van Lenthe, G Harry; Van der Perre, Georges; Lauriks, Walter; D'hooge, Jan

    2009-02-01

    This paper describes a fast convolution-based methodology for simulating ultrasound images in a 2-D/3-D sector format as typically used in cardiac ultrasound. The conventional convolution model is based on the assumption of a space-invariant point spread function (PSF) and typically results in linear images. These characteristics are not representative of cardiac data sets. The spatial impulse response method (IRM) has excellent accuracy in the linear domain; however, calculation time can become an issue when scatterer numbers become significant and when 3-D volumetric data sets need to be computed. As a solution to these problems, the current manuscript proposes a new convolution-based methodology (COLE) in which the data sets are produced by reducing the conventional 2-D/3-D convolution model to multiple 1-D convolutions (one for each image line). As an example, simulated 2-D/3-D phantom images are presented along with their gray scale histogram statistics. In addition, the computation time is recorded and contrasted to a commonly used implementation of IRM (Field II). It is shown that COLE can produce anatomically plausible images with local Rayleigh statistics but at improved calculation time (1200 times faster than the reference method).
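
    A minimal sketch of the central idea above, reducing the simulation to one 1-D axial convolution per image line; the Gaussian-modulated pulse and random scatterer map below are illustrative placeholders, not the record's (COLE's) actual point-spread-function model.

        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(0)
        n_lines, n_samples = 64, 512
        scatterers = rng.standard_normal((n_lines, n_samples))           # per-line scattering strength

        t = np.arange(-32, 33)
        pulse = np.cos(2 * np.pi * 0.15 * t) * np.exp(-(t / 10.0) ** 2)  # axial (1-D) pulse

        # One 1-D convolution per image line instead of a full 2-D/3-D convolution.
        rf = np.array([np.convolve(line, pulse, mode="same") for line in scatterers])
        envelope = np.abs(hilbert(rf, axis=1))
        bmode_db = 20 * np.log10(envelope / envelope.max() + 1e-6)       # log-compressed B-mode
        print(rf.shape, bmode_db.min(), bmode_db.max())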

  11. Discretization of continuous convolution operators for accurate modeling of wave propagation in digital holography.

    PubMed

    Chacko, Nikhil; Liebling, Michael; Blu, Thierry

    2013-10-01

    Discretization of continuous (analog) convolution operators by direct sampling of the convolution kernel and use of fast Fourier transforms is highly efficient. However, it assumes the input and output signals are band-limited, a condition rarely met in practice, where signals have finite support or abrupt edges and sampling is nonideal. Here, we propose to approximate signals in analog, shift-invariant function spaces, which do not need to be band-limited, resulting in discrete coefficients for which we derive discrete convolution kernels that accurately model the analog convolution operator while taking into account nonideal sampling devices (such as finite fill-factor cameras). This approach retains the efficiency of direct sampling but not its limiting assumption. We propose fast forward and inverse algorithms that handle finite-length, periodic, and mirror-symmetric signals with rational sampling rates. We provide explicit convolution kernels for computing coherent wave propagation in the context of digital holography. When compared to band-limited methods in simulations, our method leads to fewer reconstruction artifacts when signals have sharp edges or when using nonideal sampling devices.

  12. Imaging properties of pixellated scintillators with deep pixels

    PubMed Central

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2015-01-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10×10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm × 1mm × 20 mm pixels) made by Proteus, Inc. with similar 10×10 arrays of LSO:Ce and BGO (1mm × 1mm × 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10×10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors. PMID:26236070

  13. Imaging properties of pixellated scintillators with deep pixels

    NASA Astrophysics Data System (ADS)

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2014-09-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10x10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm x 1mm x 20 mm pixels) made by Proteus, Inc. with similar 10x10 arrays of LSO:Ce and BGO (1mm x 1mm x 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10x10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors.

  14. Imaging properties of pixellated scintillators with deep pixels.

    PubMed

    Barber, H Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P; Furenlid, Lars R; Miller, Brian W; Parkhurst, Philip; Nagarkar, Vivek V

    2014-08-17

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10×10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm × 1mm × 20 mm pixels) made by Proteus, Inc. with similar 10×10 arrays of LSO:Ce and BGO (1mm × 1mm × 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10×10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of (176)Lu in LSO:Ce and LYSO:Ce detectors.

  15. FPT Algorithm for Two-Dimensional Cyclic Convolutions

    NASA Technical Reports Server (NTRS)

    Truong, Trieu-Kie; Shao, Howard M.; Pei, D. Y.; Reed, Irving S.

    1987-01-01

    Fast-polynomial-transform (FPT) algorithm computes two-dimensional cyclic convolution of two-dimensional arrays of complex numbers. New algorithm uses cyclic polynomial convolutions of same length. Algorithm regular, modular, and expandable.
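
    For reference, the quantity the FPT algorithm computes, the two-dimensional cyclic convolution, can be written down directly and checked against an ordinary FFT implementation; the sketch below is that FFT baseline, not the fast polynomial transform itself.

        import numpy as np

        def cyclic_convolve_2d(a, b):
            # 2-D cyclic (circular) convolution of equal-size arrays via the FFT.
            return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

        a, b = np.random.rand(8, 8), np.random.rand(8, 8)
        c = cyclic_convolve_2d(a, b)

        # Check one output sample against the defining double sum with indices taken mod 8.
        i, j = 3, 5
        direct = sum(a[m, n] * b[(i - m) % 8, (j - n) % 8] for m in range(8) for n in range(8))
        assert np.isclose(c[i, j], direct)
        print(c[i, j], direct)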

  16. Mechanisms of circumferential gyral convolution in primate brains.

    PubMed

    Zhang, Tuo; Razavi, Mir Jalil; Chen, Hanbo; Li, Yujie; Li, Xiao; Li, Longchuan; Guo, Lei; Hu, Xiaoping; Liu, Tianming; Wang, Xianqiao

    2017-06-01

    Mammalian cerebral cortices are characterized by elaborate convolutions. Radial convolutions exhibit homology across primate species and generally are easily identified in individuals of the same species. In contrast, circumferential convolutions vary across species as well as individuals of the same species. However, systematic study of circumferential convolution patterns is lacking. To address this issue, we utilized structural MRI (sMRI) and diffusion MRI (dMRI) data from primate brains. We quantified cortical thickness and circumferential convolutions on gyral banks in relation to axonal pathways and density along the gray matter/white matter boundaries. Based on these observations, we performed a series of computational simulations. Results demonstrated that the interplay of heterogeneous cortex growth and mechanical forces along axons plays a vital role in the regulation of circumferential convolutions. In contrast, gyral geometry controls the complexity of circumferential convolutions. These findings offer insight into the mystery of circumferential convolutions in primate brains.

  17. Towards dropout training for convolutional neural networks.

    PubMed

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
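
    A sketch of one common reading of the scheme above, for a single pooling region: under max-pooling dropout with retain probability p, the i-th smallest of n activations is the output with probability p(1-p)^(n-i), and probabilistic weighted pooling uses exactly these weights at test time. The Monte Carlo check at the end is ours; the record's exact formulation may differ in details.

        import numpy as np

        def prob_weighted_pool(region, retain_prob=0.5):
            # Weight each activation by the probability that it is the maximum of the
            # randomly retained units (ascending sort; the all-dropped case contributes 0).
            a = np.sort(np.asarray(region, dtype=float))
            n, q = len(a), 1.0 - retain_prob
            weights = retain_prob * q ** (n - 1 - np.arange(n))
            return float(weights @ a)

        def max_pool_dropout_sample(region, retain_prob=0.5, rng=None):
            # Training-time behaviour: drop units independently, then take the max (0 if all dropped).
            rng = rng or np.random.default_rng()
            kept = np.asarray(region, dtype=float)[rng.random(len(region)) < retain_prob]
            return float(kept.max()) if kept.size else 0.0

        region = [0.2, 1.5, 0.7, 0.9]
        rng = np.random.default_rng(0)
        mc = np.mean([max_pool_dropout_sample(region, 0.5, rng) for _ in range(100000)])
        print(prob_weighted_pool(region, 0.5), mc)   # the two values should nearly agree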

  18. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to these codes. These concepts are then used to demonstrate, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.

  19. Number-Theoretic Functions via Convolution Rings.

    ERIC Educational Resources Information Center

    Berberian, S. K.

    1992-01-01

    Demonstrates the number-theoretic identity that the Dirichlet (convolution-ring) product of the number-of-divisors function with Euler's totient function (counting the positive integers k less than or equal to and relatively prime to n) equals the sum-of-divisors function, using theory developed about multiplicative functions, the units of a convolution ring, and the Möbius function. (MDH)
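
    The identity above can be checked numerically in a few lines; the helper names below are ours and the tested range is arbitrary.

        from math import gcd

        def divisors(n):
            return [d for d in range(1, n + 1) if n % d == 0]

        tau = lambda n: len(divisors(n))                                    # number of divisors
        phi = lambda n: sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)   # Euler's totient
        sigma = lambda n: sum(divisors(n))                                  # sum of divisors

        def dirichlet(f, g, n):
            # Dirichlet convolution: (f * g)(n) = sum over d | n of f(d) g(n/d)
            return sum(f(d) * g(n // d) for d in divisors(n))

        assert all(dirichlet(tau, phi, n) == sigma(n) for n in range(1, 101))
        print("tau * phi = sigma verified for n = 1..100")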

  20. Convolutions and Their Applications in Information Science.

    ERIC Educational Resources Information Center

    Rousseau, Ronald

    1998-01-01

    Presents definitions of convolutions, mathematical operations between sequences or between functions, and gives examples of their use in information science. In particular they can be used to explain the decline in the use of older literature (obsolescence) or the influence of publication delays on the aging of scientific literature. (Author/LRW)

  1. VLSI Unit for Two-Dimensional Convolutions

    NASA Technical Reports Server (NTRS)

    Liu, K. Y.

    1983-01-01

    Universal logic structure allows same VLSI chip to be used for variety of computational functions required for two dimensional convolutions. Fast polynomial transform technique is extended into tree computational structure composed of two units: fast polynomial transform (FPT) unit and Chinese remainder theorem (CRT) computational unit.

  2. PIXELS: Using field-based learning to investigate students' concepts of pixels and sense of scale

    NASA Astrophysics Data System (ADS)

    Pope, A.; Tinigin, L.; Petcovic, H. L.; Ormand, C. J.; LaDue, N.

    2015-12-01

    Empirical work over the past decade supports the notion that a high level of spatial thinking skill is critical to success in the geosciences. Spatial thinking incorporates a host of sub-skills such as mentally rotating an object, imagining the inside of a 3D object based on outside patterns, unfolding a landscape, and disembedding critical patterns from background noise. In this study, we focus on sense of scale, which refers to how an individual quantifies space, and is thought to develop through kinesthetic experiences. Remote sensing data are increasingly being used for wide-reaching and high-impact research. A sense of scale is critical to many areas of the geosciences, including understanding and interpreting remotely sensed imagery. In this exploratory study, students (N=17) attending the Juneau Icefield Research Program participated in a 3-hour exercise designed to study how a field-based activity might impact their sense of scale and their conceptions of pixels in remotely sensed imagery. Prior to the activity, students had an introductory remote sensing lecture and completed the Sense of Scale inventory. Students walked and/or skied the perimeter of several pixel types, including a 1 m square (representing a WorldView sensor's pixel), a 30 m square (a Landsat pixel) and a 500 m square (a MODIS pixel). The group took reflectance measurements using a field radiometer as they physically traced out each pixel. The exercise was repeated in two different areas, one with homogeneous reflectance and another with heterogeneous reflectance. After the exercise, students again completed the Sense of Scale instrument and a demographic survey. This presentation will share the effects and efficacy of the field-based intervention for teaching remote sensing concepts and investigate potential relationships between students' concepts of pixels and sense of scale.

  3. Effectiveness of Convolutional Code in Multipath Underwater Acoustic Channel

    NASA Astrophysics Data System (ADS)

    Park, Jihyun; Seo, Chulwon; Park, Kyu-Chil; Yoon, Jong Rak

    2013-07-01

    Forward error correction (FEC) is achieved by adding redundancy to the transmitted information. Convolutional coding with Viterbi decoding is a typical FEC technique in channels corrupted by additive white Gaussian noise. However, the FEC effectiveness of convolutional codes is questionable in multipath frequency-selective fading channels. This paper examines how convolutional coding performs in underwater multipath channels. Bit error rates (BER) with and without rate-1/2 convolutional coding are analyzed as a function of channel bandwidth, which parameterizes frequency selectivity. It is found that convolutional coding performs well in non-selective channels and also remains effective in selective channels.
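
    A minimal rate-1/2 convolutional encoder for context; the constraint length K = 7 and the generator polynomials 171/133 (octal) are a standard textbook choice, not necessarily the code evaluated in the record.

        def conv_encode(bits, g1=0o171, g2=0o133, K=7):
            # Rate-1/2 convolutional encoder: two parity streams from a K-bit shift register.
            state, out = 0, []
            for b in bits + [0] * (K - 1):                 # zero flush to terminate the trellis
                state = ((state << 1) | b) & ((1 << K) - 1)
                out.append(bin(state & g1).count("1") % 2)
                out.append(bin(state & g2).count("1") % 2)
            return out

        msg = [1, 0, 1, 1, 0, 0, 1]
        coded = conv_encode(msg)
        print(len(msg), "information bits ->", len(coded), "coded bits")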

  4. Building Extraction from Remote Sensing Data Using Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Bittner, K.; Cui, S.; Reinartz, P.

    2017-05-01

    Building detection and footprint extraction are in high demand for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework have made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology which automatically generates a full-resolution binary building mask from a Digital Surface Model (DSM) using a Fully Convolutional Network (FCN) architecture. The advantage of using the depth information is that it provides geometrical silhouettes and allows a better separation of buildings from the background, and is invariant to illumination and color variations. The proposed framework has two main steps. First, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground-truth building masks as target outputs. Second, the predictions generated by the FCN are used as unary terms for a fully connected Conditional Random Field (FCRF), which enables us to create the final binary building mask. A series of experiments demonstrate that our methodology is able to extract accurate building footprints which are close to the buildings' original shapes to a high degree. The quantitative and qualitative analysis shows significant improvements of the results in contrast to the multi-layer fully connected network from our previous work.

  5. An optimal nonorthogonal separation of the anisotropic Gaussian convolution filter.

    PubMed

    Lampert, Christoph H; Wirjadi, Oliver

    2006-11-01

    We give an analytical and geometrical treatment of what it means to separate a Gaussian kernel along arbitrary axes in R^n, and we present a separation scheme that allows us to efficiently implement anisotropic Gaussian convolution filters for data of arbitrary dimensionality. Based on our previous analysis we show that this scheme is optimal with regard to the number of memory accesses and interpolation operations needed. The proposed method relies on nonorthogonal convolution axes and works completely in image space. Thus, it avoids the need for a fast Fourier transform (FFT) subroutine. Depending on the accuracy and speed requirements, different interpolation schemes and methods to implement the one-dimensional Gaussian (finite impulse response and infinite impulse response) can be integrated. Special emphasis is put on analyzing the performance and accuracy of the new method. In particular, we show that without any special optimization of the source code, it can perform anisotropic Gaussian filtering faster than methods relying on the FFT.
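
    The orthogonal special case that the record generalizes: an axis-aligned anisotropic Gaussian separates into two 1-D passes along the image axes. The record's contribution is a separation along nonorthogonal axes (covering rotated Gaussians); the snippet below shows only the axis-aligned baseline, with illustrative sigmas.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        img = np.random.rand(128, 128)
        sigma_y, sigma_x = 5.0, 1.5                          # axis-aligned anisotropy
        smoothed = gaussian_filter1d(gaussian_filter1d(img, sigma_y, axis=0), sigma_x, axis=1)
        print(smoothed.shape)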

  6. Trainable Convolution Filters and Their Application to Face Recognition.

    PubMed

    Kumar, Ritwik; Banerjee, Arunava; Vemuri, Baba C; Pfister, Hanspeter

    2012-07-01

    In this paper, we present a novel image classification system that is built around a core of trainable filter ensembles that we call Volterra kernel classifiers. Our system treats images as a collection of possibly overlapping patches and is composed of three components: (1) a scheme for single-patch classification that seeks a smooth, possibly nonlinear, functional mapping of the patches into a range space, where patches of the same class are close to one another while patches from different classes are far apart in the L_2 sense; this mapping is accomplished using trainable convolution filters (or Volterra kernels), where the convolution kernel can be of any shape or order; (2) given a corpus of Volterra classifiers with various kernel orders and shapes for each patch, a boosting scheme for automatically selecting the best weighted combination of the classifiers to achieve a higher per-patch classification rate; and (3) a scheme for aggregating the classification information obtained for each patch, via voting, into the parent image classification. We demonstrate the effectiveness of the proposed technique using face recognition as an application area and provide extensive experiments on the Yale, CMU PIE, Extended Yale B, Multi-PIE, and MERL Dome benchmark face data sets. We call the Volterra kernel classifiers applied to face recognition Volterrafaces. We show that our technique, which falls into the broad class of embedding-based face image discrimination methods, consistently outperforms various state-of-the-art methods in the same category.

  7. Video-based face recognition via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and 2) Video-to-Still (V2S) face recognition, the converse of the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space using a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.

  8. Pixel Stability in HST Advanced Camera for Surveys Images

    NASA Astrophysics Data System (ADS)

    Borncamp, David; Grogin, Norman A.; Bourque, Matthew; Ogaz, Sara

    2017-06-01

    Excess thermal energy present in a Charge-Coupled Device (CCD) can result in additional electrical current that propagates into individual pixels in an exposure. This excess signal from the CCD itself can persist through multiple exposures and can have an adverse effect on the detector's science performance unless properly flagged and corrected for. The traditional way to correct for this extra charge is to take occasional long-exposure images with the camera shutter closed to map the location of these pixels. These images, generally referred to as "dark" images, allow for the measurement of the thermal-electron contamination present in each pixel of the CCD lattice. This "dark current" can then be subtracted from the science images by re-scaling the dark to the science exposure times. Pixels that have signal above a certain threshold are traditionally marked as "hot" and flagged in the data quality array. Many users will discard these pixels as bad because of this extra current. However, these pixels may not be "bad" in the traditional sense of being impossible to dark-subtract reliably. If these pixels are shown to be stable over an anneal period, the charge can be properly subtracted and the extra Poisson noise from the hot pixel's dark current can be taken into account. Here we present the results of a pixel history study that analyzes every individual pixel of the Hubble Space Telescope's (HST) Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) CCDs over time and allows pixels that were previously marked as bad to be brought back into the science image as reliable pixels.
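
    The exposure-time rescaling mentioned above amounts to the following; the arrays and function name are illustrative, not the ACS calibration pipeline's.

        import numpy as np

        def dark_subtract(science, dark, t_science, t_dark):
            # Convert the dark frame to a rate (counts/s) and remove it scaled to the science exposure.
            return science - (dark / t_dark) * t_science

        science = np.random.poisson(200.0, (4, 4)).astype(float)
        dark = np.random.poisson(50.0, (4, 4)).astype(float)
        print(dark_subtract(science, dark, t_science=600.0, t_dark=1000.0))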

  9. THE KEPLER PIXEL RESPONSE FUNCTION

    SciTech Connect

    Bryson, Stephen T.; Haas, Michael R.; Dotson, Jessie L.; Koch, David G.; Borucki, William J.; Tenenbaum, Peter; Jenkins, Jon M.; Chandrasekaran, Hema; Caldwell, Douglas A.; Klaus, Todd; Gilliland, Ronald L.

    2010-04-20

    Kepler seeks to detect sequences of transits of Earth-size exoplanets orbiting solar-like stars. Such transit signals are on the order of 100 ppm. The high photometric precision demanded by Kepler requires detailed knowledge of how the Kepler pixels respond to starlight during a nominal observation. This information is provided by the Kepler pixel response function (PRF), defined as the composite of Kepler's optical point-spread function, integrated spacecraft pointing jitter during a nominal cadence and other systematic effects. To provide sub-pixel resolution, the PRF is represented as a piecewise-continuous polynomial on a sub-pixel mesh. This continuous representation allows the prediction of a star's flux value on any pixel given the star's pixel position. The advantages and difficulties of this polynomial representation are discussed, including characterization of spatial variation in the PRF and the smoothing of discontinuities between sub-pixel polynomial patches. On-orbit super-resolution measurements of the PRF across the Kepler field of view are described. Two uses of the PRF are presented: the selection of pixels for each star that maximizes the photometric signal-to-noise ratio for that star, and PRF-fitted centroids which provide robust and accurate stellar positions on the CCD, primarily used for attitude and plate scale tracking. Good knowledge of the PRF has been a critical component for the successful collection of high-precision photometry by Kepler.
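
    One simple way to realize the pixel-selection idea mentioned above is a greedy aperture: rank pixels by expected stellar flux (e.g. from the PRF) and keep adding them while the aggregate signal-to-noise ratio improves. This is an illustrative sketch under a simple independent-noise model, not the mission pipeline's algorithm.

        import numpy as np

        def select_pixels_max_snr(flux, noise_sigma):
            # Greedy aperture: brightest pixels first, stop when the aggregate SNR stops improving.
            flux = np.asarray(flux, dtype=float).ravel()
            sigma = np.asarray(noise_sigma, dtype=float).ravel()
            order = np.argsort(flux)[::-1]
            chosen, best_snr = [], -np.inf
            signal = var = 0.0
            for idx in order:
                signal += flux[idx]
                var += sigma[idx] ** 2
                snr = signal / np.sqrt(var)
                if snr <= best_snr:
                    break
                best_snr = snr
                chosen.append(idx)
            return np.array(chosen), best_snr

        y, x = np.mgrid[0:11, 0:11]
        star = 1000.0 * np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 4.0)   # toy PRF-like star image
        noise = np.full_like(star, 30.0)
        pixels, snr = select_pixels_max_snr(star, noise)
        print(len(pixels), "pixels selected, aggregate SNR =", round(snr, 1))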

  10. Small pixel infrared sensor technology

    NASA Astrophysics Data System (ADS)

    Caulfield, John; Curzan, Jon

    2017-02-01

    We report on product maturation of small-pixel, high-definition, high-charge-capacity 2.4 Mpixel MWIR infrared focal plane arrays. This high-definition (HD) FPA utilizes a small 5 um pixel pitch, which enables near-Nyquist-limited sampling by the optical system for many IR lenses. These smaller, sub-diffraction-pitch pixels enable improved sensitivity and resolution, resulting in clear, crisp, high-contrast imaging with excellent IFOVs even with short-focal-length lenses. The small-pixel IR sensor allows the designer to trade off field of view, MTF, and optics F/# to obtain a more compact and higher-performance IR sensor. This enables size, weight, and power reductions of the entire IR sensor system. The highly sensitive MWIR small-pixel HD FPA has the capability to detect dimmer signals at longer ranges than previously demonstrated.

  11. Sub pixel location identification using super resolved multilooking CHRIS data

    NASA Astrophysics Data System (ADS)

    Sahithi, V. S.; Agrawal, S.

    2014-11-01

    CHRIS/Proba is a multiviewing hyperspectral sensor that observes the Earth at five different zenith angles (+55°, +36°, nadir, -36° and -55°) with a spatial resolution of 17 m and within a spectral range of 400-1050 nm in mode 3. These multiviewing images are suitable for constructing a super-resolved high-resolution image that can reveal the mixed-pixel content of the hyperspectral image. In the present work, an attempt is made to find the location of the various features contained within the 17 m mixed pixel of the CHRIS image using various super-resolution reconstruction techniques. Four different super-resolution reconstruction techniques, namely interpolation, iterative back projection, projection onto convex sets (POCS) and robust super resolution, were tried on the -36°, nadir and +36° images to construct a super-resolved high-resolution 5.6 m image. The results of super-resolution reconstruction were compared with the scaled nadir image and a bicubic-convolved image to assess the preservation of spatial and spectral properties. A support vector machine classification of the best super-resolved high-resolution image was performed to analyse the location of the sub-pixel features. Validation of the obtained results was performed using the spectral unmixing fraction images and the 5.6 m classified LISS IV image.

  12. Hybrid convolution kernel: optimized CT of the head, neck, and spine.

    PubMed

    Weiss, Kenneth L; Cornelius, Rebecca S; Greeley, Aaron L; Sun, Dongmei; Chang, I-Yuan Joseph; Boyce, William O; Weiss, Jane L

    2011-02-01

    Conventional CT requires generation of separate images utilizing different convolution kernels to optimize lesion detection. Our goal was to develop and test a hybrid CT algorithm to simultaneously optimize bone and soft-tissue characterization, potentially halving the number of images that need to be stored, transmitted, and reviewed. CT images generated with separate high-pass (bone) and low-pass (soft tissue) kernels were retrospectively combined so that low-pass algorithm pixels less than -150 HU or greater than 150 HU are substituted with corresponding high-pass kernel reconstructed pixels. A total of 38 CT examinations were reviewed using the hybrid technique, including 20 head, eight spine, and 10 head and neck scans. Three neuroradiologists independently reviewed all 38 hybrid cases, comparing them to both standard low-pass and high-pass kernel convolved images for characterization of anatomy and pathologic abnormalities. The conspicuity of bone, soft tissue, and related anatomy were compared for each CT reconstruction technique. For the depiction of bone, in all 38 cases, the three neuroradiologists scored the hybrid images as being equivalent to high-pass kernel reconstructions but superior to the low-pass kernel. For depiction of extracranial soft tissues and brain, the hybrid kernel was rated equivalent to the low-pass kernel but superior to that of the high-pass kernel. The hybrid convolution kernel is a promising technique affording optimized bone and soft tissue evaluation while potentially halving the number of images needed to be transmitted, stored, and reviewed.
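
    The substitution rule described above is a one-line pixel-wise operation; the HU values below are made-up examples.

        import numpy as np

        def hybrid_kernel_image(low_pass_hu, high_pass_hu, lo=-150.0, hi=150.0):
            # Keep the low-pass (soft-tissue) pixel unless it lies below -150 HU or above +150 HU,
            # in which case substitute the corresponding high-pass (bone) kernel pixel.
            replace = (low_pass_hu < lo) | (low_pass_hu > hi)
            return np.where(replace, high_pass_hu, low_pass_hu)

        soft = np.array([[40.0, 900.0], [-600.0, 20.0]])    # low-pass reconstruction (HU)
        bone = np.array([[35.0, 1250.0], [-750.0, 25.0]])   # high-pass reconstruction (HU)
        print(hybrid_kernel_image(soft, bone))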

  13. Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.

    NASA Astrophysics Data System (ADS)

    Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.

    2016-12-01

    Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, convergence towards the surface wave Green's functions is obtained under the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources gives more possibilities when it comes to interferometry. The use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both the X and Y directions (30 m), an arrangement known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.
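
    A toy sketch of the receiver-receiver correlation step: cross-correlate the records of two receivers for every source and stack the results. The synthetic traces and the 25-sample delay below are placeholders, not the survey data or processing chain described in the record.

        import numpy as np

        rng = np.random.default_rng(1)
        n_sources, n_t = 100, 512
        rec_a = rng.standard_normal((n_sources, n_t))                       # receiver A, one trace per source
        rec_b = np.roll(rec_a, 25, axis=1) + 0.5 * rng.standard_normal((n_sources, n_t))  # ~25-sample delay

        stack = np.zeros(2 * n_t - 1)
        for a, b in zip(rec_a, rec_b):
            stack += np.correlate(b, a, mode="full")                        # correlation for one source
        lags = np.arange(-(n_t - 1), n_t)
        print("estimated inter-receiver lag:", lags[np.argmax(stack)], "samples")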

  14. A patch-based convolutional neural network for remote sensing image classification.

    PubMed

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

    Availability of accurate land cover information over large areas is essential to the global environment sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, low accuracy of existing per-pixel based classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top of atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed new system has outperformed pixel-based neural network, pixel-based CNN and patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Dense Semantic Labeling of Subdecimeter Resolution Images With Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Volpi, Michele; Tuia, Devis

    2017-02-01

    Semantic labeling (or pixel-level land-cover classification) in ultra-high-resolution imagery (< 10 cm) requires statistical models able to learn high-level concepts from spatial data with large appearance variations. Convolutional Neural Networks (CNNs) achieve this goal by discriminatively learning a hierarchy of representations of increasing abstraction. In this paper we present a CNN-based system relying on a downsample-then-upsample architecture. Specifically, it first learns a rough spatial map of high-level representations by means of convolutions and then learns to upsample them back to the original resolution by deconvolutions. By doing so, the CNN learns to densely label every pixel at the original resolution of the image. This results in many advantages, including i) state-of-the-art numerical accuracy, ii) improved geometric accuracy of predictions and iii) high efficiency at inference time. We test the proposed system on the Vaihingen and Potsdam sub-decimeter resolution datasets, involving semantic labeling of aerial images of 9 cm and 5 cm resolution, respectively. These datasets are composed of many large, fully annotated tiles, allowing an unbiased evaluation of models making use of spatial information. We do so by comparing two standard CNN architectures to the proposed one: standard patch classification, prediction of local label patches by employing only convolutions, and full patch labeling by employing deconvolutions. All the systems compare favorably to or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors. The proposed full patch labeling CNN outperforms these models by a large margin, also showing a very appealing inference time.
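
    A toy downsample-then-upsample network of the kind described above, written with PyTorch layer names; it is a structural sketch only, far smaller than the authors' model.

        import torch
        import torch.nn as nn

        class TinyFCN(nn.Module):
            def __init__(self, in_ch=3, n_classes=6):
                super().__init__()
                self.down = nn.Sequential(                                   # learn a coarse map of high-level features
                    nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.up = nn.Sequential(                                     # deconvolve back to full resolution
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1),
                )

            def forward(self, x):
                return self.up(self.down(x))                                 # per-pixel class scores

        scores = TinyFCN()(torch.randn(1, 3, 128, 128))
        print(scores.shape)                                                  # (1, 6, 128, 128)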

  16. Pixel size adjustment in coherent diffractive imaging within the Rayleigh-Sommerfeld regime.

    PubMed

    Claus, Daniel; Rodenburg, John Marius

    2015-03-10

    The reconstruction of the smallest resolvable object detail in digital holography and coherent diffractive imaging when the detector is mounted close to the object of interest is restricted by the sensor's pixel size. Very high resolution information is intrinsically encoded in the data because the effective numerical aperture (NA) of the detector (its solid angular size as subtended at the object plane) is very high. The correct physical propagation model to use in the reconstruction process for this setup should be based on the Rayleigh-Sommerfeld diffraction integral, which is commonly implemented via a convolution operation. However, the convolution operation has the drawback that the pixel size of the propagation calculation is preserved between the object and the detector, and so the maximum resolution of the reconstruction is limited by the detector pixel size, not its effective NA. Here we show that this problem can be overcome via the introduction of a numerical spherical lens with adjustable magnification. This approach enables the reconstruction of object details smaller than the detector pixel size or of objects that extend beyond the size of the detector. It will have applications in all forms of near-field lensless microscopy.

  17. From Pixels to Planets

    NASA Technical Reports Server (NTRS)

    Brownston, Lee; Jenkins, Jon M.

    2015-01-01

    The Kepler Mission was launched in 2009 as NASA's first mission capable of finding Earth-size planets in the habitable zone of Sun-like stars. Its telescope consists of a 1.5-m primary mirror and a 0.95-m aperture. The 42 charge-coupled devices in its focal plane are read out every half hour, compressed, and then downlinked monthly. After four years, the second of four reaction wheels failed, ending the original mission. Back on Earth, the Science Operations Center developed the Science Pipeline to analyze about 200,000 target stars in Kepler's field of view, looking for evidence of periodic dimming suggesting that one or more planets had crossed the face of its host star. The Pipeline comprises several steps, from pixel-level calibration, through noise and artifact removal, to detection of transit-like signals and the construction of a suite of diagnostic tests to guard against false positives. The Kepler Science Pipeline consists of a pipeline infrastructure written in the Java programming language, which marshals data input to and output from MATLAB applications that are executed as external processes. The pipeline modules, which underwent continuous development and refinement even after data started arriving, employ several analytic techniques, many developed for the Kepler Project. Because of the large number of targets, the large amount of data per target, and the complexity of the pipeline algorithms, the processing demands are daunting. Some pipeline modules require days to weeks to process all of their targets, even when run on NASA's 128-node Pleiades supercomputer. The software developers are still seeking ways to increase the throughput. To date, the Kepler project has discovered more than 4000 planetary candidates, of which more than 1000 have been independently confirmed or validated to be exoplanets. Funding for this mission is provided by NASA's Science Mission Directorate.

  18. Star integrals, convolutions and simplices

    NASA Astrophysics Data System (ADS)

    Nandan, Dhritiman; Paulos, Miguel F.; Spradlin, Marcus; Volovich, Anastasia

    2013-05-01

    We explore single and multi-loop conformal integrals, such as the ones appearing in dual conformal theories in flat space. Using Mellin amplitudes, a large class of higher loop integrals can be written as simple integro-differential operators on star integrals: one-loop n-gon integrals in n dimensions. These are known to be given by volumes of hyperbolic simplices. We explicitly compute the five-dimensional pentagon integral in full generality using Schläfli's formula. Then, as a first step to understanding higher loops, we use spline technology to construct explicitly the 6d hexagon and 8d octagon integrals in two-dimensional kinematics. The fully massive hexagon and octagon integrals are then related to the double box and triple box integrals respectively. We comment on the classes of functions needed to express these integrals in general kinematics, involving elliptic functions and beyond.

  19. A convolutional neural network neutrino event classifier

    NASA Astrophysics Data System (ADS)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  20. A Construction of MDS Quantum Convolutional Codes

    NASA Astrophysics Data System (ADS)

    Zhang, Guanghui; Chen, Bocong; Li, Liangchen

    2015-09-01

    In this paper, two new families of MDS quantum convolutional codes are constructed. The first one can be regarded as a generalization of [36, Theorem 6.5], in the sense that we do not assume that q ≡ 1 (mod 4). More specifically, we obtain two classes of MDS quantum convolutional codes with parameters: (i) [(q^2+1, q^2-4i+3, 1; 2, 2i+2)]_q, where q ≥ 5 is an odd prime power and 2 ≤ i ≤ (q-1)/2; (ii) , where q is an odd prime power of the form q = 10m+3 or 10m+7 (m ≥ 2), and 2 ≤ i ≤ 2m-1.

  1. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  2. Performance of convolutionally coded unbalanced QPSK systems

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1980-01-01

    An evaluation is presented of the performance of three representative convolutionally coded unbalanced quadri-phase-shift-keying (UQPSK) systems in the presence of noisy carrier reference and crosstalk. The use of a coded UQPSK system for transmitting two telemetry data streams with different rates and different powers has been proposed for the Venus Orbiting Imaging Radar mission. Analytical expressions for bit error rates in the presence of a noisy carrier phase reference are derived for three representative cases: (1) I and Q channels are coded independently; (2) I channel is coded, Q channel is uncoded; and (3) I and Q channels are coded by a common 1/2 code. For rate 1/2 convolutional codes, QPSK modulation can be used to reduce the bandwidth requirement.
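
    As background for how such results are typically organised (this is the generic uncoded-BPSK expression, not one of the coded-UQPSK formulas derived in the report), the bit error rate is written conditional on a carrier phase error \phi and then averaged over its density p(\phi):

        P_b = \int_{-\pi}^{\pi} Q\!\left(\sqrt{2E_b/N_0}\,\cos\phi\right) p(\phi)\, d\phi ,

    with the coded cases handled by applying the same averaging to a union bound over code error events.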

  3. Digital Correlation By Optical Convolution/Correlation

    NASA Astrophysics Data System (ADS)

    Trimble, Joel; Casasent, David; Psaltis, Demetri; Caimi, Frank; Carlotto, Mark; Neft, Deborah

    1980-12-01

    Attention is given to various methods by which the accuracy achievable and the dynamic range requirements of an optical computer can be enhanced. A new time position coding acousto-optic technique for optical residue arithmetic processing is presented and experimental demonstration is included. Major attention is given to the implementation of a correlator operating on digital or decimal encoded signals. Using a convolution description of multiplication, we realize such a correlator by optical convolution in one dimension and optical correlation in the other dimension of an optical system. A coherent matched spatial filter system operating on digital encoded signals, a noncoherent processor operating on complex-valued digital-encoded data, and a real-time multi-channel acousto-optic system for such operations are described and experimental verifications are included.

  4. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  5. Convoluted accommodation structures in folded rocks

    NASA Astrophysics Data System (ADS)

    Dodwell, T. J.; Hunt, G. W.

    2012-10-01

    A simplified variational model for the formation of convoluted accommodation structures, as seen in the hinge zones of larger-scale geological folds, is presented. The model encapsulates some important and intriguing nonlinear features, notably: infinite critical loads, formation of plastic hinges, and buckling on different length-scales. An inextensible elastic beam is forced by uniform overburden pressure and axial load into a V-shaped geometry dictated by formation of a plastic hinge. Using variational methods developed by Dodwell et al., upon which this paper leans heavily, energy minimisation leads to representation as a fourth-order nonlinear differential equation with free boundary conditions. Equilibrium solutions are found using numerical shooting techniques. Under the Maxwell stability criterion, it is recognised that global energy minimisers can exist with convoluted physical shapes. For such solutions, parallels can be drawn with some of the accommodation structures seen in exposed escarpments of real geological folds.

  6. A convolutional neural network neutrino event classifier

    SciTech Connect

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  8. Quantum convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng

    2014-12-01

    In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.

  9. Long decoding runs for Galileo's convolutional codes

    NASA Technical Reports Server (NTRS)

    Lahmeyer, C. R.; Cheung, K.-M.

    1988-01-01

    Decoding results are described for long decoding runs of Galileo's convolutional codes. A 1 k-bit/sec hardware Viterbi decoder is used for the (15, 1/4) convolutional code, and a software Viterbi decoder is used for the (7, 1/2) convolutional code. The output data of these long runs are stored in data files using a data compression format which can reduce file size by a factor of 100 to 1 typically. These data files can be used to replicate the long, time-consuming runs exactly and are useful to anyone who wants to analyze the burst statistics of the Viterbi decoders. The 1 k-bit/sec hardware Viterbi decoder was developed in order to demonstrate the correctness of certain algorithmic concepts for decoding Galileo's experimental (15, 1/4) code, and for the long-constraint-length codes in general. The hardware decoder can be used both to search for good codes and to measure accurately the performance of known codes.
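
    The compression format itself is not specified in this record; a simple run-length encoding of the decoder's output error sequence is one scheme of the kind that achieves very large ratios on mostly error-free data, sketched here in Python as an illustration only.

        def run_length_encode(bits):
            """Encode a 0/1 error sequence as (value, run_length) pairs."""
            runs = []
            if not bits:
                return runs
            current, count = bits[0], 1
            for b in bits[1:]:
                if b == current:
                    count += 1
                else:
                    runs.append((current, count))
                    current, count = b, 1
            runs.append((current, count))
            return runs

        # run_length_encode([0]*1000 + [1, 1, 0, 1])  ->  [(0, 1000), (1, 2), (0, 1), (1, 1)]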

  10. Pixel-based reconstruction (PBR) promising simultaneous techniques for CT reconstructions.

    PubMed

    Fager, R S; Peddanarappagari, K V; Kumar, G N

    1993-01-01

    Algorithms belonging to the class of pixel-based reconstruction (PBR) algorithms, which are similar to simultaneous iterative reconstruction techniques (SIRTs) for reconstruction of objects from their fan beam projections in X-ray transmission tomography, are discussed. The general logic of these algorithms is discussed. Simulation studies indicate that, contrary to previous results with parallel beam projections, the iterative algebraic algorithms do not diverge when a more logical technique of obtaining the pseudoprojections is used. These simulations were carried out under conditions in which the number of object pixels exceeded the number of detector pixel readings by roughly a factor of two, i.e., the equations were highly underdetermined. The effect of the number of projections on the reconstruction and the empirical convergence to the exact solution is shown. For comparison, the reconstructions obtained by convolution backprojection are also given.
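
    For context, a simultaneous update of the SIRT family (a generic form, not one of the specific PBR variants studied here) can be written in a few lines of Python; the system matrix A, measurement vector b and relaxation factor are placeholders.

        import numpy as np

        def sirt_step(x, A, b, relax=0.5):
            """One simultaneous update: x <- x + relax * C A^T R (b - A x),
            where R and C normalise by the row and column sums of A."""
            row = A.sum(axis=1)
            col = A.sum(axis=0)
            residual = (b - A @ x) / np.where(row > 0, row, 1.0)
            return x + relax * (A.T @ residual) / np.where(col > 0, col, 1.0)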

  11. Deepr: A Convolutional Net for Medical Records.

    PubMed

    Nguyen, Phuoc; Tran, Truyen; Wickramasinghe, Nilmini; Venkatesh, Svetha

    2017-01-01

    Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive regular clinical motifs from irregular episodic records. We present Deepr (short for Deep record), a new end-to-end deep learning system that learns to extract features from medical records and predicts future risk automatically. Deepr transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. Deepr permits transparent inspection and visualization of its inner working. We validate Deepr on hospital data to predict unplanned readmission after discharge. Deepr achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space.

  12. Invariant Descriptor Learning Using a Siamese Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Chen, L.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    In this paper we describe learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module and a cost computation module that is based on the L2 Norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors for non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving average strategy for gradients and Nesterov's Accelerated Gradient. Experiments show that our learned descriptor reaches a good performance and achieves state-of-the-art results in terms of the false positive rate at a 95% recall rate on standard benchmark datasets.
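
    The cost function described is, in one common form (recalled here for orientation rather than quoted from the paper), the contrastive loss

        L(d, y) = y\, d^2 + (1 - y)\, \max(0,\, m - d)^2 , \qquad d = \lVert f(p_1) - f(p_2) \rVert_2 ,

    where y = 1 for matching patches, y = 0 for non-matching ones, and m is a margin beyond which non-matching pairs no longer contribute to the loss.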

  13. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.

    PubMed

    Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A

    2017-03-01

    Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods combined with hand-crafted image feature descriptors and various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of classification of biomedical images. The same also holds true for artificial neural network models directly trained with limited biomedical images used as training data or directly used as a black box to extract the deep features based on another distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply domain transferred deep convolutional neural network for building a deep model; and then develop an overall deep learning architecture based on the raw pixels of original biomedical images using supervised training. In our model, we do not need the manual design of the feature space, seek an effective feature vector classifier or segment specific detection object and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs or long times to wait for training a perfect deep model, which are the main problems to train deep neural networks for biomedical image classification as observed in recent works. With the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust

  14. Local Pixel Bundles: Bringing the Pixels to the People

    NASA Astrophysics Data System (ADS)

    Anderson, Jay

    2014-12-01

    The automated galaxy-based alignment software package developed for the Frontier Fields program (hst2galign, see Anderson & Ogaz 2014 and http://www.stsci.edu/hst/campaigns/frontier-fields/) produces a direct mapping from the pixels of the flt frame of each science exposure into a common master frame. We can use these mappings to extract the flt-pixels in the vicinity of a source of interest and package them into a convenient "bundle". In addition to the pixels, this data bundle can also contain "meta" information that will allow users to transform positions from the flt pixels to the reference frame and vice-versa. Since the un-resampled pixels in the flt frames are the only true constraints we have on the astronomical scene, the ability to inter-relate these pixels will enable many high-precision studies, such as: point-source-fitting and deconvolution with accurate PSFs, easy exploration of different image-combining algorithms, and accurate faint-source finding and photometry. The data products introduced in this ISR are a very early attempt to provide the flt-level pixel constraints in a package that is accessible to more than the handful of experts in HST astrometry. The hope is that users in the community might begin using them and will provide feedback as to what information they might want to see in the bundles and what general analysis packages they might find useful. For that reason, this document is somewhat informally written, since I know that it will be modified and updated as the products and tools are optimized.

  15. A multilevel local discrete convolution method for the numerical solution for Maxwell's Equations

    NASA Astrophysics Data System (ADS)

    Lo, Boris; Colella, Phillip

    2016-10-01

    We present a new multilevel local discrete convolution method for solving Maxwell's equations in three dimensions. We obtain an explicit real-space representation for the propagator of an auxiliary system of differential equations with initial value constraints that is equivalent to Maxwell's equations. The propagator preserves finite speed of propagation and source locality. Because the propagator involves convolution against a singular distribution, we regularize via convolution with smoothing kernels (B-splines) prior to sampling. We have shown that the ultimate discrete convolutional propagator can be constructed to attain an arbitrarily high order of accuracy by using higher-order regularizing kernels and finite difference stencils. The discretized propagator is compactly supported and can be applied using Hockney's method (1970) and parallelized using domain decomposition, leading to a method that is computationally efficient. The algorithm is extended to work for a locally refined fixed hierarchy of rectangular grids. This research is supported by the Office of Advanced Scientific Computing Research of the US Department of Energy under Contract Number DE-AC02-05CH11231.
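
    The aperiodic convolution step referred to (Hockney's method) amounts to zero-padding both grids to twice their size so that the circular convolution computed by FFTs equals the free-space one. A simplified numpy sketch, which assumes the kernel is sampled with its origin at index zero, is:

        import numpy as np

        def free_space_convolve(source, kernel):
            """Linear (free-space) convolution of two equally shaped grids via
            zero-padded FFTs; the first `source.shape` samples are returned."""
            shape = [2 * n for n in source.shape]
            spectrum = np.fft.rfftn(source, shape) * np.fft.rfftn(kernel, shape)
            full = np.fft.irfftn(spectrum, shape)
            return full[tuple(slice(0, n) for n in source.shape)]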

  16. Intra-pixel response of infrared detector arrays for JWST

    NASA Astrophysics Data System (ADS)

    Hardy, Tim; Baril, M. R.; Pazder, J.; Stilburn, J. S.

    2008-07-01

    The near-infrared instruments on the James Webb Space Telescope will use 5 micron cutoff HAWAII-2RG detector arrays. We have investigated the response of this type of detector at sub-pixel resolution to determine whether variations at this scale would affect the performance of the instruments. Using a simple experimental setup we were able to get measurements with a resolution of approximately 4 microns. We have measured an un-hybridized HAWAII-1RG multiplexer, a hybridized HAWAII-1RG device with a 5 micron cutoff HgCdTe detector layer, and a hybridized HAWAII-2RG device with a 5 micron cutoff substrate-removed HgCdTe detector layer. We found that the intra-pixel response functions of the hybrid devices are basically smooth and well behaved, and vary little from pixel to pixel. However, we did find numerous sub-pixel sized defects, notably some long straight thin features like scratches. We were not able to detect any significant variations with wavelength between 0.65 and 2.2 microns, but in the -1RG device there was a variation with temperature. When cooled from 80K to 40K, the pixel response became narrower, and some signal began to be lost at the edges of the pixel. We believe this reflects a reduction in charge diffusion at the lower temperature.

  17. Baryon Acoustic Oscillations reconstruction with pixels

    NASA Astrophysics Data System (ADS)

    Obuljen, Andrej; Villaescusa-Navarro, Francisco; Castorina, Emanuele; Viel, Matteo

    2017-09-01

    Gravitational non-linear evolution induces a shift in the position of the baryon acoustic oscillations (BAO) peak together with a damping and broadening of its shape that bias and degrade the accuracy with which the position of the peak can be determined. BAO reconstruction is a technique developed to undo part of the effect of non-linearities. We present and analyse a reconstruction method that consists of displacing pixels instead of galaxies and whose implementation is easier than the standard reconstruction method. We show that this method is equivalent to the standard reconstruction technique in the limit where the number of pixels becomes very large. This method is particularly useful in surveys where individual galaxies are not resolved, as in 21 cm intensity mapping observations. We validate this method by reconstructing mock pixelated maps, which we build from the distribution of matter and halos in real- and redshift-space, from a large set of numerical simulations. We find that this method is able to decrease the uncertainty in the BAO peak position by 30-50% over the typical angular resolution scales of 21 cm intensity mapping experiments.

  18. Crosstalk characterization of PMD pixels using the spatial response function at subpixel level

    NASA Astrophysics Data System (ADS)

    Heredia Conde, Miguel; Hartmann, Klaus; Loffeld, Otmar

    2015-03-01

    Time-of-Flight cameras have become one of the most widespread low-cost 3D-sensing devices. Most of them do not actually measure the time the light needs to hit an object and come back to the camera, but the difference of phase with respect to a reference signal. This requires special pixels with complex spatial structure, such as PMD pixels, able to sample the cross-correlation function between the incoming signal, reflected by the scene, and the reference signal. The complex structure, together with the presence of in-pixel electronics and the need for a compact readout circuitry for both pixel channels, suggests that systematic crosstalk effects will come up in this kind of device. For the first time, we take advantage of recent results on subpixel spatial responses of PMD pixels to detect and characterize crosstalk occurrences. Well-defined crosstalk patterns have been identified and quantitatively characterized through integration of the inter-pixel spatial response over each sensitive area. We cast the crosstalk problem into an image convolution and provide deconvolution kernels for cleaning PMD raw images from crosstalk. Experiments on real PMD raw images show that our results can be used to undo the lowpass filtering caused by crosstalk in high contrast image areas. The application of our kernels to undo crosstalk effects leads to reductions of the depth RMSE up to 50% in critical areas.
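
    Applying such a correction kernel to a raw image is a single 2-D convolution. The sketch below uses a hypothetical 3x3 sharpening kernel whose values are placeholders, not the kernels derived in the paper.

        import numpy as np
        from scipy.signal import convolve2d

        # Placeholder inverse-crosstalk kernel: boosts the centre pixel and
        # subtracts a small fraction of its four direct neighbours.
        kernel = np.array([[ 0.00, -0.05,  0.00],
                           [-0.05,  1.20, -0.05],
                           [ 0.00, -0.05,  0.00]])

        def correct_crosstalk(raw):
            """Clean a PMD raw image by convolving it with the correction kernel."""
            return convolve2d(raw, kernel, mode="same", boundary="symm")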

  19. A fast complex integer convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.

  20. Convolutional Architecture Exploration for Action Recognition and Image Classification

    DTIC Science & Technology

    2015-01-01

    Convolutional Architecture Exploration for Action Recognition and Image Classification JT Turner∗1, David Aha2, Leslie Smith2, and Kalyan Moy Gupta1...Intelligence; Naval Research Laboratory (Code 5514); Washington, DC 20375 Abstract Convolutional Architecture for Fast Feature Encoding (CAFFE) [11] is a soft...This is especially true with convolutional neural networks which depend upon the architecture to detect edges and objects in the same way the human

  2. Multi-scale feature learning on pixels and super-pixels for seminal vesicles MRI segmentation

    NASA Astrophysics Data System (ADS)

    Gao, Qinquan; Asthana, Akshay; Tong, Tong; Rueckert, Daniel; Edwards, Philip "Eddie"

    2014-03-01

    We propose a learning-based approach to segment the seminal vesicles (SV) via random forest classifiers. The proposed discriminative approach relies on the decision forest using high-dimensional multi-scale context-aware spatial, textural and descriptor-based features at both pixel and super-pixel level. After affine transformation to a template space, the relevant high-dimensional multi-scale features are extracted and random forest classifiers are learned based on the masked region of the seminal vesicles from the most similar atlases. Using these classifiers, an intermediate probabilistic segmentation is obtained for the test images. Then, a graph-cut based refinement is applied to this intermediate probabilistic representation of each voxel to get the final segmentation. We apply this approach to segment the seminal vesicles from 30 MRI T2 training images of the prostate, which presents a particularly challenging segmentation task. The results show that the multi-scale approach and the augmentation of the pixel based features with the super-pixel based features enhance the discriminative power of the learnt classifier, which leads to a better quality segmentation in some very difficult cases. The results are compared to the radiologist labeled ground truth using leave-one-out cross-validation. Overall, a Dice metric of 0.7249 and a Hausdorff surface distance of 7.0803 mm are achieved for this difficult task.

  3. Painting with pixels.

    PubMed

    Kyte, S

    1989-04-01

    Two decades ago the subject of computer graphics was regarded as pure science fiction, more within the realms of Star Trek fantasy than of everyday use, but today it is difficult to avoid its influence. Television programmes abound with slick moving, twisting, distorting images, the printing media throws colourful shapes and forms off the page at you, and computer games explode noisily into our living rooms. In a very short space of time computer graphics have risen from being a toy of the affluent minority to a working tool of the cost-conscious majority. Even the most purist of artists have realized that in order to survive in an increasingly competitive world they must inevitably take the plunge into the world of electronic imagery.

  4. Applications of convolution voltammetry in electroanalytical chemistry.

    PubMed

    Bentley, Cameron L; Bond, Alan M; Hollenkamp, Anthony F; Mahon, Peter J; Zhang, Jie

    2014-02-18

    The robustness of convolution voltammetry for determining accurate values of the diffusivity (D), bulk concentration (C(b)), and stoichiometric number of electrons (n) has been demonstrated by applying the technique to a series of electrode reactions in molecular solvents and room temperature ionic liquids (RTILs). In acetonitrile, the relatively minor contribution of nonfaradaic current facilitates analysis with macrodisk electrodes, thus moderate scan rates can be used without the need to perform background subtraction to quantify the diffusivity of iodide [D = 1.75 (±0.02) × 10(-5) cm(2) s(-1)] in this solvent. In the RTIL 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide, background subtraction is necessary at a macrodisk electrode but can be avoided at a microdisk electrode, thereby simplifying the analytical procedure and allowing the diffusivity of iodide [D = 2.70 (±0.03) × 10(-7) cm(2) s(-1)] to be quantified. Use of a convolutive procedure which simultaneously allows D and nC(b) values to be determined is also demonstrated. Three conditions under which a technique of this kind may be applied are explored and are related to electroactive species which display slow dissolution kinetics, undergo a single multielectron transfer step, or contain multiple noninteracting redox centers using ferrocene in an RTIL, 1,4-dinitro-2,3,5,6-tetramethylbenzene, and an alkynylruthenium trimer, respectively, as examples. The results highlight the advantages of convolution voltammetry over steady-state techniques such as rotating disk electrode voltammetry and microdisk electrode voltammetry, as it is not restricted by the mode of diffusion (planar or radial), hence removing limitations on solvent viscosity, electrode geometry, and voltammetric scan rate.
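
    In its usual formulation (recalled here for context, not quoted from the paper), convolution voltammetry works with the semi-integrated current

        M(t) = \frac{1}{\sqrt{\pi}} \int_0^t \frac{I(u)}{\sqrt{t - u}}\, du ,

    whose limiting (plateau) value M_L = n F A C^{b} \sqrt{D} links the measurement to the electron count n, the electrode area A, the bulk concentration and the diffusivity.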

  5. Zebrafish tracking using convolutional neural networks

    PubMed Central

    XU, Zhiping; Cheng, Xi En

    2017-01-01

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable. PMID:28211462

  6. Zebrafish tracking using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xu, Zhiping; Cheng, Xi En

    2017-02-01

    Keeping identity for a long term after occlusion is still an open problem in the video tracking of zebrafish-like model animals, and accurate animal trajectories are the foundation of behaviour analysis. We utilize the highly accurate object recognition capability of a convolutional neural network (CNN) to distinguish fish of the same congener, even though these animals are indistinguishable to the human eye. We used data augmentation and an iterative CNN training method to optimize the accuracy for our classification task, achieving surprisingly accurate trajectories for zebrafish groups of different sizes and ages over different time spans. This work will make further behaviour analysis more reliable.

  7. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  8. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summary: Program title: QCDNUM, version 17.00; Catalogue identifier: AEHV_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: GNU Public Licence; No. of lines in distributed program, including test data, etc.: 45 736; No. of bytes in distributed program, including test data, etc.: 911 569; Distribution format: tar.gz; Programming language: Fortran-77; Computer: All; Operating system: All; RAM: Typically 3 Mbytes; Classification: 11.5. Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline
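
    Schematically, and in the non-singlet case only (this is the textbook DGLAP form, stated here for context rather than copied from the program documentation), the evolution and convolution involved are of the type

        \mu^2 \frac{\partial q(x, \mu^2)}{\partial \mu^2} = \frac{\alpha_s(\mu^2)}{2\pi} \int_x^1 \frac{dz}{z}\, P(z)\, q\!\left(\frac{x}{z}, \mu^2\right) ,

    with structure functions obtained by a further Mellin convolution of the evolved densities with perturbative coefficient functions.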

  9. Fast Convolution Algorithms and Associated VHSIC Architectures.

    DTIC Science & Technology

    1983-05-23

    Idenftfy by block number) Finite field, Mersenne prime , Fermat number, primitive element, number- theoretic transform, cyclic convolution, polynomial...elements of order 2 P+p and 2k n in the finite field GF(q 2), where q = 2P-l is a Mersenne prime , p is a prime number, and n is a divisor of 2pl...Abstract - A high-radix f.f.t. algorithm for computing transforms over GF(q2), where q is a Mersenne prime , is developed to implement fast circular

  10. Bacterial colony counting by Convolutional Neural Networks.

    PubMed

    Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto

    2015-01-01

    Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, nevertheless fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging, due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, that scored an overall accuracy of 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, becoming a promising approach to many related applications.

  12. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    PubMed

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2017-04-27

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
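
    In one dimension, atrous convolution with rate r (stated here as the generic definition rather than quoted from the paper) reads

        y[i] = \sum_{k} x[i + r\, k]\, w[k] ,

    so that r = 1 recovers ordinary convolution while larger r enlarges the field of view of the filter w without adding parameters or computation.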

  13. Pixel response function experimental techniques and analysis of active pixel sensor star cameras

    NASA Astrophysics Data System (ADS)

    Fumo, Patrick; Waldron, Erik; Laine, Juha-Pekka; Evans, Gary

    2015-04-01

    The pixel response function (PRF) of a pixel within a focal plane is defined as the pixel intensity with respect to the position of a point source within the pixel. One of its main applications is in the field of astrometry, which is a branch of astronomy that deals with positioning data of a celestial body for tracking movement or adjusting the attitude of a spacecraft. Complementary metal oxide semiconductor (CMOS) image sensors generally offer better radiation tolerance to protons and heavy ions than CCDs making them ideal candidates for space applications aboard satellites, but like all image sensors they are limited by their spatial frequency response, better known as the modulation transfer function. Having a well-calibrated PRF allows us to eliminate some of the uncertainty in the spatial response of the system providing better resolution and a more accurate centroid estimation. This paper describes the experimental setup for determining the PRF of a CMOS image sensor and analyzes the effect on the oversampled point spread function (PSF) of an image intensifier, as well as the effects due to the wavelength of light used as a point source. It was found that using electron bombarded active pixel sensor (EBAPS) intensification technology had a significant impact on the PRF of the camera being tested as a result of an increase in the amount of carrier diffusion between collection sites generated by the intensification process. Taking the full width at half maximum (FWHM) of the resulting data, it was found that the intensified version of a CMOS camera exhibited a PSF roughly 16.42% larger than its nonintensified counterpart.

  14. Edge pixel response studies of edgeless silicon sensor technology for pixellated imaging detectors

    NASA Astrophysics Data System (ADS)

    Maneuski, D.; Bates, R.; Blue, A.; Buttar, C.; Doonan, K.; Eklund, L.; Gimenez, E. N.; Hynds, D.; Kachkanov, S.; Kalliopuska, J.; McMullen, T.; O'Shea, V.; Tartoni, N.; Plackett, R.; Vahanen, S.; Wraight, K.

    2015-03-01

    Silicon sensor technologies with reduced dead area at the sensor's perimeter are under development at a number of institutes. Several fabrication methods for sensors which are sensitive close to the physical edge of the device are under investigation utilising techniques such as active-edges, passivated edges and current-terminating rings. Such technologies offer the goal of a seamlessly tiled detection surface with minimum dead space between the individual modules. In order to quantify the performance of different geometries and different bulk and implant types, characterisation of several sensors fabricated using active-edge technology was performed at the B16 beam line of the Diamond Light Source. The sensors were fabricated by VTT and bump-bonded to Timepix ROICs. They were 100 and 200 μm thick sensors, with the last pixel-to-edge distance of either 50 or 100 μm. The sensors were fabricated as either n-on-n or n-on-p type devices. Using 15 keV monochromatic X-rays with a beam spot of 2.5 μm, the performance at the outer edge and corner pixels of the sensors was evaluated at three bias voltages. The results indicate a significant change in the charge collection properties between the edge pixel and the 5th pixel from the edge (up to 275 μm) for the 200 μm thick n-on-n sensor. The edge pixel performance of the 100 μm thick n-on-p sensors is affected only for the last two pixels (up to 110 μm) subject to biasing conditions. Imaging characteristics of all sensor types investigated are stable over time and the non-uniformities can be minimised by flat-field corrections. The results from the synchrotron tests combined with lab measurements are presented along with an explanation of the observed effects.

  15. Sink Pixels in ACS/WFC

    NASA Astrophysics Data System (ADS)

    Ryon, J. E.; Grogin, N.

    2017-02-01

    We investigate the properties of sink pixels in the Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) detector. These pixels likely contain extra charge traps and therefore appear anomalously low in images with relatively high backgrounds. We identify sink pixels in the average short (0.5-second) dark image from each monthly anneal cycle, which, since January 2015, have been post-flashed to a background of about 60 e-. Sink pixels can affect the pixels immediately above and below them in the same column, resulting in high downstream pixels and low trails of upstream pixels. We determine typical trail lengths for sink pixels of different depths at various background levels. We create a reference image, one for each anneal cycle since January 2015, that will be used to flag sink pixels and the adjacent affected pixels in science images.

  16. Convolutional fountain distribution over fading wireless channels

    NASA Astrophysics Data System (ADS)

    Usman, Mohammed

    2012-08-01

    Mobile broadband has opened the possibility of a rich variety of services to end users. Broadcast/multicast of multimedia data is one such service which can be used to deliver multimedia to multiple users economically. However, the radio channel poses serious challenges due to its time-varying properties, resulting in each user experiencing different channel characteristics, independent of other users. Conventional methods of achieving reliability in communication, such as automatic repeat request and forward error correction do not scale well in a broadcast/multicast scenario over radio channels. Fountain codes, being rateless and information additive, overcome these problems. Although the design of fountain codes makes it possible to generate an infinite sequence of encoded symbols, the erroneous nature of radio channels mandates the need for protecting the fountain-encoded symbols, so that the transmission is feasible. In this article, the performance of fountain codes in combination with convolutional codes, when used over radio channels, is presented. An investigation of various parameters, such as goodput, delay and buffer size requirements, pertaining to the performance of fountain codes in a multimedia broadcast/multicast environment is presented. Finally, a strategy for the use of 'convolutional fountain' over radio channels is also presented.

  17. Convolution Inequalities for the Boltzmann Collision Operator

    NASA Astrophysics Data System (ADS)

    Alonso, Ricardo J.; Carneiro, Emanuel; Gamba, Irene M.

    2010-09-01

    We study integrability properties of a general version of the Boltzmann collision operator for hard and soft potentials in n-dimensions. A reformulation of the collisional integrals allows us to write the weak form of the collision operator as a weighted convolution, where the weight is given by an operator invariant under rotations. Using a symmetrization technique in L^p we prove a Young's inequality for hard potentials, which is sharp for Maxwell molecules in the L^2 case. Further, we find a new Hardy-Littlewood-Sobolev type of inequality for Boltzmann collision integrals with soft potentials. The same method extends to radially symmetric, non-increasing potentials that lie in some L^s_{weak} or L^s. The method we use resembles a Brascamp, Lieb and Luttinger approach for multilinear weighted convolution inequalities and follows a weak formulation setting. Consequently, it is closely connected to the classical analysis of Young and Hardy-Littlewood-Sobolev inequalities. In all cases, the inequality constants are explicitly given by formulas depending on integrability conditions of the angular cross section (in the spirit of Grad cut-off). As an additional application of the technique we also obtain estimates with exponential weights for hard potentials in both conservative and dissipative interactions.
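
    The classical inequality being sharpened here is Young's convolution inequality,

        \lVert f * g \rVert_{L^r} \le \lVert f \rVert_{L^p}\, \lVert g \rVert_{L^q}, \qquad 1 + \frac{1}{r} = \frac{1}{p} + \frac{1}{q}, \quad p, q, r \ge 1 ,

    with the paper's contribution being explicit, and in some cases sharp, constants for the weighted convolutions that arise from the collision operator.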

  18. New quantum MDS-convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Li, Fengwei; Yue, Qin

    2015-12-01

    In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.

  19. Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Hunter, Craig A.

    1999-01-01

    An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring is convolution location, Mach number, boattail angle, and NPR dependent. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.

  20. WFC3 Pixel Area Maps

    NASA Astrophysics Data System (ADS)

    Kalirai, J. S.; Cox, C.; Dressel, L.; Fruchter, A.; Hack, W.; Kozhurina-Platais, V.; Mack, J.

    2010-04-01

    We present the pixel area maps (PAMs) for the WFC3/UVIS and WFC3/IR detectors, and discuss the normalization of these images. HST processed flt images suffer from geometric distortion and therefore have pixel areas that vary on the sky. The counts (electrons) measured for a source on these images depends on the position of the source on the detector, an effect that is implicitly corrected when these images are multidrizzled into drz files. The flt images can be multiplied by the PAMs to yield correct and uniform counts for a given source irrespective of its location on the image. To ensure consistency between the count rate measured for sources in drz images and near the center of flt images, we set the normalization of the PAMs to unity at a reference pixel near the center of the UVIS mosaic and IR detector, and set the SCALE in the IDCTAB equal to the square root of the area of this reference pixel. The implications of this choice for photometric measurements are discussed.
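
    In practice the correction described is a single element-wise multiplication; a minimal Python sketch follows, where the array names are placeholders and the flt and PAM images are assumed to be on the same pixel grid.

        import numpy as np

        def apply_pam(flt_image, pixel_area_map):
            """Multiply an flt image by its pixel area map so that source counts
            are uniform across the detector (PAM normalised to 1 at the reference pixel)."""
            return np.asarray(flt_image) * np.asarray(pixel_area_map)

        # Aperture photometry on the returned array then yields counts consistent
        # with measurements made near the reference pixel or on the drizzled (drz) image.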

  1. Pixel History for Advanced Camera for Surveys Wide Field Channel

    NASA Astrophysics Data System (ADS)

    Borncamp, D.; Grogin, N.; Bourque, M.; Ogaz, S.

    2017-06-01

    Excess thermal energy present in a Charge-Coupled Device (CCD) can result in additional electrical current. This excess charge is trapped within the silicon lattice structure of the CCD electronics. It can persist through multiple exposures and have an adverse effect on science performance of the detectors unless properly flagged and corrected for. The traditional way to correct for this extra charge is to take occasional long-exposure images with the camera shutter closed. These images, generally referred to as "dark" images, allow for the measurement of the thermal-electron contamination present in each pixel of the CCD lattice. This so-called "dark current" can then be subtracted from the science images by re-scaling the dark to the corresponding exposure times. Pixels that have signal above a certain threshold are traditionally marked as "hot" and flagged in the data quality array. Many users will discard these because of the extra current. However, these pixels are not necessarily unusable on account of an unreliable dark subtraction: if we find these pixels to be stable over an anneal period, we can properly subtract the charge, and the extra Poisson noise from this dark current will be propagated into the error arrays. Here we present the results of a pixel history study that analyzes every individual pixel of the Hubble Space Telescope's (HST) Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) CCDs over time and allows pixels that were previously flagged as unusable to be brought back into the science image as a reliable pixel.

  2. Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.

    PubMed

    Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian

    2009-04-01

    Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
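
    The intensity linear-interpolation idea can be illustrated along one axis as follows. This is a sketch of the general approach only (the 1-D profile is assumed to contain the whole fiducial with background on both sides), not the authors' implementation, and the default pixel size is taken from the value quoted above.

        import numpy as np

        def edge_midpoint(profile, pixel_size=1.1):
            """Locate a fiducial centre along one axis as the midpoint of its two
            half-maximum edges, each found by linear interpolation between samples."""
            profile = np.asarray(profile, dtype=float)
            half = 0.5 * (profile.max() + profile.min())
            above = np.flatnonzero(profile >= half)
            i0, i1 = above[0], above[-1]
            left = i0 - (profile[i0] - half) / (profile[i0] - profile[i0 - 1])
            right = i1 + (profile[i1] - half) / (profile[i1] - profile[i1 + 1])
            return 0.5 * (left + right) * pixel_size   # centre position in mm from pixel 0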

  3. Object class segmentation of RGB-D video using recurrent convolutional neural networks.

    PubMed

    Pavel, Mircea Serban; Schulz, Hannes; Behnke, Sven

    2017-04-01

    Object class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (DNN) are able to learn and take advantage of local spatial correlations required for this task. They are, however, restricted by their small, fixed-sized filters, which limits their ability to learn long-range dependencies. Recurrent Neural Networks (RNN), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property is especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, a novel RNN architecture for object class segmentation is presented. We investigate several ways to train such a network. We evaluate our models on the challenging NYU Depth v2 dataset for object class segmentation and obtain competitive results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Comparison of pixel and sub-pixel based techniques to separate Pteronia incana invaded areas using multi-temporal high resolution imagery

    NASA Astrophysics Data System (ADS)

    Odindi, John; Kakembo, Vincent

    2009-08-01

    Remote Sensing using high resolution imagery (HRI) is fast becoming an important tool in detailed land-cover mapping and analysis of plant species invasion. In this study, we sought to test the separability of Pteronia incana invader species by pixel content aggregation and pixel content de-convolution using multi-temporal infrared HRI. An invaded area in Eastern Cape, South Africa was flown in 2001, 2004 and 2006 and HRI of 1x1m resolution captured using a DCS 420 colour infrared camera. The images were separated into bands, geo-rectified and radiometrically corrected using Idrisi Kilimanjaro GIS. Value files were extracted from the bands in order to compare spectral values for P. incana, green vegetation and bare surfaces using the pixel based Perpendicular Vegetation Index (PVI), while Constrained Linear Spectral Unmixing (CLSU) surface endmembers were used to generate sub-pixel land surface image fractions. Spectroscopy was used to validate spectral trends identified from HRI. The PVI successfully separated the multi-temporal imagery surfaces and was consistent with the unmixed surface image fractions from CLSU. Separability between the respective surfaces was also achieved using reflectance measurements.
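
    The perpendicular vegetation index used here is conventionally defined (a standard form, not quoted from the study) as the distance of a pixel from the bare-soil line in red/near-infrared space,

        \mathrm{PVI} = \frac{\rho_{NIR} - a\,\rho_{red} - b}{\sqrt{1 + a^2}} ,

    where a and b are the slope and intercept of the soil line, so that green vegetation, P. incana and bare surfaces separate according to their perpendicular distance from that line.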

  5. Robust smile detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Celona, Luigi; Schettini, Raimondo

    2016-11-01

    We present a fully automated approach for smile detection. Faces are detected using a multiview face detector and aligned and scaled using automatically detected eye locations. Then, we use a convolutional neural network (CNN) to determine whether it is a smiling face or not. To this end, we investigate different shallow CNN architectures that can be trained even when the amount of learning data is limited. We evaluate our complete processing pipeline on the largest publicly available image database for smile detection in an uncontrolled scenario. We investigate the robustness of the method to different kinds of geometric transformations (rotation, translation, and scaling) due to imprecise face localization, and to several kinds of distortions (compression, noise, and blur). To the best of our knowledge, this is the first time that this type of investigation has been performed for smile detection. Experimental results show that our proposal outperforms state-of-the-art methods on both high- and low-quality images.

  6. Convolutional neural network for pottery retrieval

    NASA Astrophysics Data System (ADS)

    Benhabiles, Halim; Tabia, Hedi

    2017-01-01

    The effectiveness of the convolutional neural network (CNN) has already been demonstrated in many challenging tasks of computer vision, such as image retrieval, action recognition, and object classification. This paper specifically exploits CNN to design local descriptors for content-based retrieval of complete or nearly complete three-dimensional (3-D) vessel replicas. Based on vector quantization, the designed descriptors are clustered to form a shape vocabulary. Then, each 3-D object is associated to a set of clusters (words) in that vocabulary. Finally, a weighted vector counting the occurrences of every word is computed. The reported experimental results on the 3-D pottery benchmark show the superior performance of the proposed method.

  7. Convolution models for induced electromagnetic responses

    PubMed Central

    Litvak, Vladimir; Jha, Ashwani; Flandin, Guillaume; Friston, Karl

    2013-01-01

    In Kilner et al. [Kilner, J.M., Kiebel, S.J., Friston, K.J., 2005. Applications of random field theory to electrophysiology. Neurosci. Lett. 374, 174–178.] we described a fairly general analysis of induced responses—in electromagnetic brain signals—using the summary statistic approach and statistical parametric mapping. This involves localising induced responses—in peristimulus time and frequency—by testing for effects in time–frequency images that summarise the response of each subject to each trial type. Conventionally, these time–frequency summaries are estimated using post-hoc averaging of epoched data. However, post-hoc averaging of this sort fails when the induced responses overlap or when there are multiple response components that have variable timing within each trial (for example stimulus and response components associated with different reaction times). In these situations, it is advantageous to estimate response components using a convolution model of the sort that is standard in the analysis of fMRI time series. In this paper, we describe one such approach, based upon ordinary least squares deconvolution of induced responses to input functions encoding the onset of different components within each trial. There are a number of fundamental advantages to this approach: for example, (i) one can disambiguate induced responses to stimulus onsets and variably timed responses; (ii) one can test for the modulation of induced responses—over peristimulus time and frequency—by parametric experimental factors; and (iii) one can gracefully handle confounds—such as slow drifts in power—by including them in the model. In what follows, we consider optimal forms for convolution models of induced responses, in terms of impulse response basis function sets and illustrate the utility of deconvolution estimators using simulated and real MEG data. PMID:22982359

  8. Image quality of mixed convolution kernel in thoracic computed tomography

    PubMed Central

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-01-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernel. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman Test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT. PMID:27858910

  9. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernel. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman Test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  10. Space suit

    NASA Technical Reports Server (NTRS)

    Shepard, L. F.; Durney, G. P.; Case, M. C.; Kenneway, A. J., III; Wise, R. C.; Rinehart, D.; Bessette, R. J.; Pulling, R. C. (Inventor)

    1973-01-01

    A pressure suit for high altitude flights, particularly space missions, is reported. The suit is designed for astronauts in the Apollo space program and may be worn both inside and outside a space vehicle, as well as on the lunar surface. It comprises an integrated assembly of inner comfort liner, intermediate pressure garment, and outer thermal protective garment with removable helmet and gloves. The pressure garment comprises an inner convoluted sealing bladder and outer fabric restraint to which are attached a plurality of cable restraint assemblies. It provides versatility in combination with improved sealing and increased mobility for internal pressures suitable for life support in the near vacuum of outer space.

  11. Single-pixel hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Suo, Jinli; Wang, Yuwang; Bian, Liheng; Dai, Qionghai

    2016-10-01

    Conventional multispectral imaging methods detect photons of a 3D hyperspectral data cube separately either in the spatial or spectral dimension using array detectors, and are thus photon inefficient and spectrum range limited. Besides, they are usually bulky and highly expensive. To address these issues, this paper presents single-pixel multispectral imaging techniques, which are of high sensitivity, wide spectrum range, low cost and light weight. Two mechanisms are proposed, and experimental validation is also reported.
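
    As a rough illustration of the single-pixel principle (not the specific mechanisms proposed in the paper), the sketch below simulates measurements of a scene through random binary patterns on one detector and recovers the scene by least squares; the scene size and pattern count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                   # scene is n x n pixels
scene = np.zeros((n, n))
scene[4:12, 6:10] = 1.0

m = 2 * n * n                            # number of pattern measurements
patterns = rng.integers(0, 2, size=(m, n * n)).astype(float)
readings = patterns @ scene.ravel()      # single-pixel detector readings

recon, *_ = np.linalg.lstsq(patterns, readings, rcond=None)
print(np.abs(recon.reshape(n, n) - scene).max())   # ~0 for noiseless measurements
```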

  12. SAR Image Complex Pixel Representations

    SciTech Connect

    Doerry, Armin W.

    2015-03-01

    Complex pixel values for Synthetic Aperture Radar (SAR) images of uniformly distributed clutter can be represented as either real/imaginary (also known as I/Q) values, or as Magnitude/Phase values. Generally, these component values are integers with a limited number of bits. For clutter energy well below full-scale, Magnitude/Phase offers lower quantization noise than I/Q representation. Further improvement can be had with companding of the Magnitude value.
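
    The toy comparison below illustrates the quantization-noise argument, assuming simple uniform quantizers with an arbitrary bit depth and full-scale value rather than the representations analysed in the report.

```python
import numpy as np

rng = np.random.default_rng(1)
bits, full_scale = 8, 1.0

def quantize(x, step):
    return np.round(x / step) * step

# Circular Gaussian clutter with energy well below full scale.
z = rng.normal(scale=0.02, size=10000) + 1j * rng.normal(scale=0.02, size=10000)

# I/Q representation: quantize real and imaginary parts over [-full_scale, full_scale].
iq_step = 2 * full_scale / 2 ** bits
z_iq = quantize(z.real, iq_step) + 1j * quantize(z.imag, iq_step)

# Magnitude/Phase representation: quantize magnitude over [0, full_scale] and phase over 2*pi.
mag = quantize(np.abs(z), full_scale / 2 ** bits)
phase = quantize(np.angle(z), 2 * np.pi / 2 ** bits)
z_mp = mag * np.exp(1j * phase)

print("I/Q RMS error:", np.sqrt(np.mean(np.abs(z - z_iq) ** 2)))
print("M/P RMS error:", np.sqrt(np.mean(np.abs(z - z_mp) ** 2)))
```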

  13. Convolution kernel design and efficient algorithm for sampling density correction.

    PubMed

    Johnson, Kenneth O; Pipe, James G

    2009-02-01

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, attempting to minimize the error in a fully reconstructed image. The resulting weights obtained using this new kernel are compared with various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.
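
    A hedged sketch of convolution-based density compensation in general: a fixed-point iteration divides the weights by their convolution with a kernel evaluated at the sample locations. The Gaussian kernel and its width are assumptions, not the kernel designed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.uniform(-0.5, 0.5, size=(300, 2))     # non-Cartesian k-space locations

def kernel(dist, sigma=0.03):
    return np.exp(-0.5 * (dist / sigma) ** 2)       # assumed Gaussian kernel

# Sample-to-sample kernel matrix (brute force; fine for a few hundred samples).
dists = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
K = kernel(dists)

weights = np.ones(len(samples))
for _ in range(20):                                 # fixed-point iteration
    weights = weights / (K @ weights)

print(weights.min(), weights.max())                 # sparse regions receive larger weights
```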

  14. CMOS digital pixel sensors: technology and applications

    NASA Astrophysics Data System (ADS)

    Skorka, Orit; Joseph, Dileepan

    2014-04-01

    CMOS active pixel sensor technology, which is widely used these days for digital imaging, is based on analog pixels. Transition to digital pixel sensors can boost signal-to-noise ratios and enhance image quality, but can increase pixel area to dimensions that are impractical for the high-volume market of consumer electronic devices. There are two main approaches to digital pixel design. The first uses digitization methods that largely rely on photodetector properties and so are unique to imaging. The second is based on adaptation of a classical analog-to-digital converter (ADC) for in-pixel data conversion. Imaging systems for medical, industrial, and security applications are emerging lower-volume markets that can benefit from these in-pixel ADCs. With these applications, larger pixels are typically acceptable, and imaging may be done in invisible spectral bands.

  15. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh (Inventor); Cole, David (Inventor); Smith, Roger M (Inventor); Hancock, Bruce R. (Inventor)

    2013-01-01

    The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage, where a first image is read out, followed by resetting only a subset of pixels in the array to a second voltage, where a second image is read out, where the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.
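
    An illustrative simulation of the difference-image idea, assuming a simple nearest-neighbour coupling model; the coupling coefficient, array size and reset grid spacing are arbitrary, not values from the patent.

```python
import numpy as np
from scipy.ndimage import convolve

v1, v2, alpha = 0.0, 1.0, 0.02           # reset levels and coupling per neighbour
frame1 = np.full((64, 64), v1)           # all pixels reset to the first voltage

step = np.zeros((64, 64))
step[::8, ::8] = v2 - v1                 # only a sparse subset reset to the second voltage

# Nearest-neighbour capacitive coupling (centre pixel keeps most of its step).
ipc = np.array([[0.0,   alpha,          0.0],
                [alpha, 1 - 4 * alpha,  alpha],
                [0.0,   alpha,          0.0]])
frame2 = frame1 + convolve(step, ipc, mode="constant")

diff = frame2 - frame1                   # difference image reveals the coupling
print(diff[0, 0], diff[0, 1])            # stepped pixel vs. coupled neighbour
```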

  16. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh (Inventor); Cole, David (Inventor); Smith, Roger M. (Inventor); Hancock, Bruce R. (Inventor)

    2017-01-01

    The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage, where a first image is read out, followed by resetting only a subset of pixels in the array to a second voltage, where a second image is read out, where the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.

  17. Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)

    NASA Astrophysics Data System (ADS)

    Long, A. J.

    2009-12-01

    Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be equally effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
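
    A sketch of the modeling chain described above, assuming two superposed lognormal pulses for the quick-flow and slow-flow IRF and a backward-in-time exponential filter on daily rainfall; all parameter values are illustrative, not calibrated to the Edwards or Madison aquifers.

```python
import numpy as np

def lognormal_irf(t, mu, sigma):
    """Lognormal impulse-response pulse over time t (days)."""
    irf = np.zeros_like(t)
    pos = t > 0
    irf[pos] = np.exp(-(np.log(t[pos]) - mu) ** 2 / (2 * sigma ** 2)) \
               / (t[pos] * sigma * np.sqrt(2 * np.pi))
    return irf

t = np.arange(0.0, 2000.0)                           # daily time axis
irf = 0.6 * lognormal_irf(t, mu=3.0, sigma=0.6)      # quick-flow (conduit) component
irf += 0.4 * lognormal_irf(t, mu=6.5, sigma=0.4)     # slow-flow (diffuse) component, years later

rng = np.random.default_rng(3)
rain = rng.exponential(2.0, size=t.size) * (rng.random(t.size) < 0.2)

tau = 30.0                                           # soil-moisture memory (days)
memory = np.exp(-np.arange(200) / tau)
infiltration = np.convolve(rain, memory / memory.sum(), mode="full")[:t.size]

water_level = np.convolve(infiltration, irf, mode="full")[:t.size]
print(water_level.max())
```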

  18. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance

  19. A robust sub-pixel edge detection method of infrared image based on tremor-based retinal receptive field model

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang

    2008-03-01

    Because of the complex thermal objects in an infrared image, the prevalent image edge detection operators are often suited only to certain scenes and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when a convolution-based receptive field architecture is assumed. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To handle the blurred edges of an infrared image, orthogonal polynomial interpolation and sub-pixel edge detection in the neighborhood of each rough edge pixel are then applied to locate the rough edges at the sub-pixel level. Numerical simulations show that this method can locate the target edge accurately and robustly.
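
    A minimal Difference-of-Gaussians band-pass filter as a stand-in for the ON-center receptive-field model; the sigma values and test image are assumptions, and the eye-tremor and sub-pixel interpolation stages are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.zeros((128, 128))
image[32:96, 32:96] = 1.0                          # synthetic warm object

# ON-center style band-pass response: narrow center minus wider surround.
dog = gaussian_filter(image, sigma=1.0) - gaussian_filter(image, sigma=2.0)
edges = np.abs(dog) > 0.1 * np.abs(dog).max()      # coarse (pixel-level) edge mask
print(int(edges.sum()))
```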

  20. Sensitivity of landscape metrics to pixel size

    Treesearch

    J. D. Wickham; K. H. Riitters

    1995-01-01

    Analysis of diversity and evenness metrics using land cover data is becoming formalized in landscape ecology. Diversity and evenness metrics are dependent on the pixel size (scale) over which the data are collected. Aerial photography was interpreted for land cover and converted into four raster data sets with 4, 12, 28, and 80 m pixel sizes, representing pixel sizes...

  1. Toward Content Based Image Retrieval with Deep Convolutional Neural Networks.

    PubMed

    Sklan, Judah E S; Plassard, Andrew J; Fabbri, Daniel; Landman, Bennett A

    2015-03-19

    Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep Convolutional Neural Networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing dimensionality of an input scaled to 128×128 to an output encoded layer of 4×384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques.

  2. Steady-state modeling of current loss in a post-hole convolute driven by high power magnetically insulated transmission lines

    NASA Astrophysics Data System (ADS)

    Madrid, E. A.; Rose, D. V.; Welch, D. R.; Clark, R. E.; Mostrom, C. B.; Stygar, W. A.; Cuneo, M. E.; Gomez, M. R.; Hughes, T. P.; Pointon, T. D.; Seidel, D. B.

    2013-12-01

    Quasiequilibrium power flow in two radial magnetically insulated transmission lines (MITLs) coupled to a vacuum post-hole convolute is studied at 50TW-200TW using three-dimensional particle-in-cell simulations. The key physical dimensions in the model are based on the ZR accelerator [D. H. McDaniel, et al., Proceedings of 5th International Conference on Dense Z-Pinches, edited by J. Davis (AIP, New York, 2002), p. 23]. The voltages assumed for this study result in electron emission from all cathode surfaces. Electrons emitted from the MITL cathodes upstream of the convolute cause a portion of the MITL current to be carried by an electron sheath. Under the simplifying assumptions made by the simulations, it is found that the transition from the two MITLs to the convolute results in the loss of most of the sheath current to anode structures. The loss is quantified as a function of radius and correlated with Poynting vector stream lines which would be followed by individual electrons. For a fixed MITL-convolute geometry, the current loss, defined to be the difference between the total (i.e. anode) current in the system upstream of the convolute and the current delivered to the load, increases with both operating voltage and load impedance. It is also found that in the absence of ion emission, the convolute is efficient when the load impedance is much less than the impedance of the two parallel MITLs. The effects of space-charge-limited (SCL) ion emission from anode surfaces are considered for several specific cases. Ion emission from anode surfaces in the convolute is found to increase the current loss by a factor of 2-3. When SCL ion emission is allowed from anode surfaces in the MITLs upstream of the convolute, substantially higher current losses are obtained. Note that the results reported here are valid given the spatial resolution used for the simulations.

  3. X-ray micro-beam characterization of a small pixel spectroscopic CdTe detector

    NASA Astrophysics Data System (ADS)

    Veale, M. C.; Bell, S. J.; Seller, P.; Wilson, M. D.; Kachkanov, V.

    2012-07-01

    A small pixel, spectroscopic, CdTe detector has been developed at the Rutherford Appleton Laboratory (RAL) for X-ray imaging applications. The detector consists of 80 × 80 pixels on a 250 μm pitch with 50 μm inter-pixel spacing. Measurements with an 241Am γ-source demonstrated that 96% of all pixels have a FWHM of better than 1 keV while the majority of the remaining pixels have FWHM of less than 4 keV. Using the Diamond Light Source synchrotron, a 10 μm collimated beam of monochromatic 20 keV X-rays has been used to map the spatial variation in the detector response and the effects of charge sharing corrections on detector efficiency and resolution. The mapping measurements revealed the presence of inclusions in the detector and quantified their effect on the spectroscopic resolution of pixels.

  4. Missing pixels restoration for remote sensing images using adaptive search window and linear regression

    NASA Astrophysics Data System (ADS)

    Tai, Shen-Chuan; Chen, Peng-Yu; Chao, Chian-Yen

    2016-07-01

    The Consultative Committee for Space Data Systems proposed an efficient image compression standard capable of lossless compression (CCSDS-ICS). CCSDS-ICS is the most widely utilized standard for satellite communications. However, the original CCSDS-ICS is weak in terms of error resilience, with even a single incorrect bit possibly causing numerous missing pixels. A restoration algorithm based on the neighborhood similar pixel interpolator is proposed to fill in missing pixels. The linear regression model is used to generate the reference image from other panchromatic or multispectral images. Furthermore, an adaptive search window is utilized to sieve out similar pixels from the pixels in the search region defined in the neighborhood similar pixel interpolator. The experimental results show that the proposed methods are capable of reconstructing missing regions with good visual quality.

  5. Quantification and adjustment of pixel-locking in particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Hearst, R. J.; Ganapathisubramani, B.

    2015-10-01

    A quantification metric is provided to determine the degree to which a particle image velocimetry data set is pixel-locked. The metric is calculated by integrating the histogram equalization transfer function and normalizing by the worst-case scenario to return the percentage pixel-locked. When this metric is calculated for each position in the vector field, it is shown that pixel-locking is non-uniform across the field. Hence, pixel-locking adjustments should be made on a vector-by-vector basis rather than uniformly across a field, although the latter is the common practice. A methodology is provided to compensate for the effects of pixel-locking on a vector-by-vector basis. This includes applying a Gaussian filter directly to the images, processing the images with window deformation, ensuring the vector fields are in pixel displacements, applying histogram equalization calculated at each vector coordinate, and mapping the adjusted vector fields to physical space.
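
    One plausible reading of the metric, sketched below under assumptions: take the fractional part of the displacements, build the histogram-equalization transfer function (the empirical CDF), integrate its deviation from the uniform case, and normalize by the fully locked worst case. The exact normalization used in the paper may differ.

```python
import numpy as np

def locking_fraction(displacements, bins=50):
    frac = np.mod(displacements, 1.0)                    # fractional pixel part
    hist, edges = np.histogram(frac, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / hist.sum()                   # equalization transfer function
    x = 0.5 * (edges[:-1] + edges[1:])                   # bin centres
    deviation = np.mean(np.abs(cdf - x))                 # departure from the uniform case
    worst = np.mean(np.abs(1.0 - x))                     # fully locked (step CDF) worst case
    return deviation / worst

rng = np.random.default_rng(4)
unlocked = rng.uniform(0.0, 10.0, 5000)
locked = np.round(unlocked) + rng.normal(0.0, 0.05, 5000)    # vectors clustered at integers
print(locking_fraction(unlocked), locking_fraction(locked))  # ~0 versus a large fraction
```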

  6. Programmable convolution via the chirp Z-transform with CCD's

    NASA Technical Reports Server (NTRS)

    Buss, D. D.

    1977-01-01

    A technique of filtering by convolution in the frequency domain rather than in the time domain presents a possible solution to the problem of programmable transversal filters. The process is accomplished through utilization of the chirp z-transform (CZT) with charge-coupled devices.
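
    The signal flow behind the technique, illustrated with an ordinary FFT in place of a CCD chirp-Z transform: the programmable tap weights enter as a multiplication in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=256)                  # input signal
h = np.zeros(256)
h[:8] = 1.0 / 8.0                         # programmable tap weights (moving average)

# Convolution realized as multiplication in the frequency domain.
y_freq = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Direct time-domain convolution for comparison (circular wrap-around
# affects only the first len(taps)-1 samples).
y_time = np.convolve(x, h[:8], mode="full")[:256]
print(np.max(np.abs(y_freq[8:] - y_time[8:])))   # agreement away from the wrap-around
```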

  7. Model Convolution: A Computational Approach to Digital Image Interpretation.

    PubMed

    Gardner, Melissa K; Sprague, Brian L; Pearson, Chad G; Cosgrove, Benjamin D; Bicek, Andrew D; Bloom, Kerry; Salmon, E D; Odde, David J

    2010-06-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called "model-convolution," which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory.
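
    A minimal model-convolution sketch following the idea above: ideal point-fluorophore positions are convolved with a PSF and degraded with noise to produce a simulated image for comparison with experiment. The Gaussian PSF and noise levels below stand in for the experimentally measured ones.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(6)
truth = np.zeros((128, 128))
rows, cols = rng.integers(20, 108, size=(2, 40))
truth[rows, cols] = 1000.0                  # ideal fluorophore positions and intensities

psf_sigma = 2.0                             # stand-in for the measured point-spread function
background, read_noise = 100.0, 10.0        # stand-ins for measured noise properties

simulated = gaussian_filter(truth, psf_sigma) + background
simulated += rng.normal(0.0, read_noise, simulated.shape)
print(simulated.mean(), simulated.std())    # compare such statistics against experiment
```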

  8. A fast computation of complex convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    The cyclic convolution of complex values was obtained by a hybrid transform that is a combination of a Winograd transform and a fast complex integer transform. This new hybrid algorithm requires fewer multiplications than any previously known algorithm.

  9. Programmable convolution via the chirp Z-transform with CCD's

    NASA Technical Reports Server (NTRS)

    Buss, D. D.

    1977-01-01

    A technique of filtering by convolution in the frequency domain rather than in the time domain presents a possible solution to the problem of programmable transversal filters. The process is accomplished through utilization of the chirp z-transform (CZT) with charge-coupled devices.

  10. Model Convolution: A Computational Approach to Digital Image Interpretation

    PubMed Central

    Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.

    2010-01-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132

  11. Determination of collisional linewidths and shifts by a convolution method

    NASA Technical Reports Server (NTRS)

    Pickett, H. M.

    1980-01-01

    A technique is described for fitting collisional linewidths and shifts from experimental spectral data. The method involves convoluting a low-pressure reference spectrum with a Lorentz shape function and comparing the convoluted spectrum with higher pressure spectra. Several experimental examples are given. One advantage of the method is that no extra information is needed about the instrument response function or spectral modulation. In addition, the method is shown to be relatively insensitive to the presence of reflections in the sample cell.

  12. WE-G-204-03: Photon-Counting Hexagonal Pixel Array CdTe Detector: Optimal Resampling to Square Pixels

    SciTech Connect

    Shrestha, S; Vedantham, S; Karellas, A; Bellazzini, R; Spandre, G; Brez, A

    2015-06-15

    Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65 mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 keV photons. The detector had hexagonal pixels with apothem of 30 microns resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways: the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at Nyquist frequencies alone resulted in 54 microns square pixels. For the photon-counting CdTe detector and after resampling to 53 microns square pixels using quadratic interpolation, the presampling MTF at the Nyquist frequency of 9.434 cycles/mm along the two directions was 0.501 and 0.507. Conclusion: Hexagonal pixel array photon-counting CdTe detector after resampling to square pixels

  13. Fully convolutional neural network for removing background in noisy images of uranium bearing particles

    SciTech Connect

    Tarolli, Jay G.; Naes, Benjamin E.; Butler, Lamar; Foster, Keeyahna; Gumbs, Caleb M.; Howard, Andrea L.; Willingham, David

    2017-01-01

    A fully convolutional neural network (FCN) was developed to supersede automatic or manual thresholding algorithms used for tabulating SIMS particle search data. The FCN was designed to perform a binary classification of pixels in each image belonging to a particle or not, thereby effectively removing background signal without manually or automatically determining an intensity threshold. Using 8,000 images from 28 different particle screening analyses, the FCN was trained to accurately predict pixels belonging to a particle with near 99% accuracy. Background eliminated images were then segmented using a watershed technique in order to determine isotopic ratios of particles. A comparison of the isotopic distributions of an independent data set segmented using the neural network, compared to a commercially available automated particle measurement (APM) program developed by CAMECA, highlighted the necessity for effective background removal to ensure that resulting particle identification is not only accurate, but preserves valuable signal that could be lost due to improper segmentation. The FCN approach improves the robustness of current state-of-the-art particle searching algorithms by reducing user input biases, resulting in an improved absolute signal per particle and decreased uncertainty of the determined isotope ratios.

  14. Single image depth estimation based on convolutional neural network and sparse connected conditional random field

    NASA Astrophysics Data System (ADS)

    Zhu, Leqing; Wang, Xun; Wang, Dadong; Wang, Huiyan

    2016-10-01

    Deep convolutional neural networks (DCNNs) have attracted significant interest in the computer vision community in the recent years and have exhibited high performance in resolving many computer vision problems, such as image classification. We address the pixel-level depth prediction from a single image by combining DCNN and sparse connected conditional random field (CRF). Owing to the invariance properties of DCNNs that make them suitable for high-level tasks, their outputs are generally not localized enough for detailed pixel-level regression. A multiscale DCNN and sparse connected CRF are combined to overcome this localization weakness. We have evaluated our framework using the well-known NYU V2 depth dataset, and the results show that the proposed method can improve the depth prediction accuracy both qualitatively and quantitatively, as compared to previous works. This finding shows the potential use of the proposed method in three-dimensional (3-D) modeling or 3-D video production from the given two-dimensional (2-D) images or 2-D videos.

  15. Making a trillion pixels dance

    NASA Astrophysics Data System (ADS)

    Singh, Vivek; Hu, Bin; Toh, Kenny; Bollepalli, Srinivas; Wagner, Stephan; Borodovsky, Yan

    2008-03-01

    In June 2007, Intel announced a new pixelated mask technology. This technology was created to address the problem caused by the growing gap between the lithography wavelength and the feature sizes patterned with it. As this gap has increased, the quality of the image has deteriorated. About a decade ago, Optical Proximity Correction (OPC) was introduced to bridge this gap, but as this gap continued to increase, one could not rely on the same basic set of techniques to maintain image quality. The computational lithography group at Intel sought to alleviate this problem by experimenting with additional degrees of freedom within the mask. This paper describes the resulting pixelated mask technology, and some of the computational methods used to create it. The first key element of this technology is a thick mask model. We realized very early in the development that, unlike traditional OPC methods, the pixelated mask would require a very accurate thick mask model. Whereas in the traditional methods, one can use the relatively coarse approximations such as the boundary layer method, use of such techniques resulted not just in incorrect sizing of parts of the pattern, but in whole features missing. We built on top of previously published domain decomposition methods, and incorporated limitations of the mask manufacturing process, to create an accurate thick mask model. Several additional computational techniques were invoked to substantially increase the speed of this method to a point that it was feasible for full chip tapeout. A second key element of the computational scheme was the comprehension of mask manufacturability, including the vital issue of the number of colors in the mask. While it is obvious that use of three or more colors will give the best image, one has to be practical about projecting mask manufacturing capabilities for such a complex mask. To circumvent this serious issue, we eventually settled on a two color mask - comprising plain glass and etched

  16. Radial Structure Scaffolds Convolution Patterns of Developing Cerebral Cortex

    PubMed Central

    Razavi, Mir Jalil; Zhang, Tuo; Chen, Hanbo; Li, Yujie; Platt, Simon; Zhao, Yu; Guo, Lei; Hu, Xiaoping; Wang, Xianqiao; Liu, Tianming

    2017-01-01

    Commonly-preserved radial convolution is a prominent characteristic of the mammalian cerebral cortex. Endeavors from multiple disciplines have been devoted for decades to explore the causes for this enigmatic structure. However, the underlying mechanisms that lead to consistent cortical convolution patterns still remain poorly understood. In this work, inspired by prior studies, we propose and evaluate a plausible theory that radial convolution during the early development of the brain is sculptured by radial structures consisting of radial glial cells (RGCs) and maturing axons. Specifically, the regionally heterogeneous development and distribution of RGCs controlled by Trnp1 regulate the convex and concave convolution patterns (gyri and sulci) in the radial direction, while the interplay of RGCs' effects on convolution and axons regulates the convex (gyral) convolution patterns. This theory is assessed by observations and measurements in literature from multiple disciplines such as neurobiology, genetics, biomechanics, etc., at multiple scales to date. Particularly, this theory is further validated by multimodal imaging data analysis and computational simulations in this study. We offer a versatile and descriptive study model that can provide reasonable explanations of observations, experiments, and simulations of the characteristic mammalian cortical folding. PMID:28860983

  17. Radial Structure Scaffolds Convolution Patterns of Developing Cerebral Cortex.

    PubMed

    Razavi, Mir Jalil; Zhang, Tuo; Chen, Hanbo; Li, Yujie; Platt, Simon; Zhao, Yu; Guo, Lei; Hu, Xiaoping; Wang, Xianqiao; Liu, Tianming

    2017-01-01

    Commonly-preserved radial convolution is a prominent characteristic of the mammalian cerebral cortex. Endeavors from multiple disciplines have been devoted for decades to explore the causes for this enigmatic structure. However, the underlying mechanisms that lead to consistent cortical convolution patterns still remain poorly understood. In this work, inspired by prior studies, we propose and evaluate a plausible theory that radial convolution during the early development of the brain is sculptured by radial structures consisting of radial glial cells (RGCs) and maturing axons. Specifically, the regionally heterogeneous development and distribution of RGCs controlled by Trnp1 regulate the convex and concave convolution patterns (gyri and sulci) in the radial direction, while the interplay of RGCs' effects on convolution and axons regulates the convex (gyral) convolution patterns. This theory is assessed by observations and measurements in literature from multiple disciplines such as neurobiology, genetics, biomechanics, etc., at multiple scales to date. Particularly, this theory is further validated by multimodal imaging data analysis and computational simulations in this study. We offer a versatile and descriptive study model that can provide reasonable explanations of observations, experiments, and simulations of the characteristic mammalian cortical folding.

  18. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
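
    A sketch of a PSF-matching kernel obtained by Wiener filtering with a tunable regularisation term, in the spirit of the algorithm above but not the pypher implementation itself; the source and target PSFs are toy Gaussians and the regularisation value is arbitrary.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape) - (np.array(shape)[:, None, None] - 1) / 2.0
    psf = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

source = gaussian_psf((65, 65), 2.0)        # PSF of the sharper band
target = gaussian_psf((65, 65), 4.0)        # broader PSF to homogenise to

S = np.fft.fft2(np.fft.ifftshift(source))
T = np.fft.fft2(np.fft.ifftshift(target))
reg = 1e-4 * np.abs(S).max() ** 2           # tunable regularisation parameter
kernel = np.real(np.fft.fftshift(np.fft.ifft2(np.conj(S) * T / (np.abs(S) ** 2 + reg))))

# Convolving the source PSF with the kernel should approximate the target PSF.
matched = np.real(np.fft.fftshift(np.fft.ifft2(S * np.fft.fft2(np.fft.ifftshift(kernel)))))
print(np.abs(matched - target).max())
```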

  19. Piano Transcription with Convolutional Sparse Lateral Inhibition

    DOE PAGES

    Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt Egon

    2017-02-08

    This paper extends our prior work on context-dependent piano transcription to estimate the length of the notes in addition to their pitch and onset. This approach employs convolutional sparse coding along with lateral inhibition constraints to approximate a musical signal as the sum of piano note waveforms (dictionary elements) convolved with their temporal activations. The waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. A dictionary containing multiple waveforms per pitch is generated by truncating a long waveform for each pitch to different lengths. During transcription, the dictionary elements are fixed and their temporal activations are estimated and post-processed to obtain the pitch, onset and note length estimation. A sparsity penalty promotes globally sparse activations of the dictionary elements, and a lateral inhibition term penalizes concurrent activations of different waveforms corresponding to the same pitch within a temporal neighborhood, to achieve note length estimation. Experiments on the MAPS dataset show that the proposed approach significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting in transcription accuracy.

  20. Event Discrimination using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Menon, Hareesh; Hughes, Richard; Daling, Alec; Winer, Brian

    2017-01-01

    Convolutional Neural Networks (CNNs) are computational models that have been shown to be effective at classifying different types of images. We present a method to use CNNs to distinguish events involving the production of a top quark pair and a Higgs boson from events involving the production of a top quark pair and several quark and gluon jets. To do this, we generate and simulate data using MADGRAPH and DELPHES for a general purpose LHC detector at 13 TeV. We produce images using a particle flow algorithm by binning the particles geometrically based on their position in the detector and weighting the bins by the energy of each particle within each bin, and by defining channels based on particle types (charged track, neutral hadronic, neutral EM, lepton, heavy flavor). Our classification results are competitive with standard machine learning techniques. We have also looked into the classification of the substructure of the events, in a process known as scene labeling. In this context, we look for the presence of boosted objects (such as top quarks) with substructure encompassed within single jets. Preliminary results on substructure classification will be presented.
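
    A toy version of the image-building step, assuming (eta, phi) coordinates, energy weighting and one channel per particle type; the bin counts, ranges and type labels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
eta = rng.uniform(-2.5, 2.5, n)
phi = rng.uniform(-np.pi, np.pi, n)
energy = rng.exponential(10.0, n)
ptype = rng.integers(0, 5, n)               # e.g. track / neutral hadron / EM / lepton / heavy flavor

channels = []
for c in range(5):
    sel = ptype == c
    img, _, _ = np.histogram2d(eta[sel], phi[sel], bins=(32, 32),
                               range=[[-2.5, 2.5], [-np.pi, np.pi]],
                               weights=energy[sel])
    channels.append(img)

event_image = np.stack(channels)            # shape (5, 32, 32), ready for a CNN
print(event_image.shape)
```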

  1. Accelerated unsteady flow line integral convolution.

    PubMed

    Liu, Zhanping; Moorhead, Robert J

    2005-01-01

    Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration that is computed for particle value scattering. This paper presents Accelerated UFLIC (AUFLIC) for near interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy is employed to distribute seeds such that only a few of them need pathline integration while most seeds are placed along the pathlines advected at earlier times by other seeds upstream and, therefore, the known pathlines can be reused for fast value scattering. To maintain a dense scattering coverage to convey high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller is designed to decide whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is 9 times faster than UFLIC with comparable image quality.

  2. Do Convolutional Neural Networks Learn Class Hierarchy?

    PubMed

    Alsallakh, Bilal; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2017-08-29

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  3. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research areas of science, engineering, and industry. However, the implementation strategy of metaheuristics for accuracy improvement of convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning is a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task that can be carried out by a human. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738

  4. Metaheuristic Algorithms for Convolution Neural Network.

    PubMed

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research areas of science, engineering, and industry. However, the implementation strategy of metaheuristics for accuracy improvement of convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning is a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task that can be carried out by a human. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy has also been improved (up to 7.14 percent).

  5. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Scene Segmentation.

    PubMed

    Badrinarayanan, Vijay; Kendall, Alex; Cipolla, Roberto

    2017-01-02

    We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network and a corresponding decoder network, followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1]. The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3], DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.
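
    A minimal illustration of the decoder mechanism described above, using max-pooling indices for non-linear upsampling; this two-layer PyTorch sketch is not the full SegNet architecture.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)
enc_conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
dec_conv = nn.Conv2d(16, 2, kernel_size=3, padding=1)   # 2-class pixel-wise output

x = torch.randn(1, 3, 64, 64)
feats = torch.relu(enc_conv(x))
pooled, indices = pool(feats)               # pooling indices kept for the decoder
upsampled = unpool(pooled, indices)         # sparse upsampled map, no learned upsampling
logits = dec_conv(upsampled)                # trainable filters densify the sparse map
print(logits.shape)                         # torch.Size([1, 2, 64, 64])
```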

  6. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.; Goodier, B. G.

    1981-01-01

    The location and migration of cloud, land and water features were examined in spectral space (reflective VIS vs. emissive IR). Daytime HCMM data showed two distinct types of cloud affected pixels in the south Texas test area. High altitude cirrus and/or cirrostratus and "subvisible cirrus" (SCi) reflected the same or only slightly more than land features. In the emissive band, the digital counts ranged from 1 to over 75 and overlapped land features. Pixels consisting of cumulus clouds, or of mixed cumulus and landscape, clustered in a different area of spectral space than the high altitude cloud pixels. Cumulus affected pixels were more reflective than land and water pixels. In August the high altitude clouds and SCi were more emissive than similar clouds were in July. Four-channel TIROS-N data were examined with the objective of developing a multispectral screening technique for removing SCi contaminated data.

  7. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2004-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  8. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    1995-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  9. Proceedings of PIXEL98 -- International pixel detector workshop

    SciTech Connect

    Anderson, D.F.; Kwan, S.

    1998-08-01

    Experiments around the globe face new challenges of more precision in the face of higher interaction rates, greater track densities, and higher radiation doses, as they look for rarer and rarer processes, leading many to incorporate pixelated solid-state detectors into their plans. The highest-readout rate devices require new technologies for implementation. This workshop reviewed recent, significant progress in meeting these technical challenges. Participants presented many new results; many of them from the weeks--even days--just before the workshop. Brand new at this workshop were results on cryogenic operation of radiation-damaged silicon detectors (dubbed the Lazarus effect). Other new work included a diamond sensor with 280-micron collection distance; new results on breakdown in p-type silicon detectors; testing of the latest versions of read-out chip and interconnection designs; and the radiation hardness of deep-submicron processes.

  10. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2003-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  11. Fourier deconvolution reveals the role of the Lorentz function as the convolution kernel of narrow photon beams.

    PubMed

    Djouguela, Armand; Harder, Dietrich; Kollhoff, Ralf; Foschepoth, Simon; Kunth, Wolfgang; Rühmann, Antje; Willborn, Kay; Poppe, Björn

    2009-05-07

    The two-dimensional lateral dose profiles D(x, y) of narrow photon beams, typically used for beamlet-based IMRT, stereotactic radiosurgery and tomotherapy, can be regarded as resulting from the convolution of a two-dimensional rectangular function R(x, y), which represents the photon fluence profile within the field borders, with a rotation-symmetric convolution kernel K(r). This kernel accounts not only for the lateral transport of secondary electrons and small-angle scattered photons in the absorber, but also for the 'geometrical spread' of each pencil beam due to the phase-space distribution of the photon source. The present investigation of the convolution kernel was based on an experimental study of the associated line-spread function K(x). Systematic cross-plane scans of rectangular and quadratic fields of variable side lengths were made by utilizing the linear current versus dose rate relationship and small energy dependence of the unshielded Si diode PTW 60012 as well as its narrow spatial resolution function. By application of the Fourier convolution theorem, it was observed that the values of the Fourier transform of K(x) could be closely fitted by an exponential function exp(−2πλν_x) of the spatial frequency ν_x. Thereby, the line-spread function K(x) was identified as the Lorentz function K(x) = (λ/π)[1/(x² + λ²)], a single-parameter, bell-shaped but non-Gaussian function with a narrow core, wide curve tail, full half-width 2λ and convenient convolution properties. The variation of the 'kernel width parameter' λ with the photon energy, field size and thickness of a water-equivalent absorber was systematically studied. The convolution of a rectangular fluence profile with K(x) in the local space results in a simple equation accurately reproducing the measured lateral dose profiles. The underlying 2D convolution kernel (point-spread function) was identified as K(r) = (λ/2π)[1/(r² + λ²)]^(3/2), fitting
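
    A numerical check of the stated convolution property: a rectangular fluence profile convolved with the Lorentz line-spread function K(x) = (λ/π)/(x² + λ²) yields a profile expressible with two arctan terms. The field width and λ below are arbitrary illustrative values.

```python
import numpy as np

lam, half_width = 2.0, 10.0                 # kernel width parameter and field half-width (mm)
x = np.linspace(-40.0, 40.0, 4001)
dx = x[1] - x[0]

fluence = (np.abs(x) <= half_width).astype(float)          # rectangular fluence profile
lsf = (lam / np.pi) / (x ** 2 + lam ** 2)                  # Lorentz line-spread function

numeric = np.convolve(fluence, lsf, mode="same") * dx
analytic = (np.arctan((x + half_width) / lam)
            - np.arctan((x - half_width) / lam)) / np.pi

print(np.max(np.abs(numeric - analytic)))   # small; limited by grid spacing and kernel truncation
```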

  12. The use of interleaving for reducing radio loss in convolutionally coded systems

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.; Yuen, J. H.

    1989-01-01

    The use of interleaving after convolutional coding and deinterleaving before Viterbi decoding is proposed. This effectively reduces radio loss at low loop signal-to-noise ratios (SNRs) by several decibels and at high loop SNRs by a few tenths of a decibel. Performance of the coded system can be further enhanced if the modulation index is optimized for this system, corresponding to a reduction of the bit SNR required for a given bit error rate of the overall system. The introduction of interleaving/deinterleaving into communication systems designed for future deep space missions does not substantially complicate their hardware design or increase their system cost.

  13. Serial Pixel Analog-to-Digital Converter

    SciTech Connect

    Larson, E D

    2010-02-01

    This method reduces the data path from the counter to the pixel register of the analog-to-digital converter (ADC) from as many as 10 bits to a single bit. The reduction in data path width is accomplished by using a coded serial data stream similar to a pseudo-random number (PRN) generator. The resulting encoded pixel data is then decoded into a standard hexadecimal format before storage. The high-speed serial pixel ADC concept is based on the single-slope integrating pixel ADC architecture. Previous work has described a massively parallel pixel readout of a similar architecture. The serial ADC connection is similar to the state-of-the-art method with the exception that the pixel ADC register is a shift register and the data path is a single bit. A state-of-the-art individual-pixel ADC uses a single-slope charge integration converter architecture with integral registers and “one-hot” counters. This implies that parallel data bits are routed among the counter and the individual on-chip pixel ADC registers. The data path bit-width to the pixel is therefore equivalent to the pixel ADC bit resolution.
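
    The abstract describes counting with a pseudo-random code so that only one bit per clock is shifted into the pixel register, with the code decoded back to a binary count off-pixel. A minimal sketch of that idea follows, using an assumed 8-bit Fibonacci LFSR and a lookup-table decode; the actual code sequence and register design of the ADC are not specified in the abstract.

        # Sketch of PRN-style counting and decode (assumed 8-bit LFSR, taps 8,6,5,4).
        def lfsr_states(seed=0x01, taps=(8, 6, 5, 4), length=255):
            """Successive 8-bit LFSR states; each state stands for one clock count."""
            state, states = seed, []
            for _ in range(length):
                states.append(state)
                fb = 0
                for t in taps:
                    fb ^= (state >> (t - 1)) & 1
                state = ((state << 1) | fb) & 0xFF
            return states

        states = lfsr_states()
        assert len(set(states)) == 255                   # maximal-length code: every count unique
        decode = {s: i for i, s in enumerate(states)}    # LFSR state -> linear count

        # A pixel whose comparator fired after 42 clocks holds the 42nd code word;
        # the decoder recovers the count and reports it in hexadecimal format.
        print(hex(decode[states[42]]))                   # 0x2a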

  14. Colonoscopic polyp detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty

    2016-03-01

    Computer aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician's interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domain-specific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and require a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before. We report

  15. Characterizing pixel and point patterns with a hyperuniformity disorder length

    NASA Astrophysics Data System (ADS)

    Chieco, A. T.; Dreyfus, R.; Durian, D. J.

    2017-09-01

    We introduce the concept of a "hyperuniformity disorder length" h that controls the variance of volume fraction fluctuations for randomly placed windows of fixed size. In particular, fluctuations are determined by the average number of particles within a distance h from the boundary of the window. We first compute special expectations and bounds in d dimensions, and then illustrate the range of behavior of h versus window size L by analyzing several different types of simulated two-dimensional pixel patterns, where particle positions are stored as a binary digital image in which pixels have value zero if empty and one if they contain a particle. The first are random binomial patterns, where pixels are randomly flipped from zero to one with probability equal to area fraction. These have long-ranged density fluctuations, and simulations confirm the exact result h = L/2. Next we consider vacancy patterns, where a fraction f of particles on a lattice are randomly removed. These also display long-range density fluctuations, but with h = (L/2)(f/d) for small f, and h = L/2 for f → 1. And finally, for a hyperuniform system with no long-range density fluctuations, we consider "Einstein patterns," where each particle is independently displaced from a lattice site by a Gaussian-distributed amount. For these, at large L, h approaches a constant equal to about half the root-mean-square displacement in each dimension. Then we turn to gray-scale pixel patterns that represent simulated arrangements of polydisperse particles, where the volume of a particle is encoded in the value of its central pixel. And we discuss the continuum limit of point patterns, where pixel size vanishes. In general, we thus propose to quantify particle configurations not just by the scaling of the density fluctuation spectrum but rather by the real-space spectrum of h(L) versus L. We call this approach "hyperuniformity disorder length spectroscopy".
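
    A minimal sketch of the measurement underlying this analysis for the random binomial case: estimate the variance of the occupied-area fraction in randomly placed L x L windows of a binary pixel pattern. The pattern size, area fraction and window sizes are illustrative; the conversion of this variance into h itself follows the paper's definition and is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        phi = 0.3                                   # area fraction (illustrative)
        pattern = rng.random((1024, 1024)) < phi    # random binomial pixel pattern

        def window_variance(img, L, n_windows=2000):
            """Variance of the occupied-area fraction over randomly placed L x L windows."""
            H, W = img.shape
            fracs = [img[i:i + L, j:j + L].mean()
                     for i, j in zip(rng.integers(0, H - L, n_windows),
                                     rng.integers(0, W - L, n_windows))]
            return np.var(fracs)

        for L in (8, 16, 32, 64):
            print(L, window_variance(pattern, L))
        # For a binomial pattern the variance falls off as phi*(1-phi)/L^2, the
        # long-range-fluctuation regime the paper characterizes by h = L/2.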

  16. Characterization of Pixelated Cadmium-Zinc-Telluride Detectors for Astrophysical Applications

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Sharma, Dharma; Ramsey, Brian; Seller, Paul

    2003-01-01

    Comparisons of charge sharing and charge loss measurements between two pixelated Cadmium-Zinc-Telluride (CdZnTe) detectors are discussed. These properties along with the detector geometry help to define the limiting energy resolution and spatial resolution of the detector in question. The first detector consists of a 1-mm-thick piece of CdZnTe sputtered with a 4x4 array of pixels with pixel pitch of 750 microns (inter-pixel gap is 100 microns). Signal readout is via discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap is 50 microns). This crystal is bonded to a custom-built readout chip (ASIC) providing all front-end electronics to each of the 256 independent pixels. These detectors act as precursors to that which will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs to have a spatial resolution of around 200 microns in order to take full advantage of the HERO angular resolution. We discuss to what degree charge sharing will degrade energy resolution but will improve our spatial resolution through position interpolation.

  18. Skin segmentation using color pixel classification: analysis and comparison.

    PubMed

    Phung, Son Lam; Bouzerdoum, Abdesselam; Chai, Douglas

    2005-01-01

    This paper presents a study of three important issues of the color pixel classification approach to skin segmentation: color representation, color quantization, and classification algorithm. Our analysis of several representative color spaces using the Bayesian classifier with the histogram technique shows that skin segmentation based on color pixel classification is largely unaffected by the choice of the color space. However, segmentation performance degrades when only chrominance channels are used in classification. Furthermore, we find that color quantization can be as low as 64 bins per channel, although higher histogram sizes give better segmentation performance. The Bayesian classifier with the histogram technique and the multilayer perceptron classifier are found to perform better compared to other tested classifiers, including three piecewise linear classifiers, three unimodal Gaussian classifiers, and a Gaussian mixture classifier.
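
    A minimal sketch of the histogram-based Bayesian skin-pixel classifier evaluated in the study, with 64 bins per channel as quoted above. The training data, threshold and helper names here are illustrative stand-ins, not the authors' datasets or code.

        import numpy as np

        BINS = 64   # bins per channel, the quantization the study found sufficient

        def fit_histograms(pixels, labels):
            """Estimate P(rgb | skin) and P(rgb | non-skin) with 3-D colour histograms."""
            edges = np.linspace(0, 256, BINS + 1)
            h_skin, _ = np.histogramdd(pixels[labels == 1], bins=(edges,) * 3, density=True)
            h_back, _ = np.histogramdd(pixels[labels == 0], bins=(edges,) * 3, density=True)
            return h_skin, h_back, labels.mean()

        def classify(pixels, h_skin, h_back, prior_skin, threshold=0.5):
            """Bayes rule per pixel: declare skin when P(skin | rgb) exceeds the threshold."""
            idx = (pixels // (256 // BINS)).astype(int)
            p_skin = h_skin[idx[:, 0], idx[:, 1], idx[:, 2]] * prior_skin
            p_back = h_back[idx[:, 0], idx[:, 1], idx[:, 2]] * (1 - prior_skin)
            return p_skin > threshold * (p_skin + p_back + 1e-12)

        # Toy usage with synthetic pixels (real training data would come from labelled images)
        rng = np.random.default_rng(1)
        train_px = rng.integers(0, 256, size=(10000, 3))
        train_lb = (train_px[:, 0] > 150).astype(int)   # fake "skin" rule, for illustration only
        print(classify(train_px[:5], *fit_histograms(train_px, train_lb)))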

  19. Secured Medical Images - a Chaotic Pixel Scrambling Approach.

    PubMed

    Parvees, M Y Mohamed; Samath, J Abdul; Bose, B Parameswaran

    2016-11-01

    In this paper, a cryptosystem is proposed to encrypt 16-bit monochrome DICOM images using an enhanced chaotic economic map. A new enhanced chaotic economic map (ECEM) is designed from the chaotic economic map; it has better bifurcation behavior and positive Lyapunov exponent values. In order to improve the strength of the encryption algorithm, the enhanced chaotic map is employed to generate the pixel permutation, masking, and swapping sequences. The substitution operation is introduced in between the standard permutation and diffusion operations. The robustness of the proposed image encryption algorithm is measured by various analyses such as histogram, key sensitivity, key space, number of pixel change rate (NPCR), unified average change intensity (UACI), information entropy and correlation coefficient. The results of the security analyses are compared with existing algorithms to validate that the proposed algorithm is better in terms of larger key space to resist brute force attacks and other common attacks on encryption.

  20. Dead pixel replacement in LWIR microgrid polarimeters.

    PubMed

    Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P

    2007-06-11

    LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.

  1. Equivalence of a Bit Pixel Image to a Quantum Pixel Image

    NASA Astrophysics Data System (ADS)

    Ortega, Laurel Carlos; Dong, Shi-Hai; Cruz-Irisson, M.

    2015-11-01

    We propose a new method to transform a pixel image into the corresponding quantum-pixel image, using one qubit per pixel to represent each pixel's classical weight in a quantum image matrix. All qubits are in linear superposition, with the coefficients varied level by level over the full extent of the gray scale with respect to the base states of the qubit. Classically, these states are just bytes represented in a binary matrix, having code combinations of 1 or 0 at all pixel locations. This method introduces a qubit-pixel representation of images captured by classical optoelectronic methods. Supported partially by the project 20150964-SIP-IPN, Mexico.

  2. Method for fabricating pixelated silicon device cells

    SciTech Connect

    Nielson, Gregory N.; Okandan, Murat; Cruz-Campa, Jose Luis; Nelson, Jeffrey S.; Anderson, Benjamin John

    2015-08-18

    A method, apparatus and system for flexible, ultra-thin, and high efficiency pixelated silicon or other semiconductor photovoltaic solar cell array fabrication is disclosed. A structure and method of creation for a pixelated silicon or other semiconductor photovoltaic solar cell array with interconnects is described using a manufacturing method that is simplified compared to previous versions of pixelated silicon photovoltaic cells that require more microfabrication steps.

  3. [Hadamard transform spectrometer mixed pixels' unmixing method].

    PubMed

    Yan, Peng; Hu, Bing-Liang; Liu, Xue-Bin; Sun, Wei; Li, Li-Bo; Feng, Yu-Tao; Liu, Yong-Zheng

    2011-10-01

    Hadamard transform imaging spectrometry is a multi-channel digital transform spectroscopy technique. Starting from the working principle and instrument structure of a Hadamard transform spectrometer based on a digital micromirror device (DMD), this paper analyzes the mixed pixels produced at the imaging sensor and derives, in theory, a method for unmixing the aliased pixels. Simulation results show that the method is simple and effective, improving the recovery accuracy of mixed-pixel spectra by more than 10%.

  4. Brain and art: illustrations of the cerebral convolutions. A review.

    PubMed

    Lazić, D; Marinković, S; Tomić, I; Mitrović, D; Starčević, A; Milić, I; Grujičić, M; Marković, B

    2014-08-01

    Aesthetics and functional significance of the cerebral cortical relief gave us the idea to find out how often the convolutions are presented in fine art, and in which techniques, conceptual meaning and pathophysiological aspect. We examined 27,614 art works created by 2,856 authors and presented in art literature, and in Google images search. The cerebral gyri were shown in 0.85% of the art works created by 2.35% of the authors. The concept of the brain was first mentioned in ancient Egypt some 3,700 years ago. The first artistic drawing of the convolutions was made by Leonardo da Vinci, and the first colour picture by an unknown Italian author. Rembrandt van Rijn was the first to paint the gyri. Dozens of modern authors, who are professional artists, medical experts or designers, presented the cerebral convolutions in drawings, paintings, digital works or sculptures, with various aesthetic, symbolic and metaphorical connotations. Some artistic compositions and natural forms show a gyral pattern. The convolutions, whose cortical layers enable the cognitive functions, can be affected by various disorders. Some artists suffered from those disorders, and some others presented them in their artworks. The cerebral convolutions or gyri, thanks to their extensive cortical mantle, are the specific morphological basis for the human mind, but also structures with their own aesthetics. Contemporary authors relatively often depict or model the cerebral convolutions, either from the aesthetic or conceptual aspect. In this way, they make a connection between neuroscience and fine art.

  5. Inequalities and consequences of new convolutions for the fractional Fourier transform with Hermite weights

    NASA Astrophysics Data System (ADS)

    Anh, P. K.; Castro, L. P.; Thao, P. T.; Tuan, N. M.

    2017-01-01

    This paper presents new convolutions for the fractional Fourier transform which are somehow associated with the Hermite functions. Consequent inequalities and properties are derived for these convolutions, among which we emphasize two new types of Young's convolution inequalities. The results guarantee a general framework where the present convolutions are well-defined, allowing larger possibilities than the known ones for other convolutions. Furthermore, we exemplify the use of our convolutions by providing explicit solutions of some classes of integral equations which appear in engineering problems.

  6. Commissioning of the CMS Forward Pixel Detector

    SciTech Connect

    Kumar, Ashish; /SUNY, Buffalo

    2008-12-01

    The Compact Muon Solenoid (CMS) experiment is scheduled for physics data taking in summer 2009 after the commissioning of high energy proton-proton collisions at the Large Hadron Collider (LHC). At the core of the CMS all-silicon tracker is the silicon pixel detector, comprising three barrel layers and two pixel disks in the forward and backward regions, accounting for a total of 66 million channels. The pixel detector will provide high-resolution, 3D tracking points, essential for pattern recognition and precise vertexing, while being embedded in a hostile radiation environment. The end disks of the pixel detector, known as the Forward Pixel detector, have been assembled and tested at Fermilab, USA. They comprise 18 million pixel cells with dimensions of 100 x 150 μm². The complete forward pixel detector was shipped to CERN in December 2007, where it underwent extensive system tests for commissioning prior to the installation. The pixel system was put in its final place inside the CMS following the installation and bake-out of the LHC beam pipe in July 2008. It has been integrated with the other sub-detectors in the readout since September 2008 and participated in the cosmic data taking. This report covers the strategy and results from the commissioning of the CMS forward pixel detector at CERN.

  7. Improving the full spectrum fitting method: accurate convolution with Gauss-Hermite functions

    NASA Astrophysics Data System (ADS)

    Cappellari, Michele

    2017-04-01

    I start by providing an updated summary of the penalized pixel-fitting (PPXF) method that is used to extract the stellar and gas kinematics, as well as the stellar population of galaxies, via full spectrum fitting. I then focus on the problem of extracting the kinematics when the velocity dispersion σ is smaller than the velocity sampling ΔV that is generally, by design, close to the instrumental dispersion σ_inst. The standard approach consists of convolving templates with a discretized kernel, while fitting for its parameters. This is obviously very inaccurate when σ ≲ ΔV/2, due to undersampling. Oversampling can prevent this, but it has drawbacks. Here I present a more accurate and efficient alternative. It avoids the evaluation of the undersampled kernel and instead directly computes its well-sampled analytic Fourier transform, for use with the convolution theorem. A simple analytic transform exists when the kernel is described by the popular Gauss-Hermite parametrization (which includes the Gaussian as special case) for the line-of-sight velocity distribution. I describe how this idea was implemented in a significant upgrade to the publicly available PPXF software. The key advantage of the new approach is that it provides accurate velocities regardless of σ. This is important e.g. for spectroscopic surveys targeting galaxies with σ ≪ σ_inst, for galaxy redshift determinations or for measuring line-of-sight velocities of individual stars. The proposed method could also be used to fix Gaussian convolution algorithms used in today's popular software packages.
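
    A minimal sketch of the core idea for the pure-Gaussian case (h3 = h4 = 0): rather than evaluating an undersampled kernel, multiply the template's FFT by the analytic Fourier transform of the Gaussian LOSVD. The function and parameter names are illustrative and this is not the PPXF implementation.

        import numpy as np

        def losvd_convolve(template, v_pix, sigma_pix):
            """Convolve a 1-D template with a Gaussian LOSVD (mean v_pix, dispersion
            sigma_pix, both in pixels) via the analytic Fourier transform of the kernel,
            so no undersampled kernel is ever evaluated."""
            n = len(template)
            omega = 2 * np.pi * np.fft.rfftfreq(n)      # angular frequency per pixel
            ft_kernel = np.exp(-1j * omega * v_pix - 0.5 * (sigma_pix * omega) ** 2)
            return np.fft.irfft(np.fft.rfft(template) * ft_kernel, n)

        # A narrow line broadened by sigma = 0.3 pixels, well below the sampling step,
        # where a discretized kernel would be badly undersampled.
        template = np.zeros(256)
        template[128] = 1.0
        print(losvd_convolve(template, v_pix=0.0, sigma_pix=0.3)[126:131])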

  8. Implementation of TDI based digital pixel ROIC with 15μm pixel pitch

    NASA Astrophysics Data System (ADS)

    Ceylan, Omer; Shafique, Atia; Burak, A.; Caliskan, Can; Abbasi, Shahbaz; Yazici, Melik; Gurbuz, Yasar

    2016-05-01

    A 15 μm pixel pitch digital pixel for LWIR time delay integration (TDI) applications is implemented, occupying one quarter of the pixel area of the previous digital TDI implementation. TDI is implemented over 8 pixels with an oversampling rate of 2. The ROIC provides a 16-bit output with 8 bits of MSB and 8 bits of LSB. Each pixel can store 75 M electrons with a quantization noise of 500 electrons. A digital pixel TDI implementation is advantageous over its analog counterparts in terms of power consumption, chip area and signal-to-noise ratio. The digital pixel TDI ROIC is fabricated in a 0.18 μm CMOS process. In the digital pixel TDI implementation, the photocurrent is integrated on a capacitor in the pixel and converted to digital data in the pixel. This digital data triggers the summation counters that implement the TDI addition. After all pixels in a row have contributed, the summed data is divided by the number of TDI pixels (N) to obtain the final output, whose signal-to-noise ratio (SNR) is improved by a factor of the square root of N over that of a single pixel.
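
    A minimal simulation of the SNR benefit described above: the same scene sample integrated by N = 8 pixels in turn, summed and divided by N. The signal and noise levels are illustrative assumptions, not measured values from the ROIC.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 8              # number of TDI pixels, as in the ROIC described above
        signal = 100.0     # electrons per pixel per stage (illustrative)
        noise = 10.0       # rms noise per pixel sample (illustrative)
        trials = 100000

        single = signal + noise * rng.standard_normal(trials)
        # TDI: N pixels observe the same scene point in turn; their results are summed
        # and divided by N, as in the readout scheme described in the abstract.
        tdi = (signal + noise * rng.standard_normal((trials, N))).sum(axis=1) / N

        print((tdi.mean() / tdi.std()) / (single.mean() / single.std()))   # ~ sqrt(8) = 2.83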

  9. Image Labeling for LIDAR Intensity Image Using K-NN of Feature Obtained by Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Umemura, Masaki; Hotta, Kazuhiro; Nonaka, Hideki; Oda, Kazuo

    2016-06-01

    We propose an image labeling method for LIDAR intensity images obtained by a Mobile Mapping System (MMS) using K-Nearest Neighbors (KNN) on features obtained by a Convolutional Neural Network (CNN). Image labeling assigns labels (e.g., road, cross-walk and road shoulder) to semantic regions in an image. Since CNNs are effective for various image recognition tasks, we try to use the features of a CNN (Caffenet) pre-trained on ImageNet. We use the 4,096-dimensional feature at the fc7 layer in the Caffenet as the descriptor of a region because the feature at the fc7 layer carries effective information for object classification. We extract the feature with the Caffenet from regions cropped from images. Since the similarity between features reflects the similarity of the contents of regions, we can select the top K regions cropped from training samples that are most similar to a test region. Since regions in training images have manually annotated ground-truth labels, we vote the labels attached to the top K similar regions onto the test region. The class label with the maximum vote is assigned to each pixel in the test image. In experiments, we use 36 LIDAR intensity images with ground-truth labels. We divide the 36 images into training (28 images) and test (8 images) sets. We use class-average accuracy and pixel-wise accuracy as evaluation measures. Our method was able to assign the same label as human annotators to 97.8% of the pixels in the test LIDAR intensity images.
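
    A minimal sketch of the voting step described above: given descriptors for labelled training regions and a descriptor for a test region, pick the K most similar training regions and assign the majority label. The random 4096-dimensional vectors below merely stand in for Caffenet fc7 features; the feature extraction itself is not reproduced.

        import numpy as np
        from collections import Counter

        def knn_vote(test_feat, train_feats, train_labels, k=5):
            """Label a test region by majority vote over its K nearest training regions
            (Euclidean distance in CNN feature space)."""
            order = np.argsort(np.linalg.norm(train_feats - test_feat, axis=1))[:k]
            return Counter(train_labels[i] for i in order).most_common(1)[0][0]

        rng = np.random.default_rng(3)
        train_feats = rng.standard_normal((200, 4096))     # stand-ins for fc7 descriptors
        train_labels = rng.choice(["road", "cross-walk", "road shoulder"], size=200)
        test_feat = train_feats[0] + 0.01 * rng.standard_normal(4096)
        print(knn_vote(test_feat, train_feats, train_labels))   # label of training region 0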

  10. Spatially variant convolution with scaled B-splines.

    PubMed

    Muñoz-Barrutia, Arrate; Artaechevarria, Xabier; Ortiz-de-Solorzano, Carlos

    2010-01-01

    We present an efficient algorithm to compute multidimensional spatially variant convolutions--or inner products--between N-dimensional signals and B-splines--or their derivatives--of any order and arbitrary sizes. The multidimensional B-splines are computed as tensor products of 1-D B-splines, and the input signal is expressed in a B-spline basis. The convolution is then computed by using an adequate combination of integration and scaled finite differences as to have, for moderate and large scale values, a computational complexity that does not depend on the scaling factor. To show in practice the benefit of using our spatially variant convolution approach, we present an adaptive noise filter that adjusts the kernel size to the local image characteristics and a high sensitivity local ridge detector.
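
    The scale-independent cost can be illustrated in its simplest form, the degree-0 B-spline (box) kernel: one cumulative sum ("integration") followed by a single scaled finite difference gives a running mean whose per-sample cost does not depend on the kernel size. This is a simplified 1-D sketch of the principle, not the authors' general N-dimensional, arbitrary-order algorithm.

        import numpy as np

        def box_convolve(signal, scale):
            """Running mean over `scale` samples via one cumulative sum and one finite
            difference; the cost per output sample is independent of `scale`."""
            c = np.concatenate(([0.0], np.cumsum(signal)))   # "integration" step
            return (c[scale:] - c[:-scale]) / scale          # scaled finite difference

        x = np.sin(np.linspace(0, 6 * np.pi, 1000))
        print(box_convolve(x, 5).shape, box_convolve(x, 50).shape)   # same cost at both scales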

  11. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  12. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  14. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods. 10; Chapter

    NASA Technical Reports Server (NTRS)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post

  15. Single-pixel optical imaging with compressed reference intensity patterns

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Chen, Xudong

    2015-03-01

    Ghost imaging with a single-pixel bucket detector has attracted increasing attention due to its marked physical characteristics. However, in ghost imaging, a large number of reference intensity patterns are usually required for object reconstruction, hence many applications based on ghost imaging (such as tomography and optical security) may be tedious since heavy storage or transmission is required. In this paper, we report that compressed reference intensity patterns can be used for object recovery in computational ghost imaging (with a single-pixel bucket detector), and object verification can be further conducted. Only a small portion (such as 2.0% of the pixels) of each reference intensity pattern is used for object reconstruction, and the recovered object is verified by using a nonlinear correlation algorithm. Since the statistical characteristics and speckle averaging property are inherent in ghost imaging, sidelobes or multiple peaks can be effectively suppressed or eliminated in the nonlinear correlation outputs when random pixel positions are selected from each reference intensity pattern. Since pixel positions can be randomly selected from each 2D reference intensity pattern (such as total measurements of 20000), a large key space and high flexibility can be generated when the proposed method is applied for authentication-based cryptography. When compressive sensing is used to recover the object with a small number of measurements, the proposed strategy could still be feasible through further compressing the recorded data (i.e., reference intensity patterns) followed by object verification. It is expected that the proposed method not only compresses the recorded data and facilitates the storage or transmission, but also can build up novel capability (i.e., classical or quantum information verification) for ghost imaging.
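
    A minimal sketch of the reconstruction idea stated above: the bucket values are recorded with the full reference patterns, but only a random 2% of each pattern's pixels is kept for the correlation-based recovery. The object, the measurement count (reduced here from the abstract's 20000 example) and the region test are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        n, m, keep = 32, 10000, 0.02      # image size, measurements, retained pixel fraction

        obj = np.zeros((n, n))
        obj[10:22, 10:22] = 1.0           # toy object

        patterns = rng.random((m, n, n))                    # reference intensity patterns
        bucket = (patterns * obj).sum(axis=(1, 2))          # single-pixel bucket signal

        # Keep only a random 2% of the pixels of each reference pattern (the rest set to zero)
        compressed = patterns * (rng.random((m, n, n)) < keep)

        # Correlation-based ghost-imaging reconstruction with the compressed patterns
        recon = np.tensordot(bucket - bucket.mean(), compressed, axes=1) / m
        print(recon[10:22, 10:22].mean() > recon[:8, :8].mean())   # object region stands out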

  16. Sub-pixel Area Calculation Methods for Estimating Irrigated Areas.

    PubMed

    Thenkabail, Prasad S; Biradar, Chandrashekar M; Noojipady, Praveen; Cai, Xueliang; Dheeravath, Venkateswarlu; Li, Yuanjie; Velpuri, Manohar; Gumma, Muralikrishna; Pandey, Suraj

    2007-10-31

    The goal of this paper was to develop and demonstrate practical methods for computing sub-pixel areas (SPAs) from coarse-resolution satellite sensor data. The methods were tested and verified using: (a) a global irrigated area map (GIAM) at 10-km resolution based, primarily, on AVHRR data, and (b) an irrigated area map for India at 500-m resolution based, primarily, on MODIS data. The sub-pixel irrigated areas (SPIAs) from coarse-resolution satellite sensor data were estimated by multiplying the full pixel irrigated areas (FPIAs) with irrigated area fractions (IAFs). Three methods were presented for IAF computation: (a) Google Earth estimate (IAF-GEE); (b) high-resolution imagery (IAF-HRI); and (c) sub-pixel de-composition technique (IAF-SPDT). The IAF-GEE involved the use of "zoom-in views" of sub-meter to 4-meter very high resolution imagery (VHRI) from Google Earth and helped determine the total area available for irrigation (TAAI), or net irrigated area, which does not consider intensity or seasonality of irrigation. The IAF-HRI is a well-known method that uses finer-resolution data to determine SPAs of the coarser-resolution imagery. The IAF-SPDT is a unique and innovative method wherein SPAs are determined based on the precise location of every pixel of a class in a 2-dimensional brightness-greenness-wetness (BGW) feature-space plot of red band versus near-infrared band spectral reflectivity. The SPIAs computed using IAF-SPDT for the GIAM were within 2% of the SPIAs computed using the well-known IAF-HRI. Further, the fractions from the two methods were significantly correlated. The IAF-HRI and IAF-SPDT help to determine annualized or gross irrigated areas (AIA), which do consider intensity or seasonality (e.g., the sum of areas from season 1, season 2, and continuous year-round crops). The national census-based irrigated areas for the top 40 irrigated nations (which cover about 90% of global irrigation) were significantly better related (and had lesser uncertainties and errors) when compared to SPIAs than

  17. Intermediate elemental image reconstruction for refocused three-dimensional images in integral imaging by convolution with δ-function sequences

    NASA Astrophysics Data System (ADS)

    Yoo, Hoon; Jang, Jae-Young

    2017-10-01

    We propose a novel approach for intermediate elemental image reconstruction in integral imaging. To reconstruct intermediate elemental images, we introduce a null elemental image whose pixels are all zero. In the proposed method a number of null elemental images are inserted into a given elemental image array. The elemental image array with null elemental images is convolved with the δ-function sequence. The convolution result shows that the proposed method provides an efficient structure to expand an elemental image array. The resulting elemental image array from the proposed method can supply three-dimensional information for an object at a specific depth. In addition, the proposed method provides adjustable parameters, which can be utilized in design of integral imaging systems. The feasibility of the proposed method has been confirmed through preliminary experiments and theoretical analysis.

  18. Optimized color decomposition of localized whole slide images and convolutional neural network for intermediate prostate cancer classification

    NASA Astrophysics Data System (ADS)

    Zhou, Naiyun; Gao, Yi

    2017-03-01

    This paper presents a fully automatic approach to grade intermediate prostate malignancy with hematoxylin and eosin-stained whole slide images. Deep learning architectures such as convolutional neural networks have been utilized in the domain of histopathology for automated carcinoma detection and classification. However, few works have shown their power in discriminating intermediate Gleason patterns, due to the sporadic distribution of prostate glands on stained surgical section samples. We propose optimized hematoxylin decomposition on localized images, followed by a convolutional neural network, to classify Gleason patterns 3+4 and 4+3 without handcrafted features or gland segmentation. Crucial gland morphology and the structural relationships of nuclei are extracted twice in different color spaces by the multi-scale strategy to mimic pathologists' visual examination. Our novel classification scheme, evaluated on 169 whole slide images, yielded a 70.41% accuracy and a corresponding area under the receiver operating characteristic curve of 0.7247.

  19. Designing multiplane computer-generated holograms with consideration of the pixel shape and the illumination wave.

    PubMed

    Kämpfe, Thomas; Kley, Ernst-Bernhard; Tünnermann, Andreas

    2008-07-01

    The majority of image-generating computer-generated holograms (CGHs) are calculated on a discrete numerical grid, whose spacing is defined by the desired pixel size. For single-plane CGHs the influence of the pixel shape and the illumination wave on the actual output distribution is minor and can be treated separately from the numerical calculation. We show that in the case of multiplane CGHs this influence is much more severe. We introduce a new method that takes the pixel shape into account during the design and derive conditions to retain an illumination-wave-independent behavior.

  20. New SOFRADIR 10μm pixel pitch infrared products

    NASA Astrophysics Data System (ADS)

    Lefoul, X.; Pere-Laperne, N.; Augey, T.; Rubaldo, L.; Aufranc, Sébastien; Decaens, G.; Ricard, N.; Mazaleyrat, E.; Billon-Lanfrey, D.; Gravrand, Olivier; Bisotto, Sylvette

    2014-10-01

    Recent advances in the miniaturization of IR imaging technology have led to a growing market for mini thermal-imaging sensors. In that respect, Sofradir's development of smaller pixel pitches has made much more compact products available to users. When this competitive advantage is combined with smaller coolers, made possible by HOT technology, we achieve valuable reductions in the size, weight and power of the overall package. At the same time, we are moving towards a global offer based on digital interfaces that provides our customers with simplifications in the IR system design process while freeing up more space. This paper discusses recent developments in hot and small pixel pitch technologies as well as efforts made on the compact packaging solution developed by SOFRADIR in collaboration with CEA-LETI.

  1. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to feed these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images.

  2. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computation load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. This proposed CONVEF model takes the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also some other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression while simultaneously preserving weak edges. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.
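
    A minimal sketch of the shared ingredient of the VEF/CONVEF/VFC family: build the external force by FFT-based convolution of the edge map with a vector kernel. The kernel below (components proportional to -x/r³ and -y/r³) is an illustrative VEF-style choice; the modified distance that defines CONVEF is not reproduced here.

        import numpy as np

        def external_force_fft(edge_map, eps=1.0):
            """External force field (fx, fy) by FFT convolution of the edge map with a
            VEF-style vector kernel; eps regularizes the kernel at the origin."""
            h, w = edge_map.shape
            X, Y = np.meshgrid(np.arange(w) - w // 2, np.arange(h) - h // 2)
            r3 = (X**2 + Y**2 + eps) ** 1.5
            kx, ky = -X / r3, -Y / r3
            F = np.fft.fft2(edge_map)
            fx = np.real(np.fft.ifft2(F * np.fft.fft2(np.fft.ifftshift(kx))))
            fy = np.real(np.fft.ifft2(F * np.fft.fft2(np.fft.ifftshift(ky))))
            return fx, fy

        # Toy edge map: a bright ring; the resulting force points toward the ring.
        yy, xx = np.mgrid[:128, :128]
        edge = (np.abs(np.hypot(xx - 64, yy - 64) - 30) < 1).astype(float)
        fx, fy = external_force_fft(edge)
        print(fx.shape, fy.shape)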

  3. Hardy's inequalities for the twisted convolution with Laguerre functions.

    PubMed

    Xiao, Jinsen; He, Jianxun

    2017-01-01

    In this article, two types of Hardy's inequalities for the twisted convolution with Laguerre functions are studied. The proofs are mainly based on an estimate for the Heisenberg left-invariant vectors of the special Hermite functions deduced by the Heisenberg group approach.

  4. Die and telescoping punch form convolutions in thin diaphragm

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.

  5. An Interactive Graphics Program for Assistance in Learning Convolution.

    ERIC Educational Resources Information Center

    Frederick, Dean K.; Waag, Gary L.

    1980-01-01

    A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integration, it…

  6. Stacked Convolutional Denoising Auto-Encoders for Feature Representation.

    PubMed

    Du, Bo; Xiong, Wei; Wu, Jia; Zhang, Lefei; Zhang, Liangpei; Tao, Dacheng

    2016-03-16

    Deep networks have achieved excellent performance in learning representations from visual data. However, supervised deep models such as convolutional neural networks require large quantities of labeled data, which are very expensive to obtain. To solve this problem, this paper proposes an unsupervised deep network, called the stacked convolutional denoising auto-encoders, which can map images to hierarchical representations without any label information. The network, optimized by layer-wise training, is constructed by stacking layers of denoising auto-encoders in a convolutional way. In each layer, high-dimensional feature maps are generated by convolving features of the lower layer with kernels learned by a denoising auto-encoder. The auto-encoder is trained on patches extracted from feature maps in the lower layer to learn robust feature detectors. To better train the large network, a layer-wise whitening technique is introduced into the model. Before each convolutional layer, a whitening layer is embedded to sphere the input data. By layers of mapping, raw images are transformed into high-level feature representations which boost the performance of the subsequent support vector machine classifier. The proposed algorithm is evaluated by extensive experiments and demonstrates superior classification performance to state-of-the-art unsupervised networks.
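
    A minimal single-layer sketch of a convolutional denoising auto-encoder of the kind stacked in the paper, written in PyTorch (a framework choice made here for illustration); layer-wise whitening, the stacking itself and the final SVM stage are omitted.

        import torch
        import torch.nn as nn

        class ConvDenoisingAE(nn.Module):
            """One convolutional denoising auto-encoder layer: corrupt the input with
            Gaussian noise, encode with a convolution, decode with a transposed
            convolution, and train to reconstruct the clean input."""
            def __init__(self, in_ch=1, n_filters=16, k=5, noise_std=0.3):
                super().__init__()
                self.noise_std = noise_std
                self.encode = nn.Sequential(nn.Conv2d(in_ch, n_filters, k, padding=k // 2),
                                            nn.ReLU())
                self.decode = nn.ConvTranspose2d(n_filters, in_ch, k, padding=k // 2)

            def forward(self, x):
                noisy = x + self.noise_std * torch.randn_like(x)   # denoising corruption
                return self.decode(self.encode(noisy))

        # Layer-wise training sketch on random tensors standing in for image patches
        model = ConvDenoisingAE()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x = torch.rand(8, 1, 32, 32)
        for _ in range(10):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), x)
            loss.backward()
            opt.step()
        print(loss.item())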

  7. Sub-pixel mapping of water boundaries using pixel swapping algorithm (case study: Tagliamento River, Italy)

    NASA Astrophysics Data System (ADS)

    Niroumand-Jadidi, Milad; Vitti, Alfonso

    2015-10-01

    Taking advantage of remotely sensed data for mapping and monitoring of water boundaries is of particular importance in many different management and conservation activities. Imagery data are classified using automatic techniques to produce maps that enter the water bodies' analysis chain at several different points. Very commonly, medium or coarse spatial resolution imagery is used in studies of large water bodies. Data of this kind are affected by the presence of mixed pixels, leading to serious problems, in particular when dealing with boundary pixels. A considerable amount of uncertainty inescapably occurs when conventional hard classifiers (e.g., maximum likelihood) are applied to mixed pixels. In this study, the Linear Spectral Mixture Model (LSMM) is used to estimate the proportion of water in boundary pixels. First, by applying an unsupervised clustering, the water body is identified approximately and a buffer area is considered to ensure the selection of all boundary pixels. Then the LSMM is applied to this buffer region to estimate the fractional maps. However, the output of the LSMM does not provide a sub-pixel map corresponding to the water abundances. To tackle this problem, the Pixel Swapping (PS) algorithm is used to allocate sub-pixels within mixed pixels in such a way as to maximize the spatial proximity of sub-pixels and pixels in the neighborhood. The water areas of two segments of the Tagliamento River (Italy) are mapped at sub-pixel resolution (10 m) using a 30 m Landsat image. To evaluate the proficiency of the proposed approach for sub-pixel boundary mapping, the image is also classified using a conventional hard classifier. A high resolution image of the same area is also classified and used as a reference for accuracy assessment. According to the results, the sub-pixel map shows on average about 8 percent higher overall accuracy than the hard classification and agrees very well with the reference map along the boundaries.
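
    A simplified sketch of the pixel-swapping step described above: water sub-pixels are first placed inside each coarse pixel according to its LSMM fraction, then repeatedly swapped with land sub-pixels of the same coarse pixel whenever that raises the neighbourhood water count (an unweighted stand-in for the distance-weighted attractiveness of the full algorithm). All sizes and fractions below are illustrative.

        import numpy as np

        def pixel_swap(fractions, scale=3, n_iter=20, seed=6):
            """Sub-pixel water mapping by a simplified pixel-swapping scheme."""
            rng = np.random.default_rng(seed)
            H, W = fractions.shape
            fine = np.zeros((H * scale, W * scale), dtype=int)
            # Initial random allocation respecting each coarse pixel's water fraction
            for i in range(H):
                for j in range(W):
                    cell = np.zeros(scale * scale, dtype=int)
                    cell[:int(round(fractions[i, j] * scale * scale))] = 1
                    rng.shuffle(cell)
                    fine[i*scale:(i+1)*scale, j*scale:(j+1)*scale] = cell.reshape(scale, scale)
            for _ in range(n_iter):
                # Attractiveness: number of water sub-pixels among the 8 neighbours
                padded = np.pad(fine, 1)
                attract = np.zeros_like(fine)
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if (dy, dx) != (0, 0):
                            attract += padded[1+dy:1+dy+H*scale, 1+dx:1+dx+W*scale]
                for i in range(H):
                    for j in range(W):
                        sl = np.s_[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
                        cell, a = fine[sl], attract[sl]
                        water, land = np.argwhere(cell == 1), np.argwhere(cell == 0)
                        if len(water) and len(land):
                            wy, wx = water[np.argmin(a[tuple(water.T)])]
                            ly, lx = land[np.argmax(a[tuple(land.T)])]
                            if a[ly, lx] > a[wy, wx]:              # swap increases proximity
                                cell[wy, wx], cell[ly, lx] = 0, 1  # cell is a view into fine
            return fine

        fractions = np.array([[0.0, 0.3], [0.6, 1.0]])   # toy LSMM water fractions
        print(pixel_swap(fractions))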

  8. It's not the pixel count, you fool

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2012-01-01

    The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have, for we need as many mega pixels as possible since the other guys are killing us with their "umpteen" mega pixel pocket sized digital cameras. And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel-wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is the automatic motion control to stabilize the image on the sensor along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging and what will drive growth of camera sales (not counting the cell phone cameras which totally dominate the market in terms of camera sales) and more importantly after sales profits? Well sit in on the Dark Side of Color and find out what is being done to increase the after sales profits and don't be surprised if has been done long ago in some basement lab of a photographic company and of course, before its time.

  9. Micro-Pixel Image Position Sensing Testbed

    NASA Technical Reports Server (NTRS)

    Nemati, Bijan; Shao, Michael; Zhai, Chengxing; Erlig, Hernan; Wang, Xu; Goullioud, Renaud

    2011-01-01

    The search for Earth-mass planets in the habitable zones of nearby Sun-like stars is an important goal of astrophysics. This search is not feasible with the current slate of astronomical instruments. We propose a new concept for microarcsecond astrometry which uses a simplified instrument and hence promises to be low cost. The concept employs a telescope with only a primary, laser metrology applied to the focal plane array, and new algorithms for measuring image position and displacement on the focal plane. The required level of accuracy in both the metrology and image position sensing is at a few micro-pixels. We have begun a detailed investigation of the feasibility of our approach using simulations and a micro-pixel image position sensing testbed called MCT. So far we have been able to demonstrate that the pixel-to-pixel distances in a focal plane can be measured with a precision of 20 micro-pixels and image-to-image distances with a precision of 30 micro-pixels. We have also shown using simulations that our image position algorithm can achieve accuracy of 4 micro-pixels in the presence of lambda/20 wavefront errors.

  10. High spatial resolution performance of pixelated scintillators

    NASA Astrophysics Data System (ADS)

    Shigeta, Kazuki; Fujioka, Nobuyasu; Murai, Takahiro; Hikita, Izumi; Morinaga, Tomohiro; Tanino, Takahiro; Kodama, Haruhito; Okamura, Masaki

    2017-03-01

    In indirect conversion flat panel detectors (FPDs) for digital X-ray imaging, scintillating materials such as Terbium-doped Gadolinium Oxysulfide (Gadox) convert X-rays into visible light, and an amorphous silicon (a-Si) photodiode array converts the light into electrons. It is, however, desirable to improve the detector spatial resolution, because light spreading inside the scintillator causes crosstalk to neighboring a-Si photodiode pixels, and the resolution is degraded compared with direct conversion FPDs, which directly convert X-rays into electrons using materials such as amorphous selenium. In this study, the scintillator was pixelated with the same pixel pitch as the a-Si photodiode array by a barrier rib structure to limit the light spreading, and the detector spatial resolution was improved. The FPD with the pixelated scintillator was manufactured as follows. The barrier rib structure with 127 μm pitch was fabricated on a substrate by a photosensitive organic-inorganic paste method, a reflective layer was coated on the surface of the barrier rib, and the structure was then filled with Gadox particles. The pixelated scintillator was aligned with the 127 μm pixel pitch of the a-Si photodiode array and assembled as an FPD. The FPD with the pixelated scintillator showed a high modulation transfer function (MTF); values of 0.94 at 1 cycle/mm and 0.88 at 2 cycles/mm were achieved. These MTF values are almost equal to the maximum that can be theoretically achieved in an FPD with a 127 μm pixel pitch a-Si photodiode array. Thus the FPD with pixelated scintillators has great potential for high spatial resolution applications such as mammography and nondestructive testing.

  11. LISe pixel detector for neutron imaging

    NASA Astrophysics Data System (ADS)

    Herrera, Elan; Hamm, Daniel; Wiggins, Brenden; Milburn, Rob; Burger, Arnold; Bilheux, Hassina; Santodonato, Louis; Chvala, Ondrej; Stowe, Ashley; Lukosi, Eric

    2016-10-01

    Semiconducting lithium indium diselenide, 6LiInSe2 or LISe, has promising characteristics for neutron detection applications. The 95% isotopic enrichment of 6Li results in a highly efficient thermal-neutron-sensitive material. In this study, we report on a proof-of-principle investigation of a semiconducting LISe pixel detector to demonstrate its potential as an efficient neutron imager. The LISe pixel detector had a 4×4 array of pixels with a 550 μm pitch on a 5×5×0.56 mm³ LISe substrate. An experimentally verified spatial resolution of 300 μm was observed utilizing a super-sampling technique.

  12. Per-Pixel Lighting Data Analysis

    SciTech Connect

    Inanici, Mehlika

    2005-08-01

    This report presents a framework for per-pixel analysis of the qualitative and quantitative aspects of luminous environments. Recognizing the need for better lighting analysis capabilities and appreciating the new measurement abilities developed within the LBNL Lighting Measurement and Simulation Toolbox, the "Per-pixel Lighting Data Analysis" project demonstrates several techniques for analyzing luminance distribution patterns, luminance ratios, adaptation luminance and glare assessment. The techniques are syntheses of current practices in lighting design and the unique practices made possible by per-pixel data availability. The demonstrated analysis techniques are applicable to both computer-generated and digitally captured images (physically-based renderings and High Dynamic Range photographs).

  13. Anode readout for pixellated CZT detectors

    NASA Astrophysics Data System (ADS)

    Narita, Tomohiko; Grindlay, Jonathan E.; Hong, Jaesub; Niestemski, Francis C.

    2004-02-01

    Determination of the photon interaction depth offers numerous advantages for an astronomical hard X-ray telescope. The interaction depth is typically derived from two signals: anode and cathode, or collecting and non-collecting electrodes. We present some preliminary results from our depth sensing detectors using only the anode pixel signals. By examining several anode pixel signals simultaneously, we find that we can estimate the interaction depth, and get sub-pixel 2-D position resolution. We discuss our findings and the requirements for future ASIC development.

  14. Color constancy at a pixel.

    PubMed

    Finlayson, G D; Hordley, S D

    2001-02-01

    In computational terms we can solve the color constancy problem if device red, green, and blue sensor responses, or RGB's, for surfaces seen under an unknown illuminant can be mapped to corresponding RGB's under a known reference light. In recent years almost all authors have argued that this three-dimensional problem is too hard. It is argued that because a bright light striking a dark surface results in the same physical spectra as those of a dim light incident on a light surface, the magnitude of RGB's cannot be recovered. Consequently, modern color constancy algorithms attempt only to recover image chromaticities under the reference light: They solve a two-dimensional problem. While significant progress has been made toward achieving chromaticity constancy, recent work has shown that the most advanced algorithms are unable to render chromaticity stable enough so that it can be used as a cue for object recognition [B. V. Funt, K. Bernard, and L. Martin, in Proceedings of the Fifth European Conference on Computer Vision (European Vision Society, Springer-Verlag, Berlin, 1998), Vol. II, p. 445.] We take this reductionist approach a little further and look at the one-dimensional color constancy problem. We ask, Is there a single color coordinate, a function of image chromaticities, for which the color constancy problem can be solved? Our answer is an emphatic yes. We show that there exists a single invariant color coordinate, a function of R, G, and B, that depends only on surface reflectance. Two corollaries follow. First, given an RGB image of a scene viewed under any illuminant, we can trivially synthesize the same gray-scale image (we simply code the invariant coordinate as a gray scale). Second, this result implies that we can solve the one-dimensional color constancy problem at a pixel (in scenes with no color diversity whatsoever). We present experiments that show that invariant gray-scale histograms are a stable feature for object recognition. Indexing on

  15. Charge Loss and Charge Sharing Measurements for Two Different Pixelated Cadmium-Zinc-Telluride Detectors

    NASA Astrophysics Data System (ADS)

    Gaskin, J. A.; Sharma, D. P.; Ramsey, B. D.; Seller, P.

    2003-05-01

    As part of ongoing research at Marshall Space Flight Center, Cadmium-Zinc-Telluride (CdZnTe) multi-pixel detectors are being developed for use at the focal plane of the High Energy Replicated Optics (HERO) telescope. HERO requires a 64x64 pixel array with a spatial resolution of around 200 microns (with a 6 meter focal length) and high energy resolution (< 2% at 60keV). We are currently testing smaller arrays as a necessary first step towards this goal. In this presentation, we compare charge sharing and charge loss measurements between two devices that differ both electronically and geometrically. The first device consists of a 1-mm-thick piece of CdZnTe that is sputtered with a 4x4 array of pixels with pixel pitch of 750 microns (inter-pixel gap is 100 microns). The signal is read out using discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe that is sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap is 50 microns). Instead of using discrete preamplifiers, the crystal is bonded to an ASIC that provides all of the front-end electronics to each of the 256 pixels. Further, we compare the measured results with simulated results and discuss to what degree the bias voltage (i.e. the electric field) and hence the drift and diffusion coefficients affect our measurements.

  16. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
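
    The following toy sketch only illustrates the general principle stated above, namely that scanning a coded aperture and recording a single photodiode value per shift amounts to a convolution, which can then be undone digitally by another convolution. The random mask, the decoding weights, and the array size are assumptions for illustration; they are not the CXRO URA design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene" and a random binary mask standing in for the URA; a genuine
# URA is designed so that (mask convolved with its decoding pattern) is
# close to a delta function.
scene = rng.random((32, 32))
mask = (rng.random((32, 32)) > 0.5).astype(float)
decoder = 2.0 * mask - 1.0            # illustrative decoding weights only

fft2, ifft2 = np.fft.fft2, np.fft.ifft2

# Scanning the aperture and recording the photodiode signal at each lateral
# shift amounts to a circular convolution of the scene with the mask ...
measurement = np.real(ifft2(fft2(scene) * fft2(mask)))

# ... and the image is recovered digitally by one more convolution with the
# decoding pattern.  With a true URA this returns the scene up to a constant;
# with the random mask used here it is only a rough approximation.
reconstruction = np.real(ifft2(fft2(measurement) * fft2(decoder)))
```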

  17. Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data

    NASA Astrophysics Data System (ADS)

    Anirudh, Rushil; Thiagarajan, Jayaraman J.; Bremer, Timo; Kim, Hyojin

    2016-03-01

    Early detection of lung nodules is currently one of the most effective ways to predict and treat lung cancer. As a result, the past decade has seen a lot of focus on computer aided diagnosis (CAD) of lung nodules, whose goal is to efficiently detect and segment lung nodules and classify them as benign or malignant. Effective detection of such nodules remains a challenge due to their arbitrariness in shape, size and texture. In this paper, we propose to employ 3D convolutional neural networks (CNN) to learn highly discriminative features for nodule detection in lieu of hand-engineered ones such as geometric shape or texture. While 3D CNNs are promising tools to model the spatio-temporal statistics of data, they are limited by their need for detailed 3D labels, which can be prohibitively expensive when compared to obtaining 2D labels. Existing CAD methods rely on obtaining detailed labels for lung nodules to train models, which is also unrealistic and time consuming. To alleviate this challenge, we propose a solution wherein the expert needs to provide only a point label, i.e., the central pixel of the nodule, and its largest expected size. We use unsupervised segmentation to grow out a 3D region, which is used to train the CNN. Using experiments on the SPIE-LUNGx dataset, we show that the network trained using these weak labels can produce reasonably low false positive rates with a high sensitivity, even in the absence of accurate 3D labels.

  18. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization.

    PubMed

    Kainz, Philipp; Pfeiffer, Michael; Urschler, Martin

    2017-01-01

    Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNN) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the other approaches developed simultaneously for the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.

  19. The Probabilistic Convolution Tree: Efficient Exact Bayesian Inference for Faster LC-MS/MS Protein Inference

    PubMed Central

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log²(k)) and the space to O(k log(k)), where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234

  20. The probabilistic convolution tree: efficient exact Bayesian inference for faster LC-MS/MS protein inference.

    PubMed

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called "causal independence"). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log²(k)) and the space to O(k log(k)) where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions.
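
    As a minimal sketch of the core idea behind a convolution tree, the snippet below computes the distribution of a sum of independent count variables by convolving their probability mass functions pairwise in a balanced tree. The function name and the example values are illustrative; the paper's full inference machinery (adder nodes, messages to and from the summands) is not reproduced.

```python
import numpy as np

def convolution_tree(pmfs):
    """Distribution of the sum of independent count variables.

    Each entry of `pmfs` is a 1-D array whose i-th element is P(X = i).
    Convolving the PMFs pairwise in a balanced tree (rather than folding
    them in one at a time) keeps intermediate arrays short and mirrors the
    structure exploited by the probabilistic convolution tree.
    """
    level = [np.asarray(p, dtype=float) for p in pmfs]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            # FFT-based convolution could be substituted here for long arrays
            nxt.append(np.convolve(level[i], level[i + 1]))
        if len(level) % 2:                 # carry an unpaired node upward
            nxt.append(level[-1])
        level = nxt
    return level[0]

# usage: sum of three biased coins -> P(total heads = 0..3)
print(convolution_tree([[0.5, 0.5], [0.7, 0.3], [0.9, 0.1]]))
```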

  1. Pixels, Imagers and Related Fabrication Methods

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor)

    2014-01-01

    Pixels, imagers and related fabrication methods are described. The described methods result in cross-talk reduction in imagers and related devices by generating depletion regions. The devices can also be used with electronic circuits for imaging applications.

  2. Pixels, Imagers and Related Fabrication Methods

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor)

    2016-01-01

    Pixels, imagers and related fabrication methods are described. The described methods result in cross-talk reduction in imagers and related devices by generating depletion regions. The devices can also be used with electronic circuits for imaging applications.

  3. Readout and DAQ for Pixel Detectors

    NASA Astrophysics Data System (ADS)

    Platkevic, Michal

    2010-01-01

    Data readout and acquisition control of pixel detectors demand the transfer of large amounts of data between the detector and the computer. For this purpose, dedicated interfaces are used which are designed with a focus on features like speed, small dimensions, or flexibility of use, built around components such as digital signal processors, field-programmable gate arrays (FPGAs), and USB communication ports. This work summarizes the readout and DAQ system built for state-of-the-art pixel detectors of the Medipix family.

  4. Design of the small pixel pitch ROIC

    NASA Astrophysics Data System (ADS)

    Liang, Qinghua; Jiang, Dazhao; Chen, Honglei; Zhai, Yongcheng; Gao, Lei; Ding, Ruijun

    2014-11-01

    Since the technology trend of the third-generation IRFPA towards resolution enhancement has steadily progressed, the pixel pitch of IRFPAs has been greatly reduced. A 640×512 readout integrated circuit (ROIC) of an IRFPA with 15 μm pixel pitch is presented in this paper. The 15 μm pixel pitch ROIC design faces many challenges. As is well known, the integrating capacitor is a key performance parameter when considering pixel area, charge capacity and dynamic range, so we adopt the effective method of 2 by 2 pixels sharing an integrating capacitor to solve this problem. The input unit cell architecture contains two paralleled sample-and-hold parts, which not only allow the FPA to be operated in full-frame snapshot mode but also save unit circuit area. Different applications need more matching input unit circuits. Because the dimension of 2×2 pixels is 30 μm × 30 μm, an input stage based on direct injection (DI), which has a medium injection ratio and small layout area, is proved to be suitable for middle wave (MW), while BDI with a three-transistor cascode amplifier is used for long wave (LW). By adopting the 0.35 μm 2P4M mixed-signal process, the circuit architecture can achieve an effective charge capacity of 7.8 Me- per pixel with a 2.2 V output range for MW and 7.3 Me- per pixel with a 2.6 V output range for LW. According to the simulation results, this circuit works well under a 5 V power supply and achieves less than 0.1% nonlinearity.

  5. Toward Multispectral Imaging with Colloidal Metasurface Pixels.

    PubMed

    Stewart, Jon W; Akselrod, Gleb M; Smith, David R; Mikkelsen, Maiken H

    2017-02-01

    Multispectral colloidal metasurfaces are fabricated that exhibit greater than 85% absorption and ≈100 nm linewidths by patterning film-coupled nanocubes in pixels using a fusion of bottom-up and top-down fabrication techniques over wafer-scale areas. With this technique, the authors realize a multispectral pixel array consisting of six resonances between 580 and 1125 nm and reconstruct an RGB image with 9261 color combinations.

  6. Fast convolution method and its application in mask optimization for intensity calculation using basis expansion.

    PubMed

    Sun, Yaping; Zhang, Jinyu; Wang, Yan; Yu, Zhiping

    2014-12-01

    Finer grid representation is required for a more accurate description of mask patterns in inverse lithography techniques, thus resulting in a large-size mask representation and heavy computational cost. To mitigate the computation problem caused by intensive convolutions in mask optimization, a new method called convolution using basis expansion (CBE) is discussed in this paper. Matrices defined on the fine grid are projected onto the coarse grid under a base matrix set. The new matrices formed by the expansion coefficients are used to perform convolution on the coarse grid. The convolution on the fine grid can be approximated by the sum of a few convolutions on the coarse grid following an interpolation procedure. The CBE is verified by random matrix convolutions and intensity calculation in lithography simulation. Results show that the use of the CBE method results in similar image quality with significant running speed enhancement compared with the traditional convolution method.

  7. Holographic imaging with single pixel sensor

    NASA Astrophysics Data System (ADS)

    Leportier, Thibault; Lee, Young Tack; Hwang, Do Kyung; Park, Min-Chul

    2016-09-01

    Imaging techniques based on CCD sensors with a very high number of pixels make it possible to record high-resolution images. However, the huge storage load and high bandwidth required to store and transmit digital holographic information are technical bottlenecks that should be overcome for the future of holographic display. Techniques to capture images with single pixel sensors have been greatly improved recently with the development of compressive sensing (CS) algorithms. Since interference patterns may be considered sparse, the number of measurements required to recover the information with CS is lower than the number of pixels of the reconstructed image. In addition, this method does not need any scanning system. One other advantage of single pixel imaging is that the cost of the recording system can be dramatically reduced, since high-resolution cameras are expensive while compressive sensing exploits only one pixel. In this paper, we present an imaging system based on phase-shifting holography. First, simulations were performed to confirm that a hologram could be reconstructed by compressive sensing even if the number of measurements was smaller than the number of pixels. Then, an experimental set-up was realized. Several holograms with different phase shifts introduced by quarter and half wave plates in the reference beam were acquired. We demonstrated that our system enables the reconstruction of the object.

  8. Steganography based on pixel intensity value decomposition

    NASA Astrophysics Data System (ADS)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
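
    The record above evaluates several number-system decompositions of pixel intensities. As a hedged illustration of one of the schemes it names, the sketch below performs a Fibonacci (Zeckendorf) decomposition of an 8-bit pixel value into "virtual bit-planes"; the paper's own 16-plane decomposition is different and is not reproduced here, and the plane count used is an assumption.

```python
def fibonacci_planes(value, n_planes=12):
    """Zeckendorf-style decomposition of an 8-bit pixel value.

    Returns a list of 0/1 coefficients over the Fibonacci weights
    1, 2, 3, 5, 8, ... such that sum(bit * weight) == value.  This is the
    'Fibonacci' scheme mentioned in the abstract; the paper's proposed
    16-plane decomposition is a different construction.
    """
    fibs = [1, 2]
    while len(fibs) < n_planes:
        fibs.append(fibs[-1] + fibs[-2])
    bits = [0] * n_planes
    for i in range(n_planes - 1, -1, -1):     # greedy, largest weight first
        if fibs[i] <= value:
            bits[i] = 1
            value -= fibs[i]
    return bits

# usage: 181 == 144 + 34 + 3 over the weights 1,2,3,5,8,13,21,34,55,89,144,233
weights = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
assert sum(b * w for b, w in zip(fibonacci_planes(181), weights)) == 181
```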

  9. Simulation study of pixel detector charge digitization

    NASA Astrophysics Data System (ADS)

    Wang, Fuyue; Nachman, Benjamin; Sciveres, Maurice; Lawrence Berkeley National Laboratory Team

    2017-01-01

    Reconstruction of tracks from nearly overlapping particles, called Tracking in Dense Environments (TIDE), is an increasingly important component of many physics analyses at the Large Hadron Collider as signatures involving highly boosted jets are investigated. TIDE makes use of the charge distribution inside a pixel cluster to resolve tracks that share one or more of their pixel detector hits. In practice, the pixel charge is discretized using the Time-over-Threshold (ToT) technique. More charge information is better for discrimination, but more challenging for designing and operating the detector. A model of the silicon pixels has been developed in order to study the impact of the precision of the digitized charge distribution on distinguishing multi-particle clusters. The output of the GEANT4-based simulation is used to train neural networks that predict the multiplicity and location of particles depositing energy inside one cluster of pixels. By studying the multi-particle cluster identification efficiency and position resolution, we quantify the trade-off between the number of ToT bits and low-level tracking inputs. As both ATLAS and CMS are designing upgraded detectors, this work provides guidance for the pixel module designs to meet TIDE needs. Work funded by the China Scholarship Council and the Office of High Energy Physics of the U.S. Department of Energy under contract DE-AC02-05CH11231.

  10. Focal plane array with modular pixel array components for scalability

    SciTech Connect

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  11. New CMOS digital pixel sensor architecture dedicated to a visual cortical implant

    NASA Astrophysics Data System (ADS)

    Trépanier, Annie; Trépanier, Jean-Luc; Sawan, Mohamad; Audet, Yves

    2004-10-01

    A CMOS image sensor with pixel level analog to digital conversion is presented. Each 16μm x 16μm pixel area contains a photodiode, with a fill factor of 22%, a comparator and an 8-bit DRAM, resulting in a total of 44 transistors per pixel. A digital to analog converter is used to deliver a voltage reference to compare with the pixel voltage for the analog to digital conversion. This sensor is required by a visual cortical stimulator, primarily to capture the image used to stimulate the visual cortex of a blind patient. An active range finder system will be added to the implant, requiring the difference information between two images, in order to obtain the 3D information useful to the patient. For this purpose, three selectable operation modes are combined in the same pixel circuit. The linear integration, resulting from image capture at multiple exposure times, allows a high intrascene dynamic range. Random accessibility, in space and time, of the array of sensors is possible with the logarithmic mode. The new differential mode computes the difference between two consecutive images. The pixel circuit has been fabricated in CMOS 0.18μm technology and is under test to validate the full operation of the three modes. Also, a matrix of 45 x 90 pixels is currently being implemented for fabrication.

  12. Physics benchmarks for the Belle II pixel detector

    NASA Astrophysics Data System (ADS)

    Li Gioi, L.

    2015-03-01

    SuperKEKB, the massive upgrade of the asymmetric electron positron collider KEKB in Tsukuba, Japan, aims at an integrated luminosity in excess of 50 ab⁻¹. It will deliver an instantaneous luminosity of 8 × 10³⁵ cm⁻²s⁻¹, which is 40 times higher than the world record set by KEKB. At this high luminosity, a large increase of the background relative to the previous KEKB machine is expected. This and the more demanding physics rate call for an entirely new tracking system. The expected increase of background would in fact create an unacceptably high occupancy for a silicon strip detector, making efficient track reconstruction and vertexing impossible. The solution for Belle II is a pixel detector which intrinsically provides three-dimensional space points. The new two-layer silicon pixel vertex detector, based on DEPFET technology, will be mounted directly on the beam pipe. It will provide an accurate measurement of track positions in order to precisely reconstruct the decay vertices of short-lived particles. In this paper we discuss the physics performance of the Belle II pixel vertex detector, which will be essential for the precise measurement of the CP parameters in various B and D decay modes.

  13. Construction of the Phase I Forward Pixel Detector

    NASA Astrophysics Data System (ADS)

    Neylon, Ashton; Bartek, Rachel

    2017-01-01

    The silicon pixel detector is the innermost component of the CMS tracking system, providing high precision space point measurements of charged particle trajectories. The original CMS detector was designed for the nominal instantaneous LHC luminosity of 1 × 10³⁴ cm⁻²s⁻¹. The LHC has already started to exceed this luminosity, causing the CMS pixel detector to see a dynamic inefficiency caused by data losses due to buffer overflows. For this reason the CMS Collaboration has been building an upgraded pixel detector which is scheduled for installation during an extended year-end technical stop during winter 2016/2017. The phase 1 upgrade includes four barrel layers and three forward disks, providing robust tracking and vertexing for LHC luminosities up to 2 × 10³⁴ cm⁻²s⁻¹. The upgrade incorporates new readout chips, front-end electronics, DC-DC powering, and dual-phase CO2 cooling to achieve performance exceeding that of the present detector with a lower material budget. This contribution will review the design and technology choices of the Phase I detector and discuss the status of the detector. The challenges and difficulties encountered during the construction will also be presented, as well as the lessons learned for future upgrades. National Science Foundation.

  14. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    PubMed

    Zhang, Junming; Wu, Yan

    2017-02-21

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is large, feature selection is typically required. Meanwhile, designing handcrafted features is a difficult and time-consuming task because feature design requires the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, features that we are unaware of may exist, and these may be important for sleep stage classification. Therefore, a new sleep stage classification system, which is based on the complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike the existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. Additionally, we also prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performances of handcrafted features are compared with those of learned features via CCNN. Experimental results show that the proposed method is comparable to the existing methods. CCNN obtains a better classification performance and considerably faster convergence speed than a conventional convolutional neural network. Experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.

  15. Planar slim-edge pixel sensors for the ATLAS upgrades

    NASA Astrophysics Data System (ADS)

    Altenheiner, S.; Goessling, C.; Jentzsch, J.; Klingenberg, R.; Lapsien, T.; Muenstermann, D.; Rummler, A.; Troska, G.; Wittig, T.

    2012-02-01

    The ATLAS detector at CERN is a general-purpose experiment at the Large Hadron Collider (LHC). The ATLAS Pixel Detector is the innermost tracking detector of ATLAS and requires a sufficient level of hermeticity to achieve superb track reconstruction performance. The current planar n-type pixel sensors feature a pixel matrix of n+-implantations which is (on the opposite p-side) surrounded by so-called guard rings to reduce the high voltage stepwise towards the cutting edge and an additional safety margin. Because of the inactive region around the active area, the sensor modules have been shingled on top of each other's edge which limits the thermal performance and adds complexity in the present detector. The first upgrade phase of the ATLAS pixel detector will consist of the insertable b-layer (IBL), an additional b-layer which will be inserted into the present detector in 2013. Several changes in the sensor design with respect to the existing detector had to be applied to comply with the IBL's specifications and are described in detail. A key issue for the ATLAS upgrades is a flat arrangement of the sensors. To maintain the required level of hermeticity in the detector, the inactive sensor edges have to be reduced to minimize the dead space between the adjacent detector modules. Unirradiated and irradiated sensors with the IBL design have been operated in test beams to study the efficiency performance in the sensor edge region and it was found that the inactive edge width could be reduced from 1100 μm to less than 250 μm.

  16. Geometry optimization of a barrel silicon pixelated tracker

    NASA Astrophysics Data System (ADS)

    Liu, Qing-Yuan; Wang, Meng; Winter, Marc

    2017-08-01

    We have studied optimization of the design of a barrel-shaped pixelated tracker for given spatial boundaries. The optimization includes choice of number of layers and layer spacing. Focusing on tracking performance only, momentum resolution is chosen as the figure of merit. The layer spacing is studied based on Gluckstern’s method and a numerical geometry scan of all possible tracker layouts. A formula to give the optimal geometry for curvature measurement is derived in the case of negligible multiple scattering to deal with trajectories of very high momentum particles. The result is validated by a numerical scan method, which could also be implemented with any track fitting algorithm involving material effects, to search for the optimal layer spacing and to determine the total number of layers for the momentum range of interest under the same magnetic field. The geometry optimization of an inner silicon pixel tracker proposed for BESIII is also studied by using a numerical scan and these results are compared with Geant4-based simulations. Supported by National Natural Science Foundation of China (U1232202)

  17. Method and apparatus for decoding compatible convolutional codes

    NASA Technical Reports Server (NTRS)

    Doland, G. D. (Inventor)

    1974-01-01

    This invention relates to learning decoders for decoding compatible convolutional codes. The decoder decodes signals which have been encoded by a convolutional coder and allows performance near the theoretical limit of performance for coded data systems. The decoder includes a sub-bit shift register wherein the received sub-bits are entered after regeneration and shifted in synchronization with a clock signal recovered from the received sub-bit stream. The received sub-bits are processed by a sub-bit decision circuit, entered into a sub-bit shift register, decoded by a decision circuit, entered into a data shift register, and updated to reduce data errors. The bit decision circuit utilizes stored sub-bits and stored data bits to determine subsequent data-bits. Data errors are reduced by using at least one up-date circuit.

  18. Miniaturized Band Stop FSS Using Convoluted Swastika Structure

    NASA Astrophysics Data System (ADS)

    Bilvam, Sridhar; Sivasamy, Ramprabhu; Kanagasabai, Malathi; Alsath M, Gulam Nabi; Baisakhiya, Sanjay

    2017-01-01

    This paper presents a miniaturized frequency selective surface (FSS) with stop band characteristics at the resonant frequency of 5.12 GHz. The unit cell size of the proposed FSS design is in the order of 0.095 λ×0.095 λ. The proposed unit cell is obtained by convoluting the arms of the basic swastika structure. The design provides fractional bandwidth of 9.0 % at the center frequency of 5.12 GHz in the 20 dB reference level of insertion loss. The symmetrical aspect of the design delivers identical response for both transverse electric (TE) and transverse magnetic (TM) modes thereby exhibiting polarization independent operation. The miniaturized design provides good angular independency for various incident angles. The dispersion analysis is done to substantiate the band stop operation of the convoluted swastika FSS. The proposed FSS is fabricated and its working is validated through measurements.

  19. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterparts, that is, as a weighted nearest-neighbor moving average with zero phase shift using convolute integer (universal number) weighting coefficients.
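
    As a hedged sketch of the idea of two-dimensional convolute integers (the 2-D analogue of Savitzky-Golay smoothing coefficients), the snippet below builds a least-squares polynomial smoothing kernel over a nearest-neighbor window. The window size and polynomial order are illustrative choices, and the floating-point kernel is the continuous analogue of the integer coefficient tables in the original work.

```python
import numpy as np

def sg2d_kernel(window=5, order=2):
    """2-D Savitzky-Golay smoothing kernel (convolute-integer analogue).

    Fits a 2-D polynomial of total degree `order` to each window x window
    neighborhood in the least-squares sense; the returned kernel gives the
    fitted value at the window center, so applying it as a moving-average
    convolution performs zero-phase low-pass filtering.
    """
    half = window // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # design matrix with monomials x^i * y^j for i + j <= order
    cols = [(x**i * y**j).ravel()
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    # the pseudo-inverse row for the constant term evaluates the fitted
    # polynomial at the center of the window
    return np.linalg.pinv(A)[0].reshape(window, window)

# usage: smoothed = scipy.signal.convolve2d(image, sg2d_kernel(), mode='same')
```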

  20. A new computational decoding complexity measure of convolutional codes

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2014-12-01

    This paper presents a computational complexity measure of convolutional codes well suitable for software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementation of each arithmetic operation is determined in terms of machine cycles taken by its execution using a typical digital signal processor widely used for low-power telecommunications applications.

  1. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.
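
    The following minimal sketch shows only the underlying steady-flow LIC step that UFLIC builds on: for each pixel, trace a short streamline through the vector field and average a noise texture along it with a box kernel. The time-accurate value depositing and successive feed-forward passes of UFLIC are not reproduced, and the streamline length, step size, and Euler integration are simplifying assumptions.

```python
import numpy as np

def lic(vx, vy, noise, length=15, step=0.5):
    """Basic line integral convolution of a noise texture over a 2-D field.

    For every pixel, a streamline is traced forward and backward with Euler
    steps and the noise texture is averaged along it (a box convolution
    kernel).  This is the steady-flow LIC that UFLIC extends.
    """
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    mag = np.hypot(vx, vy) + 1e-12
    ux, uy = vx / mag, vy / mag                     # unit vector field
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for sign in (+1.0, -1.0):               # forward and backward
                y, x = float(i), float(j)
                for _ in range(length):
                    yi, xi = int(round(y)), int(round(x))
                    if not (0 <= yi < h and 0 <= xi < w):
                        break
                    total += noise[yi, xi]
                    count += 1
                    x += sign * step * ux[yi, xi]
                    y += sign * step * uy[yi, xi]
            out[i, j] = total / max(count, 1)
    return out

# usage: circular flow over random noise
# yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
# img = lic(-yy, xx, np.random.rand(128, 128))
```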

  2. The analysis of VERITAS muon images using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Feng, Qi; Lin, Tony T. Y.; VERITAS Collaboration

    2017-06-01

    Imaging atmospheric Cherenkov telescopes (IACTs) are sensitive to rare gamma-ray photons, buried in the background of charged cosmic-ray (CR) particles, the flux of which is several orders of magnitude greater. The ability to separate gamma rays from CR particles is important, as it is directly related to the sensitivity of the instrument. This gamma-ray/CR-particle classification problem in IACT data analysis can be treated with the rapidly-advancing machine learning algorithms, which have the potential to outperform the traditional box-cut methods on image parameters. We present preliminary results of a precise classification of a small set of muon events using a convolutional neural network model with the raw images as input features. We also show the possibility of using the convolutional neural network model for regression problems, such as the radius and brightness measurement of muon events, which can be used to calibrate the throughput efficiency of IACTs.

  3. Self-Taught convolutional neural networks for short text clustering.

    PubMed

    Xu, Jiaming; Xu, Bo; Wang, Peng; Zheng, Suncong; Tian, Guanhua; Zhao, Jun; Xu, Bo

    2017-04-01

    Short text clustering is a challenging problem due to the sparseness of its text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC(2)), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representation in an unsupervised manner. In our framework, the original raw text features are firstly embedded into compact binary codes by using one existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, meanwhile the output units are used to fit the pre-trained binary codes in the training process. Finally, we get the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective, flexible and outperforms several popular clustering methods when tested on three public short text datasets.

  4. Deep learning for steganalysis via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis to learn features automatically via deep learning models. We propose a novel customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.

  5. Spectral density of generalized Wishart matrices and free multiplicative convolution

    NASA Astrophysics Data System (ADS)

    Młotkowski, Wojciech; Nowak, Maciej A.; Penson, Karol A.; Życzkowski, Karol

    2015-07-01

    We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W = XX†, where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP⊠s, which for an integer s yield Fuss-Catalan distributions corresponding to a product of s independent square random matrices, X = X1⋯Xs. New formulas for the level densities are derived for s = 3 and s = 1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of arcsine and MP distributions, is obtained. We also explain the reason for this curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.

  6. Rationale-Augmented Convolutional Neural Networks for Text Classification

    PubMed Central

    Zhang, Ye; Marshall, Iain; Wallace, Byron C.

    2016-01-01

    We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their constituent sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions. PMID:28191551

  7. Statistical Downscaling using Super Resolution Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Vandal, T.; Ganguly, S.; Ganguly, A. R.; Kodra, E.

    2016-12-01

    We present a novel approach to statistical downscaling using image super-resolution and convolutional neural networks. Image super-resolution (SR), a widely researched topic in the machine learning community, aims to increase the resolution of low resolution images, similar to the goal of downscaling Global Circulation Models (GCMs). With SR we are able to capture and generalize spatial patterns in the climate by representing each climate state as an "image". In particular, we show the applicability of Super Resolution Convolutional Neural Networks (SRCNN) to downscaling daily precipitation in the United States. SRCNN is a state-of-the-art single image SR method and has the advantage of utilizing multiple input variables, known as channels. We apply SRCNN to downscaling precipitation by using low resolution precipitation and high resolution elevation as inputs and compare to bias correction spatial disaggregation (BCSD).

  8. Fully convolutional neural networks for polyp segmentation in colonoscopy

    NASA Astrophysics Data System (ADS)

    Brandao, Patrick; Mazomenos, Evangelos; Ciuti, Gastone; Caliò, Renato; Bianchi, Federico; Menciassi, Arianna; Dario, Paolo; Koulaouzidis, Anastasios; Arezzo, Alberto; Stoyanov, Danail

    2017-03-01

    Colorectal cancer (CRC) is one of the most common and deadliest forms of cancer, accounting for nearly 10% of all forms of cancer in the world. Even though colonoscopy is considered the most effective method for screening and diagnosis, the success of the procedure is highly dependent on the operator's skills and level of hand-eye coordination. In this work, we propose to adapt fully convolutional neural networks (FCN) to identify and segment polyps in colonoscopy images. We converted three established networks into a fully convolutional architecture and fine-tuned their learned representations to the polyp segmentation task. We validate our framework on the 2015 MICCAI polyp detection challenge dataset, surpassing the state-of-the-art in automated polyp detection. Our method obtained high segmentation accuracy and a detection precision and recall of 73.61% and 86.31%, respectively.

  9. Spectral density of generalized Wishart matrices and free multiplicative convolution.

    PubMed

    Młotkowski, Wojciech; Nowak, Maciej A; Penson, Karol A; Życzkowski, Karol

    2015-07-01

    We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W=XX(†), where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP(⊠s), which for an integer s yield Fuss-Catalan distributions corresponding to a product of s independent square random matrices, X=X(1)⋯X(s). New formulas for the level densities are derived for s=3 and s=1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of arcsine and MP distributions, is obtained. We also explain the reason for this curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.

  10. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.

  11. Spatial clustering of pixels of a multispectral image

    DOEpatents

    Conger, James Lynn

    2014-08-19

    A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and the most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
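
    As a hedged sketch of one plausible reading of the similarity step described above, the snippet below computes, for every pixel of a multispectral cube, the maximum spectral similarity to its eight neighbors; pixels whose score falls below a threshold could then be excluded from clustering. Cosine similarity, the 8-neighborhood, and the wrap-around edge handling are illustrative assumptions, not the patent's specified measure.

```python
import numpy as np

def max_neighbor_similarity(cube):
    """Maximum spectral similarity of each pixel to its 8 neighbors.

    `cube` has shape (rows, cols, bands).  Cosine similarity is used as an
    illustrative spectral similarity score; edges wrap around for brevity.
    """
    norm = cube / (np.linalg.norm(cube, axis=2, keepdims=True) + 1e-12)
    best = np.full(cube.shape[:2], -np.inf)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(norm, (dy, dx), axis=(0, 1))
            sim = np.sum(norm * shifted, axis=2)    # cosine similarity
            best = np.maximum(best, sim)
    return best

# usage: keep = max_neighbor_similarity(cube) >= threshold
```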

  12. Charge Sharing and Charge Loss in a Cadmium-Zinc-Telluride Fine-Pixel Detector Array

    NASA Technical Reports Server (NTRS)

    Gaskin, J. A.; Sharma, D. P.; Ramsey, B. D.; Six, N. Frank (Technical Monitor)

    2002-01-01

    Because of its high atomic number, room temperature operation, low noise, and high spatial resolution, a Cadmium-Zinc-Telluride (CZT) multi-pixel detector is ideal for hard x-ray astrophysical observation. As part of ongoing research at MSFC (Marshall Space Flight Center) to develop multi-pixel CdZnTe detectors for this purpose, we have measured charge sharing and charge loss for a 4x4 (750-micron pitch), 1-mm-thick pixel array and modeled these results using a Monte-Carlo simulation. This model was then used to predict the amount of charge sharing for a much finer pixel array (with a 300-micron pitch). Future work will enable us to compare the simulated results for the finer array to measured values.

  13. Image interpolation by two-dimensional parametric cubic convolution.

    PubMed

    Shi, Jiazheng; Reichenbach, Stephen E

    2006-07-01

    Cubic convolution is a popular method for image interpolation. Traditionally, the piecewise-cubic kernel has been derived in one dimension with one parameter and applied to two-dimensional (2-D) images in a separable fashion. However, images typically are statistically nonseparable, which motivates this investigation of nonseparable cubic convolution. This paper derives two new nonseparable, 2-D cubic-convolution kernels. The first kernel, with three parameters (designated 2D-3PCC), is the most general 2-D, piecewise-cubic interpolator defined on [-2, 2] x [-2, 2] with constraints for biaxial symmetry, diagonal (or 90 degrees rotational) symmetry, continuity, and smoothness. The second kernel, with five parameters (designated 2D-5PCC), relaxes the constraint of diagonal symmetry, based on the observation that many images have rotationally asymmetric statistical properties. This paper also develops a closed-form solution for determining the optimal parameter values for parametric cubic-convolution kernels with respect to ensembles of scenes characterized by autocorrelation (or power spectrum). This solution establishes a practical foundation for adaptive interpolation based on local autocorrelation estimates. Quantitative fidelity analyses and visual experiments indicate that these new methods can outperform several popular interpolation methods. An analysis of the error budgets for reconstruction error associated with blurring and aliasing illustrates that the methods improve interpolation fidelity for images with aliased components. For images with little or no aliasing, the methods yield results similar to other popular methods. Both 2D-3PCC and 2D-5PCC are low-order polynomials with small spatial support and so are easy to implement and efficient to apply.
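
    For reference, the sketch below implements the classic one-parameter, separable cubic convolution baseline that the record above generalizes; the nonseparable 2D-3PCC and 2D-5PCC kernels themselves are not reproduced. The parameter value a = -0.5 (the Keys/Catmull-Rom choice) and the border clamping are conventional assumptions.

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Classic one-parameter piecewise-cubic interpolation kernel.

    a = -0.5 gives the Keys / Catmull-Rom kernel.  The paper's 2D-3PCC and
    2D-5PCC kernels generalize this to nonseparable 2-D forms.
    """
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    near = s <= 1
    far = (s > 1) & (s < 2)
    out[near] = (a + 2) * s[near]**3 - (a + 3) * s[near]**2 + 1
    out[far] = a * s[far]**3 - 5 * a * s[far]**2 + 8 * a * s[far] - 4 * a
    return out

def interp1d_cubic(samples, x, a=-0.5):
    """Interpolate a 1-D signal at a (possibly fractional) position x."""
    n = len(samples)
    i0 = int(np.floor(x))
    value = 0.0
    for k in range(i0 - 1, i0 + 3):                 # 4-sample support
        kc = min(max(k, 0), n - 1)                  # clamp at the borders
        value += samples[kc] * cubic_kernel([x - k], a)[0]
    return value

# separable 2-D use: interpolate along rows first, then along the result column
```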

  14. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like, algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.

  15. Convolution using guided acoustooptical interaction in thin-film waveguides

    NASA Technical Reports Server (NTRS)

    Chang, W. S. C.; Becker, R. A.; Tsai, C. S.; Yao, I. W.

    1977-01-01

    Interaction of two antiparallel acoustic surface waves (ASW) with an optical guided wave has been investigated theoretically as well as experimentally to obtain the convolution of two ASW signals. The maximum time-bandwidth product that can be achieved by such a convolver is shown to be of the order of 1000 or more. The maximum dynamic range can be as large as 83 dB.

  16. Image data compression using cubic convolution spline interpolation.

    PubMed

    Truong, T K; Wang, L J; Reed, I S; Hsieh, W S

    2000-01-01

    A new cubic convolution spline interpolation (CCSI) for both one-dimensional (1-D) and two-dimensional (2-D) signals is developed in order to subsample signal and image compression data. The CCSI yields a very accurate algorithm for smoothing. It is also shown that this new and fast smoothing filter for CCSI can be used with the JPEG standard to design an improved JPEG encoder-decoder for a high compression ratio.

  17. Charge Loss and Charge Sharing Measurements for Two Different Pixelated Cadmium-Zinc-Telluride Detectors

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Sharma, Dharma; Ramsey, Brian; Seller, Paul

    2003-01-01

    As part of ongoing research at Marshall Space Flight Center, Cadmium-Zinc-Telluride (CdZnTe) pixelated detectors are being developed for use at the focal plane of the High Energy Replicated Optics (HERO) telescope. HERO requires a 64x64 pixel array with a spatial resolution of around 200 microns (with a 6 m focal length) and high energy resolution (< 2% at 60 keV). We are currently testing smaller arrays as a necessary first step towards this goal. In this presentation, we compare charge sharing and charge loss measurements between two devices that differ both electronically and geometrically. The first device consists of a 1-mm-thick piece of CdZnTe that is sputtered with a 4x4 array of pixels with a pixel pitch of 750 microns (inter-pixel gap is 100 microns). The signal is read out using discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe that is sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap is 50 microns). Instead of using discrete preamplifiers, the crystal is bonded to an ASIC that provides all of the front-end electronics to each of the 256 pixels. Further, we compare the measured results with simulated results and discuss to what degree the bias voltage (i.e. the electric field) and hence the drift and diffusion coefficients affect our measurements.

  18. Charge Loss and Charge Sharing Measurements for Two Different Pixelated Cadmium-Zinc-Telluride Detectors

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Sharma, Dharma; Ramsey, Brian; Seller, Paul

    2003-01-01

    As part of ongoing research at Marshall Space Flight Center, Cadmium-Zinc-Telluride (CdZnTe) pixelated detectors are being developed for use at the focal plane of the High Energy Replicated Optics (HERO) telescope. HERO requires a 64x64 pixel array with a spatial resolution of around 200 microns (with a 6 m focal length) and high energy resolution (< 2% at 60 keV). We are currently testing smaller arrays as a necessary first step towards this goal. In this presentation, we compare charge sharing and charge loss measurements between two devices that differ both electronically and geometrically. The first device consists of a 1-mm-thick piece of CdZnTe that is sputtered with a 4x4 array of pixels with a pixel pitch of 750 microns (inter-pixel gap is 100 microns). The signal is read out using discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe that is sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap is 50 microns). Instead of using discrete preamplifiers, the crystal is bonded to an ASIC that provides all of the front-end electronics to each of the 256 pixels. Further, we compare the measured results with simulated results and discuss to what degree the bias voltage (i.e. the electric field) and hence the drift and diffusion coefficients affect our measurements.

  19. Automatic localization of vertebrae based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie

    2015-03-01

    Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as the landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. Then the output feature vector from the max-pooling layer is fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.
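
    As a hedged PyTorch sketch of an architecture with the shape described above (two convolutional layers each followed by max-pooling, then an MLP with one hidden layer), the snippet below fills in patch size, channel counts, and kernel sizes with illustrative guesses; those specifics are not given in the abstract and are assumptions.

```python
import torch
import torch.nn as nn

class VertebraNet(nn.Module):
    """Two conv + max-pool stages followed by a one-hidden-layer MLP.

    Mirrors the architecture shape described in the abstract; the 32x32
    patch size, channel counts, and kernel sizes are illustrative guesses.
    """
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(          # MLP with one hidden layer
            nn.Flatten(),
            nn.Linear(32 * 5 * 5, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                          # x: (batch, 1, 32, 32)
        return self.classifier(self.features(x))

# usage: logits = VertebraNet()(torch.randn(4, 1, 32, 32))
```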

  20. A model of traffic signs recognition with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors are challenging for automated traffic sign recognition algorithms. Deep learning has recently provided a new way to solve this kind of problem. A deep network can automatically learn features from a large number of data samples and obtain excellent recognition performance. We therefore approach the task of traffic sign recognition as a general vision problem, with few assumptions related to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognizing the traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully-connected layer, and an output layer. To validate the proposed model, experiments are implemented using the public dataset of the China competition on fuzzy image processing. Experimental results show that the proposed model produces a recognition accuracy of 99.01% on the training dataset and achieves 92% in the preliminary contest, placing among the top four entries.

  1. Fine-grained representation learning in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Luo, Chang; Wang, Jie

    2016-03-01

    Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law can guide CAEs to extract better fine-grained features and perform better in the multiclass classification task. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representation in other convolutional neural networks.

  2. Fast convolution quadrature for the wave equation in three dimensions

    NASA Astrophysics Data System (ADS)

    Banjai, L.; Kachanovska, M.

    2014-12-01

    This work addresses the numerical solution of time-domain boundary integral equations arising from acoustic and electromagnetic scattering in three dimensions. The semidiscretization of the time-domain boundary integral equations by Runge-Kutta convolution quadrature leads to a lower triangular Toeplitz system of size N. This system can be solved recursively in almost linear time (O(N log^2 N)), but requires the construction of O(N) dense spatial discretizations of the single layer boundary operator for the Helmholtz equation. This work introduces an improvement of this algorithm that makes it possible to solve the scattering problem in almost linear time. The new approach is based on two main ingredients: near-field reuse and the application of data-sparse techniques. Exponential decay of the Runge-Kutta convolution weights w_n^h(d) outside a neighborhood of d ≈ nh (where h is the time step) makes it possible to avoid constructing the near-field (i.e. singular and near-singular integrals) for most of the discretizations of the single layer boundary operators (near-field reuse). The far-field of these matrices is compressed with the help of data-sparse techniques, namely H-matrices and the high-frequency fast multipole method. Numerical experiments indicate the efficiency of the proposed approach compared to the conventional Runge-Kutta convolution quadrature algorithm.
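
    A toy scalar illustration (not the paper's algorithm) of why the semidiscrete convolution system is lower triangular Toeplitz and can be solved by marching forward in time. The weights here are a hypothetical exponentially decaying sequence standing in for true convolution quadrature weights.

    ```python
    # Toy: a discrete convolution sum_{j<=n} w_{n-j} x_j = b_n is a lower-
    # triangular Toeplitz system, solvable by forward recursion in time.
    import numpy as np

    N = 64
    h = 0.1
    # Placeholder weights; real CQ weights come from the Laplace-domain kernel
    # and the Runge-Kutta method, and decay away from d ~ n*h.
    w = np.exp(-np.arange(N) * h) * h
    b = np.sin(np.linspace(0.0, 3.0, N))            # right-hand side samples

    x = np.zeros(N)
    for n in range(N):
        # subtract the "history" contribution, then solve the diagonal entry
        hist = np.dot(w[1:n + 1][::-1], x[:n]) if n > 0 else 0.0
        x[n] = (b[n] - hist) / w[0]

    # Verify against a dense lower-triangular Toeplitz solve
    L = np.array([[w[i - j] if i >= j else 0.0 for j in range(N)]
                  for i in range(N)])
    print(np.allclose(L @ x, b))                    # True
    ```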

  3. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

    The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. Altogether, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  4. Deep Convolutional Neural Network for Inverse Problems in Imaging

    NASA Astrophysics Data System (ADS)

    Jin, Kyong Hwan; McCann, Michael T.; Froustey, Emmanuel; Unser, Michael

    2017-09-01

    In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator (H*H, the adjoint of H times H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on GPU.
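
    A minimal sketch of the "direct inversion followed by a CNN with residual learning" idea. The network below takes an artifact-laden direct reconstruction (e.g. a filtered back-projection of sparse-view data) and predicts the artifacts to subtract; its depth and channel widths are illustrative and do not reproduce the paper's multiresolution, U-Net-style architecture.

    ```python
    # Sketch: residual CNN that removes artifacts from a direct reconstruction.
    import torch
    import torch.nn as nn

    class ResidualArtifactCNN(nn.Module):
        def __init__(self, channels=32):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, 1, 3, padding=1),
            )

        def forward(self, direct_recon):
            # residual learning: predict the artifacts and subtract them
            return direct_recon - self.body(direct_recon)

    net = ResidualArtifactCNN()
    fbp = torch.randn(1, 1, 128, 128)     # stand-in for a sparse-view FBP image
    clean = net(fbp)
    print(clean.shape)                     # torch.Size([1, 1, 128, 128])
    ```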

  5. Calcium transport in the rabbit superficial proximal convoluted tubule

    SciTech Connect

    Ng, R.C.; Rouse, D.; Suki, W.N.

    1984-09-01

    Calcium transport was studied in isolated S2 segments of rabbit superficial proximal convoluted tubules. ⁴⁵Ca was added to the perfusate for measurement of lumen-to-bath flux (J_lb^Ca), to the bath for bath-to-lumen flux (J_bl^Ca), and to both perfusate and bath for net flux (J_net^Ca). In these studies, the perfusate consisted of an equilibrium solution that was designed to minimize water flux or electrochemical potential differences (PD). Under these conditions, J_lb^Ca (9.1 ± 1.0 peq/(mm·min)) was not different from J_bl^Ca (7.3 ± 1.3 peq/(mm·min)), and J_net^Ca was not different from zero, which suggests that calcium transport in the superficial proximal convoluted tubule is due primarily to passive transport. The efflux coefficient was 9.5 ± 1.2 × 10^-5 cm/s, which was not significantly different from the influx coefficient, 7.0 ± 1.3 × 10^-5 cm/s. When the PD was made positive or negative with use of different perfusates, net calcium absorption or secretion was demonstrated, respectively, which supports a major role for passive transport. These results indicate that in the superficial proximal convoluted tubule of the rabbit, passive driving forces are the major determinants of calcium transport.

  6. Robust hepatic vessel segmentation using multi deep convolution network

    NASA Astrophysics Data System (ADS)

    Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei

    2017-03-01

    Extraction of the blood vessels of an organ is a challenging task in medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by human experts. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolution neural networks that extract features from different planes of the CT data. The three networks share features at the first convolution layer but separately learn their own features in the second layer; all three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conducted experiments on 12 CT volumes, with training data randomly generated from 5 CT volumes and the remaining 7 volumes used for testing. Our network yields an average dice coefficient of 0.830, while a 3D deep convolution neural network yields around 0.7 and a multi-scale approach yields only 0.6.
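
    A PyTorch sketch of the multi-plane idea described above: three branches (e.g. three slice orientations of the CT volume) share the weights of the first convolution layer, learn separate second-layer features, and are joined at the top. All layer sizes are assumptions for illustration.

    ```python
    # Sketch: shared first conv layer, per-plane second layers, joint top layer.
    import torch
    import torch.nn as nn

    class ThreePlaneVesselNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.shared_conv1 = nn.Conv2d(1, 16, 3, padding=1)       # shared by all planes
            self.plane_conv2 = nn.ModuleList(
                [nn.Conv2d(16, 32, 3, padding=1) for _ in range(3)]  # per-plane features
            )
            self.head = nn.Sequential(                                # joint top layer
                nn.Conv2d(3 * 32, 32, 1), nn.ReLU(),
                nn.Conv2d(32, 1, 1),                                  # vessel probability map
            )

        def forward(self, planes):                                    # list of 3 tensors
            feats = []
            for p, conv2 in zip(planes, self.plane_conv2):
                f = torch.relu(self.shared_conv1(p))
                feats.append(torch.relu(conv2(f)))
            return torch.sigmoid(self.head(torch.cat(feats, dim=1)))

    net = ThreePlaneVesselNet()
    planes = [torch.randn(2, 1, 64, 64) for _ in range(3)]
    print(net(planes).shape)               # torch.Size([2, 1, 64, 64])
    ```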

  7. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    SciTech Connect

    Neylon, J.; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-10-15

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria

  8. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.

    PubMed

    Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A

    2014-10-01

    Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively

  9. Renormalization plus convolution method for atomic-scale modeling of electrical and thermal transport in nanowires.

    PubMed

    Wang, Chumin; Salazar, Fernando; Sánchez, Vicenta

    2008-12-01

    Based on the Kubo-Greenwood formula, the transport of electrons and phonons in nanowires is studied by means of a real-space renormalization plus convolution method. This method has the advantage of being efficient, without introducing additional approximations, and is capable of analyzing nanowires over a wide range of lengths, even with defects. The Born and tight-binding models are used to investigate the lattice thermal and electrical conductivities, respectively. The results show a quantized electrical dc conductance, which is attenuated when an oscillating electric field is applied. Effects of single and multiple planar defects, such as a quasi-periodic modulation, on the conductance of nanowires are also investigated. In the low temperature region, the lattice thermal conductance reveals a power-law temperature dependence, in agreement with experimental data.

  10. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, S.; Cole, D. M.; Hancock, B. R.; Smith, R. M.

    2008-01-01

    Electronic coupling effects such as Inter-Pixel Capacitance (IPC) affect the quantitative interpretation of image data from CMOS, hybrid visible and infrared imagers alike. Existing methods of characterizing IPC do not provide a map of the spatial variation of IPC over all pixels. We demonstrate a deterministic method that provides a direct quantitative map of the crosstalk across an imager. The approach requires only the ability to reset single pixels to an arbitrary voltage, different from the rest of the imager. No illumination source is required. Mapping IPC independently for each pixel is also made practical by the greater S/N ratio achievable for an electrical stimulus than for an optical stimulus, which is subject to both Poisson statistics and diffusion effects of photo-generated charge. The data we present illustrates a more complex picture of IPC in Teledyne HgCdTe and HyViSi focal plane arrays than is presently understood, including the presence of a newly discovered, long range IPC in the HyViSi FPA that extends tens of pixels in distance, likely stemming from extended field effects in the fully depleted substrate. The sensitivity of the measurement approach has been shown to be good enough to distinguish spatial structure in IPC of the order of 0.1%.

  11. Photothermal Multi-Pixel Imaging Microscope

    SciTech Connect

    Stolz, C J; Chinn, D J; Huber, R D; Weinzapfel, C L; Wu, Z

    2003-12-01

    Photothermal microscopy is a useful nondestructive tool for the identification of fluence-limiting defects in optical coatings. Traditional photothermal microscopes are single-pixel detection devices. Samples are scanned under the microscope to generate a defect map. For high-resolution images, scan times can be quite long (1 mm² per hour). Single-pixel detection has been used traditionally because of the ease in separating the laser-induced topographical change due to defect absorption from the defect surface topography. This is accomplished by using standard chopper and lock-in amplifier techniques to remove the DC signal. Multi-pixel photothermal microscopy is now possible by utilizing an optical lock-in technique. This eliminates the lock-in amplifier and enables the use of a CCD camera with an optical lock-in for each pixel. With this technique, the data acquisition speed can be increased by orders of magnitude depending on laser power, beam size, and pixel density.

  12. An estimation error bound for pixelated sensing

    NASA Astrophysics Data System (ADS)

    Kreucher, Chris; Bell, Kristine

    2016-05-01

    This paper considers the ubiquitous problem of estimating the state (e.g., position) of an object based on a series of noisy measurements. The standard approach is to formulate this problem as one of measuring the state (or a function of the state) corrupted by additive Gaussian noise. This model assumes both (i) the sensor provides a measurement of the true target (or, alternatively, a separate signal processing step has eliminated false alarms), and (ii) the error source in the measurement is accurately described by a Gaussian model. In reality, however, sensor measurements are often formed on a grid of pixels - e.g., Ground Moving Target Indication (GMTI) measurements are formed for a discrete set of (angle, range, velocity) voxels, and EO imagery is made on (x, y) grids. When a target is present in a pixel, therefore, the uncertainty is not Gaussian (instead it is a boxcar function) and unbiased estimation is not generally possible, as the location of the target within the pixel defines the bias of the estimator. It turns out that this small modification to the measurement model makes traditional bounding approaches inapplicable. This paper discusses pixelated sensing in more detail and derives the minimum mean squared error (MMSE) bound for estimation in the pixelated scenario. We then use this error calculation to investigate the utility of using non-thresholded measurements.
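
    A toy Monte Carlo illustrating the boxcar measurement model described above: the sensor only reports which pixel contains the target, so the within-pixel position is uniformly distributed and the pixel-center estimator has an error variance of w²/12 for pixel width w. This is a sketch of the measurement model only, not of the paper's MMSE bound.

    ```python
    # Toy: pixelated (boxcar) measurement of a continuous position.
    import numpy as np

    rng = np.random.default_rng(0)
    w = 1.0                                  # pixel width
    x_true = rng.uniform(0.0, 100.0, 200_000)
    pixel_index = np.floor(x_true / w)       # what a pixelated sensor reports
    x_hat = (pixel_index + 0.5) * w          # estimate: center of the reporting pixel

    err = x_hat - x_true
    print(err.var(), w**2 / 12)              # both ~0.0833: uniform (boxcar) error
    # The bias for a given true position depends on where it sits in the pixel,
    # which is why classical additive-Gaussian bounds do not directly apply.
    print(np.abs(err).max() <= w / 2)        # True: error never exceeds half a pixel
    ```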

  13. Some physical factors influencing the accuracy of convolution scatter correction in SPECT.

    PubMed

    Msaki, P; Axelsson, B; Larsson, S A

    1989-03-01

    Some important physical factors influencing the accuracy of convolution scatter correction techniques in SPECT are presented. In these techniques scatter correction in the projection relies on filter functions, Q_F, evaluated by Fourier transforms from measured scatter functions, Q_p, obtained from point spread functions. The spatial resolution has a marginal effect on Q_p. Thus a single Q_F can be used in the scatter correction of SPECT measurements acquired with the low energy high resolution or the low energy general purpose collimators and over a wide range of patient-collimator distances. However, it is necessary to examine the details of the shape of point spread functions during evaluation of Q_p. Q_F is completely described by scatter amplitude A_F, slope B_F and filter sum S_F. S_F is obtained by summation of the values of Q_F occupying a 31 × 31 pixel matrix. Regardless of differences in amplitude and slope, two filter functions are shown to be equivalent in terms of scatter correction ability whenever their sums are equal. On the basis of filter sum, the observed small influence of ellipticity on Q_F implies that an average function can be used in scatter correcting SPECT measurements conducted with elliptic objects. S_F is shown to increase with a decrease in photon energy and with an increase in window size. Thus, scatter correction by convolution may be severely hampered by photon statistics when SPECT imaging is done with low-energy photons. It is pointless to use unnecessarily large discriminator windows, in the hope of improving photon statistics, since most of the extra events acquired will eventually be subtracted during scatter correction. Regardless of the observed moderate reduction in S_F when a lung-equivalent material replaces a portion of a water phantom, further studies are needed to develop a technique that is capable of handling attenuation and scatter corrections simultaneously. Whenever superficial and inner radioactive distributions coexist the
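
    A schematic sketch of convolution-based scatter subtraction in projection space, with the filter sum S_F computed over a 31 × 31 kernel as in the text. The Gaussian scatter filter used here is a placeholder, not a measured Q_F.

    ```python
    # Sketch: estimate scatter by convolving the projection with Q_F, then subtract.
    import numpy as np
    from scipy.signal import fftconvolve

    # placeholder scatter filter Q_F on a 31 x 31 grid
    y, x = np.mgrid[-15:16, -15:16]
    QF = 0.001 * np.exp(-(x**2 + y**2) / (2 * 8.0**2))
    SF = QF.sum()                                    # filter sum S_F
    print("filter sum S_F =", SF)

    proj = np.zeros((128, 128))
    proj[64, 64] = 1000.0                            # toy measured projection (point source)
    scatter_est = fftconvolve(proj, QF, mode="same") # estimated scatter component
    corrected = proj - scatter_est                   # scatter-corrected projection
    print(corrected.sum() / proj.sum())              # fraction of counts retained ~ 1 - S_F
    ```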

  14. Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment

    EPA Science Inventory

    Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...

  15. Radiation tolerance of CMOS monolithic active pixel sensors with self-biased pixels

    NASA Astrophysics Data System (ADS)

    Deveaux, M.; Amar-Youcef, S.; Besson, A.; Claus, G.; Colledani, C.; Dorokhov, M.; Dritsa, C.; Dulinski, W.; Fröhlich, I.; Goffe, M.; Grandjean, D.; Heini, S.; Himmi, A.; Hu, C.; Jaaskelainen, K.; Müntz, C.; Shabetai, A.; Stroth, J.; Szelezniak, M.; Valin, I.; Winter, M.

    2010-12-01

    CMOS monolithic active pixel sensors (MAPS) are proposed as a technology for various vertex detectors in nuclear and particle physics. We discuss the mechanisms of ionizing radiation damage in MAPS hosting the dead-time-free, so-called self-biased pixel. Moreover, we introduce radiation-hardened sensor designs which allow the detectors to be operated after exposure to radiation doses above 1 Mrad.

  16. Torsional random walk statistics on lattices using convolution on crystallographic motion groups.

    PubMed Central

    Skliros, Aris; Chirikjian, Gregory S.

    2007-01-01

    This paper presents a new algorithm for generating the conformational statistics of lattice polymer models. The inputs to the algorithm are the distributions of poses (positions and orientations) of reference frames attached to sequentially proximal bonds in the chain as it undergoes all possible torsional motions in the lattice. If z denotes the number of discrete torsional motions allowable around each of the n bonds, our method generates the probability distribution in end-to-end pose corresponding to all of the z^n independent lattice conformations in O(n^(D+1)) arithmetic operations for lattices in D-dimensional space. This is achieved by dividing the chain into short segments and performing multiple generalized convolutions of the pose distribution functions for each segment. The convolution is performed with respect to the crystallographic space group for the lattice on which the chain is defined. The formulation is modified to include the effects of obstacles (excluded volumes), and to calculate the frequency of the occurrence of each conformation when the effects of pairwise conformational energy are included. In the latter case (which is for three-dimensional lattices only) the computational cost is O(z^4 n^4). This polynomial complexity is a vast improvement over the O(z^n) exponential complexity associated with the brute force enumeration of all conformations. The distribution of end-to-end distances and average radius of gyration are calculated easily once the pose distribution for the full chain is found. The method is demonstrated with square, hexagonal, cubic and tetrahedral lattices. PMID:17898862
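
    A greatly simplified 1D analogue of the convolution idea: the end-to-end distribution of an n-step lattice walk is obtained by repeatedly convolving the single-step distribution, at polynomial cost, instead of enumerating all z^n conformations. The full method works with pose distributions on crystallographic motion groups, which this toy does not attempt.

    ```python
    # Toy: end-to-end distribution of a 1D lattice walk via repeated convolution.
    import numpy as np

    n_steps = 20
    step = np.array([0.5, 0.0, 0.5])          # one step: -1 or +1 with equal probability

    dist = np.array([1.0])                    # start at the origin with probability 1
    for _ in range(n_steps):
        dist = np.convolve(dist, step)        # one "segment" convolution

    positions = np.arange(-n_steps, n_steps + 1)
    print(dist.sum())                         # 1.0 (a proper probability distribution)
    print(positions[np.argmax(dist)])         # 0: the most probable end-to-end offset
    # Cost grows polynomially with n, versus 2**n for brute-force enumeration.
    ```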

  17. Adaptive Multi-Objective Sub-Pixel Mapping Framework Based on Memetic Algorithm for Hyperspectral Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Y.; Zhang, L.

    2012-07-01

    The sub-pixel mapping technique can specify the location of each class within a pixel based on the assumption of spatial dependence. Traditional sub-pixel mapping algorithms only consider spatial dependence at the pixel level; the spatial dependence of each sub-pixel is ignored and sub-pixel spatial relations are lost. In this paper, a novel multi-objective sub-pixel mapping framework based on a memetic algorithm, namely MSMF, is proposed. In MSMF, sub-pixel mapping is transformed into a multi-objective optimization problem that maximizes the spatial dependence index (SDI) and Moran's I simultaneously. A memetic algorithm, which combines global search strategies with local search heuristics, is utilized to solve the multi-objective problem. In this framework, the sub-pixel mapping problem can be solved using different evolutionary algorithms and local algorithms. In this paper, a memetic algorithm based on the clonal selection algorithm (CSA) and random swapping is designed as an example and applied in the proposed MSMF. In MSMF, CSA inherits the biological properties of human immune systems, i.e. cloning, mutation, and memory, to search the possible sub-pixel mapping solutions in the global space. After the exploration based on CSA, local search based on random swapping is employed to dynamically decide which neighbourhood should be selected to stress exploitation in each generation. In addition, a solution set is used in MSMF to hold and update the obtained non-dominated solutions for the multi-objective problem. Experimental results demonstrate that the proposed approach outperforms traditional sub-pixel mapping algorithms and hence provides an effective option for sub-pixel mapping of hyperspectral remote sensing imagery.
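
    A toy version of the local-search half of such a framework: given assumed class fractions for one coarse pixel, its sub-pixels are rearranged by random swapping so that a simple spatial-dependence score (number of equal neighbours) is maximised. The global clonal-selection stage and the multi-objective handling of SDI and Moran's I are omitted here.

    ```python
    # Toy random-swapping local search for one coarse pixel's sub-pixel map.
    import numpy as np

    rng = np.random.default_rng(1)
    S = 6                                     # 6 x 6 sub-pixels per coarse pixel
    frac = {0: 20, 1: 16}                     # assumed sub-pixel counts per class
    grid = rng.permutation([c for c, n in frac.items() for _ in range(n)]).reshape(S, S)

    def dependence(g):
        # count agreements with right and down neighbours
        return np.sum(g[:, :-1] == g[:, 1:]) + np.sum(g[:-1, :] == g[1:, :])

    for _ in range(5000):                     # random-swapping local search
        (i1, j1), (i2, j2) = rng.integers(0, S, (2, 2))
        if grid[i1, j1] == grid[i2, j2]:
            continue
        before = dependence(grid)
        grid[i1, j1], grid[i2, j2] = grid[i2, j2], grid[i1, j1]
        if dependence(grid) < before:         # keep only non-worsening swaps
            grid[i1, j1], grid[i2, j2] = grid[i2, j2], grid[i1, j1]

    print(dependence(grid))                   # spatial-dependence score after search
    print(grid)                               # sub-pixel class map for this pixel
    ```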

  18. Active Pixel Sensors: Are CCD's Dinosaurs?

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.

    1993-01-01

    Charge-coupled devices (CCD's) are presently the technology of choice for most imaging applications. In the 23 years since their invention in 1970, they have evolved to a sophisticated level of performance. However, as with all technologies, we can be certain that they will be supplanted someday. In this paper, the Active Pixel Sensor (APS) technology is explored as a possible successor to the CCD. An active pixel is defined as a detector array technology that has at least one active transistor within the pixel unit cell. The APS eliminates the need for nearly perfect charge transfer -- the Achilles' heel of CCDs. This perfect charge transfer makes CCD's radiation 'soft,' difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response.

  19. Power Studies for the CMS Pixel Tracker

    SciTech Connect

    Todri, A.; Turqueti, M.; Rivera, R.; Kwan, S.; /Fermilab

    2009-01-01

    The Electronic Systems Engineering Department of the Computing Division at the Fermi National Accelerator Laboratory is carrying out R&D investigations for the upgrade of the power distribution system of the Compact Muon Solenoid (CMS) Pixel Tracker at the Large Hadron Collider (LHC). Among the goals of this effort is that of analyzing the feasibility of alternative powering schemes for the forward tracker, including DC to DC voltage conversion techniques using commercially available and custom switching regulator circuits. Tests of these approaches are performed using the PSI46 pixel readout chip currently in use at the CMS Tracker. Performance measures of the detector electronics will include pixel noise and threshold dispersion results. Issues related to susceptibility to switching noise will be studied and presented. In this paper, we describe the current power distribution network of the CMS Tracker, study the implications of the proposed upgrade with DC-DC converters powering scheme and perform noise susceptibility analysis.

  20. Vivid, full-color aluminum plasmonic pixels

    PubMed Central

    Olson, Jana; Manjavacas, Alejandro; Liu, Lifei; Chang, Wei-Shun; Foerster, Benjamin; King, Nicholas S.; Knight, Mark W.; Nordlander, Peter; Halas, Naomi J.; Link, Stephan

    2014-01-01

    Aluminum is abundant, low in cost, compatible with complementary metal-oxide semiconductor manufacturing methods, and capable of supporting tunable plasmon resonance structures that span the entire visible spectrum. However, the use of Al for color displays has been limited by its intrinsically broad spectral features. Here we show that vivid, highly polarized, and broadly tunable color pixels can be produced from periodic patterns of oriented Al nanorods. Whereas the nanorod longitudinal plasmon resonance is largely responsible for pixel color, far-field diffractive coupling is used to narrow the plasmon linewidth, enabling monochromatic coloration and significantly enhancing the far-field scattering intensity of the individual nanorod elements. The bright coloration can be observed with p-polarized white light excitation, consistent with the use of this approach in display devices. The resulting color pixels are constructed with a simple design, are compatible with scalable fabrication methods, and provide contrast ratios exceeding 100:1. PMID:25225385

  1. Towards spark-proof gaseous pixel detectors

    NASA Astrophysics Data System (ADS)

    Tsigaridas, S.; Beuzekom, M. v.; Chan, H. W.; Graaf, H. v. d.; Hartjes, F.; Heijhoff, K.; Hessey, N. P.; Prodanovic, V.

    2016-11-01

    The micro-pattern gaseous pixel detector is a promising technology for imaging and particle tracking applications. It combines a gas layer acting as the detection medium with a CMOS pixelated readout chip. To prevent discharges, we deposit a protection layer on the chip and then integrate a micromegas-like amplification structure on top. With this technology we are able to reconstruct 3D track segments of particles passing through the gas thanks to the functionality of the chip. We have turned a Timepix3 chip into a gaseous pixel detector and tested it at the SPS at CERN. The preliminary results are promising and within expectations. However, the spark protection layer needs further improvement to make reliable detectors. For this reason, we have created a setup for spark testing. We present the first results obtained from the lab measurements along with preliminary results from the test beam.

  2. Pixel lensing observations towards globular clusters

    NASA Astrophysics Data System (ADS)

    Cardone, V. F.; Cantiello, M.

    2003-07-01

    It has been suggested that a monitoring program employing the pixel lensing method to search for microlensing events towards galactic globular clusters may increase the statistics and discriminate among different halo models. Stimulated by this proposal, we evaluate an upper limit to the pixel lensing event rate for such a survey. Four different dark halo models have been considered, changing both the flattening and the slope of the mass density profile. The lens mass function has been modelled as a homogeneous power law for μ in (μ_l, μ_u), and both the mass limits and the slope of the mass function have been varied to investigate their effect on the rate. The target globular clusters have been selected in order to minimize the disk contribution to the event rate. We find that a pixel lensing survey towards globular clusters is unable to discriminate among different halo models, since the number of detectable events is too small to allow any reliable statistical analysis.

  3. SVGA AMOLED with world's highest pixel pitch

    NASA Astrophysics Data System (ADS)

    Prache, Olivier; Wacyk, Ihor

    2006-05-01

    We present the design and early evaluation results of the world's highest pixel pitch, full-color 800×3×600-pixel, active matrix organic light emitting diode (AMOLED) color microdisplay for consumer and environmentally demanding applications. The design aimed at improving small-area uniformity as well as reducing the pixel size while expanding the functionality found in existing eMagin Corporation microdisplay products, without incurring any power consumption degradation compared to existing OLED microdisplays produced by eMagin. The initial results of the first silicon prototype presented here demonstrate compliance with all major objectives as well as the validation of a new adaptive gamma correction technique that can operate automatically over temperature.

  4. GALAPAGOS: from pixels to parameters

    NASA Astrophysics Data System (ADS)

    Barden, Marco; Häußler, Boris; Peng, Chien Y.; McIntosh, Daniel H.; Guo, Yicheng

    2012-05-01

    To automate source detection, two-dimensional light profile Sérsic modelling and catalogue compilation in large survey applications, we introduce a new code Galaxy Analysis over Large Areas: Parameter Assessment by GALFITting Objects from SEXTRACTOR (GALAPAGOS). Based on a single set-up, GALAPAGOS can process a complete set of survey images. It detects sources in the data, estimates a local sky background, cuts postage stamp images for all sources, prepares object masks, performs Sérsic fitting including neighbours and compiles all objects in a final output catalogue. For the initial source detection, GALAPAGOS applies SEXTRACTOR, while GALFIT is incorporated for modelling Sérsic profiles. It measures the background sky involved in the Sérsic fitting by means of a flux growth curve. GALAPAGOS determines postage stamp sizes based on SEXTRACTOR shape parameters. In order to obtain precise model parameters, GALAPAGOS incorporates a complex sorting mechanism and makes use of modern CPU's multiplexing capabilities. It combines SEXTRACTOR and GALFIT data in a single output table. When incorporating information from overlapping tiles, GALAPAGOS automatically removes multiple entries from identical sources. GALAPAGOS is programmed in the Interactive Data Language (IDL). We test the stability and the ability to properly recover structural parameters extensively with artificial image simulations. Moreover, we apply GALAPAGOS successfully to the STAGES data set. For one-orbit Hubble Space Telescope data, a single 2.2-GHz CPU processes about 1000 primary sources per 24 h. Note that GALAPAGOS results depend critically on the user-defined parameter set-up. This paper provides useful guidelines to help the user make sensible choices.

  5. Modulation transfer function of a trapezoidal pixel array detector

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Guo, Rongli; Ni, Jinping; Dong, Tao

    2016-01-01

    The modulation transfer function (MTF) is the tool most commonly used for quantifying the performance of an electro-optical imaging system. Recently, trapezoid-shaped pixels were designed and used in a retina-like sensor in place of rectangular-shaped pixels. The MTF of a detector with a trapezoidal pixel array is determined according to its definition. Additionally, the MTFs of detectors with differently shaped pixels, but the same pixel areas, are compared. The results show that the MTF values of the trapezoidal pixel array detector are obviously larger than those of rectangular and triangular pixel array detectors at the same frequencies.
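
    A numerical illustration of the definition used above: the pixel MTF is the normalised magnitude of the Fourier transform of the pixel's sensitive aperture. The 1D trapezoidal aperture here (flat top 0.5, base 1.5, in units of the sampling pitch) is an arbitrary example, not the retina-like sensor geometry of the paper; since it is the convolution of two rectangles, its MTF is a product of two sinc functions, which the code checks.

    ```python
    # Numeric vs analytic MTF of a 1D trapezoidal pixel aperture.
    import numpy as np

    dx = 1e-3
    x = np.arange(-4.0, 4.0, dx)
    trap = np.clip(1.5 - 2.0 * np.abs(x), 0.0, 1.0)     # trapezoid: top 0.5, base 1.5

    spec = np.abs(np.fft.rfft(trap))
    mtf_numeric = spec / spec[0]                         # normalised to 1 at f = 0
    freqs = np.fft.rfftfreq(x.size, dx)                  # cycles per unit length

    # analytic MTF of the same trapezoid: product of the two rect-aperture sincs
    mtf_analytic = np.abs(np.sinc(freqs * 1.0) * np.sinc(freqs * 0.5))

    k = np.searchsorted(freqs, 0.5)                      # check at 0.5 cycles / pitch
    print(mtf_numeric[k], mtf_analytic[k])               # should agree closely
    ```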

  6. Detector Sampling of Optical/IR Spectra: How Many Pixels per FWHM?

    NASA Astrophysics Data System (ADS)

    Robertson, J. Gordon

    2017-08-01

    Most optical and IR spectra are now acquired using detectors with finite-width pixels in a square array. Each pixel records the received intensity integrated over its own area, and pixels are separated by the array pitch. This paper examines the effects of such pixellation, using computed simulations to illustrate the effects which most concern the astronomer end-user. It is shown that coarse sampling increases the random noise errors in wavelength by typically 10-20% at 2 pixels per Full Width at Half Maximum, but with wide variation depending on the functional form of the instrumental Line Spread Function (i.e. the instrumental response to a monochromatic input) and on the pixel phase. If line widths are determined, they are even more strongly affected at low sampling frequencies. However, the noise in fitted peak amplitudes is minimally affected by pixellation, with increases less than about 5%. Pixellation has a substantial but complex effect on the ability to see a relative minimum between two closely spaced peaks (or a relative maximum between two absorption lines). The consistent scale of resolving power presented by Robertson to overcome the inadequacy of the Full Width at Half Maximum as a resolution measure is here extended to cover pixellated spectra. The systematic bias errors in wavelength introduced by pixellation, independent of signal/noise ratio, are examined. While they may be negligible for smooth, well-sampled, symmetric Line Spread Functions, they are very sensitive to asymmetry and high spatial frequency sub-structure. The Modulation Transfer Function for sampled data is shown to give a useful indication of the extent of improperly sampled signal in a Line Spread Function. The common maxim that 2 pixels per Full Width at Half Maximum is the Nyquist limit is incorrect and most Line Spread Functions will exhibit some aliasing at this sample frequency. While 2 pixels per Full Width at Half Maximum is nevertheless often an acceptable minimum for

  7. Commissioning of the ATLAS pixel detector

    SciTech Connect

    ATLAS Collaboration; Golling, Tobias

    2008-09-01

    The ATLAS pixel detector is a high precision silicon tracking device located closest to the LHC interaction point. It belongs to the first generation of its kind in a hadron collider experiment. It will provide crucial pattern recognition information and will largely determine the ability of ATLAS to precisely track particle trajectories and find secondary vertices. It was the last detector to be installed in ATLAS in June 2007, has been fully connected and tested in-situ during spring and summer 2008, and is ready for the imminent LHC turn-on. The highlights of the past and future commissioning activities of the ATLAS pixel system are presented.

  8. Physics performance of the ATLAS pixel detector

    NASA Astrophysics Data System (ADS)

    Tsuno, S.

    2017-01-01

    In preparation for LHC Run-2 the ATLAS detector introduced a new pixel detector, the Insertable B-Layer (IBL). This detector is located between the beampipe and what was the innermost pixel layer. The tracking and vertex reconstruction are significantly improved and good performance is expected in high level objects such as b-quark jet tagging. This, in turn, leads to better physics results. This note summarizes the impact of the IBL detector on physics results, especially focusing on the analyses using b-quark jets throughout the 2016 summer physics program.

  9. Super-resolution reconstruction algorithm based on adaptive convolution kernel size selection

    NASA Astrophysics Data System (ADS)

    Gao, Hang; Chen, Qian; Sui, Xiubao; Zeng, Junjie; Zhao, Yao

    2016-09-01

    Restricted by detector technology and the optical diffraction limit, the spatial resolution of infrared imaging systems is difficult to improve significantly. Super-Resolution (SR) reconstruction algorithms are an effective way to solve this problem. Among them, the SR algorithm based on multichannel blind deconvolution (MBD) estimates the convolution kernel only from low-resolution observation images, according to appropriate regularization constraints introduced by a priori assumptions, to realize high-resolution image restoration. The algorithm has been shown to be effective when the channels are coprime. In this paper, we use the significant edges to estimate the convolution kernel and introduce an adaptive convolution kernel size selection mechanism to account for the uncertainty of the convolution kernel size in MBD processing. To reduce the interference of noise, we amend the convolution kernel in an iterative process and finally restore a clear image. Experimental results show that the algorithm meets the convergence requirement of the convolution kernel estimation.

  10. Of FFT-based convolutions and correlations, with application to solving Poisson's equation in an open rectangular pipe

    SciTech Connect

    Ryne, Robert D.

    2011-11-07

    A new method is presented for solving Poisson's equation inside an open-ended rectangular pipe. The method uses Fast Fourier Transforms (FFTs) to perform mixed convolutions and correlations of the charge density with the Green function. Descriptions are provided for algorithms based on the ordinary Green function and for an integrated Green function (IGF). Due to its similarity to the widely used Hockney algorithm for solving Poisson's equation in free space, this capability can be easily implemented in many existing particle-in-cell beam dynamics codes.
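
    A minimal Hockney-style FFT convolution of a charge density with a free-space Green function, of the kind the above algorithms build on. This sketch uses the 2D logarithmic kernel and zero padding to avoid periodic wrap-around; the paper's integrated Green function and open-pipe boundary handling are not reproduced.

    ```python
    # Sketch: free-space convolution of a charge density with a Green function
    # using zero padding on a doubled grid (Hockney-style).
    import numpy as np

    n, h = 64, 1.0 / 64                          # grid size and spacing
    rho = np.zeros((n, n))
    rho[n // 2, n // 2] = 1.0 / h**2             # unit point charge

    # Green function sampled on the doubled (zero-padded) grid
    idx = np.arange(2 * n)
    idx = np.minimum(idx, 2 * n - idx)           # |offset| on the padded grid
    X, Y = np.meshgrid(idx * h, idx * h, indexing="ij")
    r = np.hypot(X, Y)
    G = np.where(r > 0, -np.log(np.where(r > 0, r, 1.0)) / (2 * np.pi), 0.0)

    rho_pad = np.zeros((2 * n, 2 * n))
    rho_pad[:n, :n] = rho
    phi = np.real(np.fft.ifft2(np.fft.fft2(rho_pad) * np.fft.fft2(G)))[:n, :n] * h**2

    # potential 8 cells from the charge should match the analytic -log(r)/(2*pi)
    print(phi[n // 2, n // 2 + 8], -np.log(8 * h) / (2 * np.pi))
    ```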

  11. HST/WFC3 Characteristics: gain, post-flash stability, UVIS low-sensitivity pixels, persistence, IR flats and bad pixel table

    NASA Astrophysics Data System (ADS)

    Gunning, Heather C.; Baggett, Sylvia; Gosmeyer, Catherine M.; Long, Knox S.; Ryan, Russell E.; MacKenty, John W.; Durbin, Meredith

    2015-08-01

    The Wide Field Camera 3 (WFC3) is a fourth-generation imaging instrument on the Hubble Space Telescope (HST). Installed in May 2009, WFC3 is comprised of two observational channels covering wavelengths from UV/Visible (UVIS) to infrared (IR); both have been performing well on-orbit. We discuss the gain stability of both WFC3 channel detectors from ground testing through present day. For UVIS, we detail a low-sensitivity pixel population that evolves during the time between anneals, but is largely reset by the annealing procedure. We characterize the post-flash LED lamp stability, used and recommended to mitigate CTE effects for observations with backgrounds below 12 e-/pixel. We present mitigation options for IR persistence during and after observations. Finally, we give an overview on the construction of the IR flats and provide updates on the bad pixel table.

  12. Per-Pixel, Dual-Counter Scheme for Optical Communications

    NASA Technical Reports Server (NTRS)

    Farr, William H.; Bimbaum, Kevin M.; Quirk, Kevin J.; Sburlan, Suzana; Sahasrabudhe, Adit

    2013-01-01

    Free space optical communications links from deep space are projected to fulfill future NASA communication requirements for 2020 and beyond. Accurate laser-beam pointing is required to achieve high data rates at low power levels.This innovation is a per-pixel processing scheme using a pair of three-state digital counters to implement acquisition and tracking of a dim laser beacon transmitted from Earth for pointing control of an interplanetary optical communications system using a focal plane array of single sensitive detectors. It shows how to implement dim beacon acquisition and tracking for an interplanetary optical transceiver with a method that is suitable for both achieving theoretical performance, as well as supporting additional functions of high data rate forward links and precision spacecraft ranging.

  13. Nonlinearity and pixel shifting effects in HXRG infrared detectors

    NASA Astrophysics Data System (ADS)

    Plazas, A. A.; Shapiro, C.; Smith, R.; Rhodes, J.; Huff, E.

    2017-04-01

    We study the nonlinearity (NL) in the conversion from charge to voltage in infrared detectors (HXRG) for use in precision astronomy. We present laboratory measurements of the NL function of a H2RG detector and discuss the accuracy to which it would need to be calibrated in future space missions to perform cosmological measurements through the weak gravitational lensing technique. In addition, we present an analysis of archival data from the infrared H1RG detector of the Wide Field Camera 3 in the Hubble Space Telescope that provides evidence consistent with the existence of a sensor effect analogous to the "brighter-fatter" effect found in Charge-Coupled Devices. We propose a model in which this effect could be understood as shifts in the effective pixel boundaries, and discuss prospects of laboratory measurements to fully characterize this effect.

  14. Deep-learning convolution neural network for computer-aided detection of microcalcifications in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Cha, Kenny; Helvie, Mark A.

    2016-03-01

    A deep learning convolution neural network (DLCNN) was designed to differentiate microcalcification candidates detected during the prescreening stage as true calcifications or false positives in a computer-aided detection (CAD) system for clustered microcalcifications. The microcalcification candidates were extracted from the planar projection image generated from the digital breast tomosynthesis volume reconstructed by a multiscale bilateral filtering regularized simultaneous algebraic reconstruction technique. For training and testing of the DLCNN, true microcalcifications were manually labeled for the data sets and false positives were obtained from the candidate objects identified by the CAD system at prescreening after exclusion of the true microcalcifications. The DLCNN architecture was selected by varying the number of filters, the filter kernel sizes and a gradient computation parameter in the convolution layers, resulting in a parameter space of 216 combinations. The exhaustive grid search method was used to select an optimal architecture within the parameter space studied, guided by the area under the receiver operating characteristic curve (AUC) as a figure-of-merit. The effects of varying different categories of the parameter space were analyzed. The selected DLCNN was compared with our previously designed CNN architecture for the test set. The AUCs of the CNN and DLCNN were 0.89 and 0.93, respectively. The improvement was statistically significant (p < 0.05).
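
    A schematic sketch of an exhaustive grid search over architecture choices with AUC as the figure of merit. The parameter names and the scoring stub are placeholders (and the toy grid is smaller than the 216-combination space in the study); in practice each combination corresponds to training and evaluating a network.

    ```python
    # Sketch: exhaustive grid search guided by a (stubbed) AUC score.
    import itertools
    import random

    grid = {
        "n_filters": [16, 32, 64],
        "kernel_size": [3, 5, 7, 9],
        "grad_param": [0.1, 0.5, 1.0],        # hypothetical gradient-computation setting
    }

    def train_and_score(params):
        # Placeholder for "train the network with these parameters and return
        # the test AUC"; here it returns a deterministic pseudo-random value.
        random.seed(repr(sorted(params.items())))
        return 0.80 + 0.15 * random.random()

    best = max(
        (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
        key=train_and_score,
    )
    print("selected architecture:", best, "AUC =", round(train_and_score(best), 3))
    ```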

  15. From hybrid to CMOS pixels ... a possibility for LHC's pixel future?

    NASA Astrophysics Data System (ADS)

    Wermes, N.

    2015-12-01

    Hybrid pixel detectors were invented for the LHC to make tracking and vertexing possible at all in the LHC's radiation-intense environment. The LHC pixel detectors have meanwhile fulfilled their promises very successfully, and R&D for the planned HL-LHC upgrade is in full swing, targeting even higher ionising doses and non-ionising fluences. In terms of rate and radiation tolerance, hybrid pixels are unrivaled. But they have disadvantages as well, most notably material thickness, production complexity, and cost. Meanwhile, active pixel sensors (DEPFET, MAPS) have also become real pixel detectors, but they would not withstand the rates and radiation expected at the HL-LHC. New MAPS developments, so-called DMAPS (depleted MAPS), which are full CMOS pixel structures with charge collection in a depleted region, have come into the R&D focus for pixels at high rate/radiation levels. This goal can perhaps be realised by exploiting HV technologies, high-ohmic substrates and/or SOI-based technologies. The paper covers the main ideas and some encouraging results from prototyping R&D, not hiding the difficulties.

  16. Improved iterative image reconstruction using variable projection binning and abbreviated convolution.

    PubMed

    Schmidlin, P

    1994-09-01

    Noise propagation in iterative reconstruction can be reduced by exact data projection. This can be done by area-weighted projection using the convolution method. Large arrays have to be convolved in order to achieve satisfactory image quality. Two procedures are described which improve the convolution method used so far. Variable binning helps to reduce the size of the convolution arrays without loss of image quality. Computation time is further reduced by abbreviated convolution. The effects of the procedures are illustrated by means of phantom measurements.

  17. Operational and convolution properties of three-dimensional Fourier transforms in spherical polar coordinates.

    PubMed

    Baddour, Natalie

    2010-10-01

    For functions that are best described with spherical coordinates, the three-dimensional Fourier transform can be written in spherical coordinates as a combination of spherical Hankel transforms and spherical harmonic series. However, to be as useful as its Cartesian counterpart, a spherical version of the Fourier operational toolset is required for the standard operations of shift, multiplication, convolution, etc. This paper derives the spherical version of the standard Fourier operation toolset. In particular, convolution in various forms is discussed in detail as this has important consequences for filtering. It is shown that standard multiplication and convolution rules do apply as long as the correct definition of convolution is applied.

  18. SU-E-T-508: A Novel Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm.

    PubMed

    Jacques, R; McNutt, T

    2012-06-01

    We developed a better method of accounting for the effects of heterogeneity in convolution algorithms. We integrated this method into our GPU-accelerated, multi-energetic convolution/superposition (C/S) implementation. In doing so, we have created a new dose algorithm: heterogeneity compensated superposition (HCS). Convolution in the spherical density-scaled distance space, a.k.a. C/S, has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup than predicted by C/S. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to traditional C/S. We implemented the effective density function as a multivariate first-order recursive filter. We compared HCS against traditional C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. For the patient cases, we created custom routines capable of using the discrete material mappings used by Monte-Carlo. C/S normally considers each voxel to be a mixture of materials based on a piecewise-linear density look-up table. Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near Monte-Carlo results were achieved. HCS improved the mean Van Dyk error by 0.79 (% of Dmax or mm) on average for the patient volumes, reducing the mean error from 1.93%|mm to 1.14%|mm. We found a mean error difference of up to 0.30%|mm between linear and discrete material mappings. Very low densities (i.e. < 0.1 g/cm³) remained problematic, but may be solvable with a better filter function. We have developed a novel dose calculation algorithm based on the principles of C/S that better accounts for the electron disequilibrium caused by patient heterogeneity. This work was funded in part by the National Science

  19. Dynamic holography using pixelated light modulators.

    PubMed

    Zwick, Susanne; Haist, Tobias; Warber, Michael; Osten, Wolfgang

    2010-09-01

    Dynamic holography using spatial light modulators is a very flexible technique that offers various new applications compared to static holography. We give an overview on the technical background of dynamic holography focusing on pixelated spatial light modulators and their technical restrictions, and we present a selection of the numerous applications of dynamic holography.

  20. Spatially Locating FIA Plots from Pixel Values

    Treesearch

    Greg C. Liknes; Geoffrey R. Holden; Mark D. Nelson; Ronald E. McRoberts

    2005-01-01

    The USDA Forest Service Forest Inventory and Analysis (FIA) program is required to ensure the confidentiality of the geographic locations of plots. To accommodate user requests for data without releasing actual plot coordinates, FIA creates overlays of plot locations on various geospatial data, including satellite imagery. Methods for reporting pixel values associated...

  1. JPL CMOS Active Pixel Sensor Technology

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    This paper will present the JPL-developed complementary metal- oxide-semiconductor (CMOS) active pixel sensor (APS) technology. The CMOS APS has achieved performance comparable to charge coupled devices, yet features ultra low power operation, random access readout, on-chip timing and control, and on-chip analog to digital conversion. Previously published open literature will be reviewed.

  2. Pixel telescope test in STAR at RHIC

    NASA Astrophysics Data System (ADS)

    Sun, Xiangming; Szelezniak, Michal; Greiner, Leo; Matis, Howard; Vu, Chinh; Stezelberger, Thorsten; Wieman, Howard

    2007-10-01

    The STAR experiment at RHIC is designing a new inner vertex detector called the Heavy Flavor Tracker (HFT). The HFT's innermost two layers are called the PIXEL detector, which uses Monolithic Active Pixel Sensor (MAPS) technology. To test the MAPS technology, we constructed and tested a telescope. The telescope uses a stack of three MIMOSTAR2 chips. Each MIMOSTAR2 sensor, which was designed by IPHC, is an array of 132 × 128 pixels with a square pixel size of 30 μm. The readout of the telescope makes use of the ALICE DDL/SIU cards, which are compatible with the future STAR data acquisition system called DAQ1000. The telescope was first studied in a 1.2 GeV/c electron beam at LBNL's Advanced Light Source. Afterwards, the telescope was placed outside the STAR magnet, and then later inside it, 145 cm away from STAR's center. We describe this first test of MAPS technology in a collider environment, and report on the occupancy, particle flux, and performance of the telescope.

  3. Uncooled infrared detectors toward smaller pixel pitch with newly proposed pixel structure

    NASA Astrophysics Data System (ADS)

    Tohyama, Shigeru; Sasaki, Tokuhito; Endoh, Tsutomu; Sano, Masahiko; Kato, Koji; Kurashina, Seiji; Miyoshi, Masaru; Yamazaki, Takao; Ueno, Munetaka; Katayama, Haruyoshi; Imai, Tadashi

    2013-12-01

    An uncooled infrared (IR) focal plane array (FPA) with 23.5 μm pixel pitch has been successfully demonstrated and has found wide commercial applications in the areas of thermography, security cameras, and other applications. One of the key issues for uncooled IRFPA technology is to shrink the pixel pitch, because the pixel pitch determines the overall size of the FPA, which, in turn, determines the cost of IR camera products. This paper proposes an innovative pixel structure with the diaphragm and beams placed on different levels to realize an uncooled IRFPA with smaller pixel pitch (≤ 17 μm). The upper level consists of a diaphragm with VOx bolometer and IR absorber layers, while the lower level consists of the two beams, which are designed to be placed on the adjacent pixels. Test devices of this pixel design with 12, 15, and 17 μm pitch have been fabricated on the Si read-out integrated circuit (ROIC) of a quarter video graphics array (QVGA) (320×240) with 23.5 μm pitch. Their performances are nearly equal to those of the IRFPA with 23.5 μm pitch. For example, the noise equivalent temperature difference of the 12 μm pixel is 63.1 mK for F/1 optics with a thermal time constant of 14.5 ms. The proposed structure is therefore shown to be effective for the existing IRFPA with 23.5 μm pitch because of the improvements in IR sensitivity. Furthermore, the advanced pixel structure that has the beams composed of two levels is demonstrated to be realizable.

  4. Telemetry degradation due to a CW RFI induced carrier tracking error for the block IV receiving system with maximum likelihood convolution decoding

    NASA Technical Reports Server (NTRS)

    Sue, M. K.

    1981-01-01

    Models to characterize the behavior of the Deep Space Network (DSN) Receiving System in the presence of a radio frequency interference (RFI) are considered. A simple method to evaluate the telemetry degradation due to the presence of a CW RFI near the carrier frequency for the DSN Block 4 Receiving System using the maximum likelihood convolutional decoding assembly is presented. Analytical and experimental results are given.

  5. Finding strong lenses in CFHTLS using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg² of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  6. Gallium arsenide pixel detectors for medical imaging

    NASA Astrophysics Data System (ADS)

    Da Via, C.; Bates, R.; Bertolucci, E.; Bottigli, U.; Campbell, M.; Chesi, E.; Conti, M.; D'Auria, S.; DelPapa, C.; Fantacci, M. E.; Grossi, G.; Heijne, E.; Mancini, E.; Middelkamp, P.; Raine, C.; Russo, P.; O'Shea, V.; Scharfetter, L.; Smith, K.; Snoeys, W.; Stefanini, A.

    1997-08-01

    Gallium arsenide pixel detectors processed on a 200 μm Semi-Insulating (SI) Hitachi substrate were bump-bonded to the Omega3 electronics developed at CERN for high energy physics [1]. The pixel dimensions are 50 μm × 500 μm, for a total of 2048 cells and an active area of ~0.5 cm². Our aim is to use this system for medical imaging. We report the results obtained after irradiation of the detector with different X-ray sources on phantoms with different contrasts. The system showed good sensitivity to X-rays from 241Am (60 keV) and 109Cd (22.1 keV). It is also sensitive to β⁻ particles from 90Sr as well as from 32P, which is used as a tracer for autoradiography applications. The inherent high absorption efficiency of GaAs, combined with the self-triggering capability of the pixel readout system, considerably reduced the acquisition time compared with traditional systems based on silicon or emulsions. The present configuration is not optimised for X-ray imaging. Reducing the pixel dimensions to 200 μm × 200 μm, together with the integration of a counter in the pixel electronics, would make the detector competitive for applications such as mammography or dental radiology. For certain applications in biochemistry, such as DNA sequencing, where good spatial resolution is required only in one direction, the present setup should provide the best spatial resolution of any digital autoradiographic system available to date. DNA sequencing tests are now under way.

  7. SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU

    SciTech Connect

    Moriya, S; Sato, M; Tachibana, H

    2015-06-15

    Purpose: Calculation time is the trade-off for improving the accuracy of convolution dose calculation with finer calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on graphics processing units (GPUs). Methods: The calculation was performed on AMD Dual FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation was separated into TERMA and KERMA steps, and the dose deposited at each coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and computed in multiple threads. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid); the calculation speed on the GPU was compared with that on the CPU, and the accuracy of the percentage depth dose (PDD) was assessed. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 hours, respectively, so the calculation on the GPU was 4800 times faster than on the CPU. The PDD curve for the GPU perfectly matched that for the CPU. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in terms of time and may be more accurate in inhomogeneous regions. Intensity modulated arc therapy requires dose calculations for different gantry angles at many control points, so a coarser kernel spacing would be more practical if the calculation remains fast while keeping accuracy similar to that of a current treatment planning system.
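
    For orientation, here is a hedged CPU-side sketch of the convolution step described above (not the authors' GPU/Aparapi code): the dose is obtained by convolving the TERMA distribution with a spatially invariant energy-deposition kernel over the 150³ grid. The phantom contents and kernel shape below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_dose(terma, kernel):
    """Dose = TERMA convolved with an (assumed spatially invariant) kernel.
    The paper parallelises the per-voxel superposition loop on the GPU; here
    the same operation is expressed as one 3-D FFT-based convolution."""
    return fftconvolve(terma, kernel, mode='same')

# Illustrative 150^3 water phantom (2 mm grid) and a crude isotropic kernel.
terma = np.zeros((150, 150, 150))
terma[40:110, 60:90, 60:90] = 1.0
z, y, x = np.mgrid[-5:6, -5:6, -5:6]
kernel = np.exp(-np.hypot(np.hypot(x, y), z))
kernel /= kernel.sum()
dose = convolve_dose(terma, kernel)
```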

  8. Medical image fusion using the convolution of Meridian distributions.

    PubMed

    Agrawal, Mayank; Tsakalides, Panagiotis; Achim, Alin

    2010-01-01

    The aim of this paper is to introduce a novel non-Gaussian statistical model-based approach for medical image fusion based on the Meridian distribution. The paper also includes a new approach to estimate the parameters of generalized Cauchy distribution. The input images are first decomposed using the Dual-Tree Complex Wavelet Transform (DT-CWT) with the subband coefficients modelled as Meridian random variables. Then, the convolution of Meridian distributions is applied as a probabilistic prior to model the fused coefficients, and the weights used to combine the source images are optimised via Maximum Likelihood (ML) estimation. The superior performance of the proposed method is demonstrated using medical images.

  9. Faster GPU-based convolutional gridding via thread coarsening

    NASA Astrophysics Data System (ADS)

    Merry, B.

    2016-07-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.
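
    A hedged single-threaded sketch of convolutional gridding itself (not the GPU kernel from the paper): each visibility is weighted by a small convolution kernel and accumulated onto the uv grid. Thread coarsening, as used above, would assign several nearby visibilities or grid cells to one GPU thread so that kernel values and partial sums are reused from registers; the array shapes and kernel below are assumptions.

```python
import numpy as np

def grid_visibilities(uv, vis, kernel, n):
    """Accumulate each visibility, spread by `kernel`, onto an n x n complex grid."""
    grid = np.zeros((n, n), dtype=complex)
    half = kernel.shape[0] // 2
    for (u, v), w in zip(uv, vis):
        iu, iv = int(round(u)), int(round(v))     # nearest grid cell (simplified)
        grid[iv - half:iv + half + 1, iu - half:iu + half + 1] += w * kernel
    return grid

# Example: three visibilities gridded with a 7x7 Gaussian kernel.
k1d = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
kernel = np.outer(k1d, k1d)
kernel /= kernel.sum()
grid = grid_visibilities([(100.2, 64.7), (101.1, 65.3), (30.5, 90.0)],
                         [1 + 0j, 0.5 - 0.2j, 0.8 + 0.1j], kernel, n=256)
```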

  10. Convolutional neural networks for synthetic aperture radar classification

    NASA Astrophysics Data System (ADS)

    Profeta, Andrew; Rodriguez, Andres; Clouse, H. Scott

    2016-05-01

    For electro-optical object recognition, convolutional neural networks (CNNs) are the state-of-the-art. For large datasets, CNNs are able to learn meaningful features used for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work we experimented with various CNN architectures on the MSTAR SAR dataset. As the input to the CNN we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes CAFFE and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.

  11. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964
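
    For readers unfamiliar with (n, k) convolutional codes, the sketch below encodes a bit stream with a rate-1/3 binary convolutional encoder (the k = 1, n = 3 case); the generator taps are illustrative and are neither the specific (3, 1) code nor the syndrome decoder from the paper.

```python
def conv_encode(bits, generators, K=3):
    """Rate-1/n convolutional encoder: a length-K shift register produces
    one output bit per generator (tap pattern) for every input bit."""
    state = [0] * (K - 1)
    out = []
    for b in bits:
        reg = [b] + state                          # newest bit first
        for g in generators:
            out.append(sum(r & t for r, t in zip(reg, g)) % 2)
        state = reg[:-1]                           # shift the register
    return out

# Illustrative rate-1/3, constraint-length-3 code.
codeword = conv_encode([1, 0, 1, 1],
                       generators=[(1, 1, 1), (1, 0, 1), (1, 1, 0)])
```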

  12. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3, 1) CC.

  13. Surrogacy theory and models of convoluted organic systems.

    PubMed

    Konopka, Andrzej K

    2007-03-01

    The theory of surrogacy is briefly outlined as one of the conceptual foundations of systems biology that has been developed for the last 30 years in the context of Hertz-Rosen modeling relationship. Conceptual foundations of modeling convoluted (biologically complex) systems are briefly reviewed and discussed in terms of current and future research in systems biology. New as well as older results that pertain to the concepts of modeling relationship, sequence of surrogacies, cascade of representations, complementarity, analogy, metaphor, and epistemic time are presented together with a classification of models in a cascade. Examples of anticipated future applications of surrogacy theory in life sciences are briefly discussed.

  14. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k-processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM-5 and Paragon computers, from Thinking Machines Corporation and Intel respectively, is considered.

  15. Convolution Algebra for Fluid Modes with Finite Energy

    DTIC Science & Technology

    1992-04-01

    Phillips Laboratory, Air Force Systems Command, Hanscom Air Force Base, Massachusetts. This technical report describes a full form of wavelet expansion, developed at Boston University, for fluid modes with finite spatial and temporal extents, and the convolution of two such expansions.

  16. Continuous speech recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong

    2015-07-01

    Convolutional Neural Networks (CNNs), which have shown success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have been proven successful in many speech recognition tasks, CNNs can reduce the model size significantly while achieving even better recognition accuracy. Experiments on the standard TIMIT speech corpus showed that CNNs outperformed DNNs in terms of accuracy even with a smaller model size.

  17. Convolution seal for transition duct in turbine system

    DOEpatents

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-05-26

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.

  18. A digital model for streamflow routing by convolution methods

    USGS Publications Warehouse

    Doyle, W.H.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.

    1984-01-01

    A U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model and subsequent use of the model for simulation are also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature involve daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
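
    The core of unit-response convolution routing can be stated in a few lines; the sketch below (not CONROUT itself) convolves daily upstream flows with a unit-response function whose ordinates are assumed to sum to one so that volume is conserved.

```python
import numpy as np

def route_flow(upstream_q, unit_response):
    """Downstream hydrograph = discrete convolution of upstream daily flows
    with the channel's unit-response function, truncated to the input length."""
    return np.convolve(upstream_q, unit_response)[:len(upstream_q)]

# Hypothetical 3-day triangular unit response and a short flood wave (m^3/s).
upstream = np.array([10.0, 80.0, 40.0, 20.0, 10.0, 10.0])
response = np.array([0.25, 0.50, 0.25])
downstream = route_flow(upstream, response)
```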

  19. Tandem mass spectrometry data quality assessment by self-convolution

    PubMed Central

    Choo, Keng Wah; Tham, Wai Mun

    2007-01-01

    Background Many algorithms have been developed for deciphering tandem mass spectrometry (MS) data sets. They can essentially be clustered into two classes: the first performs searches against a theoretical mass spectrum database, while the second is based on de novo sequencing from raw mass spectrometry data. It has been noted that the quality of mass spectra significantly affects the protein identification process in both cases. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and increased confidence in the proteins identified. Results The proposed method measures the quality of MS data sets based on the symmetric property of the b- and y-ion peaks present in an MS spectrum. Self-convolution of the MS data with its time-reversed copy was employed. Due to the symmetric nature of b-ion and y-ion peaks, the self-convolution result of a good spectrum produces its highest intensity peak at the mid-point. To reduce processing time, self-convolution was computed using the Fast Fourier Transform and its inverse, followed by removal of the "DC" (Direct Current) component and normalisation of the data set. The quality score was defined as the ratio of the intensity at the mid-point to the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. Conclusion We have demonstrated in this work a method for determining the quality of tandem MS data sets. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the predicted results. We conclude that
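
    A mechanical sketch of the scoring procedure as described above (simplified, operating on a uniformly binned intensity vector, and not the authors' implementation): convolve the spectrum with its time-reversed copy via the FFT, remove the DC component, normalise, and compare the mid-point of the result with the remaining values.

```python
import numpy as np

def spectrum_quality(intensities):
    """Quality score = mid-point of the self-convolution divided by the sum of
    the remaining (normalised) values; symmetric b-/y-ion pairs reinforce the
    mid-point, so higher scores suggest better spectra."""
    x = np.asarray(intensities, dtype=float)
    x = x - x.mean()                               # remove the "DC" component
    n = 2 * len(x) - 1                             # full linear convolution length
    conv = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(x[::-1], n), n)
    conv = np.abs(conv)
    conv /= conv.max() + 1e-12                     # normalise
    mid = len(conv) // 2
    rest = np.delete(conv, mid)
    return conv[mid] / (rest.sum() + 1e-12)
```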

  20. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.
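
    A deliberately simplified 2D sketch of the LIC idea underlying the paper (the paper's method is three-dimensional, volume-rendered, and adds dye advection, none of which is shown here): for every pixel, a short streamline is traced forward and backward through the vector field and the values of a noise texture along it are averaged.

```python
import numpy as np

def lic_2d(vx, vy, noise, length=15):
    """Return a LIC image: each output pixel is the average of the noise
    texture sampled along a short streamline through that pixel."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for sign in (+1.0, -1.0):              # integrate both directions
                x, y = float(j), float(i)
                for _ in range(length):
                    xi, yi = int(round(x)), int(round(y))
                    if not (0 <= xi < w and 0 <= yi < h):
                        break
                    total += noise[yi, xi]
                    count += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    speed = np.hypot(u, v)
                    if speed < 1e-12:
                        break
                    x += sign * u / speed          # unit step along the field
                    y += sign * v / speed
            out[i, j] = total / max(count, 1)
    return out

# Example: a circular flow imaged over white noise.
yy, xx = np.mgrid[0:128, 0:128].astype(float)
vx, vy = -(yy - 64), (xx - 64)
image = lic_2d(vx, vy, np.random.rand(128, 128))
```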

  1. Convolution seal for transition duct in turbine system

    DOEpatents

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-03-10

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.

  2. A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images

    PubMed Central

    Xu, Jun; Luo, Xiaofei; Wang, Guanhao; Gilmore, Hannah; Madabhushi, Anant

    2016-01-01

    Epithelial (EP) and stromal (ST) tissues are two types of tissue in histological images. Automated segmentation or classification of EP and ST tissues is important when developing computerized systems for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Network (DCNN) based feature learning approach is presented to automatically segment or classify EP and ST regions from digitized tumor tissue microarrays (TMAs). Current approaches are based on handcrafted feature representations, such as color, texture, and Local Binary Patterns (LBP), for classifying the two regions. Compared to handcrafted-feature based approaches, which involve task-dependent representations, a DCNN is an end-to-end feature extractor that can be learned directly from the raw pixel intensity values of EP and ST tissues in a data-driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two types of tissue. In this work we compare DCNN based models with three handcrafted-feature extraction based approaches on two different datasets, consisting of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistochemistry (IHC) stained images of colorectal cancer, respectively. The DCNN based feature learning approach achieved an F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on the two H&E stained (NKI and VGH) datasets and the IHC stained data, respectively. Our DCNN based approach outperformed the three handcrafted-feature extraction based approaches in terms of the classification of EP and ST regions. PMID:28154470

  3. Left ventricle segmentation in cardiac MRI images using fully convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Vázquez Romaguera, Liset; Costa, Marly Guimarães Fernandes; Romero, Francisco Perdigón; Costa Filho, Cicero Ferreira Fernandes

    2017-03-01

    According to the World Health Organization, cardiovascular diseases are the leading cause of death worldwide, accounting for 17.3 million deaths per year, a number that is expected to grow to more than 23.6 million by 2030. Most cardiac pathologies involve the left ventricle; therefore, estimation of several functional parameters from a prior segmentation of this structure can be helpful in diagnosis. Manual delineation is a time-consuming and tedious task that is also prone to high intra- and inter-observer variability. Thus, there is a need for automated cardiac segmentation methods to help facilitate the diagnosis of cardiovascular diseases. In this work we propose a deep fully convolutional neural network architecture to address this issue and assess its performance. The model was trained end to end in a supervised learning stage, from whole cardiac MRI images and ground truth, to make a per-pixel classification. The Caffe deep learning framework, running on an NVIDIA Quadro K4200 graphics processing unit, was used for its design, development, and experimentation. The network architecture is: Conv64-ReLU (2x) - MaxPooling - Conv128-ReLU (2x) - MaxPooling - Conv256-ReLU (2x) - MaxPooling - Conv512-ReLU-Dropout (2x) - Conv2-ReLU - Deconv - Crop - Softmax. Training and testing were carried out using 5-fold cross validation with short-axis cardiac magnetic resonance images from the Sunnybrook Database. We obtained Dice scores of 0.92 and 0.90, Hausdorff distances of 4.48 and 5.43, Jaccard indices of 0.97 and 0.97, sensitivities of 0.92 and 0.90, and specificities of 0.99 and 0.99 (overall mean values with SGD and RMSProp, respectively).
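
    The layer sequence quoted above can be transcribed almost directly; the sketch below does so with the Keras API (the original used Caffe), with the kernel sizes, dropout rate, and upsampling factor as assumptions. With 'same' padding the final Crop step becomes unnecessary.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lv_fcn(input_shape=(256, 256, 1)):
    """Conv64-ReLU(2x)-MaxPool-Conv128-ReLU(2x)-MaxPool-Conv256-ReLU(2x)-
    MaxPool-Conv512-ReLU-Dropout(2x)-Conv2-ReLU-Deconv-Softmax."""
    inp = layers.Input(shape=input_shape)
    x = inp
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
        x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
        x = layers.MaxPooling2D()(x)
    for _ in range(2):
        x = layers.Conv2D(512, 3, padding='same', activation='relu')(x)
        x = layers.Dropout(0.5)(x)
    x = layers.Conv2D(2, 1, activation='relu')(x)                      # Conv2-ReLU
    x = layers.Conv2DTranspose(2, 16, strides=8, padding='same')(x)    # Deconv
    out = layers.Softmax(axis=-1)(x)        # per-pixel background / LV classes
    return models.Model(inp, out)
```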

  4. A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images.

    PubMed

    Xu, Jun; Luo, Xiaofei; Wang, Guanhao; Gilmore, Hannah; Madabhushi, Anant

    2016-05-26

    Epithelial (EP) and stromal (ST) tissues are two types of tissue in histological images. Automated segmentation or classification of EP and ST tissues is important when developing computerized systems for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Network (DCNN) based feature learning approach is presented to automatically segment or classify EP and ST regions from digitized tumor tissue microarrays (TMAs). Current approaches are based on handcrafted feature representations, such as color, texture, and Local Binary Patterns (LBP), for classifying the two regions. Compared to handcrafted-feature based approaches, which involve task-dependent representations, a DCNN is an end-to-end feature extractor that can be learned directly from the raw pixel intensity values of EP and ST tissues in a data-driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two types of tissue. In this work we compare DCNN based models with three handcrafted-feature extraction based approaches on two different datasets, consisting of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistochemistry (IHC) stained images of colorectal cancer, respectively. The DCNN based feature learning approach achieved an F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on the two H&E stained (NKI and VGH) datasets and the IHC stained data, respectively. Our DCNN based approach outperformed the three handcrafted-feature extraction based approaches in terms of the classification of EP and ST regions.

  5. Intelligent constellation diagram analyzer using convolutional neural network-based deep learning.

    PubMed

    Wang, Danshi; Zhang, Min; Li, Jin; Li, Ze; Li, Jianqiang; Song, Chuang; Chen, Xue

    2017-07-24

    An intelligent constellation diagram analyzer is proposed to implement both modulation format recognition (MFR) and optical signal-to-noise ratio (OSNR) estimation using a convolutional neural network (CNN)-based deep learning technique. With its capacity for feature extraction and self-learning, a CNN can process a constellation diagram in its raw data form (i.e., the pixel points of an image) from an image processing perspective, without manual intervention or data statistics. The constellation diagram images of six widely used modulation formats over a wide OSNR range (15~30 dB and 20~35 dB) are obtained from a constellation diagram generation module in an oscilloscope. Both simulation and experiment are conducted. Compared with four traditional machine learning algorithms, the CNN achieves better accuracy and is clearly superior to the other methods, at the cost of O(n) computational complexity and less than 0.5 s testing time. For OSNR estimation, high accuracies are obtained at 200 epochs (95% for 64QAM, and over 99% for the other five formats); for MFR, 100% accuracy is achieved even with less training data at lower epochs. The experimental results show that the OSNR estimation errors for all the signals are less than 0.7 dB. Additionally, the effects of multiple factors on CNN performance are comprehensively investigated, including training data size, image resolution, and network structure. The proposed technique has the potential to be embedded in test instruments to perform intelligent signal analysis or applied to optical performance monitoring.

  6. Feature extraction using convolutional neural network for classifying breast density in mammographic images

    NASA Astrophysics Data System (ADS)

    Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.

    2017-03-01

    Breast cancer is the leading cause of death for women in most countries. The high mortality relates mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, the correct assessment of breast density is important to provide better screening for higher-risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information for all densities. Thus, a computational system for classifying breast density might be a useful tool for aiding medical staff. Several machine-learning algorithms are already capable of classifying a small number of classes with good accuracy. However, the main constraint of machine-learning algorithms relates to the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good set of features, it is a complex task to select an initial set during the design of a classifier. Thus, we propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a conventional machine-learning classifier. We used 307 mammographic images, downsampled to 260x200 pixels, to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. These features are then fed forward to a single-hidden-layer neural network that is cross-validated using 10 folds to classify among four classes of breast density. The global accuracy of this method is 98.4%, with only 1.6% misclassification. However, the small set of samples and memory constraints required the reuse of data in both the CNN and the MLP-NN, so overfitting might have influenced the results even though we cross-validated the network. Thus, although we presented a promising method for extracting features and classifying breast density, a greater database is

  7. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when an IRFPA is used in different thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects and their positions are always fixed; the latter are caused by temperature drift and their positions are always changing. Traditional radiometric-calibration-based bad pixel detection and compensation algorithms are only valid for the fixed bad pixels. Scene-based bad pixel correction is the effective way to eliminate both kinds of bad pixels. Currently, the most widely used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In this algorithm, bad pixels are regarded as image noise and replaced by the filtered value. However, missed corrections and false corrections often occur when the AMF is used to handle complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are detected by the PCNN in the first step, then image sequences are used periodically to confirm the real bad pixels and exclude the false ones, and finally the bad pixels are replaced by the filtered result. With real infrared images obtained from a camera, the experimental results show the effectiveness of the proposed algorithm.
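
    To make the baseline concrete, here is a hedged sketch of the adaptive-median-filter style of scene-based correction that the paper improves on (it is not the proposed PCNN algorithm): pixels that deviate strongly from their local median are flagged and replaced by the median value; the window size and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_bad_pixels(frame, threshold=5.0, size=3):
    """Flag pixels far from their local median and replace only those pixels."""
    frame = frame.astype(float)
    med = median_filter(frame, size=size)
    resid = frame - med
    sigma = resid.std() + 1e-12
    bad = np.abs(resid) > threshold * sigma        # candidate bad pixels
    corrected = frame.copy()
    corrected[bad] = med[bad]                      # replace with filtered value
    return corrected, bad
```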

  8. An induced charge readout scheme incorporating image charge splitting on discrete pixels

    NASA Astrophysics Data System (ADS)

    Kataria, D. O.; Lapington, J. S.

    2003-11-01

    Top hat electrostatic analysers used in space plasma instruments typically use microchannel plates (MCPs) followed by discrete pixel anode readout for the angular definition of the incoming particles. Better angular definition requires more pixels/readout electronics channels but with stringent mass and power budgets common in space applications, the number of channels is restricted. We describe here a technique that improves the angular definition using induced charge and an interleaved anode pattern. The technique adopts the readout philosophy used on the CRRES and CLUSTER I instruments but has the advantages of the induced charge scheme and significantly reduced capacitance. Charge from the MCP collected by an anode pixel is inductively split onto discrete pixels whose geometry can be tailored to suit the scientific requirements of the instrument. For our application, the charge is induced over two pixels. One of them is used for a coarse angular definition but is read out by a single channel of electronics, allowing a higher rate handling. The other provides a finer angular definition but is interleaved and hence carries the expense of lower rate handling. Using the technique and adding four channels of electronics, a four-fold increase in the angular resolution is obtained. Details of the scheme and performance results are presented.

  9. High-Performance Active Pixel X-Ray Sensors for X-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Bautz, Mark; Suntharalingam, Vyshnavi

    2005-01-01

    The subject grants support development of High-Performance Active Pixel Sensors for X-ray Astronomy at the Massachusetts Institute of Technology (MIT) Center for Space Research and at MIT's Lincoln Laboratory. This memo reports our progress in the second year of the project, from April, 2004 through the present.

  10. A near-infrared 64-pixel superconducting nanowire single photon detector array with integrated multiplexed readout

    SciTech Connect

    Allman, M. S. Verma, V. B.; Stevens, M.; Gerrits, T.; Horansky, R. D.; Lita, A. E.; Mirin, R.; Nam, S. W.; Marsili, F.; Beyer, A.; Shaw, M. D.; Kumor, D.

    2015-05-11

    We demonstrate a 64-pixel free-space-coupled array of superconducting nanowire single photon detectors optimized for high detection efficiency in the near-infrared range. An integrated, readily scalable, multiplexed readout scheme is employed to reduce the number of readout lines to 16. The cryogenic, optical, and electronic packaging to read out the array as well as characterization measurements are discussed.

  11. Low Power Camera-on-a-Chip Using CMOS Active Pixel Sensor Technology

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    A second generation image sensor technology has been developed at the NASA Jet Propulsion Laboratory as a result of the continuing need to miniaturize space science imaging instruments. Implemented using standard CMOS, the active pixel sensor (APS) technology permits the integration of the detector array with on-chip timing, control and signal chain electronics, including analog-to-digital conversion.

  12. Design Methodology: ASICs with complex in-pixel processing for Pixel Detectors

    SciTech Connect

    Fahim, Farah

    2014-10-31

    The development of Application Specific Integrated Circuits (ASIC) for pixel detectors with complex in-pixel processing using Computer Aided Design (CAD) tools that are, themselves, mainly developed for the design of conventional digital circuits requires a specialized approach. Mixed signal pixels often require parasitically aware detailed analog front-ends and extremely compact digital back-ends with more than 1000 transistors in small areas below 100μm x 100μm. These pixels are tiled to create large arrays, which have the same clock distribution and data readout speed constraints as in, for example, micro-processors. The methodology uses a modified mixed-mode on-top digital implementation flow to not only harness the tool efficiency for timing and floor-planning but also to maintain designer control over compact parasitically aware layout.

  13. WFC3/IR Cycle 19 Bad Pixel Table Update

    NASA Astrophysics Data System (ADS)

    Hilbert, B.

    2012-06-01

    Using data from Cycles 17, 18, and 19, we have updated the IR channel bad pixel table for WFC3. The bad pixel table contains flags that mark the position of pixels that are dead, unstable, have a bad zeroth read value, or are affected by "blobs". In all, 28,500 of the science pixels (2.77%) are flagged as bad. Observers are encouraged to dither their observations as a means of lessening the effects of these bad pixels. The new bad pixel table is in the calibration database system (CDBS) as w681807ii_bpx.fits.

  14. Design methodology: edgeless 3D ASICs with complex in-pixel processing for pixel detectors

    SciTech Connect

    Fahim Farah, Fahim Farah; Deptuch, Grzegorz W.; Hoff, James R.; Mohseni, Hooman

    2015-08-28

    The design methodology for the development of 3D integrated edgeless pixel detectors with in-pixel processing using Electronic Design Automation (EDA) tools is presented. A large area 3 tier 3D detector with one sensor layer and two ASIC layers containing one analog and one digital tier, is built for x-ray photon time of arrival measurement and imaging. A full custom analog pixel is 65μm x 65μm. It is connected to a sensor pixel of the same size on one side, and on the other side it has approximately 40 connections to the digital pixel. A 32 x 32 edgeless array without any peripheral functional blocks constitutes a sub-chip. The sub-chip is an indivisible unit, which is further arranged in a 6 x 6 array to create the entire 1.248cm x 1.248cm ASIC. Each chip has 720 bump-bond I/O connections, on the back of the digital tier to the ceramic PCB. All the analog tier power and biasing is conveyed through the digital tier from the PCB. The assembly has no peripheral functional blocks, and hence the active area extends to the edge of the detector. This was achieved by using a few flavors of almost identical analog pixels (minimal variation in layout) to allow for peripheral biasing blocks to be placed within pixels. The 1024 pixels within a digital sub-chip array have a variety of full custom, semi-custom and automated timing driven functional blocks placed together. The methodology uses a modified mixed-mode on-top digital implementation flow to not only harness the tool efficiency for timing and floor-planning but also to maintain designer control over compact parasitically aware layout. The methodology uses the Cadence design platform, however it is not limited to this tool.

  15. Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers

    NASA Astrophysics Data System (ADS)

    Jiang, Chufan; Li, Beiwen; Zhang, Song

    2017-04-01

    This paper presents a method that can recover the absolute phase pixel by pixel without embedding markers in the three phase-shifted fringe patterns, acquiring additional images, or introducing additional hardware components. The proposed three-dimensional (3D) absolute shape measurement technique includes the following major steps: (1) segment the measured object into different regions using rough a priori knowledge of the surface geometry; (2) artificially create phase maps at different z planes using the geometric constraints of the structured light system; (3) unwrap the phase pixel by pixel for each region by properly referring to the artificially created phase map; and (4) merge the unwrapped phases from all regions into a complete absolute phase map for 3D reconstruction. We demonstrate that conventional three-step phase-shifted fringe patterns can be used to create an absolute phase map pixel by pixel even for objects with a large depth range. We have successfully implemented our proposed computational framework to achieve absolute 3D shape measurement at 40 Hz.

  16. ACS/WFC Pixel Stability - Bringing the Pixels Back to the Science

    NASA Astrophysics Data System (ADS)

    Borncamp, David; Grogin, Norman A.; Bourque, Matthew; Ogaz, Sara

    2016-06-01

    Electrical current that has been trapped within the lattice structure of a Charged Coupled Device (CCD) can be present through multiple exposures, which will have an adverse effect on its science performance. The traditional way to correct for this extra charge is to take an image with the camera shutter closed periodically throughout the lifetime of the instrument. These images, generally referred to as dark images, allow for the characterization of the extra charge that is trapped within the CCD at the time of observation. This extra current can then be subtracted out of science images to correct for the extra charge that was there at this time. Pixels that have a charge above a certain threshold of current are marked as “hot” and flagged in the data quality array. However, these pixels may not be "bad" in the traditional sense that they cannot be reliably dark-subtracted. If these pixels are shown to be stable over an anneal period, the charge can be properly subtracted and the extra noise from this dark current can be taken into account. We present the results of a pixel history study that analyzes every pixel of ACS/WFC individually and allows pixels that were marked as bad to be brought back into the science image.

  17. Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.

    2017-05-01

    Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Nowadays methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While the refinement of network architectures has received a lot of scholarly attention, from a practical point of view the preparation of a large image dataset for successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example, no infrared or radar image datasets large enough for successful training of a deep neural network are available to date in the public domain. Recent advances in deep neural networks prove that they are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation, and imitation of the style of a given artist. Thus a natural question arises: how can deep neural networks be used to augment existing large image datasets? This paper focuses on the development of the Thermalnet deep convolutional neural network for augmenting existing large visible-image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.

  18. Digital image correlation based on a fast convolution strategy

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Zhan, Qin; Xiong, Chunyang; Huang, Jianyong

    2017-10-01

    In recent years, the efficiency of digital image correlation (DIC) methods has attracted increasing attention because of its increasing importance for many engineering applications. Based on the classical affine optical flow (AOF) algorithm and the well-established inverse compositional Gauss-Newton algorithm, which is essentially a natural extension of the AOF algorithm under a nonlinear iterative framework, this paper develops a set of fast convolution-based DIC algorithms for high-efficiency subpixel image registration. Using a well-developed fast convolution technique, the set of algorithms establishes a series of global data tables (GDTs) over the digital images, which allows the reduction of the computational complexity of DIC significantly. Using the pre-calculated GDTs, the subpixel registration calculations can be implemented efficiently in a look-up-table fashion. Both numerical simulation and experimental verification indicate that the set of algorithms significantly enhances the computational efficiency of DIC, especially in the case of a dense data sampling for the digital images. Because the GDTs need to be computed only once, the algorithms are also suitable for efficiently coping with image sequences that record the time-varying dynamics of specimen deformations.

  19. Enhancing Neutron Beam Production with a Convoluted Moderator

    SciTech Connect

    Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut

    2014-10-01

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  20. Generalized type II hybrid ARQ scheme using punctured convolutional coding

    NASA Astrophysics Data System (ADS)

    Kallel, Samir; Haccoun, David

    1990-11-01

    A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes obtained from the best rate-1/2 codes. The construction method is simple and straightforward, yet still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases; as the channel degrades, the throughput tends to merge with that of rate-1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
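
    Puncturing itself is a one-line operation; the sketch below shows how deleting bits from a rate-1/2 coded stream with a repeating keep/delete pattern raises the code rate. The pattern shown is illustrative, not one of the paper's rate-compatible families, in which the punctured positions of a higher-rate code are nested inside those of the lower-rate codes.

```python
def puncture(coded_bits, pattern):
    """Keep the bits where the repeating pattern is 1, delete the others."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

# Rate 1/2 -> rate 2/3: every 2 information bits give 4 coded bits, of which
# 3 are kept under the illustrative pattern [1, 1, 1, 0].
punctured = puncture([1, 0, 1, 1, 0, 0, 1, 0], pattern=[1, 1, 1, 0])
```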

  1. Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network

    PubMed Central

    Haj-Hassan, Hawraa; Chaddad, Ahmad; Harkouss, Youssef; Desrosiers, Christian; Toews, Matthew; Tanougast, Camel

    2017-01-01

    Background: Colorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes to use convolution neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca). Methods: Multispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups, based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and test set, for evaluating its performance. Results: An accuracy of 99.17% was obtained from segmented image regions, outperforming existing approaches based on traditional feature extraction, and classification techniques. Conclusions: Experimental results demonstrate the effectiveness of CNN for the classification of CRC tissue types, in particular when using presegmented regions of interest. PMID:28400990

  2. Selective Convolutional Descriptor Aggregation for Fine-Grained Image Retrieval.

    PubMed

    Wei, Xiu-Shen; Luo, Jian-Hao; Wu, Jianxin; Zhou, Zhi-Hua

    2017-03-27

    Deep convolutional neural network models pretrained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone the unsupervised retrieval task. We propose the Selective Convolutional Descriptor Aggregation (SCDA) method. SCDA firstly localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and dimensionality reduced into a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained datasets confirm the effectiveness of SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval datasets, SCDA achieves comparable retrieval results with state-of-the-art general image retrieval approaches.

  3. Coronary artery calcification (CAC) classification with deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Xiuming; Wang, Shice; Deng, Yufeng; Chen, Kuan

    2017-03-01

    Coronary artery calcification (CAC) is a typical marker of coronary artery disease, which is one of the biggest causes of mortality in the U.S. This study evaluates the feasibility of using a deep convolutional neural network (DCNN) to automatically detect CAC in X-ray images. 1768 posteroanterior (PA) view chest X-ray images from Sichuan Province People's Hospital, China were collected retrospectively. Each image is associated with a corresponding diagnostic report written by a trained radiologist (907 normal, 861 diagnosed with CAC). One quarter of the images were randomly selected as test samples; the rest were used as training samples. DCNN models consisting of 2, 4, 6, and 8 convolutional layers were designed using blocks of pre-designed CNN layers. Each block was implemented in Theano with Graphics Processing Units (GPUs). Human-in-the-loop learning was also performed on a subset of 165 images with framed arteries by trained physicians. The results from the DCNN models were compared to the diagnostic reports. The average diagnostic accuracies for the models with 2, 4, 6, and 8 layers were 0.85, 0.87, 0.88, and 0.89, respectively; the areas under the curve (AUC) were 0.92, 0.95, 0.95, and 0.96. As the model grows deeper, the AUC and diagnostic accuracy did not change in a statistically significant way. The results of this study indicate that DCNN models have promising potential in the field of intelligent medical image diagnosis practice.

  4. Fluence-convolution broad-beam (FCBB) dose calculation.

    PubMed

    Lu, Weiguo; Chen, Mingli

    2010-12-07

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with a small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into a central axis (CAX) component and a lateral spread function (LSF), and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both the LSF and the CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with the LSF, followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in complexity of O(N³) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited to calculating the iteration dose during IMRT optimization.
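
    A hedged sketch of the two ingredients named above (not the authors' implementation): a 2D convolution of the fluence map with the LSF in the beam's eye view, scaled by a commissioned CAX factor looked up at the radiological depth of each BEV plane. Divergence and distance corrections are omitted, and the lookup table values are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def fcbb_dose(fluence, lsf, cax_table, radiological_depths):
    """Return dose[d, y, x] ~ CAX(depth_d) * (fluence convolved with LSF)[y, x]."""
    lateral = fftconvolve(fluence, lsf, mode='same')            # 2D convolution
    cax = np.interp(radiological_depths, cax_table[:, 0], cax_table[:, 1])
    return cax[:, None, None] * lateral[None, :, :]

# Illustrative commissioning table (depth in cm, relative CAX dose) and field.
cax_table = np.array([[0.0, 0.4], [1.5, 1.0], [10.0, 0.75], [20.0, 0.5]])
fluence = np.zeros((64, 64))
fluence[20:44, 20:44] = 1.0
x = np.arange(-5, 6)
lsf = np.exp(-0.5 * (x[:, None] ** 2 + x[None, :] ** 2) / 2.0)
lsf /= lsf.sum()
dose = fcbb_dose(fluence, lsf, cax_table,
                 radiological_depths=np.linspace(0.0, 20.0, 50))
```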

  5. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied on both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformation, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.
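
    The transformation described above amounts to gradient ascent on the input; the sketch below (with an assumed Keras model, step size, and step count rather than the authors' setup) modifies a clip so that the summed outputs of chosen layers of a trained genre classifier increase.

```python
import tensorflow as tf

def maximise_layer_activations(model, clip, layer_names, steps=100, lr=0.01):
    """Gradient ascent on the input clip to maximise the sum of the outputs
    of the named layers of a trained (and frozen) classifier."""
    feature_model = tf.keras.Model(
        model.inputs, [model.get_layer(n).output for n in layer_names])
    x = tf.Variable(clip[None, ...], dtype=tf.float32)
    optimiser = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            outputs = feature_model(x)
            if not isinstance(outputs, (list, tuple)):
                outputs = [outputs]
            loss = -tf.add_n([tf.reduce_sum(o) for o in outputs])  # ascend
        optimiser.apply_gradients([(tape.gradient(loss, x), x)])
    return x.numpy()[0]
```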

  6. Multichannel Convolutional Neural Network for Biological Relation Extraction

    PubMed Central

    Quan, Chanqin; Sun, Xiao; Bai, Wenjun

    2016-01-01

    The plethora of biomedical relations embedded in medical logs (records) demands researchers' attention. Previous theoretical and practical work focused on traditional machine learning techniques. However, these methods are susceptible to the issues of the “vocabulary gap” and data sparseness, and their feature extraction cannot be automated. To address the aforementioned issues, in this work we propose a multichannel convolutional neural network (MCCNN) for automated biomedical relation extraction. The proposed model makes the following two contributions: (1) it enables the fusion of multiple (e.g., five) versions of word embeddings; (2) the need for manual feature engineering is obviated by automated feature learning with a convolutional neural network (CNN). We evaluated our model on two biomedical relation extraction tasks: drug-drug interaction (DDI) extraction and protein-protein interaction (PPI) extraction. For the DDI task, our system achieved an overall F-score of 70.2%, compared to 67.0% for a standard linear SVM based system, on the DDIExtraction 2013 challenge dataset. For the PPI task, we evaluated our system on the AIMed and BioInfer PPI corpora; our system exceeded the state-of-the-art ensemble SVM system by 2.7% and 5.6% in F-score. PMID:28053977

  7. A Mathematical Motivation for Complex-Valued Convolutional Networks.

    PubMed

    Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur

    2016-05-01

    A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
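
    The three-operation layer described above is easy to state directly; the sketch below applies it to a 1D real signal with numpy (a reading of the description, not the authors' code), using a windowed complex exponential filter so the layer output is a windowed spectral band.

```python
import numpy as np

def complex_convnet_layer(x, filters, pool=4):
    """One layer: (1) convolve with complex-valued filters, (2) take absolute
    values, (3) local averaging over non-overlapping windows of length `pool`."""
    outputs = []
    for h in filters:
        y = np.abs(np.convolve(x, h, mode='valid'))              # steps (1), (2)
        n = (len(y) // pool) * pool
        outputs.append(y[:n].reshape(-1, pool).mean(axis=1))     # step (3)
    return outputs

# A windowed complex exponential filter extracts a windowed spectral band.
t = np.arange(32)
filt = np.hanning(32) * np.exp(2j * np.pi * 0.1 * t)
signal = np.random.randn(1024)
band, = complex_convnet_layer(signal, [filt])
```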

  8. Classification of Histology Sections via Multispectral Convolutional Sparse Coding.

    PubMed

    Zhou, Yin; Chang, Hang; Barner, Kenneth; Spellman, Paul; Parvin, Bahram

    2014-06-01

    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]).

  9. Deep Convolutional Neural Networks for large-scale speech tasks.

    PubMed

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks; specifically, we focus on how many convolutional layers are needed, what an appropriate number of hidden units is, and what the best pooling strategy is. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results on these 3 tasks. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    NASA Astrophysics Data System (ADS)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually-engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis, and crest factor, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but also healthy bearings and rotor imbalance are included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.
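
    For comparison, the feature-engineering baseline described above can be sketched in a few lines: hand-computed statistics of each vibration record (RMS, crest factor, and kurtosis are shown; the ball-pass frequencies are omitted) fed to a random forest. The feature subset, classifier settings, and the segments/labels inputs are illustrative assumptions, not the authors' pipeline.

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.ensemble import RandomForestClassifier

        def vibration_features(signal):
            # Hand-engineered statistics of one vibration record.
            rms = np.sqrt(np.mean(signal ** 2))
            crest_factor = np.max(np.abs(signal)) / rms
            return [rms, crest_factor, kurtosis(signal)]

        def train_baseline(segments, labels):
            # segments: list of 1-D vibration records; labels: condition per record.
            X = np.array([vibration_features(s) for s in segments])
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            return clf.fit(X, labels)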

  11. Single-Cell Phenotype Classification Using Deep Convolutional Neural Networks.

    PubMed

    Dürr, Oliver; Sick, Beate

    2016-10-01

    Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods for high-content screening-based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%. © 2016 Society for Laboratory Automation and Screening.

  12. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks

    PubMed Central

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-01-01

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction. PMID:28672867
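
    A minimal PyTorch sketch of the SRCN idea follows: a small CNN extracts spatial features from each traffic-grid frame and an LSTM models their temporal evolution. The grid size, channel counts, and hidden dimension are assumptions for illustration; only the 278-link output matches the experiment described above.

        import torch
        import torch.nn as nn

        class SRCNSketch(nn.Module):
            # CNN per frame for spatial structure, LSTM across frames for temporal
            # dynamics, linear head predicting one speed per network link.
            def __init__(self, grid=32, hidden=64, n_links=278):
                super().__init__()
                self.cnn = nn.Sequential(
                    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                feat_dim = 16 * (grid // 4) * (grid // 4)
                self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_links)

            def forward(self, x):                   # x: (batch, time, 1, grid, grid)
                b, t = x.shape[:2]
                f = self.cnn(x.flatten(0, 1))       # apply the CNN to every frame
                f = f.flatten(1).reshape(b, t, -1)
                out, _ = self.lstm(f)
                return self.head(out[:, -1])        # forecast from the last step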

  13. Multiple deep convolutional neural networks averaging for face alignment

    NASA Astrophysics Data System (ADS)

    Zhang, Shaohua; Yang, Hua; Yin, Zhouping

    2015-05-01

    Face alignment is critical for face recognition, and the deep learning-based method shows promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape. However, most existing deep learning-based approaches are complicated and quite time-consuming during training. We propose a compact face alignment method for fast training without decreasing its accuracy. The rectified linear unit is employed, which allows all networks to converge approximately five times faster than with a tanh neuron. A deep convolutional neural network (DCNN) with eight learnable layers, based on local response normalization and a padding convolutional layer (PCL), is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and the model combination scheme. Extensive experiments validate the effectiveness of our method and demonstrate comparable accuracy with state-of-the-art methods on the BioID, labeled face parts in the wild, and Helen datasets.

  14. Convolutional neural network architectures for predicting DNA–protein binding

    PubMed Central

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608

  15. Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network.

    PubMed

    Haj-Hassan, Hawraa; Chaddad, Ahmad; Harkouss, Youssef; Desrosiers, Christian; Toews, Matthew; Tanougast, Camel

    2017-01-01

    Colorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes to use convolution neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca). Multispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups, based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and a test set, for evaluating its performance. An accuracy of 99.17% was obtained from segmented image regions, outperforming existing approaches based on traditional feature extraction and classification techniques. Experimental results demonstrate the effectiveness of CNN for the classification of CRC tissue types, in particular when using presegmented regions of interest.

  16. Neutron irradiation test of depleted CMOS pixel detector prototypes

    NASA Astrophysics Data System (ADS)

    Mandić, I.; Cindro, V.; Gorišek, A.; Hiti, B.; Kramberger, G.; Mikuž, M.; Zavrtanik, M.; Hemperek, T.; Daas, M.; Hügging, F.; Krüger, H.; Pohl, D.-L.; Wermes, N.; Gonella, L.

    2017-02-01

    Charge collection properties of depleted CMOS pixel detector prototypes produced on p-type substrate of 2 kΩ cm initial resistivity (by LFoundry 150 nm process) were studied using Edge-TCT method before and after neutron irradiation. The test structures were produced for investigation of CMOS technology in tracking detectors for experiments at HL-LHC upgrade. Measurements were made with passive detector structures in which current pulses induced on charge collecting electrodes could be directly observed. Thickness of depleted layer was estimated and studied as function of neutron irradiation fluence. An increase of depletion thickness was observed after first two irradiation steps to 1·10¹³ n/cm² and 5·10¹³ n/cm² and attributed to initial acceptor removal. At higher fluences the depletion thickness at given voltage decreases with increasing fluence because of radiation induced defects contributing to the effective space charge concentration. The behaviour is consistent with that of high resistivity silicon used for standard particle detectors. The measured thickness of the depleted layer after irradiation with 1·10¹⁵ n/cm² is more than 50 μm at 100 V bias. This is sufficient to guarantee satisfactory signal/noise performance on outer layers of pixel trackers in HL-LHC experiments.

  17. Monolithic pixel detectors for high energy physics

    NASA Astrophysics Data System (ADS)

    Snoeys, W.

    2013-12-01

    Monolithic pixel detectors integrating sensor matrix and readout in one piece of silicon have revolutionized imaging for consumer applications, but despite years of research they have not yet been widely adopted for high energy physics. Two major requirements for this application, radiation tolerance and low power consumption, require charge collection by drift for the most extreme radiation levels and an optimization of the collected signal charge over input capacitance ratio (Q/C). It is shown that monolithic detectors can achieve sufficient Q/C for low analog power consumption and even carry out the promise to practically eliminate analog power consumption, but combining sufficient Q/C, collection by drift, and integration of readout circuitry within the pixel remains a challenge. An overview is given of different approaches to address this challenge, with possible advantages and disadvantages.

  18. On the accuracy of pixel relaxation labeling

    NASA Technical Reports Server (NTRS)

    Richards, J. A.; Landgrebe, D. A.; Swain, P. H.

    1981-01-01

    An analysis of pixel labeling by probabilistic relaxation techniques is presented to demonstrate that these labeling procedures degenerate to weighted averages in the vicinity of fixed points. A consequence of this is that undesired label conversions can occur, leading to a deterioration of labeling accuracy at a stage after an improvement has already been achieved. Means for overcoming the accuracy deterioration are suggested and used as the basis for a possible design strategy for using probabilistic relaxation procedures. The results obtained are illustrated using simple data sets in which labeling on individual pixels can be examined and also using Landsat imagery to show application to data typical of that encountered in remote sensing applications.

  19. MTF evaluation of white pixel sensors

    NASA Astrophysics Data System (ADS)

    Lindner, Albrecht; Atanassov, Kalin; Luo, Jiafu; Goma, Sergio

    2015-01-01

    We present a methodology to compare image sensors with traditional Bayer RGB layouts to sensors with alternative layouts containing white pixels. We focused on the sensors' resolving powers, which we measured in the form of a modulation transfer function for variations in both luma and chroma channels. We present the design of the test chart, the acquisition of images, the image analysis, and an interpretation of results. We demonstrate the approach using the example of two sensors that differ only in their color filter arrays. We confirmed that the sensor with white pixels and the corresponding demosaicing result in a higher resolving power in the luma channel, but a lower resolving power in the chroma channels when compared to the traditional Bayer sensor.

  20. Using convolutional decoding to improve time delay and phase estimation in digital communications

    DOEpatents

    Ormesher, Richard C.; Mason, John J.

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  1. Experimental study of current loss and plasma formation in the Z machine post-hole convolute

    NASA Astrophysics Data System (ADS)

    Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.

    2017-01-01

    The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H2O, H2, and hydrocarbons. Plasma densities increase from 1×10¹⁶ cm⁻³ (level of detectability) just before peak current to over 1×10¹⁷ cm⁻³ at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode to anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.

  2. There is no MacWilliams identity for convolutional codes. [transmission gain comparison]

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.

  3. Convolution algorithm for normalization constant evaluation in queuing system with random requirements

    NASA Astrophysics Data System (ADS)

    Samouylov, K.; Sopin, E.; Vikhrova, O.; Shorgin, S.

    2017-07-01

    We suggest a convolution algorithm for calculating the normalization constant for stationary probabilities of a multiserver queuing system with random resource requirements. Our algorithm significantly reduces the computing time of the stationary probabilities and of system characteristics such as blocking probabilities and the average number of occupied resources. The algorithm aims to avoid the calculation of k-fold convolutions and to make reasonable use of memory resources.

  4. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    Treesearch

    Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin

    2014-01-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
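
    A generic sketch of routing by discrete convolution is given below; the exponential response function is only a stand-in assumption (the paper derives a discrete Hayami kernel), and the time step and inflow hydrograph are illustrative.

        import numpy as np

        def route_by_discrete_convolution(inflow, response, dt):
            # Discrete convolution of an inflow hydrograph with a unit response
            # function sampled at the same time step dt.
            return np.convolve(inflow, response)[: len(inflow)] * dt

        dt = 0.5                                              # hours
        t = np.arange(0, 24, dt)
        response = np.exp(-t / 3.0)
        response /= response.sum() * dt                       # unit-area kernel
        inflow = np.interp(t, [0, 4, 24], [0.0, 50.0, 5.0])   # m^3/s
        outflow = route_by_discrete_convolution(inflow, response, dt)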

  5. Iterative algorithm for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution

    NASA Astrophysics Data System (ADS)

    Quan, Haiyang; Wu, Fan; Hou, Xi

    2015-10-01

    A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified successive over-relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy. This has been proved by real experimental results.
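
    The acceleration idea is standard successive over-relaxation; a generic sketch for a linear system Ax = b follows (not tied to the paper's specific surface-reconstruction equations; the relaxation factor omega and the tolerances are illustrative assumptions).

        import numpy as np

        def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10000):
            # Successive over-relaxation for A x = b; omega = 1 reduces to
            # Gauss-Seidel, a well-chosen omega in (1, 2) converges much faster.
            x = np.zeros_like(b, dtype=float)
            for _ in range(max_iter):
                x_old = x.copy()
                for i in range(len(b)):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    break
            return x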

  6. Local Histograms for Per-Pixel Classification

    DTIC Science & Technology

    2012-03-01

    [Abstract not available; only fragments of the report were indexed, including a definition of local histograms (counts of pixels per bin, normalized by the bin width to obtain a density) and citations to "A Domain-Knowledge-Inspired Mathematical Framework for the Description and Classification of H&E Stained Histopathology Images," Proceedings of SPIE 8138; Lecture Notes in Computer Science 5112: 688-696 (2008); and van Ginneken, B. and B. M. ter Haar Romeny, "Applications of Locally Orderless Images."]

  7. Demonstration of a Polarimeter in a Pixel

    DTIC Science & Technology

    2011-02-15

    [Abstract not available; only fragments of the report (Annual Progress Report, April 2010) were indexed. The recoverable content describes a single-pixel polarimeter based on a Quantum Well Infrared Photodetector (QWIP) with gratings at different angles, covering structure growth, a schematic of the QWIP structure, and a modified fabrication sequence using Ge/Au/Ni/Au(/In) contacts and underfill epoxy on an SI-GaAs substrate.]

  8. The Silicon Pixel Detector for ALICE Experiment

    SciTech Connect

    Fabris, D.; Bombonati, C.; Dima, R.; Lunardon, M.; Moretto, S.; Pepato, A.; Bohus, L. Sajo; Scarlassara, F.; Segato, G.; Shen, D.; Turrisi, R.; Viesti, G.; Anelli, G.; Boccardi, A.; Burns, M.; Campbell, M.; Ceresa, S.; Conrad, J.; Kluge, A.; Kral, M.

    2007-10-26

    The Inner Tracking System (ITS) of the ALICE experiment is made of position sensitive detectors which have to operate in a region where the track density may be as high as 50 tracks/cm². To handle such densities, detectors with high precision and granularity are mandatory. The Silicon Pixel Detector (SPD), the innermost part of the ITS, has been designed to provide tracking information close to the primary interaction point. The assembly of the entire SPD has been completed.

  9. The Belle II DEPFET pixel detector

    NASA Astrophysics Data System (ADS)

    Moser, Hans-Günther

    2016-09-01

    The Belle II experiment at KEK (Tsukuba, Japan) will explore heavy flavour physics (B, charm and tau) starting in 2018 with unprecedented precision. Charged particles are tracked by a two-layer DEPFET pixel device (PXD), a four-layer silicon strip detector (SVD) and the central drift chamber (CDC). The PXD will consist of two layers at radii of 14 mm and 22 mm with 8 and 12 ladders, respectively. The pixel sizes will vary between 50 μm×(55-60) μm in the first layer and 50 μm×(70-85) μm in the second layer, to optimize the charge sharing efficiency. These innermost layers have to cope with high background occupancy, high radiation and must have minimal material to reduce multiple scattering. These challenges are met using the DEPFET technology. Each pixel is a FET integrated on a fully depleted silicon bulk. The signal charge collected in the 'internal gate' modulates the FET current resulting in a first stage amplification and therefore very low noise. This allows very thin sensors (75 μm) reducing the overall material budget of the detector (0.21% X0). Four-fold multiplexing of the column-parallel readout allows a full frame of the pixel matrix to be read out in only 20 μs while keeping the power consumption low enough for air cooling. Only the active electronics outside the detector acceptance has to be cooled actively with a two phase CO2 system. Furthermore the DEPFET technology offers the unique feature of an electronic shutter which allows the detector to operate efficiently in the continuous injection mode of superKEKB.

  10. Status of the CMS pixel project

    SciTech Connect

    Uplegger, Lorenzo; /Fermilab

    2008-01-01

    The Compact Muon Solenoid Experiment (CMS) will start taking data at the Large Hadron Collider (LHC) in 2008. The closest detector to the interaction point is the silicon pixel detector which is the heart of the tracking system. It consists of three barrel layers and two pixel disks on each side of the interaction point for a total of 66 million channels. Its proximity to the interaction point means there will be very large particle fluences and therefore a radiation-tolerant design is necessary. The pixel detector will be crucial to achieve a good vertex resolution and will play a key role in pattern recognition and track reconstruction. The results from test beam runs prove that the expected performance can be achieved. The detector is currently being assembled and will be ready for insertion into CMS in early 2008. During the assembly phase, a thorough electronic test is being done to check the functionality of each channel to guarantee the performance required to achieve the physics goals. This report will present the final detector design, the status of the production as well as results from test beam runs to validate the expected performance.

  11. Pixel electronics for the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Fischer, P.

    2001-06-01

    The ATLAS experiment at LHC will use 3 barrel layers and 2×5 disks of silicon pixel detectors as the innermost elements of the semiconductor tracker. The basic building blocks are pixel modules with an active area of 16.4 mm×60.8 mm which include an n⁺-on-n silicon sensor and 16 VLSI front-end (FE) chips. Every FE chip contains a low-power, high-speed charge-sensitive preamplifier, a fast discriminator, and a readout system which operates at the 40 MHz rate of LHC. The addresses of hit pixels (as well as low-resolution pulse height information) are stored on the FE chips until the arrival of a level-1 trigger signal. Hits are then transferred to a module controller chip (MCC) which collects the data of all 16 FE chips, builds complete events and sends the data through two optical links to the data acquisition system. The MCC receives clock and data through an additional optical link and provides timing and configuration information for the FE chips. Two additional chips are used to amplify and decode the pin diode signal and to drive the VCSEL laser diodes of the optical links.

  12. Photovoltaic Retinal Prosthesis with High Pixel Density

    PubMed Central

    Mathieson, Keith; Loudin, James; Goetz, Georges; Huie, Philip; Wang, Lele; Kamins, Theodore I.; Galambos, Ludwig; Smith, Richard; Harris, James S.; Sher, Alexander; Palanker, Daniel

    2012-01-01

    Retinal degenerative diseases lead to blindness due to loss of the “image capturing” photoreceptors, while neurons in the “image processing” inner retinal layers are relatively well preserved. Electronic retinal prostheses seek to restore sight by electrically stimulating surviving neurons. Most implants are powered through inductive coils, requiring complex surgical methods to implant the coil-decoder-cable-array systems, which deliver energy to stimulating electrodes via intraocular cables. We present a photovoltaic subretinal prosthesis, in which silicon photodiodes in each pixel receive power and data directly through pulsed near-infrared illumination and electrically stimulate neurons. Stimulation was produced in normal and degenerate rat retinas, with pulse durations from 0.5 to 4 ms, and threshold peak irradiances from 0.2 to 10 mW/mm2, two orders of magnitude below the ocular safety limit. Neural responses were elicited by illuminating a single 70 μm bipolar pixel, demonstrating the possibility of a fully-integrated photovoltaic retinal prosthesis with high pixel density. PMID:23049619

  13. Soil moisture variability within remote sensing pixels

    SciTech Connect

    Charpentier, M.A.; Groffman, P.M.

    1992-11-30

    This work is part of the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), an international land-surface-atmosphere experiment aimed at improving the way climate models represent energy, water, heat, and carbon exchanges, and improving the utilization of satellite based remote sensing to monitor such parameters. This paper addresses the question of soil moisture variation within the field of view of a remote sensing pixel. Remote sensing is the only practical way to sense soil moisture over large areas, but it is known that there can be large variations of soil moisture within the field of view of a pixel. The difficulty with this is that many processes, such as gas exchange between surface and atmosphere can vary dramatically with moisture content, and a small wet spot, for example, can have a dramatic impact on such processes, and thereby bias remote sensing data results. Here the authors looked at the impact of surface topography on the level of soil moisture, and the interaction of both on the variability of soil moisture sensed by a push broom microwave radiometer (PBMR). In addition the authors looked at the question of whether variations of soil moisture within pixel size areas could be used to assign errors to PBMR generated soil moisture data.

  14. Photovoltaic retinal prosthesis with high pixel density

    NASA Astrophysics Data System (ADS)

    Mathieson, Keith; Loudin, James; Goetz, Georges; Huie, Philip; Wang, Lele; Kamins, Theodore I.; Galambos, Ludwig; Smith, Richard; Harris, James S.; Sher, Alexander; Palanker, Daniel

    2012-06-01

    Retinal degenerative diseases lead to blindness due to loss of the `image capturing' photoreceptors, while neurons in the `image-processing' inner retinal layers are relatively well preserved. Electronic retinal prostheses seek to restore sight by electrically stimulating the surviving neurons. Most implants are powered through inductive coils, requiring complex surgical methods to implant the coil-decoder-cable-array systems that deliver energy to stimulating electrodes via intraocular cables. We present a photovoltaic subretinal prosthesis, in which silicon photodiodes in each pixel receive power and data directly through pulsed near-infrared illumination and electrically stimulate neurons. Stimulation is produced in normal and degenerate rat retinas, with pulse durations of 0.5-4 ms, and threshold peak irradiances of 0.2-10 mW mm-2, two orders of magnitude below the ocular safety limit. Neural responses were elicited by illuminating a single 70 µm bipolar pixel, demonstrating the possibility of a fully integrated photovoltaic retinal prosthesis with high pixel density.

  15. Experimental Measurements of the Convolute Plasma on the Z-Machine*

    NASA Astrophysics Data System (ADS)

    Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; McBride, R. D.; Rochau, G. A.; Jones, B.; Ampleford, D. J.; Sinars, D. B.; Bailey, J. E.; Stygar, W. A.; Savage, M. E.; Jones, M.; Edens, A. D.; Lopez, M. R.; Stambulchik, E.; Maron, Y.; Rose, D. V.; Welch, D. R.

    2011-10-01

    Post-hole convolutes are used in large pulsed power devices to combine the current from several self-magnetically insulated transmission lines at the load. The efficiency of Z's post-hole convolute has decreased with increasing electrical power. Losses as high as 20% of the peak current have been recorded on the most lossy shots. Spectroscopic measurements of the plasma that forms in the convolute are underway. Initial results show that there is a strong correlation between convolute plasma density and the load. This presentation will cover convolute plasma behavior and loss current for several load configurations on the Z-Machine. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  16. CMOS Active Pixel Sensor Technology and Reliability Characterization Methodology

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Pain, Bedabrata; Kayaii, Sammy

    2006-01-01

    This paper describes the technology, design features and reliability characterization methodology of a CMOS Active Pixel Sensor. Both overall chip reliability and pixel reliability are projected for the imagers.

  17. Effect of mixed (boundary) pixels on crop proportion estimation

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.

    1984-01-01

    In estimating acreage proportions of crop types in a segment using Landsat data, a considerable problem is caused by the presence of mixed pixels. Due to a lack of understanding of their spectral characteristics, mixed pixels have been treated in the past as pure while clustering and classifying the segment data. This paper examines this approach of treating mixed pixels as pure pixels and the effect of mixed pixels on the bias and variance of a crop type proportion estimate. First, the spectral response of a boundary pixel is modeled and an analytical expression for the bias and variance of a proportion estimate is obtained. This is followed by a numerical illustration of the effect of mixed pixels on bias and variance. It is shown that as the size of the mixed pixel class increases in a segment, the variance increases; however, such an increase does not always affect the bias of the proportion estimate.

  18. Fabrication of X-ray Microcalorimeter Focal Planes Composed of Two Distinct Pixel Types.

    PubMed

    Wassell, E J; Adams, J S; Bandler, S R; Betancourt-Martinez, G L; Chiao, M P; Chang, M P; Chervenak, J A; Datesman, A M; Eckart, M E; Ewin, A J; Finkbeiner, F M; Ha, J Y; Kelley, R; Kilbourne, C A; Miniussi, A R; Sakai, K; Porter, F; Sadleir, J E; Smith, S J; Wakeham, N A; Yoon, W

    2017-06-01

    We are developing superconducting transition-edge sensor (TES) microcalorimeter focal planes for versatility in meeting specifications of X-ray imaging spectrometers including high count-rate, high energy resolution, and large field-of-view. In particular, a focal plane composed of two sub-arrays: one of fine-pitch, high count-rate devices and the other of slower, larger pixels with similar energy resolution, offers promise for the next generation of astrophysics instruments, such as the X-ray Integral Field Unit (X-IFU) instrument on the European Space Agency's Athena mission. We have based the sub-arrays of our current design on successful pixel designs that have been demonstrated separately. Pixels with an all gold X-ray absorber on 50 and 75 micron scales where the Mo/Au TES sits atop a thick metal heatsinking layer have shown high resolution and can accommodate high count-rates. The demonstrated larger pixels use a silicon nitride membrane for thermal isolation, thinner Au and an added bismuth layer in a 250 micron square absorber. To tune the parameters of each sub-array requires merging the fabrication processes of the two detector types. We present the fabrication process for dual production of different X-ray absorbers on the same substrate, thick Au on the small pixels and thinner Au with a Bi capping layer on the larger pixels to tune their heat capacities. The process requires multiple electroplating and etching steps, but the absorbers are defined in a single ion milling step. We demonstrate methods for integrating heatsinking of the two types of pixel into the same focal plane consistent with the requirements for each sub-array, including the limiting of thermal crosstalk. We also discuss fabrication process modifications for tuning the intrinsic transition temperature (Tc) of the bilayers for the different device types through variation of the bilayer thicknesses. The latest results on these "hybrid" arrays will be presented.

  19. Fabrication of X-ray Microcalorimeter Focal Planes Composed of Two Distinct Pixel Types

    NASA Technical Reports Server (NTRS)

    Wassell, Edward J.; Adams, Joseph S.; Bandler, Simon R.; Betancour-Martinez, Gabriele L; Chiao, Meng P.; Chang, Meng Ping; Chervenak, James A.; Datesman, Aaron M.; Eckart, Megan E.; Ewin, Audrey J.

    2016-01-01

    We develop superconducting transition-edge sensor (TES) microcalorimeter focal planes for versatility in meeting the specifications of X-ray imaging spectrometers, including high count rate, high energy resolution, and large field of view. In particular, a focal plane composed of two subarrays, one of fine pitch, high count-rate devices and the other of slower, larger pixels with similar energy resolution, offers promise for the next generation of astrophysics instruments, such as the X-ray Integral Field Unit Instrument on the European Space Agency's ATHENA mission. We have based the subarrays of our current design on successful pixel designs that have been demonstrated separately. Pixels with an all-gold X-ray absorber on 50 and 75 micron pitch, where the Mo/Au TES sits atop a thick metal heatsinking layer, have shown high resolution and can accommodate high count rates. The demonstrated larger pixels use a silicon nitride membrane for thermal isolation, thinner Au, and an added bismuth layer in a 250-micron-square absorber. To tune the parameters of each subarray requires merging the fabrication processes of the two detector types. We present the fabrication process for dual production of different X-ray absorbers on the same substrate, thick Au on the small pixels and thinner Au with a Bi capping layer on the larger pixels to tune their heat capacities. The process requires multiple electroplating and etching steps, but the absorbers are defined in a single ion-milling step. We demonstrate methods for integrating the heatsinking of the two types of pixel into the same focal plane consistent with the requirements for each subarray, including the limiting of thermal crosstalk. We also discuss fabrication process modifications for tuning the intrinsic transition temperature (Tc) of the bilayers for the different device types through variation of the bilayer thicknesses. The latest results on these 'hybrid' arrays will be presented.

  20. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.

    1981-01-01

    Plotted transects made from south Texas daytime HCMM data show the effect of subvisible cirrus (SCi) clouds in the emissive (IR) band but the effect is unnoticeable in the reflective (VIS) band. The depression of satellite-indicated temperatures was greatest in the center of SCi streamers and tapered off at the edges. Pixels of uncontaminated land and water features in the HCMM test area shared identical VIS and IR digital count combinations with other pixels representing similar features. A minimum of 0.015 percent repeats of identical VIS-IR combinations are characteristic of land and water features in a scene of 30 percent cloud cover. This increases to 0.021 percent or more when the scene is clear. Pixels having shared VIS-IR combinations less than these amounts are considered to be cloud contaminated in the cluster screening method. About twenty percent of SCi was machine-indistinguishable from land features in two-dimensional spectral space (VIS vs IR).

  1. Development of Kilo-Pixel Arrays of Transition-Edge Sensors for X-Ray Spectroscopy

    NASA Technical Reports Server (NTRS)

    Adams, J. S.; Bandler, S. R.; Busch, S. E.; Chervenak, J. A.; Chiao, M. P.; Eckart, M. E.; Ewin, A. J.; Finkbeiner, F. M.; Kelley, R. L.; Kelly, D. P.

    2012-01-01

    We are developing kilo-pixel arrays of transition-edge sensor (TES) microcalorimeters for future X-ray astronomy observatories or for use in laboratory astrophysics applications. For example, Athena/XMS (currently under study by the European Space Agency) would require a close-packed 32x32 pixel array on a 250-micron pitch with < 3.0 eV full-width-half-maximum energy resolution at 6 keV and at count-rates of up to 50 counts/pixel/second. We present characterization of 32x32 arrays. These detectors will be read out using state-of-the-art SQUID-based time-domain multiplexing (TDM). We will also present the latest results in integrating these detectors and the TDM readout technology into a 16 row x N column field-able instrument.

  2. Using Dark Images to Characterize the Stability of Pixels in the WFC3/UVIS Detector

    NASA Astrophysics Data System (ADS)

    Bourque, Matthew; Borncamp, David; Baggett, Sylvia M.; Grogin, Norman A.; WFC3 Team

    2017-06-01

    The Ultraviolet-Visible (UVIS) detector on board the Hubble Space Telescope's (HST) Wide Field Camera 3 (WFC3) instrument has been acquiring 'dark' images on a daily basis since its installation in 2009. These dark images are 900-second exposures with the shutter closed so as to measure the inherent dark current of the detector. Using these dark exposures, we have constructed 'pixel history' images in which a specific column of the detector is extracted from each dark and placed into a new time-ordered array. We discuss how the pixel history images are used to characterize the stability of each pixel over time, as well as current trends in the WFC3/UVIS dark current.

  3. Characterization of a 2-mm thick, 16x16 Cadmium-Zinc-Telluride Pixel Array

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Richardson, Georgia; Mitchell, Shannon; Ramsey, Brian; Seller, Paul; Sharma, Dharma

    2003-01-01

    The detector under study is a 2-mm-thick, 16x16 Cadmium-Zinc-Telluride pixel array with a pixel pitch of 300 microns and inter-pixel gap of 50 microns. This detector is a precursor to that which will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs to have a spatial resolution of around 200 microns in order to take full advantage of the HERO angular resolution. We discuss to what degree charge sharing will degrade energy resolution but will improve our spatial resolution through position interpolation. In addition, we discuss electric field modeling for this specific detector geometry and the role this mapping will play in terms of charge sharing and charge loss in the detector.

  5. A self-adaptive image encryption scheme with half-pixel interchange permutation operation

    NASA Astrophysics Data System (ADS)

    Ye, Ruisong; Liu, Li; Liao, Minyu; Li, Yafang; Liao, Zikang

    2017-01-01

    A plain-image-dependent image encryption scheme with a half-pixel-level swapping permutation strategy is proposed. In the new permutation operation, a pixel-swapping operation between four higher bit-planes and four lower bit-planes is employed to replace the traditional confusion operation, which not only improves the conventional permutation efficiency within the plain-image, but also changes all the pixel gray values. The control parameters of the generalized Arnold map applied for the permutation operation are related to the plain-image content, so the scheme can resist chosen-plaintext and known-plaintext attacks effectively. To enhance the security of the proposed image encryption, one multimodal skew tent map is applied to generate a pseudo-random gray value sequence for the diffusion operation. Simulations have been carried out thoroughly to demonstrate that the proposed image encryption scheme is highly secure thanks to its large key space and efficient permutation-diffusion operations.
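
    The half-pixel-level swap can be illustrated with a toy sketch: a generalized Arnold map picks a partner position for each pixel, and the four high bit-planes of one pixel are exchanged with the four low bit-planes of the other. The map parameters a and b are fixed here for illustration (in the scheme they are derived from the plain image), and the skew-tent-map diffusion stage is omitted.

        import numpy as np

        def arnold_map(i, j, a, b, n):
            # Generalized Arnold (cat) map on an n x n grid (area preserving).
            return (i + a * j) % n, (b * i + (a * b + 1) * j) % n

        def half_pixel_swap(img, a=3, b=5):
            # Exchange the high nibble (four upper bit-planes) of each pixel with
            # the low nibble of the pixel at its Arnold-mapped position.
            n = img.shape[0]
            out = img.astype(np.uint8).copy()
            for i in range(n):
                for j in range(n):
                    k, l = arnold_map(i, j, a, b, n)
                    hi, lo = int(out[i, j]) >> 4, int(out[k, l]) & 0x0F
                    out[i, j] = (int(out[i, j]) & 0x0F) | (lo << 4)
                    out[k, l] = (int(out[k, l]) & 0xF0) | hi
            return out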

  6. Improving class separability using extended pixel planes: a comparative study

    PubMed Central

    Orlov, Nikita V.; Eckley, D. Mark; Shamir, Lior; Goldberg, Ilya G.

    2011-01-01

    In this work we explored class separability in feature spaces built on extended representations of pixel planes (EPP) produced using scale pyramid, subband pyramid, and image transforms. The image transforms included Chebyshev, Fourier, wavelets, gradient and Laplacian; we also utilized transform combinations, including Fourier, Chebyshev and wavelets of the gradient transform, as well as Fourier of the Laplacian transform. We demonstrate that all three types of EPP promote class separation. We also explored the effect of EPP on suboptimal feature libraries, using only textural features in one case and only Haralick features in another. The effect of EPP was especially clear for these suboptimal libraries, where the transform-based representations were found to increase separability to a greater extent than scale or subband pyramids. EPP can be particularly useful in new applications where optimal features have not yet been developed. PMID:23074356

  7. Improving class separability using extended pixel planes: a comparative study.

    PubMed

    Orlov, Nikita V; Eckley, D Mark; Shamir, Lior; Goldberg, Ilya G

    2012-09-01

    In this work we explored class separability in feature spaces built on extended representations of pixel planes (EPP) produced using scale pyramid, subband pyramid, and image transforms. The image transforms included Chebyshev, Fourier, wavelets, gradient and Laplacian; we also utilized transform combinations, including Fourier, Chebyshev and wavelets of the gradient transform, as well as Fourier of the Laplacian transform. We demonstrate that all three types of EPP promote class separation. We also explored the effect of EPP on suboptimal feature libraries, using only textural features in one case and only Haralick features in another. The effect of EPP was especially clear for these suboptimal libraries, where the transform-based representations were found to increase separability to a greater extent than scale or subband pyramids. EPP can be particularly useful in new applications where optimal features have not yet been developed.

  8. A Pixelated Emission Detector for RadiOisotopes (PEDRO)

    NASA Astrophysics Data System (ADS)

    Dimmock, M. R.; Gillam, J. E.; Beveridge, T. E.; Brown, J. M. C.; Lewis, R. A.; Hall, C. J.

    2009-12-01

    The Pixelated Emission Detector for RadiOisotopes (PEDRO) is a hybrid imager designed for the measurement of single photon emission from small animals. The proof-of-principle device currently under development consists of a Compton-camera situated behind a mechanical modulator. The combination of mechanical and electronic (hybrid) collimation should provide optimal detection characteristics over a broad spectral range (30 keV≤Eγ≤511 keV), through a reduction in the sensitivity-resolution trade-off, inherent in conventional mechanically collimated configurations. This paper presents GEANT4 simulation results from the PEDRO geometry operated only as a Compton camera in order to gauge its advantage when used in concert with mechanical collimation—regardless of the collimation pattern. The optimization of multiple detector spacing and resolution parameters is performed utilizing the Median Distance of Closest Approach (MDCA) and has been shown to result in an optimum distance, beyond which only a loss in sensitivity occurs.

  9. Experimental evaluation and simulation of multi-pixel cadmium-zinc-telluride hard-X-ray detectors

    NASA Astrophysics Data System (ADS)

    Gaskin, Jessica Anne

    2004-08-01

    This dissertation describes the evaluation of many-pixel Cadmium-Zinc-Telluride (CdZnTe) hard-X-ray detectors for future use with the High Energy Replicated Optics (HERO) telescope being developed at Marshall Space Flight Center. The detector requirements for the HERO application are good energy resolution (sufficient to resolve cyclotron features and nuclear lines), spatial resolution of ˜200 μm, minimal charge loss of absorbed X rays, and minimal sensitivity to the background environment. This research concentrates on assessing the suitability of these detectors for the focus of HERO, and includes the development of a simulation of the physics involved in an X-ray-detector interaction, a study of the intrinsic material properties, measurements with prototype detectors such as the energy and spatial resolution, charge loss, and X-ray background reduction through 3-dimensional depth sensing. Two types of detectors were available for evaluation. The first type includes 1-mm and 2-mm thick 4 x 4 pixel arrays, developed by Metorex Inc. and Baltic Scientific Instruments. The pixel size is 650 μm with inter-pixel gap of 100 μm. Each of the 16 pixels is wired to a charge sensitive preamplifier and then fed to external electronics. The second detector type includes 1-mm and 2-mm thick 16 x 16 pixel arrays with pixel size of 250 μm square and 50 μm inter-pixel gap. Each array is bonded to an Application Specific Integrated Circuit (ASIC) readout chip, developed by Rutherford Appleton Laboratory (RAL) and fabricated by Metorex Inc. The best energy resolution for both detector types is ˜2% at 60 keV. However, the energy resolution across the 16 x 16 pixel arrays varies dramatically, possibly due to the bonding technique used between the CdZnTe crystal and the ASIC. Position interpolation through charge sharing improves spatial resolution on the 16 x 16 pixel arrays from 300 μm to ˜250 μm. Minimal charge loss was measured for the 16 x 16 pixel arrays. Preliminary

  10. Measurements with MÖNCH, a 25 μm pixel pitch hybrid pixel detector

    NASA Astrophysics Data System (ADS)

    Ramilli, M.; Bergamaschi, A.; Andrae, M.; Brückner, M.; Cartier, S.; Dinapoli, R.; Fröjdh, E.; Greiffenberg, D.; Hutwelker, T.; Lopez-Cuenca, C.; Mezza, D.; Mozzanica, A.; Ruat, M.; Redford, S.; Schmitt, B.; Shi, X.; Tinti, G.; Zhang, J.

    2017-01-01

    MÖNCH is a hybrid silicon pixel detector based on charge integration and with analog readout, featuring a pixel size of 25×25 μm². The latest working prototype consists of an array of 400×400 identical pixels for a total active area of 1×1 cm². Its design is optimized for the single photon regime. An exhaustive characterization of this large area prototype has been carried out in the past months, and it confirms an ENC on the order of 35 electrons RMS and a dynamic range of ~4×12 keV photons in high gain mode, which increases to ~100×12 keV photons with the lowest gain setting. The low noise levels of MÖNCH make it a suitable candidate for X-ray detection at energies around 1 keV and below. Imaging applications in particular can benefit significantly from the use of MÖNCH: due to its extremely small pixel pitch, the detector intrinsically offers excellent position resolution. Moreover, in low flux conditions, charge sharing between neighboring pixels allows the use of position interpolation algorithms which grant a resolution at the micrometer level. Its energy reconstruction and imaging capabilities have been tested for the first time at a low energy beamline at PSI, with photon energies between 1.75 keV and 3.5 keV, and results will be shown.

  11. ACS/WFC Pixel History, Bringing the Pixels Back to Science

    NASA Astrophysics Data System (ADS)

    Borncamp, David; Grogin, Norman; Bourque, Matthew; Ogaz, Sara

    2017-06-01

    Excess thermal energy within a Charge-Coupled Device (CCD) results in excess electrical current that is trapped within the lattice structure of the electronics. This excess signal from the CCD itself can be present through multiple exposures, which will have an adverse effect on its science performance unless it is corrected for. The traditional way to correct for this extra charge is to take occasional long-exposure images with the camera shutter closed. These images, generally referred to as "dark" images, allow for the measurement of thermal-electron contamination at each pixel of the CCD. This so-called "dark current" can then be subtracted from the science images by re-scaling to the science exposure times. Pixels that have signal above a certain value are traditionally marked as "hot" and flagged in the data quality array. Many users will discard these pixels as being bad. However, these pixels may not be bad in the sense that they cannot be reliably dark-subtracted; if these pixels are shown to be stable over a given anneal period, the charge can be properly subtracted and the extra Poisson noise from this dark current can be taken into account and put into the error arrays.

  12. High-speed camera based on a CMOS active pixel sensor

    NASA Astrophysics Data System (ADS)

    Bloss, Hans S.; Ernst, Juergen D.; Firla, Heidrun; Schmoelz, Sybille C.; Gick, Stephan K.; Lauxtermann, Stefan C.

    2000-02-01

    Standard CMOS technologies offer great flexibility in the design of image sensors, which is a big advantage especially for high-framerate systems. For this application we have integrated an active pixel sensor with 256 × 256 pixels using a standard 0.5 μm CMOS technology. With 16 analog outputs and a clockrate of 25-30 MHz per output, a continuous framerate of more than 50000 Hz is achieved. A global synchronous shutter is provided, but it required a more complex pixel circuit of five transistors and a special pixel layout to get a good optical fill factor. The active area of the photodiode is 9 × 9 μm. These square diodes are arranged in a chess pattern, while the remaining space is used for the electronic circuit. Fill factor is nearly 50 percent. The sensor is embedded in a high-speed camera system with 16 ADCs, 256 Mbyte of dynamic RAM, FPGAs for high-speed real-time image processing, and a PC for user interface, data archive and network operation. Fixed pattern noise, which is always a problem of CMOS sensors, and the mismatching of the 16 analog channels are removed by a pixelwise gain-offset correction. After this, the chess pattern requires a reconstruction of all the 'missing' pixels, which can be done by a special edge-sensitive algorithm. So a high-quality 512 × 256 image with low remaining noise can be displayed. Sensor, architecture and processing are also suitable for color imaging.
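
    The correction and reconstruction steps described above can be sketched as follows; the simple four-neighbour average is a simplified stand-in for the camera's edge-sensitive interpolation, and the calibration maps are assumed inputs.

        import numpy as np

        def correct_fixed_pattern(raw, offset, gain):
            # Pixelwise gain-offset correction; the offset and gain maps come from
            # calibration frames.
            return (raw - offset) * gain

        def fill_chess_pattern(img, missing_mask):
            # Fill the pixels left empty by the chess-board diode layout with the
            # average of their four direct neighbours (simplified interpolation).
            padded = np.pad(img, 1, mode="edge")
            neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
            out = img.astype(float)
            out[missing_mask] = neigh[missing_mask]
            return out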

  13. Where can pixel counting area estimates meet user-defined accuracy requirements?

    NASA Astrophysics Data System (ADS)

    Waldner, François; Defourny, Pierre

    2017-08-01

    Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the bias of the estimates remains below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.
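
    A sketch of the estimator and of the bias mechanism discussed above; the confusion-matrix counts (commission and omission) are assumed inputs from an accuracy assessment.

        import numpy as np

        def pixel_counting_area(class_map, target_class, pixel_area):
            # Pixel-counting estimator: pixels mapped to the class times pixel area.
            return np.count_nonzero(class_map == target_class) * pixel_area

        def pixel_counting_bias(commission, omission, pixel_area):
            # Pixels wrongly included (commission) and wrongly excluded (omission)
            # only cancel when their counts are equal; otherwise the estimator is
            # biased by their difference times the pixel area.
            return (commission - omission) * pixel_area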

  14. Characterization of pixelated cadmium-zinc-telluride detectors for astrophysical application

    NASA Astrophysics Data System (ADS)

    Gaskin, Jessica A.; Sharma, Dharma P.; Ramsey, Brian D.; Mitchell, Shannon; Seller, Paul

    2004-02-01

    Charge sharing and charge loss measurements for a many-pixel, Cadmium-Zinc-Telluride (CdZnTe) detector are discussed. These properties that are set by the material characteristics and the detector geometry help to define the limiting energy resolution and spatial resolution of the detector in question. The detector consists of a 1-mm-thick piece of CdZnTe sputtered with a 16x16 array of pixels with a 300 micron pixel pitch (inter-pixel gap is 50 microns). This crystal is bonded to a custom-built readout chip (ASIC) providing all front-end electronics to each of the 256 independent pixels. These types of detectors act as precursors to that which will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs to have a spatial resolution of around 200 microns in order to take full advantage of the HERO angular resolution. We discuss to what degree charge sharing degrades energy resolution through charge loss and improves spatial resolution through position interpolation.

  15. Visualization of vasculature with convolution surfaces: method, validation and evaluation.

    PubMed

    Oeltze, Steffen; Preim, Bernhard

    2005-04-01

    We present a method for visualizing vasculature based on clinical computed tomography or magnetic resonance data. The vessel skeleton as well as the diameter information per voxel serve as input. Our method adheres to these data, while producing smooth transitions at branchings and closed, rounded ends by means of convolution surfaces. We examine the filter design with respect to irritating bulges, unwanted blending and the correct visualization of the vessel diameter. The method has been applied to a large variety of anatomic trees. We discuss the validation of the method by means of a comparison to other visualization methods. Surface distance measures are carried out to perform a quantitative validation. Furthermore, we present the evaluation of the method which has been accomplished on the basis of a survey by 11 radiologists and surgeons.
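
    A convolution surface is an isosurface of the scalar field obtained by convolving a kernel along the skeleton; a minimal sketch of that idea for a polyline skeleton with per-vertex radii follows (the Gaussian kernel, the sampling density, and the function names are illustrative assumptions, not the authors' filter design):

```python
import numpy as np

def convolution_field(points, skeleton, radii, sigma_scale=0.5):
    """Scalar field from convolving a Gaussian kernel along a polyline skeleton.

    points   : (N, 3) query positions
    skeleton : (M, 3) ordered skeleton vertices
    radii    : (M,)   vessel radius at each skeleton vertex
    The vessel surface is then an isosurface {field == iso} of this function.
    """
    field = np.zeros(len(points))
    # approximate the line integral by densely resampling each segment
    for a, b, ra, rb in zip(skeleton[:-1], skeleton[1:], radii[:-1], radii[1:]):
        for t in np.linspace(0.0, 1.0, 16):
            c = (1 - t) * a + t * b                # sample point on the segment
            r = (1 - t) * ra + t * rb              # interpolated local radius
            sigma = sigma_scale * r
            d2 = np.sum((points - c) ** 2, axis=1)
            field += np.exp(-d2 / (2.0 * sigma ** 2))
    return field
```

    Extracting an isosurface of this field (e.g. with marching cubes) yields smooth blends at branchings automatically, because the contributions of adjacent segments simply add.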

  16. Training strategy for convolutional neural networks in pedestrian gender classification

    NASA Astrophysics Data System (ADS)

    Ng, Choon-Boon; Tay, Yong-Haur; Goi, Bok-Min

    2017-06-01

    In this work, we studied a strategy for training a convolutional neural network for pedestrian gender classification with a limited amount of labeled training data. Unsupervised learning by k-means clustering on pedestrian images was used to learn the filters that initialize the first layer of the network. As a form of pre-training, supervised learning for the related task of pedestrian classification was performed. Finally, the network was fine-tuned for gender classification. We found that this strategy improved the network's generalization ability in gender classification, achieving better test results than random weight initialization and proving slightly more beneficial than merely initializing the first-layer filters by unsupervised learning. This shows that unsupervised learning followed by pre-training with pedestrian images is an effective strategy for learning useful features for pedestrian gender classification.
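
    A minimal sketch of the first stage of such a strategy, learning first-layer filters by k-means clustering of image patches (the patch size, normalization, and function names are illustrative assumptions, not the authors' exact setup):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_first_layer_filters(images, num_filters=32, patch=5, n_samples=20000, seed=0):
    """Learn first-layer conv filters as k-means centroids of random image patches.

    images : (N, H, W) array of grayscale pedestrian crops.
    Returns an array of shape (num_filters, 1, patch, patch) that can be copied
    into the first convolutional layer before supervised pre-training.
    """
    rng = np.random.default_rng(seed)
    n, h, w = images.shape
    patches = np.empty((n_samples, patch * patch), dtype=np.float32)
    for i in range(n_samples):
        img = images[rng.integers(n)]
        y = rng.integers(h - patch + 1)
        x = rng.integers(w - patch + 1)
        p = img[y:y + patch, x:x + patch].astype(np.float32).ravel()
        patches[i] = (p - p.mean()) / (p.std() + 1e-8)   # contrast-normalize each patch
    km = KMeans(n_clusters=num_filters, n_init=4, random_state=seed).fit(patches)
    return km.cluster_centers_.reshape(num_filters, 1, patch, patch).astype(np.float32)

# The returned centroids seed the first conv layer; the network is then
# pre-trained on pedestrian-vs-background classification and finally
# fine-tuned on the gender labels.
```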

  17. Truncation Depth Rule-of-Thumb for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
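
    A quick numerical check of the revised rule (the memory length and code rates below are illustrative examples, not figures from the report):

```python
def truncation_depth(memory, rate):
    """Rule of thumb from the report: depth = 2.5 * m / (1 - r)."""
    return 2.5 * memory / (1.0 - rate)

# For a memory-6 code:
#   rate 1/2 -> 2.5*6/0.50 = 30  (matches the classic 'five times memory' rule)
#   rate 3/4 -> 2.5*6/0.25 = 60  (the classic rule would badly underestimate this)
for r in (1/2, 2/3, 3/4, 7/8):
    print(f"rate {r:.3f}: truncation depth = {truncation_depth(6, r):.0f}")
```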

  18. Radio frequency interference mitigation using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Akeret, J.; Chang, C.; Lucchi, A.; Refregier, A.

    2017-01-01

    We propose a novel approach for mitigating radio frequency interference (RFI) signals in radio data using the latest advances in deep learning. We employ a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. We train and assess the performance of this network using the HIDE & SEEK radio data simulation and processing packages, as well as early Science Verification data acquired with the 7m single-dish telescope at the Bleien Observatory. We find that our U-Net implementation achieves accuracy competitive with classical RFI mitigation algorithms such as SEEK's SumThreshold implementation. We publish our U-Net software package on GitHub under a GPLv3 license.
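
    A minimal two-level U-Net-style network for per-pixel classification of a time-frequency waterfall into clean signal versus RFI (the channel widths, depth, and class names are illustrative assumptions, not the published architecture):

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net: encoder, bottleneck, decoder with one skip connection."""
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc = double_conv(in_ch, 16)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = double_conv(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = double_conv(32, 16)           # 16 (skip) + 16 (upsampled) channels
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        e = self.enc(x)                          # full-resolution features
        b = self.bottleneck(self.down(e))        # half-resolution context
        u = self.up(b)                           # back to full resolution
        d = self.dec(torch.cat([u, e], dim=1))   # fuse skip connection
        return self.head(d)                      # per-pixel class scores

# waterfall batch: (batch, 1, time, frequency), dimensions divisible by 2
# logits = TinyUNet()(torch.randn(4, 1, 64, 128))   # -> (4, 2, 64, 128)
```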

  19. Learning to Generate Chairs, Tables and Cars with Convolutional Networks.

    PubMed

    Dosovitskiy, Alexey; Springenberg, Jost Tobias; Tatarchenko, Maxim; Brox, Thomas

    2017-04-01

    We train generative 'up-convolutional' neural networks which are able to generate images of objects given object style, viewpoint, and color. We train the networks on rendered 3D models of chairs, tables, and cars. Our experiments show that the networks do not merely learn all images by heart, but rather find a meaningful representation of 3D models allowing them to assess the similarity of different models, interpolate between given views to generate the missing ones, extrapolate views, and invent new objects not present in the training set by recombining training instances, or even two different object classes. Moreover, we show that such generative networks can be used to find correspondences between different objects from the dataset, outperforming existing approaches on this task.
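
    A minimal sketch of an 'up-convolutional' decoder that maps a low-dimensional code (style, viewpoint, and color parameters) to an image by repeated transposed convolution (the layer sizes and names are illustrative assumptions, not the authors' network):

```python
import torch
import torch.nn as nn

class UpConvDecoder(nn.Module):
    """Maps a parameter vector to an RGB image via transposed convolutions."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.fc = nn.Linear(code_dim, 128 * 8 * 8)   # seed an 8x8 feature map
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(True),  # 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(True),   # 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),     # 64x64 RGB
        )

    def forward(self, code):
        x = self.fc(code).view(-1, 128, 8, 8)
        return self.up(x)

# the code vector could concatenate one-hot style, viewpoint angles, and color
# img = UpConvDecoder()(torch.randn(2, 64))   # -> (2, 3, 64, 64)
```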

  20. Rapid Exact Signal Scanning With Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Thom, Markus; Gritschneder, Franz

    2017-03-01

    A rigorous formulation of the dynamics of a signal processing scheme aimed at dense signal scanning without any loss in accuracy is introduced and analyzed. Related methods proposed in the recent past lack a satisfactory analysis of whether they actually fulfill any exactness constraints. This is improved through an exact characterization of the requirements for a sound sliding window approach. The tools developed in this paper are especially beneficial if Convolutional Neural Networks are employed, but can also be used as a more general framework to validate related approaches to signal scanning. The proposed theory helps to eliminate redundant computations and renders special case treatment unnecessary, resulting in a dramatic boost in efficiency particularly on massively parallel processors. This is demonstrated both theoretically in a computational complexity analysis and empirically on modern parallel processors.
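
    As a small illustration of the exact sliding-window equivalence that such schemes rely on, the check below compares applying a small convolutional classifier to every window of a 1-D signal against a single dense pass over the whole signal (the toy network and signal are assumptions for illustration only):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
win = 16                                   # window length the classifier was built for
net = nn.Sequential(                       # valid (no-padding) convolutions only
    nn.Conv1d(1, 8, kernel_size=5), nn.ReLU(),
    nn.Conv1d(8, 4, kernel_size=5), nn.ReLU(),
    nn.Conv1d(4, 1, kernel_size=win - 8),  # 'fully connected' head written as a conv
)

signal = torch.randn(1, 1, 128)

# per-window scanning: score each length-16 window independently
windows = [signal[..., i:i + win] for i in range(128 - win + 1)]
scores_windowed = torch.cat([net(w) for w in windows]).flatten()

# dense scanning: one pass over the full signal produces the same scores
scores_dense = net(signal).flatten()

print(torch.allclose(scores_windowed, scores_dense, atol=1e-6))   # True
```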