Science.gov

Sample records for pixel space convolution

  1. FAST PIXEL SPACE CONVOLUTION FOR COSMIC MICROWAVE BACKGROUND SURVEYS WITH ASYMMETRIC BEAMS AND COMPLEX SCAN STRATEGIES: FEBeCoP

    SciTech Connect

    Mitra, S.; Rocha, G.; Gorski, K. M.; Lawrence, C. R.; Huffenberger, K. M.; Eriksen, H. K.; Ashdown, M. A. J. E-mail: graca@caltech.edu E-mail: Charles.R.Lawrence@jpl.nasa.gov E-mail: h.k.k.eriksen@astro.uio.no

    2011-03-15

    Precise measurement of the angular power spectrum of the cosmic microwave background (CMB) temperature and polarization anisotropy can tightly constrain many cosmological models and parameters. However, accurate measurements can only be realized in practice provided all major systematic effects have been taken into account. Beam asymmetry, coupled with the scan strategy, is a major source of systematic error in scanning CMB experiments such as Planck, the focus of our current interest. We envision Monte Carlo methods to rigorously study and account for the systematic effect of beams in CMB analysis. Toward that goal, we have developed a fast pixel space convolution method that can simulate sky maps observed by a scanning instrument, taking into account real beam shapes and scan strategy. The essence is to pre-compute the 'effective beams' using a computer code, 'Fast Effective Beam Convolution in Pixel space' (FEBeCoP), that we have developed for the Planck mission. The code computes effective beams given the focal plane beam characteristics of the Planck instrument and the full history of actual satellite pointing, and performs very fast convolution of sky signals using the effective beams. In this paper, we describe the algorithm and the computational scheme that has been implemented. We also outline a few applications of the effective beams in the precision analysis of Planck data, for characterizing the CMB anisotropy and for detecting and measuring properties of point sources.
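
    As a rough illustration of the pixel-space convolution idea (not FEBeCoP itself), the sketch below applies a precomputed sparse "effective beam" operator to a map; the map size, neighbour set and beam weights are illustrative assumptions.

      import numpy as np
      from scipy.sparse import csr_matrix

      n_pix = 12 * 16**2                          # e.g. a HEALPix map at nside=16 (3072 pixels)
      rng = np.random.default_rng(0)
      sky = rng.standard_normal(n_pix)            # input sky signal, one value per pixel

      # Pretend each effective beam couples a pixel to itself and four neighbours.
      rows, cols, vals = [], [], []
      weights = [0.1, 0.2, 0.4, 0.2, 0.1]         # stand-in effective-beam weights
      for p in range(n_pix):
          for d, w in zip((-2, -1, 0, 1, 2), weights):
              rows.append(p)
              cols.append((p + d) % n_pix)        # stand-in for the real pixel neighbours
              vals.append(w)

      B = csr_matrix((vals, (rows, cols)), shape=(n_pix, n_pix))   # effective-beam operator
      observed = B @ sky                          # fast sparse pixel-space convolution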

  2. Fast convolution with free-space Green's functions

    NASA Astrophysics Data System (ADS)

    Vico, Felipe; Greengard, Leslie; Ferrando, Miguel

    2016-10-01

    We introduce a fast algorithm for computing volume potentials - that is, the convolution of a translation invariant, free-space Green's function with a compactly supported source distribution defined on a uniform grid. The algorithm relies on regularizing the Fourier transform of the Green's function by cutting off the interaction in physical space beyond the domain of interest. This permits the straightforward application of trapezoidal quadrature and the standard FFT, with superalgebraic convergence for smooth data. Moreover, the method can be interpreted as employing a Nystrom discretization of the corresponding integral operator, with matrix entries which can be obtained explicitly and rapidly. This is of use in the design of preconditioners or fast direct solvers for a variety of volume integral equations. The method proposed permits the computation of any derivative of the potential, at the cost of an additional FFT.
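
    A minimal sketch of FFT-based free-space convolution for a 2-D Laplace volume potential, using plain zero padding rather than the analytically regularized spectrum described above; the grid size, source and the crude treatment of the kernel singularity at r = 0 are assumptions.

      import numpy as np

      n, h = 128, 1.0 / 128                        # grid size and spacing on the unit square
      x = (np.arange(2 * n) - n) * h               # padded coordinates, centred on zero
      X, Y = np.meshgrid(x, x, indexing="ij")
      r = np.hypot(X, Y)

      G = np.zeros_like(r)                         # 2-D Laplace Green's function -log(r)/(2*pi)
      mask = r > 0
      G[mask] = -np.log(r[mask]) / (2 * np.pi)     # value at r = 0 left as 0 (crude regularization)

      f = np.zeros((n, n))                         # compactly supported source
      f[n // 2 - 8 : n // 2 + 8, n // 2 - 8 : n // 2 + 8] = 1.0

      f_pad = np.zeros((2 * n, 2 * n))             # zero padding prevents periodic wrap-around
      f_pad[:n, :n] = f
      u = np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(G)) * np.fft.fft2(f_pad)).real * h**2
      potential = u[:n, :n]                        # volume potential on the original grid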

  3. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy. PMID:24710398
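
    The following toy sketch conveys the flavour of the approach rather than the authors' actual transforms: a dense space-varying blur matrix is transformed on both sides with an orthonormal DCT, thresholded to a sparse core, and then applied through sparse matrix-vector products. The matrix size, blur model and threshold are assumptions.

      import numpy as np
      from scipy.fft import dct, idct
      from scipy.sparse import csr_matrix

      n = 256
      rng = np.random.default_rng(1)
      # Dense space-varying blur: a Gaussian whose width grows slowly with the row index.
      i = np.arange(n)
      sigma = 2.0 + 4.0 * i / n
      A = np.exp(-0.5 * ((i[None, :] - i[:, None]) / sigma[:, None]) ** 2)
      A /= A.sum(axis=1, keepdims=True)

      # Transform the matrix on both sides, then threshold to a sparse core.
      C = dct(dct(A, axis=0, norm="ortho"), axis=1, norm="ortho")
      C[np.abs(C) < 1e-3 * np.abs(C).max()] = 0.0
      S = csr_matrix(C)
      print("kept fraction:", S.nnz / n**2)

      x = rng.standard_normal(n)
      y_fast = idct(S @ dct(x, norm="ortho"), norm="ortho")   # sparse surrogate product
      y_ref = A @ x
      print("relative error:", np.linalg.norm(y_fast - y_ref) / np.linalg.norm(y_ref))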

  4. Fast space-varying convolution and its application in stray light reduction

    NASA Astrophysics Data System (ADS)

    Wei, Jianing; Cao, Guangzhi; Bouman, Charles A.; Allebach, Jan P.

    2009-02-01

    Space-varying convolution often arises in the modeling or restoration of images captured by optical imaging systems. For example, in applications such as microscopy or photography the distortions introduced by lenses typically vary across the field of view, so accurate restoration also requires the use of space-varying convolution. While space-invariant convolution can be efficiently implemented with the Fast Fourier Transform (FFT), space-varying convolution requires direct implementation of the convolution operation, which can be very computationally expensive when the convolution kernel is large. In this paper, we develop a general approach to the efficient implementation of space-varying convolution through the use of matrix source coding techniques. This method can dramatically reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. This approach leads to a tradeoff between the accuracy and speed of the operation that is closely related to the distortion-rate tradeoff that is commonly made in lossy source coding. We apply our method to the problem of stray light reduction for digital photographs, where convolution with a spatially varying stray light point spread function is required. The experimental results show that our algorithm can achieve a dramatic reduction in computation while achieving high accuracy.

  5. Efficient single pixel imaging in Fourier space

    NASA Astrophysics Data System (ADS)

    Bian, Liheng; Suo, Jinli; Hu, Xuemei; Chen, Feng; Dai, Qionghai

    2016-08-01

    Single pixel imaging (SPI) is a novel technique that captures 2D images using a bucket detector with a high signal-to-noise ratio, wide spectral range and low cost. Conventional SPI projects random illumination patterns to randomly and uniformly sample the entire scene's information. Constrained by Nyquist sampling theory, SPI needs either numerous projections or high computational cost to reconstruct the target scene, especially for high-resolution cases. To address this issue, we propose an efficient single pixel imaging technique (eSPI), which instead projects sinusoidal patterns for importance sampling of the target scene's spatial spectrum in Fourier space. Specifically, utilizing the centrosymmetric conjugation and sparsity priors of natural images' spatial spectra, eSPI sequentially projects two π/2-phase-shifted sinusoidal patterns to obtain each Fourier coefficient in the most informative spatial frequency bands. eSPI can reduce the number of requisite patterns by two orders of magnitude compared to conventional SPI, which greatly facilitates fast and high-resolution SPI.
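
    A toy simulation of the Fourier-coefficient acquisition step: each spatial frequency is probed with phase-shifted sinusoidal patterns and a single bucket measurement per pattern (four phase steps are used here for clarity; eSPI itself needs only two π/2-shifted patterns per coefficient). The scene, resolution and scaling are assumptions.

      import numpy as np

      H, W = 64, 64
      rng = np.random.default_rng(2)
      scene = rng.random((H, W))                        # unknown scene

      yy, xx = np.mgrid[0:H, 0:W]

      def bucket(pattern):                              # single-pixel (bucket) detector
          return float(np.sum(scene * pattern))

      def fourier_coeff(fy, fx):
          theta = 2 * np.pi * (fy * yy / H + fx * xx / W)
          d = [bucket(0.5 + 0.5 * np.cos(theta + p))
               for p in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
          return (d[0] - d[2]) + 1j * (d[1] - d[3])     # matches the 2-D DFT coefficient

      F = np.fft.fft2(scene)
      print(fourier_coeff(1, 2), F[1, 2])               # should agree up to rounding error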

  6. Classification of Urban Aerial Data Based on Pixel Labelling with Deep Convolutional Neural Networks and Logistic Regression

    NASA Astrophysics Data System (ADS)

    Yao, W.; Poleswki, P.; Krzystek, P.

    2016-06-01

    The recent success of deep convolutional neural networks (CNN) on a large number of applications can be attributed to large amounts of available training data and increasing computing power. In this paper, a semantic pixel labelling scheme for urban areas using a multi-resolution CNN and hand-crafted spatial-spectral features of airborne remotely sensed data is presented. Both CNN and hand-crafted features are applied to image/DSM patches to produce per-pixel class probabilities with an L1-norm regularized logistic regression classifier. Evidence theory infers a degree of belief for pixel labelling from the different sources to smooth regions, handling the conflicts between the two classifiers while reducing the uncertainty. The aerial data used in this study were provided by ISPRS as benchmark datasets for 2D semantic labelling tasks in urban areas and consist of two data sources: LiDAR and a color infrared camera. The test sites are parts of a city in Germany that is assumed to consist of typical object classes including impervious surfaces, trees, buildings, low vegetation, vehicles and clutter. The evaluation is based on the computation of pixel-based confusion matrices by random sampling. The performance of the strategy with respect to scene characteristics and method combination strategies is analyzed and discussed. The competitive classification accuracy can be explained not only by the nature of the input data sources (e.g., the above-ground height of the nDSM highlights the vertical dimension of houses, trees and even cars, while the near-infrared spectrum indicates vegetation), but also by the decision-level fusion of the CNN's texture-based approach with multichannel spatial-spectral hand-crafted features based on evidence combination theory.
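
    A hedged sketch of the per-pixel classification stage mentioned above: an L1-regularized logistic regression mapping per-pixel feature vectors (e.g. CNN activations stacked with hand-crafted spatial-spectral features) to class probabilities. The feature dimensionality, class count and random stand-in data are assumptions (requires scikit-learn).

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(8)
      n_pixels, n_features, n_classes = 5000, 48, 6     # e.g. six ISPRS urban classes
      X = rng.standard_normal((n_pixels, n_features))   # stand-in per-pixel feature vectors
      y = rng.integers(0, n_classes, n_pixels)          # stand-in reference labels

      clf = LogisticRegression(penalty="l1", solver="saga", max_iter=500)
      clf.fit(X, y)
      proba = clf.predict_proba(X)                      # per-pixel class probabilities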

  7. Partial fourier reconstruction through data fitting and convolution in k-space.

    PubMed

    Huang, Feng; Lin, Wei; Li, Yu

    2009-11-01

    A partial Fourier acquisition scheme has been widely adopted for fast imaging. There are two problems associated with the existing techniques. First, the majority of the existing techniques demodulate the phase information and cannot provide improved phase information over zero-padding. Second, serious artifacts can be observed in reconstruction when the phase changes rapidly because the low-resolution phase estimate in the image space is prone to error. To tackle these two problems, a novel and robust method is introduced for partial Fourier reconstruction, using k-space convolution. In this method, the phase information is implicitly estimated in k-space through data fitting; the approximated phase information is applied to recover the unacquired k-space data through Hermitian operation and convolution in k-space. In both spin echo and gradient echo imaging experiments, the proposed method consistently produced images with the lowest error level when compared to Cuppen's algorithm, projection onto convex sets-based iterative algorithm, and Homodyne algorithm. Significant improvements are observed in images with rapid phase change. Besides the improvement on magnitude, the phase map of the images reconstructed by the proposed method also has significantly lower error level than conventional methods.
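
    The sketch below shows only the Hermitian-symmetry step that underlies partial Fourier methods, filling unacquired k-space rows from the conjugate-symmetric half of a real-valued toy object; the paper's actual contribution, estimating phase by fitting a convolution kernel in k-space, is not reproduced here.

      import numpy as np

      N = 128
      img = np.zeros((N, N)); img[40:90, 30:100] = 1.0             # real-valued toy object
      k = np.fft.fftshift(np.fft.fft2(img))                        # full k-space, centred

      acq = np.zeros_like(k)
      acq[: N // 2 + 8, :] = k[: N // 2 + 8, :]                    # ~56% of the rows acquired

      filled = acq.copy()
      for ky in range(N // 2 + 8, N):                              # synthesize the missing rows
          for kx in range(N):
              # conjugate-symmetric partner of (ky, kx) about the k-space centre
              filled[ky, kx] = np.conj(acq[(N - ky) % N, (N - kx) % N])

      recon = np.fft.ifft2(np.fft.ifftshift(filled)).real
      print("max reconstruction error:", np.abs(recon - img).max())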

  8. A semiconductor radiation imaging pixel detector for space radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Kroupa, Martin; Bahadori, Amir; Campbell-Ricketts, Thomas; Empl, Anton; Hoang, Son Minh; Idarraga-Munoz, John; Rios, Ryan; Semones, Edward; Stoffle, Nicholas; Tlustos, Lukas; Turecek, Daniel; Pinsky, Lawrence

    2015-07-01

    Progress in the development of high-performance semiconductor radiation imaging pixel detectors based on technologies developed for use in high-energy physics applications has enabled the development of a completely new generation of compact low-power active dosimeters and area monitors for use in space radiation environments. Such detectors can provide real-time information concerning radiation exposure, along with detailed analysis of the individual particles incident on the active medium. Recent results from the deployment of detectors based on the Timepix from the CERN-based Medipix2 Collaboration on the International Space Station (ISS) are reviewed, along with a glimpse of developments to come. Preliminary results from Orion MPCV Exploration Flight Test 1 are also presented.

  9. A semiconductor radiation imaging pixel detector for space radiation dosimetry.

    PubMed

    Kroupa, Martin; Bahadori, Amir; Campbell-Ricketts, Thomas; Empl, Anton; Hoang, Son Minh; Idarraga-Munoz, John; Rios, Ryan; Semones, Edward; Stoffle, Nicholas; Tlustos, Lukas; Turecek, Daniel; Pinsky, Lawrence

    2015-07-01

    Progress in the development of high-performance semiconductor radiation imaging pixel detectors based on technologies developed for use in high-energy physics applications has enabled the development of a completely new generation of compact low-power active dosimeters and area monitors for use in space radiation environments. Such detectors can provide real-time information concerning radiation exposure, along with detailed analysis of the individual particles incident on the active medium. Recent results from the deployment of detectors based on the Timepix from the CERN-based Medipix2 Collaboration on the International Space Station (ISS) are reviewed, along with a glimpse of developments to come. Preliminary results from Orion MPCV Exploration Flight Test 1 are also presented. PMID:26256630

  10. Supervised pixel classification using a feature space derived from an artificial visual system

    NASA Astrophysics Data System (ADS)

    Baxter, Lisa C.; Coggins, James M.

    1991-06-01

    Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated, and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not on image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.

  11. Pixel partition method using Markov random field for measurements of closely spaced objects by optical sensors

    NASA Astrophysics Data System (ADS)

    Wang, Xueying; Li, Jun; Sheng, Weidong; An, Wei; Du, Qinfeng

    2015-10-01

    In space-based optical systems, tracking closely spaced objects (CSOs) with the traditional constant false alarm rate (CFAR) detection method brings either more clutter measurements or the loss of target information. CSOs can be tracked as extended targets because of their features on the optical sensor's pixel plane. A pixel partition method under the framework of a Markov random field (MRF) is proposed. Simulation results indicate that the proposed method provides higher pixel partition performance than the traditional method, especially when the signal-to-noise ratio is poor.

  12. Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy

    PubMed Central

    Greenbaum, Alon; Luo, Wei; Khademhosseinieh, Bahar; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan

    2013-01-01

    Pixel-size limitation of lensfree on-chip microscopy can be circumvented by utilizing pixel-super-resolution techniques to synthesize a smaller effective pixel, improving the resolution. Here we report that by using the two-dimensional pixel-function of an image sensor-array as an input to lensfree image reconstruction, pixel-super-resolution can improve the numerical aperture of the reconstructed image by ~3 fold compared to a raw lensfree image. This improvement was confirmed using two different sensor-arrays that significantly vary in their pixel-sizes, circuit architectures and digital/optical readout mechanisms, empirically pointing to roughly the same space-bandwidth improvement factor regardless of the sensor-array employed in our set-up. Furthermore, such a pixel-count increase also renders our on-chip microscope into a Giga-pixel imager, where an effective pixel count of ~1.6–2.5 billion can be obtained with different sensors. Finally, using an ultra-violet light-emitting-diode, this platform resolves 225 nm grating lines and can be useful for wide-field on-chip imaging of nano-scale objects, e.g., multi-walled-carbon-nanotubes.

  13. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and a multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons which are locally connected to their neighboring neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip of active dimensions 1380 micron x 746 micron in a 2-micron CMOS technology.

  14. HUBBLE SPACE TELESCOPE PIXEL ANALYSIS OF THE INTERACTING S0 GALAXY NGC 5195 (M51B)

    SciTech Connect

    Lee, Joon Hyeop; Kim, Sang Chul; Ree, Chang Hee; Kim, Minjin; Jeong, Hyunjin; Lee, Jong Chul; Kyeong, Jaemann E-mail: sckim@kasi.re.kr E-mail: mkim@kasi.re.kr E-mail: jclee@kasi.re.kr

    2012-08-01

    We report the properties of the interacting S0 galaxy NGC 5195 (M51B), revealed in a pixel analysis using the Hubble Space Telescope/Advanced Camera for Surveys images in the F435W, F555W, and F814W (BVI) bands. We analyze the pixel color-magnitude diagram (pCMD) of NGC 5195, focusing on the properties of its red and blue pixel sequences and the difference from the pCMD of NGC 5194 (M51A; the spiral galaxy interacting with NGC 5195). The red pixel sequence of NGC 5195 is redder than that of NGC 5194, which corresponds to the difference in the dust optical depth of 2 < Δτ_V < 4 at fixed age and metallicity. The blue pixel sequence of NGC 5195 is very weak and spatially corresponds to the tidal bridge between the two interacting galaxies. This implies that the blue pixel sequence is not an ordinary feature in the pCMD of an early-type galaxy, but that it is a transient feature of star formation caused by the galaxy-galaxy interaction. We also find a difference in the shapes of the red pixel sequences on the pixel color-color diagrams (pCCDs) of NGC 5194 and NGC 5195. We investigate the spatial distributions of the pCCD-based pixel stellar populations. The young population fraction in the tidal bridge area is larger than that in other areas by a factor >15. Along the tidal bridge, young populations seem to be clumped particularly at the middle point of the bridge. On the other hand, the dusty population shows a relatively wide distribution between the tidal bridge and the center of NGC 5195.

  15. Autonomous Sub-Pixel Satellite Track Endpoint Determination for Space Based Images

    SciTech Connect

    Simms, L M

    2011-03-07

    An algorithm for determining satellite track endpoints with sub-pixel resolution in space-based images is presented. The algorithm allows for significant curvature in the imaged track due to rotation of the spacecraft capturing the image. The motivation behind the sub-pixel endpoint determination is first presented, followed by a description of the methodology used. Results from running the algorithm on real ground-based and simulated space-based images are shown to highlight its effectiveness.

  16. Verification of Dosimetry Measurements with Timepix Pixel Detectors for Space Applications

    NASA Technical Reports Server (NTRS)

    Kroupa, M.; Pinsky, L. S.; Idarraga-Munoz, J.; Hoang, S. M.; Semones, E.; Bahadori, A.; Stoffle, N.; Rios, R.; Vykydal, Z.; Jakubek, J.; Pospisil, S.; Turecek, D.; Kitamura, H.

    2014-01-01

    The current capabilities of modern pixel-detector technology have provided the possibility of designing a new generation of radiation monitors. Timepix detectors are semiconductor pixel detectors based on a hybrid configuration. As such, the read-out chip can be used with different types and thicknesses of sensors. For space radiation dosimetry applications, Timepix devices with 300 and 500 micron thick silicon sensors have been used by a collaboration between NASA and the University of Houston to explore their performance. For that purpose, an extensive evaluation of the response of Timepix for such applications has been performed. Timepix-based devices were tested in many different environments, both at ground-based accelerator facilities such as HIMAC (Heavy Ion Medical Accelerator in Chiba, Japan) and NSRL (NASA Space Radiation Laboratory at Brookhaven National Laboratory in Upton, NY), and in space on board the International Space Station (ISS). These tests have covered a wide range of particle types and energies, from protons through iron nuclei. The results have been compared both with other devices and with theoretical values. This effort has demonstrated that Timepix-based detectors are exceptionally capable of providing accurate dosimetry measurements in this application, as verified by their correspondence with other accepted techniques.

  17. The optimization of zero-spaced microlenses for 2.2um pixel CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Nam, Hyun hee; Park, Jeong Lyeol; Choi, Jea Sung; Lee, Jeong Gun

    2007-03-01

    In CMOS image sensors, microlens arrays are generally used to guide light onto the photodiode to increase collection efficiency and reduce optical cross-talk. Today, the scaling trend of CMOS technology drives reduction of the pixel size for higher integration density and improved resolution. Microlenses are typically formed by photoresist patterning and thermal reflow, and space between the photoresist features is necessary to avoid merging of the microlenses during the thermal reflow process. With shrinking sizes, microlenses become more and more difficult to manufacture without merging. Hence, the key to light-loss-free microlens fabrication is still zero space between microlenses. In this paper, we report the selection of the optimum microlens shape in terms of dead space and radius of curvature. Improvements in the critical dimension and thickness uniformity of the microlenses are also reported.

  18. Carotenoid pixels characterization under color space tests and RGB formulas for mesocarp of mango's fruits cultivars

    NASA Astrophysics Data System (ADS)

    Hammad, Ahmed Yahya; Kassim, Farid Saad Eid Saad

    2010-01-01

    This study examined the pulp (mesocarp) of fourteen healthy, ripe cultivars of mango fruit (Mangifera indica L.) selected after picking from Mango spp., namely Taimour [Ta], Dabsha [Da], Aromanis [Ar], Zebda [Ze], Fagri Kelan [Fa], Alphonse [Al], Bulbek heart [Bu], Hindi-Sinnara [Hi], Compania [Co], Langra [La], Mestikawi [Me], Ewais [Ew], Montakhab El Kanater [Mo] and Mabroka [Ma]. Seven color space tests were used: (RGB: Red, Green and Blue), (CMY: Cyan, Magenta and Yellow), (HSL: Hue, Saturation and Lightness), (CMYK%: Cyan%, Magenta%, Yellow% and Black%), (HSV: Hue, Saturation and Value), (HºSB%: Hueº, Saturation% and Brightness%) and (Lab). In addition, nine color space formulas were included (sRGB 0÷1, CMY, CMYK, XYZ, CIE-L*ab, CIE-L*CH, CIE-L*uv, Yxy and Hunter-Lab), together with (RGB 0÷FF/hex triplet) and a carotenoid pixel scale. Digital color photographs were used as a tool to obtain the natural color information for each cultivar, and the results were then interpreted alongside chemical pigment estimations. The study focuses on the visual yellow to orange color degrees of the visible electromagnetic spectrum, at wavelengths between ~570 and 620 nm and frequencies between ~480 and 530 THz. The results showed a very strong influence of carotene in the Red band while chlorophyll (a & b) was much lower; consequently, the values in the Green band were depressed. Meanwhile, the general percentage ratios for carotenoid pixels in the Red, Green and Blue bands were approximately 50%, 39% and 11%, respectively, as opposed to the percentage ratios for carotene, chlorophyll a and chlorophyll b, which were approximately 63%, 22% and 16%. Accordingly, the pigments influence all color space tests and RGB formulas. Band Yellow% in color test (CMYK%) as signature

  19. A Double Precision High Speed Convolution Processor

    NASA Astrophysics Data System (ADS)

    Larochelle, F.; Coté, J. F.; Malowany, A. S.

    1989-11-01

    There exist several convolution processors on the market that can process images at video rate. However, none of these processors operates in floating point arithmetic. Unfortunately, many image processing algorithms presently under development are inoperable in integer arithmetic, forcing researchers to use regular computers. To solve this problem, we designed a specialized convolution processor that operates in double precision floating point arithmetic with a throughput several thousand times faster than that obtained on a regular computer. Its high performance is attributed to a VLSI double precision convolution systolic cell designed in our laboratories. A 9x9 systolic array carries out, in a pipelined manner, every arithmetic operation. The processor is designed to interface directly with the VME bus. A DMA chip is responsible for bringing the original pixel intensities from the memory of the computer to the systolic array and for returning the convolved pixels back to memory. A special use of 8K RAMs allows an inexpensive and efficient way of delaying the pixel intensities in order to supply the correct sequence to the systolic array. On-board circuitry converts pixel values into floating point representation when the image is originally represented with integer values. An additional systolic cell, used as a pipeline adder at the output of the systolic array, offers the possibility of combining images, which allows a variable convolution window size and color image processing.

  20. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  1. Distal Convoluted Tubule

    PubMed Central

    Ellison, David H.

    2014-01-01

    The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283

  2. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, to find good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.

  3. Some easily analyzable convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R.; Dolinar, S.; Pollara, F.; Vantilborg, H.

    1989-01-01

    Convolutional codes have played and will play a key role in the downlink telemetry systems on many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief difficulties associated with the use of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to compute the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms in its power series expansion. This step is quite hard, and for many codes of relatively short constraint lengths, it can be intractable. However, a large class of convolutional codes was discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although for large constraint lengths these codes have relatively low rates, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.

  4. ARKCoS: artifact-suppressed accelerated radial kernel convolution on the sphere

    NASA Astrophysics Data System (ADS)

    Elsner, F.; Wandelt, B. D.

    2011-08-01

    We describe a hybrid Fourier/direct space convolution algorithm for compact radial (azimuthally symmetric) kernels on the sphere. For high resolution maps covering a large fraction of the sky, our implementation takes advantage of the inexpensive massive parallelism afforded by consumer graphics processing units (GPUs). Its applications include modeling of instrumental beam shapes in terms of compact kernels, computation of fine-scale wavelet transformations, and optimal filtering for the detection of point sources. Our algorithm works for any pixelization where pixels are grouped into isolatitude rings. Even for kernels that are not bandwidth-limited, ringing features are completely absent on an ECP grid. We demonstrate that they can be highly suppressed on the popular HEALPix pixelization, for which we develop a freely available implementation of the algorithm. As an example application, we show that running on a high-end consumer graphics card our method speeds up beam convolution for simulations of a characteristic Planck high frequency instrument channel by two orders of magnitude compared to the commonly used HEALPix implementation on one CPU core, while typically maintaining a fractional RMS accuracy of about 1 part in 10^5.
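
    For reference, the conventional harmonic-space route that ARKCoS is benchmarked against can be written in a few lines with the healpy package: the map's spherical-harmonic coefficients are rescaled by the radial kernel's beam transform. The resolution, kernel width and choice of a Gaussian beam are assumptions.

      import numpy as np
      import healpy as hp

      nside, lmax = 256, 3 * 256 - 1
      rng = np.random.default_rng(3)
      m = rng.standard_normal(hp.nside2npix(nside))      # toy map

      bl = hp.gauss_beam(np.radians(0.5), lmax=lmax)     # radial (Gaussian) kernel, FWHM 0.5 deg
      alm = hp.map2alm(m, lmax=lmax)                     # forward spherical-harmonic transform
      smoothed = hp.alm2map(hp.almxfl(alm, bl), nside)   # rescale by the kernel and go back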

  5. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  6. ENGage: The use of space and pixel art for increasing primary school children's interest in science, technology, engineering and mathematics

    NASA Astrophysics Data System (ADS)

    Roberts, Simon J.

    2014-01-01

    The Faculty of Engineering at The University of Nottingham, UK, has developed interdisciplinary, hands-on workshops for primary schools that introduce space technology, its relevance to everyday life and the importance of science, technology, engineering and maths. The workshop activities for 7-11 year olds highlight the roles that space and satellite technology play in observing and monitoring the Earth's biosphere as well as being vital to communications in the modern digital world. The programme also provides links to 'how science works', the environment and citizenship and uses pixel art through the medium of digital photography to demonstrate the importance of maths in a novel and unconventional manner. The interactive programme of activities provides learners with an opportunity to meet 'real' scientists and engineers, with one of the key messages from the day being that anyone can become involved in science and engineering whatever their ability or subject of interest. The methodology introduces the role of scientists and engineers using space technology themes, but it could easily be adapted for use with any inspirational topic. Analysis of learners' perceptions of science, technology, engineering and maths before and after participating in ENGage showed very positive and significant changes in their attitudes to these subjects and an increase in the number of children thinking they would be interested and capable in pursuing a career in science and engineering. This paper provides an overview of the activities, the methodology, the evaluation process and results.

  7. Asymmetric quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    La Guardia, Giuliano G.

    2016-01-01

    In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.

  8. Spatial-Spectral Classification Based on the Unsupervised Convolutional Sparse Auto-Encoder for Hyperspectral Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Han, Xiaobing; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Current hyperspectral remote sensing imagery spatial-spectral classification methods mainly consider concatenating the spectral information vectors and spatial information vectors together. However, the combined spatial-spectral information vectors may cause information loss and concatenation deficiency for the classification task. To efficiently represent the spatial-spectral feature information around the central pixel within a neighbourhood window, the unsupervised convolutional sparse auto-encoder (UCSAE) with a window-in-window selection strategy is proposed in this paper. The window-in-window selection strategy selects the sub-window spatial-spectral information for spatial-spectral feature learning and extraction with the sparse auto-encoder (SAE). A convolution mechanism is then applied to the SAE features over the larger outer window after the SAE feature extraction stage. The UCSAE algorithm was validated on two common hyperspectral imagery (HSI) datasets - the Pavia University dataset and the Kennedy Space Centre (KSC) dataset - and shows an improvement over traditional hyperspectral spatial-spectral classification methods.

  9. Astronomical Image Subtraction by Cross-Convolution

    NASA Astrophysics Data System (ADS)

    Yuan, Fang; Akerlof, Carl W.

    2008-04-01

    In recent years, there has been a proliferation of wide-field sky surveys to search for a variety of transient objects. Using relatively short focal lengths, the optics of these systems produce undersampled stellar images often marred by a variety of aberrations. As participants in such activities, we have developed a new algorithm for image subtraction that no longer requires high-quality reference images for comparison. The computational efficiency is comparable with similar procedures currently in use. The general technique is cross-convolution: two convolution kernels are generated to make a test image and a reference image separately transform to match as closely as possible. In analogy to the optimization technique for generating smoothing splines, the inclusion of an rms width penalty term constrains the diffusion of stellar images. In addition, by evaluating the convolution kernels on uniformly spaced subimages across the total area, these routines can accommodate point-spread functions that vary considerably across the focal plane.
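
    A small least-squares sketch of the cross-convolution idea: find kernels a and b such that a convolved with the test image matches b convolved with the reference, fixing one coefficient of a to exclude the trivial zero solution. The toy images, 3x3 kernels and the omission of the RMS-width penalty term are assumptions.

      import numpy as np
      from scipy.signal import convolve2d

      rng = np.random.default_rng(4)
      truth = rng.random((64, 64))
      psf_T = np.outer([0.2, 0.6, 0.2], [0.2, 0.6, 0.2])   # blur of the test image
      psf_R = np.outer([0.1, 0.8, 0.1], [0.1, 0.8, 0.1])   # blur of the reference image
      T = convolve2d(truth, psf_T, mode="same")
      R = convolve2d(truth, psf_R, mode="same")

      def shifted_basis(img):
          # columns are circularly shifted copies of the image, one per 3x3 kernel tap
          cols = [np.roll(np.roll(img, dy, 0), dx, 1).ravel()
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
          return np.column_stack(cols)

      BT, BR = shifted_basis(T), shifted_basis(R)
      A = np.hstack([np.delete(BT, 4, axis=1), -BR])   # unknowns: a (centre fixed to 1) and b
      coef, *_ = np.linalg.lstsq(A, -BT[:, 4], rcond=None)
      a = np.insert(coef[:8], 4, 1.0)                  # kernel a as a length-9 tap vector
      b = coef[8:]                                     # kernel b as a length-9 tap vector
      residual = BT @ a - BR @ b                       # cross-convolved difference image
      print("RMS residual:", residual.std())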

  10. 4K×4K format 10μm pixel pitch H4RG-10 hybrid CMOS silicon visible focal plane array for space astronomy

    NASA Astrophysics Data System (ADS)

    Bai, Yibin; Tennant, William; Anglin, Selmer; Wong, Andre; Farris, Mark; Xu, Min; Holland, Eric; Cooper, Donald; Hosack, Joseph; Ho, Kenneth; Sprafke, Thomas; Kopp, Robert; Starr, Brian; Blank, Richard; Beletic, James W.; Luppino, Gerard A.

    2012-07-01

    Teledyne’s silicon hybrid CMOS focal plane array technology has matured into a viable, high performance and high- TRL alternative to scientific CCD sensors for space-based applications in the UV-visible-NIR wavelengths. This paper presents the latest results from Teledyne’s low noise silicon hybrid CMOS visible focal place array produced in 4K×4K format with 10 μm pixel pitch. The H4RG-10 readout circuit retains all of the CMOS functionality (windowing, guide mode, reference pixels) and heritage of its highly successful predecessor (H2RG) developed for JWST, with additional features for improved performance. Combined with a silicon PIN detector layer, this technology is termed HyViSI™ (Hybrid Visible Silicon Imager). H4RG-10 HyViSI™ arrays achieve high pixel interconnectivity (<99.99%), low readout noise (<10 e- rms single CDS), low dark current (<0.5 e-/pixel/s at 193K), high quantum efficiency (<90% broadband), and large dynamic range (<13 bits). Pixel crosstalk and interpixel capacitance (IPC) have been predicted using detailed models of the hybrid structure and these predictions have been confirmed by measurements with Fe-55 Xray events and the single pixel reset technique. For a 100-micron thick detector, IPC of less than 3% and total pixel crosstalk of less than 7% have been achieved for the HyViSI™ H4RG-10. The H4RG-10 array is mounted on a lightweight silicon carbide (SiC) package and has been qualified to Technology Readiness Level 6 (TRL-6). As part of space qualification, the HyViSI™ H4RG-10 array passed radiation testing for low earth orbit (LEO) environment.

  11. Understanding deep convolutional networks.

    PubMed

    Mallat, Stéphane

    2016-04-13

    Deep convolutional networks provide state-of-the-art classifications and regressions results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed. PMID:26953183

  12. Smart pixels

    NASA Astrophysics Data System (ADS)

    Seitz, Peter

    2004-09-01

    Semiconductor technology progresses at a relentless pace, making it possible to provide image sensors and each pixel with an increasing amount of custom analog and digital functionality. As experience with such photosensor functionality grows, an increasing variety of modular building blocks become available for smart pixels, single-chip digital cameras and functional image sensors. Examples include a non-linear pixel response circuit for high-dynamic range imaging with a dynamic range exceeding 180 dB, low-noise amplifiers and avalanche-effect pixels for high-sensitivity detection performance approaching single-photoelectron resolution, lock-in pixels for optical time-of-flight range cameras with sub-centimeter distance resolution and in-pixel demodulation circuits for optical coherence tomography imaging. The future is seen in system-on-a-chip machine vision cameras ("seeing chips"), post-processing with non-silicon materials for the extension of the detection range to the X-ray, ultraviolet and infrared spectrum, the use of organic semiconductors for low-cost large-area photonic microsystems, as well as imaging of fields other than electromagnetic radiation.

  13. Two dimensional convolute integers for machine vision and image recognition

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression generated, integer valued, zero phase shifting, convoluting, frequency sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators exhibit frequency-sensitive, scale-invariant feature selection properties. Tasks such as boundary/edge enhancement and removal of noise or small pixel disturbances can readily be accomplished. For feature selection, tight band pass operators are essential. Results from test cases are given.
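
    A sketch in the spirit of regression-generated two-dimensional operators: fit a quadratic surface to a 5x5 window by least squares and read off the convolution weights that return the fitted value (or a partial derivative) at the centre. The window size, polynomial order and floating-point (rather than integer-valued) weights are assumptions.

      import numpy as np
      from scipy.signal import convolve2d

      half = 2
      yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
      # Design matrix for a quadratic surface z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
      V = np.column_stack([np.ones(xx.size), xx.ravel(), yy.ravel(),
                           xx.ravel() ** 2, (xx * yy).ravel(), yy.ravel() ** 2])
      P = np.linalg.pinv(V)                       # least-squares fit of the surface coefficients

      smooth_kernel = P[0].reshape(5, 5)          # replacement-point (low-pass) operator
      ddx_kernel = P[1].reshape(5, 5)             # first partial derivative in x at the centre

      img = np.random.default_rng(5).random((32, 32))
      smoothed = convolve2d(img, smooth_kernel, mode="same")   # applied as an ordinary convolution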

  14. The time-space relationship of the data point (Pixels) of the thematic mapper and multispectral scanner or the myth of simultaneity

    NASA Technical Reports Server (NTRS)

    Gordon, F., Jr.

    1980-01-01

    A simplified explanation of the time-space relationships among scanner pixels is presented. The examples of the multispectral scanner (MSS) on Landsats 1, 2, and 3 and the thematic mapper (TM) of Landsat D are used to describe the concept and degree of nonsimultaneity of scanning system data. The time aspects of scanner data acquisition and those parts of the MSS and TM systems related to that phenomenon are addressed.

  15. a Convolutional Network for Semantic Facade Segmentation and Interpretation

    NASA Astrophysics Data System (ADS)

    Schmitz, Matthias; Mayer, Helmut

    2016-06-01

    In this paper we present an approach for semantic interpretation of facade images based on a Convolutional Network. Our network processes the input images in a fully convolutional way and generates pixel-wise predictions. We show that there is no need for large datasets to train the network when transfer learning is employed, i. e., a part of an already existing network is used and fine-tuned, and when the available data is augmented by using deformed patches of the images for training. The network is trained end-to-end with patches of the images and each patch is augmented independently. To undo the downsampling for the classification, we add deconvolutional layers to the network. Outputs of different layers of the network are combined to achieve more precise pixel-wise predictions. We demonstrate the potential of our network based on results for the eTRIMS (Korč and Förstner, 2009) dataset reduced to facades.
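
    A minimal fully convolutional sketch in the style described above (not the authors' network): convolutional layers downsample, a transposed-convolution ("deconvolution") layer undoes the downsampling, and the output is a per-pixel class score map. Layer sizes, the 8-class output and the use of PyTorch are assumptions.

      import torch
      import torch.nn as nn

      class TinyFCN(nn.Module):
          def __init__(self, n_classes=8):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),                        # downsample by 2
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
              )
              self.upsample = nn.ConvTranspose2d(32, n_classes, kernel_size=2, stride=2)

          def forward(self, x):
              return self.upsample(self.features(x))     # pixel-wise class scores

      scores = TinyFCN()(torch.randn(1, 3, 128, 128))
      print(scores.shape)                                 # torch.Size([1, 8, 128, 128])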

  16. PIXEL PUSHER

    NASA Technical Reports Server (NTRS)

    Stanfill, D. F.

    1994-01-01

    Pixel Pusher is a Macintosh application used for viewing and performing minor enhancements on imagery. It will read image files in JPL's two primary image formats- VICAR and PDS - as well as the Macintosh PICT format. VICAR (NPO-18076) handles an array of image processing capabilities which may be used for a variety of applications including biomedical image processing, cartography, earth resources, and geological exploration. Pixel Pusher can also import VICAR format color lookup tables for viewing images in pseudocolor (256 colors). This program currently supports only eight bit images but will work on monitors with any number of colors. Arbitrarily large image files may be viewed in a normal Macintosh window. Color and contrast enhancement can be performed with a graphical "stretch" editor (as in contrast stretch). In addition, VICAR images may be saved as Macintosh PICT files for exporting into other Macintosh programs, and individual pixels can be queried to determine their locations and actual data values. Pixel Pusher is written in Symantec's Think C and was developed for use on a Macintosh SE30, LC, or II series computer running System Software 6.0.3 or later and 32 bit QuickDraw. Pixel Pusher will only run on a Macintosh which supports color (whether a color monitor is being used or not). The standard distribution medium for this program is a set of three 3.5 inch Macintosh format diskettes. The program price includes documentation. Pixel Pusher was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Think C is a trademark of Symantec Corporation. Macintosh is a registered trademark of Apple Computer, Inc.

  17. Exploring the Hidden Structure of Astronomical Images: A "Pixelated" View of Solar System and Deep Space Features!

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Sienkiewicz, Frank; Sadler, Philip; Antonucci, Paul; Miller, Jaimie

    2013-01-01

    We describe activities created to help student participants in Project ITEAMS (Innovative Technology-Enabled Astronomy for Middle Schools) develop a deeper understanding of picture elements (pixels), image creation, and analysis of the recorded data. ITEAMS is an out-of-school time (OST) program funded by the National Science Foundation (NSF) with…

  18. The effect of whitening transformation on pooling operations in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua

    2015-12-01

    Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation to reduce the resolution of feature maps and achieve spatial invariance in convolutional neural networks. Conventionally, pooling methods are mainly determined empirically in most previous work. Therefore, our main purpose is to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concepts of information entropy to test the effect of whitening on pooling in different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.
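
    A minimal ZCA-whitening sketch of the pre-processing step discussed above, decorrelating the pixels of a batch of flattened patches; the patch size, batch size and regularization epsilon are assumptions.

      import numpy as np

      rng = np.random.default_rng(6)
      patches = rng.random((1000, 8 * 8))                 # 1000 flattened 8x8 patches
      patches -= patches.mean(axis=0)                     # centre each pixel

      cov = patches.T @ patches / patches.shape[0]
      eigval, eigvec = np.linalg.eigh(cov)
      eps = 1e-2                                          # regularizes small eigenvalues
      W_zca = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
      white = patches @ W_zca                             # adjacent pixels now less correlated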

  19. Pixel Paradise

    NASA Technical Reports Server (NTRS)

    1998-01-01

    PixelVision, Inc., has developed a series of integrated imaging engines capable of high-resolution image capture at dynamic speeds. This technology was used originally at Jet Propulsion Laboratory in a series of imaging engines for a NASA mission to Pluto. By producing this integrated package, Charge-Coupled Device (CCD) technology has been made accessible to a wide range of users.

  20. Projection space image reconstruction using strip functions to calculate pixels more "natural" for modeling the geometric response of the SPECT collimator.

    PubMed

    Hsieh, Y L; Zeng, G L; Gullberg, G T

    1998-02-01

    The spatially varying geometric response of the collimator-detector system in single photon emission computed tomography (SPECT) causes loss in resolution, shape distortions, reconstructed density nonuniformity, and quantitative inaccuracies. A projection space image reconstruction algorithm is used to correct these reconstruction artifacts. The projectors F use strip functions to calculate pixels more "natural" for modeling the two-dimensional (2-D) geometric response of the SPECT collimator transaxially to the axis of rotation. These projectors are defined by summing the intersection of an array of multiple strips rotated at equal angles to approximate the ideal system geometric response of the collimator. Two projection models were evaluated for modeling the system geometric response function: for one projector each strip is of equal weight, while for the other a Gaussian weighting is used. Parallel beam and fan beam projections of a physical three-dimensional (3-D) Hoffman brain phantom and a Jaszczak cold rod phantom were used to evaluate the geometric response correction. Reconstructions were obtained by using the singular value decomposition (SVD) method and the iterative conjugate gradient algorithm to solve for q in the imaging equation FGq = p, where p is the projection measurement. The projector F included the new models for the geometric response, whereas the backprojector G did not always model the geometric response, in order to increase the computational speed. The final reconstruction was obtained by sampling the backprojection Gq at a discrete array of points. Reconstructions produced by the two proposed projectors showed improved resolution when compared against a unit-strip "natural" pixel model, the conventional pixelized image model with ray tracing to calculate the geometric response, and the filtered backprojection algorithm. When the reconstruction is displayed on fine grid points, the continuity and resolution of the image are preserved.

  1. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.

  2. Dealiased convolutions for pseudospectral simulations

    NASA Astrophysics Data System (ADS)

    Roberts, Malcolm; Bowman, John C.

    2011-12-01

    Efficient algorithms have recently been developed for calculating dealiased linear convolution sums without the expense of conventional zero-padding or phase-shift techniques. For one-dimensional in-place convolutions, the memory requirements are identical with the zero-padding technique, with the important distinction that the additional work memory need not be contiguous with the input data. This decoupling of data and work arrays dramatically reduces the memory and computation time required to evaluate higher-dimensional in-place convolutions. The memory savings is achieved by computing the in-place Fourier transform of the data in blocks, rather than all at once. The technique also allows one to dealias the n-ary convolutions that arise on Fourier transforming cubic and higher powers. Implicitly dealiased convolutions can be built on top of state-of-the-art adaptive fast Fourier transform libraries like FFTW. Vectorized multidimensional implementations for the complex and centered Hermitian (pseudospectral) cases have already been implemented in the open-source software FFTW++. With the advent of this library, writing a high-performance dealiased pseudospectral code for solving nonlinear partial differential equations has now become a relatively straightforward exercise. New theoretical estimates of computational complexity and memory use are provided, including corrected timing results for 3D pruned convolutions and further consideration of higher-order convolutions.
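
    For comparison with the implicit dealiasing described above, the conventional explicitly zero-padded 1-D complex convolution looks like the sketch below; the sequence length and random data are assumptions.

      import numpy as np

      n = 64
      rng = np.random.default_rng(7)
      f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
      g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

      def dealiased_convolution(f, g):
          m = 2 * len(f)                                   # pad to twice the length
          F = np.fft.fft(f, m)                             # FFT zero-pads to length m
          G = np.fft.fft(g, m)
          return np.fft.ifft(F * G)[: len(f)]              # aliasing-free linear convolution terms

      direct = np.array([sum(f[j] * g[k - j] for j in range(k + 1)) for k in range(n)])
      print(np.allclose(dealiased_convolution(f, g), direct))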

  3. New optimal quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    Zhu, Shixin; Wang, Liqi; Kai, Xiaoshan

    2015-04-01

    One of the greatest challenges in proving the feasibility of quantum computers is to protect the quantum nature of information. Quantum convolutional codes are aimed at protecting a stream of quantum information in long distance communication, and they are the correct generalization to the quantum domain of their classical analogs. In this paper, we construct some classes of quantum convolutional codes by employing classical constacyclic codes. These codes are optimal in the sense that they attain the Singleton bound for pure convolutional stabilizer codes.

  4. Entanglement-assisted quantum convolutional coding

    SciTech Connect

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  5. Distal convoluted tubule.

    PubMed

    McCormick, James A; Ellison, David H

    2015-01-01

    The distal convoluted tubule (DCT) is a short nephron segment, interposed between the macula densa and collecting duct. Even though it is short, it plays a key role in regulating extracellular fluid volume and electrolyte homeostasis. DCT cells are rich in mitochondria, and possess the highest density of Na+/K+-ATPase along the nephron, where it is expressed on the highly amplified basolateral membranes. DCT cells are largely water impermeable, and reabsorb sodium and chloride across the apical membrane via electroneutral pathways. Prominent among these is the thiazide-sensitive sodium chloride cotransporter, the target of widely used diuretic drugs. These cells also play a key role in magnesium reabsorption, which occurs predominantly via a transient receptor potential channel (TRPM6). Human genetic diseases in which DCT function is perturbed have provided critical insights into the physiological role of the DCT, and how transport is regulated. These include Familial Hyperkalemic Hypertension, the salt-wasting diseases Gitelman syndrome and EAST syndrome, and hereditary hypomagnesemias. The DCT is also established as an important target for the hormones angiotensin II and aldosterone; it also appears to respond to sympathetic-nerve stimulation and changes in plasma potassium. Here, we discuss what is currently known about DCT physiology. Early studies that determined transport rates of ions by the DCT are described, as are the channels and transporters identified along the DCT with the advent of molecular cloning. Regulation of the expression and activity of these channels and transporters is also described; particular emphasis is placed on the contribution of genetic forms of DCT dysregulation to our understanding.

  6. Real-time rendering of optical effects using spatial convolution

    NASA Astrophysics Data System (ADS)

    Rokita, Przemyslaw

    1998-03-01

    Simulation of special effects such as the defocus effect, depth-of-field effect, and raindrops or a water film falling on the windshield may be very useful in visual simulators and in all computer graphics applications that need realistic images of outdoor scenery. These effects are especially important in rendering poor visibility conditions in flight and driving simulators, but can also be applied, for example, in compositing computer graphics and video sequences, i.e., in Augmented Reality systems. This paper proposes a new approach to the rendering of these optical effects by iterative adaptive filtering using spatial convolution. The advantage of this solution is that the adaptive convolution can be done in real time by existing hardware. The optical effects mentioned above can be introduced into an image computed using a conventional camera model by applying to the intensity of each pixel a convolution filter having an appropriate point spread function. The algorithms described in this paper can be easily implemented into the visualization pipeline: the final effect may be obtained by iterative filtering using a single hardware convolution filter or with a pipeline composed of identical 3 × 3 filters placed as the stages of this pipeline. Another advantage of the proposed solution is that an extension based on the proposed algorithm can be added to existing rendering systems as a final stage of the visualization pipeline.
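
    A minimal software sketch of the iterative-filtering idea (repeated small convolutions approximating a wider point spread function); the kernel and pass count are assumptions, not the paper's hardware pipeline.

        # Sketch: approximating a wide defocus blur by repeated 3x3 convolutions.
        # Kernel and number of passes are illustrative assumptions.
        import numpy as np

        K = np.array([[1, 2, 1],
                      [2, 4, 2],
                      [1, 2, 1]], dtype=float) / 16.0   # 3x3 Gaussian-like kernel

        def convolve3x3(img, kernel):
            padded = np.pad(img, 1, mode="edge")
            out = np.zeros_like(img)
            for dy in range(3):
                for dx in range(3):
                    out += kernel[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            return out

        def defocus(img, passes):
            # Each pass widens the effective point spread function; more passes = stronger blur.
            for _ in range(passes):
                img = convolve3x3(img, K)
            return img

        image = np.zeros((9, 9)); image[4, 4] = 1.0      # impulse: output shows the effective PSF
        print(np.round(defocus(image, 3)[4], 3))          # centre row of the widened PSF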

  7. Pixelation Effects in Weak Lensing

    NASA Astrophysics Data System (ADS)

    High, F. William; Rhodes, Jason; Massey, Richard; Ellis, Richard

    2007-11-01

    Weak gravitational lensing can be used to investigate both dark matter and dark energy but requires accurate measurements of the shapes of faint, distant galaxies. Such measurements are hindered by the finite resolution and pixel scale of digital cameras. We investigate the optimum choice of pixel scale for a space-based mission, using the engineering model and survey strategy of the proposed Supernova Acceleration Probe as a baseline. We do this by simulating realistic astronomical images containing a known input shear signal and then attempting to recover the signal using the Rhodes, Refregier, & Groth algorithm. We find that the quality of shear measurement is always improved by smaller pixels. However, in practice, telescopes are usually limited to a finite number of pixels and operational life span, so the total area of a survey increases with pixel size. We therefore fix the survey lifetime and the number of pixels in the focal plane while varying the pixel scale, thereby effectively varying the survey size. In a pure trade-off for image resolution versus survey area, we find that measurements of the matter power spectrum would have minimum statistical error with a pixel scale of 0.09" for a 0.14" FWHM point-spread function (PSF). The pixel scale could be increased to ~0.16" if images dithered by exactly half-pixel offsets were always available. Some of our results do depend on our adopted shape measurement method and should be regarded as an upper limit: future pipelines may require smaller pixels to overcome systematic floors not yet accessible, and, in certain circumstances, measuring the shape of the PSF might be more difficult than those of galaxies. However, the relative trends in our analysis are robust, especially those of the surface density of resolved galaxies. Our approach thus provides a snapshot of potential in available technology, and a practical counterpart to analytic studies of pixelation, which necessarily assume an idealized shape

  8. Convolution formulations for non-negative intensity.

    PubMed

    Williams, Earl G

    2013-08-01

    Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived that use measured pressure or normal velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that produce radiation to the far field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound-producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas, called the hybrid-intensity formulas, is also derived; these yield a bipolar intensity using a different spatial convolution operator, again using either the measured pressure or velocity. It is demonstrated from the experimental results that the velocity formula yields the classical active intensity and the pressure formula an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space. PMID:23927105

  9. Nonbinary Quantum Convolutional Codes Derived from Negacyclic Codes

    NASA Astrophysics Data System (ADS)

    Chen, Jianzhang; Li, Jianping; Yang, Fan; Huang, Yuanyuan

    2015-01-01

    In this paper, some families of nonbinary quantum convolutional codes are constructed by using negacyclic codes. These nonbinary quantum convolutional codes are different from quantum convolutional codes in the literature. Moreover, we construct a family of optimal quantum convolutional codes.

  10. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
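
    A minimal modern sketch of a small convolutional classifier for the 40-class face task, written in PyTorch (which postdates the paper); the layer sizes are assumptions and the SOM/local-sampling front end is omitted.

        # Hypothetical sketch of a small convolutional classifier for 40 face classes,
        # in the spirit of the paper's convolutional stage; layer sizes are assumed
        # and the SOM/local-sampling front end is not modelled.
        import torch
        import torch.nn as nn

        class SmallFaceCNN(nn.Module):
            def __init__(self, n_classes=40):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=5, padding=2),   # extracts successively larger features
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(16 * 28 * 23, n_classes)

            def forward(self, x):                                # x: (batch, 1, 112, 92) grayscale faces
                h = self.features(x)
                return self.classifier(h.flatten(1))

        model = SmallFaceCNN()
        dummy = torch.randn(4, 1, 112, 92)                       # 112 x 92 images assumed (ORL-sized)
        print(model(dummy).shape)                                # torch.Size([4, 40])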

  11. Pixel Perfect

    SciTech Connect

    Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.; Sowa, Marianne B.

    2005-09-01

    cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.
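
    A software sketch of the bilinear sampling step mentioned above (the FPGA implementation evaluates the warp by forward differencing and differs in detail):

        # Sketch of bilinear sampling at non-integer coordinates, as used to suppress
        # pixelation artifacts when resampling a warped image; software illustration only.
        import numpy as np

        def bilinear_sample(img, x, y):
            """Sample image at fractional (x, y); x is column, y is row."""
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
            fx, fy = x - x0, y - y0
            top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
            bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
            return (1 - fy) * top + fy * bottom

        img = np.arange(16, dtype=float).reshape(4, 4)
        print(bilinear_sample(img, 1.5, 2.25))   # interpolates between the four neighbours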

  12. Approximating large convolutions in digital images.

    PubMed

    Mount, D M; Kanungo, T; Netanyahu, N S; Piatko, C; Silverman, R; Wu, A Y

    2001-01-01

    Computing discrete two-dimensional (2-D) convolutions is an important problem in image processing. In mathematical morphology, an important variant is that of computing binary convolutions, where the kernel of the convolution is a 0-1 valued function. This operation can be quite costly, especially when large kernels are involved. We present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements. We say that such a convolution is valid. Given this flexibility we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm which, given an m x n image and a k-sided convex polygonal kernel K, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's (1965) line-drawing algorithm and prefix-sums to update the convolution incrementally as the kernel is moved from one position to another across the image. PMID:18255522
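
    For the special case of a rectangular (box) kernel, kernel-size-independent cost can be obtained with 2-D prefix sums alone; the sketch below illustrates that simpler idea, whereas the paper's algorithm handles general convex polygonal kernels incrementally.

        # Sketch: binary convolution with a (2r+1)x(2r+1) box kernel in O(1) per pixel
        # using 2-D prefix sums (summed-area table). The paper generalizes this idea
        # to arbitrary convex polygonal kernels via incremental updates.
        import numpy as np

        def box_convolve(binary_img, r):
            H, W = binary_img.shape
            sat = np.zeros((H + 1, W + 1), dtype=np.int64)
            sat[1:, 1:] = np.cumsum(np.cumsum(binary_img, axis=0), axis=1)
            out = np.zeros((H, W), dtype=np.int64)
            for y in range(H):
                y0, y1 = max(0, y - r), min(H, y + r + 1)
                for x in range(W):
                    x0, x1 = max(0, x - r), min(W, x + r + 1)
                    # count of 1-pixels under the kernel, independent of kernel size
                    out[y, x] = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
            return out

        img = (np.random.default_rng(1).random((6, 6)) > 0.5).astype(np.int64)
        print(box_convolve(img, 1))    # each entry counts the 1s in the 3x3 neighbourhood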

  13. Stellar photometry with big pixels

    SciTech Connect

    Buonanno, R.; Iannicola, G.; European Southern Observatory, Garching )

    1989-03-01

    A new software package for stellar photometry in crowded fields is presented. This software overcomes the limitations present in a traditional package like ROMAFOT when the pixel size of the detector is comparable to the scale length of point images. This is the case, for instance, with the Hubble Space Telescope Wide Field Camera and, partially, with the Planetary Camera. The numerical solution presented here is compared to the technical solution of obtaining more exposures of the same field, each shifted by a fraction of a pixel. This software will be available in MIDAS. 11 refs.

  14. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  15. Convolution-deconvolution in DIGES

    SciTech Connect

    Philippacopoulos, A.J.; Simos, N.

    1995-05-01

    Convolution and deconvolution operations are a very important aspect of SSI analysis since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.
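
    A generic sketch of 1-D deconvolution by regularized spectral division (illustrative only; it does not reproduce the closed-form DIGES transfer functions):

        # Generic sketch: recover an input motion from a surface record given a transfer
        # function, via regularized spectral division. The transfer function used here
        # is an invented low-pass filter, not a DIGES layered-halfspace solution.
        import numpy as np

        def deconvolve(surface_motion, transfer_fn, water_level=1e-3):
            S = np.fft.rfft(surface_motion)
            H = transfer_fn                                             # sampled at the rfft frequencies
            H_reg = np.where(np.abs(H) < water_level, water_level, H)   # avoid division by ~0
            return np.fft.irfft(S / H_reg, n=len(surface_motion))

        n = 256
        t = np.arange(n) / n
        input_motion = np.sin(2 * np.pi * 5 * t) * np.exp(-4 * t)
        freqs = np.fft.rfftfreq(n)
        H = 1.0 / (1.0 + (freqs / 0.1) ** 2)                            # assumed smooth transfer function
        surface = np.fft.irfft(np.fft.rfft(input_motion) * H, n=n)      # forward convolution
        recovered = deconvolve(surface, H)                              # deconvolution
        print(np.max(np.abs(recovered - input_motion)) < 1e-6)          # True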

  16. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  17. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  18. PixelLearn

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph

    2006-01-01

    PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.

  19. High stroke pixel for a deformable mirror

    DOEpatents

    Miles, Robin R.; Papavasiliou, Alexandros P.

    2005-09-20

    A mirror pixel that can be fabricated using standard MEMS methods for a deformable mirror. The pixel is electrostatically actuated and is capable of the high deflections needed for space-based mirror applications. In one embodiment, the mirror comprises three layers: a top or mirror layer, a middle layer which consists of flexures, and a comb drive layer, with the flexures of the middle layer attached to the mirror layer and to the comb drive layer. The comb drives are attached to a frame via spring flexures. A number of these mirror pixels can be used to construct a large mirror assembly. The actuator for the mirror pixel may be configured as a crenellated beam with one end fixedly secured, or configured as a scissor jack. The mirror pixels may be used in various applications requiring high-stroke adaptive optics.

  20. The multipoint de la Vallee-Poussin problem for a convolution operator

    SciTech Connect

    Napalkov, Valentin V; Nuyatov, Andrey A

    2012-02-28

    Conditions are discovered which ensure that the space of entire functions can be represented as the sum of an ideal in the space of entire functions and the kernel of a convolution operator. In this way conditions for the multipoint de la Vallee-Poussin problem to have a solution are found. Bibliography: 14 titles.

  1. Fast vision through frameless event-based sensing and convolutional processing: application to texture recognition.

    PubMed

    Perez-Carrasco, Jose Antonio; Acha, Begona; Serrano, Carmen; Camunas-Mesa, Luis; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2010-04-01

    Address-event representation (AER) is an emergent hardware technology which shows high potential for providing, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for sensing full frames. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates, with AER hardware, Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.
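
    A software sketch of the event-driven convolution principle (each address event stamps the kernel onto a neuron-state map and neurons crossing threshold emit output events); parameters are illustrative, not those of the AER chips.

        # Sketch of frame-free, event-driven convolution: each incoming address event
        # (x, y) adds the kernel to a neuron-state map, and any neuron crossing the
        # firing threshold emits an output event and resets. Parameters are invented.
        import numpy as np

        def event_convolution(events, shape, kernel, threshold=1.0):
            state = np.zeros(shape)
            kh, kw = kernel.shape
            out_events = []
            for (x, y) in events:                       # events arrive one at a time, no frames
                y0, x0 = y - kh // 2, x - kw // 2
                for dy in range(kh):
                    for dx in range(kw):
                        yy, xx = y0 + dy, x0 + dx
                        if 0 <= yy < shape[0] and 0 <= xx < shape[1]:
                            state[yy, xx] += kernel[dy, dx]
                            if state[yy, xx] >= threshold:
                                out_events.append((xx, yy))   # output spike
                                state[yy, xx] = 0.0           # reset after firing
            return out_events

        kernel = np.full((3, 3), 0.4)
        events = [(5, 5), (5, 5), (6, 5)]               # repeated events at nearby addresses
        print(event_convolution(events, (10, 10), kernel))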

  2. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives.

  3. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum-weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsymmetric (2, 1) CC.

  4. High density pixel array

    NASA Technical Reports Server (NTRS)

    Wiener-Avnear, Eliezer (Inventor); McFall, James Earl (Inventor)

    2004-01-01

    A pixel array device is fabricated by a laser micro-milling method under strict process control conditions. The device has an array of pixels bonded together with an adhesive filling the grooves between adjacent pixels. The array is fabricated by moving a substrate relative to a laser beam of predetermined intensity at a controlled, constant velocity along a predetermined path defining a set of grooves between adjacent pixels so that a predetermined laser flux per unit area is applied to the material, and repeating the movement for a plurality of passes of the laser beam until the grooves are ablated to a desired depth. The substrate is of an ultrasonic transducer material in one example for fabrication of a 2D ultrasonic phase array transducer. A substrate of phosphor material is used to fabricate an X-ray focal plane array detector.

  5. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503

  7. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular "graph convolutions", a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph---atoms, bonds, distances, etc.---which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  8. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.

  9. Convolutions and Their Applications in Information Science.

    ERIC Educational Resources Information Center

    Rousseau, Ronald

    1998-01-01

    Presents definitions of convolutions, mathematical operations between sequences or between functions, and gives examples of their use in information science. In particular they can be used to explain the decline in the use of older literature (obsolescence) or the influence of publication delays on the aging of scientific literature. (Author/LRW)
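
    A small numerical illustration of such a convolution (yearly publication counts convolved with an assumed citation-aging kernel; all numbers are invented):

        # Illustration: the discrete convolution of a publication-count sequence with an
        # aging (obsolescence) kernel gives the expected citations received each year.
        # All numbers are invented for the example.
        import numpy as np

        publications = np.array([10, 12, 15, 14, 16])          # papers published in years 0..4
        aging_kernel = np.array([0.0, 0.5, 0.3, 0.15, 0.05])   # fraction of citations k years later

        citations_per_year = np.convolve(publications, aging_kernel)
        print(np.round(citations_per_year, 2))                 # length 9: years 0..8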

  10. Number-Theoretic Functions via Convolution Rings.

    ERIC Educational Resources Information Center

    Berberian, S. K.

    1992-01-01

    Demonstrates the number-theoretic identity that the Dirichlet convolution of the divisor-count function of an integer n with the count of positive integers k less than or equal to and relatively prime to n (Euler's totient) equals the sum of the divisors of n, using theory developed about multiplicative functions, the units of a convolution ring, and the Mobius function. (MDH)
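
    In convolution-ring terms the identity reads (tau * phi)(n) = sigma(n), where * denotes Dirichlet convolution; a quick numerical check:

        # Numerical check of the identity (tau * phi)(n) = sigma(n), where * is the
        # Dirichlet convolution, tau counts divisors, phi is Euler's totient and
        # sigma sums divisors.
        from math import gcd

        def divisors(n):
            return [d for d in range(1, n + 1) if n % d == 0]

        def tau(n):
            return len(divisors(n))

        def phi(n):
            return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

        def sigma(n):
            return sum(divisors(n))

        def dirichlet(f, g, n):
            return sum(f(d) * g(n // d) for d in divisors(n))

        print(all(dirichlet(tau, phi, n) == sigma(n) for n in range(1, 200)))   # True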

  11. Selecting Pixels for Kepler Downlink

    NASA Technical Reports Server (NTRS)

    Bryson, Stephen T.; Jenkins, Jon M.; Klaus, Todd C.; Cote, Miles T.; Quintana, Elisa V.; Hall, Jennifer R.; Ibrahim, Khadeejah; Chandrasekaran, Hema; Caldwell, Douglas A.; Van Cleve, Jeffrey E.; Haas, Michael R.

    2010-01-01

    The Kepler mission monitors > 100,000 stellar targets using 42 2200 × 1024 pixel CCDs. Bandwidth constraints prevent the downlink of all 96 million pixels per 30-minute cadence, so the Kepler spacecraft downlinks a specified collection of pixels for each target. These pixels are selected by considering the object brightness, background and the signal-to-noise of each pixel, and are optimized to maximize the signal-to-noise ratio of the target. This paper describes pixel selection, creation of spacecraft apertures that efficiently capture selected pixels, and aperture assignment to a target. Diagnostic apertures, short-cadence targets and custom specified shapes are discussed.
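
    A toy sketch of SNR-driven aperture selection (add pixels in order of decreasing per-pixel signal-to-noise while the aggregate SNR keeps improving); the noise model is a simplification, not the Kepler pipeline's.

        # Toy sketch of SNR-driven aperture selection: rank pixels by per-pixel SNR and
        # keep adding them while the aggregate SNR of the aperture improves. The noise
        # model (signal + background shot noise only) is an assumed simplification.
        import numpy as np

        def select_pixels(target_flux, background):
            snr_per_pixel = target_flux / np.sqrt(target_flux + background)
            order = np.argsort(snr_per_pixel.ravel())[::-1]          # highest-SNR pixels first
            flat_signal = target_flux.ravel()
            flat_noise_var = (target_flux + background).ravel()
            best_snr, selected = 0.0, []
            sig_sum = var_sum = 0.0
            for idx in order:
                sig_sum += flat_signal[idx]
                var_sum += flat_noise_var[idx]
                snr = sig_sum / np.sqrt(var_sum)
                if snr <= best_snr:
                    break                                            # adding more pixels only hurts
                best_snr, selected = snr, selected + [idx]
            mask = np.zeros(target_flux.size, dtype=bool)
            mask[selected] = True
            return mask.reshape(target_flux.shape), best_snr

        y, x = np.mgrid[0:11, 0:11]
        psf = np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 4.0) * 1e4     # toy stellar image
        mask, snr = select_pixels(psf, background=np.full_like(psf, 200.0))
        print(mask.sum(), round(float(snr), 1))                      # number of downlinked pixels, SNR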

  12. Five-stage free-space optical switching network with field-effect transistor self-electro-optic-effect-device smart-pixel arrays.

    PubMed

    McCormick, F B; Cloonan, T J; Lentine, A L; Sasian, J M; Morrison, R L; Beckman, M G; Walker, S L; Wojcik, M J; Hinterlong, S J; Crisci, R J; Novotny, R A; Hinton, H S

    1994-03-10

    The design, construction, and operational testing of a five-stage, fully interconnected 32 × 16 switching fabric by the use of smart-pixel (2, 1, 1) switching nodes are described. The arrays of switching nodes use monolithically integrated GaAs field-effect transistors, multiple-quantum-well p-i-n detectors, and self-electro-optic-device modulators. Each switching node incorporates 25 field-effect transistors and 17 p-i-n diodes to realize two differential optical receivers, the 2 × 1 node switching logic, a single-bit node control memory, and one differential optical transmitter. The five stages of node arrays are interconnected to form a two-dimensional banyan network by the use of Fourier-plane computer-generated holograms. System input and output are made by two-dimensional fiber-bundle matrices, and the system optical hardware design incorporates frequency-stabilized lasers, pupil-division beam combination, and a hybrid micro-macro lens for fiber-bundle imaging. Optomechanical packaging of the system utilizes modular kinematic component positioning and active thermal control to enable simple rapid assembly. Two preliminary operational experiments are completed. In the first experiment, five stages are operated at 50 Mbits/s with 15 active inputs and outputs. The second experiment attempts to operate two stages of second-generation node arrays at 155 Mbits/s, with eight of the 15 active nodes functioning correctly along the straight switch-routing paths. PMID:20862186

  13. Sensor development for the CMS pixel detector

    NASA Astrophysics Data System (ADS)

    Bolla, G.; Bortoletto, D.; Horisberger, R.; Kaufmann, R.; Rohe, T.; Roy, A.

    2002-06-01

    The CMS experiment, which is currently under construction at the Large Hadron Collider (LHC) at CERN (Geneva, Switzerland), will contain a pixel detector which provides in its final configuration three space points per track close to the interaction point of the colliding beams. Because of the harsh radiation environment of the LHC, the technical realization of the pixel detector is extremely challenging. The readout chip, as the most damageable part of the system, is believed to survive a particle fluence of 6×10^14 n_eq/cm^2 (all fluences are normalized to 1 MeV neutrons), and therefore all components of the hybrid pixel detector have to perform well up to at least this fluence. As this requires partially depleted operation of the silicon sensors after irradiation-induced type inversion of the substrate, an "n in n" concept has been chosen. In order to perform IV tests at wafer level and to hold accidentally unconnected pixels close to ground potential, a resistive path between the pixels has been implemented by openings in the p-stop implants surrounding every pixel cell. A prototype of such sensors has been produced by two different companies, and the properties of these resistors in particular have been extensively tested before and after irradiation.

  14. Small pixel oversampled IR focal plane arrays

    NASA Astrophysics Data System (ADS)

    Caulfield, John; Curzan, Jon; Lewis, Jay; Dhar, Nibir

    2015-06-01

    We report on a new high-definition, high-charge-capacity 2.1 Mpixel MWIR infrared focal plane array. This high-definition (HD) FPA utilizes a small 5 um pitch pixel size which is below the Nyquist limit imposed by the optical system's point spread function (PSF). These smaller, sub-diffraction-limited pixels allow spatial oversampling of the image. We show that oversampling IRFPAs enables improved fidelity in imaging, including resolution improvements, advanced pixel correlation processing to reduce false alarm rates, improved detection ranges, and an improved ability to track closely spaced objects. Small-pixel HD arrays are viewed as the key component enabling lower size, power and weight of the IR sensor system. Small pixels enable a reduction in the size of the system's components, from the smaller detector and ROIC array to the reduced optics focal length and overall lens size, resulting in an overall compactness in the sensor package, cooling and associated electronics. The highly sensitive MWIR small-pixel HD FPA has the capability to detect dimmer signals at longer ranges than previously demonstrated.

  15. Accurate Segmentation of Cervical Cytoplasm and Nuclei Based on Multiscale Convolutional Network and Graph Partitioning.

    PubMed

    Song, Youyi; Zhang, Ling; Chen, Siping; Ni, Dong; Lei, Baiying; Wang, Tianfu

    2015-10-01

    In this paper, a multiscale convolutional network (MSCN) and graph-partitioning-based method is proposed for accurate segmentation of cervical cytoplasm and nuclei. Specifically, deep learning via the MSCN is explored to extract scale-invariant features and then segment regions centered at each pixel. The coarse segmentation is refined by an automated graph partitioning method based on the pretrained features. The texture, shape, and contextual information of the target objects are learned to localize the appearance of distinctive boundaries, which is also explored to generate markers to split touching nuclei. For further refinement of the segmentation, a coarse-to-fine nucleus segmentation framework is developed. The computational complexity of the segmentation is reduced by using superpixels instead of raw pixels. Extensive experimental results demonstrate that the proposed cervical nucleus cell segmentation delivers promising results and outperforms existing methods.

  16. STIS CCD Hot Pixel Annealing

    NASA Astrophysics Data System (ADS)

    Hernandez, Svea

    2013-10-01

    The purpose of this activity is to repair radiation-induced hot pixel damage to the STIS CCD by warming the CCD to the ambient instrument temperature and annealing radiation-damaged pixels. Radiation damage creates hot pixels in the STIS CCD detector. Many of these hot pixels can be repaired by warming the CCD from its normal operating temperature near -83 C to the ambient instrument temperature (+5 C) for several hours. The number of hot pixels repaired is a function of annealing temperature. The effectiveness of the CCD hot pixel annealing process is assessed by measuring the dark current behavior before and after annealing and by searching for any window contamination effects.
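
    A sketch of the before/after assessment step (count hot pixels whose dark rate falls back below a threshold after the anneal); the threshold and dark frames are invented.

        # Sketch of assessing anneal effectiveness: a pixel is "hot" if its dark rate
        # exceeds a threshold, and "repaired" if it was hot before the anneal and not
        # after. Threshold and dark-rate maps are invented for the example.
        import numpy as np

        def repaired_fraction(dark_before, dark_after, hot_threshold):
            hot_before = dark_before > hot_threshold
            hot_after = dark_after > hot_threshold
            repaired = hot_before & ~hot_after
            return repaired.sum() / max(hot_before.sum(), 1)

        rng = np.random.default_rng(2)
        before = rng.exponential(0.01, size=(1024, 1024))     # toy dark-rate map (e-/s)
        before[rng.random(before.shape) < 0.001] += 1.0       # sprinkle in hot pixels
        after = before.copy()
        hot = before > 0.1
        after[hot] *= rng.choice([0.05, 1.0], size=hot.sum(), p=[0.8, 0.2])  # ~80% anneal away
        print(round(repaired_fraction(before, after, hot_threshold=0.1), 2))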

  17. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  18. A convolutional neural network neutrino event classifier

    DOE PAGESBeta

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  19. A convolutional neural network neutrino event classifier

    NASA Astrophysics Data System (ADS)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  20. A Construction of MDS Quantum Convolutional Codes

    NASA Astrophysics Data System (ADS)

    Zhang, Guanghui; Chen, Bocong; Li, Liangchen

    2015-09-01

    In this paper, two new families of MDS quantum convolutional codes are constructed. The first one can be regarded as a generalization of [36, Theorem 6.5], in the sense that we do not assume that q ≡ 1 (mod 4). More specifically, we obtain two classes of MDS quantum convolutional codes with parameters: (i) [(q^2+1, q^2-4i+3, 1; 2, 2i+2)]_q, where q ≥ 5 is an odd prime power and 2 ≤ i ≤ (q-1)/2; (ii) , where q is an odd prime power of the form q = 10m+3 or 10m+7 (m ≥ 2), and 2 ≤ i ≤ 2m-1.

  1. Quantum convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng

    2014-12-01

    In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.

  2. Convolutional Neural Network Based dem Super Resolution

    NASA Astrophysics Data System (ADS)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples. A nonlocal algorithm was introduced to deal with it, and many experiments show that the strategy is feasible. In that publication, the learning examples are defined as parts of the original DEM and their related high-resolution measurements, because this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain. Yet this may cause problems of incompatibility and a lack of robustness. To overcome them, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low-resolution DEM and the output is expected to be its high-resolution counterpart. A three-layer model is adopted. The first layer detects features from the input, the second integrates the detected features into a compressed representation, and the final layer transforms the compressed features into a new DEM. Given this structure, a set of learning DEMs is used to train the network; specifically, the network is optimized by minimizing the error between the output and the expected high-resolution DEM. In practical applications, a testing DEM is input to the convolutional neural network and a super-resolved DEM is obtained. Many experiments show that the CNN-based method obtains better reconstructions than many classic interpolation methods.
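
    A minimal PyTorch sketch of the three-layer structure described (feature detection, feature compression, DEM reconstruction); filter counts and sizes are assumptions, not the authors' configuration.

        # Minimal sketch of the described three-layer CNN for DEM super resolution:
        # layer 1 detects features, layer 2 compresses them, layer 3 reconstructs the
        # high-resolution DEM. Filter counts/sizes are assumed, and the low-resolution
        # DEM is assumed to be upsampled to the target grid before entering the network.
        import torch
        import torch.nn as nn

        class DemSRNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.detect = nn.Conv2d(1, 64, kernel_size=9, padding=4)        # feature detection
                self.compress = nn.Conv2d(64, 32, kernel_size=1)                # feature compression
                self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)   # DEM reconstruction

            def forward(self, x):
                x = torch.relu(self.detect(x))
                x = torch.relu(self.compress(x))
                return self.reconstruct(x)

        model = DemSRNet()
        loss_fn = nn.MSELoss()                       # minimize error against the reference DEM
        lr_dem = torch.randn(1, 1, 64, 64)           # upsampled low-resolution DEM tile (toy data)
        hr_dem = torch.randn(1, 1, 64, 64)           # corresponding high-resolution reference
        loss = loss_fn(model(lr_dem), hr_dem)
        loss.backward()                              # gradients for one training step
        print(model(lr_dem).shape, float(loss))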

  3. Invariant Descriptor Learning Using a Siamese Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Chen, L.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    In this paper we describe the learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module and a cost computation module that is based on the L2 norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors of non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving-average strategy for gradients and Nesterov's Accelerated Gradient. Experiments show that our learned descriptor reaches a good performance and achieves state-of-the-art results in terms of the false positive rate at a 95% recall rate on standard benchmark datasets.
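
    A sketch of the pull-together/push-apart cost described, written as a standard contrastive loss over L2 distances; the margin value and random descriptors are assumptions.

        # Sketch of the descriptor cost: matching pairs are pulled together in feature
        # space, non-matching pairs are pushed apart up to a margin. Margin and
        # descriptor inputs are invented; the Siamese CNN itself is omitted.
        import torch

        def contrastive_loss(desc_a, desc_b, is_match, margin=1.0):
            d = torch.norm(desc_a - desc_b, dim=1)                         # L2 distance per pair
            pull = is_match * d.pow(2)                                     # matching: minimize distance
            push = (1 - is_match) * torch.clamp(margin - d, min=0).pow(2)  # non-matching: enforce margin
            return (pull + push).mean()

        desc_a = torch.randn(8, 128)                                  # descriptors from one CNN branch
        desc_b = torch.randn(8, 128)                                  # descriptors from the twin branch
        labels = torch.randint(0, 2, (8,)).float()                    # 1 = matching patch pair
        print(float(contrastive_loss(desc_a, desc_b, labels)))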

  4. Virtual reflector multidimensional deconvolution: inversion issues for convolutive-type interferometry

    NASA Astrophysics Data System (ADS)

    Poletto, F.; Bellezza, C.; Farina, B.

    2014-02-01

    We examine the multidimensional deconvolution (MDD) approach for virtual-reflector (VR) signal representation by cross-convolution. Assuming wavefield separation at the receivers, the VR signal can be synthesized by cross-convolution of inward and outward wavefields generated from a multiplicity of transient sources. Under suitable conditions, this virtual signal is representable as the multidimensional composition of (1) the outward wavefield from the redatumed virtual sources at the receivers and (2) the so-called point-spread function (PSF) for VRs. Multidimensional inversion of the PSF provides the solution to obtain deblurred signals and recover the Green's function of either transmitted wavefields or reflectivity. This approach is similar to MDD by backward seismic interferometry of the cross-correlation type. The forward approach by cross-convolution raises the issue of using suitable projections and representations by functions with convex trends in space and time. This work discusses the main differences in illumination and stability between the cross-convolution and cross-correlation approaches, providing, under appropriate coverage conditions, equivalent and robust inversion results.

  5. The ALICE Pixel Detector

    NASA Astrophysics Data System (ADS)

    Mercado-Perez, Jorge

    2002-07-01

    The present document is a brief summary of the activities performed during the 2001 Summer Student Programme at CERN under the Scientific Summer at Foreign Laboratories Program organized by the Particles and Fields Division of the Mexican Physical Society (Sociedad Mexicana de Fisica). In this case, the activities were related to the ALICE Pixel Group of the EP-AIT Division, under the supervision of Jeroen van Hunen, a research fellow in this group. First, I give an introduction to and overview of the ALICE experiment, followed by a description of wafer probing. A brief summary of the test beam that we had from July 13th to July 25th is given as well.

  6. Applications of convolution voltammetry in electroanalytical chemistry.

    PubMed

    Bentley, Cameron L; Bond, Alan M; Hollenkamp, Anthony F; Mahon, Peter J; Zhang, Jie

    2014-02-18

    The robustness of convolution voltammetry for determining accurate values of the diffusivity (D), bulk concentration (C(b)), and stoichiometric number of electrons (n) has been demonstrated by applying the technique to a series of electrode reactions in molecular solvents and room temperature ionic liquids (RTILs). In acetonitrile, the relatively minor contribution of nonfaradaic current facilitates analysis with macrodisk electrodes, thus moderate scan rates can be used without the need to perform background subtraction to quantify the diffusivity of iodide [D = 1.75 (±0.02) × 10(-5) cm(2) s(-1)] in this solvent. In the RTIL 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide, background subtraction is necessary at a macrodisk electrode but can be avoided at a microdisk electrode, thereby simplifying the analytical procedure and allowing the diffusivity of iodide [D = 2.70 (±0.03) × 10(-7) cm(2) s(-1)] to be quantified. Use of a convolutive procedure which simultaneously allows D and nC(b) values to be determined is also demonstrated. Three conditions under which a technique of this kind may be applied are explored and are related to electroactive species which display slow dissolution kinetics, undergo a single multielectron transfer step, or contain multiple noninteracting redox centers using ferrocene in an RTIL, 1,4-dinitro-2,3,5,6-tetramethylbenzene, and an alkynylruthenium trimer, respectively, as examples. The results highlight the advantages of convolution voltammetry over steady-state techniques such as rotating disk electrode voltammetry and microdisk electrode voltammetry, as it is not restricted by the mode of diffusion (planar or radial), hence removing limitations on solvent viscosity, electrode geometry, and voltammetric scan rate.

  7. Bacterial colony counting by Convolutional Neural Networks.

    PubMed

    Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto

    2015-01-01

    Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, nevertheless fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging, due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, which scored an overall accuracy of 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, becoming a promising approach for many related applications.

  8. Convolution neural networks for ship type recognition

    NASA Astrophysics Data System (ADS)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  9. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Unpolarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations is either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in unpolarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summary: Program title: QCDNUM, version 17.00. Catalogue identifier: AEHV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Public Licence. No. of lines in distributed program, including test data, etc.: 45 736. No. of bytes in distributed program, including test data, etc.: 911 569. Distribution format: tar.gz. Programming language: Fortran-77. Computer: All. Operating system: All. RAM: typically 3 Mbytes. Classification: 11.5. Nature of problem: evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD; computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline

  10. Description of a quantum convolutional code.

    PubMed

    Ollivier, Harold; Tillich, Jean-Pierre

    2003-10-24

    We describe a quantum error correction scheme aimed at protecting a flow of quantum information over long distance communication. It is largely inspired by the theory of classical convolutional codes which are used in similar circumstances in classical communication. The particular example shown here uses the stabilizer formalism. We provide an explicit encoding circuit and its associated error estimation algorithm. The latter gives the most likely error over any memoryless quantum channel, with a complexity growing only linearly with the number of encoded qubits.

  11. Imaging properties of pixellated scintillators with deep pixels

    NASA Astrophysics Data System (ADS)

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2014-09-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10x10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm x 1mm x 20 mm pixels) made by Proteus, Inc. with similar 10x10 arrays of LSO:Ce and BGO (1mm x 1mm x 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10x10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors.

  12. Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Hunter, Craig A.

    1999-01-01

    An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle, for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring is convolution location, Mach number, boattail angle, and NPR dependent. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.

  13. Predicting Flow-Induced Vibrations In A Convoluted Hose

    NASA Technical Reports Server (NTRS)

    Harvey, Stuart A.

    1994-01-01

    Composite model constructed from two less accurate models. Approximately predicts the frequencies and modes of vibration induced by flows of various fluids in a convoluted hose. Based partly on a spring-and-lumped-mass representation of the dynamics involving the springiness and mass of each convolution of the hose and the density of the fluid in the hose.
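
    A minimal sketch of the kind of spring-and-lumped-mass estimate described above, assuming a single convolution modeled by an effective stiffness and a lumped mass (hose wall plus entrained fluid); the parameter names and example values are illustrative, not taken from the report.

    ```python
    import math

    def natural_frequency_hz(k_eff, m_hose, rho_fluid, v_fluid):
        """Estimate the natural frequency of one hose convolution.

        k_eff     : effective stiffness of the convolution [N/m] (assumed)
        m_hose    : lumped mass of the convolution wall [kg] (assumed)
        rho_fluid : fluid density [kg/m^3]
        v_fluid   : fluid volume carried by the convolution [m^3] (assumed)
        """
        m_eff = m_hose + rho_fluid * v_fluid   # hose mass plus entrained fluid mass
        return math.sqrt(k_eff / m_eff) / (2.0 * math.pi)

    # Illustrative numbers only: a stiff, light convolution filled with water.
    print(natural_frequency_hz(k_eff=2.0e4, m_hose=1.5e-3, rho_fluid=1000.0, v_fluid=2.0e-6))
    ```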

  14. New quantum MDS-convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Li, Fengwei; Yue, Qin

    2015-12-01

    In this paper, we utilize a family of Hermitian dual-containing constacyclic codes to construct classical and quantum MDS convolutional codes. Our classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound.

  15. Sub pixel location identification using super resolved multilooking CHRIS data

    NASA Astrophysics Data System (ADS)

    Sahithi, V. S.; Agrawal, S.

    2014-11-01

    CHRIS/Proba is a multiviewing hyperspectral sensor that monitors the earth at five different zenith angles (+55°, +36°, nadir, -36° and -55°) with a spatial resolution of 17 m and within a spectral range of 400-1050 nm in mode 3. These multiviewing images are suitable for constructing a super resolved high resolution image that can reveal the contents of the mixed pixels of the hyperspectral image. In the present work, an attempt is made to find the location of various features contained within the 17 m mixed pixels of the CHRIS image using various super resolution reconstruction techniques. Four different super resolution reconstruction techniques, namely interpolation, iterative back projection, projection onto convex sets (POCS) and robust super resolution, were tried on the -36°, nadir and +36° images to construct a super resolved high resolution 5.6 m image. The results of super resolution reconstruction were compared with the scaled nadir image and the bicubic convoluted image to assess how well spatial and spectral properties were preserved. A support vector machine classification of the best super resolved high resolution image was performed to analyse the location of the sub pixel features. Validation of the obtained results was performed using the spectral unmixing fraction images and the 5.6 m classified LISS IV image.
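
    Of the four reconstruction schemes listed, iterative back projection is the simplest to sketch. The fragment below is a minimal single-image variant, assuming a Gaussian blur plus block-averaging degradation model; the actual CHRIS processing combines multi-angle inputs, so this is only an illustration of the algorithm, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def iterative_back_projection(lr, scale=3, iters=20, sigma=1.0, step=1.0):
        """Minimal iterative back projection super-resolution sketch.

        lr    : low-resolution image (2D array)
        scale : integer upsampling factor (e.g., 3 for 17 m -> ~5.6 m)
        """
        lr = np.asarray(lr, dtype=float)
        hr = zoom(lr, scale, order=3)                      # initial high-resolution estimate
        for _ in range(iters):
            sim = gaussian_filter(hr, sigma)               # assumed blur model
            sim = sim.reshape(lr.shape[0], scale, lr.shape[1], scale).mean(axis=(1, 3))
            err = lr - sim                                 # low-resolution residual
            hr += step * zoom(err, scale, order=1)         # back-project the error
        return hr
    ```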

  16. Pixel size adjustment in coherent diffractive imaging within the Rayleigh-Sommerfeld regime.

    PubMed

    Claus, Daniel; Rodenburg, John Marius

    2015-03-10

    The reconstruction of the smallest resolvable object detail in digital holography and coherent diffractive imaging when the detector is mounted close to the object of interest is restricted by the sensor's pixel size. Very high resolution information is intrinsically encoded in the data because the effective numerical aperture (NA) of the detector (its solid angular size as subtended at the object plane) is very high. The correct physical propagation model to use in the reconstruction process for this setup should be based on the Rayleigh-Sommerfeld diffraction integral, which is commonly implemented via a convolution operation. However, the convolution operation has the drawback that the pixel size of the propagation calculation is preserved between the object and the detector, and so the maximum resolution of the reconstruction is limited by the detector pixel size, not its effective NA. Here we show that this problem can be overcome via the introduction of a numerical spherical lens with adjustable magnification. This approach enables the reconstruction of object details smaller than the detector pixel size or of objects that extend beyond the size of the detector. It will have applications in all forms of near-field lensless microscopy. PMID:25968368
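
    For context, the plain convolution (transfer-function) propagator keeps the object-plane pixel pitch equal to the detector pitch, which is exactly the limitation the paper removes with a numerical spherical lens. A minimal sketch of that baseline propagator in its angular-spectrum form (grid names and parameters are illustrative, and the evanescent part is simply dropped):

    ```python
    import numpy as np

    def propagate_angular_spectrum(field, wavelength, pitch, z):
        """Propagate a sampled complex field by a distance z.

        The output grid has the same pixel pitch as the input -- the property
        discussed above that ties reconstruction resolution to the detector pitch.
        """
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        H = np.exp(1j * kz * z)                       # transfer function of free space
        return np.fft.ifft2(np.fft.fft2(field) * H)
    ```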

  17. PIXELS: Using field-based learning to investigate students' concepts of pixels and sense of scale

    NASA Astrophysics Data System (ADS)

    Pope, A.; Tinigin, L.; Petcovic, H. L.; Ormand, C. J.; LaDue, N.

    2015-12-01

    Empirical work over the past decade supports the notion that a high level of spatial thinking skill is critical to success in the geosciences. Spatial thinking incorporates a host of sub-skills such as mentally rotating an object, imagining the inside of a 3D object based on outside patterns, unfolding a landscape, and disembedding critical patterns from background noise. In this study, we focus on sense of scale, which refers to how an individual quantifies space, and is thought to develop through kinesthetic experiences. Remote sensing data are increasingly being used for wide-reaching and high impact research. A sense of scale is critical to many areas of the geosciences, including understanding and interpreting remotely sensed imagery. In this exploratory study, students (N=17) attending the Juneau Icefield Research Program participated in a 3-hour exercise designed to study how a field-based activity might impact their sense of scale and their conceptions of pixels in remotely sensed imagery. Prior to the activity, students had an introductory remote sensing lecture and completed the Sense of Scale inventory. Students walked and/or skied the perimeter of several pixel types, including a 1 m square (representing a WorldView sensor's pixel), a 30 m square (a Landsat pixel) and a 500 m square (a MODIS pixel). The group took reflectance measurements using a field radiometer as they physically traced out the pixel. The exercise was repeated in two different areas, one with homogeneous reflectance, and another with heterogeneous reflectance. After the exercise, students again completed the Sense of Scale instrument and a demographic survey. This presentation will share the effects and efficacy of the field-based intervention to teach remote sensing concepts and to investigate potential relationships between students' concepts of pixels and sense of scale.

  18. Convolutional code performance in planetary entry channels

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.

    1974-01-01

    The planetary entry channel is modeled for communication purposes, representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time-correlated fading of the channel; only modest amounts of interleaving are required to approach the performance of the memoryless channel; additional propagation results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.

  19. THE KEPLER PIXEL RESPONSE FUNCTION

    SciTech Connect

    Bryson, Stephen T.; Haas, Michael R.; Dotson, Jessie L.; Koch, David G.; Borucki, William J.; Tenenbaum, Peter; Jenkins, Jon M.; Chandrasekaran, Hema; Caldwell, Douglas A.; Klaus, Todd; Gilliland, Ronald L.

    2010-04-20

    Kepler seeks to detect sequences of transits of Earth-size exoplanets orbiting solar-like stars. Such transit signals are on the order of 100 ppm. The high photometric precision demanded by Kepler requires detailed knowledge of how the Kepler pixels respond to starlight during a nominal observation. This information is provided by the Kepler pixel response function (PRF), defined as the composite of Kepler's optical point-spread function, integrated spacecraft pointing jitter during a nominal cadence and other systematic effects. To provide sub-pixel resolution, the PRF is represented as a piecewise-continuous polynomial on a sub-pixel mesh. This continuous representation allows the prediction of a star's flux value on any pixel given the star's pixel position. The advantages and difficulties of this polynomial representation are discussed, including characterization of spatial variation in the PRF and the smoothing of discontinuities between sub-pixel polynomial patches. On-orbit super-resolution measurements of the PRF across the Kepler field of view are described. Two uses of the PRF are presented: the selection of pixels for each star that maximizes the photometric signal-to-noise ratio for that star, and PRF-fitted centroids which provide robust and accurate stellar positions on the CCD, primarily used for attitude and plate scale tracking. Good knowledge of the PRF has been a critical component for the successful collection of high-precision photometry by Kepler.
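
    A toy illustration of how a continuous, polynomial PRF representation predicts per-pixel flux from a star's sub-pixel position. The coefficient matrix and stamp size below are made up; Kepler's actual PRF is a piecewise-continuous polynomial that also varies across the focal plane.

    ```python
    import numpy as np
    from numpy.polynomial import polynomial as P

    # Made-up coefficient matrix standing in for one sub-pixel polynomial patch.
    coeffs = np.array([[1.00, 0.00, -0.15],
                       [0.00, 0.00,  0.00],
                       [-0.15, 0.00, 0.00]])   # ~ 1 - 0.15*(dx^2 + dy^2)

    def predicted_pixel_fluxes(star_col, star_row, flux, half_width=2):
        """Predict flux in a (2*half_width+1)^2 pixel stamp around a star."""
        cols = np.arange(int(star_col) - half_width, int(star_col) + half_width + 1)
        rows = np.arange(int(star_row) - half_width, int(star_row) + half_width + 1)
        dc, dr = np.meshgrid(cols - star_col, rows - star_row)   # pixel-centre offsets
        prf = np.clip(P.polyval2d(dc, dr, coeffs), 0.0, None)    # evaluate the polynomial
        return flux * prf / prf.sum()                            # normalise to total flux

    print(predicted_pixel_fluxes(100.37, 250.81, flux=1.0e4))
    ```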

  20. From Pixels to Planets

    NASA Technical Reports Server (NTRS)

    Brownston, Lee; Jenkins, Jon M.

    2015-01-01

    The Kepler Mission was launched in 2009 as NASA's first mission capable of finding Earth-size planets in the habitable zone of Sun-like stars. Its telescope consists of a 1.4-m primary mirror and a 0.95-m aperture. The 42 charge-coupled devices in its focal plane are read out every half hour, compressed, and then downlinked monthly. After four years, the second of four reaction wheels failed, ending the original mission. Back on Earth, the Science Operations Center developed the Science Pipeline to analyze about 200,000 target stars in Kepler's field of view, looking for evidence of periodic dimming suggesting that one or more planets had crossed the face of its host star. The Pipeline comprises several steps, from pixel-level calibration, through noise and artifact removal, to detection of transit-like signals and the construction of a suite of diagnostic tests to guard against false positives. The Kepler Science Pipeline consists of a pipeline infrastructure written in the Java programming language, which marshals data input to and output from MATLAB applications that are executed as external processes. The pipeline modules, which underwent continuous development and refinement even after data started arriving, employ several analytic techniques, many developed for the Kepler Project. Because of the large number of targets, the large amount of data per target and the complexity of the pipeline algorithms, the processing demands are daunting. Some pipeline modules require days to weeks to process all of their targets, even when run on NASA's 128-node Pleiades supercomputer. The software developers are still seeking ways to increase the throughput. To date, the Kepler project has discovered more than 4000 planetary candidates, of which more than 1000 have been independently confirmed or validated to be exoplanets. Funding for this mission is provided by NASA's Science Mission Directorate.

  1. High-precision measurement of pixel positions in a charge-coupled device.

    PubMed

    Shaklan, S; Sharman, M C; Pravdo, S H

    1995-10-10

    The high level of spatial uniformity in modern CCD's makes them excellent devices for astrometric instruments. However, at the level of accuracy envisioned by the more ambitious projects such as the Astrometric Imaging Telescope, current technology produces CCD's with significant pixel registration errors. We describe a technique for making high-precision measurements of relative pixel positions. We measured CCD's manufactured for the Wide Field Planetary Camera II installed in the Hubble Space Telescope. These CCD's are shown to have significant step-and-repeat errors of 0.033 pixel along every 34th row, as well as a 0.003-pixel curvature along 34-pixel stripes. The source of these errors is described. Our experiments achieved a per-pixel accuracy of 0.011 pixel. The ultimate shot-noise limited precision of the method is less than 0.001 pixel.

  2. Convolution neural-network-based detection of lung structures

    NASA Astrophysics Data System (ADS)

    Hasegawa, Akira; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.

    1994-05-01

    Chest radiography is one of the most fundamental and widely used techniques in diagnostic imaging. Nowadays, with the advent of digital radiology, digital medical image processing techniques for chest radiographs have attracted considerable attention, and several studies on computer-aided diagnosis (CADx) as well as on conventional image processing techniques for chest radiographs have been reported. In the automatic diagnostic process for chest radiographs, it is important to outline the areas of the lungs, the heart, and the diaphragm. This is because the original chest radiograph is composed of important anatomic structures and, without knowing the exact positions of the organs, the automatic diagnosis may result in unexpected detections. The automatic extraction of an anatomical structure from digital chest radiographs can be a useful tool for (1) the evaluation of heart size, (2) automatic detection of interstitial lung diseases, (3) automatic detection of lung nodules, and (4) data compression, etc. Based on the clearly defined boundaries of the heart area, rib spaces, rib positions, and rib cage extracted, one should be able to use this information to facilitate the tasks of CADx on chest radiographs. In this paper, we present an automatic scheme for the detection of the lung field from chest radiographs by using a shift-invariant convolution neural network. A novel algorithm for smoothing the boundaries of the lungs is also presented.

  3. Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)

    NASA Astrophysics Data System (ADS)

    Long, A. J.

    2009-12-01

    Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be equally effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
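
    A schematic rendering of the workflow described above: rainfall is pre-filtered with a backward-in-time exponential to account for antecedent soil moisture, then convolved with an IRF built from two superposed lognormal functions (quick flow and slow flow). All parameter values are placeholders, not calibrated to the Edwards or Madison aquifers.

    ```python
    import numpy as np

    def lognormal_irf(t, mu, sigma):
        """Lognormal impulse-response function evaluated at times t (zero for t <= 0)."""
        t = np.asarray(t, dtype=float)
        out = np.zeros_like(t)
        pos = t > 0
        out[pos] = np.exp(-(np.log(t[pos]) - mu) ** 2 / (2 * sigma ** 2)) / (
            t[pos] * sigma * np.sqrt(2 * np.pi))
        return out

    def simulate_water_level(rain, dt=1.0, tau=30.0,
                             quick=(2.0, 0.6, 0.7), slow=(6.5, 0.4, 0.3)):
        """Convolve exponentially filtered rainfall with a two-domain (quick + slow) IRF.

        rain        : daily rainfall series
        tau         : e-folding memory (days) of the backward-in-time exponential filter
        quick, slow : (mu, sigma, weight) of the two lognormal IRF components
        """
        n = len(rain)
        t = np.arange(1, n + 1) * dt
        w = np.exp(-np.arange(n) * dt / tau)                     # soil-moisture memory weights
        infiltration = np.convolve(rain, w)[:n] * dt / w.sum()   # filtered recharge input
        irf = quick[2] * lognormal_irf(t, quick[0], quick[1]) \
            + slow[2] * lognormal_irf(t, slow[0], slow[1])       # superposed IRFs
        return np.convolve(infiltration, irf)[:n] * dt           # simulated water-level response
    ```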

  4. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738

  5. Metaheuristic Algorithms for Convolution Neural Network.

    PubMed

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. This technique has managed to solve some optimization problems in the research area of science, engineering, and industry. However, implementation strategy of metaheuristic for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, is still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual tasks that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying MNIST and CIFAR dataset were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738

  6. CMS Pixel Data Quality Monitoring

    NASA Astrophysics Data System (ADS)

    Merkel, Petra

    2010-05-01

    We present the CMS Pixel Data Quality Monitoring (DQM) system. The concept and architecture are discussed. The monitored quantities are introduced, and the methods used to ensure that the detector takes high-quality data with high efficiency are explained. Finally, we describe the automated data certification scheme, which is used to certify and classify the data from the Pixel detector for physics analyses.

  7. Pixel-wise absolute phase unwrapping using geometric constraints of structured light system.

    PubMed

    An, Yatong; Hyun, Jae-Sang; Zhang, Song

    2016-08-01

    This paper presents a method to unwrap phase pixel by pixel by solely using geometric constraints of the structured light system without requiring additional image acquisition or another camera. Specifically, an artificial absolute phase map, Φmin, at a given virtual depth plane z = zmin, is created from geometric constraints of the calibrated structured light system; the wrapped phase is pixel-by-pixel unwrapped by referring to Φmin. Since Φmin is defined in the projector space, the unwrapped phase obtained from this method is absolute for each pixel. Experimental results demonstrate the success of this proposed novel absolute phase unwrapping method. PMID:27505808
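
    The heart of the method reduces to a per-pixel fringe-order computation against the artificial minimum phase map. A minimal sketch, assuming the standard 2π fringe spacing (array names are placeholders):

    ```python
    import numpy as np

    def unwrap_with_min_phase(phi_wrapped, phi_min):
        """Pixel-wise absolute phase unwrapping against an artificial Phi_min map.

        phi_wrapped : wrapped phase in (-pi, pi], per pixel
        phi_min     : artificial absolute phase at the virtual plane z = z_min
        """
        # Fringe order k chosen so that the unwrapped phase is not below phi_min.
        k = np.ceil((phi_min - phi_wrapped) / (2.0 * np.pi))
        return phi_wrapped + 2.0 * np.pi * k
    ```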

  8. Local Pixel Bundles: Bringing the Pixels to the People

    NASA Astrophysics Data System (ADS)

    Anderson, Jay

    2014-12-01

    The automated galaxy-based alignment software package developed for the Frontier Fields program (hst2galign, see Anderson & Ogaz 2014 and http://www.stsci.edu/hst/campaigns/frontier-fields/) produces a direct mapping from the pixels of the flt frame of each science exposure into a common master frame. We can use these mappings to extract the flt-pixels in the vicinity of a source of interest and package them into a convenient "bundle". In addition to the pixels, this data bundle can also contain "meta" information that will allow users to transform positions from the flt pixels to the reference frame and vice-versa. Since the un-resampled pixels in the flt frames are the only true constraints we have on the astronomical scene, the ability to inter-relate these pixels will enable many high-precision studies, such as: point-source-fitting and deconvolution with accurate PSFs, easy exploration of different image-combining algorithms, and accurate faint-source finding and photometry. The data products introduced in this ISR are a very early attempt to provide the flt-level pixel constraints in a package that is accessible to more than the handful of experts in HST astrometry. The hope is that users in the community might begin using them and will provide feedback as to what information they might want to see in the bundles and what general analysis packages they might find useful. For that reason, this document is somewhat informally written, since I know that it will be modified and updated as the products and tools are optimized.

  9. Multi-scale feature learning on pixels and super-pixels for seminal vesicles MRI segmentation

    NASA Astrophysics Data System (ADS)

    Gao, Qinquan; Asthana, Akshay; Tong, Tong; Rueckert, Daniel; Edwards, Philip "Eddie"

    2014-03-01

    We propose a learning-based approach to segment the seminal vesicles (SV) via random forest classifiers. The proposed discriminative approach relies on the decision forest using high-dimensional multi-scale context-aware spatial, textural and descriptor-based features at both pixel and super-pixel level. After affine transformation to a template space, the relevant high-dimensional multi-scale features are extracted and random forest classifiers are learned based on the masked region of the seminal vesicles from the most similar atlases. Using these classifiers, an intermediate probabilistic segmentation is obtained for the test images. Then, a graph-cut based refinement is applied to this intermediate probabilistic representation of each voxel to get the final segmentation. We apply this approach to segment the seminal vesicles from 30 MRI T2 training images of the prostate, which presents a particularly challenging segmentation task. The results show that the multi-scale approach and the augmentation of the pixel based features with the super-pixel based features enhance the discriminative power of the learnt classifier, which leads to a better quality segmentation in some very difficult cases. The results are compared to the radiologist labeled ground truth using leave-one-out cross-validation. Overall, a Dice metric of 0.7249 and a Hausdorff surface distance of 7.0803 mm are achieved for this difficult task.

  10. Convolutional Sparse Coding for Trajectory Reconstruction.

    PubMed

    Zhu, Yingying; Lucey, Simon

    2015-03-01

    Trajectory basis Non-Rigid Structure from Motion (NRSfM) refers to the process of reconstructing the 3D trajectory of each point of a non-rigid object from just their 2D projected trajectories. Reconstruction relies on two factors: (i) the condition of the composed camera & trajectory basis matrix, and (ii) whether the trajectory basis has enough degrees of freedom to model the 3D point trajectory. These two factors are inherently conflicting. Employing a trajectory basis with small capacity has the positive characteristic of reducing the likelihood of an ill-conditioned system (when composed with the camera) during reconstruction. However, this has the negative characteristic of increasing the likelihood that the basis will not be able to fully model the object's "true" 3D point trajectories. In this paper we draw upon a well-known result centering around the Restricted Isometry Property (RIP) condition for sparse signal reconstruction. RIP allows us to relax the requirement that the full trajectory basis composed with the camera matrix must be well conditioned. Further, we propose a strategy for learning an over-complete basis using convolutional sparse coding from naturally occurring point trajectory corpora to increase the likelihood that the RIP condition holds for a broad class of point trajectories and camera motions. Finally, we propose an l1-inspired objective for trajectory reconstruction that is able to "adaptively" select the smallest sub-matrix from an over-complete trajectory basis that balances (i) and (ii). We present more practical 3D reconstruction results compared to the current state of the art in trajectory basis NRSfM.

  11. Colonoscopic polyp detection using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty

    2016-03-01

    Computer aided diagnosis (CAD) systems for medical image analysis rely on accurate and efficient feature extraction methods. Regardless of which type of classifier is used, the results will be limited if the input features are not diagnostically relevant and do not properly discriminate between the different classes of images. Thus, a large amount of research has been dedicated to creating feature sets that capture the salient features that physicians are able to observe in the images. Successful feature extraction reduces the semantic gap between the physician's interpretation and the computer representation of images, and helps to reduce the variability in diagnosis between physicians. Due to the complexity of many medical image classification tasks, feature extraction for each problem often requires domain-specific knowledge and a carefully constructed feature set for the specific type of images being classified. In this paper, we describe a method for automatic diagnostic feature extraction from colonoscopy images that may have general application and require a lower level of domain-specific knowledge. The work in this paper expands on our previous CAD algorithm for detecting polyps in colonoscopy video. In that work, we applied an eigenimage model to extract features representing polyps, normal tissue, diverticula, etc. from colonoscopy videos taken from various viewing angles and imaging conditions. Classification was performed using a conditional random field (CRF) model that accounted for the spatial and temporal adjacency relationships present in colonoscopy video. In this paper, we replace the eigenimage feature descriptor with features extracted from a convolutional neural network (CNN) trained to recognize the same image types in colonoscopy video. The CNN-derived features show greater invariance to viewing angles and image quality factors when compared to the eigenimage model. The CNN features are used as input to the CRF classifier as before. We report

  12. Painting with pixels.

    PubMed

    Kyte, S

    1989-04-01

    Two decades ago the subject of computer graphics was regarded as pure science fiction, more within the realms of Star Trek fantasy than of everyday use, but today it is difficult to avoid its influence. Television programmes abound with slick moving, twisting, distorting images, the printing media throws colourful shapes and forms off the page at you, and computer games explode noisily into our living rooms. In a very short space of time computer graphics have risen from being a toy of the affluent minority to a working tool of the cost-conscious majority. Even the most purist of artists have realized that in order to survive in an increasingly competitive world they must inevitably take the plunge into the world of electronic imagery.

  13. Fourier deconvolution reveals the role of the Lorentz function as the convolution kernel of narrow photon beams

    NASA Astrophysics Data System (ADS)

    Djouguela, Armand; Harder, Dietrich; Kollhoff, Ralf; Foschepoth, Simon; Kunth, Wolfgang; Rühmann, Antje; Willborn, Kay; Poppe, Björn

    2009-05-01

    The two-dimensional lateral dose profiles D(x, y) of narrow photon beams, typically used for beamlet-based IMRT, stereotactic radiosurgery and tomotherapy, can be regarded as resulting from the convolution of a two-dimensional rectangular function R(x, y), which represents the photon fluence profile within the field borders, with a rotation-symmetric convolution kernel K(r). This kernel accounts not only for the lateral transport of secondary electrons and small-angle scattered photons in the absorber, but also for the 'geometrical spread' of each pencil beam due to the phase-space distribution of the photon source. The present investigation of the convolution kernel was based on an experimental study of the associated line-spread function K(x). Systematic cross-plane scans of rectangular and quadratic fields of variable side lengths were made by utilizing the linear current versus dose rate relationship and small energy dependence of the unshielded Si diode PTW 60012 as well as its narrow spatial resolution function. By application of the Fourier convolution theorem, it was observed that the values of the Fourier transform of K(x) could be closely fitted by an exponential function exp(-2πλν_x) of the spatial frequency ν_x. Thereby, the line-spread function K(x) was identified as the Lorentz function K(x) = (λ/π)[1/(x² + λ²)], a single-parameter, bell-shaped but non-Gaussian function with a narrow core, wide curve tail, full width at half maximum 2λ and convenient convolution properties. The variation of the 'kernel width parameter' λ with the photon energy, field size and thickness of a water-equivalent absorber was systematically studied. The convolution of a rectangular fluence profile with K(x) in real space results in a simple equation accurately reproducing the measured lateral dose profiles. The underlying 2D convolution kernel (point-spread function) was identified as K(r) = (λ/2π)[1/(r² + λ²)]^(3/2), fitting experimental results as well. These results are
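
    The reported kernel makes the profile model easy to reproduce numerically: convolve a rectangular fluence profile with the Lorentz line-spread function K(x) = (λ/π)/(x² + λ²). A small sketch (field width and λ are illustrative values, not the paper's fitted parameters):

    ```python
    import numpy as np

    def lateral_dose_profile(x, field_width, lam):
        """Rectangular fluence profile convolved with a Lorentz line-spread function."""
        dx = x[1] - x[0]
        rect = ((x >= -field_width / 2) & (x <= field_width / 2)).astype(float)
        kernel = (lam / np.pi) / (x ** 2 + lam ** 2)          # Lorentz kernel K(x)
        profile = np.convolve(rect, kernel, mode='same') * dx
        return profile / profile.max()                         # normalised lateral profile

    x = np.arange(-50.0, 50.0, 0.1)                            # lateral coordinate in mm
    profile = lateral_dose_profile(x, field_width=10.0, lam=1.5)
    ```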

  14. Space suit

    NASA Technical Reports Server (NTRS)

    Shepard, L. F.; Durney, G. P.; Case, M. C.; Kenneway, A. J., III; Wise, R. C.; Rinehart, D.; Bessette, R. J.; Pulling, R. C. (Inventor)

    1973-01-01

    A pressure suit for high altitude flights, particularly space missions, is reported. The suit is designed for astronauts in the Apollo space program and may be worn both inside and outside a space vehicle, as well as on the lunar surface. It comprises an integrated assembly of inner comfort liner, intermediate pressure garment, and outer thermal protective garment with removable helmet and gloves. The pressure garment comprises an inner convoluted sealing bladder and outer fabric restraint to which are attached a plurality of cable restraint assemblies. It provides versatility in combination with improved sealing and increased mobility for internal pressures suitable for life support in the near vacuum of outer space.

  15. Pixel response function experimental techniques and analysis of active pixel sensor star cameras

    NASA Astrophysics Data System (ADS)

    Fumo, Patrick; Waldron, Erik; Laine, Juha-Pekka; Evans, Gary

    2015-04-01

    The pixel response function (PRF) of a pixel within a focal plane is defined as the pixel intensity with respect to the position of a point source within the pixel. One of its main applications is in the field of astrometry, which is a branch of astronomy that deals with positioning data of a celestial body for tracking movement or adjusting the attitude of a spacecraft. Complementary metal oxide semiconductor (CMOS) image sensors generally offer better radiation tolerance to protons and heavy ions than CCDs making them ideal candidates for space applications aboard satellites, but like all image sensors they are limited by their spatial frequency response, better known as the modulation transfer function. Having a well-calibrated PRF allows us to eliminate some of the uncertainty in the spatial response of the system providing better resolution and a more accurate centroid estimation. This paper describes the experimental setup for determining the PRF of a CMOS image sensor and analyzes the effect on the oversampled point spread function (PSF) of an image intensifier, as well as the effects due to the wavelength of light used as a point source. It was found that using electron bombarded active pixel sensor (EBAPS) intensification technology had a significant impact on the PRF of the camera being tested as a result of an increase in the amount of carrier diffusion between collection sites generated by the intensification process. Taking the full width at half maximum (FWHM) of the resulting data, it was found that the intensified version of a CMOS camera exhibited a PSF roughly 16.42% larger than its nonintensified counterpart.

  16. Edge pixel response studies of edgeless silicon sensor technology for pixellated imaging detectors

    NASA Astrophysics Data System (ADS)

    Maneuski, D.; Bates, R.; Blue, A.; Buttar, C.; Doonan, K.; Eklund, L.; Gimenez, E. N.; Hynds, D.; Kachkanov, S.; Kalliopuska, J.; McMullen, T.; O'Shea, V.; Tartoni, N.; Plackett, R.; Vahanen, S.; Wraight, K.

    2015-03-01

    Silicon sensor technologies with reduced dead area at the sensor's perimeter are under development at a number of institutes. Several fabrication methods for sensors which are sensitive close to the physical edge of the device are under investigation utilising techniques such as active-edges, passivated edges and current-terminating rings. Such technologies offer the goal of a seamlessly tiled detection surface with minimum dead space between the individual modules. In order to quantify the performance of different geometries and different bulk and implant types, characterisation of several sensors fabricated using active-edge technology was performed at the B16 beam line of the Diamond Light Source. The sensors were fabricated by VTT and bump-bonded to Timepix ROICs. They were 100 and 200 μm thick sensors, with a last pixel-to-edge distance of either 50 or 100 μm. The sensors were fabricated as either n-on-n or n-on-p type devices. Using 15 keV monochromatic X-rays with a beam spot of 2.5 μm, the performance at the outer edge and corner pixels of the sensors was evaluated at three bias voltages. The results indicate a significant change in the charge collection properties between the edge pixel and the fifth pixel from the edge (up to 275 μm) for the 200 μm thick n-on-n sensor. The edge pixel performance of the 100 μm thick n-on-p sensors is affected only for the last two pixels (up to 110 μm), depending on the biasing conditions. Imaging characteristics of all sensor types investigated are stable over time and the non-uniformities can be minimised by flat-field corrections. The results from the synchrotron tests combined with lab measurements are presented along with an explanation of the observed effects.

  17. The CMS pixel luminosity telescope

    NASA Astrophysics Data System (ADS)

    Kornmayer, A.

    2016-07-01

    The Pixel Luminosity Telescope (PLT) is a new complement to the CMS detector for the LHC Run II data-taking period. It consists of eight 3-layer telescopes based on silicon pixel detectors that are placed around the beam pipe on each end of CMS, viewing the interaction point at a small angle. A fast 3-fold coincidence of the pixel planes in each telescope will provide a bunch-by-bunch measurement of the luminosity. Particle tracking allows collision products to be distinguished from beam background, provides a self-alignment of the detectors, and enables continuous in-time monitoring of the efficiency of each telescope plane. The PLT is an independent luminometer, essential to enhance the robustness of the measurement of the delivered luminosity and to reduce its systematic uncertainties. This will allow production cross-sections, and hence couplings, to be determined with high precision and more stringent limits to be set on new particle production.

  18. Modelling ocean carbon cycle with a nonlinear convolution model

    NASA Astrophysics Data System (ADS)

    Kheshgi, Haroon S.; White, Benjamin S.

    1996-02-01

    A nonlinear convolution integral is developed to model the response of the ocean carbon sink to changes in the atmospheric concentration of CO2. This model can accurately represent the atmospheric response of complex ocean carbon cycle models in which the nonlinear behavior stems from the nonlinear dependence of CO2 solubility in seawater on CO2 partial pressure, which is often represented by the buffer factor. The kernel of the nonlinear convolution model can be constructed from a response of such a complex model to an arbitrary change in CO2 emissions, along with the functional dependence of the buffer factor. Once the convolution kernel has been constructed, either analytically or from a model experiment, the convolution representation can be used to estimate responses of the ocean carbon sink to other changes in the atmospheric concentration of CO2. Thus the method can be used, e.g., to explore alternative emissions scenarios for assessments of climate change. A derivation for the nonlinear convolution integral model is given, and the model is used to reproduce the response of two carbon cycle models: a one-dimensional diffusive ocean model, and a three-dimensional ocean-general-circulation tracer model.
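
    Schematically, the model class described above combines a linear impulse-response kernel with a nonlinear map that encodes the buffer-factor dependence on CO2 partial pressure. The fragment below is only a structural sketch of such a nonlinear convolution; the kernel, the buffer-factor law and all constants are placeholders and do not reproduce the authors' calibrated model.

    ```python
    import numpy as np

    def ocean_uptake_response(co2_ppm, kernel, buffer_fn):
        """Structural sketch of a nonlinear convolution response.

        co2_ppm   : atmospheric CO2 perturbation history (ppm above preindustrial)
        kernel    : discretised impulse-response kernel (e.g., from a complex model run)
        buffer_fn : nonlinear map standing in for the buffer-factor dependence on pCO2
        """
        forcing = buffer_fn(np.asarray(co2_ppm, dtype=float))
        return np.convolve(forcing, kernel)[:len(co2_ppm)]

    # Placeholder ingredients, for illustration only.
    years = np.arange(200)
    co2 = 280.0 * np.exp(0.005 * years) - 280.0            # synthetic perturbation
    kern = np.exp(-years / 40.0) / 40.0                    # made-up decaying kernel
    uptake = ocean_uptake_response(co2, kern, lambda c: c / (1.0 + c / 350.0))
    ```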

  19. Evaluation of convolutional neural networks for visual recognition.

    PubMed

    Nebauer, C

    1998-01-01

    Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks--neocognitron and a modification of neocognitron--are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Besides the original neocognitron, a modification of the neocognitron is proposed which combines neurons from the perceptron with the localized network structure of the neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example on handwritten digit recognition the generalization of convolutional networks is compared to fully connected networks. In several experiments the influence of variations of position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination and the limitations of convolutional networks are discussed.

  20. Charge amplitude distribution of the Gossip gaseous pixel detector

    NASA Astrophysics Data System (ADS)

    Blanco Carballo, V. M.; Chefdeville, M.; Colas, P.; Giomataris, Y.; van der Graaf, H.; Gromov, V.; Hartjes, F.; Kluit, R.; Koffeman, E.; Salm, C.; Schmitz, J.; Smits, S. M.; Timmermans, J.; Visschers, J. L.

    2007-12-01

    The Gossip gaseous pixel detector is being developed for the detection of charged particles in extremely high radiation environments as foreseen close to the interaction point of the proposed super LHC. The detecting medium is a thin layer of gas. Because of the low density of this medium, only a few primary electron/ion pairs are created by the traversing particle. The electrons drift towards a perforated metal foil (Micromegas), after which they are multiplied in a gas avalanche to provide a detectable signal. The gas avalanche occurs in the high field between the Micromegas and the pixel readout chip (ROC). Compared to a silicon pixel detector, Gossip features a low material budget and a low cooling power. An experiment using X-rays has indicated a possible high radiation tolerance exceeding 10^16 hadrons/cm^2. The amplified charge signal has a broad amplitude distribution due to the limited statistics of the primary ionization and the statistical variation of the gas amplification. Therefore, some degree of inefficiency is inevitable. This study presents experimental results on the charge amplitude distribution for CO2/DME (dimethyl ether) and Ar/iC4H10 mixtures. The measured curves were fitted with the outcome of a theoretical model. In the model, the physical Landau distribution is approximated by a Poisson distribution that is convoluted with the variation of the gas gain and the electronic noise. The value for the fraction of pedestal events is used for a direct calculation of the cluster density. For some gases, the measured cluster density is considerably lower than given in the literature.
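
    The last step mentioned above, turning the measured pedestal (zero-signal) fraction into a primary cluster density, follows directly from Poisson statistics: the probability of producing no primary cluster in the gas gap is exp(-mean). A minimal sketch (the gap thickness is an illustrative value):

    ```python
    import math

    def cluster_density_per_mm(pedestal_fraction, gap_mm=1.0):
        """Mean primary-cluster density from the fraction of pedestal (empty) events.

        Assumes the number of primary clusters per event is Poisson distributed,
        so P(0 clusters) = exp(-mean) and mean = -ln(pedestal_fraction).
        """
        mean_clusters = -math.log(pedestal_fraction)
        return mean_clusters / gap_mm

    # Example: 4% pedestal events over a 1.2 mm drift gap -> about 2.7 clusters/mm.
    print(cluster_density_per_mm(0.04, gap_mm=1.2))
    ```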

  1. Comparison of pixel and sub-pixel based techniques to separate Pteronia incana invaded areas using multi-temporal high resolution imagery

    NASA Astrophysics Data System (ADS)

    Odindi, John; Kakembo, Vincent

    2009-08-01

    Remote Sensing using high resolution imagery (HRI) is fast becoming an important tool in detailed land-cover mapping and analysis of plant species invasion. In this study, we sought to test the separability of the Pteronia incana invader species by pixel content aggregation and pixel content de-convolution using multi-temporal infrared HRI. An invaded area in Eastern Cape, South Africa, was flown in 2001, 2004 and 2006, and HRI of 1 x 1 m resolution were captured using a DCS 420 colour infrared camera. The images were separated into bands, geo-rectified and radiometrically corrected using Idrisi Kilimanjaro GIS. Value files were extracted from the bands in order to compare spectral values for P. incana, green vegetation and bare surfaces using the pixel-based Perpendicular Vegetation Index (PVI), while Constrained Linear Spectral Unmixing (CLSU) surface endmembers were used to generate sub-pixel land surface image fractions. Spectroscopy was used to validate spectral trends identified from HRI. The PVI successfully separated the multi-temporal imagery surfaces and was consistent with the unmixed surface image fractions from CLSU. Separability between the respective surfaces was also achieved using reflectance measurements.
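
    For reference, the pixel-based index used here, the Perpendicular Vegetation Index, is the perpendicular distance of a pixel from the bare-soil line in red/NIR reflectance space. A small sketch with placeholder soil-line coefficients (in the study itself the soil line would be fitted from the imagery):

    ```python
    import numpy as np

    def perpendicular_vegetation_index(red, nir, soil_slope=1.1, soil_intercept=0.02):
        """PVI: perpendicular distance from the soil line NIR = a*RED + b."""
        red = np.asarray(red, dtype=float)
        nir = np.asarray(nir, dtype=float)
        return (nir - soil_slope * red - soil_intercept) / np.sqrt(1.0 + soil_slope ** 2)

    # Pixels near the soil line give PVI ~ 0; green vegetation gives larger values.
    print(perpendicular_vegetation_index(red=[0.10, 0.08], nir=[0.13, 0.40]))
    ```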

  2. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
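
    As background to the syndrome-former construction referred to above, the sketch below encodes a rate-1/2 systematic convolutional code with generator [1, g(D)] and computes the syndrome with the check matrix H(D) = [g(D), 1]; an error-free received pair gives the all-zero syndrome. The generator polynomial is an arbitrary illustration, not the rate-3/4 Wyner-Ash code treated in the paper.

    ```python
    import numpy as np

    G = np.array([1, 0, 1])          # g(D) = 1 + D^2 (illustrative generator)

    def gf2_conv(a, b):
        """Polynomial multiplication over GF(2) (binary convolution)."""
        return np.convolve(a, b) % 2

    def encode(u):
        """Rate-1/2 systematic encoder: codeword streams (u, g*u)."""
        return np.asarray(u) % 2, gf2_conv(G, u)

    def syndrome(r1, r2):
        """Syndrome former H(D) = [g(D), 1]: s = g*r1 + r2 over GF(2)."""
        s1 = gf2_conv(G, r1)
        n = max(len(s1), len(r2))
        s = np.zeros(n, dtype=int)
        s[:len(s1)] += s1
        s[:len(r2)] += r2
        return s % 2

    u = np.array([1, 0, 1, 1, 0])
    c1, c2 = encode(u)
    print(syndrome(c1, c2))                  # all zeros: no errors
    c2_err = c2.copy(); c2_err[3] ^= 1       # flip one received bit
    print(syndrome(c1, c2_err))              # a nonzero entry flags the error
    ```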

  3. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  4. Glaucoma detection based on deep convolutional neural network.

    PubMed

    Xiangyu Chen; Yanwu Xu; Damon Wing Kee Wong; Tien Yin Wong; Jiang Liu

    2015-08-01

    Glaucoma is a chronic and irreversible eye disease, which leads to deterioration in vision and quality of life. In this paper, we develop a deep learning (DL) architecture with convolutional neural network for automated glaucoma diagnosis. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images to discriminate between glaucoma and non-glaucoma patterns for diagnostic decisions. The proposed DL architecture contains six learned layers: four convolutional layers and two fully-connected layers. Dropout and data augmentation strategies are adopted to further boost the performance of glaucoma diagnosis. Extensive experiments are performed on the ORIGA and SCES datasets. The results show area under curve (AUC) of the receiver operating characteristic curve in glaucoma detection at 0.831 and 0.887 in the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma detection. PMID:26736362
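
    A minimal PyTorch sketch of a six-layer architecture of the kind described (four convolutional plus two fully connected layers, with dropout). The filter counts, kernel sizes and input resolution below are assumptions for illustration, not the paper's exact configuration.

    ```python
    import torch
    import torch.nn as nn

    class GlaucomaNet(nn.Module):
        """Illustrative 4-conv + 2-FC classifier for glaucoma vs. non-glaucoma."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Dropout(0.5), nn.Linear(128 * 14 * 14, 256), nn.ReLU(),
                nn.Dropout(0.5), nn.Linear(256, num_classes),
            )

        def forward(self, x):                  # x: (batch, 3, 224, 224) fundus crops
            return self.classifier(self.features(x))

    logits = GlaucomaNet()(torch.randn(4, 3, 224, 224))
    print(logits.shape)                        # torch.Size([4, 2])
    ```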

  5. Glaucoma detection based on deep convolutional neural network.

    PubMed

    Xiangyu Chen; Yanwu Xu; Damon Wing Kee Wong; Tien Yin Wong; Jiang Liu

    2015-08-01

    Glaucoma is a chronic and irreversible eye disease, which leads to deterioration in vision and quality of life. In this paper, we develop a deep learning (DL) architecture with convolutional neural network for automated glaucoma diagnosis. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images to discriminate between glaucoma and non-glaucoma patterns for diagnostic decisions. The proposed DL architecture contains six learned layers: four convolutional layers and two fully-connected layers. Dropout and data augmentation strategies are adopted to further boost the performance of glaucoma diagnosis. Extensive experiments are performed on the ORIGA and SCES datasets. The results show area under curve (AUC) of the receiver operating characteristic curve in glaucoma detection at 0.831 and 0.887 in the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma detection.

  6. Error-Trellis Construction for Convolutional Codes Using Shifted Error/Syndrome-Subsequences

    NASA Astrophysics Data System (ADS)

    Tajima, Masato; Okino, Koji; Miyagoshi, Takashi

    In this paper, we extend the conventional error-trellis construction for convolutional codes to the case where a given check matrix H(D) has a factor Dl in some column (row). In the first case, there is a possibility that the size of the state space can be reduced using shifted error-subsequences, whereas in the second case, the size of the state space can be reduced using shifted syndrome-subsequences. The construction presented in this paper is based on the adjoint-obvious realization of the corresponding syndrome former HT(D). In the case where all the columns and rows of H(D) are delay free, the proposed construction is reduced to the conventional one of Schalkwijk et al. We also show that the proposed construction can equally realize the state-space reduction shown by Ariel et al. Moreover, we clarify the difference between their construction and that of ours using examples.

  7. Image Labeling for LIDAR Intensity Image Using K-Nn of Feature Obtained by Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Umemura, Masaki; Hotta, Kazuhiro; Nonaka, Hideki; Oda, Kazuo

    2016-06-01

    We propose an image labeling method for LIDAR intensity images obtained by a Mobile Mapping System (MMS) using K-Nearest Neighbor (KNN) matching of features obtained by a Convolutional Neural Network (CNN). Image labeling assigns labels (e.g., road, cross-walk and road shoulder) to semantic regions in an image. Since CNNs are effective for various image recognition tasks, we try to use the features of a CNN (Caffenet) pre-trained on ImageNet. We use the 4,096-dimensional feature at the fc7 layer of the Caffenet as the descriptor of a region, because the feature at the fc7 layer carries effective information for object classification. We extract the feature with the Caffenet from regions cropped from the images. Since the similarity between features reflects the similarity of the contents of regions, we can select the top K regions cropped from training samples that are most similar to a test region. Since regions in training images have manually-annotated ground truth labels, we vote the labels attached to the top K similar regions onto the test region. The class label with the maximum vote is assigned to each pixel in the test image. In experiments, we use 36 LIDAR intensity images with ground truth labels. We divide the 36 images into training (28 images) and test (8 images) sets. We use class average accuracy and pixel-wise accuracy as evaluation measures. Our method was able to assign the same label as human beings in 97.8% of the pixels in the test LIDAR intensity images.
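
    A schematic of the classification step only: feature extraction is abstracted behind a hypothetical extract_fc7 helper standing in for the pre-trained network, while the KNN vote itself is shown explicitly.

    ```python
    import numpy as np

    def knn_vote(test_feature, train_features, train_labels, k=5):
        """Assign the majority label among the k nearest training regions.

        test_feature   : (4096,) descriptor of one cropped region
        train_features : (N, 4096) descriptors of training regions
        train_labels   : (N,) integer class labels (road, cross-walk, shoulder, ...)
        """
        dists = np.linalg.norm(train_features - test_feature, axis=1)
        nearest = np.argsort(dists)[:k]              # indices of the k most similar regions
        votes = np.bincount(train_labels[nearest])   # count the labels they carry
        return int(np.argmax(votes))                 # majority label for the test region

    # extract_fc7(region) would be a helper (not defined here) returning the 4096-d
    # fc7 activation of the pre-trained network for a cropped region.
    ```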

  8. Die and telescoping punch form convolutions in thin diaphragm

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.

  9. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computation load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, which is referred to as CONvolutional Virtual Electric Field, CONVEF for short. This proposed CONVEF model takes the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also some other interesting properties such as G-shape concavity convergence, neighboring objects separation, and noise suppression while simultaneously preserving weak edges. Meanwhile, the CONVEF model can also be implemented in real time by using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images. PMID:25360586
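
    A compact sketch of the shared idea behind the VEF/VFC/CONVEF family: the external force is the convolution of an edge map with a vector-valued kernel, which can be evaluated with FFT-based convolution. The kernel below is the plain inverse-power (VFC-style) form; the CONVEF modification of the distance inside the kernel is not reproduced here.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def vector_field_convolution(edge_map, radius=32, gamma=2.0, eps=1e-8):
        """External force field: edge map convolved with a vector kernel k = -r_hat * m(r)."""
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
        r = np.sqrt(x ** 2 + y ** 2) + eps
        m = 1.0 / r ** gamma                       # magnitude falls off with distance
        kx, ky = -x / r * m, -y / r * m            # unit vectors pointing toward the kernel origin
        fx = fftconvolve(edge_map, kx, mode='same')
        fy = fftconvolve(edge_map, ky, mode='same')
        return fx, fy                              # force components driving the active contour
    ```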

  10. Reply to 'Comment on 'Quantum convolutional error-correcting codes''

    SciTech Connect

    Chau, H.F.

    2005-08-15

    In their Comment, de Almeida and Palazzo [Phys. Rev. A 72, 026301 (2005)] discovered an error in my earlier paper concerning the construction of quantum convolutional codes [Phys. Rev. A 58, 905 (1998)]. This error can be repaired by modifying the method of code construction.

  11. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much-studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to feed these estimates, as preliminary estimates, into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
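
    The flavour of the convolution-based estimate is easy to demonstrate with a plain annulus kernel, used here as a real-valued stand-in for the phase-coded kernel discussed above: convolving an edge image with an annulus of candidate radius and taking the peak yields a centre estimate.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def estimate_circle_center(edge_image, radius, ring_width=1.5):
        """Centre estimate by convolving an edge map with a soft annulus of given radius."""
        half = int(np.ceil(radius + 3 * ring_width))
        y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
        r = np.hypot(x, y)
        kernel = np.exp(-((r - radius) ** 2) / (2.0 * ring_width ** 2))  # soft annulus
        score = fftconvolve(edge_image, kernel, mode='same')             # match score per pixel
        cy, cx = np.unravel_index(np.argmax(score), score.shape)
        return cx, cy
    ```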

  12. Convolutions of Rayleigh functions and their application to semi-linear equations in circular domains

    NASA Astrophysics Data System (ADS)

    Varlamov, Vladimir

    2007-03-01

    Rayleigh functions σ_l(ν) are defined as series in inverse powers of the Bessel function zeros λ_{ν,n} ≠ 0, where ν is the index of the Bessel function J_ν(x) and n = 1, 2, ... numbers the zeros. Convolutions of Rayleigh functions with respect to the Bessel index, R_l(m), are needed for constructing global-in-time solutions of semi-linear evolution equations in circular domains [V. Varlamov, On the spatially two-dimensional Boussinesq equation in a circular domain, Nonlinear Anal. 46 (2001) 699-725; V. Varlamov, Convolution of Rayleigh functions with respect to the Bessel index, J. Math. Anal. Appl. 306 (2005) 413-424]. The study of this new family of special functions was initiated in [V. Varlamov, Convolution of Rayleigh functions with respect to the Bessel index, J. Math. Anal. Appl. 306 (2005) 413-424], where the properties of R_1(m) were investigated. In the present work a general representation of R_l(m) in terms of σ_l(ν) is deduced. On the basis of this, a representation for the function R_2(m) is obtained in terms of the ψ-function. An asymptotic expansion is computed for R_2(m) as m → ∞. Such asymptotics are needed for establishing function spaces for solutions of semi-linear equations in bounded domains with periodicity conditions in one coordinate. As an example of application of R_l(m), a forced Boussinesq equation u_tt - 2bΔu_t = -αΔ²u + Δu + βΔ(u²) + f with α, b = const > 0 and β = const ∈ R is considered in a unit disc with homogeneous boundary and initial data. Construction of its global-in-time solutions involves the use of the functions R_1(m) and R_2(m), which are responsible for the nonlinear smoothing effect.
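
    For orientation, the Rayleigh sums are commonly written as below; this is the standard definition quoted for the reader's convenience, and the precise form of the convolution R_l(m) used in the paper should be taken from the cited references.

    ```latex
    \sigma_l(\nu) \;=\; \sum_{n=1}^{\infty} \frac{1}{\lambda_{\nu,n}^{\,2l}},
    \qquad l = 1, 2, \ldots,
    \quad \text{where } \lambda_{\nu,n} \text{ are the positive zeros of } J_\nu(x).
    ```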

  13. Single-pixel polarimetric imaging.

    PubMed

    Durán, Vicente; Clemente, Pere; Fernández-Alonso, Mercedes; Tajahuerce, Enrique; Lancis, Jesús

    2012-03-01

    We present an optical system that performs Stokes polarimetric imaging with a single-pixel detector. This fact is possible by applying the theory of compressive sampling to the data acquired by a commercial polarimeter without spatial resolution. The measurement process is governed by a spatial light modulator, which sequentially generates a set of preprogrammed light intensity patterns. Experimental results are presented and discussed for an object that provides an inhomogeneous polarization distribution. PMID:22378406
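
    The underlying measurement model is a sequence of inner products between the scene (here, one Stokes component at a time) and the patterns displayed on the spatial light modulator. A minimal sketch of the non-compressive limit, using a full Hadamard pattern set and a least-squares inversion; a compressive reconstruction would keep only a subset of the patterns and use a sparsity-promoting solver. All sizes and names are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    def single_pixel_reconstruct(measurements, patterns):
        """Least-squares reconstruction from single-pixel measurements y_i = <p_i, x>."""
        x, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
        return x

    # Simulated acquisition of a 32x32 scene with a full Hadamard pattern set.
    n = 32 * 32
    H = hadamard(n).astype(float)            # each row is one SLM pattern (+1/-1)
    scene = np.random.default_rng(1).random(n)
    y = H @ scene                            # one detector reading per displayed pattern
    recovered = single_pixel_reconstruct(y, H).reshape(32, 32)
    ```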

  14. Representing SAR complex image pixels

    NASA Astrophysics Data System (ADS)

    Doerry, A. W.

    2016-05-01

    Synthetic Aperture Radar (SAR) images are often complex-valued to facilitate specific exploitation modes. Furthermore, these pixel values are typically represented with either real/imaginary (also known as I/Q) values, or as Magnitude/Phase values, with constituent components consisting of integers with a limited number of bits. For clutter energy well below full-scale, Magnitude/Phase offers lower quantization noise than the I/Q representation. Further improvement can be had with companding of the Magnitude value.
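
    The quantization claim is easy to check numerically: for weak clutter, an n-bit magnitude/phase representation spreads its levels over the occupied dynamic range of each component, whereas n-bit I/Q wastes levels on unoccupied amplitude headroom. A small sketch with illustrative bit depths and clutter level (not tied to any particular SAR system):

    ```python
    import numpy as np

    def quantize(values, lo, hi, bits):
        """Uniform quantizer over [lo, hi] with 2**bits levels."""
        levels = 2 ** bits
        step = (hi - lo) / levels
        return np.clip(np.round((values - lo) / step), 0, levels - 1) * step + lo

    rng = np.random.default_rng(0)
    full_scale = 1.0
    clutter = 0.02 * (rng.standard_normal(100000) + 1j * rng.standard_normal(100000))

    # I/Q: quantize real and imaginary parts over the full-scale range.
    iq = (quantize(clutter.real, -full_scale, full_scale, 8)
          + 1j * quantize(clutter.imag, -full_scale, full_scale, 8))

    # Magnitude/Phase: quantize magnitude over [0, full_scale] and phase over [-pi, pi).
    mp = (quantize(np.abs(clutter), 0.0, full_scale, 8)
          * np.exp(1j * quantize(np.angle(clutter), -np.pi, np.pi, 8)))

    for name, q in (("I/Q", iq), ("Mag/Phase", mp)):
        err = np.mean(np.abs(q - clutter) ** 2)
        print(name, 10 * np.log10(err))        # quantization noise power in dB
    ```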

  15. SAR Image Complex Pixel Representations

    SciTech Connect

    Doerry, Armin W.

    2015-03-01

    Complex pixel values for Synthetic Aperture Radar (SAR) images of uniformly distributed clutter can be represented as either real/imaginary (also known as I/Q) values, or as Magnitude/Phase values. Generally, these component values are integers with a limited number of bits. For clutter energy well below full-scale, Magnitude/Phase offers lower quantization noise than the I/Q representation. Further improvement can be had with companding of the Magnitude value.

  16. CMOS digital pixel sensors: technology and applications

    NASA Astrophysics Data System (ADS)

    Skorka, Orit; Joseph, Dileepan

    2014-04-01

    CMOS active pixel sensor technology, which is widely used these days for digital imaging, is based on analog pixels. Transition to digital pixel sensors can boost signal-to-noise ratios and enhance image quality, but can increase pixel area to dimensions that are impractical for the high-volume market of consumer electronic devices. There are two main approaches to digital pixel design. The first uses digitization methods that largely rely on photodetector properties and so are unique to imaging. The second is based on adaptation of a classical analog-to-digital converter (ADC) for in-pixel data conversion. Imaging systems for medical, industrial, and security applications are emerging lower-volume markets that can benefit from these in-pixel ADCs. With these applications, larger pixels are typically acceptable, and imaging may be done in invisible spectral bands.

  17. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh (Inventor); Cole, David (Inventor); Smith, Roger M (Inventor); Hancock, Bruce R. (Inventor)

    2013-01-01

    The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage and reading out a first image, then resetting only a subset of pixels in the array to a second voltage and reading out a second image; the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.
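
    A minimal simulation of the differencing idea (the coupling kernel, voltages, and reset pattern below are hypothetical; the sketch only illustrates why the difference image isolates inter-pixel coupling and is not the patented procedure):

        import numpy as np
        from scipy.signal import convolve2d

        # Hypothetical inter-pixel coupling: each pixel leaks 2% to its 4 neighbours.
        ipc_kernel = np.array([[0.00, 0.02, 0.00],
                               [0.02, 0.92, 0.02],
                               [0.00, 0.02, 0.00]])

        def read_out(reset_pattern):
            """Sensed image = reset voltages blurred by the coupling kernel."""
            return convolve2d(reset_pattern, ipc_kernel, mode="same", boundary="fill")

        shape = (32, 32)
        v1, v2 = 0.0, 1.0

        # First read: every pixel reset to v1.
        frame1 = read_out(np.full(shape, v1))

        # Second read: a sparse subset of pixels (every 4th row/column) reset to v2.
        pattern = np.full(shape, v1)
        pattern[::4, ::4] = v2
        frame2 = read_out(pattern)

        # The difference image shows the coupling footprint around each reset pixel.
        diff = frame2 - frame1
        print("coupling estimate around one reset pixel:\n", diff[3:6, 3:6] / (v2 - v1))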

  18. Quantification and adjustment of pixel-locking in particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Hearst, R. J.; Ganapathisubramani, B.

    2015-10-01

    A quantification metric is provided to determine the degree to which a particle image velocimetry data set is pixel-locked. The metric is calculated by integrating the histogram equalization transfer function and normalizing by the worst-case scenario to return the percentage pixel-locked. When this metric is calculated for each position in the vector field, it is shown that pixel-locking is non-uniform across the field. Hence, pixel-locking adjustments should be made on a vector-by-vector basis rather than uniformly across a field, although the latter is the common practice. A methodology is provided to compensate for the effects of pixel-locking on a vector-by-vector basis. This includes applying a Gaussian filter directly to the images, processing the images with window deformation, ensuring the vector fields are in pixel displacements, applying histogram equalization calculated at each vector coordinate, and mapping the adjusted vector fields to physical space.
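
    One plausible reading of such a metric, sketched numerically. This illustrates the idea of integrating a histogram-equalization transfer function and normalizing by the worst case; it is not the exact formula of the paper, and the bin count and test data are arbitrary:

        import numpy as np

        def pixel_locking_metric(displacements, bins=50):
            """Crude pixel-locking score in [0, 1] from sub-pixel displacement parts.

            0 -> fractional displacements uniformly distributed (no locking)
            1 -> all fractional displacements piled on one value (worst case)
            """
            frac = np.mod(displacements, 1.0)                    # sub-pixel part of each vector
            hist, _ = np.histogram(frac, bins=bins, range=(0.0, 1.0), density=True)
            cdf = np.cumsum(hist) / bins                         # histogram-equalization transfer function
            uniform_cdf = np.arange(1, bins + 1) / bins          # transfer function with no locking
            worst_cdf = np.ones(bins)                            # all displacements locked to zero
            deviation = np.mean(np.abs(cdf - uniform_cdf))
            worst = np.mean(np.abs(worst_cdf - uniform_cdf))
            return deviation / worst

        rng = np.random.default_rng(2)
        unlocked = rng.uniform(0.0, 10.0, size=10_000)                    # no locking
        locked = np.round(rng.uniform(0.0, 10.0, size=10_000) * 2) / 2    # pulled to half-pixel values
        print("unlocked:", round(float(pixel_locking_metric(unlocked)), 3))
        print("locked:  ", round(float(pixel_locking_metric(locked)), 3))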

  19. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.; Goodier, B. G.

    1981-01-01

    The location and migration of cloud, land and water features were examined in spectral space (reflective VIS vs. emissive IR). Daytime HCMM data showed two distinct types of cloud affected pixels in the south Texas test area. High altitude cirrus and/or cirrostratus and "subvisible cirrus" (SCi) reflected the same or only slightly more than land features. In the emissive band, the digital counts ranged from 1 to over 75 and overlapped land features. Pixels consisting of cumulus clouds, or of mixed cumulus and landscape, clustered in a different area of spectral space than the high altitude cloud pixels. Cumulus affected pixels were more reflective than land and water pixels. In August the high altitude clouds and SCi were more emissive than similar clouds were in July. Four-channel TIROS-N data were examined with the objective of developing a multispectral screening technique for removing SCi contaminated data.

  20. The Probabilistic Convolution Tree: Efficient Exact Bayesian Inference for Faster LC-MS/MS Protein Inference

    PubMed Central

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustrative example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we dramatically reduce both the runtime and the space required, with the cost governed by the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
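
    A minimal sketch of the core idea: the exact distribution of a sum of independent discrete variables is obtained by convolving their probability mass functions pairwise in a balanced tree (plain numpy convolutions; the full method also passes messages back down the tree, which is omitted here):

        import numpy as np

        def convolution_tree_sum(pmfs):
            """Exact PMF of the sum of independent discrete variables.

            Each entry of `pmfs` is a 1-D array p where p[i] = P(X = i).
            PMFs are combined pairwise, tree-style, so that large supports are
            convolved as few times as possible.
            """
            layer = [np.asarray(p, dtype=float) for p in pmfs]
            while len(layer) > 1:
                nxt = []
                for i in range(0, len(layer) - 1, 2):
                    nxt.append(np.convolve(layer[i], layer[i + 1]))  # exact sum of two variables
                if len(layer) % 2:                                    # odd one out carries over
                    nxt.append(layer[-1])
                layer = nxt
            return layer[0]

        # Example: number of "present" items among 10 candidates, each present
        # independently with its own probability (a toy stand-in for an adder node).
        probs = np.linspace(0.1, 0.9, 10)
        pmfs = [np.array([1 - p, p]) for p in probs]
        total = convolution_tree_sum(pmfs)
        print("P(count = k):", np.round(total, 4), " sum =", total.sum())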

  1. Pixelated filters for spatial imaging

    NASA Astrophysics Data System (ADS)

    Mathieu, Karine; Lequime, Michel; Lumeau, Julien; Abel-Tiberini, Laetitia; Savin De Larclause, Isabelle; Berthon, Jacques

    2015-10-01

    Small satellites are often used by space agencies to meet the requirements of scientific space missions. Their payloads are composed of various instruments collecting an increasing amount of data while respecting ever tighter constraints on volume and mass, so small integrated cameras have taken a favored place among these instruments. To provide scene-specific color information, pixelated filters appear more attractive than filter wheels. The work presented here, in collaboration with Institut Fresnel, deals with the manufacturing of this kind of component, based on thin-film technologies and photolithography processes. CCD detectors with a pixel pitch of about 30 μm were considered. In the configuration where the matrix filters are positioned closest to the detector, the matrix filters are composed of 2x2 macro-pixels (i.e., 4 filters). These 4 filters have a bandwidth of about 40 nm and are centered at 550, 700, 770 and 840 nm, respectively, with a specific rejection rate defined over the [500 - 900 nm] spectral range. After an intensive design phase, 4 thin-film structures were developed with a maximum thickness of 5 μm. A series of tests allowed us to choose the optimal micro-structuring parameters. The 100x100 matrix filter prototypes were successfully manufactured with lift-off and ion-assisted deposition processes. Detailed spatial and spectral characterization with a dedicated metrology bench showed that the initial specifications and simulations were broadly met. These performances remove the technological barriers to high-end integrated application-specific multispectral imaging.

  2. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network.

    PubMed

    Prasoon, Adhish; Petersen, Kersten; Igel, Christian; Lauze, François; Dam, Erik; Nielsen, Mads

    2013-01-01

    Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of the 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low-field knee MRI scans and tested it on 114 unseen scans. Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier have been carefully adapted to the problem at hand. The main insight of this study is that better results were obtained by a deep learning architecture that autonomously learns the features from the images.

  3. Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data

    NASA Astrophysics Data System (ADS)

    Anirudh, Rushil; Thiagarajan, Jayaraman J.; Bremer, Timo; Kim, Hyojin

    2016-03-01

    Early detection of lung nodules is currently one of the most effective ways to predict and treat lung cancer. As a result, the past decade has seen a lot of focus on computer-aided diagnosis (CAD) of lung nodules, whose goal is to efficiently detect and segment lung nodules and classify them as benign or malignant. Effective detection of such nodules remains a challenge due to their arbitrariness in shape, size and texture. In this paper, we propose to employ 3D convolutional neural networks (CNNs) to learn highly discriminative features for nodule detection in lieu of hand-engineered ones such as geometric shape or texture. While 3D CNNs are promising tools to model the spatio-temporal statistics of data, they are limited by their need for detailed 3D labels, which can be prohibitively expensive to obtain compared with 2D labels. Existing CAD methods rely on detailed nodule labels to train models, which is also unrealistic and time consuming. To alleviate this challenge, we propose a solution wherein the expert needs to provide only a point label, i.e., the central pixel of the nodule, and its largest expected size. We use unsupervised segmentation to grow out a 3D region, which is used to train the CNN. Using experiments on the SPIE-LUNGx dataset, we show that the network trained using these weak labels can produce reasonably low false positive rates with a high sensitivity, even in the absence of accurate 3D labels.

  4. Spectral density of generalized Wishart matrices and free multiplicative convolution

    NASA Astrophysics Data System (ADS)

    Młotkowski, Wojciech; Nowak, Maciej A.; Penson, Karol A.; Życzkowski, Karol

    2015-07-01

    We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W =X X† , where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP⊠s, which for an integer s yield Fuss-Catalan distributions corresponding to a product of s -independent square random matrices, X =X1⋯Xs . New formulas for the level densities are derived for s =3 and s =1 /3 . Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of arcsine and MP distributions, is obtained. We also explain the reason of such a curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.
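
    A quick Monte Carlo illustration of the s = 3 Fuss-Catalan case mentioned above (small matrices and summary statistics only; the dimension and sample count are arbitrary, and this is not the Cauchy-transform derivation used in the paper):

        import numpy as np

        # Empirical level density of W = (X1 X2 X3)(X1 X2 X3)^dagger for square complex
        # Gaussian Xi, which should approach the Fuss-Catalan distribution of order s = 3
        # (the free multiplicative cube of the Marchenko-Pastur law).
        rng = np.random.default_rng(3)
        N, s, trials = 100, 3, 200
        eigs = []
        for _ in range(trials):
            X = np.eye(N, dtype=complex)
            for _ in range(s):
                X = X @ ((rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N))
            eigs.append(np.linalg.eigvalsh(X @ X.conj().T))
        eigs = np.concatenate(eigs)

        # The Fuss-Catalan(3) law is supported on [0, 4^4 / 3^3] ~ [0, 9.48].
        print("mean eigenvalue: %.3f (expected 1)" % eigs.mean())
        print("largest eigenvalue: %.2f (edge of support ~ 9.48)" % eigs.max())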

  5. Spectral density of generalized Wishart matrices and free multiplicative convolution.

    PubMed

    Młotkowski, Wojciech; Nowak, Maciej A; Penson, Karol A; Życzkowski, Karol

    2015-07-01

    We investigate the level density for several ensembles of positive random matrices of a Wishart-like structure, W=XX(†), where X stands for a non-Hermitian random matrix. In particular, making use of the Cauchy transform, we study the free multiplicative powers of the Marchenko-Pastur (MP) distribution, MP(⊠s), which for an integer s yield Fuss-Catalan distributions corresponding to a product of s-independent square random matrices, X=X(1)⋯X(s). New formulas for the level densities are derived for s=3 and s=1/3. Moreover, the level density corresponding to the generalized Bures distribution, given by the free convolution of arcsine and MP distributions, is obtained. We also explain the reason of such a curious convolution. The technique proposed here allows for the derivation of the level densities for several other cases.

  6. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.
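
    For readers unfamiliar with the underlying operation, here is a bare-bones static LIC in numpy (fixed-step Euler streamline tracing and a box kernel over a noise texture; a deliberately simplified sketch, not the time-accurate UFLIC scheme described above):

        import numpy as np

        def lic(vx, vy, noise, length=15, step=0.5):
            """Basic static line integral convolution of `noise` along the field (vx, vy)."""
            h, w = noise.shape
            out = np.zeros_like(noise)
            for y in range(h):
                for x in range(w):
                    total, count = 0.0, 0
                    for direction in (+1.0, -1.0):           # trace forward and backward
                        px, py = float(x), float(y)
                        for _ in range(length):
                            i, j = int(round(py)), int(round(px))
                            if not (0 <= i < h and 0 <= j < w):
                                break
                            total += noise[i, j]
                            count += 1
                            norm = np.hypot(vx[i, j], vy[i, j]) + 1e-12
                            px += direction * step * vx[i, j] / norm
                            py += direction * step * vy[i, j] / norm
                    out[y, x] = total / max(count, 1)
            return out

        # Example: circular flow field convolved with white noise.
        yy, xx = np.mgrid[0:128, 0:128].astype(float)
        vx, vy = -(yy - 64), (xx - 64)
        image = lic(vx, vy, np.random.default_rng(4).random((128, 128)))
        print(image.shape, image.min(), image.max())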

  7. Image Super-Resolution Using Deep Convolutional Networks.

    PubMed

    Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou

    2016-02-01

    We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. PMID:26761735
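
    A compact PyTorch rendition of the three-layer mapping described above. The 9-1-5 kernel sizes and 64/32 channel widths follow the basic setting commonly reported for SRCNN, but treat the exact hyperparameters here as assumptions rather than a faithful reproduction:

        import torch
        import torch.nn as nn

        class SRCNN(nn.Module):
            """Patch extraction -> non-linear mapping -> reconstruction."""
            def __init__(self, channels=3):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
                    nn.ReLU(inplace=True),
                    nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
                    nn.ReLU(inplace=True),
                    nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
                )

            def forward(self, x):
                # x is the bicubically upsampled low-resolution image; the network
                # predicts the high-resolution image at the same spatial size.
                return self.body(x)

        model = SRCNN()
        hr = model(torch.randn(1, 3, 64, 64))
        print(hr.shape)  # torch.Size([1, 3, 64, 64])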

  8. Deep learning for steganalysis via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis that learns features automatically via deep learning models. We propose a customized convolutional neural network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and on the realistic and large ImageNet database.

  9. Image Super-Resolution Using Deep Convolutional Networks.

    PubMed

    Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou

    2016-02-01

    We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.

  10. A new computational decoding complexity measure of convolutional codes

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2014-12-01

    This paper presents a computational complexity measure of convolutional codes well suited to software implementations of the Viterbi algorithm (VA) operating with hard decisions. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementing each arithmetic operation is determined in terms of the machine cycles taken by its execution on a typical digital signal processor widely used for low-power telecommunications applications.

  11. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2004-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  12. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2003-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  13. Proceedings of PIXEL98 -- International pixel detector workshop

    SciTech Connect

    Anderson, D.F.; Kwan, S.

    1998-08-01

    Experiments around the globe face new challenges of more precision in the face of higher interaction rates, greater track densities, and higher radiation doses, as they look for rarer and rarer processes, leading many to incorporate pixelated solid-state detectors into their plans. The highest-readout rate devices require new technologies for implementation. This workshop reviewed recent, significant progress in meeting these technical challenges. Participants presented many new results; many of them from the weeks--even days--just before the workshop. Brand new at this workshop were results on cryogenic operation of radiation-damaged silicon detectors (dubbed the Lazarus effect). Other new work included a diamond sensor with 280-micron collection distance; new results on breakdown in p-type silicon detectors; testing of the latest versions of read-out chip and interconnection designs; and the radiation hardness of deep-submicron processes.

  14. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    1995-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  15. Characterization of Pixelated Cadmium-Zinc-Telluride Detectors for Astrophysical Applications

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Sharma, Dharma; Ramsey, Brian; Seller, Paul

    2003-01-01

    Comparisons of charge-sharing and charge-loss measurements between two pixelated Cadmium-Zinc-Telluride (CdZnTe) detectors are discussed. These properties, along with the detector geometry, help to define the limiting energy resolution and spatial resolution of the detector in question. The first detector consists of a 1-mm-thick piece of CdZnTe sputtered with a 4x4 array of pixels with a pixel pitch of 750 microns (inter-pixel gap of 100 microns). Signal readout is via discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap of 50 microns). This crystal is bonded to a custom-built readout chip (ASIC) providing all front-end electronics for each of the 256 independent pixels. These detectors are precursors to the one that will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs a spatial resolution of around 200 microns in order to take full advantage of the HERO angular resolution. We discuss the degree to which charge sharing degrades energy resolution while improving spatial resolution through position interpolation.

  16. Electronic holographic device based on macro-pixel with local coherence

    NASA Astrophysics Data System (ADS)

    Moon, Woonchan; Kwon, Jaebeom; Kim, Hwi; Hahn, Joonku

    2015-09-01

    Holography has been regarded as one of the most promising techniques for three-dimensional (3D) display because it records and reconstructs both the amplitude and phase of the object wave simultaneously. Nevertheless, the technique is widely considered unsuitable for commercialization because of several significant problems. In this paper, we propose an electronic holographic 3D display based on macro-pixels with local coherence. Here, the incident wave within each macro-pixel is coherent, but the wave in one macro-pixel is not mutually coherent with the wave in any other macro-pixel. This concept provides considerable freedom in distributing the pixels in the modulator: the relative distance between two macro-pixels results in a negligible change of the interference pattern in observation space. It is also possible to form sub-pixels within a macro-pixel in order to enlarge the field of view (FOV). The idea substantially reduces the data capacity required by the holographic display. Moreover, the dimensions of the system can be remarkably reduced by micro-optics. As a result, the holographic display can be designed to have full parallax with a large FOV and screen size. We believe the macro-pixel idea is a practical solution for electronic holography since it can provide a reasonable FOV and large screen size with a relatively small amount of data.

  17. Convolution using guided acoustooptical interaction in thin-film waveguides

    NASA Technical Reports Server (NTRS)

    Chang, W. S. C.; Becker, R. A.; Tsai, C. S.; Yao, I. W.

    1977-01-01

    Interaction of two antiparallel acoustic surface waves (ASW) with an optical guided wave has been investigated theoretically as well as experimentally to obtain the convolution of two ASW signals. The maximum time-bandwidth product that can be achieved by such a convolver is shown to be of the order of 1000 or more. The maximum dynamic range can be as large as 83 dB.

  18. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that differs from, and is simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
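
    To make the setting concrete, here is a hard-decision Viterbi decoder for a small binary rate-1/2 convolutional code (the standard generators (7, 5) in octal are an arbitrary example; this illustrates the code family and the Viterbi-style minimum-distance search, not the syndrome algorithm of the letter):

        G = (0b111, 0b101)          # generator polynomials (7, 5) octal, constraint length 3
        N_STATES = 4                # 2^(K-1) encoder states

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state
                out += [bin(reg & g).count("1") % 2 for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(received, n_bits):
            INF = 10 ** 9
            metric = [0] + [INF] * (N_STATES - 1)        # encoder starts in state 0
            paths = [[] for _ in range(N_STATES)]
            for t in range(n_bits):
                r = received[2 * t: 2 * t + 2]
                new_metric = [INF] * N_STATES
                new_paths = [None] * N_STATES
                for state in range(N_STATES):
                    if metric[state] == INF:
                        continue
                    for b in (0, 1):
                        reg = (b << 2) | state
                        expected = [bin(reg & g).count("1") % 2 for g in G]
                        dist = sum(e != x for e, x in zip(expected, r))   # Hamming branch metric
                        nxt = reg >> 1
                        cand = metric[state] + dist
                        if cand < new_metric[nxt]:
                            new_metric[nxt] = cand
                            new_paths[nxt] = paths[state] + [b]
                metric, paths = new_metric, new_paths
            best = min(range(N_STATES), key=lambda st: metric[st])
            return paths[best]

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        coded = encode(msg)
        coded[3] ^= 1                      # flip one channel bit
        print(viterbi_decode(coded, len(msg)) == msg)   # True: single error corrected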

  19. Face Detection Using GPU-Based Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Nasse, Fabian; Thurau, Christian; Fink, Gernot A.

    In this paper, we consider the problem of face detection under pose variations. Unlike other contributions, a focus of this work is an efficient implementation utilizing the computational power of modern graphics cards. The proposed system consists of a parallelized implementation of convolutional neural networks (CNNs) with a special emphasis on also parallelizing the detection process. Experimental validation in a smart conference room with 4 active ceiling-mounted cameras shows a dramatic speed gain under real-life conditions.

  20. Serial Pixel Analog-to-Digital Converter

    SciTech Connect

    Larson, E D

    2010-02-01

    This method reduces the data path from the counter to the pixel register of the analog-to-digital converter (ADC) from as many as 10 bits to a single bit. The reduction in data path width is accomplished by using a coded serial data stream similar to a pseudo-random number (PRN) generator. The resulting encoded pixel data are then decoded into a standard hexadecimal format before storage. The high-speed serial pixel ADC concept is based on the single-slope integrating pixel ADC architecture. Previous work has described a massively parallel pixel readout of a similar architecture. The serial ADC connection is similar to the state-of-the-art method with the exception that the pixel ADC register is a shift register and the data path is a single bit. A state-of-the-art individual-pixel ADC uses a single-slope charge integration converter architecture with integral registers and “one-hot” counters. This implies that parallel data bits are routed among the counter and the individual on-chip pixel ADC registers. The data path bit-width to the pixel is therefore equivalent to the pixel ADC bit resolution.

  1. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

    The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  2. Fine-grained representation learning in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Luo, Chang; Wang, Jie

    2016-03-01

    Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law can guide CAEs to extract better fine-grained features and perform better in the multiclass classification task. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representations in other convolutional neural networks.

  3. Automatic localization of vertebrae based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie

    2015-03-01

    Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebra localization method using convolutional neural networks (CNNs). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. The output feature vector from the max-pooling layer is then fed into a multilayer perceptron (MLP) classifier with one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and the ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.
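
    A sketch of the described architecture in PyTorch: two convolution plus max-pooling stages feeding an MLP with one hidden layer. The filter counts, kernel sizes, patch size, and class count are placeholders, since the abstract does not give them:

        import torch
        import torch.nn as nn

        class VertebraNet(nn.Module):
            """Two conv + max-pool stages followed by a one-hidden-layer MLP classifier."""
            def __init__(self, patch=32, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool2d(2),
                )
                flat = 32 * (patch // 4) * (patch // 4)
                self.classifier = nn.Sequential(       # MLP with one hidden layer
                    nn.Flatten(),
                    nn.Linear(flat, 128), nn.ReLU(),
                    nn.Linear(128, n_classes),
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        net = VertebraNet()
        print(net(torch.randn(4, 1, 32, 32)).shape)  # torch.Size([4, 2])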

  4. A new dual-isotope convolution cross-talk correction method: a Tl-201/Tc-99m SPECT cardiac phantom study.

    PubMed

    Knesaurek, K

    1994-10-01

    Simultaneous dual-isotope SPECT imaging provides a clear advantage in situations where two concurrent metabolic, anatomic, or background measurements are desired. It obviates the need for two separate imaging sessions, reduces patient motion problems, and provides exact registration between images. However, a potential limitation of dual-isotope SPECT imaging is the contribution of scattered and primary photons from one radionuclide into the second radionuclide's photopeak energy window, referred to here as cross-talk. Cross-talk in both photopeak energy windows can significantly degrade image quality, resolution, and quantitation to an unacceptable level. The simple cross-talk correction method used in dual-radionuclide in vitro counting, even when applied on a pixel-by-pixel basis, does not account for the differences in spatial distribution of the photopeak and cross-talk photons. Here a new convolution cross-talk correction method is presented. The convolution filters are derived from point response functions (PRFs) for Tc-99m and Tl-201 point sources. Three separate acquisitions were performed, each with two 20% wide energy windows, one centered at 140 keV and another at 70 keV. The first acquisition was done with Tc-99m solution only, the second with Tl-201 solution only, and the third with a mixture of Tc-99m and Tl-201. The nonuniform RH-2 thorax-heart phantom was used to test the new correction technique. The main difficulty and limitation of the convolution correction approach is the variation of the PRF as a function of depth; thus, an average PRF should be used in the creation of an approximate filter.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7869989

  5. Penrose Pixels for Super-Resolution.

    PubMed

    Ben-Ezra, M; Lin, Zhouchen; Wilburn, Bennett; Zhang, Wei

    2011-07-01

    We present a novel approach to reconstruction-based super-resolution that uses aperiodic pixel tilings, such as a Penrose tiling or a biological retina, for improved performance. To this aim, we develop a new variant of the well-known error back projection super-resolution algorithm that makes use of the exact detector model in its back projection operator for better accuracy. Pixels in our model can vary in shape and size, and there may be gaps between adjacent pixels. The algorithm applies equally well to periodic or aperiodic pixel tilings. We present analysis and extensive tests using synthetic and real images to show that our approach using aperiodic layouts substantially outperforms existing reconstruction-based algorithms for regular pixel arrays. We close with a discussion of the feasibility of manufacturing CMOS or CCD chips with pixels arranged in Penrose tilings.

  6. Dead pixel replacement in LWIR microgrid polarimeters.

    PubMed

    Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P

    2007-06-11

    LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data. PMID:19547086

  7. Equivalence of a Bit Pixel Image to a Quantum Pixel Image

    NASA Astrophysics Data System (ADS)

    Ortega, Laurel Carlos; Dong, Shi-Hai; Cruz-Irisson, M.

    2015-11-01

    We propose a new method to transform a pixel image into the corresponding quantum-pixel image, using one qubit per pixel to represent each pixel's classical weight in a quantum image weight matrix. All qubits are in linear superposition, with the coefficients varied level by level over the entire gray-scale range with respect to the basis states of the qubit. Classically, these states are just bytes represented in a binary matrix, having code combinations of 1 or 0 at all pixel locations. This method introduces a qubit-pixel representation of images captured by classical optoelectronic methods. Supported partially by the project 20150964-SIP-IPN, Mexico

  8. Convolution based method for calculating inputs from dendritic fields in a continuum model of the retina.

    PubMed

    Al Abed, Amr; Yin, Shijie; Suaning, Gregg J; Lovell, Nigel H; Dokos, Socrates

    2012-01-01

    Computational models are valuable tools that can be used to aid the design and test the efficacy of electrical stimulation strategies in prosthetic vision devices. In continuum models of retinal electrophysiology, the effective extracellular potential can be considered as an approximate measure of the electrotonic loading a neuron's dendritic tree exerts on the soma. A convolution based method is presented to calculate the local spatial average of the effective extracellular loading in retinal ganglion cells (RGCs) in a continuum model of the retina which includes an active RGC tissue layer. The method can be used to study the effect of the dendritic tree size on the activation of RGCs by electrical stimulation using a hexagonal arrangement of electrodes (hexpolar) placed in the suprachoroidal space.

  9. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    SciTech Connect

    Neylon, J. Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-10-15

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria

  10. [Hadamard transform spectrometer mixed pixels' unmixing method].

    PubMed

    Yan, Peng; Hu, Bing-Liang; Liu, Xue-Bin; Sun, Wei; Li, Li-Bo; Feng, Yu-Tao; Liu, Yong-Zheng

    2011-10-01

    The Hadamard transform imaging spectrometer is a multi-channel digital transform spectrometry technology. Based on the working principle and instrument structure of a Hadamard transform spectrometer built around a digital micromirror device (DMD), this paper analyzes the mixed pixels recorded by the imaging sensor and derives, from theory, a method for unmixing the aliased pixels. Simulation results show that the method is simple and effective, improving the recovery accuracy of mixed-pixel spectra by more than 10%. PMID:22250574
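
    A small numpy sketch of the multiplexing principle behind such an instrument: Hadamard-encoded measurements of a spectrum are decoded by the inverse transform. The sizes and noise level are arbitrary, and this does not implement the paper's mixed-pixel unmixing method:

        import numpy as np
        from scipy.linalg import hadamard

        n = 64
        H = hadamard(n)                       # +/-1 Hadamard matrix; rows act as DMD masks
        spectrum = np.exp(-0.5 * ((np.arange(n) - 20.0) / 3.0) ** 2)   # toy spectral line

        # Each measurement is one encoded projection of the spectrum plus detector noise.
        rng = np.random.default_rng(5)
        measurements = H @ spectrum + rng.normal(scale=0.05, size=n)

        # Decode with the inverse transform (H^{-1} = H^T / n for a Hadamard matrix).
        recovered = H.T @ measurements / n
        print("max |error|:", np.abs(recovered - spectrum).max())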

  11. Method for fabricating pixelated silicon device cells

    SciTech Connect

    Nielson, Gregory N.; Okandan, Murat; Cruz-Campa, Jose Luis; Nelson, Jeffrey S.; Anderson, Benjamin John

    2015-08-18

    A method, apparatus and system for flexible, ultra-thin, and high efficiency pixelated silicon or other semiconductor photovoltaic solar cell array fabrication is disclosed. A structure and method of creation for a pixelated silicon or other semiconductor photovoltaic solar cell array with interconnects is described using a manufacturing method that is simplified compared to previous versions of pixelated silicon photovoltaic cells that require more microfabrication steps.

  12. Commissioning of the CMS Forward Pixel Detector

    SciTech Connect

    Kumar, Ashish; /SUNY, Buffalo

    2008-12-01

    The Compact Muon Solenoid (CMS) experiment is scheduled for physics data taking in summer 2009 after the commissioning of high-energy proton-proton collisions at the Large Hadron Collider (LHC). At the core of the CMS all-silicon tracker is the silicon pixel detector, comprising three barrel layers and two pixel disks in the forward and backward regions, accounting for a total of 66 million channels. The pixel detector will provide high-resolution 3D tracking points, essential for pattern recognition and precise vertexing, while being embedded in a hostile radiation environment. The end disks of the pixel detector, known as the Forward Pixel detector, were assembled and tested at Fermilab, USA. The Forward Pixel detector has 18 million pixel cells with dimensions of 100 x 150 μm². The complete forward pixel detector was shipped to CERN in December 2007, where it underwent extensive system tests for commissioning prior to installation. The pixel system was put in its final place inside CMS following the installation and bake-out of the LHC beam pipe in July 2008. It has been integrated with the other sub-detectors in the readout since September 2008 and participated in cosmic data taking. This report covers the strategy and results from the commissioning of the CMS forward pixel detector at CERN.

  13. Implementation of TDI based digital pixel ROIC with 15μm pixel pitch

    NASA Astrophysics Data System (ADS)

    Ceylan, Omer; Shafique, Atia; Burak, A.; Caliskan, Can; Abbasi, Shahbaz; Yazici, Melik; Gurbuz, Yasar

    2016-05-01

    A digital pixel with 15 μm pixel pitch for LWIR time delay integration (TDI) applications is implemented, occupying one fourth of the pixel area of a previous digital TDI implementation. TDI is implemented over 8 pixels with an oversampling rate of 2. The ROIC provides a 16-bit output, with 8 MSBs and 8 LSBs. Each pixel can store 75 million electrons with a quantization noise of 500 electrons. The digital pixel TDI implementation is advantageous over its analog counterparts in terms of power consumption, chip area, and signal-to-noise ratio. The digital pixel TDI ROIC is fabricated in a 0.18 μm CMOS process. In the digital pixel TDI implementation, the photocurrent is integrated on an in-pixel capacitor and converted to digital data within the pixel. This digital data triggers the summation counters, which implement the TDI addition. After all pixels in a row have contributed, the summed data is divided by the number of TDI pixels (N) to obtain the final output, whose signal-to-noise ratio (SNR) is improved over that of a single pixel by a factor of the square root of N.
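
    A toy numpy check of the square-root-of-N SNR statement (white temporal noise and a static scene are assumed; the oversampling, counters, and readout details of the actual ROIC are not modeled):

        import numpy as np

        rng = np.random.default_rng(6)
        N = 8                      # number of TDI stages (pixels summed per column)
        signal = 100.0             # photo-signal per pixel per stage (arbitrary units)
        noise_sigma = 20.0         # per-pixel temporal noise
        trials = 20_000

        single = signal + rng.normal(scale=noise_sigma, size=trials)
        tdi = (signal * N + rng.normal(scale=noise_sigma, size=(trials, N)).sum(axis=1)) / N

        snr_single = single.mean() / single.std()
        snr_tdi = tdi.mean() / tdi.std()
        print("SNR gain: %.2f (sqrt(N) = %.2f)" % (snr_tdi / snr_single, np.sqrt(N)))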

  14. Of FFT-based convolutions and correlations, with application to solving Poisson's equation in an open rectangular pipe

    SciTech Connect

    Ryne, Robert D.

    2011-11-07

    A new method is presented for solving Poisson's equation inside an open-ended rectangular pipe. The method uses Fast Fourier Transforms (FFTs) to perform mixed convolutions and correlations of the charge density with the Green function. Descriptions are provided for algorithms based on the ordinary Green function and for an integrated Green function (IGF). Due to its similarity to the widely used Hockney algorithm for solving Poisson's equation in free space, this capability can be easily implemented in many existing particle-in-cell beam dynamics codes.
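
    The free-space building block referred to above (the Hockney-style approach) can be sketched in a few lines of numpy: the Green function is tabulated on a doubled grid and convolved with the charge density by FFTs. The 2-D logarithmic Green function and the crude regularization of the r = 0 sample are assumptions for this sketch, not the integrated-Green-function algorithm of the report:

        import numpy as np

        # Free-space convolution phi = G * rho on a uniform 2-D grid via zero-padded FFTs.
        n = 128
        h = 1.0 / n                                  # grid spacing
        x = (np.arange(2 * n) - n) * h               # doubled domain for aperiodic convolution

        X, Y = np.meshgrid(x, x, indexing="ij")
        r = np.hypot(X, Y)
        G = np.zeros_like(r)
        G[r > 0] = -np.log(r[r > 0]) / (2 * np.pi)   # 2-D Green function of the -Laplacian
        G[n, n] = -np.log(0.3 * h) / (2 * np.pi)     # crude regularization of the self term

        # Compactly supported source: a single unit point charge inside the physical n x n region.
        rho = np.zeros((2 * n, 2 * n))
        rho[n // 2, n // 2] = 1.0 / h**2             # delta function as a grid density

        # Circular convolution on the doubled grid equals the aperiodic convolution here,
        # because source and targets stay inside the physical region (no wrap-around).
        phi = np.real(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(G)) * np.fft.fft2(rho))) * h**2

        # Check the potential ten cells away from the charge against the analytic value.
        print("numeric: %.4f   analytic: %.4f"
              % (phi[n // 2 + 10, n // 2], -np.log(10 * h) / (2 * np.pi)))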

  15. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods. 10; Chapter

    NASA Technical Reports Server (NTRS)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post

  16. Fast computational integral imaging reconstruction by combined use of spatial filtering and rearrangement of elemental image pixels

    NASA Astrophysics Data System (ADS)

    Jang, Jae-Young; Cho, Myungjin

    2015-12-01

    In this paper, we propose a new fast computational integral imaging reconstruction (CIIR) scheme that avoids deterioration of the spatial filtering effect by combining spatial filtering with rearrangement of elemental image pixels. In the proposed scheme, the elemental image array (EIA) recorded by the lenslet array is spatially filtered through convolution with a depth-dependent delta function array for a given depth. Then, the spatially filtered EIA is reconstructed as a 3D slice image using an elemental-image pixel rearrangement technique. Our scheme provides both fast calculation with the same properties as conventional CIIR and improved visual quality of the reconstructed 3D slice image. To verify our scheme, we perform preliminary experiments and compare it with other techniques.

  17. Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi

    2016-07-01

    Existing deep convolutional neural networks (CNNs) have shown great success in image classification. CNNs mainly consist of convolutional and pooling layers, both of which operate on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information among sequential data, and they require only a limited number of network parameters. General RNNs can hardly be directly applied to non-sequential data. Thus, we propose hierarchical RNNs (HRNNs). In HRNNs, each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections model scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than HSRN at the price of higher computational cost. In this manuscript, we integrate CNNs with HRNNs, and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397, and MIT indoor, and competitive results on ILSVRC 2012.

  18. A stochastic convolution/superposition method with isocenter sampling to evaluate intrafraction motion effects in IMRT

    SciTech Connect

    Naqvi, Shahid A.; D'Souza, Warren D.

    2005-04-01

    Current methods to calculate dose distributions with organ motion can be broadly classified as 'dose convolution' and 'fluence convolution' methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality as well as the speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting with a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements gave an agreement with the calculated infinite fraction average to within 2 mm in the isodose curves. The results also
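
    The 'dose convolution' baseline mentioned at the start of this abstract is easy to state in code (a 1-D toy with a Gaussian motion PDF; it reproduces the shift-invariant approximation, not the authors' stochastic isocenter-sampling method):

        import numpy as np

        # Static 1-D dose profile: a flat 40 mm field with sharp edges (1 mm grid).
        x = np.arange(-60.0, 60.0, 1.0)
        static_dose = ((x > -20) & (x < 20)).astype(float)

        # Motion PDF: Gaussian isocenter displacement with 5 mm standard deviation.
        pdf = np.exp(-0.5 * (x / 5.0) ** 2)
        pdf /= pdf.sum()

        # "Dose convolution": blur the static dose with the motion PDF. Valid only
        # where the dose is shift-invariant (errors appear near surfaces and
        # inhomogeneities, which is what motivates fluence convolution instead).
        blurred_dose = np.convolve(static_dose, pdf, mode="same")
        print("penumbra around the field edge:", blurred_dose[np.abs(x - 20) < 6].round(2))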

  19. Faster GPU-based convolutional gridding via thread coarsening

    NASA Astrophysics Data System (ADS)

    Merry, B.

    2016-07-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.

  20. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964.

  1. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC.

  2. Convolution seal for transition duct in turbine system

    SciTech Connect

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-05-26

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.

  3. Convolution seal for transition duct in turbine system

    SciTech Connect

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-03-10

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.

  4. Convolutional neural networks for mammography mass lesion classification.

    PubMed

    Arevalo, John; Gonzalez, Fabio A; Ramos-Pollan, Raul; Oliveira, Jose L; Guevara Lopez, Miguel Angel

    2015-08-01

    Feature extraction is a fundamental step when mammography image analysis is addressed using learning based approaches. Traditionally, problem dependent handcrafted features are used to represent the content of images. An alternative approach successfully applied in other domains is the use of neural networks to automatically discover good features. This work presents an evaluation of convolutional neural networks to learn features for mammography mass lesions before feeding them to a classification stage. Experimental results showed that this approach is a suitable strategy, outperforming the state-of-the-art representation and raising the area under the ROC curve from 79.9% to 86%. PMID:26736382

  5. Convolutional neural networks for synthetic aperture radar classification

    NASA Astrophysics Data System (ADS)

    Profeta, Andrew; Rodriguez, Andres; Clouse, H. Scott

    2016-05-01

    For electro-optical object recognition, convolutional neural networks (CNNs) are the state-of-the-art. For large datasets, CNNs are able to learn meaningful features used for classification. However, their application to synthetic aperture radar (SAR) has been limited. In this work we experimented with various CNN architectures on the MSTAR SAR dataset. As the input to the CNN we used the magnitude and phase (2 channels) of the SAR imagery. We used the deep learning toolboxes CAFFE and Torch7. Our results show that we can achieve 93% accuracy on the MSTAR dataset using CNNs.

  6. A digital model for streamflow routing by convolution methods

    USGS Publications Warehouse

    Doyle, W.H., Jr.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.

    1984-01-01

    A U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model and subsequent use of the model for simulation are also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature involve daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
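
    As a minimal sketch of the routing idea (not CONROUT itself), the following NumPy lines convolve a hypothetical upstream daily hydrograph with made-up unit-response ordinates to obtain the routed downstream hydrograph.

        import numpy as np

        # Hypothetical daily unit-response ordinates (downstream outflow produced by
        # one unit of upstream inflow on day 0); they sum to 1 to conserve volume.
        unit_response = np.array([0.10, 0.40, 0.30, 0.15, 0.05])

        # Hypothetical upstream daily flows (e.g. in cubic feet per second).
        upstream_flow = np.array([20.0, 50.0, 120.0, 80.0, 40.0, 30.0, 25.0])

        # Routed downstream hydrograph: discrete convolution of the inflow series
        # with the unit response, truncated to the length of the input record.
        downstream_flow = np.convolve(upstream_flow, unit_response)[:upstream_flow.size]
        print(downstream_flow)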

  7. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  8. Deep-learning convolution neural network for computer-aided detection of microcalcifications in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Cha, Kenny; Helvie, Mark A.

    2016-03-01

    A deep learning convolution neural network (DLCNN) was designed to differentiate microcalcification candidates detected during the prescreening stage as true calcifications or false positives in a computer-aided detection (CAD) system for clustered microcalcifications. The microcalcification candidates were extracted from the planar projection image generated from the digital breast tomosynthesis volume reconstructed by a multiscale bilateral filtering regularized simultaneous algebraic reconstruction technique. For training and testing of the DLCNN, true microcalcifications were manually labeled for the data sets and false positives were obtained from the candidate objects identified by the CAD system at prescreening after exclusion of the true microcalcifications. The DLCNN architecture was selected by varying the number of filters, filter kernel sizes and gradient computation parameter in the convolution layers, resulting in a parameter space of 216 combinations. The exhaustive grid search method was used to select an optimal architecture within the parameter space studied, guided by the area under the receiver operating characteristic curve (AUC) as a figure-of-merit. The effects of varying different categories of the parameter space were analyzed. The selected DLCNN was compared with our previously designed CNN architecture for the test set. The AUCs of the CNN and DLCNN were 0.89 and 0.93, respectively. The improvement was statistically significant (p < 0.05).
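
    The exhaustive grid search guided by AUC is straightforward to sketch; in the illustration below, train_and_score is a hypothetical placeholder that trains one candidate network and returns its validation AUC, and the parameter names and ranges are invented rather than the authors' 216-combination space.

        import itertools

        # Illustrative architecture parameter grid (not the authors' actual ranges).
        param_grid = {
            "n_filters":   [16, 32, 64],
            "kernel_size": [3, 5, 7],
            "momentum":    [0.8, 0.9],   # stands in for the gradient computation parameter
        }

        def exhaustive_grid_search(train_and_score):
            # Train one candidate per parameter combination and keep the best AUC.
            best_auc, best_cfg = float("-inf"), None
            keys = list(param_grid)
            for values in itertools.product(*(param_grid[k] for k in keys)):
                cfg = dict(zip(keys, values))
                auc = train_and_score(cfg)   # user-supplied: returns validation AUC
                if auc > best_auc:
                    best_auc, best_cfg = auc, cfg
            return best_cfg, best_auc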

  9. Designing multiplane computer-generated holograms with consideration of the pixel shape and the illumination wave.

    PubMed

    Kämpfe, Thomas; Kley, Ernst-Bernhard; Tünnermann, Andreas

    2008-07-01

    The majority of image-generating computer-generated holograms (CGHs) are calculated on a discrete numerical grid, whose spacing is defined by the desired pixel size. For single-plane CGHs the influence of the pixel shape and the illumination wave on the actual output distribution is minor and can be treated separately from the numerical calculation. We show that in the case of multiplane CGHs this influence is much more severe. We introduce a new method that takes the pixel shape into account during the design and derive conditions to retain an illumination-wave-independent behavior.

  10. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths. PMID:25321062

  11. Multiscale Edge Detection Using a Finite Element Framework for Hexagonal Pixel-Based Images.

    PubMed

    Gardiner, Bryan; Coleman, Sonya A; Scotney, Bryan W

    2016-04-01

    In recent years, the processing of hexagonal pixel-based images has been investigated, and as a result, a number of edge detection algorithms for direct application to such image structures have been developed. We build on this work by presenting a novel and efficient approach to the design of hexagonal image processing operators using linear basis and test functions within the finite element framework. Development of these scalable first order and Laplacian operators using this approach presents a framework both for obtaining large-scale neighborhood operators in an efficient manner and for obtaining edge maps at different scales by efficient reuse of the seven-point linear operator. We evaluate the accuracy of these proposed operators and compare the algorithmic performance using the efficient linear approach with conventional operator convolution for generating edge maps at different scale levels. PMID:26890865

  12. Development of prototype pixellated PIN CdZnTe detectors

    NASA Astrophysics Data System (ADS)

    Narita, Tomohiko; Bloser, Peter F.; Grindlay, Jonathan E.; Sudharsanan, R.; Reiche, C.; Stenstrom, Claudia

    1998-07-01

    We report initial results from the design and evaluation of two pixellated PIN Cadmium Zinc Telluride detectors and an ASIC-based readout system. The prototype imaging PIN detectors consist of 4 × 4 1.5 mm square indium anode contacts with 0.2 mm spacing and a solid cathode plane on 10 × 10 mm CdZnTe substrates of thickness 2 mm and 5 mm. The detector readout system, based on low noise preamplifier ASICs, allows for parallel readout of all channels upon cathode trigger. This prototype is under development for use in future astrophysical hard X-ray imagers with 10 - 600 keV energy response. Measurements of the detector uniformity, spatial resolution, and spectral resolution will be discussed and compared with a similar pixellated MSM detector. Finally, a prototype design for a large imaging array is outlined.

  13. New SOFRADIR 10μm pixel pitch infrared products

    NASA Astrophysics Data System (ADS)

    Lefoul, X.; Pere-Laperne, N.; Augey, T.; Rubaldo, L.; Aufranc, Sébastien; Decaens, G.; Ricard, N.; Mazaleyrat, E.; Billon-Lanfrey, D.; Gravrand, Olivier; Bisotto, Sylvette

    2014-10-01

    Recent advances in the miniaturization of IR imaging technology have led to a growing market for mini thermal-imaging sensors. In that respect, Sofradir's development of smaller pixel pitches has made much more compact products available to users. When this competitive advantage is combined with smaller coolers, made possible by HOT technology, valuable reductions in the size, weight and power of the overall package are achieved. At the same time, we are moving towards a global offer based on digital interfaces that provides our customers with simplifications in the IR system design process while freeing up more space. This paper discusses recent developments in HOT and small pixel pitch technologies as well as efforts on the compact packaging solution developed by SOFRADIR in collaboration with CEA-LETI.

  14. Single-Cell Phenotype Classification Using Deep Convolutional Neural Networks.

    PubMed

    Dürr, Oliver; Sick, Beate

    2016-10-01

    Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods for high-content screening-based phenotype classification. We trained a deep learning classifier in the form of convolutional neural networks with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.

  15. Convolutional neural network architectures for predicting DNA–protein binding

    PubMed Central

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
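
    As a concrete, deliberately small illustration of this family of models, the PyTorch sketch below scans one-hot encoded DNA with 1D convolutional kernels, global-max-pools each kernel's response and maps the result to a binding logit; the layer sizes are arbitrary assumptions, not the architectures explored in the paper.

        import torch
        import torch.nn as nn

        class TinyDNAConvNet(nn.Module):
            # One layer of motif-scanning kernels over one-hot DNA, global max
            # pooling per kernel, and a linear layer producing a binding logit.
            def __init__(self, n_kernels=16, kernel_size=24):
                super().__init__()
                self.conv = nn.Conv1d(4, n_kernels, kernel_size)   # 4 channels: A, C, G, T
                self.pool = nn.AdaptiveMaxPool1d(1)
                self.fc = nn.Linear(n_kernels, 1)

            def forward(self, x):                 # x: (batch, 4, sequence_length)
                h = torch.relu(self.conv(x))
                h = self.pool(h).squeeze(-1)      # (batch, n_kernels)
                return self.fc(h)                 # (batch, 1) binding logit

        # Toy usage on a random one-hot batch of 101-bp sequences.
        idx = torch.randint(0, 4, (8, 101))
        batch = torch.zeros(8, 4, 101).scatter_(1, idx.unsqueeze(1), 1.0)
        logits = TinyDNAConvNet()(batch)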

  16. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    NASA Astrophysics Data System (ADS)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually-engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis and crest factor, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but also healthy bearings and rotor imbalance are included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.
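
    For reference, the manually engineered baseline features named above are easy to compute; the sketch below derives RMS, kurtosis and crest factor from a vibration segment (the ball-pass frequency features, which require bearing geometry and shaft speed, are omitted), with all signal values invented for illustration.

        import numpy as np

        def vibration_features(segment):
            # Classical hand-engineered condition-monitoring features of a segment:
            # root-mean-square level, kurtosis, and crest factor.
            rms = np.sqrt(np.mean(segment ** 2))
            centered = segment - segment.mean()
            kurt = np.mean(centered ** 4) / np.mean(centered ** 2) ** 2
            crest = np.max(np.abs(segment)) / rms
            return np.array([rms, kurt, crest])

        # Toy usage: a noisy sinusoid standing in for an accelerometer segment.
        rng = np.random.default_rng(5)
        segment = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.1 * rng.normal(size=4096)
        print(vibration_features(segment))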

  17. Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks.

    PubMed

    Dosovitskiy, Alexey; Fischer, Philipp; Springenberg, Jost Tobias; Riedmiller, Martin; Brox, Thomas

    2016-09-01

    Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor. PMID:26540673

  19. A Mathematical Motivation for Complex-Valued Convolutional Networks.

    PubMed

    Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur

    2016-05-01

    A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
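
    The three-operation stage described above can be written down directly; the NumPy sketch below is a toy 1D version (complex convolution, entrywise absolute value, local averaging) using windowed complex exponentials as filters so that the stage behaves like a crude windowed absolute-spectrum estimator. It is an illustration under those assumptions, not the authors' construction.

        import numpy as np

        def complex_convnet_stage(x, filters, pool=4):
            # (1) convolve the real input with each complex-valued filter,
            # (2) take the absolute value of every entry,
            # (3) local averaging via non-overlapping mean pooling.
            outputs = []
            for h in filters:
                y = np.abs(np.convolve(x, h, mode="valid"))
                n = (y.size // pool) * pool
                outputs.append(y[:n].reshape(-1, pool).mean(axis=1))
            return np.stack(outputs)

        # Windowed complex exponential filters make this stage a crude
        # multiscale windowed absolute-spectrum estimator.
        t = np.arange(32)
        filters = [np.hanning(32) * np.exp(2j * np.pi * f * t / 32) for f in (2, 4, 8)]
        x = np.random.default_rng(1).normal(size=256)
        features = complex_convnet_stage(x, filters)   # shape (3, 56)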

  20. Multiple deep convolutional neural networks averaging for face alignment

    NASA Astrophysics Data System (ADS)

    Zhang, Shaohua; Yang, Hua; Yin, Zhouping

    2015-05-01

    Face alignment is critical for face recognition, and the deep learning-based method shows promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape. However, most existing deep learning-based approaches are complicated and quite time-consuming during training. We propose a compact face alignment method for fast training without decreasing its accuracy. The rectified linear unit is employed, which allows all networks to converge approximately five times faster than with tanh neurons. A deep convolutional neural network (DCNN) with eight learnable layers, based on local response normalization and a padding convolutional layer (PCL), is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and model combination mode. Extensive experiments validate the effectiveness of our method and demonstrate comparable accuracy with state-of-the-art methods on the BioID, labeled face parts in the wild, and Helen datasets.

  1. Enhancing Neutron Beam Production with a Convoluted Moderator

    SciTech Connect

    Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut

    2014-10-01

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  2. Deep Convolutional Neural Networks for large-scale speech tasks.

    PubMed

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what an appropriate number of hidden units is, and what the best pooling strategy is. Second, we investigate how to incorporate speaker-adapted features, which cannot directly be modeled by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results in these 3 tasks.

  3. Sub-pixel mapping of water boundaries using pixel swapping algorithm (case study: Tagliamento River, Italy)

    NASA Astrophysics Data System (ADS)

    Niroumand-Jadidi, Milad; Vitti, Alfonso

    2015-10-01

    Taking advantage of remotely sensed data for mapping and monitoring water boundaries is of particular importance in many different management and conservation activities. Imagery is classified using automatic techniques to produce maps that enter the water-body analysis chain at several different points. Very commonly, medium or coarse spatial resolution imagery is used in studies of large water bodies. Data of this kind are affected by the presence of mixed pixels, which cause significant problems, in particular when dealing with boundary pixels. A considerable amount of uncertainty inescapably occurs when conventional hard classifiers (e.g., maximum likelihood) are applied to mixed pixels. In this study, the Linear Spectral Mixture Model (LSMM) is used to estimate the proportion of water in boundary pixels. First, by applying unsupervised clustering, the water body is identified approximately and a buffer area is defined to ensure that all boundary pixels are selected. The LSMM is then applied to this buffer region to estimate fractional maps. However, the output of the LSMM does not by itself provide a sub-pixel map of water abundances. To tackle this problem, the Pixel Swapping (PS) algorithm is used to allocate sub-pixels within mixed pixels in such a way as to maximize the spatial proximity of sub-pixels and pixels in the neighborhood. The water areas of two segments of the Tagliamento River (Italy) are mapped at sub-pixel resolution (10 m) using a 30 m Landsat image. To evaluate the proficiency of the proposed approach for sub-pixel boundary mapping, the image is also classified using a conventional hard classifier. A high-resolution image of the same area is also classified and used as a reference for accuracy assessment. According to the results, the sub-pixel map shows on average about 8 percent higher overall accuracy than the hard classification and agrees very well with the reference map along the boundaries.
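
    A minimal sketch of the fraction-estimation step only (the pixel-swapping allocation is not shown): each boundary pixel is unmixed against two assumed endmember spectra by least squares, then clipped and renormalized. The endmember reflectances and band count are invented for illustration.

        import numpy as np

        def lsmm_fractions(pixels, endmembers):
            # pixels: (n_pixels, n_bands); endmembers: (n_classes, n_bands).
            # Solve pixels ~= fractions @ endmembers by least squares, then clip
            # negatives and renormalize so each pixel's fractions sum to one.
            f, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
            f = np.clip(f.T, 0.0, None)
            return f / f.sum(axis=1, keepdims=True)

        # Illustrative 4-band endmembers and one synthetic 70/30 water-land mixture.
        water = np.array([0.05, 0.04, 0.03, 0.01])
        land = np.array([0.10, 0.15, 0.25, 0.35])
        endmembers = np.stack([water, land])
        mixed = 0.7 * water + 0.3 * land
        print(lsmm_fractions(mixed[None, :], endmembers))   # approximately [[0.7, 0.3]]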

  4. Pixel multichip module development at Fermilab

    SciTech Connect

    Turqueti, M A; Cardoso, G; Andresen, J; Appel, J A; Christian, D C; Kwan, S W; Prosser, A; Uplegger, L

    2005-10-01

    At Fermilab, there is an ongoing pixel detector R&D effort for High Energy Physics with the objective of developing high performance vertex detectors suitable for the next generation of HEP experiments. The pixel module presented here is a direct result of work undertaken for the canceled BTeV experiment. It is a very mature piece of hardware, having many characteristics of high performance, low mass and radiation hardness driven by the requirements of the BTeV experiment. The detector presented in this paper consists of three basic devices: the readout integrated circuit (IC) FPIX2A [2][5], the pixel sensor (TESLA p-spray) [6] and the high density interconnect (HDI) flex circuit [1][3] that is capable of supporting eight readout ICs. The characterization of the pixel multichip module prototype as well as the baseline design of the eight chip pixel module and its capabilities are presented. These prototypes were characterized for threshold and noise dispersion. The bump-bonds of the pixel module were examined using an X-ray inspection system. Furthermore, the connectivity of the bump-bonds was tested using a radioactive source (⁹⁰Sr), while the absolute calibration of the modules was achieved using an X-ray source. This paper provides a view of the integration of the three components that together comprise the pixel multichip module.

  5. Micro-Pixel Image Position Sensing Testbed

    NASA Technical Reports Server (NTRS)

    Nemati, Bijan; Shao, Michael; Zhai, Chengxing; Erlig, Hernan; Wang, Xu; Goullioud, Renaud

    2011-01-01

    The search for Earth-mass planets in the habitable zones of nearby Sun-like stars is an important goal of astrophysics. This search is not feasible with the current slate of astronomical instruments. We propose a new concept for microarcsecond astrometry which uses a simplified instrument and hence promises to be low cost. The concept employs a telescope with only a primary, laser metrology applied to the focal plane array, and new algorithms for measuring image position and displacement on the focal plane. The required level of accuracy in both the metrology and image position sensing is at a few micro-pixels. We have begun a detailed investigation of the feasibility of our approach using simulations and a micro-pixel image position sensing testbed called MCT. So far we have been able to demonstrate that the pixel-to-pixel distances in a focal plane can be measured with a precision of 20 micro-pixels and image-to-image distances with a precision of 30 micro-pixels. We have also shown using simulations that our image position algorithm can achieve accuracy of 4 micro-pixels in the presence of lambda/20 wavefront errors.

  6. It's not the pixel count, you fool

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2012-01-01

    The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have, for we need as many mega pixels as possible since the other guys are killing us with their 'umpteen' mega pixel pocket sized digital cameras." And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel-wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is the automatic motion control to stabilize the image on the sensor along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging and what will drive growth of camera sales (not counting the cell phone cameras which totally dominate the market in terms of camera sales) and more importantly after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase the after-sales profits, and don't be surprised if it was done long ago in some basement lab of a photographic company and, of course, before its time.

  7. LISe pixel detector for neutron imaging

    NASA Astrophysics Data System (ADS)

    Herrera, Elan; Hamm, Daniel; Wiggins, Brenden; Milburn, Rob; Burger, Arnold; Bilheux, Hassina; Santodonato, Louis; Chvala, Ondrej; Stowe, Ashley; Lukosi, Eric

    2016-10-01

    Semiconducting lithium indium diselenide, 6LiInSe2 or LISe, has promising characteristics for neutron detection applications. The 95% isotopic enrichment of 6Li results in a highly efficient thermal neutron-sensitive material. In this study, we report on a proof-of-principle investigation of a semiconducting LISe pixel detector to demonstrate its potential as an efficient neutron imager. The LISe pixel detector had a 4×4 array of pixels with a 550 μm pitch on a 5×5×0.56 mm³ LISe substrate. An experimentally verified spatial resolution of 300 μm was observed utilizing a super-sampling technique.

  8. Per-Pixel Lighting Data Analysis

    SciTech Connect

    Inanici, Mehlika

    2005-08-01

    This report presents a framework for per-pixel analysis of the qualitative and quantitative aspects of luminous environments. Recognizing the need for better lighting analysis capabilities and appreciating the new measurement abilities developed within the LBNL Lighting Measurement and Simulation Toolbox, the "Per-pixel Lighting Data Analysis" project demonstrates several techniques for analyzing luminance distribution patterns, luminance ratios, adaptation luminance and glare assessment. The techniques are syntheses of the current practices in lighting design and the unique practices that per-pixel data availability makes possible. Demonstrated analysis techniques are applicable to both computer-generated and digitally captured images (physically-based renderings and High Dynamic Range photographs).

  9. There is no MacWilliams identity for convolutional codes. [transmission gain comparison

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.

  10. Using convolutional decoding to improve time delay and phase estimation in digital communications

    DOEpatents

    Ormesher, Richard C.; Mason, John J.

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  11. Pixels, Imagers and Related Fabrication Methods

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor)

    2016-01-01

    Pixels, imagers and related fabrication methods are described. The described methods result in cross-talk reduction in imagers and related devices by generating depletion regions. The devices can also be used with electronic circuits for imaging applications.

  12. Pixels, Imagers and Related Fabrication Methods

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor)

    2014-01-01

    Pixels, imagers and related fabrication methods are described. The described methods result in cross-talk reduction in imagers and related devices by generating depletion regions. The devices can also be used with electronic circuits for imaging applications.

  13. Design of the small pixel pitch ROIC

    NASA Astrophysics Data System (ADS)

    Liang, Qinghua; Jiang, Dazhao; Chen, Honglei; Zhai, Yongcheng; Gao, Lei; Ding, Ruijun

    2014-11-01

    As the third-generation IRFPA technology trend towards higher resolution has steadily progressed, the pixel pitch of IRFPAs has been greatly reduced. A 640×512 readout integrated circuit (ROIC) for an IRFPA with 15 μm pixel pitch is presented in this paper. The 15 μm pixel pitch ROIC design faces many challenges. As is well known, the integrating capacitor is a key performance parameter when considering pixel area, charge capacity and dynamic range, so we adopt the effective method of 2-by-2 pixels sharing an integrating capacitor to solve this problem. The input unit cell architecture contains two paralleled sample-and-hold parts, which not only allow the FPA to be operated in full-frame snapshot mode but also save unit circuit area. Different applications need suitably matched input unit circuits. Because the dimension of 2×2 pixels is 30 μm × 30 μm, an input stage based on direct injection (DI), which has a medium injection ratio and small layout area, proves suitable for the middle wave (MW) band, while a BDI stage with a three-transistor cascode amplifier is used for the long wave (LW) band. By adopting a 0.35 μm 2P4M mixed-signal process, the circuit architecture achieves an effective charge capacity of 7.8 Me- per pixel with a 2.2 V output range for MW and 7.3 Me- per pixel with a 2.6 V output range for LW. According to the simulation results, this circuit works well under a 5 V power supply and achieves less than 0.1% nonlinearity.

  14. Steganography based on pixel intensity value decomposition

    NASA Astrophysics Data System (ADS)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
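
    For context, the sketch below shows the classical binary-decomposition baseline that such schemes are compared against: writing a secret bit string into a chosen bit-plane of 8-bit pixels. The proposed 16-virtual-bit-plane decomposition itself is not reproduced here.

        import numpy as np

        def embed_in_bitplane(pixels, bits, plane=0):
            # Overwrite one bit-plane of the classical binary decomposition with the
            # secret bits (plane 0 is the least significant bit).
            flat = pixels.astype(np.uint8).ravel().copy()
            bits = np.asarray(bits, dtype=np.uint8)
            clear_mask = np.uint8(0xFF ^ (1 << plane))
            flat[:bits.size] = (flat[:bits.size] & clear_mask) | (bits << np.uint8(plane))
            return flat.reshape(pixels.shape)

        # Toy usage: hide eight bits in the LSB plane of an 8x8 cover image.
        cover = np.random.default_rng(6).integers(0, 256, (8, 8), dtype=np.uint8)
        stego = embed_in_bitplane(cover, [1, 0, 1, 1, 0, 0, 1, 0], plane=0)
        assert int(np.abs(stego.astype(int) - cover.astype(int)).max()) <= 1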

  15. Focal plane array with modular pixel array components for scalability

    DOEpatents

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  16. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.

  17. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.

  18. Convolution properties for certain classes of multivalent functions

    NASA Astrophysics Data System (ADS)

    Sokól, Janusz; Trojnar-Spelina, Lucyna

    2008-01-01

    Recently N.E. Cho, O.S. Kwon and H.M. Srivastava [Nak Eun Cho, Oh Sang Kwon, H.M. Srivastava, Inclusion relationships and argument properties for certain subclasses of multivalent functions associated with a family of linear operators, J. Math. Anal. Appl. 292 (2004) 470-483] have introduced the class of multivalent analytic functions and have given a number of results. This class has been defined by means of a special linear operator associated with the Gaussian hypergeometric function. In this paper we have extended some of the previous results and have given other properties of this class. We have made use of differential subordinations and properties of convolution in geometric function theory.

  19. Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Kaipeng; Zhang, Zhanpeng; Li, Zhifeng; Qiao, Yu

    2016-10-01

    Face detection and alignment in unconstrained environments are challenging due to various poses, illuminations and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this paper, we propose a deep cascaded multi-task framework which exploits the inherent correlation between them to boost their performance. In particular, our framework adopts a cascaded structure with three stages of carefully designed deep convolutional networks that predict face and landmark location in a coarse-to-fine manner. In addition, in the learning process, we propose a new online hard sample mining strategy that can improve the performance automatically without manual sample selection. Our method achieves superior accuracy over the state-of-the-art techniques on the challenging FDDB and WIDER FACE benchmarks for face detection, and the AFLW benchmark for face alignment, while keeping real-time performance.

  20. Deep convolutional neural networks for ATR from SAR imagery

    NASA Astrophysics Data System (ADS)

    Morgan, David A. E.

    2015-05-01

    Deep architectures for classification and representation learning have recently attracted significant attention within academia and industry, with many impressive results across a diverse collection of problem sets. In this work we consider the specific application of Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) data from the MSTAR public release data set. The classification performance achieved using a Deep Convolutional Neural Network (CNN) on this data set was found to be competitive with existing methods considered to be state-of-the-art. Unlike most existing algorithms, this approach can learn discriminative feature sets directly from training data instead of requiring pre-specification or pre-selection by a human designer. We show how this property can be exploited to efficiently adapt an existing classifier to recognise a previously unseen target and discuss potential practical applications.

  1. Drug-Drug Interaction Extraction via Convolutional Neural Networks.

    PubMed

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong

    2016-01-01

    Drug-drug interaction (DDI) extraction as a typical relation extraction task in natural language processing (NLP) has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method which almost does not need manually defined features, has exhibited great potential for many NLP tasks. It is worth employing CNN for DDI extraction, which has never been investigated. We proposed a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction. The CNN-based DDI extraction method achieves an F-score of 69.75%, which outperforms the existing best performing method by 2.75%. PMID:26941831

  3. Enhanced Line Integral Convolution with Flow Feature Detection

    NASA Technical Reports Server (NTRS)

    Lane, David; Okada, Arthur

    1996-01-01

    The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
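
    A very small and slow sketch of the basic LIC idea (not the enhanced method of the paper): for each pixel, white-noise values are averaged along a streamline traced a fixed number of unit steps forward and backward through the vector field. The flow field and texture size are arbitrary.

        import numpy as np

        def lic(vx, vy, noise, steps=10):
            # Average noise values sampled along the local streamline through each
            # pixel (fixed-step Euler tracing; the seed pixel is visited twice, once
            # per direction, which is acceptable for a sketch).
            h, w = noise.shape
            out = np.zeros_like(noise)
            for i in range(h):
                for j in range(w):
                    total, count = 0.0, 0
                    for sign in (1.0, -1.0):
                        x, y = float(j), float(i)
                        for _ in range(steps):
                            ix, iy = int(round(x)), int(round(y))
                            if not (0 <= ix < w and 0 <= iy < h):
                                break
                            total += noise[iy, ix]
                            count += 1
                            u, v = vx[iy, ix], vy[iy, ix]
                            norm = np.hypot(u, v) or 1.0   # avoid division by zero
                            x += sign * u / norm
                            y += sign * v / norm
                    out[i, j] = total / count
            return out

        # Toy usage: circular flow field over a 64x64 white-noise texture.
        yy, xx = np.mgrid[0:64, 0:64].astype(float)
        vx, vy = -(yy - 31.5), (xx - 31.5)
        texture = lic(vx, vy, np.random.default_rng(3).random((64, 64)))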

  4. Plane-wave decomposition by spherical-convolution microphone array

    NASA Astrophysics Data System (ADS)

    Rafaely, Boaz; Park, Munhum

    2001-05-01

    Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.

  5. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.

    PubMed

    He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian

    2015-09-01

    Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.
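
    The pooling idea itself fits in a few lines; the PyTorch sketch below (pyramid levels and channel count chosen arbitrarily) max-pools a feature map into a fixed set of bins per level and concatenates them, so feature maps of different spatial sizes produce vectors of identical length, which is the property SPP-net relies on.

        import torch
        import torch.nn.functional as F

        def spatial_pyramid_pool(features, levels=(1, 2, 4)):
            # Max-pool the feature map into l x l bins for each pyramid level and
            # concatenate, giving a fixed-length vector regardless of input H and W.
            pooled = [F.adaptive_max_pool2d(features, l).flatten(start_dim=1)
                      for l in levels]
            return torch.cat(pooled, dim=1)

        # Feature maps of different spatial sizes yield identically sized outputs.
        a = spatial_pyramid_pool(torch.randn(1, 256, 13, 13))
        b = spatial_pyramid_pool(torch.randn(1, 256, 20, 15))
        assert a.shape == b.shape == (1, 256 * (1 + 4 + 16))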

  6. Planar slim-edge pixel sensors for the ATLAS upgrades

    NASA Astrophysics Data System (ADS)

    Altenheiner, S.; Goessling, C.; Jentzsch, J.; Klingenberg, R.; Lapsien, T.; Muenstermann, D.; Rummler, A.; Troska, G.; Wittig, T.

    2012-02-01

    The ATLAS detector at CERN is a general-purpose experiment at the Large Hadron Collider (LHC). The ATLAS Pixel Detector is the innermost tracking detector of ATLAS and requires a sufficient level of hermeticity to achieve superb track reconstruction performance. The current planar n-type pixel sensors feature a pixel matrix of n+-implantations which is (on the opposite p-side) surrounded by so-called guard rings to reduce the high voltage stepwise towards the cutting edge and an additional safety margin. Because of the inactive region around the active area, the sensor modules have been shingled on top of each other's edge which limits the thermal performance and adds complexity in the present detector. The first upgrade phase of the ATLAS pixel detector will consist of the insertable b-layer (IBL), an additional b-layer which will be inserted into the present detector in 2013. Several changes in the sensor design with respect to the existing detector had to be applied to comply with the IBL's specifications and are described in detail. A key issue for the ATLAS upgrades is a flat arrangement of the sensors. To maintain the required level of hermeticity in the detector, the inactive sensor edges have to be reduced to minimize the dead space between the adjacent detector modules. Unirradiated and irradiated sensors with the IBL design have been operated in test beams to study the efficiency performance in the sensor edge region and it was found that the inactive edge width could be reduced from 1100 μm to less than 250 μm.

  7. Charge Sharing and Charge Loss in a Cadmium-Zinc-Telluride Fine-Pixel Detector Array

    NASA Technical Reports Server (NTRS)

    Gaskin, J. A.; Sharma, D. P.; Ramsey, B. D.; Six, N. Frank (Technical Monitor)

    2002-01-01

    Because of its high atomic number, room-temperature operation, low noise, and high spatial resolution, a Cadmium-Zinc-Telluride (CZT) multi-pixel detector is ideal for hard x-ray astrophysical observation. As part of on-going research at MSFC (Marshall Space Flight Center) to develop multi-pixel CdZnTe detectors for this purpose, we have measured charge sharing and charge loss for a 4x4 (750 micron pitch), 1 mm thick pixel array and modeled these results using a Monte-Carlo simulation. This model was then used to predict the amount of charge sharing for a much finer pixel array (with a 300 micron pitch). Future work will enable us to compare the simulated results for the finer array to measured values.

  8. Spatial clustering of pixels of a multispectral image

    SciTech Connect

    Conger, James Lynn

    2014-08-19

    A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and the most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
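
    A rough sketch of the first step only, assuming cosine similarity as the spectral similarity measure (the record above does not specify one): compute, for every pixel, the highest similarity to any of its eight neighbors, which can then be thresholded before clustering. Edge pixels wrap around in this toy version because np.roll is used for the neighbor shifts.

        import numpy as np

        def max_neighbor_similarity(cube):
            # cube: (height, width, bands). Returns the highest cosine similarity
            # between each pixel and its 8 neighbors (neighbors wrap at the edges
            # in this sketch because np.roll is used for shifting).
            unit = cube / (np.linalg.norm(cube, axis=2, keepdims=True) + 1e-12)
            best = np.full(cube.shape[:2], -1.0)
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    shifted = np.roll(np.roll(unit, di, axis=0), dj, axis=1)
                    best = np.maximum(best, (unit * shifted).sum(axis=2))
            return best

        # Pixels whose best neighbor similarity clears a threshold are candidates
        # for clustering; the rest are filtered out.
        cube = np.random.default_rng(4).random((32, 32, 6))
        candidates = max_neighbor_similarity(cube) > 0.95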

  9. Charge Loss and Charge Sharing Measurements for Two Different Pixelated Cadmium-Zinc-Telluride Detectors

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Sharma, Dharma; Ramsey, Brian; Seller, Paul

    2003-01-01

    As part of ongoing research at Marshall Space Flight Center, Cadmium-Zinc-Telluride (CdZnTe) pixelated detectors are being developed for use at the focal plane of the High Energy Replicated Optics (HERO) telescope. HERO requires a 64x64 pixel array with a spatial resolution of around 200 microns (with a 6m focal length) and high energy resolution (< 2% at 60keV). We are currently testing smaller arrays as a necessary first step towards this goal. In this presentation, we compare charge sharing and charge loss measurements between two devices that differ both electronically and geometrically. The first device consists of a 1-mm-thick piece of CdZnTe that is sputtered with a 4x4 array of pixels with pixel pitch of 750 microns (inter-pixel gap is 100 microns). The signal is read out using discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe that is sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap is 50 microns). Instead of using discrete preamplifiers, the crystal is bonded to an ASIC that provides all of the front-end electronics to each of the 256 pixels. Further, we compare the measured results with simulated results and discuss to what degree the bias voltage (i.e. the electric field), and hence the drift and diffusion coefficients, affects our measurements.

  10. Study on pixel matching method of the multi-angle observation from airborne AMPR measurements

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Qie, Lili; Li, Zhengqiang; Sun, Xiaobing; Hong, Jin; Chen, Xingfeng; Xu, Hua; Sun, Bin; Wang, Han

    2015-10-01

    In the along-track scanning mode, the same place along the ground track can be detected by the Advanced Multi-angular Polarized Radiometer (AMPR) at several different scanning angles from -55 to 55 degrees, which provides a possible means of obtaining multi-angular detections for some nearby pixels. However, due to the ground sample spacing and the spatial footprint of the detection, the different footprint sizes cannot guarantee the spatial matching of partly overlapping pixels, which becomes a bottleneck for the effective use of the multi-angular information from AMPR to study aerosol and surface polarized properties. Based on our definition and calculation of the pixel coincidence rate for multi-angular detection, an effective pixel matching method for multi-angle observations is presented to solve the spatial matching problem for airborne AMPR. Each AMPR pixel is assumed to be an ellipse whose major and minor axes depend on the flight attitude and the scanning angle. By defining a coordinate system and an origin, latitude and longitude can be transformed into Euclidean distances, and the pixel coincidence rate of two nearby ellipses can be calculated. By traversing every ground pixel, those pixels with a high coincidence rate can be selected and merged, and with further quality control of the observation data, a ground-pixel dataset with multi-angular detections can be obtained and analyzed, providing support for multi-angular and polarized retrieval algorithm research in the next study.

  11. Plasma evolution and dynamics in high-power vacuum-transmission-line post-hole convolutes

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Welch, D. R.; Hughes, T. P.; Clark, R. E.; Stygar, W. A.

    2008-06-01

    Vacuum-post-hole convolutes are used in pulsed high-power generators to join several magnetically insulated transmission lines (MITL) in parallel. Such convolutes add the output currents of the MITLs, and deliver the combined current to a single MITL that, in turn, delivers the current to a load. Magnetic insulation of electron flow, established upstream of the convolute region, is lost at the convolute due to symmetry breaking and the formation of magnetic nulls, resulting in some current losses. At very high-power operating levels and long pulse durations, the expansion of electrode plasmas into the MITL of such devices is considered likely. This work examines the evolution and dynamics of cathode plasmas in the double-post-hole convolutes used on the Z accelerator [R. B. Spielman et al., Phys. Plasmas 5, 2105 (1998)]. Three-dimensional particle-in-cell (PIC) simulations that model the entire radial extent of the Z accelerator convolute—from the parallel-plate transmission-line power feeds to the z-pinch load region—are used to determine electron losses in the convolute. The results of the simulations demonstrate that significant current losses (1.5 MA out of a total system current of 18.5 MA), which are comparable to the losses observed experimentally, could be caused by the expansion of cathode plasmas in the convolute regions.

  12. Adaptive Multi-Objective Sub-Pixel Mapping Framework Based on Memetic Algorithm for Hyperspectral Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Y.; Zhang, L.

    2012-07-01

    Sub-pixel mapping techniques can specify the location of each class within a pixel based on the assumption of spatial dependence. Traditional sub-pixel mapping algorithms only consider the spatial dependence at the pixel level; the spatial dependence of each sub-pixel is ignored and sub-pixel spatial relations are lost. In this paper, a novel multi-objective sub-pixel mapping framework based on a memetic algorithm, namely MSMF, is proposed. In MSMF, sub-pixel mapping is transformed into a multi-objective optimization problem that maximizes the spatial dependence index (SDI) and Moran's I simultaneously. A memetic algorithm, which combines global search strategies with local search heuristics, is utilized to solve the multi-objective problem. In this framework, the sub-pixel mapping problem can be solved using different evolutionary algorithms and local algorithms. In this paper, a memetic algorithm based on the clonal selection algorithm (CSA) and random swapping is designed as an example and applied in the proposed MSMF. In MSMF, CSA inherits the biological properties of human immune systems, i.e., cloning, mutation, and memory, to search for possible sub-pixel mapping solutions in the global space. After the exploration based on CSA, a local search based on random swapping is employed to dynamically decide which neighbourhood should be selected to stress exploitation in each generation. In addition, a solution set is used in MSMF to hold and update the obtained non-dominated solutions for the multi-objective problem. Experimental results demonstrate that the proposed approach outperforms traditional sub-pixel mapping algorithms, and hence provides an effective option for sub-pixel mapping of hyperspectral remote sensing imagery.
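
    The full MSMF (CSA-based global search, two objectives, and a non-dominated solution set) is beyond a short example; the sketch below illustrates only the random-swapping local search, using a simple count of equal-valued neighboring sub-pixels as a stand-in for the spatial dependence objective. Parameter names and defaults are illustrative.

      import numpy as np

      def neighbor_agreement(grid):
          """Simple spatial-dependence surrogate: the number of equal-valued
          4-connected neighbor pairs in a sub-pixel class grid."""
          return (np.sum(grid[1:, :] == grid[:-1, :]) +
                  np.sum(grid[:, 1:] == grid[:, :-1]))

      def random_swap_search(grid, coarse_shape, scale, iters=5000,
                             rng=np.random.default_rng(0)):
          """Swap two sub-pixels of different classes inside the same coarse pixel
          (preserving class fractions) and keep swaps that do not decrease the
          spatial-dependence surrogate."""
          grid = grid.copy()
          best = neighbor_agreement(grid)
          h, w = coarse_shape
          for _ in range(iters):
              i, j = rng.integers(h), rng.integers(w)        # pick one coarse pixel
              y1, y2 = i * scale + rng.integers(scale, size=2)
              x1, x2 = j * scale + rng.integers(scale, size=2)
              if grid[y1, x1] == grid[y2, x2]:
                  continue
              grid[y1, x1], grid[y2, x2] = grid[y2, x2], grid[y1, x1]
              score = neighbor_agreement(grid)
              if score >= best:
                  best = score
              else:                                          # undo a bad swap
                  grid[y1, x1], grid[y2, x2] = grid[y2, x2], grid[y1, x1]
          return grid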

  13. Pixel Dynamics Analysis of Photospheric Spectral Data

    NASA Astrophysics Data System (ADS)

    Rasca, Anthony P.; Chen, James; Pevtsov, Alexei A.

    2015-04-01

    Recent advances in solar observations have led to higher-resolution surface (photosphere) images that reveal bipolar magnetic features operating near the resolution limit during emerging flux events. Further improvements in resolution are expected to reveal even smaller dynamic features. Such photospheric features provide observable indications of what is happening before, during, and after flux emergence, eruptions in the corona, and other phenomena. Visible changes in photospheric active regions also play a major role in predicting eruptions that are responsible for geomagnetic plasma disturbances. A new method has been developed to extract physical information from photospheric data (e.g., SOLIS Stokes parameters) based on the statistics of pixel-by-pixel variations in spectral (absorption or emission) line quantities such as line profile Doppler shift, width, asymmetry, and flatness. Such properties are determined by the last interaction between detected photons and optically thick photospheric plasmas, and may contain extractable information on local plasma properties at sub-pixel scales. Applying the method to photospheric data with high spectral resolution, our pixel-by-pixel analysis is performed for various regions on the solar disk, ranging from quiet-Sun regions to active regions exhibiting eruptions, characterizing photospheric dynamics using spectral profiles. In particular, the method quantitatively characterizes the time profile of changes in spectral properties in photospheric features and provides improved physical constraints on observed quantities.

  14. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, S.; Cole, D. M.; Hancock, B. R.; Smith, R. M.

    2008-01-01

    Electronic coupling effects such as Inter-Pixel Capacitance (IPC) affect the quantitative interpretation of image data from CMOS, hybrid visible and infrared imagers alike. Existing methods of characterizing IPC do not provide a map of the spatial variation of IPC over all pixels. We demonstrate a deterministic method that provides a direct quantitative map of the crosstalk across an imager. The approach requires only the ability to reset single pixels to an arbitrary voltage, different from the rest of the imager. No illumination source is required. Mapping IPC independently for each pixel is also made practical by the greater S/N ratio achievable for an electrical stimulus than for an optical stimulus, which is subject to both Poisson statistics and diffusion effects of photo-generated charge. The data we present illustrates a more complex picture of IPC in Teledyne HgCdTe and HyViSi focal plane arrays than is presently understood, including the presence of a newly discovered, long range IPC in the HyViSi FPA that extends tens of pixels in distance, likely stemming from extended field effects in the fully depleted substrate. The sensitivity of the measurement approach has been shown to be good enough to distinguish spatial structure in IPC of the order of 0.1%.
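
    The record describes a measurement rather than an algorithm, but the reduction step it implies is simple: in a difference frame where a single pixel has been reset to a distinct voltage, the signal appearing in its neighbors is attributed to electrical coupling. A minimal sketch, with illustrative names and an assumed sum-to-one normalization:

      import numpy as np

      def ipc_kernel(frame, row, col, half=2):
          """Estimate the inter-pixel coupling kernel around a single reset pixel.
          `frame` is a difference image (reset frame minus baseline) in which only
          the pixel at (row, col) was driven."""
          patch = frame[row - half:row + half + 1,
                        col - half:col + half + 1].astype(float)
          return patch / patch.sum()            # normalise so the kernel sums to 1

      def nn_coupling(frame, row, col):
          """Mean nearest-neighbour coupling fraction; repeating this over a grid
          of reset pixels gives a spatial map of IPC across the detector."""
          k = ipc_kernel(frame, row, col, half=1)
          return (k[0, 1] + k[2, 1] + k[1, 0] + k[1, 2]) / 4.0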

  15. Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment

    EPA Science Inventory

    Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...

  16. Uncooled infrared detectors toward smaller pixel pitch with newly proposed pixel structure

    NASA Astrophysics Data System (ADS)

    Tohyama, Shigeru; Sasaki, Tokuhito; Endoh, Tsutomu; Sano, Masahiko; Katoh, Kouji; Kurashina, Seiji; Miyoshi, Masaru; Yamazaki, Takao; Ueno, Munetaka; Katayama, Haruyoshi; Imai, Tadashi

    2011-06-01

    Since the authors successfully demonstrated an uncooled infrared (IR) focal plane array (FPA) with 23.5 um pixel pitch, it has been widely utilized for commercial applications such as thermography and security cameras. One of the key issues for uncooled IR detector technology is shrinking the pixel size: the smaller the pixel pitch, the more compact and less costly IR camera products become. This paper proposes a new pixel structure with a diaphragm and beams placed at different levels, to realize an uncooled IRFPA with smaller pixel pitch (<= 17 μm). The upper level consists of a diaphragm with VOx bolometer and IR absorber layers, while the lower level consists of the two beams, which are designed to be placed on the adjacent pixels. Test devices of this pixel design with 12 um, 15 um and 17 um pitch have been fabricated on a QVGA (320 × 240) Si ROIC with 23.5 um pitch. Their performance is nearly equal to that of the IRFPA with 23.5 um pitch; for example, the noise equivalent temperature difference (NETD) of the 12 μm pixel is 63.1 mK with a thermal time constant of 14.5 msec. In addition, this new structure is expected to be effective also for the existing IRFPA with 23.5 um pitch, in order to improve the IR responsivity.

  17. Development of CMOS Pixel Sensors with digital pixel dedicated to future particle physics experiments

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Wang, T.; Pham, H.; Hu-Guo, C.; Dorokhov, A.; Hu, Y.

    2014-02-01

    Two prototypes of CMOS pixel sensor with in-pixel analog to digital conversion have been developed in a 0.18 μm CIS process. The first design integrates a discriminator into each pixel within an area of 22 × 33 μm2 in order to meet the requirements of the ALICE inner tracking system (ALICE-ITS) upgrade. The second design features 3-bit charge encoding inside a 35 × 35 μm2 pixel which is motivated by the specifications of the outer layers of the ILD vertex detector (ILD-VXD). This work aims to validate the concept of in-pixel digitization which offers higher readout speed, lower power consumption and less dead zone compared with the column-level charge encoding.

  18. Optical links for the ATLAS Pixel Detector

    NASA Astrophysics Data System (ADS)

    Stucci, Stefania

    2016-07-01

    With the expected increase in the instantaneous luminosity of the LHC in the next few years, the off-detector optical read-out system of the outer two layers of the Pixel Detector of the ATLAS experiment will reach its bandwidth limits. The bandwidth will be increased with new optical receivers, which had to be redesigned since commercial solutions could not be used. The new design allows for a wider operational range in terms of data frequency and input optical power to match the on-detector transmitters of the present Pixel Detector. We report on the design and testing of prototypes of these components and the plans for the installation in the Pixel Detector read-out chain in 2015.

  19. Power Studies for the CMS Pixel Tracker

    SciTech Connect

    Todri, A.; Turqueti, M.; Rivera, R.; Kwan, S.; /Fermilab

    2009-01-01

    The Electronic Systems Engineering Department of the Computing Division at the Fermi National Accelerator Laboratory is carrying out R&D investigations for the upgrade of the power distribution system of the Compact Muon Solenoid (CMS) Pixel Tracker at the Large Hadron Collider (LHC). Among the goals of this effort is that of analyzing the feasibility of alternative powering schemes for the forward tracker, including DC to DC voltage conversion techniques using commercially available and custom switching regulator circuits. Tests of these approaches are performed using the PSI46 pixel readout chip currently in use at the CMS Tracker. Performance measures of the detector electronics will include pixel noise and threshold dispersion results. Issues related to susceptibility to switching noise will be studied and presented. In this paper, we describe the current power distribution network of the CMS Tracker, study the implications of the proposed upgrade with DC-DC converters powering scheme and perform noise susceptibility analysis.

  20. Vivid, full-color aluminum plasmonic pixels

    PubMed Central

    Olson, Jana; Manjavacas, Alejandro; Liu, Lifei; Chang, Wei-Shun; Foerster, Benjamin; King, Nicholas S.; Knight, Mark W.; Nordlander, Peter; Halas, Naomi J.; Link, Stephan

    2014-01-01

    Aluminum is abundant, low in cost, compatible with complementary metal-oxide semiconductor manufacturing methods, and capable of supporting tunable plasmon resonance structures that span the entire visible spectrum. However, the use of Al for color displays has been limited by its intrinsically broad spectral features. Here we show that vivid, highly polarized, and broadly tunable color pixels can be produced from periodic patterns of oriented Al nanorods. Whereas the nanorod longitudinal plasmon resonance is largely responsible for pixel color, far-field diffractive coupling is used to narrow the plasmon linewidth, enabling monochromatic coloration and significantly enhancing the far-field scattering intensity of the individual nanorod elements. The bright coloration can be observed with p-polarized white light excitation, consistent with the use of this approach in display devices. The resulting color pixels are constructed with a simple design, are compatible with scalable fabrication methods, and provide contrast ratios exceeding 100:1. PMID:25225385

  1. Active Pixel Sensors: Are CCD's Dinosaurs?

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.

    1993-01-01

    Charge-coupled devices (CCD's) are presently the technology of choice for most imaging applications. In the 23 years since their invention in 1970, they have evolved to a sophisticated level of performance. However, as with all technologies, we can be certain that they will be supplanted someday. In this paper, the Active Pixel Sensor (APS) technology is explored as a possible successor to the CCD. An active pixel is defined as a detector array technology that has at least one active transistor within the pixel unit cell. The APS eliminates the need for nearly perfect charge transfer -- the Achilles' heel of CCDs. This perfect charge transfer makes CCD's radiation 'soft,' difficult to use under low light conditions, difficult to manufacture in large array sizes, difficult to integrate with on-chip electronics, difficult to use at low temperatures, difficult to use at high frame rates, and difficult to manufacture in non-silicon materials that extend wavelength response.

  2. HST/WFC3 Characteristics: gain, post-flash stability, UVIS low-sensitivity pixels, persistence, IR flats and bad pixel table

    NASA Astrophysics Data System (ADS)

    Gunning, Heather C.; Baggett, Sylvia; Gosmeyer, Catherine M.; Long, Knox S.; Ryan, Russell E.; MacKenty, John W.; Durbin, Meredith

    2015-08-01

    The Wide Field Camera 3 (WFC3) is a fourth-generation imaging instrument on the Hubble Space Telescope (HST). Installed in May 2009, WFC3 is comprised of two observational channels covering wavelengths from UV/Visible (UVIS) to infrared (IR); both have been performing well on-orbit. We discuss the gain stability of both WFC3 channel detectors from ground testing through present day. For UVIS, we detail a low-sensitivity pixel population that evolves during the time between anneals, but is largely reset by the annealing procedure. We characterize the post-flash LED lamp stability, used and recommended to mitigate CTE effects for observations with less than 12e-/pixel backgrounds. We present mitigation options for IR persistence during and after observations. Finally, we give an overview on the construction of the IR flats and provide updates on the bad pixel table.

  3. Development of a CMOS SOI Pixel Detector

    SciTech Connect

    Arai, Y.; Hazumi, M.; Ikegami, Y.; Kohriki, T.; Tajima, O.; Terada, S.; Tsuboyama, T.; Unno, Y.; Ushiroda, Y.; Ikeda, H.; Hara, K.; Ishino, H.; Kawasaki, T.; Miyake, H.; Martin, E.; Varner, G.; Tajima, H.; Ohno, M.; Fukuda, K.; Komatsubara, H.; Ida, J.; /NONE - OKI ELECTR INDUST TOKYO

    2008-08-19

    We have developed a monolithic radiation pixel detector using silicon on insulator (SOI) with a commercial 0.15 μm fully-depleted-SOI technology and a Czochralski high resistivity silicon substrate in place of a handle wafer. The SOI TEG (Test Element Group) chips with a size of 2.5 x 2.5 mm² consisting of 20 x 20 μm² pixels have been designed and manufactured. Performance tests with a laser light illumination and a β ray radioactive source indicate successful operation of the detector. We also briefly discuss the back gate effect as well as the simulation study.

  4. Commissioning of the ATLAS pixel detector

    SciTech Connect

    ATLAS Collaboration; Golling, Tobias

    2008-09-01

    The ATLAS pixel detector is a high precision silicon tracking device located closest to the LHC interaction point. It belongs to the first generation of its kind in a hadron collider experiment. It will provide crucial pattern recognition information and will largely determine the ability of ATLAS to precisely track particle trajectories and find secondary vertices. It was the last detector to be installed in ATLAS in June 2007, has been fully connected and tested in-situ during spring and summer 2008, and is ready for the imminent LHC turn-on. The highlights of the past and future commissioning activities of the ATLAS pixel system are presented.

  5. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer.

    PubMed

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-01-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method; specifically, we implemented impulse sources and a convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; we concluded that this dispersion was positively correlated with the real stretch and little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. A numerical simulation of CPML absorption with high-frequency pulses then qualitatively illustrated the dispersion behavior through wave-field snapshots, and a numerical simulation using low-frequency pulses suggested an optimal parameter strategy for the CPML based on the established criteria. Given its physical nature of simply warping space-time, the CPML method was predicted to be a promising approach to achieving ideal absorption, although it remained difficult to entirely remove the dispersion. PMID:27585538
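
    For readers unfamiliar with the role of the real stretch κ discussed above, the sketch below builds graded CPML profiles and the standard recursive-convolution update coefficients (Roden and Gedney form). The grading order and the maximum values are illustrative defaults; the optimal parameter strategy derived in the paper is not reproduced here.

      import numpy as np

      EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
      ETA0 = 376.730313668      # free-space impedance, ohms

      def cpml_coefficients(n_pml, dt, dx, m=3, kappa_max=5.0, alpha_max=0.05):
          """Graded CPML profiles and the recursive-convolution coefficients
              b = exp(-(sigma/kappa + alpha) * dt / eps0)
              a = sigma / (sigma*kappa + kappa**2 * alpha) * (b - 1)
          for an n_pml-cell layer with polynomial grading of order m.
          kappa_max is the real stretch; all defaults are illustrative."""
          depth = np.arange(1, n_pml + 1) / n_pml              # 0 -> 1 into the layer
          sigma_max = 0.8 * (m + 1) / (ETA0 * dx)              # common rule of thumb
          sigma = sigma_max * depth ** m
          kappa = 1.0 + (kappa_max - 1.0) * depth ** m
          alpha = alpha_max * (1.0 - depth)                    # larger alpha toward the interior
          b = np.exp(-(sigma / kappa + alpha) * dt / EPS0)
          a = sigma / (sigma * kappa + kappa ** 2 * alpha) * (b - 1.0)
          return kappa, a, b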

  6. Toward content-based image retrieval with deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Sklan, Judah E. S.; Plassard, Andrew J.; Fabbri, Daniel; Landman, Bennett A.

    2015-03-01

    Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep Convolutional Neural Networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing the dimensionality of an input scaled to 128x128 to an output encoded layer of 4x384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques.
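
    A minimal PyTorch sketch in the spirit of the encoder described: four convolutional hidden layers reducing a 128x128 single-channel input to a compact code used for retrieval by cosine similarity. Layer widths are illustrative, and the record's 4x384 encoded layer is interpreted here as a 1,536-dimensional code (an assumption).

      import torch
      import torch.nn as nn

      class RetrievalEncoder(nn.Module):
          """Four-convolutional-layer encoder for 128x128 single-channel images,
          producing a compact code for content-based retrieval."""
          def __init__(self, code_dim=1536):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
                  nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                  nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                  nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(), # 16 -> 8
              )
              self.encode = nn.Linear(256 * 8 * 8, code_dim)

          def forward(self, x):
              return self.encode(self.features(x).flatten(1))

      # retrieval: rank images by cosine similarity between their codes
      codes = torch.nn.functional.normalize(
          RetrievalEncoder()(torch.randn(4, 1, 128, 128)), dim=1)
      scores = codes @ codes.T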

  7. Single-Image Super Resolution for Multispectral Remote Sensing Data Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Liebel, L.; Körner, M.

    2016-06-01

    In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations—e.g., segmentation or feature extraction—can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques, such as convolutional neural networks (CNN), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable using conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset, in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
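
    The abstract does not give the network architecture; the sketch below is an SRCNN-style baseline adapted to 13-band input, refining a bicubically upsampled image with three convolutional layers and a residual connection. Filter counts and kernel sizes are illustrative, not the paper's.

      import torch
      import torch.nn as nn

      class MultispectralSRCNN(nn.Module):
          """SRCNN-style single-image super resolution: the low-resolution image is
          first upsampled to the target size, then refined by three convolutions.
          13 input/output bands mirror Sentinel-2."""
          def __init__(self, bands=13):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(bands, 64, kernel_size=9, padding=4), nn.ReLU(),
                  nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
                  nn.Conv2d(32, bands, kernel_size=5, padding=2),
              )

          def forward(self, lr, scale=2):
              up = nn.functional.interpolate(lr, scale_factor=scale,
                                             mode='bicubic', align_corners=False)
              return up + self.net(up)   # predict a residual on top of the upsampled image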

  8. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer

    NASA Astrophysics Data System (ADS)

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-09-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method; specifically, we implemented impulse sources and a convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; we concluded that this dispersion was positively correlated with the real stretch and little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. A numerical simulation of CPML absorption with high-frequency pulses then qualitatively illustrated the dispersion behavior through wave-field snapshots, and a numerical simulation using low-frequency pulses suggested an optimal parameter strategy for the CPML based on the established criteria. Given its physical nature of simply warping space-time, the CPML method was predicted to be a promising approach to achieving ideal absorption, although it remained difficult to entirely remove the dispersion.

  9. The Luminous Convolution Model-The light side of dark matter

    NASA Astrophysics Data System (ADS)

    Cisneros, Sophia; Oblath, Noah; Formaggio, Joe; Goedecke, George; Chester, David; Ott, Richard; Ashley, Aaron; Rodriguez, Adrianna

    2014-03-01

    We present a heuristic model for predicting the rotation curves of spiral galaxies. The Luminous Convolution Model (LCM) utilizes Lorentz-type transformations of very small changes in the photon's frequencies from curved space-times to construct a dynamic mass model of galaxies. These frequency changes are derived using the exact solution to the exterior Kerr wave equation, as opposed to a linearized treatment. The LCM Lorentz-type transformations map between the emitter and the receiver rotating galactic frames, and then to the associated flat frames in each galaxy where the photons are emitted and received. This treatment necessarily rests upon estimates of the luminous matter in both the emitter and the receiver galaxies. The LCM is tested on a sample of 22 randomly chosen galaxies, represented in 33 different data sets. LCM fits are compared to the Navarro, Frenk & White (NFW) Dark Matter Model and to the Modified Newtonian Dynamics (MOND) model when possible. The high degree of sensitivity of the LCM to the assumed luminous mass-to-light ratio (M/L) of the given galaxy is demonstrated. We demonstrate that the LCM is successful across a wide range of spiral galaxies for predicting the observed rotation curves. This work was made possible through the generous support of the MIT Dr. Martin Luther King Jr. Fellowship program.

  10. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer

    PubMed Central

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-01-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method; specifically, we implemented impulse sources and a convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; we concluded that this dispersion was positively correlated with the real stretch and little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. A numerical simulation of CPML absorption with high-frequency pulses then qualitatively illustrated the dispersion behavior through wave-field snapshots, and a numerical simulation using low-frequency pulses suggested an optimal parameter strategy for the CPML based on the established criteria. Given its physical nature of simply warping space-time, the CPML method was predicted to be a promising approach to achieving ideal absorption, although it remained difficult to entirely remove the dispersion. PMID:27585538

  11. Applying Convolution-Based Processing Methods To A Dual-Channel, Large Array Artificial Olfactory Mucosa

    NASA Astrophysics Data System (ADS)

    Taylor, J. E.; Che Harun, F. K.; Covington, J. A.; Gardner, J. W.

    2009-05-01

    Our understanding of the human olfactory system, particularly with respect to the phenomenon of nasal chromatography, has led us to develop a new generation of novel odour-sensitive instruments (or electronic noses). This novel instrument is in need of new approaches to data processing so that the information rich signals can be fully exploited; here, we apply a novel time-series based technique for processing such data. The dual-channel, large array artificial olfactory mucosa consists of 3 arrays of 300 sensors each. The sensors are divided into 24 groups, with each group made from a particular type of polymer. The first array is connected to the other two arrays by a pair of retentive columns. One channel is coated with Carbowax 20 M, and the other with OV-1. This configuration partly mimics the nasal chromatography effect, and partly augments it by utilizing not only polar (mucus layer) but also non-polar (artificial) coatings. Such a device presents several challenges to multi-variate data processing: a large, redundant dataset, spatio-temporal output, and small sample space. By applying a novel convolution approach to this problem, it has been demonstrated that these problems can be overcome. The artificial mucosa signals have been classified using a probabilistic neural network and gave an accuracy of 85%. Even better results should be possible through the selection of other sensors with lower correlation.

  12. Optimization of radiation hardness and charge collection of edgeless silicon pixel sensors for photon science

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Tartarotti Maimone, D.; Pennicard, D.; Sarajlic, M.; Graafsma, H.

    2014-12-01

    Recent progress in active-edge technology of silicon sensors enables the development of large-area tiled silicon pixel detectors with small dead space between modules by utilizing edgeless sensors. Such technology has been proven in successful productions of ATLAS and Medipix-based silicon pixel sensors by a few foundries. However, the drawbacks of edgeless sensors are poor radiation hardness for ionizing radiation and non-uniform charge collection by edge pixels. In this work, the radiation hardness of edgeless sensors with different polarities has been investigated using Synopsys TCAD with X-ray radiation-damage parameters implemented. Results show that if no conventional guard ring is present, none of the current designs are able to achieve a high breakdown voltage (typically < 30 V) after irradiation to a dose of ~ 10 MGy. In addition, a charge-collection model has been developed and was used to calculate the charges collected by the edge pixels of edgeless sensors when illuminated with X-rays. The model takes into account the electric field distribution inside the pixel sensor, the absorption of X-rays, drift and diffusion of electrons and holes, charge sharing effects, and threshold settings in ASICs. It is found that the non-uniform charge collection of edge pixels is caused by the strong bending of the electric field and the non-uniformity depends on bias voltage, sensor thickness and distance from active edge to the last pixel ("edge space"). In particular, the last few pixels close to the active edge of the sensor are not sensitive to low-energy X-rays (< 10 keV), especially for sensors with thicker Si and smaller edge space. The results from the model calculation have been compared to measurements and good agreement was obtained. The model can be used to optimize the edge design. From the edge optimization, it is found that in order to guarantee the sensitivity of the last few pixels to low-energy X-rays, the edge space should be kept at least 50% of

  13. Convoluted nozzle design for the RL10 derivative 2B engine

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump cooled metal nozzle and radiation cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.

  14. Directional Radiometry and Radiative Transfer: the Convoluted Path From Centuries-old Phenomenology to Physical Optics

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.

    2014-01-01

    This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.

  15. Comparison of bladder segmentation using deep-learning convolutional neural network with and without level sets

    NASA Astrophysics Data System (ADS)

    Cha, Kenny H.; Hadjiiski, Lubomir M.; Samala, Ravi K.; Chan, Heang-Ping; Cohan, Richard H.; Caoili, Elaine M.

    2016-03-01

    We are developing a CAD system for detection of bladder cancer in CTU. In this study we investigated the application of a deep-learning convolutional neural network (DL-CNN) to the segmentation of the bladder, which is a challenging problem because of the strong boundary between the non-contrast and contrast-filled regions in the bladder. We trained a DL-CNN to estimate the likelihood of a pixel being inside the bladder using neighborhood information. The segmented bladder was obtained from thresholding and hole-filling of the likelihood map. We compared the segmentation performance of the DL-CNN alone and with additional cascaded 3D and 2D level sets to refine the segmentation, using 3D hand-segmented contours as the reference standard. The segmentation accuracy was evaluated by five performance measures: average volume intersection %, average % volume error, average absolute % error, average minimum distance, and average Jaccard index for a data set of 81 training and 92 test cases. For the training set, the DL-CNN with level sets achieved performance measures of 87.2+/-6.1%, 6.0+/-9.1%, 8.7+/-6.1%, 3.0+/-1.2 mm, and 81.9+/-7.6%, respectively, while the DL-CNN alone obtained the values of 73.6+/-8.5%, 23.0+/-8.5%, 23.0+/-8.5%, 5.1+/-1.5 mm, and 71.5+/-9.2%, respectively. For the test set, the DL-CNN with level sets achieved performance measures of 81.9+/-12.1%, 10.2+/-16.2%, 14.0+/-13.0%, 3.6+/-2.0 mm, and 76.2+/-11.8%, respectively, while the DL-CNN alone obtained 68.7+/-12.0%, 27.2+/-13.7%, 27.4+/-13.6%, 5.7+/-2.2 mm, and 66.2+/-11.8%, respectively. The DL-CNN alone is effective in segmenting bladders but may not follow the details of the bladder wall. The combination of DL-CNN with level sets provides highly accurate bladder segmentation.
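
    A minimal sketch of the post-processing step described (thresholding and hole-filling of the DL-CNN likelihood map), using scipy.ndimage; the threshold value and the largest-component cleanup are added assumptions, not taken from the paper.

      import numpy as np
      from scipy import ndimage

      def segment_from_likelihood(likelihood, threshold=0.5):
          """Turn a per-pixel likelihood map into a binary bladder mask by
          thresholding and hole-filling; keeping only the largest connected
          component suppresses spurious islands (an added assumption)."""
          mask = likelihood >= threshold
          mask = ndimage.binary_fill_holes(mask)
          labels, n = ndimage.label(mask)
          if n > 1:
              sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
              mask = labels == (1 + int(np.argmax(sizes)))
          return mask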

  16. Toward an optimal convolutional neural network for traffic sign recognition

    NASA Astrophysics Data System (ADS)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    Convolutional Neural Networks (CNN) beat human performance on the German Traffic Sign Benchmark competition. Both the winner and the runner-up teams trained CNNs to recognize 43 traffic signs. However, neither network is computationally efficient, since both have many free parameters and use computationally expensive activation functions. In this paper, we propose a new architecture that reduces the number of parameters by 27% and 22% compared with the two networks. Furthermore, our network uses Leaky Rectified Linear Units (ReLU) as the activation function, which needs only a few operations to produce its result. Specifically, compared with the hyperbolic tangent and rectified sigmoid activation functions utilized in the two networks, Leaky ReLU needs only one multiplication operation, which makes it computationally much more efficient than the two other functions. Our experiments on the German Traffic Sign Benchmark dataset show a 0.6% improvement on the best reported classification accuracy, while reducing the overall number of parameters by 85% compared with the winner network in the competition.
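
    For reference, the activation highlighted above in a single NumPy expression: a negative input costs one multiplication, in contrast to the exponentials behind the hyperbolic tangent. The 0.01 slope is an illustrative default, not necessarily the value used in the paper.

      import numpy as np

      def leaky_relu(x, negative_slope=0.01):
          """Leaky ReLU: identity for positive inputs, a single multiplication
          for negative ones."""
          return np.where(x >= 0, x, negative_slope * x)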

  17. Innervation of the renal proximal convoluted tubule of the rat

    SciTech Connect

    Barajas, L.; Powers, K. )

    1989-12-01

    Experimental data suggest the proximal tubule as a major site of neurogenic influence on tubular function. The functional and anatomical axial heterogeneity of the proximal tubule prompted this study of the distribution of innervation sites along the early, mid, and late proximal convoluted tubule (PCT) of the rat. Serial section autoradiograms, with tritiated norepinephrine serving as a marker for monoaminergic nerves, were used in this study. Freehand clay models and graphic reconstructions of proximal tubules permitted a rough estimation of the location of the innervation sites along the PCT. In the subcapsular nephrons, the early PCT (first third) was devoid of innervation sites with most of the innervation occurring in the mid (middle third) and in the late (last third) PCT. Innervation sites were found in the early PCT in nephrons located deeper in the cortex. In juxtamedullary nephrons, innervation sites could be observed on the PCT as it left the glomerulus. This gradient of PCT innervation can be explained by the different tubulovascular relationships of nephrons at different levels of the cortex. The absence of innervation sites in the early PCT of subcapsular nephrons suggests that any influence of the renal nerves on the early PCT might be due to an effect of neurotransmitter released from renal nerves reaching the early PCT via the interstitium and/or capillaries.

  18. Synthesising Primary Reflections by Marchenko Redatuming and Convolutional Interferometry

    NASA Astrophysics Data System (ADS)

    Curtis, A.

    2015-12-01

    Standard active-source seismic processing and imaging steps such as velocity analysis and reverse time migration usually provide best results when all reflected waves in the input data are primaries (waves that reflect only once). Multiples (recorded waves that reflect multiple times) represent a source of coherent noise in data that must be suppressed to avoid imaging artefacts. Consequently, multiple-removal methods have been a principal direction of active-source seismic research for decades. We describe a new method to estimate primaries directly, which obviates the need for multiple removal. Primaries are constructed within convolutional interferometry by combining first-arriving events of up-going and direct-wave down-going Green's functions to virtual receivers in the subsurface. The required up-going wavefields to virtual receivers along discrete subsurface boundaries can be constructed using Marchenko redatuming. Crucially, this is possible without detailed models of the Earth's subsurface velocity structure: similarly to most migration techniques, the method only requires surface reflection data and estimates of direct (non-reflected) arrivals between subsurface sources and the acquisition surface. The method is demonstrated on a stratified synclinal model. It is shown both to improve reverse time migration compared to standard methods, and to be particularly robust against errors in the reference velocity model used.

  19. Cell osmotic water permeability of isolated rabbit proximal convoluted tubules.

    PubMed

    Carpi-Medina, P; González, E; Whittembury, G

    1983-05-01

    Cell osmotic water permeability, Pcos, of the peritubular aspect of the proximal convoluted tubule (PCT) was measured from the time course of cell volume changes subsequent to the sudden imposition of an osmotic gradient, ΔCio, across the cell membrane of PCT that had been dissected and mounted in a chamber. The possibilities of artifact were minimized. The bath was vigorously stirred, the solutions could be 95% changed within 0.1 s, and small osmotic gradients (10-20 mosM) were used. Thus, the osmotically induced water flow was a linear function of ΔCio and the effect of the 70-microns-thick unstirred layers was negligible. In addition, data were extrapolated to ΔCio = 0. Pcos for PCT was 41.6 (± 3.5) × 10⁻⁴ cm³·s⁻¹·osM⁻¹ per cm² of peritubular basal area. The standing gradient osmotic theory for transcellular osmosis is incompatible with this value. Published values for Pcos of PST are 25.1 × 10⁻⁴, and for the transepithelial permeability Peos values are 64 × 10⁻⁴ for PCT and 94 × 10⁻⁴ for PST, in the same units. These results indicate that there is room for paracellular water flow in both nephron segments and that the magnitude of the transcellular and paracellular water flows may vary from one segment of the proximal tubule to another. PMID:6846543

  20. Adapting line integral convolution for fabricating artistic virtual environment

    NASA Astrophysics Data System (ADS)

    Lee, Jiunn-Shyan; Wang, Chung-Ming

    2003-04-01

    Vector fields occur not only extensively in scientific applications but also in treasured art such as sculptures and paintings. Artists depict our natural environment stressing valued directional features besides color and shape information. Line integral convolution (LIC), developed for imaging vector fields in scientific visualization, has the potential to produce directional images. In this paper we present several techniques that explore LIC to generate impressionistic images forming an artistic virtual environment. We take advantage of the directional information given by a photograph and incorporate several extensions into the work, including a non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate artists' painting conventions. In addition, we adopt a statistical technique that controls the integration length according to image variance in order to preserve details. Furthermore, we also propose a method for generating a series of mip-maps, which reveal constant strokes under multi-resolution viewing and achieve frame coherence in an interactive walkthrough system. The experimental results show that the approach emulates artistic styles satisfyingly and computes efficiently; as a consequence, the proposed technique successfully supports a wide category of non-photorealistic rendering (NPR) applications such as interactive virtual environments with artistic perception.
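
    A minimal line integral convolution sketch: each output pixel averages an input texture along a short streamline of the vector field, traced forward and backward with unit-speed Euler steps. A fixed integration length is used here, whereas the paper modulates the length with local image variance.

      import numpy as np

      def lic(texture, vx, vy, length=15):
          """Minimal LIC: average `texture` along short streamlines of the
          vector field (vx, vy), traced in both directions from each pixel."""
          h, w = texture.shape
          mag = np.hypot(vx, vy) + 1e-12
          ux, uy = vx / mag, vy / mag                 # unit-length field
          out = np.zeros_like(texture, dtype=float)
          for i in range(h):
              for j in range(w):
                  total, count = 0.0, 0
                  for sign in (+1.0, -1.0):           # trace both directions
                      y, x = float(i), float(j)
                      for _ in range(length):
                          yi, xi = int(round(y)), int(round(x))
                          if not (0 <= yi < h and 0 <= xi < w):
                              break
                          total += texture[yi, xi]
                          count += 1
                          y += sign * uy[yi, xi]
                          x += sign * ux[yi, xi]
                  out[i, j] = total / max(count, 1)
          return out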

  1. Multi-modal vertebrae recognition using Transformed Deep Convolution Network.

    PubMed

    Cai, Yunliang; Landis, Mark; Laidley, David T; Kornecki, Anat; Lum, Andrea; Li, Shuo

    2016-07-01

    Automatic vertebra recognition, including the identification of vertebra locations and naming in multiple image modalities, is highly demanded in spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, the recognition is challenging due to variations in MR/CT appearance and in the shape/pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called Transformed Deep Convolution Network (TDCN). This new architecture can fuse image features from different modalities without supervision and automatically rectify the pose of the vertebra. The fusion of MR and CT image features improves the discriminativity of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images of different contrast, resolution, and protocol, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experiment results show that our method outperforms existing detection methods and provides fully automatic location+naming+pose recognition for routine clinical practice. PMID:27104497

  2. A deep convolutional neural network for recognizing foods

    NASA Astrophysics Data System (ADS)

    Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec

    2015-12-01

    Controlling food intake is an efficient way for individuals to tackle the obesity problem seen in countries worldwide. This is achievable by developing a smartphone application that is able to recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances in large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNN) possess more representational power than hand-crafted features. The main challenge with CNNs is to find the appropriate architecture for each problem. In this paper, we propose a deep CNN that consists of 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods and improves the best result of traditional methods by 17%. Moreover, using an ensemble of two CNNs trained at two different times, we are able to improve the classification performance by 21.5%.

  3. Convolutional networks for fast, energy-efficient neuromorphic computing

    PubMed Central

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  4. Fast convolution-superposition dose calculation on graphics hardware.

    PubMed

    Hissoiny, Sami; Ozell, Benoît; Després, Philippe

    2009-06-01

    The numerical calculation of dose is central to treatment planning in radiation therapy and is at the core of optimization strategies for modern delivery techniques. In a clinical environment, dose calculation algorithms are required to be accurate and fast. The accuracy is typically achieved through the integration of patient-specific data and extensive beam modeling, which generally results in slower algorithms. In order to alleviate execution speed problems, the authors have implemented a modern dose calculation algorithm on a massively parallel hardware architecture. More specifically, they have implemented a convolution-superposition photon beam dose calculation algorithm on a commodity graphics processing unit (GPU). They have investigated a simple porting scenario as well as slightly more complex GPU optimization strategies. They have achieved speed improvement factors ranging from 10 to 20 times with GPU implementations compared to central processing unit (CPU) implementations, with higher values corresponding to larger kernel and calculation grid sizes. In all cases, they preserved the numerical accuracy of the GPU calculations with respect to the CPU calculations. These results show that streaming architectures such as GPUs can significantly accelerate dose calculation algorithms and suggest benefits for numerically intensive processes such as optimization strategies, in particular for complex delivery techniques such as IMRT and arc therapy.

  5. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.

    PubMed

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681

  6. Accelerating Very Deep Convolutional Networks for Classification and Detection.

    PubMed

    Zhang, Xiangyu; Zou, Jianhua; He, Kaiming; Sun, Jian

    2016-10-01

    This paper aims to accelerate the test-time computation of convolutional neural networks (CNNs), especially very deep CNNs [1] that have substantially impacted the computer vision community. Unlike previous methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We develop an effective solution to the resulting nonlinear optimization problem without the need of stochastic gradient descent (SGD). More importantly, while previous methods mainly focus on optimizing one or two layers, our nonlinear method enables an asymmetric reconstruction that reduces the rapidly accumulated error when multiple (e.g., ≥ 10) layers are approximated. For the widely used very deep VGG-16 model [1], our method achieves a whole-model speedup of 4× with merely a 0.3 percent increase of top-5 error in ImageNet classification. Our 4× accelerated VGG-16 model also shows a graceful accuracy degradation for object detection when plugged into the Fast R-CNN detector [2]. PMID:26599615

  7. Predicting Semantic Descriptions from Medical Images with Convolutional Neural Networks.

    PubMed

    Schlegl, Thomas; Waldstein, Sebastian M; Vogl, Wolf-Dieter; Schmidt-Erfurth, Ursula; Langs, Georg

    2015-01-01

    Learning representative computational models from medical imaging data requires large training data sets. Often, voxel-level annotation is unfeasible for sufficient amounts of data. An alternative to manual annotation, is to use the enormous amount of knowledge encoded in imaging data and corresponding reports generated during clinical routine. Weakly supervised learning approaches can link volume-level labels to image content but suffer from the typical label distributions in medical imaging data where only a small part consists of clinically relevant abnormal structures. In this paper we propose to use a semantic representation of clinical reports as a learning target that is predicted from imaging data by a convolutional neural network. We demonstrate how we can learn accurate voxel-level classifiers based on weak volume-level semantic descriptions on a set of 157 optical coherence tomography (OCT) volumes. We specifically show how semantic information increases classification accuracy for intraretinal cystoid fluid (IRC), subretinal fluid (SRF) and normal retinal tissue, and how the learning algorithm links semantic concepts to image content and geometry.

  8. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields

    NASA Astrophysics Data System (ADS)

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.

  9. Deep convolutional neural networks for classifying GPR B-scans

    NASA Astrophysics Data System (ADS)

    Besaw, Lance E.; Stimac, Philip J.

    2015-05-01

    Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.

  10. Method for Viterbi decoding of large constraint length convolutional codes

    NASA Astrophysics Data System (ADS)

    Hsu, In-Shek; Truong, Trieu-Kie; Reed, Irving S.; Jing, Sun

    1988-05-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a number and K is the constraint length. The selected path at the end of each NK interval is then taken from the last entry in the array. A trace-back method is used for returning to the beginning of the selected path, i.e., to the first time unit of the interval NK, to read out the stored branch metrics of the selected path which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path that is to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
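
    To make the trellis and trace-back terminology concrete, the sketch below implements a plain hard-decision Viterbi decoder for a rate-1/2 code with a full trace-back; the record's contribution is the block-wise trace-back every NK time units, which bounds memory at a small cost in optimality and is not reproduced here. Generator taps and the constraint length are illustrative.

      def conv_encode(bits, g=(0o7, 0o5), K=3):
          """Rate-1/2 convolutional encoder; generator taps given in octal."""
          state, out = 0, []
          for b in bits:
              state = ((state << 1) | b) & ((1 << K) - 1)   # shift the new bit in
              out += [bin(state & gi).count('1') % 2 for gi in g]
          return out

      def viterbi_decode(received, g=(0o7, 0o5), K=3):
          """Hard-decision Viterbi decoding with a full (maximum-likelihood)
          trace-back over the whole received sequence."""
          n_states = 1 << (K - 1)
          INF = 10 ** 9
          metric = [0] + [INF] * (n_states - 1)             # encoder starts in state 0
          history = []                                      # per step: predecessor of each state
          for t in range(len(received) // 2):
              r = received[2 * t:2 * t + 2]
              new_metric, prev = [INF] * n_states, [0] * n_states
              for s in range(n_states):
                  if metric[s] == INF:
                      continue
                  for b in (0, 1):
                      full = ((s << 1) | b) & ((1 << K) - 1)
                      ns = full & (n_states - 1)
                      expected = [bin(full & gi).count('1') % 2 for gi in g]
                      m = metric[s] + sum(e != ri for e, ri in zip(expected, r))
                      if m < new_metric[ns]:
                          new_metric[ns], prev[ns] = m, s
              history.append(prev)
              metric = new_metric
          s = metric.index(min(metric))                     # best final state
          decoded = []
          for prev in reversed(history):                    # trace back through the trellis
              decoded.append(s & 1)                         # newest input bit is the state's LSB
              s = prev[s]
          return decoded[::-1]

    On a noise-free channel, viterbi_decode(conv_encode(bits + [0, 0])) recovers the input followed by the two flush zeros; appending K-1 zero tail bits before encoding is the usual way to terminate the trellis cleanly.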

  11. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, large constraint length and short decode frames . Other applications of the classical Viterbi algorithm where is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model) which is in general much less than . The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.

  12. Designing the optimal convolution kernel for modeling the motion blur

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2011-06-01

    Motion blur acts on an image like a two-dimensional low-pass filter, whose spatial frequency characteristic depends both on the trajectory of the relative motion between the scene and the camera and on the velocity vector variation along it. When motion during exposure is permitted, the conventional, static notions of both the image exposure and the scene-to-image mapping become unsuitable and must be revised to accommodate the image formation dynamics. This paper develops an exact image formation model for arbitrary object-camera relative motion with arbitrary velocity profiles. Moreover, for any motion the camera may operate in either continuous or flutter shutter exposure mode. Its result is a convolution kernel that is optimally designed for both the given motion and the sensor array geometry, and hence permits the most accurate computational undoing of the blurring effects for the given camera, as required in forensic and high-security applications. The theory has been implemented and a few examples are shown in the paper.
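
    A simplified sketch of building a blur kernel from a sampled trajectory is shown below: it rasterizes the time-weighted path onto a kernel grid and gates it with a flutter-shutter code. It is not the paper's optimal kernel design (which also accounts for the exact sensor cell geometry), and the trajectory and all parameters are invented for illustration.

    ```python
    # Sketch (not the paper's exact construction): accumulate the time the scene
    # point spends over each sensor cell during the exposure, for an arbitrary
    # trajectory and velocity profile. A flutter shutter is modeled by zeroing
    # the dwell time while the shutter is closed.
    import numpy as np

    def blur_kernel(trajectory, shutter_open, size=15):
        """trajectory: (T, 2) sub-pixel positions sampled uniformly in time.
        shutter_open: (T,) boolean gating sequence (all True = continuous exposure)."""
        k = np.zeros((size, size))
        c = size // 2
        for (x, y), open_ in zip(trajectory, shutter_open):
            if not open_:
                continue
            ix, iy = int(round(x)) + c, int(round(y)) + c    # nearest-cell rasterization
            if 0 <= ix < size and 0 <= iy < size:
                k[iy, ix] += 1.0
        return k / k.sum()

    # Example: a curved trajectory with a non-uniform velocity profile and a
    # periodic flutter-shutter code (both invented for illustration).
    t = np.linspace(0, 1, 400)
    traj = np.stack([6 * t**2, 3 * np.sin(2 * np.pi * t)], axis=1)
    code = (np.arange(t.size) // 25) % 2 == 0
    kernel = blur_kernel(traj, code)
    print(kernel.shape, kernel.sum())                        # (15, 15), ~1.0
    ```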

  13. Method for Viterbi decoding of large constraint length convolutional codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)

    1988-01-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is an integer and K is the constraint length. The selected path at the end of each NK interval is taken from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, and to read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message, it is not necessary to provide a large memory to store the trellis-derived information until the end of the message to select the path that is to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.

  14. From hybrid to CMOS pixels ... a possibility for LHC's pixel future?

    NASA Astrophysics Data System (ADS)

    Wermes, N.

    2015-12-01

    Hybrid pixel detectors were invented for the LHC to make tracking and vertexing possible at all in the LHC's radiation-intense environment. The LHC pixel detectors have meanwhile very successfully fulfilled their promises, and R&D for the planned HL-LHC upgrade is in full swing, targeting even higher ionising doses and non-ionising fluences. In terms of rate and radiation tolerance, hybrid pixels are unrivaled. But they have disadvantages as well, most notably material thickness, production complexity, and cost. Meanwhile, active pixel sensors (DEPFET, MAPS) have also become real pixel detectors, but they would by far not withstand the rates and radiation expected at the HL-LHC. New MAPS developments, so-called DMAPS (depleted MAPS), which are full CMOS pixel structures with charge collection in a depleted region, have come into the R&D focus for pixels at high rate/radiation levels. This goal can perhaps be realised by exploiting HV technologies, high-ohmic substrates, and/or SOI-based technologies. The paper covers the main ideas and some encouraging results from prototyping R&D, not hiding the difficulties.

  15. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity with Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significantly improved agreement with the measured directivity at a negligible computational cost.
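
    The proposed operation itself is simple to sketch: sample the two primary-frequency directivities over angle, form their product, and convolve that product with Westervelt's directivity. In the toy example below both the primary beams and the Westervelt term are illustrative placeholders, not measured or derived patterns.

    ```python
    # Sketch of the convolution model: the PLA far-field directivity is taken as
    # the angular convolution of the product directivity (product of the two
    # primary-frequency beams) with Westervelt's directivity, instead of the
    # product directivity alone. All patterns below are placeholders.
    import numpy as np

    theta = np.linspace(-90.0, 90.0, 721)                 # degrees, 0.25 deg step

    def steered_array(theta_deg, n=8, pitch_wl=0.5, steer_deg=10.0):
        """Directivity magnitude of an n-element uniform line array (placeholder)."""
        th, st = np.radians(theta_deg), np.radians(steer_deg)
        psi = 2 * np.pi * pitch_wl * (np.sin(th) - np.sin(st))
        with np.errstate(divide="ignore", invalid="ignore"):
            d = np.abs(np.sin(n * psi / 2) / (n * np.sin(psi / 2)))
        return np.nan_to_num(d, nan=1.0)

    d1 = steered_array(theta)                             # primary beam at f1
    d2 = steered_array(theta)                             # primary beam at f2
    product = d1 * d2                                     # past practice: product directivity only
    westervelt = 1.0 / (1.0 + (theta / 4.0) ** 2)         # placeholder low-pass-like directivity

    model = np.convolve(product, westervelt, mode="same") # proposed convolution model
    model /= model.max()
    print(theta[np.argmax(model)])                        # steer angle of the modeled beam
    ```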

  16. Uncooled infrared detectors toward smaller pixel pitch with newly proposed pixel structure

    NASA Astrophysics Data System (ADS)

    Tohyama, Shigeru; Sasaki, Tokuhito; Endoh, Tsutomu; Sano, Masahiko; Kato, Koji; Kurashina, Seiji; Miyoshi, Masaru; Yamazaki, Takao; Ueno, Munetaka; Katayama, Haruyoshi; Imai, Tadashi

    2013-12-01

    An uncooled infrared (IR) focal plane array (FPA) with 23.5 μm pixel pitch has been successfully demonstrated and has found wide commercial use in thermography, security cameras, and other applications. One of the key issues for uncooled IRFPA technology is to shrink the pixel pitch, because the pixel pitch determines the overall size of the FPA, which, in turn, determines the cost of the IR camera products. This paper proposes an innovative pixel structure with a diaphragm and beams placed on different levels to realize an uncooled IRFPA with smaller pixel pitch (≦17 μm). The upper level consists of a diaphragm with VOx bolometer and IR absorber layers, while the lower level consists of the two beams, which are designed to be placed on the adjacent pixels. Test devices of this pixel design with 12, 15, and 17 μm pitch have been fabricated on the Si read-out integrated circuit (ROIC) of a quarter video graphics array (QVGA) (320×240) with 23.5 μm pitch. Their performances are nearly equal to those of the IRFPA with 23.5 μm pitch. For example, the noise equivalent temperature difference of the 12 μm pixel is 63.1 mK for F/1 optics with a thermal time constant of 14.5 ms. The proposed structure is also shown to benefit the existing IRFPA with 23.5 μm pitch because of the improvements in IR sensitivity. Furthermore, an advanced pixel structure in which the beams themselves are composed of two levels is demonstrated to be realizable.

  17. STIS CCD Hot Pixel Annealing Cycle 11

    NASA Astrophysics Data System (ADS)

    Proffitt, Charles

    2002-07-01

    The effectiveness of the CCD hot pixel annealing process is assessed by measuring the dark current behavior before and after annealing and by searching for any window contamination effects. In addition CTE performance is examined by looking for traps in a low signal level flat. Follows on from proposal 8906.

  18. STIS CCD Hot Pixel Annealing Cycle 12

    NASA Astrophysics Data System (ADS)

    Maiz Apellaniz, Jesus

    2003-07-01

    The effectiveness of the CCD hot pixel annealing process is assessed by measuring the dark current behavior before and after annealing and by searching for any window contamination effects. In addition CTE performance is examined by looking for traps in a low signal level flat. Follows on from proposal 9612.

  19. Digital-pixel focal plane array development

    NASA Astrophysics Data System (ADS)

    Brown, Matthew G.; Baker, Justin; Colonero, Curtis; Costa, Joe; Gardner, Tom; Kelly, Mike; Schultz, Ken; Tyrrell, Brian; Wey, Jim

    2010-01-01

    Since 2006, MIT Lincoln Laboratory has been developing Digital-pixel Focal Plane Array (DFPA) readout integrated circuits (ROICs). To date, four 256 × 256, 30 μm pitch DFPA designs with in-pixel analog-to-digital conversion have been fabricated using IBM 90 nm CMOS processes. The DFPA ROICs are compatible with a wide range of detector materials and cutoff wavelengths; HgCdTe, QWIP, and InGaAs photo-detectors with cutoff wavelengths ranging from 1.6 to 14.5 μm have been hybridized to the same digital-pixel readout. The digital-pixel readout architecture offers high dynamic range, AC- or DC-coupled integration, and on-chip image processing with low-power orthogonal transfer operations. The newest ROIC designs support two-color operation with a single indium bump connection. Development and characterization of the two-color DFPA designs is presented along with applications for this new digital readout technology.

  20. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    SciTech Connect

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-02-15

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.

  1. An induced charge readout scheme incorporating image charge splitting on discrete pixels

    NASA Astrophysics Data System (ADS)

    Kataria, D. O.; Lapington, J. S.

    2003-11-01

    Top hat electrostatic analysers used in space plasma instruments typically use microchannel plates (MCPs) followed by discrete pixel anode readout for the angular definition of the incoming particles. Better angular definition requires more pixels/readout electronics channels but with stringent mass and power budgets common in space applications, the number of channels is restricted. We describe here a technique that improves the angular definition using induced charge and an interleaved anode pattern. The technique adopts the readout philosophy used on the CRRES and CLUSTER I instruments but has the advantages of the induced charge scheme and significantly reduced capacitance. Charge from the MCP collected by an anode pixel is inductively split onto discrete pixels whose geometry can be tailored to suit the scientific requirements of the instrument. For our application, the charge is induced over two pixels. One of them is used for a coarse angular definition but is read out by a single channel of electronics, allowing a higher rate handling. The other provides a finer angular definition but is interleaved and hence carries the expense of lower rate handling. Using the technique and adding four channels of electronics, a four-fold increase in the angular resolution is obtained. Details of the scheme and performance results are presented.

  2. A near-infrared 64-pixel superconducting nanowire single photon detector array with integrated multiplexed readout

    SciTech Connect

    Allman, M. S. Verma, V. B.; Stevens, M.; Gerrits, T.; Horansky, R. D.; Lita, A. E.; Mirin, R.; Nam, S. W.; Marsili, F.; Beyer, A.; Shaw, M. D.; Kumor, D.

    2015-05-11

    We demonstrate a 64-pixel free-space-coupled array of superconducting nanowire single photon detectors optimized for high detection efficiency in the near-infrared range. An integrated, readily scalable, multiplexed readout scheme is employed to reduce the number of readout lines to 16. The cryogenic, optical, and electronic packaging to read out the array as well as characterization measurements are discussed.

  3. High-Performance Active Pixel X-Ray Sensors for X-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Bautz, Mark; Suntharalingam, Vyshnavi

    2005-01-01

    The subject grants support development of High-Performance Active Pixel Sensors for X-ray Astronomy at the Massachusetts Institute of Technology (MIT) Center for Space Research and at MIT's Lincoln Laboratory. This memo reports our progress in the second year of the project, from April, 2004 through the present.

  4. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when an IRFPA is used in different thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects and their positions are always fixed; the latter are caused by temperature drift and their positions are always changing. The traditional radiometric-calibration-based bad pixel detection and compensation algorithm is only valid for the fixed bad pixels. Scene-based bad pixel correction is an effective way to eliminate both kinds of bad pixels. Currently, the most widely used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In this algorithm, bad pixels are regarded as image noise and then replaced by the filtered value. However, missed corrections and false corrections often happen when the AMF is used to handle complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are detected by the PCNN in the first step, then image sequences are used periodically to confirm the real bad pixels and exclude the false ones, and finally the bad pixels are replaced by the filtered result. Experiments with real infrared images obtained from a camera show the effectiveness of the proposed algorithm.
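
    A much-simplified stand-in for this scheme is sketched below: a median-deviation detector replaces the PCNN detection stage, potential bad pixels are confirmed only if they are flagged persistently across an image sequence, and confirmed pixels are replaced by the local median. All thresholds are illustrative.

    ```python
    # Simplified stand-in for the paper's scheme (a median-deviation detector is
    # used here in place of the PCNN detection stage): pixels that deviate
    # strongly from their local median are flagged as *potential* bad pixels in
    # each frame, only persistently flagged pixels are confirmed as bad, and
    # confirmed bad pixels are replaced by the local median.
    import numpy as np
    from scipy.ndimage import median_filter

    def correct_sequence(frames, deviation=8.0, persistence=0.8):
        frames = np.asarray(frames, dtype=float)
        flagged = np.zeros(frames.shape, dtype=bool)
        for i, f in enumerate(frames):
            med = median_filter(f, size=3)
            flagged[i] = np.abs(f - med) > deviation * (np.std(f - med) + 1e-9)
        confirmed = flagged.mean(axis=0) >= persistence      # persistent across frames
        corrected = frames.copy()
        for i, f in enumerate(frames):
            corrected[i][confirmed] = median_filter(f, size=3)[confirmed]
        return corrected, confirmed

    # Toy sequence: one fixed hot pixel plus sensor noise.
    rng = np.random.default_rng(0)
    seq = rng.normal(100, 2, size=(10, 64, 64))
    seq[:, 20, 30] += 500                                    # fixed bad pixel
    _, bad = correct_sequence(seq)
    print(bad[20, 30], bad.sum())                            # True, 1 (ideally)
    ```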

  5. Design Methodology: ASICs with complex in-pixel processing for Pixel Detectors

    SciTech Connect

    Fahim, Farah

    2014-10-31

    The development of Application Specific Integrated Circuits (ASIC) for pixel detectors with complex in-pixel processing using Computer Aided Design (CAD) tools that are, themselves, mainly developed for the design of conventional digital circuits requires a specialized approach. Mixed signal pixels often require parasitically aware detailed analog front-ends and extremely compact digital back-ends with more than 1000 transistors in small areas below 100μm x 100μm. These pixels are tiled to create large arrays, which have the same clock distribution and data readout speed constraints as in, for example, micro-processors. The methodology uses a modified mixed-mode on-top digital implementation flow to not only harness the tool efficiency for timing and floor-planning but also to maintain designer control over compact parasitically aware layout.

  6. Design methodology: edgeless 3D ASICs with complex in-pixel processing for pixel detectors

    SciTech Connect

    Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.; Mohseni, Hooman

    2015-08-28

    The design methodology for the development of 3D integrated edgeless pixel detectors with in-pixel processing using Electronic Design Automation (EDA) tools is presented. A large-area, 3-tier 3D detector with one sensor layer and two ASIC layers containing one analog and one digital tier is built for x-ray photon time-of-arrival measurement and imaging. A full custom analog pixel is 65μm x 65μm. It is connected to a sensor pixel of the same size on one side, and on the other side it has approximately 40 connections to the digital pixel. A 32 x 32 edgeless array without any peripheral functional blocks constitutes a sub-chip. The sub-chip is an indivisible unit, which is further arranged in a 6 x 6 array to create the entire 1.248cm x 1.248cm ASIC. Each chip has 720 bump-bond I/O connections on the back of the digital tier to the ceramic PCB. All the analog tier power and biasing is conveyed through the digital tier from the PCB. The assembly has no peripheral functional blocks, and hence the active area extends to the edge of the detector. This was achieved by using a few flavors of almost identical analog pixels (minimal variation in layout) to allow for peripheral biasing blocks to be placed within pixels. The 1024 pixels within a digital sub-chip array have a variety of full custom, semi-custom and automated timing-driven functional blocks placed together. The methodology uses a modified mixed-mode on-top digital implementation flow to not only harness the tool efficiency for timing and floor-planning but also to maintain designer control over compact parasitically aware layout. The methodology uses the Cadence design platform; however, it is not limited to this tool.

  7. Text-Attentional Convolutional Neural Network for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results. PMID:27093723

  8. A new model of the distal convoluted tubule.

    PubMed

    Ko, Benjamin; Mistry, Abinash C; Hanson, Lauren; Mallick, Rickta; Cooke, Leslie L; Hack, Bradley K; Cunningham, Patrick; Hoover, Robert S

    2012-09-01

    The Na(+)-Cl(-) cotransporter (NCC) in the distal convoluted tubule (DCT) of the kidney is a key determinant of Na(+) balance. Disturbances in NCC function are characterized by disordered volume and blood pressure regulation. However, many details concerning the mechanisms of NCC regulation remain controversial or undefined. This is partially due to the lack of a mammalian cell model of the DCT that is amenable to functional assessment of NCC activity. Previously reported investigations of NCC regulation in mammalian cells have either not attempted measurements of NCC function or have required perturbation of the critical with-no-lysine kinase (WNK)/STE20/SPS-1-related proline/alanine-rich kinase regulatory pathway before functional assessment. Here, we present a new mammalian model of the DCT, the mouse DCT15 (mDCT15) cell line. These cells display native NCC function as measured by thiazide-sensitive, Cl(-)-dependent (22)Na(+) uptake and allow for the separate assessment of NCC surface expression and activity. Knockdown by short interfering RNA confirmed that this function was dependent on NCC protein. Similar to the mammalian DCT, these cells express many of the known regulators of NCC and display significant baseline activity and dimerization of NCC. As described in previous models, NCC activity is inhibited by appropriate concentrations of thiazides, and phorbol esters strongly suppress function. Importantly, they display release of WNK4 inhibition of NCC upon small hairpin RNA knockdown of WNK4. We feel that this new model represents a critical tool for the study of NCC physiology. The work that can be accomplished in such a system represents a significant step forward toward unraveling the complex regulation of NCC.

  9. A convolution-superposition dose calculation engine for GPUs

    SciTech Connect

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe

    2010-03-15

    Purpose: Graphic processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground-up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 versus 16.6 s per beam in single threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions also have been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results potentially can have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They also are relevant for adaptive radiation therapy where dose results must be obtained rapidly.

  10. Text-Attentional Convolutional Neural Network for Scene Text Detection.

    PubMed

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.

  11. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions will influence the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different order CDs by minimizing the spectral error of the derivative and comparing the windows with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th order staggered-grid CD operator can achieve the same accuracy as a 16th order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagation.
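
    The construction can be sketched compactly: inverse Fourier transforming the band-limited spectrum of d/dx at staggered (half-cell) offsets gives the ideal taps a_m = (-1)^(m+1)/(pi*(m-1/2)^2), which are then truncated and tapered with a window (a Gaussian here). The operator length, window width, and test signal below are illustrative choices, not the paper's optimized parameters.

    ```python
    # Sketch of a tapered staggered-grid convolutional differentiator (dx = 1):
    # the ideal band-limited taps are truncated and multiplied by a Gaussian
    # taper window, then applied as an antisymmetric stencil to estimate the
    # derivative at the half-grid points.
    import numpy as np

    def staggered_cd_taps(half_length=8, sigma=3.0):
        """Tapered taps a_m of the ideal staggered-grid differentiator (dx = 1)."""
        m = np.arange(1, half_length + 1)
        a = (-1.0) ** (m + 1) / (np.pi * (m - 0.5) ** 2)        # band-limited ideal taps
        return a * np.exp(-((m - 0.5) ** 2) / (2 * sigma ** 2)) # Gaussian taper window

    def staggered_derivative(f, taps):
        """Estimate f'(x_{j+1/2}) from integer-grid samples f[j]."""
        L = len(taps)
        n_out = len(f) - 2 * L
        df = np.zeros(n_out)
        for m, a in enumerate(taps, start=1):
            df += a * (f[L - 1 + m: L - 1 + m + n_out] - f[L - m: L - m + n_out])
        return df

    taps = staggered_cd_taps()
    x = np.arange(0.0, 400.0)
    f = np.sin(0.2 * x)
    df = staggered_derivative(f, taps)
    j = np.arange(len(taps) - 1, len(taps) - 1 + len(df))       # derivative locations j + 1/2
    exact = 0.2 * np.cos(0.2 * (j + 0.5))
    print(np.max(np.abs(df - exact)))                           # small compared with the 0.2 amplitude
    ```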

  12. Deep convolutional networks for pancreas segmentation in CT imaging

    NASA Astrophysics Data System (ADS)

    Roth, Holger R.; Farag, Amal; Lu, Le; Turkbey, Evrim B.; Summers, Ronald M.

    2015-03-01

    Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving high accuracies when compared to state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data. We present a fully automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial input of the pancreas and its surroundings to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve maximum Dice scores of an average 68% +/- 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation using a deep learning approach, and compares favorably with state-of-the-art methods.

  13. Text-Attentional Convolutional Neural Network for Scene Text Detection

    NASA Astrophysics Data System (ADS)

    He, Tong; Huang, Weilin; Qiao, Yu; Yao, Jian

    2016-06-01

    Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature computed globally from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this work, we present a new system for scene text detection by proposing a novel Text-Attentional Convolutional Neural Network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called Contrast-Enhancement Maximally Stable Extremal Regions (CE-MSERs) is developed, which extends the widely used MSERs by enhancing the intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 dataset, with an F-measure of 0.82, substantially improving the state-of-the-art results.

  14. A convolutional neural network approach for objective video quality assessment.

    PubMed

    Le Callet, Patrick; Viard-Gaudin, Christian; Barba, Dominique

    2006-09-01

    This paper describes an application of neural networks in the field of objective measurement methods designed to automatically assess the perceived quality of digital videos. This challenging issue aims to emulate human judgment and to replace very complex and time-consuming subjective quality assessment. Several metrics have been proposed in the literature to tackle this issue. They are based on a general framework that combines different stages, each of them addressing complex problems. The ambition of this paper is not to present a globally perfect quality metric but rather to focus on an original way to use neural networks in such a framework in the context of a reduced reference (RR) quality metric. In particular, we point out the interest of such a tool for combining features and pooling them in order to compute quality scores. The proposed approach solves some problems inherent to objective metrics that should predict the subjective quality score obtained using the single stimulus continuous quality evaluation (SSCQE) method. The latter has been adopted by the Video Quality Experts Group (VQEG) in its recently finalized reduced reference and no reference (RRNR-TV) test plan. The originality of this approach, compared to previous attempts to use neural networks for quality assessment, relies on the use of a convolutional neural network (CNN) that allows a continuous time scoring of the video. Objective features are extracted on a frame-by-frame basis on both the reference and the distorted sequences; they are derived from a perceptual-based representation and integrated along the temporal axis using a time-delay neural network (TDNN). Experiments conducted on different MPEG-2 videos, with bit rates ranging from 2 to 6 Mb/s, show the effectiveness of the proposed approach in obtaining a plausible model of temporal pooling from the human vision system (HVS) point of view. More specifically, a linear correlation criterion between objective and subjective scoring of up to 0.92 has been obtained on

  15. Single-trial EEG RSVP classification using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William

    2016-05-01

    Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion where physiological data from healthy subjects is combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision have been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalograph (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.

  16. Impact of CT detector pixel-to-pixel crosstalk on image quality

    NASA Astrophysics Data System (ADS)

    Engel, Klaus J.; Spies, Lothar; Vogtmeier, Gereon; Luhta, Randy

    2006-03-01

    In Computed Tomography (CT), the image quality sensitively depends on the accuracy of the X-ray projection signal, which is acquired by a two-dimensional array of pixel cells in the detector. If the signal of X-ray photons is spread out to neighboring pixels (crosstalk), a decrease of spatial resolution may result. Moreover, streak and ring artifacts may emerge. Deploying system simulations for state-of-the-art CT detector configurations, we characterize the origin and appearance of these artifacts in the reconstructed CT images for different scenarios. A uniform pixel-to-pixel crosstalk results in a loss of spatial resolution only: the Modulation Transfer Function (MTF) is attenuated without affecting the limiting resolution, which is defined as the first zero of the MTF. Additional streak and ring artifacts appear if the pixel-to-pixel crosstalk is non-uniform. In parallel to the system simulations we developed an analytical model. The model explains the resolution loss and the artifact level using the first and second derivatives of the X-ray profile acquired by the detector. The simulations and the analytical model are in agreement with each other. We discuss the perceptibility of ring and streak artifacts within noisy images if no crosstalk correction is applied.
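
    The uniform-crosstalk case can be illustrated numerically: a symmetric kernel [c, 1-2c, c] across pixels multiplies the system MTF by |1 - 2c(1 - cos(2*pi*f*p))|, which attenuates mid-frequencies but leaves the first zero of the pixel-aperture MTF, and hence the limiting resolution, unchanged. The crosstalk fraction and pitch below are illustrative.

    ```python
    # Numerical illustration of uniform pixel-to-pixel crosstalk: the kernel
    # [c, 1-2c, c] attenuates the MTF at mid spatial frequencies but does not
    # move the first zero of the square-pixel aperture MTF.
    import numpy as np

    p = 1.0                                     # pixel pitch (arbitrary units)
    c = 0.05                                    # crosstalk fraction leaking to each neighbour

    def mtf_total(f):
        aperture = np.abs(np.sinc(f * p))                         # square-pixel aperture MTF
        crosstalk = np.abs(1 - 2 * c * (1 - np.cos(2 * np.pi * f * p)))
        return aperture * crosstalk

    print(mtf_total(1.0 / p))                   # ~0: first zero (limiting resolution) unchanged
    print(mtf_total(0.25 / p) / np.abs(np.sinc(0.25)))  # 0.9: mid-frequency attenuation
    ```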

  17. Empirical formula for rates of hot pixel defects based on pixel size, sensor area, and ISO

    NASA Astrophysics Data System (ADS)

    Chapman, Glenn H.; Thomas, Rohit; Koren, Zahava; Koren, Israel

    2013-02-01

    Experimentally, image sensor measurements show a continuous development of in-field permanent hot pixel defects, increasing in number over time. In our tests we accumulated data on defects in cameras ranging from large-area (<300 sq mm) DSLRs, to medium-sized (~40 sq mm) point-and-shoot cameras, to small (20 sq mm) cell phone cameras. The results show that the rate of defects depends on the technology (APS or CCD) and on design parameters such as imager area, pixel size (from 1.5 to 7 um), and gain (from ISO 100 to 1600). Comparing different sensor sizes with similar pixel sizes has shown that defect rates scale linearly with sensor area, suggesting the metric of defects/year/sq mm, which we call defect density. A search was made to model this defect density as a function of the two parameters pixel size and ISO. The best empirical fit was obtained by a power-law curve. For CCD imagers, the defect densities are proportional to the pixel size to the power of -2.25 times the ISO to the power of 0.69. For APS (CMOS) sensors, the power law had the defect densities proportional to the pixel size to the power of -3.07 times the ISO raised to the power of 0.5. Extending our empirical formula to include ISO allows us to predict the expected defect development rate for a wide set of sensor parameters.
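
    The two power laws quoted in the abstract can be wrapped in a small helper; since the absolute proportionality constants are not given in the abstract, the function below returns defect densities relative to an arbitrary reference sensor (here 7 um pixels at ISO 100), and only the scaling exponents come from the text.

    ```python
    # The abstract's empirical power laws, expressed relative to a reference
    # sensor because the absolute proportionality constants are not quoted.
    def relative_defect_density(pixel_um, iso, technology="CCD"):
        """Defect density (defects/year/sq mm) relative to a 7 um, ISO 100 sensor."""
        if technology.upper() == "CCD":
            size_exp, iso_exp = -2.25, 0.69
        else:                                    # APS / CMOS
            size_exp, iso_exp = -3.07, 0.50
        return (pixel_um / 7.0) ** size_exp * (iso / 100.0) ** iso_exp

    # Example: a 1.5 um cell-phone pixel at ISO 800 versus the 7 um reference.
    print(relative_defect_density(1.5, 800, "APS"))
    ```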

  18. ACS/WFC Pixel Stability - Bringing the Pixels Back to the Science

    NASA Astrophysics Data System (ADS)

    Borncamp, David; Grogin, Norman A.; Bourque, Matthew; Ogaz, Sara

    2016-06-01

    Electrical current that has been trapped within the lattice structure of a Charged Coupled Device (CCD) can be present through multiple exposures, which will have an adverse effect on its science performance. The traditional way to correct for this extra charge is to take an image with the camera shutter closed periodically throughout the lifetime of the instrument. These images, generally referred to as dark images, allow for the characterization of the extra charge that is trapped within the CCD at the time of observation. This extra current can then be subtracted out of science images to correct for the extra charge that was there at this time. Pixels that have a charge above a certain threshold of current are marked as “hot” and flagged in the data quality array. However, these pixels may not be "bad" in the traditional sense that they cannot be reliably dark-subtracted. If these pixels are shown to be stable over an anneal period, the charge can be properly subtracted and the extra noise from this dark current can be taken into account. We present the results of a pixel history study that analyzes every pixel of ACS/WFC individually and allows pixels that were marked as bad to be brought back into the science image.

  19. Characterisation of Vanilla—A novel active pixel sensor for radiation detection

    NASA Astrophysics Data System (ADS)

    Blue, A.; Bates, R.; Laing, A.; Maneuski, D.; O'Shea, V.; Clark, A.; Prydderch, M.; Turchetta, R.; Arvanitis, C.; Bohndiek, S.

    2007-10-01

    Novel features of a new monolithic active pixel sensor, Vanilla, with 520×520 pixels (25 μm square) have been characterised for the first time. Optimisation of the sensor operation was achieved through variation of frame rates, integration times, and on-chip biases and voltages. Features such as flushed reset operation, ROI capturing, and readout modes have been fully tested. Stability measurements were performed to test its suitability for long-term applications. These results suggest that the Vanilla sensor is suitable for use in particle physics experiments, as well as in bio-medical and space applications.

  20. Two-dimensional single-pixel imaging by cascaded orthogonal line spatial modulation.

    PubMed

    Winters, David G; Bartels, Randy A

    2015-06-15

    Two-dimensional (2D) images are taken using a single-pixel detector by temporally multiplexing spatial frequency projections from orthogonal, time-varying spatial line modulation gratings. A unique temporal frequency is applied to each point in 2D space by applying a continuous spread of frequencies along one dimension and an offset frequency to each line in the orthogonal dimension. The object contrast information can then be recovered from the electronic spectrum of the single pixel and, through simple processing, be reformed into a spatial image. PMID:26076259
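
    A toy simulation of the frequency-multiplexing idea is sketched below: each pixel is tagged with a unique temporal frequency (a continuous spread along one axis, a per-line offset along the other), only the summed time series is recorded, and the image is recovered from the magnitude of its spectrum. Image size, frequencies, and sampling parameters are illustrative.

    ```python
    # Toy demonstration of frequency-multiplexed single-pixel imaging: the
    # detector records only the summed time series of all modulated pixels,
    # and the image is read back from the bins of its electronic spectrum.
    import numpy as np

    nx, ny = 8, 8
    fs, T = 256, 1.0                                   # 1 s record, 1 Hz frequency bins
    t = np.arange(0, T, 1 / fs)

    rng = np.random.default_rng(1)
    image = rng.random((ny, nx))

    col_freqs = np.arange(1, nx + 1)                   # continuous spread of frequencies along x
    row_offsets = 10 * np.arange(ny)                   # per-line frequency offset along y
    freqs = row_offsets[:, None] + col_freqs[None, :]  # unique frequency per pixel

    # Single-pixel detector signal: sum of all temporally modulated pixel intensities.
    signal = (image[..., None] * np.cos(2 * np.pi * freqs[..., None] * t)).sum(axis=(0, 1))

    spectrum = np.fft.rfft(signal)
    recovered = 2 * np.abs(spectrum[freqs.astype(int)]) / signal.size
    print(np.max(np.abs(recovered - image)))           # ~0 for this noiseless toy case
    ```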

  1. Iterative algorithm for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution

    NASA Astrophysics Data System (ADS)

    Quan, Haiyang; Wu, Fan; Hou, Xi

    2015-10-01

    A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified successive over-relaxation (SOR) method is effective for solving for the rotationally asymmetric components at pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory without reducing accuracy. This has been confirmed by real experimental results.
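
    For reference, the generic SOR update that the method builds on is sketched below for a small, illustrative linear system A x = b; omega = 1 reduces to Gauss-Seidel, and a well-chosen omega cuts the iteration count substantially. This is not the paper's modified surface-reconstruction solver.

    ```python
    # Minimal sketch of the Successive Over-Relaxation (SOR) update: a
    # Gauss-Seidel sweep accelerated by the relaxation factor omega. The
    # diagonally dominant test system below is illustrative only.
    import numpy as np

    def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
        x = np.zeros_like(b, dtype=float)
        for it in range(max_iter):
            x_old = x.copy()
            for i in range(len(b)):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                return x, it + 1
        return x, max_iter

    # 1-D Laplacian-like test system; the iteration count drops as omega
    # approaches its optimum for this system (roughly 1.8).
    n = 30
    A = (np.diag(np.full(n, 2.0))
         + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1))
    b = np.ones(n)
    for omega in (1.0, 1.5, 1.8):
        _, iters = sor(A, b, omega)
        print(omega, iters)
    ```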

  2. Interpixel crosstalk in a 3D-integrated active pixel sensor for x-ray detection

    NASA Astrophysics Data System (ADS)

    LaMarr, Beverly; Bautz, Mark; Foster, Rick; Kissel, Steve; Prigozhin, Gregory; Suntharalingam, Vyshnavi

    2010-07-01

    MIT Lincoln Laboratory and the MIT Kavli Institute for Astrophysics and Space Research have developed an active pixel sensor for use as a photon counting device for imaging spectroscopy in the soft X-ray band. A silicon-on-insulator (SOI) readout circuit was integrated with a high-resistivity silicon diode detector array using a per-pixel 3D integration technique developed at Lincoln Laboratory. We have tested these devices at 5.9 keV and 1.5 keV. Here we examine the interpixel cross-talk measured with 5.9 keV X-rays.

  3. Extraction of electrical characteristics from pixels of multifrequency EIT images.

    PubMed

    Fitzgerald, A J; Thomas, B J; Cornish, B H; Michael, G J; Ward, L C

    1997-05-01

    Computer modelling has shown that electrical characteristics of individual pixels may be extracted from within multiple-frequency electrical impedance tomography (MFEIT) images formed using a reference data set obtained from a purely resistive, homogeneous medium. In some applications it is desirable to extract the electrical characteristics of individual pixels from images where a purely resistive, homogeneous reference data set is not available. One such application of the technique of MFEIT is to allow the acquisition of in vivo images using reference data sets obtained from a non-homogeneous medium with a reactive component. However, the reactive component of the reference data set introduces difficulties with the extraction of the true electrical characteristics from the image pixels. This study was a preliminary investigation of a technique to extract electrical parameters from multifrequency images when the reference data set has a reactive component. Unlike the situation in which a homogeneous, resistive data set is available, it is not possible to obtain the impedance and phase information directly from the image pixel values of the MFEIT images data set, as the phase of the reactive reference is not known. The method reported here to extract the electrical characteristics (the Cole-Cole plot) initially assumes that this phase angle is zero. With this assumption, an impedance spectrum can be directly extracted from the image set. To obtain the true Cole-Cole plot a correction must be applied to account for the inherent rotation of the extracted impedance spectrum about the origin, which is a result of the assumption. This work shows that the angle of rotation associated with the reactive component of the reference data set may be determined using a priori knowledge of the distribution of frequencies of the Cole-Cole plot. Using this angle of rotation, the true Cole-Cole plot can be obtained from the impedance spectrum extracted from the MFEIT image data
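
    The de-rotation step can be illustrated with synthetic data: a Cole-Cole spectrum is rotated by a known angle to mimic extraction against a reactive reference, the angle is estimated here from the requirement that the arc's low- and high-frequency limits lie near the real axis (a simple stand-in for the paper's a priori frequency information), and multiplying by exp(-i*angle) restores the plot. All spectrum parameters are invented.

    ```python
    # Illustrative sketch of correcting a rotated Cole-Cole plot extracted
    # against a reactive reference: estimate the rotation angle and undo it.
    import numpy as np

    # Synthetic Cole-Cole spectrum (all parameters invented for illustration).
    f = np.logspace(1, 8, 80)
    R0, Rinf, tau, alpha = 800.0, 300.0, 1e-5, 0.2
    z_true = Rinf + (R0 - Rinf) / (1 + (1j * 2 * np.pi * f * tau) ** (1 - alpha))

    phi_ref = np.deg2rad(12.0)                 # rotation introduced by the reactive reference
    z_meas = z_true * np.exp(1j * phi_ref)     # what is read off the MFEIT image pixels

    # Estimate the rotation from the arc's extreme-frequency points and undo it.
    phi_est = 0.5 * (np.angle(z_meas[0]) + np.angle(z_meas[-1]))
    z_corr = z_meas * np.exp(-1j * phi_est)
    print(np.degrees(phi_est))                 # close to the 12 deg that was applied
    print(np.max(np.abs(z_corr - z_true)))     # small residual from the finite frequency range
    ```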

  4. Advanced monolithic pixel sensors using SOI technology

    NASA Astrophysics Data System (ADS)

    Miyoshi, Toshinobu; Arai, Yasuo; Asano, Mari; Fujita, Yowichi; Hamasaki, Ryutaro; Hara, Kazuhiko; Honda, Shunsuke; Ikegami, Yoichi; Kurachi, Ikuo; Mitsui, Shingo; Nishimura, Ryutaro; Tauchi, Kazuya; Tobita, Naoshi; Tsuboyama, Toru; Yamada, Miho

    2016-07-01

    We are developing advanced pixel sensors using silicon-on-insulator (SOI) technology. An SOI wafer is used; the top silicon is used for the electronic circuitry and the bottom silicon as the sensor. Target applications are high-energy physics, X-ray astronomy, material science, non-destructive inspection, medical applications, and so on. We have developed two integration-type pixel sensors, FPIXb and INTPIX7. These sensors were processed on single SOI wafers with various n- or p-type substrates, as well as on double SOI wafers. The development status of double SOI sensors and some up-to-date test results of n-type and p-type SOI sensors are shown.

  5. The Belle II DEPFET pixel detector

    NASA Astrophysics Data System (ADS)

    Moser, Hans-Günther

    2016-09-01

    The Belle II experiment at KEK (Tsukuba, Japan) will explore heavy flavour physics (B, charm and tau) starting in 2018 with unprecedented precision. Charged particles are tracked by a two-layer DEPFET pixel device (PXD), a four-layer silicon strip detector (SVD) and the central drift chamber (CDC). The PXD will consist of two layers at radii of 14 mm and 22 mm with 8 and 12 ladders, respectively. The pixel sizes will vary between 50 μm×(55-60) μm in the first layer and 50 μm×(70-85) μm in the second layer, to optimize the charge sharing efficiency. These innermost layers have to cope with high background occupancy and high radiation, and must have minimal material to reduce multiple scattering. These challenges are met using the DEPFET technology. Each pixel is a FET integrated on a fully depleted silicon bulk. The signal charge collected in the 'internal gate' modulates the FET current, resulting in first-stage amplification and therefore very low noise. This allows very thin sensors (75 μm), reducing the overall material budget of the detector (0.21% X0). Four-fold multiplexing of the column-parallel readout allows a full frame of the pixel matrix to be read out in only 20 μs while keeping the power consumption low enough for air cooling. Only the active electronics outside the detector acceptance has to be cooled actively, with a two-phase CO2 system. Furthermore, the DEPFET technology offers the unique feature of an electronic shutter, which allows the detector to operate efficiently in the continuous injection mode of SuperKEKB.

  6. Active-Pixel Cosmic-Ray Sensor

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Cunningham, Thomas J.; Holtzman, Melinda J.

    1994-01-01

    Cosmic-ray sensor comprises planar rectangular array of lateral bipolar npn floating-base transistors each of which defines pixel. Collector contacts of all transistors in each row connected to same X (column) line conductor; emitter contacts of all transistors in each column connected to same Y (row) line conductor; and current in each row and column line sensed by amplifier, output of which fed to signal-processing circuits.

  7. The Silicon Pixel Detector for ALICE Experiment

    SciTech Connect

    Fabris, D.; Bombonati, C.; Dima, R.; Lunardon, M.; Moretto, S.; Pepato, A.; Bohus, L. Sajo; Scarlassara, F.; Segato, G.; Shen, D.; Turrisi, R.; Viesti, G.; Anelli, G.; Boccardi, A.; Burns, M.; Campbell, M.; Ceresa, S.; Conrad, J.; Kluge, A.; Kral, M.

    2007-10-26

    The Inner Tracking System (ITS) of the ALICE experiment is made of position-sensitive detectors which have to operate in a region where the track density may be as high as 50 tracks/cm². To handle such densities, detectors with high precision and granularity are mandatory. The Silicon Pixel Detector (SPD), the innermost part of the ITS, has been designed to provide tracking information close to the primary interaction point. The assembly of the entire SPD has been completed.

  8. CMOS Active Pixel Sensor Technology and Reliability Characterization Methodology

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Pain, Bedabrata; Kayaii, Sammy

    2006-01-01

    This paper describes the technology, design features and reliability characterization methodology of a CMOS Active Pixel Sensor. Both overall chip reliability and pixel reliability are projected for the imagers.

  9. Status of the CMS pixel project

    SciTech Connect

    Uplegger, Lorenzo; /Fermilab

    2008-01-01

    The Compact Muon Solenoid Experiment (CMS) will start taking data at the Large Hadron Collider (LHC) in 2008. The closest detector to the interaction point is the silicon pixel detector which is the heart of the tracking system. It consists of three barrel layers and two pixel disks on each side of the interaction point for a total of 66 million channels. Its proximity to the interaction point means there will be very large particle fluences and therefore a radiation-tolerant design is necessary. The pixel detector will be crucial to achieve a good vertex resolution and will play a key role in pattern recognition and track reconstruction. The results from test beam runs prove that the expected performances can be achieved. The detector is currently being assembled and will be ready for insertion into CMS in early 2008. During the assembly phase, a thorough electronic test is being done to check the functionality of each channel to guarantee the performance required to achieve the physics goals. This report will present the final detector design, the status of the production as well as results from test beam runs to validate the expected performance.

  10. Soil moisture variability within remote sensing pixels

    SciTech Connect

    Charpentier, M.A.; Groffman, P.M. )

    1992-11-30

    This work is part of the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), an international land-surface-atmosphere experiment aimed at improving the way climate models represent energy, water, heat, and carbon exchanges, and at improving the utilization of satellite-based remote sensing to monitor such parameters. This paper addresses the question of soil moisture variation within the field of view of a remote sensing pixel. Remote sensing is the only practical way to sense soil moisture over large areas, but it is known that there can be large variations of soil moisture within the field of view of a pixel. The difficulty with this is that many processes, such as gas exchange between the surface and the atmosphere, can vary dramatically with moisture content, and a small wet spot, for example, can have a dramatic impact on such processes and thereby bias remote sensing results. Here the authors looked at the impact of surface topography on the level of soil moisture, and the interaction of both on the variability of soil moisture sensed by a push broom microwave radiometer (PBMR). In addition, the authors looked at the question of whether variations of soil moisture within pixel-size areas could be used to assign errors to PBMR-generated soil moisture data.

  11. Photovoltaic retinal prosthesis with high pixel density

    NASA Astrophysics Data System (ADS)

    Mathieson, Keith; Loudin, James; Goetz, Georges; Huie, Philip; Wang, Lele; Kamins, Theodore I.; Galambos, Ludwig; Smith, Richard; Harris, James S.; Sher, Alexander; Palanker, Daniel

    2012-06-01

    Retinal degenerative diseases lead to blindness due to loss of the 'image capturing' photoreceptors, while neurons in the 'image-processing' inner retinal layers are relatively well preserved. Electronic retinal prostheses seek to restore sight by electrically stimulating the surviving neurons. Most implants are powered through inductive coils, requiring complex surgical methods to implant the coil-decoder-cable-array systems that deliver energy to stimulating electrodes via intraocular cables. We present a photovoltaic subretinal prosthesis, in which silicon photodiodes in each pixel receive power and data directly through pulsed near-infrared illumination and electrically stimulate neurons. Stimulation is produced in normal and degenerate rat retinas, with pulse durations of 0.5-4 ms, and threshold peak irradiances of 0.2-10 mW mm⁻², two orders of magnitude below the ocular safety limit. Neural responses were elicited by illuminating a single 70 µm bipolar pixel, demonstrating the possibility of a fully integrated photovoltaic retinal prosthesis with high pixel density.

  12. Photovoltaic Retinal Prosthesis with High Pixel Density.

    PubMed

    Mathieson, Keith; Loudin, James; Goetz, Georges; Huie, Philip; Wang, Lele; Kamins, Theodore I; Galambos, Ludwig; Smith, Richard; Harris, James S; Sher, Alexander; Palanker, Daniel

    2012-06-01

    Retinal degenerative diseases lead to blindness due to loss of the "image capturing" photoreceptors, while neurons in the "image processing" inner retinal layers are relatively well preserved. Electronic retinal prostheses seek to restore sight by electrically stimulating surviving neurons. Most implants are powered through inductive coils, requiring complex surgical methods to implant the coil-decoder-cable-array systems, which deliver energy to stimulating electrodes via intraocular cables. We present a photovoltaic subretinal prosthesis, in which silicon photodiodes in each pixel receive power and data directly through pulsed near-infrared illumination and electrically stimulate neurons. Stimulation was produced in normal and degenerate rat retinas, with pulse durations from 0.5 to 4 ms, and threshold peak irradiances from 0.2 to 10 mW/mm(2), two orders of magnitude below the ocular safety limit. Neural responses were elicited by illuminating a single 70 μm bipolar pixel, demonstrating the possibility of a fully-integrated photovoltaic retinal prosthesis with high pixel density.

  13. Development of silicon micropattern pixel detectors

    NASA Astrophysics Data System (ADS)

    Heijne, E. H. M.; Antinori, F.; Beker, H.; Batignani, G.; Beusch, W.; Bonvicini, V.; Bosisio, L.; Boutonnet, C.; Burger, P.; Campbell, M.; Cantoni, P.; Catanesi, M. G.; Chesi, E.; Claeys, C.; Clemens, J. C.; Cohen Solal, M.; Darbo, G.; Da Via, C.; Debusschere, I.; Delpierre, P.; Di Bari, D.; Di Liberto, S.; Dierickx, B.; Enz, C. C.; Focardi, E.; Forti, F.; Gally, Y.; Glaser, M.; Gys, T.; Habrard, M. C.; Hallewell, G.; Hermans, L.; Heuser, J.; Hurst, R.; Inzani, P.; Jæger, J. J.; Jarron, P.; Karttaavi, T.; Kersten, S.; Krummenacher, F.; Leitner, R.; Lemeilleur, F.; Lenti, V.; Letheren, M.; Lokajicek, M.; Loukas, D.; Macdermott, M.; Maggi, G.; Manzari, V.; Martinengo, P.; Meddeler, G.; Meddi, F.; Mekkaoui, A.; Menetrey, A.; Middelkamp, P.; Morando, M.; Munns, A.; Musico, P.; Nava, P.; Navach, F.; Neyer, C.; Pellegrini, F.; Pengg, F.; Perego, R.; Pindo, M.; Pospisil, S.; Potheau, R.; Quercigh, E.; Redaelli, N.; Ridky, J.; Rossi, L.; Sauvage, D.; Segato, G.; Simone, S.; Sopko, B.; Stefanini, G.; Strakos, V.; Tempesta, P.; Tonelli, G.; Vegni, G.; Verweij, H.; Viertel, G. M.; Vrba, V.; Waisbard, J.; CERN RD19 Collaboration

    1994-09-01

    Successive versions of high speed, active silicon pixel detectors with integrated readout electronics have been developed for particle physics experiments using monolithic and hybrid technologies. Various matrices with binary output as well as a linear detector with analog output have been made. The hybrid binary matrix with 1024 cells (dimension 75 μm×500 μm) can capture events at ~5 MHz and a selected event can then be read out in < 10 μs. In different beam tests at CERN a precision of 25 μm has been achieved and the efficiency was better than 99.2%. Detector thicknesses of 300 μm and 150 μm of silicon have been used. In a test with a 109Cd source a noise level of 170 e⁻ r.m.s. (1.4 keV fwhm) has been measured with a threshold non-uniformity of 750 e⁻ r.m.s. Objectives of the development work are the increase of the size of the detecting area without loss of efficiency, the design of an appropriate readout architecture for collider operation, the reduction of material thickness in the detector, understanding of the threshold non-uniformity, study of the sensitivity of the pixel matrices to light and low energy electrons for scintillating fiber detector readout and, last but not least, the optimization of cost and yield of the pixel detectors in production.

  14. Photovoltaic Retinal Prosthesis with High Pixel Density

    PubMed Central

    Mathieson, Keith; Loudin, James; Goetz, Georges; Huie, Philip; Wang, Lele; Kamins, Theodore I.; Galambos, Ludwig; Smith, Richard; Harris, James S.; Sher, Alexander; Palanker, Daniel

    2012-01-01

    Retinal degenerative diseases lead to blindness due to loss of the “image capturing” photoreceptors, while neurons in the “image processing” inner retinal layers are relatively well preserved. Electronic retinal prostheses seek to restore sight by electrically stimulating surviving neurons. Most implants are powered through inductive coils, requiring complex surgical methods to implant the coil-decoder-cable-array systems, which deliver energy to stimulating electrodes via intraocular cables. We present a photovoltaic subretinal prosthesis, in which silicon photodiodes in each pixel receive power and data directly through pulsed near-infrared illumination and electrically stimulate neurons. Stimulation was produced in normal and degenerate rat retinas, with pulse durations from 0.5 to 4 ms, and threshold peak irradiances from 0.2 to 10 mW/mm2, two orders of magnitude below the ocular safety limit. Neural responses were elicited by illuminating a single 70 μm bipolar pixel, demonstrating the possibility of a fully-integrated photovoltaic retinal prosthesis with high pixel density. PMID:23049619

  15. A PFM based digital pixel with off-pixel residue measurement for 15μm pitch MWIR FPAs

    NASA Astrophysics Data System (ADS)

    Abbasi, Shahbaz; Shafique, Atia; Galioglu, Arman; Ceylan, Omer; Yazici, Melik; Gurbuz, Yasar

    2016-05-01

    Digital pixels based on pulse frequency modulation (PFM) employ counting techniques to achieve very high charge handling capability compared to their analog counterparts. Moreover, extended counting methods making use of leftover charge (residue) on the integration capacitor help improve the noise performance of these pixels. However, medium wave infrared (MWIR) focal plane arrays (FPAs) having smaller pixel pitch are constrained in terms of pixel area, which makes it difficult to add extended counting circuitry to the pixel. Thus, this paper investigates the performance of digital pixels employing off-pixel residue measurement. A circuit prototype of such a pixel has been designed for a 15 μm pixel pitch and fabricated in 90 nm CMOS. The prototype is composed of a pixel front-end based on a PFM loop. The front-end is a modified version of the conventional design, providing a means for buffering the signal that is converted to a digital value by an off-pixel ADC. The pixel has an integration phase and a residue measurement phase. The measured integration performance of the pixel is reported for various detector currents and integration times.

  16. Development of Kilo-Pixel Arrays of Transition-Edge Sensors for X-Ray Spectroscopy

    NASA Technical Reports Server (NTRS)

    Adams, J. S.; Bandler, S. R.; Busch, S. E.; Chervenak, J. A.; Chiao, M. P.; Eckart, M. E.; Ewin, A. J.; Finkbeiner, F. M.; Kelley, R. L.; Kelly, D. P.; Kilbourne, C. A.; Leutenegger, M. A.; Porst, J.-P.; Porter, F. S.; Ray, C. A.; Sadleir, J. E.; Smith, S. J.; Wassell, E. J.; Doriese, W. B.; Fowler, J. W.; Hilton, G. C.; Irwin, K. D.; Reintsema, C. D.; Smith, D. R.; Swetz, D. S.

    2012-01-01

    We are developing kilo-pixel arrays of transition-edge sensor (TES) microcalorimeters for future X-ray astronomy observatories or for use in laboratory astrophysics applications. For example, Athena/XMS (currently under study by the European Space Agency) would require a close-packed 32x32 pixel array on a 250-micron pitch with < 3.0 eV full-width-half-maximum energy resolution at 6 keV and at count-rates of up to 50 counts/pixel/second. We present characterization of 32x32 arrays. These detectors will be read out using state-of-the-art SQUID-based time-domain multiplexing (TDM). We will also present the latest results in integrating these detectors and the TDM readout technology into a 16 row x N column field-able instrument.

  17. Characterization of a 2-mm thick, 16x16 Cadmium-Zinc-Telluride Pixel Array

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Richardson, Georgia; Mitchell, Shannon; Ramsey, Brian; Seller, Paul; Sharma, Dharma

    2003-01-01

    The detector under study is a 2-mm-thick, 16x16 Cadmium-Zinc-Telluride pixel array with a pixel pitch of 300 microns and an inter-pixel gap of 50 microns. This detector is a precursor to that which will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs to have a spatial resolution of around 200 microns in order to take full advantage of the HERO angular resolution. We discuss the degree to which charge sharing degrades energy resolution while improving spatial resolution through position interpolation. In addition, we discuss electric field modeling for this specific detector geometry and the role this mapping will play in terms of charge sharing and charge loss in the detector.

  18. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.

    1981-01-01

    Plotted transects made from south Texas daytime HCMM data show the effect of subvisible cirrus (SCi) clouds in the emissive (IR) band, while the effect is unnoticeable in the reflective (VIS) band. The depression of satellite-indicated temperatures was greatest in the center of SCi streamers and tapered off at the edges. Pixels of uncontaminated land and water features in the HCMM test area shared identical VIS and IR digital count combinations with other pixels representing similar features. A minimum of 0.015 percent repeats of identical VIS-IR combinations is characteristic of land and water features in a scene of 30 percent cloud cover. This increases to 0.021 percent or more when the scene is clear. Pixels having shared VIS-IR combinations less than these amounts are considered to be cloud contaminated in the cluster screening method. About twenty percent of SCi was machine indistinguishable from land features in two-dimensional spectral space (VIS vs IR).
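
    A minimal sketch of the cluster-screening idea described above, under stated assumptions: the repeat-fraction threshold, the synthetic 8-bit counts and the array size are illustrative placeholders, not the HCMM processing chain. Each exact (VIS, IR) digital-count pair is counted across the scene, and pairs that repeat less often than the threshold are flagged as likely cloud or atmospheric-layer contaminated.

      import numpy as np
      from collections import Counter

      def screen_cloud_pixels(vis, ir, min_repeat_fraction=0.00015):
          """Flag pixels whose exact (VIS, IR) digital-count pair is rare.

          vis, ir : 2-D integer arrays of digital counts (same shape).
          min_repeat_fraction : pairs repeating less often than this fraction of
              all pixels are treated as contaminated; 0.015 percent is used here
              only because it is the figure quoted in the abstract for a 30
              percent cloud-cover scene, and it is scene dependent.
          """
          pairs = list(zip(vis.ravel().tolist(), ir.ravel().tolist()))
          counts = Counter(pairs)
          n = len(pairs)
          contaminated = np.array(
              [counts[p] / n < min_repeat_fraction for p in pairs]
          ).reshape(vis.shape)
          return contaminated

      # Synthetic example with random 8-bit counts.
      rng = np.random.default_rng(0)
      vis = rng.integers(0, 256, size=(200, 200))
      ir = rng.integers(0, 256, size=(200, 200))
      mask = screen_cloud_pixels(vis, ir)
      print("flagged fraction:", mask.mean())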

  19. Visualization of dyed NAPL concentration in transparent porous media using color space components.

    PubMed

    Kashuk, Sina; Mercurio, Sophia R; Iskander, Magued

    2014-07-01

    Finding a correlation between image pixel information and non-aqueous phase liquid (NAPL) saturation is an important issue in bench-scale geo-environmental model studies that employ optical imaging techniques. Another concern is determining the best dye color and its optimum concentration as a tracer for use in mapping NAPL zones. Most bench-scale flow studies employ monochromatic gray-scale imaging to analyze the concentration of mostly red-dyed NAPL tracers in porous media. However, the use of grayscale utilizes a third of the available information in color images, which typically contain three color-space components. In this study, eight color spaces consisting of 24 color-space components were calibrated against dye concentration for three color dyes. Additionally, multiple color-space components were combined to increase the correlation between color-space data and dyed NAPL concentration. This work is performed to support imaging of NAPL migration in transparent synthetic soils representing the macroscopic behavior of natural soils. The transparent soil used in this study consists of fused quartz and a matched-refractive-index mineral-oil solution that represents the natural aquifer. The objective is to determine the best color dye concentration and ideal color-space components for rendering dyed sucrose-saturated fused quartz that represents contamination of the natural aquifer by a dense NAPL (DNAPL). Calibration was achieved for six NAPL zone lengths using 3456 images (24 color-space components × 3 dyes × 48 NAPL combinations) of contaminants within a defined criterion expressed as peak signal-to-noise ratio. The effect of data filtering was also considered, and a convolution average filter is recommended for image conditioning. The technology presented in this paper is a fast, accurate, non-intrusive, and inexpensive method for quantifying contamination zones using transparent soil models.
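
    A hedged sketch of the calibration workflow implied above: convert an RGB region of interest to a few color spaces, apply a convolution (box) average filter for image conditioning, and fit one component against known dye concentrations. The choice of the HSV saturation component, the quadratic fit and the variable calibration_images are assumptions made for illustration only; RGB values are assumed to be floats scaled to [0, 1].

      import numpy as np
      from scipy.ndimage import uniform_filter
      from skimage import color

      def component_means(rgb_image, size=5):
          """Mean values of several color-space components for an RGB patch
          (floats in [0, 1]) after a convolution (box) average filter."""
          smoothed = uniform_filter(rgb_image.astype(float), size=(size, size, 1))
          hsv = color.rgb2hsv(smoothed)
          lab = color.rgb2lab(smoothed)
          return {
              "R": smoothed[..., 0].mean(), "G": smoothed[..., 1].mean(), "B": smoothed[..., 2].mean(),
              "H": hsv[..., 0].mean(), "S": hsv[..., 1].mean(), "V": hsv[..., 2].mean(),
              "L*": lab[..., 0].mean(), "a*": lab[..., 1].mean(), "b*": lab[..., 2].mean(),
          }

      # Hypothetical calibration against known dye concentrations (arbitrary units).
      concentrations = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
      # calibration_images would be a list of RGB patches imaged at those concentrations:
      # component_values = [component_means(img)["S"] for img in calibration_images]
      # coeffs = np.polyfit(component_values, concentrations, deg=2)
      # estimate = np.polyval(coeffs, component_means(new_image)["S"])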

  20. Convolution effect on TCR log response curve and the correction method for it

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Liu, L. J.; Gao, J.

    2016-09-01

    Through-casing resistivity (TCR) logging has been successfully used in production wells for the dynamic monitoring of oil pools and the distribution of residual oil, but its vertical resolution has limited its efficiency in the identification of thin beds. The vertical resolution is limited by the distortion of the vertical response of TCR logging, and this distortion was studied in this work. It was found that the vertical response curve of TCR logging is the convolution of the true formation resistivity with the convolution function of the TCR logging tool. Due to the effect of convolution, the measurement error at thin beds can reach 30% or more, so thin-bed information is very likely to be obscured. The convolution function of the TCR logging tool was obtained in both continuous and discrete forms in this work. Through a modified Lyle-Kalman deconvolution method, the true formation resistivity can be optimally estimated, so this inverse algorithm can correct the error caused by the convolution effect and thus improve the vertical resolution of the TCR logging tool for the identification of thin beds.
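
    A small numerical sketch of the convolution effect described above. The Gaussian tool function, the bed geometry and the Tikhonov-regularized deconvolution are placeholders standing in for the tool's actual convolution function and the modified Lyle-Kalman method, which are not reproduced here; the sketch only shows that convolution suppresses the apparent resistivity of a thin bed and that a deconvolution step can largely restore it.

      import numpy as np

      # Hypothetical true formation resistivity: 1 ohm-m background with a thin 10 ohm-m bed.
      n = 200
      true_rt = np.ones(n)
      true_rt[95:105] = 10.0

      # Hypothetical tool convolution function (normalized Gaussian).
      h = np.exp(-0.5 * (np.arange(-30, 31) / 10.0) ** 2)
      h /= h.sum()

      # Measured log = convolution of the true resistivity with the tool function.
      measured = np.convolve(true_rt, h, mode="same")
      print("apparent thin-bed error: %.0f%%" % (100 * (true_rt[100] - measured[100]) / true_rt[100]))

      # Regularized least-squares (Tikhonov) deconvolution as a simple stand-in
      # for the modified Lyle-Kalman estimator mentioned in the abstract.
      H = np.array([np.convolve(np.eye(n)[i], h, mode="same") for i in range(n)]).T
      lam = 1e-2
      estimate = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ measured)
      print("recovered thin-bed value: %.2f ohm-m" % estimate[100])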

  1. Alternative measures of dispersion applied to flow in a convoluted channel

    NASA Astrophysics Data System (ADS)

    Moroni, Monica; Kleinfelter-Domelle, Natalie; Cushman, John H.

    2009-05-01

    Steady flow in a convoluted channel is studied via Particle Tracking Velocimetry. The channel is constructed from a sequence of closed parallel cylindrical tubes welded together in plane which are then sliced down the lateral mid-plane and the lower complex is laterally shifted relative to the upper complex. Flow is induced in the lateral direction normal to the axis of the tubes. The a-time, Ta, the finite-size Lyapunov exponent, λa, the real-space self- and distinct-parts of the intermediate scattering function, Gs and Gd, and the pair density function, Gp, are computed from the data. Particle trajectories, velocity maps and streamlines show the channel has two prominent recirculation zones and a main flow region. The first passage time probability density function of tagged particles past a plane transverse to the mean flow illustrates how particles are delayed by recirculation zones. The delay caused by fluid element folding is manifested in single particle statistics such as the first passage time and the slowing increase in horizontal evolution of Gs. Gp describes the initial particle distribution and allows areas in the flow domain trapping particles to be identified and visualized. Gd shows the evolution of the average separation of pairs of particles and, when examined in a recirculation zone, it evolves little because of fluid element rotation. λa gives information on what transpires at a fixed scale and provides an estimate of the rate at which particles initially separated by a distance x separate to a distance ax, as opposed to Gd which allows one to view changes over time. At small separations, λ1.3 approaches a constant, and for intermediate separations it scales as x^(-0.8).

  2. Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria

    1997-01-01

    The three-dimensional shape and relative depth of a smoothly curving layered transparent surface may be communicated particularly effectively when the surface is artistically enhanced with sparsely distributed opaque detail. This paper describes how the set of principal directions and principal curvatures specified by local geometric operators can be understood to define a natural 'flow' over the surface of an object, and can be used to guide the placement of the lines of a stroke texture that seeks to represent 3D shape information in a perceptually intuitive way. The driving application for this work is the visualization of layered isovalue surfaces in volume data, where the particular identity of an individual surface is not generally known a priori and observers will typically wish to view a variety of different level surfaces from the same distribution, superimposed over underlying opaque structures. By advecting an evenly distributed set of tiny opaque particles, and the empty space between them, via 3D line integral convolution through the vector field defined by the principal directions and principal curvatures of the level surfaces passing through each gridpoint of a 3D volume, it is possible to generate a single scan-converted solid stroke texture that may intuitively represent the essential shape information of any level surface in the volume. To generate longer strokes over more highly curved areas, where the directional information is both most stable and most relevant, and to simultaneously downplay the visual impact of directional information in the flatter regions, one may dynamically redefine the length of the filter kernel according to the magnitude of the maximum principal curvature of the level surface at the point around which it is applied.

  3. Detection and evaluation of mixed pixels in Landsat agricultural scenes

    NASA Technical Reports Server (NTRS)

    Merickel, M. B.; Lundgren, J. C.; Lennington, R. K.

    1982-01-01

    A major problem area encountered in the identification and estimation of agricultural crop proportions in Landsat imagery involves the large proportion of the pixels which are mixed pixels, whose spectral response is influenced by more than one ground cover type. The development of methods for the detection and estimation of crop proportions in mixed pixels is presently reported. The procedure designated CASCADE, based on the estimation of the gradient image for the detection of mixed pixels, considers the consequences of a linear mixing model and is found to provide a method for the allocation of mixed pixels to the surrounding homogeneous region.

  4. Accounting for sub-pixel variability of clouds and/or unresolved spectral variability, as needed, with generalized radiative transfer theory

    DOE PAGESBeta

    Davis, Anthony B.; Xu, Feng; Collins, William D.

    2015-03-01

    Atmospheric hyperspectral VNIR sensing struggles with sub-pixel variability of clouds and limited spectral resolution mixing molecular lines. Our generalized radiative transfer model addresses both issues with new propagation kernels characterized by power-law decay in space.

  5. Active pixel sensor pixel having a photodetector whose output is coupled to an output transistor gate

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nakamura, Junichi (Inventor); Kemeny, Sabrina E. (Inventor)

    2005-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. A Simple Floating Gate (SFG) pixel structure could also be employed in the imager to provide a non-destructive readout and smaller pixel sizes.

  6. Improving Ship Detection with Polarimetric SAR based on Convolution between Co-polarization Channels.

    PubMed

    Li, Haiyan; He, Yijun; Wang, Wenguang

    2009-01-01

    The convolution between co-polarization amplitude-only data is studied to improve ship detection performance. The different statistical behaviors of ships and the surrounding ocean are characterized by a two-dimensional convolution function (2D-CF) between different polarization channels. The convolution value of the ocean decreases relative to the initial data, while that of ships increases, so the contrast of ships to ocean is increased. The opposite variation trends of ocean and ships can distinguish high-intensity ocean clutter from ships' signatures. The new criterion can generally avoid mistaken detection by a constant false alarm rate detector. Our new ship detector is compared with other polarimetric approaches, and the results confirm the robustness of the proposed method.

  7. Modified convolution method to reconstruct particle hologram with an elliptical Gaussian beam illumination.

    PubMed

    Wu, Xuecheng; Wu, Yingchun; Yang, Jing; Wang, Zhihua; Zhou, Binwu; Gréhan, Gérard; Cen, Kefa

    2013-05-20

    Application of the modified convolution method to reconstruct digital inline holograms of particles illuminated by an elliptical Gaussian beam is investigated. Based on the analysis of the formation of the particle hologram using the Collins formula, the convolution method is modified to compensate for the astigmatism by adding two scaling factors. Both simulated and experimental holograms of transparent droplets and opaque particles are used to test the algorithm, and the reconstructed images are compared with those obtained using FRFT reconstruction. Results show that the modified convolution method can accurately reconstruct the particle image. This method has the advantage that the reconstructed images at different depth positions have the same size and resolution as the hologram. This work shows that digital inline holography has great potential for particle diagnostics in curved containers.

  8. How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2011-01-01

    In the early 1980's the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems, then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed by the equivalent number of pixels and bits per pixel, and on how many pixels would be required to create an equivalent-quality image from a digital camera. It was found that 35-mm frames, for ISO 100 color negative film, contained equivalent pixels of 12 microns for a total of 18 million pixels per frame (6 million pixels per layer) with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased the value to 8 bits per pixel. Higher ISO speed films had larger equivalent pixels and fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high-quality 3.5" x 5.25" print could be obtained from a three-layer system containing 1300 x 1950 pixels per layer, or about 7.6 million pixels in all. In short, it became clear that when a digital camera contained about 6 million pixels (in a single layer using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems for the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging amongst digital camera makers. The question arises of just how many pixels are required and whether all pixels are equal. This paper will provide a practical look at how many pixels are needed for a good print based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million 5.7-micron pixels or 6 million 7.8-micron pixels? How do intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size? A systematic review of these issues will
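
    The figures quoted above can be checked with back-of-the-envelope arithmetic; a short sketch, assuming a standard 36 mm x 24 mm frame for the 35-mm format:

      # Equivalent pixels in a 35-mm frame at a 12-micron equivalent pixel size.
      frame_w_mm, frame_h_mm, pixel_um = 36.0, 24.0, 12.0
      pixels_per_layer = (frame_w_mm * 1e3 / pixel_um) * (frame_h_mm * 1e3 / pixel_um)
      print(pixels_per_layer / 1e6)        # ~6.0 million per layer, ~18 million over 3 layers

      # Print resolution implied by 1300 x 1950 pixels per layer on a 3.5" x 5.25" print.
      print(1300 / 3.5, 1950 / 5.25)       # ~371 pixels per inch in each direction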

  9. A new 9T global shutter pixel with CDS technique

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Ma, Cheng; Zhou, Quan; Wang, Xinyang

    2015-04-01

    Benefiting from being free of motion blur, global shutter pixels are widely used in the design of CMOS image sensors for high-speed applications such as motion vision and scientific inspection. In global shutter sensors, all pixel signal information needs to be stored in the pixel first and then wait for readout. For higher frame rates, very fast operation of the pixel array is needed. There are basically two ways to store the signal in the pixel. One is in the charge domain, such as the design shown in [1], which needs a complicated process during pixel fabrication. The other is in the voltage domain; one example is the design in [2], which is based on the 4T PPD technology, where driving the highly capacitive transfer gate normally limits the speed of the array operation. In this paper we report a new 9T global shutter pixel based on 3-T partially pinned photodiode (PPPD) technology. It incorporates three in-pixel storage capacitors allowing for correlated double sampling (CDS) and pipelined operation of the array (pixel exposure during readout of the array). Only two control pulses are needed for all the pixels at the end of exposure, which allows high-speed exposure control.

  10. Punctured Parallel and Serial Concatenated Convolutional Codes for BPSK/QPSK Channels

    NASA Technical Reports Server (NTRS)

    Acikel, Omer Fatih

    1999-01-01

    As available bandwidth for communication applications becomes scarce, bandwidth-efficient modulation and coding schemes become ever more important. Since their discovery in 1993, turbo codes (parallel concatenated convolutional codes) have been the center of attention in the coding community because of their bit error rate performance near the Shannon limit. Serial concatenated convolutional codes have also been shown to be as powerful as turbo codes. In this dissertation, we introduce algorithms for designing bandwidth-efficient rate r = k/(k + 1), k = 2, 3, ..., 16, parallel and rate 3/4, 7/8, and 15/16 serial concatenated convolutional codes via puncturing for BPSK/QPSK (Binary Phase Shift Keying/Quadrature Phase Shift Keying) channels. Both parallel and serial concatenated convolutional codes initially have a steep bit error rate versus signal-to-noise ratio slope (called the "cliff region"). However, this steep slope changes to a moderate slope with increasing signal-to-noise ratio, where the slope is characterized by the weight spectrum of the code. The region after the cliff region is called the "error rate floor", which dominates the behavior of these codes at moderate to high signal-to-noise ratios. Our goal is to design high-rate parallel and serial concatenated convolutional codes while minimizing the error rate floor effect. The design algorithm includes an interleaver enhancement procedure and finds the polynomial sets (only for parallel concatenated convolutional codes) and the puncturing schemes that achieve the lowest bit error rate performance around the floor for the code rates of interest.
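
    A minimal sketch of rate-increasing puncturing applied to a rate-1/2 mother code. The generators (octal 7, 5) and the period-3 puncturing pattern are common textbook choices used purely for illustration; they are not the optimized polynomial sets or puncturing schemes produced by the design algorithm described above.

      import numpy as np

      def encode_rate_half(bits, g0=0b111, g1=0b101):
          """Rate-1/2, constraint-length-3 convolutional encoder (octal generators 7, 5)."""
          state, out = 0, []
          for b in map(int, bits):
              state = ((state << 1) | b) & 0b111
              out.append(bin(state & g0).count("1") % 2)   # first parity stream
              out.append(bin(state & g1).count("1") % 2)   # second parity stream
          return np.array(out, dtype=np.uint8)

      def puncture(coded, pattern):
          """Keep only the coded bits where the cyclically repeated pattern is 1."""
          mask = np.resize(np.asarray(pattern, dtype=bool), coded.shape)
          return coded[mask]

      bits = np.random.default_rng(1).integers(0, 2, size=12)
      coded = encode_rate_half(bits)                     # 24 coded bits -> rate 1/2
      punctured = puncture(coded, [1, 1, 0, 1, 1, 0])    # keep 4 of every 6 -> rate 3/4
      print(len(bits), len(coded), len(punctured))       # 12, 24, 16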

  11. Research on Optical Observation for Space Debris

    NASA Astrophysics Data System (ADS)

    Sun, R. Y.

    2015-01-01

    Space debris has been recognized as a serious danger for operational spacecraft and manned spaceflights. Methods for achieving high positional precision and high detection efficiency for space debris are discussed, including the design of the surveying strategy, the extraction of the object centroid, the precise measurement of object positions, and the correlation and cataloguing technique. To meet the needs of detecting space objects in GEO (Geosynchronous Orbit) and to prevent the saturation of CCD pixels with a long exposure time, a method of stacking a series of short-exposure images is presented. The results demonstrate that the saturation of pixels is eliminated effectively, the SNR (Signal-to-Noise Ratio) is increased by about 3.2 times, and the detection ability is improved by about 2.5 magnitudes when 10 sequential images are stacked, and the accuracy is reliable enough to satisfy the requirement when the mean plate parameters are used for the astronomical orientation. A method combining geometrical morphology identification and linear correlation is adopted for the data calibration of the IADC (Inter-Agency Space Debris Coordination Committee) AI23.4. After calibration, 139 tracklets are acquired, of which 116 are correlated with the catalogue. The distributions of magnitude, semi-major axis, inclination, and longitude of ascending node are obtained as well. A new method for detecting space debris in images is presented. The algorithm sets a gate around the object images, then several criteria are introduced for object detection, and finally the object position in the frame is obtained by the barycenter method and a simple linear transformation. The tests demonstrate that this technique is convenient for application, and the objects in an image can be detected with high centroid precision. In observations of space objects, the camera shutter is often removed, and smear noise is unavoidable. Based on the differences of the geometry between the
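
    A small simulation, with hypothetical signal and noise levels, of why co-adding N short exposures with independent noise raises the SNR by roughly sqrt(N); for N = 10 this gives about 3.2, consistent with the gain quoted above.

      import numpy as np

      rng = np.random.default_rng(2)
      n_frames, signal, sigma = 10, 5.0, 20.0                 # hypothetical counts and noise

      # Each frame: a faint constant source on top of zero-mean noise.
      frames = signal + sigma * rng.standard_normal((n_frames, 10000))

      single_snr = signal / sigma
      stacked = frames.mean(axis=0)                           # co-add (mean) of the frames
      stacked_snr = signal / stacked.std()
      print(single_snr, stacked_snr, stacked_snr / single_snr)  # gain ~ sqrt(10) ~ 3.16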

  12. Prototype pixel optohybrid for the CMS phase 1 upgraded pixel detector

    NASA Astrophysics Data System (ADS)

    Troska, J.; Detraz, S.; El Nasr-Storey, S. S.; Stejskal, P.; Sigaud, C.; Soos, C.; Vasey, F.

    2012-01-01

    The CMS Pixel detector phase 1 upgrade calls for an optical readout system operating digitally at or above 320 Mb/s. Since the re-use of the existing link components as installed is excluded, we have designed a new Pixel Optohybrid (POH) for use within this system. We report on the design and choice of components as well as their measured performance. In particular, we have studied the impact upon error-free link operation of the way the data are encoded before being transmitted over the link. We have thus demonstrated the feasibility of operating the new POH within the upgraded readout system.

  13. Active pixel sensor array with electronic shuttering

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor)

    2002-01-01

    An active pixel cell includes electronic shuttering capability. The cell can be shuttered to prevent additional charge accumulation. One mode transfers the current charge to a storage node that is blocked against accumulation of optical radiation. The charge is sampled from a floating node. Since the charge is stored, the node can be sampled at the beginning and the end of every cycle. Another aspect allows charge to spill out of the well whenever the accumulated charge exceeds a certain amount, thereby providing anti-blooming.

  14. Single-pixel complementary compressive sampling spectrometer

    NASA Astrophysics Data System (ADS)

    Lan, Ruo-Ming; Liu, Xue-Feng; Yao, Xu-Ri; Yu, Wen-Kai; Zhai, Guang-Jie

    2016-05-01

    A new type of compressive spectroscopy technique employing a complementary sampling strategy is reported. In a single sequence of spectral compressive sampling, positive and negative measurements are performed, in which sensing matrices with a complementary relationship are used. The restricted isometry property condition necessary for accurate recovery in compressive sampling theory is satisfied mathematically. Compared with the conventional single-pixel spectroscopy technique, the complementary compressive sampling strategy can achieve spectral recovery of considerably higher quality within a shorter sampling time. We also investigate the influence of the sampling ratio and integration time on the recovery quality.

  15. Small pixel uncooled imaging FPAs and applications

    NASA Astrophysics Data System (ADS)

    Blackwell, Richard; Franks, Glen; Lacroix, Daniel; Hyland, Sandra; Murphy, Robert

    2010-04-01

    BAE Systems continues to make dramatic progress in uncooled microbolometer sensors and applications. This paper will review the latest advancements in microbolometer technology at BAE Systems, including the development status of 17 micrometer pixel pitch detectors and imaging modules which are entering production and will be finding their way into BAE Systems products and applications. Benefits include increased die per wafer and potential benefits to SWAP for many applications. Applications include thermal weapons sights, thermal imaging modules for remote weapon stations, vehicle situational awareness sensors and mast/pole mounted sensors.

  16. Pixel-Level Simulation of Imaging Data

    NASA Astrophysics Data System (ADS)

    Stoughton, C.; Kuropatkin, N. P.; Neilsen, E., Jr.; Harms, D. C.

    2007-10-01

    We are preparing a set of Java packages to facilitate the design and operation of imaging surveys. The packages use shapelets to describe shapes of astronomical sources, optical distortions, and shear from weak gravitational lensing. We introduce noise, bad pixels, cosmic rays, the pupil image, saturation, and other observational effects. A set of utility classes handles I/O, plotting, and interfaces to existing packages: nom.tam.fits for FITS I/O; uk.ac.starlink.table for tables; and cern.colt for algorithms. The packages have been used to generate images for the Dark Energy Survey data challenges, and will be used by SNAP to continue evaluating its design.

  17. Calculation of reflectance distribution using angular spectrum convolution in mesh-based computer generated hologram.

    PubMed

    Yeom, Han-Ju; Park, Jae-Hyeung

    2016-08-22

    We propose a method to obtain a computer-generated hologram that renders reflectance distributions of individual mesh surfaces of three-dimensional objects. Unlike previous methods which find the phase distribution inside each mesh, the proposed method performs convolution of the angular spectrum of the mesh to obtain the desired reflectance distribution. Manipulation in the angular spectrum domain enables its application to fully analytic mesh-based computer-generated holograms, removing the necessity for resampling of the spatial frequency grid. It is also computationally inexpensive, as the convolution can be performed efficiently using the Fourier transform. In this paper, we present the principle, error analysis, simulation, and experimental verification results of the proposed method.

  18. Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m0 binary memory cells and k0 (> m0) inputs, a state diagram of 2^k0 states was needed for the transfer function bound. A reduced state diagram of (2^m0 + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.

  19. Blind separation of convolutive sEMG mixtures based on independent vector analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaomei; Guo, Yina; Tian, Wenyan

    2015-12-01

    An independent vector analysis (IVA) method based on a variable-step gradient algorithm is proposed in this paper. According to the physiological properties of sEMG, the IVA model is applied to the frequency-domain separation of convolutive sEMG mixtures to extract motor unit action potential information from sEMG signals. The decomposition capability of the proposed method is compared to that of independent component analysis (ICA), and experimental results show that the variable-step gradient IVA method outperforms ICA in blind separation of convolutive sEMG mixtures.

  20. A convolutional learning system for object classification in 3-D Lidar data.

    PubMed

    Prokhorov, Danil

    2010-05-01

    In this brief, a convolutional learning system for classification of segmented objects represented in 3-D as point clouds of laser reflections is proposed. Several novelties are discussed: (1) extension of the existing convolutional neural network (CNN) framework to direct processing of 3-D data in a multiview setting which may be helpful for rotation-invariant consideration, (2) improvement of CNN training effectiveness by employing a stochastic meta-descent (SMD) method, and (3) combination of unsupervised and supervised training for enhanced performance of CNN. CNN performance is illustrated on a two-class data set of objects in a segmented outdoor environment.

  1. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    SciTech Connect

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.; Leichner, P.K.

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier transform (3D-DFT) convolution method to carry out the dosimetry of I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
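
    A hedged sketch of the core numerical step, a 3-D convolution of a cumulated-activity map with a dose point kernel evaluated via FFTs. The uniform sphere of activity and the 1/r^2-like kernel are toy placeholders, not the I-131 kernel or the quantitative-SPECT activity maps used in the study.

      import numpy as np

      def dose_by_fft_convolution(activity, kernel):
          """Convolve a 3-D activity map with a dose point kernel using FFTs
          (zero-padded to avoid wrap-around, then cropped to 'same' alignment)."""
          shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
          full = np.fft.irfftn(np.fft.rfftn(activity, shape) * np.fft.rfftn(kernel, shape), shape)
          start = [k // 2 for k in kernel.shape]
          slices = tuple(slice(s, s + n) for s, n in zip(start, activity.shape))
          return full[slices]

      x = np.arange(-15, 16)
      X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
      activity = (X**2 + Y**2 + Z**2 <= 8**2).astype(float)   # uniform sphere of activity
      kernel = 1.0 / np.maximum(X**2 + Y**2 + Z**2, 1.0)      # placeholder dose point kernel
      dose = dose_by_fft_convolution(activity, kernel)
      print(dose.shape, dose.max())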

  2. A new method to improve multiplication factor in micro-pixel avalanche photodiodes with high pixel density

    NASA Astrophysics Data System (ADS)

    Sadygov, Z.; Ahmadov, F.; Khorev, S.; Sadigov, A.; Suleymanov, S.; Madatov, R.; Mehdiyeva, R.; Zerrouk, F.

    2016-07-01

    Presented is a new model describing the development of the avalanche process in time, taking into account the dynamics of the electric field within the depleted region of the diode and the effect of the parasitic capacitance shunting individual quenching micro-resistors on device parameters. Simulations show that the effective capacitance of a single pixel, which defines the multiplication factor, is the sum of the pixel capacitance and the parasitic capacitance shunting its quenching micro-resistor. Conclusions obtained as a result of the modeling open up possibilities for improving the pixel gain in micro-pixel avalanche photodiodes with high pixel density (or low pixel capacitance).

  3. Active pixel sensor array with multiresolution readout

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)

    1999-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.

  4. Further applications for mosaic pixel FPA technology

    NASA Astrophysics Data System (ADS)

    Liddiard, Kevin C.

    2011-06-01

    In previous papers to this SPIE forum the development of novel technology for next generation PIR security sensors has been described. This technology combines the mosaic pixel FPA concept with low cost optics and purpose-designed readout electronics to provide a higher performance and affordable alternative to current PIR sensor technology, including an imaging capability. Progressive development has resulted in increased performance and transition from conventional microbolometer fabrication to manufacture on 8 or 12 inch CMOS/MEMS fabrication lines. A number of spin-off applications have been identified. In this paper two specific applications are highlighted: high performance imaging IRFPA design and forest fire detection. The former involves optional design for small pixel high performance imaging. The latter involves cheap expendable sensors which can detect approaching fire fronts and send alarms with positional data via mobile phone or satellite link. We also introduce to this SPIE forum the application of microbolometer IR sensor technology to IoT, the Internet of Things.

  5. Moving from pixel to object scale when inverting radiative transfer models for quantitative estimation of biophysical variables in vegetation (Invited)

    NASA Astrophysics Data System (ADS)

    Atzberger, C.

    2013-12-01

    The robust and accurate retrieval of vegetation biophysical variables using RTM is seriously hampered by the ill-posedness of the inverse problem. The contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results compared to the traditional pixel-based inversion. The accompanying figures illustrate the ill-posed inverse problem and the proposed solution in the red-nIR feature space using PROSAIL: 'soil trajectories' traced out as LAI varies for given leaf angles (ALA) and soil brightness values; the ambiguity whereby different combinations of ALA and soil brightness yield an identical crossing point; and the object-based inversion, in which a single soil trajectory must fit all nine pixels within a gliding 3×3 window, assuming soil brightness varies negligibly over such short distances. Further figures compare ground-measured and retrieved LAI values for the three crops for the object-based and the pixel-based inversions.

  6. On-Orbit Solar Dynamics Observatory (SDO) Star Tracker Warm Pixel Analysis

    NASA Technical Reports Server (NTRS)

    Felikson, Denis; Ekinci, Matthew; Hashmall, Joseph A.; Vess, Melissa

    2011-01-01

    This paper describes the process of identification and analysis of warm pixels in two autonomous star trackers on the Solar Dynamics Observatory (SDO) mission. A brief description of the mission orbit and attitude regimes is discussed and pertinent star tracker hardware specifications are given. Warm pixels are defined and the Quality Index parameter is introduced, which can be explained qualitatively as a manifestation of a possible warm pixel event. A description of the algorithm used to identify warm pixel candidates is given. Finally, analysis of dumps of on-orbit star tracker charge coupled device (CCD) images is presented and an operational plan going forward is discussed. SDO, launched on February 11, 2010, is operated from the NASA Goddard Space Flight Center (GSFC). SDO is in a geosynchronous orbit with a 28.5° inclination. The nominal mission attitude points the spacecraft X-axis at the Sun, with the spacecraft Z-axis roughly aligned with the Solar North Pole. The spacecraft Y-axis completes the triad. In attitude, SDO moves approximately 0.04° per hour, mostly about the spacecraft Z-axis. The SDO star trackers, manufactured by Galileo Avionica, project the images of stars in their 16.4° x 16.4° fields of view onto CCD detectors consisting of 512 x 512 pixels. The trackers autonomously identify the star patterns and provide an attitude estimate. Each unit is able to track up to 9 stars. Additionally, each tracker calculates a parameter called the Quality Index, which is a measure of the quality of the attitude solution. Each pixel in the CCD measures the intensity of light, and a warm pixel is defined as having a measurement consistently and significantly higher than the mean background intensity level. A warm pixel should also have lower intensity than a pixel containing a star image and will not move across the field of view as the attitude changes (as would a dim star image). It should be noted that the maximum error introduced in the star tracker

  7. Edge effects in a small pixel CdTe for X-ray imaging

    NASA Astrophysics Data System (ADS)

    Duarte, D. D.; Bell, S. J.; Lipp, J.; Schneider, A.; Seller, P.; Veale, M. C.; Wilson, M. D.; Baker, M. A.; Sellin, P. J.; Kachkanov, V.; Sawhney, K. J. S.

    2013-10-01

    Large area detectors capable of operating with high detection efficiency at energies above 30 keV are required in many contemporary X-ray imaging applications. The properties of high-Z compound semiconductors, such as CdTe, make them ideally suited to these applications. The STFC Rutherford Appleton Laboratory has developed a small pixel CdTe detector with 80 × 80 pixels on a 250 μm pitch. Historically, these detectors have included a 200 μm wide guard band around the pixelated anode to reduce the effect of defects at the crystal edge. The latest version of the detector ASIC is capable of four-side butting, which allows the tiling of N × N flat panel arrays. To limit the dead space between modules to the width of one pixel, edgeless detector geometries have been developed where the active volume of the detector extends to the physical edge of the crystal. The spectroscopic performance of an edgeless CdTe detector bump-bonded to the HEXITEC ASIC was tested with sealed radiation sources and compared with monochromatic X-ray micro-beam mapping measurements made at the Diamond Light Source, U.K. The average energy resolution at 59.54 keV of bulk and edge pixels was 1.23 keV and 1.58 keV, respectively. 87% of the edge pixels show fully spectroscopic performance, demonstrating that edgeless CdTe detectors are a promising technology for the production of large-panel radiation detectors for X-ray imaging.

  8. Geometrical modulation transfer function for different pixel active area shapes

    NASA Astrophysics Data System (ADS)

    Yadid-Pecht, Orly

    2000-04-01

    In this work we consider the effect of the pixel active area geometrical shape on the modulation transfer function (MTF) of an image sensor. When designing a CMOS Active Pixel Sensor, or a CCD or CID sensor for that matter, the active area of the pixel has a certain geometrical shape which might not cover the whole pixel area. To improve the device performance, it is important to understand the effect this has on the pixel sensitivity and on the resulting MTF. We perform a theoretical analysis of the MTF for the active area shape and derive explicit formulas for the transfer function for pixel arrays with a square, a rectangular and an L-shaped active area (the most commonly used), and generalize to any connected active area shape. Preliminary experimental results of subpixel scanning sensitivity maps and the corresponding MTFs have also been obtained, which confirm the theoretical derivations. Both the simulation results and the MTF calculated from the point spread function measurements of the actual pixel arrays show that the active area shape contributes significantly to the behavior of the overall MTF. The results also indicate that for any potential pixel active area shape, the effect of its deviation from the square pixel can be calculated, so that tradeoffs between conflicting requirements, such as SNR and MTF, can be compared for each pixel design for better overall sensor performance.
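
    A small numerical sketch of the geometrical MTF idea: sample the active-area indicator function over one pixel pitch and take the magnitude of its Fourier transform, here comparing a 100% fill-factor square aperture with a 75% L-shaped one. The geometries are toy examples, not the paper's closed-form expressions or measured pixel layouts.

      import numpy as np

      def geometric_mtf(active_area):
          """Normalized |FT| of the pixel active-area indicator function."""
          mtf = np.abs(np.fft.fftshift(np.fft.fft2(active_area)))
          return mtf / mtf.max()

      n = 64                                    # samples across one pixel pitch
      square = np.ones((n, n))                  # 100% fill factor
      l_shape = np.zeros((n, n))                # L-shaped active area (75% fill)
      l_shape[: n // 2, :] = 1.0
      l_shape[:, : n // 2] = 1.0

      for name, aperture in [("square", square), ("L-shape", l_shape)]:
          mtf = geometric_mtf(aperture)
          # Value one cycle-per-pitch from the zero-frequency peak (x direction):
          # zero for the full square aperture, non-zero for the L shape.
          print(name, mtf[n // 2, n // 2 + 1])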

  9. Dimensional regularization in configuration space

    SciTech Connect

    Bollini, C.G.; Giambiagi, J.J.

    1996-05-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in ν dimensions the perturbative momentum space Green functions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov and Shirkov, are needed for convolutions. The regularized causal functions in x space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant analytic functions of ν. Several examples are discussed. © 1996 The American Physical Society.

  10. How big is an OMI pixel?

    NASA Astrophysics Data System (ADS)

    de Graaf, Martin; Sihler, Holger; Tilstra, Lieuwe G.; Stammes, Piet

    2016-08-01

    The Ozone Monitoring Instrument (OMI) is a push-broom imaging spectrometer, observing solar radiation backscattered by the Earth's atmosphere and surface. The incoming radiation is detected using a static imaging CCD (charge-coupled device) detector array with no moving parts, as opposed to most of the previous satellite spectrometers, which used a moving mirror to scan the Earth in the across-track direction. The field of view (FoV) of detector pixels is the solid angle from which radiation is observed, averaged over the integration time of a measurement. The OMI FoV is not quadrangular, which is common for scanning instruments, but rather super-Gaussian shaped and overlapping with the FoV of neighbouring pixels. This has consequences for pixel-area-dependent applications, like cloud fraction products, and for visualisation. The shapes and sizes of OMI FoVs were determined pre-flight by theoretical and experimental tests but never verified after launch. In this paper the OMI FoV is characterised using collocated MODerate resolution Imaging Spectroradiometer (MODIS) reflectance measurements. MODIS measurements have a much higher spatial resolution than OMI measurements and spectrally overlap at 469 nm. The OMI FoV was verified by finding the highest correlation between MODIS and OMI reflectances in cloud-free scenes, assuming a 2-D super-Gaussian function with varying size and shape to represent the OMI FoV. Our results show that the OMPIXCOR product 75FoV corner coordinates are accurate as the full width at half maximum (FWHM) of a super-Gaussian FoV model when this function is assumed. The softness of the function edges, modelled by the super-Gaussian exponents, differs between the two directions and is view-angle dependent. The optimal overlap function between OMI and MODIS reflectances is scene dependent and highly dependent on time differences between overpasses, especially with clouds in the scene. For partially clouded scenes, the optimal overlap function was
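
    A hedged sketch of one common 2-D super-Gaussian parameterization with independent widths and edge-softness exponents in the two directions; the widths and exponents below are illustrative, not the values retrieved for OMI.

      import numpy as np

      def super_gaussian_2d(x, y, wx, wy, nx, ny):
          """2-D super-Gaussian FoV model: falls to 0.5 at |x| = wx/2 and |y| = wy/2;
          the exponents nx, ny control how soft or box-like the edges are."""
          return np.exp(-np.log(2) * (np.abs(2 * x / wx) ** nx + np.abs(2 * y / wy) ** ny))

      # Illustrative pixel: 24 km x 13 km FWHM, boxier across track than along track.
      x = np.linspace(-30, 30, 601)             # across-track distance (km)
      y = np.linspace(-20, 20, 401)             # along-track distance (km)
      X, Y = np.meshgrid(x, y)
      fov = super_gaussian_2d(X, Y, wx=24.0, wy=13.0, nx=4.0, ny=2.0)

      # The half-maximum contour crosses the axes at +/- wx/2 and +/- wy/2.
      print(super_gaussian_2d(12.0, 0.0, 24.0, 13.0, 4.0, 2.0))   # 0.5
      print(super_gaussian_2d(0.0, 6.5, 24.0, 13.0, 4.0, 2.0))    # 0.5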

  11. The VLSI design of an error-trellis syndrome decoder for certain convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.

    1986-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  12. The VLSI design of error-trellis syndrome decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.

    1985-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  13. Profile of CT scan output dose in axial and helical modes using convolution

    NASA Astrophysics Data System (ADS)

    Anam, C.; Haryanto, F.; Widita, R.; Arif, I.; Dougherty, G.

    2016-03-01

    The profile of the CT scan output dose is crucial for establishing the patient dose profile. The purpose of this study is to investigate the profile of the CT scan output dose in both axial and helical modes using convolution. A single scan output dose profile (SSDP) in the center of a head phantom was measured using a solid-state detector. The multiple scan output dose profile (MSDP) in the axial mode was calculated using convolution between the SSDP and a delta function, whereas for the helical mode the MSDP was calculated using convolution between the SSDP and a rectangular function. MSDPs were calculated for a number of scans (5, 10, 15, 20 and 25). The multiple scan average dose (MSAD) for differing numbers of scans was compared to the value of the CT dose index (CTDI). Finally, the edge values of the MSDP for every scan number were compared to the corresponding MSAD values. MSDPs were successfully generated using convolution between an SSDP and the appropriate function. We found that the CTDI only accurately estimates the MSAD when the number of scans is more than 10. We also found that the edge values of the profiles were 42% to 93% lower than the corresponding MSADs.
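
    A minimal sketch of the construction described above: the axial-mode MSDP as the convolution of the SSDP with a train of delta functions at the table increments, and the helical-mode MSDP as the convolution with a rectangular function. The analytic SSDP and the scan parameters are placeholders standing in for the solid-state-detector measurement.

      import numpy as np

      dz = 0.5                                           # spatial step (mm)
      z = np.arange(-200, 200 + dz, dz)                  # position along the scan axis (mm)

      # Placeholder single scan output dose profile (SSDP): peak plus scatter tails.
      ssdp = np.exp(-0.5 * (z / 12.0) ** 2) + 0.05 * np.exp(-np.abs(z) / 60.0)

      n_scans, increment = 15, 10.0                      # hypothetical protocol (mm table feed)

      # Axial mode: delta comb at the table increments.
      comb = np.zeros_like(z)
      for c in (np.arange(n_scans) - (n_scans - 1) / 2) * increment:
          comb[np.argmin(np.abs(z - c))] = 1.0
      msdp_axial = np.convolve(ssdp, comb, mode="same")

      # Helical mode: rectangular function spanning the scanned length,
      # scaled so it deposits the same number of rotations as the comb.
      rect = (np.abs(z) <= n_scans * increment / 2).astype(float) * dz / increment
      msdp_helical = np.convolve(ssdp, rect, mode="same")

      print(msdp_axial.max(), msdp_helical.max())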

  14. A generalized recursive convolution method for time-domain propagation in porous media.

    PubMed

    Dragna, Didier; Pineau, Pierre; Blanc-Benon, Philippe

    2015-08-01

    An efficient numerical method, referred to as the auxiliary differential equation (ADE) method, is proposed to compute convolutions between relaxation functions and acoustic variables arising in sound propagation equations in porous media. For this purpose, the relaxation functions are approximated in the frequency domain by rational functions. The time variation of the convolution is thus governed by first-order differential equations which can be straightforwardly solved. The accuracy of the method is first investigated and compared to that of recursive convolution methods. It is shown that, while recursive convolution methods are first- or second-order accurate in time, the ADE method does not introduce any additional error. The ADE method is then applied to outdoor sound propagation using the equations proposed by Wilson et al. for the ground [(2007). Appl. Acoust. 68, 173-200]. A first one-dimensional case is presented, showing that only five poles are necessary to accurately approximate the relaxation functions for typical applications. Finally, the ADE method is used to compute sound propagation in a three-dimensional geometry over an absorbing ground. Results obtained with Wilson's equations are compared to those obtained with Zwikker and Kosten's equations and with an impedance surface for different flow resistivities.
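
    A small sketch contrasting a direct discrete convolution with a single-pole exponential relaxation kernel against the recursive update used by recursive-convolution and ADE-type schemes. The kernel constants and the toy signal are placeholders; Wilson's multi-pole relaxation functions and the full ADE formulation are not reproduced.

      import numpy as np

      dt, n = 1e-4, 5000
      t = np.arange(n) * dt
      x = np.sin(2 * np.pi * 200 * t)                  # toy acoustic variable

      A, lam = 1.0, 800.0                              # single-pole kernel h(t) = A*exp(-lam*t)
      h = A * np.exp(-lam * t)

      # Reference: direct discrete convolution y(t) = integral of h(t - tau) x(tau) dtau.
      y_direct = np.convolve(x, h)[:n] * dt

      # Recursive update y[k] = exp(-lam*dt)*y[k-1] + A*dt*x[k]; the ADE method instead
      # advances the auxiliary ODE dy/dt = -lam*y + A*x with the same time scheme as the
      # propagation equations, so it adds no extra error on top of that scheme.
      y_rec = np.zeros(n)
      decay = np.exp(-lam * dt)
      for k in range(1, n):
          y_rec[k] = decay * y_rec[k - 1] + A * dt * x[k]

      print(np.max(np.abs(y_rec - y_direct)))          # both approximate the same convolution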

  15. Convolutional neural network based sensor fusion for forward looking ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Rayn; Crosskey, Miles; Chen, David; Walenz, Brett; Morton, Kenneth

    2016-05-01

    Forward looking ground penetrating radar (FLGPR) is an alternative buried threat sensing technology designed to offer additional standoff compared to downward looking GPR systems. Due to additional flexibility in antenna configurations, FLGPR systems can accommodate multiple sensor modalities on the same platform that can provide complementary information. The different sensor modalities present challenges both in developing informative feature extraction methods and in fusing sensor information in order to obtain the best discrimination performance. This work uses convolutional neural networks to jointly learn features across two sensor modalities and fuse the information in order to distinguish between target and non-target regions. This joint optimization is possible by modifying the traditional image-based convolutional neural network configuration to extract data from multiple sources. The filters generated by this process create a learned feature extraction method that is optimized to provide the best discrimination performance when fused. This paper presents the results of applying convolutional neural networks and compares these results to fusion performed with a linear classifier. This paper also compares performance between convolutional neural network architectures to show the benefit of fusing the sensor information in different ways.
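
    A hedged sketch, in PyTorch, of the general idea of jointly learning and fusing features from two sensor modalities with a two-branch convolutional network. The layer sizes, chip dimensions and fusion point are arbitrary placeholders, not the architecture evaluated in the paper.

      import torch
      import torch.nn as nn

      class TwoSensorFusionCNN(nn.Module):
          """Two convolutional branches (one per sensor modality) whose feature
          maps are concatenated and fed to a shared classifier head."""
          def __init__(self, n_classes=2):
              super().__init__()
              def branch(in_ch):
                  return nn.Sequential(
                      nn.Conv2d(in_ch, 8, kernel_size=3, padding=1), nn.ReLU(),
                      nn.MaxPool2d(2),
                      nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(4),
                  )
              self.branch_a = branch(1)          # chips from the first modality
              self.branch_b = branch(1)          # co-registered chips from the second
              self.head = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(2 * 16 * 4 * 4, 64), nn.ReLU(),
                  nn.Linear(64, n_classes),
              )

          def forward(self, xa, xb):
              fused = torch.cat([self.branch_a(xa), self.branch_b(xb)], dim=1)
              return self.head(fused)

      model = TwoSensorFusionCNN()
      xa, xb = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
      print(model(xa, xb).shape)                 # torch.Size([4, 2])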

  16. The venom apparatus in stenogastrine wasps: subcellular features of the convoluted gland.

    PubMed

    Petrocelli, Iacopo; Turillazzi, Stefano; Delfino, Giovanni

    2014-09-01

    In the wasp venom apparatus, the convoluted gland is the tract of the thin secretory unit, i.e. filament, contained in the muscular reservoir. Previous transmission electron microscope investigation on Stenogastrinae disclosed that the free filaments consist of distal and proximal tracts, from/to the venom reservoir, characterized by class 3 and 2 gland patterns, respectively. This study aims to extend the ultrastructural analysis to the convoluted tract, in order to provide a thorough, subcellular representation of the venom gland in these Asian wasps. Our findings showed that the convoluted gland is a continuation of the proximal tract, with secretory cells provided with a peculiar apical invagination, the extracellular cavity, collecting their products. This compartment holds a simple end-apparatus lined by large and ramified microvilli that contribute to the processing of the secretory product. A comparison between previous and present findings reveals a noticeable regionalization of the stenogastrine venom filaments and suggests that the secretory product acquires its ultimate composition in the convoluted tract.

  17. Two projects in theoretical neuroscience: A convolution-based metric for neural membrane potentials and a combinatorial connectionist semantic network method

    NASA Astrophysics Data System (ADS)

    Evans, Garrett Nolan

    In this work, I present two projects that both contribute to the aim of discovering how intelligence manifests in the brain. The first project is a method for analyzing recorded neural signals, which takes the form of a convolution-based metric on neural membrane potential recordings. Relying only on integral and algebraic operations, the metric compares the timing and number of spikes within recordings as well as the recordings' subthreshold features: summarizing differences in these with a single "distance" between the recordings. Like van Rossum's (2001) metric for spike trains, the metric is based on a convolution operation that it performs on the input data. The kernel used for the convolution is carefully chosen such that it produces a desirable frequency space response and, unlike van Rossum's kernel, causes the metric to be first order both in differences between nearby spike times and in differences between same-time membrane potential values: an important trait. The second project is a combinatorial syntax method for connectionist semantic network encoding. Combinatorial syntax has been a point on which those who support a symbol-processing view of intelligent processing and those who favor a connectionist view have had difficulty seeing eye-to-eye. Symbol-processing theorists have persuasively argued that combinatorial syntax is necessary for certain intelligent mental operations, such as reasoning by analogy. Connectionists have focused on the versatility and adaptability offered by self-organizing networks of simple processing units. With this project, I show that there is a way to reconcile the two perspectives and to ascribe a combinatorial syntax to a connectionist network. The critical principle is to interpret nodes, or units, in the connectionist network as bound integrations of the interpretations for nodes that they share links with. Nodes need not correspond exactly to neurons and may correspond instead to distributed sets, or assemblies, of
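
    A minimal sketch in the spirit of the metric described above: convolve the difference of two membrane-potential recordings with a causal exponential kernel (as in van Rossum's spike-train metric) and take an L2 norm. The kernel choice and the toy traces are placeholder assumptions; the carefully shaped kernel and frequency-response properties described in the thesis are not reproduced.

      import numpy as np

      def convolution_distance(v1, v2, dt, tau=5e-3):
          """L2 distance between two membrane-potential traces after convolving
          their difference with a causal exponential kernel of unit area."""
          k = np.exp(-np.arange(0, 10 * tau, dt) / tau)
          k /= k.sum() * dt
          filtered = np.convolve(v1 - v2, k)[: len(v1)] * dt
          return np.sqrt(np.sum(filtered ** 2) * dt)

      # Toy traces: identical subthreshold drift, with one "spike" slightly shifted in time.
      dt = 1e-4
      t = np.arange(0, 0.2, dt)
      base = -65 + 2 * np.sin(2 * np.pi * 5 * t)
      v1 = base.copy(); v1[(t > 0.100) & (t < 0.102)] += 80.0
      v2 = base.copy(); v2[(t > 0.103) & (t < 0.105)] += 80.0
      print(convolution_distance(v1, v2, dt))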

  18. Development of pixel detectors for SSC vertex tracking

    SciTech Connect

    Kramer, G.; Atlas, E.L.; Augustine, F.; Barken, O.; Collins, T.; Marking, W.L.; Worley, S.; Yacoub, G.Y.; Shapiro, S.L.; Arens, J.F.; Jernigan, J.G.; Nygren,

    1991-04-01

    A description of hybrid PIN diode arrays and a readout architecture for their use as a vertex detector in the SSC environment is presented. Test results obtained with arrays having 256 × 256 pixels, each 30 μm square, are also presented. The development of a custom readout for the SSC will be discussed, which supports a mechanism for time stamping hit pixels, storing their xy coordinates, and storing the analog information within the pixel. The peripheral logic located on the array permits the selection of those pixels containing interesting data and the selective readout of their coordinates. This same logic also resolves ambiguous pixel ghost locations and controls the pixel-neighbor readout necessary to achieve high spatial resolution. The thermal design of the vertex tracker and the proposed signal processing architecture will also be discussed. 5 refs., 13 figs., 3 tabs.

  19. Pixel-level plasmonic microcavity infrared photodetector

    PubMed Central

    Jing, You Liang; Li, Zhi Feng; Li, Qian; Chen, Xiao Shuang; Chen, Ping Ping; Wang, Han; Li, Meng Yao; Li, Ning; Lu, Wei

    2016-01-01

    Recently, plasmonics has been central to the manipulation of photons on the subwavelength scale, and superior infrared imagers have opened novel applications in many fields. Here, we demonstrate the first pixel-level plasmonic microcavity infrared photodetector with a single quantum well integrated between metal patches and a reflection layer. Greater than one order of magnitude enhancement of the peak responsivity has been observed. The significant improvement originates from the highly confined optical mode in the cavity, leading to a strong coupling between photons and the quantum well, resulting in the enhanced photo-electric conversion process. Such strong coupling from the localized surface plasmon mode inside the cavity is independent of incident angles, offering a unique solution to high-performance focal plane array devices. This demonstration paves the way for important infrared optoelectronic devices for sensing and imaging. PMID:27181111

  20. Pixel-level plasmonic microcavity infrared photodetector

    NASA Astrophysics Data System (ADS)

    Jing, You Liang; Li, Zhi Feng; Li, Qian; Chen, Xiao Shuang; Chen, Ping Ping; Wang, Han; Li, Meng Yao; Li, Ning; Lu, Wei

    2016-05-01

    Recently, plasmonics has been central to the manipulation of photons on the subwavelength scale, and superior infrared imagers have opened novel applications in many fields. Here, we demonstrate the first pixel-level plasmonic microcavity infrared photodetector with a single quantum well integrated between metal patches and a reflection layer. Greater than one order of magnitude enhancement of the peak responsivity has been observed. The significant improvement originates from the highly confined optical mode in the cavity, leading to a strong coupling between photons and the quantum well, resulting in the enhanced photo-electric conversion process. Such strong coupling from the localized surface plasmon mode inside the cavity is independent of incident angles, offering a unique solution to high-performance focal plane array devices. This demonstration paves the way for important infrared optoelectronic devices for sensing and imaging.

  1. Distribution fitting-based pixel labeling for histology image segmentation

    NASA Astrophysics Data System (ADS)

    He, Lei; Long, L. Rodney; Antani, Sameer; Thoma, George

    2011-03-01

    This paper presents a new pixel labeling algorithm for complex histology image segmentation. For each image pixel, a Gaussian mixture model is applied to estimate its neighborhood intensity distributions. With this local distribution fitting, a set of pixels having a full set of source classes (e.g. nuclei, stroma, connective tissue, and background) in their neighborhoods are identified as the seeds for pixel labeling. A seed pixel is labeled by measuring its intensity distance to each of its neighborhood distributions, and the one with the shortest distance is selected to label the seed. For non-seed pixels, we propose two different labeling schemes: global voting and local clustering. In global voting each seed classifies a non-seed pixel into one of the seed's local distributions, i.e., it casts one vote; the final label for the non-seed pixel is the class which gets the most votes, across all the seeds. In local clustering, each non-seed pixel is labeled by one of its own neighborhood distributions. Because the local distributions in a non-seed pixel neighborhood do not necessarily correspond to distinct source classes (i.e., two or more local distributions may be produced by the same source class), we first identify the "true" source class of each local distribution by using the source classes of the seed pixels and a minimum distance criterion to determine the closest source class. The pixel can then be labeled as belonging to this class. With both labeling schemes, experiments on a set of uterine cervix histology images show encouraging performance of our algorithm when compared with traditional multithresholding and K-means clustering, as well as state-of-the-art mean shift clustering, multiphase active contours, and Markov random field-based algorithms.
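    A minimal sketch of the distribution-fitting step, assuming scikit-learn's GaussianMixture as the estimator (the paper's seed detection, global voting and local clustering are not reproduced here): fit a mixture to a pixel's neighborhood intensities and label the pixel by the closest component mean.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def label_by_local_mixture(image, row, col, n_classes=4, radius=7):
          # Fit a Gaussian mixture to the neighborhood intensities of (row, col)
          # and return the index of the component whose mean is closest.
          r0, c0 = max(0, row - radius), max(0, col - radius)
          patch = image[r0:row + radius + 1, c0:col + radius + 1]
          gmm = GaussianMixture(n_components=n_classes, random_state=0)
          gmm.fit(patch.reshape(-1, 1).astype(float))
          distances = np.abs(gmm.means_.ravel() - image[row, col])
          return int(np.argmin(distances)), gmm.means_.ravel()

      # Toy two-class image: dark left half, bright right half, plus noise.
      img = np.zeros((64, 64)); img[:, 32:] = 200.0
      img += np.random.default_rng(1).normal(0.0, 5.0, img.shape)
      print(label_by_local_mixture(img, 40, 40, n_classes=2))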

  2. Steganography on quantum pixel images using Shannon entropy

    NASA Astrophysics Data System (ADS)

    Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.

    2016-07-01

    This paper presents a steganographic algorithm based on the least significant bit (LSB) derived from the most significant bit information (MSBI) and on the equivalence of a bit pixel image to a quantum pixel image, which permits information to be hidden in quantum pixel images for secure transmission through insecure channels. This algorithm offers higher security since it exploits the Shannon entropy of an image.
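    The LSB principle referred to above is shown below on an ordinary uint8 image as a classical stand-in; the quantum-pixel-image version in the paper addresses the same bit plane on the basis states of the quantum representation, which is not modeled here.

      import numpy as np

      def embed_lsb(cover, bits):
          # Overwrite the least significant bit of the first len(bits) pixels.
          flat = cover.flatten()                      # copy of the cover image
          payload = np.asarray(bits, dtype=np.uint8)
          flat[:len(payload)] = (flat[:len(payload)] & 0xFE) | payload
          return flat.reshape(cover.shape)

      cover = np.full((4, 4), 200, dtype=np.uint8)
      stego = embed_lsb(cover, [1, 0, 1, 1])
      print(stego[0])   # [201 200 201 201]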

  3. Data encoding efficiency in pixel detector readout with charge information

    NASA Astrophysics Data System (ADS)

    Garcia-Sciveres, Maurice; Wang, Xinkang

    2016-04-01

    The average minimum number of bits needed for lossless readout of a pixel detector is calculated, in the regime of interest for particle physics where only a small fraction of pixels have a non-zero value per frame. This permits a systematic comparison of the readout efficiency of different encoding implementations. The calculation is compared to the number of bits used by the FE-I4 pixel readout chip of the ATLAS experiment.
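    A small numerical illustration of the kind of lower bound discussed above, under the simplifying assumptions that pixels fire independently with probability p and that each hit carries a fixed number of charge bits (the paper's calculation and the FE-I4 comparison are more detailed):

      import math

      def min_bits_per_frame(n_pixels, hit_probability, charge_bits=4):
          # Shannon bound: binary entropy for the hit pattern plus charge bits
          # for the expected number of hit pixels.
          p = hit_probability
          h = -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
          return n_pixels * (h + p * charge_bits)

      # Example: a 26880-pixel matrix read out with 0.1% per-frame occupancy.
      print(min_bits_per_frame(26880, 1e-3))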

  4. Fast Pixel Buffer For Processing With Lookup Tables

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.

    1992-01-01

    Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data are read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table is used to address those pixels in main image memory required for processing.

  5. Mapping Capacitive Coupling Among Pixels in a Sensor Array

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh; Cole, David M.; Smith, Roger M.

    2010-01-01

    An improved method of mapping the capacitive contribution to cross-talk among pixels in an imaging array of sensors (typically, an imaging photodetector array) has been devised for use in calibrating and/or characterizing such an array. The method involves a sequence of resets of subarrays of pixels to specified voltages and measurement of the voltage responses of neighboring non-reset pixels.

  6. Ultra-low power high-dynamic range color pixel embedding RGB to r-g chromaticity transformation

    NASA Astrophysics Data System (ADS)

    Lecca, Michela; Gasparini, Leonardo; Gottardi, Massimo

    2014-05-01

    This work describes a novel color pixel topology that converts the three chromatic components from the standard RGB space into the normalized r-g chromaticity space. This conversion is implemented with high dynamic range and no dc power consumption, and the auto-exposure capability of the sensor ensures the capture of a high-quality chromatic signal, even in the presence of very bright illuminants or in darkness. The pixel is intended to become the basic building block of a CMOS color vision sensor targeted at ultra-low power applications for mobile devices, such as human-machine interfaces, gesture recognition, and face detection. The experiments show significant improvements of the proposed pixel with respect to standard cameras in terms of energy saving and accuracy of data acquisition. An application to skin color-based description is presented.
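    A software reference for the RGB to normalized r-g chromaticity transformation that the pixel computes in the analog domain (value ranges and the handling of black pixels are choices made here for illustration):

      import numpy as np

      def rgb_to_rg_chromaticity(rgb):
          # r = R / (R + G + B), g = G / (R + G + B); b is redundant (1 - r - g).
          rgb = np.asarray(rgb, dtype=float)
          s = rgb.sum(axis=-1, keepdims=True)
          s[s == 0] = 1.0                 # avoid division by zero on black pixels
          return rgb[..., :2] / s

      print(rgb_to_rg_chromaticity([[200, 100, 50], [0, 0, 0]]))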

  7. Dead pixel correction techniques for dual-band infrared imagery

    NASA Astrophysics Data System (ADS)

    Nguyen, Chuong T.; Mould, Nick; Regens, James L.

    2015-07-01

    We present two new dead pixel correction algorithms for dual-band infrared imagery. Specifically, we address the problem of repairing unresponsive elements in the sensor array using signal processing techniques to overcome deficiencies in image quality that are present following the nonuniformity correction process. Traditionally, dead pixel correction has been performed almost exclusively using variations of the nearest neighbor technique, where the value of the dead pixel is estimated based on pixel values associated with the neighboring image structure. Our approach differs from existing techniques: for the first time, we estimate the values of dead pixels using information from both thermal bands collaboratively. The proposed dual-band statistical lookup (DSL) and dual-band inpainting (DIP) algorithms use intensity and local gradient information to estimate the values of dead pixels based on the values of unaffected pixels in the supplementary infrared band. The DSL algorithm is a regression technique that uses the image intensities from the reference band to estimate the dead pixel values in the band undergoing correction. The DIP algorithm is an energy minimization technique that uses the local image gradient from the reference band and the boundary values from the affected band to estimate the dead pixel values. We evaluate the effectiveness of the proposed algorithms with 50 dual-band videos. Simulation results indicate that the proposed techniques achieve perceptually and quantitatively superior results compared to existing methods.
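    The sketch below conveys the dual-band idea in its simplest form: a single global least-squares fit between the reference band and the affected band, used to predict the dead-pixel values. It is only a stand-in; the DSL algorithm of the paper works from intensity and local-gradient statistics, and DIP solves an energy minimization.

      import numpy as np

      def repair_dead_pixels(affected, reference, dead_mask):
          # Fit affected ~ a * reference + b over responsive pixels, then
          # predict the dead pixels from the co-registered reference band.
          good = ~dead_mask
          a, b = np.polyfit(reference[good].ravel(), affected[good].ravel(), deg=1)
          repaired = affected.astype(float).copy()
          repaired[dead_mask] = a * reference[dead_mask] + b
          return repaired

      # Toy example: two correlated bands with two dead elements in one band.
      rng = np.random.default_rng(2)
      ref = rng.uniform(0.0, 1.0, (32, 32))
      aff = 0.8 * ref + 0.1 + rng.normal(0.0, 0.01, ref.shape)
      dead = np.zeros_like(aff, dtype=bool); dead[5, 5] = dead[10, 20] = True
      aff[dead] = 0.0
      print(repair_dead_pixels(aff, ref, dead)[5, 5], 0.8 * ref[5, 5] + 0.1)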

  8. Hit efficiency study of CMS prototype forward pixel detectors

    SciTech Connect

    Kim, Dongwook; /Johns Hopkins U.

    2006-01-01

    In this paper the author describes the measurement of the hit efficiency of a prototype pixel device for the CMS forward pixel detector. These pixel detectors were FM type sensors with PSI46V1 chip readout. The data were taken with the 120 GeV proton beam at Fermilab during the period of December 2004 to February 2005. The detectors proved to be highly efficient (99.27 ± 0.02%). The inefficiency was primarily located near the corners of the individual pixels.

  9. A DC-DC converter based powering scheme for the upgrade of the CMS pixel detector

    NASA Astrophysics Data System (ADS)

    Feld, L.; Karpinski, W.; Klein, K.; Merz, J.; Sammet, J.; Wlochal, M.

    2011-11-01

    Around 2016, the pixel detector of the CMS experiment will be upgraded. The amount of current that has to be provided to the front-end electronics is expected to increase by a factor of two. Since the space available for cables is limited, this would imply unacceptable power losses in the currently installed supply cables. Therefore it is foreseen to place DC-DC converters close to the front-end electronics, allowing the provision of power at higher voltages, thereby facilitating the supply of the required currents with the present cable plant. This conference report introduces the foreseen powering scheme of the pixel upgrade. For the first time, system tests have been conducted with pixel barrel sensor modules, radiation tolerant DC-DC converters and the full power supply chain of the pixel detector. In addition, studies of the stability of different powering schemes under various conditions are summarized. In particular the impact of large and fast load variations, which are related to the bunch structure of the LHC beam, has been studied.

  10. An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; Bedin, Luigi

    2010-09-01

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide-Field Channel of the Advanced Camera for Surveys (ACS). The study is based on profiles of warm pixels in 168 dark exposures taken between 2009 September and October. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply this image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS.

  11. Evaluation of a photon-counting hybrid pixel detector array with a synchrotron X-ray source

    NASA Astrophysics Data System (ADS)

    Ponchut, C.; Visschers, J. L.; Fornaini, A.; Graafsma, H.; Maiorino, M.; Mettivier, G.; Calvet, D.

    2002-05-01

    A photon-counting hybrid pixel detector (Medipix-1) has been characterized using a synchrotron X-ray source. The detector consists of a readout ASIC with 64×64 independent photon-counting cells of 170×170 μm² pitch, bump-bonded to a 300 μm thick silicon sensor, read out by a PCIbus-based electronics, and a graphical user interface (GUI) software. The intensity and the energy tunability of the X-ray source allow characterization of the detector in the time, space, and energy domains. The system can be read out on external trigger at a frame rate of 100 Hz with 3 ms exposure time per frame. The detector response is tested up to more than 7×10⁵ detected events/pixel/s. The point-spread response shows <2% crosstalk between neighboring pixels. Fine scanning of the detector surface with a 10 μm beam reveals no loss in sensitivity between adjacent pixels as could result from charge sharing in the silicon sensor. Photons down to 6 keV can be detected after equalization of the thresholds of individual pixels. The obtained results demonstrate the advantages of photon-counting hybrid pixel detectors and particularly of the Medipix-1 chip for a wide range of X-ray imaging applications, including those using synchrotron X-ray beams.

  12. Evaluation of a single-pixel one-transistor active pixel sensor for fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Xu, Man; Ou, Hai; Chen, Jun; Wang, Kai

    2015-08-01

    Since it first appeared in the iPhone 5S in 2013, fingerprint identification (ID) has rapidly gained popularity among consumers. Current fingerprint-enabled smartphones universally rely on a discrete sensor to perform fingerprint ID. This architecture not only incurs higher material and manufacturing cost, but also provides only static identification and limited authentication. Hence, as the demand for thinner, lighter, and more secure handsets grows, we propose a novel pixel architecture in which a photosensitive device is embedded in a display pixel and detects the light reflected from the finger touch for high-resolution, high-fidelity and dynamic biometrics. For this purpose, an amorphous silicon (a-Si:H) dual-gate photo TFT working in both fingerprint-imaging mode and display-driving mode will be developed.

  13. Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling

    PubMed Central

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2010-01-01

    In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, it is required that the septa be as thin as possible, thus resulting in a significant amount of septal penetration for high energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques has been performed and included into a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table consists of the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other table presents the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations has been compared to experimental data for different radionuclides and collimators. Results of the RT technique match the experimental collimator response data very well, producing correlation coefficients higher than 0.995. Reasonable values of the parameters in the lookup table and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it
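    A toy version of the convolution step, with the distance-dependent collimator response modeled as a Gaussian whose width grows linearly with distance (the actual tables in the paper are ray-traced and include septal penetration; the numbers below are placeholders):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def response_sigma_mm(distance_mm, fwhm0_mm=4.0, slope=0.04):
          # Placeholder distance-dependent response width (FWHM -> sigma).
          return (fwhm0_mm + slope * distance_mm) / 2.355

      def cfd_project(activity_slices, distances_mm, pixel_mm=2.0):
          # Blur each activity slice with the response looked up for its
          # distance from the collimator, then sum into one projection.
          proj = np.zeros_like(activity_slices[0], dtype=float)
          for slab, d in zip(activity_slices, distances_mm):
              sigma_px = response_sigma_mm(d) / pixel_mm
              proj += gaussian_filter(slab.astype(float), sigma=sigma_px)
          return proj

      # Point source replicated at three distances from the collimator.
      slices = [np.zeros((64, 64)) for _ in range(3)]
      for s in slices:
          s[32, 32] = 1.0
      print(cfd_project(slices, distances_mm=[50.0, 100.0, 150.0]).max())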

  14. Design and characterization of high precision in-pixel discriminators for rolling shutter CMOS pixel sensors with full CMOS capability

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Hu-Guo, C.; Dorokhov, A.; Pham, H.; Hu, Y.

    2013-07-01

    In order to exploit the ability to integrate a charge collecting electrode with analog and digital processing circuitry down to the pixel level, a new type of CMOS pixel sensors with full CMOS capability is presented in this paper. The pixel array is read out based on a column-parallel read-out architecture, where each pixel incorporates a diode, a preamplifier with a double sampling circuitry and a discriminator to completely eliminate analog read-out bottlenecks. The sensor featuring a pixel array of 8 rows and 32 columns with a pixel pitch of 80 μm×16 μm was fabricated in a 0.18 μm CMOS process. The behavior of each pixel-level discriminator isolated from the diode and the preamplifier was studied. The experimental results indicate that all in-pixel discriminators which are fully operational can provide significant improvements in the read-out speed and the power consumption of CMOS pixel sensors.

  15. Weld Spot Detection by Color Segmentation and Template Convolution

    SciTech Connect

    Cambrini, Luigi; Biber, Juergen; Hoenigmann, Dieter; Loehndorf, Maike

    2007-12-26

    There is a need for non-destructive evaluation of the quality of steel spot welds. A computer-vision-based solution is presented that performs the analysis of the weld spot imprints left by the electrode on the protection bands. In this paper we propose two different methods to locate the position of the weld spot imprint as a first step in order to verify the quality of the welding process; both methods consist of two stages: (i) the use of the X channel of the XYZ color space as a proper representation, and (ii) the analysis of this image channel by employing specific algorithms.
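    A minimal sketch of the two stages named above, assuming a linear-RGB input and a disk-shaped template (the paper's segmentation and verification algorithms are not reproduced): take the X channel of CIE XYZ, correlate it with the template, and return the strongest response location.

      import numpy as np
      from scipy.signal import fftconvolve

      # X-channel row of the standard linear sRGB -> CIE XYZ matrix.
      X_ROW = np.array([0.4124, 0.3576, 0.1805])

      def locate_spot(rgb_image, radius=10):
          x_channel = rgb_image.astype(float) @ X_ROW
          yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          template = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
          template -= template.mean()                 # zero-mean disk template
          # Correlation implemented as convolution with the flipped template.
          response = fftconvolve(x_channel, template[::-1, ::-1], mode="same")
          return np.unravel_index(np.argmax(response), response.shape)

      # Toy image: one bright region on a dark background.
      img = np.zeros((100, 100, 3)); img[40:60, 60:80] = 255.0
      print(locate_spot(img))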

  16. An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; Bedin, Luigi R.

    2010-09-01

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide-Field Channel of the Advanced Camera for Surveys (ACS). The study is based on profiles of warm pixels in 168 dark exposures taken between 2009 September and October. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply this image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.

  17. 3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

  18. Hybrid Pixel Detectors for gamma/X-ray imaging

    NASA Astrophysics Data System (ADS)

    Hatzistratis, D.; Theodoratos, G.; Zografos, V.; Kazas, I.; Loukas, D.; Lambropoulos, C. P.

    2015-09-01

    Hybrid pixel detectors are made from direct-converting, high-Z, semi-insulating single-crystalline material coupled to complementary metal-oxide-semiconductor (CMOS) readout electronics. They are attractive because direct conversion eliminates the problems of spatial localization related to light diffusion, the energy resolution is far superior to that of the combination of scintillation crystals and photomultipliers, and lithography can be used to pattern electrodes with very fine pitch. We are developing 2-D pixel CMOS ASICs, connecting them to pixelated CdTe crystals with the flip-chip and bump-bonding method, and characterizing the hybrids. We have designed a series of circuits, whose latest member consists of a 50×25 pixel array with 400 μm pitch and an embedded controller. In every pixel a full spectroscopic channel with time-tagging information has been implemented. The detectors are targeting Compton scatter imaging, and they can also be used for coded aperture imaging. Hybridization using CMOS can overcome the limit placed on pixel circuit complexity by the use of thin-film transistors (TFT) in large flat panels. Hybrid active pixel sensors are used in dental imaging and other applications (e.g. industrial CT). Thus X-ray imaging can benefit from the work done on dynamic range enhancement methods developed initially for visible and infrared CMOS pixel sensors. A 2-D CMOS ASIC with 100 μm pixel pitch has been designed to demonstrate the feasibility of such methods in the context of X-ray imaging.

  19. Novel integrated CMOS pixel structures for vertex detectors

    SciTech Connect

    Kleinfelder, Stuart; Bieser, Fred; Chen, Yandong; Gareus, Robin; Matis, Howard S.; Oldenburg, Markus; Retiere, Fabrice; Ritter, Hans Georg; Wieman, Howard H.; Yamamoto, Eugene

    2003-10-29

    Novel CMOS active pixel structures for vertex detector applications have been designed and tested. The overriding goal of this work is to increase the signal to noise ratio of the sensors and readout circuits. A large-area native epitaxial silicon photogate was designed with the aim of increasing the charge collected per struck pixel and to reduce charge diffusion to neighboring pixels. The photogate then transfers the charge to a low capacitance readout node to maintain a high charge to voltage conversion gain. Two techniques for noise reduction are also presented. The first is a per-pixel kT/C noise reduction circuit that produces results similar to traditional correlated double sampling (CDS). It has the advantage of requiring only one read, as compared to two for CDS, and no external storage or subtraction is needed. The technique reduced input-referred temporal noise by a factor of 2.5, to 12.8 e⁻. Finally, a column-level active reset technique is explored that suppresses kT/C noise during pixel reset. In tests, noise was reduced by a factor of 7.6 times, to an estimated 5.1 e⁻ input-referred noise. The technique also dramatically reduces fixed pattern (pedestal) noise, by up to a factor of 21 in our tests. The latter feature may possibly reduce pixel-by-pixel pedestal differences to levels low enough to permit sparse data scan without per-pixel offset corrections.

  20. Development of a pixel readout chip for BTeV

    SciTech Connect

    D.C. Christian et al.

    1998-11-01

    A description is given of the R&D program underway at Fermilab to develop a pixel readout ASIC appropriate for use at the Tevatron collider. Results are presented from tests performed on the first prototype pixel readout chip designed at Fermilab, and a new readout architecture is described.

  1. Method for hyperspectral imagery exploitation and pixel spectral unmixing

    NASA Technical Reports Server (NTRS)

    Lin, Ching-Fang (Inventor)

    2003-01-01

    An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point of the genetic algorithm speeds up its evolution. After obtaining the accurate abundance estimate, the procedure moves to the next pixel and uses the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel with the robust filter. The genetic algorithm is then used again to refine the abundance estimate efficiently based on the robust filter solution. This iteration continues until all pixels in the hyperspectral image cube have been processed.

  2. Singlet mega-pixel resolution lens

    NASA Astrophysics Data System (ADS)

    Lin, Chen-Hung; Lin, Hoang Yan; Chang, Horng

    2008-03-01

    New challenges continually arise for lens designers seeking to keep a long-established technology up to date; minimizing lens volume is one of the most notable examples. In this paper we designed a single thick lens, constructed by using one oblique (reflective) surface in addition to two conventional refractive surfaces, to bend the optical path of the system and achieve this goal. The detailed design procedure, including the system layout and lens performance diagrams, is presented. Following the first-order layout, we applied aspherical forms to the two refractive surfaces in order to correct the spherical aberration to an acceptable level. The remaining aberrations, such as coma, astigmatism, field curvature and distortion, can then easily be corrected with calculations related to spherical aberration, as shown in the publication of H. H. Hopkins (1950). Plastic material is used in the design, because the aspherical surfaces can then be manufactured in a more cost-effective way. The final specification of the design is: the EFL is 4.6 mm, the F-number is 2.8, the overall thickness of the lens is 3.6 mm, the MTF is 0.3 at 227 lp/mm in the center field, and the chief ray angle is less than 15 degrees. Lens data as well as optical performance curves are also presented in the paper. In conclusion, we have successfully completed a mega-pixel resolution lens design whose overall thickness is compatible with the state of the art.

  3. A 400 KHz line rate 2048-pixel stitched SWIR linear array

    NASA Astrophysics Data System (ADS)

    Anchlia, Ankur; Vinella, Rosa M.; Gielen, Daphne; Wouters, Kristof; Vervenne, Vincent; Hooylaerts, Peter; Deroo, Pieter; Ruythooren, Wouter; De Gaspari, Danny; Das, Jo; Merken, Patrick

    2016-05-01

    Xenics has developed a family of stitched SWIR long linear arrays that operate at line rates of up to 400 kHz. These arrays serve medical and industrial applications that require high line rates, as well as space applications that require long linear arrays. The arrays are based on a modular ROIC design concept: modules of 512 pixels are stitched during fabrication to achieve 512-, 1024- and 2048-pixel arrays. Each 512-pixel module has its own on-chip digital sequencer, analog readout chain and 4 output buffers. This modular concept enables a long array to run at a high line rate irrespective of the array length, which limits the line rate in a traditional linear array. The ROIC is flip-chipped with InGaAs detector arrays. The FPA has a pixel pitch of 12.5 μm and two pixel flavors: square (12.5 μm) and rectangular (250 μm). The front-end circuit is based on a Capacitive Trans-impedance Amplifier (CTIA) to attain a stable detector bias, good linearity and signal integrity, especially at high speeds. The CTIA has an input auto-zero mechanism that allows a low detector bias (<20 mV). On-chip Correlated Double Sampling (CDS) facilitates the removal of CTIA kTC and 1/f noise, and other offsets, achieving low-noise performance. There are five gain modes in the FPA, giving a full-well range from 85 ke⁻ to 40 Me⁻. The measured input-referred noise is 35 e⁻ rms in the highest gain mode. The FPA operates in Integrate While Read mode and, at a master clock rate of 60 MHz and a minimum integration time of 1.4 μs, achieves its highest line rate of 400 kHz. In this paper, design details and measurement results are presented in order to demonstrate the array performance.

  4. Status of the CMS Phase I pixel detector upgrade

    NASA Astrophysics Data System (ADS)

    Spannagel, S.

    2016-09-01

    A new pixel detector for the CMS experiment is being built, owing to the instantaneous luminosities anticipated for the Phase I Upgrade of the LHC. The new CMS pixel detector provides four-hit tracking while featuring a significantly reduced material budget as well as new cooling and powering schemes. A new front-end readout chip mitigates buffering and bandwidth limitations, and comprises a low-threshold comparator. These improvements allow the new pixel detector to sustain and improve the efficiency of the current pixel tracker at the increased requirements imposed by high luminosities and pile-up. This contribution gives an overview of the design of the upgraded pixel detector and the status of the upgrade project, and presents test beam performance measurements of the production read-out chip.

  5. DC-DC powering for the CMS pixel upgrade

    NASA Astrophysics Data System (ADS)

    Feld, Lutz; Fleck, Martin; Friedrichs, Marcel; Hensch, Richard; Karpinski, Waclaw; Klein, Katja; Rittich, David; Sammet, Jan; Wlochal, Michael

    2013-12-01

    The CMS experiment plans to replace its silicon pixel detector with a new one with improved rate capability and an additional detection layer at the end of 2016. In order to cope with the increased number of detector modules the new pixel detector will be powered via DC-DC converters close to the sensitive detector volume. This paper reviews the DC-DC powering scheme and reports on the ongoing R&D program to develop converters for the pixel upgrade. Design choices are discussed and results from the electrical and thermal characterisation of converter prototypes are shown. An emphasis is put on system tests with up to 24 converters. The performance of pixel modules powered by DC-DC converters is compared to conventional powering. The integration of the DC-DC powering scheme into the pixel detector is described and system design issues are reviewed.

  6. Attenuating Stereo Pixel-Locking via Affine Window Adaptation

    NASA Technical Reports Server (NTRS)

    Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.

    2006-01-01

    For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially-peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it has the ability to correct much larger initial disparity errors than previous approaches and is more general as it applies not only to the ground plane.
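    For reference, the standard parabola-fitting estimator that the paper identifies as the source of pixel-locking (the proposed Lucas-Kanade-style refinement is not reproduced here) looks like this:

      import numpy as np

      def subpixel_parabola(cost, d):
          # Fit a parabola through the matching cost at d-1, d, d+1 and return
          # the disparity of its minimum; this is the estimator that produces
          # the pixel-locking artefact discussed above.
          c_m, c_0, c_p = cost[d - 1], cost[d], cost[d + 1]
          denom = c_m - 2.0 * c_0 + c_p
          return float(d) if denom == 0 else d + 0.5 * (c_m - c_p) / denom

      # Toy cost curve with its true minimum at disparity 7.3.
      costs = np.array([(i - 7.3) ** 2 for i in range(16)])
      d_int = int(np.argmin(costs))
      print(subpixel_parabola(costs, d_int))   # ~7.3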

  7. Readout of TPC Tracking Chambers with GEMs and Pixel Chip

    SciTech Connect

    Kadyk, John; Kim, T.; Freytsis, M.; Button-Shafer, J.; Kadyk, J.; Vahsen, S.E.; Wenzel, W.A.

    2007-12-21

    Two layers of GEMs and the ATLAS Pixel Chip, FEI3, have been combined and tested as a prototype for Time Projection Chamber (TPC) readout at the International Linear Collider (ILC). The double-layer GEM system amplifies charge with gain sufficient to detect all track ionization. The suitability of three gas mixtures for this application was investigated, and gain measurements are presented. A large sample of cosmic ray tracks was reconstructed in 3D by using the simultaneous timing and 2D spatial information from the pixel chip. The chip provides pixel charge measurement as well as timing. These results demonstrate that a double GEM and pixel combination, with a suitably modified pixel ASIC, could meet the stringent readout requirements of the ILC.

  8. Using an Active Pixel Sensor In A Vertex Detector

    SciTech Connect

    Matis, Howard S.; Bieser, Fred; Chen, Yandong; Gareus, Robin; Kleinfelder, Stuart; Oldenburg, Markus; Retiere, Fabrice; Ritter, HansGeorg; Wieman, Howard H.; Wurzel, Samuel E.; Yamamoto, Eugene

    2004-04-22

    Research has shown that Active Pixel CMOS sensors can detect charged particles. We have been studying whether this process can be used in a collider environment. In particular, we studied the effect of radiation with 55 MeV protons. These results show that a fluence of about 2 × 10¹² protons/cm² reduces the signal by a factor of two while the noise increases by 25%. A measurement 6 months after exposure shows that the silicon lattice naturally repairs itself. Heating the silicon to 100 °C reduced the shot noise and increased the collected charge. CMOS sensors have a reduced signal to noise ratio per pixel because charge diffuses to neighboring pixels. We have constructed a photogate to see if this structure can collect more charge per pixel. Results show that a photogate does collect charge in fewer pixels, but it takes about 15 ms to collect all of the electrons produced by a pulse of light.

  9. Detector apparatus having a hybrid pixel-waveform readout system

    DOEpatents

    Meng, Ling-Jian

    2014-10-21

    A gamma ray detector apparatus comprises a solid state detector that includes a plurality of anode pixels and at least one cathode. The solid state detector is configured for receiving gamma rays during an interaction and inducing a signal in an anode pixel and in a cathode. An anode pixel readout circuit is coupled to the plurality of anode pixels and is configured to read out and process the induced signal in the anode pixel and provide triggering and addressing information. A waveform sampling circuit is coupled to the at least one cathode and configured to read out and process the induced signal in the cathode and determine energy of the interaction, timing of the interaction, and depth of interaction.

  10. Single photon counting pixel detectors for synchrotron radiation experiments

    NASA Astrophysics Data System (ADS)

    Toyokawa, H.; Broennimann, Ch.; Eikenberry, E. F.; Henrich, B.; Kawase, M.; Kobas, M.; Kraft, P.; Sato, M.; Schmitt, B.; Suzuki, M.; Tanida, H.; Uruga, T.

    2010-11-01

    At the Paul Scherrer Institute (PSI), an X-ray single-photon-counting pixel detector (PILATUS) based on hybrid-pixel detector technology was developed in collaboration with SPring-8. The detection element is a 320 or 450 μm thick silicon sensor forming pixelated pn-diodes with a pitch of 172 μm×172 μm. An array of 2×8 custom CMOS readout chips is indium bump-bonded to the sensor, which leads to a 33.5 mm×83.8 mm detective area. Each pixel contains a charge-sensitive amplifier, a single-level discriminator and a 20-bit counter. This design realizes a high dynamic range, a short readout time of less than 3 ms, a high framing rate of over 200 images per second and an excellent point-spread function. The maximum counting rate exceeds 2×10⁶ X-rays/s/pixel.

  11. Field-portable pixel super-resolution colour microscope.

    PubMed

    Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan

    2013-01-01

    Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm². This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate 'rainbow' like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from the brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope, Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings.
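    The brightness/colour separation the colorization step relies on is the ordinary RGB-to-YUV transform; a reference implementation is sketched below, assuming BT.601 weights (the paper does not state which YUV convention it uses).

      import numpy as np

      # BT.601 RGB -> YUV: Y carries brightness, U and V carry chrominance.
      RGB_TO_YUV = np.array([[ 0.299,  0.587,  0.114],
                             [-0.147, -0.289,  0.436],
                             [ 0.615, -0.515, -0.100]])

      def rgb_to_yuv(rgb):
          return np.asarray(rgb, dtype=float) @ RGB_TO_YUV.T

      print(rgb_to_yuv([[255, 0, 0], [128, 128, 128]]))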

  12. Generalized approach to inverse problems in tomography: Image reconstruction for spatially variant systems using natural pixels

    SciTech Connect

    Baker, J.R.; Budinger, T.F.; Huesman, R.H.

    1992-10-01

    A major limitation in tomographic inverse problems is inadequate computation speed, which frequently impedes the application of engineering ideas and principles in medical science more than in the physical and engineering sciences. Medical problems are computationally taxing because a minimum description of the system often involves 5 dimensions (3 space, 1 energy, 1 time), with the range of each space coordinate requiring up to 512 samples. The computational tasks for this problem can be simply expressed by posing the problem as one in which the tomograph system response function is spatially invariant, and the noise is additive and Gaussian. Under these assumptions, a number of reconstruction methods have been implemented with generally satisfactory results for general medical imaging purposes. However, if the system response function of the tomograph is assumed more realistically to be spatially variant and the noise to be Poisson, the computational problem becomes much more difficult. Some of the algorithms being studied to compensate for position dependent resolution and statistical fluctuations in the data acquisition process, when expressed in canonical form, are not practical for clinical applications because the number of computations necessary exceeds the capabilities of high performance computer systems currently available. Reconstruction methods based on natural pixels, specifically orthonormal natural pixels, preserve symmetries in the data acquisition process. Fast implementations of orthonormal natural pixel algorithms can achieve orders of magnitude speedup relative to general implementations. Thus, specialized thought in algorithm development can lead to more significant increases in performance than can be achieved through hardware improvements alone.

  13. Recent results of the ATLAS upgrade planar pixel sensors R&D project

    NASA Astrophysics Data System (ADS)

    Weigell, Philipp

    2013-12-01

    To extend the physics reach of the LHC experiments, several upgrades to the accelerator complex are planned, culminating in the HL-LHC, which eventually leads to an increase of the peak luminosity by a factor of five to ten compared to the LHC design value. To cope with the higher occupancy and radiation damage, the LHC experiments will also be upgraded. The ATLAS Planar Pixel Sensor R&D Project is an international collaboration of 17 institutions and more than 80 scientists, exploring the feasibility of employing planar pixel sensors for this scenario. Depending on the radius, different pixel concepts are investigated using laboratory and beam test measurements. At small radii the extreme radiation environment and strong space constraints are addressed with very thin pixel sensors with an active thickness in the range of (75-150) μm, and the development of slim as well as active edges. At larger radii the main challenge is the cost reduction to allow for instrumenting the large area of (7-10) m². To reach this goal the pixel productions are being transferred to 6-inch production lines and more cost-efficient and industrialised interconnection techniques are investigated. Additionally, the n-in-p technology is employed, which requires fewer production steps since it relies on a single-sided process. An overview of the recent accomplishments obtained within the ATLAS Planar Pixel Sensor R&D Project is given. The performance in terms of charge collection and tracking efficiency, obtained with radioactive sources in the laboratory and at beam tests, is presented for devices built from sensors of different vendors connected to either the present ATLAS read-out chip FE-I3 or the new Insertable B-Layer read-out chip FE-I4. The devices, with a thickness varying between 75 μm and 300 μm, were irradiated to several fluences up to 2×10¹⁶ neq/cm². Finally, the different approaches followed inside the collaboration to achieve slim or active edges for planar pixel sensors are presented.

  14. Two-level pipelined systolic array for multi-dimensional convolution

    SciTech Connect

    Kung, H.T.; Ruane, L.M.; Yen, D.W.L.

    1982-11-01

    This paper describes a systolic array for the computation of n-dimensional (n-D) convolutions of any positive integer n. Systolic systems usually achieve high performance by allowing computations to be pipelined over a large array of processing elements. To achieve even higher performance, the systolic array of this paper utilizes a second level of pipelining by allowing the processing elements themselves to be pipelined to an arbitrary degree. Moreover, it is shown that as far as orders of magnitude are concerned, the total amount of memory required by the systolic array is no more than that needed by any convolution device that reads in each input data item only once. Thus if only schemes that use the minimum-possible I/O are considered, the systolic array is not only high performance, but also optimal in terms of the amount of required memory.
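    A cycle-level software model of the dataflow only, for a 1-D kernel (the paper's design additionally pipelines the processing elements themselves, the second level of pipelining, and generalizes to n dimensions): each PE holds one weight, the current input sample is broadcast, and partial sums shift one position per cycle until they leave the array.

      import numpy as np

      def systolic_conv1d(x, w):
          K = len(w)
          partial = [0.0] * K          # partial[j] completes j cycles from now
          out = []
          for sample in x:
              for k in range(K):       # every PE multiplies the broadcast sample
                  partial[k] += w[k] * sample
              out.append(partial.pop(0))   # finished output leaves the array
              partial.append(0.0)          # fresh accumulator enters at the end
          return np.array(out)

      x = np.array([1.0, 2.0, 3.0, 4.0])
      w = np.array([0.5, 0.25, 0.25])
      print(systolic_conv1d(x, w))
      print(np.convolve(x, w)[:len(x)])    # same causal prefix, for comparison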

  15. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computations overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.

  16. Automatic detection of cell divisions (mitosis) in live-imaging microscopy images using Convolutional Neural Networks.

    PubMed

    Shkolyar, Anat; Gefen, Amit; Benayahu, Dafna; Greenspan, Hayit

    2015-08-01

    We propose a semi-automated pipeline for the detection of possible cell divisions in live-imaging microscopy and the classification of these mitosis candidates using a Convolutional Neural Network (CNN). We use time-lapse images of NIH3T3 scratch assay cultures, extract patches around bright candidate regions that then undergo segmentation and binarization, followed by a classification of the binary patches into either containing or not containing cell division. The classification is performed by training a Convolutional Neural Network on a specially constructed database. We show strong results of AUC = 0.91 and F-score = 0.89, competitive with state-of-the-art methods in this field. PMID:26736369
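    A minimal PyTorch sketch of a patch classifier of the kind described above; the patch size (32×32 binary patches) and layer sizes are assumptions, not the architecture published in the paper.

      import torch
      import torch.nn as nn

      class MitosisPatchCNN(nn.Module):
          # Two conv/pool stages followed by a small fully connected head that
          # outputs logits for "division" vs. "no division".
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.head = nn.Sequential(
                  nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 2)
              )

          def forward(self, x):
              return self.head(self.features(x))

      # Forward pass on a batch of four random binary 32x32 patches.
      patches = (torch.rand(4, 1, 32, 32) > 0.5).float()
      print(MitosisPatchCNN()(patches).shape)   # torch.Size([4, 2])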

  17. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    PubMed

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results across various configurations of deep learning structures and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared to other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in grading performance for Convolutional Neural Networks, based on sensitivity and specificity, compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from Convolutional Neural Networks. PMID:26736358

  18. Automatic detection of cell divisions (mitosis) in live-imaging microscopy images using Convolutional Neural Networks.

    PubMed

    Shkolyar, Anat; Gefen, Amit; Benayahu, Dafna; Greenspan, Hayit

    2015-08-01

    We propose a semi-automated pipeline for the detection of possible cell divisions in live-imaging microscopy and the classification of these mitosis candidates using a Convolutional Neural Network (CNN). We use time-lapse images of NIH3T3 scratch assay cultures, extract patches around bright candidate regions that then undergo segmentation and binarization, followed by a classification of the binary patches into either containing or not containing cell division. The classification is performed by training a Convolutional Neural Network on a specially constructed database. We show strong results of AUC = 0.91 and F-score = 0.89, competitive with state-of-the-art methods in this field.

  19. Video-based convolutional neural networks for activity recognition from robot-centric videos

    NASA Astrophysics Data System (ADS)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks to learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these different representative CNNs using first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.

  20. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    PubMed

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results across various configurations of deep learning structures and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared to other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in grading performance for Convolutional Neural Networks, based on sensitivity and specificity, compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from Convolutional Neural Networks.

  1. Convolution theorem for the three-dimensional entangled fractional Fourier transformation deduced from the tripartite entangled state representation

    NASA Astrophysics Data System (ADS)

    Liu, Shu-Guang; Fan, Hong-Yi

    2009-12-01

    We find that constructing the two mutually-conjugate tripartite entangled state representations naturally leads to the entangled Fourier transformation. We then derive the convolution theorem for the three-dimensional entangled fractional Fourier transformation in the context of quantum mechanics.

  2. Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition

    NASA Astrophysics Data System (ADS)

    Popko, E. A.; Weinstein, I. A.

    2016-08-01

    Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on the modeling of a convolutional neural network. An integrated fuzzy logic module based on a structural approach was developed. The adopted system architecture adjusted the output of the neural network to improve the quality of symbol identification. It was shown that the proposed algorithm is flexible, and a high recognition rate of 99.23% was achieved.

  3. Hardware accelerator of convolution with exponential function for image processing applications

    NASA Astrophysics Data System (ADS)

    Panchenko, Ivan; Bucha, Victor

    2015-12-01

    In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have adapted the RTL implementation of this filter to provide maximum throughput within the constraints of the required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
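    The recursion being accelerated is, in essence, a first-order IIR approximation of an exponential kernel; a floating-point reference is sketched below (the HWA works in fixed point and chooses its coefficients differently):

      import numpy as np

      def exp_filter_1d(x, alpha):
          # Causal pass y[n] = alpha*x[n] + (1-alpha)*y[n-1], then the same
          # recursion run anti-causally, approximating a symmetric exponential
          # kernel in O(N) regardless of the kernel width.
          y = np.empty(len(x), dtype=float)
          acc = float(x[0])
          for n, v in enumerate(x):
              acc = alpha * v + (1.0 - alpha) * acc
              y[n] = acc
          acc = y[-1]
          for n in range(len(x) - 1, -1, -1):
              acc = alpha * y[n] + (1.0 - alpha) * acc
              y[n] = acc
          return y

      def exp_filter_2d(img, alpha=0.2):
          # Separable filtering: rows first, then columns.
          rows = np.apply_along_axis(exp_filter_1d, 1, img, alpha)
          return np.apply_along_axis(exp_filter_1d, 0, rows, alpha)

      print(exp_filter_2d(np.eye(5)).round(3))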

  4. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. P.; Dixon, R. L.; Samei, Ehsan

    2015-03-01

    Among the various metrics that quantify radiation dose in computed tomography (CT), organ dose is one of the most representative quantities reflecting patient-specific radiation burden [1]. Accurate estimation of organ dose requires one to effectively model the patient anatomy and the irradiation field. As illustrated in previous studies, the patient anatomy factor can be modeled using a library of computational phantoms with representative body habitus [2]. However, the modeling of irradiation field can be practically challenging, especially for CT exams performed with tube current modulation. The central challenge is to effectively quantify the scatter irradiation field created by the dynamic change of tube current. In this study, we present a convolution-based technique to effectively quantify the primary and scatter irradiation field for TCM examinations. The organ dose for a given clinical patient can then be rapidly determined using the convolution-based method, a patient-matching technique, and a library of computational phantoms. 58 adult patients were included in this study (age range: 18-70 y.o., weight range: 60-180 kg). One computational phantom was created based on the clinical images of each patient. Each patient was optimally matched against one of the remaining 57 computational phantoms using a leave-one-out strategy. For each computational phantom, the organ dose coefficients (CTDIvol-normalized organ dose) under fixed tube current were simulated using a validated Monte Carlo simulation program. Such organ dose coefficients were multiplied by a scaling factor, (CTDIvol)organ,convolution, that quantifies the regional irradiation field. The convolution-based organ dose was compared with the organ dose simulated from Monte Carlo program with TCM profiles explicitly modeled on the original phantom created based on patient images. The estimation error was within 10% across all organs and modulation profiles for abdominopelvic examination. This strategy

  5. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    PubMed

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches. PMID:26660697

  6. Quantum Fields Obtained from Convoluted Generalized White Noise Never Have Positive Metric

    NASA Astrophysics Data System (ADS)

    Albeverio, Sergio; Gottschalk, Hanno

    2016-05-01

    It is proven that the relativistic quantum fields obtained from analytic continuation of convoluted generalized (Lévy type) noise fields have positive metric, if and only if the noise is Gaussian. This follows as an easy observation from a criterion by Baumann, based on the Dell'Antonio-Robinson-Greenberg theorem, for a relativistic quantum field in positive metric to be a free field.

  8. Robust Matching of Wavelet Features for Sub-Pixel Registration of Landsat Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Netanyahu, Nathan S.; Masek, Jeffrey G.; Mount, David M.; Goward, Samuel; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    For many Earth and Space Science applications, automatic geo-registration at sub-pixel accuracy has become a necessity. In this work, we are focusing on building an operational system, which will provide a sub-pixel accuracy registration of Landsat-5 and Landsat-7 data. The input to our registration method consists of scenes that have been geometrically and radiometrically corrected. Such pre-processed scenes are then geo-registered relative to a database of Landsat chips. The method assumes a transformation composed of a rotation and a translation, and utilizes rotation- and translation-invariant wavelets to extract image features that are matched using statistically robust feature matching and a generalized Hausdorff distance metric. The registration process is described and results on four Landsat input scenes of the Washington, D.C. area are presented.
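
    A minimal sketch of the robust matching score used in this kind of registration is given below: the partial (generalized) Hausdorff distance between two feature point sets, which ignores a fraction of the worst matches. The function names are illustrative; the operational system additionally searches over rotations and translations before scoring.

      import numpy as np

      def partial_hausdorff(A, B, frac=0.8):
          """frac-quantile of nearest-neighbour distances from points in A to B;
          robust to a (1 - frac) fraction of outlier features."""
          dists = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2))
          nearest = dists.min(axis=1)            # each point in A to its closest in B
          k = max(int(frac * len(nearest)) - 1, 0)
          return np.sort(nearest)[k]             # ignore the worst matches

      def match_score(A, B, frac=0.8):
          # Symmetric version: the larger of the two directed distances.
          return max(partial_hausdorff(A, B, frac), partial_hausdorff(B, A, frac))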

  9. Pixellated Cd(Zn)Te high-energy X-ray instrument

    NASA Astrophysics Data System (ADS)

    Seller, P.; Bell, S.; Cernik, R. J.; Christodoulou, C.; Egan, C. K.; Gaskin, J. A.; Jacques, S.; Pani, S.; Ramsey, B. D.; Reid, C.; Sellin, P. J.; Scuffham, J. W.; Speller, R. D.; Wilson, M. D.; Veale, M. C.

    2011-12-01

    We have developed a pixellated high energy X-ray detector instrument to be used in a variety of imaging applications. The instrument consists of either a Cadmium Zinc Telluride or Cadmium Telluride (Cd(Zn)Te) detector bump-bonded to a large area ASIC and packaged with a high performance data acquisition system. The 80 by 80 pixels, each of 250 μm by 250 μm, give better than 1 keV FWHM energy resolution at 59.5 keV and 1.5 keV FWHM at 141 keV, while at the same time providing high-speed imaging performance. This system uses a relatively simple wire-bonded interconnection scheme, but this is being upgraded to allow multiple modules to be used with very small dead space. The readout system and the novel interconnect technology are described, along with how the system performs in several target applications.

  10. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.

    1982-01-01

    Practical methods of computer screening of cloud-contaminated pixels from the data of various satellite systems are proposed. Examples are given of the location of clouds and representative landscape features in HCMM spectral space of reflectance (VIS) vs. emission (IR). Methods of screening out cloud-affected HCMM data are discussed. The character of subvisible absorbing-emitting atmospheric layers (subvisible cirrus, or SCi) in HCMM data is considered, and radiosonde soundings are examined in relation to the presence of SCi. The statistical characteristics of multispectral meteorological satellite data in clear and SCi-affected areas are discussed. Examples in TIROS-N and NOAA-7 data from several states and Mexico are presented. The VIS-IR cluster screening method for removing clouds is applied to a 262,144-pixel HCMM scene from south Texas and northeast Mexico. The SCi that remain after cluster screening are sifted out by applying a statistically determined IR limit.
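
    The two-stage screening idea reads naturally as a pair of per-pixel tests; the sketch below is a hypothetical illustration (the thresholds in the study are derived from cluster analysis and scene statistics, not the fixed values used here).

      import numpy as np

      def screen_pixels(vis, ir, vis_cloud=0.30, k_sigma=2.0):
          """vis: reflectance band, ir: emissive band, same shape.
          Returns a boolean mask of pixels kept as clear."""
          # Stage 1: bright-and-cold pixels in VIS-IR space are flagged as cloud.
          cloud = (vis > vis_cloud) & (ir < np.nanmean(ir))
          clear = ~cloud
          # Stage 2: remaining subvisible-cirrus pixels are cooler than the clear
          # background, so screen them with a statistically determined IR limit.
          ir_limit = np.nanmean(ir[clear]) - k_sigma * np.nanstd(ir[clear])
          return clear & (ir > ir_limit)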

  11. Pixellated Cd(Zn)Te high-energy X-ray instrument

    PubMed Central

    Seller, P.; Bell, S.; Cernik, R.J.; Christodoulou, C.; Egan, C.K.; Gaskin, J.A.; Jacques, S.; Pani, S.; Ramsey, B.D.; Reid, C.; Sellin, P.J.; Scuffham, J.W.; Speller, R.D.; Wilson, M.D.; Veale, M.C.

    2012-01-01

    We have developed a pixellated high energy X-ray detector instrument to be used in a variety of imaging applications. The instrument consists of either a Cadmium Zinc Telluride or Cadmium Telluride (Cd(Zn)Te) detector bump-bonded to a large area ASIC and packaged with a high performance data acquisition system. The 80 by 80 pixels, each of 250 μm by 250 μm, give better than 1 keV FWHM energy resolution at 59.5 keV and 1.5 keV FWHM at 141 keV, while at the same time providing high-speed imaging performance. This system uses a relatively simple wire-bonded interconnection scheme, but this is being upgraded to allow multiple modules to be used with very small dead space. The readout system and the novel interconnect technology are described, along with how the system performs in several target applications. PMID:22737179

  12. Getting small: new 10μm pixel pitch cooled infrared products

    NASA Astrophysics Data System (ADS)

    Reibel, Y.; Pere-Laperne, N.; Augey, T.; Rubaldo, L.; Decaens, G.; Bourqui, M.-L.; Manissadjian, A.; Billon-Lanfrey, D.; Bisotto, S.; Gravrand, O.; Destefanis, G.; Druart, G.; Guerineau, N.

    2014-06-01

    Recent advances in the miniaturization of IR imaging technology have led to a burgeoning market for mini thermal-imaging sensors. Seen in this context, our development of smaller pixel pitches has opened the door to very compact products. When this competitive advantage is combined with smaller coolers, thanks to HOT technology, we achieve valuable reductions in the size, weight and power of the overall package. At the same time, we are moving towards a global offering based on digital interfaces that provides our customers with lower power consumption and a simpler IR system design process while freeing up more space. Additionally, we are investigating new wafer-level camera solutions that take advantage of progress in micro-optics. This paper discusses recent developments in HOT and small pixel pitch technologies, as well as the compact packaging solutions developed by SOFRADIR in collaboration with CEA-LETI and ONERA.

  13. Development of a novel pixel-level signal processing chain for fast readout 3D integrated CMOS pixel sensors

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Torheim, O.; Hu-Guo, C.; Degerli, Y.; Hu, Y.

    2013-03-01

    In order to resolve the inherent readout speed limitation of traditional 2D CMOS pixel sensors, operated in rolling shutter readout, a parallel readout architecture has been developed by taking advantage of 3D integration technologies. Since the rows of the pixel array are zero-suppressed simultaneously instead of sequentially, a frame readout time of a few microseconds is expected for coping with high hit rates foreseen in future collider experiments. In order to demonstrate the pixel readout functionality of such a pixel sensor, a 2D proof-of-concept chip including a novel pixel-level signal processing chain was designed and fabricated in a 0.13 μm CMOS technology. The functionalities of this chip have been verified through experimental characterization.

  14. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification

    PubMed Central

    Yang, Xinyi

    2016-01-01

    In recent years, some deep learning methods, such as the convolutional neural network (CNN) and the deep belief network (DBN), have been developed and applied to image classification. However, they suffer from problems like local minima, slow convergence rates, and intensive human intervention. In this paper, we propose a rapid learning method, namely the deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN with the fast training of ELM. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, thus saving much training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128
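
    The ELM stage is the part that admits a closed-form solution; the sketch below (a generic ELM classifier in Python, not the paper's implementation, and without the convolution/pooling feature extractor) shows random hidden weights followed by a least-squares fit of the output weights.

      import numpy as np

      def elm_train(X, y, n_hidden=256, n_classes=10, seed=0):
          """X: (n_samples, n_features) pooled features, y: integer class labels."""
          rng = np.random.default_rng(seed)
          W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights
          b = rng.normal(size=n_hidden)                  # random biases
          H = np.tanh(X @ W + b)                         # hidden-layer activations
          T = np.eye(n_classes)[y]                       # one-hot targets
          beta, *_ = np.linalg.lstsq(H, T, rcond=None)   # closed-form output weights
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)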

  15. Hardware efficient implementation of DFT using an improved first-order moments based cyclic convolution structure

    NASA Astrophysics Data System (ADS)

    Xiong, Jun; Liu, J. G.; Cao, Li

    2015-12-01

    This paper presents hardware-efficient designs for implementing the one-dimensional (1D) discrete Fourier transform (DFT). Once the DFT is formulated in cyclic convolution form, the improved first-order moments-based cyclic convolution structure can be used as the basic computing unit for the DFT computation; it contains only a control module, a barrel shifter and (N-1)/2 accumulation units. After decomposing and reordering the twiddle factors, all that remains is to shift the input data sequence and accumulate the values under the control of the statistical results on the twiddle factors. The whole calculation process contains only shift operations and additions, with no need for multipliers or large memory. Compared with the previous first-order moments-based structure for the DFT, the proposed designs have the advantages of lower hardware consumption, lower power consumption and the flexibility to achieve better performance in certain cases. A series of experiments has demonstrated the high performance of the proposed designs in terms of the area-time product and power consumption. Similarly efficient designs can be obtained for other computations, such as the DCT/IDCT, DST/IDST, digital filtering and correlation, by transforming them into first-order moments-based cyclic convolutions.
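
    The regrouping behind the first-order-moment idea can be shown in a few lines: for a cyclic convolution whose coefficients take a small set of integer values, the inputs sharing each coefficient value are accumulated first (their moment), and the result is then formed from one scaled accumulation per distinct value. The toy Python sketch below illustrates the arithmetic only; the hardware details (barrel shifter, accumulator array, twiddle-factor quantization) are abstracted away.

      import numpy as np

      def cyclic_conv_moments(x, c):
          """x: input sequence, c: integer coefficients (both length N).
          Returns y[n] = sum_k c[k] * x[(n - k) mod N]."""
          N = len(x)
          y = np.zeros(N)
          for n in range(N):
              moments = {}                      # coefficient value -> sum of inputs
              for k in range(N):
                  moments[c[k]] = moments.get(c[k], 0.0) + x[(n - k) % N]
              # One accumulation per distinct coefficient value; values that are
              # powers of two reduce to shifts in hardware.
              y[n] = sum(v * m for v, m in moments.items())
          return y

      x = np.array([1.0, 2.0, 3.0, 4.0])
      c = [1, 2, 1, 0]
      print(cyclic_conv_moments(x, c))
      print(np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(c))))   # same result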

  16. Vehicle detection based on visual saliency and deep sparse convolution hierarchical model

    NASA Astrophysics Data System (ADS)

    Cai, Yingfeng; Wang, Hai; Chen, Xiaobo; Gao, Li; Chen, Long

    2016-07-01

    Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and classifiers trained on hand-crafted features for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then loaded into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct detection rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, highly discriminative multi-scale features are generated by the deep sparse convolution network, which has broad application prospects for target recognition in the field of intelligent vehicles.

  18. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification.

    PubMed

    Pang, Shan; Yang, Xinyi

    2016-01-01

    In recent years, some deep learning methods, such as the convolutional neural network (CNN) and the deep belief network (DBN), have been developed and applied to image classification. However, they suffer from problems like local minima, slow convergence rates, and intensive human intervention. In this paper, we propose a rapid learning method, namely the deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN with the fast training of ELM. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of the features, thus saving much training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods.

  1. Analysis of Multipath Pixels in SAR Images

    NASA Astrophysics Data System (ADS)

    Zhao, J. W.; Wu, J. C.; Ding, X. L.; Zhang, L.; Hu, F. M.

    2016-06-01

    As the received radar signal is the sum of the signal contributions overlaid in a single pixel regardless of the travel path, the multipath effect must be tackled seriously: multiple-bounce returns are added to the direct scatter echoes, which leads to ghost scatterers. Most existing solutions to the multipath problem attempt to recover the signal propagation path. To facilitate the signal propagation simulation process, many aspects, such as the sensor parameters, the geometry of the objects (shape, location, orientation, mutual position between adjacent buildings) and the physical parameters of the surface (roughness, correlation length, permittivity), which determine the strength of the radar signal backscattered to the SAR sensor, must be known in advance. However, it is not practical to obtain a highly detailed object model of an unfamiliar area by field survey, as this is laborious and time-consuming. In this paper, a SAR imaging simulation based on RaySAR is first conducted, aiming at a basic understanding of multipath effects and providing a basis for further comparison. Besides the pre-imaging simulation, the imaging products themselves, i.e. the radar images, are also taken into consideration. Both Cosmo-SkyMed ascending and descending SAR images of the Lupu Bridge in Shanghai are used for the experiment. As a result, the reflectivity maps and signal distribution maps of different bounce levels are simulated and validated against a real 3D model. Statistical indexes such as the phase stability, mean amplitude, amplitude dispersion, coherence and mean-sigma ratio in the case of layover are analyzed in combination with the RaySAR output.
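
    The per-pixel amplitude statistics listed at the end of this abstract can be computed directly from a stack of co-registered SAR amplitudes; the sketch below follows common InSAR definitions (the paper may normalize differently), and coherence is omitted because it requires the complex interferometric pairs.

      import numpy as np

      def pixel_statistics(amplitude_stack, eps=1e-12):
          """amplitude_stack: (n_images, rows, cols) co-registered SAR amplitudes."""
          mean_amp = amplitude_stack.mean(axis=0)
          sigma_amp = amplitude_stack.std(axis=0)
          return {
              "mean_amplitude": mean_amp,
              # Low dispersion marks phase-stable (point-like) scatterers.
              "amplitude_dispersion": sigma_amp / (mean_amp + eps),
              "mean_sigma_ratio": mean_amp / (sigma_amp + eps),
          }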

  2. Pixel Analysis and Plasma Dynamics Characterized by Photospheric Spectral Data

    NASA Astrophysics Data System (ADS)

    Rasca, A.; Chen, J.; Pevtsov, A. A.

    2015-12-01

    Continued advances in solar observations have led to higher-resolution magnetograms and surface (photospheric) images, revealing bipolar magnetic features operating near the resolution limit during emerging flux events and other phenomena used to predict solar eruptions responsible for geomagnetic plasma disturbances. However, line-of-sight (LOS) magnetogram pixels contain only the net uncanceled magnetic flux, which is expected to increase for fixed regions as resolution limits improve. A pixel dynamics model utilizing Stokes I spectral profiles was previously used to quantify changes in the Doppler shift, width, asymmetry, and tail flatness of Fe I lines at 6301.5 and 6302.5 Å and used pixel-by-pixel line profile fluctuations to characterize quiet and active regions on the Sun. We use this pixel dynamics model with circularly polarized photospheric data (e.g., SOLIS data) to estimate plasma dynamic properties at a sub-pixel level. The analysis can be extended to include the full Stokes parameters and study signatures of magnetic fields and coupled plasma properties on sub-pixel scales.

  3. Monolithic pixels on moderate resistivity substrate and sparsifying readout architecture

    NASA Astrophysics Data System (ADS)

    Giubilato, P.; Battaglia, M.; Bisello, D.; Caselle, M.; Chalmet, P.; Demaria, L.; Ikemoto, Y.; Kloukinas, K.; Mansuy, S. C.; Mattiazzo, S.; Marchioro, A.; Mugnier, H.; Pantano, D.; Potenza, A.; Rivetti, A.; Rousset, J.; Silvestrin, L.; Snoeys, W.

    2013-12-01

    The LePix project aims at realizing a new generation of monolithic pixel detectors with improved performance at lower cost with respect to both current state-of-the-art monolithic and hybrid pixel sensors. The detector is built in a 90 nm CMOS process on a substrate of moderate resistivity. This allows charge collection by drift while maintaining the other advantages usually offered by MAPS, such as being a single-piece detector and using a standard CMOS production line. The collection-by-drift mechanism, coupled with the low-capacitance design of the collecting node made possible by the monolithic approach, provides an excellent signal-to-noise ratio directly at the pixel cell together with a radiation tolerance far superior to conventional un-depleted MAPS. The excellent signal-to-noise performance is demonstrated by the device's ability to separate the 6 keV 55Fe double peak at room temperature. To achieve high granularity (10-20 μm pitch pixels) over large detector areas while maintaining high readout speed, a completely new compression architecture has been devised. This architecture departs from the mainstream hybrid pixel sparsification approach, which uses in-pixel logic to reduce data, by using topological compression to minimize pixel area and power consumption.

  4. Convolution-based estimation of organ dose in tube current modulated CT

    PubMed Central

    Tian, Xiaoyu; Segars, W Paul; Dixon, Robert L; Samei, Ehsan

    2016-01-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460–7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18–70 years, weight range: 60–180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (hOrgan) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)organ, convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)organ, convolution with the organ dose coefficients (hOrgan). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and dose simulated using TCM Monte Carlo program was quantified. We further compared the

  5. Convolution-based estimation of organ dose in tube current modulated CT

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460–7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18–70 years, weight range: 60–180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (hOrgan) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)organ, convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)organ, convolution with the organ dose coefficients (hOrgan). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using Monte Carlo program with TCM profiles explicitly modeled
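
    Written out explicitly, the estimation relation described in this abstract takes roughly the following form (a reconstruction consistent with the text; the exact kernel and weighting definitions are given in the paper):

      \[
        D_{\text{organ}} \;\approx\; h_{\text{organ}} \times
        \left(\mathrm{CTDI}_{\mathrm{vol}}\right)_{\text{organ, convolution}},
        \qquad
        \left(\mathrm{CTDI}_{\mathrm{vol}}\right)_{\text{organ, convolution}}
        = \sum_{z} w_{\text{organ}}(z)\,
          \bigl[\mathrm{CTDI}_{\mathrm{vol}}^{\mathrm{TCM}} \ast k\bigr](z),
      \]
      % h_organ        : fixed-tube-current organ dose coefficient
      % CTDIvol^TCM(z) : CTDIvol per slice following the tube current modulation
      % k(z)           : dose-spread (convolution) kernel along the scan axis
      % w_organ(z)     : weight describing the organ's distribution along z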

  7. Monolithic pixel detectors with 0.2 μm FD-SOI pixel process technology

    NASA Astrophysics Data System (ADS)

    Miyoshi, Toshinobu; Arai, Yasuo; Chiba, Tadashi; Fujita, Yowichi; Hara, Kazuhiko; Honda, Shunsuke; Igarashi, Yasushi; Ikegami, Yoichi; Ikemoto, Yukiko; Kohriki, Takashi; Ohno, Morifumi; Ono, Yoshimasa; Shinoda, Naoyuki; Takeda, Ayaki; Tauchi, Kazuya; Tsuboyama, Toru; Tadokoro, Hirofumi; Unno, Yoshinobu; Yanagihara, Masashi

    2013-12-01

    Truly monolithic pixel detectors were fabricated with 0.2 μm SOI pixel process technology in collaboration with LAPIS Semiconductor Co., Ltd. for particle tracking experiments, X-ray imaging and medical applications. CMOS circuits were fabricated on a thin SOI layer and connected to diodes formed in the silicon handle wafer through the buried oxide layer. Since the handle wafer can be chosen freely, high-resistivity silicon is also available. Double SOI (D-SOI) wafers fabricated from Czochralski (CZ)-SOI wafers were newly obtained and successfully processed in 2012. The top SOI layer is used for the electronic circuits, while the middle SOI layer is used as a shield layer against the back-gate effect and cross-talk between the sensors and the CMOS circuits, and as an electrode to compensate for the total ionizing dose (TID) effect. In 2012, we developed two SOI detectors, INTPIX5 and INTPIX3g. A spatial resolution study was done with INTPIX5, and it showed excellent performance. A TID effect study with D-SOI INTPIX3g detectors was carried out, and we confirmed improved TID tolerance in D-SOI sensors.

  8. Vertically integrated pixel readout chip for high energy physics

    SciTech Connect

    Deptuch, Grzegorz; Demarteau, Marcel; Hoff, James; Khalid, Farah; Lipton, Ronald; Shenai, Alpana; Trimpl, Marcel; Yarema, Raymond; Zimmerman, Tom; /Fermilab

    2011-01-01

    We report on the development of the vertex detector pixel readout chips based on multi-tier vertically integrated electronics for the International Linear Collider. Some testing results of the VIP2a prototype are presented. The chip is the second iteration of the silicon implementation of the prototype data-pushed readout concept developed at Fermilab. The device was fabricated in the 3D MIT-LL 0.15 μm fully depleted SOI process. The prototype is a three-tier design, featuring 30 x 30 μm² pixels, laid out in an array of 48 x 48 pixels.

  9. Pixel detectors in 3D technologies for high energy physics

    SciTech Connect

    Deptuch, G.; Demarteau, M.; Hoff, J.; Lipton, R.; Shenai, A.; Yarema, R.; Zimmerman, T.; /Fermilab

    2010-10-01

    This paper reports on the current status of the development of International Linear Collider vertex detector pixel readout chips based on multi-tier vertically integrated electronics. Initial testing results of the VIP2a prototype are presented. The chip is the second embodiment of the prototype data-pushed readout concept developed at Fermilab. The device was fabricated in the MIT-LL 0.15 μm fully depleted SOI process. The prototype is a three-tier design, featuring 30 x 30 μm² pixels, laid out in an array of 48 x 48 pixels.

  10. Dual collection mode optical microscope with single-pixel detection

    NASA Astrophysics Data System (ADS)

    Rodríguez, A. D.; Clemente, P.; Fernández-Alonso, Mercedes; Tajahuerce, E.; Lancis, J.

    2015-07-01

    In this work we have developed a single-pixel optical microscope that provides both reflection and transmission images of the sample under test by attaching a diamond pixel layout DMD to a commercial inverted microscope. Our system performs simultaneous measurements of reflection and transmission modes. Besides, in contrast with a conventional system, in our single-element detection system both images belong, unequivocally, to the same plane of the sample. Furthermore, we have designed an algorithm to modify the shape of the projected patterns that improves the resolution and prevents the artifacts produced by the diamond pixel architecture.
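
    For readers unfamiliar with single-pixel imaging, the toy sketch below shows the generic reconstruction principle behind such instruments (it is not the microscope's code): the scene is sampled with a set of structured patterns, here Hadamard rows, and the image is recovered from one photodetector value per pattern by the inverse transform.

      import numpy as np
      from scipy.linalg import hadamard

      n = 16                                       # image is n x n, n a power of two
      H = hadamard(n * n)                          # each row is one +/-1 pattern
      scene = np.zeros((n, n))
      scene[4:12, 6:10] = 1.0                      # a simple test object

      measurements = H @ scene.ravel()             # one single-pixel reading per pattern
      recovered = (H.T @ measurements) / (n * n)   # inverse Hadamard transform
      print(np.allclose(recovered.reshape(n, n), scene))   # True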

  11. Consequences of Mixed Pixels on Temperature Emissivity Separation

    SciTech Connect

    Heasler, Patrick G.; Foley, Michael G.; Thompson, Sandra E.

    2007-02-01

    This report investigates the effect that a mixed pixel can have on temperature/emissivity separation (i.e. temperature/emissivity estimation using long-wave infrared data). Almost all temperature/emissivity estimation methods are based on a model that assumes both the temperature and the emissivity within the imaged pixel are homogeneous. A mixed pixel has heterogeneous temperature/emissivity and therefore does not satisfy this assumption. This heterogeneity causes biases in the estimates, and this report quantifies the magnitude of those biases.

  12. A germanium hybrid pixel detector with 55μm pixel size and 65,000 channels

    NASA Astrophysics Data System (ADS)

    Pennicard, D.; Struth, B.; Hirsemann, H.; Sarajlic, M.; Smoljanin, S.; Zuvic, M.; Lampert, M. O.; Fritzsch, T.; Rothermund, M.; Graafsma, H.

    2014-12-01

    Hybrid pixel semiconductor detectors provide high performance through a combination of direct detection, a relatively small pixel size, fast readout and sophisticated signal processing circuitry in each pixel. For X-ray detection above 20 keV, high-Z sensor layers rather than silicon are needed to achieve high quantum efficiency, but many high-Z materials such as GaAs and CdTe often suffer from poor material properties or nonuniformities. Germanium is available in large wafers of extremely high quality, making it an appealing option for high-performance hybrid pixel X-ray detectors, but suitable technologies for finely pixelating and bump-bonding germanium have not previously been available. A finely-pixelated germanium photodiode sensor with a 256 by 256 array of 55μm pixels has been produced. The sensor has an n-on-p structure, with 700μm thickness. Using a low-temperature indium bump process, this sensor has been bonded to the Medipix3RX photoncounting readout chip. Tests with the LAMBDA readout system have shown that the detector works successfully, with a high bond yield and higher image uniformity than comparable high-Z systems. During cooling, the system is functional around -80°C (with warmer temperatures resulting in excessive leakage current), with -100°C sufficient for good performance.

  13. Characterization of a three side abuttable CMOS pixel sensor with digital pixel and data compression for charged particle tracking

    NASA Astrophysics Data System (ADS)

    Guilloux, F.; Değerli, Y.; Flouzat, C.; Lachkar, M.; Monmarthe, E.; Orsini, F.; Venault, P.

    2016-02-01

    CMOS monolithic pixel sensor technology has been chosen to equip the new ALICE trackers for the HL-LHC. PIXAM is the final prototype from an R&D program specific to the Muon Forward Tracker, which aims to push the performance of the mature rolling shutter architecture significantly forward. By implementing a digital pixel that allows a group of rows to be read out in parallel, the PIXAM sensor increases the rolling shutter readout speed while keeping the same power consumption as analogue pixel sensors. This paper briefly describes the ASIC architecture and focuses on the analogue and digital performance of the sensor, obtained from laboratory measurements.

  14. Coherence experiments in single-pixel digital holography.

    PubMed

    Liu, Jung-Ping; Guo, Chia-Hao; Hsiao, Wei-Jen; Poon, Ting-Chung; Tsang, Peter

    2015-05-15

    In optical scanning holography (OSH), the coherence properties of the acquired holograms depend on the single-pixel size, i.e., the active area of the photodetector. For the first time, to the best of our knowledge, we have experimentally demonstrated coherent, partially coherent, and incoherent three-dimensional (3D) imaging in such a single-pixel digital holographic recording system. We have found that, for the incoherent mode of OSH, in which the detector with the largest active area is used, the 3D location of a diffusely reflecting object can be successfully retrieved without speckle noise. For the partially coherent mode, employing a smaller detector pixel size, significant speckles and randomly distributed bright spots appear among the reconstructed images. For the coherent mode of OSH, when the size of the pixel is vanishingly small, the bright spots disappear. However, the speckle remains and the signal-to-noise ratio is low. PMID:26393741

  15. Active pixel sensors with substantially planarized color filtering elements

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor)

    1999-01-01

    A semiconductor imaging system preferably having an active pixel sensor array compatible with a CMOS fabrication process. Color-filtering elements such as polymer filters and wavelength-converting phosphors can be integrated with the image sensor.

  16. DAQ hardware and software development for the ATLAS Pixel Detector

    NASA Astrophysics Data System (ADS)

    Stramaglia, Maria Elena

    2016-07-01

    In 2014, the Pixel Detector of the ATLAS experiment has been extended by about 12 million pixels thanks to the installation of the Insertable B-Layer (IBL). Data-taking and tuning procedures have been implemented along with newly designed readout hardware to support high bandwidth for data readout and calibration. The hardware is supported by an embedded software stack running on the readout boards. The same boards will be used to upgrade the readout bandwidth for the two outermost barrel layers of the ATLAS Pixel Detector. We present the IBL readout hardware and the supporting software architecture used to calibrate and operate the 4-layer ATLAS Pixel Detector. We discuss the technical implementations and status for data taking, validation of the DAQ system in recent cosmic ray data taking, in-situ calibrations, and results from additional tests in preparation for Run 2 at the LHC.

  17. Two-dimensional pixel array image sensor for protein crystallography

    SciTech Connect

    Beuville, E.; Beche, J.-F.; Cork, C.

    1996-07-01

    A 2D pixel array image sensor module has been designed for time-resolved protein crystallography. This smart-pixel detector significantly enhances time-resolved Laue protein crystallography, by two to three orders of magnitude compared to existing sensors such as films or phosphor screens coupled to CCDs. The time resolution and dynamic range of this type of detector will allow one to study the evolution of structural changes that occur within the protein as a function of time. This detector will also considerably accelerate data collection in static Laue or monochromatic crystallography and make better use of the intense beam delivered by synchrotron light sources. The event-driven pixel array detectors, based on the column architecture, can provide multiparameter information (energy discrimination, time) with sparse and frameless readout and without significant dead time. The prototype module consists of a 16x16 pixel diode array bump-bonded to the integrated circuit. The detection area is 150x150 square microns.

  18. First Light with a 67-Million-Pixel WFI Camera

    NASA Astrophysics Data System (ADS)

    1999-01-01

    The newest astronomical instrument at the La Silla observatory is a super-camera with no less than sixty-seven million image elements. It represents the outcome of a joint project between the European Southern Observatory (ESO) , the Max-Planck-Institut für Astronomie (MPI-A) in Heidelberg (Germany) and the Osservatorio Astronomico di Capodimonte (OAC) near Naples (Italy), and was installed at the 2.2-m MPG/ESO telescope in December 1998. Following careful adjustment and testing, it has now produced the first spectacular test images. With a field size larger than the Full Moon, the new digital Wide Field Imager is able to obtain detailed views of extended celestial objects to very faint magnitudes. It is the first of a new generation of survey facilities at ESO with which a variety of large-scale searches will soon be made over extended regions of the southern sky. These programmes will lead to the discovery of particularly interesting and unusual (rare) celestial objects that may then be studied with large telescopes like the VLT at Paranal. This will in turn allow astronomers to penetrate deeper and deeper into the many secrets of the Universe. More light + larger fields = more information! The larger a telescope is, the more light - and hence information about the Universe and its constituents - it can collect. This simple truth represents the main reason for building ESO's Very Large Telescope (VLT) at the Paranal Observatory. However, the information-gathering power of astronomical equipment can also be increased by using a larger detector with more image elements (pixels) , thus permitting the simultaneous recording of images of larger sky fields (or more details in the same field). It is for similar reasons that many professional photographers prefer larger-format cameras and/or wide-angle lenses to the more conventional ones. The Wide Field Imager at the 2.2-m telescope Because of technological limitations, the sizes of detectors most commonly in use in

  19. Small pixel CZT detector for hard X-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Wilson, Matthew David; Cernik, Robert; Chen, Henry; Hansson, Conny; Iniewski, Kris; Jones, Lawrence L.; Seller, Paul; Veale, Matthew C.

    2011-10-01

    A new small pixel cadmium zinc telluride (CZT) detector has been developed for hard X-ray spectroscopy. The X-ray performance of four detectors is presented and the detectors are analysed in terms of the energy resolution of each pixel. The detectors were made from CZT crystals grown by the travelling heater method (THM) bonded to a 20×20 application specific integrated circuit (ASIC) and data acquisition (DAQ) system. The detectors had an array of 20×20 pixels on a 250 μm pitch, with each pixel gold-stud bonded to an energy resolving circuit in the ASIC. The DAQ system digitised the ASIC output with 14 bit resolution, performing offset corrections and data storage to disc in real time at up to 40,000 frames per second. The detector geometry and ASIC design was optimised for X-ray spectroscopy up to 150 keV and made use of the small pixel effect to preferentially measure the electron signal. A 241Am source was used to measure the spectroscopic performance and uniformity of the detectors. The average energy resolution (FWHM at 59.54 keV) of each pixel ranged from 1.09±0.46 to 1.50±0.57 keV across the four detectors. The detectors showed good spectral performance and uniform response over almost all pixels in the 20×20 array. A large area 80×80 pixel detector will be built that will utilise the scalable design of the ASIC and the large areas of monolithic spectroscopic grade THM grown CZT that are now available. The large area detector will have the same performance as that demonstrated here.

  20. A Chip and Pixel Qualification Methodology on Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank

    2004-01-01

    This paper presents a qualification methodology on imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figure of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.

  1. Pixel readout electronics for LHC and biomedical applications

    NASA Astrophysics Data System (ADS)

    Blanquart, L.; Bonzom, V.; Comes, G.; Delpierre, P.; Fischer, P.; Hausmann, J.; Keil, M.; Lindner, M.; Meuser, S.; Wermes, N.

    2000-01-01

    The demanding requirements for pixel readout electronics for high-energy physics experiments and biomedical applications are reviewed. Some examples of the measured analog performance of prototype chips are given. The readout architectures of the PIxel Readout for the ATlas Experiment (PIRATE) chip suited for LHC experiments and of the Multi Picture Element Counter (MPEC) counting chip targeted for biomedical applications are presented. First results with complete chip-sensor assemblies are also shown.

  2. FPIX2, the BTeV pixel readout chip

    SciTech Connect

    David C. Christian et al.

    2003-12-10

    A radiation tolerant pixel readout chip, FPIX2, has been developed at Fermilab for use by BTeV. Some of the requirements of the BTeV pixel readout chip are reviewed and contrasted with requirements for similar devices in LHC experiments. A description of the FPIX2 is given, and results of initial tests of its performance are presented, as is a summary of measurements planned for the coming year.

  3. High-voltage pixel sensors for ATLAS upgrade

    NASA Astrophysics Data System (ADS)

    Perić, I.; Kreidl, C.; Fischer, P.; Bompard, F.; Breugnon, P.; Clemens, J.-C.; Fougeron, D.; Liu, J.; Pangaud, P.; Rozanov, A.; Barbero, M.; Feigl, S.; Capeans, M.; Ferrere, D.; Pernegger, H.; Ristic, B.; Muenstermann, D.; Gonzalez Sevilla, S.; La Rosa, A.; Miucci, A.; Nessi, M.; Iacobucci, G.; Backhaus, M.; Hügging, Fabian; Krüger, H.; Hemperek, T.; Obermann, T.; Wermes, N.; Garcia-Sciveres, M.; Quadt, A.; Weingarten, J.; George, M.; Grosse-Knetter, J.; Rieger, J.; Bates, R.; Blue, A.; Buttar, C.; Hynds, D.

    2014-11-01

    High-voltage (HV-) CMOS pixel sensors offer several good properties: fast charge collection by drift, the possibility to implement relatively complex CMOS in-pixel electronics, and compatibility with commercial processes. The sensor element is a deep n-well diode in a p-type substrate. The n-well contains the CMOS pixel electronics. The main charge collection mechanism is drift in a shallow, high-field region, which leads to fast charge collection and high radiation tolerance. We are currently evaluating the use of high-voltage detectors implemented in 180 nm HV-CMOS technology for the high-luminosity ATLAS upgrade. Our approach is to replace the existing pixel and strip sensors with intelligent CMOS sensors while keeping the presently used readout ASICs. By intelligence we mean the ability of the sensor to recognize a particle hit and generate the address information. In this way we could benefit from the advantages of the HV sensor technology, such as lower cost, lower mass, lower operating voltage, smaller pitch, and smaller clusters at high incidence angles. Additionally, we expect to achieve the radiation hardness necessary for the ATLAS upgrade. In order to test the concept, we have designed two HV-CMOS prototypes that can be read out in two ways: using pixel and strip readout chips. In the case of the pixel readout, the connection between the HV-CMOS sensor and the readout ASIC can be established capacitively.

  4. Frequency distribution signatures and classification of within-object pixels

    PubMed Central

    Stow, Douglas A.; Toure, Sory I.; Lippitt, Christopher D.; Lippitt, Caitlin L.; Lee, Chung-rui

    2011-01-01

    The premise of geographic object-based image analysis (GEOBIA) is that image objects are composed of aggregates of pixels that correspond to earth surface features of interest. Most commonly, image-derived objects (segments) or objects associated with predefined land units (e.g., agricultural fields) are classified using parametric statistical characteristics (e.g., mean and standard deviation) of the within-object pixels. The objective of this exploratory study was to examine the between- and within-class variability of frequency distributions of multispectral pixel values, and to evaluate a quantitative measure and classification rule that exploits the full pixel frequency distribution of within object pixels (i.e., histogram signatures) compared to simple parametric statistical characteristics. High spatial resolution Quickbird satellite multispectral data of Accra, Ghana were evaluated in the context of mapping land cover and land use and socioeconomic status. Results show that image objects associated with land cover and land use types can have characteristic, non-normal frequency distributions (histograms). Signatures of most image objects tended to match closely the training signature of a single class or sub-class. Curve matching approaches to classifying multi-pixel frequency distributions were found to be slightly more effective than standard statistical classifiers based on a nearest neighbor classifier. PMID:22408575
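
    A minimal sketch of histogram-signature classification of within-object pixels is given below (the study's curve-matching measure and nearest-neighbour baseline differ in detail; here a simple L1 histogram distance stands in for the matching rule, and the names are illustrative).

      import numpy as np

      def histogram_signature(pixel_values, bins=32, value_range=(0, 255)):
          """Normalized frequency distribution of the pixels inside one object."""
          h, _ = np.histogram(pixel_values, bins=bins, range=value_range)
          return h / max(h.sum(), 1)

      def classify_object(pixel_values, class_signatures):
          """class_signatures: dict class_name -> reference signature (same binning).
          Assigns the class whose training signature is closest to the object's."""
          sig = histogram_signature(pixel_values)
          return min(class_signatures,
                     key=lambda c: np.abs(sig - class_signatures[c]).sum())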

  5. CMOS Active Pixel Sensor (APS) Imager for Scientific Applications

    NASA Astrophysics Data System (ADS)

    Ay, Suat U.; Lesser, Michael P.; Fossum, Eric R.

    2002-12-01

    A 512×512 CMOS Active Pixel Sensor (APS) imager has been designed, fabricated, and tested for frontside illumination, suitable for use in astronomy, specifically in telescope guider systems as a replacement for CCD chips. The imager features a high-speed differential analog readout, 15 μm pixel pitch, 75% fill factor (FF), 62 dB dynamic range, 315 ke- pixel capacity, less than 0.25% fixed pattern noise (FPN), 45 dB signal-to-noise ratio (SNR) and a frame rate of up to 40 FPS. The design was implemented in a standard 0.5 μm CMOS process technology, consuming less than 200 mW on a single 5 V power supply. The CMOS APS imager was developed with a pixel structure suitable for both frontside and backside illumination, holding a large number of electrons in a relatively small pixel pitch of 15 μm. High-speed readout and signal processing circuits were designed to achieve low fixed pattern noise (FPN) and non-uniformity and to provide CCD-like analog outputs. The target spectral range of operation for the imager is the near ultraviolet (300-400 nm), with high quantum efficiency. This device is going to be used as a test vehicle to develop a backside-thinning process.

  6. High frame rate measurements of semiconductor pixel detector readout IC

    NASA Astrophysics Data System (ADS)

    Szczygiel, R.; Grybos, P.; Maj, P.

    2012-07-01

    We report on high count rate and high frame rate measurements of a prototype IC named FPDR90, designed for readouts of hybrid pixel semiconductor detectors used for X-ray imaging applications. The FPDR90 is constructed in 90 nm CMOS technology and has dimensions of 4 mm×4 mm. Its main part is a matrix of 40×32 pixels with 100 μm×100 μm pixel size. The chip works in the single photon counting mode with two discriminators and two 16-bit ripple counters per pixel. The count rate per pixel depends on the effective CSA feedback resistance and can be set up to 6 Mcps. The FPDR90 can operate in the continuous readout mode, with zero dead time. Due to the architecture of digital blocks in pixel, one can select the number of bits read out from each counter from 1 to 16. Because in the FPDR90 prototype only one data output is available, the frame rate is 9 kfps and 72 kfps for 16 bits and 1 bit readout, respectively (with nominal clock frequency of 200 MHz).

  7. Preliminary investigations of active pixel sensors in Nuclear Medicine imaging

    NASA Astrophysics Data System (ADS)

    Ott, Robert; Evans, Noel; Evans, Phil; Osmond, J.; Clark, A.; Turchetta, R.

    2009-06-01

    Three CMOS active pixel sensors have been investigated for their application to Nuclear Medicine imaging. Startracker with 525×525 25 μm square pixels has been coupled via a fibre optic stud to a 2 mm thick segmented CsI(Tl) crystal. Imaging tests were performed using 99mTc sources, which emit 140 keV gamma rays. The system was interfaced to a PC via FPGA-based DAQ and optical link enabling imaging rates of 10 f/s. System noise was measured to be >100e and it was shown that the majority of this noise was fixed pattern in nature. The intrinsic spatial resolution was measured to be ˜80 μm and the system spatial resolution measured with a slit was ˜450 μm. The second sensor, On Pixel Intelligent CMOS (OPIC), had 64×72 40 μm pixels and was used to evaluate noise characteristics and to develop a method of differentiation between fixed pattern and statistical noise. The third sensor, Vanilla, had 520×520 25 μm pixels and a measured system noise of ˜25e. This sensor was coupled directly to the segmented phosphor. Imaging results show that even at this lower level of noise the signal from 140 keV gamma rays is small as the light from the phosphor is spread over a large number of pixels. Suggestions for the 'ideal' sensor are made.

  8. Inter-pixel Size Variations as Source of Spitzer Systematics

    NASA Astrophysics Data System (ADS)

    Himes, Michael David; Harrington, Joseph; Lust, Nathaniel B.

    2016-10-01

    In the astrophysical sciences, imaging devices are commonly assumed to contain evenly sized pixels, with each pixel converting light to signal with a slightly different efficiency. These variations are accounted for by exposing the detector to a uniform light source, comparing each pixel's value to the mean of the exposure, and dividing by the result (flatfielding). If the detector instead had pixels that varied in size, the same response variations to uniform illumination would be recorded and subsequently removed. However, in the presence of a flux gradient such as a star, the flatfielding will alter these flux values, which in turn affects any analysis of the data. This alteration occurs because varying-size pixels are corrected to a unit area through the flatfield, when the pixels themselves rightfully record a non-uniform area of the point-spread function (PSF). We believe that this may be the source of Spitzer's systematic error attributed to gain variations. We demonstrate what an imaging device with inter-pixel size differences looks like from a data standpoint, its effects on estimating the widths of a point source, and investigations to properly account for size variations without altering flux values.
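
    A toy 1-D simulation of the effect described here is sketched below: pixels of unequal width integrate a Gaussian "star", the flat field built from uniform illumination absorbs the width differences, and flat-fielding the star then changes its recorded fluxes. All names and numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      widths = 1.0 + 0.05 * rng.standard_normal(64)       # pixel sizes vary by ~5%
      edges = np.concatenate(([0.0], np.cumsum(widths)))   # pixel boundaries

      def integrate(profile, edges, samples=20000):
          """Approximate the per-pixel integral of a 1-D intensity profile."""
          x = np.linspace(edges[0], edges[-1], samples)
          idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(edges) - 2)
          return np.bincount(idx, weights=profile(x), minlength=len(edges) - 1)

      flat = integrate(lambda x: np.ones_like(x), edges)   # uniform illumination
      star = integrate(lambda x: np.exp(-0.5 * ((x - 32.0) / 1.5) ** 2), edges)

      flatfielded = star / (flat / flat.mean())            # standard flat-field step
      # Total flux changes because unequal pixel areas were forced to a unit area.
      print(star.sum(), flatfielded.sum())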

  9. Challenges of small-pixel infrared detectors: a review

    NASA Astrophysics Data System (ADS)

    Rogalski, A.; Martyniuk, P.; Kopytko, M.

    2016-04-01

    In the last two decades, several new concepts for improving the performance of infrared detectors have been proposed. These new concepts particularly address the drive towards the so-called high operating temperature focal plane arrays (FPAs), aiming to increase detector operating temperatures and, as a consequence, reduce the cost of infrared systems. In imaging systems with above-megapixel formats, pixel dimension plays a crucial role in determining critical system attributes such as system size, weight and power consumption (SWaP). The advent of smaller pixels has also resulted in the superior spatial and temperature resolution of these systems. Optimum pixel dimensions are limited by diffraction effects from the aperture, and are in turn wavelength-dependent. In this paper, the key challenges in realizing optimum pixel dimensions in FPA design, including dark current, pixel hybridization, pixel delineation, and unit cell readout capacity, are outlined with the aim of achieving an adequate modulation transfer function for the ultra-small pitches involved. Both photon and thermal detectors have been considered. Concerning infrared photon detectors, the trade-offs between two types of competing technology—HgCdTe material systems and III-V materials (mainly barrier detectors)—have been investigated.

  10. Polycrystalline CVD diamond pixel array detector for nuclear particles monitoring

    NASA Astrophysics Data System (ADS)

    Pacilli, M.; Allegrini, P.; Girolami, M.; Conte, G.; Spiriti, E.; Ralchenko, V. G.; Komlenok, M. S.; Khomic, A. A.; Konov, V. I.

    2013-02-01

    We report the 90Sr beta response of a polycrystalline diamond pixel detector fabricated using metal-less graphitic ohmic contacts. Laser-induced graphitization was used to realize multiple square conductive contacts of 1 mm × 1 mm area, 0.2 mm apart, on one detector side, while on the other side a large 9 mm × 9 mm graphite contact was realized for biasing. A proximity board was used to wire bond nine pixels at a time and to evaluate the charge collection homogeneity among the 36 detector pixels. Different biasing configurations were tested to assess charge collection and noise performance: connecting the pixels to the ground potential of the charge amplifier gave the best results and the minimum noise pedestal. The expected exponential trend typical of beta particles has been observed. Reversing the bias polarity does not change the pulse height distribution (PHD), and signal saturation of every pixel was observed around ±200 V (0.4 V/μm). Reasonable pixel response uniformity has been evidenced, even if smaller-pitch (50-100 μm) structures still need to be tested.

  11. Spatial and sampling analysis for a sensor viewing a pixelized projector

    NASA Astrophysics Data System (ADS)

    Sieglinger, Breck A.; Flynn, David S.; Coker, Charles F.

    1997-07-01

    This paper presents an analysis of spatial blurring and sampling effects for a sensor viewing a pixelized scene projector. It addresses the ability of a projector to simulate an arbitrary continuous radiance scene using a field of discrete elements. The spatial fidelity of the projector as seen by an imaging sensor is shown to depend critically on the width of the sensor MTF or spatial response function, and the angular spacing between projector pixels. Quantitative results are presented based on a simulation that compares the output of a sensor viewing a reference scene to the output of the sensor viewing a projector display of the reference scene. Dependence on the blur of the sensor and projector, the scene content, and the alignment both of features in the scene and of sensor samples with the projector pixel locations is addressed. We attempt to determine the projector characteristics required to perform hardware-in-the-loop testing with adequate spatial realism to evaluate seeker functions such as autonomous detection, measuring radiant intensities and angular positions of unresolved objects, or performing autonomous recognition and aimpoint selection for resolved objects.
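
    A toy 1-D version of the comparison just described, with assumed Gaussian blurs for the projector and sensor and hypothetical pitch values (all placeholders): it contrasts the sensor's sampled output for a continuous reference scene with its output for the pixelized projector rendering of that same scene.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        dx = 0.01                                   # fine spatial grid (arbitrary angular units)
        x = np.arange(0, 20, dx)
        scene = np.exp(-0.5 * ((x - 8.0) / 0.4) ** 2) + 0.5 * (np.abs(x - 12.0) < 0.3)

        proj_pitch = 0.25        # assumed projector pixel spacing
        sens_pitch = 0.20        # assumed sensor sample spacing
        proj_blur = 0.15 / dx    # projector pixel blur (std dev, in grid samples)
        sens_blur = 0.30 / dx    # sensor spatial response width (std dev, in grid samples)

        # projector rendering: average the scene over each projector pixel, hold that
        # value across the pixel, then apply the projector blur
        proj_idx = (x // proj_pitch).astype(int)
        pixel_means = np.bincount(proj_idx, weights=scene) / np.bincount(proj_idx)
        projected = gaussian_filter1d(pixel_means[proj_idx], proj_blur)

        # sensor: convolve with its spatial response and sample at the sensor pitch
        step = int(round(sens_pitch / dx))
        out_ref = gaussian_filter1d(scene, sens_blur)[::step]
        out_proj = gaussian_filter1d(projected, sens_blur)[::step]

        err = np.sqrt(np.mean((out_ref - out_proj) ** 2)) / out_ref.max()
        print(f"normalized RMS difference, reference vs. projector view: {err:.3%}")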

  12. Characteristics of Monolithically Integrated InGaAs Active Pixel Imager Array

    NASA Technical Reports Server (NTRS)

    Kim, Q.; Cunningham, T. J.; Pain, B.; Lange, M. J.; Olsen, G. H.

    2000-01-01

    Switching and amplifying characteristics of a newly developed monolithic InGaAs Active Pixel Imager Array are presented. The sensor array is fabricated from InGaAs material epitaxially deposited on an InP substrate. It consists of an InGaAs photodiode connected to InP depletion-mode junction field effect transistors (JFETs) for low leakage, low power, and fast control of circuit signal amplifying, buffering, selection, and reset. This monolithically integrated active pixel sensor configuration eliminates the need for hybridization with a silicon multiplexer. In addition, the configuration allows the sensor to be front illuminated, making it sensitive to visible as well as near-infrared signal radiation. By adapting the existing 1.55 micrometer fiber-optic communication technology, this monolithic integration will be well suited to optoelectronic dual-band (visible/IR) applications near room temperature, for use in atmospheric gas sensing in space, and for target identification on earth. In this paper, two different types of small 4 x 1 test arrays will be described. The switching and amplifying circuits will be discussed in terms of circuit effectiveness (leakage, operating frequency, and temperature) in preparation for the second-phase demonstration of integrated, two-dimensional monolithic InGaAs active pixel sensor arrays for applications in transportable shipboard surveillance, night vision, and emission spectroscopy.

  13. Active pixel sensor having intra-pixel charge transfer with analog-to-digital converter

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra K. (Inventor); Pain, Bedabrata (Inventor); Nixon, Robert H. (Inventor); Zhou, Zhimin (Inventor)

    2003-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node and an analog-to-digital converter formed in the substrate connected to the output of the readout circuit.

  14. Active pixel sensor having intra-pixel charge transfer with analog-to-digital converter

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra K. (Inventor); Pain, Bedabrata (Inventor); Nixon, Robert H. (Inventor); Zhou, Zhimin (Inventor)

    2000-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node and an analog-to-digital converter formed in the substrate connected to the output of the readout circuit.

  15. Direct measurement and calibration of the Kepler CCD Pixel Response Function for improved photometry and astrometry

    NASA Astrophysics Data System (ADS)

    Ninkov, Zoran

    Stellar images taken with telescopes and detectors in space are usually undersampled, and to correct for this an accurate pixel response function is required. The standard approach for HST and KEPLER has been to measure the telescope PSF combined ("convolved") with the actual pixel response function, super-sampled by taking into account dithered or offset observed images of many stars (Lauer [1999]). This combined response function has been called the "PRF" (Bryson et al. [2011]). However, using such results has not allowed astrometry from KEPLER to reach its full potential (Monet et al. [2010], [2014]). Given the precision of KEPLER photometry, it should be feasible to use a pre-determined detector pixel response function (PRF) and an optical point spread function (PSF) as separable quantities to more accurately correct photometry and astrometry for undersampling. Wavelength (i.e., stellar color) and instrumental temperature should affect each of these differently. Discussion of the PRF in the "KEPLER Instrument Handbook" is limited to an ad hoc extension of earlier measurements on a quite different CCD. It is known that the KEPLER PSF typically has a sharp spike in the middle, and the main bulk of the PSF is still small enough to be undersampled, so that any substructure in the pixel may interact significantly with the optical PSF. Both the PSF and PRF are probably asymmetric. We propose to measure the PRF for an example of the CCD sensors used on KEPLER at sufficient sampling resolution to allow significant improvement of KEPLER photometry and astrometry, in particular allowing PSF fitting techniques to be used on the data archive.

  16. Hard x-ray response of pixellated CdZnTe detectors

    SciTech Connect

    Abbene, L.; Caccia, S.; Bertuccio, G.

    2009-06-15

    In recent years, the development of cadmium zinc telluride (CdZnTe) detectors for x-ray and gamma ray spectrometry has grown rapidly. The good room temperature performance and the high spatial resolution of pixellated CdZnTe detectors make them very attractive in space-borne x-ray astronomy, mainly as focal plane detectors for the new generation of hard x-ray focusing telescopes. In this work, we investigated the spectroscopic performance of two pixellated CdZnTe detectors coupled with a custom low-noise, low-power readout application specific integrated circuit (ASIC). The detectors (10×10×1 and 10×10×2 mm³ single crystals) have an anode layout based on an array of 256 pixels with a geometric pitch of 0.5 mm. The ASIC, fabricated in 0.8 μm BiCMOS technology, is equipped with eight independent channels (preamplifier and shaper) and characterized by low power consumption (0.5 mW/channel) and low noise (150-500 electrons rms). The spectroscopic results point out the good energy resolution of both detectors at room temperature [5.8% full width at half maximum (FWHM) at 59.5 keV for the 1 mm thick detector; 5.5% FWHM at 59.5 keV for the 2 mm thick detector] and low tailing in the measured spectra, confirming the single-charge-carrier sensing properties of CdZnTe detectors equipped with a pixellated anode layout. Temperature measurements show optimum performance of the system (detector and electronics) at T = 10 °C and performance degradation at lower temperatures. The detectors and the ASIC were developed by our collaboration as two small focal plane detector prototypes for hard x-ray multilayer telescopes operating in the 20-70 keV energy range.

  17. Optical multi-token-ring networking using smart pixels with field programmable gate arrays (FPGAs)

    NASA Astrophysics Data System (ADS)

    Zhang, Liping; Hong, Sunkwang; Min, Changki; Alpaslan, Zahir Y.; Sawchuk, Alexander A.

    2001-12-01

    This research explores architectures and design principles for monolithic optoelectronic integrated circuits (OEICs) through the implementation of an optical multi-token-ring network testbed system. Monolithic smart pixel CMOS OEICs are of paramount importance to high performance networks, communication switches, computer interfaces, and parallel signal processing for demanding future multimedia applications. The general testbed system is called Reconfigurable Translucent Smart Pixel Array (R-Transpar) and includes a field programmable gate array (FPGA), a transimpedance receiver array, and an optoelectronic very large-scale integrated (OE-VLSI) smart pixel array. The FPGA is an Altera FLEX10K100E chip that performs logic functions and receives inputs from the transimpedance receiver array. A monolithic (OE-VLSI) smart pixel device containing an array of 4 × 4 vertical-cavity surface-emitting lasers (VCSELs) spatially interlaced with an array of 4 × 4 metal-semiconductor-metal (MSM) detectors connects to these devices and performs optical input-output functions. These components are mounted on a printed circuit board for testing and evaluation of integrated monolithic OEIC designs and various optical interconnection techniques. The system moves information between nodes by transferring 3-D optical packets in free space or through fiber image guides. The R-Transpar system is reconfigurable to test different network protocols and signal processing functions. In its operation as a 3-D multi-token-ring network, we use a specific version of the system called Transpar-Token-Ring (Transpar-TR) that uses novel time-division multiplexed (TDM) network node addressing to enhance channel utilization and throughput. Host computers interface with the system via a high-speed digital I/O board that sends commands for networking and application algorithm operations. We describe the system operation and experimental results in detail.

  18. Pixel-Based CTI Corrections for HST/STIS CCD Data

    NASA Astrophysics Data System (ADS)

    Biretta, John A.; Lockwood, Sean A.; Debes, John H.

    2016-01-01

    The Space Telescope Imaging Spectrograph (STIS) team at STScI has created stand-alone automated software to apply Charge Transfer Inefficiency (CTI) corrections to STIS CCD data. CTI results from radiation damage to the CCD detector during its many years in the space environment on-board the Hubble Space Telescope (HST). The software will remove trails and other image artifacts caused by CTI, and will help correct target fluxes and positions to their proper values. The software script (stis_cti v1.0) uses a pixel-based correction algorithm, and will correct both images and spectra. It automatically generates CTI corrected dark reference files, applies CTI corrections to the science data, and outputs the usual CALSTIS products with CTI corrections applied. Currently only the most common observation modes are supported -- full-frame, non-binned data, taken with the default CCD amplifier; future enhancements may include sub-array data. It is available free to the community for download and use. Further information can be found at www.stsci.edu/hst/stis/software/analyzing/scripts/pixel_based_CTI.

  19. Introducing sub-wavelength pixel THz camera for the understanding of close pixel-to-wavelength imaging challenges

    NASA Astrophysics Data System (ADS)

    Bergeron, A.; Marchese, L.; Bolduc, M.; Terroux, M.; Dufour, D.; Savard, E.; Tremblay, B.; Oulachgar, H.; Doucet, M.; Le Noc, L.; Alain, C.; Jerominek, H.

    2012-06-01

    Conventional guidelines and approximations useful in macro-scale system design can become invalid when applied to smaller scales. An illustration of this is when camera pixel size becomes smaller than the diffraction-limited resolution of the incident light. It is sometimes believed that there is no benefit in having a pixel width smaller than the resolving limit defined by the Rayleigh criterion, 1.22 λ F/#. Though this rarely occurs in today's imaging technology, terahertz (THz) imaging is one emerging area where the pixel dimensions can be made smaller than the imaging wavelength. With terahertz camera technology, we are able to achieve sub-wavelength pixel sampling pitch, and are therefore capable of directly measuring whether there are image quality benefits to be derived from sub-wavelength sampling. Interest in terahertz imaging is high due to potential uses in security applications because of the greater penetration depth of terahertz radiation compared to the infrared and the visible. This paper discusses the modification by INO of its infrared MEMS microbolometer detector technology toward a THz imaging platform yielding a sub-wavelength pixel THz camera. Images obtained with this camera are reviewed in this paper. Measurements were also obtained using microscanning to increase sampling resolution. Parameters such as imaging resolution and sampling are addressed. A comparison is also made with results obtained with an 8-12 μm band camera having a pixel pitch close to the diffraction limit.

  20. A nonlinear convolution model for the evasion of CO2 injected into the deep ocean

    NASA Astrophysics Data System (ADS)

    Kheshgi, Haroon S.; Archer, David E.

    2004-02-01

    Deep ocean storage of CO2 captured from, for example, flue gases is being considered as a potential response option to global warming concerns. For storage to be effective, CO2 injected into the deep ocean must remain sequestered from the atmosphere for a long time. However, a fraction of CO2 injected into the deep ocean is expected to eventually evade into the atmosphere. This fraction is expected to depend on the time since injection, the location of injection, and the future atmospheric concentration of CO2. We approximate the evasion of injected CO2 at specific locations using a nonlinear convolution model including explicitly the nonlinear response of CO2 solubility to future CO2 concentration and alkalinity and Green's functions for the transport of CO2 from injection locations to the ocean surface as well as the alkalinity response to seafloor CaCO3 dissolution. Green's functions are calculated from the results of a three-dimensional model of the ocean carbon cycle for impulses of CO2 either released to the atmosphere or injected at locations deep in the Pacific and Atlantic oceans. CO2 transport in the three-dimensional (3-D) model is governed by offline tracer transport in the ocean interior, exchange of CO2 with the atmosphere, and dissolution of ocean sediments. The convolution model is found to accurately approximate results of the 3-D model in test cases including both deep-ocean injection and sediment dissolution. The convolution model allows comparison of the CO2 evasion delay achieved by deep ocean injection with notional scenarios for CO2 stabilization and the time extent of the fossil fuel era.
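
    The linear core of such a scheme is a discrete convolution of the injection history with a location-dependent Green's function. The sketch below uses a made-up two-exponential impulse response for the evaded fraction and an illustrative injection scenario; the actual model adds the nonlinear carbonate-chemistry and alkalinity terms described above.

        import numpy as np

        dt = 1.0                                    # yr
        t = np.arange(0, 1000, dt)

        # hypothetical Green's function: fraction of an impulse injection that has
        # evaded to the atmosphere a time tau after injection at a given site
        def evaded_fraction(tau):
            return 0.7 * (1 - np.exp(-tau / 300.0)) + 0.1 * (1 - np.exp(-tau / 50.0))

        # illustrative scenario: constant injection for the first 100 years, then zero
        injection = np.where(t < 100.0, 0.1, 0.0)   # GtC / yr

        # cumulative evasion(t) = sum over past injections of rate * evaded_fraction(t - t')
        G = evaded_fraction(t)
        cumulative_evasion = np.convolve(injection, G)[: len(t)] * dt

        total_injected = injection.sum() * dt
        print(f"injected: {total_injected:.1f} GtC, "
              f"evaded after {t[-1]:.0f} yr: {cumulative_evasion[-1]:.2f} GtC "
              f"({cumulative_evasion[-1] / total_injected:.1%})")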

  1. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings as a key element of city structures for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of buildings' roof models, such as flat, gable, hip, and pyramid hip roof models, based on deep structures for hierarchical learning of features that are extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features for the convolutional neural network to localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept; it consists of a number of convolutional and subsampling layers in an adaptable structure and is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different shapes of roofs, the computation time of learning can be decreased significantly using the pre-trained models. The experimental results highlight the effectiveness of the deep learning approach to detect and extract the pattern of buildings' roofs automatically, considering the complementary nature of height and RGB information.

  2. De-convoluting mixed crude oil in Prudhoe Bay Field, North Slope, Alaska

    USGS Publications Warehouse

    Peters, K.E.; Scott, Ramos L.; Zumberge, J.E.; Valin, Z.C.; Bird, K.J.

    2008-01-01

    Seventy-four crude oil samples from the Barrow arch on the North Slope of Alaska were studied to assess the relative volumetric contributions from different source rocks to the giant Prudhoe Bay Field. We applied alternating least squares to concentration data (ALS-C) for 46 biomarkers in the range C19-C35 to de-convolute mixtures of oil generated from carbonate rich Triassic Shublik Formation and clay rich Jurassic Kingak Shale and Cretaceous Hue Shale-gamma ray zone (Hue-GRZ) source rocks. ALS-C results for 23 oil samples from the prolific Ivishak Formation reservoir of the Prudhoe Bay Field indicate approximately equal contributions from Shublik Formation and Hue-GRZ source rocks (37% each), less from the Kingak Shale (26%), and little or no contribution from other source rocks. These results differ from published interpretations that most oil in the Prudhoe Bay Field originated from the Shublik Formation source rock. With few exceptions, the relative contribution of oil from the Shublik Formation decreases, while that from the Hue-GRZ increases in reservoirs along the Barrow arch from Point Barrow in the northwest to Point Thomson in the southeast (≈250 miles or 400 km). The Shublik contribution also decreases to a lesser degree between fault blocks within the Ivishak pool from west to east across the Prudhoe Bay Field. ALS-C provides a robust means to calculate the relative amounts of two or more oil types in a mixture. Furthermore, ALS-C does not require that pure end member oils be identified prior to analysis or that laboratory mixtures of these oils be prepared to evaluate mixing. ALS-C of biomarkers reliably de-convolutes mixtures because the concentrations of compounds in mixtures vary as linear functions of the amount of each oil type. ALS of biomarker ratios (ALS-R) cannot be used to de-convolute mixtures because compound ratios vary as nonlinear functions of the amount of each oil type.
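
    A schematic of the alternating-least-squares idea on concentration data: given a biomarker concentration matrix for the oil samples, alternately solve non-negative least-squares problems for the end-member compositions and for the mixing fractions, with the fraction rows normalized to sum to 1. The end members and data below are synthetic placeholders, not the published Shublik/Kingak/Hue-GRZ results.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        n_samples, n_biomarkers, n_endmembers = 30, 46, 3

        # synthetic "true" end-member biomarker concentrations and mixing fractions
        S_true = rng.uniform(0.1, 5.0, size=(n_endmembers, n_biomarkers))
        A_true = rng.dirichlet(np.ones(n_endmembers), size=n_samples)
        C = A_true @ S_true + 0.02 * rng.standard_normal((n_samples, n_biomarkers))

        # ALS on concentrations: iterate  S <- argmin ||C - A S||,  A <- argmin ||C - A S||,
        # with non-negativity on both and the rows of A renormalized to sum to 1
        A = rng.dirichlet(np.ones(n_endmembers), size=n_samples)
        for _ in range(50):
            S = np.array([nnls(A, C[:, j])[0] for j in range(n_biomarkers)]).T
            A = np.array([nnls(S.T, C[i, :])[0] for i in range(n_samples)])
            A /= A.sum(axis=1, keepdims=True) + 1e-12

        # note: the recovered components may be permuted relative to the truth
        print("recovered mixing fractions (first 3 samples):")
        print(np.round(A[:3], 2))
        print("true fractions:")
        print(np.round(A_true[:3], 2))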

  3. Automatic breast density classification using a convolutional neural network architecture search procedure

    NASA Astrophysics Data System (ADS)

    Fonseca, Pablo; Mendoza, Julio; Wainer, Jacques; Ferrer, Jose; Pinto, Joseph; Guerrero, Jorge; Castaneda, Benjamin

    2015-03-01

    Breast parenchymal density is considered a strong indicator of breast cancer risk and therefore useful for preventive tasks. Measurement of breast density is often qualitative and requires the subjective judgment of radiologists. Here we explore an automatic breast composition classification workflow based on convolutional neural networks for feature extraction in combination with a support vector machines classifier. This is compared to the assessments of seven experienced radiologists. The experiments yielded an average kappa value of 0.58 when using the mode of the radiologists' classifications as ground truth. Individual radiologist performance against this ground truth yielded kappa values between 0.56 and 0.79.
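
    The workflow above pairs a convolutional feature extractor with an SVM classifier. The sketch below wires that combination up with a small untrained CNN in PyTorch and scikit-learn's SVC on random stand-in patches and labels; real use would train or pretrain the CNN and feed actual mammograms with density-class labels.

        import numpy as np
        import torch
        import torch.nn as nn
        from sklearn.svm import SVC

        torch.manual_seed(0)

        # small convolutional feature extractor (untrained here; a stand-in for the real one)
        features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(4),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

        def extract(images):
            with torch.no_grad():
                return features(torch.as_tensor(images, dtype=torch.float32)).numpy()

        # placeholder data: 64 single-channel 128x128 "patches" with 4 composition classes
        rng = np.random.default_rng(0)
        X_img = rng.random((64, 1, 128, 128)).astype(np.float32)
        y = rng.integers(0, 4, size=64)

        X_feat = extract(X_img)                     # (64, 16) feature vectors
        clf = SVC(kernel="rbf").fit(X_feat, y)      # SVM on the CNN features
        print("training accuracy on placeholder data:", clf.score(X_feat, y))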

  4. Using convolutional neural networks for human activity classification on micro-Doppler radar spectrograms

    NASA Astrophysics Data System (ADS)

    Jordan, Tyler S.

    2016-05-01

    This paper presents the findings of using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features. An emphasis on activities involving potential security threats such as holding a gun are explored. An automotive 24 GHz radar on chip was used to collect the data and a CNN (normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65 % on classifying running vs. walking, 17.3 % error on armed walking vs. unarmed walking, and 22 % on classifying six different actions.

  5. Robust and accurate transient light transport decomposition via convolutional sparse coding.

    PubMed

    Hu, Xuemei; Deng, Yue; Lin, Xing; Suo, Jinli; Dai, Qionghai; Barsi, Christopher; Raskar, Ramesh

    2014-06-01

    Ultrafast sources and detectors have been used to record the time-resolved scattering of light propagating through macroscopic scenes. In the context of computational imaging, decomposition of this transient light transport (TLT) is useful for applications, such as characterizing materials, imaging through diffuser layers, and relighting scenes dynamically. Here, we demonstrate a method of convolutional sparse coding to decompose TLT into direct reflections, inter-reflections, and subsurface scattering. The method relies on the sparsity composition of the time-resolved kernel. We show that it is robust and accurate to noise during the acquisition process.

  6. Diffuse dispersive delay and the time convolution/attenuation of transients

    NASA Technical Reports Server (NTRS)

    Bittner, Burt J.

    1991-01-01

    Test data and analytic evaluations are presented to show that relatively poor 100 kHz shielding of 12 dB can effectively provide an electromagnetic pulse transient reduction of 100 dB. More importantly, several techniques are shown for lightning surge attenuation as an alternative to crowbar, spark gap, or power zener type clipping, which simply reflects the surge. A time delay test method is shown which allows CW testing, along with a convolution program to define transient shielding effectivity where the Fourier phase characteristics of the transient are known or can be broadly estimated.
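
    The frequency-domain version of that convolution approach can be sketched as follows: take the shield's complex transfer function (as would be obtained from CW sweeps), multiply it against the transient's spectrum, and inverse-transform to get the attenuated time-domain waveform. The single-pole transfer function and double-exponential pulse below are illustrative stand-ins, and the resulting numbers depend strongly on the assumed waveform and filter.

        import numpy as np

        fs = 10.0e9                               # 10 GHz sampling to resolve the fast rise
        t = np.arange(0, 50.0e-6, 1 / fs)

        # illustrative EMP-like double-exponential transient (fast rise, ~25 ns decay)
        pulse = np.exp(-4.0e7 * t) - np.exp(-6.0e8 * t)

        # illustrative CW-measured shield response: single pole giving ~12 dB loss at 100 kHz
        f = np.fft.rfftfreq(len(t), 1 / fs)
        f_c = 26.0e3                              # hypothetical corner frequency
        H = 1.0 / (1.0 + 1j * f / f_c)            # complex transfer function (magnitude and phase)

        cw_loss_100k = 10 * np.log10(1 + (100e3 / f_c) ** 2)
        shielded = np.fft.irfft(np.fft.rfft(pulse) * H, n=len(t))
        peak_atten = 20 * np.log10(pulse.max() / np.abs(shielded).max())

        print(f"CW loss at 100 kHz: {cw_loss_100k:.1f} dB")
        print(f"peak reduction of the fast transient: {peak_atten:.1f} dB")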

  7. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    DOE PAGESBeta

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; et al

    2014-12-08

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed

  8. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    NASA Astrophysics Data System (ADS)

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.

    2014-12-01

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator's vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator's vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator's magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that showed efficient

  9. Cygrid: Cython-powered convolution-based gridding module for Python

    NASA Astrophysics Data System (ADS)

    Winkel, B.; Lenz, D.; Flöer, L.

    2016-06-01

    The Python module Cygrid grids (resamples) data to any collection of spherical target coordinates, although its typical application involves FITS maps or data cubes. The module supports the FITS world coordinate system (WCS) standard; its underlying algorithm is based on the convolution of the original samples with a 2D Gaussian kernel. A lookup table scheme allows parallelization of the code and is combined with the HEALPix tessellation of the sphere for fast neighbor searches. Cygrid's runtime scales between O(n) and O(n log n), with n being the number of input samples.
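
    The following is not the Cygrid API itself, but a minimal illustration of convolution-based gridding on a flat grid: each irregular sample is spread onto nearby target cells with a 2D Gaussian kernel restricted to a small support (the "lookup" step), and a parallel weight map normalizes the result. All coordinates, kernel widths, and the synthetic signal are placeholders.

        import numpy as np

        rng = np.random.default_rng(2)

        # irregularly sampled data (e.g., coordinates in degrees and a measured signal)
        x = rng.uniform(0.0, 1.0, 5000)
        y = rng.uniform(0.0, 1.0, 5000)
        signal = np.sin(4 * np.pi * x) * np.cos(4 * np.pi * y) + 0.1 * rng.standard_normal(x.size)

        # target grid and Gaussian gridding kernel
        nx = ny = 64
        gx, gy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
        sigma = 0.015                    # kernel width, of order the target cell size
        support = 3 * sigma              # ignore contributions beyond 3 sigma

        grid = np.zeros((ny, nx))
        weights = np.zeros((ny, nx))
        for xi, yi, si in zip(x, y, signal):
            # restrict to grid cells within the kernel support
            mask = (np.abs(gx - xi) < support) & (np.abs(gy - yi) < support)
            w = np.exp(-((gx[mask] - xi) ** 2 + (gy[mask] - yi) ** 2) / (2 * sigma ** 2))
            grid[mask] += w * si
            weights[mask] += w

        gridded = np.where(weights > 0, grid / np.maximum(weights, 1e-12), np.nan)
        print("gridded map shape:", gridded.shape, " empty cells:", int(np.isnan(gridded).sum()))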

  10. A convolutional recursive modified Self Organizing Map for handwritten digits recognition.

    PubMed

    Mohebi, Ehsan; Bagirov, Adil

    2014-12-01

    It is well known that the handwritten digits recognition is a challenging problem. Different classification algorithms have been applied to solve it. Among them, the Self Organizing Maps (SOM) produced promising results. In this paper, first we introduce a Modified SOM for the vector quantization problem with improved initialization process and topology preservation. Then we develop a Convolutional Recursive Modified SOM and apply it to the problem of handwritten digits recognition. The computational results obtained using the well known MNIST dataset demonstrate the superiority of the proposed algorithm over the existing SOM-based algorithms.

  11. Parity retransmission hybrid ARQ using rate 1/2 convolutional codes on a nonstationary channel

    NASA Technical Reports Server (NTRS)

    Lugand, Laurent R.; Costello, Daniel J., Jr.; Deng, Robert H.

    1989-01-01

    A parity retransmission hybrid automatic repeat request (ARQ) scheme is proposed which uses rate 1/2 convolutional codes and Viterbi decoding. A protocol is described which is capable of achieving higher throughputs than previously proposed parity retransmission schemes. The performance analysis is based on a two-state Markov model of a nonstationary channel. This model constitutes a first approximation to a nonstationary channel. The two-state channel model is used to analyze the throughput and undetected error probability of the protocol presented when the receiver has both an infinite and a finite buffer size. It is shown that the throughput improves as the channel becomes more bursty.
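
    A quick way to see what the two-state channel model implies is to simulate it: a "good" and a "bad" state with different bit-error rates and Markov transitions between them. The transition probabilities and error rates below are illustrative choices, not the paper's values.

        import numpy as np

        rng = np.random.default_rng(3)

        n_bits = 1_000_000
        p_gb, p_bg = 1e-3, 1e-2          # P(good->bad), P(bad->good): a bursty channel
        ber = {0: 1e-5, 1: 5e-2}         # bit-error rate in good (0) and bad (1) states

        state = 0
        errors = np.zeros(n_bits, dtype=bool)
        states = np.zeros(n_bits, dtype=np.int8)
        for i in range(n_bits):
            states[i] = state
            errors[i] = rng.random() < ber[state]
            if state == 0 and rng.random() < p_gb:
                state = 1
            elif state == 1 and rng.random() < p_bg:
                state = 0

        print(f"time in bad state: {states.mean():.1%} "
              f"(steady state predicts {p_gb / (p_gb + p_bg):.1%})")
        print(f"overall BER: {errors.mean():.2e}")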

  12. Processing circuit with asymmetry corrector and convolutional encoder for digital data

    NASA Technical Reports Server (NTRS)

    Pfiffner, Harold J. (Inventor)

    1987-01-01

    A processing circuit is provided for correcting for input parameter variations, such as data and clock signal symmetry, phase offset and jitter, noise and signal amplitude, in incoming data signals. An asymmetry corrector circuit performs the correcting function and furnishes the corrected data signals to a convolutional encoder circuit. The corrector circuit further forms a regenerated clock signal from clock pulses in the incoming data signals and another clock signal at a multiple of the incoming clock signal. These clock signals are furnished to the encoder circuit so that encoded data may be furnished to a modulator at a high data rate for transmission.

  13. Vibration analysis of FG cylindrical shells with power-law index using discrete singular convolution technique

    NASA Astrophysics Data System (ADS)

    Mercan, Kadir; Demir, Çiǧdem; Civalek, Ömer

    2016-01-01

    In the present manuscript, the free vibration response of circular cylindrical shells with functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the related governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first-approximation shell theory. The material properties are graded in the thickness direction according to volume fraction power-law indexes. Frequency values are calculated for different types of boundary conditions, material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.
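
    The core ingredient of the DSC method is differentiation through a regularized Shannon delta kernel. A minimal 1-D sketch, with assumed bandwidth and regularization parameters unrelated to the shell problem itself, is given below; at the grid points the differentiated kernel reduces to a simple closed form.

        import numpy as np

        # grid and test function
        h = 0.05
        x = np.arange(-10, 10, h)
        f = np.sin(x)

        # regularized Shannon kernel parameters (typical DSC-style choices)
        M = 32                          # half-bandwidth of the stencil
        sigma = 3.2 * h                 # Gaussian regularizer width

        # weights of the differentiated kernel at the grid points:
        # d/dx [ sinc(pi x / h) * exp(-x^2 / (2 sigma^2)) ] at x = j*h (j != 0)
        # reduces to (-1)^j / (j h) * exp(-(j h)^2 / (2 sigma^2)); the j = 0 weight is 0
        j = np.arange(1, M + 1)
        w = (-1.0) ** j / (j * h) * np.exp(-(j * h) ** 2 / (2 * sigma ** 2))

        df = np.zeros_like(f)
        for jj, wj in zip(j, w):
            df[M:-M] += wj * (f[M - jj:-M - jj] - f[M + jj:len(f) - M + jj])

        err = np.max(np.abs(df[M:-M] - np.cos(x)[M:-M]))
        print(f"max error of the DSC first derivative of sin(x): {err:.2e}")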

  14. Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding

    PubMed Central

    Johnson, Rie; Zhang, Tong

    2016-01-01

    This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks. PMID:27087766

  15. Voltage measurements at the vacuum post-hole convolute of the Z pulsed-power accelerator

    SciTech Connect

    Waisman, E. M.; McBride, R. D.; Cuneo, M. E.; Wenger, D. F.; Fowler, W. E.; Johnson, W. A.; Basilio, L. I.; Coats, R. S.; Jennings, C. A.; Sinars, D. B.; Vesey, R. A.; Jones, B.; Ampleford, D. J.; Lemke, R. W.; Martin, M. R.; Schrafel, P. C.; Lewis, S. A.; Moore, J. K.; Savage, M. E.; Stygar, W. A.

    2014-12-08

    Presented are voltage measurements taken near the load region on the Z pulsed-power accelerator using an inductive voltage monitor (IVM). Specifically, the IVM was connected to, and thus monitored the voltage at, the bottom level of the accelerator’s vacuum double post-hole convolute. Additional voltage and current measurements were taken at the accelerator’s vacuum-insulator stack (at a radius of 1.6 m) by using standard D-dot and B-dot probes, respectively. During postprocessing, the measurements taken at the stack were translated to the location of the IVM measurements by using a lossless propagation model of the Z accelerator’s magnetically insulated transmission lines (MITLs) and a lumped inductor model of the vacuum post-hole convolute. Across a wide variety of experiments conducted on the Z accelerator, the voltage histories obtained from the IVM and the lossless propagation technique agree well in overall shape and magnitude. However, large-amplitude, high-frequency oscillations are more pronounced in the IVM records. It is unclear whether these larger oscillations represent true voltage oscillations at the convolute or if they are due to noise pickup and/or transit-time effects and other resonant modes in the IVM. Results using a transit-time-correction technique and Fourier analysis support the latter. Regardless of which interpretation is correct, both true voltage oscillations and the excitement of resonant modes could be the result of transient electrical breakdowns in the post-hole convolute, though more information is required to determine definitively if such breakdowns occurred. Despite the larger oscillations in the IVM records, the general agreement found between the lossless propagation results and the results of the IVM shows that large voltages are transmitted efficiently through the MITLs on Z. These results are complementary to previous studies [R. D. McBride et al., Phys. Rev. ST Accel. Beams 13, 120401 (2010)] that

  16. Performance of DPSK with convolutional encoding on time-varying fading channels

    NASA Technical Reports Server (NTRS)

    Mui, S. Y.; Modestino, J. W.

    1977-01-01

    The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
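
    For reference, the kind of rate 1/2 convolutional code assumed in such analyses can be encoded with a simple shift register. The sketch below uses the common constraint-length-7 generators (171, 133 octal) purely as an illustrative choice, and the register/tap ordering convention is one of several in use, not necessarily the paper's.

        import numpy as np

        G = (0o171, 0o133)          # generator polynomials, constraint length K = 7
        K = 7

        def conv_encode(bits):
            """Rate 1/2 convolutional encoder: two output bits per input bit."""
            state = 0
            out = []
            for b in bits:
                state = ((state << 1) | int(b)) & ((1 << K) - 1)   # shift the new bit in
                for g in G:
                    out.append(bin(state & g).count("1") % 2)      # parity of the tapped bits
            return np.array(out, dtype=np.uint8)

        rng = np.random.default_rng(4)
        msg = rng.integers(0, 2, size=16)
        code = conv_encode(msg)
        print("message :", msg)
        print("codeword:", code, f"(rate = {len(msg)}/{len(code)})")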

  17. VeloPix: the pixel ASIC for the LHCb upgrade

    NASA Astrophysics Data System (ADS)

    Poikela, T.; De Gaspari, M.; Plosila, J.; Westerlund, T.; Ballabriga, R.; Buytaert, J.; Campbell, M.; Llopart, X.; Wyllie, K.; Gromov, V.; van Beuzekom, M.; Zivkovic, V.

    2015-01-01

    The LHCb Vertex Detector (VELO) will be upgraded in 2018 along with the other subsystems of LHCb in order to enable full readout at 40 MHz, with the data fed directly to the software triggering algorithms. The upgraded VELO is a lightweight hybrid pixel detector operating in vacuum in close proximity to the LHC beams. The readout will be provided by a dedicated front-end ASIC, dubbed VeloPix, matched to the LHCb readout requirements and the 55 × 55 μm VELO pixel dimensions. The chip is closely related to the Timepix3, from the Medipix family of ASICs. The principal challenge that the chip has to meet is a hit rate of up to 900 Mhits/s, resulting in a required output bandwidth of more than 16 Gbit/s. The occupancy across the chip is also very non-uniform, and the radiation levels reach an integrated 400 Mrad over the lifetime of the detector. VeloPix is a binary pixel readout chip with a data driven readout, designed in 130 nm CMOS technology. The pixels are combined into groups of 2 × 4 super pixels, enabling a shared logic and a reduction of bandwidth due to combined address and time stamp information. The pixel hits are combined with other simultaneous hits in the same super pixel, time stamped, and immediately driven off-chip. The analog front-end must be sufficiently fast to accurately time stamp the data, with a small enough dead time to minimize data loss in the most occupied regions of the chip. The data is driven off chip with a custom designed high speed serialiser. The current status of the ASIC design, the chip architecture and the simulations will be described.

  18. Level-1 pixel based tracking trigger algorithm for LHC upgrade

    NASA Astrophysics Data System (ADS)

    Moon, C.-S.; Savoy-Navarro, A.

    2015-10-01

    The Pixel Detector is the innermost detector of the tracking system of the Compact Muon Solenoid (CMS) experiment at the CERN Large Hadron Collider (LHC). It precisely determines the interaction point (primary vertex) of the events and the possible secondary vertices due to heavy flavours (b and c quarks); it is part of the overall tracking system that allows reconstructing the tracks of the charged particles in the events and, combined with the magnetic field, measuring their momentum. The pixel detector allows measuring the tracks in the region closest to the interaction point. The Level-1 (real-time) pixel based tracking trigger is a novel trigger system that is currently being studied for the LHC upgrade. An important goal is developing real-time track reconstruction algorithms able to cope with very high rates and a high flux of data in a very harsh environment. The pixel detector has an especially crucial role in precisely identifying the primary vertex of the rare physics events from the large pile-up (PU) of events. The goal of adding the pixel information already at the real-time level of the selection is to help reduce the total Level-1 trigger rate while keeping a high selection capability. This is quite an innovative and challenging objective for the experiments' upgrade for the High Luminosity LHC (HL-LHC). The special case here addressed is the CMS experiment. This document describes exercises focusing on the development of a fast pixel track reconstruction where the pixel track matches with a Level-1 electron object using a ROOT-based simulation framework.

  19. Smart pixel imaging with computational-imaging arrays

    NASA Astrophysics Data System (ADS)

    Fernandez-Cull, Christy; Tyrrell, Brian M.; D'Onofrio, Richard; Bolstad, Andrew; Lin, Joseph; Little, Jeffrey W.; Blackwell, Megan; Renzi, Matthew; Kelly, Mike

    2014-07-01

    Smart pixel imaging with computational-imaging arrays (SPICA) transfers image plane coding typically realized in the optical architecture to the digital domain of the focal plane array, thereby minimizing signal-to-noise losses associated with static filters or apertures and inherent diffraction concerns. MIT Lincoln Laboratory has been developing digital-pixel focal plane array (DFPA) devices for many years. In this work, we leverage legacy designs modified with new features to realize a computational imaging array (CIA) with advanced pixel-processing capabilities. We briefly review the use of DFPAs for on-chip background removal and image plane filtering. We focus on two digital readout integrated circuits (DROICs) as CIAs for two-dimensional (2D) transient target tracking and three-dimensional (3D) transient target estimation using per-pixel coded-apertures or flutter shutters. This paper describes two DROICs - a SWIR pixel-processing imager (SWIR-PPI) and a Visible CIA (VISCIA). SWIR-PPI is a DROIC with a 1 kHz global frame rate with a maximum per-pixel shuttering rate of 100 MHz, such that each pixel can be modulated by a time-varying, pseudorandom, and duo-binary signal (+1,-1,0). Combining per-pixel time-domain coding and processing enables 3D (x,y,t) target estimation with limited loss of spatial resolution. We evaluate structured and pseudo-random encoding strategies and employ linear inversion and non-linear inversion using total-variation minimization to estimate a 3D data cube from a single 2D temporally-encoded measurement. The VISCIA DROIC, while low-resolution, has a 6 kHz global frame rate and simultaneously encodes eight periodic or aperiodic transient target signatures at a maximum rate of 50 MHz using eight 8-bit counters. By transferring pixel-based image plane coding to the DROIC and utilizing sophisticated processing, our CIAs enable on-chip temporal super-resolution.

  20. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. In this paper, an automatic image classification and outlier identification method is proposed, based on clustering by the density of superpixel cluster centers. The pixel location coordinates and gray values of the image are used to compute a density and a distance for each point, from which automatic classification and outlier extraction are achieved. Because the large number of pixels dramatically increases the computational complexity, the image is first preprocessed into a small number of superpixel sub-blocks before the density and distance calculations; a normalized density-and-distance discrimination rule is then designed to select cluster centers automatically, whereby the image is automatically classified and outliers are identified. Extensive experiments show that our method requires no human intervention, categorizes images with faster computing speed than the density clustering algorithm, and performs automated classification and outlier extraction effectively.
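
    Reading the description above as a density-peaks style scheme over superpixel features: compute a local density and a distance-to-denser-point for every point, take points with a large normalized product of the two as cluster centers, assign the remaining points to the cluster of their nearest denser neighbor, and flag low-density, high-distance points as outliers. The sketch below runs this on synthetic 2-D points standing in for superpixel position/gray-level features; all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)

        # synthetic superpixel features: two blobs plus a few outliers
        pts = np.vstack([
            rng.normal([0, 0], 0.3, (100, 2)),
            rng.normal([3, 3], 0.3, (100, 2)),
            rng.uniform(-2, 6, (5, 2)),
        ])

        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)   # pairwise distances
        dc = 0.5                                                         # cutoff distance
        rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0                   # local density

        # delta: distance to the nearest point of higher density
        order = np.argsort(-rho)
        delta = np.full(len(pts), d.max())
        nearest_denser = np.full(len(pts), -1)
        for rank, i in enumerate(order[1:], start=1):
            higher = order[:rank]
            k = higher[np.argmin(d[i, higher])]
            delta[i], nearest_denser[i] = d[i, k], k

        # normalized decision quantity: large rho * delta -> cluster center
        gamma = (rho / rho.max()) * (delta / delta.max())
        centers = np.argsort(-gamma)[:2]                 # assume two clusters for this sketch

        # assign each point to the cluster of its nearest denser neighbor
        labels = np.full(len(pts), -1)
        labels[centers] = np.arange(len(centers))
        for i in order:
            if labels[i] < 0:
                labels[i] = labels[nearest_denser[i]]

        outliers = np.where((rho < np.percentile(rho, 5)) & (delta > dc))[0]
        print("cluster sizes:", np.bincount(labels), " flagged outliers:", len(outliers))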

  1. Multiport solid-state imager characterization at variable pixel rates

    SciTech Connect

    Yates, G.J.; Albright, K.A.; Turko, B.T.

    1993-08-01

    The imaging performance of an 8-port Full Frame Transfer Charge Coupled Device (FFT CCD) as a function of several parameters including pixel clock rate is presented. The device, model CCD-13, manufactured by English Electric Valve (EEV), is a 512 × 512 pixel array designed with four individual programmable bidirectional serial registers and eight output amplifiers permitting simultaneous readout of eight segments (128 horizontal × 256 vertical pixels) of the array. The imager was evaluated in Los Alamos National Laboratory's High-Speed Solid-State Imager Test Station at true pixel rates as high as 50 MHz for effective imager pixel rates approaching 400 MHz from multiporting. Key response characteristics measured include absolute responsivity, Charge-Transfer-Efficiency (CTE), dynamic range, resolution, signal-to-noise ratio, and electronic and optical crosstalk among the eight video channels. Preliminary test results and data obtained from the CCD-13 will be presented and the versatility/capabilities of the test station will be reviewed.

  2. Pixel classification based color image segmentation using quaternion exponent moments.

    PubMed

    Wang, Xiang-Yang; Wu, Zhi-Fang; Chen, Liang; Zheng, Hong-Liang; Yang, Hong-Ying

    2016-02-01

    Image segmentation remains an important, but hard-to-solve, problem since it appears to be application dependent with usually no a priori information available regarding the image structure. In recent years, many image segmentation algorithms have been developed, but they are often very complex and some undesired results occur frequently. In this paper, we propose a pixel classification based color image segmentation using quaternion exponent moments. Firstly, the pixel-level image feature is extracted based on quaternion exponent moments (QEMs), which can capture effectively the image pixel content by considering the correlation between different color channels. Then, the pixel-level image feature is used as input of twin support vector machines (TSVM) classifier, and the TSVM model is trained by selecting the training samples with Arimoto entropy thresholding. Finally, the color image is segmented with the trained TSVM model. The proposed scheme has the following advantages: (1) the effective QEMs is introduced to describe color image pixel content, which considers the correlation between different color channels, (2) the excellent TSVM classifier is utilized, which has lower computation time and higher classification accuracy. Experimental results show that our proposed method has very promising segmentation performance compared with the state-of-the-art segmentation approaches recently proposed in the literature.

  3. PIXSCAN: Pixel detector CT-scanner for small animal imaging

    NASA Astrophysics Data System (ADS)

    Delpierre, P.; Debarbieux, F.; Basolo, S.; Berar, J. F.; Bonissent, A.; Boudet, N.; Breugnon, P.; Caillot, B.; Cassol Brunner, F.; Chantepie, B.; Clemens, J. C.; Dinkespiler, B.; Khouri, R.; Koudobine, I.; Mararazzo, V.; Meessen, C.; Menouni, M.; Morel, C.; Mouget, C.; Pangaud, P.; Peyrin, F.; Rougon, G.; Sappey-Marinier, D.; Valton, S.; Vigeolas, E.

    2007-02-01

    The PIXSCAN is a small animal CT-scanner based on hybrid pixel detectors. These detectors provide a very large dynamic range of photon counting at very low detector noise. They also provide high counting rates with fast image readout. Detection efficiency can be optimized by selecting the sensor medium according to the working energy range. Indeed, the use of CdTe allows a detection efficiency of 100% up to 50 keV. Altogether these characteristics are expected to improve the contrast of the CT-scanner, especially for soft tissues, and to reduce both the scan duration and the absorbed dose. A proof of principle has been performed by assembling into a PIXSCAN-XPAD2 prototype the photon counting pixel detector initially built for the detection of synchrotron X-ray radiation. Despite the relatively large pixel size of this detector (330×330 μm²), we can present three-dimensional tomographic reconstruction of mice at good contrast and spatial resolution. A new photon counting chip (XPAD3) is designed in sub-micron technology to achieve 130×130 μm² pixels. This improved circuit has been equipped with an energy selection circuit to act as a band-pass emission filter. Furthermore, the PIXSCAN-XPAD3 hybrid pixel detectors will be combined with the Lausanne ClearPET scanner demonstrator. CT image reconstruction in this non-conventional geometry is under study for this purpose.

  4. Pixel classification based color image segmentation using quaternion exponent moments.

    PubMed

    Wang, Xiang-Yang; Wu, Zhi-Fang; Chen, Liang; Zheng, Hong-Liang; Yang, Hong-Ying

    2016-02-01

    Image segmentation remains an important, but hard-to-solve, problem since it appears to be application dependent with usually no a priori information available regarding the image structure. In recent years, many image segmentation algorithms have been developed, but they are often very complex and some undesired results occur frequently. In this paper, we propose a pixel classification based color image segmentation using quaternion exponent moments. Firstly, the pixel-level image feature is extracted based on quaternion exponent moments (QEMs), which can capture effectively the image pixel content by considering the correlation between different color channels. Then, the pixel-level image feature is used as input of twin support vector machines (TSVM) classifier, and the TSVM model is trained by selecting the training samples with Arimoto entropy thresholding. Finally, the color image is segmented with the trained TSVM model. The proposed scheme has the following advantages: (1) the effective QEMs is introduced to describe color image pixel content, which considers the correlation between different color channels, (2) the excellent TSVM classifier is utilized, which has lower computation time and higher classification accuracy. Experimental results show that our proposed method has very promising segmentation performance compared with the state-of-the-art segmentation approaches recently proposed in the literature. PMID:26618250

  5. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  6. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  7. Assessing the Firing Properties of the Electrically Stimulated Auditory Nerve Using a Convolution Model.

    PubMed

    Strahl, Stefan B; Ramekers, Dyan; Nagelkerke, Marjolijn M B; Schwarz, Konrad E; Spitzer, Philipp; Klis, Sjaak F L; Grolman, Wilko; Versnel, Huib

    2016-01-01

    The electrically evoked compound action potential (eCAP) is a routinely performed measure of the auditory nerve in cochlear implant users. Using a convolution model of the eCAP, additional information about the neural firing properties can be obtained, which may provide relevant information about the health of the auditory nerve. In this study, guinea pigs with various degrees of nerve degeneration were used to directly relate firing properties to nerve histology. The same convolution model was applied on human eCAPs to examine similarities and ultimately to examine its clinical applicability. For most eCAPs, the estimated nerve firing probability was bimodal and could be parameterised by two Gaussian distributions with an average latency difference of 0.4 ms. The ratio of the scaling factors of the late and early component increased with neural degeneration in the guinea pig. This ratio decreased with stimulation intensity in humans. The latency of the early component decreased with neural degeneration in the guinea pig. Indirectly, this was observed in humans as well, assuming that the cochlear base exhibits more neural degeneration than the apex. Differences between guinea pigs and humans were observed, among other parameters, in the width of the early component: very robust in guinea pig, and dependent on stimulation intensity and cochlear region in humans. We conclude that the deconvolution of the eCAP is a valuable addition to existing analyses, in particular as it reveals two separate firing components in the auditory nerve. PMID:27080655
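
    In outline, the convolution model treats the eCAP as the convolution of a unitary response (the waveform contributed by a single firing fiber) with a latency distribution of nerve firings. The sketch below builds a synthetic eCAP from an assumed biphasic unitary response and the bimodal (two-Gaussian) firing probability described above; all parameter values are made up for illustration.

        import numpy as np

        dt = 0.01                                   # ms
        t = np.arange(0, 4, dt)

        def gauss(t, mu, sigma):
            return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        # bimodal firing-probability density: early and late components ~0.4 ms apart
        firing = 0.7 * gauss(t, 0.35, 0.08) + 0.3 * gauss(t, 0.75, 0.12)

        # assumed unitary response: biphasic waveform of a single fiber (uV per fiber)
        ur = -0.05 * gauss(t, 0.15, 0.05) + 0.03 * gauss(t, 0.35, 0.10)

        n_fibers = 5000.0
        ecap = n_fibers * np.convolve(firing, ur)[: len(t)] * dt     # convolution model

        i_n1 = np.argmin(ecap)                       # negative (N1) peak of the synthetic eCAP
        print(f"synthetic eCAP N1 amplitude: {ecap[i_n1]:.1f} uV at {t[i_n1]:.2f} ms")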

  8. Efficient pedestrian detection from aerial vehicles with object proposals and deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2016-05-01

    As Unmanned Aerial Systems grow in numbers, pedestrian detection from aerial platforms is becoming a topic of increasing importance. By providing greater contextual information and a reduced potential for occlusion, the aerial vantage point provided by Unmanned Aerial Systems is highly advantageous for many surveillance applications, such as target detection, tracking, and action recognition. However, due to the greater distance between the camera and scene, targets of interest in aerial imagery are generally smaller and have less detail. Deep Convolutional Neural Networks (CNNs) have demonstrated excellent object classification performance, and in this paper we adapt them to the problem of pedestrian detection from aerial platforms. We train a CNN with five layers consisting of three convolution-pooling layers and two fully connected layers. We also address the computational inefficiencies of the sliding window method for object detection. In the sliding window configuration, a very large number of candidate patches are generated from each frame, while only a small number of them contain pedestrians. We utilize the Edge Box object proposal generation method to screen candidate patches based on an "objectness" criterion, so that only regions that are likely to contain objects are processed. This method significantly reduces the number of image patches processed by the neural network and makes our classification method very efficient. The resulting two-stage system is a good candidate for real-time implementation onboard modern aerial vehicles. Furthermore, testing on three datasets confirmed that our system offers high detection accuracy for terrestrial pedestrian detection in aerial imagery.
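
    A minimal PyTorch sketch of the two-stage idea is given below: a small patch classifier with three convolution-pooling stages and two fully connected layers, and a detection loop that classifies only the patches returned by an object-proposal step (Edge Boxes in the paper, represented here by a propose_boxes callable). Filter counts, the 64-pixel patch size, the frame tensor layout (C x H x W), and the 0.5 decision threshold are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchCNN(nn.Module):
    """Patch classifier with three conv-pool stages and two fully connected
    layers, as in the abstract (filter counts and patch size illustrative)."""
    def __init__(self, num_classes=2, patch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (patch // 8) ** 2, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def detect_pedestrians(frame, propose_boxes, model, patch=64, threshold=0.5):
    """Two-stage detection: an object-proposal method screens candidate regions,
    and only those patches are classified by the CNN."""
    boxes = propose_boxes(frame)              # list of (x, y, w, h) in pixels
    if not boxes:
        return []
    crops = [F.interpolate(frame[:, y:y + h, x:x + w].unsqueeze(0),
                           size=(patch, patch), mode="bilinear",
                           align_corners=False)
             for (x, y, w, h) in boxes]
    probs = model(torch.cat(crops)).softmax(dim=1)[:, 1]   # pedestrian probability
    return [box for box, p in zip(boxes, probs) if p > threshold]
```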

  9. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    PubMed

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. They also show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.
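
    The sketch below, in PyTorch, illustrates the general convolutional-plus-LSTM pattern described here: 1D convolutions extract features from the raw multichannel sensor stream, a stacked LSTM models the temporal dynamics, and the last time step is classified. Layer sizes, window length, and channel counts are illustrative and do not reproduce the published DeepConvLSTM configuration.

```python
import torch
import torch.nn as nn

class ConvLSTMClassifier(nn.Module):
    """Convolutional feature extraction over raw sensor channels followed by
    an LSTM over time (channel counts and depths are illustrative)."""
    def __init__(self, n_sensor_channels, n_classes, conv_ch=64, lstm_units=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensor_channels, conv_ch, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(conv_ch, conv_ch, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_ch, lstm_units, num_layers=2, batch_first=True)
        self.out = nn.Linear(lstm_units, n_classes)

    def forward(self, x):                       # x: (batch, time, sensor_channels)
        x = self.conv(x.transpose(1, 2))        # -> (batch, conv_ch, time)
        x, _ = self.lstm(x.transpose(1, 2))     # -> (batch, time, lstm_units)
        return self.out(x[:, -1])               # classify from the last time step

# Usage on a sliding window of 24 time steps over 113 sensor channels
# (numbers chosen only to resemble a wearable-sensor setup; they are assumptions).
model = ConvLSTMClassifier(n_sensor_channels=113, n_classes=18)
logits = model(torch.randn(8, 24, 113))
```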

  10. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    PubMed

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. They also show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation. PMID:26797612

  11. Large patch convolutional neural networks for the scene classification of high spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Fei, Feng; Zhang, Liangpei

    2016-04-01

    The increase in the spatial resolution of remote-sensing sensors helps to capture the abundant details related to the semantics of surface objects. However, it is difficult for the popular object-oriented classification approaches to acquire higher level semantics from high spatial resolution remote-sensing (HSR-RS) images, a difficulty often referred to as the "semantic gap." Instead of designing sophisticated operators, convolutional neural networks (CNNs), a typical deep learning method, can automatically discover intrinsic feature descriptors from a large number of input images to bridge the semantic gap. Because the data volume of the available HSR-RS scene datasets is far smaller than that of natural scene datasets, there have been few reports of CNN approaches for HSR-RS image scene classification. We propose a practical CNN architecture for HSR-RS scene classification, named the large patch convolutional neural network (LPCNN). Large patch sampling is used to generate hundreds of possible scene patches for the feature learning, and a global average pooling layer is used to replace the fully connected network as the classifier, which can greatly reduce the total number of parameters. The experiments confirm that the proposed LPCNN can learn effective local features to form an effective representation for different land-use scenes, and can achieve a performance that is comparable to the state of the art on public HSR-RS scene datasets.
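
    The parameter-saving trick highlighted in the abstract, replacing the fully connected classifier with a global average pooling layer, can be sketched as below in PyTorch: a 1x1 convolution produces per-location class scores, and global average pooling reduces them to one logit per class. Filter counts and the number of classes are illustrative, not the published LPCNN settings.

```python
import torch
import torch.nn as nn

class GAPSceneClassifier(nn.Module):
    """Convolutional feature extractor whose classifier is a 1x1 convolution
    followed by global average pooling instead of fully connected layers."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.score = nn.Conv2d(128, n_classes, kernel_size=1)  # per-location class scores
        self.gap = nn.AdaptiveAvgPool2d(1)                     # global average pooling

    def forward(self, patch):
        s = self.score(self.features(patch))
        return self.gap(s).flatten(1)           # (batch, n_classes) logits

# At prediction time, scores from many large patches sampled from one scene
# could be averaged to label the whole scene.
logits = GAPSceneClassifier(n_classes=21)(torch.randn(4, 3, 128, 128))
```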

  12. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network

    PubMed Central

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-01-01

    We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods. PMID:26864172
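
    A toy numpy version of the localization front end might look like the sketch below: a very simple global-contrast saliency map (distance of each pixel's colour from the mean image colour) is thresholded and the bounding square of the salient region is returned, which would then be resized to a fixed size and passed to the DCNN. The paper's region-based contrast method and the network itself are considerably richer; the threshold and the placeholder image are assumptions.

```python
import numpy as np

def global_contrast_saliency(img):
    """Very simple global-contrast saliency: distance of each pixel's colour
    from the mean image colour (a stand-in for the region-based method used
    in the paper). `img` is a float RGB array of shape (H, W, 3)."""
    mean_colour = img.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(img - mean_colour, axis=2)
    return (sal - sal.min()) / (sal.ptp() + 1e-9)

def salient_bounding_square(sal, thresh=0.5):
    """Bounding square of the thresholded saliency map (the region that would
    be resized to a fixed size and classified by the DCNN)."""
    ys, xs = np.nonzero(sal > thresh)
    if ys.size == 0:
        return None
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    side = max(y1 - y0, x1 - x0) + 1
    return int(y0), int(x0), int(side)         # top-left corner and side length

img = np.random.rand(240, 320, 3)              # placeholder field image
box = salient_bounding_square(global_contrast_saliency(img))
```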

  13. Assessing the Firing Properties of the Electrically Stimulated Auditory Nerve Using a Convolution Model.

    PubMed

    Strahl, Stefan B; Ramekers, Dyan; Nagelkerke, Marjolijn M B; Schwarz, Konrad E; Spitzer, Philipp; Klis, Sjaak F L; Grolman, Wilko; Versnel, Huib

    2016-01-01

    The electrically evoked compound action potential (eCAP) is a routinely performed measure of the auditory nerve in cochlear implant users. Using a convolution model of the eCAP, additional information about the neural firing properties can be obtained, which may provide relevant information about the health of the auditory nerve. In this study, guinea pigs with various degrees of nerve degeneration were used to directly relate firing properties to nerve histology. The same convolution model was applied to human eCAPs to examine similarities and ultimately to examine its clinical applicability. For most eCAPs, the estimated nerve firing probability was bimodal and could be parameterised by two Gaussian distributions with an average latency difference of 0.4 ms. The ratio of the scaling factors of the late and early component increased with neural degeneration in the guinea pig. This ratio decreased with stimulation intensity in humans. The latency of the early component decreased with neural degeneration in the guinea pig. Indirectly, this was observed in humans as well, assuming that the cochlear base exhibits more neural degeneration than the apex. Differences between guinea pigs and humans were observed, among other parameters, in the width of the early component: very robust in the guinea pig, and dependent on stimulation intensity and cochlear region in humans. We conclude that the deconvolution of the eCAP is a valuable addition to existing analyses, in particular as it reveals two separate firing components in the auditory nerve.

  14. Accelerating protein docking in ZDOCK using an advanced 3D convolution library.

    PubMed

    Pierce, Brian G; Hourai, Yuichiro; Weng, Zhiping

    2011-01-01

    Computational prediction of the 3D structures of molecular interactions is a challenging area, often requiring significant computational resources to produce structural predictions with atomic-level accuracy. This can be particularly burdensome when modeling large sets of interactions, macromolecular assemblies, or interactions between flexible proteins. We previously developed a protein docking program, ZDOCK, which uses a fast Fourier transform to perform a 3D search of the spatial degrees of freedom between two molecules. By utilizing a pairwise statistical potential in the ZDOCK scoring function, there were notable gains in docking accuracy over previous versions, but this improvement in accuracy came at a substantial computational cost. In this study, we incorporated a recently developed 3D convolution library into ZDOCK, and additionally modified ZDOCK to dynamically orient the input proteins for more efficient convolution. These modifications resulted in an average of over 8.5-fold improvement in running time when tested on 176 cases in a newly released protein docking benchmark, as well as substantially less memory usage, with no loss in docking accuracy. We also applied these improvements to a previous version of ZDOCK that uses a simpler non-pairwise atomic potential, yielding an average speed improvement of over 5-fold on the docking benchmark, while maintaining predictive success. This permits the utilization of ZDOCK for more intensive tasks such as docking flexible molecules and modeling of interactomes, and can be run more readily by those with limited computational resources. PMID:21949741
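
    The FFT-based translational search at the heart of this kind of docking can be sketched in a few lines of numpy, as below: receptor and ligand are represented on 3D grids and the correlation theorem turns the exhaustive translational scan into one forward/inverse FFT pair per ligand rotation. The toy box-shaped grids and single scoring term are stand-ins; ZDOCK's grids encode shape complementarity, electrostatics, and the statistical potential.

```python
import numpy as np

def fft_translation_scores(receptor_grid, ligand_grid):
    """Score every relative translation of the ligand against the receptor by
    3D cross-correlation via the FFT (one ligand rotation; a docking run
    repeats this over many rotations)."""
    R = np.fft.fftn(receptor_grid)
    L = np.fft.fftn(ligand_grid, s=receptor_grid.shape)
    # Correlation theorem: corr(r, l) = IFFT( FFT(r) * conj(FFT(l)) )
    return np.real(np.fft.ifftn(R * np.conj(L)))

n = 64
receptor = np.zeros((n, n, n)); receptor[20:40, 20:40, 20:40] = 1.0   # toy shapes
ligand   = np.zeros((n, n, n)); ligand[0:10, 0:10, 0:10] = 1.0

scores = fft_translation_scores(receptor, ligand)
best = np.unravel_index(np.argmax(scores), scores.shape)   # best translation (voxels)
```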

  15. Noise-induced bias for convolution-based interpolation in digital image correlation.

    PubMed

    Su, Yong; Zhang, Qingchuan; Gao, Zeren; Xu, Xiaohai

    2016-01-25

    In digital image correlation (DIC), the noise-induced bias is significant if the noise level is high or the contrast of the image is low. However, existing methods for estimating the noise-induced bias are only applicable to traditional interpolation methods such as linear and cubic interpolation, and not to generalized interpolation methods such as B-spline and OMOMS. Both traditional and generalized interpolation belong to convolution-based interpolation. Considering the widespread use of generalized interpolation, this paper presents a theoretical analysis of the noise-induced bias for convolution-based interpolation. A sinusoidal approximate formula for the noise-induced bias is derived; this formula motivates a fast, simple, and accurate estimation strategy, and it also reveals the mechanism by which sophisticated interpolation methods generally reduce the noise-induced bias. The validity of the theoretical analysis is established by both numerical simulations and an actual subpixel translation experiment. Compared to existing methods, the formulae provided in this paper are simpler, briefer, and more general. In addition, a more intuitive explanation of the cause of the noise-induced bias is provided by quantitatively characterizing the position dependence of the noise variability in the spatial domain.
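
    The phenomenon itself (though not the paper's closed-form formula) can be reproduced with a small simulation like the one below: a known subpixel translation is estimated many times from noisy 1D signals using cubic-spline interpolation, one example of a convolution-based (generalized) interpolation, and the mean error of the estimates is the noise-induced bias. The signal shape, noise level, and 0.3-pixel shift are arbitrary choices.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = np.arange(256, dtype=float)
signal_fn = lambda t: np.sin(2 * np.pi * t / 23.0) + 0.5 * np.sin(2 * np.pi * t / 7.0)

f = signal_fn(x)                        # reference image line
true_shift, sigma = 0.3, 0.05           # subpixel translation and noise level
g_clean = signal_fn(x - true_shift)     # deformed (translated) image line

def estimate_shift(g):
    """Estimate the translation by minimising the SSD between the reference and
    a cubic-spline (convolution-based) interpolation of the deformed signal."""
    spline = CubicSpline(x, g)
    xin = x[8:-8]                       # stay away from the boundaries
    cost = lambda u: np.sum((f[8:-8] - spline(xin + u)) ** 2)
    return minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded").x

estimates = [estimate_shift(g_clean + rng.normal(0.0, sigma, x.size))
             for _ in range(200)]
bias = np.mean(estimates) - true_shift  # systematic (noise-induced) part of the error
print(f"mean estimate = {np.mean(estimates):.4f}, bias = {bias:+.2e}")
```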

  16. Muon Neutrino Disappearance in NOvA with a Deep Convolutional Neural Network Classifier

    NASA Astrophysics Data System (ADS)

    Rocco, Dominick Rosario

    The NuMI Off-axis Neutrino Appearance Experiment (NOvA) is designed to study neutrino oscillation in the NuMI (Neutrinos at the Main Injector) beam. NOvA observes neutrino oscillation using two detectors separated by a baseline of 810 km: a 14 kt Far Detector in Ash River, MN, and a functionally identical 0.3 kt Near Detector at Fermilab. The experiment aims to provide new measurements of Δm^2_32 and theta23, and has the potential to determine the neutrino mass hierarchy as well as observe CP violation in the neutrino sector. Essential to these analyses is the classification of neutrino interaction events in the NOvA detectors. Raw detector output from NOvA is interpretable as a pair of images which provide orthogonal views of particle interactions. A recent advance in the field of computer vision is the advent of convolutional neural networks, which have delivered top results in the latest image recognition contests. This work presents an approach novel to particle physics analysis in which a convolutional neural network is used for classification of particle interactions. The approach has been demonstrated to improve the signal efficiency and purity of the event selection, and thus the physics sensitivity. Early NOvA data has been analyzed (2.74 x 10^20 POT, 14 kt equivalent) to provide new best-fit measurements of sin2(theta23) = 0.43 (with a statistically degenerate complement near 0.60) and [special characters omitted].
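
    A schematic PyTorch sketch of the classification idea, not the actual network used by NOvA, is shown below: each of the two orthogonal detector views is processed by its own convolutional branch and the resulting features are concatenated before a final classifier over interaction types. Layer sizes, the number of classes, and the pixel-map dimensions are assumptions.

```python
import torch
import torch.nn as nn

class TwoViewEventClassifier(nn.Module):
    """Classify a particle interaction from its two orthogonal detector views:
    one convolutional branch per view, merged before the final classifier
    (layer sizes illustrative)."""
    def __init__(self, n_classes=3):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                nn.Flatten(),
            )
        self.view_x, self.view_y = branch(), branch()
        self.head = nn.Sequential(nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
                                  nn.Linear(128, n_classes))

    def forward(self, xz_view, yz_view):
        feats = torch.cat([self.view_x(xz_view), self.view_y(yz_view)], dim=1)
        return self.head(feats)

# Usage on a batch of 8 events with assumed 100x80 pixel maps per view.
model = TwoViewEventClassifier()
logits = model(torch.randn(8, 1, 100, 80), torch.randn(8, 1, 100, 80))
```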

  17. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network.

    PubMed

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-01-01

    We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods.

  18. Region-Based Convolutional Networks for Accurate Object Detection and Segmentation.

    PubMed

    Girshick, Ross; Donahue, Jeff; Darrell, Trevor; Malik, Jitendra

    2016-01-01

    Object detection performance, as measured on the canonical PASCAL VOC Challenge datasets, plateaued in the final years of the competition. The best-performing methods were complex ensemble systems that typically combined multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 50 percent relative to the previous best result on VOC 2012, achieving a mAP of 62.4 percent. Our approach combines two ideas: (1) one can apply high-capacity convolutional networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data are scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, boosts performance significantly. Since we combine region proposals with CNNs, we call the resulting model an R-CNN or Region-based Convolutional Network. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
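
    The R-CNN recipe summarised here can be sketched as the loop below: each bottom-up region proposal is warped to a fixed size, passed through a CNN feature extractor, and scored by a classifier. The backbone and classifier are assumed to be already trained modules, class 0 is treated as background, and the fixed 224-pixel warp size is an assumption; the non-maximum suppression and bounding-box regression stages of the full system are omitted.

```python
import torch
import torch.nn.functional as F

def rcnn_detect(image, proposals, backbone, classifier, size=224):
    """R-CNN-style scoring: warp each region proposal to a fixed size, run the
    CNN feature extractor on it, and classify the features. `backbone` and
    `classifier` are assumed trained modules; proposals are (x, y, w, h) boxes
    and `image` is a C x H x W tensor."""
    detections = []
    for (x, y, w, h) in proposals:
        crop = image[:, y:y + h, x:x + w].unsqueeze(0)          # 1 x C x h x w
        warped = F.interpolate(crop, size=(size, size), mode="bilinear",
                               align_corners=False)
        feats = backbone(warped)                                # 1 x D feature vector
        probs = classifier(feats).softmax(dim=1).squeeze(0)     # per-class scores
        cls = int(probs.argmax())
        if cls != 0:                                            # 0 = background
            detections.append(((x, y, w, h), cls, float(probs[cls])))
    # Per-class non-maximum suppression would normally be applied here.
    return detections
```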

  19. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    PubMed Central

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. They also show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  20. Method and apparatus of high dynamic range image sensor with individual pixel reset

    NASA Technical Reports Server (NTRS)

    Yadid-Pecht, Orly (Inventor); Pain, Bedabrata (Inventor); Fossum, Eric R. (Inventor)

    2001-01-01

    A wide dynamic range image sensor provides individual pixel reset to vary the integration time of individual pixels. The integration time of each pixel is controlled by column and row reset control signals which activate a logical reset transistor only when both signals coincide for a given pixel.
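
    A toy numpy sketch of the reset logic described above is given below: a pixel is reset only at time steps where its row and column reset lines are both asserted (the logical AND), so different pixels end the frame with different integration times. The array size, frame length, and random reset schedules are purely illustrative.

```python
import numpy as np

T, rows, cols = 100, 4, 6                    # frame length (steps) and toy array size

rng = np.random.default_rng(0)
# Boolean reset schedules for each row and column line over the frame
# (illustrative; a real controller programs these to set per-pixel exposure).
row_reset = rng.random((T, rows)) < 0.02     # True where the row line is pulsed
col_reset = rng.random((T, cols)) < 0.02     # True where the column line is pulsed

# A pixel (i, j) is reset at step t only when BOTH its row and column lines
# are asserted at t (the logical AND of the two control signals).
reset_events = row_reset[:, :, None] & col_reset[:, None, :]   # T x rows x cols

# Integration time = steps since the last reset at readout (end of frame);
# pixels that were never reset integrate for the whole frame.
steps = np.arange(T)[:, None, None]
last_reset = np.where(reset_events, steps, -1).max(axis=0)
integration_time = (T - 1) - np.where(last_reset < 0, 0, last_reset)
print(integration_time)                      # per-pixel integration times (in steps)
```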