Science.gov

Sample records for pixel space convolution

  1. FAST PIXEL SPACE CONVOLUTION FOR COSMIC MICROWAVE BACKGROUND SURVEYS WITH ASYMMETRIC BEAMS AND COMPLEX SCAN STRATEGIES: FEBeCoP

    SciTech Connect

    Mitra, S.; Rocha, G.; Gorski, K. M.; Lawrence, C. R.; Huffenberger, K. M.; Eriksen, H. K.; Ashdown, M. A. J. E-mail: graca@caltech.edu E-mail: Charles.R.Lawrence@jpl.nasa.gov E-mail: h.k.k.eriksen@astro.uio.no

    2011-03-15

    Precise measurement of the angular power spectrum of the cosmic microwave background (CMB) temperature and polarization anisotropy can tightly constrain many cosmological models and parameters. However, accurate measurements can only be realized in practice provided all major systematic effects have been taken into account. Beam asymmetry, coupled with the scan strategy, is a major source of systematic error in scanning CMB experiments such as Planck, the focus of our current interest. We envision Monte Carlo methods to rigorously study and account for the systematic effect of beams in CMB analysis. Toward that goal, we have developed a fast pixel space convolution method that can simulate sky maps observed by a scanning instrument, taking into account real beam shapes and scan strategy. The essence is to pre-compute the 'effective beams' using a computer code, 'Fast Effective Beam Convolution in Pixel space' (FEBeCoP), that we have developed for the Planck mission. The code computes effective beams given the focal plane beam characteristics of the Planck instrument and the full history of actual satellite pointing, and performs very fast convolution of sky signals using the effective beams. In this paper, we describe the algorithm and the computational scheme that has been implemented. We also outline a few applications of the effective beams in the precision analysis of Planck data, for characterizing the CMB anisotropy and for detecting and measuring properties of point sources.
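
    The core operation the abstract describes reduces, for each map pixel, to a weighted sum of the sky over a small pre-computed "effective beam" footprint. Below is a minimal flat-sky NumPy sketch of that idea, assuming the effective beams have already been accumulated from the scan history; it illustrates the pixel-space convolution step only, not the FEBeCoP/HEALPix implementation itself.

```python
import numpy as np

def convolve_with_effective_beams(sky, eff_beams, radius):
    """Convolve a flat-sky map with pre-computed per-pixel 'effective beams'.

    sky       : 2-D array, the input sky map
    eff_beams : dict mapping (i, j) -> (2*radius+1, 2*radius+1) weight array,
                the effective beam centred on pixel (i, j)
    radius    : half-width of the beam support in pixels
    """
    out = np.zeros_like(sky)
    padded = np.pad(sky, radius, mode="wrap")
    for (i, j), beam in eff_beams.items():
        patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
        out[i, j] = np.sum(beam * patch)      # weighted sum over the beam support
    return out

# Toy usage: every pixel gets the same normalised Gaussian "effective beam";
# in practice each pixel's beam differs, reflecting the asymmetric beam and scan history.
n, radius = 32, 3
y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
gauss = np.exp(-(x**2 + y**2) / 2.0)
gauss /= gauss.sum()
beams = {(i, j): gauss for i in range(n) for j in range(n)}
sky = np.random.randn(n, n)
observed = convolve_with_effective_beams(sky, beams, radius)
```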

  2. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy. PMID:24710398
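
    A rough sketch of the matrix source coding idea under simplifying assumptions: transform the dense space-varying operator into a domain where most entries are negligible, lossy-code (threshold) it, and apply the resulting sparse factors instead of the dense matrix. The DCT used here is only a stand-in for the transforms of the actual method, and the slowly varying blur matrix is a toy example.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.sparse import csr_matrix

def compress_operator(A, keep=0.05):
    """Lossy-code a dense operator A: transform it, keep only the largest entries."""
    # Orthonormal transform applied to rows and columns (a DCT in this sketch).
    T = dct(dct(A, axis=0, norm="ortho"), axis=1, norm="ortho")
    thresh = np.quantile(np.abs(T), 1.0 - keep)
    T[np.abs(T) < thresh] = 0.0
    return csr_matrix(T)

def apply_compressed(T_sparse, x):
    """Approximate y = A @ x using the sparse transformed operator."""
    xt = dct(x, norm="ortho")            # transform the input
    yt = T_sparse @ xt                   # cheap sparse matrix-vector product
    return idct(yt, norm="ortho")        # transform the result back

# Toy example: a smooth, slowly space-varying 1-D blur operator.
n = 256
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
sigma = 2.0 + 3.0 * i / n                # blur width varies slowly with position
A = np.exp(-((i - j) ** 2) / (2 * sigma ** 2))
A /= A.sum(axis=1, keepdims=True)
x = np.random.randn(n)
y_exact = A @ x
y_approx = apply_compressed(compress_operator(A), x)
rel_err = np.linalg.norm(y_exact - y_approx) / np.linalg.norm(y_exact)
```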

  3. Classification of Urban Aerial Data Based on Pixel Labelling with Deep Convolutional Neural Networks and Logistic Regression

    NASA Astrophysics Data System (ADS)

    Yao, W.; Poleswki, P.; Krzystek, P.

    2016-06-01

    The recent success of deep convolutional neural networks (CNN) on a large number of applications can be attributed to large amounts of available training data and increasing computing power. In this paper, a semantic pixel labelling scheme for urban areas using multi-resolution CNN and hand-crafted spatial-spectral features of airborne remotely sensed data is presented. Both CNN and hand-crafted features are applied to image/DSM patches to produce per-pixel class probabilities with an L1-norm regularized logistic regression classifier. Evidence theory infers a degree of belief for pixel labelling from the different sources, smoothing regions by handling the conflicts between the two classifiers while reducing uncertainty. The aerial data used in this study were provided by ISPRS as benchmark datasets for 2D semantic labelling tasks in urban areas, and consist of two data sources: LiDAR and a color infrared camera. The test sites are parts of a city in Germany assumed to consist of typical object classes including impervious surfaces, trees, buildings, low vegetation, vehicles and clutter. The evaluation is based on the computation of pixel-based confusion matrices by random sampling. The performance of the strategy with respect to scene characteristics and method combination strategies is analyzed and discussed. The competitive classification accuracy can be explained not only by the nature of the input data sources (e.g. the above-ground height of the nDSM highlights the vertical dimension of houses, trees and even cars, while the near-infrared spectrum indicates vegetation), but also by the decision-level fusion of the CNN's texture-based approach with multichannel spatial-spectral hand-crafted features based on evidence combination theory.
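
    A minimal sketch of the decision-level fusion step, assuming per-pixel class-probability maps from the CNN and from the hand-crafted-feature classifier are already available. It uses Dempster's rule restricted to singleton class hypotheses, which reduces to an element-wise product with re-normalisation; the paper's full belief model (with explicit conflict and uncertainty handling) is richer than this.

```python
import numpy as np

def fuse_probabilities(p_cnn, p_handcrafted, eps=1e-12):
    """Combine two per-pixel class-probability maps of shape (H, W, C).

    Dempster's rule over singleton class hypotheses reduces to an
    element-wise product followed by re-normalisation; strong disagreement
    between the sources shows up as a small normalisation constant.
    """
    joint = p_cnn * p_handcrafted + eps
    return joint / joint.sum(axis=-1, keepdims=True)

# Toy usage with 6 urban classes (impervious, building, low vegetation, tree, car, clutter).
H, W, C = 4, 4, 6
rng = np.random.default_rng(0)
p1 = rng.dirichlet(np.ones(C), size=(H, W))   # stand-in for CNN probabilities
p2 = rng.dirichlet(np.ones(C), size=(H, W))   # stand-in for hand-crafted-feature probabilities
fused = fuse_probabilities(p1, p2)
labels = fused.argmax(axis=-1)                # final per-pixel class labels
```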

  4. Efficient single pixel imaging in Fourier space

    NASA Astrophysics Data System (ADS)

    Bian, Liheng; Suo, Jinli; Hu, Xuemei; Chen, Feng; Dai, Qionghai

    2016-08-01

    Single pixel imaging (SPI) is a novel technique capturing 2D images using a bucket detector with a high signal-to-noise ratio, wide spectrum range and low cost. Conventional SPI projects random illumination patterns to randomly and uniformly sample the entire scene's information. Determined by Nyquist sampling theory, SPI needs either numerous projections or high computation cost to reconstruct the target scene, especially for high-resolution cases. To address this issue, we propose an efficient single pixel imaging technique (eSPI), which instead projects sinusoidal patterns for importance sampling of the target scene's spatial spectrum in Fourier space. Specifically, utilizing the centrosymmetric conjugation and sparsity priors of natural images' spatial spectra, eSPI sequentially projects two π/2-phase-shifted sinusoidal patterns to obtain each Fourier coefficient in the most informative spatial frequency bands. eSPI can reduce the requisite patterns by two orders of magnitude compared to conventional SPI, which greatly facilitates fast and high-resolution SPI.
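
    A minimal sketch of Fourier-basis single-pixel acquisition. For clarity it uses the standard four-step phase-shifting scheme to recover each Fourier coefficient from bucket measurements, whereas eSPI needs only two π/2-shifted patterns per coefficient by exploiting conjugate symmetry; the scene, resolution and sampled band below are illustrative.

```python
import numpy as np

def acquire_fourier_coefficient(scene, fx, fy):
    """Measure one Fourier coefficient of `scene` with a single-pixel (bucket) detector.

    Four sinusoidal patterns, phase-shifted by pi/2, are projected and the four
    bucket signals are combined into one complex Fourier coefficient.
    """
    h, w = scene.shape
    y, x = np.mgrid[0:h, 0:w]
    buckets = []
    for phase in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
        pattern = 0.5 + 0.5 * np.cos(2 * np.pi * (fx * x / w + fy * y / h) + phase)
        buckets.append(np.sum(scene * pattern))      # single-pixel measurement
    d1, d2, d3, d4 = buckets
    return (d1 - d3) + 1j * (d2 - d4)                # equals the DFT coefficient at (fy, fx)

# Reconstruct a small scene from a low-frequency band of coefficients only.
scene = np.zeros((32, 32))
scene[10:22, 12:20] = 1.0
spectrum = np.zeros((32, 32), dtype=complex)
for fy in range(-4, 5):
    for fx in range(-4, 5):
        spectrum[fy % 32, fx % 32] = acquire_fourier_coefficient(scene, fx, fy)
recon = np.real(np.fft.ifft2(spectrum))              # low-pass approximation of the scene
```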

  5. Geometric multi-resolution analysis and data-driven convolutions

    NASA Astrophysics Data System (ADS)

    Strawn, Nate

    2015-09-01

    We introduce a procedure for learning discrete convolutional operators for generic datasets which recovers the standard block convolutional operators when applied to sets of natural images. The key observation is that the standard block convolutional operators on images are intuitive because humans naturally understand the grid structure of the self-evident functions over image spaces (pixels). This procedure first constructs a Geometric Multi-Resolution Analysis (GMRA) on the set of variables giving rise to a dataset, and then leverages the details of this data structure to identify subsets of variables upon which convolutional operators are supported, as well as a space of functions that can be shared coherently amongst these supports.

  6. A semiconductor radiation imaging pixel detector for space radiation dosimetry.

    PubMed

    Kroupa, Martin; Bahadori, Amir; Campbell-Ricketts, Thomas; Empl, Anton; Hoang, Son Minh; Idarraga-Munoz, John; Rios, Ryan; Semones, Edward; Stoffle, Nicholas; Tlustos, Lukas; Turecek, Daniel; Pinsky, Lawrence

    2015-07-01

    Progress in the development of high-performance semiconductor radiation imaging pixel detectors based on technologies developed for use in high-energy physics applications has enabled the development of a completely new generation of compact low-power active dosimeters and area monitors for use in space radiation environments. Such detectors can provide real-time information concerning radiation exposure, along with detailed analysis of the individual particles incident on the active medium. Recent results from the deployment of detectors based on the Timepix from the CERN-based Medipix2 Collaboration on the International Space Station (ISS) are reviewed, along with a glimpse of developments to come. Preliminary results from Orion MPCV Exploration Flight Test 1 are also presented. PMID:26256630

  7. A semiconductor radiation imaging pixel detector for space radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Kroupa, Martin; Bahadori, Amir; Campbell-Ricketts, Thomas; Empl, Anton; Hoang, Son Minh; Idarraga-Munoz, John; Rios, Ryan; Semones, Edward; Stoffle, Nicholas; Tlustos, Lukas; Turecek, Daniel; Pinsky, Lawrence

    2015-07-01

    Progress in the development of high-performance semiconductor radiation imaging pixel detectors based on technologies developed for use in high-energy physics applications has enabled the development of a completely new generation of compact low-power active dosimeters and area monitors for use in space radiation environments. Such detectors can provide real-time information concerning radiation exposure, along with detailed analysis of the individual particles incident on the active medium. Recent results from the deployment of detectors based on the Timepix from the CERN-based Medipix2 Collaboration on the International Space Station (ISS) are reviewed, along with a glimpse of developments to come. Preliminary results from Orion MPCV Exploration Flight Test 1 are also presented.

  8. Using a photon phase-space source for convolution/superposition dose calculations in radiation therapy

    NASA Astrophysics Data System (ADS)

    Naqvi, Shahid A.; D'Souza, Warren D.; Earl, Matthew A.; Ye, Sung-Joon; Shih, Rompin; Li, X. Allen

    2005-09-01

    For a given linac design, the dosimetric characteristics of a photon beam are determined uniquely by the energy and radial distributions of the electron beam striking the x-ray target. However, in the usual commissioning of a beam from measured data, a large number of variables can be independently tuned, making it difficult to derive a unique and self-consistent beam model. For example, the measured dosimetric penumbra in water may be attributed in various proportions to the lateral secondary electron range, the focal spot size and the transmission through the tips of a non-divergent collimator; the head-scatter component in the tails of the transverse profiles may not be easy to resolve from phantom scatter and head leakage; and the head-scatter tails corresponding to a certain extra-focal source model may not agree self-consistently with in-air output factors measured on the central axis. To reduce the number of adjustable variables in beam modelling, we replace the focal and extra-focal sources with a single phase-space plane scored just above the highest adjustable collimator in an EGS/BEAM simulation of the linac. The phase-space plane is then used as a photon source in a stochastic convolution/superposition dose engine. A photon sampled from the uncollimated phase-space plane is first propagated through an arbitrary collimator arrangement and then interacted in the simulation phantom. Energy deposition kernel rays are then randomly issued from the interaction points and dose is deposited along these rays. The electrons in the phase-space file are used to account for electron contamination. 6 MV and 18 MV photon beams from an Elekta SL linac are used as representative examples. Except for small corrections for monitor backscatter and collimator forward scatter for large field sizes (<0.5% with <20 × 20 cm2 field size), we found that the use of a single phase-space photon source provides accurate and self-consistent results for both relative and absolute dose

  9. Thin Film on CMOS Active Pixel Sensor for Space Applications

    PubMed Central

    Schulze Spuentrup, Jan Dirk; Burghartz, Joachim N.; Graf, Heinz-Gerd; Harendt, Christine; Hutter, Franz; Nicke, Markus; Schmidt, Uwe; Schubert, Markus; Sterzel, Juergen

    2008-01-01

    A 664 × 664 element Active Pixel image Sensor (APS) with integrated analog signal processing, full frame synchronous shutter and random access for applications in star sensors is presented and discussed. A thick vertical diode array in Thin Film on CMOS (TFC) technology is explored to achieve radiation hardness and maximum fill factor.

  10. Supervised pixel classification using a feature space derived from an artificial visual system

    NASA Technical Reports Server (NTRS)

    Baxter, Lisa C.; Coggins, James M.

    1991-01-01

    Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated, and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not by image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.

  11. Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy

    NASA Astrophysics Data System (ADS)

    Greenbaum, Alon; Luo, Wei; Khademhosseinieh, Bahar; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan

    2013-04-01

    Pixel-size limitation of lensfree on-chip microscopy can be circumvented by utilizing pixel-super-resolution techniques to synthesize a smaller effective pixel, improving the resolution. Here we report that by using the two-dimensional pixel-function of an image sensor-array as an input to lensfree image reconstruction, pixel-super-resolution can improve the numerical aperture of the reconstructed image by ~3 fold compared to a raw lensfree image. This improvement was confirmed using two different sensor-arrays that significantly vary in their pixel-sizes, circuit architectures and digital/optical readout mechanisms, empirically pointing to roughly the same space-bandwidth improvement factor regardless of the sensor-array employed in our set-up. Furthermore, such a pixel-count increase also renders our on-chip microscope into a Giga-pixel imager, where an effective pixel count of ~1.6-2.5 billion can be obtained with different sensors. Finally, using an ultra-violet light-emitting-diode, this platform resolves 225 nm grating lines and can be useful for wide-field on-chip imaging of nano-scale objects, e.g., multi-walled-carbon-nanotubes.

  12. Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy

    PubMed Central

    Greenbaum, Alon; Luo, Wei; Khademhosseinieh, Bahar; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan

    2013-01-01

    Pixel-size limitation of lensfree on-chip microscopy can be circumvented by utilizing pixel-super-resolution techniques to synthesize a smaller effective pixel, improving the resolution. Here we report that by using the two-dimensional pixel-function of an image sensor-array as an input to lensfree image reconstruction, pixel-super-resolution can improve the numerical aperture of the reconstructed image by ~3 fold compared to a raw lensfree image. This improvement was confirmed using two different sensor-arrays that significantly vary in their pixel-sizes, circuit architectures and digital/optical readout mechanisms, empirically pointing to roughly the same space-bandwidth improvement factor regardless of the sensor-array employed in our set-up. Furthermore, such a pixel-count increase also renders our on-chip microscope into a Giga-pixel imager, where an effective pixel count of ~1.6–2.5 billion can be obtained with different sensors. Finally, using an ultra-violet light-emitting-diode, this platform resolves 225 nm grating lines and can be useful for wide-field on-chip imaging of nano-scale objects, e.g., multi-walled-carbon-nanotubes.

  13. [Study on the nonlinear characteristics of the mixed pixel's reflectance in hyperspectral space].

    PubMed

    Zhu, Feng; Gong, Hui-Li; Sun, Tian-Lin; Zhao, Yun-Sheng

    2013-03-01

    Under the experimental conditions of a 50 degree incidence zenith angle and a 45 degree detection azimuth, 24 groups of reflectance spectra of mixed pixels of lotus and water body were acquired using the reflex platform and a FieldSpec 3 Hi-Res portable spectrometer. A hyperspectral space was built based on the reflectance characteristics. The relationship between similarity and the index of lotus area ratio was analyzed using linear, logarithmic and quadratic curve fitting, with goodness of fit of 63.6%, 76.2% and 82.9%, respectively. According to the real relationship between the mixed pixel spectral vector and the reference spectrum, the best-fitting model has nonlinear characteristics. On the basis of this analysis, the idea that the mixed pixel may have a critical value was proposed. The research results will help further the understanding of mixed pixels and provide a new direction for unmixing them. PMID:23705444

  14. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons, each locally connected to its neighboring neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip of active dimensions 1380 micron x 746 micron in a 2-micron CMOS technology.

  15. HUBBLE SPACE TELESCOPE PIXEL ANALYSIS OF THE INTERACTING S0 GALAXY NGC 5195 (M51B)

    SciTech Connect

    Lee, Joon Hyeop; Kim, Sang Chul; Ree, Chang Hee; Kim, Minjin; Jeong, Hyunjin; Lee, Jong Chul; Kyeong, Jaemann E-mail: sckim@kasi.re.kr E-mail: mkim@kasi.re.kr E-mail: jclee@kasi.re.kr

    2012-08-01

    We report the properties of the interacting S0 galaxy NGC 5195 (M51B), revealed in a pixel analysis using the Hubble Space Telescope/Advanced Camera for Surveys images in the F435W, F555W, and F814W (BVI) bands. We analyze the pixel color-magnitude diagram (pCMD) of NGC 5195, focusing on the properties of its red and blue pixel sequences and the difference from the pCMD of NGC 5194 (M51A; the spiral galaxy interacting with NGC 5195). The red pixel sequence of NGC 5195 is redder than that of NGC 5194, which corresponds to the difference in the dust optical depth of 2 < Δτ_V < 4 at fixed age and metallicity. The blue pixel sequence of NGC 5195 is very weak and spatially corresponds to the tidal bridge between the two interacting galaxies. This implies that the blue pixel sequence is not an ordinary feature in the pCMD of an early-type galaxy, but that it is a transient feature of star formation caused by the galaxy-galaxy interaction. We also find a difference in the shapes of the red pixel sequences on the pixel color-color diagrams (pCCDs) of NGC 5194 and NGC 5195. We investigate the spatial distributions of the pCCD-based pixel stellar populations. The young population fraction in the tidal bridge area is larger than that in other areas by a factor >15. Along the tidal bridge, young populations seem to be clumped particularly at the middle point of the bridge. On the other hand, the dusty population shows a relatively wide distribution between the tidal bridge and the center of NGC 5195.

  16. Search for non-Gaussianity in pixel, harmonic, and wavelet space: Compared and combined

    NASA Astrophysics Data System (ADS)

    Cabella, Paolo; Hansen, Frode; Marinucci, Domenico; Pagano, Daniele; Vittorio, Nicola

    2004-03-01

    We present a comparison between three approaches to test the non-Gaussianity of cosmic microwave background data. The Minkowski functionals, the empirical process method, and the skewness of wavelet coefficients are applied to maps generated from nonstandard inflationary models and to Gaussian maps with point sources included. We discuss the different power of the pixel, harmonic, and wavelet space methods on these simulated almost full-sky data (with Planck-like noise). We also suggest a new procedure consisting of a combination of statistics in pixel, harmonic, and wavelet space.

  17. Autonomous Sub-Pixel Satellite Track Endpoint Determination for Space Based Images

    SciTech Connect

    Simms, L M

    2011-03-07

    An algorithm for determining satellite track endpoints with sub-pixel resolution in space-based images is presented. The algorithm allows for significant curvature in the imaged track due to rotation of the spacecraft capturing the image. The motivation behind the sub-pixel endpoint determination is first presented, followed by a description of the methodology used. Results from running the algorithm on real ground-based and simulated space-based images are shown to highlight its effectiveness.

  18. A 512×512 CMOS Monolithic Active Pixel Sensor with integrated ADCs for space science

    NASA Astrophysics Data System (ADS)

    Prydderch, M. L.; Waltham, N. J.; Turchetta, R.; French, M. J.; Holt, R.; Marshall, A.; Burt, D.; Bell, R.; Pool, P.; Eyles, C.; Mapson-Menard, H.

    2003-10-01

    In the last few years, CMOS sensors have become widely used for consumer applications, but little has been done for scientific instruments. In this paper we present the design and experimental characterisation of a Monolithic Active Pixel Sensor (MAPS) intended for a space science application. The sensor incorporates a 525×525 array of pixels on a 25 μm pitch. Each pixel contains a detector together with three transistors that are used for pixel reset, pixel selection and charge-to-voltage conversion. The detector consists of four n-well/p-substrate diodes combining optimum charge collection and low noise performance. The array readout is column-parallel with adjustable gain column amplifiers and a 10-bit single slope ADC. Data conversion takes place simultaneously for all the 525 pixels in one row. The ADC slope can be adjusted in order to give the best dynamic range for a given brightness of a scene. The digitised data are output on a 10-bit bus at 3 MHz. An on-chip state machine generates all of the control signals needed for the readout. All of the bias currents and voltages are generated on chip by a DAC that is programmable through an I2C-compatible interface. The sensor was designed and fabricated on a standard 0.5 μm CMOS technology. The overall die size is 16.7 mm×19.9 mm including the associated readout electronics and bond pads. Preliminary test results show that the full-scale design works well, meeting the Star Tracker requirements with less than 1-bit noise, good linearity and good optical performance.

  19. Space qualification of a 512x3 pixel uncooled microbolometer FPA

    NASA Astrophysics Data System (ADS)

    Pope, Timothy; Dupont, Fabien; Garcia Blanco, Sonia; Williamson, Fraser; Chevalier, Claude; Marchese, Linda; Chateauneuf, Francois; Jerominek, Hubert; Linh, Ngo-Phong; Bouchard, Robert

    2009-05-01

    We have previously reported on the initial development of a multi-linear uncooled microbolometer FPA for space applications. The IRL512 FPA features three parallel lines of 512 pixels on a 39 micron pixel pitch with parallel integration of all pixels, a complete detector bridge per pixel for offset and substrate temperature drift compensation, and one 14-bit digital output bus per line. The FPA achieves an NETD below 45 mK over the LWIR spectral band with 50 ms integration time, 300 K scene temperature, and f/0.87 optics. In the context of the NIRST instrument for the upcoming SAC-D/Aquarius earth observation mission, MWIR and LWIR optimized versions of the IRL512 in radiometric packages including integrated stripe filter and radiation shield have recently successfully undergone screening and qualification campaigns. The qualification strategy consists of part element and device qualification including proton and total dose radiation, shock, vibration, burn-in, and thermal cycling. The test conditions and results will be reviewed. The thermal resolution of the current generation of radiometrically packaged IRL512 FPA in the NIRST instrument is below 500 mK with a 0.9 micron spectral bandwidth centred at 10.85 μm, 50 ms integration time, the NIRST f/1 optics, and 300 K scene temperature.

  20. Verification of Dosimetry Measurements with Timepix Pixel Detectors for Space Applications

    NASA Technical Reports Server (NTRS)

    Kroupa, M.; Pinsky, L. S.; Idarraga-Munoz, J.; Hoang, S. M.; Semones, E.; Bahadori, A.; Stoffle, N.; Rios, R.; Vykydal, Z.; Jakubek, J.; Pospisil, S.; Turecek, D.; Kitamura, H.

    2014-01-01

    The current capabilities of modern pixel-detector technology have provided the possibility to design a new generation of radiation monitors. Timepix detectors are semiconductor pixel detectors based on a hybrid configuration. As such, the read-out chip can be used with different types and thicknesses of sensors. For space radiation dosimetry applications, Timepix devices with 300 and 500 microns thick silicon sensors have been used by a collaboration between NASA and the University of Houston to explore their performance. For that purpose, an extensive evaluation of the response of Timepix for such applications has been performed. Timepix-based devices were tested in many different environments, both at ground-based accelerator facilities such as HIMAC (Heavy Ion Medical Accelerator in Chiba, Japan) and NSRL (NASA Space Radiation Laboratory at Brookhaven National Laboratory in Upton, NY), and in space on board the International Space Station (ISS). These tests have included a wide range of particle types and energies, from protons through iron nuclei. The results have been compared both with other devices and with theoretical values. This effort has demonstrated that Timepix-based detectors are exceptionally capable of providing accurate dosimetry measurements in this application, as verified by their correspondence with other accepted techniques.

  1. Carotenoid pixels characterization under color space tests and RGB formulas for mesocarp of mango's fruits cultivars

    NASA Astrophysics Data System (ADS)

    Hammad, Ahmed Yahya; Kassim, Farid Saad Eid Saad

    2010-01-01

    This study examined the pulp (mesocarp) of fourteen cultivars of healthy, ripe mango fruits (Mangifera indica L.) selected after picking from Mango Spp., namely Taimour [Ta], Dabsha [Da], Aromanis [Ar], Zebda [Ze], Fagri Kelan [Fa], Alphonse [Al], Bulbek heart [Bu], Hindi-Sinnara [Hi], Compania [Co], Langra [La], Mestikawi [Me], Ewais [Ew], Montakhab El Kanater [Mo] and Mabroka [Ma]. Seven color space tests were applied: (RGB: Red, Green and Blue), (CMY: Cyan, Magenta and Yellow), (HSL: Hue, Saturation and Lightness), (CMYK%: Cyan%, Magenta%, Yellow% and Black%), (HSV: Hue, Saturation and Value), (HºSB%: Hueº, Saturation% and Brightness%) and (Lab). In addition, nine color space formulas (sRGB 0÷1, CMY, CMYK, XYZ, CIE-L*ab, CIE-L*CH, CIE-L*uv, Yxy and Hunter-Lab), the (RGB 0÷FF/hex triplet) notation and a Carotenoid Pixels Scale were used. Digital color photographs served as the tool for obtaining the natural color information for each cultivar, and the results were then interpreted together with chemical pigment estimations. The study focused on the visual yellow to orange color degrees of the visible electromagnetic spectrum, at wavelengths between ~570 and 620 nm and frequencies between ~480 and 530 THz. The results showed that carotene had a very strong influence in the Red band while chlorophyll (a & b) was much lower; consequently, the values in the Green band were depressed. Meanwhile, the general percentage ratios for carotenoid pixels in the Red, Green and Blue bands were 50%, 39% and 11%, respectively, as opposed to percentage ratios for carotene, chlorophyll a and chlorophyll b of approximately 63%, 22% and 16%. Accordingly, the pigments influence all the color space tests and RGB formulas. Band Yellow% in color test (CMYK%) as signature

  2. Runge-Kutta based generalized convolution quadrature

    NASA Astrophysics Data System (ADS)

    Lopez-Fernandez, Maria; Sauter, Stefan

    2016-06-01

    We present the Runge-Kutta generalized convolution quadrature (gCQ) with variable time steps for the numerical solution of convolution equations for time and space-time problems. We present the main properties of the method and a convergence result.

  3. Rolling-Convolute Joint For Pressurized Glove

    NASA Technical Reports Server (NTRS)

    Kosmo, Joseph J.; Bassick, John W.

    1994-01-01

    Rolling-convolute metacarpal/finger joint enhances mobility and flexibility of pressurized glove. Intended for use in space suit to increase dexterity and decrease wearer's fatigue. Also useful in diving suits and other pressurized protective garments. Two ring elements plus bladder constitute rolling-convolute joint balancing torques caused by internal pressurization of glove. Provides comfortable grasp of various pieces of equipment.

  4. Distal Convoluted Tubule

    PubMed Central

    Ellison, David H.

    2014-01-01

    The distal convoluted tubule is the nephron segment that lies immediately downstream of the macula densa. Although short in length, the distal convoluted tubule plays a critical role in sodium, potassium, and divalent cation homeostasis. Recent genetic and physiologic studies have greatly expanded our understanding of how the distal convoluted tubule regulates these processes at the molecular level. This article provides an update on the distal convoluted tubule, highlighting concepts and pathophysiology relevant to clinical practice. PMID:24855283

  5. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  6. Smart-pixel-based free-space interconnects: solving the high-speed multichip packaging bottleneck

    NASA Astrophysics Data System (ADS)

    Haney, Michael W.; Christensen, Marc P.; Milojkovic, Predrag; McFadden, Michael J.

    2001-11-01

    As IC densities grow to hundreds of millions of devices per chip and beyond, the inter-chip link bandwidth becomes a critical performance-limiting bottleneck in many applications. Electronic packaging technology has not kept pace with the growth of IC I/O requirements. Recent advances in smart pixel technology, however, offer the potential to use 3-D optical interconnects to overcome the inter-chip I/O bottleneck by linking dense arrays of Vertical Cavity Surface Emitting Lasers (VCSELs) and photodetectors, which are directly integrated onto electronic IC circuits. Many switching and parallel computing applications demand multi-chip interconnection fabrics that achieve high-density global I/O across an array of chips. Such global interconnections require a high degree of space-variance in the interconnection fabric, in addition to high inter-chip throughput capacity. This paper reviews the architectural and optical design issues associated with global interconnections among arrays of chips. The emphasis is on progress made in the design and implementation of the second generation Free-space Accelerator for Switching Terabit Networks (FAST-Net) prototype. The FAST-Net prototype uses a macro-optical lens array and mirror to effect a global (fully connected) fabric across a 4 X 4 array of smart pixel chips. Clusters of VCSELs and photodetectors are imaged onto corresponding clusters on other chips, creating a high-density bi-directional data path between every pair of smart pixel chips on a multi-chip module. The combination of programmable intra-chip electronic routing and the fixed global inter-chip optical interconnection pattern of the FAST-Net architecture has been shown to provide a low-latency, minimum-complexity fabric that can effect an arbitrary interconnection pattern across the chip array. Recent experimental results show that the narrow beam characteristics of VCSELs can be exploited in an efficient optical design for the FAST-Net optical interconnection

  7. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of their deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, to find good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.

  8. Some easily analyzable convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R.; Dolinar, S.; Pollara, F.; Vantilborg, H.

    1989-01-01

    Convolutional codes have played and will play a key role in the downlink telemetry systems on many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief difficulties associated with the use of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to compute the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms in its power series expansion. This step is quite hard, and for many codes of relatively short constraint lengths, it can be intractable. However, a large class of convolutional codes was discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although for large constraint lengths, these codes have relatively low rates, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.
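
    For contrast with the closed-form results described above, here is a brief sketch of the brute-force step the abstract mentions: computing a code's free distance by a shortest-path search over its state diagram, with cost exponential in the constraint length. The bit-ordering convention is stated in the comments; the (7,5) example is the familiar memory-2 code.

```python
import heapq

def free_distance(generators, memory):
    """Free distance of a rate-1/n binary convolutional code via Dijkstra.

    generators : generator polynomials as integers (e.g. 0o7, 0o5); in the
                 register below, bit `memory` is the current input and bit 0
                 is the oldest stored input
    memory     : number of delay elements (constraint length minus one)
    """
    def step(state, bit):
        reg = (bit << memory) | state            # shift-register contents
        weight = sum(bin(reg & g).count("1") & 1 for g in generators)
        return reg >> 1, weight                  # next state, output Hamming weight

    # Start with the first nonzero input that leaves the all-zero state,
    # then find the minimum-weight path back to the all-zero state.
    start, w0 = step(0, 1)
    dist = {start: w0}
    heap = [(w0, start)]
    best = None
    while heap:
        w, s = heapq.heappop(heap)
        if w > dist.get(s, float("inf")):
            continue
        if s == 0:                               # returned to the zero state
            best = w if best is None else min(best, w)
            continue
        for bit in (0, 1):
            t, dw = step(s, bit)
            if w + dw < dist.get(t, float("inf")):
                dist[t] = w + dw
                heapq.heappush(heap, (w + dw, t))
    return best

print(free_distance([0o7, 0o5], memory=2))       # the classic (7,5) code: prints 5
```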

  9. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution. PMID:24356347
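
    A small sketch of the sampling side of this idea, under the assumption of a single contiguous per-pixel exposure "bump" within each frame interval; the dictionary-learning reconstruction is omitted, and all array shapes and the bump length are illustrative.

```python
import numpy as np

def coded_exposure_capture(video, bump_length=3, seed=0):
    """Simulate one coded image from a space-time volume.

    video : array of shape (T, H, W). Each pixel (y, x) integrates the scene
            over its own randomly placed window of `bump_length` frames,
            mimicking per-pixel exposure control (e.g. with an LCoS device).
    Returns the coded image and the binary shutter function S of shape (T, H, W).
    """
    T, H, W = video.shape
    rng = np.random.default_rng(seed)
    start = rng.integers(0, T - bump_length + 1, size=(H, W))
    t = np.arange(T)[:, None, None]
    S = ((t >= start) & (t < start + bump_length)).astype(float)
    coded = (S * video).sum(axis=0)              # one exposure per pixel, single readout
    return coded, S

# Toy usage: a bright square moving across the frame.
T, H, W = 8, 16, 16
video = np.zeros((T, H, W))
for k in range(T):
    video[k, 6:10, k:k + 4] = 1.0
coded, shutter = coded_exposure_capture(video)
```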

  10. Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions

    NASA Astrophysics Data System (ADS)

    Sutter, P. M.; Wandelt, B. D.; Elsner, F.

    2015-06-01

    We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially-symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10^-5 in the power spectrum of the output map.
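
    A flat two-dimensional analogue of the kernel-splitting idea, as a sketch only: the kernel is split into a compact "near" part handled by direct convolution and a remainder handled via the FFT, and the two partial results are summed. The paper does this on the sphere with spherical harmonic transforms and divides the work between a GPU and a CPU; the Gaussian kernel and split radius below are arbitrary.

```python
import numpy as np
from scipy.signal import convolve2d

def split_kernel_convolve(image, kernel, support):
    """Convolve `image` with `kernel` split into near (direct) + far (FFT) parts.

    kernel  : full (n, n) radially symmetric kernel with its centre at (n//2, n//2)
    support : half-width of the compact 'near' part done by direct convolution
    """
    n = kernel.shape[0]
    c = n // 2
    near = kernel[c - support:c + support + 1, c - support:c + support + 1].copy()
    far = kernel.copy()
    far[c - support:c + support + 1, c - support:c + support + 1] = 0.0

    out_near = convolve2d(image, near, mode="same", boundary="wrap")    # direct, compact part
    out_far = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                   np.fft.fft2(np.fft.ifftshift(far)))) # FFT part
    return out_near + out_far

# Toy usage: Gaussian kernel applied to a random map.
n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
kernel = np.exp(-(x**2 + y**2) / (2 * 4.0**2))
kernel /= kernel.sum()
image = np.random.randn(n, n)
smoothed = split_kernel_convolve(image, kernel, support=8)
```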

  11. Early breast tumor and late SARS detections using space-variant multispectral infrared imaging at a single pixel

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Buss, James R.; Kopriva, Ivica

    2004-04-01

    We proposed a physics approach to solve a physical inverse problem, namely choosing the unique equilibrium solution at the minimum free energy H = E - T0S, which includes the Wiener (least-mean-square E) and ICA (maximum S) solutions as special cases. "Unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data alone, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing at a single pixel in the real-world cases of remote sensing, early tumor detection, and SARS. The indeterminacy among the multiple solutions of the inverse problem is regulated, or selected, by means of the absolute minimum of the isothermal free energy as the ground truth of the local equilibrium condition at the single-pixel footprint.

  12. ARKCoS: artifact-suppressed accelerated radial kernel convolution on the sphere

    NASA Astrophysics Data System (ADS)

    Elsner, F.; Wandelt, B. D.

    2011-08-01

    We describe a hybrid Fourier/direct space convolution algorithm for compact radial (azimuthally symmetric) kernels on the sphere. For high resolution maps covering a large fraction of the sky, our implementation takes advantage of the inexpensive massive parallelism afforded by consumer graphics processing units (GPUs). Its applications include modeling of instrumental beam shapes in terms of compact kernels, computation of fine-scale wavelet transformations, and optimal filtering for the detection of point sources. Our algorithm works for any pixelization where pixels are grouped into isolatitude rings. Even for kernels that are not bandwidth-limited, ringing features are completely absent on an ECP grid. We demonstrate that they can be highly suppressed on the popular HEALPix pixelization, for which we develop a freely available implementation of the algorithm. As an example application, we show that running on a high-end consumer graphics card our method speeds up beam convolution for simulations of a characteristic Planck high frequency instrument channel by two orders of magnitude compared to the commonly used HEALPix implementation on one CPU core, while typically maintaining a fractional RMS accuracy of about 1 part in 10^5.

  13. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  14. Asymmetric quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    La Guardia, Giuliano G.

    2016-01-01

    In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.

  15. Do a bit more with convolution.

    PubMed

    Olsthoorn, Theo N

    2008-01-01

    Convolution is a form of superposition that efficiently deals with input varying arbitrarily in time or space. It works whenever superposition is applicable, that is, for linear systems. Even though convolution has been well known since the 19th century, this valuable method is still missing from most textbooks on ground water hydrology. This limits widespread application in this field. Perhaps most papers are too complex mathematically as they tend to focus on the derivation of analytical expressions rather than solving practical problems. However, convolution is straightforward with standard mathematical software or even a spreadsheet, as is demonstrated in the paper. The necessary system responses are not limited to analytic solutions; they may also be obtained by running an already existing ground water model for a single stress period until equilibrium is reached. With these responses, high-resolution time series of head or discharge may then be computed by convolution for arbitrary points and arbitrarily varying input, without further use of the model. There are probably thousands of applications in the field of ground water hydrology that may benefit from convolution. Therefore, its inclusion in ground water textbooks and courses is strongly needed. PMID:18181860
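
    A minimal sketch of the spreadsheet-style workflow the abstract advocates, under illustrative assumptions: the head response to a unit recharge pulse is taken here as a simple exponential store (in practice it could come from an analytical solution or from a single run of an existing groundwater model), and the recharge series is synthetic.

```python
import numpy as np

# Arbitrary daily recharge series (the time-varying input).
days = 365
rng = np.random.default_rng(1)
recharge = rng.gamma(shape=0.3, scale=2.0, size=days)   # synthetic recharge, mm/day

# Illustrative unit impulse response of head to one day of unit recharge:
# an exponential reservoir with characteristic time tau (stand-in for a
# system response obtained analytically or from one model run).
tau = 30.0                                              # days
t = np.arange(days)
unit_response = np.exp(-t / tau) / tau

# Convolution turns the unit response plus the arbitrary input into a
# high-resolution head time series, with no further model runs needed.
head = np.convolve(recharge, unit_response)[:days]
```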

  16. Spatial-Spectral Classification Based on the Unsupervised Convolutional Sparse Auto-Encoder for Hyperspectral Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Han, Xiaobing; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Current hyperspectral remote sensing imagery spatial-spectral classification methods mainly concatenate the spectral information vectors and spatial information vectors together. However, the combined spatial-spectral information vectors may cause information loss and concatenation deficiency for the classification task. To efficiently represent the spatial-spectral feature information around the central pixel within a neighbourhood window, the unsupervised convolutional sparse auto-encoder (UCSAE) with a window-in-window selection strategy is proposed in this paper. The window-in-window selection strategy selects the sub-window spatial-spectral information for spatial-spectral feature learning and extraction with the sparse auto-encoder (SAE). A convolution mechanism is then applied to the SAE features over the larger outer window. The UCSAE algorithm was validated on two common hyperspectral imagery (HSI) datasets - the Pavia University dataset and the Kennedy Space Centre (KSC) dataset - and shows an improvement over traditional hyperspectral spatial-spectral classification methods.

  17. Understanding deep convolutional networks.

    PubMed

    Mallat, Stéphane

    2016-04-13

    Deep convolutional networks provide state-of-the-art classification and regression results on many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and nonlinearities. A mathematical framework is introduced to analyse their properties. Computations of invariants involve multiscale contractions with wavelets, the linearization of hierarchical symmetries and sparse separations. Applications are discussed. PMID:26953183

  18. ENGage: The use of space and pixel art for increasing primary school children's interest in science, technology, engineering and mathematics

    NASA Astrophysics Data System (ADS)

    Roberts, Simon J.

    2014-01-01

    The Faculty of Engineering at The University of Nottingham, UK, has developed interdisciplinary, hands-on workshops for primary schools that introduce space technology, its relevance to everyday life and the importance of science, technology, engineering and maths. The workshop activities for 7-11 year olds highlight the roles that space and satellite technology play in observing and monitoring the Earth's biosphere as well as being vital to communications in the modern digital world. The programme also provides links to 'how science works', the environment and citizenship and uses pixel art through the medium of digital photography to demonstrate the importance of maths in a novel and unconventional manner. The interactive programme of activities provides learners with an opportunity to meet 'real' scientists and engineers, with one of the key messages from the day being that anyone can become involved in science and engineering whatever their ability or subject of interest. The methodology introduces the role of scientists and engineers using space technology themes, but it could easily be adapted for use with any inspirational topic. Analysis of learners' perceptions of science, technology, engineering and maths before and after participating in ENGage showed very positive and significant changes in their attitudes to these subjects and an increase in the number of children thinking they would be interested and capable in pursuing a career in science and engineering. This paper provides an overview of the activities, the methodology, the evaluation process and results.

  19. Correction of defective pixels for medical and space imagers based on Ising Theory

    NASA Astrophysics Data System (ADS)

    Cohen, Eliahu; Shnitser, Moriel; Avraham, Tsvika; Hadar, Ofer

    2014-09-01

    We propose novel models for image restoration based on statistical physics. We investigate the affinity between these fields and describe a framework from which interesting denoising algorithms can be derived: Ising-like models and simulated annealing techniques. When combined with known predictors such as Median and LOCO-I, these models become even more effective. In order to further examine the proposed models we apply them to two important problems: (i) Digital Cameras in space damaged from cosmic radiation. (ii) Ultrasonic medical devices damaged from speckle noise. The results, as well as benchmark and comparisons, suggest in most of the cases a significant gain in PSNR and SSIM in comparison to other filters.
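
    A bare-bones sketch of an Ising-like restoration with simulated annealing, for a binary (+/-1) image: the energy couples each pixel to its observation and to its four neighbours, and Metropolis sweeps with a decreasing temperature drive the image toward a low-energy (denoised) state. This illustrates the general framework only; the parameters, cooling schedule and the Median/LOCO-I predictor combinations of the paper are not reproduced.

```python
import numpy as np

def ising_denoise(noisy, beta=1.5, eta=2.0, sweeps=30, t0=4.0, seed=0):
    """Denoise a +/-1 image with an Ising-style prior and simulated annealing.

    Energy: E(x) = -eta * sum_i x_i * y_i - beta * sum_<ij> x_i * x_j,
    where y is the observed image and <ij> runs over 4-neighbour pairs.
    """
    rng = np.random.default_rng(seed)
    x = noisy.copy()
    H, W = x.shape
    for s in range(sweeps):
        T = t0 * (0.8 ** s)                              # geometric cooling schedule
        for _ in range(H * W):
            i, j = rng.integers(H), rng.integers(W)
            nb = (x[(i - 1) % H, j] + x[(i + 1) % H, j] +
                  x[i, (j - 1) % W] + x[i, (j + 1) % W])
            dE = 2 * x[i, j] * (eta * noisy[i, j] + beta * nb)   # energy change of a flip
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                x[i, j] = -x[i, j]
    return x

# Toy usage: restore a binary disc corrupted by 10% salt-and-pepper flips.
H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
clean = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 300, 1.0, -1.0)
flips = np.random.default_rng(2).random((H, W)) < 0.1
noisy = np.where(flips, -clean, clean)
restored = ising_denoise(noisy)
```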

  20. Two dimensional convolute integers for machine vision and image recognition

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas R.

    1988-01-01

    Machine vision and image recognition require sophisticated image processing prior to the application of Artificial Intelligence. Two Dimensional Convolute Integer Technology is an innovative mathematical approach for addressing machine vision and image recognition. This new technology generates a family of digital operators for addressing optical images and related two dimensional data sets. The operators are regression generated, integer valued, zero phase shifting, convoluting, frequency sensitive, two dimensional low pass, high pass and band pass filters that are mathematically equivalent to surface fitted partial derivatives. These operators are applied non-recursively either as classical convolutions (replacement point values), interstitial point generators (bandwidth broadening or resolution enhancement), or as missing value calculators (compensation for dead array element values). These operators show frequency-sensitive, scale-invariant feature selection properties. Such tasks as boundary/edge enhancement and noise or small size pixel disturbance removal can readily be accomplished. For feature selection, tight band pass operators are essential. Results from test cases are given.
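
    The operators described are, in essence, two-dimensional least-squares polynomial filters applied as convolutions. The sketch below builds the low-pass (replacement-point) member of such a family by fitting a bivariate polynomial over a window and keeping the weights that evaluate the fit at the window centre; it uses floating-point least squares rather than the integer-valued coefficients of the paper, and omits the derivative, interstitial and band-pass variants.

```python
import numpy as np
from scipy.ndimage import convolve

def smoothing_kernel_2d(half_width, order=2):
    """2-D least-squares polynomial smoothing kernel (replacement-point values)."""
    k = half_width
    y, x = np.mgrid[-k:k + 1, -k:k + 1]
    # Design matrix: one column per monomial x^p * y^q with p + q <= order.
    cols = [(x.ravel() ** p) * (y.ravel() ** q)
            for p in range(order + 1) for q in range(order + 1 - p)]
    A = np.stack(cols, axis=1).astype(float)
    # The row of the pseudo-inverse belonging to the constant term gives the
    # weights that evaluate the fitted surface at the window centre.
    weights = np.linalg.pinv(A)[0]
    return weights.reshape(2 * k + 1, 2 * k + 1)

# Toy usage: smooth a noisy ramp image while preserving its gentle gradient.
kernel = smoothing_kernel_2d(half_width=2, order=2)     # 5x5 quadratic smoother
rng = np.random.default_rng(3)
image = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
image += 0.1 * rng.standard_normal((64, 64))
smoothed = convolve(image, kernel, mode="reflect")
```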

  1. Spatio-spectral concentration of convolutions

    NASA Astrophysics Data System (ADS)

    Hanasoge, Shravan M.

    2016-05-01

    Differential equations may possess coefficients that vary on a spectrum of scales. Because coefficients are typically multiplicative in real space, they turn into convolution operators in spectral space, mixing all wavenumbers. However, in many applications, only the largest scales of the solution are of interest and so the question turns to whether it is possible to build effective coarse-scale models of the coefficients in such a manner that the large scales of the solution are left intact. Here we apply the method of numerical homogenisation to deterministic linear equations to generate sub-grid-scale models of coefficients at desired frequency cutoffs. We use the Fourier basis to project, filter and compute correctors for the coefficients. The method is tested in 1D and 2D scenarios and found to reproduce the coarse scales of the solution to varying degrees of accuracy depending on the cutoff. We relate this method to mode-elimination Renormalisation Group (RG) and discuss the connection between accuracy and the cutoff wavenumber. The tradeoff is governed by a form of the uncertainty principle for convolutions, which states that as the convolution operator is squeezed in the spectral domain, it broadens in real space. As a consequence, basis sparsity is a high virtue and the choice of the basis can be critical.

  2. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  3. a Convolutional Network for Semantic Facade Segmentation and Interpretation

    NASA Astrophysics Data System (ADS)

    Schmitz, Matthias; Mayer, Helmut

    2016-06-01

    In this paper we present an approach for semantic interpretation of facade images based on a Convolutional Network. Our network processes the input images in a fully convolutional way and generates pixel-wise predictions. We show that there is no need for large datasets to train the network when transfer learning is employed, i.e., a part of an already existing network is used and fine-tuned, and when the available data is augmented by using deformed patches of the images for training. The network is trained end-to-end with patches of the images and each patch is augmented independently. To undo the downsampling for the classification, we add deconvolutional layers to the network. Outputs of different layers of the network are combined to achieve more precise pixel-wise predictions. We demonstrate the potential of our network based on results for the eTRIMS (Korč and Förstner, 2009) dataset reduced to facades.
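
    A compact PyTorch sketch of the general pattern the abstract describes: a fully convolutional encoder, 1x1 convolutions producing class scores, transposed ("deconvolutional") layers undoing the downsampling, and a combination of an earlier layer's output with the upsampled prediction for sharper pixel-wise labels. Layer sizes, names and the number of classes are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TinyFacadeFCN(nn.Module):
    """Minimal fully convolutional network for pixel-wise facade class scores."""

    def __init__(self, num_classes=5):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())   # 1/2 scale
        self.conv2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())  # 1/4 scale
        self.score2 = nn.Conv2d(64, num_classes, 1)        # coarse class scores at 1/4 scale
        self.score1 = nn.Conv2d(32, num_classes, 1)        # skip-connection scores at 1/2 scale
        self.up2 = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)   # 1/4 -> 1/2
        self.up1 = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)   # 1/2 -> full

    def forward(self, x):
        f1 = self.conv1(x)
        f2 = self.conv2(f1)
        coarse = self.up2(self.score2(f2))                 # upsample the coarse prediction
        fused = coarse + self.score1(f1)                   # combine with an earlier layer
        return self.up1(fused)                             # full-resolution class scores

# Toy usage on a single 3-channel 128x128 patch.
model = TinyFacadeFCN(num_classes=5)
scores = model(torch.randn(1, 3, 128, 128))                # -> (1, 5, 128, 128)
labels = scores.argmax(dim=1)                              # per-pixel class labels
```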

  4. Determinate-state convolutional codes

    NASA Technical Reports Server (NTRS)

    Collins, O.; Hizlan, M.

    1991-01-01

    A determinate state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity and the free distances of these new codes are analyzed, and extensive simulation results are provided for their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.

  5. The effect of whitening transformation on pooling operations in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua

    2015-12-01

    Convolutional autoencoders (CAEs) are unsupervised feature extractors for high-resolution images. In the pre-processing step, whitening transformation has widely been adopted to remove redundancy by making adjacent pixels less correlated. Pooling is a biologically inspired operation to reduce the resolution of feature maps and achieve spatial invariance in convolutional neural networks. Conventionally, pooling methods are mainly determined empirically in most previous work. Therefore, our main purpose is to study the relationship between whitening processing and pooling operations in convolutional autoencoders for image classification. We propose an adaptive pooling approach based on the concepts of information entropy to test the effect of whitening on pooling in different conditions. Experimental results on benchmark datasets indicate that the performance of pooling strategies is associated with the distribution of feature activations, which can be affected by whitening processing. This provides guidance for the selection of pooling methods in convolutional autoencoders and other convolutional neural networks.
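
    For reference, a small sketch of the whitening pre-processing step referred to above: ZCA whitening of flattened image patches, which decorrelates adjacent pixels and equalises their variance. The patch size, regularisation constant and data are illustrative, and the CAE and pooling stages themselves are not shown.

```python
import numpy as np

def zca_whiten(patches, eps=1e-2):
    """ZCA-whiten flattened image patches (one patch per row).

    Centres the data, estimates the pixel covariance, and applies
    W = U diag(1/sqrt(s + eps)) U^T so adjacent pixels become
    approximately decorrelated with unit variance.
    """
    mean = patches.mean(axis=0)
    X = patches - mean
    cov = X.T @ X / X.shape[0]
    U, s, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(s + eps)) @ U.T
    return X @ W, W, mean

# Toy usage: whiten 8x8 patches taken from a random "image".
rng = np.random.default_rng(4)
image = rng.standard_normal((256, 256))
patches = np.stack([image[i:i + 8, j:j + 8].ravel()
                    for i in range(0, 248, 8) for j in range(0, 248, 8)])
white, W, mean = zca_whiten(patches)
```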

  6. Image statistics decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Pitt, G. H., III; Swanson, L.; Yuen, J. H.

    1987-01-01

    It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so the feasibility of it and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.

  7. Entanglement-assisted quantum convolutional coding

    SciTech Connect

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  8. The time-space relationship of the data point (Pixels) of the thematic mapper and multispectral scanner or the myth of simultaneity

    NASA Technical Reports Server (NTRS)

    Gordon, F., Jr.

    1980-01-01

    A simplified explanation of the time-space relationships among scanner pixels is presented. The examples of the multispectral scanner (MSS) on Landsats 1, 2, and 3 and the thematic mapper (TM) of Landsat D are used to describe the concept and degree of nonsimultaneity of scanning-system data. The time aspects of scanner data acquisition and those parts of the MSS and TM systems related to that phenomenon are addressed.

  9. Exploring the Hidden Structure of Astronomical Images: A "Pixelated" View of Solar System and Deep Space Features!

    ERIC Educational Resources Information Center

    Ward, R. Bruce; Sienkiewicz, Frank; Sadler, Philip; Antonucci, Paul; Miller, Jaimie

    2013-01-01

    We describe activities created to help student participants in Project ITEAMS (Innovative Technology-Enabled Astronomy for Middle Schools) develop a deeper understanding of picture elements (pixels), image creation, and analysis of the recorded data. ITEAMS is an out-of-school time (OST) program funded by the National Science Foundation (NSF) with…

  10. PIXEL PUSHER

    NASA Technical Reports Server (NTRS)

    Stanfill, D. F.

    1994-01-01

    Pixel Pusher is a Macintosh application used for viewing and performing minor enhancements on imagery. It will read image files in JPL's two primary image formats - VICAR and PDS - as well as the Macintosh PICT format. VICAR (NPO-18076) handles an array of image processing capabilities which may be used for a variety of applications including biomedical image processing, cartography, earth resources, and geological exploration. Pixel Pusher can also import VICAR format color lookup tables for viewing images in pseudocolor (256 colors). This program currently supports only eight bit images but will work on monitors with any number of colors. Arbitrarily large image files may be viewed in a normal Macintosh window. Color and contrast enhancement can be performed with a graphical "stretch" editor (as in contrast stretch). In addition, VICAR images may be saved as Macintosh PICT files for exporting into other Macintosh programs, and individual pixels can be queried to determine their locations and actual data values. Pixel Pusher is written in Symantec's Think C and was developed for use on a Macintosh SE30, LC, or II series computer running System Software 6.0.3 or later and 32 bit QuickDraw. Pixel Pusher will only run on a Macintosh which supports color (whether a color monitor is being used or not). The standard distribution medium for this program is a set of three 3.5 inch Macintosh format diskettes. The program price includes documentation. Pixel Pusher was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Think C is a trademark of Symantec Corporation. Macintosh is a registered trademark of Apple Computer, Inc.
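
    For illustration, a minimal linear "stretch" of the kind a graphical contrast editor applies (a generic sketch, not Pixel Pusher's actual code): a chosen [lo, hi] data range is mapped onto the 0-255 display range of an 8-bit image.

      import numpy as np

      def contrast_stretch(img, lo, hi):
          # scale [lo, hi] to [0, 1], clip everything outside, expand to 8 bits
          scaled = (img.astype(float) - lo) / (hi - lo)
          return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

      img = np.array([[10, 50], [120, 200]], dtype=np.uint8)
      print(contrast_stretch(img, 50, 200))     # 50 maps to 0, 200 maps to 255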

  11. Simplified Convolution Codes

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Reed, I. S.

    1985-01-01

    A simple recursive algorithm efficiently calculates minimum-weight error vectors using Diophantine equations. The recursive algorithm uses the general solution of a polynomial linear Diophantine equation to determine the minimum-weight error polynomial vector in polynomial space.

  12. Convolution formulations for non-negative intensity.

    PubMed

    Williams, Earl G

    2013-08-01

    Previously unknown spatial convolution formulas for a variant of the active normal intensity in planar coordinates have been derived that use measured pressure or normal velocity near-field holograms to construct a positive-only (outward) intensity distribution in the plane, quantifying the areas of the vibrating structure that produce radiation to the far-field. This is an extension of the outgoing-only (unipolar) intensity technique recently developed for arbitrary geometries by Steffen Marburg. The method is applied independently to pressure and velocity data measured in a plane close to the surface of a point-driven, unbaffled rectangular plate in the laboratory. It is demonstrated that the sound producing regions of the structure are clearly revealed using the derived formulas and that the spatial resolution is limited to a half-wavelength. A second set of formulas called the hybrid-intensity formulas are also derived which yield a bipolar intensity using a different spatial convolution operator, again using either the measured pressure or velocity. It is demonstrated from the experiment results that the velocity formula yields the classical active intensity and the pressure formula an interesting hybrid intensity that may be useful for source localization. Computations are fast and carried out in real space without Fourier transforms into wavenumber space. PMID:23927105

  13. Pixelation Effects in Weak Lensing

    NASA Astrophysics Data System (ADS)

    High, F. William; Rhodes, Jason; Massey, Richard; Ellis, Richard

    2007-11-01

    Weak gravitational lensing can be used to investigate both dark matter and dark energy but requires accurate measurements of the shapes of faint, distant galaxies. Such measurements are hindered by the finite resolution and pixel scale of digital cameras. We investigate the optimum choice of pixel scale for a space-based mission, using the engineering model and survey strategy of the proposed Supernova Acceleration Probe as a baseline. We do this by simulating realistic astronomical images containing a known input shear signal and then attempting to recover the signal using the Rhodes, Refregier, & Groth algorithm. We find that the quality of shear measurement is always improved by smaller pixels. However, in practice, telescopes are usually limited to a finite number of pixels and operational life span, so the total area of a survey increases with pixel size. We therefore fix the survey lifetime and the number of pixels in the focal plane while varying the pixel scale, thereby effectively varying the survey size. In a pure trade-off for image resolution versus survey area, we find that measurements of the matter power spectrum would have minimum statistical error with a pixel scale of 0.09" for a 0.14" FWHM point-spread function (PSF). The pixel scale could be increased to ~0.16" if images dithered by exactly half-pixel offsets were always available. Some of our results do depend on our adopted shape measurement method and should be regarded as an upper limit: future pipelines may require smaller pixels to overcome systematic floors not yet accessible, and, in certain circumstances, measuring the shape of the PSF might be more difficult than those of galaxies. However, the relative trends in our analysis are robust, especially those of the surface density of resolved galaxies. Our approach thus provides a snapshot of potential in available technology, and a practical counterpart to analytic studies of pixelation, which necessarily assume an idealized shape

  14. Pixelation Effects in Weak Lensing

    NASA Technical Reports Server (NTRS)

    High, F. William; Rhodes, Jason; Massey, Richard; Ellis, Richard

    2007-01-01

    Weak gravitational lensing can be used to investigate both dark matter and dark energy but requires accurate measurements of the shapes of faint, distant galaxies. Such measurements are hindered by the finite resolution and pixel scale of digital cameras. We investigate the optimum choice of pixel scale for a space-based mission, using the engineering model and survey strategy of the proposed Supernova Acceleration Probe as a baseline. We do this by simulating realistic astronomical images containing a known input shear signal and then attempting to recover the signal using the Rhodes, Refregier, and Groth algorithm. We find that the quality of shear measurement is always improved by smaller pixels. However, in practice, telescopes are usually limited to a finite number of pixels and operational life span, so the total area of a survey increases with pixel size. We therefore fix the survey lifetime and the number of pixels in the focal plane while varying the pixel scale, thereby effectively varying the survey size. In a pure trade-off for image resolution versus survey area, we find that measurements of the matter power spectrum would have minimum statistical error with a pixel scale of 0.09" for a 0.14" FWHM point-spread function (PSF). The pixel scale could be increased to 0.16" if images dithered by exactly half-pixel offsets were always available. Some of our results do depend on our adopted shape measurement method and should be regarded as an upper limit: future pipelines may require smaller pixels to overcome systematic floors not yet accessible, and, in certain circumstances, measuring the shape of the PSF might be more difficult than those of galaxies. However, the relative trends in our analysis are robust, especially those of the surface density of resolved galaxies. Our approach thus provides a snapshot of potential in available technology, and a practical counterpart to analytic studies of pixelation, which necessarily assume an idealized shape

  15. Approximating large convolutions in digital images.

    PubMed

    Mount, D M; Kanungo, T; Netanyahu, N S; Piatko, C; Silverman, R; Wu, A Y

    2001-01-01

    Computing discrete two-dimensional (2-D) convolutions is an important problem in image processing. In mathematical morphology, an important variant is that of computing binary convolutions, where the kernel of the convolution is a 0-1 valued function. This operation can be quite costly, especially when large kernels are involved. We present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements. We say that such a convolution is valid. Given this flexibility we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm which, given an m x n image and a k-sided convex polygonal kernel K, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's (1965) line-drawing algorithm and prefix-sums to update the convolution incrementally as the kernel is moved from one position to another across the image. PMID:18255522
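
    The algorithm above handles convex polygonal kernels with Bresenham-style incremental updates; the sketch below shows only the underlying prefix-sum idea on the simplest convex kernel, an axis-aligned k x k box, where each output value costs O(1) regardless of kernel size. It is an illustrative simplification, not the paper's algorithm.

      import numpy as np

      def box_convolve(img, k):
          # summed-area table (2-D prefix sums) with a zero border
          sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
          sat[1:, 1:] = img.cumsum(0).cumsum(1)
          # sum of every k x k window via four table lookups (valid placements only)
          return sat[k:, k:] - sat[:-k, k:] - sat[k:, :-k] + sat[:-k, :-k]

      img = np.arange(16, dtype=float).reshape(4, 4)
      print(box_convolve(img, 2))               # 3 x 3 array of window sums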

  16. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  17. The Convolution Method in Neutrino Physics Searches

    SciTech Connect

    Tsakstara, V.; Kosmas, T. S.; Chasioti, V. C.; Divari, P. C.; Sinatkas, J.

    2007-12-26

    We concentrate on the convolution method used in nuclear and astro-nuclear physics studies and, in particular, in the investigation of the nuclear response of various neutrino detection targets to the energy-spectra of specific neutrino sources. Since the reaction cross sections of the neutrinos with nuclear detectors employed in experiments are extremely small, very fine and fast convolution techniques are required. Furthermore, sophisticated de-convolution methods are also needed whenever a comparison between calculated unfolded cross sections and existing convoluted results is necessary.

  18. Spectrally tunable pixel sensors

    NASA Astrophysics Data System (ADS)

    Langfelder, G.; Buffa, C.; Longoni, A. F.; Zaraga, F.

    2013-01-01

    We report here the development and experimental results of fully operating matrices of spectrally tunable pixels based on the Transverse Field Detector (TFD). Unlike several digital imaging sensors based on color filter arrays or layered junctions, the TFD has the peculiar feature of electrically tunable spectral sensitivities. In this way the sensor color space is not fixed a priori but can be adjusted in real time, e.g. for a better adaptation to the scene content or for multispectral capture. These advantages come at the cost of increased complexity both for the photosensitive elements and for the readout electronics. The challenges in the realization of a matrix of TFD pixels are analyzed in this work. First experimental results on an 8x8 (x 3 colors) and on a 64x64 (x 3 colors) matrix are presented and analyzed in terms of colorimetric and noise performance, and compared to simulation predictions.

  19. Convolution-deconvolution in DIGES

    SciTech Connect

    Philippacopoulos, A.J.; Simos, N.

    1995-05-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.
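
    As a generic illustration of deconvolution through a layer transfer function in the frequency domain (a sketch of the general idea only, not the DIGES code; the transfer function below is a made-up placeholder):

      import numpy as np

      def deconvolve(surface_motion, transfer_fn, dt, eps=1e-3):
          # divide spectra by the transfer function, with simple regularization
          n = len(surface_motion)
          freqs = np.fft.rfftfreq(n, d=dt)
          spec = np.fft.rfft(surface_motion)
          h = transfer_fn(freqs)
          base_spec = spec / (h + eps * np.max(np.abs(h)))   # avoid division by ~0
          return np.fft.irfft(base_spec, n)

      H = lambda f: 1.0 + 0.5 * np.exp(-(f / 5.0) ** 2)      # placeholder amplification
      motion = np.random.randn(512)
      print(deconvolve(motion, H, dt=0.01).shape)            # (512,)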

  20. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  1. Pixel Perfect

    SciTech Connect

    Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.; Sowa, Marianne B.

    2005-09-01

    cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.

  2. Symbol synchronization in convolutionally coded systems

    NASA Technical Reports Server (NTRS)

    Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.

    1979-01-01

    Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.

  3. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  4. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  5. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
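
    For contrast with the syndrome-based approach above, here is a compact hard-decision Viterbi decoder sketch for the rate-1/2, constraint length 3 code with generators (7, 5) octal; the code parameters are illustrative assumptions, not taken from the paper.

      # Hard-decision Viterbi decoding: track the cheapest path into each trellis
      # state, then read back the input bits of the best surviving path.
      G = [(1, 1, 1), (1, 0, 1)]                  # generator taps

      def step(state, bit):
          # state holds the last two input bits; return (next_state, output pair)
          window = (bit, state >> 1, state & 1)
          out = tuple(sum(g * w for g, w in zip(gen, window)) % 2 for gen in G)
          return (bit << 1) | (state >> 1), out

      def viterbi(received):                      # received: list of 2-bit tuples
          metric, paths = {0: 0}, {0: []}         # start in the all-zero state
          for r in received:
              new_metric, new_paths = {}, {}
              for s, m in metric.items():
                  for b in (0, 1):
                      ns, out = step(s, b)
                      cost = m + sum(o != ri for o, ri in zip(out, r))
                      if ns not in new_metric or cost < new_metric[ns]:
                          new_metric[ns], new_paths[ns] = cost, paths[s] + [b]
              metric, paths = new_metric, new_paths
          return paths[min(metric, key=metric.get)]

      rx = [(1, 1), (1, 0), (0, 0), (0, 1)]       # encoding of [1, 0, 1, 1]
      print(viterbi(rx))                          # [1, 0, 1, 1]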

  6. A Discriminative Representation of Convolutional Features for Indoor Scene Recognition

    NASA Astrophysics Data System (ADS)

    Khan, Salman H.; Hayat, Munawar; Bennamoun, Mohammed; Togneri, Roberto; Sohel, Ferdous A.

    2016-07-01

    Indoor scene recognition is a multi-faceted and challenging problem due to the diverse intra-class variations and the confusing inter-class similarities. This paper presents a novel approach which exploits rich mid-level convolutional features to categorize indoor scenes. Traditionally used convolutional features preserve the global spatial structure, which is a desirable property for general object recognition. However, we argue that this structuredness is not much helpful when we have large variations in scene layouts, e.g., in indoor scenes. We propose to transform the structured convolutional activations to another highly discriminative feature space. The representation in the transformed space not only incorporates the discriminative aspects of the target dataset, but it also encodes the features in terms of the general object categories that are present in indoor scenes. To this end, we introduce a new large-scale dataset of 1300 object categories which are commonly present in indoor scenes. Our proposed approach achieves a significant performance boost over previous state of the art approaches on five major scene classification datasets.

  7. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm

    PubMed Central

    2012-01-01

    Background: Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain and breast. Each practical EIT reconstruction algorithm should be efficient enough in terms of convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods: At the first step, synthetic and experimental data were used to compute an intermediate object named the scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via the sinc-convolution algorithm to find the square root of the conductivity for each pixel of the image. For the purpose of comparison, multigrid and NOSER algorithms were implemented under a similar setting. The quality of reconstructions of synthetic models was tested against GREIT-approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results: Evaluation of synthetic reconstructions shows that the quality of sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the least degree of relative errors and the most degree of truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by the sinc-convolution algorithm. Conclusions: Parametric evaluation demonstrates the efficiency of sinc-convolution to reconstruct accurate conductivity images from experimental data. Excellent results in phantom and clinical reconstructions using sinc-convolution

  8. Bernoulli convolutions and 1D dynamics

    NASA Astrophysics Data System (ADS)

    Kempton, Tom; Persson, Tomas

    2015-10-01

    We describe a family {φλ} of dynamical systems on the unit interval which preserve Bernoulli convolutions. We show that if there are parameter ranges for which these systems are piecewise convex, then the corresponding Bernoulli convolution will be absolutely continuous with bounded density. We study the systems {φλ} and give some numerical evidence to suggest values of λ for which {φλ} may be piecewise convex.

  9. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  10. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. PMID:26700535
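
    A minimal sketch of the general idea of injecting noise during training, in which Gaussian noise is simply added to the output-layer activations of a small placeholder network; it omits the EM-derived hyperplane condition the paper uses to choose beneficial noise, and the network, noise scale and data are assumptions.

      import torch
      import torch.nn as nn

      net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Flatten(), nn.Linear(8 * 28 * 28, 10))
      opt = torch.optim.SGD(net.parameters(), lr=0.01)
      loss_fn = nn.CrossEntropyLoss()

      x = torch.randn(32, 1, 28, 28)              # stand-in for an MNIST batch
      y = torch.randint(0, 10, (32,))

      for step in range(5):
          logits = net(x)
          noisy = logits + 0.1 * torch.randn_like(logits)   # output-layer noise
          loss = loss_fn(noisy, y)
          opt.zero_grad()
          loss.backward()
          opt.step()
          print(step, float(loss))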

  11. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum-weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsymmetric (2, 1) CC.

  12. SOI monolithic pixel detector

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.; Ahmed, M. I.; Arai, Y.; Fujita, Y.; Ikemoto, Y.; Takeda, A.; Tauchi, K.

    2014-05-01

    We are developing a monolithic pixel detector using fully-depleted (FD) silicon-on-insulator (SOI) pixel process technology. The SOI substrate is high-resistivity silicon with p-n junctions, and the other layer is low-resistivity silicon for the SOI-CMOS circuitry. Tungsten vias are used for the connection between the two silicon layers. Since no flip-chip bump-bonding process is used, high sensor gain can be obtained in a small pixel area. In 2010 and 2011, high-resolution integration-type SOI pixel sensors, DIPIX and INTPIX5, were developed. They were characterized by evaluating pixel-to-pixel crosstalk, quantum efficiency (QE), dark noise, and energy resolution. Phase-contrast imaging was demonstrated using the INTPIX5 pixel sensor for an X-ray application. Current issues and future prospects are also discussed.

  13. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to those codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.

  14. Number-Theoretic Functions via Convolution Rings.

    ERIC Educational Resources Information Center

    Berberian, S. K.

    1992-01-01

    Demonstrates the number theory property that the number of divisors of an integer n times the number of positive integers k, less than or equal to and relatively prime to n, equals the sum of the divisors of n using theory developed about multiplicative functions, the units of a convolution ring, and the Mobius Function. (MDH)

  15. PixelLearn

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph

    2006-01-01

    PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.

  16. High stroke pixel for a deformable mirror

    DOEpatents

    Miles, Robin R.; Papavasiliou, Alexandros P.

    2005-09-20

    A mirror pixel that can be fabricated using standard MEMS methods for a deformable mirror. The pixel is electrostatically actuated and is capable of the high deflections needed for space-based mirror applications. In one embodiment, the mirror comprises three layers, a top or mirror layer, a middle layer which consists of flexures, and a comb drive layer, with the flexures of the middle layer attached to the mirror layer and to the comb drive layer. The comb drives are attached to a frame via spring flexures. A number of these mirror pixels can be used to construct a large mirror assembly. The actuator for the mirror pixel may be configured as a crenellated beam with one end fixedly secured, or configured as a scissor jack. The mirror pixels may be used in various applications requiring high stroke adaptive optics.

  17. About closedness by convolution of the Tsallis maximizers

    NASA Astrophysics Data System (ADS)

    Vignat, C.; Hero, A. O., III; Costa, J. A.

    2004-09-01

    In this paper, we study the stability under convolution of the maximizing distributions of the Tsallis entropy under an energy constraint (called hereafter Tsallis distributions). These distributions are shown to obey three important properties: a stochastic representation property, an orthogonal invariance property and a duality property. As a consequence of these properties, the behavior of Tsallis distributions under convolution is characterized. Finally, a special random convolution, called the Kingman convolution, is shown to ensure the stability of Tsallis distributions.

  18. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  19. A convolutional neural network neutrino event classifier

    DOE PAGESBeta

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  20. Quantum convolutional codes derived from constacyclic codes

    NASA Astrophysics Data System (ADS)

    Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng

    2014-12-01

    In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.

  1. Satellite image classification using convolutional learning

    NASA Astrophysics Data System (ADS)

    Nguyen, Thao; Han, Jiho; Park, Dong-Chul

    2013-10-01

    A satellite image classification method using Convolutional Neural Network (CNN) architecture is proposed in this paper. As a special case of deep learning, CNN classifies classes of images without any feature extraction step while other existing classification methods utilize rather complex feature extraction processes. Experiments on a set of satellite image data and the preliminary results show that the proposed classification method can be a promising alternative over existing feature extraction-based schemes in terms of classification accuracy and classification speed.

  2. Convolutional Neural Network Based dem Super Resolution

    NASA Astrophysics Data System (ADS)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples, and a nonlocal algorithm was introduced to deal with it; many experiments showed that the strategy is feasible. In that publication, the learning examples were defined as parts of the original DEM and their corresponding high-resolution measurements, because this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain; yet this may cause problems of incompatibility and lack of robustness. To overcome this, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low-resolution DEM and the output is expected to be its high-resolution counterpart. A three-layer model is adopted. The first layer detects features in the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. According to this designed structure, some learning DEMs are used to train it. Specifically, the designed network is optimized by minimizing the error between the output and its expected high-resolution DEM. In practical applications, a test DEM is input to the convolutional neural network and a super-resolution DEM is obtained. Many experiments show that the CNN-based method can obtain better reconstructions than many classic interpolation methods.
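
    A sketch of the three-layer design described above (feature detection, feature compression, reconstruction), modelled on SRCNN-style networks; the channel counts and kernel sizes are assumptions, and the input is taken to be a low-resolution DEM already upsampled to the target grid.

      import torch
      import torch.nn as nn

      class DEMSuperRes(nn.Module):
          def __init__(self):
              super().__init__()
              self.detect = nn.Conv2d(1, 64, 9, padding=4)     # layer 1: detect features
              self.compress = nn.Conv2d(64, 32, 1)             # layer 2: compress features
              self.rebuild = nn.Conv2d(32, 1, 5, padding=2)    # layer 3: rebuild the DEM
              self.relu = nn.ReLU()

          def forward(self, dem):
              x = self.relu(self.detect(dem))
              x = self.relu(self.compress(x))
              return self.rebuild(x)

      model = DEMSuperRes()
      coarse = torch.randn(1, 1, 128, 128)        # upsampled low-resolution DEM
      target = torch.randn(1, 1, 128, 128)        # expected high-resolution DEM
      loss = nn.MSELoss()(model(coarse), target)  # train by minimizing output error
      loss.backward()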

  3. Blind Identification of Convolutional Encoder Parameters

    PubMed Central

    Su, Shaojing; Zhou, Jing; Huang, Zhiping; Liu, Chunwu; Zhang, Yimeng

    2014-01-01

    This paper gives a solution to the blind parameter identification of a convolutional encoder. The problem can be addressed in the context of noncooperative communications or adaptive coding and modulation (ACM) for cognitive radio networks. We consider an intelligent communication receiver which can blindly recognize the coding parameters of the received data stream. The only knowledge is that the stream is encoded using binary convolutional codes, while the coding parameters are unknown. Several previous studies have made significant contributions to the recognition of convolutional encoder parameters in hard-decision situations. However, soft-decision systems are applied more and more as signal processing techniques improve. In this paper we propose a method to utilize the soft information to improve the recognition performance in soft-decision communication systems. In addition, we propose a new recognition method based on a correlation attack to handle low signal-to-noise ratio situations. Finally, we give simulation results to show the efficiency of the proposed methods. PMID:24982997

  4. High density pixel array

    NASA Technical Reports Server (NTRS)

    Wiener-Avnear, Eliezer (Inventor); McFall, James Earl (Inventor)

    2004-01-01

    A pixel array device is fabricated by a laser micro-milling method under strict process control conditions. The device has an array of pixels bonded together with an adhesive filling the grooves between adjacent pixels. The array is fabricated by moving a substrate relative to a laser beam of predetermined intensity at a controlled, constant velocity along a predetermined path defining a set of grooves between adjacent pixels so that a predetermined laser flux per unit area is applied to the material, and repeating the movement for a plurality of passes of the laser beam until the grooves are ablated to a desired depth. The substrate is of an ultrasonic transducer material in one example for fabrication of a 2D ultrasonic phase array transducer. A substrate of phosphor material is used to fabricate an X-ray focal plane array detector.

  5. Pixel-One

    NASA Astrophysics Data System (ADS)

    Pedichini, F.; Di Paola, A.; Testa, V.

    2010-07-01

    The early future of astronomy will be dominated by Extremely Large Telescopes where the focal lengths will be of the order of several hundred meters. This yields focal plane sizes of roughly one square meter to obtain a field of view of about 5 x 5 arcmin. When operated in seeing limited mode this field is correctly sampled with 1x1mm pixels. Such a sampling can be achieved using a peculiar array of tiny CMOS active photodiodes illuminated through microlenses or lightpipes. If the photodiode is small enough and utilizes the actual pixel technology, its dark current can be kept well below the sky background photocurrent, thus avoiding the use of cumbersome cryogenics systems. An active smart electronics will manage each pixel up to the A/D conversion and data transfer. This modular block is the Pixel-One. A 30x30 mm tile filled with 1000 Pixel-Ones could be the basic unit to mosaic very large focal planes. By inserting dispersion elements inside the optical path of the lenslet array one could also produce a low dispersed spectrum of each focal plane sub-aperture and, by using an array of few smart photodiodes, also get multi-wavelength information in the optical band for each equivalent focal plane pixel. An application to the E-ELT is proposed.

  6. Convolution and non convolution Perfectly Matched Layer techniques optimized at grazing incidence for high-order wave propagation modelling

    NASA Astrophysics Data System (ADS)

    Martin, Roland; Komatitsch, Dimitri; Bruthiaux, Emilien; Gedney, Stephen D.

    2010-05-01


  7. Invariant Descriptor Learning Using a Siamese Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Chen, L.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    In this paper we describe learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module and a cost computation module that is based on the L2 Norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors for non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving average strategy for gradients and Nesterov's Accelerated Gradient. Experiments show that our learned descriptor reaches a good performance and achieves state-of-the-art results in terms of the false positive rate at a 95% recall rate on standard benchmark datasets.
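
    A condensed Siamese setup of the kind described above, assuming a small patch-descriptor CNN with shared weights and a contrastive-style L2 cost that pulls matching pairs together and pushes non-matching pairs apart by a margin; the architecture and margin are illustrative, not the paper's.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      descriptor = nn.Sequential(                       # shared-weight branch
          nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
          nn.Flatten(), nn.Linear(64, 128))

      def contrastive_loss(d1, d2, match, margin=1.0):
          dist = F.pairwise_distance(d1, d2)
          return torch.mean(match * dist ** 2 +
                            (1 - match) * torch.clamp(margin - dist, min=0) ** 2)

      a, b = torch.randn(16, 1, 32, 32), torch.randn(16, 1, 32, 32)
      match = torch.randint(0, 2, (16,)).float()        # 1 = matching patch pair
      loss = contrastive_loss(descriptor(a), descriptor(b), match)
      loss.backward()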

  8. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summary. Program title: QCDNUM, version 17.00. Catalogue identifier: AEHV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Public Licence. No. of lines in distributed program, including test data, etc.: 45 736. No. of bytes in distributed program, including test data, etc.: 911 569. Distribution format: tar.gz. Programming language: Fortran-77. Computer: All. Operating system: All. RAM: Typically 3 Mbytes. Classification: 11.5. Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD; computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline

  9. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  10. Convolution neural networks for ship type recognition

    NASA Astrophysics Data System (ADS)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  11. Selecting Pixels for Kepler Downlink

    NASA Technical Reports Server (NTRS)

    Bryson, Stephen T.; Jenkins, Jon M.; Klaus, Todd C.; Cote, Miles T.; Quintana, Elisa V.; Hall, Jennifer R.; Ibrahim, Khadeejah; Chandrasekaran, Hema; Caldwell, Douglas A.; Van Cleve, Jeffrey E.; Haas, Michael R.

    2010-01-01

    The Kepler mission monitors > 100,000 stellar targets using 42 2200 x 1024 pixel CCDs. Bandwidth constraints prevent the downlink of all 96 million pixels per 30-minute cadence, so the Kepler spacecraft downlinks a specified collection of pixels for each target. These pixels are selected by considering the object brightness, background and the signal-to-noise of each pixel, and are optimized to maximize the signal-to-noise ratio of the target. This paper describes pixel selection, creation of spacecraft apertures that efficiently capture selected pixels, and aperture assignment to a target. Diagnostic apertures, short-cadence targets and custom specified shapes are discussed.
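
    A toy version of signal-to-noise-driven pixel selection: pixels are added in order of brightness while the aggregate aperture signal-to-noise ratio still improves. The noise model and data are placeholders for illustration, not the Kepler pipeline's.

      import numpy as np

      def select_pixels(flux, background, read_noise=10.0):
          order = np.argsort(flux.ravel())[::-1]          # brightest pixels first
          signal = noise_sq = 0.0
          chosen, best_snr = [], 0.0
          for idx in order:
              f = flux.ravel()[idx]
              signal += f
              noise_sq += f + background + read_noise ** 2   # shot + sky + read noise
              snr = signal / np.sqrt(noise_sq)
              if snr <= best_snr:                         # adding this pixel hurts
                  break
              best_snr = snr
              chosen.append(idx)
          return np.unravel_index(chosen, flux.shape), best_snr

      flux = np.random.exponential(50.0, (11, 11))
      flux[5, 5] = 5000.0                                 # the target star
      print(select_pixels(flux, background=100.0))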

  12. Convolutional fountain distribution over fading wireless channels

    NASA Astrophysics Data System (ADS)

    Usman, Mohammed

    2012-08-01

    Mobile broadband has opened the possibility of a rich variety of services to end users. Broadcast/multicast of multimedia data is one such service which can be used to deliver multimedia to multiple users economically. However, the radio channel poses serious challenges due to its time-varying properties, resulting in each user experiencing different channel characteristics, independent of other users. Conventional methods of achieving reliability in communication, such as automatic repeat request and forward error correction do not scale well in a broadcast/multicast scenario over radio channels. Fountain codes, being rateless and information additive, overcome these problems. Although the design of fountain codes makes it possible to generate an infinite sequence of encoded symbols, the erroneous nature of radio channels mandates the need for protecting the fountain-encoded symbols, so that the transmission is feasible. In this article, the performance of fountain codes in combination with convolutional codes, when used over radio channels, is presented. An investigation of various parameters, such as goodput, delay and buffer size requirements, pertaining to the performance of fountain codes in a multimedia broadcast/multicast environment is presented. Finally, a strategy for the use of 'convolutional fountain' over radio channels is also presented.

  13. Convolution Inequalities for the Boltzmann Collision Operator

    NASA Astrophysics Data System (ADS)

    Alonso, Ricardo J.; Carneiro, Emanuel; Gamba, Irene M.

    2010-09-01

    We study integrability properties of a general version of the Boltzmann collision operator for hard and soft potentials in n dimensions. A reformulation of the collisional integrals allows us to write the weak form of the collision operator as a weighted convolution, where the weight is given by an operator invariant under rotations. Using a symmetrization technique in L^p we prove a Young's inequality for hard potentials, which is sharp for Maxwell molecules in the L^2 case. Further, we find a new Hardy-Littlewood-Sobolev type of inequality for Boltzmann collision integrals with soft potentials. The same method extends to radially symmetric, non-increasing potentials that lie in some L^s_weak or L^s. The method we use resembles a Brascamp, Lieb and Luttinger approach for multilinear weighted convolution inequalities and follows a weak formulation setting. Consequently, it is closely connected to the classical analysis of Young and Hardy-Littlewood-Sobolev inequalities. In all cases, the inequality constants are explicitly given by formulas depending on integrability conditions of the angular cross section (in the spirit of Grad cut-off). As an additional application of the technique we also obtain estimates with exponential weights for hard potentials in both conservative and dissipative interactions.
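
    For reference, the classical (unweighted) Young convolution inequality that the result above generalizes is a standard fact and reads, in LaTeX,

      \[
        \| f * g \|_{L^r} \;\le\; \| f \|_{L^p} \, \| g \|_{L^q},
        \qquad \frac{1}{p} + \frac{1}{q} \;=\; 1 + \frac{1}{r},
        \qquad 1 \le p, q, r \le \infty,
      \]

    stated here only as context; it is not the weighted collision-operator version proved in the paper.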

  14. Small pixel oversampled IR focal plane arrays

    NASA Astrophysics Data System (ADS)

    Caulfield, John; Curzan, Jon; Lewis, Jay; Dhar, Nibir

    2015-06-01

    We report on a new high-definition, high charge capacity 2.1 Mpixel MWIR infrared focal plane array. This high-definition (HD) FPA utilizes a small 5 μm pixel pitch, which is below the Nyquist limit imposed by the optical system's point spread function (PSF). These smaller, sub-diffraction-limited pixels allow spatial oversampling of the image. We show that oversampling IRFPAs enables improved fidelity in imaging, including resolution improvements, advanced pixel correlation processing to reduce false alarm rates, improved detection ranges, and an improved ability to track closely spaced objects. Small pixel HD arrays are viewed as the key component enabling lower size, weight and power of the IR sensor system. Small pixels enable a reduction in the size of the system's components, from the smaller detector and ROIC array to the reduced optics focal length and overall lens size, resulting in an overall compactness of the sensor package, cooling and associated electronics. The highly sensitive MWIR small pixel HD FPA has the capability to detect dimmer signals at longer ranges than previously demonstrated.

  15. The status of the CMS forward pixel detector

    SciTech Connect

    Tan, Ping; /Fermilab

    2006-01-01

    The silicon pixel detector is the innermost component of the CMS tracking system. It provides precise measurements of space points to allow effective pattern recognition in multiple track environments near the LHC interaction point. The end disks of the pixel detector, known as the Forward Pixel detector, are constructed mainly by the US-CMS collaborators. The design techniques, readout electronics, test beam activities, and construction status are reviewed.

  16. Bacterial colony counting by Convolutional Neural Networks.

    PubMed

    Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto

    2015-08-01

    Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, yet fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging, due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, achieving an overall accuracy of 92.8% on a large, challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, making it a promising approach for many related applications. PMID:26738016

  17. K2flix: Kepler pixel data visualizer

    NASA Astrophysics Data System (ADS)

    Barentsen, Geert

    2015-03-01

    K2flix makes it easy to inspect the CCD pixel data obtained by NASA's Kepler space telescope. The two-wheeled extended Kepler mission, K2, is affected by new sources of systematics, including pointing jitter and foreground asteroids, that are easier to spot by eye than by algorithm. The code takes Kepler's Target Pixel Files (TPF) as input and turns them into contrast-stretched animated gifs or MPEG-4 movies. K2flix can be used either as a command-line tool or via its Python API.

  18. A discrete convolution kernel for No-DC MRI

    NASA Astrophysics Data System (ADS)

    Zeng, Gengsheng L.; Li, Ya

    2015-08-01

    An analytical inversion formula for the exponential Radon transform with an imaginary attenuation coefficient was developed in 2007 (2007 Inverse Problems 23 1963-71). The inversion formula in that paper suggested that it is possible to obtain an exact MRI (magnetic resonance imaging) image without acquiring low-frequency data. However, this un-measured low-frequency region (ULFR) in the k-space (which is the two-dimensional Fourier transform space in MRI terminology) must be very small. The current paper derives an FBP (filtered backprojection) algorithm based on You's formula by suggesting a practical discrete convolution kernel. A point spread function is derived for this FBP algorithm. It is demonstrated that the derived FBP algorithm can have a larger ULFR than that in the 2007 paper. The significance of this paper is that we present a closed-form reconstruction algorithm for a special case of under-sampled MRI data. Usually, under-sampled MRI data requires iterative (instead of analytical) algorithms with L1-norm or total variation norm to reconstruct the image.

  19. Infrared astronomy - Pixels to spare

    SciTech Connect

    Mccaughrean, M.

    1991-07-01

    An infrared CCD camera containing an array with 311,040 pixels arranged in 486 rows of 640 each is tested. The array is a chip of platinum silicide (PtSi), sensitive to photons with wavelengths between 1 and 6 microns. Observations of the Hubble Space Telescope, Mars, Pluto and the Moon are reported. It is noted that the satellite's twin solar-cell arrays, at an apparent separation of about 1 1/4 arc second, are well resolved. Some two dozen video frames were stacked to make each presented image of Mars at 1.6 microns; at this wavelength Mars appears much as it does in visible light. A stack of 11 images at a wavelength of 1.6 microns is used for an image of Jupiter with its Great Red Spot and moons Io and Europa.

  20. Imaging in scattering media using correlation image sensors and sparse convolutional coding.

    PubMed

    Heide, Felix; Xiao, Lei; Kolb, Andreas; Hullin, Matthias B; Heidrich, Wolfgang

    2014-10-20

    Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the sensor data from correlation sensors can be used to analyze the light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and the derivation of a new physically-motivated model for transient images with drastically improved sparsity. PMID:25401666

  1. The CMS pixel system

    NASA Astrophysics Data System (ADS)

    Bortoletto, Daniela; CMS Collaboration

    2007-09-01

    The CMS hybrid pixel detector is located at the core of the CMS tracker and will contribute significantly to track and vertex reconstruction. The detector is subdivided into a three-layer barrel, and two end-cap disks on either side of the interaction region. The system operating in the 25-ns beam crossing time of the LHC must be radiation hard, low mass, and robust. The construction of the barrel modules and the forward disks has started after extensive R&D. The status of the project is reported.

  2. The ALICE Pixel Detector

    NASA Astrophysics Data System (ADS)

    Mercado-Perez, Jorge

    2002-07-01

    The present document is a brief summary of the activities performed during the 2001 Summer Student Programme at CERN under the Scientific Summer at Foreign Laboratories Program organized by the Particles and Fields Division of the Mexican Physical Society (Sociedad Mexicana de Fisica). In this case, the activities were related to the ALICE Pixel Group of the EP-AIT Division, under the supervision of Jeroen van Hunen, research fellow in this group. First, I give an introduction and overview of the ALICE experiment, followed by a description of wafer probing. A brief summary of the test beam that we had from July 13th to July 25th is given as well.

  3. Accelerated unsteady flow line integral convolution.

    PubMed

    Liu, Zhanping; Moorhead, Robert J

    2005-01-01

    Unsteady flow line integral convolution (UFLIC) is a texture synthesis technique for visualizing unsteady flows with high temporal-spatial coherence. Unfortunately, UFLIC requires considerable time to generate each frame due to the huge amount of pathline integration that is computed for particle value scattering. This paper presents Accelerated UFLIC (AUFLIC) for near interactive (1 frame/second) visualization with 160,000 particles per frame. AUFLIC reuses pathlines in the value scattering process to reduce computationally expensive pathline integration. A flow-driven seeding strategy is employed to distribute seeds such that only a few of them need pathline integration while most seeds are placed along the pathlines advected at earlier times by other seeds upstream and, therefore, the known pathlines can be reused for fast value scattering. To maintain a dense scattering coverage to convey high temporal-spatial coherence while keeping the expense of pathline integration low, a dynamic seeding controller is designed to decide whether to advect, copy, or reuse a pathline. At a negligible memory cost, AUFLIC is 9 times faster than UFLIC with comparable image quality. PMID:15747635
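
    For context, the underlying (steady-flow) line integral convolution that AUFLIC accelerates can be sketched in a few lines. This is a deliberately naive Euler-step implementation and does not include AUFLIC's value scattering, pathline reuse or dynamic seeding controller.

```python
import numpy as np

def lic(vx, vy, noise, length=20, h=0.5):
    """Minimal steady-flow line integral convolution: convolve white noise
    along streamlines traced with simple Euler steps (illustrative only)."""
    ny, nx = noise.shape
    out = np.zeros_like(noise)
    for j in range(ny):
        for i in range(nx):
            total, count = 0.0, 0
            for sign in (+1.0, -1.0):              # trace forward and backward
                x, y = float(i), float(j)
                for _ in range(length):
                    xi, yi = int(round(x)), int(round(y))
                    if not (0 <= xi < nx and 0 <= yi < ny):
                        break
                    total += noise[yi, xi]
                    count += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    norm = np.hypot(u, v) + 1e-12
                    x += sign * h * u / norm
                    y += sign * h * v / norm
            out[j, i] = total / max(count, 1)
    return out

# toy circular flow on a 64x64 grid
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
vx, vy = -yy, xx
texture = lic(vx, vy, np.random.default_rng(1).random((64, 64)))
```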

  4. Blind source separation of convolutive mixtures

    NASA Astrophysics Data System (ADS)

    Makino, Shoji

    2006-04-01

    This paper introduces the blind source separation (BSS) of convolutive mixtures of acoustic signals, especially speech. A statistical and computational technique, called independent component analysis (ICA), is examined. By achieving nonlinear decorrelation, nonstationary decorrelation, or time-delayed decorrelation, we can find source signals only from observed mixed signals. Particular attention is paid to the physical interpretation of BSS from the acoustical signal processing point of view. Frequency-domain BSS is shown to be equivalent to two sets of frequency domain adaptive microphone arrays, i.e., adaptive beamformers (ABFs). Although BSS can reduce reverberant sounds to some extent in the same way as ABF, it mainly removes the sounds from the jammer direction. This is why BSS has difficulties with long reverberation in the real world. If sources are not "independent," the dependence results in bias noise when obtaining the correct separation filter coefficients. Therefore, the performance of BSS is limited by that of ABF. Although BSS is upper bounded by ABF, BSS has a strong advantage over ABF. BSS can be regarded as an intelligent version of ABF in the sense that it can adapt without any information on the array manifold or the target direction, and sources can be simultaneously active in BSS.
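
    A minimal sketch of the convolutive mixing model that such BSS methods try to invert. The "room" impulse responses are random toy FIR filters, and the final line only hints at the frequency-domain (per-bin ICA) view described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
s1, s2 = rng.standard_normal(n), rng.standard_normal(n)    # toy "speech" sources

# Toy room impulse responses (assumed short FIR filters, not measured ones).
h11, h12 = rng.standard_normal(32) * 0.2, rng.standard_normal(32) * 0.1
h21, h22 = rng.standard_normal(32) * 0.1, rng.standard_normal(32) * 0.2

# Convolutive mixtures observed at two microphones.
x1 = np.convolve(s1, h11)[:n] + np.convolve(s2, h12)[:n]
x2 = np.convolve(s1, h21)[:n] + np.convolve(s2, h22)[:n]

# Frequency-domain BSS works frame-by-frame: after an STFT, each frequency bin
# becomes an (approximately) instantaneous mixture X(f) = H(f) S(f), to which
# complex-valued ICA can be applied independently per bin.
frame_spectrum = np.fft.rfft(x1[:512] * np.hanning(512))
```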

  5. Metaheuristic Algorithms for Convolution Neural Network.

    PubMed

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual task that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738

  6. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning relates to a type of machine learning technique, where its aim is to move closer to the goal of artificial intelligence of creating a machine that could successfully perform any intellectual task that can be carried out by a human. In this paper, we propose the implementation strategy of three popular metaheuristic approaches, that is, simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in the computation time, their accuracy has also been improved (up to 7.14 percent). PMID:27375738
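
    As a generic illustration of one of the metaheuristics discussed in the two records above, the sketch below runs a plain simulated-annealing loop over a stand-in objective. In the papers the objective would be the CNN's validation error; here it is replaced by a toy function of two hypothetical hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_error(params):
    """Stand-in objective: in the paper this would be the CNN's error on a
    held-out set; here it is just a smooth toy function of two 'hyperparameters'."""
    lr, reg = params
    return (np.log10(lr) + 2.0) ** 2 + (np.log10(reg) + 4.0) ** 2

def simulated_annealing(x0, n_iter=200, t0=1.0, cooling=0.98):
    x, fx, t = np.array(x0, float), validation_error(x0), t0
    for _ in range(n_iter):
        cand = x * np.exp(rng.normal(0, 0.1, size=x.shape))    # multiplicative step
        fc = validation_error(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):    # Metropolis acceptance
            x, fx = cand, fc
        t *= cooling
    return x, fx

best, err = simulated_annealing([1e-1, 1e-2])
print(best, err)
```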

  7. Output-sensitive 3D line integral convolution.

    PubMed

    Falk, Martin; Weiskopf, Daniel

    2008-01-01

    We propose an output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is largely independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIPmapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance analysis is given.

  8. Toward Content Based Image Retrieval with Deep Convolutional Neural Networks

    PubMed Central

    Sklan, Judah E.S.; Plassard, Andrew J.; Fabbri, Daniel; Landman, Bennett A.

    2015-01-01

    Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep Convolutional Neural Networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing dimensionality of an input scaled to 128×128 to an output encoded layer of 4×384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques. PMID:25914507

  9. Imaging properties of pixellated scintillators with deep pixels

    PubMed Central

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2015-01-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10×10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm × 1mm × 20 mm pixels) made by Proteus, Inc. with similar 10×10 arrays of LSO:Ce and BGO (1mm × 1mm × 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10×10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors. PMID:26236070

  10. Imaging properties of pixellated scintillators with deep pixels

    NASA Astrophysics Data System (ADS)

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2014-09-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10x10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm x 1mm x 20 mm pixels) made by Proteus, Inc. with similar 10x10 arrays of LSO:Ce and BGO (1mm x 1mm x 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10x10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors.

  11. Convolution modeling of two-domain, nonlinear water-level responses in karst aquifers (Invited)

    NASA Astrophysics Data System (ADS)

    Long, A. J.

    2009-12-01

    Convolution modeling is a useful method for simulating the hydraulic response of water levels to sinking streamflow or precipitation infiltration at the macro scale. This approach is particularly useful in karst aquifers, where the complex geometry of the conduit and pore network is not well characterized but can be represented approximately by a parametric impulse-response function (IRF) with very few parameters. For many applications, one-dimensional convolution models can be equally effective as complex two- or three-dimensional models for analyzing water-level responses to recharge. Moreover, convolution models are well suited for identifying and characterizing the distinct domains of quick flow and slow flow (e.g., conduit flow and diffuse flow). Two superposed lognormal functions were used in the IRF to approximate the impulses of the two flow domains. Nonlinear response characteristics of the flow domains were assessed by observing temporal changes in the IRFs. Precipitation infiltration was simulated by filtering the daily rainfall record with a backward-in-time exponential function that weights each day’s rainfall with the rainfall of previous days and thus accounts for the effects of soil moisture on aquifer infiltration. The model was applied to the Edwards aquifer in Texas and the Madison aquifer in South Dakota. Simulations of both aquifers showed similar characteristics, including a separation on the order of years between the quick-flow and slow-flow IRF peaks and temporal changes in the IRF shapes when water levels increased and empty pore spaces became saturated.
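
    A schematic version of this convolution model, with placeholder parameter values; the lognormal shapes, decay constant and synthetic rainfall below are illustrative and are not fitted to the Edwards or Madison aquifers.

```python
import numpy as np

def lognormal_irf(t, mu, sigma, scale):
    """Lognormal impulse-response component (t in days, t > 0)."""
    t = np.maximum(t, 1e-9)
    return scale * np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (t * sigma * np.sqrt(2 * np.pi))

days = np.arange(1, 2000)                      # IRF support, in days
# Two superposed lognormals: a quick-flow and a slow-flow domain (placeholder parameters).
irf = lognormal_irf(days, mu=3.0, sigma=0.8, scale=0.4) + \
      lognormal_irf(days, mu=6.5, sigma=0.6, scale=0.6)

rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 5.0, size=4000)          # synthetic daily rainfall (mm)

# Backward-in-time exponential filter: weights today's rainfall by antecedent
# rainfall, a simple proxy for soil-moisture control on infiltration.
tau = 30.0                                     # placeholder decay constant (days)
w = np.exp(-np.arange(0, 200) / tau)
infiltration = np.convolve(rain, w / w.sum(), mode="full")[: rain.size]

# Simulated water-level response = infiltration convolved with the two-domain IRF.
head = np.convolve(infiltration, irf, mode="full")[: rain.size]
```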

  12. Pixel size adjustment in coherent diffractive imaging within the Rayleigh-Sommerfeld regime.

    PubMed

    Claus, Daniel; Rodenburg, John Marius

    2015-03-10

    The reconstruction of the smallest resolvable object detail in digital holography and coherent diffractive imaging when the detector is mounted close to the object of interest is restricted by the sensor's pixel size. Very high resolution information is intrinsically encoded in the data because the effective numerical aperture (NA) of the detector (its solid angular size as subtended at the object plane) is very high. The correct physical propagation model to use in the reconstruction process for this setup should be based on the Rayleigh-Sommerfeld diffraction integral, which is commonly implemented via a convolution operation. However, the convolution operation has the drawback that the pixel size of the propagation calculation is preserved between the object and the detector, and so the maximum resolution of the reconstruction is limited by the detector pixel size, not its effective NA. Here we show that this problem can be overcome via the introduction of a numerical spherical lens with adjustable magnification. This approach enables the reconstruction of object details smaller than the detector pixel size or of objects that extend beyond the size of the detector. It will have applications in all forms of near-field lensless microscopy. PMID:25968368
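
    The pitch-preserving, convolution-type propagation the paper starts from can be illustrated with a standard angular-spectrum (transfer-function) implementation of scalar diffraction. The sampling pitch, wavelength and propagation distance below are assumed values, and the authors' numerical spherical lens with adjustable magnification is not included in this sketch.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Scalar free-space propagation implemented as a convolution (multiplication
    in the Fourier domain).  Note that the output sampling pitch equals the input
    pitch -- the limitation discussed in the paper."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))     # evanescent components suppressed
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# toy example: propagate a small circular aperture by 2 mm (assumed values)
n, pitch, wl = 256, 2e-6, 633e-9
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * pitch
aperture = (x**2 + y**2 < (50e-6) ** 2).astype(complex)
out = angular_spectrum_propagate(aperture, wl, pitch, z=2e-3)
```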

  13. PIXELS: Using field-based learning to investigate students' concepts of pixels and sense of scale

    NASA Astrophysics Data System (ADS)

    Pope, A.; Tinigin, L.; Petcovic, H. L.; Ormand, C. J.; LaDue, N.

    2015-12-01

    Empirical work over the past decade supports the notion that a high level of spatial thinking skill is critical to success in the geosciences. Spatial thinking incorporates a host of sub-skills such as mentally rotating an object, imagining the inside of a 3D object based on outside patterns, unfolding a landscape, and disembedding critical patterns from background noise. In this study, we focus on sense of scale, which refers to how an individual quantifies space, and is thought to develop through kinesthetic experiences. Remote sensing data are increasingly being used for wide-reaching and high impact research. A sense of scale is critical to many areas of the geosciences, including understanding and interpreting remotely sensed imagery. In this exploratory study, students (N=17) attending the Juneau Icefield Research Program participated in a 3-hour exercise designed to study how a field-based activity might impact their sense of scale and their conceptions of pixels in remotely sensed imagery. Prior to the activity, students had an introductory remote sensing lecture and completed the Sense of Scale inventory. Students walked and/or skied the perimeter of several pixel types, including a 1 m square (representing a WorldView sensor's pixel), a 30 m square (a Landsat pixel) and a 500 m square (a MODIS pixel). The group took reflectance measurements using a field radiometer as they physically traced out the pixel. The exercise was repeated in two different areas, one with homogeneous reflectance, and another with heterogeneous reflectance. After the exercise, students again completed the Sense of Scale instrument and a demographic survey. This presentation will share the effects and efficacy of the field-based intervention to teach remote sensing concepts and to investigate potential relationships between students' concepts of pixels and sense of scale.

  14. THE KEPLER PIXEL RESPONSE FUNCTION

    SciTech Connect

    Bryson, Stephen T.; Haas, Michael R.; Dotson, Jessie L.; Koch, David G.; Borucki, William J.; Tenenbaum, Peter; Jenkins, Jon M.; Chandrasekaran, Hema; Caldwell, Douglas A.; Klaus, Todd; Gilliland, Ronald L.

    2010-04-20

    Kepler seeks to detect sequences of transits of Earth-size exoplanets orbiting solar-like stars. Such transit signals are on the order of 100 ppm. The high photometric precision demanded by Kepler requires detailed knowledge of how the Kepler pixels respond to starlight during a nominal observation. This information is provided by the Kepler pixel response function (PRF), defined as the composite of Kepler's optical point-spread function, integrated spacecraft pointing jitter during a nominal cadence and other systematic effects. To provide sub-pixel resolution, the PRF is represented as a piecewise-continuous polynomial on a sub-pixel mesh. This continuous representation allows the prediction of a star's flux value on any pixel given the star's pixel position. The advantages and difficulties of this polynomial representation are discussed, including characterization of spatial variation in the PRF and the smoothing of discontinuities between sub-pixel polynomial patches. On-orbit super-resolution measurements of the PRF across the Kepler field of view are described. Two uses of the PRF are presented: the selection of pixels for each star that maximizes the photometric signal-to-noise ratio for that star, and PRF-fitted centroids which provide robust and accurate stellar positions on the CCD, primarily used for attitude and plate scale tracking. Good knowledge of the PRF has been a critical component for the successful collection of high-precision photometry by Kepler.

  15. A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2010-09-01

    In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  16. The use of interleaving for reducing radio loss in convolutionally coded systems

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.; Yuen, J. H.

    1989-01-01

    The use of interleaving after convolutional coding and deinterleaving before Viterbi decoding is proposed. This effectively reduces radio loss at low-loop Signal to Noise Ratios (SNRs) by several decibels and at high-loop SNRs by a few tenths of a decibel. Performance of the coded system can further be enhanced if the modulation index is optimized for this system. This will correspond to a reduction of bit SNR at a certain bit error rate for the overall system. The introduction of interleaving/deinterleaving into communication systems designed for future deep space missions does not substantially complicate their hardware design or increase their system cost.
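
    A minimal block interleaver/deinterleaver of the kind placed between the convolutional encoder and the channel; dimensions are illustrative. A burst of channel errors in the transmitted stream is dispersed into scattered single errors before Viterbi decoding.

```python
import numpy as np

def interleave(symbols, rows, cols):
    """Block interleaver: write row-wise, read column-wise."""
    assert symbols.size == rows * cols
    return symbols.reshape(rows, cols).T.reshape(-1)

def deinterleave(symbols, rows, cols):
    """Inverse operation at the receiver, applied before Viterbi decoding."""
    assert symbols.size == rows * cols
    return symbols.reshape(cols, rows).T.reshape(-1)

coded = np.arange(24)                       # stand-in for convolutionally coded symbols
sent = interleave(coded, rows=4, cols=6)
assert np.array_equal(deinterleave(sent, rows=4, cols=6), coded)
# A burst of consecutive channel errors in `sent` is spread out after
# deinterleaving, so the Viterbi decoder sees them as isolated errors.
```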

  17. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  18. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  19. From Pixels to Planets

    NASA Technical Reports Server (NTRS)

    Brownston, Lee; Jenkins, Jon M.

    2015-01-01

    The Kepler Mission was launched in 2009 as NASA's first mission capable of finding Earth-size planets in the habitable zone of Sun-like stars. Its telescope consists of a 1.5-m primary mirror and a 0.95-m aperture. The 42 charge-coupled devices in its focal plane are read out every half hour, compressed, and then downlinked monthly. After four years, the second of four reaction wheels failed, ending the original mission. Back on Earth, the Science Operations Center developed the Science Pipeline to analyze about 200,000 target stars in Kepler's field of view, looking for evidence of periodic dimming suggesting that one or more planets had crossed the face of their host star. The Pipeline comprises several steps, from pixel-level calibration, through noise and artifact removal, to detection of transit-like signals and the construction of a suite of diagnostic tests to guard against false positives. The Kepler Science Pipeline consists of a pipeline infrastructure written in the Java programming language, which marshals data input to and output from MATLAB applications that are executed as external processes. The pipeline modules, which underwent continuous development and refinement even after data started arriving, employ several analytic techniques, many developed for the Kepler Project. Because of the large number of targets, the large amount of data per target and the complexity of the pipeline algorithms, the processing demands are daunting. Some pipeline modules require days to weeks to process all of their targets, even when run on NASA's 128-node Pleiades supercomputer. The software developers are still seeking ways to increase the throughput. To date, the Kepler project has discovered more than 4000 planetary candidates, of which more than 1000 have been independently confirmed or validated to be exoplanets. Funding for this mission is provided by NASA's Science Mission Directorate.

  20. High-precision measurement of pixel positions in a charge-coupled device.

    PubMed

    Shaklan, S; Sharman, M C; Pravdo, S H

    1995-10-10

    The high level of spatial uniformity in modern CCD's makes them excellent devices for astrometric instruments. However, at the level of accuracy envisioned by the more ambitious projects such as the Astrometric Imaging Telescope, current technology produces CCD's with significant pixel registration errors. We describe a technique for making high-precision measurements of relative pixel positions. We measured CCD's manufactured for the Wide Field Planetary Camera II installed in the Hubble Space Telescope. These CCD's are shown to have significant step-and-repeat errors of 0.033 pixel along every 34th row, as well as a 0.003-pixel curvature along 34-pixel stripes. The source of these errors is described. Our experiments achieved a per-pixel accuracy of 0.011 pixel. The ultimate shot-noise limited precision of the method is less than 0.001 pixel. PMID:21060522

  1. Robustly optimal rate one-half binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1975-01-01

    Three optimality criteria for convolutional codes are considered in this correspondence: namely, free distance, minimum distance, and distance profile. Here we report the results of computer searches for rate one-half binary convolutional codes that are 'robustly optimal' in the sense of being optimal for one criterion and optimal or near-optimal for the other two criteria. Comparisons with previously known codes are made. The results of a computer simulation are reported to show the importance of the distance profile to computational performance with sequential decoding.
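
    For illustration, a minimal feed-forward rate one-half binary convolutional encoder. The generators used below are the widely used constraint-length-7 pair (171, 133 in octal), given only as an example of the code family and not necessarily one of the robustly optimal codes reported in the correspondence.

```python
import numpy as np

def conv_encode_rate_half(bits, g1=0o171, g2=0o133, K=7):
    """Feed-forward rate-1/2 binary convolutional encoder.
    g1, g2 are the generator polynomials (octal), K the constraint length."""
    state = 0
    out = []
    taps1 = [(g1 >> i) & 1 for i in range(K)]
    taps2 = [(g2 >> i) & 1 for i in range(K)]
    for b in bits:
        # shift the new input bit into the register (newest bit in the LSB)
        state = ((state << 1) | int(b)) & ((1 << K) - 1)
        reg = [(state >> (K - 1 - i)) & 1 for i in range(K)]
        out.append(sum(t * r for t, r in zip(taps1, reg)) % 2)
        out.append(sum(t * r for t, r in zip(taps2, reg)) % 2)
    return np.array(out, dtype=np.uint8)

coded = conv_encode_rate_half([1, 0, 1, 1, 0, 0, 1])
print(coded)    # two output bits per input bit (rate 1/2)
```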

  2. Error-Trellis Construction for Convolutional Codes Using Shifted Error/Syndrome-Subsequences

    NASA Astrophysics Data System (ADS)

    Tajima, Masato; Okino, Koji; Miyagoshi, Takashi

    In this paper, we extend the conventional error-trellis construction for convolutional codes to the case where a given check matrix H(D) has a factor D^l in some column (row). In the first case, there is a possibility that the size of the state space can be reduced using shifted error-subsequences, whereas in the second case, the size of the state space can be reduced using shifted syndrome-subsequences. The construction presented in this paper is based on the adjoint-obvious realization of the corresponding syndrome former H^T(D). In the case where all the columns and rows of H(D) are delay free, the proposed construction is reduced to the conventional one of Schalkwijk et al. We also show that the proposed construction can equally realize the state-space reduction shown by Ariel et al. Moreover, we clarify the difference between their construction and ours using examples.

  3. Local Pixel Bundles: Bringing the Pixels to the People

    NASA Astrophysics Data System (ADS)

    Anderson, Jay

    2014-12-01

    The automated galaxy-based alignment software package developed for the Frontier Fields program (hst2galign, see Anderson & Ogaz 2014 and http://www.stsci.edu/hst/campaigns/frontier-fields/) produces a direct mapping from the pixels of the flt frame of each science exposure into a common master frame. We can use these mappings to extract the flt-pixels in the vicinity of a source of interest and package them into a convenient "bundle". In addition to the pixels, this data bundle can also contain "meta" information that will allow users to transform positions from the flt pixels to the reference frame and vice-versa. Since the un-resampled pixels in the flt frames are the only true constraints we have on the astronomical scene, the ability to inter-relate these pixels will enable many high-precision studies, such as: point-source-fitting and deconvolution with accurate PSFs, easy exploration of different image-combining algorithms, and accurate faint-source finding and photometry. The data products introduced in this ISR are a very early attempt to provide the flt-level pixel constraints in a package that is accessible to more than the handful of experts in HST astrometry. The hope is that users in the community might begin using them and will provide feedback as to what information they might want to see in the bundles and what general analysis packages they might find useful. For that reason, this document is somewhat informally written, since I know that it will be modified and updated as the products and tools are optimized.

  4. Die and telescoping punch form convolutions in thin diaphragm

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Die and punch set forms convolutions in thin dished metal diaphragm without stretching the metal too thin at sharp curvatures. The die corresponds to the metal shape to be formed, and the punch consists of elements that progressively slide against one another under the restraint of a compressed-air cushion to mate with the die.

  5. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from heavy computation load. The virtual electric field (VEF) model, which can be implemented in real time using fast Fourier transform (FFT), has been proposed later as a remedy for the GVF model. In this work, we present an extension of the VEF model, which is referred to as CONvolutional Virtual Electric Field, CONVEF for short. This proposed CONVEF model takes the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only some desirable properties of these models, such as enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also some other interesting properties such as G-shape concavity convergence, neighboring objects separation, and noise suppression and simultaneously weak edge preserving. Meanwhile, the CONVEF model can also be implemented in real-time by using FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images. PMID:25360586

  6. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374

  7. Space suit

    NASA Technical Reports Server (NTRS)

    Shepard, L. F.; Durney, G. P.; Case, M. C.; Kenneway, A. J., III; Wise, R. C.; Rinehart, D.; Bessette, R. J.; Pulling, R. C. (Inventor)

    1973-01-01

    A pressure suit for high altitude flights, particularly space missions, is reported. The suit is designed for astronauts in the Apollo space program and may be worn both inside and outside a space vehicle, as well as on the lunar surface. It comprises an integrated assembly of inner comfort liner, intermediate pressure garment, and outer thermal protective garment with removable helmet and gloves. The pressure garment comprises an inner convoluted sealing bladder and outer fabric restraint to which are attached a plurality of cable restraint assemblies. It provides versatility in combination with improved sealing and increased mobility for internal pressures suitable for life support in the near vacuum of outer space.

  8. Multi-scale feature learning on pixels and super-pixels for seminal vesicles MRI segmentation

    NASA Astrophysics Data System (ADS)

    Gao, Qinquan; Asthana, Akshay; Tong, Tong; Rueckert, Daniel; Edwards, Philip "Eddie"

    2014-03-01

    We propose a learning-based approach to segment the seminal vesicles (SV) via random forest classifiers. The proposed discriminative approach relies on the decision forest using high-dimensional multi-scale context-aware spatial, textural and descriptor-based features at both pixel and super-pixel level. After affine transformation to a template space, the relevant high-dimensional multi-scale features are extracted and random forest classifiers are learned based on the masked region of the seminal vesicles from the most similar atlases. Using these classifiers, an intermediate probabilistic segmentation is obtained for the test images. Then, a graph-cut based refinement is applied to this intermediate probabilistic representation of each voxel to get the final segmentation. We apply this approach to segment the seminal vesicles from 30 MRI T2 training images of the prostate, which presents a particularly challenging segmentation task. The results show that the multi-scale approach and the augmentation of the pixel based features with the super-pixel based features enhance the discriminative power of the learnt classifier, which leads to a better quality segmentation in some very difficult cases. The results are compared to the radiologist labeled ground truth using leave-one-out cross-validation. Overall, a Dice metric of 0.7249 and a Hausdorff surface distance of 7.0803 mm are achieved for this difficult task.

  9. Evaluation of convolutional neural networks for visual recognition.

    PubMed

    Nebauer, C

    1998-01-01

    Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks--neocognitron and a modification of neocognitron--are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Besides the original neocognitron, a modification of the neocognitron is proposed which combines perceptron-type neurons with the localized network structure of the neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example on handwritten digit recognition the generalization of convolutional networks is compared to fully connected networks. In several experiments the influence of variations of position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination and the limitations of convolutional networks are discussed. PMID:18252491
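
    A minimal forward pass of a single convolutional layer, showing the weight sharing and local connectivity that distinguish these networks from the fully connected classifiers used for comparison. The filter count, kernel size and ReLU-style nonlinearity are illustrative choices, not the neocognitron's actual architecture.

```python
import numpy as np

def conv2d_forward(image, kernels, bias):
    """Valid-mode 2D convolution layer (forward pass only).
    image: (H, W); kernels: (n_filters, kh, kw); bias: (n_filters,)."""
    H, W = image.shape
    n, kh, kw = kernels.shape
    out = np.zeros((n, H - kh + 1, W - kw + 1))
    for f in range(n):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = image[i:i + kh, j:j + kw]
                out[f, i, j] = np.sum(patch * kernels[f]) + bias[f]
    return np.maximum(out, 0.0)       # simple nonlinearity (illustrative)

rng = np.random.default_rng(0)
img = rng.random((28, 28))                     # e.g. one handwritten digit
feat = conv2d_forward(img, rng.standard_normal((4, 5, 5)) * 0.1, np.zeros(4))
print(feat.shape)                              # (4, 24, 24): 4 shared-weight feature maps
```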

  10. Convolutions of Rayleigh functions and their application to semi-linear equations in circular domains

    NASA Astrophysics Data System (ADS)

    Varlamov, Vladimir

    2007-03-01

    Rayleigh functions σ_l(ν) are defined as series in inverse powers of the Bessel function zeros λ_{ν,n} ≠ 0, namely σ_l(ν) = Σ_{n≥1} λ_{ν,n}^{-2l}, where ν is the index of the Bessel function J_ν(x) and n = 1, 2, ... is the number of the zero. Convolutions of Rayleigh functions with respect to the Bessel index, R_l(m), are needed for constructing global-in-time solutions of semi-linear evolution equations in circular domains [V. Varlamov, On the spatially two-dimensional Boussinesq equation in a circular domain, Nonlinear Anal. 46 (2001) 699-725; V. Varlamov, Convolution of Rayleigh functions with respect to the Bessel index, J. Math. Anal. Appl. 306 (2005) 413-424]. The study of this new family of special functions was initiated in [V. Varlamov, Convolution of Rayleigh functions with respect to the Bessel index, J. Math. Anal. Appl. 306 (2005) 413-424], where the properties of R_1(m) were investigated. In the present work a general representation of R_l(m) in terms of σ_l(ν) is deduced. On the basis of this, a representation for the function R_2(m) is obtained in terms of the ψ-function. An asymptotic expansion is computed for R_2(m) as m → ∞. Such asymptotics are needed for establishing function spaces for solutions of semi-linear equations in bounded domains with periodicity conditions in one coordinate. As an example of application of R_l(m), a forced Boussinesq equation u_tt - 2b Δu_t = -α Δ²u + Δu + β Δ(u²) + f with α, b = const > 0 and β = const ∈ ℝ is considered in a unit disc with homogeneous boundary and initial data. Construction of its global-in-time solutions involves the use of the functions R_1(m) and R_2(m), which are responsible for the nonlinear smoothing effect.

  11. Image Labeling for LIDAR Intensity Image Using K-Nn of Feature Obtained by Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Umemura, Masaki; Hotta, Kazuhiro; Nonaka, Hideki; Oda, Kazuo

    2016-06-01

    We propose an image labeling method for LIDAR intensity images obtained by a Mobile Mapping System (MMS) using K-Nearest Neighbor (KNN) matching of features obtained by a Convolutional Neural Network (CNN). Image labeling assigns labels (e.g., road, cross-walk and road shoulder) to semantic regions in an image. Since CNNs are effective for various image recognition tasks, we use the features of a CNN (Caffenet) pre-trained on ImageNet. We use the 4,096-dimensional feature at the fc7 layer of the Caffenet as the descriptor of a region because the feature at the fc7 layer carries effective information for object classification. We extract the feature with the Caffenet from regions cropped from images. Since the similarity between features reflects the similarity of the contents of regions, we can select the top K regions from the training samples that are most similar to a test region. Since regions in training images have manually annotated ground truth labels, we vote the labels attached to the top K similar regions onto the test region. The class label with the maximum vote is assigned to each pixel in the test image. In experiments, we use 36 LIDAR intensity images with ground truth labels. We divide the 36 images into training (28 images) and test (8 images) sets. We use class average accuracy and pixel-wise accuracy as evaluation measures. Our method was able to assign the same label as human beings in 97.8% of the pixels in test LIDAR intensity images.

  12. Painting with pixels.

    PubMed

    Kyte, S

    1989-04-01

    Two decades ago the subject of computer graphics was regarded as pure science fiction, more within the realms of Star Trek fantasy than of everyday use, but today it is difficult to avoid its influence. Television programmes abound with slick moving, twisting, distorting images, the printing media throws colourful shapes and forms off the page at you, and computer games explode noisily into our living rooms. In a very short space of time computer graphics have risen from being a toy of the affluent minority to a working tool of the cost-conscious majority. Even the most purist of artists have realized that in order to survive in an increasingly competitive world they must inevitably take the plunge into the world of electronic imagery. PMID:2607084

  13. Edge pixel response studies of edgeless silicon sensor technology for pixellated imaging detectors

    NASA Astrophysics Data System (ADS)

    Maneuski, D.; Bates, R.; Blue, A.; Buttar, C.; Doonan, K.; Eklund, L.; Gimenez, E. N.; Hynds, D.; Kachkanov, S.; Kalliopuska, J.; McMullen, T.; O'Shea, V.; Tartoni, N.; Plackett, R.; Vahanen, S.; Wraight, K.

    2015-03-01

    Silicon sensor technologies with reduced dead area at the sensor's perimeter are under development at a number of institutes. Several fabrication methods for sensors which are sensitive close to the physical edge of the device are under investigation, utilising techniques such as active edges, passivated edges and current-terminating rings. Such technologies offer the goal of a seamlessly tiled detection surface with minimum dead space between the individual modules. In order to quantify the performance of different geometries and different bulk and implant types, characterisation of several sensors fabricated using active-edge technology was performed at the B16 beam line of the Diamond Light Source. The sensors were fabricated by VTT and bump-bonded to Timepix ROICs. They were 100 and 200 μm thick sensors, with the last pixel-to-edge distance of either 50 or 100 μm. The sensors were fabricated as either n-on-n or n-on-p type devices. Using 15 keV monochromatic X-rays with a beam spot of 2.5 μm, the performance at the outer edge and corner pixels of the sensors was evaluated at three bias voltages. The results indicate a significant change in the charge collection properties between the edge pixel and the 5th pixel from the edge (up to 275 μm) for the 200 μm thick n-on-n sensor. The edge pixel performance of the 100 μm thick n-on-p sensors is affected only for the last two pixels (up to 110 μm), subject to biasing conditions. Imaging characteristics of all sensor types investigated are stable over time and the non-uniformities can be minimised by flat-field corrections. The results from the synchrotron tests combined with lab measurements are presented along with an explanation of the observed effects.

  14. The CMS pixel luminosity telescope

    NASA Astrophysics Data System (ADS)

    Kornmayer, A.

    2016-07-01

    The Pixel Luminosity Telescope (PLT) is a new complement to the CMS detector for the LHC Run II data taking period. It consists of eight 3-layer telescopes based on silicon pixel detectors that are placed around the beam pipe on each end of CMS, viewing the interaction point at a small angle. A fast 3-fold coincidence of the pixel planes in each telescope will provide a bunch-by-bunch measurement of the luminosity. Particle tracking allows collision products to be distinguished from beam background, provides a self-alignment of the detectors, and a continuous in-time monitoring of the efficiency of each telescope plane. The PLT is an independent luminometer, essential for enhancing the robustness of the measurement of the delivered luminosity and for reducing its systematic uncertainties. This will make it possible to determine production cross-sections, and hence couplings, with high precision and to set more stringent limits on new particle production.

  15. Charge amplitude distribution of the Gossip gaseous pixel detector

    NASA Astrophysics Data System (ADS)

    Blanco Carballo, V. M.; Chefdeville, M.; Colas, P.; Giomataris, Y.; van der Graaf, H.; Gromov, V.; Hartjes, F.; Kluit, R.; Koffeman, E.; Salm, C.; Schmitz, J.; Smits, S. M.; Timmermans, J.; Visschers, J. L.

    2007-12-01

    The Gossip gaseous pixel detector is being developed for the detection of charged particles in extremely high radiation environments as foreseen close to the interaction point of the proposed super LHC. The detecting medium is a thin layer of gas. Because of the low density of this medium, only a few primary electron/ion pairs are created by the traversing particle. To get a detectable signal, the electrons drift towards a perforated metal foil (Micromegas), after which they are multiplied in a gas avalanche. The gas avalanche occurs in the high field between the Micromegas and the pixel readout chip (ROC). Compared to a silicon pixel detector, Gossip features a low material budget and a low cooling power. An experiment using X-rays has indicated a possible high radiation tolerance exceeding 10^16 hadrons/cm^2. The amplified charge signal has a broad amplitude distribution due to the limited statistics of the primary ionization and the statistical variation of the gas amplification. Therefore, some degree of inefficiency is inevitable. This study presents experimental results on the charge amplitude distribution for CO2/DME (dimethyl ether) and Ar/iC4H10 mixtures. The measured curves were fitted with the outcome of a theoretical model. In the model, the physical Landau distribution is approximated by a Poisson distribution that is convoluted with the variation of the gas gain and the electronic noise. The value for the fraction of pedestal events is used for a direct calculation of the cluster density. For some gases, the measured cluster density is considerably lower than given in the literature.

  16. Comparison of pixel and sub-pixel based techniques to separate Pteronia incana invaded areas using multi-temporal high resolution imagery

    NASA Astrophysics Data System (ADS)

    Odindi, John; Kakembo, Vincent

    2009-08-01

    Remote Sensing using high resolution imagery (HRI) is fast becoming an important tool in detailed land-cover mapping and analysis of plant species invasion. In this study, we sought to test the separability of the Pteronia incana invader species by pixel content aggregation and pixel content de-convolution using multi-temporal infrared HRI. An invaded area in the Eastern Cape, South Africa, was flown in 2001, 2004 and 2006, and HRI of 1 m × 1 m resolution was captured using a DCS 420 colour infrared camera. The images were separated into bands, geo-rectified and radiometrically corrected using Idrisi Kilimanjaro GIS. Value files were extracted from the bands in order to compare spectral values for P. incana, green vegetation and bare surfaces using the pixel based Perpendicular Vegetation Index (PVI), while Constrained Linear Spectral Unmixing (CLSU) surface endmembers were used to generate sub-pixel land surface image fractions. Spectroscopy was used to validate spectral trends identified from HRI. The PVI successfully separated the multi-temporal imagery surfaces and was consistent with the unmixed surface image fractions from CLSU. Separability between the respective surfaces was also achieved using reflectance measurements.

  17. Microradiography with Semiconductor Pixel Detectors

    NASA Astrophysics Data System (ADS)

    Jakubek, Jan; Cejnarova, Andrea; Dammer, Jiří; Holý, Tomáš; Platkevič, Michal; Pospíšil, Stanislav; Vavřík, Daniel; Vykydal, Zdeněk

    2007-11-01

    High resolution radiography (with X-rays, neutrons, heavy charged particles, …), often also exploited in tomographic mode to provide 3D images, is a powerful imaging technique for instant and nondestructive visualization of the fine internal structure of objects. Novel types of semiconductor single particle counting pixel detectors offer many advantages for radiation imaging: high detection efficiency, energy discrimination or direct energy measurement, noiseless digital integration (counting), high frame rate and virtually unlimited dynamic range. This article shows the application and potential of pixel detectors (such as Medipix2 or TimePix) in different fields of radiation imaging.

  18. The Probabilistic Convolution Tree: Efficient Exact Bayesian Inference for Faster LC-MS/MS Protein Inference

    PubMed Central

    Serang, Oliver

    2014-01-01

    Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to and the space to where is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
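
    A toy version of the core operation: the exact distribution of a sum of independent count variables obtained by pairwise convolution of their probability mass functions. The flat loop below stands in for the tree layout, and the variable names and PMFs are illustrative.

```python
import numpy as np

def convolution_tree_pmf(pmfs):
    """Exact PMF of the sum of independent discrete variables, computed by
    pairwise convolution of their PMFs (a flat stand-in for the tree layout)."""
    pmfs = [np.asarray(p, float) for p in pmfs]
    while len(pmfs) > 1:
        merged = []
        for i in range(0, len(pmfs) - 1, 2):
            merged.append(np.convolve(pmfs[i], pmfs[i + 1]))   # exact sum of two variables
        if len(pmfs) % 2:
            merged.append(pmfs[-1])
        pmfs = merged
    return pmfs[0]

# four Bernoulli-like variables with different success probabilities
pmf = convolution_tree_pmf([[0.9, 0.1], [0.5, 0.5], [0.7, 0.3], [0.2, 0.8]])
print(pmf, pmf.sum())     # distribution over the count 0..4, sums to 1
```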

  19. UFLIC: A Line Integral Convolution Algorithm for Visualizing Unsteady Flows

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Kao, David L.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using the Line Integral Convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feed-forward method. The value depositing scheme accurately models the flow advection, and the successive feed-forward method maintains the coherence between animation frames. Our new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using our algorithm.

  20. Deep learning for steganalysis via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis that learns features automatically via deep learning models. We propose a novel customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.

  1. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical to their one-dimensional counterparts, that is, as a weighted nearest-neighbor moving average with zero phase shift using convolute integer (universal number) weighting coefficients.
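
    To illustrate how such two-dimensional weighting coefficients are applied, the sketch below performs a zero-phase weighted moving average by 2D convolution; the 3x3 integer kernel is an assumed example, not one of the paper's convolute-integer tables.

```python
import numpy as np
from scipy.signal import convolve2d

# An example 3x3 integer weighting kernel with a normalizing divisor;
# in practice the appropriate convolute-integer table would be used instead.
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float)
kernel /= kernel.sum()

data = np.random.default_rng(0).normal(size=(64, 64))               # noisy 2D measurement
smoothed = convolve2d(data, kernel, mode="same", boundary="symm")   # zero-phase moving average
```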

  2. Study on Expansion of Convolutional Compactors over Galois Field

    NASA Astrophysics Data System (ADS)

    Arai, Masayuki; Fukumoto, Satoshi; Iwasaki, Kazuhiko

    Convolutional compactors offer a promising technique of compacting test responses. In this study we expand the architecture of convolutional compactor onto a Galois field in order to improve compaction ratio as well as reduce X-masking probability, namely, the probability that an error is masked by unknown values. While each scan chain is independently connected by EOR gates in the conventional arrangement, the proposed scheme treats q signals as an element over GF(2q), and the connections are configured on the same field. We show the arrangement of the proposed compactors and the equivalent expression over GF(2). We then evaluate the effectiveness of the proposed expansion in terms of X-masking probability by simulations with uniform distribution of X-values, as well as reduction of hardware overheads. Furthermore, we evaluate a multi-weight arrangement of the proposed compactors for non-uniform X distributions.

  3. Image Super-Resolution Using Deep Convolutional Networks.

    PubMed

    Dong, Chao; Loy, Chen Change; He, Kaiming; Tang, Xiaoou

    2016-02-01

    We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality. PMID:26761735
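
    A minimal sketch of the three-layer end-to-end mapping described above, written in PyTorch; the filter counts and kernel sizes follow a commonly cited configuration and are assumptions here, not necessarily the exact settings evaluated in the paper.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer convolutional mapping from an interpolated LR image to HR."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# The low-resolution input is first upsampled (e.g. bicubic) to the target size.
lr_upscaled = torch.rand(1, 3, 128, 128)
hr_estimate = SRCNN()(lr_upscaled)
```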

  4. Face Detection Using GPU-Based Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Nasse, Fabian; Thurau, Christian; Fink, Gernot A.

    In this paper, we consider the problem of face detection under pose variations. Unlike other contributions, a focus of this work resides within efficient implementation utilizing the computational powers of modern graphics cards. The proposed system consists of a parallelized implementation of convolutional neural networks (CNNs) with a special emphasis on also parallelizing the detection process. Experimental validation in a smart conference room with 4 active ceiling-mounted cameras shows a dramatic speed-gain under real-life conditions.

  5. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.

  6. Real-time sub-pixel registration of imagery for an IR polarimeter

    NASA Astrophysics Data System (ADS)

    Hanks, Jonathan B.; Pezzaniti, J. Larry; Chenault, David B.; Romano, João M.

    2012-06-01

    In imaging polarimetry, special consideration must be given to ensure proper spatial registration between frames. Edge artifacts caused by the differencing of unregistered frames have the potential to create significant spurious polarization signatures. To achieve 1/10th pixel registration or better, a software-based registration approach is often required. The focus of this paper is to present an efficient algorithm for real-time sub-pixel registration in a division-of-time IR polarimeter based on a rotating polarizer. This algorithm has been implemented in a commercially available rotating polarizer LWIR imaging polarimeter offered by Polaris Sensor Technologies. This paper presents measurements of image nutation in a rotating polarizer LWIR imaging polarimeter and real-time registration of image data from that same polarimeter. The registration algorithm is based on an optimal 2D convolution. Examples of registered images are provided as well as estimates of residual misregistration artifacts.
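
    The paper's optimal 2D convolution is not reproduced here; as a generic illustration of sub-pixel shift estimation, the sketch below uses FFT-based cross-correlation followed by a parabolic fit around the correlation peak. The function name and sign convention are assumptions.

```python
import numpy as np

def subpixel_shift(ref, img):
    """Estimate the (row, col) shift between two frames to sub-pixel precision.

    FFT-based cross-correlation locates the integer-pixel peak, and a 1D
    parabolic fit along each axis refines it. Assumes the peak is not on the
    border of the correlation surface.
    """
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(axis):
        idx = list(peak)
        vals = []
        for d in (-1, 0, 1):
            idx[axis] = peak[axis] + d
            vals.append(corr[tuple(idx)])
        c_m, c_0, c_p = vals
        denom = c_m - 2.0 * c_0 + c_p
        return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

    center = np.array(corr.shape) // 2
    return (peak[0] - center[0] + refine(0), peak[1] - center[1] + refine(1))
```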

  7. Fine-grained representation learning in convolutional autoencoders

    NASA Astrophysics Data System (ADS)

    Luo, Chang; Wang, Jie

    2016-03-01

    Convolutional autoencoders (CAEs) have been widely used as unsupervised feature extractors for high-resolution images. As a key component in CAEs, pooling is a biologically inspired operation to achieve scale and shift invariances, and the pooled representation directly affects the CAEs' performance. Fine-grained pooling, which uses small and dense pooling regions, encodes fine-grained visual cues and enhances local characteristics. However, it tends to be sensitive to spatial rearrangements. In most previous works, pooled features were obtained by empirically modulating parameters in CAEs. We see the CAE as a whole and propose a fine-grained representation learning law to extract better fine-grained features. This representation learning law suggests two directions for improvement. First, we probabilistically evaluate the discrimination-invariance tradeoff with fine-grained granularity in the pooled feature maps, and suggest the proper filter scale in the convolutional layer and appropriate whitening parameters in the preprocessing step. Second, pooling approaches are combined with the sparsity degree in pooling regions, and we propose the preferable pooling approach. Experimental results on two independent benchmark datasets demonstrate that our representation learning law could guide CAEs to extract better fine-grained features and perform better in multiclass classification tasks. This paper also provides guidance for selecting appropriate parameters to obtain better fine-grained representation in other convolutional neural networks.

  8. Automatic localization of vertebrae based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie

    2015-03-01

    Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as the landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. Then the output feature vector from the max-pooling layer is fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and the ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.

  9. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

    The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  10. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    SciTech Connect

    Neylon, J. Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-10-15

    Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria

  11. SAR Image Complex Pixel Representations

    SciTech Connect

    Doerry, Armin W.

    2015-03-01

    Complex pixel values for Synthetic Aperture Radar (SAR) images of uniformly distributed clutter can be represented as either real/imaginary (also known as I/Q) values, or as Magnitude/Phase values. Generally, these component values are integers with a limited number of bits. For clutter energy well below full-scale, Magnitude/Phase offers lower quantization noise than I/Q representation. Further improvement can be had with companding of the Magnitude value.

  12. Representing SAR complex image pixels

    NASA Astrophysics Data System (ADS)

    Doerry, A. W.

    2016-05-01

    Synthetic Aperture Radar (SAR) images are often complex-valued to facilitate specific exploitation modes. Furthermore, these pixel values are typically represented with either real/imaginary (also known as I/Q) values, or as Magnitude/Phase values, with constituent components comprising integers with a limited number of bits. For clutter energy well below full-scale, Magnitude/Phase offers lower quantization noise than I/Q representation. Further improvement can be had with companding of the Magnitude value.
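
    A small numerical check of the claim, under assumed parameters (8-bit components, unit full scale, circular-Gaussian clutter well below full scale): the same complex samples are quantized in I/Q and in Magnitude/Phase form and the RMS errors compared.

```python
import numpy as np

rng = np.random.default_rng(1)
# Complex circular-Gaussian "clutter" well below full scale (full scale = 1.0)
z = rng.normal(scale=0.01, size=100_000) + 1j * rng.normal(scale=0.01, size=100_000)

def quantize(x, lo, hi, bits):
    """Uniform mid-tread quantizer over [lo, hi) with 2**bits levels."""
    step = (hi - lo) / (2 ** bits)
    return lo + (np.floor((x - lo) / step) + 0.5) * step

bits = 8
# I/Q: each component quantized over the full-scale range [-1, 1]
zq_iq = quantize(z.real, -1, 1, bits) + 1j * quantize(z.imag, -1, 1, bits)
# Magnitude/Phase: magnitude over [0, 1], phase over [-pi, pi)
zq_mp = quantize(np.abs(z), 0, 1, bits) * np.exp(1j * quantize(np.angle(z), -np.pi, np.pi, bits))

print("I/Q RMS error:      ", np.sqrt(np.mean(np.abs(z - zq_iq) ** 2)))
print("Mag/Phase RMS error:", np.sqrt(np.mean(np.abs(z - zq_mp) ** 2)))
```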

  13. CMOS digital pixel sensors: technology and applications

    NASA Astrophysics Data System (ADS)

    Skorka, Orit; Joseph, Dileepan

    2014-04-01

    CMOS active pixel sensor technology, which is widely used these days for digital imaging, is based on analog pixels. Transition to digital pixel sensors can boost signal-to-noise ratios and enhance image quality, but can increase pixel area to dimensions that are impractical for the high-volume market of consumer electronic devices. There are two main approaches to digital pixel design. The first uses digitization methods that largely rely on photodetector properties and so are unique to imaging. The second is based on adaptation of a classical analog-to-digital converter (ADC) for in-pixel data conversion. Imaging systems for medical, industrial, and security applications are emerging lower-volume markets that can benefit from these in-pixel ADCs. With these applications, larger pixels are typically acceptable, and imaging may be done in invisible spectral bands.

  14. Low complexity pixel-based halftone detection

    NASA Astrophysics Data System (ADS)

    Ok, Jiheon; Han, Seong Wook; Jarno, Mielikainen; Lee, Chulhee

    2011-10-01

    With the rapid advances of the internet and other multimedia technologies, the digital document market has been growing steadily. Since most digital images use halftone technologies, quality degradation occurs when one tries to scan and reprint them. Therefore, it is necessary to extract the halftone areas to produce high-quality printing. In this paper, we propose a low-complexity pixel-based halftone detection algorithm. For each pixel, we considered a surrounding block. If the block contained any flat background regions, text, thin lines, or continuous or non-homogeneous regions, the pixel was classified as a non-halftone pixel. After excluding those non-halftone pixels, the remaining pixels were considered to be halftone pixels. Finally, documents were classified as pictures or photo documents by calculating the halftone pixel ratio. The proposed algorithm proved to be memory-efficient and required low computational cost. The proposed algorithm was easily implemented on a GPU.

  15. The FPGA Pixel Array Detector

    NASA Astrophysics Data System (ADS)

    Hromalik, Marianne S.; Green, Katherine S.; Philipp, Hugh T.; Tate, Mark W.; Gruner, Sol M.

    2013-02-01

    A proposed design for a reconfigurable x-ray Pixel Array Detector (PAD) is described. It operates by integrating a high-end commercial field programmable gate array (FPGA) into a 3-layer device along with a high-resistivity diode detection layer and a custom, application-specific integrated circuit (ASIC) layer. The ASIC layer contains an energy-discriminating photon-counting front end with photon hits streamed directly to the FPGA via a massively parallel, high-speed data connection. FPGA resources can be allocated to perform user defined tasks on the pixel data streams, including the implementation of a direct time autocorrelation function (ACF) with time resolution down to 100 ns. Using the FPGA at the front end to calculate the ACF reduces the required data transfer rate by several orders of magnitude when compared to a fast framing detector. The FPGA-ASIC high-speed interface, as well as the in-FPGA implementation of a real-time ACF for x-ray photon correlation spectroscopy experiments has been designed and simulated. A 16×16 pixel prototype of the ASIC has been fabricated and is being tested.

  16. A new dual-isotope convolution cross-talk correction method: a Tl-201/Tc-99m SPECT cardiac phantom study.

    PubMed

    Knesaurek, K

    1994-10-01

    Simultaneous dual-isotope SPECT imaging provides a clear advantage in situations where two concurrent metabolic, anatomic, or background measurements are desired. It obviates the need for two separate imaging sessions, reduces patient motion problems, and provides exact image registration between images. However, a potential limitation of dual-isotope SPECT imaging is the contribution of scattered and primary photons from one radionuclide into the second radionuclide's photopeak energy window, referred to here as cross-talk. Cross-talk in both photopeak energy windows can significantly degrade image quality, resolution, and quantitation to an unacceptable level. The simple cross-talk correction method used in dual-radionuclide in vitro counting, even when applied on a pixel-by-pixel basis, does not account for the differences in spatial distribution of the photopeak and cross-talk photons. Here a new convolution cross-talk correction method is presented. The convolution filters are derived from point response functions (PRFs) for Tc-99m and Tl-201 point sources. Three separate acquisitions were performed, each with two 20% wide energy windows, one centered at 140 keV and another at 70 keV. The first acquisition was done with Tc-99m solution only, the second with Tl-201 solution only, and the third with a mixture of Tc-99m and Tl-201. The nonuniform RH-2 thorax-heart phantom was used to test the new correction technique. The main difficulty and limitation of the convolution correction approach is caused by the variation in PRF as a function of depth. Thus, an average PRF should be used in the creation of an approximate filter. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:7869989

  17. Mapping Electrical Crosstalk in Pixelated Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh (Inventor); Cole, David (Inventor); Smith, Roger M (Inventor); Hancock, Bruce R. (Inventor)

    2013-01-01

    The effects of inter-pixel capacitance in a pixelated array may be measured by first resetting all pixels in the array to a first voltage, after which a first image is read out, then resetting only a subset of pixels in the array to a second voltage, after which a second image is read out; the difference between the first and second images provides information about the inter-pixel capacitance. Other embodiments are described and claimed.
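
    A minimal sketch of how the two readouts described above might be reduced to a coupling estimate; the function name, window size, and normalization are illustrative assumptions, and the toggled pixels are assumed to lie away from the array border.

```python
import numpy as np

def coupling_kernel(first_image, second_image, toggled_pixels, half=1):
    """Average the difference image in a window around each toggled pixel.

    The normalized result approximates the fraction of signal coupled into
    neighbouring pixels (inter-pixel capacitance). Inputs are 2D arrays from
    the two reset/readout cycles; toggled_pixels is a list of (row, col).
    """
    diff = np.asarray(second_image, dtype=float) - np.asarray(first_image, dtype=float)
    win = 2 * half + 1
    acc = np.zeros((win, win))
    for (r, c) in toggled_pixels:
        acc += diff[r - half:r + half + 1, c - half:c + half + 1]
    acc /= len(toggled_pixels)
    return acc / acc.sum()        # normalize so the kernel sums to one
```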

  18. Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Zuo, Zhen; Shuai, Bing; Wang, Gang; Liu, Xiao; Wang, Xingxing; Wang, Bing; Chen, Yushi

    2016-07-01

    Existing deep convolutional neural networks (CNNs) have shown great success in image classification. CNNs mainly consist of convolutional and pooling layers, both of which operate on local image areas without considering the dependencies among different image regions. However, such dependencies are very important for generating explicit image representations. In contrast, recurrent neural networks (RNNs) are well known for their ability to encode contextual information in sequential data, and they require only a limited number of network parameters. General RNNs can hardly be applied directly to non-sequential data. Thus, we propose hierarchical RNNs (HRNNs). In HRNNs, each RNN layer focuses on modeling spatial dependencies among image regions from the same scale but different locations, while the cross-scale RNN connections model scale dependencies among regions from the same location but different scales. Specifically, we propose two recurrent neural network models: 1) the hierarchical simple recurrent network (HSRN), which is fast and has low computational cost; and 2) the hierarchical long short-term memory recurrent network (HLSTM), which performs better than HSRN at the price of higher computational cost. In this manuscript, we integrate CNNs with HRNNs, and develop end-to-end convolutional hierarchical recurrent neural networks (C-HRNNs). C-HRNNs not only make use of the representation power of CNNs, but also efficiently encode spatial and scale dependencies among different image regions. On four of the most challenging object/scene image classification benchmarks, our C-HRNNs achieve state-of-the-art results on Places 205, SUN 397 and MIT Indoor, and competitive results on ILSVRC 2012.

  19. Faster GPU-based convolutional gridding via thread coarsening

    NASA Astrophysics Data System (ADS)

    Merry, B.

    2016-07-01

    Convolutional gridding is a processor-intensive step in interferometric imaging. While it is possible to use graphics processing units (GPUs) to accelerate this operation, existing methods use only a fraction of the available flops. We apply thread coarsening to improve the efficiency of an existing algorithm, and observe performance gains of up to 3.2 × for single-polarization gridding and 1.9 × for quad-polarization gridding on a GeForce GTX 980, and smaller but still significant gains on a Radeon R9 290X.

  20. Convolution seal for transition duct in turbine system

    SciTech Connect

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-03-10

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface member for interfacing with a turbine section. The turbine system further includes a convolution seal contacting the interface member to provide a seal between the interface member and the turbine section.

  1. Convolution seal for transition duct in turbine system

    SciTech Connect

    Flanagan, James Scott; LeBegue, Jeffrey Scott; McMahan, Kevin Weston; Dillard, Daniel Jackson; Pentecost, Ronnie Ray

    2015-05-26

    A turbine system is disclosed. In one embodiment, the turbine system includes a transition duct. The transition duct includes an inlet, an outlet, and a passage extending between the inlet and the outlet and defining a longitudinal axis, a radial axis, and a tangential axis. The outlet of the transition duct is offset from the inlet along the longitudinal axis and the tangential axis. The transition duct further includes an interface feature for interfacing with an adjacent transition duct. The turbine system further includes a convolution seal contacting the interface feature to provide a seal between the interface feature and the adjacent transition duct.

  2. Convolutional neural networks for mammography mass lesion classification.

    PubMed

    Arevalo, John; Gonzalez, Fabio A; Ramos-Pollan, Raul; Oliveira, Jose L; Guevara Lopez, Miguel Angel

    2015-08-01

    Feature extraction is a fundamental step when mammography image analysis is addressed using learning-based approaches. Traditionally, problem-dependent handcrafted features are used to represent the content of images. An alternative approach successfully applied in other domains is the use of neural networks to automatically discover good features. This work presents an evaluation of convolutional neural networks to learn features for mammography mass lesions before feeding them to a classification stage. Experimental results showed that this approach is a suitable strategy, outperforming the state-of-the-art representation by raising the area under the ROC curve from 79.9% to 86%. PMID:26736382

  3. Continuous speech recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qing-qing; Liu, Yong; Pan, Jie-lin; Yan, Yong-hong

    2015-07-01

    Convolutional Neural Networks (CNNs), which showed success in achieving translation invariance for many image processing tasks, are investigated for continuous speech recognition in this paper. Compared to Deep Neural Networks (DNNs), which have proven successful in many speech recognition tasks, CNNs can reduce the NN model size significantly while achieving even better recognition accuracy. Experiments on the standard TIMIT speech corpus showed that CNNs outperformed DNNs in terms of accuracy even with a smaller model size.

  4. A digital model for streamflow routing by convolution methods

    USGS Publications Warehouse

    Doyle, W.H., Jr.; Shearman, H.O.; Stiltner, G.J.; Krug, W.O.

    1984-01-01

    U.S. Geological Survey computer model, CONROUT, for routing streamflow by unit-response convolution flow-routing techniques from an upstream channel location to a downstream channel location has been developed and documented. Calibration and verification of the flow-routing model and subsequent use of the model for simulation is also documented. Three hypothetical examples and two field applications are presented to illustrate basic flow-routing concepts. Most of the discussion is limited to daily flow routing since, to date, all completed and current studies of this nature involve daily flow routing. However, the model is programmed to accept hourly flow-routing data. (USGS)
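
    A minimal sketch of unit-response convolution routing, with illustrative daily flows and unit-response ordinates rather than values from the documented model: the downstream hydrograph is the convolution of the upstream hydrograph with the unit-response function.

```python
import numpy as np

def route_flow(inflow, unit_response):
    """Route an upstream daily hydrograph to a downstream site by convolution.

    The unit-response ordinates describe the downstream response to a unit
    input at the upstream site; the values below are illustrative only.
    """
    routed = np.convolve(inflow, unit_response)
    return routed[:len(inflow)]          # same length as the input record

inflow = np.array([10.0, 40.0, 80.0, 55.0, 30.0, 20.0, 15.0])   # upstream daily flows
unit_response = np.array([0.1, 0.5, 0.3, 0.1])                  # ordinates sum to 1
print(route_flow(inflow, unit_response))
```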

  5. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC.

  6. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964

  7. Is turbulent mixing a self-convolution process?

    PubMed

    Venaille, Antoine; Sommeria, Joel

    2008-06-13

    Experimental results for the evolution of the probability distribution function (PDF) of a scalar mixed by a turbulent flow in a channel are presented. The sequence of PDFs from an initial skewed distribution to a sharp Gaussian is found to be nonuniversal. The route toward homogenization depends on the ratio between the cross sections of the dye injector and the channel. In connection with this observation, advantages, shortcomings, and applicability of models for the PDF evolution based on a self-convolution mechanism are discussed. PMID:18643510
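
    A minimal sketch of one step of a self-convolution model for the scalar PDF, assuming the mixed concentration is the average of two independent draws from the current distribution; this is a generic illustration, not one of the specific models assessed in the paper.

```python
import numpy as np

def self_convolution_step(pdf, x):
    """One self-convolution mixing step on a PDF sampled on a uniform grid x.

    The concentration after a step is modeled as the average of two independent
    values drawn from the current PDF: the new PDF is the self-convolution,
    rescaled by the change of variables mean = sum / 2, then renormalized.
    """
    dx = x[1] - x[0]
    conv = np.convolve(pdf, pdf) * dx                    # PDF of the sum
    x_sum = 2.0 * x[0] + dx * np.arange(conv.size)       # grid of the sum variable
    new_pdf = 2.0 * np.interp(2.0 * x, x_sum, conv)      # PDF of the mean
    return new_pdf / np.trapz(new_pdf, x)
```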

  8. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    SciTech Connect

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  9. Visualizing Vector Fields Using Line Integral Convolution and Dye Advection

    NASA Technical Reports Server (NTRS)

    Shen, Han-Wei; Johnson, Christopher R.; Ma, Kwan-Liu

    1996-01-01

    We present local and global techniques to visualize three-dimensional vector field data. Using the Line Integral Convolution (LIC) method to image the global vector field, our new algorithm allows the user to introduce colored 'dye' into the vector field to highlight local flow features. A fast algorithm is proposed that quickly recomputes the dyed LIC images. In addition, we introduce volume rendering methods that can map the LIC texture on any contour surface and/or translucent region defined by additional scalar quantities, and can follow the advection of colored dye throughout the volume.

  10. Of FFT-based convolutions and correlations, with application to solving Poisson's equation in an open rectangular pipe

    SciTech Connect

    Ryne, Robert D.

    2011-11-07

    A new method is presented for solving Poisson's equation inside an open-ended rectangular pipe. The method uses Fast Fourier Transforms (FFTs) to perform mixed convolutions and correlations of the charge density with the Green function. Descriptions are provided for algorithms based on the ordinary Green function and for an integrated Green function (IGF). Due to its similarity to the widely used Hockney algorithm for solving Poisson's equation in free space, this capability can be easily implemented in many existing particle-in-cell beam dynamics codes.
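
    The mixed convolution/correlation treatment for the open-ended pipe is specific to the paper; the sketch below shows only the closely related zero-padded FFT convolution of a charge density with a Green function, in the spirit of the Hockney free-space algorithm, using a softened 2D logarithmic Green function as a stand-in.

```python
import numpy as np

def hockney_potential(rho, green_func, dx=1.0, dy=1.0):
    """Free-space convolution of a charge density with a Green function via FFTs.

    The grid is doubled in each dimension and the Green function is laid out
    with its natural circular symmetry, so the circular FFT convolution
    reproduces the aperiodic sum on the physical grid (the Hockney trick).
    """
    nx, ny = rho.shape
    mx, my = 2 * nx, 2 * ny

    # Signed grid separations for the doubled, periodic grid
    ix = np.fft.fftfreq(mx, d=1.0 / mx)          # 0, 1, ..., nx-1, -nx, ..., -1
    iy = np.fft.fftfreq(my, d=1.0 / my)
    gx, gy = np.meshgrid(ix * dx, iy * dy, indexing="ij")
    green = green_func(gx, gy)

    rho_pad = np.zeros((mx, my))
    rho_pad[:nx, :ny] = rho
    phi = np.fft.irfft2(np.fft.rfft2(rho_pad) * np.fft.rfft2(green), s=(mx, my))
    return phi[:nx, :ny] * dx * dy

# Example: a point charge with a softened 2D logarithmic Green function
softened_log = lambda x, y: -np.log(np.hypot(x, y) + 1e-6) / (2.0 * np.pi)
rho = np.zeros((64, 64))
rho[32, 32] = 1.0
phi = hockney_potential(rho, softened_log)
```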

  11. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.; Goodier, B. G.

    1981-01-01

    The location and migration of cloud, land and water features were examined in spectral space (reflective VIS vs. emissive IR). Daytime HCMM data showed two distinct types of cloud affected pixels in the south Texas test area. High altitude cirrus and/or cirrostratus and "subvisible cirrus" (SCi) reflected the same or only slightly more than land features. In the emissive band, the digital counts ranged from 1 to over 75 and overlapped land features. Pixels consisting of cumulus clouds, or of mixed cumulus and landscape, clustered in a different area of spectral space than the high altitude cloud pixels. Cumulus affected pixels were more reflective than land and water pixels. In August the high altitude clouds and SCi were more emissive than similar clouds were in July. Four-channel TIROS-N data were examined with the objective of developing a multispectral screening technique for removing SCi contaminated data.

  12. Making a trillion pixels dance

    NASA Astrophysics Data System (ADS)

    Singh, Vivek; Hu, Bin; Toh, Kenny; Bollepalli, Srinivas; Wagner, Stephan; Borodovsky, Yan

    2008-03-01

    In June 2007, Intel announced a new pixelated mask technology. This technology was created to address the problem caused by the growing gap between the lithography wavelength and the feature sizes patterned with it. As this gap has increased, the quality of the image has deteriorated. About a decade ago, Optical Proximity Correction (OPC) was introduced to bridge this gap, but as this gap continued to increase, one could not rely on the same basic set of techniques to maintain image quality. The computational lithography group at Intel sought to alleviate this problem by experimenting with additional degrees of freedom within the mask. This paper describes the resulting pixelated mask technology, and some of the computational methods used to create it. The first key element of this technology is a thick mask model. We realized very early in the development that, unlike traditional OPC methods, the pixelated mask would require a very accurate thick mask model. Whereas in the traditional methods, one can use the relatively coarse approximations such as the boundary layer method, use of such techniques resulted not just in incorrect sizing of parts of the pattern, but in whole features missing. We built on top of previously published domain decomposition methods, and incorporated limitations of the mask manufacturing process, to create an accurate thick mask model. Several additional computational techniques were invoked to substantially increase the speed of this method to a point that it was feasible for full chip tapeout. A second key element of the computational scheme was the comprehension of mask manufacturability, including the vital issue of the number of colors in the mask. While it is obvious that use of three or more colors will give the best image, one has to be practical about projecting mask manufacturing capabilities for such a complex mask. To circumvent this serious issue, we eventually settled on a two color mask - comprising plain glass and etched

  13. Pixelated filters for spatial imaging

    NASA Astrophysics Data System (ADS)

    Mathieu, Karine; Lequime, Michel; Lumeau, Julien; Abel-Tiberini, Laetitia; Savin De Larclause, Isabelle; Berthon, Jacques

    2015-10-01

    Small satellites are often used by space agencies to meet scientific space mission requirements. Their payloads are composed of various instruments that collect an increasing amount of data while respecting growing constraints on volume and mass, so small integrated cameras have taken a favored place among these instruments. To ensure scene-specific color information sensing, pixelated filters appear more attractive than filter wheels. The work presented here, in collaboration with Institut Fresnel, deals with the manufacturing of this kind of component, based on thin-film technologies and photolithography processes. CCD detectors with a pixel pitch of about 30 μm were considered. In the configuration where the matrix filters are positioned closest to the detector, the matrix filters are composed of 2x2 macro-pixels (i.e., 4 filters). These 4 filters have a bandwidth of about 40 nm and are respectively centered at 550, 700, 770 and 840 nm, with a specific rejection rate defined on the visible spectral range [500 - 900 nm]. After an intensive design step, 4 thin-film structures were elaborated with a maximum thickness of 5 μm. A run of tests allowed us to choose the optimal micro-structuring parameters. The 100x100 matrix filter prototypes have been successfully manufactured with lift-off and ion-assisted deposition processes. High spatial and spectral characterization, with a dedicated metrology bench, showed that the initial specifications and simulations were globally met. These excellent performances remove the technological barriers to high-end integrated application-specific multispectral imaging.

  14. Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks.

    PubMed

    Dosovitskiy, Alexey; Fischer, Philipp; Springenberg, Jost Tobias; Riedmiller, Martin; Brox, Thomas

    2016-09-01

    Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor. PMID:26540673

  15. Convolutional neural network architectures for predicting DNA–protein binding

    PubMed Central

    Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.

    2016-01-01

    Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608

  16. Classification of Histology Sections via Multispectral Convolutional Sparse Coding*

    PubMed Central

    Zhou, Yin; Barner, Kenneth; Spellman, Paul

    2014-01-01

    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]). PMID:25554749

  17. Enhancing Neutron Beam Production with a Convoluted Moderator

    SciTech Connect

    Iverson, Erik B; Baxter, David V; Muhrer, Guenter; Ansell, Stuart; Gallmeier, Franz X; Dalgliesh, Robert; Lu, Wei; Kaiser, Helmut

    2014-10-01

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally-enhanced neutron beam source, improving beam effectiveness over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  18. Fluence-convolution broad-beam (FCBB) dose calculation.

    PubMed

    Lu, Weiguo; Chen, Mingli

    2010-12-01

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into the central axis (CAX) component and lateral spread function (LSF) and taking the beam's eye view (BEV), we established a non-voxel and non-beamlet-based dose calculation formula. Both LSF and CAX are determined by a commissioning procedure using the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with the LSF followed by ray tracing based on the CAX lookup table with radiological distance and divergence correction, resulting in a complexity of O(N^3) both spatially and temporally. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited for calculating the iteration dose during IMRT optimization. PMID:21081826
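
    A minimal sketch of the 2D-convolution half of the idea: a beam's-eye-view fluence map convolved with a lateral spread function. A single Gaussian LSF and the field geometry below are assumptions; the commissioned LSF, the CAX lookup, and the radiological-depth and divergence corrections are not modeled.

```python
import numpy as np
from scipy.signal import fftconvolve

def bev_dose_plane(fluence, sigma_mm, pixel_mm):
    """Convolve a beam's-eye-view fluence map with a lateral spread function.

    An isotropic Gaussian stands in for the commissioned LSF; the kernel is
    truncated at four standard deviations and normalized to unit sum.
    """
    half = int(np.ceil(4 * sigma_mm / pixel_mm))
    ax = np.arange(-half, half + 1) * pixel_mm
    xx, yy = np.meshgrid(ax, ax)
    lsf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_mm ** 2))
    lsf /= lsf.sum()
    return fftconvolve(fluence, lsf, mode="same")

fluence = np.zeros((101, 101))
fluence[40:61, 40:61] = 1.0                      # a ~2x2 cm open field at 1 mm resolution
dose_plane = bev_dose_plane(fluence, sigma_mm=3.0, pixel_mm=1.0)
```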

  19. Multiple deep convolutional neural networks averaging for face alignment

    NASA Astrophysics Data System (ADS)

    Zhang, Shaohua; Yang, Hua; Yin, Zhouping

    2015-05-01

    Face alignment is critical for face recognition, and the deep learning-based method shows promise for solving such issues, given that competitive results are achieved on benchmarks with additional benefits, such as dispensing with handcrafted features and initial shape. However, most existing deep learning-based approaches are complicated and quite time-consuming during training. We propose a compact face alignment method for fast training without decreasing its accuracy. Rectified linear units are employed, which allow all networks to converge approximately five times faster than with tanh neurons. An eight learnable layer deep convolutional neural network (DCNN) based on local response normalization and a padding convolutional layer (PCL) is designed to provide reliable initial values during prediction. A model combination scheme is presented to further reduce errors, while showing that only two network architectures and hyperparameter selection procedures are required in our approach. A three-level cascaded system is ultimately built based on the DCNNs and model combination mode. Extensive experiments validate the effectiveness of our method and demonstrate comparable accuracy with state-of-the-art methods on the BioID, labeled face parts in the wild, and Helen datasets.

  20. Convolutional Neural Network Based Fault Detection for Rotating Machinery

    NASA Astrophysics Data System (ADS)

    Janssens, Olivier; Slavkovikj, Viktor; Vervisch, Bram; Stockman, Kurt; Loccufier, Mia; Verstockt, Steven; Van de Walle, Rik; Van Hoecke, Sofie

    2016-09-01

    Vibration analysis is a well-established technique for condition monitoring of rotating machines as the vibration patterns differ depending on the fault or machine condition. Currently, mainly manually-engineered features, such as the ball pass frequencies of the raceway, RMS, kurtosis and crest, are used for automatic fault detection. Unfortunately, engineering and interpreting such features requires a significant level of human expertise. To enable non-experts in vibration analysis to perform condition monitoring, the overhead of feature engineering for specific faults needs to be reduced as much as possible. Therefore, in this article we propose a feature learning model for condition monitoring based on convolutional neural networks. The goal of this approach is to autonomously learn useful features for bearing fault detection from the data itself. Several types of bearing faults such as outer-raceway faults and lubrication degradation are considered, but also healthy bearings and rotor imbalance are included. For each condition, several bearings are tested to ensure generalization of the fault-detection system. Furthermore, the feature-learning based approach is compared to a feature-engineering based approach using the same data to objectively quantify their performance. The results indicate that the feature-learning system, based on convolutional neural networks, significantly outperforms the classical feature-engineering based approach which uses manually engineered features and a random forest classifier. The former achieves an accuracy of 93.61 percent and the latter an accuracy of 87.25 percent.

  1. A Mathematical Motivation for Complex-Valued Convolutional Networks.

    PubMed

    Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur

    2016-05-01

    A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets. PMID:26890348
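
    A minimal sketch of one stage of the composition described above, in NumPy: convolution with complex-valued filters, entrywise absolute value, then local averaging. Using windowed complex exponentials as filters, as in the abstract's spectral interpretation, the stage reduces to a windowed absolute spectrum; the filter frequencies, window, and pooling width below are arbitrary choices.

```python
import numpy as np

def complex_convnet_stage(x, filters, pool=4):
    """One stage of a complex-valued convnet applied to a 1D real signal:
    convolve with complex filters, take entrywise absolute values, then
    locally average in non-overlapping windows of length `pool`.
    """
    outputs = []
    for h in filters:
        y = np.abs(np.convolve(x, h, mode="valid"))             # |complex convolution|
        trimmed = y[: (len(y) // pool) * pool]
        outputs.append(trimmed.reshape(-1, pool).mean(axis=1))  # local averaging
    return np.stack(outputs)

# Windowed complex exponential filters -> a data-independent windowed absolute spectrum
n = np.arange(32)
window = np.hanning(32)
filters = [window * np.exp(2j * np.pi * k * n / 32) for k in (2, 4, 8)]
signal = np.random.default_rng(0).standard_normal(1024)
features = complex_convnet_stage(signal, filters)
```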

  2. Deep Convolutional Neural Networks for large-scale speech tasks.

    PubMed

    Sainath, Tara N; Kingsbury, Brian; Saon, George; Soltau, Hagen; Mohamed, Abdel-rahman; Dahl, George; Ramabhadran, Bhuvana

    2015-04-01

    Convolutional Neural Networks (CNNs) are an alternative type of neural network that can be used to reduce spectral variations and model spectral correlations which exist in signals. Since speech signals exhibit both of these properties, we hypothesize that CNNs are a more effective model for speech compared to Deep Neural Networks (DNNs). In this paper, we explore applying CNNs to large vocabulary continuous speech recognition (LVCSR) tasks. First, we determine the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks. Specifically, we focus on how many convolutional layers are needed, what an appropriate number of hidden units is, and what the best pooling strategy is. Second, we investigate how to incorporate speaker-adapted features, which cannot be modeled directly by CNNs as they do not obey locality in frequency, into the CNN framework. Third, given the importance of sequence training for speech tasks, we introduce a strategy to use ReLU+dropout during Hessian-free sequence training of CNNs. Experiments on 3 LVCSR tasks indicate that a CNN with the proposed speaker-adapted and ReLU+dropout ideas allows for a 12%-14% relative improvement in WER over a strong DNN system, achieving state-of-the-art results on these 3 tasks. PMID:25439765

  3. PIXEL 2010 - A Résumé

    NASA Astrophysics Data System (ADS)

    Wermes, N.

    2011-09-01

    The Pixel 2010 conference focused on semiconductor pixel detectors for particle tracking/vertexing as well as for imaging, in particular for synchrotron light sources and XFELs. The big LHC hybrid pixel detectors have impressively started showing their capabilities. X-ray imaging detectors, also using the hybrid pixel technology, have greatly advanced the experimental possibilities for diffraction experiments. Monolithic or semi-monolithic devices like CMOS active pixels and DEPFET pixels have now reached a state such that complete vertex detectors for RHIC and superKEKB are being built with these technologies. Finally, new advances towards fully monolithic active pixel detectors, featuring full CMOS electronics merged with efficient signal charge collection, exploiting standard CMOS technologies, SOI and/or 3D integration, show the path for the future. This résumé attempts to extract the main statements of the results and developments presented at this conference.

  4. Predicting human gaze beyond pixels.

    PubMed

    Xu, Juan; Jiang, Ming; Wang, Shuo; Kankanhalli, Mohan S; Zhao, Qi

    2014-01-01

    A large body of previous models to predict where people look in natural scenes focused on pixel-level image attributes. To bridge the semantic gap between the predictive power of computational saliency models and human behavior, we propose a new saliency architecture that incorporates information at three layers: pixel-level image attributes, object-level attributes, and semantic-level attributes. Object- and semantic-level information is frequently ignored, or only a few sample object categories are discussed where scaling to a large number of object categories is neither feasible nor neurally plausible. To address this problem, this work constructs a principled vocabulary of basic attributes to describe object- and semantic-level information, thus not restricting the model to a limited number of object categories. We build a new dataset of 700 images with eye-tracking data of 15 viewers and annotation data of 5,551 segmented objects with fine contours and 12 semantic attributes (publicly available with the paper). Experimental results demonstrate the importance of the object- and semantic-level information in the prediction of visual attention. PMID:24474825

  5. Electronic holographic device based on macro-pixel with local coherence

    NASA Astrophysics Data System (ADS)

    Moon, Woonchan; Kwon, Jaebeom; Kim, Hwi; Hahn, Joonku

    2015-09-01

    Holography has been regarded as one of the most ideal techniques for three-dimensional (3D) display because it simultaneously records and reconstructs both the amplitude and phase of the object wave. Nevertheless, many consider this technique unsuitable for commercialization due to some significant problems. In this paper, we propose an electronic holographic 3D display based on macro-pixels with local coherence. Here, the incident wave within each macro-pixel is coherent, but the wave in one macro-pixel is not mutually coherent with the wave in any other macro-pixel. This concept provides great freedom in the distribution of the pixels in the modulator. The relative distance between two macro-pixels results in a negligible change of the interference pattern in observation space. It is also possible to subdivide a macro-pixel into sub-pixels in order to enlarge the field of view (FOV). The idea greatly reduces the data capacity required by the holographic display. Moreover, the dimensions of the system can be remarkably reduced by micro-optics. As a result, the holographic display can be designed to have full parallax with a large FOV and screen size. We think that the macro-pixel idea is a practical solution for electronic holography since it can provide a reasonable FOV and a large screen size with a relatively small amount of data.

  6. Characterization of Pixelated Cadmium-Zinc-Telluride Detectors for Astrophysical Applications

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Sharma, Dharma; Ramsey, Brian; Seller, Paul

    2003-01-01

    Comparisons of charge sharing and charge loss measurements between two pixelated Cadmium-Zinc-Telluride (CdZnTe) detectors are discussed. These properties, along with the detector geometry, help to define the limiting energy resolution and spatial resolution of the detector in question. The first detector consists of a 1-mm-thick piece of CdZnTe sputtered with a 4x4 array of pixels with a pixel pitch of 750 microns (inter-pixel gap of 100 microns). Signal readout is via discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap of 50 microns). This crystal is bonded to a custom-built readout chip (ASIC) providing all front-end electronics for each of the 256 independent pixels. These detectors act as precursors to the detector that will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs a spatial resolution of around 200 microns to take full advantage of the HERO angular resolution. We discuss the degree to which charge sharing degrades energy resolution while improving spatial resolution through position interpolation.

  7. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2003-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  8. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2004-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  9. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    1995-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  10. Proceedings of PIXEL98 -- International pixel detector workshop

    SciTech Connect

    Anderson, D.F.; Kwan, S.

    1998-08-01

    Experiments around the globe face new challenges of more precision in the face of higher interaction rates, greater track densities, and higher radiation doses, as they look for rarer and rarer processes, leading many to incorporate pixelated solid-state detectors into their plans. The highest-readout-rate devices require new technologies for implementation. This workshop reviewed recent, significant progress in meeting these technical challenges. Participants presented many new results, many of them from the weeks--even days--just before the workshop. Brand new at this workshop were results on cryogenic operation of radiation-damaged silicon detectors (dubbed the Lazarus effect). Other new work included a diamond sensor with 280-micron collection distance; new results on breakdown in p-type silicon detectors; testing of the latest versions of read-out chip and interconnection designs; and the radiation hardness of deep-submicron processes.

  11. Serial Pixel Analog-to-Digital Converter

    SciTech Connect

    Larson, E D

    2010-02-01

    This method reduces the data path from the counter to the pixel register of the analog-to-digital converter (ADC) from as many as 10 bits to a single bit. The reduction in data path width is accomplished by using a coded serial data stream similar to a pseudo-random number (PRN) generator. The resulting encoded pixel data is then decoded into a standard hexadecimal format before storage. The high-speed serial pixel ADC concept is based on the single-slope integrating pixel ADC architecture. Previous work has described a massively parallel pixel readout of a similar architecture. The serial ADC connection is similar to the state-of-the-art method, with the exception that the pixel ADC register is a shift register and the data path is a single bit. A state-of-the-art individual-pixel ADC uses a single-slope charge-integration converter architecture with integral registers and “one-hot” counters. This implies that parallel data bits are routed among the counter and the individual on-chip pixel ADC registers; the data path bit-width to the pixel is therefore equivalent to the pixel ADC bit resolution.
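
    A minimal sketch of the coded-counter idea: a single-bit pseudo-random stream (here a 10-bit maximal-length LFSR, an illustrative choice not taken from the report) plays the role of the counter; the state latched when the ramp crosses the pixel level is decoded back to an ordinary count with a precomputed lookup table. All names and parameters below are assumptions for illustration.

        # Sketch of a PRN-coded single-slope pixel ADC readout (illustrative only).
        # A 10-bit Fibonacci LFSR (taps 10 and 7) stands in for the serial "counter";
        # each pixel latches the LFSR state when its ramp comparator fires, and the
        # state is decoded back to a plain binary count offline.

        def lfsr_states(n_bits=10, taps=(10, 7), seed=1):
            """Yield successive states of a Fibonacci LFSR (the all-zero state is excluded)."""
            state, mask = seed, (1 << n_bits) - 1
            while True:
                yield state
                fb = 0
                for t in taps:
                    fb ^= (state >> (t - 1)) & 1
                state = ((state << 1) | fb) & mask

        def build_decode_table(n_states=1023):
            """Map LFSR state -> elapsed clock count (the offline decoding step)."""
            gen = lfsr_states()
            table = {next(gen): count for count in range(n_states)}
            assert len(table) == n_states, "taps must give a maximal-length sequence"
            return table

        def convert_pixel(signal_level, ramp, decode_table):
            """Single-slope conversion: latch the LFSR state when the ramp passes the
            pixel level, then decode that state to a count."""
            for count, state in zip(range(len(ramp)), lfsr_states()):
                if ramp[count] >= signal_level:
                    return decode_table[state]     # only this state is stored in the pixel
            return None

        ramp = [i / 1023.0 for i in range(1023)]   # normalized voltage ramp
        table = build_decode_table()
        print(convert_pixel(0.42, ramp, table))    # about 430 counts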

  12. Telemetry degradation due to a CW RFI induced carrier tracking error for the block IV receiving system with maximum likelihood convolution decoding

    NASA Technical Reports Server (NTRS)

    Sue, M. K.

    1981-01-01

    Models to characterize the behavior of the Deep Space Network (DSN) Receiving System in the presence of a radio frequency interference (RFI) are considered. A simple method to evaluate the telemetry degradation due to the presence of a CW RFI near the carrier frequency for the DSN Block 4 Receiving System using the maximum likelihood convolutional decoding assembly is presented. Analytical and experimental results are given.

  13. There is no MacWilliams identity for convolutional codes. [transmission gain comparison

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.

  14. Using convolutional decoding to improve time delay and phase estimation in digital communications

    DOEpatents

    Ormesher, Richard C.; Mason, John J.

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  15. The uniform continuity of characteristic function from convoluted exponential distribution with stabilizer constant

    NASA Astrophysics Data System (ADS)

    Devianto, Dodi

    2016-02-01

    We construct the convolution of random variables generated from independent and identically distributed exponential distributions with a stabilizer constant. The characteristic function of this distribution is obtained using the Laplace-Stieltjes transform. The uniform continuity of the characteristic function of this convolution is then established by analytical methods from its basic properties.

  16. Drug-Drug Interaction Extraction via Convolutional Neural Networks

    PubMed Central

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong

    2016-01-01

    Drug-drug interaction (DDI) extraction, a typical relation extraction task in natural language processing (NLP), has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method that requires almost no manually defined features, have exhibited great potential for many NLP tasks, but they had never been investigated for DDI extraction. We propose a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction: the CNN-based method achieves an F-score of 69.75%, which outperforms the existing best-performing method by 2.75%. PMID:26941831

  17. Highly parallel vector visualization using line integral convolution

    SciTech Connect

    Cabral, B.; Leedom, C.

    1995-12-01

    Line Integral Convolution (LIC) is an effective imaging operator for visualizing large vector fields. It works by blurring an input image along local vector field streamlines yielding an output image. LIC is highly parallelizable because it uses only local read-sharing of input data and no write-sharing of output data. Both coarse- and fine-grained implementations have been developed. The coarse-grained implementation uses a straightforward row-tiling of the vector field to parcel out work to multiple CPUs. The fine-grained implementation uses a series of image warps and sums to compute the LIC algorithm across the entire vector field at once. This is accomplished by novel use of high-performance graphics hardware texture mapping and accumulation buffers.
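
    A compact, purely sequential sketch of the basic LIC operator that the paper parallelizes: for each output pixel a short streamline is traced forward and backward through the vector field and the noise texture is averaged along it. The fixed streamline length, step size, and nearest-neighbour sampling are simplifying assumptions, not the implementation described above.

        import numpy as np

        def lic(noise, vx, vy, length=20, step=0.5):
            """Basic line integral convolution of a noise texture over a 2D vector field.
            noise, vx, vy: 2D arrays of identical shape; returns the LIC image."""
            h, w = noise.shape
            out = np.zeros_like(noise, dtype=float)
            for y in range(h):
                for x in range(w):
                    acc, n = 0.0, 0
                    for direction in (1.0, -1.0):        # trace the streamline both ways
                        px, py = float(x), float(y)
                        for _ in range(length):
                            i, j = int(round(py)), int(round(px))
                            if not (0 <= i < h and 0 <= j < w):
                                break
                            acc += noise[i, j]
                            n += 1
                            norm = np.hypot(vx[i, j], vy[i, j])
                            if norm < 1e-12:
                                break
                            px += direction * step * vx[i, j] / norm
                            py += direction * step * vy[i, j] / norm
                    out[y, x] = acc / max(n, 1)
            return out

        # Example: white noise convolved along a circular flow field.
        h = w = 64
        yy, xx = np.mgrid[0:h, 0:w]
        img = lic(np.random.rand(h, w), -(yy - h / 2.0), (xx - w / 2.0), length=15)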

  18. Generalized Viterbi algorithms for error detection with convolutional codes

    NASA Astrophysics Data System (ADS)

    Seshadri, N.; Sundberg, C.-E. W.

    Presented are two generalized Viterbi algorithms (GVAs) for the decoding of convolutional codes. They are a parallel algorithm that simultaneously identifies the L best estimates of the transmitted sequence, and a serial algorithm that identifies the lth best estimate using knowledge of the previously found l-1 estimates. These algorithms are applied to combined speech and channel coding systems, concatenated codes, trellis-coded modulation, partial response (continuous-phase modulation), and hybrid ARQ (automatic repeat request) schemes. As an example, for a concatenated code more than 2 dB is gained by the use of the GVA with L = 3 over the Viterbi algorithm for block error rates less than 10^-2. The channel is a Rayleigh fading channel.

  19. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
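
    The iteration between the object and projection domains can be sketched with scikit-image's Radon-transform routines standing in for the convolution (filtered back-projection) reconstruction step: sinogram samples blocked by the opaque object are replaced by re-projections of the current estimate, and the estimate is reconstructed again. The blocking mask, the fixed iteration count (in place of the paper's empirical stopping criterion), and the use of skimage are assumptions for illustration.

        import numpy as np
        from skimage.transform import radon, iradon   # assumes a recent scikit-image

        def iterative_convolution_recon(sinogram, blocked, theta, n_iter=10):
            """Reconstruct from incomplete projections.
            sinogram: measured projections (detector x angle); blocked: boolean array of
            the same shape marking rays obscured by the opaque object; theta: angles (deg)."""
            estimate = iradon(np.where(blocked, 0.0, sinogram), theta=theta,
                              filter_name='ramp', circle=True)
            for _ in range(n_iter):
                reproj = radon(estimate, theta=theta, circle=True)
                filled = np.where(blocked, reproj, sinogram)   # keep measured rays, fill gaps
                estimate = iradon(filled, theta=theta, filter_name='ramp', circle=True)
            return estimate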

  20. Plane-wave decomposition by spherical-convolution microphone array

    NASA Astrophysics Data System (ADS)

    Rafaely, Boaz; Park, Munhum

    2001-05-01

    Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.

  1. Visualization of vasculature with convolution surfaces: method, validation and evaluation.

    PubMed

    Oeltze, Steffen; Preim, Bernhard

    2005-04-01

    We present a method for visualizing vasculature based on clinical computed tomography or magnetic resonance data. The vessel skeleton as well as the diameter information per voxel serve as input. Our method adheres to these data, while producing smooth transitions at branchings and closed, rounded ends by means of convolution surfaces. We examine the filter design with respect to irritating bulges, unwanted blending and the correct visualization of the vessel diameter. The method has been applied to a large variety of anatomic trees. We discuss the validation of the method by means of a comparison to other visualization methods. Surface distance measures are carried out to perform a quantitative validation. Furthermore, we present the evaluation of the method which has been accomplished on the basis of a survey by 11 radiologists and surgeons. PMID:15822811

  2. Finding the complete path and weight enumerators of convolutional codes

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I.

    1990-01-01

    A method for obtaining the complete path enumerator T(D, L, I) of a convolutional code is described. A system of algebraic equations is solved, using a new algorithm for computing determinants, to obtain T(D, L, I) for the (7,1/2) NASA standard code. Generating functions, derived from T(D, L, I) are used to upper bound Viterbi decoder error rates. This technique is currently feasible for constraint length K less than 10 codes. A practical, fast algorithm is presented for computing the leading nonzero coefficients of the generating functions used to bound the performance of constraint length K less than 20 codes. Code profiles with about 50 nonzero coefficients are obtained with this algorithm for the experimental K = 15, rate 1/4, code in the Galileo mission and for the proposed K = 15, rate 1/6, 2-dB code.

  3. Deep convolutional neural networks for ATR from SAR imagery

    NASA Astrophysics Data System (ADS)

    Morgan, David A. E.

    2015-05-01

    Deep architectures for classification and representation learning have recently attracted significant attention within academia and industry, with many impressive results across a diverse collection of problem sets. In this work we consider the specific application of Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) data from the MSTAR public release data set. The classification performance achieved using a Deep Convolutional Neural Network (CNN) on this data set was found to be competitive with existing methods considered to be state-of-the-art. Unlike most existing algorithms, this approach can learn discriminative feature sets directly from training data instead of requiring pre-specification or pre-selection by a human designer. We show how this property can be exploited to efficiently adapt an existing classifier to recognise a previously unseen target and discuss potential practical applications.

  4. Asymptotic expansions of Mellin convolution integrals: An oscillatory case

    NASA Astrophysics Data System (ADS)

    López, José L.; Pagola, Pedro

    2010-01-01

    In a recent paper [J.L. López, Asymptotic expansions of Mellin convolution integrals, SIAM Rev. 50 (2) (2008) 275-293], we presented a new, very general and simple method for deriving asymptotic expansions of Mellin convolution integrals for small x. It contains Watson's Lemma and other classical methods, Mellin transform techniques, McClure and Wong's distributional approach, and the method of analytic continuation used in this approach as particular cases. In this paper we generalize that idea to the case of oscillatory kernels, that is, to integrals of the same form but with oscillatory kernels depending on a real parameter c, and we give a method as simple as the one given in the above-cited reference for the case c = 0. We show that McClure and Wong's distributional approach for oscillatory kernels and the summability method for oscillatory integrals are particular cases of this method. Some examples are given as illustration.

  5. A convolution model of rock bed thermal storage units

    NASA Astrophysics Data System (ADS)

    Sowell, E. F.; Curry, R. L.

    1980-01-01

    A method is presented whereby a packed-bed thermal storage unit is dynamically modeled for bi-directional flow and arbitrary input flow stream temperature variations. The method is based on the principle of calculating the output temperature as the sum of earlier input temperatures, each multiplied by a predetermined 'response factor', i.e., discrete convolution. A computer implementation of the scheme, in the form of a subroutine for a widely used solar simulation program (TRNSYS) is described and numerical results compared with other models. Also, a method for efficient computation of the required response factors is described; this solution is for a triangular input pulse, previously unreported, although the solution method is also applicable for other input functions. This solution requires a single integration of a known function which is easily carried out numerically to the required precision.
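
    The heart of the model is an ordinary discrete convolution: the outlet temperature is a weighted sum of past inlet temperatures, with the weights being the precomputed response factors. The factors and temperature record below are made up for illustration; the paper derives its factors from a triangular input pulse.

        def outlet_temperature(inlet_history, response_factors):
            """Discrete convolution: T_out(n) = sum_k r(k) * T_in(n - k).
            inlet_history[-1] is the current time step; earlier entries are older."""
            total = 0.0
            for k, r in enumerate(response_factors):
                idx = len(inlet_history) - 1 - k
                if idx < 0:
                    break
                total += r * inlet_history[idx]
            return total

        # Illustrative response factors (summing to ~1 for an energy-conserving bed).
        r = [0.05, 0.15, 0.30, 0.25, 0.15, 0.10]
        inlet = [20.0, 20.0, 45.0, 60.0, 60.0, 55.0, 40.0]   # inlet temperatures, deg C
        print(outlet_temperature(inlet, r))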

  6. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.

  7. Convolutional Neural Networks for patient-specific ECG classification.

    PubMed

    Kiranyaz, Serkan; Ince, Turker; Hamila, Ridha; Gabbouj, Moncef

    2015-08-01

    We propose a fast and accurate patient-specific electrocardiogram (ECG) classification and monitoring system using an adaptive implementation of 1D Convolutional Neural Networks (CNNs) that can fuse feature extraction and classification into a unified learner. In this way, a dedicated CNN will be trained for each patient by using relatively small common and patient-specific training data and thus it can also be used to classify long ECG records such as Holter registers in a fast and accurate manner. Alternatively, such a solution can conveniently be used for real-time ECG monitoring and early alert system on a light-weight wearable device. The experimental results demonstrate that the proposed system achieves a superior classification performance for the detection of ventricular ectopic beats (VEB) and supraventricular ectopic beats (SVEB). PMID:26736826
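
    A minimal PyTorch sketch of the kind of compact 1D CNN described above, fusing feature extraction and classification in a single trainable model. The layer sizes, the 128-sample beat length, and the five output classes are illustrative assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        class BeatCNN(nn.Module):
            """Small 1D CNN mapping a single-lead ECG beat to a class label."""
            def __init__(self, n_classes=5, beat_len=128):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
                )
                self.classifier = nn.Linear(32 * (beat_len // 4), n_classes)

            def forward(self, x):                 # x: (batch, 1, beat_len)
                return self.classifier(self.features(x).flatten(1))

        model = BeatCNN()
        logits = model(torch.randn(8, 1, 128))    # a batch of 8 beats -> (8, 5) class scores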

  8. Drug-Drug Interaction Extraction via Convolutional Neural Networks.

    PubMed

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong

    2016-01-01

    Drug-drug interaction (DDI) extraction, a typical relation extraction task in natural language processing (NLP), has always attracted great attention. Most state-of-the-art DDI extraction systems are based on support vector machines (SVM) with a large number of manually defined features. Recently, convolutional neural networks (CNN), a robust machine learning method that requires almost no manually defined features, have exhibited great potential for many NLP tasks, but they had never been investigated for DDI extraction. We propose a CNN-based method for DDI extraction. Experiments conducted on the 2013 DDIExtraction challenge corpus demonstrate that CNN is a good choice for DDI extraction: the CNN-based method achieves an F-score of 69.75%, which outperforms the existing best-performing method by 2.75%. PMID:26941831

  9. Enhanced Line Integral Convolution with Flow Feature Detection

    NASA Technical Reports Server (NTRS)

    Lane, David; Okada, Arthur

    1996-01-01

    The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.

  10. Dead pixel replacement in LWIR microgrid polarimeters.

    PubMed

    Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P

    2007-06-11

    LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified, since conventional dead pixel replacement (DPR) strategies cannot be employed when neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data. PMID:19547086
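
    A sketch of the first (modified nearest-neighbour) scheme under the assumption of a standard 2x2 microgrid layout: the nearest pixels sharing a dead pixel's polarizer orientation lie two samples away along each axis, so the replacement averages the live same-orientation neighbours at offsets of +/-2. The redundancy-based scheme is not shown.

        import numpy as np

        def replace_dead_pixels(img, dead_mask):
            """Nearest same-polarization neighbour replacement for a 2x2 microgrid array.
            img: raw focal-plane image; dead_mask: boolean array, True where a pixel is dead.
            Pixels at offsets (+/-2, 0) and (0, +/-2) share the dead pixel's polarizer angle."""
            out = img.astype(float).copy()
            h, w = img.shape
            for y, x in zip(*np.nonzero(dead_mask)):
                vals = [img[y + dy, x + dx]
                        for dy, dx in ((-2, 0), (2, 0), (0, -2), (0, 2))
                        if 0 <= y + dy < h and 0 <= x + dx < w and not dead_mask[y + dy, x + dx]]
                if vals:
                    out[y, x] = float(np.mean(vals))
            return out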

  11. Dead pixel replacement in LWIR microgrid polarimeters

    NASA Astrophysics Data System (ADS)

    Ratliff, Bradley M.; Tyo, J. Scott; Boger, James K.; Black, Wiley T.; Bowers, David L.; Fetrow, Matthew P.

    2007-06-01

    LWIR imaging arrays are often affected by nonresponsive pixels, or “dead pixels.” These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified, since conventional dead pixel replacement (DPR) strategies cannot be employed when neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.

  12. Equivalence of a Bit Pixel Image to a Quantum Pixel Image

    NASA Astrophysics Data System (ADS)

    Ortega, Laurel Carlos; Dong, Shi-Hai; Cruz-Irisson, M.

    2015-11-01

    We propose a new method to transform a pixel image into the corresponding quantum-pixel image, using one qubit per pixel to represent each pixel's classical weight in a quantum image weight matrix. All qubits are placed in linear superposition, with the coefficients varied level by level over the full extent of the gray scale with respect to the basis states of the qubit. Classically, these states are simply bytes represented in a binary matrix, with code combinations of 1s and 0s at every pixel location. This method introduces a qubit-pixel representation of images captured by classical optoelectronic methods. Supported partially by the project 20150964-SIP-IPN, Mexico
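
    One concrete reading of the qubit-per-pixel idea is to map each 8-bit gray level onto the amplitudes of a single qubit, cos(theta/2)|0> + sin(theta/2)|1>, with theta swept across the gray scale. The sketch below is that reading only, an interpretation rather than the authors' construction.

        import numpy as np

        def pixel_to_qubit(gray_level, levels=256):
            """Encode a classical gray level (0 .. levels-1) as qubit amplitudes
            cos(theta/2)|0> + sin(theta/2)|1>, with theta spanning [0, pi]."""
            theta = np.pi * gray_level / (levels - 1)
            return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

        def qubit_to_pixel(amplitudes, levels=256):
            """Recover the gray level from the qubit amplitudes (inverse mapping)."""
            theta = 2.0 * np.arctan2(amplitudes[1], amplitudes[0])
            return int(round(theta / np.pi * (levels - 1)))

        assert qubit_to_pixel(pixel_to_qubit(200)) == 200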

  13. Resampling of data between arbitrary grids using convolution interpolation.

    PubMed

    Rasche, V; Proksa, R; Sinkus, R; Börnert, P; Eggers, H

    1999-05-01

    For certain medical applications resampling of data is required. In magnetic resonance tomography (MRT) or computed tomography (CT), for example, data may be sampled on nonrectilinear grids in the Fourier domain. For image reconstruction, a convolution-interpolation algorithm, often called gridding, can be applied to resample the data onto a rectilinear grid. Resampling of data from a rectilinear onto a nonrectilinear grid is needed, e.g., if projections of a given rectilinear data set are to be obtained. In this paper we introduce the application of convolution interpolation for resampling data from one arbitrary grid onto another. The basic algorithm can be split into two steps. First, the data are resampled from the arbitrary input grid onto a rectilinear grid, and second, the rectilinear data are resampled onto the arbitrary output grid. Furthermore, we introduce a new technique for deriving the sampling density function needed for the first step of the algorithm. For fast, sampling-pattern-independent determination of the sampling density function, the Voronoi diagram of the sample distribution is calculated, and the volume of the Voronoi cell around each sample is used as a measure of the sampling density. It is shown that the introduced resampling technique allows fast resampling of data between arbitrary grids, and that the suggested approach to deriving the sampling density function is suitable even for arbitrary sampling patterns. Examples are given in which the proposed technique has been applied to the reconstruction of data acquired along spiral, radial, and arbitrary trajectories and to the fast calculation of projections of a given rectilinearly sampled image. PMID:10416800
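
    A minimal sketch of the first step of the algorithm: scattered samples are density-compensated and spread onto a rectilinear grid with a small convolution kernel. A Gaussian kernel is used here purely for brevity, and deapodization is omitted; the paper's Voronoi-cell volumes would supply the density weights.

        import numpy as np

        def grid_samples(coords, values, density, grid_shape=(128, 128), kernel_width=1.5):
            """Convolution-interpolate scattered samples onto a rectilinear grid.
            coords: (N, 2) positions in grid units; values: (N,) data samples;
            density: (N,) sampling-density compensation weights (e.g. Voronoi cell areas)."""
            grid = np.zeros(grid_shape, dtype=complex)
            half = int(np.ceil(3 * kernel_width))
            for (x, y), v, d in zip(coords, values, density):
                v = v * d                                  # density compensation
                for gy in range(int(y) - half, int(y) + half + 1):
                    for gx in range(int(x) - half, int(x) + half + 1):
                        if 0 <= gy < grid_shape[0] and 0 <= gx < grid_shape[1]:
                            w = np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * kernel_width ** 2))
                            grid[gy, gx] += w * v
            return grid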

  14. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.

    PubMed

    He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian

    2015-09-01

    Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition. PMID:26353135
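
    A numpy sketch of the pooling step itself: a convolutional feature map of arbitrary spatial size is max-pooled over 1x1, 2x2 and 4x4 grids and the results are concatenated into one fixed-length vector. The pyramid levels are a common choice; the surrounding network and the training procedure are omitted.

        import numpy as np

        def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
            """feature_map: (channels, H, W) output of the last conv layer.
            Returns a vector of length channels * sum(l*l for l in levels),
            independent of H and W."""
            c, h, w = feature_map.shape
            pooled = []
            for l in levels:
                ys = np.linspace(0, h, l + 1).astype(int)   # bin edges along each axis
                xs = np.linspace(0, w, l + 1).astype(int)
                for i in range(l):
                    for j in range(l):
                        bin_ = feature_map[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                        pooled.append(bin_.max(axis=(1, 2)))  # max pool per channel
            return np.concatenate(pooled)

        vec = spatial_pyramid_pool(np.random.rand(256, 13, 17))  # arbitrary input size
        assert vec.shape == (256 * 21,)                          # always fixed length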

  15. [Hadamard transform spectrometer mixed pixels' unmixing method].

    PubMed

    Yan, Peng; Hu, Bing-Liang; Liu, Xue-Bin; Sun, Wei; Li, Li-Bo; Feng, Yu-Tao; Liu, Yong-Zheng

    2011-10-01

    Hadamard transform imaging spectrometry is a multi-channel digital-transform spectral detection technique. Based on the working principle and instrument structure of a Hadamard transform spectrometer built around a digital micromirror device (DMD), this paper analyzes the mixed pixels produced at the imaging sensor and derives a method for unmixing the aliased pixels. Simulation results show that the method is simple and effective, improving the accuracy of the recovered mixed-pixel spectra by more than 10%. PMID:22250574

  16. Method for fabricating pixelated silicon device cells

    SciTech Connect

    Nielson, Gregory N.; Okandan, Murat; Cruz-Campa, Jose Luis; Nelson, Jeffrey S.; Anderson, Benjamin John

    2015-08-18

    A method, apparatus and system for flexible, ultra-thin, and high efficiency pixelated silicon or other semiconductor photovoltaic solar cell array fabrication is disclosed. A structure and method of creation for a pixelated silicon or other semiconductor photovoltaic solar cell array with interconnects is described using a manufacturing method that is simplified compared to previous versions of pixelated silicon photovoltaic cells that require more microfabrication steps.

  17. Commissioning of the CMS Forward Pixel Detector

    SciTech Connect

    Kumar, Ashish; /SUNY, Buffalo

    2008-12-01

    The Compact Muon Solenoid (CMS) experiment is scheduled for physics data taking in summer 2009, after the commissioning of high-energy proton-proton collisions at the Large Hadron Collider (LHC). At the core of the CMS all-silicon tracker is the silicon pixel detector, comprising three barrel layers and two pixel disks in the forward and backward regions, accounting for a total of 66 million channels. The pixel detector will provide high-resolution 3D tracking points, essential for pattern recognition and precise vertexing, while being embedded in a hostile radiation environment. The end disks of the pixel detector, known as the Forward Pixel detector, have been assembled and tested at Fermilab, USA. The detector has 18 million pixel cells with dimensions of 100 x 150 μm². The complete forward pixel detector was shipped to CERN in December 2007, where it underwent extensive system tests for commissioning prior to installation. The pixel system was put in its final place inside CMS following the installation and bake-out of the LHC beam pipe in July 2008. It has been integrated with the other sub-detectors in the readout since September 2008 and has participated in cosmic data taking. This report covers the strategy and results from the commissioning of the CMS forward pixel detector at CERN.

  18. Implementation of TDI based digital pixel ROIC with 15μm pixel pitch

    NASA Astrophysics Data System (ADS)

    Ceylan, Omer; Shafique, Atia; Burak, A.; Caliskan, Can; Abbasi, Shahbaz; Yazici, Melik; Gurbuz, Yasar

    2016-05-01

    A 15 μm pixel pitch digital pixel for LWIR time delay integration (TDI) applications is implemented, occupying one fourth of the pixel area of the previous digital TDI implementation. TDI is implemented over 8 pixels with an oversampling rate of 2. The ROIC provides a 16-bit output with 8 bits of MSB and 8 bits of LSB. The pixel can store 75 M electrons with a quantization noise of 500 electrons. Digital pixel TDI implementation is advantageous over analog counterparts in terms of power consumption, chip area, and signal-to-noise ratio. The digital pixel TDI ROIC is fabricated in a 0.18 μm CMOS process. In the digital pixel TDI implementation, the photocurrent is integrated on a capacitor in the pixel and converted to digital data in the pixel. This digital data triggers the summation counters that implement the TDI addition. After all pixels in a row have contributed, the summed data is divided by the number of TDI pixels (N) to obtain the actual output, whose signal-to-noise ratio (SNR) is improved by a factor of the square root of N over that of a single pixel.
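
    The TDI arithmetic itself is a running sum over N samples of the same scene line followed by a division by N; the sketch below illustrates, under an assumption of independent pixel noise, why the SNR improves by roughly the square root of N. Signal and noise figures are made up.

        import numpy as np

        rng = np.random.default_rng(0)
        signal, noise_sigma, n_tdi = 1000.0, 50.0, 8       # electrons per pixel, 8-stage TDI

        samples = signal + rng.normal(0.0, noise_sigma, size=(n_tdi, 100000))
        tdi_out = samples.sum(axis=0) / n_tdi              # summation counters, then divide by N

        snr_single = signal / samples[0].std()
        snr_tdi = signal / tdi_out.std()
        print(snr_tdi / snr_single)                        # close to sqrt(8), about 2.8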

  19. Pixel-based reconstruction (PBR) promising simultaneous techniques for CT reconstructions

    SciTech Connect

    Fager, R.S. . Office of the Associate Provost for Research); Peddanarappagari, K.V.; Kumar, G.N. . Dept. of Electrical Engineering)

    1993-03-01

    The authors present new algorithms belonging to a class of pixel-based reconstruction (PBR) algorithms, similar to SIRT (simultaneous iterative reconstruction technique) methods, for reconstructing objects from their fan-beam projections in x-ray transmission tomography. The general logic of these algorithms is discussed and, as a corollary, two new ideas are presented that gave promising results in the simulation studies. It was found in the simulation studies, contrary to previous results with parallel-beam projections, that these iterative algebraic algorithms do not diverge when a more logical technique for obtaining the pseudo-projections is used. The simulations were carried out under conditions where the number of object pixels exceeded the number of detector pixel readings by a factor of two, i.e., the equations were highly under-determined; nevertheless, the reconstructions were quite satisfactory. The effect of the number of projections on the reconstruction and the convergence to the exact solution is shown. For comparison, reconstructions obtained by convolution back-projection are also given.

  20. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods (Chapter 10)

    NASA Technical Reports Server (NTRS)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps for analyzing the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post

  1. Hot pixel generation in active pixel sensors: dosimetric and micro-dosimetric response

    NASA Technical Reports Server (NTRS)

    Scheick, Leif; Novak, Frank

    2003-01-01

    The dosimetric response of an active pixel sensor is analyzed. Heavy ions are seen to damage the pixel in much the same way as gamma radiation. The probability of a hot pixel is seen to exhibit behavior that is not typical of other microdose effects.

  2. Soil moisture variability within remote sensing pixels

    NASA Astrophysics Data System (ADS)

    Charpentier, Michael A.; Groffman, Peter M.

    1992-11-01

    The effects of topography and the level of soil moisture on the variability of soil moisture within remote sensing pixels were assessed during the First ISLSCP Field Experiment (FIFE) during 1987 and 1989. Soil moisture data from flat, sloped, and valley-shaped pixels were obtained over a wide range of moisture conditions. Relative elevation data were obtained for each study area to create digital elevation models with which to quantify topographic variability. Within-pixel soil moisture variability was shown to increase with increased topographic heterogeneity. The flat pixel had significantly lower standard deviations and fewer outlier points than the slope and valley pixels. Most pixel means had a positive skewness, indicating that most pixels will have areas of markedly higher than average soil moisture. Soil moisture variability (as indicated by the coefficient of variation) decreased as soil moisture levels increased. However, the absolute value of the standard deviation of soil moisture was independent of wetness. The data suggest that remote sensing will reflect soil moisture conditions less accurately on pixels with increased topographic variability and less precisely when the soil is dry. These differences in the inherent accuracy and precision of remote sensing soil moisture data should be considered when evaluating error sources in analyses of energy balance or biogeochemical processes that utilize soil moisture data produced by remote sensing.

  3. Evaluation of color encodings for high dynamic range pixels

    NASA Astrophysics Data System (ADS)

    Boitard, Ronan; Mantiuk, Rafal K.; Pouli, Tania

    2015-03-01

    Traditional Low Dynamic Range (LDR) color spaces encode a small fraction of the visible color gamut, which does not encompass the range of colors produced on upcoming High Dynamic Range (HDR) displays. Future imaging systems will require encoding a much wider color gamut and luminance range. Such a wide color gamut can be represented using floating point HDR pixel values, but those are inefficient to encode. They also lack the perceptual uniformity of the luminance and color distribution that is provided (in approximation) by most LDR color spaces. Therefore, there is a need to devise an efficient, perceptually uniform and integer-valued representation for high dynamic range pixel values. In this paper we evaluate several methods for encoding color HDR pixel values, in particular for use in image and video compression. Unlike other studies, we test both luminance and color difference encodings in rigorous 4AFC threshold experiments to determine the minimum bit-depth required. Results show that the Perceptual Quantizer (PQ) encoding provides the best perceptual uniformity in the considered luminance range; however, the gain in bit-depth is rather modest. A more significant difference can be observed between the color difference encoding schemes, of which YDuDv encoding seems to be the most efficient.
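
    The Perceptual Quantizer referred to in the results maps absolute luminance to an approximately perceptually uniform code value. The sketch below uses the commonly published SMPTE ST 2084 constants (worth verifying against the standard before reuse) and quantizes to a chosen integer bit depth.

        def pq_encode(luminance_cd_m2, bit_depth=10):
            """Encode absolute luminance (cd/m^2, clipped to 0..10000) with the PQ
            transfer function and quantize to an integer code value."""
            m1, m2 = 2610.0 / 16384, 128 * 2523.0 / 4096
            c1, c2, c3 = 3424.0 / 4096, 32 * 2413.0 / 4096, 32 * 2392.0 / 4096
            y = max(0.0, min(luminance_cd_m2, 10000.0)) / 10000.0
            v = ((c1 + c2 * y ** m1) / (1.0 + c3 * y ** m1)) ** m2   # normalized signal, 0..1
            return round(v * (2 ** bit_depth - 1))

        print(pq_encode(100.0))   # code value for a 100 cd/m^2 (roughly SDR white) pixel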

  4. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.

  5. Space.

    ERIC Educational Resources Information Center

    Web Feet K-8, 2001

    2001-01-01

    This annotated subject guide to Web sites and additional resources focuses on space and astronomy. Specifies age levels for resources that include Web sites, CD-ROMS and software, videos, books, audios, and magazines; offers professional resources; and presents a relevant class activity. (LRW)

  6. New SOFRADIR 10μm pixel pitch infrared products

    NASA Astrophysics Data System (ADS)

    Lefoul, X.; Pere-Laperne, N.; Augey, T.; Rubaldo, L.; Aufranc, Sébastien; Decaens, G.; Ricard, N.; Mazaleyrat, E.; Billon-Lanfrey, D.; Gravrand, Olivier; Bisotto, Sylvette

    2014-10-01

    Recent advances in the miniaturization of IR imaging technology have led to a growing market for mini thermal-imaging sensors. In that respect, Sofradir's development of smaller pixel pitches has made much more compact products available to users. When this competitive advantage is combined with smaller coolers, made possible by HOT technology, valuable reductions in the size, weight and power of the overall package are achieved. At the same time, we are moving towards a global offer based on digital interfaces, which simplifies the IR system design process for our customers while freeing up more space. This paper discusses recent developments in hot and small pixel pitch technologies as well as efforts made on the compact packaging solution developed by SOFRADIR in collaboration with CEA-LETI.

  7. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields

    NASA Astrophysics Data System (ADS)

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.

  8. A quantum algorithm for Viterbi decoding of classical convolutional codes

    NASA Astrophysics Data System (ADS)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, large constraint length and short decode frames . Other applications of the classical Viterbi algorithm where is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model) which is in general much less than . The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.

  9. Innervation of the renal proximal convoluted tubule of the rat

    SciTech Connect

    Barajas, L.; Powers, K. )

    1989-12-01

    Experimental data suggest the proximal tubule as a major site of neurogenic influence on tubular function. The functional and anatomical axial heterogeneity of the proximal tubule prompted this study of the distribution of innervation sites along the early, mid, and late proximal convoluted tubule (PCT) of the rat. Serial section autoradiograms, with tritiated norepinephrine serving as a marker for monoaminergic nerves, were used in this study. Freehand clay models and graphic reconstructions of proximal tubules permitted a rough estimation of the location of the innervation sites along the PCT. In the subcapsular nephrons, the early PCT (first third) was devoid of innervation sites with most of the innervation occurring in the mid (middle third) and in the late (last third) PCT. Innervation sites were found in the early PCT in nephrons located deeper in the cortex. In juxtamedullary nephrons, innervation sites could be observed on the PCT as it left the glomerulus. This gradient of PCT innervation can be explained by the different tubulovascular relationships of nephrons at different levels of the cortex. The absence of innervation sites in the early PCT of subcapsular nephrons suggests that any influence of the renal nerves on the early PCT might be due to an effect of neurotransmitter released from renal nerves reaching the early PCT via the interstitium and/or capillaries.

  10. Toward an optimal convolutional neural network for traffic sign recognition

    NASA Astrophysics Data System (ADS)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    Convolutional Neural Networks (CNN) beat human performance on the German Traffic Sign Benchmark competition. Both the winner and the runner-up teams trained CNNs to recognize 43 traffic signs. However, neither network is computationally efficient, since they have many free parameters and use computationally expensive activation functions. In this paper, we propose a new architecture that reduces the number of parameters by 27% and 22% compared with the two networks. Furthermore, our network uses Leaky Rectified Linear Units (ReLU) as the activation function, which needs only a few operations to produce its result. Specifically, compared with the hyperbolic tangent and rectified sigmoid activation functions utilized in the two networks, Leaky ReLU needs only one multiplication operation, which makes it computationally much more efficient than the other two functions. Our experiments on the German Traffic Sign Benchmark dataset show a 0.6% improvement over the best reported classification accuracy while reducing the overall number of parameters by 85% compared with the winning network in the competition.

  11. Multi-modal vertebrae recognition using Transformed Deep Convolution Network.

    PubMed

    Cai, Yunliang; Landis, Mark; Laidley, David T; Kornecki, Anat; Lum, Andrea; Li, Shuo

    2016-07-01

    Automatic vertebra recognition, including the identification of vertebra locations and naming across multiple image modalities, is in high demand in spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, the recognition is challenging due to variations in MR/CT appearance and in the shape/pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called the Transformed Deep Convolution Network (TDCN). This architecture can fuse image features from different modalities without supervision and automatically rectify the pose of the vertebra. The fusion of MR and CT image features improves the discriminativity of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images of different contrast, resolution, and protocol, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experimental results show that our method outperforms existing detection methods and provides fully automatic location, naming, and pose recognition for routine clinical practice. PMID:27104497

  12. Remote Sensing Image Fusion with Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Zhong, Jinying; Yang, Bin; Huang, Guoyu; Zhong, Fei; Chen, Zhongze

    2016-12-01

    Remote sensing image fusion (RSIF) refers to restoring a high-resolution multispectral image from its corresponding low-resolution multispectral (LMS) image aided by the panchromatic (PAN) image. Most RSIF methods assume that the missing spatial details of the LMS image can be obtained from the high-resolution PAN image. However, distortions can be produced due to the large difference between the structural component of the LMS image and that of the PAN image. In fact, the LMS image can exploit its own spatial details to improve the resolution. In this paper, a novel two-stage RSIF algorithm is proposed which makes full use of both the spatial details and the spectral information of the LMS image itself. In the first stage, convolutional neural network based super-resolution is used to increase the spatial resolution of the LMS image. In the second stage, the Gram-Schmidt transform is employed to fuse the enhanced MS and PAN images to further improve the resolution of the MS image. Because of the spatial resolution enhancement in the first stage, spectral distortions in the fused image are clearly reduced, while the spatial details are preserved in the fused images. QuickBird satellite source images are used to test the performance of the proposed method. The experimental results demonstrate that the proposed method achieves better spatial detail and spectral information simultaneously compared with other well-known methods.

  13. Forecasting natural aquifer discharge using a numerical model and convolution.

    PubMed

    Boggs, Kevin G; Johnson, Gary S; Van Kirk, Rob; Fairley, Jerry P

    2014-01-01

    If the nature of groundwater sources and sinks can be determined or predicted, the data can be used to forecast natural aquifer discharge. We present a procedure to forecast the relative contribution of individual aquifer sources and sinks to natural aquifer discharge. Using these individual aquifer recharge components, along with observed aquifer heads for each January, we generate a 1-year, monthly spring discharge forecast for the upcoming year with an existing numerical model and convolution. The results indicate that a forecast of natural aquifer discharge can be developed using only the dominant aquifer recharge sources combined with the effects of aquifer heads (initial conditions) at the time the forecast is generated. We also estimate how our forecast will perform in the future using a jackknife procedure, which indicates that the future performance of the forecast is good (Nash-Sutcliffe efficiency of 0.81). We develop a forecast and demonstrate important features of the procedure by presenting an application to the Eastern Snake Plain Aquifer in southern Idaho. PMID:23914881

  14. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.

    PubMed

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681

  15. Method for Viterbi decoding of large constraint length convolutional codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)

    1988-01-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipelined VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is an integer and K is the constraint length. The selected path at the end of each NK interval is then taken from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, and read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
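
    A minimal sketch of hard-decision Viterbi decoding with the block-wise trace-back described above: path metrics are updated symbol by symbol, survivor decisions are stored in an array, and after every block of NK time units the best path is traced back to emit a block of decoded bits. The rate-1/2, K = 3 code (generators 7 and 5 octal) and the block length are illustrative choices, not the codes discussed in these records.

        # Hard-decision Viterbi decoding with block trace-back (illustrative sketch).
        K = 3                               # constraint length
        G = (0b111, 0b101)                  # rate-1/2 generators (7, 5 octal)
        N_STATES = 1 << (K - 1)

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << (K - 1)) | state
                out.append(tuple(bin(reg & g).count("1") & 1 for g in G))
                state = reg >> 1
            return out

        def viterbi_block_traceback(symbols, block=8):
            """Decode hard symbol pairs, tracing back every `block` steps (the NK interval)."""
            INF = 10 ** 9
            metrics = [0] + [INF] * (N_STATES - 1)      # encoder starts in state 0
            survivors, decoded = [], []
            for t, sym in enumerate(symbols):
                new_metrics, choice = [INF] * N_STATES, [None] * N_STATES
                for prev in range(N_STATES):
                    if metrics[prev] == INF:
                        continue
                    for bit in (0, 1):
                        reg = (bit << (K - 1)) | prev
                        expect = tuple(bin(reg & g).count("1") & 1 for g in G)
                        nxt = reg >> 1
                        m = metrics[prev] + sum(a != b for a, b in zip(sym, expect))
                        if m < new_metrics[nxt]:
                            new_metrics[nxt], choice[nxt] = m, (prev, bit)
                metrics = new_metrics
                survivors.append(choice)
                if (t + 1) % block == 0:                # block-wise trace-back
                    state = min(range(N_STATES), key=lambda s: metrics[s])
                    bits = []
                    for step in reversed(survivors):
                        prev, bit = step[state]
                        bits.append(bit)
                        state = prev
                    decoded.extend(reversed(bits))
                    survivors = []                      # path memory for the block is released
            return decoded

        msg = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0]
        assert viterbi_block_traceback(encode(msg)) == msg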

  16. Deep convolutional neural networks for classifying GPR B-scans

    NASA Astrophysics Data System (ADS)

    Besaw, Lance E.; Stimac, Philip J.

    2015-05-01

    Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.
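
    As a rough illustration of the kind of network involved (not the authors' architecture, whose layer counts and input sizes are not given above), the following PyTorch sketch builds a small 2-D CNN for single-channel B-scan patches and trains one step with the two over-training heuristics mentioned in the abstract, dropout and weight decay (L2 regularization); the 64x64 patch size and the two-class output are assumptions.

      import torch
      import torch.nn as nn

      class BScanCNN(nn.Module):
          def __init__(self, n_classes=2):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(), nn.Dropout(0.5),               # "dropout" heuristic
                  nn.Linear(32 * 13 * 13, 64), nn.ReLU(),
                  nn.Linear(64, n_classes),
              )

          def forward(self, x):                                # x: (batch, 1, 64, 64) B-scan patches
              return self.classifier(self.features(x))

      model = BScanCNN()
      # weight_decay is the network-weight (L2) regularization mentioned above
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
      x = torch.randn(8, 1, 64, 64)                            # placeholder B-scan patches
      y = torch.randint(0, 2, (8,))                            # placeholder threat / no-threat labels
      optimizer.zero_grad()
      loss = nn.CrossEntropyLoss()(model(x), y)
      loss.backward()
      optimizer.step()
      print(float(loss))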

  17. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields

    PubMed Central

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681

  18. A deep convolutional neural network for recognizing foods

    NASA Astrophysics Data System (ADS)

    Jahani Heravi, Elnaz; Habibi Aghdam, Hamed; Puig, Domenec

    2015-12-01

    Controlling food intake is an efficient way for individuals to tackle the obesity problem affecting countries worldwide. This is achievable by developing a smartphone application that is able to recognize foods and compute their calories. State-of-the-art methods are chiefly based on hand-crafted feature extraction methods such as HOG and Gabor. Recent advances in large-scale object recognition datasets such as ImageNet have revealed that deep Convolutional Neural Networks (CNN) possess more representation power than hand-crafted features. The main challenge with CNNs is to find the appropriate architecture for each problem. In this paper, we propose a deep CNN consisting of 769,988 parameters. Our experiments show that the proposed CNN outperforms the state-of-the-art methods and improves on the best result of traditional methods by 17%. Moreover, using an ensemble of two CNNs trained in two separate runs, we are able to improve the classification performance by 21.5%.
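
    The ensemble mentioned in the final sentence can be as simple as averaging the class probabilities of two independently trained networks. The sketch below shows only that averaging step; the tiny stand-in architectures, the 23-class output, and the random input batch are placeholders, not the paper's models.

      import torch
      import torch.nn as nn

      def tiny_cnn(n_classes=23):                  # 23 food classes is an arbitrary assumption
          return nn.Sequential(
              nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
              nn.Flatten(), nn.Linear(8, n_classes),
          )

      model_a, model_b = tiny_cnn(), tiny_cnn()    # stand-ins for two separately trained CNNs
      model_a.eval(); model_b.eval()

      images = torch.randn(4, 3, 128, 128)         # a placeholder batch of food images
      with torch.no_grad():
          probs = (model_a(images).softmax(dim=1) + model_b(images).softmax(dim=1)) / 2
      print(probs.argmax(dim=1))                   # ensemble class predictions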

  19. Adapting line integral convolution for fabricating artistic virtual environment

    NASA Astrophysics Data System (ADS)

    Lee, Jiunn-Shyan; Wang, Chung-Ming

    2003-04-01

    Vector fields occur not only in scientific applications but also in treasured art such as sculptures and paintings, where artists depict the natural environment by stressing directional features in addition to color and shape. Line integral convolution (LIC), developed for imaging vector fields in scientific visualization, has the potential to produce such directional imagery. In this paper we present several techniques that exploit LIC to generate impressionistic images forming an artistic virtual environment. We take advantage of the directional information given by a photograph and incorporate several extensions, including a non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate artists' painting conventions, while the statistical technique controls the integral length according to local image variance in order to preserve detail. Furthermore, we propose a method for generating a series of mip-maps that reveal consistent strokes under multi-resolution viewing and achieve frame coherence in an interactive walkthrough system. The experimental results demonstrate both convincing emulation and efficient computation; consequently, the proposed technique supports a wide range of non-photorealistic rendering (NPR) applications, such as interactive virtual environments with artistic perception.
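
    For readers unfamiliar with LIC, the following NumPy sketch shows the core operation the paper builds on: a white-noise texture is convolved along streamlines of a vector field, so that the output image exhibits strokes aligned with the field. The vortex field, streamline length, box kernel, and toroidal boundary handling are all simplifications for illustration.

      import numpy as np

      def lic(vx, vy, noise, length=15, step=0.5):
          """Convolve the noise texture along streamlines of (vx, vy) with a box kernel."""
          h, w = noise.shape
          out = np.zeros_like(noise)
          for i in range(h):
              for j in range(w):
                  total, count = 0.0, 0
                  for sign in (+1.0, -1.0):                 # integrate forward and backward
                      x, y = float(j), float(i)
                      for _ in range(length):
                          yi, xi = int(round(y)) % h, int(round(x)) % w   # wrap at edges
                          total += noise[yi, xi]
                          count += 1
                          u, v = vx[yi, xi], vy[yi, xi]
                          norm = np.hypot(u, v) + 1e-8
                          x += sign * step * u / norm
                          y += sign * step * v / norm
                  out[i, j] = total / count
          return out

      h, w = 64, 64
      yy, xx = np.mgrid[0:h, 0:w]
      vx, vy = -(yy - h / 2.0), (xx - w / 2.0)              # a circular (vortex) field
      rng = np.random.default_rng(0)
      image = lic(vx, vy, rng.random((h, w)))
      print(image.shape, float(image.min()), float(image.max()))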

  20. Cell osmotic water permeability of isolated rabbit proximal convoluted tubules.

    PubMed

    Carpi-Medina, P; González, E; Whittembury, G

    1983-05-01

    Cell osmotic water permeability, Pcos, of the peritubular aspect of the proximal convoluted tubule (PCT) was measured from the time course of cell volume changes subsequent to the sudden imposition of an osmotic gradient, ΔCio, across the cell membrane of PCT that had been dissected and mounted in a chamber. The possibilities of artifact were minimized. The bath was vigorously stirred, the solutions could be 95% changed within 0.1 s, and small osmotic gradients (10-20 mosM) were used. Thus, the osmotically induced water flow was a linear function of ΔCio and the effect of the 70-µm-thick unstirred layers was negligible. In addition, data were extrapolated to ΔCio = 0. Pcos for PCT was 41.6 (±3.5) × 10⁻⁴ cm³ s⁻¹ osM⁻¹ per cm² of peritubular basal area. The standing gradient osmotic theory for transcellular osmosis is incompatible with this value. Published values for Pcos of PST are 25.1 × 10⁻⁴, and for the transepithelial permeability Peos the values are 64 × 10⁻⁴ for PCT and 94 × 10⁻⁴ for PST, in the same units. These results indicate that there is room for paracellular water flow in both nephron segments and that the magnitude of the transcellular and paracellular water flows may vary from one segment of the proximal tubule to another. PMID:6846543

  1. Turbo-decoding of a convolutionally encoded OCDMA system

    NASA Astrophysics Data System (ADS)

    Efinger, Daniel; Fritsch, Robert

    2005-02-01

    We present a novel multiple access scheme for Passive Optical Networks (PON) based on optical Code Division Multiple Access (OCDMA). Different from existing proposals for implementing OCDMA, we replace the predominant orthogonal or weakly correlated signature codes (e.g., Walsh-Hadamard codes (WHC)) with convolutional codes, so that CDMA user separation and forward error correction (FEC) are combined. The transmission of the coded bits over the multiple access fiber is carried out using optical BPSK, which requires electrical field strength detection rather than direct detection (DD) at the receiver end. Since orthogonality is lost, we have to employ a multiuser receiver to overcome the inherently strong correlation. The computational complexity of multiuser detection is the major challenge, and we show how complexity can be reduced by applying the turbo principle known from soft decoding of concatenated codes. The convergence behavior of the iterative multiuser receiver is investigated by means of extrinsic information transfer (EXIT) charts. Finally, we present simulation results of bit error ratio (BER) versus signal-to-noise ratio (SNR) over a standard single-mode fiber in order to demonstrate the superior performance of the proposed scheme compared to schemes using orthogonal spreading techniques.

  2. Sub-pixel mapping of water boundaries using pixel swapping algorithm (case study: Tagliamento River, Italy)

    NASA Astrophysics Data System (ADS)

    Niroumand-Jadidi, Milad; Vitti, Alfonso

    2015-10-01

    Taking advantage of remotely sensed data for mapping and monitoring water boundaries is of particular importance in many management and conservation activities. Imagery is classified using automatic techniques to produce maps that enter the water-body analysis chain at several different points. Very commonly, medium or coarse spatial resolution imagery is used in studies of large water bodies. Data of this kind are affected by the presence of mixed pixels, which causes serious problems, particularly when dealing with boundary pixels: a considerable amount of uncertainty inevitably arises when conventional hard classifiers (e.g., maximum likelihood) are applied to mixed pixels. In this study, a Linear Spectral Mixture Model (LSMM) is used to estimate the proportion of water in boundary pixels. First, by applying unsupervised clustering, the water body is identified approximately and a buffer area is defined to ensure that all boundary pixels are included. The LSMM is then applied to this buffer region to estimate fractional maps. However, the output of the LSMM does not by itself provide a sub-pixel map of water abundance. To tackle this problem, the Pixel Swapping (PS) algorithm is used to allocate sub-pixels within mixed pixels so as to maximize the spatial proximity of sub-pixels and pixels in the neighborhood. The water area of two segments of the Tagliamento River (Italy) is mapped at sub-pixel resolution (10 m) using a 30 m Landsat image. To evaluate the proficiency of the proposed approach for sub-pixel boundary mapping, the image is also classified using a conventional hard classifier, and a high-resolution image of the same area is classified and used as a reference for accuracy assessment. According to the results, the sub-pixel map shows on average about 8 percent higher overall accuracy than the hard classification and fits the boundaries of the reference map very well.
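
    A minimal version of the pixel-swapping step can be sketched as follows: water sub-pixels are first allocated randomly inside each coarse pixel according to its LSMM fraction, and labels are then swapped within each coarse pixel so that water moves toward water-rich neighbourhoods. The distance-decay weights, neighbourhood radius, scale factor, and iteration count below are illustrative assumptions, not the parameters used in the study.

      import numpy as np

      def attractiveness(labels, radius=2):
          """Distance-weighted count of water neighbours for every sub-pixel (wrap-around edges)."""
          h, w = labels.shape
          att = np.zeros((h, w))
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  if dy == 0 and dx == 0:
                      continue
                  weight = 1.0 / np.hypot(dy, dx)
                  att += weight * np.roll(np.roll(labels, dy, axis=0), dx, axis=1)
          return att

      def pixel_swapping(fractions, scale=3, iterations=20, seed=0):
          rng = np.random.default_rng(seed)
          H, W = fractions.shape
          labels = np.zeros((H * scale, W * scale))
          # Initial random allocation honouring each coarse pixel's water fraction.
          for I in range(H):
              for J in range(W):
                  n_water = int(round(fractions[I, J] * scale * scale))
                  block = np.zeros(scale * scale)
                  block[rng.permutation(scale * scale)[:n_water]] = 1.0
                  labels[I*scale:(I+1)*scale, J*scale:(J+1)*scale] = block.reshape(scale, scale)
          for _ in range(iterations):
              att = attractiveness(labels)
              for I in range(H):
                  for J in range(W):
                      rows = slice(I * scale, (I + 1) * scale)
                      cols = slice(J * scale, (J + 1) * scale)
                      blk = labels[rows, cols].ravel()
                      a = att[rows, cols].ravel()
                      water, land = np.flatnonzero(blk == 1), np.flatnonzero(blk == 0)
                      if water.size == 0 or land.size == 0:
                          continue
                      worst = water[np.argmin(a[water])]      # least attractive water sub-pixel
                      best = land[np.argmax(a[land])]         # most attractive land sub-pixel
                      if a[best] > a[worst]:                   # swap keeps the fraction unchanged
                          blk[worst], blk[best] = 0.0, 1.0
                          labels[rows, cols] = blk.reshape(scale, scale)
          return labels

      fractions = np.array([[0.0, 0.2, 0.7],
                            [0.1, 0.5, 1.0],
                            [0.3, 0.8, 1.0]])                 # toy LSMM water fractions
      print(pixel_swapping(fractions, scale=3).astype(int))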

  3. Pixel multichip module development at Fermilab

    SciTech Connect

    Turqueti, M A; Cardoso, G; Andresen, J; Appel, J A; Christian, D C; Kwan, S W; Prosser, A; Uplegger, L

    2005-10-01

    At Fermilab, there is an ongoing pixel detector R&D effort for High Energy Physics with the objective of developing high performance vertex detectors suitable for the next generation of HEP experiments. The pixel module presented here is a direct result of work undertaken for the canceled BTeV experiment. It is a very mature piece of hardware, combining high performance, low mass, and radiation hardness, as driven by the requirements of the BTeV experiment. The detector presented in this paper consists of three basic devices: the readout integrated circuit (IC) FPIX2A [2][5], the pixel sensor (TESLA p-spray) [6], and the high density interconnect (HDI) flex circuit [1][3], which is capable of supporting eight readout ICs. The characterization of the pixel multichip module prototype as well as the baseline design of the eight-chip pixel module and its capabilities are presented. These prototypes were characterized for threshold and noise dispersion. The bump-bonds of the pixel module were examined using an X-ray inspection system. Furthermore, the connectivity of the bump-bonds was tested using a radioactive source (⁹⁰Sr), while the absolute calibration of the modules was achieved using an X-ray source. This paper provides a view of the integration of the three components that together comprise the pixel multichip module.

  4. It's not the pixel count, you fool

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2012-01-01

    The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have?" since, as the marketing guy sees it, we need as many megapixels as possible because the other guys are killing us with their "umpteen" megapixel pocket-sized digital cameras. And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years has been the automatic motion control that stabilizes the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase the after-sales profits, and don't be surprised if it has been done long ago in some basement lab of a photographic company and, of course, before its time.

  5. Applying Convolution-Based Processing Methods To A Dual-Channel, Large Array Artificial Olfactory Mucosa

    NASA Astrophysics Data System (ADS)

    Taylor, J. E.; Che Harun, F. K.; Covington, J. A.; Gardner, J. W.

    2009-05-01

    Our understanding of the human olfactory system, particularly with respect to the phenomenon of nasal chromatography, has led us to develop a new generation of odour-sensitive instruments (or electronic noses). This instrument requires new approaches to data processing so that its information-rich signals can be fully exploited; here, we apply a novel time-series-based technique for processing such data. The dual-channel, large-array artificial olfactory mucosa consists of 3 arrays of 300 sensors each. The sensors are divided into 24 groups, with each group made from a particular type of polymer. The first array is connected to the other two arrays by a pair of retentive columns. One channel is coated with Carbowax 20M, and the other with OV-1. This configuration partly mimics the nasal chromatography effect and partly augments it by utilizing not only a polar (mucus-layer-like) coating but also a non-polar (artificial) coating. Such a device presents several challenges for multivariate data processing: a large, redundant dataset, spatio-temporal output, and a small sample space. By applying a novel convolution approach to this problem, we demonstrate that these challenges can be overcome. The artificial mucosa signals have been classified using a probabilistic neural network, giving an accuracy of 85%. Even better results should be possible through the selection of sensors with lower correlation.
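
    The record does not spell out the convolution approach in detail, so the sketch below is only one plausible reading: each sensor's time series is cross-correlated (convolved) with a reference response and reduced to a peak value and a peak lag, compressing the large, redundant spatio-temporal dataset into a small feature table ready for a classifier. The synthetic signals, the template, and the array sizes are placeholders.

      import numpy as np

      def convolution_features(responses, template):
          """responses: (n_sensors, n_samples); returns (n_sensors, 2) of [peak value, peak lag]."""
          feats = []
          for r in responses:
              c = np.convolve(r, template[::-1], mode='full')   # cross-correlation via convolution
              feats.append([c.max(), int(c.argmax())])
          return np.array(feats)

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 10.0, 200)
      template = np.exp(-(t - 2.0) ** 2)                        # an assumed reference pulse shape
      responses = np.stack([np.exp(-(t - d) ** 2) + 0.05 * rng.normal(size=t.size)
                            for d in rng.uniform(1.0, 6.0, size=300)])   # 300 synthetic sensor traces
      features = convolution_features(responses, template)
      print(features.shape)                                     # (300, 2): two features per sensor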

  6. The Luminous Convolution Model-The light side of dark matter

    NASA Astrophysics Data System (ADS)

    Cisneros, Sophia; Oblath, Noah; Formaggio, Joe; Goedecke, George; Chester, David; Ott, Richard; Ashley, Aaron; Rodriguez, Adrianna

    2014-03-01

    We present a heuristic model for predicting the rotation curves of spiral galaxies. The Luminous Convolution Model (LCM) utilizes Lorentz-type transformations of very small changes in the photons' frequencies from curved space-times to construct a dynamic mass model of galaxies. These frequency changes are derived using the exact solution to the exterior Kerr wave equation, as opposed to a linearized treatment. The LCM Lorentz-type transformations map between the emitter and receiver rotating galactic frames, and then to the associated flat frames in each galaxy where the photons are emitted and received. This treatment necessarily rests upon estimates of the luminous matter in both the emitter and the receiver galaxies. The LCM is tested on a sample of 22 randomly chosen galaxies, represented in 33 different data sets. LCM fits are compared to the Navarro, Frenk & White (NFW) dark matter model and to the Modified Newtonian Dynamics (MOND) model when possible. The high degree of sensitivity of the LCM to the assumed luminous mass-to-light ratio (M/L) of a given galaxy is demonstrated, and we show that the LCM successfully predicts the observed rotation curves across a wide range of spiral galaxies. This work was carried out with the generous support of the MIT Dr. Martin Luther King Jr. Fellowship program.

  7. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer.

    PubMed

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-01-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method, implementing impulse sources and a convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; this dispersion was positively related to the real stretch and was little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. A numerical simulation of CPML absorption with high-frequency pulses then illustrated the dispersion behavior qualitatively through wave-field snapshots, and a numerical simulation using low-frequency pulses suggested an optimal parameter strategy for the CPML based on the established criteria. Given its physical nature of simply warping space-time, the CPML method was predicted to be a promising approach to achieve ideal absorption, although it remained difficult to remove the dispersion entirely. PMID:27585538
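
    The CPML update that the dispersion discussion refers to can be written down compactly: each field component near the boundary carries a memory variable that is updated by recursive convolution with coefficients derived from the stretch parameters (σ, κ, α). The 1-D FDTD sketch below is not the authors' crosswell EM code; the grid, the Gaussian source, and the grading profile are illustrative, but it shows where the real stretch κ enters the update.

      import numpy as np

      c0, eps0, mu0 = 299792458.0, 8.854187817e-12, 4e-7 * np.pi
      nx, npml, nt = 400, 20, 800
      dx = 1e-3
      dt = 0.99 * dx / c0

      # Graded CPML profiles on both grid ends (polynomial grading of order m).
      m = 3
      sigma_max = 0.8 * (m + 1) / (np.sqrt(mu0 / eps0) * dx)
      kappa_max, alpha_max = 5.0, 0.05
      sigma, kappa, alpha = np.zeros(nx), np.ones(nx), np.zeros(nx)
      for i in range(npml):
          g = ((npml - i) / npml) ** m
          for j in (i, nx - 1 - i):
              sigma[j] = sigma_max * g
              kappa[j] = 1 + (kappa_max - 1) * g               # the real stretch kappa
              alpha[j] = alpha_max * (1 - g)
      b = np.exp(-(sigma / kappa + alpha) * dt / eps0)
      a = sigma * (b - 1) / np.maximum(sigma * kappa + kappa ** 2 * alpha, 1e-30)

      Ez, Hy = np.zeros(nx), np.zeros(nx - 1)
      psi_e, psi_h = np.zeros(nx), np.zeros(nx - 1)
      for n in range(nt):
          dE = (Ez[1:] - Ez[:-1]) / dx
          psi_h = b[:-1] * psi_h + a[:-1] * dE                 # CPML memory variable for H
          Hy += dt / mu0 * (dE / kappa[:-1] + psi_h)
          dH = np.zeros(nx)
          dH[1:-1] = (Hy[1:] - Hy[:-1]) / dx
          psi_e = b * psi_e + a * dH                           # CPML memory variable for E
          Ez += dt / eps0 * (dH / kappa + psi_e)
          Ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)       # soft Gaussian pulse source
      print(float(np.abs(Ez).max()))                           # should be small once the pulse is absorbed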

  8. Toward content-based image retrieval with deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Sklan, Judah E. S.; Plassard, Andrew J.; Fabbri, Daniel; Landman, Bennett A.

    2015-03-01

    Content-based image retrieval (CBIR) offers the potential to identify similar case histories, understand rare disorders, and, eventually, improve patient care. Recent advances in database capacity, algorithm efficiency, and deep Convolutional Neural Networks (dCNN), a machine learning technique, have enabled great CBIR success for general photographic images. Here, we investigate applying the leading ImageNet CBIR technique to clinically acquired medical images captured by the Vanderbilt Medical Center. Briefly, we (1) constructed a dCNN with four hidden layers, reducing the dimensionality of an input scaled to 128x128 to an encoded output layer of 4x384, (2) trained the network using back-propagation on 1 million random magnetic resonance (MR) and computed tomography (CT) images, (3) labeled an independent set of 2100 images, and (4) evaluated classifiers on the projection of the labeled images into the manifold space. Quantitative results were disappointing (averaging a true positive rate of only 20%); however, the data suggest that improvements would be possible with more evenly distributed sampling across labels and potential re-grouping of label structures. This preliminary effort at automated classification of medical images with ImageNet is promising, but shows that more work is needed beyond direct adaptation of existing techniques.
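
    The retrieval step itself, which the abstract takes for granted, reduces to a nearest-neighbour search in the encoded feature space. The sketch below uses random 4x384 = 1536-dimensional vectors as stand-ins for the dCNN encodings and ranks a database by cosine similarity; it illustrates the lookup only, not the authors' pipeline.

      import numpy as np

      def retrieve(query, database, k=5):
          """Indices and scores of the k database vectors most cosine-similar to the query."""
          q = query / np.linalg.norm(query)
          d = database / np.linalg.norm(database, axis=1, keepdims=True)
          scores = d @ q
          order = np.argsort(-scores)[:k]
          return order, scores[order]

      rng = np.random.default_rng(0)
      encodings = rng.normal(size=(2100, 4 * 384))             # stand-ins for encoded images
      query = encodings[37] + 0.1 * rng.normal(size=4 * 384)   # a slightly perturbed copy of image 37
      indices, scores = retrieve(query, encodings)
      print(indices)                                           # image 37 should rank first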

  9. Crosswell electromagnetic modeling from impulsive source: Optimization strategy for dispersion suppression in convolutional perfectly matched layer

    PubMed Central

    Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng

    2016-01-01

    This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method, implementing impulse sources and a convolutional perfectly matched layer (CPML). In the process of strengthening the CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; this dispersion was positively related to the real stretch and was little affected by the grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. A numerical simulation of CPML absorption with high-frequency pulses then illustrated the dispersion behavior qualitatively through wave-field snapshots, and a numerical simulation using low-frequency pulses suggested an optimal parameter strategy for the CPML based on the established criteria. Given its physical nature of simply warping space-time, the CPML method was predicted to be a promising approach to achieve ideal absorption, although it remained difficult to remove the dispersion entirely. PMID:27585538

  10. Single-Image Super Resolution for Multispectral Remote Sensing Data Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Liebel, L.; Körner, M.

    2016-06-01

    In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations such as segmentation or feature extraction can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques such as convolutional neural networks (CNN), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable with conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to those of competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
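
    Architecturally, the single-image super-resolution networks referred to above are often only a few convolutional layers applied to a bicubically up-sampled input (the SRCNN family). The PyTorch sketch below follows that pattern for a 13-band input matching the SENTINEL-2 band count; the filter sizes and channel counts are assumptions, not the trained network from the paper.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class SRCNN(nn.Module):
          def __init__(self, bands=13):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(bands, 64, 9, padding=4), nn.ReLU(),
                  nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
                  nn.Conv2d(32, bands, 5, padding=2),
              )

          def forward(self, lowres, scale=2):
              # Up-sample first, then let the CNN restore high-frequency detail.
              x = F.interpolate(lowres, scale_factor=scale, mode='bicubic', align_corners=False)
              return self.net(x)

      model = SRCNN()
      patch = torch.randn(1, 13, 32, 32)        # a placeholder low-resolution multispectral patch
      sr = model(patch)
      print(sr.shape)                           # torch.Size([1, 13, 64, 64])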