Sample records for response gaussian convolution

  1. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as sums of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be concentric and co-elliptical.
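
    The analytic identity this exploits is that the convolution of two Gaussians is a Gaussian whose mean and covariance are the sums of the inputs, so convolving two mixtures reduces to combining every pair of components. A minimal NumPy sketch of that identity (our own illustration, not NGMIX's API):

        import numpy as np

        def convolve_gmms(gmm_a, gmm_b):
            # Analytic convolution of two 2D Gaussian mixtures, each given as
            # a list of (weight, mean(2,), cov(2,2)) tuples: weights multiply,
            # means and covariances add, component by component.
            return [(wa * wb, ma + mb, ca + cb)
                    for wa, ma, ca in gmm_a
                    for wb, mb, cb in gmm_b]

        # Toy example: a two-Gaussian "galaxy" convolved with a one-Gaussian "PSF".
        galaxy = [(0.6, np.zeros(2), np.diag([1.0, 0.5])),
                  (0.4, np.zeros(2), np.diag([4.0, 2.0]))]
        psf = [(1.0, np.zeros(2), np.diag([0.8, 0.8]))]
        print(convolve_gmms(galaxy, psf))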

  2. Alternative methods to smooth the Earth's gravity field

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1981-01-01

    Convolutions on the sphere with corresponding convolution theorems are developed for one and two dimensional functions. Some of these results are used in a study of isotropic smoothing operators or filters. Well-known filters in Fourier spectral analysis, such as the rectangular, Gaussian, and Hanning filters, are adapted for data on a sphere. The low-pass filter most often used on gravity data is the rectangular (or Pellinen) filter. However, its spectrum has relatively large sidelobes; therefore, this filter passes a considerable part of the upper end of the gravity spectrum. The spherical adaptations of the Gaussian and Hanning filters are more efficient in suppressing the high-frequency components of the gravity field, since their frequency response functions are strongly tapered at the high frequencies with no, or small, sidelobes. Formulas are given for practical implementation of these new filters.

  3. The Gaussian-Lorentzian Sum, Product, and Convolution (Voigt) functions in the context of peak fitting X-ray photoelectron spectroscopy (XPS) narrow scans

    NASA Astrophysics Data System (ADS)

    Jain, Varun; Biesinger, Mark C.; Linford, Matthew R.

    2018-07-01

    X-ray photoelectron spectroscopy (XPS) is arguably the most important vacuum technique for surface chemical analysis, and peak fitting is an indispensable part of XPS data analysis. Functions that have been widely explored and used in XPS peak fitting include the Gaussian, Lorentzian, Gaussian-Lorentzian sum (GLS), Gaussian-Lorentzian product (GLP), and Voigt functions, where the Voigt function is a convolution of a Gaussian and a Lorentzian function. In this article we discuss these functions from a graphical perspective. Arguments based on convolution and the Central Limit Theorem are made to justify the use of functions that are intermediate between pure Gaussians and pure Lorentzians in XPS peak fitting. Mathematical forms for the GLS and GLP functions are presented with a mixing parameter m. Plots are shown for GLS and GLP functions with mixing parameters ranging from 0 to 1. There are fundamental differences between the GLS and GLP functions. The GLS function better follows the 'wings' of the Lorentzian, while these 'wings' are suppressed in the GLP. That is, these two functions are not interchangeable. The GLS and GLP functions are compared to the Voigt function, where the GLS is shown to be a decent approximation of it. Practically, both the GLS and the GLP functions can be useful for XPS peak fitting. Examples of the uses of these functions are provided herein.
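
    For readers who want the shapes at hand, one common parameterization of the two mixing functions (peak position e, width f, mixing parameter m; the article's exact normalization may differ, so treat this as an assumed form) is:

        import numpy as np

        def gls(x, e, f, m):
            # Gaussian-Lorentzian sum: a weighted sum of a Gaussian and a
            # Lorentzian with common position e and width f; m=0 gives a
            # pure Gaussian, m=1 a pure Lorentzian.
            u = (x - e) ** 2 / f ** 2
            return (1 - m) * np.exp(-4 * np.log(2) * u) + m / (1 + 4 * u)

        def glp(x, e, f, m):
            # Gaussian-Lorentzian product: the same two shapes multiplied,
            # which suppresses the Lorentzian wings.
            u = (x - e) ** 2 / f ** 2
            return np.exp(-4 * np.log(2) * (1 - m) * u) / (1 + 4 * m * u)

        x = np.linspace(-5.0, 5.0, 1001)
        for m in (0.0, 0.3, 0.7, 1.0):
            print(m, gls(x, 0.0, 2.0, m).sum(), glp(x, 0.0, 2.0, m).sum())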

  4. A real-time multi-scale 2D Gaussian filter based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin

    2014-11-01

    Multi-scale 2-D Gaussian filters have been widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA. First, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Second, to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Third, a dedicated first-in-first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid error propagation induced by glitches on the clock. Finally, a shared-memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute a multi-scale 2-D Gaussian filtering result within one pixel clock period, making it suitable for real-time image processing. Moreover, the main principle can be generalized to other convolution-based operators, such as the Gabor filter, the Sobel operator, and so on.
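
    The separation of the 2-D convolution into two 1-D passes (the second step above) is easy to demonstrate off-chip; a short Python/SciPy sketch of the same arithmetic, not the FPGA design itself:

        import numpy as np
        from scipy.ndimage import convolve1d

        def gaussian_kernel_1d(sigma):
            radius = int(3 * sigma + 0.5)
            x = np.arange(-radius, radius + 1)
            k = np.exp(-x ** 2 / (2 * sigma ** 2))
            return k / k.sum()

        def separable_gaussian(image, sigma):
            # A 2-D Gaussian convolution as two 1-D passes: an NxN kernel then
            # costs 2N multiply-accumulates per pixel instead of N*N, which is
            # what makes the hardware design cheap.
            k = gaussian_kernel_1d(sigma)
            tmp = convolve1d(image.astype(float), k, axis=0, mode='reflect')
            return convolve1d(tmp, k, axis=1, mode='reflect')

        out = separable_gaussian(np.random.rand(64, 64), sigma=1.5)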

  5. Technical Note: Impact of the geometry dependence of the ion chamber detector response function on a convolution-based method to address the volume averaging effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barraclough, Brendan; Lebron, Sharon; Li, Jonathan G.

    2016-05-15

    Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit “real” ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models was evaluated by assessing the 20%–80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Conclusions: Although all three DRFs were found adequate to represent the response of the studied ionization chambers, the Gaussian function was favored due to its superior overall performance. The geometry dependence of the DRFs can be significant for clinical applications involving small fields such as stereotactic radiotherapy.

  6. Technical Note: Impact of the geometry dependence of the ion chamber detector response function on a convolution-based method to address the volume averaging effect.

    PubMed

    Barraclough, Brendan; Li, Jonathan G; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2016-05-01

    To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit "real" ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models was evaluated by assessing the 20%-80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. The convolution-based approach was found to be effective for all three ionization chambers with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Although all three DRFs were found adequate to represent the response of the studied ionization chambers, the Gaussian function was favored due to its superior overall performance. The geometry dependence of the DRFs can be significant for clinical applications involving small fields such as stereotactic radiotherapy.
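
    A minimal sketch of the forward half of this method, blurring a calculated profile with a Gaussian DRF and scoring the 20%-80% penumbra width, is given below; the edge shape, grid, and sigma are illustrative assumptions, and the iterative beam-model optimization is omitted.

        import numpy as np

        def convolve_with_drf(x, profile, sigma):
            # Simulate the volume averaging effect: blur the calculated
            # profile with a Gaussian detector response function (DRF).
            dx = x[1] - x[0]
            r = np.arange(-4 * sigma, 4 * sigma + dx, dx)
            k = np.exp(-r ** 2 / (2 * sigma ** 2))
            return np.convolve(profile, k / k.sum(), mode='same')

        def penumbra_width(x, profile):
            # 20%-80% width of a monotonically rising field edge, found by
            # linear interpolation on the normalized profile.
            p = profile / profile.max()
            return np.interp(0.8, p, x) - np.interp(0.2, p, x)

        x = np.linspace(-20.0, 20.0, 4001)            # mm
        edge = 0.5 * (1 + np.tanh(x / 1.5))           # idealized beam edge
        blurred = convolve_with_drf(x, edge, sigma=2.0)
        sel = (x > -10) & (x < 10)                    # avoid boundary effects
        print(penumbra_width(x[sel], edge[sel]), penumbra_width(x[sel], blurred[sel]))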

  7. Identification and Classification of Orthogonal Frequency Division Multiple Access (OFDMA) Signals Used in Next Generation Wireless Systems

    DTIC Science & Technology

    2012-03-01

    advanced antenna systems AMC adaptive modulation and coding AWGN additive white Gaussian noise BPSK binary phase shift keying BS base station BTC ... QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), zero-terminating ...

  8. The Total Gaussian Class of Quasiprobabilities and its Relation to Squeezed-State Excitations

    NASA Technical Reports Server (NTRS)

    Wuensche, Alfred

    1996-01-01

    The class of quasiprobabilities obtainable from the Wigner quasiprobability by convolutions with the general class of Gaussian functions is investigated. It can be described by a three-dimensional, in general complex, vector parameter with the property of additivity when composing convolutions. The diagonal representation of this class of quasiprobabilities is connected with a generalization of the displaced Fock states in the direction of squeezing. The subclass with a real vector parameter is considered in more detail. It is related to the most important kinds of boson operator ordering. The properties of a specific set of discrete excitations of squeezed coherent states are given.

  9. Noise deconvolution based on the L1-metric and decomposition of discrete distributions of postsynaptic responses.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1997-04-25

    A statistical approach to the analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions, minimised with linear programming methods, to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; and (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (<5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting phenomena' and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.
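
    The forward model being inverted here, a discrete quantal distribution convolved with Gaussian recording noise, is easy to write down (the linear-programming inversion itself is not reproduced); a hypothetical sketch:

        import numpy as np

        def amplitude_pdf(x, probs, q, noise_sd):
            # Forward model: the amplitude distribution is the convolution of
            # a discrete distribution (k quanta released, amplitude k*q, with
            # probability probs[k]) with Gaussian recording noise.
            pdf = np.zeros_like(x)
            for k, p in enumerate(probs):
                pdf += p * np.exp(-(x - k * q) ** 2 / (2 * noise_sd ** 2))
            return pdf / (noise_sd * np.sqrt(2 * np.pi))

        x = np.linspace(-0.5, 4.0, 1000)   # response amplitude (a.u.)
        pdf = amplitude_pdf(x, probs=[0.2, 0.5, 0.3], q=1.0, noise_sd=0.15)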

  10. Review of image processing fundamentals

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1985-01-01

    Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation are considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit and a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination, by defocusing, of the extraneous high frequencies introduced by the visible edges is necessary to allow the underlying object represented by the data values to be seen.

  11. A brain MRI bias field correction method created in the Gaussian multi-scale space

    NASA Astrophysics Data System (ADS)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before corrupted MR images are submitted to downstream image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by taking a weighted sum of the image details in each layer of the space. Next, the bias-field-corrected MR image is obtained after a gamma (γ) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated the superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
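
    A schematic of the multi-scale detail-extraction step might look as follows; the scales and weights are assumptions for illustration, and the gamma-correction stage is omitted:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def multiscale_detail_correction(img, sigmas, weights):
            # At each scale, the difference between the image and its Gaussian
            # blur keeps detail while discarding the slowly varying bias field;
            # a weighted sum of the per-scale details approximates an
            # inhomogeneity-reduced image.
            details = [img - gaussian_filter(img, s) for s in sigmas]
            return sum(w * d for w, d in zip(weights, details))

        img = np.random.rand(128, 128)     # stand-in for an MR slice
        corrected = multiscale_detail_correction(
            img, sigmas=[2, 4, 8, 16], weights=[0.4, 0.3, 0.2, 0.1])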

  12. Edge detection - Image-plane versus digital processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.; Park, Stephen K.; Triplett, Judith A.

    1987-01-01

    To optimize edge detection with the familiar Laplacian-of-Gaussian operator, it has become common to implement this operator with a large digital convolution mask followed by some interpolation of the processed data to determine the zero crossings that locate edges. It is generally recognized that this large mask causes substantial blurring of fine detail. It is shown that the spatial detail can be improved by a factor of about four with either the Wiener-Laplacian-of-Gaussian filter or an image-plane processor. The Wiener-Laplacian-of-Gaussian filter minimizes the image-gathering degradations if the scene statistics are at least approximately known and also serves as an interpolator to determine the desired zero crossings directly. The image-plane processor forms the Laplacian-of-Gaussian response by properly combining the optical design of the image-gathering system with a minimal three-by-three lateral-inhibitory processing mask. This approach, which is suggested by Marr's model of early processing in human vision, also reduces data processing by about two orders of magnitude and data transmission by up to an order of magnitude.
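
    For reference, the conventional digital pipeline that both alternatives are compared against, LoG convolution followed by zero-crossing detection, can be sketched as:

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def log_edges(image, sigma=2.0):
            # Laplacian-of-Gaussian response followed by a zero-crossing test:
            # mark a pixel where the response changes sign against its right
            # or lower neighbour.
            r = gaussian_laplace(image.astype(float), sigma)
            zc = np.zeros(r.shape, dtype=bool)
            zc[:, :-1] |= np.signbit(r[:, :-1]) != np.signbit(r[:, 1:])
            zc[:-1, :] |= np.signbit(r[:-1, :]) != np.signbit(r[1:, :])
            return zc

        edges = log_edges(np.random.rand(64, 64))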

  13. Ion temperature measurements of indirect-drive implosions with the neutron time-of-flight detector on SG-III laser facility

    NASA Astrophysics Data System (ADS)

    Chen, Zhongjing; Zhang, Xing; Pu, Yudong; Yan, Ji; Huang, Tianxuan; Jiang, Wei; Yu, Bo; Chen, Bolun; Tang, Qi; Song, Zifeng; Chen, Jiabin; Zhan, Xiayu; Liu, Zhongjie; Xie, Xufei; Jiang, Shaoen; Liu, Shenye

    2018-02-01

    The accuracy of the determination of the burn-averaged ion temperature of inertial confinement fusion implosions depends on the unfold process, including deconvolution and convolution methods, and on the function, i.e., the detector response, used to fit the signals measured by neutron time-of-flight (nToF) detectors. The function given by Murphy et al. [Rev. Sci. Instrum. 68(1), 610-613 (1997)] has been widely used at Nova, Omega, and NIF. It has two components, fast and slow, and the contribution of scattered neutrons is not explicitly considered. In this work, a new function based on Murphy's function is employed to unfold nToF signals. The contribution of scattered neutrons is easily included via the convolution of a Gaussian response function with an exponential decay. The ion temperature is measured by nToF with the new function. Good agreement with the ion temperature determined by the deconvolution method has been achieved.
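
    The key ingredient of the new fit function, a Gaussian response convolved with an exponential decay, is straightforward to reproduce numerically; the grid and time constants below are illustrative, not values from the paper.

        import numpy as np

        def detector_response(t, sigma, tau):
            # Convolution of a Gaussian response (width sigma) with an
            # exponential decay (time constant tau), the form used here to
            # fold the scattered-neutron tail into the impulse response.
            dt = t[1] - t[0]
            gauss = np.exp(-(t - t.mean()) ** 2 / (2 * sigma ** 2))
            decay = np.exp(-(t - t[0]) / tau)
            resp = np.convolve(gauss, decay)[: t.size] * dt
            return resp / (resp.sum() * dt)    # normalize to unit area

        t = np.linspace(0.0, 60.0, 3001)       # ns; illustrative grid
        resp = detector_response(t, sigma=1.5, tau=10.0)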

  14. Dynamic heterogeneity and conditional statistics of non-Gaussian temperature fluctuations in turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    He, Xiaozhou; Wang, Yin; Tong, Penger

    2018-05-01

    Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and one does not understand why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that, because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form, and its variance σ_T² for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.

  15. Correction Factor for Gaussian Deconvolution of Optically Thick Linewidths in Homogeneous Sources

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1999-01-01

    Profiles of optically thick, non-Gaussian emission lines convolved with Gaussian instrumental profiles are constructed and are deconvolved on the usual Gaussian basis to examine the resulting departure from accuracy in "measured" linewidths. It is found that "measured" linewidths underestimate the true linewidths of optically thick lines by a factor which depends on the resolution factor r ≅ (Doppler width)/(instrumental width) and on the optical thickness τ₀. An approximating expression is obtained for this factor, applicable in the range of at least 0 ≤ τ₀ ≤ 10, which can provide estimates of the true linewidth and optical thickness.

  16. Optimized statistical parametric mapping procedure for NIRS data contaminated by motion artifacts: Neurometric analysis of body schema extension.

    PubMed

    Suzuki, Satoshi

    2017-09-01

    This study investigated the spatial distribution of brain activity on body schema (BS) modification induced by natural body motion using two versions of a hand-tracing task. In Task 1, participants traced Japanese Hiragana characters using the right forefinger, requiring no BS expansion. In Task 2, participants performed the tracing task with a long stick, requiring BS expansion. Spatial distribution was analyzed using general linear model (GLM)-based statistical parametric mapping of near-infrared spectroscopy data contaminated with motion artifacts caused by the hand-tracing task. Three methods were utilized in series to counter the artifacts, and optimal conditions and modifications were investigated: a model-free method (Step 1), a convolution matrix method (Step 2), and a boxcar-function-based Gaussian convolution method (Step 3). The results revealed four methodological findings: (1) Deoxyhemoglobin was suitable for the GLM because both Akaike information criterion and the variance against the averaged hemodynamic response function were smaller than for other signals, (2) a high-pass filter with a cutoff frequency of .014 Hz was effective, (3) the hemodynamic response function computed from a Gaussian kernel function and its first- and second-derivative terms should be included in the GLM model, and (4) correction of non-autocorrelation and use of effective degrees of freedom were critical. Investigating z-maps computed according to these guidelines revealed that contiguous areas of BA7-BA40-BA21 in the right hemisphere became significantly activated ([Formula: see text], [Formula: see text], and [Formula: see text], respectively) during BS modification while performing the hand-tracing task.

  17. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  18. Gaussian temporal modulation for the behavior of multi-sinc Schell-model pulses in dispersive media

    NASA Astrophysics Data System (ADS)

    Liu, Xiayin; Zhao, Daomu; Tian, Kehan; Pan, Weiqing; Zhang, Kouwen

    2018-06-01

    A new class of pulse source, with correlations modeled by the convolution of two legitimate temporal correlation functions, is proposed. In particular, analytical formulas are derived for Gaussian temporally modulated multi-sinc Schell-model (MSSM) pulses generated by such a source propagating in dispersive media. It is demonstrated that the average intensity of MSSM pulses on propagation is reshaped from a flat profile or a pulse train into a distribution with a Gaussian temporal envelope by adjusting the initial correlation width of the Gaussian pulse. The effects of the Gaussian temporal modulation on the temporal degree of coherence of the MSSM pulse are also analyzed. The results presented here show the potential of coherence modulation for pulse shaping and pulsed-laser material processing.

  19. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

    Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantifying the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth-dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. To reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.

  20. Relationship of strength of turbulence to received power

    NASA Technical Reports Server (NTRS)

    Rottger, J.

    1983-01-01

    Because of contributions due to reflection, the determination of the turbulence refractive index structure constant may be affected. For pure scattering from turbulence in the inertial subrange, the radar echo power can be used to calculate the refractive index structure constant. The radar power is determined by a convolution integral. If the antenna beam is swung to sufficiently large off-zenith angles (12.5 deg) so that a quasi-isotropic response from the tail ends of the Gaussian angular distribution can be expected, the evaluation of the convolution integral depends only on the known antenna pattern of the radar. This procedure of swinging the radar beam to attenuate the reflected component may be called angular or direction filtering. The tilted antenna may also pick up reflected components from near the zenith through the sidelobes. This can be tested by evaluating the correlation function. This method applies time-domain filtering to the intensity time series but requires a very careful selection of the high-pass filters.

  1. Some error bounds for K-iterated Gaussian recursive filters

    NASA Astrophysics Data System (ADS)

    Cuomo, Salvatore; Galletti, Ardelio; Giunta, Giulio; Marcellino, Livia

    2016-10-01

    Recursive filters (RFs) have achieved a central role in several research fields over the last few years. For example, they are used in image processing, in data assimilation, and in electrocardiogram denoising. In particular, among RFs, Gaussian RFs are an efficient computational tool for approximating Gaussian-based convolutions and are suitable for digital image processing and applications of scale-space theory. As is well known, Gaussian RFs, when applied to signals with support in a finite domain, generate distortions and artifacts, mostly localized at the boundaries. Heuristic and theoretical improvements have been proposed in the literature to deal with this issue (namely, boundary conditions). They include the case in which a Gaussian RF is applied more than once, i.e., the so-called K-iterated Gaussian RFs. In this paper, starting from a summary of the comprehensive mathematical background, we consider the case of the K-iterated first-order Gaussian RF and provide a study of its numerical stability together with some component-wise theoretical error bounds.
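
    A minimal sketch of the object under study, a first-order recursive filter (one forward and one backward pass) iterated K times so that its impulse response tends to a Gaussian, is shown below; the coefficient convention is an assumption, and no boundary correction is applied, which is precisely where the distortions analyzed in the paper arise.

        import numpy as np

        def first_order_rf(x, alpha):
            # One forward and one backward first-order pass; the backward
            # pass symmetrizes the impulse response.
            y = np.empty_like(x)
            y[0] = (1 - alpha) * x[0]
            for i in range(1, x.size):
                y[i] = alpha * y[i - 1] + (1 - alpha) * x[i]
            z = np.empty_like(y)
            z[-1] = (1 - alpha) * y[-1]
            for i in range(x.size - 2, -1, -1):
                z[i] = alpha * z[i + 1] + (1 - alpha) * y[i]
            return z

        def k_iterated_rf(x, alpha, k):
            # Iterating the filter K times drives its impulse response
            # toward a Gaussian (central-limit behaviour).
            for _ in range(k):
                x = first_order_rf(x, alpha)
            return x

        impulse = np.zeros(201)
        impulse[100] = 1.0
        bell = k_iterated_rf(impulse, alpha=0.6, k=4)  # approximately Gaussian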

  2. Spatial Angular Compounding Technique for H-Scan Ultrasound Imaging.

    PubMed

    Khairalseed, Mawia; Xiong, Fangyuan; Kim, Jung-Whan; Mattrey, Robert F; Parker, Kevin J; Hoyt, Kenneth

    2018-01-01

    H-Scan is a new ultrasound imaging technique that relies on matching a model of pulse-echo formation to the mathematics of a class of Gaussian-weighted Hermite polynomials. This technique may be beneficial in the measurement of relative scatterer sizes and in cancer therapy, particularly for early response to drug treatment. Because current H-scan techniques use focused ultrasound data acquisitions, spatial resolution degrades away from the focal region and inherently affects relative scatterer size estimation. Although the resolution of ultrasound plane wave imaging can be inferior to that of traditional focused ultrasound approaches, the former exhibits a homogeneous spatial resolution throughout the image plane. The purpose of this study was to implement H-scan using plane wave imaging and investigate the impact of spatial angular compounding on H-scan image quality. Parallel convolution filters using two different Gaussian-weighted Hermite polynomials that describe ultrasound scattering events are applied to the radiofrequency data. The H-scan processing is done on each radiofrequency image plane before averaging to obtain the angular compounded image. The relative strength from each convolution is color-coded to represent relative scatterer size. Given results from a series of phantom materials, H-scan imaging with spatial angular compounding more accurately reflects the true scatterer size, owing to reductions in the system point spread function and improved signal-to-noise ratio. Preliminary in vivo H-scan imaging of tumor-bearing animals suggests this modality may be useful for monitoring early response to chemotherapeutic treatment. Overall, H-scan imaging using ultrasound plane waves and spatial angular compounding is a promising approach for visualizing the relative size and distribution of acoustic scattering sources.

  3. Linear velocity fields in non-Gaussian models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  4. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.

  5. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation time, graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.

  6. Generation of an optical frequency comb with a Gaussian spectrum using a linear time-to-space mapping system.

    PubMed

    Hisatake, Shintaro; Tada, Keiji; Nagatsuma, Tadao

    2010-03-01

    We demonstrate the generation of an optical frequency comb (OFC) with a Gaussian spectrum using a continuous-wave (CW) laser, based on spatial convolution of a slit and a periodically moving optical beam spot in a linear time-to-space mapping system. A CW optical beam is linearly mapped to a spatial signal using two sinusoidal electro-optic (EO) deflections and an OFC is extracted by inserting a narrow spatial slit in the Fourier-transform plane of a second EO deflector (EOD). The spectral shape of the OFC corresponds to the spatial beam profile in the near-field region of the second EOD, which can be manipulated by a spatial filter without spectral dispersers. In a proof-of-concept experiment, a 16.25-GHz-spaced, 240-GHz-wide Gaussian-envelope OFC (corresponding to 1.8 ps Gaussian pulse generation) was demonstrated.

  7. On the efficacy of procedures to normalize Ex-Gaussian distributions.

    PubMed

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2014-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, the transformation with parameter lambda = -1 leads to the best results.
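
    Since an Ex-Gaussian variate is by construction the sum of a normal and an independent exponential component (so their densities convolve), both the distribution and the effect of a normalizing transform are easy to reproduce; the parameters below are illustrative.

        import numpy as np
        from scipy.stats import skew

        rng = np.random.default_rng(0)

        def ex_gaussian_sample(n, mu, sigma, tau):
            # Normal and exponential components add, so their densities convolve.
            return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

        rt = ex_gaussian_sample(10_000, mu=400.0, sigma=40.0, tau=100.0)  # ms
        inv = 1.0 / rt   # reciprocal transform, i.e. Box-Cox with lambda = -1
        print(skew(rt), skew(inv))  # the transform strongly reduces positive skew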

  8. A Hierarchical Convolutional Neural Network for vesicle fusion event classification.

    PubMed

    Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke

    2017-09-01

    Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. First, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied to each image patch of the patch sequence, with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by the GMM and the visual appearance features embedded in some key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event, and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performance than three previous methods.

  9. Measurement of Flaw Size From Thermographic Data

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.

    2015-01-01

    Simple methods for reducing the pulsed thermographic responses of delaminations tend to overestimate the size of the delamination, since the heat diffuses in the plane parallel to the surface. The result is a temperature profile over the delamination which is larger than the delamination itself. A variational approach is presented for reducing the thermographic data to produce an estimated size for a flaw that is much closer to the true size of the delamination. The method is based on an estimate for the thermal response that is a convolution of a Gaussian kernel with the shape of the flaw. The size is determined from both the temporal and spatial thermal response of the exterior surface above the delamination and from constraints on the length of the contour surrounding the delamination. Examples of the application of the technique to simulated and experimental data are presented to investigate its limitations.

  10. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case where the system matrix H has low coherence; however, in our application H is the convolution matrix for the system psf, and a typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
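
    The iterative-thresholding framework referred to here can be illustrated with its best-known member, ISTA with the soft rule, which yields the lasso; the paper's hybrid rule is a generalization not reproduced in this sketch.

        import numpy as np

        def soft_threshold(x, t):
            # Soft-thresholding shrinkage; the "hybrid" rule generalizes
            # this and the hard rule, but is not reproduced here.
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(y, H, lam, n_iter=200):
            # Iterative shrinkage/thresholding for y = H x + noise with a
            # sparsity penalty lam * ||x||_1 (i.e. the lasso).
            L = np.linalg.norm(H, 2) ** 2   # Lipschitz constant of the gradient
            x = np.zeros(H.shape[1])
            for _ in range(n_iter):
                x = soft_threshold(x + H.T @ (y - H @ x) / L, lam / L)
            return x

        rng = np.random.default_rng(1)
        H = rng.normal(size=(100, 200))
        x_true = np.zeros(200)
        x_true[[10, 50, 120]] = [3.0, -2.0, 4.0]
        y = H @ x_true + 0.1 * rng.normal(size=100)
        x_hat = ista(y, H, lam=1.0)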

  11. Efficient Modeling of Gravity Fields Caused by Sources with Arbitrary Geometry and Arbitrary Density Distribution

    NASA Astrophysics Data System (ADS)

    Wu, Leyuan

    2018-01-01

    We present a brief review of gravity forward algorithms in the Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of the gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution, defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in the Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm can obtain high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
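
    The discrete building block of such Fourier-domain schemes, a linear (non-circular) convolution computed via zero-padded FFTs, can be sketched in a few lines; this is only that building block, not the full hybrid rectangle-Gaussian quadrature of the Conv-Gauss-FFT algorithm.

        import numpy as np
        from scipy.signal import convolve2d

        def fft_linear_convolve2d(a, b):
            # Linear (non-circular) 2-D convolution in the Fourier domain:
            # zero-pad both arrays to the full output size so the FFT's
            # cyclic wrap-around cannot alias the result.
            shape = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
            return np.fft.irfft2(np.fft.rfft2(a, shape) * np.fft.rfft2(b, shape), shape)

        density = np.random.rand(64, 64)   # stand-in source grid
        kernel = np.random.rand(16, 16)    # stand-in Green's-function kernel
        field = fft_linear_convolve2d(density, kernel)
        assert np.allclose(field, convolve2d(density, kernel))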

  12. Separable concatenated codes with iterative MAP decoding for Rician fading channels

    NASA Technical Reports Server (NTRS)

    Lodge, J. H.; Young, R. J.

    1993-01-01

    Very efficient signalling in radio channels requires the design of very powerful codes having special structure suitable for practical decoding schemes. In this paper, powerful codes are obtained by combining comparatively simple convolutional codes to form multi-tiered 'separable' convolutional codes. The decoding of these codes, using separable symbol-by-symbol maximum a posteriori (MAP) 'filters', is described. It is known that this approach yields impressive results in non-fading additive white Gaussian noise channels. Interleaving is an inherent part of the code construction, and consequently, these codes are well suited for fading channel communications. Here, simulation results for communications over Rician fading channels are presented to support this claim.

  13. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or for comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as on the wavelength and the observing strategy. Given knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains, up to two orders of magnitude, are obtained with respect to kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
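
    In the same spirit (though not the pypher code itself), a Wiener-style PSF-matching kernel with an assumed regularisation constant and Gaussian test PSFs can be sketched as:

        import numpy as np

        def gaussian_psf(n, sigma):
            y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
            p = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
            return p / p.sum()

        def psf_matching_kernel(psf_source, psf_target, reg=1e-3):
            # Wiener-style kernel K such that psf_target ~= K * psf_source:
            # K = F^-1[ F(target) conj(F(source)) / (|F(source)|^2 + reg) ].
            fs = np.fft.fft2(np.fft.ifftshift(psf_source))
            ft = np.fft.fft2(np.fft.ifftshift(psf_target))
            fk = ft * np.conj(fs) / (np.abs(fs) ** 2 + reg)
            return np.real(np.fft.fftshift(np.fft.ifft2(fk)))

        k = psf_matching_kernel(gaussian_psf(65, 2.0), gaussian_psf(65, 4.0))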

  14. On the efficacy of procedures to normalize Ex-Gaussian distributions

    PubMed Central

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2015-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, the transformation with parameter lambda = -1 leads to the best results. PMID:25709588

  15. Convolution Algebra for Fluid Modes with Finite Energy

    DTIC Science & Technology

    1992-04-01

    signals and systems analysis: the evaluation of the initial condition (or input) to a system given its final condition (or output) and its impulse ... Images Corrupted with Gaussian Blur ... Deblurring with Hermite-Rodriguez Wavelets ... Letter "T", which is diffused for t=12 and corrupted by additive noise

  16. Radiation from High Temperature Plasmas.

    DTIC Science & Technology

    1980-09-09

    the silicon radiation, both lines and continuum, photoionizes and photoexcites bound levels of the aluminum plasma. This raises the state of ... experimental broadening, a program was established to catalog all the spectra calculated theoretically and convolve them with Gaussian broadening ... theoretical spectrum into an observed spectrum as the experimental broadening increases. This evolution is seen in the next section for the case of an ...

  17. Focal ratio degradation: a new perspective

    NASA Astrophysics Data System (ADS)

    Haynes, Dionne M.; Withford, Michael J.; Dawes, Judith M.; Haynes, Roger; Bland-Hawthorn, Joss

    2008-07-01

    We have developed an alternative empirical model of focal ratio degradation (FRD) for the parallel laser beam technique which can accommodate contributions from both scattering and modal diffusion. It is consistent with scattering inducing a Lorentzian contribution and modal diffusion inducing a Gaussian contribution. The convolution of these two functions produces a Voigt function, which is shown to better simulate the observed behavior of the FRD distribution and provides a greatly improved fit over the standard Gaussian fitting approach. The Voigt model can also be used to quantify the amount of energy displaced by FRD, thereby allowing astronomical instrument scientists to identify, quantify, and potentially minimize the various sources of FRD and optimise the fiber and instrument performance.
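
    SciPy ships a direct implementation of this Gaussian-Lorentzian convolution; the sketch below, with assumed widths, evaluates the Voigt profile and checks it against an explicit numerical convolution of the two component shapes.

        import numpy as np
        from scipy.special import voigt_profile  # SciPy >= 1.4

        theta = np.linspace(-10, 10, 2001)   # output angle, arbitrary units
        sigma, gamma = 1.0, 0.5              # assumed Gaussian/Lorentzian widths
        profile = voigt_profile(theta, sigma, gamma)

        # Explicit numerical convolution of the two component shapes:
        d = theta[1] - theta[0]
        gauss = np.exp(-theta ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
        lorentz = (gamma / np.pi) / (theta ** 2 + gamma ** 2)
        check = np.convolve(gauss, lorentz, mode='same') * d
        print(np.abs(profile - check)[500:1500].max())  # agrees away from the edges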

  18. Modulation and coding for fast fading mobile satellite communication channels

    NASA Technical Reports Server (NTRS)

    Mclane, P. J.; Wittke, P. H.; Smith, W. S.; Lee, A.; Ho, P. K. M.; Loo, C.

    1988-01-01

    The performance of Gaussian baseband-filtered minimum shift keying (GMSK) using differential detection in fast Rician fading is discussed, with a novel treatment of the inherent intersymbol interference (ISI) leading to an exact solution. Trellis-coded differential phase shift keying (DPSK) with a convolutional interleaver is considered. The channel is a Rician channel with the line-of-sight component subject to a lognormal transformation.

  19. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.

    PubMed

    Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei

    2017-07-01

    The discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks but also be efficiently implemented by benefiting from GPU computing.

  20. Bending the Rules: Widefield Microscopy and the Abbe Limit of Resolution

    PubMed Central

    Verdaasdonk, Jolien S.; Stephens, Andrew D.; Haase, Julian; Bloom, Kerry

    2014-01-01

    One of the most fundamental concepts of microscopy is that of resolution: the ability to clearly distinguish two objects as separate. Recent advances such as structured illumination microscopy (SIM) and point localization techniques, including photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), strive to overcome the inherent limits of resolution of the modern light microscope. These techniques, however, are not always feasible or optimal for live cell imaging. Thus, in this review, we explore three techniques for extracting high-resolution data from images acquired on a widefield microscope: deconvolution, model convolution, and Gaussian fitting. Deconvolution is a powerful tool for restoring a blurred image using knowledge of the point spread function (PSF) describing the blurring of light by the microscope, although care must be taken to ensure the accuracy of subsequent quantitative analysis. The process of model convolution also requires knowledge of the PSF to blur a simulated image, which can then be compared to the experimentally acquired data to reach conclusions regarding its geometry and fluorophore distribution. Gaussian fitting is the basis for point localization microscopy and can also be applied to tracking spot motion over time or measuring spot shape and size. Altogether, these three methods serve as powerful tools for high-resolution imaging using widefield microscopy. PMID:23893718

  1. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of the data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods.

  2. Improved digital filters for evaluating Fourier and Hankel transform integrals

    USGS Publications Warehouse

    Anderson, Walter L.

    1975-01-01

    New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.

  3. Voigt deconvolution method and its applications to pure oxygen absorption spectrum at 1270 nm band.

    PubMed

    Al-Jalali, Muhammad A; Aljghami, Issam F; Mahzia, Yahia M

    2016-03-15

    Experimental spectral lines of pure oxygen at the 1270 nm band were analyzed by a Voigt deconvolution method. The method gave a total Voigt profile, which arises from two overlapping bands. Deconvolution of the total Voigt profile leads to two Voigt profiles, the first from the O2 dimol envelope at the 1264 nm band and the second from the O2 monomer envelope at the 1268 nm band. In addition, the Voigt profile itself is the convolution of Lorentzian and Gaussian distributions. Competition between thermal and collisional effects was clearly observed through the competition between the Gaussian and Lorentzian widths for each band envelope. The Voigt full width at half maximum (Voigt FWHM) for each line and the ratio between the Lorentzian and Gaussian widths (Γ_L/Γ_G) were investigated. The applied pressures were 1, 2, 3, 4, 5, and 8 bar, and the temperatures were 298 K, 323 K, 348 K, and 373 K.

  4. The Gaussian streaming model and convolution Lagrangian effective field theory

    DOE PAGES

    Vlah, Zvonimir; Castorina, Emanuele; White, Martin

    2016-12-05

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  5. The Gaussian streaming model and convolution Lagrangian effective field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlah, Zvonimir; Castorina, Emanuele; White, Martin, E-mail: zvlah@stanford.edu, E-mail: ecastorina@berkeley.edu, E-mail: mwhite@berkeley.edu

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  6. Improved Discrete Approximation of Laplacian of Gaussian

    NASA Technical Reports Server (NTRS)

    Shuler, Robert L., Jr.

    2004-01-01

    An improved method of computing a discrete approximation of the Laplacian of a Gaussian convolution of an image has been devised. The primary advantage of the method is that without substantially degrading the accuracy of the end result, it reduces the amount of information that must be processed and thus reduces the amount of circuitry needed to perform the Laplacian-of-Gaussian (LOG) operation. Some background information is necessary to place the method in context. The method is intended for application to the LOG part of a process of real-time digital filtering of digitized video data that represent brightnesses in pixels in a square array. The particular filtering process of interest is one that converts pixel brightnesses to binary form, thereby reducing the amount of information that must be processed in subsequent correlation processing (e.g., correlations between images in a stereoscopic pair for determining distances or correlations between successive frames of the same image for detecting motions). The Laplacian is often included in the filtering process because it emphasizes edges and textures, while the Gaussian is often included because it smooths out noise that might not be consistent between left and right images or between successive frames of the same image.
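
    In software, one common discrete realization of the LOG-plus-binarization step (a sketch of the idea, not the circuitry this record describes) is:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    frame = np.random.rand(128, 128).astype(np.float32)  # stand-in for a video frame
    log = gaussian_laplace(frame, sigma=2.0)  # Gaussian smoothing, then Laplacian
    binary = (log > 0).astype(np.uint8)       # sign-binarized output for correlation
    ```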

  7. Contextual convolutional neural networks for lung nodule classification using Gaussian-weighted average image patches

    NASA Astrophysics Data System (ADS)

    Lee, Haeil; Lee, Hansang; Park, Minseok; Kim, Junmo

    2017-03-01

    Lung cancer is the most common cause of cancer-related death. To diagnose lung cancers in early stages, numerous studies and approaches have been developed for cancer screening with computed tomography (CT) imaging. In recent years, convolutional neural networks (CNN) have become one of the most common and reliable techniques in computer aided detection (CADe) and diagnosis (CADx), achieving state-of-the-art performance for various tasks. In this study, we propose a CNN classification system for false positive reduction of initially detected lung nodule candidates. First, image patches of lung nodule candidates are extracted from CT scans to train a CNN classifier. To reflect the volumetric contextual information of lung nodules in a 2D image patch, we propose weighted average image patch (WAIP) generation, which averages multiple slice images of a lung nodule candidate. Moreover, to emphasize the central slices of lung nodules, slice images are locally weighted according to a Gaussian distribution and averaged to generate the 2D WAIP. With these extracted patches, a 2D CNN is trained to classify the WAIPs of lung nodule candidates into positive and negative labels. We used the LUNA 2016 public challenge database to validate the performance of our approach for false positive reduction in lung CT nodule classification. Experiments show our approach improves the classification accuracy of lung nodules compared to a baseline 2D CNN with patches from a single slice image.
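
    A minimal sketch of the Gaussian-weighted averaging step, assuming a slice stack already centred on the nodule candidate; the function name and σ are illustrative, not the paper's settings.

    ```python
    import numpy as np

    def waip(stack, sigma=1.5):
        """Gaussian-weighted average of axial slices -> one 2D patch.
        stack: (n_slices, H, W) array centred on the nodule candidate."""
        n = stack.shape[0]
        z = np.arange(n) - (n - 1) / 2.0            # slice offsets from the centre
        w = np.exp(-0.5 * (z / sigma) ** 2)         # Gaussian slice weights
        w /= w.sum()
        return np.tensordot(w, stack, axes=(0, 0))  # weighted average, shape (H, W)
    ```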

  8. An adaptive sparse deconvolution method for distinguishing the overlapping echoes of ultrasonic guided waves for pipeline crack inspection

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang

    2017-03-01

    In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Second, the convolution matrix is constructed from the estimated results. Third, the split augmented Lagrangian shrinkage algorithm (SALSA) is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
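
    The Gaussian echo model invoked above is commonly written as s(t) = β·exp(−α(t−τ)²)·cos(2πfc(t−τ)+φ); below is a sketch with made-up parameter values, not values estimated from guided-wave data.

    ```python
    import numpy as np

    def gaussian_echo(t, beta, alpha, tau, fc, phi):
        """Gaussian-windowed tone burst: a column prototype for the matrix."""
        return beta * np.exp(-alpha * (t - tau) ** 2) * np.cos(
            2 * np.pi * fc * (t - tau) + phi)

    t = np.arange(0.0, 200e-6, 1e-7)     # 10 MHz sampling
    proto = gaussian_echo(t, beta=1.0, alpha=5e8, tau=100e-6, fc=70e3, phi=0.0)
    # Time-shifted copies of `proto` form the columns of the convolution matrix.
    ```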

  9. Rate-compatible punctured convolutional codes (RCPC codes) and their applications

    NASA Astrophysics Data System (ADS)

    Hagenauer, Joachim

    1988-04-01

    The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of the high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states), together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
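
    A toy illustration of the rate-compatibility restriction; the puncturing patterns below are illustrative, not Hagenauer's published tables.

    ```python
    import numpy as np

    P = 4                                    # puncturing period (P info bits)
    coded = np.arange(2 * P)                 # 2P output bits of a rate-1/2 mother code
    keep_r12 = np.array([1, 1, 1, 1, 1, 1, 1, 1])  # keep all 8 -> rate 4/8 = 1/2
    keep_r23 = np.array([1, 1, 1, 0, 1, 1, 1, 0])  # delete l = 2 -> rate P/(P+l) = 2/3

    # Rate compatibility: every bit kept by the high-rate code is also kept by
    # the lower-rate code, so an ARQ retransmission only adds incremental bits.
    assert np.all(keep_r12 >= keep_r23)
    sent_high_rate = coded[keep_r23.astype(bool)]
    ```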

  10. Rolling bearing fault feature learning using improved convolutional deep belief network with compressed sensing

    NASA Astrophysics Data System (ADS)

    Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng

    2018-02-01

    The vibration signals collected from rolling bearing are usually complex and non-stationary with heavy background noise. Therefore, it is a great challenge to efficiently learn the representative fault features of the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearing. Firstly, CS is adopted for reducing the vibration data amount to improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze the experimental rolling bearing vibration signals. The results confirm that the developed method is more effective than the traditional methods.

  11. An investigation of error correcting techniques for OMV and AXAF

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed-Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distributions between errors and Gaussian burst lengths. Sample means, variances, and numbers of un-correctable errors were calculated for each data set before testing.
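
    A sketch of the error-sequence generation described, with exponentially distributed gaps (a Poisson process in time) and Gaussian burst lengths; all parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, mean_gap, burst_mu, burst_sigma = 10_000, 500.0, 8.0, 3.0

    data = rng.integers(0, 2, n, dtype=np.uint8)       # stand-in for encoded data
    errors = np.zeros(n, dtype=np.uint8)
    pos = int(rng.exponential(mean_gap))               # first Poisson-process arrival
    while pos < n:
        burst = max(1, round(rng.normal(burst_mu, burst_sigma)))  # Gaussian length
        errors[pos:pos + burst] = 1
        pos += burst + int(rng.exponential(mean_gap))  # exponential gap to next burst

    corrupted = data ^ errors                          # inject the known error pattern
    ```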

  12. Stability Training for Convolutional Neural Nets in LArTPC

    NASA Astrophysics Data System (ADS)

    Lindsay, Matt; Wongjirad, Taritree

    2017-01-01

    Convolutional Neural Nets (CNNs) are the state of the art for many problems in computer vision and are a promising method for classifying interactions in Liquid Argon Time Projection Chambers (LArTPCs) used in neutrino oscillation experiments. Despite their good performance, CNNs are not without drawbacks, chief among them vulnerability to noise and small perturbations of the input. One solution to this problem is a modification of the learning process called Stability Training, developed by Zheng et al. We verify existing work, demonstrating the volatility caused by simple Gaussian noise and showing that it can be nearly eliminated with Stability Training. We then go further and show that a traditional CNN is also vulnerable to realistic experimental noise and that a stability-trained CNN remains accurate despite this noise. This further adds to the optimism for CNNs for work in LArTPCs and other applications.

  13. Backscattering from a Gaussian distributed, perfectly conducting, rough surface

    NASA Technical Reports Server (NTRS)

    Brown, G. S.

    1977-01-01

    The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces, however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.

  14. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, and the effects of the choice of boundary conditions, the correlation range, and GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long scale correlation. Hence our approach makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
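
    A sketch of the coding-set idea for a nearest-neighbour GMRF on a torus: lattice points of one checkerboard colour share no neighbours, so a whole colour can be updated simultaneously. The unit-precision CAR parameterization below is an assumption of this sketch, and the truncation acceptance/rejection step is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta = 64, 0.24           # lattice size; neighbour weight (|beta| < 1/4)
    z = rng.normal(size=(n, n))  # initial field

    i, j = np.indices((n, n))
    for sweep in range(100):
        for colour in (0, 1):                              # the two coding sets
            nb = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                  np.roll(z, 1, 1) + np.roll(z, -1, 1))    # neighbour sums (a convolution)
            mask = (i + j) % 2 == colour
            # Conditional of a unit-precision CAR/GMRF: N(beta * nb, 1)
            z[mask] = beta * nb[mask] + rng.normal(size=mask.sum())
    ```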

  15. Time history solution program, L225 (TEV126). Volume 1: Engineering and usage

    NASA Technical Reports Server (NTRS)

    Kroll, R. I.; Tornallyay, A.; Clemmons, R. E.

    1979-01-01

    Volume 1 of a two volume document is presented. The usage of the convolution program L225 (TEV 126) is described. The program calculates the time response of a linear system by convolving the impulsive response function with the time-dependent excitation function. The convolution is performed as a multiplication in the frequency domain. Fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. A brief description of the analysis used is presented.
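
    The frequency-domain convolution the program performs can be sketched in a few lines; the signals below are illustrative, not L225 input.

    ```python
    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    h = np.exp(-0.5 * t) * np.sin(5.0 * t)  # impulsive response function (illustrative)
    f = np.where(t < 1.0, 1.0, 0.0)         # time-dependent excitation function

    nfft = 2 * t.size                       # zero-padding avoids circular wrap-around
    y = np.fft.irfft(np.fft.rfft(h, nfft) * np.fft.rfft(f, nfft), nfft)[:t.size] * dt
    # y approximates the continuous convolution integral of h and f (hence the dt)
    ```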

  16. Quantization and training of object detection networks with low-precision weights and activations

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you-only-look-once (tiny-YOLO) and YOLO architectures, the proposed method achieves comparable accuracy to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
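
    A minimal sketch of the layer-wise uniform quantization step, assuming the clipping interval comes from a fitted Gaussian; the 3σ rule below is an assumption of the sketch, not the paper's interval-selection criterion.

    ```python
    import numpy as np

    def quantize(w, bits, clip):
        """Symmetric uniform quantizer: clip to [-clip, clip], round to a step."""
        step = 2.0 * clip / (2 ** bits - 1)
        q = np.clip(np.round(w / step), -(2 ** (bits - 1) - 1), 2 ** (bits - 1) - 1)
        return q * step                          # dequantized values on the integer grid

    w = np.random.normal(0.0, 0.05, size=1000)   # weights modelled as Gaussian
    w4 = quantize(w, bits=4, clip=3 * w.std())   # interval from the fitted Gaussian
    ```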

  17. Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.

    PubMed

    Bexelius, Tobias; Sohlberg, Antti

    2018-06-01

    Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between the GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was found to be 24 times faster than the multi-threaded CPU version on a normal 128 × 128 matrix size, 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstructions show great promise as an everyday clinical reconstruction tool.

  18. Definition of the Spatial Resolution of X-Ray Microanalysis in Thin Foils

    NASA Technical Reports Server (NTRS)

    Williams, D. B.; Michael, J. R.; Goldstein, J. I.; Romig, A. D., Jr.

    1992-01-01

    The spatial resolution of X-ray microanalysis in thin foils is defined in terms of the incident electron beam diameter and the average beam broadening. The beam diameter is defined as the full width tenth maximum of a Gaussian intensity distribution. The spatial resolution is calculated by a convolution of the beam diameter and the average beam broadening. This definition of the spatial resolution can be related simply to experimental measurements of composition profiles across interphase interfaces. Monte Carlo calculations using a high-speed parallel supercomputer show good agreement with this definition of the spatial resolution and calculations based on this definition. The agreement is good over a range of specimen thicknesses and atomic number, but is poor when excessive beam tailing distorts the assumed Gaussian electron intensity distributions. Beam tailing occurs in low-Z materials because of fast secondary electrons and in high-Z materials because of plural scattering.
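
    Because the convolution of two Gaussians is a Gaussian whose width adds in quadrature, a definition of this form reduces to R = √(d² + b²) for Gaussian profiles; a quick numerical check of that identity:

    ```python
    import numpy as np

    def gauss(x, fwhm):
        s = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        return np.exp(-x ** 2 / (2.0 * s ** 2))

    x = np.linspace(-50, 50, 4001)
    d, b = 2.0, 5.0                          # widths of the two Gaussian components
    conv = np.convolve(gauss(x, d), gauss(x, b), mode="same")
    half = x[conv >= conv.max() / 2]
    print(half[-1] - half[0], np.hypot(d, b))  # both ≈ sqrt(d² + b²)
    ```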

  19. Non-Gaussian information from weak lensing data via deep learning

    NASA Astrophysics Data System (ADS)

    Gupta, Arushi; Matilla, José Manuel Zorrilla; Hsu, Daniel; Haiman, Zoltán

    2018-05-01

    Weak lensing maps contain information beyond two-point statistics on small scales. Much recent work has tried to extract this information through a range of different observables or via nonlinear transformations of the lensing field. Here we train and apply a two-dimensional convolutional neural network to simulated noiseless lensing maps covering 96 different cosmological models over a range of {Ωm,σ8} . Using the area of the confidence contour in the {Ωm,σ8} plane as a figure of merit, derived from simulated convergence maps smoothed on a scale of 1.0 arcmin, we show that the neural network yields ≈5 × tighter constraints than the power spectrum, and ≈4 × tighter than the lensing peaks. Such gains illustrate the extent to which weak lensing data encode cosmological information not accessible to the power spectrum or even other, non-Gaussian statistics such as lensing peaks.

  20. A Variational Approach to Simultaneous Image Segmentation and Bias Correction.

    PubMed

    Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong

    2015-08-01

    This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.

  1. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    PubMed

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro, and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general surveys or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not treated as an algorithm in its own right, but as the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
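
    In discrete form (equal time steps), the convolution is a lower-triangular matrix product, and deconvolution is the solution of that triangular system; a sketch of the numerics, not the Excel worksheet itself, with an illustrative weighting function:

    ```python
    import numpy as np
    from scipy.linalg import solve_triangular

    dt = 0.5
    w = np.exp(-0.5 * np.arange(10) * dt)        # weighting function (illustrative)
    r_in = np.array([0.4, 0.3, 0.2, 0.1, 0, 0, 0, 0, 0, 0.0])  # in vitro input rate

    resp = np.convolve(r_in, w)[:w.size] * dt    # convolution: predicted response

    # Deconvolution = inverting the convolution: solve W r = resp for r
    W = np.array([[w[i - j] if i >= j else 0.0 for j in range(w.size)]
                  for i in range(w.size)]) * dt
    r_back = solve_triangular(W, resp, lower=True)   # recovers r_in
    ```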

  2. Wavelet transformation to determine impedance spectra of lithium-ion rechargeable battery

    NASA Astrophysics Data System (ADS)

    Hoshi, Yoshinao; Yakabe, Natsuki; Isobe, Koichiro; Saito, Toshiki; Shitanda, Isao; Itagaki, Masayuki

    2016-05-01

    A new analytical method is proposed to determine the electrochemical impedance of lithium-ion rechargeable batteries (LIRB) from time domain data by wavelet transformation (WT). The WT is a waveform analysis method that can transform data in the time domain to the frequency domain while retaining time information. In this transformation, the frequency domain data are obtained by the convolution integral of a mother wavelet and the original time domain data. A complex Morlet mother wavelet (CMMW) is used to obtain complex number data in the frequency domain. The CMMW is expressed by combining a Gaussian function and a sinusoidal term. A theory for selecting suitable conditions for the variables and constants related to the CMMW, i.e., the band, scale, and time parameters, is established by determining impedance spectra from wavelet coefficients using the input voltage to the equivalent circuit and the output current. The impedance spectrum of an LIRB determined by WT agrees well with that measured using a frequency response analyzer.
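
    A sketch of the CMMW (Gaussian envelope times a complex sinusoid) and the convolution step, using the common bandwidth/centre-frequency parameterization with hypothetical values rather than the paper's band, scale, and time parameters:

    ```python
    import numpy as np

    def cmmw(t, fc, fb):
        """Complex Morlet wavelet: (pi*fb)**-0.5 * exp(2j*pi*fc*t) * exp(-t**2/fb)."""
        return (np.pi * fb) ** -0.5 * np.exp(2j * np.pi * fc * t - t ** 2 / fb)

    fs = 1000.0                          # sampling rate, Hz
    tw = np.arange(-1.0, 1.0, 1 / fs)    # wavelet support
    ts = np.arange(0.0, 5.0, 1 / fs)
    x = np.sin(2 * np.pi * 10 * ts)      # stand-in for the measured time-domain data

    coeff = np.convolve(x, cmmw(tw, fc=10.0, fb=0.2), mode="same") / fs
    amplitude = np.abs(coeff)            # complex coefficients: magnitude at 10 Hz
    ```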

  3. On the Response of a Nonlinear Structure to High Kurtosis Non-Gaussian Random Loadings

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam; Turner, Travis L.

    2011-01-01

    This paper is a follow-on to recent work by the authors in which the response and high-cycle fatigue of a nonlinear structure subject to non-Gaussian loadings was found to vary markedly depending on the nature of the loading. There it was found that a non-Gaussian loading having a steady rate of short-duration, high-excursion peaks produced essentially the same response as would have been incurred by a Gaussian loading. In contrast, a non-Gaussian loading having the same kurtosis, but with bursts of high-excursion peaks was found to elicit a much greater response. This work is meant to answer the question of when consideration of a loading probability distribution other than Gaussian is important. The approach entailed nonlinear numerical simulation of a beam structure under Gaussian and non-Gaussian random excitations. Whether the structure responded in a Gaussian or non-Gaussian manner was determined by adherence to, or violations of, the Central Limit Theorem. Over a practical range of damping, it was found that the linear response to a non-Gaussian loading was Gaussian when the period of the system impulse response is much greater than the rate of peaks in the loading. Lower damping reduced the kurtosis, but only when the linear response was non-Gaussian. In the nonlinear regime, the response was found to be non-Gaussian for all loadings. The effect of a spring-hardening type of nonlinearity was found to limit extreme values and thereby lower the kurtosis relative to the linear response regime. In this case, lower damping gave rise to greater nonlinearity, resulting in lower kurtosis than a higher level of damping.

  4. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    Treesearch

    Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin

    2014-01-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...

  5. The Composite Analytic and Simulation Package or RFI (CASPR) on a coded channel

    NASA Technical Reports Server (NTRS)

    Freedman, Jeff; Berman, Ted

    1993-01-01

    CASPR is an analysis package which determines the performance of a coded signal in the presence of Radio Frequency Interference (RFI) and Additive White Gaussian Noise (AWGN). It can analyze a system with convolutional coding, Reed-Solomon (RS) coding, or a concatenation of the two. The signals can either be interleaved or non-interleaved. The model measures the system performance in terms of either the Eb/N0 required to achieve a given Bit Error Rate (BER) or the BER needed for a constant Eb/N0.

  6. Simple reaction time in 8-9-year old children environmentally exposed to PCBs.

    PubMed

    Šovčíková, Eva; Wimmerová, Soňa; Strémy, Maximilián; Kotianová, Janette; Loffredo, Christopher A; Murínová, Ľubica Palkovičová; Chovancová, Jana; Čonka, Kamil; Lancz, Kinga; Trnovec, Tomáš

    2015-12-01

    Simple reaction time (SRT) has been studied in children exposed to polychlorinated biphenyls (PCBs), with variable results. In the current work we examined SRT in 146 boys and 161 girls, aged 8.53 ± 0.65 years (mean ± SD), exposed to PCBs in the environment of eastern Slovakia. We divided the children into tertiles with regard to increasing PCB serum concentration. The mean ± SEM serum concentration of the sum of 15 PCB congeners was 191.15 ± 5.39, 419.23 ± 8.47, and 1315.12 ± 92.57 ng/g lipids in children of the first, second, and third tertiles, respectively. We created probability distribution plots for each child from their multiple trials of the SRT testing. We fitted response time distributions from all valid trials with the ex-Gaussian function, a convolution of a normal and an exponential function, providing estimates of three independent parameters μ, σ, and τ: μ is the mean of the normal component, σ is the standard deviation of the normal component, and τ is the mean of the exponential component. Group response time distributions were calculated using the Vincent averaging technique. A Q-Q plot comparing the probability distribution of the first vs. the third tertile indicated that the deviation of the quantiles of the latter tertile from those of the former begins at the 40th percentile and does not show a positive acceleration. This was confirmed by comparison of the ex-Gaussian parameters of these two tertiles adjusted for sex, age, Raven IQ of the child, mother's and father's education, behavior at home and school, and BMI: the results showed that the parameters μ and τ significantly (p ≤ 0.05) increased with PCB exposure. Similar increases of the ex-Gaussian parameter τ in children suffering from ADHD have been previously reported and interpreted as intermittent attentional lapses, but were not seen in our cohort. Our study has confirmed that environmental exposure of children to PCBs is associated with prolongation of simple reaction time, reflecting impairment of cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
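
    SciPy's exponnorm distribution is exactly this normal-exponential convolution, parameterized by K = τ/σ, so a per-child fit can be sketched as follows; the reaction times below are simulated, not study data.

    ```python
    import numpy as np
    from scipy.stats import exponnorm

    rng = np.random.default_rng(1)
    mu, sigma, tau = 350.0, 40.0, 120.0                 # ms; illustrative values
    rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)  # ex-Gaussian RTs

    K, loc, scale = exponnorm.fit(rt)                   # maximum-likelihood fit
    mu_hat, sigma_hat, tau_hat = loc, scale, K * scale  # back to (mu, sigma, tau)
    ```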

  7. Improving energy efficiency in handheld biometric applications

    NASA Astrophysics Data System (ADS)

    Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.

    2012-06-01

    With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations. If a given algorithm implemented integer convolution instead of floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared comprise 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable size looped convolution, static size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt, with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.

  8. New subspace methods for ATR

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Peng, Jing; Sims, S. Richard F.

    2005-05-01

    In ATR applications, each feature is a convolution of an image with a filter. It is important to use most discriminant features to produce compact representations. We propose two novel subspace methods for dimension reduction to address limitations associated with Fukunaga-Koontz Transform (FKT). The first method, Scatter-FKT, assumes that target is more homogeneous, while clutter can be anything other than target and anywhere. Thus, instead of estimating a clutter covariance matrix, Scatter-FKT computes a clutter scatter matrix that measures the spread of clutter from the target mean. We choose dimensions along which the difference in variation between target and clutter is most pronounced. When the target follows a Gaussian distribution, Scatter-FKT can be viewed as a generalization of FKT. The second method, Optimal Bayesian Subspace, is derived from the optimal Bayesian classifier. It selects dimensions such that the minimum Bayes error rate can be achieved. When both target and clutter follow Gaussian distributions, OBS computes optimal subspace representations. We compare our methods against FKT using character image as well as IR data.

  9. Symplectic evolution of Wigner functions in Markovian open systems.

    PubMed

    Brodier, O; Almeida, A M Ozorio de

    2004-01-01

    The Wigner function is known to evolve classically under the exclusive action of a quadratic Hamiltonian. If the system also interacts with the environment through Lindblad operators that are complex linear functions of position and momentum, then the general evolution is the convolution of a non-Hamiltonian classical propagation of the Wigner function with a phase space Gaussian that broadens in time. We analyze the consequences of this in the three generic cases of elliptic, hyperbolic, and parabolic Hamiltonians. The Wigner function always becomes positive in a definite time, which does not depend on the initial pure state. We observe the influence of classical dynamics and dissipation upon this threshold. We also derive an exact formula for the evolving linear entropy as the average of a narrowing Gaussian taken over a probability distribution that depends only on the initial state. This leads to a long time asymptotic formula for the growth of linear entropy. We finally discuss the possibility of recovering the initial state.

  10. Bayesian seismic inversion based on rock-physics prior modeling for the joint estimation of acoustic impedance, porosity and lithofacies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio

    We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
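
    The linearized convolutional forward model can be sketched as a wavelet convolved with the reflectivity derived from the log-impedance; the Ricker wavelet and impedance profile below are assumptions of the sketch, not the paper's calibrated model.

    ```python
    import numpy as np

    dt = 0.002                                        # 2 ms sampling
    tw = np.arange(-0.1, 0.1, dt)
    fp = 30.0                                         # Ricker peak frequency (assumed)
    wavelet = (1 - 2 * (np.pi * fp * tw) ** 2) * np.exp(-(np.pi * fp * tw) ** 2)

    log_ip = np.log(np.linspace(6.0e6, 9.0e6, 200))   # log acoustic impedance profile
    refl = 0.5 * np.diff(log_ip)                      # linearized reflectivity
    trace = np.convolve(refl, wavelet, mode="same")   # post-stack synthetic trace
    ```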

  11. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of the window function influences the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for CDs of different orders by minimizing the spectral error of the derivative, and compare them with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method at different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, at lower computational cost. For example, an 8th order staggered-grid CD operator can achieve the same accuracy as a 16th order staggered-grid FD algorithm, but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagation.
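
    The construction can be sketched on a collocated grid (the staggered-grid version evaluates the operator at half-cell shifts): inverse-FFT the band-limited spectrum ik of d/dx, truncate, and taper; the Gaussian window width below is an assumption.

    ```python
    import numpy as np

    n, dx, half = 256, 1.0, 8                   # grid size, spacing, stencil half-length
    k = 2 * np.pi * np.fft.fftfreq(n, dx)
    stencil = np.real(np.fft.ifft(1j * k))      # exact band-limited differentiator

    idx = np.r_[np.arange(-half, 0), np.arange(1, half + 1)]
    taper = np.exp(-0.5 * (idx / (half / 2.0)) ** 2)   # Gaussian taper window
    coeff = stencil[idx % n] * taper                   # truncated, tapered CD operator

    f = np.sin(2 * np.pi * np.arange(n) / 32.0)
    dfdx = sum(c * np.roll(f, i) for c, i in zip(coeff, idx))  # circular convolution
    ```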

  12. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    PubMed

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.

  14. SU-G-IeP3-08: Image Reconstruction for Scanning Imaging System Based On Shape-Modulated Point Spreading Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ruixing; Yang, LV; Xu, Kele

    Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, due to the Gaussian-like shape of the point spread function (PSF), image components with coherent high-frequency content are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF has been acquired. We propose a novel method for deconvolution of images obtained using a shape-modulated PSF. Methods: We use two different types of PSF - Gaussian shape and donut shape - to convolve the original image in order to simulate the process of scanning imaging. By deconvolving the two images with the corresponding given priors, the image quality of the deblurred images is compared. We then find the critical size of the donut shape, compared with the Gaussian shape, that gives similar deconvolution results. Calculation of the tightly focused spot produced by a radially polarized beam shows that a donut of this size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full width at half maximum (FWHM) ratio of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained with our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF, compared with the non-modulated Gaussian PSF. A donut with a size smaller than our critical value is obtained. Conclusion: Donut-shaped PSFs are shown to be useful and achievable in imaging and deconvolution processing, which is expected to have potential practical applications in high resolution imaging of biological samples.

  15. Extension of a nonlinear systems theory to general-frequency unsteady transonic aerodynamic responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1993-01-01

    A methodology for modeling nonlinear unsteady aerodynamic responses, for subsequent use in aeroservoelastic analysis and design, using the Volterra-Wiener theory of nonlinear systems is presented. The methodology is extended to predict nonlinear unsteady aerodynamic responses of arbitrary frequency. The Volterra-Wiener theory uses multidimensional convolution integrals to predict the response of nonlinear systems to arbitrary inputs. The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code is used to generate linear and nonlinear unit impulse responses that correspond to each of the integrals for a rectangular wing with a NACA 0012 section with pitch and plunge degrees of freedom. The computed kernels then are used to predict linear and nonlinear unsteady aerodynamic responses via convolution and compared to responses obtained using the CAP-TSD code directly. The results indicate that the approach can be used to predict linear unsteady aerodynamic responses exactly for any input amplitude or frequency at a significant cost savings. Convolution of the nonlinear terms results in nonlinear unsteady aerodynamic responses that compare reasonably well with those computed using the CAP-TSD code directly but at significant computational cost savings.

  16. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    NASA Astrophysics Data System (ADS)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to a direction in supervised mode. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
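
    The two augmentations named can be sketched as follows for images scaled to [0, 1]; the noise levels are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def add_gaussian(img, sigma=0.02):
        """Additive Gaussian noise, clipped back to the valid intensity range."""
        return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

    def add_salt_pepper(img, amount=0.01):
        """Set a random `amount` of pixels to pure black or pure white."""
        out = img.copy()
        u = rng.random(img.shape)
        out[u < amount / 2] = 0.0          # pepper
        out[u > 1 - amount / 2] = 1.0      # salt
        return out
    ```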

  17. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    PubMed

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  18. Dispersion-convolution model for simulating peaks in a flow injection system.

    PubMed

    Pai, Su-Cheng; Lai, Yee-Hwong; Chiao, Ling-Yun; Yu, Tiing

    2007-01-12

    A dispersion-convolution model is proposed for simulating peak shapes in a single-line flow injection system. It is based on the assumption that an injected sample plug is expanded due to a "bulk" dispersion mechanism along the length coordinate, and that after traveling over a distance or a period of time, the sample zone will develop into a Gaussian-like distribution. This spatial pattern is further transformed to a temporal coordinate by a convolution process, and finally a temporal peak image is generated. The feasibility of the proposed model has been examined by experiments with various coil lengths, sample sizes and pumping rates. An empirical dispersion coefficient (D*) can be estimated by using the observed peak position, height and area (tp*, h* and At*) from a recorder. An empirical temporal shift (Φ*) can be further approximated by Φ* = D*/u², which becomes an important parameter in the restoration of experimental peaks. Also, the dispersion coefficient can be expressed as a second-order polynomial function of the pumping rate Q, for which D*(Q) = δ0 + δ1Q + δ2Q². The optimal dispersion occurs at a pumping rate of Qopt = √(δ0/δ2). This explains the interesting "Nike-swoosh" relationship between the peak height and pumping rate. The excellent coherence of theoretical and experimental peak shapes confirms that the temporal distortion effect is the dominant reason for the peak asymmetry in flow injection analysis.
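
    A quick numerical check of the stated optimum, under the assumption that "optimal dispersion" means minimizing the dispersion per unit pumping rate, D*(Q)/Q; the coefficient values are made up.

    ```python
    import numpy as np

    d0, d1, d2 = 0.8, 0.05, 0.002          # hypothetical delta coefficients
    Q = np.linspace(0.1, 40.0, 4000)
    D = d0 + d1 * Q + d2 * Q ** 2          # D*(Q), second-order polynomial in Q

    print(np.sqrt(d0 / d2))                # Qopt = sqrt(δ0/δ2) = 20.0
    print(Q[np.argmin(D / Q)])             # numerical minimizer of D*/Q, also ≈ 20
    ```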

  19. Analysis of randomly time varying systems by gaussian closure technique

    NASA Astrophysics Data System (ADS)

    Dash, P. K.; Iyengar, R. N.

    1982-07-01

    The Gaussian probability closure technique is applied to study the random response of multidegree of freedom stochastically time varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.

  20. MUSIC: MUlti-Scale Initial Conditions

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.
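
    Stripped of the multi-grid and refinement machinery, the core construction is Gaussian white noise convolved with a transfer-function kernel; a toy FFT-based sketch with an assumed kernel, not MUSIC's own:

    ```python
    import numpy as np

    n = 256
    rng = np.random.default_rng(42)
    noise = rng.normal(size=(n, n))                  # Gaussian white noise field

    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    T = np.exp(-(np.hypot(kx, ky) / 0.05) ** 2)      # toy transfer function kernel

    field = np.fft.ifft2(np.fft.fft2(noise) * T).real   # correlated Gaussian field
    ```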

  1. Capacity of noncoherent MFSK channels

    NASA Technical Reports Server (NTRS)

    Bar-David, I.; Butman, S. A.; Klass, M. J.; Levitt, B. K.; Lyon, R. F.

    1974-01-01

    Performance limits theoretically achievable over noncoherent channels perturbed by additive Gaussian noise in hard decision, optimal, and soft decision receivers are computed as functions of the number of orthogonal signals and the predetection signal-to-noise ratio. Equations are derived for orthogonal signal capacity, the ultimate MFSK capacity, and the convolutional coding and decoding limit. It is shown that performance improves as the signal-to-noise ratio increases, provided the bandwidth can be increased, that the optimum number of signals is not infinite (except for the optimal receiver), and that the optimum number decreases as the signal-to-noise ratio decreases, but is never less than 7 for even the hard decision receiver.

  2. Response measurement by laser Doppler vibrometry in vibration qualification tests with non-Gaussian random excitation

    NASA Astrophysics Data System (ADS)

    Troncossi, M.; Di Sante, R.; Rivola, A.

    2016-10-01

    In the field of vibration qualification testing, random excitations are typically imposed on the tested system in terms of a power spectral density (PSD) profile. This is one of the most popular ways to control the shaker or slip table for durability tests. However, these excitations (and the corresponding system responses) exhibit a Gaussian probability distribution, whereas not all real-life excitations are Gaussian, causing the response to be also non-Gaussian. In order to introduce non-Gaussian peaks, a further parameter, i.e., kurtosis, has to be controlled in addition to the PSD. However, depending on the specimen behaviour and input signal characteristics, the use of non-Gaussian excitations with high kurtosis and a given PSD does not automatically imply a non-Gaussian stress response. For an experimental investigation of these coupled features, suitable measurement methods need to be developed in order to estimate the stress amplitude response at critical failure locations and consequently evaluate the input signals most representative for real-life, non-Gaussian excitations. In this paper, a simple test rig with a notched cantilevered specimen was developed to measure the response and examine the kurtosis values in the case of stationary Gaussian, stationary non-Gaussian, and burst non-Gaussian excitation signals. The laser Doppler vibrometry technique was used in this type of test for the first time, in order to estimate the specimen stress amplitude response as proportional to the differential displacement measured at the notch section ends. A method based on accelerometer measurements to correct for the occasional signal dropouts occurring during the experiment is described. The results demonstrate the ability of the test procedure to evaluate the output signal features and therefore to select the most appropriate input signal for the fatigue test.
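
    The distinction between stationary Gaussian and burst non-Gaussian excitations is captured by the kurtosis (≈3 for a Gaussian); a sketch with an illustrative block-wise burst envelope, not the paper's excitation signals:

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(3)
    x = rng.normal(size=100_000)                            # stationary Gaussian record

    env = np.repeat(np.where(rng.random(1000) < 0.02, 5.0, 0.5), 100)  # sparse bursts
    y = x * env                                             # burst non-Gaussian record

    print(kurtosis(x, fisher=False))                        # ≈ 3
    print(kurtosis(y, fisher=False))                        # >> 3: high-kurtosis peaks
    ```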

  3. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    PubMed

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets a new state of the art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
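
    A minimal 1D illustration of atrous convolution: the same filter taps are applied with rate − 1 holes between them, enlarging the field of view without adding parameters.

    ```python
    import numpy as np

    def atrous_1d(x, w, rate):
        """Dilated convolution: y[i] = sum_k w[k] * x[i + rate*k] (valid positions)."""
        span = rate * (len(w) - 1) + 1                 # effective field of view
        return np.array([np.dot(w, x[i:i + span:rate])
                         for i in range(len(x) - span + 1)])

    x = np.arange(12, dtype=float)
    w = np.array([1.0, 0.0, -1.0])
    print(atrous_1d(x, w, rate=1))   # standard convolution, field of view 3
    print(atrous_1d(x, w, rate=2))   # same 3 taps, field of view 5
    ```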

  4. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    PubMed

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment comparing to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.

  5. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network

    PubMed Central

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-01-01

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, physical sensors are limited by the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The advantage of the novel technique is further demonstrated using a simply supported beam experiment, in comparison to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy. PMID:29231868

  6. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    NASA Astrophysics Data System (ADS)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are closely related key issues here because the temperature fields being processed are unavoidably noisy. We focus only on the diffusion term because it is the most difficult term to estimate in the procedure, as it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised to best reconstruct the heat source fields. The influence of both the size and the level of a localised heat source is discussed. The results are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin aluminium-alloy plate. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from measured temperature fields are compared with the imposed heat sources. The results illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
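
    The central operation, estimating a Laplacian by convolving a noisy field with second derivatives of a Gaussian, can be sketched with scipy (a minimal illustration; the synthetic field, noise level, and filter width are arbitrary assumptions, and the paper's optimisation of the filter width is not reproduced):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Synthetic noisy temperature-variation field with a localised hot spot.
        y, x = np.mgrid[0:128, 0:128]
        theta = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 10.0 ** 2))
        noisy = theta + 0.01 * np.random.default_rng(1).standard_normal(theta.shape)

        sigma = 4.0                                # filter width (pixels), to be tuned

        # Laplacian estimated by convolving with second derivatives of a Gaussian:
        # d2/dx2 + d2/dy2, each obtained as a derivative-of-Gaussian filtering.
        lap = (gaussian_filter(noisy, sigma, order=(0, 2)) +
               gaussian_filter(noisy, sigma, order=(2, 0)))

        # For comparison: a naive finite-difference Laplacian amplifies the noise.
        naive = (np.roll(noisy, 1, 0) + np.roll(noisy, -1, 0) +
                 np.roll(noisy, 1, 1) + np.roll(noisy, -1, 1) - 4 * noisy)
        print(lap.std(), naive.std())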

  7. Statistical Modeling of Retinal Optical Coherence Tomography.

    PubMed

    Amini, Zahra; Rabbani, Hossein

    2016-06-01

    In this paper, a new model for retinal Optical Coherence Tomography (OCT) images is proposed. This statistical model is based on introducing a nonlinear Gaussianization transform to convert the probability distribution function (pdf) of each OCT intra-retinal layer to a Gaussian distribution. The retina is a layered structure, and in OCT each of these layers has a specific pdf which is corrupted by speckle noise; therefore, a mixture model for statistical modeling of OCT images is proposed. A Normal-Laplace distribution, which is a convolution of a Laplace pdf and Gaussian noise, is proposed as the distribution of each component of this model. The reason for choosing the Laplace pdf is the monotonically decaying behavior of OCT intensities in each layer for healthy cases. After fitting a mixture model to the data, each component is Gaussianized and all of them are combined by the averaged maximum a posteriori (AMAP) method. To demonstrate the ability of this method, a new contrast enhancement method based on this statistical model is proposed and tested on thirteen healthy 3D OCTs taken by the Topcon 3D OCT and five 3D OCTs from Age-related Macular Degeneration (AMD) patients, taken by the Zeiss Cirrus HD-OCT. Comparing the results with two competing techniques, the superiority of the proposed method is demonstrated both visually and numerically. Furthermore, to prove the efficacy of the proposed method for a more direct and specific purpose, an improvement in the segmentation of intra-retinal layers using the proposed contrast enhancement method as a preprocessing step is demonstrated.
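
    The Normal-Laplace construction is easy to verify numerically: summing a Laplace variate and independent Gaussian noise produces samples whose histogram matches the convolution of the two densities. A minimal numpy sketch (the parameter values are arbitrary assumptions):

        import numpy as np

        rng = np.random.default_rng(2)
        n = 200_000
        b, sigma = 1.0, 0.5                        # Laplace scale, Gaussian noise std

        # A Normal-Laplace variate is the sum of a Laplace variate (the clean
        # layer-intensity model) and independent Gaussian noise, i.e. its pdf
        # is the convolution of the two densities.
        samples = rng.laplace(loc=0.0, scale=b, size=n) + \
                  rng.normal(loc=0.0, scale=sigma, size=n)

        # Check the convolution numerically on a grid.
        t = np.linspace(-8, 8, 1601)
        dt = t[1] - t[0]
        laplace_pdf = np.exp(-np.abs(t) / b) / (2 * b)
        gauss_pdf = np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        nl_pdf = np.convolve(laplace_pdf, gauss_pdf, mode="same") * dt

        hist, edges = np.histogram(samples, bins=200, range=(-8, 8), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        print(np.max(np.abs(np.interp(centers, t, nl_pdf) - hist)))  # small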

  8. Gaussian process based independent analysis for temporal source separation in fMRI.

    PubMed

    Hald, Ditte Høvenhoff; Henao, Ricardo; Winther, Ole

    2017-05-15

    Functional Magnetic Resonance Imaging (fMRI) gives us a unique insight into the processes of the brain, and opens up for analyzing the functional activation patterns of the underlying sources. Task-inferred supervised learning with restrictive assumptions in the regression set-up restricts the exploratory nature of the analysis. Fully unsupervised independent component analysis (ICA) algorithms, on the other hand, can struggle to detect clear classifiable components on single-subject data. We attribute this shortcoming to inadequate modeling of the fMRI source signals, namely the failure to incorporate their temporal nature. fMRI source signals, biological stimuli and non-stimuli-related artifacts are all smooth over a time-scale compatible with the sampling time (TR). We therefore propose Gaussian process ICA (GPICA), which facilitates temporal dependency by the use of Gaussian process source priors. On two fMRI data sets with different sampling frequencies, we show that the GPICA-inferred temporal components and associated spatial maps allow for a more definite interpretation than standard temporal ICA methods. The temporal structures of the sources are controlled by the covariance of the Gaussian process, specified by a kernel function with an interpretable and controllable temporal length scale parameter. We propose a hierarchical model specification, considering both instantaneous and convolutive mixing, and we infer source spatial maps, temporal patterns and temporal length scale parameters by Markov Chain Monte Carlo. A companion implementation made as a plug-in for SPM can be downloaded from https://github.com/dittehald/GPICA. Copyright © 2017 Elsevier Inc. All rights reserved.
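
    The role of the kernel length-scale parameter can be illustrated by drawing source priors from a Gaussian process with a squared-exponential covariance (a generic sketch, not the GPICA plug-in; the TR and length-scale values are assumptions):

        import numpy as np

        def se_kernel(t, length_scale, var=1.0):
            """Squared-exponential covariance; length_scale sets the temporal
            smoothness of the Gaussian-process source prior."""
            d = t[:, None] - t[None, :]
            return var * np.exp(-0.5 * (d / length_scale) ** 2)

        rng = np.random.default_rng(3)
        t = np.arange(0.0, 200.0, 2.0)             # scan times, TR = 2 s (assumed)

        for ell in (2.0, 10.0, 30.0):              # short to long temporal scales
            K = se_kernel(t, ell) + 1e-8 * np.eye(t.size)   # jitter for stability
            source = rng.multivariate_normal(np.zeros(t.size), K)
            print(f"length scale {ell:5.1f} s -> sample std {source.std():.2f}")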

  9. Distinguishing response conflict and task conflict in the Stroop task: evidence from ex-Gaussian distribution analysis.

    PubMed

    Steinhauser, Marco; Hübner, Ronald

    2009-10-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task. PsycINFO Database Record (c) 2009 APA, all rights reserved.
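
    In Python, the ex-Gaussian is available as scipy.stats.exponnorm, which parameterises the distribution by K = tau/sigma. A minimal sketch of the decomposition on simulated response times (the millisecond values are arbitrary assumptions, not data from the experiments):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # Simulated response times: Gaussian stage (mu, sigma) plus an
        # exponential stage (tau).
        mu, sigma, tau = 450.0, 50.0, 120.0
        rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

        # scipy parameterises the ex-Gaussian ("exponnorm") by K = tau / sigma.
        K, loc, scale = stats.exponnorm.fit(rts)
        print(f"mu ~ {loc:.1f}, sigma ~ {scale:.1f}, tau ~ {K * scale:.1f}")

        # Conflict effects can then be compared separately on the Gaussian
        # component (loc, scale) and on the exponential component (tau).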

  10. Displacements of the earth's surface due to atmospheric loading - Effects of gravity and baseline measurements

    NASA Technical Reports Server (NTRS)

    Van Dam, T. M.; Wahr, J. M.

    1987-01-01

    Atmospheric mass loads and deforms the earth's crust. By performing a convolution sum between daily, global barometric pressure data and mass loading Green's functions, the time dependent effects of atmospheric loading, including those associated with short-term synoptic storms, on surface point positioning measurements and surface gravity observations are estimated. The response for both an oceanless earth and an earth with an inverted barometer ocean is calculated. Load responses for near-coastal stations are significantly affected by the inclusion of an inverted barometer ocean. Peak-to-peak vertical displacements are frequently 15-20 mm, with accompanying gravity perturbations of 3-6 μGal. Baseline changes can be as large as 20 mm or more. The perturbations are largest at higher latitudes and during winter months. These amplitudes are consistent with the results of Rabbel and Zschau (1985), who modeled synoptic pressure disturbances as Gaussian functions of radius around a central point. Deformation can be adequately computed using real pressure data from points within about 1000 km of the station; knowledge of local pressure alone is not sufficient. Rabbel and Zschau's hypothesized corrections for these displacements, which use local pressure and the regionally averaged pressure, prove accurate at points well inland but are, in general, inadequate within a few hundred kilometers of the coast.

  11. A GIHS-based spectral preservation fusion method for remote sensing images using edge restored spectral modulation

    NASA Astrophysics Data System (ADS)

    Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen

    2014-02-01

    High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.

  12. Frozen Gaussian approximation for 3D seismic tomography

    NASA Astrophysics Data System (ADS)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2018-05-01

    Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and high-frequency seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields for convenient application of FGA, and with this reformulation one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed based on a local fast Fourier transform, which greatly improves the speed of reconstruction in the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly applying wave-equation-based seismic tomography methods to real data around their dominant frequencies.

  13. Full Waveform Modeling of Transient Electromagnetic Response Based on Temporal Interpolation and Convolution Method

    NASA Astrophysics Data System (ADS)

    Qi, Youzheng; Huang, Ling; Wu, Xin; Zhu, Wanhua; Fang, Guangyou; Yu, Gang

    2017-07-01

    Quantitative modeling of the transient electromagnetic (TEM) response requires consideration of the full transmitter waveform, i.e., not only the specific current waveform in a half cycle but also the bipolar repetition. In this paper, we present a novel temporal interpolation and convolution (TIC) method to facilitate accurate TEM modeling. We first calculate the temporal basis response on a logarithmic scale using fast digital-filter-based methods. Then, we introduce a function named hamlogsinc, in the framework of discrete signal processing theory, to reconstruct the basis function and convolve it with the positive half of the waveform. Finally, a superposition procedure is used to account for the effect of previous bipolar waveforms. Comparisons with the established fast Fourier transform method demonstrate that our TIC method achieves the same accuracy with a shorter computing time.

  14. Dose calculation algorithm of fast fine-heterogeneity correction for heavy charged particle radiotherapy.

    PubMed

    Kanematsu, Nobuyuki

    2011-04-01

    This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes. Use of intermediate grids normal to the beam field has solved the beam-tilting distortion. The interplay of arrangement between beams and grids was found to be another intrinsic source of artifact. Inclusion of a rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulatory manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. In these cases, while the elementary pencil beams individually split into several tens, the calculation time increased only by several times with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  15. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to the next generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data was collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using the nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
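
    The Gaussian-decomposition step, fitting a waveform as a mixture of Gaussians by nonlinear least squares, can be sketched as follows; this Python fragment mirrors the idea of the authors' R/NLS fit but is an illustration only, with synthetic echoes and hand-picked initial guesses:

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussians(t, *p):
            """Sum of Gaussians; p = (A1, mu1, s1, A2, mu2, s2, ...)."""
            out = np.zeros_like(t)
            for A, mu, s in zip(p[0::3], p[1::3], p[2::3]):
                out += A * np.exp(-0.5 * ((t - mu) / s) ** 2)
            return out

        t = np.arange(0.0, 100.0)                  # time bins (ns, assumed)
        rng = np.random.default_rng(5)
        true = gaussians(t, 0.9, 30.0, 3.0, 0.5, 55.0, 5.0)   # canopy + ground echoes
        wave = true + 0.02 * rng.standard_normal(t.size)

        p0 = [1.0, 28.0, 4.0, 0.4, 58.0, 4.0]      # initial guesses from peak picking
        popt, _ = curve_fit(gaussians, t, wave, p0=p0)
        print(np.round(popt, 2))                   # recovered echo parameters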

  16. Fovea detection in optical coherence tomography using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Liefers, Bart; Venhuizen, Freerk G.; Theelen, Thomas; Hoyng, Carel; van Ginneken, Bram; Sánchez, Clara I.

    2017-02-01

    The fovea is an important clinical landmark that is used as a reference for assessing various quantitative measures, such as central retinal thickness or drusen count. In this paper we propose a novel method for automatic detection of the foveal center in Optical Coherence Tomography (OCT) scans. Although the clinician will generally aim to center the OCT scan on the fovea, post-acquisition image processing will give a more accurate estimate of the true location of the foveal center. A Convolutional Neural Network (CNN) was trained on a set of 781 OCT scans that classifies each pixel in the OCT B-scan with a probability of belonging to the fovea. Dilated convolutions were used to obtain a large receptive field, while maintaining pixel-level accuracy. In order to train the network more effectively, negative patches were sampled selectively after each epoch. After CNN classification of the entire OCT volume, the predicted foveal center was chosen as the voxel with maximum output probability, after applying an optimized three-dimensional Gaussian blurring. We evaluate the performance of our method on a data set of 99 OCT scans presenting different stages of Age-related Macular Degeneration (AMD). The fovea was correctly detected in 96.9% of the cases, with a mean distance error of 73 μm (±112 μm). This result was comparable to the performance of a second human observer who obtained a mean distance error of 69 μm (±94 μm). Experiments showed that the proposed method is accurate and robust even in retinas heavily affected by pathology.
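
    The post-classification step, 3D Gaussian blurring followed by taking the voxel of maximum probability, can be sketched in a few lines (the probability volume here is synthetic and the blurring widths are arbitrary assumptions rather than the optimised values from the paper):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(6)

        # Stand-in for the CNN output: a per-voxel fovea probability volume.
        prob = rng.random((32, 64, 64)) * 0.1
        prob[16, 40, 25] = 1.0                     # assumed true foveal center

        # 3D Gaussian blurring aggregates noisy per-voxel evidence before the
        # maximum is taken; sigma would be optimised on a validation set.
        smoothed = gaussian_filter(prob, sigma=(2, 4, 4))
        center = np.unravel_index(np.argmax(smoothed), smoothed.shape)
        print("predicted foveal center (B-scan, row, col):", center)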

  17. Estimating Isometric Tension of Finger Muscle Using Needle EMG Signals and the Twitch Contraction Model

    NASA Astrophysics Data System (ADS)

    Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko

    We address an estimation method of isometric muscle tension of fingers, as fundamental research for a neural signal-based prosthesis of fingers. We utilize needle electromyogram (EMG) signals, which have approximately equivalent information to peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected from the needle EMG signals. This convolution estimates the probability density of spike-invoking time in the muscle; here we hypothesize that each motor unit in a muscle activates spikes independently based on the same probability density function. The second convolution is between the result of the previous convolution and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed that there is good correlation between the estimated tension of the muscle and the actual tension, with correlation coefficients >0.9 in 59% and >0.8 in 89% of all trials.
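
    The two convolutions can be sketched directly with numpy (an illustration under assumed parameters; the 10 ms spread and the twitch time course are stand-ins, not the authors' identified values):

        import numpy as np

        fs = 1000.0                                # sampling rate [Hz] (assumed)
        t = np.arange(0, 0.2, 1 / fs)

        # Detected spike times from the needle EMG, as a binary spike array.
        spikes = np.zeros(2000)
        spikes[[100, 340, 610, 900, 1150, 1500]] = 1.0

        # First convolution: spike array with a normal distribution, giving the
        # probability density of spike-invoking times across the motor units.
        sg = 0.010 * fs                            # 10 ms spread (assumed)
        k = np.arange(-100, 101)
        gauss = np.exp(-0.5 * (k / sg) ** 2) / (sg * np.sqrt(2 * np.pi))
        density = np.convolve(spikes, gauss, mode="same")

        # Second convolution: with the isometric twitch (the motor-unit impulse
        # response), modelled here as a simple rise-and-decay curve.
        twitch = (t / 0.03) * np.exp(1 - t / 0.03)
        tension = np.convolve(density, twitch, mode="full")[: spikes.size] / fs
        print(tension.max())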

  18. Symmetric convolution of asymmetric multidimensional sequences using discrete trigonometric transforms.

    PubMed

    Foltz, T M; Welsh, B M

    1999-01-01

    This paper uses the fact that the discrete Fourier transform diagonalizes a circulant matrix to provide an alternate derivation of the symmetric convolution-multiplication property for discrete trigonometric transforms. Derived in this manner, the symmetric convolution-multiplication property extends easily to multiple dimensions using the notion of block circulant matrices and generalizes to multidimensional asymmetric sequences. The symmetric convolution of multidimensional asymmetric sequences can then be accomplished by taking the product of the trigonometric transforms of the sequences and then applying an inverse trigonometric transform to the result. An example is given of how this theory can be used for applying a two-dimensional (2-D) finite impulse response (FIR) filter with nonlinear phase which models atmospheric turbulence.

  19. LASER BIOLOGY AND MEDICINE: Light scattering study of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Beuthan, J.; Netz, U.; Minet, O.; Klose, Annerose D.; Hielscher, A. H.; Scheel, A.; Henniger, J.; Müller, G.

    2002-11-01

    The distribution of light scattered by finger joints is studied in the near-IR region. It is shown that variations in the optical parameters of the tissue (scattering coefficient μs, absorption coefficient μa, and anisotropy factor g) depend on the presence of rheumatoid arthritis (RA). At the first stage, the distribution of scattered light was measured in diaphanoscopic experiments. The convolution of a Gaussian error function with the scattering phase function proved to be a good approximation of the data obtained. Then, a new method was developed for the reconstruction of the distribution of optical parameters in the finger cross section. Model tests of the quality of this reconstruction method show good results.

  20. A Demons algorithm for image registration with locally adaptive regularization.

    PubMed

    Cahill, Nathan D; Noble, J Alison; Hawkes, David J

    2009-01-01

    Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
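
    A minimal sketch of one Demons iteration with the classical uniform Gaussian regularisation (the step the article generalises to locally adaptive smoothing) might look as follows; the test images, smoothing width, and iteration count are arbitrary assumptions:

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def demons_step(fixed, moving, disp, sigma=2.0):
            # Force vectors from the intensity difference and the fixed-image
            # gradient (Thirion's "demons" force), accumulated into disp and
            # then regularised by Gaussian convolution.
            gy, gx = np.gradient(fixed)
            diff = fixed - moving
            denom = gx**2 + gy**2 + diff**2
            denom[denom == 0] = 1.0
            disp[0] += diff * gy / denom           # row displacements
            disp[1] += diff * gx / denom           # column displacements
            # Uniform Gaussian smoothing; the article's contribution is to make
            # this regularisation step locally adaptive instead.
            return gaussian_filter(disp, sigma=(0, sigma, sigma))

        def warp(img, disp):
            rows, cols = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
            return map_coordinates(img, [rows + disp[0], cols + disp[1]], order=1)

        square = np.zeros((64, 64)); square[20:40, 20:40] = 1.0
        fixed = gaussian_filter(square, 2.0)                 # smooth test images
        moving = np.roll(fixed, (3, 2), axis=(0, 1))         # known misalignment
        disp = np.zeros((2,) + fixed.shape)
        for _ in range(50):
            disp = demons_step(fixed, warp(moving, disp), disp)
        print("mean abs error:", np.abs(fixed - warp(moving, disp)).mean())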

  1. A method for atomic-level noncontact thermometry with electron energy distribution

    NASA Astrophysics Data System (ADS)

    Kinoshita, Ikuo; Tsukada, Chiharu; Ouchi, Kohei; Kobayashi, Eiichi; Ishii, Juntaro

    2017-04-01

    We devised a new method of determining the temperatures of materials from their electron-energy distributions. The Fermi-Dirac distribution convolved with a linear combination of Gaussian and Lorentzian distributions was fitted to the photoelectron spectrum measured for the Au(110) single-crystal surface at a liquid-N2-cooled temperature. The fitting successfully determined the surface-local thermodynamic temperature and the energy resolution simultaneously from the photoelectron spectrum, without any preliminary results of other measurements. The determined thermodynamic temperature was 99 ± 2.1 K, which was in good agreement with the reference temperature of 98.5 ± 0.5 K measured using a silicon diode sensor attached to the sample holder.
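
    The fitting idea, a Fermi-Dirac edge convolved with a broadening kernel, can be sketched as follows; for brevity this illustration uses a pure Gaussian kernel instead of the paper's Gaussian-Lorentzian combination, and all numerical values are assumptions:

        import numpy as np
        from scipy.optimize import curve_fit

        kB = 8.617e-5                              # Boltzmann constant [eV/K]
        E = np.linspace(-0.2, 0.2, 801)            # energy relative to E_F [eV]
        dE = E[1] - E[0]

        def model(E, T, w, a, b):
            """Fermi-Dirac step at temperature T, convolved with a Gaussian of
            std w (instrumental resolution); a, b are amplitude and offset."""
            fd = 1.0 / (np.exp(E / (kB * T)) + 1.0)
            k = np.arange(-200, 201) * dE
            g = np.exp(-0.5 * (k / w) ** 2)
            g /= g.sum()
            return a * np.convolve(fd, g, mode="same") + b

        rng = np.random.default_rng(7)
        data = model(E, 99.0, 0.010, 1.0, 0.02) + 0.005 * rng.standard_normal(E.size)

        # Temperature and energy resolution are determined simultaneously.
        popt, pcov = curve_fit(model, E, data, p0=(80.0, 0.02, 1.0, 0.0))
        print(f"T = {popt[0]:.1f} K, resolution sigma = {1e3 * popt[1]:.1f} meV")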

  2. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy

    NASA Astrophysics Data System (ADS)

    Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris

    2018-04-01

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver operating characteristics.

  3. Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy.

    PubMed

    Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris

    2018-04-06

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver operating characteristics.

  4. A Comparative Study of Interferometric Regridding Algorithms

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Safaeinili, Ali

    1999-01-01

    The paper discusses regridding options: (1) Interpolating data that is not sampled on a uniform grid, is noisy, and contains gaps is a difficult problem. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor - fast and easy, but shows some artifacts in shaded-relief images. (b) Simplicial interpolator - uses the plane through the three points containing the point where interpolation is required; reasonably fast and accurate. (c) Convolutional - uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting - uses the height data centered in a box about a given point and does a weighted least squares surface fit.

  5. Near-infrared optical-absorption behavior in high-beta nonlinear optical chromophore-polymer guest-host materials. II. Dye spacer length effects in an amorphous polycarbonate copolymer host

    NASA Astrophysics Data System (ADS)

    Barto, Richard R.; Frank, Curtis W.; Bedworth, Peter V.; Ermer, Susan; Taylor, Rebecca E.

    2005-06-01

    In the second of a three-part series, spectral absorption behavior of nonlinear optical (NLO) dyes incorporated into amorphous polycarbonate, comprised of a homologous series of dialkyl spacer groups extending from the midsection of the dye molecule, is characterized by UV-Vis and photothermal deflection spectroscopy. The dyes are structural analogs of the NLO dye FTC [2-(3-cyano-4-{2-[5-(2-{4-[ethyl-(2-methoxyethyl)amino]phenyl}vinyl)-3,4-diethylthiophen-2-yl]vinyl}-5,5-dimethyl-5H-furan-2-ylidene)malononitrile]. Previous Monte Carlo calculations [B. H. Robinson and L. R. Dalton, J. Phys. Chem. A 104, 4785 (2000)] predict a strong dependence of the macroscopic nonlinear optical susceptibility on the chromophore waist-to-length aspect ratio in electric-field-poled films, arising from interactions between chromophores. It is expected that these interactions will play a role in the absorption characteristics of unpoled films as well. The spacer groups range in length from diethyl to dihexyl, and each dye is studied over a wide range of concentrations. Among the four dyes studied, a universal dependence of near-IR loss on inhomogeneous broadening of the dye main absorption peak is found. The inhomogeneous width and its concentration dependence are seen to vary with spacer length in a manner characteristic of the near-IR loss-concentration slope at transmission wavelengths of 1.06 and 1.3 μm, but not at 1.55 μm. The lower-wavelength loss behavior is assigned to purely Gaussian broadening, and is described by classical mixing thermodynamic quantities based on the Marcus theory of inhomogeneous broadening [R. A. Marcus, J. Chem. Phys. 43, 1261 (1965)], modeled as a convolution of dye-dye dipole broadening and dye-polymer van der Waals broadening. The Gaussian dipole interactions follow a Loring dipole-broadening description [R. F. Loring, J. Phys. Chem. 94, 513 (1990)] dominated by the excited-state dipole moment, and have a correlated homogeneous broadening contribution. The long-wavelength loss behavior has a non-Gaussian dye-dye dipole contribution which follows Kador's broadening analysis [L. Kador, J. Chem. Phys. 95, 5574 (1991)], with a net broadening described by a convolution of this term with a Gaussian van der Waals interaction given by Obata et al. [M. Obata, S. Machida, and K. Horie, J. Polym. Sci. B 37, 2173 (1999)], with each term governed by the dye spacer length. A minimum in broadening and loss-concentration slope at a spacer length of four carbons per alkyl at all wavelengths has important consequences for practical waveguide devices, and is of higher aspect ratio than the spherical limit shown by Robinson and Dalton to minimize dipole interactions under a poling field.

  6. Application of the Convolution Formalism to the Ocean Tide Potential: Results from the Gravity and Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Desai, S. D.; Yuan, D. -N.

    2006-01-01

    A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.

  7. Distinguishing Response Conflict and Task Conflict in the Stroop Task: Evidence from Ex-Gaussian Distribution Analysis

    ERIC Educational Resources Information Center

    Steinhauser, Marco; Hubner, Ronald

    2009-01-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were…

  8. Optimal fitting of Gaussian-apodized or under-resolved emission lines in Fourier transform spectra providing new insights on the velocity structure of NGC 6720

    NASA Astrophysics Data System (ADS)

    Martin, Thomas B.; Prunet, Simon; Drissen, Laurent

    2016-12-01

    An analysis of the kinematics of NGC 6720 is performed on the commissioning data obtained with SITELLE, the Canada-France-Hawaii Telescope's new imaging Fourier transform spectrometer. In order to carefully measure the small broadening effect of a shell expansion on an unresolved emission line, we have determined a computationally robust implementation of the convolution of a Gaussian with a sinc instrumental line shape which avoids arithmetic overflows. This model can be used to measure line broadening of typically a few km s-1 even at low spectral resolution (R < 5000). We have also designed the corresponding set of Gaussian apodizing functions that are now used by ORBS, SITELLE's reduction pipeline. We have implemented this model in ORCS, a fitting engine for SITELLE's data, and used it to derive the [S II] density map of the central part of the nebula. The study of the broadening of the [N II] lines shows that the main ring and the central lobe are two different shells with different expansion velocities. We have also derived deep and spatially resolved velocity maps of the halo in [N II] and Hα and found that the brightest bubbles originate from two bipolar structures with a velocity difference of more than 35 km s-1 lying at the poles of a possibly unique halo shell expanding at a velocity of more than 15 km s-1.
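
    Numerically, the line model is the convolution of a sinc instrumental line shape with a Gaussian. The brute-force sketch below (an illustration only, with arbitrary widths and grid; the paper's contribution is a robust analytic form that avoids this) shows how even sub-resolution broadening measurably lowers the line peak, which is the effect the fit exploits:

        import numpy as np

        # Spectral axis in units of the sinc width parameter (assumed grid).
        x = np.linspace(-50.0, 50.0, 4001)

        def sinc_ils(x, w):
            """Unapodized FTS instrumental line shape: sinc(x / w)."""
            return np.sinc(x / w)

        def sincgauss(x, w, sigma):
            """Numerical convolution of the sinc ILS with a Gaussian whose
            width models the broadening of an expanding shell."""
            g = np.exp(-0.5 * (x / sigma) ** 2)
            g /= g.sum()
            return np.convolve(sinc_ils(x, w), g, mode="same")

        # Broadening well below the spectral resolution still lowers and
        # widens the line peak in a measurable way.
        for sigma in (0.1, 0.5, 2.0):
            print(sigma, sincgauss(x, w=1.0, sigma=sigma).max())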

  9. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the de Bruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal-to-noise ratio E_b/N_0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10(exp -8) and a BER of 1.4 x 10(exp -9). The (15,1/6) code to be used by the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Since the longer codes require higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  10. Brillouin precursors in Debye media

    NASA Astrophysics Data System (ADS)

    Macke, Bruno; Ségard, Bernard

    2015-05-01

    We theoretically study the formation of Brillouin precursors in Debye media. We point out that the precursors are visible only at propagation distances such that the impulse response of the medium is essentially determined by the frequency dependence of its absorption and is practically Gaussian. By simple convolution, we then obtain explicit analytical expressions of the transmitted waves generated by reference incident waves, distinguishing precursor and main signal by a simple examination of the long-time behavior of the overall signal. These expressions are in good agreement with the signals obtained in numerical or real experiments performed on water in the radio-frequency domain and explain in particular some observed shapes of the precursor. Results are obtained for other remarkable incident waves. In addition, we show quite generally that the shape of the Brillouin precursor appearing alone at sufficiently large propagation distance, and the law giving its amplitude as a function of this distance, do not depend on the precise form of the incident wave but only on its integral properties. The effect of a static conductivity of the medium is also examined, and explicit analytical results are again given in the limits of weak and strong conductivity.

  11. [Glossary of terms used by radiologists in image processing].

    PubMed

    Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P

    1995-01-01

    We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.

  12. Electroencephalography Based Fusion Two-Dimensional (2D)-Convolution Neural Networks (CNN) Model for Emotion Recognition System.

    PubMed

    Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug

    2018-04-30

    The purpose of this study is to improve human emotional classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through the CNN. Therefore, we propose a suitable CNN model for feature extraction by tuning hyperparameters in convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform while considering time and frequency simultaneously. We use the Database for Emotion Analysis using Physiological Signals (DEAP) open dataset to verify the proposed process, achieving 73.4% accuracy, showing significant performance improvement over the current best practice models.

  13. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels as is shown in the second part of this paper.

  14. Structure and magnetism in LaCoO3

    DOE PAGES

    Belanger, David P.; Keiber, T.; Bridges, Frank; ...

    2015-12-11

    In this paper, the temperature dependence of the hexagonal lattice parameter c of single crystal LaCoO3 (LCO) with H = 0 and 800 Oe, as well as LCO bulk powders with H = 0, was measured using high-resolution x-ray scattering near the transition temperature T_o ≈ 35 K. The change of c(T) is well characterized by a power law in T - T_o for T > T_o and by a temperature-independent constant for T < T_o when convolved with a Gaussian function of width 8.5 K. Finally, this behavior is discussed in the context of the unusual magnetic behavior observed in LCO as well as recent generalized gradient approximation calculations.

  15. Light scattering study of rheumatoid arthritis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beuthan, J; Netz, U; Minet, O

    The distribution of light scattered by finger joints is studied in the near-IR region. It is shown that variations in the optical parameters of the tissue (scattering coefficient μs, absorption coefficient μa, and anisotropy factor g) depend on the presence of rheumatoid arthritis (RA). At the first stage, the distribution of scattered light was measured in diaphanoscopic experiments. The convolution of a Gaussian error function with the scattering phase function proved to be a good approximation of the data obtained. Then, a new method was developed for the reconstruction of the distribution of optical parameters in the finger cross section. Model tests of the quality of this reconstruction method show good results. (laser biology and medicine)

  16. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Work on partial unit memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied using the channel cut-off rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel were considered using a heuristic selection rule based on a water-filling argument. The use of error correcting codes to perform data compression by the technique of syndrome source coding was investigated, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.

  17. The MONET code for the evaluation of the dose in hadrontherapy

    NASA Astrophysics Data System (ADS)

    Embriaco, A.

    2018-01-01

    MONET is a code for the computation of the 3D dose distribution for protons in water. For the lateral profile, MONET is based on the Molière theory of multiple Coulomb scattering. To also take nuclear interactions into account, we add a Cauchy-Lorentz function to this theory, with its two parameters obtained by a fit to a FLUKA simulation. We have implemented the Papoulis algorithm to pass from the projected profile to the 2D lateral distribution. For the longitudinal profile, we have implemented a new calculation of the energy loss that is in good agreement with simulations. Energy straggling is included by convolving the energy loss with a Gaussian function. To complete the longitudinal profile, the nuclear contributions are also included using a linear parametrization. The total dose profile is calculated in a 3D mesh by evaluating at each depth the 2D lateral distributions and scaling them to the value of the energy deposition. We have compared MONET with FLUKA in two cases: a single Gaussian beam and a lateral scan. In both cases, we have obtained good agreement for different energies of protons in water.
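
    The straggling step, convolving the energy-loss profile with a Gaussian, can be sketched as follows (a toy depth-dose curve and an assumed straggling width, not MONET's actual energy-loss model):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        z = np.linspace(0.0, 20.0, 2001)           # depth in water [cm]
        dz = z[1] - z[0]

        # Toy depth-dose curve with a sharp Bragg peak near 15 cm (a stand-in
        # for the code's energy-loss calculation).
        dose = 1.0 + 0.05 * z
        dose[z > 15.0] = 0.0
        dose += 4.0 * np.exp(-0.5 * ((z - 15.0) / 0.05) ** 2)

        # Straggling is included by convolving the longitudinal profile with a
        # Gaussian of the appropriate width (0.15 cm assumed here).
        sigma_cm = 0.15
        smeared = gaussian_filter1d(dose, sigma=sigma_cm / dz)
        print(z[np.argmax(dose)], z[np.argmax(smeared)], smeared.max() / dose.max())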

  18. An improved multi-domain convolution tracking algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Xin; Wang, Haiying; Zeng, Yingsen

    2018-04-01

    With the wide application of deep learning in computer vision, it has also become a mainstream direction in object tracking. The tracking algorithm in this paper is based on an improved multi-domain convolutional neural network, and the network is pre-trained on the VOT video set with a multi-domain training strategy. In the process of online tracking, the network evaluates candidate targets sampled with a Gaussian distribution from the vicinity of the predicted target in the previous frame, and the candidate with the highest score is taken as the predicted target for the current frame. A bounding-box regression model is introduced to bring the predicted target closer to the ground-truth target box of the test set. A grouping-update strategy is used to extract and select useful update samples in each frame, which effectively prevents overfitting and adapts to changes in both the target and the environment. To improve the speed of the algorithm while maintaining performance, the number of candidate targets is adjusted dynamically with the help of a self-adaptive parameter strategy. Finally, the algorithm is tested on the OTB set and compared with other high-performance tracking algorithms; the success-rate and accuracy plots illustrate the outstanding performance of the proposed tracker.

  19. A novel microaneurysms detection approach based on convolutional neural networks with reinforcement sample learning algorithm.

    PubMed

    Budak, Umit; Şengür, Abdulkadir; Guo, Yanhui; Akbulut, Yaman

    2017-12-01

    Microaneurysms (MAs) are known as early signs of diabetic retinopathy and appear as red lesions in color fundus images. Detection of MAs in fundus images needs highly skilled physicians or eye angiography. Eye angiography is an invasive and expensive procedure. Therefore, an automatic detection system to identify the MA locations in fundus images is in demand. In this paper, we propose a system to detect the MAs in colored fundus images. The proposed method is composed of three stages. In the first stage, a series of pre-processing steps is applied to make the input images more convenient for MA detection. To this end, green channel decomposition, Gaussian filtering, median filtering, background determination, and subtraction operations are applied to the input colored fundus images. After pre-processing, a candidate MA extraction procedure is applied to detect potential regions. A five-step procedure is adopted to get the potential MA locations. Finally, a deep convolutional neural network (DCNN) with a reinforcement sample learning strategy is used to train the proposed system. The DCNN is trained with color image patches which are collected from ground-truth MA locations and non-MA locations. We conducted extensive experiments on the ROC dataset to evaluate our proposal. The results are encouraging.
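
    The pre-processing stage can be sketched along the lines described above (a hedged illustration: the filter types follow the description, but the filter sizes and the use of a large median filter for background estimation are assumptions, not the paper's exact settings):

        import numpy as np
        from scipy.ndimage import gaussian_filter, median_filter

        def preprocess(rgb):
            """Candidate-MA pre-processing: green-channel decomposition,
            Gaussian and median filtering, background estimation and
            subtraction (filter sizes are illustrative assumptions)."""
            green = rgb[..., 1].astype(float)          # MAs contrast best in green
            den = gaussian_filter(green, sigma=1.0)    # suppress sensor noise
            den = median_filter(den, size=3)           # remove impulse noise
            background = median_filter(den, size=31)   # large-scale background
            return den - background                    # MAs become dark blobs

        rng = np.random.default_rng(8)
        fundus = (rng.random((256, 256, 3)) * 255).astype(np.uint8)
        print(preprocess(fundus).shape)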

  20. Understanding How Kurtosis Is Transferred from Input Acceleration to Stress Response and Its Influence on Fatigue Life

    NASA Technical Reports Server (NTRS)

    Kihm, Frederic; Rizzi, Stephen A.; Ferguson, Neil S.; Halfpenny, Andrew

    2013-01-01

    High cycle fatigue of metals typically occurs through long term exposure to time varying loads which, although modest in amplitude, give rise to microscopic cracks that can ultimately propagate to failure. The fatigue life of a component is primarily dependent on the stress amplitude response at critical failure locations. For most vibration tests, it is common to assume a Gaussian distribution of both the input acceleration and stress response. In real life, however, it is common to experience non-Gaussian acceleration input, and this can cause the response to be non-Gaussian. Examples of non-Gaussian loads include road irregularities such as potholes in the automotive world or turbulent boundary layer pressure fluctuations for the aerospace sector or more generally wind, wave or high amplitude acoustic loads. The paper first reviews some of the methods used to generate non-Gaussian excitation signals with a given power spectral density and kurtosis. The kurtosis of the response is examined once the signal is passed through a linear time invariant system. Finally an algorithm is presented that determines the output kurtosis based upon the input kurtosis, the input power spectral density and the frequency response function of the system. The algorithm is validated using numerical simulations. Direct applications of these results include improved fatigue life estimations and a method to accelerate shaker tests by generating high kurtosis, non-Gaussian drive signals.
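
    The central-limit effect that the algorithm quantifies can be demonstrated with a lightly damped linear system (a generic Python sketch, not the authors' algorithm; the resonator parameters, sampling rate, and Laplace-distributed input are arbitrary assumptions):

        import numpy as np
        from scipy.signal import lfilter
        from scipy.stats import kurtosis

        rng = np.random.default_rng(9)
        n = 2**18

        # Stationary high-kurtosis input acceleration (Laplace, kurtosis ~ 6).
        x = rng.laplace(size=n)

        # Lightly damped resonance modelled as a two-pole digital resonator;
        # the pole radius r sets the damping, theta the resonant frequency.
        r, theta = 0.995, 2 * np.pi * 80.0 / 1024.0
        b, a = [1.0 - r], [1.0, -2.0 * r * np.cos(theta), r * r]
        y = lfilter(b, a, x)

        # A narrowband linear system averages many input samples, so for a
        # stationary non-Gaussian input the response kurtosis falls back
        # toward the Gaussian value of 3; the paper's algorithm predicts the
        # output kurtosis from the input kurtosis, input PSD, and FRF.
        print("input kurtosis: ", kurtosis(x, fisher=False))
        print("output kurtosis:", kurtosis(y, fisher=False))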

  1. Convis: A Toolbox to Fit and Simulate Filter-Based Models of Early Visual Processing

    PubMed Central

    Huth, Jacob; Masquelier, Timothée; Arleo, Angelo

    2018-01-01

    We developed Convis, a Python simulation toolbox for large-scale neural populations, which offers arbitrary receptive fields by 3D convolutions executed on a graphics card. The resulting software proves to be flexible and easily extensible in Python, while building on the PyTorch library (The Pytorch Project, 2017), which was previously used successfully in deep learning applications, for just-in-time optimization and compilation of the model onto CPU or GPU architectures. An alternative implementation based on Theano (Theano Development Team, 2016) is also available, although not fully supported. Through automatic differentiation, any parameter of a specified model can be optimized to approach a desired output, which is a significant improvement over, e.g., Monte Carlo or particle optimizations without gradients. We show that a number of models, including even complex non-linearities such as contrast gain control and spiking mechanisms, can be implemented easily. In particular, we show that we can recreate the simulation results of the popular retina simulation software VirtualRetina (Wohrer and Kornprobst, 2009), with the added benefit of providing (1) arbitrary linear filters instead of the product of Gaussian and exponential filters and (2) optimization routines utilizing the gradients of the model. We demonstrate the utility of 3D convolution filters with a simple direction-selective filter. We also show that it is possible to optimize the input for a certain goal, rather than the parameters, which can aid the design of experiments as well as closed-loop online stimulus generation. Yet, Convis is more than a retina simulator. For instance, it can also predict the response of V1 orientation-selective cells. Convis is open source under the GPL-3.0 license and available from https://github.com/jahuth/convis/ with documentation at https://jahuth.github.io/convis/. PMID:29563867

  2. Proteomic profiling and pathway analysis of the response of rat renal proximal convoluted tubules to metabolic acidosis

    PubMed Central

    Schauer, Kevin L.; Freund, Dana M.; Prenni, Jessica E.

    2013-01-01

    Metabolic acidosis is a relatively common pathological condition that is defined as a decrease in blood pH and bicarbonate concentration. The renal proximal convoluted tubule responds to this condition by increasing the extraction of plasma glutamine and activating ammoniagenesis and gluconeogenesis. The combined processes increase the excretion of acid and produce bicarbonate ions that are added to the blood to partially restore acid-base homeostasis. Only a few cytosolic proteins, such as phosphoenolpyruvate carboxykinase, have been determined to play a role in the renal response to metabolic acidosis. Therefore, further analysis was performed to better characterize the response of the cytosolic proteome. Proximal convoluted tubule cells were isolated from rat kidney cortex at various times after onset of acidosis and fractionated to separate the soluble cytosolic proteins from the remainder of the cellular components. The cytosolic proteins were analyzed using two-dimensional liquid chromatography and tandem mass spectrometry (MS/MS). Spectral counting along with average MS/MS total ion current were used to quantify temporal changes in relative protein abundance. In all, 461 proteins were confidently identified, of which 24 exhibited statistically significant changes in abundance. To validate these techniques, several of the observed abundance changes were confirmed by Western blotting. Data from the cytosolic fractions were then combined with previous proteomic data, and pathway analyses were performed to identify the primary pathways that are activated or inhibited in the proximal convoluted tubule during the onset of metabolic acidosis. PMID:23804448

  3. Convolute laminations — a theoretical analysis: example of a Pennsylvanian sandstone

    NASA Astrophysics Data System (ADS)

    Visher, Glenn S.; Cunningham, Russ D.

    1981-03-01

    Data from an outcropping laminated interval were collected and analyzed to test the applicability of a theoretical model describing instability of layered systems. Rayleigh-Taylor wave perturbations result at the interface between fluids of contrasting density, viscosity, and thickness. In the special case where reverse density and viscosity interlaminations are developed, the deformation response produces a single wave with predictable amplitudes, wavelengths, and amplification rates. Physical measurements from both the outcropping section and modern sediments suggest the usefulness of the model for the interpretation of convolute laminations. Internal characteristics of the stratigraphic interval, and the developmental sequence of convoluted beds, are used to document the developmental history of these structures.

  4. The neuronal response to electrical constant-amplitude pulse train stimulation: additive Gaussian noise.

    PubMed

    Matsuoka, A J; Abbas, P J; Rubinstein, J T; Miller, C A

    2000-11-01

    Experimental results from humans and animals show that electrically evoked compound action potential (EAP) responses to constant-amplitude pulse train stimulation can demonstrate an alternating pattern, due to the combination of highly synchronized responses to electrical stimulation and refractory effects (Wilson et al., 1994). One way to improve signal representation is to reduce the level of across-fiber synchrony and hence the level of the amplitude alternation. To accomplish this goal, we have examined EAP responses in the presence of Gaussian noise added to the pulse train stimulus. Adding Gaussian noise to the pulse trains at a level approximately -30 dB relative to EAP threshold decreased the amount of alternation, indicating that stochastic resonance may be induced in the auditory nerve. The use of some type of conditioning stimulus, such as Gaussian noise, may provide a more 'normal' neural response pattern.

  5. Across-task priming revisited: response and task conflicts disentangled using ex-Gaussian distribution analysis.

    PubMed

    Moutsopoulou, Karolina; Waszak, Florian

    2012-04-01

    The differential effects of task and response conflict in priming paradigms where associations are strengthened between a stimulus, a task, and a response have been demonstrated in recent years with neuroimaging methods. However, such effects are not easily disentangled with only measurements of behavior, such as reaction times (RTs). Here, we report the application of ex-Gaussian distribution analysis on task-switching RT data and show that conflict related to stimulus-response associations retrieved after a switch of tasks is reflected in the Gaussian component. By contrast, conflict related to the retrieval of stimulus-task associations is reflected in the exponential component. Our data confirm that the retrieval of stimulus-task and -response associations affects behavior differently. Ex-Gaussian distribution analysis is a useful tool for pulling apart these different levels of associative priming that are not distinguishable in analyses of RT means.
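
    To illustrate the kind of decomposition used here, the sketch below fits an ex-Gaussian (exponentially modified Gaussian) to simulated reaction times with SciPy; mu and sigma summarize the Gaussian component and tau the exponential one. The simulated values are illustrative, not data from the study.

    ```python
    # Minimal sketch: fitting an ex-Gaussian to reaction-time (RT) data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated RTs (ms): Gaussian part N(400, 50) plus exponential tail (tau = 120)
    rts = rng.normal(400.0, 50.0, 2000) + rng.exponential(120.0, 2000)

    # SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale),
    # where mu = loc, sigma = scale, and tau = K * scale.
    K, loc, scale = stats.exponnorm.fit(rts)
    mu, sigma, tau = loc, scale, K * scale
    print(f"mu = {mu:.1f} ms, sigma = {sigma:.1f} ms, tau = {tau:.1f} ms")
    ```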

  6. Evaluation of a processing scheme for calcified atheromatous carotid artery detection in face/neck CBCT images

    NASA Astrophysics Data System (ADS)

    Matheus, B. R. N.; Centurion, B. S.; Rubira-Bullen, I. R. F.; Schiabel, H.

    2017-03-01

    Cone Beam Computed Tomography (CBCT) exams of the face and neck can offer an opportunity to identify, as an incidental finding, calcifications of the carotid artery (CACA). Given the similarity of CACA to calcifications found in several types of x-ray exams, this work suggests that a technique designed to detect breast calcifications in mammography images could be applied to detect such calcifications in CBCT. The method used a 3D version of the calcification detection technique [1], based on signal enhancement by convolution with a 3D Laplacian of Gaussian (LoG) function followed by removal of the high-contrast bone structure from the image. Initial promising results show a 71% sensitivity with 0.48 false positives per exam.
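
    A minimal sketch of the 3D Laplacian-of-Gaussian enhancement step described above, using SciPy; the sigma, the bone-intensity cutoff, and the percentile threshold are illustrative assumptions rather than the authors' settings.

    ```python
    # Illustrative sketch (not the authors' code): enhance bright, blob-like
    # calcifications in a CBCT volume with a 3D LoG filter, then suppress
    # high-contrast bone by masking it out before thresholding.
    import numpy as np
    from scipy import ndimage

    def detect_candidates(volume, sigma=1.5, bone_cutoff=1200.0):
        # LoG responds strongly to blobs of scale ~sigma; negate so that
        # bright blobs give positive responses.
        response = -ndimage.gaussian_laplace(volume.astype(np.float64), sigma)
        # Remove high-contrast bone voxels from consideration.
        response[volume > bone_cutoff] = 0.0
        # Keep the strongest responses as candidate calcifications.
        return response > np.percentile(response, 99.9)

    vol = np.random.default_rng(0).normal(0.0, 10.0, (64, 64, 64))  # stand-in volume
    candidates = detect_candidates(vol)
    ```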

  7. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.

  8. Discrete transparent boundary conditions for the mixed KDV-BBM equation

    NASA Astrophysics Data System (ADS)

    Besse, Christophe; Noble, Pascal; Sanchez, David

    2017-09-01

    In this paper, we consider artificial boundary conditions for the linearized mixed Korteweg-de Vries (KdV) and Benjamin-Bona-Mahony (BBM) equation, which models water waves in the small amplitude, large wavelength regime. Continuous (respectively discrete) artificial boundary conditions involve non-local operators in time, which in turn require computing time convolutions and inverting the Laplace transform of an analytic function (respectively the Z-transform of a holomorphic function). In this paper, we propose a new, stable and fairly general strategy to carry out this crucial step in the design of transparent boundary conditions. For large time simulations, we also introduce a methodology based on the asymptotic expansion of coefficients involved in exact direct transparent boundary conditions. We illustrate the accuracy of our methods for Gaussian and wave-packet initial data.

  9. The 1997 North American Interagency Intercomparison of Ultraviolet Spectroradiometers Including Narrowband Filter Radiometers

    PubMed Central

    Lantz, Kathleen; Disterhoft, Patrick; Early, Edward; Thompson, Ambler; DeLuisi, John; Berndt, Jerry; Harrison, Lee; Kiedron, Peter; Ehramjian, James; Bernhard, Germar; Cabasug, Lauriana; Robertson, James; Mou, Wanfeng; Taylor, Thomas; Slusser, James; Bigelow, David; Durham, Bill; Janson, George; Hayes, Douglass; Beaubien, Mark; Beaubien, Arthur

    2002-01-01

    The fourth North American Intercomparison of Ultraviolet Monitoring Spectroradiometers was held September 15 to 25, 1997 at Table Mountain outside of Boulder, Colorado, USA. Concern over stratospheric ozone depletion has prompted several government agencies in North America to establish networks of spectroradiometers for monitoring solar ultraviolet irradiance at the surface of the Earth. The main purpose of the Intercomparison was to assess the ability of spectroradiometers to accurately measure solar ultraviolet irradiance, and to compare the results between instruments of different monitoring networks. This Intercomparison was coordinated by NIST and NOAA, and included participants from the ASRC, EPA, NIST, NSF, SERC, USDA, and YES. The UV measuring instruments included scanning spectroradiometers, spectrographs, narrow band multi-filter radiometers, and broadband radiometers. Instruments were characterized for wavelength accuracy, bandwidth, stray-light rejection, and spectral irradiance responsivity. The spectral irradiance responsivity was determined two to three times outdoors to assess temporal stability. Synchronized spectral scans of the solar irradiance were performed over several days. Using the spectral irradiance responsivities determined with the NIST traceable standard lamp, and a simple convolution technique with a Gaussian slit-scattering function to account for the different bandwidths of the instruments, the measured solar irradiance from the spectroradiometers excluding the filter radiometers at 16.5 h UTC had a relative standard deviation of ±4 % for wavelengths greater than 305 nm. The relative standard deviation for the solar irradiance at 16.5 h UTC including the filter radiometer was ±4 % for filter functions above 300 nm. PMID:27446717
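
    The bandwidth-harmonization step can be illustrated with a short NumPy sketch: convolve a spectrum with a unit-area Gaussian slit-scattering function so that instruments of different bandwidths are compared at a common, wider effective bandwidth. The grid spacing, FWHM, and stand-in spectrum are assumptions for illustration.

    ```python
    import numpy as np

    def gaussian_slit(step_nm, fwhm_nm):
        sigma = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        half = int(np.ceil(4 * sigma / step_nm))
        x = np.arange(-half, half + 1) * step_nm
        g = np.exp(-0.5 * (x / sigma) ** 2)
        return g / g.sum()            # unit area, so total irradiance is conserved

    wl = np.arange(290.0, 400.0, 0.1)               # wavelength grid (nm)
    spectrum = np.exp(-((wl - 340.0) / 20.0) ** 2)  # stand-in solar spectrum
    # Degrade a high-resolution measurement to a 1 nm effective bandwidth.
    smoothed = np.convolve(spectrum, gaussian_slit(0.1, fwhm_nm=1.0), mode="same")
    ```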

  10. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.

  12. Profiler - A Fast and Versatile New Program for Decomposing Galaxy Light Profiles

    NASA Astrophysics Data System (ADS)

    Ciambur, Bogdan C.

    2016-12-01

    I introduce Profiler, a user-friendly program designed to analyse the radial surface brightness profiles of galaxies. With an intuitive graphical user interface, Profiler can accurately model galaxies of a broad range of morphological types, with various parametric functions routinely employed in the field (Sérsic, core-Sérsic, exponential, Gaussian, Moffat, and Ferrers). In addition to these, Profiler can employ the broken exponential model for disc truncations or anti-truncations, and two special cases of the edge-on disc model: along the disc's major or minor axis. The convolution of (circular or elliptical) models with the point spread function is performed in 2D, and offers a choice between Gaussian, Moffat or a user-provided profile for the point spread function. Profiler is optimised to work with galaxy light profiles obtained from isophotal measurements, which allow for radial gradients in the geometric parameters of the isophotes, and are thus often better at capturing the total light than 2D image-fitting programs. Additionally, the 1D approach is generally less computationally expensive and more stable. I demonstrate Profiler's features by decomposing three case-study galaxies: the cored elliptical galaxy NGC 3348, the nucleated dwarf Seyfert I galaxy Pox 52, and NGC 2549, a double-barred galaxy with an edge-on, truncated disc.
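
    As a rough illustration of the model-plus-PSF convolution that such decomposition codes perform, the sketch below evaluates a circular Sérsic model on a 2D grid and convolves it with a Gaussian PSF; it is not Profiler's implementation, and all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def sersic_image(shape, r_e, n, I_e):
        # Standard asymptotic approximation for b_n (Ciotti & Bertin).
        b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
        y, x = np.indices(shape)
        r = np.hypot(x - shape[1] / 2, y - shape[0] / 2)
        return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

    def gaussian_psf(shape, fwhm):
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        y, x = np.indices(shape)
        g = np.exp(-0.5 * ((x - shape[1] / 2) ** 2 + (y - shape[0] / 2) ** 2) / sigma**2)
        return g / g.sum()

    model = sersic_image((257, 257), r_e=20.0, n=4.0, I_e=1.0)  # de Vaucouleurs-like
    observed = fftconvolve(model, gaussian_psf((257, 257), fwhm=3.0), mode="same")
    ```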

  13. US-SOMO HPLC-SAXS module: dealing with capillary fouling and extraction of pure component patterns from poorly resolved SEC-SAXS data

    PubMed Central

    Brookes, Emre; Vachette, Patrice; Rocco, Mattia; Pérez, Javier

    2016-01-01

    Size-exclusion chromatography coupled with SAXS (small-angle X-ray scattering), often performed using a flow-through capillary, should allow direct collection of monodisperse sample data. However, capillary fouling issues and non-baseline-resolved peaks can hamper its efficacy. The UltraScan solution modeler (US-SOMO) HPLC-SAXS (high-performance liquid chromatography coupled with SAXS) module provides a comprehensive framework to analyze such data, starting with a simple linear baseline correction and symmetrical Gaussian decomposition tools [Brookes, Pérez, Cardinali, Profumo, Vachette & Rocco (2013). J. Appl. Cryst. 46, 1823–1833]. In addition to several new features, substantial improvements to both routines have now been implemented, comprising the evaluation of outcomes by advanced statistical tools. The novel integral baseline-correction procedure is based on the more sound assumption that the effect of capillary fouling on scattering increases monotonically with the intensity scattered by the material within the X-ray beam. Overlapping peaks, often skewed because of sample interaction with the column matrix, can now be accurately decomposed using non-symmetrical modified Gaussian functions. As an example, the case of a polydisperse solution of aldolase is analyzed: from heavily convoluted peaks, individual SAXS profiles of tetramers, octamers and dodecamers are extracted and reliably modeled. PMID:27738419

  14. Aquifer response to stream-stage and recharge variations. II. Convolution method and applications

    USGS Publications Warehouse

    Barlow, P.M.; DeSimone, L.A.; Moench, A.F.

    2000-01-01

    In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to stream-stage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of the water table and the associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. The streambank-leakance parameter may be considered a general (or lumped) parameter that accounts not only for the resistance to flow at the river-aquifer boundary, but also for the effects of partial penetration of the river and other near-stream flow phenomena not included in the theoretical development of the step-response functions.
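
    The convolution idea itself is compact: the head response is the stress history convolved with an impulse-response (derivative of the step-response) function. The sketch below is a generic illustration, not the USGS programs; the exponential kernel and all numbers are stand-ins.

    ```python
    import numpy as np

    dt = 1.0                                   # time step (days)
    t = np.arange(0, 200, dt)
    recharge = np.zeros_like(t)
    recharge[10:20] = 5e-3                     # a 10-day recharge pulse (m/day)

    tau = 25.0                                 # aquifer response time scale (days)
    impulse_response = np.exp(-t / tau) / tau  # illustrative unit-area kernel

    # Discrete convolution integral: h(t) = sum_k recharge(k) * IR(t - k) * dt
    head_rise = np.convolve(recharge, impulse_response)[: t.size] * dt
    ```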

  15. Stochastic response and bifurcation of periodically driven nonlinear oscillators by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Sun, Jian-Qiao

    2016-09-01

    The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.

  16. Extinction time of a stochastic predator-prey model by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao

    2018-03-01

    The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the surviving species.

  17. A comparative assessment of preclinical chemotherapeutic response of tumors using quantitative non-Gaussian diffusion MRI

    PubMed Central

    Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.

    2016-01-01

    Background: Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e. non-Gaussian diffusion) at stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method, but the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods: Conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and statistical) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results: Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For every treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model was strongly or very strongly preferred. Conclusion: Non-Gaussian DWI model-derived biomarkers can detect tumor chemotherapeutic response earlier than conventional ADC and tumor volume. The bi-exponential model provides a better fit than the statistical and stretched exponential models for the tumors and treatments used in the current work. PMID:27919785
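
    For reference, a minimal sketch of fitting the bi-exponential (two-compartment) model to a DWI signal decay with SciPy; the b-values, volume fraction, and diffusivities are illustrative, not the study's data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(b, f, D_fast, D_slow):
        # Normalized signal: fast and slow diffusing compartments.
        return f * np.exp(-b * D_fast) + (1.0 - f) * np.exp(-b * D_slow)

    b = np.array([0, 250, 500, 1000, 1500, 2000, 2500, 3000], float)  # s/mm^2
    signal = biexp(b, 0.7, 1.8e-3, 0.3e-3)          # synthetic, noise-free decay
    p0 = (0.5, 1.0e-3, 0.1e-4)
    (f, D_fast, D_slow), _ = curve_fit(biexp, b, signal, p0=(0.5, 1.0e-3, 0.1e-3),
                                       bounds=([0, 0, 0], [1, 5e-3, 1e-3]))
    ```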

  18. Analysis of multidimensional difference-of-Gaussians filters in terms of directly observable parameters.

    PubMed

    Cope, Davis; Blakeslee, Barbara; McCourt, Mark E

    2013-05-01

    The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given in an appendix.
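
    The following sketch spells out the parameterization discussed above: a radially symmetric DOG defined by the center sigma, surround sigma, and balance, with the directly observable zero-crossing radius found numerically and the frequency response checked for bandpass behavior. All values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def dog(r, sc, ss, balance):
        # 2D unit-volume Gaussians: excitatory center minus inhibitory surround.
        center = np.exp(-r**2 / (2 * sc**2)) / (2 * np.pi * sc**2)
        surround = np.exp(-r**2 / (2 * ss**2)) / (2 * np.pi * ss**2)
        return center - balance * surround

    sc, ss, balance = 1.0, 3.0, 0.8
    # The profile is positive at the origin and negative far out, so the
    # zero crossing is bracketed between ~0 and a few surround sigmas.
    r0 = brentq(dog, 1e-3 * sc, 5 * ss, args=(sc, ss, balance))

    # Radial frequency response: difference of two Gaussians in frequency.
    w = np.linspace(0, 5, 501)
    F = np.exp(-(sc * w) ** 2 / 2) - balance * np.exp(-(ss * w) ** 2 / 2)
    is_bandpass = w[np.argmax(F)] > 0   # optimal response at nonzero frequency?
    ```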

  19. A non-gaussian model of continuous atmospheric turbulence for use in aircraft design

    NASA Technical Reports Server (NTRS)

    Reeves, P. M.; Joppa, R. G.; Ganzer, V. M.

    1976-01-01

    A non-Gaussian model of atmospheric turbulence is presented and analyzed. The model is restricted to the regions of the atmosphere where the turbulence is steady or continuous, and the assumptions of homogeneity and stationarity are justified. Also spatial distribution of turbulence is neglected, so the model consists of three independent, stationary stochastic processes which represent the vertical, lateral, and longitudinal gust components. The non-Gaussian and Gaussian models are compared with experimental data, and it is shown that the Gaussian model underestimates the number of high velocity gusts which occur in the atmosphere, while the non-Gaussian model can be adjusted to match the observed high velocity gusts more satisfactorily. Application of the proposed model to aircraft response is investigated, with particular attention to the response power spectral density, the probability distribution, and the level crossing frequency. A numerical example is presented which illustrates the application of the non-Gaussian model to the study of an aircraft autopilot system. Listings and sample results of a number of computer programs used in working with the model are included.

  20. Deep Learning the Universe

    NASA Astrophysics Data System (ADS)

    Singh, Shiwangi; Bard, Deborah

    2017-01-01

    Weak gravitational lensing is an effective tool to map the structure of matter in the universe, and has been used for more than ten years as a probe of the nature of dark energy. Beyond the well-established two-point summary statistics, attention is now turning to methods that use the full statistical information available in the lensing observables, through analysis of the reconstructed shear field. This offers an opportunity to take advantage of powerful deep learning methods for image analysis. We present two early studies that demonstrate that deep learning can be used to characterise features in weak lensing convergence maps, and to identify the underlying cosmological model that produced them. We developed an unsupervised Denoising Convolutional Autoencoder model in order to learn an abstract representation directly from our data. This model uses a convolution-deconvolution architecture, which is fed with input data (corrupted with binomial noise to prevent over-fitting). Our model effectively trains itself to minimize the mean-squared error between the input and the output using gradient descent, resulting in a model which, theoretically, is broad enough to tackle other similarly structured problems. Using this model we were able to successfully reconstruct simulated convergence maps and identify the structures in them. We also determined which structures had the highest “importance” - i.e. which structures were most typical of the data. We note that the structures that had the highest importance in our reconstruction were around high mass concentrations, but were highly non-Gaussian. We also developed a supervised Convolutional Neural Network (CNN) for classification of weak lensing convergence maps from two different simulated theoretical models. The CNN uses a softmax classifier which minimizes a binary cross-entropy loss between the estimated distribution and true distribution. In other words, given an unseen convergence map the trained CNN determines probabilistically which theoretical model fits the data best. This preliminary work demonstrates that we can classify the cosmological model that produced the convergence maps with 80% accuracy.
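
    A minimal denoising convolutional autoencoder in the spirit described above can be written in a few lines of tf.keras; the architecture, map size, and binomial corruption rate below are assumptions for illustration, not the authors' configuration.

    ```python
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def make_autoencoder(size=64):
        inp = layers.Input(shape=(size, size, 1))
        # Convolution (encoder) followed by deconvolution (decoder).
        x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
        x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
        x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
        out = layers.Conv2DTranspose(1, 3, strides=2, padding="same")(x)
        model = models.Model(inp, out)
        model.compile(optimizer="adam", loss="mse")  # minimize mean-squared error
        return model

    maps = np.random.rand(256, 64, 64, 1).astype("float32")    # stand-in kappa maps
    corrupted = maps * np.random.binomial(1, 0.9, maps.shape)  # binomial noise mask
    model = make_autoencoder()
    model.fit(corrupted, maps, epochs=2, batch_size=32)        # learn to denoise
    ```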

  1. Performance Bounds on Two Concatenated, Interleaved Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Dolinar, Samuel

    2010-01-01

    A method has been developed of computing bounds on the performance of a code comprised of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n, k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni, ki). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of their derivation would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver. The bounds calculated by use of the method were compared with results of numerical simulations of the systems' performances to show the regions where the bounds are tight.

  2. A robust sub-pixel edge detection method of infrared image based on tremor-based retinal receptive field model

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang

    2008-03-01

    Because infrared images contain complex thermal objects, the prevalent image edge detection operators are often suited only to certain scenes and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when a convolution-based receptive field architecture is assumed. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To handle the blurred edges of an infrared image, subsequent orthogonal polynomial interpolation and sub-pixel edge detection in the rough-edge pixel neighborhood are adopted to locate the foregoing rough edges at the sub-pixel level. Numerical simulations show that this method can locate the target edge accurately and robustly.

  3. Covariance Matrix Evaluations for Independent Mass Fission Yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.

    2015-01-15

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least-squares method through the CONRAD code. Preliminary results on the mass yield variance-covariance matrix will be presented and discussed on physical grounds in the case of the ²³⁵U(nₜₕ, f) and ²³⁹Pu(nₜₕ, f) reactions.

  4. Upper bounds on sequential decoding performance parameters

    NASA Technical Reports Server (NTRS)

    Jelinek, F.

    1974-01-01

    This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

  5. Edgeworth streaming model for redshift space distortions

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Kopp, Michael; Haugg, Thomas

    2015-09-01

    We derive the Edgeworth streaming model (ESM) for the redshift space correlation function starting from an arbitrary distribution function for biased tracers of dark matter by considering its two-point statistics and show that it reduces to the Gaussian streaming model (GSM) when neglecting non-Gaussianities. We test the accuracy of the GSM and ESM independent of perturbation theory using the Horizon Run 2 N-body halo catalog. While the monopole of the redshift space halo correlation function is well described by the GSM, higher multipoles improve upon including the leading order non-Gaussian correction in the ESM: the GSM quadrupole breaks down on scales below 30 Mpc/h whereas the ESM stays accurate to 2% within statistical errors down to 10 Mpc/h. To predict the scale-dependent functions entering the streaming model we employ convolution Lagrangian perturbation theory (CLPT) based on the dust model and local Lagrangian bias. Since dark matter halos carry an intrinsic length scale given by their Lagrangian radius, we extend CLPT to the coarse-grained dust model and consider two different smoothing approaches operating in Eulerian and Lagrangian space, respectively. The coarse graining in Eulerian space features modified fluid dynamics different from dust while the coarse graining in Lagrangian space is performed in the initial conditions with subsequent single-streaming dust dynamics, implemented by smoothing the initial power spectrum in the spirit of the truncated Zel'dovich approximation. Finally, we compare the predictions of the different coarse-grained models for the streaming model ingredients to N-body measurements and comment on the proper choice of both the tracer distribution function and the smoothing scale. Since the perturbative methods we considered are not yet accurate enough on small scales, the GSM is sufficient when applied to perturbation theory.

  6. Wavelength interrogation of fiber Bragg grating sensors based on crossed optical Gaussian filters.

    PubMed

    Cheng, Rui; Xia, Li; Zhou, Jiaao; Liu, Deming

    2015-04-15

    Conventional intensity-modulated measurements must operate in the linear range of a filter or interferometric response to ensure linear detection. Here, we present a wavelength interrogation system for fiber Bragg grating sensors in which the linear behavior is achieved with crossed Gaussian transmissions. This unique filtering characteristic makes the responses of the two branch detections follow Gaussian functions with the same parameters except for a delay. The subtraction of these two delayed Gaussian responses (in dB) ultimately leads to a linear behavior, which is exploited for determining the sensor wavelength. Besides its flexibility and inherent power insensitivity, the proposal also shows potential for a much wider operational range. Interrogation of a strain-tuned grating was accomplished, with a wide sensitivity tuning range from 2.56 to 8.7 dB/nm achieved.
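
    The core of the scheme is easy to verify numerically: for two Gaussian transmissions with equal widths but shifted centers, the difference of the responses in dB is exactly linear in wavelength. The filter centers and width below are illustrative, not the paper's filters.

    ```python
    import numpy as np

    wl = np.linspace(1545.0, 1555.0, 1001)     # wavelength grid (nm)
    sigma, c1, c2 = 2.0, 1549.0, 1551.0        # common width, shifted centers (nm)

    def gauss_db(center):
        t = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
        return 10.0 * np.log10(t)

    diff_db = gauss_db(c1) - gauss_db(c2)
    # Analytically: diff_db = (10/ln 10) * (c1 - c2) * (2*wl - c1 - c2) / (2*sigma**2),
    # i.e. linear in wl, with the slope setting the interrogation sensitivity.
    slope_db_per_nm = np.polyfit(wl, diff_db, 1)[0]
    ```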

  7. Lp-stability (1 less than or equal to p less than or equal to infinity) of multivariable nonlinear time-varying feedback systems that are open-loop unstable. [noting unstable convolution subsystem forward control and time varying nonlinear feedback

    NASA Technical Reports Server (NTRS)

    Callier, F. M.; Desoer, C. A.

    1973-01-01

    A class of multivariable, nonlinear time-varying feedback systems with an unstable convolution subsystem as feedforward and a time-varying nonlinear gain as feedback was considered. The impulse response of the convolution subsystem is the sum of a finite number of increasing exponentials multiplied by nonnegative powers of the time t, a term that is absolutely integrable and an infinite series of delayed impulses. The main result is a theorem. It essentially states that if the unstable convolution subsystem can be stabilized by a constant feedback gain F and if incremental gain of the difference between the nonlinear gain function and F is sufficiently small, then the nonlinear system is L(p)-stable for any p between one and infinity. Furthermore, the solutions of the nonlinear system depend continuously on the inputs in any L(p)-norm. The fixed point theorem is crucial in deriving the above theorem.

  8. A spherical harmonic approach for the determination of HCP texture from ultrasound: A solution to the inverse problem

    NASA Astrophysics Data System (ADS)

    Lan, Bo; Lowe, Michael J. S.; Dunne, Fionn P. E.

    2015-10-01

    A new spherical convolution approach has been presented which couples HCP single crystal wave speed (the kernel function) with polycrystal c-axis pole distribution function to give the resultant polycrystal wave speed response. The three functions have been expressed as spherical harmonic expansions thus enabling application of the de-convolution technique to enable any one of the three to be determined from knowledge of the other two. Hence, the forward problem of determination of polycrystal wave speed from knowledge of single crystal wave speed response and the polycrystal pole distribution has been solved for a broad range of experimentally representative HCP polycrystal textures. The technique provides near-perfect representation of the sensitivity of wave speed to polycrystal texture as well as quantitative prediction of polycrystal wave speed. More importantly, a solution to the inverse problem is presented in which texture, as a c-axis distribution function, is determined from knowledge of the kernel function and the polycrystal wave speed response. It has also been explained why it has been widely reported in the literature that only texture coefficients up to 4th degree may be obtained from ultrasonic measurements. Finally, the de-convolution approach presented provides the potential for the measurement of polycrystal texture from ultrasonic wave speed measurements.

  9. Shape Selectivity of Middle Superior Temporal Sulcus Body Patch Neurons

    PubMed Central

    2017-01-01

    Abstract Functional MRI studies in primates have demonstrated cortical regions that are strongly activated by visual images of bodies. The presence of such body patches in macaques allows characterization of the stimulus selectivity of their single neurons. Middle superior temporal sulcus body (MSB) patch neurons showed similar stimulus selectivity for natural, shaded, and textured images compared with their silhouettes, suggesting that shape is an important determinant of MSB responses. Here, we examined and modeled the shape selectivity of single MSB neurons. We measured the responses of single MSB neurons to a variety of shapes producing a wide range of responses. We used an adaptive stimulus sampling procedure, selecting and modifying shapes based on the responses of the neuron. Forty percent of shapes that produced the maximal response were rated by humans as animal-like, but the top shape of many MSB neurons was not judged as resembling a body. We fitted the shape selectivity of MSB neurons with a model that parameterizes shapes in terms of curvature and orientation of contour segments, with a pixel-based model, and with layers of units of convolutional neural networks (CNNs). The deep convolutional layers of CNNs provided the best goodness-of-fit, with a median explained explainable variance of the neurons’ responses of 77%. The goodness-of-fit increased along the convolutional layers’ hierarchy but was lower for the fully connected layers. Together with demonstrating the successful modeling of single unit shape selectivity with deep CNNs, the data suggest that semantic or category knowledge determines only slightly the single MSB neuron’s shape selectivity. PMID:28660250

  10. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model.

    PubMed

    Mei, Shuang; Wang, Yudan; Wen, Guojun

    2018-04-02

    Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.

  11. MODELING STREAM-AQUIFER INTERACTIONS WITH LINEAR RESPONSE FUNCTIONS

    EPA Science Inventory

    The problem of stream-aquifer interactions is pertinent to conjunctive-use management of water resources and riparian zone hydrology. Closed form solutions are derived for stream-aquifer interactions in rates and volumes expressed as convolution integrals of impulse response and ...

  12. A deconvolution extraction method for 2D multi-object fibre spectroscopy based on the regularized least-squares QR-factorization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li

    2014-09-01

    This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
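
    The core numerical step can be sketched in one dimension: build a sparse (banded) convolution matrix from a Gaussian PSF and recover the spectrum with damped (regularized) LSQR. This stands in for the paper's full 2D fibre-PSF system; all sizes and parameters are illustrative.

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import lsqr

    n = 500
    x_true = np.zeros(n)
    x_true[[80, 230, 231, 400]] = [1.0, 0.6, 0.9, 0.4]   # emission lines

    # Sparse convolution matrix: Gaussian PSF truncated to a narrow band.
    half, sigma = 8, 2.0
    offsets = np.arange(-half, half + 1)
    psf = np.exp(-0.5 * (offsets / sigma) ** 2)
    psf /= psf.sum()
    A = sparse.diags(list(psf), offsets, shape=(n, n), format="csr")

    b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(n)
    x_rec = lsqr(A, b, damp=1e-2)[0]     # damp is the regularization weight
    ```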

  13. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

  14. Global solutions to random 3D vorticity equations for small initial data

    NASA Astrophysics Data System (ADS)

    Barbu, Viorel; Röckner, Michael

    2017-11-01

    One proves the existence and uniqueness in (L^p(R^3))^3, 3/2 < p < 2, of a global mild solution to random vorticity equations associated to stochastic 3D Navier-Stokes equations with linear multiplicative Gaussian noise of convolution type, for sufficiently small initial vorticity. This resembles some earlier deterministic results of T. Kato [16] and is obtained by treating the equation in vorticity form and reducing the latter to a random nonlinear parabolic equation. The solution has maximal regularity in the spatial variables and is weakly continuous in (L^3 ∩ L^{3p/(4p-6)})^3 with respect to the time variable. Furthermore, we obtain the pathwise continuous dependence of solutions with respect to the initial data. In particular, one gets a locally unique solution of the 3D stochastic Navier-Stokes equation in vorticity form up to some explosion stopping time τ adapted to the Brownian motion.

  15. Personal computer (PC) based image processing applied to fluid mechanics research

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
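
    A sketch of the Gaussian-window interpolation idea, with the window width adapting to the local sample spacing via the distance to the k-th nearest sample; this conveys the general technique, not the original code, and all values are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def gaussian_window_interp(points, values, grid_points, k=8):
        tree = cKDTree(points)
        dists, idx = tree.query(grid_points, k=k)   # k nearest samples per node
        h = dists[:, -1:] / 2.0                     # adaptive window width
        w = np.exp(-0.5 * (dists / h) ** 2)         # Gaussian weights
        return (w * values[idx]).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 1, (500, 2))               # scattered streak locations
    u = np.sin(2 * np.pi * pts[:, 0])               # one velocity component
    gx, gy = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    u_grid = gaussian_window_interp(pts, u, grid).reshape(32, 32)
    ```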

  17. Resource allocation for error resilient video coding over AWGN using optimization approach.

    PubMed

    An, Cheolhong; Nguyen, Truong Q

    2008-12-01

    The number of slices for error resilient video coding is jointly optimized with 802.11a-like media access control and physical layers, with automatic repeat request and a rate-compatible punctured convolutional code over an additive white Gaussian noise channel, as well as channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. From numerical examples, end-to-end distortion of utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.

  18. Operational rate-distortion performance for joint source and channel coding of images.

    PubMed

    Ruf, M J; Modestino, J W

    1999-01-01

    This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.

  19. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.

    PubMed

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2014-12-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.

  20. Tempered fractional calculus

    NASA Astrophysics Data System (ADS)

    Sabzikar, Farzad; Meerschaert, Mark M.; Chen, Jinghua

    2015-07-01

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
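
    The tempered fractional difference mentioned above can be sketched directly from its Grünwald-Letnikov-type weights, (-1)^j C(alpha, j) e^(-j*lambda*h), applied as a one-sided filter; this is a schematic of the construction under illustrative parameters, not a validated solver.

    ```python
    import numpy as np

    def tempered_weights(alpha, lam, h, n_terms):
        w = np.empty(n_terms)
        w[0] = 1.0
        for j in range(1, n_terms):          # recurrence for (-1)^j * C(alpha, j)
            w[j] = w[j - 1] * (j - 1 - alpha) / j
        return w * np.exp(-lam * h * np.arange(n_terms))

    def tempered_difference(f, alpha, lam, h, n_terms=50):
        # Approximates sum_j w[j] * f(t - j*h); early samples lack full history.
        w = tempered_weights(alpha, lam, h, n_terms)
        return np.convolve(f, w, mode="full")[: f.size]

    t = np.arange(0, 10, 0.01)
    # Scaling by h**(-alpha) gives the derivative-like quantity.
    d = tempered_difference(np.sin(t), alpha=0.8, lam=0.5, h=0.01) / 0.01**0.8
    ```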

  1. TEMPERED FRACTIONAL CALCULUS.

    PubMed

    Meerschaert, Mark M; Sabzikar, Farzad; Chen, Jinghua

    2015-07-15

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.

  2. TEMPERED FRACTIONAL CALCULUS

    PubMed Central

    MEERSCHAERT, MARK M.; SABZIKAR, FARZAD; CHEN, JINGHUA

    2014-01-01

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series. PMID:26085690

  3. Tempered fractional calculus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabzikar, Farzad, E-mail: sabzika2@stt.msu.edu; Meerschaert, Mark M., E-mail: mcubed@stt.msu.edu; Chen, Jinghua, E-mail: cjhdzdz@163.com

    2015-07-15

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
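
    The tempered fractional difference that these records name as the basis for numerical methods can be sketched directly: build the tempered Grünwald-Letnikov weights w_k = (-1)^k C(α, k) e^(-λkh) and convolve them with samples of the function. The sketch below is a minimal illustration under assumed parameter values; the function name, grid, and parameters are invented, not taken from the paper.

        import numpy as np
        from scipy.special import binom

        def tempered_frac_diff(f, alpha, lam, h):
            """Tempered Grunwald-Letnikov difference (illustrative sketch)."""
            k = np.arange(f.size)
            # tempered weights: w_k = (-1)^k * C(alpha, k) * exp(-lam * k * h)
            w = (-1.0) ** k * binom(alpha, k) * np.exp(-lam * k * h)
            # (D^{alpha,lam} f)(x_j) ~ h^(-alpha) * sum_k w_k * f(x_{j-k})
            return np.convolve(f, w)[: f.size] / h ** alpha

        x = np.linspace(0.0, 1.0, 201)
        h = x[1] - x[0]
        df = tempered_frac_diff(np.exp(2.0 * x), alpha=0.7, lam=1.5, h=h)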

  4. Efficient Terahertz Wide-Angle NUFFT-Based Inverse Synthetic Aperture Imaging Considering Spherical Wavefront.

    PubMed

    Gao, Jingkun; Deng, Bin; Qin, Yuliang; Wang, Hongqiang; Li, Xiang

    2016-12-14

    An efficient wide-angle inverse synthetic aperture imaging method considering the spherical wavefront effects and suitable for the terahertz band is presented. Firstly, the echo signal model under spherical wave assumption is established, and the detailed wavefront curvature compensation method accelerated by 1D fast Fourier transform (FFT) is discussed. Then, to speed up the reconstruction procedure, the fast Gaussian gridding (FGG)-based nonuniform FFT (NUFFT) is employed to focus the image. Finally, proof-of-principle experiments are carried out and the results are compared with the ones obtained by the convolution back-projection (CBP) algorithm. The results demonstrate the effectiveness and the efficiency of the presented method. This imaging method can be directly used in the field of nondestructive detection and can also be used to provide a solution for the calculation of the far-field RCSs (radar cross sections) of targets in the terahertz regime.
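
    To make the fast Gaussian gridding step concrete, the sketch below implements a minimal 1D type-1 NUFFT: spread the nonuniform samples onto an oversampled grid with a truncated Gaussian kernel, apply an FFT, and deconvolve the Gaussian in the frequency domain. The kernel-width heuristic and every name here are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def nufft1d_type1(x, c, M, W=12, R=2):
            """F[k] = sum_j c[j]*exp(-1j*k*x[j]) for k = -M/2 .. M/2-1, x in [0, 2*pi).
            W = spreading half-width in fine-grid points; R = oversampling ratio."""
            Mr = R * M
            h = 2 * np.pi / Mr
            tau = W * np.pi / (M ** 2 * R * (R - 0.5))     # heuristic kernel width
            ftau = np.zeros(Mr, dtype=complex)
            for xj, cj in zip(x, c):                       # Gaussian spreading
                m0 = int(round(xj / h))
                m = m0 + np.arange(-W, W + 1)
                ftau[np.mod(m, Mr)] += cj * np.exp(-((xj - m * h) ** 2) / (4 * tau))
            Ftau = np.fft.fftshift(np.fft.fft(ftau)) / Mr  # FFT on the fine grid
            k = np.arange(-M // 2, M // 2)
            return np.sqrt(np.pi / tau) * np.exp(k ** 2 * tau) * Ftau[Mr // 2 + k]

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 2 * np.pi, 200)
        c = rng.normal(size=200) + 0j
        F = nufft1d_type1(x, c, 32)
        k = np.arange(-16, 16)
        F_direct = np.array([np.sum(c * np.exp(-1j * kk * x)) for kk in k])
        print(np.max(np.abs(F - F_direct)))                # small approximation error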

  5. The effects of metamaterial on electromagnetic fields absorption characteristics of human eye tissues.

    PubMed

    Gasmelseed, Akram; Yunus, Jasmy

    2014-01-01

    The interaction of a dipole antenna with a human eye model in the presence of a metamaterial is investigated in this paper. The finite difference time domain (FDTD) method with a convolutional perfectly matched layer (CPML) formulation has been used. A three-dimensional anatomical model of the human eye with a resolution of 1.25 mm × 1.25 mm × 1.25 mm was used in this study. The dipole antenna was driven by a modulated Gaussian pulse, and the numerical study was performed with the dipole operating at 900 MHz. The analysis was done by varying the size and the electric permittivity of the metamaterial. By normalizing the peak SAR (1 g and 10 g) to 1 W for all examined cases, we observed that the SAR values are not affected by the different permittivity values when the size of the metamaterial is kept fixed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  7. Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks

    PubMed Central

    Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni

    2015-01-01

    Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient’s response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a “radiomics” approach whereby a large number of quantitative features are automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models. PMID:26355298

  8. Convolutional neural networks for event-related potential detection: impact of the architecture.

    PubMed

    Cecotti, H

    2017-07-01

    The detection of brain responses at the single-trial level in the electroencephalogram (EEG), such as event-related potentials (ERPs), is a difficult problem that requires different processing steps to extract relevant discriminant features. While most of the signal and classification techniques for the detection of brain responses are based on linear algebra, different pattern recognition techniques, such as the convolutional neural network (CNN), a type of deep learning technique, have attracted interest as they are able to process the signal after limited pre-processing. In this study, we propose to investigate the performance of CNNs in relation to their architecture and to how they are evaluated: a single system for each subject, or a system for all the subjects. More particularly, we want to address the change in performance that can be observed between specializing a neural network to a subject and considering a neural network for a group of subjects, taking advantage of the larger number of trials from different subjects. The results support the conclusion that a convolutional neural network trained on different subjects can lead to an AUC above 0.9 by using an appropriate architecture with spatial filtering and shift-invariant layers.

  9. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.

  10. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data [Regularized maximum likelihood estimation of ground reflectivity from stripmap SAR data]

    DOE PAGES

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    2016-12-01

    In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.

  11. Jitter Reduces Response-Time Variability in ADHD: An Ex-Gaussian Analysis.

    PubMed

    Lee, Ryan W Y; Jacobson, Lisa A; Pritchard, Alison E; Ryan, Matthew S; Yu, Qilu; Denckla, Martha B; Mostofsky, Stewart; Mahone, E Mark

    2015-09-01

    "Jitter" involves randomization of intervals between stimulus events. Compared with controls, individuals with ADHD demonstrate greater intrasubject variability (ISV) performing tasks with fixed interstimulus intervals (ISIs). Because Gaussian curves mask the effect of extremely slow or fast response times (RTs), ex-Gaussian approaches have been applied to study ISV. This study applied ex-Gaussian analysis to examine the effects of jitter on RT variability in children with and without ADHD. A total of 75 children, aged 9 to 14 years (44 ADHD, 31 controls), completed a go/no-go test with two conditions: fixed ISI and jittered ISI. ADHD children showed greater variability, driven by elevations in exponential (tau), but not normal (sigma) components of the RT distribution. Jitter decreased tau in ADHD to levels not statistically different than controls, reducing lapses in performance characteristic of impaired response control. Jitter may provide a nonpharmacologic mechanism to facilitate readiness to respond and reduce lapses from sustained (controlled) performance. © 2012 SAGE Publications.

  12. Entanglement-assisted quantum convolutional coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  13. A Scaling Model for the Anthropocene Climate Variability with Projections to 2100

    NASA Astrophysics Data System (ADS)

    Hébert, Raphael; Lovejoy, Shaun

    2017-04-01

    The determination of the climate sensitivity to radiative forcing is a fundamental climate science problem with important policy implications. We use a scaling model, with a limited set of parameters, which can directly calculate the forced globally-averaged surface air temperature response to anthropogenic and natural forcings. At timescales larger than an inner scale τ, which we determine as the ocean-atmosphere coupling scale at around 2 years, the global system responds approximately linearly, so that the variability may be decomposed into additive forced and internal components. The Ruelle response theory extends the classical linear response theory for small perturbations to systems far from equilibrium. Our model thus relates radiative forcings to a forced temperature response by convolution with a suitable Green's function, or climate response function. Motivated by scaling symmetries which allow for long-range dependence, we assume a general scaling form, a scaling climate response function (SCRF), which is able to produce a wide range of responses: a power law truncated at τ. This allows us to analytically calculate the climate sensitivity at different time scales, yielding a one-to-one relation from the transient climate response to the equilibrium climate sensitivity, which are estimated, respectively, as 1.6 (+0.3/−0.2) K and 2.4 (+1.3/−0.6) K at the 90% confidence level. The model parameters are estimated within a Bayesian framework, with a fractional Gaussian noise error model as the internal variability, from forcing series, instrumental surface temperature datasets, and CMIP5 GCM Representative Concentration Pathway (RCP) scenario runs. This observation-based model is robust, and projections for the coming century are made following the RCP scenarios 2.6, 4.5, and 8.5, yielding in the year 2100, respectively: 1.5 (+0.3/−0.2) K, 2.3 ± 0.4 K, and 4.0 ± 0.6 K at the 90% confidence level. For comparison, the associated projections from a CMIP5 multi-model ensemble (MME) (32 models) are: 1.7 ± 0.8 K, 2.6 ± 0.8 K, and 4.8 ± 1.3 K. Therefore, our projection uncertainty is less than half the structural uncertainty of this CMIP5 MME.
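
    The central convolution step of such a model is easy to sketch: the forced temperature response is the forcing series convolved with a truncated power-law climate response function. The functional form, normalization, and parameter values below are illustrative assumptions, not the paper's calibrated SCRF.

        import numpy as np

        def forced_response(forcing, dt, H=-0.5, tau=2.0, gain=0.5):
            """Convolve forcing (W/m^2) with a truncated power-law response
            function; H, tau (years), and gain (K per W/m^2) are illustrative."""
            t = np.arange(1, forcing.size + 1) * dt
            G = (1.0 + t / tau) ** (H - 1.0)        # power law, regularized at tau
            G *= gain / (G.sum() * dt)              # fix the equilibrium gain
            return np.convolve(forcing, G)[: forcing.size] * dt

        F = np.linspace(0.0, 3.7, 140)              # toy forcing ramp over 140 years
        T = forced_response(F, dt=1.0)              # forced temperature anomaly (K)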

  14. Comparison of breast DCE-MRI contrast time points for predicting response to neoadjuvant chemotherapy using deep convolutional neural network features with transfer learning

    NASA Astrophysics Data System (ADS)

    Huynh, Benjamin Q.; Antropova, Natasha; Giger, Maryellen L.

    2017-03-01

    DCE-MRI datasets have a temporal aspect to them, resulting in multiple regions of interest (ROIs) per subject, based on contrast time points. It is unclear how the different contrast time points vary in terms of usefulness for computer-aided diagnosis tasks in conjunction with deep learning methods. We thus sought to compare the different DCE-MRI contrast time points with regard to how well their extracted features predict response to neoadjuvant chemotherapy within a deep convolutional neural network. Our dataset consisted of 561 ROIs from 64 subjects. Each subject was categorized as a non-responder or responder, determined by recurrence-free survival. First, features were extracted from each ROI using a convolutional neural network (CNN) pre-trained on non-medical images. Linear discriminant analysis classifiers were then trained on varying subsets of these features, based on their contrast time points of origin. Leave-one-out cross validation (by subject) was used to assess performance in the task of estimating probability of response to therapy, with area under the ROC curve (AUC) as the metric. The classifier trained on features from strictly the pre-contrast time point performed the best, with an AUC of 0.85 (SD = 0.033). The remaining classifiers resulted in AUCs ranging from 0.71 (SD = 0.028) to 0.82 (SD = 0.027). Overall, we found the pre-contrast time point to be the most effective at predicting response to therapy and that including additional contrast time points moderately reduces variance.

  15. Groundwater response to changing water-use practices in sloping aquifers using convolution of transient response functions

    USDA-ARS?s Scientific Manuscript database

    This study examines the impact of a sloping base on the movement of transients through groundwater systems. Dimensionless variables and regression of model results are employed to develop functions relating the transient change in saturated thickness to the distance upgradient and downgradient from ...

  16. Groundwater response to changing water-use practices in sloping aquifers using convolution of transient response functions

    USDA-ARS?s Scientific Manuscript database

    An integrated foundation is presented to study the impacts of external forcings on irrigated agricultural systems. Individually, models are presented that simulate groundwater hydrogeology and econometric farm level crop choices and irrigated water use. The natural association between groundwater we...

  17. Convolutional Dictionary Learning: Acceleration and Convergence

    NASA Astrophysics Data System (ADS)

    Chun, Il Yong; Fessler, Jeffrey A.

    2018-04-01

    Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or the variant alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To moderate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and further accelerated by incorporating some momentum coefficient formulas and restarting techniques. All of the methods investigated incorporate a boundary artifacts removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without needing any parameter tuning process, the proposed BPG-M approach converges more stably to desirable solutions of lower objective values than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared to the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful in single-threaded CDL algorithms handling large datasets, due to its lower memory requirement and its lack of polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.

  18. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially-bifurcating models, which cannot be dealt with by using a single GP, although an open problem remains how to manage bifurcation boundaries that are not parallel to coordinate axes.
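
    A minimal sketch of the treed idea, assuming a hand-chosen split (the real method learns the partition with a decision tree): fit an independent Gaussian process on each side of the bifurcation and route predictions by region. Everything below is a toy stand-in, not the paper's Duffing or heart-valve models.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(2)
        X = rng.uniform(0.0, 1.0, (80, 1))
        # two regimes with different behavior on either side of x = 0.5
        y = np.where(X[:, 0] < 0.5, np.sin(8.0 * X[:, 0]), 2.0 + 0.3 * X[:, 0])
        y = y + 0.02 * rng.normal(size=80)

        split = 0.5                        # hand-chosen here; a treed GP learns this
        gps = []
        for mask in (X[:, 0] < split, X[:, 0] >= split):
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
            gps.append(gp.fit(X[mask], y[mask]))

        # route each query point to the GP fitted on its region
        pred = [gps[0 if v < split else 1].predict(np.array([[v]]))[0]
                for v in (0.25, 0.75)]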

  19. Mean First Passage Time and Stochastic Resonance in a Transcriptional Regulatory System with Non-Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Kang, Yan-Mei; Chen, Xi; Lin, Xu-Dong; Tan, Ning

    The mean first passage time (MFPT) in a phenomenological gene transcriptional regulatory model with non-Gaussian noise is analytically investigated based on the singular perturbation technique. The effect of the non-Gaussian noise on the phenomenon of stochastic resonance (SR) is then disclosed based on a new combination of adiabatic elimination and linear response approximation. Compared with the results in the Gaussian noise case, it is found that bounded non-Gaussian noise inhibits the transition between different concentrations of protein, while heavy-tailed non-Gaussian noise accelerates the transition. It is also found that the optimal noise intensity for SR in the heavy-tailed noise case is smaller, while the optimal noise intensity in the bounded noise case is larger. These observations can be explained by the heavy-tailed noise easing random transitions.

  20. Seismic body wave separation in volcano-tectonic activity inferred by the Convolutive Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Capuano, Paolo; De Lauro, Enza; De Martino, Salvatore; Falanga, Mariarosaria; Petrosino, Simona

    2015-04-01

    One of the main challenges in the volcano-seismological literature is to locate and characterize the source of volcano-tectonic seismic activity. This requires identifying at least the onsets of the main phases, i.e., the body waves. Many efforts have been made to solve the problem of a clear separation of P and S phases, both from a theoretical point of view and by developing numerical algorithms suitable for specific cases (see, e.g., Küperkoch et al., 2012). Recently, a robust automatic procedure has been implemented for extracting the prominent seismic waveforms from continuously recorded signals, thus allowing the main phases to be picked. The intuitive notion of maximum non-Gaussianity is achieved by adopting techniques which involve higher-order statistics in the frequency domain, i.e., Convolutive Independent Component Analysis (CICA). This technique is successful in the case of blind source separation of convolutive mixtures. In a seismological framework, indeed, seismic signals can be regarded as the convolution of a source function with path, site, and instrument responses. In addition, time-delayed versions of the same source exist, due to multipath propagation typically caused by reverberations from some obstacle. In this work, we focus on the Volcano Tectonic (VT) activity at Campi Flegrei Caldera (Italy) during the 2006 ground uplift (Ciaramella et al., 2011). The activity was characterized by approximately 300 low-magnitude VT earthquakes (Md < 2; for the definition of duration magnitude, see Petrosino et al., 2008). Most of them were concentrated in distinct seismic sequences with hypocenters mainly clustered beneath the Solfatara-Accademia area, at depths ranging between 1 and 4 km b.s.l. The obtained results show the clear separation of P and S phases: the technique not only allows the identification of the S-P time delay, giving the timing of both phases, but also provides the independent waveforms of the P and S phases. This is an enormous advantage for all the problems related to source inversion and location. In addition, the VT seismicity was accompanied by hundreds of LP events (characterized by spectral peaks in the 0.5-2-Hz frequency band) that were concentrated in a 7-day interval. The main interest is to establish whether the occurrence of LPs is limited to the swarm that reached a climax on days 26-28 October, as indicated by Saccorotti et al. (2007), or whether a longer period is involved. The automatically extracted waveforms with improved signal-to-noise ratio via CICA, coupled with automatic phase picking, allowed us to compile a more complete seismic catalog and to better quantify the seismic energy release, including the presence of LP events from the beginning of October until mid-November. Finally, a further check of the volcanic nature of the extracted signals is achieved by looking at the seismological properties and the entropy content of the traces (Falanga and Petrosino, 2012; De Lauro et al., 2012). Our results allow us to move towards a full description of the complexity of the source, which can be used for hazard-model development and forecast-model testing, and show an illustrative example of the applicability of the CICA method to regions with low seismicity and high ambient noise.

  1. Prediction, time variance, and classification of hydraulic response to recharge in two karst aquifers

    USGS Publications Warehouse

    Long, Andrew J.; Mahler, Barbara J.

    2013-01-01

    Many karst aquifers are rapidly filled and depleted and therefore are likely to be susceptible to changes in short-term climate variability. Here we explore methods that could be applied to model site-specific hydraulic responses, with the intent of simulating these responses to different climate scenarios from high-resolution climate models. We compare hydraulic responses (spring flow, groundwater level, stream base flow, and cave drip) at several sites in two karst aquifers: the Edwards aquifer (Texas, USA) and the Madison aquifer (South Dakota, USA). A lumped-parameter model simulates nonlinear soil moisture changes for estimation of recharge, and a time-variant convolution model simulates the aquifer response to this recharge. Model fit to data is 2.4% better for calibration periods than for validation periods according to the Nash–Sutcliffe coefficient of efficiency, which ranges from 0.53 to 0.94 for validation periods. We use metrics that describe the shapes of the impulse-response functions (IRFs) obtained from convolution modeling to make comparisons in the distribution of response times among sites and between aquifers. Time-variant IRFs were applied to 62% of the sites. Principal component analysis (PCA) of metrics describing the shapes of the IRFs indicates three principal components that together account for 84% of the variability in IRF shape: the first is related to IRF skewness and temporal spread and accounts for 51% of the variability; the second and third largely are related to time-variant properties and together account for 33% of the variability. Sites with IRFs that dominantly comprise exponential curves are separated geographically from those dominantly comprising lognormal curves in both aquifers as a result of spatial heterogeneity. The use of multiple IRF metrics in PCA is a novel method to characterize, compare, and classify the way in which different sites and aquifers respond to recharge. As convolution models are developed for additional aquifers, they could contribute to an IRF database and a general classification system for karst aquifers.
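
    The convolution step at the heart of such models is simple to illustrate: simulated spring flow is the recharge series convolved with an impulse-response function, here a lognormal IRF, one of the shapes discussed above. All parameter values are invented for illustration.

        import numpy as np

        def lognormal_irf(t, mu=2.0, sigma=0.8):
            """Lognormal impulse-response function on a daily grid (illustrative)."""
            irf = np.zeros_like(t)
            pos = t > 0
            irf[pos] = (np.exp(-((np.log(t[pos]) - mu) ** 2) / (2 * sigma ** 2))
                        / (t[pos] * sigma * np.sqrt(2 * np.pi)))
            return irf

        t = np.arange(0.0, 365.0)                           # days
        recharge = np.zeros_like(t)
        recharge[[10, 50, 51, 200]] = [5.0, 8.0, 6.0, 3.0]  # recharge pulses
        flow = np.convolve(recharge, lognormal_irf(t))[: t.size]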

  2. Mixed-effects Gaussian process functional regression models with application to dose-response curve prediction.

    PubMed

    Shi, J Q; Wang, B; Will, E J; West, R M

    2012-11-20

    We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Comparison of dynamical approximation schemes for nonlinear gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.

    1994-01-01

    We have recently conducted a controlled comparison of a number of approximations for gravitational clustering against the same n-body simulations. These include ordinary linear perturbation theory (Eulerian), the lognormal approximation, the adhesion approximation, the frozen-flow approximation, the Zel'dovich approximation (describable as first-order Lagrangian perturbation theory), and its second-order generalization. In the last two cases we also created new versions of the approximation by truncation, i.e., by smoothing the initial conditions with various smoothing window shapes and varying their sizes. The primary tool for comparing simulations to approximation schemes was cross-correlation of the evolved mass density fields, testing the extent to which mass was moved to the right place. The Zel'dovich approximation, with initial convolution with a Gaussian e^(-k²/k_G²), where k_G is adjusted to be just into the nonlinear regime of the evolved model (details in text), worked extremely well. Its second-order generalization worked slightly better. We recommend either n-body simulations or our modified versions of the Zel'dovich approximation, depending upon the purpose. The theoretical implication is that pancaking is implicit in all cosmological gravitational clustering, at least from Gaussian initial conditions, even when subcondensations are present. This in turn provides a natural explanation for the presence of sheets and filaments in the observed galaxy distribution. Use of the approximation scheme can permit extremely rapid generation of large numbers of realizations of model universes with good accuracy down to galaxy group mass scales.
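
    The truncation step can be sketched directly: multiply the Fourier transform of the initial density field by the Gaussian e^(-k²/k_G²) and transform back. The grid size, box size, and value of k_G below are illustrative, not the paper's settings.

        import numpy as np

        def gaussian_truncate(delta, boxsize, kG):
            """Smooth initial conditions with exp(-k^2 / kG^2) in Fourier space."""
            n = delta.shape[0]
            k1 = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
            kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
            k2 = kx ** 2 + ky ** 2 + kz ** 2
            return np.fft.ifftn(np.fft.fftn(delta) * np.exp(-k2 / kG ** 2)).real

        rng = np.random.default_rng(3)
        delta = rng.normal(size=(64, 64, 64))    # stand-in Gaussian initial field
        delta_s = gaussian_truncate(delta, boxsize=100.0, kG=0.5)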

  4. Optimal reduced-rank quadratic classifiers using the Fukunaga-Koontz transform with applications to automated target recognition

    NASA Astrophysics Data System (ADS)

    Huo, Xiaoming; Elad, Michael; Flesia, Ana G.; Muise, Robert R.; Stanfill, S. Robert; Friedman, Jerome; Popescu, Bogdan; Chen, Jihong; Mahalanobis, Abhijit; Donoho, David L.

    2003-09-01

    In target recognition applications of discriminant or classification analysis, each 'feature' is the result of a convolution of an image with a filter, which may be derived from a feature vector. It is important to use relatively few features. We analyze an optimal reduced-rank classifier for the two-class situation, assuming each population is Gaussian with zero mean and that the classes differ through their covariance matrices Σ1 and Σ2. The following matrix is considered: Λ = (Σ1 + Σ2)^(-1/2) Σ1 (Σ1 + Σ2)^(-1/2). We show that the k eigenvectors of this matrix whose eigenvalues are most different from 1/2 offer the best rank-k approximation to the maximum likelihood classifier. The matrix Λ and its eigenvectors were introduced by Fukunaga and Koontz; hence this analysis gives a new interpretation of the well-known Fukunaga-Koontz transform. The optimality promised by this method holds if the two populations are exactly Gaussian with the same means. To check the applicability of this approach to real data, an experiment was performed in which several 'modern' classifiers were used on infrared ATR data. In these experiments, a reduced-rank classifier, Tuned Basis Functions, outperformed the others. The competitive performance of the optimal reduced-rank quadratic classifier suggests that, at least for classification purposes, the imagery data behaves in a nearly Gaussian fashion.
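
    A numerical sketch of this construction, with invented covariances: symmetrically whiten by (Σ1 + Σ2)^(-1/2), eigendecompose Λ, and keep the k eigenvectors whose eigenvalues lie farthest from 1/2.

        import numpy as np
        from scipy.linalg import eigh, fractional_matrix_power

        def fukunaga_koontz(S1, S2, k):
            """Rank-k Fukunaga-Koontz directions for zero-mean Gaussian classes."""
            W = fractional_matrix_power(S1 + S2, -0.5)     # symmetric whitening
            lam, V = eigh(W @ S1 @ W)                      # eigenvalues lie in [0, 1]
            order = np.argsort(np.abs(lam - 0.5))[::-1]    # farthest from 1/2 first
            return W @ V[:, order[:k]], lam[order[:k]]

        rng = np.random.default_rng(4)
        A = rng.normal(size=(6, 6)); B = rng.normal(size=(6, 6))
        S1 = A @ A.T + np.eye(6); S2 = B @ B.T + np.eye(6)  # toy covariances
        U, lam = fukunaga_koontz(S1, S2, k=2)               # features: U.T @ x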

  5. Performance Analysis of a New Coded TH-CDMA Scheme in Dispersive Infrared Channel with Additive Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Hamdi, Mazda; Kenari, Masoumeh Nasiri

    2013-06-01

    We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links, and evaluate its performance for correlator and matched filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter employs a low-rate convolutional encoder. In this method, the bit interval is divided into Nc chips, and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where the multiple access interference has the dominant effect, the performance improves with the coding gain. But at low transmit power, where an increase in coding gain leads to a decrease in the chip time, and consequently to more corruption due to channel dispersion, there exists an optimum value for the coding gain. For the matched filter, however, the performance always improves with the coding gain. The results show that the matched filter receiver outperforms the correlation receiver in the considered cases. Our results show that, for the same bandwidth and bit rate, the proposed system outperforms other multiple access techniques, such as conventional CDMA and time-hopping schemes.

  6. The exact thermal rotational spectrum of a two-dimensional rigid rotor obtained using Gaussian wave packet dynamics

    NASA Technical Reports Server (NTRS)

    Reimers, J. R.; Heller, E. J.

    1985-01-01

    The exact thermal rotational spectrum of a two-dimensional rigid rotor is obtained using Gaussian wave packet dynamics. The spectrum is obtained by propagating, without approximation, infinite sets of Gaussian wave packets. These sets are constructed so that collectively they have the correct periodicity, and indeed, are coherent states appropriate to this problem. Also, simple, almost classical, approximations to full wave packet dynamics are shown to give results which are either exact or very nearly exact. Advantages of the use of Gaussian wave packet dynamics over conventional linear response theory are discussed.

  7. Tweaked residual convolutional network for face alignment

    NASA Astrophysics Data System (ADS)

    Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu

    2017-08-01

    We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module predicts the landmarks quickly, and accurately enough to serve as a preliminary estimate, by taking a low-resolution version of the detected face holistically as the input. The following Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.

  8. Response to "Comment on 'Stationary self-focusing of Gaussian laser beam in relativistic thermal quantum plasma'" [Phys. Plasmas 21, 064701 (2014)

    NASA Astrophysics Data System (ADS)

    Patil, S. D.; Takale, M. V.

    2014-06-01

    Habibi and Ghamari have presented a Comment on our paper [Phys. Plasmas 20, 072703 (2013)] by examining quantum dielectric response in thermal quantum plasma. They have modeled the relativistic self-focusing of Gaussian laser beam in cold and warm quantum plasmas and reported that self-focusing length does not change in both situations. In this response, we have reached the following important conclusions about the comment itself.

  9. Response to “Comment on ‘Stationary self-focusing of Gaussian laser beam in relativistic thermal quantum plasma’” [Phys. Plasmas 21, 064701 (2014)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patil, S. D., E-mail: sdpatil-phy@rediffmail.com; Takale, M. V.

    2014-06-15

    Habibi and Ghamari have presented a Comment on our paper [Phys. Plasmas 20, 072703 (2013)] by examining quantum dielectric response in thermal quantum plasma. They have modeled the relativistic self-focusing of Gaussian laser beam in cold and warm quantum plasmas and reported that self-focusing length does not change in both situations. In this response, we have reached the following important conclusions about the comment itself.

  10. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
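
    The density-estimation ingredient is illustrated below with scipy's Gaussian kernel density estimator on a bimodal, hence clearly non-Gaussian, sample. This is only the pdf-estimation step, not the paper's full Bayesian source-localisation pipeline.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(5)
        s = np.concatenate([rng.normal(-2.0, 0.5, 400),
                            rng.normal(2.0, 1.0, 600)])   # bimodal sample
        kde = gaussian_kde(s)                             # kernel density estimate
        grid = np.linspace(-5.0, 6.0, 200)
        pdf = kde(grid)         # estimated density; a single Gaussian would miss both modes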

  11. Effects of scale-dependent non-Gaussianity on cosmological structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LoVerde, Marilena; Miller, Amber; Shandera, Sarah

    2008-04-15

    The detection of primordial non-Gaussianity could provide a powerful means to test various inflationary scenarios. Although scale-invariant non-Gaussianity (often described by the f_NL formalism) is currently best constrained by the CMB, single-field models with changing sound speed can have strongly scale-dependent non-Gaussianity. Such models could evade the CMB constraints but still have important effects at scales responsible for the formation of cosmological objects such as clusters and galaxies. We compute the effect of scale-dependent primordial non-Gaussianity on cluster number counts as a function of redshift, using a simple ansatz to model scale-dependent features. We forecast constraints on these models achievable with forthcoming datasets. We also examine consequences for the galaxy bispectrum. Our results are relevant for the Dirac-Born-Infeld model of brane inflation, where the scale dependence of the non-Gaussianity is directly related to the geometry of the extra dimensions.

  12. Vibration analysis of FG cylindrical shells with power-law index using discrete singular convolution technique

    NASA Astrophysics Data System (ADS)

    Mercan, Kadir; Demir, Çiǧdem; Civalek, Ömer

    2016-01-01

    In the present manuscript, the free vibration response of circular cylindrical shells made of functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first-approximation shell theory. The material properties are graded in the thickness direction according to a volume-fraction power-law index. Frequency values are calculated for different types of boundary conditions and for various material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.

  13. Nonturbulent dispersion processes in complex terrain

    Treesearch

    Michael A. Fosberg; Douglas G. Fox; E.A. Howard; Jack D. Cohen

    1976-01-01

    Mass divergence influences on plume dispersion modify classic Gaussian calculations by as much as a factor of two in complex terrain. The Gaussian plume was derived in flux form to include this process. Authors' response to comments and criticism received following this publication:

  14. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
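
    One of the fundamentals covered by such reports fits in a few lines: the classic rate-1/2, constraint-length-3 convolutional encoder with octal generators (7, 5). This textbook example is for illustration only and is not claimed to reproduce the report's material.

        import numpy as np

        def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
            """Rate-1/2 convolutional encoder, generators (7, 5) in octal."""
            K = len(gens[0])                      # constraint length
            state = np.zeros(K - 1, dtype=int)    # shift-register contents
            out = []
            for b in bits:
                window = np.concatenate(([b], state))
                for g in gens:                    # one output bit per generator
                    out.append(int(np.dot(g, window) % 2))
                state = window[:-1]               # shift the register
            return out

        print(conv_encode([1, 0, 1, 1]))          # -> [1, 1, 1, 0, 0, 0, 0, 1]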

  15. Voice-onset time and buzz-onset time identification: A ROC analysis

    NASA Astrophysics Data System (ADS)

    Lopez-Bascuas, Luis E.; Rosner, Burton S.; Garcia-Albea, Jose E.

    2004-05-01

    Previous studies have employed signal detection theory to analyze data from speech and nonspeech experiments. Typically, signal distributions were assumed to be Gaussian. Schouten and van Hessen [J. Acoust. Soc. Am. 104, 2980-2990 (1998)] explicitly tested this assumption for an intensity continuum and a speech continuum. They measured response distributions directly and, assuming an interval scale, concluded that the Gaussian assumption held for both continua. However, Pastore and Macmillan [J. Acoust. Soc. Am. 111, 2432 (2002)] applied ROC analysis to Schouten and van Hessen's data, assuming only an ordinal scale. Their ROC curves supported the Gaussian assumption for the nonspeech signals only. Previously, Lopez-Bascuas [Proc. Audit. Bas. Speech Percept., 158-161 (1997)] found evidence with a rating scale procedure that the Gaussian model was inadequate for a voice-onset time continuum but not for a noise-buzz continuum. Both continua contained ten stimuli with asynchronies ranging from -35 ms to +55 ms. ROC curves (double-probability plots) are now reported for each pair of adjacent stimuli on the two continua. Both speech and nonspeech ROCs often appeared nonlinear, indicating non-Gaussian signal distributions under the usual zero-variance assumption for response criteria.
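
    The double-probability analysis described here amounts to z-transforming the hit and false-alarm rates obtained at each rating criterion: a straight line in (z(FA), z(Hit)) space is consistent with Gaussian signal distributions, while curvature argues against them. The simulated ratings below are purely illustrative.

        import numpy as np
        from scipy.stats import norm

        def zroc_points(ratings_s, ratings_n):
            """z-transformed ROC points from rating-scale data (sketch)."""
            crits = np.arange(1, max(ratings_s.max(), ratings_n.max()))
            hits = [(ratings_s > c).mean() for c in crits]
            fas = [(ratings_n > c).mean() for c in crits]
            return norm.ppf(fas), norm.ppf(hits)

        rng = np.random.default_rng(6)
        evid_s = rng.normal(1.0, 1.0, 2000)       # signal-trial evidence
        evid_n = rng.normal(0.0, 1.0, 2000)       # noise-trial evidence
        edges = np.array([-0.8, -0.2, 0.4, 1.0, 1.6])
        r_s = np.digitize(evid_s, edges) + 1      # 6-point confidence ratings
        r_n = np.digitize(evid_n, edges) + 1
        zfa, zhit = zroc_points(r_s, r_n)         # near-linear for Gaussian evidence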

  16. A Sparse Bayesian Approach for Forward-Looking Superresolution Radar Imaging

    PubMed Central

    Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu

    2017-01-01

    This paper presents a sparse superresolution approach for high cross-range resolution imaging of forward-looking scanning radar based on the Bayesian criterion. First, a novel forward-looking signal model is established as the product of the measurement matrix and the cross-range target distribution, which is more accurate than the conventional convolution model. Then, based on the Bayesian criterion, the widely-used sparse regularization is considered as the penalty term to recover the target distribution. The derivation of the cost function is described, and finally, an iterative expression for minimizing this function is presented. In addition, this paper discusses how to estimate the single parameter of the Gaussian noise. With the advantage of a more accurate model, the proposed sparse Bayesian approach enjoys a lower model error. Meanwhile, when compared with the conventional superresolution methods, the proposed approach shows high cross-range resolution and small location error. The superresolution results for the simulated point target, scene data, and real measured data are presented to demonstrate the superior performance of the proposed approach. PMID:28604583

  17. Quantitation of Fine Displacement in Echography

    NASA Astrophysics Data System (ADS)

    Masuda, Kohji; Ishihara, Ken; Yoshii, Ken; Furukawa, Toshiyuki; Kumagai, Sadatoshi; Maeda, Hajime; Kodama, Shinzo

    1993-05-01

    A high-speed digital subtraction echography system was developed to visualize the fine displacement of human internal organs. This method indicates differences in position through time-series images of high-frame-rate echography. Fine displacements of less than the ultrasonic wavelength can be observed. The method, however, lacks the ability to measure displacement length quantitatively. The subtraction between two successive images was affected by the displacement direction even when the displacement length was the same. To solve this problem, convolution of the echogram with a Gaussian distribution was used. To express displacement length quantitatively as brightness, normalization using the brightness gradient was applied. The quantitation algorithm was applied to successive B-mode images. Compared to the simply subtracted images, the quantitated images express the motion of organs more precisely. Expansion of the carotid artery and fine motion of the ventricular walls can be visualized more easily. Displacement length can be quantitated on the scale of the wavelength. Under more static conditions, this system quantitates displacement lengths much smaller than the wavelength.
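
    A rough sketch of the quantitation idea under stated assumptions: Gaussian-smooth two successive frames, then normalize the frame difference by the local brightness gradient, so that brightness approximates displacement along the gradient direction. The smoothing width, test images, and helper name are all invented.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def displacement_map(frame1, frame2, sigma=3.0, eps=1e-3):
            """Frame difference normalized by the local brightness gradient."""
            a = gaussian_filter(frame1.astype(float), sigma)
            b = gaussian_filter(frame2.astype(float), sigma)
            gmag = np.hypot(sobel(a, axis=1), sobel(a, axis=0))
            return (b - a) / (gmag + eps)    # brightness change per unit gradient

        rng = np.random.default_rng(7)
        f1 = gaussian_filter(rng.random((128, 128)), 5.0)  # synthetic echogram
        f2 = np.roll(f1, 1, axis=0)                        # small displacement stand-in
        d = displacement_map(f1, f2)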

  18. Error Control Techniques for Satellite and Space Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1996-01-01

    In this report, we present the results of our recent work on turbo coding in two formats. Appendix A includes the overheads of a talk that has been given at four different locations over the last eight months. This presentation has received much favorable comment from the research community and has resulted in the full-length paper included as Appendix B, 'A Distance Spectrum Interpretation of Turbo Codes'. Turbo codes use a parallel concatenation of rate 1/2 convolutional encoders combined with iterative maximum a posteriori probability (MAP) decoding to achieve a bit error rate (BER) of 10^(-5) at a signal-to-noise ratio (SNR) of only 0.7 dB. The channel capacity for a rate 1/2 code with binary phase-shift-keyed modulation on the AWGN (additive white Gaussian noise) channel is 0 dB, and thus the turbo coding scheme comes within 0.7 dB of capacity at a BER of 10^(-5).

  19. Isolating contour information from arbitrary images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1989-01-01

    Aspects of natural vision (physiological and perceptual) serve as a basis for attempting the development of a general processing scheme for contour extraction. Contour information is assumed to be central to visual recognition skills. While the scheme must be regarded as highly preliminary, initial results do compare favorably with the visual perception of structure. The scheme pays special attention to the construction of a smallest-scale circular difference-of-Gaussian (DOG) convolution, calibration of multiscale edge detection thresholds with the visual perception of grayscale boundaries, and contour/texture discrimination methods derived from fundamental assumptions of connectivity and the characteristics of printed text. Contour information is required to fall between a minimum connectivity limit and a maximum regional spatial density limit at each scale. Results support the idea that contour information, in images possessing good image quality, is carried by the higher spatial frequency channels (centered at about 10 cyc/deg and 30 cyc/deg). Further, lower spatial frequency channels appear to play a major role only in contour extraction from images with serious global image defects.

  20. Dynamical analysis of relaxation luminescence in ZnS:Er3+ thin film devices

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Jiang; Wu, Chen-Xu; Chen, Mou-Zhi; Huang, Mei-Chun

    2003-06-01

    The relaxation luminescence of ZnS:Er3+ thin film devices fabricated by thermal evaporation with two boats is studied. The dynamical processes of the luminescence of Er3+ in ZnS are described in terms of a resonant energy transfer model, assuming that the probability of collision excitation of injected electrons with luminescence centers is expressed as a Gaussian function. It is found that the frequency distribution depends on the Lorentzian function by considering the emission from excited states as a damped oscillator. Taking into consideration the energy storing effect of traps, an expression is obtained to describe a profile that contains multiple relaxation luminescence peaks using the convolution theorem. Fitting of experimental results shows that the relaxation characteristics of the electroluminescence are related to the carriers captured by bulk traps as well as by interface states. The numerical calculation carried out agrees well with the dynamical characteristics of relaxation luminescence obtained by experiments.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method, and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.

  2. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475

  3. Role of excited state solvent fluctuations on time-dependent fluorescence Stokes shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tanping, E-mail: tanping@lsu.edu; Kumar, Revati, E-mail: revatik@lsu.edu

    2015-11-07

    We explore the connection between the solvation dynamics of a chromophore upon photon excitation and equilibrium fluctuations of the solvent. Using molecular dynamics simulations, fluorescence Stokes shift for the tryptophan in Staphylococcus nuclease was examined using both nonequilibrium calculations and linear response theory. When the perturbed and unperturbed surfaces exhibit different solvent equilibrium fluctuations, the linear response approach on the former surface shows agreement with the nonequilibrium process. This agreement is excellent when the perturbed surface exhibits Gaussian statistics and qualitative in the case of an isomerization induced non-Gaussian statistics. However, the linear response theory on the unperturbed surface breaks down even in the presence of Gaussian fluctuations. Experiments also provide evidence of the connection between the excited state solvent fluctuations and the total fluorescence shift. These observations indicate that the equilibrium statistics on the excited state surface characterize the relaxation dynamics of the fluorescence Stokes shift. Our studies specifically analyze the Gaussian fluctuations of the solvent in the complex protein environment and further confirm the role of solvent fluctuations on the excited state surface. The results are consistent with previous investigations, found in the literature, of solutes dissolved in liquids.

  4. Approximate bandpass and frequency response models of the difference of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Birch, Philip; Mitra, Bhargav; Bangalore, Nagachetan M.; Rehman, Saad; Young, Rupert; Chatwin, Chris

    2010-12-01

    The Difference of Gaussian (DOG) filter is widely used in optics and image processing as, among other things, an edge detection and correlation filter. It has important biological applications and appears to be part of the mammalian vision system. In this paper we analyse the filter and provide details of the full width half maximum, bandwidth and frequency response in order to aid the full characterisation of its performance.
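
    The characterisation described here is easy to evaluate numerically: for unit-area Gaussians, the DOG transfer function is H(f) = exp(-2π²σ1²f²) − exp(-2π²σ2²f²), a band-pass whose peak frequency and full width at half maximum the sketch below locates on a grid. The σ values follow the conventional ~1:1.6 ratio and are chosen purely for illustration.

        import numpy as np

        sigma1, sigma2 = 1.0, 1.6           # center/surround widths (1:1.6 ratio)
        f = np.linspace(0.0, 1.0, 4001)
        H = (np.exp(-2 * np.pi ** 2 * sigma1 ** 2 * f ** 2)
             - np.exp(-2 * np.pi ** 2 * sigma2 ** 2 * f ** 2))  # band-pass response
        f_peak = f[np.argmax(H)]            # peak frequency of the passband
        band = f[H >= H.max() / 2]
        fwhm = band[-1] - band[0]           # full width at half maximum
        print(f_peak, fwhm)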

  5. Robust visual tracking based on deep convolutional neural networks and kernelized correlation filters

    NASA Astrophysics Data System (ADS)

    Yang, Hua; Zhong, Donghong; Liu, Chenyi; Song, Kaiyou; Yin, Zhouping

    2018-03-01

    Object tracking is still a challenging problem in computer vision, as it entails learning an effective model to account for appearance changes caused by occlusion, out-of-view motion, plane rotation, scale change, and background clutter. This paper proposes a robust visual tracking algorithm, called DCNNCT, based on a deep convolutional neural network (DCNN) and kernelized correlation filters, to simultaneously address these challenges. The proposed DCNNCT algorithm utilizes a DCNN to extract the image features of the tracked target, and the full range of information regarding each convolutional layer is used to express these features. Subsequently, the kernelized correlation filters (CF) in each convolutional layer are adaptively learned, and their correlation response maps are combined to estimate the location of the tracked target. To handle cases of tracking failure, an online random ferns classifier is employed to redetect the tracked target, and a dual-threshold scheme is used to obtain the final target location by comparing the tracking result with the detection result. Finally, the change in scale of the target is determined by building scale pyramids and training a CF. Extensive experiments demonstrate that the proposed algorithm is effective at tracking, especially when evaluated using an index called the overlap rate. The DCNNCT algorithm is also highly competitive in terms of robustness with respect to state-of-the-art trackers in various challenging scenarios.
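
The sketch below illustrates the correlation-filter building block in a greatly simplified form: a single-channel linear filter (MOSSE-style) learned in closed form in the Fourier domain and applied to a shifted frame. The kernelized, multi-layer version in the abstract adds feature channels and a kernel trick on top of this; everything here is a stand-in, not the paper's method.

```python
# Greatly simplified, single-channel linear correlation filter (MOSSE-style).
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """Desired response: a 2D Gaussian peaked at the patch centre."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
target = rng.standard_normal((64, 64))          # feature map of the target patch
G = np.fft.fft2(gaussian_label(*target.shape))  # desired response, Fourier domain
F = np.fft.fft2(target)
lam = 1e-2                                      # regularizer

H = (G * np.conj(F)) / (F * np.conj(F) + lam)   # closed-form filter

# Response map on a new frame: the correlation peak marks the estimated location.
frame = np.roll(target, (5, -3), axis=(0, 1)) + 0.1 * rng.standard_normal(target.shape)
resp = np.real(np.fft.ifft2(H * np.fft.fft2(frame)))
dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
print(f"peak at ({dy}, {dx}), i.e. shift from centre = ({dy - 32}, {dx - 32})")
```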

  6. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
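
For concreteness, here is a small sketch of the conventional trellis and the edges-per-encoded-bit complexity measure, using the familiar memory-6, rate-1/2 code with octal generators (171, 133) as an example; the shift-register convention below is one common choice, not the article's notation.

```python
# Sketch: enumerate one section of the conventional trellis of a rate-1/n
# feedforward convolutional encoder and count trellis edges per encoded bit.
def conventional_trellis(gens_octal, memory):
    gens = [int(g, 8) for g in gens_octal]
    edges = []
    for state in range(1 << memory):          # 2^m states
        for bit in (0, 1):                    # one information bit per section
            reg = (bit << memory) | state     # newest bit enters at the top
            out = tuple(bin(reg & g).count("1") % 2 for g in gens)
            edges.append((state, bit, reg >> 1, out))
    return edges, len(gens)

edges, n = conventional_trellis(("171", "133"), memory=6)
print(f"{len(edges)} edges per section -> "
      f"{len(edges) / n:.0f} edges per encoded bit")   # 2^(m+1) / n
```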

  7. Fatigue assessment of vibrating rail vehicle bogie components under non-Gaussian random excitations using power spectral densities

    NASA Astrophysics Data System (ADS)

    Wolfsteiner, Peter; Breuer, Werner

    2013-10-01

    The assessment of fatigue load under random vibrations is usually based on load spectra. Typically they are computed with counting methods (e.g. Rainflow) based on a time domain signal. Alternatively, methods are available (e.g. Dirlik) enabling the estimation of load spectra directly from power spectral densities (PSDs) of the corresponding time signals; knowledge of the time signal is then not necessary. These PSD based methods have the enormous advantage that if, for example, the signal to assess results from a finite element method based vibration analysis, the computation time of the simulation of PSDs in the frequency domain outmatches by far the simulation of time signals in the time domain. This is especially true for random vibrations with very long signals in the time domain. The disadvantage of the PSD based simulation of vibrations, and also of the PSD based load spectra estimation, is the limitation to Gaussian distributed time signals. Deviations from this Gaussian distribution cause relevant deviations in the estimated load spectra. In these cases usually only computation time intensive time domain calculations produce accurate results. This paper presents a method dealing with non-Gaussian signals with real statistical properties that is still able to use the efficient PSD approach with its computation time advantages. Essentially it is based on a decomposition of the non-Gaussian signal into Gaussian distributed parts. The PSDs of these rearranged signals are then used to perform the usual PSD analyses. In particular, detailed methods are described for the decomposition of time signals and the derivation of PSDs and cross power spectral densities (CPSDs) from multiple real measurements without using inaccurate standard procedures. Furthermore, the basic intention is to design a general and integrated method that is not just able to analyse a certain single load case for a small time interval, but to generate representative PSD and CPSD spectra replacing extensive measured loads in the time domain without losing the necessary accuracy for the fatigue load results. These long measurements may even represent the whole application range of the railway vehicle. The presented work demonstrates the application of this method to railway vehicle components subjected to random vibrations caused by the wheel-rail contact. Extensive measurements of axle box accelerations have been used to verify the proposed procedure for this class of railway vehicle applications. The assumption of linearity is not a real limitation, because the structural vibrations caused by the random excitations are usually small for rail vehicle applications; the impact of nonlinearities is usually covered by separate nonlinear models and is only needed for the deterministic part of the loads. Linear vibration systems subjected to Gaussian excitations respond with vibrations that also have a Gaussian distribution. A non-Gaussian distribution in the excitation signal produces a non-Gaussian response with statistical properties different from those of the excitation. A drawback is the fact that there is no simple mathematical relation between excitation and response concerning these deviations from the Gaussian distribution (see e.g. Ito calculus [6], which is usually not part of commercial codes!).
There are a couple of well-established procedures for the prediction of fatigue load spectra from PSDs designed for Gaussian loads (see [4]); the question of the impact of non-Gaussian distributions on the fatigue load prediction has been studied for decades (see e.g. [3,4,11-13]) and is still the subject of ongoing research; e.g. [13] proposed a procedure capable of considering non-Gaussian broadband loads. It is based on knowledge of the response PSD and some statistical data defining the non-Gaussian character of the underlying time signal. As described above, these statistical data are usually not available for a PSD vibration response that has been calculated in the frequency domain. Summarizing the above, and considering that the wheel-rail contact produces highly non-Gaussian excitations on railway vehicles, the fast PSD analysis in the frequency domain cannot simply be combined with load spectra prediction methods for PSDs.
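
A minimal sketch of the Gaussian baseline these PSD methods start from, using an assumed toy response PSD: the spectral moments give the zero up-crossing rate and the bandwidth (irregularity) factor, and the narrow-band assumption yields a Rayleigh amplitude distribution. Dirlik's broadband formula, and the decomposition proposed above for non-Gaussian signals, refine exactly this step.

```python
# Sketch under assumptions: spectral moments of a toy response PSD and the
# narrow-band Gaussian (Rayleigh) load-spectrum estimate.
import numpy as np

f = np.linspace(0.1, 100.0, 2000)                 # frequency axis [Hz]
df = f[1] - f[0]
psd = 1.0 / (1.0 + ((f - 12.0) / 2.0) ** 2)       # toy single-peak response PSD

m0, m1, m2, m4 = [np.sum(psd * f ** k) * df for k in (0, 1, 2, 4)]
nu0 = np.sqrt(m2 / m0)                            # zero up-crossing rate [Hz]
alpha2 = m2 / np.sqrt(m0 * m4)                    # irregularity (bandwidth) factor

# Narrow-band assumption: amplitudes are Rayleigh distributed; the expected
# number of cycles per second exceeding amplitude S gives the load spectrum.
S = np.linspace(0.0, 6.0 * np.sqrt(m0), 500)
exceedance_rate = nu0 * np.exp(-S ** 2 / (2.0 * m0))

print(f"nu0 = {nu0:.2f} Hz, irregularity factor alpha2 = {alpha2:.2f}")
```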

  8. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PRelu activation function is studied in order to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which makes it very suitable for processing images. Using a deep convolutional neural network is better than directly extracting visual features of the image for image retrieval. However, the structure of a deep convolutional neural network is complex, so it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PRelu activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
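
A minimal PyTorch sketch of the combination described (the architecture, embedding head, and hyperparameters are placeholders, not the paper's network): PReLU activations throughout, plus an explicit L1 penalty on the weights added to the task loss.

```python
# Sketch (assumed architecture): a small CNN with PReLU activations, trained
# with an explicit L1 penalty to discourage over-fitting.
import torch
import torch.nn as nn

class RetrievalCNN(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(),
            nn.MaxPool2d(2),
        )
        self.embed = nn.Linear(32 * 8 * 8, embed_dim)  # assumes 32x32 inputs

    def forward(self, x):
        return self.embed(self.features(x).flatten(1))

model = RetrievalCNN()
x = torch.randn(4, 3, 32, 32)
target = torch.randn(4, 64)
l1_lambda = 1e-4

loss = nn.functional.mse_loss(model(x), target)
# L1 regularization: add the summed absolute weights to the task loss.
loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())
loss.backward()
print(f"total loss with L1 penalty: {loss.item():.3f}")
```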

  9. Extraction of Built-Up Areas Using Convolutional Neural Networks and Transfer Learning from SENTINEL-2 Satellite Images

    NASA Astrophysics Data System (ADS)

    Bramhe, V. S.; Ghosh, S. K.; Garg, P. K.

    2018-04-01

    With rapid globalization, the extent of built-up areas is continuously increasing. The extraction of robust and abstract features for classifying built-up areas has been a leading research topic for many years. Various studies have utilized spatial information along with spectral features to enhance classification accuracy; still, these feature extraction techniques require a large number of user-specific parameters and are generally application specific. Recently introduced Deep Learning (DL) techniques, in contrast, require fewer parameters to represent more abstract aspects of the data without any manual effort. Because it is difficult to acquire high-resolution datasets for applications that require large-scale monitoring of areas, Sentinel-2 images have been used in this study for built-up area extraction. In this work, pre-trained Convolutional Neural Networks (ConvNets), i.e. Inception v3 and VGGNet, are employed for transfer learning. Since these networks are trained on the generic images of the ImageNet dataset, which have very different characteristics from satellite images, the weights of the networks are fine-tuned using data derived from Sentinel-2 images. To compare the accuracies with existing shallow networks, two state-of-the-art classifiers, i.e. Gaussian Support Vector Machine (SVM) and Back-Propagation Neural Network (BP-NN), are also implemented. SVM and BP-NN give overall accuracies of 84.31 % and 82.86 %, respectively, whereas the fine-tuned networks give 89.43 % (VGGNet) and 92.10 % (Inception-v3). The results indicate the high accuracy of the proposed fine-tuned ConvNets on a 4-channel Sentinel-2 dataset for built-up area extraction.
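
A short sketch of this transfer-learning setup (the two-class head and the freezing choice are assumptions; the weights API is as in recent torchvision, ≥ 0.13): load an ImageNet-pretrained VGG16 and replace the classifier head for fine-tuning. A true 4-channel Sentinel-2 input would additionally require adapting the first convolution layer.

```python
# Sketch: fine-tuning an ImageNet-pretrained VGG16 for a binary
# built-up / non-built-up decision (head and freezing choices assumed).
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the generic convolutional features; fine-tune only the classifier.
for p in model.features.parameters():
    p.requires_grad = False

# Replace the final 1000-class ImageNet head with a 2-class head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
```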

  10. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.

    PubMed

    Khoromskaia, Venera; Khoromskij, Boris N

    2015-12-21

    We resume the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of the multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D complexity operations, and have evolved to an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in a form of the Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on using low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards the tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculation of a potential sum on an L × L × L lattice manifests linear-in-L computational work, O(L), instead of the usual O(L³ log L) scaling of the Ewald-type approaches.
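
The storage saving behind the low-rank representation can be seen in a toy numpy example: a 3D Gaussian is separable, so a rank-1 canonical tensor of three length-n factors replaces the full n³ grid, and quantities such as the L2 norm factorize dimension by dimension.

```python
# Toy illustration of the low-rank idea: a 3D Gaussian on an n^3 Cartesian grid
# stored as three length-n factors, so storage and basic operations cost O(n).
import numpy as np

n = 128
x = np.linspace(-5, 5, n)
g1d = np.exp(-x ** 2 / 2)          # one 1D factor; a 3D Gaussian is separable

# Full tensor built only for verification; never needed in the tensor format.
full = np.einsum("i,j,k->ijk", g1d, g1d, g1d)
print(full.size, "grid entries vs", 3 * n, "stored numbers")

# Example O(n) operation in the format: the L2 norm factorizes per dimension.
norm_rank1 = np.sqrt(np.sum(g1d ** 2)) ** 3
assert np.isclose(norm_rank1, np.linalg.norm(full.ravel()))
```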

  12. Fully Convolutional Network-Based Multifocus Image Fusion.

    PubMed

    Guo, Xiaopeng; Nie, Rencan; Cao, Jinde; Zhou, Dongming; Qian, Wenhua

    2018-07-01

    As the optical lenses of cameras always have a limited depth of field, captured images of the same scene are not all in focus. Multifocus image fusion is an efficient technology that can synthesize an all-in-focus image from several partially focused images. Previous methods have accomplished the fusion task in spatial or transform domains. However, fusion rules are always a problem in most methods. In this letter, from the aspect of focus region detection, we propose a novel multifocus image fusion method based on a fully convolutional network (FCN) learned from synthesized multifocus images. The primary novelty of this method is that the pixel-wise focus regions are detected through a learned FCN, and the entire image, not just image patches, is exploited to train the FCN. First, we synthesize 4500 pairs of multifocus images by repeatedly applying a Gaussian filter to each image from PASCAL VOC 2012 to train the FCN. After that, a pair of source images is fed into the trained FCN, and two score maps indicating the focus property are generated. Next, an inverted score map is averaged with the other score map to produce an aggregative score map, which takes full advantage of the focus probabilities in the two score maps. We apply a fully connected conditional random field (CRF) to the aggregative score map to produce and refine a binary decision map for the fusion task. Finally, we exploit a weighted strategy based on the refined decision map to produce the fused image. To demonstrate the performance of the proposed method, we compare its fused results with several state-of-the-art methods on both a gray data set and a color data set. Experimental results show that the proposed method can achieve superior fusion performance in both human visual quality and objective assessment.
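
A sketch of the training-data synthesis step (mask geometry and blur width are assumed): blur complementary regions of a sharp image with a Gaussian filter to make a multifocus pair whose ground-truth focus map is the mask itself.

```python
# Sketch: synthesizing a multifocus training pair with a Gaussian filter.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))            # stand-in for a PASCAL VOC image
blurred = gaussian_filter(sharp, sigma=3.0)

mask = np.zeros_like(sharp, dtype=bool)
mask[:, :64] = True                       # left half "in focus" in image A

img_a = np.where(mask, sharp, blurred)    # A: left sharp, right defocused
img_b = np.where(mask, blurred, sharp)    # B: complementary focus
focus_map_a = mask.astype(np.float32)     # pixel-wise label for the FCN
```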

  13. Agile convolutional neural network for pulmonary nodule classification using CT images.

    PubMed

    Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei

    2018-04-01

    To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.

  14. Efficient 3D Watermarked Video Communication with Chaotic Interleaving, Convolution Coding, and LMMSE Equalization

    NASA Astrophysics Data System (ADS)

    El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.

    2017-06-01

    Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from Macro-Blocks losses due to either packet dropping or fading-motivated bit errors. Thus, the robust performance of 3D-MVV transmission schemes over wireless channels becomes a recent considerable hot research issue due to the restricted resources and the presence of severe channel errors. The 3D-MVV is composed of multiple video streams shot by several cameras around a single object, simultaneously. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, the highly-compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD watermarked 3D-MVV frames are primarily converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It is used to reduce the channel effects on the transmitted bit streams and it also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework; several simulation experiments on different SVD watermarked 3D-MVV frames have been executed. The experimental results show that the received SVD watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios and watermark extraction is possible in the proposed framework.

  15. Magnetic field influences on the lateral dose response functions of photon-beam detectors: MC study of wall-less water-filled detectors with various densities.

    PubMed

    Looe, Hui Khee; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2017-06-21

    The distortion of detector reading profiles across photon beams in the presence of magnetic fields is a developing subject of clinical photon-beam dosimetry. The underlying modification by the Lorentz force of a detector's lateral dose response function-the convolution kernel transforming the true cross-beam dose profile in water into the detector reading profile-is here studied for the first time. The three basic convolution kernels, the photon fluence response function, the dose deposition kernel, and the lateral dose response function, of wall-less cylindrical detectors filled with water of low, normal and enhanced density are shown by Monte Carlo simulation to be distorted in the prevailing direction of the Lorentz force. The asymmetric shape changes of these convolution kernels in a water medium and in magnetic fields of up to 1.5 T are confined to the lower millimetre range, and they depend on the photon beam quality, the magnetic flux density and the detector's density. The impact of this distortion on detector reading profiles is demonstrated using a narrow photon beam profile. For clinical applications it appears as favourable that the magnetic flux density dependent distortion of the lateral dose response function, as far as secondary electron transport is concerned, vanishes in the case of water-equivalent detectors of normal water density. By means of secondary electron history backtracing, the spatial distribution of the photon interactions giving rise either directly to secondary electrons or to scattered photons further downstream producing secondary electrons which contribute to the detector's signal, and their lateral shift due to the Lorentz force is elucidated. Electron history backtracing also serves to illustrate the correct treatment of the influences of the Lorentz force in the EGSnrc Monte Carlo code applied in this study.
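
The role of the lateral dose response function as a convolution kernel can be sketched in 1D (both kernels below are toy shapes, not Monte Carlo results): a symmetric kernel for B = 0 and a skewed mixture standing in for the Lorentz-force distortion.

```python
# Sketch (toy kernels): the detector reading profile as the convolution of the
# true cross-beam dose profile with the lateral dose response function.
import numpy as np

x = np.arange(-20.0, 20.0, 0.1)                  # lateral position [mm]
dx = 0.1

dose = ((x > -2.5) & (x < 2.5)).astype(float)    # idealized narrow-beam profile

def gauss(x, mu, s):
    return np.exp(-((x - mu) ** 2) / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

k_b0 = gauss(x, 0.0, 1.0)                        # symmetric kernel, B = 0
k_b = 0.5 * gauss(x, -0.4, 0.8) + 0.5 * gauss(x, 0.6, 1.3)  # skewed (toy), B > 0

reading_b0 = np.convolve(dose, k_b0, mode="same") * dx
reading_b = np.convolve(dose, k_b, mode="same") * dx

# The Lorentz-force distortion shows up as a shifted, asymmetric profile.
print(f"profile centroid shift: {np.sum(x * reading_b) / np.sum(reading_b):+.2f} mm")
```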

  16. New stochastic approach for extreme response of slow drift motion of moored floating structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kato, Shunji; Okazaki, Takashi

    1995-12-31

    A new stochastic method for investigating the slow drift response statistics of moored floating structures is described. Assuming that the wave drift excitation process can be driven by a Gaussian white noise process, an exact stochastic equation governing the time evolution of the response Probability Density Function (PDF), a generalized Fokker-Planck (GFP) equation, is derived on the basis of the projection operator technique from statistical physics. In order to get an approximate solution of the GFP equation, the authors develop a renormalized perturbation technique, a kind of singular perturbation method, and solve the GFP equation taking into account up to third order moments of a non-Gaussian excitation. As an example of the present method, a closed form of the joint PDF is derived for the linear response in surge motion subjected to a non-Gaussian wave drift excitation; it is represented by the product of a form factor and quasi-Cauchy PDFs. In this case, the motion displacement and velocity processes are not mutually independent if the excitation process has a significant third order moment. From a comparison between the response PDF given by the present solution and the exact one derived by Naess, it is found that the present solution is effective for calculating both the response PDF and the joint PDF. Furthermore it is shown that the displacement-velocity independence is satisfied if the damping coefficient in the equation of motion is not too large and that both the non-Gaussian property of the excitation and the damping coefficient should be taken into account when estimating the probability of exceedance of the response.

  17. Yarn-dyed fabric defect classification based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method by using a convolutional neural network (CNN) based on a modified AlexNet. CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of human brain. The local response normalization layers in AlexNet are replaced by the batch normalization layers, which can enhance both the computational efficiency and classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.

  18. Yarn-dyed fabric defect classification based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Jing, Junfeng; Dong, Amei; Li, Pengfei

    2017-07-01

    Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, a convolutional neural network (CNN) solution based on the modified AlexNet structure is proposed for the classification of yarn-dyed fabric defects. A CNN has a powerful ability for feature extraction and feature fusion, simulating the learning mechanism of the human brain. In order to enhance computational efficiency and detection accuracy, the local response normalization (LRN) layers in AlexNet are replaced by batch normalization (BN) layers. In the process of network training, the characteristics of the image are extracted step by step through several convolution operations, and the essential features of the image can be obtained from the edge features. Max-pooling layers, dropout layers, and fully connected layers are also employed in the classification model to reduce the computation cost and acquire more precise features of fabric defects. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show the capability of defect classification via the modified AlexNet model and indicate its robustness.

  19. Comment on 'Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study'.

    PubMed

    Valdes, Gilmer; Interian, Yannet

    2018-03-15

    The application of machine learning (ML) presents tremendous opportunities for the field of oncology, thus we read 'Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study' with great interest. In this article, the authors used state of the art techniques: a pre-trained convolutional neural network (VGG-16 CNN), transfer learning, data augmentation, drop out and early stopping, all of which are directly responsible for the success and the excitement that these algorithms have created in other fields. We believe that the use of these techniques can offer tremendous opportunities in the field of Medical Physics and as such we would like to praise the authors for their pioneering application to the field of Radiation Oncology. That being said, given that the field of Medical Physics has unique characteristics that differentiate us from those fields where these techniques have been applied successfully, we would like to raise some points for future discussion and follow up studies that could help the community understand the limitations and nuances of deep learning techniques.

  20. Deep multi-scale convolutional neural network for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. First, in contrast with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Second, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and thereby slightly improves the classification accuracy. In addition, recent deep learning techniques such as ReLU are also utilized. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
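
A PyTorch sketch of such a multi-scale convolution layer (the channel counts and the specific kernel sizes 3/5/7 are assumptions): three parallel branches with different receptive fields, concatenated along the channel axis.

```python
# Sketch (assumed sizes): a multi-scale convolution layer with 3 kernel sizes.
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    def __init__(self, in_ch, out_ch_per_branch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch_per_branch, k, padding=k // 2)
            for k in (3, 5, 7)          # 3 kernel sizes -> 3 receptive fields
        ])
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

layer = MultiScaleConv(in_ch=103, out_ch_per_branch=16)   # 103 Pavia bands
x = torch.randn(2, 103, 11, 11)        # batch of 11x11 spatial patches
print(layer(x).shape)                  # torch.Size([2, 48, 11, 11])
```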

  1. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  2. Three-dimensional imaging of the ultracold plasma formed in a supersonic molecular beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz-Weiling, Markus; Grant, Edward

    Double-resonant excitation of nitric oxide in a seeded supersonic molecular beam forms a state-selected Rydberg gas that evolves to form an ultracold plasma. This plasma travels with the propagation of the molecular beam in z over a variable distance as great as 600 mm to strike an imaging detector, which records the charge distribution in the dimensions x and y. The ω1 + ω2 laser crossed molecular beam excitation geometry convolutes the axial Gaussian distribution of NO in the molecular beam with the Gaussian intensity distribution of the perpendicularly aligned laser beam to create an ellipsoidal volume of Rydberg gas. Detected images describe the evolution of this initial density as a function of selected Rydberg gas initial principal quantum number, n0, ω1 laser pulse energy (linearly related to Rydberg gas density, ρ0) and flight time. Low-density Rydberg gases of lower principal quantum number produce uniformly expanding, ellipsoidal charge-density distributions. Increasing either n0 or ρ0 breaks the ellipsoidal symmetry of plasma expansion. The volume bifurcates to form repelling plasma volumes. The velocity of separation depends on n0 and ρ0 in a way that scales uniformly with ρe, the density of electrons formed in the core of the Rydberg gas by prompt Penning ionization. Conditions under which this electron gas drives expansion in the long-axis dimension of the ellipsoid favour the formation of counter-propagating shock waves.

  3. SPAK-mediated NCC regulation in response to low-K+ diet.

    PubMed

    Wade, James B; Liu, Jie; Coleman, Richard; Grimm, P Richard; Delpire, Eric; Welling, Paul A

    2015-04-15

    The NaCl cotransporter (NCC) of the renal distal convoluted tubule is stimulated by low-K(+) diet by an unknown mechanism. Since recent work has shown that the STE20/SPS-1-related proline-alanine-rich protein kinase (SPAK) can function to stimulate NCC by phosphorylation of specific N-terminal sites, we investigated whether the NCC response to low-K(+) diet is mediated by SPAK. Using phospho-specific antibodies in Western blot and immunolocalization studies of wild-type and SPAK knockout (SPAK(-/-)) mice fed a low-K(+) or control diet for 4 days, we found that low-K(+) diet strongly increased total NCC expression and phosphorylation of NCC. This was associated with an increase in total SPAK expression in cortical homogenates and an increase in phosphorylation of SPAK at the S383 activation site. The increased pNCC in response to low-K(+) diet was blunted but not completely inhibited in SPAK(-/-) mice. These findings reveal that SPAK is an important mediator of the increased NCC activation by phosphorylation that occurs in the distal convoluted tubule in response to a low-K(+) diet, but other low-potassium-activated kinases are likely to be involved. Copyright © 2015 the American Physiological Society.

  4. Event rate and reaction time performance in ADHD: Testing predictions from the state regulation deficit hypothesis using an ex-Gaussian model.

    PubMed

    Metin, Baris; Wiersema, Jan R; Verguts, Tom; Gasthuys, Roos; van Der Meere, Jacob J; Roeyers, Herbert; Sonuga-Barke, Edmund

    2016-01-01

    According to the state regulation deficit (SRD) account, ADHD is associated with a problem using effort to maintain an optimal activation state under demanding task settings such as very fast or very slow event rates. This leads to a prediction of disrupted performance at event rate extremes reflected in higher Gaussian response variability that is a putative marker of activation during motor preparation. In the current study, we tested this hypothesis using ex-Gaussian modeling, which distinguishes Gaussian from non-Gaussian variability. Twenty-five children with ADHD and 29 typically developing controls performed a simple Go/No-Go task under four different event-rate conditions. There was an accentuated quadratic relationship between event rate and Gaussian variability in the ADHD group compared to the controls. The children with ADHD had greater Gaussian variability at very fast and very slow event rates but not at moderate event rates. The results provide evidence for the SRD account of ADHD. However, given that this effect did not explain all group differences (some of which were independent of event rate) other cognitive and/or motivational processes are also likely implicated in ADHD performance deficits.
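
A minimal sketch of ex-Gaussian fitting (the parameter values are illustrative, not taken from the study): simulate reaction times as a Gaussian plus an exponential component and recover mu, sigma, and tau by maximum likelihood via scipy's exponnorm, which parameterizes the distribution with K = tau/sigma.

```python
# Sketch: simulating ex-Gaussian reaction times and recovering the parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, tau = 0.40, 0.05, 0.15     # seconds (illustrative values)

rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# scipy's exponnorm uses K = tau / sigma, loc = mu, scale = sigma.
K, loc, scale = stats.exponnorm.fit(rt)
print(f"mu ~ {loc:.3f}, sigma ~ {scale:.3f}, tau ~ {K * scale:.3f}")
```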

  5. Quantification and classification of neuronal responses in kernel-smoothed peristimulus time histograms

    PubMed Central

    Fried, Itzhak; Koch, Christof

    2014-01-01

    Peristimulus time histograms are a widespread form of visualizing neuronal responses. Kernel convolution methods transform these histograms into a smooth, continuous probability density function. This provides an improved estimate of a neuron's actual response envelope. Here we develop a classifier, called the h-coefficient, to determine whether time-locked fluctuations in the firing rate of a neuron should be classified as a response or as random noise. Unlike previous approaches, the h-coefficient takes advantage of the more precise response envelope estimation provided by the kernel convolution method. The h-coefficient quantizes the smoothed response envelope and calculates the probability of a response of a given shape occurring by chance. We tested the efficacy of the h-coefficient in a large data set of Monte Carlo simulated smoothed peristimulus time histograms with varying response amplitudes, response durations, trial numbers, and baseline firing rates. Across all these conditions, the h-coefficient significantly outperformed more classical classifiers, with a mean false alarm rate of 0.004 and a mean hit rate of 0.494. We also tested the h-coefficient's performance in a set of neuronal responses recorded in humans. The algorithm behind the h-coefficient provides various opportunities for further adaptation and the flexibility to target specific parameters in a given data set. Our findings confirm that the h-coefficient can provide a conservative and powerful tool for the analysis of peristimulus time histograms with great potential for future development. PMID:25475352
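
The kernel-convolution step itself is compact; here is a sketch with assumed bin and kernel widths: bin trial-aligned spike times into a PSTH and smooth it with a Gaussian kernel to estimate the response envelope that a classifier like the h-coefficient would then evaluate.

```python
# Sketch (assumed widths): kernel-smoothed PSTH from trial-aligned spike times.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(2)
n_trials, t_max, bin_w = 50, 1.0, 0.001        # 1 ms bins over 1 s

# Toy data: background spikes plus a transient "response" around 0.3 s.
spikes = [np.concatenate([rng.uniform(0, t_max, 20),
                          rng.normal(0.3, 0.02, 5)]) for _ in range(n_trials)]

edges = np.arange(0, t_max + bin_w, bin_w)
counts = sum(np.histogram(s, edges)[0] for s in spikes)

rate = counts / (n_trials * bin_w)             # raw PSTH in spikes/s
smooth = gaussian_filter1d(rate, sigma=10.0)   # Gaussian kernel, sigma = 10 ms

print(f"peak smoothed rate: {smooth.max():.1f} spikes/s "
      f"at t = {edges[np.argmax(smooth)]:.3f} s")
```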

  6. Probabilistic analysis and fatigue damage assessment of offshore mooring system due to non-Gaussian bimodal tension processes

    NASA Astrophysics Data System (ADS)

    Chang, Anteng; Li, Huajun; Wang, Shuqing; Du, Junfeng

    2017-08-01

    Both wave-frequency (WF) and low-frequency (LF) components of mooring tension are in principle non-Gaussian due to nonlinearities in the dynamic system. This paper conducts a comprehensive investigation of applicable probability density functions (PDFs) of mooring tension amplitudes used to assess mooring-line fatigue damage via the spectral method. Short-term statistical characteristics of mooring-line tension responses are firstly investigated, in which the discrepancy arising from Gaussian approximation is revealed by comparing kurtosis and skewness coefficients. Several distribution functions based on present analytical spectral methods are selected to express the statistical distribution of the mooring-line tension amplitudes. Results indicate that the Gamma-type distribution and a linear combination of Dirlik and Tovo-Benasciutti formulas are suitable for separate WF and LF mooring tension components. A novel parametric method based on nonlinear transformations and stochastic optimization is then proposed to increase the effectiveness of mooring-line fatigue assessment due to non-Gaussian bimodal tension responses. Using time domain simulation as a benchmark, its accuracy is further validated using a numerical case study of a moored semi-submersible platform.

  7. Application of the ex-Gaussian function to the effect of the word blindness suggestion on Stroop task performance suggests no word blindness

    PubMed Central

    Parris, Benjamin A.; Dienes, Zoltan; Hodgson, Timothy L.

    2013-01-01

    The aim of the present paper was to apply the ex-Gaussian function to data reported by Parris et al. (2012) given its utility in studies involving the Stroop task. Parris et al. showed an effect of the word blindness suggestion when Response-Stimulus Interval (RSI) was 500 ms but not when it was 3500 ms. Analysis revealed that: (1) The effect of the suggestion on interference is observed in μ, supporting converging evidence indicating the suggestion operates over response competition mechanisms; and, (2) Contrary to Parris et al. an effect of the suggestion was observed in μ when RSI was 3500 ms. The reanalysis of the data from Parris et al. (2012) supports the utility of ex-Gaussian analysis in revealing effects that might otherwise be thought of as absent. We suggest that word reading itself is not suppressed by the suggestion but instead that response conflict is dealt with more effectively. PMID:24065947

  8. Application of the ex-Gaussian function to the effect of the word blindness suggestion on Stroop task performance suggests no word blindness.

    PubMed

    Parris, Benjamin A; Dienes, Zoltan; Hodgson, Timothy L

    2013-01-01

    The aim of the present paper was to apply the ex-Gaussian function to data reported by Parris et al. (2012) given its utility in studies involving the Stroop task. Parris et al. showed an effect of the word blindness suggestion when Response-Stimulus Interval (RSI) was 500 ms but not when it was 3500 ms. Analysis revealed that: (1) The effect of the suggestion on interference is observed in μ, supporting converging evidence indicating the suggestion operates over response competition mechanisms; and, (2) Contrary to Parris et al. an effect of the suggestion was observed in μ when RSI was 3500 ms. The reanalysis of the data from Parris et al. (2012) supports the utility of ex-Gaussian analysis in revealing effects that might otherwise be thought of as absent. We suggest that word reading itself is not suppressed by the suggestion but instead that response conflict is dealt with more effectively.

  9. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties spatially according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  10. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk(2) to nk log(k), and has potential application to the all-pairs shortest paths problem.
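
One way to approximate max-convolution numerically, in the spirit of the method above (shown here with a single fixed p rather than the paper's full scheme): take p-th powers, perform one ordinary convolution, and take the p-th root, since the p-norm approaches the maximum as p grows. The standard convolution can be done with an FFT, giving the O(k log(k)) runtime.

```python
# Sketch: p-norm approximation of max-convolution via an ordinary convolution.
import numpy as np

def max_convolve_exact(u, v):
    """O(k^2) reference: out[i] = max over j of u[j] * v[i - j]."""
    out = np.zeros(len(u) + len(v) - 1)
    for i in range(len(u)):
        for j in range(len(v)):
            out[i + j] = max(out[i + j], u[i] * v[j])
    return out

def max_convolve_pnorm(u, v, p=64):
    """Approximate max-convolution: one standard convolution of p-th powers."""
    conv = np.convolve(u ** p, v ** p)   # FFT-based in O(k log k) variants
    return np.maximum(conv, 0.0) ** (1.0 / p)

u = np.random.default_rng(3).random(100)
v = np.random.default_rng(4).random(100)

exact = max_convolve_exact(u, v)
approx = max_convolve_pnorm(u, v)
print(f"max relative error: {np.max(np.abs(approx - exact) / exact):.3f}")
```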

  11. Performance of Serially Concatenated Convolutional Codes with Binary Modulation in AWGN and Noise Jamming over Rayleigh Fading Channels

    DTIC Science & Technology

    2001-09-01

    In this dissertation, the bit error rates of serially concatenated convolutional codes (SCCC) with BPSK and DPSK modulation are analyzed in AWGN and noise jamming over Rayleigh fading channels.

  12. DSN telemetry system performance with convolutionally coded data using operational maximum-likelihood convolutional decoders

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.

    1976-01-01

    The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. The results of both one- and two-way radio losses are included.

  13. Predictive lethal proarrhythmic risk evaluation using a closed-loop-circuit cell network with human induced pluripotent stem cells derived cardiomyocytes

    NASA Astrophysics Data System (ADS)

    Nomura, Fumimasa; Hattori, Akihiro; Terazono, Hideyuki; Kim, Hyonchol; Odaka, Masao; Sugio, Yoshihiro; Yasuda, Kenji

    2016-06-01

    For the prediction of lethal arrhythmia occurrence caused by abnormality of cell-to-cell conduction, we have developed a next-generation in vitro cell-to-cell conduction assay, i.e., a quasi in vivo assay, in which the change in spatial cell-to-cell conduction is quantitatively evaluated from the change in waveforms of the convoluted electrophysiological signals from lined-up cardiomyocytes on a single closed loop of a microelectrode of 1 mm diameter and 20 µm width in a cultivation chip. To evaluate the importance of the closed-loop arrangement of cardiomyocytes for prediction, we compared the change in waveforms of convoluted signals of the responses in the closed-loop circuit arrangement with that of the response of cardiomyocyte clusters, using a typical human ether-a-go-go related gene (hERG) ion channel blocker, E-4031. The results showed that (1) waveform prolongation and fluctuation in both the closed loops and the clusters increased with increasing E-4031 concentration; however, (2) only the waveform signals in closed loops showed an apparent temporal change in waveforms from ventricular tachycardia (VT) to ventricular fibrillation (VF), which is similar to the most typical cell-to-cell conductance abnormality. The results indicated the usefulness of convoluted waveform signals of a closed-loop cell network for acquiring reproducible results and more detailed temporal information on cell-to-cell conduction.

  14. Enhanced online convolutional neural networks for object tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen

    2018-04-01

    In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and updating of the convolution filters can directly affect the precision of object tracking. In this paper, a novel object tracking method via an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters by a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments of 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.
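
A sketch of a k-means++ filter initialization in this spirit (the patch size and filter count are assumptions): cluster mean-removed image patches and use the centroids as the initial convolution kernels instead of random weights.

```python
# Sketch (assumed sizes): k-means++ initialization of convolution filters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
image = rng.random((64, 64))
k, ps = 8, 5                                   # 8 filters of size 5x5

# Collect every 5x5 patch as a row vector, with the DC component removed.
patches = np.array([image[i:i + ps, j:j + ps].ravel()
                    for i in range(64 - ps) for j in range(64 - ps)])
patches -= patches.mean(axis=1, keepdims=True)

km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(patches)
filters = km.cluster_centers_.reshape(k, ps, ps)
print(filters.shape)                           # (8, 5, 5) initial conv kernels
```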

  15. Spatial-impulse-response-dependent back-projection using the non-stationary convolution in optoacoustic mesoscopy

    NASA Astrophysics Data System (ADS)

    Lu, Tong; Wang, Yihan; Gao, Feng; Zhao, Huijuan; Ntziachristos, Vasilis; Li, Jiao

    2018-02-01

    Photoacoustic mesoscopy (PAMe), offering high-resolution (sub-100-μm) and high optical contrast imaging at depths of 1-10 mm, generally collects massive amounts of data using a high-frequency focused ultrasonic transducer. The spatial impulse response (SIR) of this focused transducer distorts the measured signals in both duration and amplitude. Thus, reconstruction methods that account for the SIR need to be investigated in a computationally economical way for PAMe. Here, we present a modified back-projection algorithm that introduces an SIR-dependent calibration process using a non-stationary convolution method. The proposed method is applied to numerical simulations and phantom experiments of microspheres with diameters of both 50 μm and 100 μm, and the improvement in image fidelity is demonstrated quantitatively. The results show that the images reconstructed when the SIR of the transducer is accounted for have higher contrast-to-noise ratio and more reasonable spatial resolution, compared to the common back-projection algorithm.

  16. Aquifer response to stream-stage and recharge variations. II. Convolution method and applications

    NASA Astrophysics Data System (ADS)

    Barlow, P. M.; DeSimone, L. A.; Moench, A. F.

    2000-05-01

    In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to stream-stage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. The streambank-leakance parameter may be considered to be a general (or lumped) parameter that accounts not only for the resistance of flow at the river-aquifer boundary, but also for the effects of partial penetration of the river and other near-stream flow phenomena not included in the theoretical development of the step-response functions.

  17. Hydrograph separation for karst watersheds using a two-domain rainfall-discharge model

    USGS Publications Warehouse

    Long, Andrew J.

    2009-01-01

    Highly parameterized, physically based models may be no more effective at simulating the relations between rainfall and outflow from karst watersheds than are simpler models. Here an antecedent rainfall and convolution model was used to separate a karst watershed hydrograph into two outflow components: one originating from focused recharge in conduits and one originating from slow flow in a porous annex system. In convolution, parameters of a complex system are lumped together in the impulse-response function (IRF), which describes the response of the system to an impulse of effective precipitation. Two parametric functions in superposition approximate the two-domain IRF. The outflow hydrograph can be separated into flow components by forward modeling with isolated IRF components, which provides an objective criterion for separation. As an example, the model was applied to a karst watershed in the Madison aquifer, South Dakota, USA. Simulation results indicate that this watershed is characterized by a flashy response to storms, with a peak response time of 1 day, but that 89% of the flow results from the slow-flow domain, with a peak response time of more than 1 year. This long response time may be the result of perched areas that store water above the main water table. Simulation results indicated that some aspects of the system are stationary but that nonlinearities also exist.
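
A sketch of the two-domain convolution model (the gamma-shaped IRFs and every parameter value below are assumptions, not the calibrated Madison-aquifer values): total discharge is effective rainfall convolved with the sum of a fast conduit IRF and a slow annex IRF, and convolving with each isolated component separates the hydrograph.

```python
# Sketch (assumed IRF shapes and parameters): two-domain rainfall-discharge
# convolution and hydrograph separation by forward modeling each component.
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 400.0)                              # days
irf_fast = 0.11 * gamma.pdf(t, a=2.0, scale=0.7)     # peak ~1 day (toy weight)
irf_slow = 0.89 * gamma.pdf(t, a=2.0, scale=200.0)   # slow annex domain (toy)

rng = np.random.default_rng(6)
rain = rng.exponential(1.0, 400) * (rng.random(400) < 0.1)   # sparse storms

q_fast = np.convolve(rain, irf_fast)[:len(t)]
q_slow = np.convolve(rain, irf_slow)[:len(t)]
q_total = q_fast + q_slow                            # forward-modeled hydrograph

frac_slow = q_slow.sum() / q_total.sum()
print(f"slow-flow fraction of simulated outflow: {frac_slow:.2f}")
```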

  18. Contributions of Optical and Non-Optical Blur to Variation in Visual Acuity

    PubMed Central

    McAnany, J. Jason; Shahidi, Mahnaz; Applegate, Raymond A.; Zelkha, Ruth; Alexander, Kenneth R.

    2011-01-01

    Purpose To determine the relative contributions of optical and non-optical sources of intrinsic blur to variations in visual acuity (VA) among normally sighted subjects. Methods Best-corrected VA of sixteen normally sighted subjects was measured using briefly presented (59 ms) tumbling E optotypes that were either unblurred or blurred through convolution with Gaussian functions of different widths. A standard model of intrinsic blur was used to estimate each subject’s equivalent intrinsic blur (σint) and VA for the unblurred tumbling E (MAR0). For 14 subjects, a radially averaged optical point spread function due to higher-order aberrations was derived by Shack-Hartmann aberrometry and fit with a Gaussian function. The standard deviation of the best-fit Gaussian function defined optical blur (σopt). An index of non-optical blur (η) was defined as: 1-σopt/σint. A control experiment was conducted on 5 subjects to evaluate the effect of stimulus duration on MAR0 and σint. Results Log MAR0 for the briefly presented E was correlated significantly with log σint (r = 0.95, p < 0.01), consistent with previous work. However, log MAR0 was not correlated significantly with log σopt (r = 0.46, p = 0.11). For subjects with log MAR0 equivalent to approximately 20/20 or better, log MAR0 was independent of log η, whereas for subjects with larger log MAR0 values, log MAR0 was proportional to log η. The control experiment showed a statistically significant effect of stimulus duration on log MAR0 (p < 0.01) but a non-significant effect on σint (p = 0.13). Conclusions The relative contributions of optical and non-optical blur to VA varied among the subjects, and were related to the subject’s VA. Evaluating optical and non-optical blur may be useful for predicting changes in VA following procedures that improve the optics of the eye in patients with both optical and non-optical sources of VA loss. PMID:21460756

  19. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  20. Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Hunter, Craig A.

    1999-01-01

    An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle, for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios (NPRs) and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was only minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.

  1. Experimental study of current loss and plasma formation in the Z machine post-hole convolute

    NASA Astrophysics Data System (ADS)

    Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.

    2017-01-01

    The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H2O, H2, and hydrocarbons. Plasma densities increase from 1 × 10^16 cm^-3 (level of detectability) just before peak current to over 1 × 10^17 cm^-3 at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.

  2. On the application of under-decimated filter banks

    NASA Technical Reports Server (NTRS)

    Lin, Y.-P.; Vaidyanathan, P. P.

    1994-01-01

    Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than that of direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.

  3. On the application of under-decimated filter banks

    NASA Astrophysics Data System (ADS)

    Lin, Y.-P.; Vaidyanathan, P. P.

    1994-11-01

    Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than that of direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate.

  4. Keypoint Density-Based Region Proposal for Fine-Grained Object Detection and Classification Using Regions with Convolutional Neural Network Features

    DTIC Science & Technology

    2015-12-15

    Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network ... Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their ... detection accuracy and speed on the fine-grained Caltech UCSD bird dataset (Wah et al., 2011). Recently, Convolutional Neural Networks (CNNs), a deep

  5. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library.

  6. Comment on ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’

    NASA Astrophysics Data System (ADS)

    Valdes, Gilmer; Interian, Yannet

    2018-03-01

    The application of machine learning (ML) presents tremendous opportunities for the field of oncology, thus we read ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’ with great interest. In this article, the authors used state-of-the-art techniques: a pre-trained convolutional neural network (VGG-16 CNN), transfer learning, data augmentation, dropout, and early stopping, all of which are directly responsible for the success and the excitement that these algorithms have created in other fields. We believe that the use of these techniques can offer tremendous opportunities in the field of Medical Physics, and as such we would like to praise the authors for their pioneering application to the field of Radiation Oncology. That being said, given that the field of Medical Physics has unique characteristics that differentiate us from those fields where these techniques have been applied successfully, we would like to raise some points for future discussion and follow-up studies that could help the community understand the limitations and nuances of deep learning techniques.

  7. Convolution of Two Series

    ERIC Educational Resources Information Center

    Umar, A.; Yusau, B.; Ghandi, B. M.

    2007-01-01

    In this note, we introduce and discuss convolutions of two series. The idea is simple and can be introduced in higher secondary school classes, and it has the potential to provide a good background for the well-known convolution of functions.
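
    The construction is the Cauchy product of the coefficient sequences; a minimal sketch (not taken from the note itself) follows.

```python
def convolve_series(a, b):
    """Cauchy product of two truncated series: c[n] = sum over k of a[k] * b[n-k]."""
    c = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# Convolving the coefficients of 1/(1-x) with themselves gives those of 1/(1-x)^2:
print(convolve_series([1, 1, 1, 1], [1, 1, 1, 1])[:4])  # [1.0, 2.0, 3.0, 4.0]
```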

  8. A fast complex integer convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q^2) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.

  9. Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network

    DTIC Science & Technology

    1989-08-01

    Convolutional Codes, in Proc. Int. Conf. Commun., 21.4.1-21.4.5, 1987. [27] J. Hagenauer, Rate Compatible Punctured Convolutional Codes, in Proc. Int. Conf. ... achieved by using a low rate (r = 0.5), high constraint length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable rate code ... investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error

  10. Modeling and Simulation of a Non-Coherent Frequency Shift Keying Transceiver Using a Field Programmable Gate Array (FPGA)

    DTIC Science & Technology

    2008-09-01

    Convolutional Encoder Block Diagram of code rate r = 1/2 and κ = 3 ... most commonly used along with block codes. They were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n ... a convolutional code with r = 1/2 and κ = 3, namely [7 5], is used. Figure 2: Convolutional Encoder Block Diagram of code rate r = 1/2 and κ = 3

  11. Three-dimensional FLASH Laser Radar Range Estimation via Blind Deconvolution

    DTIC Science & Technology

    2009-10-01

    scene can result in errors due to several factors including the optical spatial impulse response, detector blurring, photon noise, timing jitter, and ... estimation error include spatial blur, detector blurring, noise, timing jitter, and inter-sample targets. Unlike previous research, this paper accounts ... for pixel coupling by defining the range image mathematical model as a 2D convolution between the system spatial impulse response and the object (target

  12. Dusty Pair Plasma—Wave Propagation and Diffusive Transition of Oscillations

    NASA Astrophysics Data System (ADS)

    Atamaniuk, Barbara; Turski, Andrzej J.

    2011-11-01

    The crucial point of the paper is the relation between the equilibrium distributions of plasma species and the type of propagation or diffusive transition of the plasma response to a disturbance. The paper contains a unified treatment of disturbance propagation (transport) in linearized Vlasov electron-positron and fullerene pair plasmas containing charged dust impurities, based on space-time convolution integral equations. Electron-positron-dust/ion (e-p-d/i) plasmas are rather widespread in nature. Space-time responses of multi-component linearized Vlasov plasmas on the basis of multiple integral equations are invoked. An initial-value problem for the Vlasov-Poisson/Ampère equations is reduced to one multiple integral equation, and the solution is expressed in terms of a forcing function and its space-time convolution with the resolvent kernel. The forcing function is responsible for the initial disturbance and the resolvent is responsible for the equilibrium velocity distributions of the plasma species. By use of resolvent equations, time-reversibility, space-reflexivity and other symmetries are revealed. The symmetries carry physical properties of Vlasov pair plasmas, e.g., conservation laws. Properly choosing equilibrium distributions for dusty pair plasmas, we can reduce the resolvent equation to: (i) undamped dispersive wave equations, and (ii) diffusive transport equations of oscillations.

  13. Pretreatment ADC histogram analysis is a predictive imaging biomarker for bevacizumab treatment but not chemotherapy in recurrent glioblastoma.

    PubMed

    Ellingson, B M; Sahebjam, S; Kim, H J; Pope, W B; Harris, R J; Woodworth, D C; Lai, A; Nghiemphu, P L; Mason, W P; Cloughesy, T F

    2014-04-01

    Pre-treatment ADC characteristics have been shown to predict response to bevacizumab in recurrent glioblastoma multiforme. However, no studies have examined whether ADC characteristics are specific to this particular treatment. The purpose of the current study was to determine whether ADC histogram analysis is a bevacizumab-specific or treatment-independent biomarker of treatment response in recurrent glioblastoma multiforme. Eighty-nine bevacizumab-treated and 43 chemotherapy-treated recurrent glioblastoma multiformes never exposed to bevacizumab were included in this study. In all patients, ADC values in contrast-enhancing ROIs from MR imaging examinations performed at the time of recurrence, immediately before commencement of treatment for recurrence, were extracted and the resulting histogram was fitted to a mixed model with a double Gaussian distribution. Mean ADC in the lower Gaussian curve was used as the primary biomarker of interest. The Cox proportional hazards model and log-rank tests were used for survival analysis. Cox multivariate regression analysis accounting for the interaction between bevacizumab- and non-bevacizumab-treated patients suggested that the ability of the lower Gaussian curve to predict survival is dependent on treatment (progression-free survival, P = .045; overall survival, P = .003). Patients with bevacizumab-treated recurrent glioblastoma multiforme with a pretreatment lower Gaussian curve > 1.2 μm²/ms had a significantly longer progression-free survival and overall survival compared with bevacizumab-treated patients with a lower Gaussian curve < 1.2 μm²/ms. No differences in progression-free survival or overall survival were observed in the chemotherapy-treated cohort. Bevacizumab-treated patients with a mean lower Gaussian curve > 1.2 μm²/ms had a significantly longer progression-free survival and overall survival compared with chemotherapy-treated patients. The mean lower Gaussian curve from ADC histogram analysis is a predictive imaging biomarker for bevacizumab-treated, not chemotherapy-treated, recurrent glioblastoma multiforme. Patients with recurrent glioblastoma multiforme with a mean lower Gaussian curve > 1.2 μm²/ms have a survival advantage when treated with bevacizumab.
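
    A minimal sketch of the histogram-fitting step is shown below; the synthetic ADC values, bin count, and starting guesses are assumptions, and the paper's exact mixed-model fitting procedure may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    """Mixture of two Gaussian curves fitted to an ADC histogram."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

rng = np.random.default_rng(1)                      # synthetic ROI ADC values (um^2/ms)
adc = np.concatenate([rng.normal(1.1, 0.15, 4000), rng.normal(1.8, 0.30, 6000)])
counts, edges = np.histogram(adc, bins=80)
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [counts.max(), 1.0, 0.2, counts.max(), 1.8, 0.3]
params, _ = curve_fit(double_gaussian, centers, counts, p0=p0)
adc_low = min(params[1], params[4])                 # mean of the lower Gaussian curve
```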

  14. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  15. Averaging of random walks and shift-invariant measures on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Sakbaev, V. Zh.

    2017-06-01

    We study random walks in a Hilbert space H and representations using them of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space H of equivalence classes of complex-valued functions on H that are square integrable with respect to a shift-invariant measure λ. Using averaging of the shift operator in H over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on H, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.

  16. Model of flare lightcurve profile observed in soft X-rays

    NASA Astrophysics Data System (ADS)

    Gryciuk, Magdalena; Siarkowski, Marek; Gburek, Szymon; Podgorski, Piotr; Sylwester, Janusz; Kepa, Anna; Mrozek, Tomasz

    We propose a new model for the description of solar flare lightcurve profiles observed in soft X-rays. The method assumes that single-peaked 'regular' flares seen in lightcurves can be fitted with an elementary time profile that is a convolution of Gaussian and exponential functions. More complex, multi-peaked flares can be decomposed as a sum of elementary profiles. During the lightcurve fitting process a linear background is determined as well; we allow the background over the event to change linearly with time. The presented approach was originally developed for the small soft X-ray flares recorded by the Polish spectrophotometer SphinX during the very deep solar minimum between Solar Cycles 23 and 24. However, the method can and will be used to interpret lightcurves obtained by other soft X-ray broad-band spectrometers at times of both low and higher solar activity. In the paper we introduce the model and present examples of fits to SphinX and GOES 1-8 Å channel observations.
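
    The convolution of a Gaussian with a one-sided exponential has a closed form (the exponentially modified Gaussian), so the elementary profile can be evaluated directly. The parameterization below is one standard EMG form and is offered as a sketch, not as the authors' exact implementation.

```python
import numpy as np
from scipy.special import erfc

def elementary_flare(t, amp, mu, sigma, tau):
    """Elementary lightcurve profile: Gaussian (mu, sigma) convolved with an
    exponential decay of time constant tau, in closed (EMG) form."""
    lam = 1.0 / tau
    return (amp * lam / 2.0
            * np.exp(lam / 2.0 * (2.0 * mu + lam * sigma ** 2 - 2.0 * t))
            * erfc((mu + lam * sigma ** 2 - t) / (np.sqrt(2.0) * sigma)))

def flare_model(t, peaks, bg0, bg1):
    """Multi-peaked flare: a sum of elementary profiles plus a linear background."""
    flux = bg0 + bg1 * t
    for amp, mu, sigma, tau in peaks:
        flux = flux + elementary_flare(t, amp, mu, sigma, tau)
    return flux
```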

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jellison, G. E.; Aytug, T.; Lupini, A. R.

    Nanostructured glass films, which are fabricated using spinodally phase-separated low-alkali glasses, have several interesting and useful characteristics, including being robust, non-wetting and antireflective. Spectroscopic ellipsometry measurements have been performed on one such film and its optical properties were analyzed using a 5-layer structural model of the near-surface region. Since the glass and the film are transparent over the spectral region of the measurement, the Sellmeier model is used to parameterize the dispersion in the refractive index. To simulate the variation of the optical properties of the film over the spot size of the ellipsometer (~3 × 5 mm), the Sellmeier amplitude is convolved with a Gaussian distribution. The transition layers between the ambient and the film and between the film and the substrate are modeled as graded layers, where the refractive index varies as a function of depth. These layers are modeled using a two-component Bruggeman effective medium approximation where the two components are the layer above and the layer below. Lastly, the fraction is continuous through the transition layer and is modeled using the incomplete beta function.

  18. Novel vehicle detection system based on stacked DoG kernel and AdaBoost

    PubMed Central

    Kang, Hyun Ho; Lee, Seo Won; You, Sung Hyun

    2018-01-01

    This paper proposes a novel vehicle detection system that can overcome some limitations of typical vehicle detection systems using AdaBoost-based methods. The performance of an AdaBoost-based vehicle detection system is dependent on its training data. Thus, its performance decreases when the shape of a target differs from its training data, or when the pattern of a preceding vehicle is not visible in the image due to the light conditions. A stacked Difference of Gaussian (DoG)-based feature extraction algorithm is proposed to address this issue by recognizing common characteristics of vehicles, such as the shadow and the rear wheels beneath the vehicle, under various conditions. The common characteristics of vehicles are extracted by applying the stacked DoG-shaped kernel obtained from the 3D plot of an image through a convolution method and investigating only those regions that have similar patterns. A new vehicle detection system is constructed by combining the novel stacked DoG feature extraction algorithm with the AdaBoost method. Experiments are provided to demonstrate the effectiveness of the proposed vehicle detection system under different conditions. PMID:29513727
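
    The basic center-surround operation underlying a DoG response can be sketched as follows; this is a plain (unstacked) DoG with assumed scales, not the paper's stacked kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_center=2.0, sigma_surround=6.0):
    """Difference-of-Gaussian response: subtract a wide Gaussian blur from a
    narrow one, leaving band-pass structure such as the shadow under a vehicle."""
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

rng = np.random.default_rng(2)
frame = rng.random((240, 320))                      # stand-in for a road image
response = dog_response(frame)
# Candidate regions: pixels with a strongly negative (dark-band) response.
candidates = np.argwhere(response < np.percentile(response, 1))
```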

  19. First Test of Stochastic Growth Theory for Langmuir Waves in Earth's Foreshock

    NASA Technical Reports Server (NTRS)

    Cairns, Iver H.; Robinson, P. A.

    1997-01-01

    This paper presents the first test of whether stochastic growth theory (SGT) can explain the detailed characteristics of Langmuir-like waves in Earth's foreshock. A period with unusually constant solar wind magnetic field is analyzed. The observed distributions P(log E) of wave fields E for two intervals with relatively constant spacecraft location (DIFF) are shown to agree well with the fundamental prediction of SGT, that P(log E) is Gaussian in log E. This stochastic growth can be accounted for semi-quantitatively in terms of standard foreshock beam parameters and a model developed for interplanetary type III bursts. Averaged over the entire period with large variations in DIFF, the P(log E) distribution is a power-law with index approximately -1; this is interpreted in terms of convolution of intrinsic, spatially varying P(log E) distributions with a probability function describing ISEE's residence time at a given DIFF. Wave data from this interval thus provide good observational evidence that SGT can sometimes explain the clumping, burstiness, persistence, and highly variable fields of the foreshock Langmuir-like waves.
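
    The SGT prediction can be checked directly on a field-strength sample: take log E, fit a Gaussian, and test normality. The snippet below uses synthetic lognormal data purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
fields = rng.lognormal(mean=-8.0, sigma=1.5, size=2000)  # synthetic wave fields E

log_e = np.log10(fields)
mu, sd = stats.norm.fit(log_e)          # Gaussian fit to the P(log E) sample
stat, p = stats.shapiro(log_e)          # normality test; large p is consistent with SGT
print(f"mu={mu:.2f}, sigma={sd:.2f}, Shapiro-Wilk p={p:.3f}")
```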

  20. Kinematical line broadening and spatially resolved line profiles from AGN.

    NASA Astrophysics Data System (ADS)

    Schulz, H.; Muecke, A.; Boer, B.; Dresen, M.; Schmidt-Kaler, T.

    1995-03-01

    We study geometrical effects for emission-line broadening in the optically thin limit by integrating the projected line emissivity along prespecified lines of sight that intersect rotating or expanding disks or cone-like configurations. Analytical expressions are given for the case that emissivity and velocity follow power laws of the radial distance. The results help to interpret spatially resolved spectra and to check the reliability of numerical computations. In the second part we describe a numerical code applicable to any geometrical configuration. Turbulent motions, atmospheric seeing and effects induced by the size of the observing aperture are simulated with appropriate convolution procedures. An application to narrow-line Hα profiles from the central region of the Seyfert galaxy NGC 7469 is presented. The shapes and asymmetries as well as the relative strengths of the Hα lines from different spatial positions can be explained by emission from a nuclear rotating disk of ionized gas, for which the distribution of Hα line emissivity and the rotation curve are derived. Appreciable turbulent line broadening with a Gaussian σ of ~40% of the rotational velocity has to be included to obtain a satisfactory fit.

  1. A saliency-based approach to detection of infrared target

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2013-10-01

    Automatic target detection in infrared images is an active research area in national defense technology. In this paper we propose a new saliency-based infrared target detection model, based on the fact that human attention is directed towards the relevant target in order to interpret the most promising information. For a given image, the convolution of the image's log amplitude spectrum with a low-pass Gaussian kernel of an appropriate scale is equivalent to an image saliency detector in the frequency domain. At the same time, extracted orientation and shape features are combined into a saliency map in the spatial domain. Our proposed model decides salient targets based on a final saliency map, which is generated by integrating the saliency maps from the frequency and spatial domains. Finally, the size of each salient target is obtained by maximizing the entropy of the final saliency map. Experimental results show that the proposed model can highlight both small and large salient regions in infrared images, as well as inhibit repeated distractors in cluttered images. In addition, its detection efficiency is significantly improved.
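
    The frequency-domain step can be sketched in a spectral-residual style: smooth the log amplitude spectrum with a Gaussian, keep the original phase, and transform back. The specific scales and the residual variant used here are assumptions, not the paper's exact detector.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(image, sigma_freq=3.0, sigma_post=2.5):
    """Frequency-domain saliency sketch: Gaussian-smooth the log amplitude
    spectrum, reconstruct with the original phase, and square the result."""
    F = np.fft.fft2(image)
    log_amp = np.log(np.abs(F) + 1e-9)
    phase = np.angle(F)
    residual = log_amp - gaussian_filter(log_amp, sigma_freq)  # Gaussian convolution
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma_post)                    # smooth the saliency map
```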

  2. First test of stochastic growth theory for Langmuir waves in Earth's foreshock

    NASA Astrophysics Data System (ADS)

    Cairns, Iver H.; Robinson, P. A.

    This paper presents the first test of whether stochastic growth theory (SGT) can explain the detailed characteristics of Langmuir-like waves in Earth's foreshock. A period with unusually constant solar wind magnetic field is analyzed. The observed distributions P(log E) of wave fields E for two intervals with relatively constant spacecraft location (DIFF) are shown to agree well with the fundamental prediction of SGT, that P(log E) is Gaussian in log E. This stochastic growth can be accounted for semi-quantitatively in terms of standard foreshock beam parameters and a model developed for interplanetary type III bursts. Averaged over the entire period with large variations in DIFF, the P(log E) distribution is a power-law with index ~ -1; this is interpreted in terms of convolution of intrinsic, spatially varying P(log E) distributions with a probability function describing ISEE's residence time at a given DIFF. Wave data from this interval thus provide good observational evidence that SGT can sometimes explain the clumping, burstiness, persistence, and highly variable fields of the foreshock Langmuir-like waves.

  3. Hyperspectral Image Enhancement and Mixture Deep-Learning Classification of Corneal Epithelium Injuries.

    PubMed

    Noor, Siti Salwa Md; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-11-16

    In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues showed similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of the data, turning it into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images, acquired without the application of eye staining, were used. Three image feature extraction approaches were applied for image classification: (i) image feature classification from the histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image feature classification using deep-learning Convolutional Neural Networks (CNNs) only; and (iii) the combined classification of CNNs and SVM-Linear. The performance results indicate that our chosen image features from the histogram and length-scale parameter were able to classify with up to 100% accuracy, particularly for the CNNs and CNNs-SVM classifiers, employing 80% of the data sample for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability.

  4. Processing of chromatic information in a deep convolutional neural network.

    PubMed

    Flachot, Alban; Gegenfurtner, Karl R

    2018-04-01

    Deep convolutional neural networks are a class of machine-learning algorithms capable of solving non-trivial tasks, such as object recognition, with human-like performance. Little is known about the exact computations that deep neural networks learn, and to what extent these computations are similar to the ones performed by the primate brain. Here, we investigate how color information is processed in the different layers of the AlexNet deep neural network, originally trained on object classification of over 1.2M images of objects in their natural contexts. We found that the color-responsive units in the first layer of AlexNet learned linear features and were broadly tuned to two directions in color space, analogously to what is known of color responsive cells in the primate thalamus. Moreover, these directions are decorrelated and lead to statistically efficient representations, similar to the cardinal directions of the second-stage color mechanisms in primates. We also found, in analogy to the early stages of the primate visual system, that chromatic and achromatic information were segregated in the early layers of the network. Units in the higher layers of AlexNet exhibit on average a lower responsivity for color than units at earlier stages.

  5. Effects of Amplitude Compression on Relative Auditory Distance Perception

    DTIC Science & Technology

    2013-10-01

    FFT analyses are shown in Figure 4. The use of convolution of the stimuli with the binaural impulse responses recorded from KEMAR resulted in the ... human sound localization (pp. 36-200). Cambridge, MA: The MIT Press. Carmichel, E. L., Harris, F. P., & Story, B. H. (2007). Effects of binaural

  6. The location-, word-, and arrow-based Simon effects: An ex-Gaussian analysis.

    PubMed

    Luo, Chunming; Proctor, Robert W

    2018-04-01

    Task-irrelevant spatial information, conveyed by stimulus location, location word, or arrow direction, can influence the response to task-relevant attributes, generating the location-, word-, and arrow-based Simon effects. We examined whether different mechanisms are involved in the generation of these Simon effects by fitting a mathematical ex-Gaussian function to empirical response time (RT) distributions. Specifically, we tested which ex-Gaussian parameters (μ, σ, and τ) show Simon effects, and whether the location-, word-, and arrow-based effects appear on different parameters. Results show that the location-based Simon effect occurred on mean RT and μ but not on τ, and a reverse Simon effect occurred on σ. In contrast, a positive word-based Simon effect was obtained on all these measures (including σ), and a positive arrow-based Simon effect was evident on mean RT, σ, and τ but not μ. The arrow-based Simon effect did not differ from the word-based Simon effect on τ or σ, but did on μ and mean RT. These distinct results on mean RT and the ex-Gaussian parameters provide evidence that the various modes of conveying spatial information differ in their time-course of activation.
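
    A convenient way to obtain the three parameters is a maximum-likelihood ex-Gaussian fit, available in SciPy as the exponentially modified normal distribution. The RT data below are simulated for illustration.

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(4)
rt = rng.normal(400.0, 40.0, 5000) + rng.exponential(80.0, 5000)  # synthetic RTs (ms)

# SciPy parameterizes the ex-Gaussian by the shape K = tau / sigma.
K, mu, sigma = exponnorm.fit(rt)
tau = K * sigma
print(f"mu={mu:.1f} ms, sigma={sigma:.1f} ms, tau={tau:.1f} ms")
```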

  7. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo (RCPT) codes did not outperform convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with blocklength for a fixed number of states in the trellis.

  8. Convolution of large 3D images on GPU and its decomposition

    NASA Astrophysics Data System (ADS)

    Karas, Pavel; Svoboda, David

    2011-12-01

    In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on the overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
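
    The core of the method, convolution via the convolution theorem, reduces to a few FFT calls; the sketch below is a CPU (NumPy) analogue of the single-GPU case and omits the decimation-in-frequency decomposition used for volumes that exceed device memory.

```python
import numpy as np

def fft_convolve3d(volume, kernel):
    """Linear convolution of two 3D images via the convolution theorem.
    Zero-padding to the full output size avoids circular wrap-around."""
    shape = [v + k - 1 for v, k in zip(volume.shape, kernel.shape)]
    spectrum = np.fft.rfftn(volume, shape) * np.fft.rfftn(kernel, shape)
    return np.fft.irfftn(spectrum, shape)
```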

  9. Automatic rock detection for in situ spectroscopy applications on Mars

    NASA Astrophysics Data System (ADS)

    Mahapatra, Pooja; Foing, Bernard H.

    A novel algorithm for rock detection has been developed for effectively utilising Mars rovers and enabling autonomous selection of target rocks that require close-contact spectroscopic measurements. The algorithm demarcates small rocks in terrain images as seen by cameras on a Mars rover during traverse. This information may be used by the rover for selection of geologically relevant sample rocks and (in conjunction with a rangefinder) to pick up target samples using a robotic arm for automatic in situ determination of rock composition and mineralogy using, for example, a Raman spectrometer. Determining rock samples of specific interest within the region without physically approaching them significantly reduces time, power and risk. Input images in colour are converted to greyscale for intensity analysis. Bilateral filtering is used for texture removal while preserving rock boundaries. Unsharp masking is used for contrast enhancement. Sharp contrasts in intensities are detected using Canny edge detection, with thresholds calculated from the image obtained after contrast-limited adaptive histogram equalisation of the unsharp-masked image. Scale-space representations are then generated by convolving this image with a Gaussian kernel. A scale-invariant blob detector (Laplacian of the Gaussian, LoG) detects blobs independently of their sizes, and therefore requires a multi-scale approach with automatic scale selection. The scale-space blob detector consists of convolution of the Canny edge-detected image with a scale-normalised LoG at several scales, and finding the maxima of the squared LoG response in scale-space. After the extraction of local intensity extrema, the intensity profiles along rays going out of each local extremum are investigated. An ellipse is fitted to the region determined by significant changes in the intensity profiles. The fitted ellipses are overlaid on the original Mars terrain image for a visual estimation of the rock detection accuracy, and the number of ellipses is counted. Since geometry and illumination have the least effect on small rocks, the proposed algorithm is effective in detecting small rocks (or bigger rocks at larger distances from the camera) that occupy only a small fraction of the image pixels. Acknowledgements: The first author would like to express her gratitude to the European Space Agency (ESA/ESTEC) and the International Lunar Exploration Working Group (ILEWG) for their support of this work.
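
    The multi-scale LoG step can be sketched with SciPy primitives: build a stack of scale-normalised LoG responses and take local maxima of the squared response in scale-space. The kernel sizes and candidate-selection rule here are simplifications, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_blob_candidates(image, sigmas, n_keep=50):
    """Scale-space blob detection sketch: maxima of the squared,
    scale-normalised Laplacian-of-Gaussian response over (sigma, row, col)."""
    stack = np.stack([(s ** 2) * gaussian_laplace(image.astype(float), s)
                      for s in sigmas]) ** 2
    is_max = stack == maximum_filter(stack, size=3)      # 3x3x3 scale-space maxima
    scores = np.where(is_max, stack, 0.0)
    idx = np.argsort(scores, axis=None)[-n_keep:]
    s, r, c = np.unravel_index(idx, stack.shape)
    return [(ri, ci, sigmas[si]) for si, ri, ci in zip(s, r, c)]
```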

  10. A convolution model for obtaining the response of an ionization chamber in static non standard fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez-Castano, D. M.; Gonzalez, L. Brualla; Gago-Arias, M. A.

    2012-01-15

    Purpose: This work contains an alternative methodology for obtaining correction factors for ionization chamber (IC) dosimetry of small fields and composite fields such as IMRT. The method is based on the convolution/superposition (C/S) of an IC response function (RF) with the dose distribution in a certain plane which includes the chamber position. This method is an alternative to the full Monte Carlo (MC) approach that has been used previously by many authors for the same objective. Methods: The readout of an IC at a point inside a phantom irradiated by a certain beam can be obtained as the convolution of the dose spatial distribution caused by the beam and the IC two-dimensional RF. The proposed methodology has been applied successfully to predict the response of a PTW 30013 IC when measuring different nonreference fields, namely: output factors of 6 MV small fields, beam profiles of cobalt-60 narrow fields, and 6 MV radiosurgery segments. The two-dimensional RF of a PTW 30013 IC was obtained by MC simulation of the absorbed dose to cavity air when the IC was scanned by a 0.6 × 0.6 mm² cross-section parallel pencil beam at low depth in a water phantom. For each of the cases studied, the results of the IC direct measurement were compared with the corresponding results obtained by the C/S method. Results: For all of the cases studied, the agreement between the IC direct measurement and the IC calculated response was excellent (better than 1.5%). Conclusions: This method could be implemented in treatment planning systems (TPS) in order to calculate dosimetry correction factors when an experimental IMRT treatment verification with an in-phantom ionization chamber is performed. The mis-response of the IC due to the nonreference conditions could be quickly corrected by this method rather than employing MC-derived correction factors. This method can be considered an alternative to the plan-class associated correction factors proposed recently as part of an IAEA work group on nonstandard field dosimetry.

  11. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layer. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force, and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, a numerical dynamo model using a spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonics truncation of L = 255 as a reference, along with unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results by comparison among these simulations, and the role of the SGS terms in coupling the small-scale and large-scale fields in LES.

  12. Deep neural networks to enable real-time multimessenger astrophysics

    NASA Astrophysics Data System (ADS)

    George, Daniel; Huerta, E. A.

    2018-02-01

    Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering—a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, which are designed for classification and regression, to detect gravitational wave signals in highly noisy time-series data streams and also estimate the parameters of their sources in real time. Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, the application of Deep Filtering using whitened signals in Gaussian noise is investigated in this foundational article. The results indicate that Deep Filtering outperforms conventional machine learning techniques and achieves similar performance compared to matched filtering, while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggest that deep learning can facilitate real-time searches of gravitational wave sources and their electromagnetic and astroparticle counterparts. In the subsequent article, the framework introduced herein is directly applied to identify and characterize gravitational wave events in real LIGO data.

  13. Photoelectron Energy Loss in Al(002) Revisited: Retrieval of the Single Plasmon Loss Energy Distribution by a Fourier Transform Method

    NASA Astrophysics Data System (ADS)

    Santana, Victor Mancir da Silva; David, Denis; de Almeida, Jailton Souza; Godet, Christian

    2018-06-01

    A Fourier transform (FT) algorithm is proposed to retrieve the energy loss function (ELF) of solid surfaces from experimental X-ray photoelectron spectra. The intensity measured over a broad energy range towards lower kinetic energies results from the convolution of four spectral distributions: the photoemission line shape, the multiple plasmon loss probability, the X-ray source line structure, and the Gaussian broadening of the photoelectron analyzer. The FT of the measured XPS spectrum, including the zero-loss peak and all inelastic scattering mechanisms, being a mathematical function of the respective FTs of the X-ray source, photoemission line shape, multiple plasmon loss function, and Gaussian broadening of the photoelectron analyzer, the proposed algorithm gives straightforward access to the bulk ELF and effective dielectric function of the solid, assuming identical ELFs for intrinsic and extrinsic plasmon excitations. This method is applied to the aluminum single crystal Al(002), where the photoemission line shape has been computed accurately beyond the Doniach-Sunjic approximation using the Mahan-Wertheim-Citrin approach, which takes into account the density of states near the Fermi level; the only adjustable parameters are the singularity index and the broadening energy D (inverse hole lifetime). After correction for surface plasmon excitations, the q-averaged bulk loss function of Al(002) differs from the optical value Im[-1/ε(E, q = 0)] and is well described by the Lindhard-Mermin dispersion relation. A quality criterion of the inversion algorithm is given by the capability of observing weak interband transitions close to the zero-loss peak, namely at 0.65 and 1.65 eV in ε(E, q), as found in optical spectra and ab initio calculations of aluminum.
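
    The paper's algorithm unfolds four factors at once; as a much-reduced illustration of the Fourier division at its heart, the sketch below removes a single known broadening kernel from a measured spectrum, with Wiener-style regularisation standing in for the paper's treatment of noise. It is not the authors' full multiple-plasmon inversion.

```python
import numpy as np

def fourier_deconvolve(measured, kernel, eps=1e-3):
    """Divide out one known broadening component in the Fourier domain.
    The kernel is assumed centered; eps damps frequencies where its FT -> 0."""
    M = np.fft.fft(measured)
    K = np.fft.fft(np.fft.ifftshift(kernel))
    return np.real(np.fft.ifft(M * np.conj(K) / (np.abs(K) ** 2 + eps)))
```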

  14. A class of exact solutions for biomacromolecule diffusion-reaction in live cells.

    PubMed

    Sadegh Zadeh, Kouroush; Montas, Hubert J

    2010-06-07

    A class of novel explicit analytic solutions for a system of n+1 coupled partial differential equations governing biomolecular mass transfer and reaction in living organisms is proposed, evaluated, and analyzed. The solution process uses Laplace and Hankel transforms and results in a recursive convolution of an exponentially scaled Gaussian with modified Bessel functions. The solution is developed for a wide range of biomolecular binding kinetics, from pure diffusion to multiple binding reactions. The proposed approach provides solutions for both Dirac and Gaussian laser beam (or fluorescence-labeled biomacromolecule) profiles during the course of a Fluorescence Recovery After Photobleaching (FRAP) experiment. We demonstrate that previous models are simplified forms of our theory for special cases. Model analysis indicates that at the early stages of the transport process, biomolecular dynamics is governed by pure diffusion. At large times, the dominant mass transfer process is effective diffusion. Analysis of the sensitivity equations, derived analytically and verified by finite difference differentiation, indicates that experimental biologists should use the full space-time profile (instead of the averaged time series) obtained at the early stages of fluorescence microscopy experiments to extract meaningful physiological information from the protocol. Such a small time frame requires improved bioinstrumentation relative to that in use today. Our mathematical analysis highlights several limitations of the FRAP protocol and provides strategies to improve it. The proposed model can be used to study biomolecular dynamics in molecular biology, targeted drug delivery in normal and cancerous tissues, motor-driven axonal transport in normal and abnormal nervous systems, and the kinetics of diffusion-controlled reactions between enzyme and substrate, and to validate numerical simulators of biological mass transport processes in vivo.

  15. The Hermann-Hering grid illusion demonstrates disruption of lateral inhibition processing in diabetes mellitus.

    PubMed

    Davies, Nigel P; Morland, Antony B

    2002-02-01

    The Hermann-Hering grid illusion consists of dark illusory spots perceived at the intersections of horizontal and vertical white bars viewed against a dark background. The dark spots originate from lateral inhibition processing. This illusion was used to investigate the hypothesis that lateral inhibition may be disrupted in diabetes mellitus. A computer-monitor-based psychophysical test was developed to measure the threshold of perception of the illusion for different bar widths. The contrast threshold for illusion perception at seven bar widths (range 0.09 degrees to 0.60 degrees) was measured using a randomly interleaved double staircase. Convolution of Hermann-Hering grids with difference-of-Gaussian receptive fields was used to generate model sensitivity functions. The method of least squares was used to fit these to the experimental data. Fourteen diabetic patients and 12 control subjects of similar ages performed the test. The sensitivity to the illusion was significantly reduced in the diabetic group for bar widths 0.22 degrees, 0.28 degrees, and 0.35 degrees (p = 0.01). The mean centre:surround ratio for the controls was 1:9.1 (SD 1.6) with a mean correlation coefficient of R² = 0.80 (SD 0.16). In the diabetic group, two subjects were unable to perceive the illusion. The mean centre:surround ratio for the 12 remaining diabetic patients was 1:8.6 (SD 2.1). However, the correlation coefficients were poor, with a mean of R² = 0.54 (SD 0.27), p = 0.04 in comparison with the control group. A difference-of-Gaussian receptive field model fits the experimental data well for the controls but does not fit the data obtained for the diabetics. This indicates dysfunction of the lateral inhibition processes in the post-receptoral pathway.
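
    The model computation is easy to reproduce in outline: build the grid stimulus and convolve it with a centre-minus-surround DoG field. The stimulus dimensions and σ values below are arbitrary; only the 1:9.1 centre:surround ratio echoes the controls' mean.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hermann_grid(size=256, bar_px=24, block_px=56):
    """White bars on a dark background, as in the Hermann-Hering stimulus."""
    img = np.zeros((size, size))
    period = bar_px + block_px
    for k in range(0, size, period):
        img[k:k + bar_px, :] = 1.0      # horizontal bars
        img[:, k:k + bar_px] = 1.0      # vertical bars
    return img

def dog_filter(image, sigma_center=4.0, ratio=9.1):
    """Difference-of-Gaussians receptive field: centre minus surround."""
    return (gaussian_filter(image, sigma_center)
            - gaussian_filter(image, ratio * sigma_center))

response = dog_filter(hermann_grid())
# The response dips at bar intersections, where the surround gathers more
# light than along a bar: the dark illusory spots of the illusion.
```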

  16. Multiscale approach to contour fitting for MR images

    NASA Astrophysics Data System (ADS)

    Rueckert, Daniel; Burger, Peter

    1996-04-01

    We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.

  17. Photoelectron Energy Loss in Al(002) Revisited: Retrieval of the Single Plasmon Loss Energy Distribution by a Fourier Transform Method

    NASA Astrophysics Data System (ADS)

    Santana, Victor Mancir da Silva; David, Denis; de Almeida, Jailton Souza; Godet, Christian

    2018-04-01

    A Fourier transform (FT) algorithm is proposed to retrieve the energy loss function (ELF) of solid surfaces from experimental X-ray photoelectron spectra. The intensity measured over a broad energy range towards lower kinetic energies results from the convolution of four spectral distributions: the photoemission line shape, the multiple plasmon loss probability, the X-ray source line structure, and the Gaussian broadening of the photoelectron analyzer. The FT of the measured XPS spectrum, including the zero-loss peak and all inelastic scattering mechanisms, being a mathematical function of the respective FTs of the X-ray source, photoemission line shape, multiple plasmon loss function, and Gaussian broadening of the photoelectron analyzer, the proposed algorithm gives straightforward access to the bulk ELF and effective dielectric function of the solid, assuming identical ELFs for intrinsic and extrinsic plasmon excitations. This method is applied to the aluminum single crystal Al(002), where the photoemission line shape has been computed accurately beyond the Doniach-Sunjic approximation using the Mahan-Wertheim-Citrin approach, which takes into account the density of states near the Fermi level; the only adjustable parameters are the singularity index and the broadening energy D (inverse hole lifetime). After correction for surface plasmon excitations, the q-averaged bulk loss function of Al(002) differs from the optical value Im[-1/ε(E, q = 0)] and is well described by the Lindhard-Mermin dispersion relation. A quality criterion of the inversion algorithm is given by the capability of observing weak interband transitions close to the zero-loss peak, namely at 0.65 and 1.65 eV in ε(E, q), as found in optical spectra and ab initio calculations of aluminum.

  18. Equivalent linearization for fatigue life estimates of a nonlinear structure

    NASA Technical Reports Server (NTRS)

    Miles, R. N.

    1989-01-01

    An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.

  19. Development and application of deep convolutional neural network in target detection

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression ability than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes some existing problems in current research, and finally discusses prospects for the future development of deep convolutional neural networks.

  20. Chromatographic peak resolution using Microsoft Excel Solver. The merit of time shifting input arrays.

    PubMed

    Dasgupta, Purnendu K

    2008-12-05

    Resolution of overlapped chromatographic peaks is generally accomplished by modeling the peaks as Gaussian or modified Gaussian functions. It is possible, even preferable, to use actual single-analyte input responses for this purpose; a nonlinear least-squares minimization routine such as that provided by Microsoft Excel Solver can then perform the resolution. In practice, the quality of the results obtained varies greatly due to small shifts in retention time. I show here that such deconvolution can be considerably improved if one or more of the response arrays are iteratively shifted in time.
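
A rough sketch of the idea, using scipy in place of Excel Solver: the mixture is modeled as a weighted sum of single-analyte responses, each given an adjustable retention-time shift that enters the least-squares fit. All signals and parameter names are illustrative assumptions:

```python
# Hedged sketch of the time-shift idea: an overlapped chromatogram modeled as
# a weighted sum of measured single-analyte responses, each allowed a small
# retention-time shift. Test data are synthetic.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 500)
peak = lambda c, w: np.exp(-0.5 * ((t - c) / w) ** 2)

r1, r2 = peak(4.0, 0.35), peak(4.8, 0.35)                  # single-analyte responses
mixture = 1.0 * peak(4.05, 0.35) + 0.6 * peak(4.75, 0.35)  # slightly shifted "reality"

def shift(y, dt):                        # fractional time shift by interpolation
    return np.interp(t - dt, t, y, left=0.0, right=0.0)

def residual(p):
    a1, a2, d1, d2 = p
    return a1 * shift(r1, d1) + a2 * shift(r2, d2) - mixture

fit = least_squares(residual, x0=[1.0, 1.0, 0.0, 0.0])
print("amplitudes and shifts:", np.round(fit.x, 3))
```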

  1. A spectral nudging method for the ACCESS1.3 atmospheric model

    NASA Astrophysics Data System (ADS)

    Uhe, P.; Thatcher, M.

    2015-06-01

    A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows for flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, reducing the time taken by the nudging scheme by a factor of 10-30 compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
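
The quoted speedup comes from separability: a 2D Gaussian kernel factors into two 1D passes, cutting the per-point cost from O(k^2) to O(2k). A hedged sketch with a synthetic field (not the ACCESS grids or nudging kernel):

```python
# Why the 1D approximation pays off: a separable 2D kernel (e.g. a Gaussian)
# can be applied as two 1D passes. Synthetic field; timings indicative only.
import numpy as np
from scipy.ndimage import convolve1d
from scipy.signal import convolve2d
import time

field = np.random.rand(512, 512)                 # stand-in atmospheric field
g = np.exp(-0.5 * (np.arange(-15, 16) / 5.0) ** 2)
g /= g.sum()                                     # 1D Gaussian, unit sum
K2d = np.outer(g, g)                             # equivalent 2D kernel

t0 = time.perf_counter()
two_d = convolve2d(field, K2d, mode="same", boundary="symm")
t1 = time.perf_counter()
one_d = convolve1d(convolve1d(field, g, axis=0, mode="reflect"),
                   g, axis=1, mode="reflect")
t2 = time.perf_counter()

print(f"2D: {t1-t0:.3f}s, 1D x2: {t2-t1:.3f}s, "
      f"max diff: {np.abs(two_d - one_d).max():.2e}")
```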

  2. A spectral nudging method for the ACCESS1.3 atmospheric model

    NASA Astrophysics Data System (ADS)

    Uhe, P.; Thatcher, M.

    2014-10-01

    A convolution based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3 which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allow flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10 to 30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.

  3. Reconstructing Images in Astrophysics, an Inverse Problem Point of View

    NASA Astrophysics Data System (ADS)

    Theys, Céline; Aime, Claude

    2016-04-01

    After a short introduction, a first section provides a brief tutorial on the physics of image formation and its detection in the presence of noise. The rest of the chapter focuses on the resolution of the inverse problem. In the general form, the observed image is given by a Fredholm integral containing the object and the response of the instrument. Its inversion is formulated using linear algebra. The discretized object and image of size N × N are stored in vectors x and y of length N^2. They are related to one another by the linear relation y = Hx, where H is a matrix of size N^2 × N^2 that contains the elements of the instrument response. This matrix has particular properties for a shift-invariant point spread function, for which the Fredholm integral reduces to a convolution relation. The presence of noise complicates the resolution of the problem. It is shown that minimum-variance unbiased solutions fail to give good results because H is badly conditioned, leading to the need for a regularized solution. The relative strength of regularization versus fidelity to the data is discussed and briefly illustrated on an example using L-curves. The origins and construction of iterative algorithms are explained, and illustrations are given for the algorithms ISRA, for Gaussian additive noise, and Richardson-Lucy, for a pure photodetected image (Poisson statistics). In this latter case, the way the algorithm modifies the spatial frequencies of the reconstructed image is illustrated for a diluted array of apertures in space. Throughout the chapter, the inverse problem is formulated in matrix form for the general case of the Fredholm integral, while numerical illustrations are limited to the deconvolution case, allowing the use of discrete Fourier transforms, because of computer limitations.
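
For the Richardson-Lucy algorithm mentioned above, the multiplicative update in the chapter's matrix notation y = Hx is x <- x * (H^T(y/(Hx)))/(H^T 1). A small self-contained sketch on a synthetic 1D problem (illustrative PSF and object, not taken from the chapter):

```python
# Richardson-Lucy iteration in the matrix form y = H x, on a tiny synthetic
# 1D deconvolution problem (Poisson statistics are assumed by the algorithm;
# the data here are noise-free for clarity).
import numpy as np

n = 64
x_true = np.zeros(n); x_true[20] = 5.0; x_true[40] = 3.0

# Shift-invariant PSF -> H is a circulant convolution matrix.
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
psf /= psf.sum()
H = np.array([np.roll(psf, k - n // 2) for k in range(n)]).T

y = H @ x_true
x = np.ones(n)                                   # flat, positive start
for _ in range(200):
    x *= (H.T @ (y / (H @ x + 1e-12))) / (H.T @ np.ones(n) + 1e-12)

print("peak positions recovered:", sorted(np.argsort(x)[-2:]))
```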

  4. Some Issues Related to Integrating Active Flow Control With Flight Control

    NASA Technical Reports Server (NTRS)

    Williams, David; Colonius, Tim; Tadmor, Gilead; Rowley, Clancy

    2010-01-01

    Time-varying control of the lift coefficient CL is necessary for integrating AFC and flight control (biasing allows for +/- changes in lift). Time delays associated with actuation are long (approximately 5.8 c/U) and must be included in controllers. Convolution of the input signal with a single-pulse kernel gives a reasonable prediction of the lift response.

  5. Insights into jumonji c-domain containing protein 6 (JMJD6): a multifactorial role in FMDV replication in cells

    USDA-ARS?s Scientific Manuscript database

    The Jumonji C-domain containing protein 6 (JMJD6) has had a convoluted history. It was first identified as the phosphatidylserine receptor (PSR) on the cell surface responsible for recognizing phosphatidylserine on the surface of apoptotic cells resulting in their engulfment by phagocytic cells. Sub...

  6. Estimation of neutron energy distributions from prompt gamma emissions

    NASA Astrophysics Data System (ADS)

    Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.

    2017-11-01

    A technique for estimating the incident neutron energy distribution from the prompt gamma intensities emitted by a system exposed to neutrons is presented. The emitted prompt gamma intensities, or the measured photo peaks in a gamma detector, are related to the incident neutron energy distribution through a convolution of the response of the system generating the prompt gammas to mono-energetic neutrons. The system studied here is a cylinder of high-density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) with an outer Pb cover, exposed to neutrons. The five prompt gamma peaks emitted from hydrogen, boron, carbon and lead can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. Such an under-determined set of equations is solved using the genetic-algorithm-based Monte Carlo deconvolution code GAMCD. The feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and intensities of emitted prompt gammas from the Pb-covered BHDPE-HDPE system for several incident neutron spectra spanning different energy ranges.
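
To show the structure of the unfolding problem (not the GAMCD genetic-algorithm solver itself), the sketch below poses the five-peak, many-bin system and solves it with a smoothness-regularized non-negative least squares as a stand-in; the response matrix and spectrum are synthetic:

```python
# Structure of the unfolding problem: five prompt-gamma intensities = response
# matrix (5 x n_bins) times the neutron spectrum. The paper solves this
# under-determined system with the genetic-algorithm code GAMCD; here a
# smoothness-regularized non-negative least squares serves as a stand-in.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bins = 30
R = rng.random((5, n_bins))                      # 5 peaks vs 30 energy bins
phi_true = np.exp(-0.5 * ((np.arange(n_bins) - 12) / 4.0) ** 2)
y = R @ phi_true                                 # measured peak intensities

# Tikhonov smoothing rows (second differences) make the system well-posed.
lam = 0.5
D = np.diff(np.eye(n_bins), n=2, axis=0)
A = np.vstack([R, lam * D])
b = np.concatenate([y, np.zeros(D.shape[0])])

phi, _ = nnls(A, b)
print("correlation with true spectrum:", np.corrcoef(phi, phi_true)[0, 1])
```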

  7. Cross-Layer Design for Robust and Scalable Video Transmission in Dynamic Wireless Environment

    DTIC Science & Technology

    2011-02-01

    …code rate convolutional codes or prioritized Rate-Compatible Punctured… "New rate-compatible punctured convolutional codes for Viterbi decoding," IEEE Trans. Communications, Volume 42, Issue 12, pp. 3073-3079, Dec… Quality of service; RCPC: rate-compatible and punctured convolutional codes; SNR: signal to noise…

  8. A Video Transmission System for Severely Degraded Channels

    DTIC Science & Technology

    2006-07-01

    …rate-compatible punctured convolutional codes (RCPC). By separating the SPIHT bitstream… June 2000. [170] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Transactions on… Farvardin [160] used rate-compatible convolutional codes. They noticed that for some transmission rates, one of their EEP schemes, which may…

  9. There is no MacWilliams identity for convolutional codes. [transmission gain comparison

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.

  10. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network

    PubMed Central

    Qu, Xiaobo; He, Yifan

    2018-01-01

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in their ability to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms state-of-the-art methods. PMID:29509666

  11. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    PubMed

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in their ability to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms state-of-the-art methods.
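
The "competition among multi-scale convolutional filters" amounts to a maxout across parallel convolutions with different kernel sizes. A hedged NumPy sketch of the forward pass only (random stand-in kernels, not trained weights):

```python
# Hedged NumPy sketch of the "competition" idea: parallel convolutions with
# kernels of different sizes, followed by an elementwise maximum that lets
# the best-matching scale win at each pixel.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
image = rng.random((64, 64))

responses = []
for k in (3, 5, 7):                              # multi-scale kernels
    kernel = rng.standard_normal((k, k)) / k     # stand-in for learned weights
    responses.append(convolve2d(image, kernel, mode="same"))

competitive = np.maximum.reduce(responses)       # maximum competitive strategy
print("output shape:", competitive.shape)
```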

  12. Spin-Hall effect in the scattering of structured light from plasmonic nanowire

    NASA Astrophysics Data System (ADS)

    Sharma, Deepak K.; Kumar, Vijay; Vasista, Adarsh B.; Chaubey, Shailendra K.; Kumar, G. V. Pavan

    2018-06-01

    Spin-orbit interactions are subwavelength phenomena which can potentially lead to numerous device-related applications in nanophotonics. Here, we report the Spin-Hall effect in the forward scattering of Hermite-Gaussian and Gaussian beams from a plasmonic nanowire. An asymmetric scattered radiation distribution was observed for circularly polarized beams. The asymmetry in the scattered radiation distribution changes sign when the polarization handedness inverts. We found a significant enhancement of the Spin-Hall effect for the Hermite-Gaussian beam as compared to the Gaussian beam at constant input power. The difference between the scattered powers perpendicular to the long axis of the plasmonic nanowire was used to quantify the enhancement. In addition, the nodal line of the HG beam acts as a marker for the Spin-Hall shift. Numerical calculations corroborate the experimental observations and suggest that the spin-flow component of the Poynting vector associated with the circular polarization is responsible for the Spin-Hall effect and its enhancement.

  13. The Effect of a Non-Gaussian Random Loading on High-Cycle Fatigue of a Thermally Post-Buckled Structure

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Behnke, Marlana N.; Przekop, Adam

    2010-01-01

    High-cycle fatigue of an elastic-plastic beam structure under the combined action of thermal and high-intensity non-Gaussian acoustic loadings is considered. Such loadings can be highly damaging when snap-through motion occurs between thermally post-buckled equilibria. The simulated non-Gaussian loadings investigated have a range of skewness and kurtosis typical of turbulent boundary layer pressure fluctuations in the vicinity of forward facing steps. Further, the duration and steadiness of high excursion peaks is comparable to that found in such turbulent boundary layer data. Response and fatigue life estimates are found to be insensitive to the loading distribution, with the minor exception of cases involving plastic deformation. In contrast, the fatigue life estimate was found to be highly affected by a different type of non-Gaussian loading having bursts of high excursion peaks.

  14. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms I: Revisiting Cluster-Based Inferences.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K

    2018-02-01

    In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging data as a surrogate for null functional magnetic resonance imaging (fMRI) datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made by using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE corrected inferences made with Gaussian spatial noise approximations are valid.

  15. A 2D Gaussian-Beam-Based Method for Modeling the Dichroic Surfaces of Quasi-Optical Systems

    NASA Astrophysics Data System (ADS)

    Elis, Kevin; Chabory, Alexandre; Sokoloff, Jérôme; Bolioli, Sylvain

    2016-08-01

    In this article, we propose an approach in the spectral domain to treat the interaction of a field with a dichroic surface in two dimensions. For a Gaussian beam illumination of the surface, the reflected and transmitted fields are each approximated by a single reflected and a single transmitted Gaussian beam. Their characteristics are determined by means of a matching in the spectral domain, which requires a second-order approximation of the dichroic surface response when excited by plane waves. This approximation is of the same order as the one used in Gaussian beam shooting algorithms to model curved interfaces associated with lenses, reflectors, etc. The method uses general analytical formulations for the Gaussian beams (GBs) that depend on either a paraxial or a far-field approximation. Numerical experiments are conducted to test the efficiency of the method in terms of accuracy and computation time. They include a parametric study and a case in which the illumination is provided by a horn antenna. For the latter, the incident field is first expressed as a sum of Gaussian beams by means of Gabor frames.

  16. Acceleration of Monte Carlo SPECT simulation using convolution-based forced detection

    NASA Astrophysics Data System (ADS)

    de Jong, H. W. A. M.; Slijpen, E. T. P.; Beekman, F. J.

    2001-02-01

    Monte Carlo (MC) simulation is an established tool to calculate photon transport through tissue in Emission Computed Tomography (ECT). Since the first appearance of MC a large variety of variance reduction techniques (VRT) have been introduced to speed up these notoriously slow simulations. One example of a very effective and established VRT is known as forced detection (FD). In standard FD the path from the photon's scatter position to the camera is chosen stochastically from the appropriate probability density function (PDF), modeling the distance-dependent detector response. In order to speed up MC the authors propose a convolution-based FD (CFD) which involves replacing the sampling of the PDF by a convolution with a kernel which depends on the position of the scatter event. The authors validated CFD for parallel-hole Single Photon Emission Computed Tomography (SPECT) using a digital thorax phantom. Comparison of projections estimated with CFD and standard FD shows that both estimates converge to practically identical projections (maximum bias 0.9% of peak projection value), despite the slightly different photon paths used in CFD and standard FD. Projections generated with CFD converge, however, to a noise-free projection up to one or two orders of magnitude faster, which is extremely useful in many applications such as model-based image reconstruction.
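
The essence of CFD is that each scatter event deposits its entire detector-response PDF onto the projection at once, as a convolution kernel whose width grows with distance, rather than contributing a single stochastic sample. A toy 1D sketch under assumed geometry and kernel widths:

```python
# Sketch of convolution-based forced detection: instead of stochastically
# sampling the distance-dependent detector PDF for each scatter event, the
# event's weight is spread onto the projection by a kernel whose width grows
# with distance to the camera. Geometry and widths are illustrative.
import numpy as np

n_det = 128
projection = np.zeros(n_det)
x_det = np.arange(n_det)

rng = np.random.default_rng(2)
events = [(rng.uniform(0, n_det), rng.uniform(1, 30), 1.0)  # (x, depth, weight)
          for _ in range(500)]

for x, depth, w in events:
    sigma = 0.5 + 0.08 * depth                   # distance-dependent response
    kernel = np.exp(-0.5 * ((x_det - x) / sigma) ** 2)
    projection += w * kernel / kernel.sum()      # deposit the whole PDF at once

print("total deposited weight:", projection.sum())  # ~ number of events
```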

  17. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    PubMed

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. Ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method to the acquired images. Network models were trained to produce, from unprocessed input images, output images whose quality was close to that of the ground-truth images. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  18. A comparison of the convolution and TMR10 treatment planning algorithms for Gamma Knife® radiosurgery

    PubMed Central

    Wright, Gavin; Harrold, Natalie; Bownes, Peter

    2018-01-01

    Aims: To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods: Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results: Both algorithms matched point-dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded D99% values 6.4% (95% CI: 5.5%-7.3%, p<0.001) less than shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions: Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; therefore its implementation may require a re-evaluation of prescription doses. PMID:29657896

  19. X-Ray Spectro-Polarimetry with Photoelectric Polarimeters

    NASA Technical Reports Server (NTRS)

    Strohmayer, T. E.

    2017-01-01

    We derive a generalization of forward fitting for X-ray spectroscopy to include linear polarization of X-ray sources, appropriate for the anticipated next generation of space-based photoelectric polarimeters. We show that the inclusion of polarization sensitivity requires joint fitting to three observed spectra, one for each of the Stokes parameters I(E), U(E), and Q(E). The equations for Stokes I(E) (the total intensity spectrum) are identical to the familiar case with no polarization sensitivity, for which the model-predicted spectrum is obtained by a convolution of the source spectrum, F(E), with the familiar energy response function, A(E) R(E', E), where A(E) and R(E', E) are the effective area and energy redistribution matrix, respectively. In addition to the energy spectrum, the two new relations for U(E) and Q(E) include the source polarization fraction and position angle versus energy, a(E) and ψ0(E), respectively, and the model-predicted spectra for these relations are obtained by a convolution with the modulated energy response function, μ(E) A(E) R(E', E), where μ(E) is the energy-dependent modulation fraction that quantifies a polarimeter's angular response to 100% polarized radiation. We present results of simulations with response parameters appropriate for the proposed PRAXyS Small Explorer observatory to illustrate the procedures and methods, and we discuss some aspects of photoelectric polarimeters with relevance to understanding their calibration and operation.

  20. Design of Intelligent Cross-Layer Routing Protocols for Airborne Wireless Networks Under Dynamic Spectrum Access Paradigm

    DTIC Science & Technology

    2011-05-01

    …rate convolutional codes or the prioritized Rate-Compatible Punctured… Quality of service; RCPC: rate-compatible and punctured convolutional codes; SNR: signal to noise ratio; SSIM… Convolutional (RCPC) codes. The RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code. The…
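
Puncturing, mentioned in the last fragment, derives higher-rate codes from one parent encoder by deleting coded bits according to a pattern. A minimal sketch with textbook values (generators 7/5 octal, a rate-2/3 pattern), not parameters taken from the reports above:

```python
# Hedged sketch of rate-compatible puncturing: a rate-1/2 convolutional
# encoder (constraint length 3, generators 7/5 octal) whose output is
# punctured to rate 2/3 by deleting bits according to a pattern.
import numpy as np

G = [0b111, 0b101]                               # generators 7, 5 (octal)

def conv_encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                   # 3-bit register: [b, s1, s0]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                         # shift: new s1 = b
    return out

def puncture(coded, pattern=((1, 1), (1, 0))):   # keep 3 of 4 -> rate 2/3
    flat = [p for pair in pattern for p in pair]
    return [c for i, c in enumerate(coded) if flat[i % len(flat)]]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)
print(len(coded), "coded bits ->", len(puncture(coded)), "after puncturing")
```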

  1. Convolution Operation of Optical Information via Quantum Storage

    NASA Astrophysics Data System (ADS)

    Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan

    2017-06-01

    We propose a novel method to achieve optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of the 4f-imaging system, the optical convolution of the two input images can be achieved in the image plane.
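
The optical scheme exploits the convolution theorem realized by the 4f system: multiplication in the Fourier plane equals convolution in the image plane. A digital analogue with placeholder images:

```python
# Digital analogue of the 4f convolution: multiplying the Fourier transforms
# of two images and transforming back yields their circular convolution,
# which is what the medium in the confocal Fourier plane performs optically.
import numpy as np
from scipy.signal import fftconvolve

a = np.random.rand(32, 32)
b = np.random.rand(32, 32)

conv_fourier = np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

# Cross-check against direct linear convolution, folded to circular form.
direct = fftconvolve(a, b, mode="full")          # 63 x 63 linear convolution
circ = np.zeros((32, 32))
for i in range(direct.shape[0]):
    for j in range(direct.shape[1]):
        circ[i % 32, j % 32] += direct[i, j]

print("max difference:", np.abs(conv_fourier - circ).max())
```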

  2. Modeling lateral geniculate nucleus response with contrast gain control. Part 2: Analysis

    PubMed Central

    Cope, Davis; Blakeslee, Barbara; McCourt, Mark E.

    2014-01-01

    Cope, Blakeslee and McCourt (2013) proposed a class of models for LGN ON-cell behavior consisting of a linear response with divisive normalization by local stimulus contrast. Here we analyze a specific model with the linear response defined by a difference-of-Gaussians filter and a circular Gaussian for the gain pool weighting function. For sinusoidal grating stimuli, the parameter region for band-pass behavior of the linear response is determined, the gain control response is shown to act as a switch (changing from “off” to “on” with increasing spatial frequency), and it is shown that large gain pools stabilize the optimal spatial frequency of the total nonlinear response at a fixed value independent of contrast and stimulus magnitude. Under- and super-saturation as well as contrast saturation occur as typical effects of stimulus magnitude. For circular spot stimuli, it is shown that large gain pools stabilize the spot size that yields the maximum response. PMID:24562034
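
A compact sketch of this model class, i.e. a difference-of-Gaussians linear stage divided by contrast gathered in a Gaussian gain pool; all parameter values below are illustrative assumptions, not the fitted values of the paper:

```python
# Sketch of the model class analyzed above: a difference-of-Gaussians linear
# response divisively normalized by local contrast from a circular Gaussian
# gain pool. Parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

x = np.linspace(0, 2 * np.pi * 8, 512)
stimulus = 0.5 * np.sin(x)[None, :] * np.ones((64, 1))  # sinusoidal grating

center = gaussian_filter(stimulus, sigma=1.0)
surround = gaussian_filter(stimulus, sigma=3.0)
linear = center - surround                        # DoG linear response

# Local contrast via a large circular Gaussian gain pool (RMS contrast).
pool = np.sqrt(gaussian_filter(stimulus ** 2, sigma=20.0))
c50 = 0.1                                         # semi-saturation constant
response = linear / (c50 + pool)                  # divisive gain control

print("peak linear vs normalized:", linear.max(), response.max())
```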

  3. High Performance Implementation of 3D Convolutional Neural Networks on a GPU.

    PubMed

    Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.

  4. High Performance Implementation of 3D Convolutional Neural Networks on a GPU

    PubMed Central

    Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version. PMID:29250109
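
The saving WMFA provides can be seen already in its smallest 1D instance, F(2,3): two outputs of a 3-tap filter computed with 4 multiplications instead of 6. A self-checking sketch (the 2D/3D layer versions nest this transform):

```python
# Minimal 1D instance of the Winograd trick (F(2,3)): two outputs of a 3-tap
# filter from four inputs using 4 multiplications instead of 6. Numbers are
# arbitrary test values.
import numpy as np

def winograd_f23(d, g):
    """d: 4 inputs, g: 3 filter taps -> 2 outputs of the correlation."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, -1.0, 3.0])
g = np.array([0.5, -1.0, 2.0])

direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), "vs direct", direct)
```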

  5. Convoluted nozzle design for the RL10 derivative 2B engine

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated, including three configurations of the two-position nozzle: a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.

  6. Local dynamic range compensation for scanning electron microscope imaging system by sub-blocking multiple peak HE with convolution.

    PubMed

    Sim, K S; Teh, V; Tey, Y C; Kho, T K

    2016-11-01

    This paper introduces a new technique to improve Scanning Electron Microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. The new modified MPHE performs better than the original MPHE. In addition, the sub-blocking method incorporates a convolution operator which helps to remove the blocking effect for SEM images after applying the new technique. By properly distributing suitable pixel values over the whole image, the convolution operator effectively removes the blocking effect. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.

  7. The semantic Stroop effect: An ex-Gaussian analysis.

    PubMed

    White, Darcy; Risko, Evan F; Besner, Derek

    2016-10-01

    Previous analyses of the standard Stroop effect (which typically uses color words that form part of the response set) have documented effects on mean reaction times in hundreds of experiments in the literature. Less well known is the fact that ex-Gaussian analyses reveal that such effects are seen in (a) the mean of the normal distribution (mu), as well as in (b) the standard deviation of the normal distribution (sigma) and (c) the tail (tau). No ex-Gaussian analysis exists in the literature with respect to the semantically based Stroop effect (which contrasts incongruent color-associated words with, e.g., neutral controls). In the present experiments, we investigated whether the semantically based Stroop effect is also seen in the three ex-Gaussian parameters. Replicating previous reports, color naming was slower when the color was carried by an irrelevant (but incongruent) color-associated word (e.g., sky, tomato) than when the control items consisted of neutral words (e.g., keg, palace) in each of four experiments. An ex-Gaussian analysis revealed that this semantically based Stroop effect was restricted to the arithmetic mean and mu; no semantic Stroop effect was observed in tau. These data are consistent with the views (1) that there is a clear difference in the source of the semantic Stroop effect, as compared to the standard Stroop effect (evidenced by the presence vs. absence of an effect on tau), and (2) that interference associated with response competition on incongruent trials in tau is absent in the semantic Stroop effect.
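
For readers wanting to reproduce such a decomposition, scipy's exponentially modified normal distribution (exponnorm) can fit mu, sigma and tau directly; its parameters map as mu = loc, sigma = scale, tau = K * scale. A sketch on simulated RTs (not Stroop data):

```python
# Hedged sketch of an ex-Gaussian decomposition of reaction times using
# scipy's exponentially modified normal distribution (exponnorm).
import numpy as np
from scipy.stats import exponnorm, norm, expon

rng = np.random.default_rng(3)
mu, sigma, tau = 550.0, 40.0, 120.0              # ms, typical RT magnitudes
rts = norm.rvs(mu, sigma, size=2000, random_state=rng) + \
      expon.rvs(scale=tau, size=2000, random_state=rng)

K, loc, scale = exponnorm.fit(rts)               # mu = loc, sigma = scale, tau = K*scale
print(f"mu={loc:.1f}  sigma={scale:.1f}  tau={K*scale:.1f}")
```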

  8. Intra-Individual Response Variability Assessed by Ex-Gaussian Analysis may be a New Endophenotype for Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco

    2014-01-01

    Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention-deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the distribution of response times (RTs) along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components of the RT distribution with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs meets the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing (TD) children without a family history of ADHD) and (b) represent a phenotypic correlate for previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. To test whether intra-individual variability may represent a correlate of previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling pairs following standard protocols. Groups were compared by fitting independent general linear models to the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4 genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.

  9. Response of MDOF strongly nonlinear systems to fractional Gaussian noises.

    PubMed

    Deng, Mao-Lin; Zhu, Wei-Qiu

    2016-08-01

    In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.

  10. Response of MDOF strongly nonlinear systems to fractional Gaussian noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Mao-Lin; Zhu, Wei-Qiu, E-mail: wqzhu@zju.edu.cn

    2016-08-15

    In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.

  11. Response of Electrical Activity in an Improved Neuron Model under Electromagnetic Radiation and Noise

    PubMed Central

    Zhan, Feibiao; Liu, Shenquan

    2017-01-01

    Electrical activities are ubiquitous neuronal bioelectric phenomena which have many different modes to encode the expression of biological information and which constitute the whole process of signal propagation between neurons. We therefore focus on the electrical activities of neurons, a topic of widespread interest among neuroscientists. In this paper, we investigate the electrical activities of the Morris-Lecar (M-L) model under electromagnetic radiation or Gaussian white noise, which brings the model closer to the behavior of real neurons in a realistic neural network. First, we explore the dynamical response of the whole system with electromagnetic induction (EMI) and Gaussian white noise. We find slight differences in the discharge behaviors when comparing the response of the original system with that of the improved system, and electromagnetic induction can transform a bursting or spiking state to a quiescent state and vice versa. Furthermore, we study the bursting transition mode and the corresponding periodic solution mechanism for the isolated neuron model with electromagnetic induction by using one-parameter and two-parameter bifurcation analysis. Finally, we analyze the effects of Gaussian white noise on the original system and the coupled system, which is conducive to understanding the actual discharge properties of realistic neurons. PMID:29209192

  12. Response of Electrical Activity in an Improved Neuron Model under Electromagnetic Radiation and Noise.

    PubMed

    Zhan, Feibiao; Liu, Shenquan

    2017-01-01

    Electrical activities are ubiquitous neuronal bioelectric phenomena which have many different modes to encode the expression of biological information and which constitute the whole process of signal propagation between neurons. We therefore focus on the electrical activities of neurons, a topic of widespread interest among neuroscientists. In this paper, we investigate the electrical activities of the Morris-Lecar (M-L) model under electromagnetic radiation or Gaussian white noise, which brings the model closer to the behavior of real neurons in a realistic neural network. First, we explore the dynamical response of the whole system with electromagnetic induction (EMI) and Gaussian white noise. We find slight differences in the discharge behaviors when comparing the response of the original system with that of the improved system, and electromagnetic induction can transform a bursting or spiking state to a quiescent state and vice versa. Furthermore, we study the bursting transition mode and the corresponding periodic solution mechanism for the isolated neuron model with electromagnetic induction by using one-parameter and two-parameter bifurcation analysis. Finally, we analyze the effects of Gaussian white noise on the original system and the coupled system, which is conducive to understanding the actual discharge properties of realistic neurons.

  13. Dynamic analysis of nonlinear rotor-housing systems

    NASA Technical Reports Server (NTRS)

    Noah, Sherif T.

    1988-01-01

    Nonlinear analysis methods are developed which will enable the reliable prediction of the dynamic behavior of the space shuttle main engine (SSME) turbopumps in the presence of bearing clearances and other local nonlinearities. A computationally efficient convolution method, based on discretized Duhamel and transition matrix integral formulations, is developed for the transient analysis. In the formulation, the coupling forces due to the nonlinearities are treated as external forces acting on the coupled subsystems. Iteration is used to determine their magnitudes at each time increment. The method is applied to a nonlinear generic model of the high pressure oxygen turbopump (HPOTP). Compared to fourth-order Runge-Kutta numerical integration methods, the convolution approach proved to be more accurate and more highly efficient. For determining the nonlinear, steady-state periodic responses, an incremental harmonic balance method was also developed. The method was successfully used to determine dominantly harmonic and subharmonic responses of the HPOTP generic model with bearing clearances. A reduction method similar to the impedance formulation utilized with linear systems is used to reduce the housing-rotor models to their coordinates at the bearing clearances. Recommendations are included for further development of the method, for extending the analysis to aperiodic and chaotic regimes, and for conducting critical parametric studies of the nonlinear response of the current SSME turbopumps.
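
The transient convolution idea reduces, for a single linear mode, to the discretized Duhamel integral: the response is the excitation convolved with the modal impulse response. The sketch below shows only this linear kernel step, omitting the paper's iteration on the nonlinear coupling forces; parameters are illustrative:

```python
# Discretized Duhamel (convolution) integral for a single resonant mode:
# transient response = excitation convolved with the mode's impulse response.
import numpy as np

dt, n = 1e-3, 4000
t = np.arange(n) * dt
m, zeta, wn = 1.0, 0.02, 2 * np.pi * 20          # mass, damping, 20 Hz mode
wd = wn * np.sqrt(1 - zeta ** 2)

h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)  # impulse response
f = np.random.default_rng(4).standard_normal(n)         # broadband forcing

x = np.convolve(f, h)[:n] * dt                   # Duhamel integral, discretized
print("rms response:", x.std())
```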

  14. Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI.

    PubMed

    Yang, Xin; Liu, Chaoyue; Wang, Zhiwei; Yang, Jun; Min, Hung Le; Wang, Liang; Cheng, Kwang-Ting Tim

    2017-12-01

    Multi-parametric magnetic resonance imaging (mp-MRI) is increasingly popular for prostate cancer (PCa) detection and diagnosis. However, interpreting mp-MRI data, which typically contains multiple unregistered 3D sequences, e.g. apparent diffusion coefficient (ADC) and T2-weighted (T2w) images, is time-consuming and demands special expertise, limiting its usage for large-scale PCa screening. Therefore, solutions for computer-aided detection of PCa in mp-MRI images are highly desirable. Most recent advances in automated methods for PCa detection employ a handcrafted-feature-based two-stage classification flow, i.e. voxel-level classification followed by region-level classification. This work presents an automated PCa detection system which can concurrently identify the presence of PCa in an image and localize lesions based on deep convolutional neural network (CNN) features and a single-stage SVM classifier. Specifically, the developed co-trained CNNs consist of two parallel convolutional networks for ADC and T2w images respectively. Each network is trained using images of a single modality in a weakly-supervised manner by providing a set of prostate images with image-level labels indicating only the presence of PCa without priors of lesions' locations. Discriminative visual patterns of lesions can be learned effectively from clutters of prostate and surrounding tissues. A cancer response map with each pixel indicating the likelihood to be cancerous is explicitly generated at the last convolutional layer of the network for each modality. A new back-propagated error E is defined to enforce both optimized classification results and consistent cancer response maps for different modalities, which help capture highly representative PCa-relevant features during the CNN feature learning process. The CNN features of each modality are concatenated and fed into an SVM classifier. For images which are classified as containing cancers, non-maximum suppression and adaptive thresholding are applied to the corresponding cancer response maps for PCa foci localization. Evaluation based on data from 160 patients with 12-core systematic TRUS-guided prostate biopsy as the reference standard demonstrates that our system achieves a sensitivity of 0.46, 0.92 and 0.97 at 0.1, 1 and 10 false positives per normal/benign patient, which is significantly superior to two state-of-the-art CNN-based methods (Oquab et al., 2015; Zhou et al., 2015) and 6-core systematic prostate biopsies. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    NASA Astrophysics Data System (ADS)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., convolutional neural networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep learning neural networks into hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, supervised iterative quantization is conducted via two steps on the server: apply k-means-based adaptive quantization to the learned network weights, then retrain the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded onto the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The convolutional neural network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
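
A minimal sketch of the k-means weight-quantization step (plain-NumPy Lloyd iterations; the alternating retraining step is omitted, and all sizes are illustrative assumptions):

```python
# Sketch of k-means weight quantization: cluster the weights, then replace
# each weight by its nearest centroid so it can be stored with log2(k) bits.
import numpy as np

rng = np.random.default_rng(5)
weights = rng.standard_normal(10_000)            # stand-in for learned weights
k = 16                                           # 4-bit resolution

centroids = np.quantile(weights, np.linspace(0, 1, k))  # spread-out init
for _ in range(20):                              # Lloyd iterations
    assign = np.argmin(np.abs(weights[:, None] - centroids[None, :]), axis=1)
    for j in range(k):
        if np.any(assign == j):
            centroids[j] = weights[assign == j].mean()

quantized = centroids[assign]
print("mean squared error:", np.mean((weights - quantized) ** 2))
```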

  16. Classification and unsupervised clustering of LIGO data with Deep Transfer Learning

    NASA Astrophysics Data System (ADS)

    George, Daniel; Shen, Hongyu; Huerta, E. A.

    2018-05-01

    Gravitational wave detection requires a detailed understanding of the response of the LIGO and Virgo detectors to true signals in the presence of environmental and instrumental noise. Of particular interest is the study of anomalous non-Gaussian transients, such as glitches, since their occurrence rate in LIGO and Virgo data can obscure or even mimic true gravitational wave signals. Therefore, successfully identifying and excising these anomalies from gravitational wave data is of utmost importance for the detection and characterization of true signals and for the accurate computation of their significance. To facilitate this work, we present the first application of deep learning combined with transfer learning to show that knowledge from pretrained models for real-world object recognition can be transferred for classifying spectrograms of glitches. To showcase this new method, we use a data set of twenty-two classes of glitches, curated and labeled by the Gravity Spy project using data collected during LIGO's first discovery campaign. We demonstrate that our Deep Transfer Learning method enables an optimal use of very deep convolutional neural networks for glitch classification given small and unbalanced training data sets, significantly reduces the training time, and achieves state-of-the-art accuracy above 98.8%, lowering the previous error rate by over 60%. More importantly, once trained via transfer learning on the known classes, we show that our neural networks can be truncated and used as feature extractors for unsupervised clustering to automatically group together new unknown classes of glitches and anomalous signals. This novel capability is of paramount importance to identify and remove new types of glitches which will occur as the LIGO/Virgo detectors gradually attain design sensitivity.

  17. Scalable Video Transmission Over Multi-Rate Multiple Access Channels

    DTIC Science & Technology

    2007-06-01

    …"Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE… source encoded using the MPEG-4 video codec. The source-encoded bitstream is then channel encoded with Rate Compatible Punctured Convolutional (RCPC…)… Clark, and J. M. Geist, "Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding," IEEE Transactions on…

  18. Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization

    DTIC Science & Technology

    2009-01-01

    …Rate Compatible Punctured Convolutional (RCPC) codes for channel… vol. 44, pp. 2943-2959, November 1998. [22] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE… coding rate for H.264/AVC video compression is determined. At the data link layer, the Rate-Compatible Punctured Convolutional (RCPC) channel coding…

  19. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  20. Computational analysis of current-loss mechanisms in a post-hole convolute driven by magnetically insulated transmission lines

    DOE PAGES

    Rose, D.  V.; Madrid, E.  A.; Welch, D.  R.; ...

    2015-03-04

    Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E.A. Madrid et al. Phys. Rev. ST Accel. Beams 16, 120401 (2013)] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.

  1. Classification of urine sediment based on convolution neural network

    NASA Astrophysics Data System (ADS)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolutional neural network framework, this paper relaxes the constraints of the original framework, which requires large training samples of identical size. The input images are translated and cropped to generate sub-images of equal size. Dropout is then applied to the sub-images, increasing the diversity of the samples and preventing overfitting. Proper subsets of the sub-image set are selected at random such that all subsets contain the same number of elements but no two subsets are identical; these subsets serve as input layers for the convolutional neural network. Through the convolutional layers, pooling, fully connected layer, and output layer, the classification loss rates of the test and training sets are obtained. In an experiment classifying red blood cells, white blood cells, and calcium oxalate crystals, a classification accuracy of 97% or more was achieved.

  2. Plane-wave decomposition by spherical-convolution microphone array

    NASA Astrophysics Data System (ADS)

    Rafaely, Boaz; Park, Munhum

    2004-05-01

    Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.

  3. Bioprinting of 3D Convoluted Renal Proximal Tubules on Perfusable Chips

    NASA Astrophysics Data System (ADS)

    Homan, Kimberly A.; Kolesky, David B.; Skylar-Scott, Mark A.; Herrmann, Jessica; Obuobi, Humphrey; Moisan, Annie; Lewis, Jennifer A.

    2016-10-01

    Three-dimensional models of kidney tissue that recapitulate human responses are needed for drug screening, disease modeling, and, ultimately, kidney organ engineering. Here, we report a bioprinting method for creating 3D human renal proximal tubules in vitro that are fully embedded within an extracellular matrix and housed in perfusable tissue chips, allowing them to be maintained for greater than two months. Their convoluted tubular architecture is circumscribed by proximal tubule epithelial cells and actively perfused through the open lumen. These engineered 3D proximal tubules on chip exhibit significantly enhanced epithelial morphology and functional properties relative to the same cells grown on 2D controls with or without perfusion. Upon introducing the nephrotoxin, Cyclosporine A, the epithelial barrier is disrupted in a dose-dependent manner. Our bioprinting method provides a new route for programmably fabricating advanced human kidney tissue models on demand.

  4. Bioprinting of 3D Convoluted Renal Proximal Tubules on Perfusable Chips

    PubMed Central

    Homan, Kimberly A.; Kolesky, David B.; Skylar-Scott, Mark A.; Herrmann, Jessica; Obuobi, Humphrey; Moisan, Annie; Lewis, Jennifer A.

    2016-01-01

    Three-dimensional models of kidney tissue that recapitulate human responses are needed for drug screening, disease modeling, and, ultimately, kidney organ engineering. Here, we report a bioprinting method for creating 3D human renal proximal tubules in vitro that are fully embedded within an extracellular matrix and housed in perfusable tissue chips, allowing them to be maintained for greater than two months. Their convoluted tubular architecture is circumscribed by proximal tubule epithelial cells and actively perfused through the open lumen. These engineered 3D proximal tubules on chip exhibit significantly enhanced epithelial morphology and functional properties relative to the same cells grown on 2D controls with or without perfusion. Upon introducing the nephrotoxin, Cyclosporine A, the epithelial barrier is disrupted in a dose-dependent manner. Our bioprinting method provides a new route for programmably fabricating advanced human kidney tissue models on demand. PMID:27725720

  5. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
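
    The full convolution SOR accelerator is beyond a short sketch, but the underlying dynamic iteration is easy to show. Below is a minimal, hedged illustration of plain (unaccelerated) Gauss-Seidel waveform relaxation for a linear test system x' = Ax; the matrix, step size, and sweep count are illustrative assumptions, not taken from the paper.

```python
# Gauss-Seidel waveform relaxation for x' = A x (illustrative stand-in for the
# iteration that convolution SOR accelerates; A, dt, and sweeps are assumed).
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
x0 = np.array([1.0, 0.0])
T, n = 2.0, 400
dt = T / n

# Initial waveform guess: hold the initial condition constant in time.
x = np.tile(x0[:, None], (1, n + 1))

for sweep in range(25):                      # waveform relaxation sweeps
    x_new = x.copy()
    for i in range(2):                       # solve one scalar ODE per component
        for k in range(n):                   # explicit Euler in time
            coupling = sum(A[i, j] * (x_new[j, k] if j < i else x[j, k])
                           for j in range(2) if j != i)
            x_new[i, k + 1] = x_new[i, k] + dt * (A[i, i] * x_new[i, k] + coupling)
    x = x_new

print("waveform relaxation:", x[:, -1])
print("matrix exponential: ", expm(A * T) @ x0)
```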

  6. A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2010-09-01

    In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  7. 2D convolution kernels of ionization chambers used for photon-beam dosimetry in magnetic fields: the advantage of small over large chamber dimensions

    NASA Astrophysics Data System (ADS)

    Khee Looe, Hui; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2018-04-01

    This study aims at developing an optimization strategy for photon-beam dosimetry in magnetic fields using ionization chambers. Similar to the familiar case in the absence of a magnetic field, detectors should be selected under the criterion that their measured 2D signal profiles M(x,y) approximate the absorbed dose to water profiles D(x,y) as closely as possible. Since the conversion of D(x,y) into M(x,y) is known as the convolution with the ‘lateral dose response function’ K(x-ξ, y-η) of the detector, the ideal detector would be characterized by a vanishing magnetic field dependence of this convolution kernel (Looe et al 2017b Phys. Med. Biol. 62 5131–48). The idea of the present study is to find out, by Monte Carlo simulation of two commercial ionization chambers of different size, whether the smaller chamber dimensions would be instrumental to approach this aim. As typical examples, the lateral dose response functions in the presence and absence of a magnetic field have been Monte-Carlo modeled for the new commercial ionization chambers PTW 31021 (‘Semiflex 3D’, internal radius 2.4 mm) and PTW 31022 (‘PinPoint 3D’, internal radius 1.45 mm), which are both available with calibration factors. The Monte-Carlo model of the ionization chambers has been adjusted to account for the presence of the non-collecting part of the air volume near the guard ring. The Monte-Carlo results allow a comparison between the widths of the magnetic field dependent photon fluence response function K_M(x-ξ, y-η) and of the lateral dose response function K(x-ξ, y-η) of the two chambers with the width of the dose deposition kernel K_D(x-ξ, y-η). The simulated dose and chamber signal profiles show that in small photon fields and in the presence of a 1.5 T field the distortion of the chamber signal profile compared with the true dose profile is weakest for the smaller chamber. The dose responses of both chambers at large field size are shown to be altered by not more than 2% in magnetic fields up to 1.5 T for all three investigated chamber orientations.
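
    As a toy illustration of the convolution relationship above (not the paper's Monte Carlo model), the sketch below convolves an idealized small-field dose profile D(x) with Gaussian stand-ins for the lateral dose response function K, using kernel widths of the order of the two chambers' internal radii (an assumption), and compares the resulting penumbra broadening.

```python
# Measured signal M(x) = (K * D)(x); Gaussian K widths are assumed values.
import numpy as np

x = np.arange(-20.0, 20.0, 0.05)             # lateral position, mm
dx = x[1] - x[0]
D = ((x > -5.0) & (x < 5.0)).astype(float)   # idealized 10 mm field dose profile

def lateral_response_kernel(sigma):
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / (k.sum() * dx)                # normalize to unit area

for name, sigma in [("Semiflex 3D (r = 2.4 mm)", 2.4),
                    ("PinPoint 3D (r = 1.45 mm)", 1.45)]:
    M = np.convolve(D, lateral_response_kernel(sigma), mode="same") * dx
    # 20%-80% penumbra width, averaged over both field edges
    penumbra = np.count_nonzero((M > 0.2) & (M < 0.8)) * dx / 2.0
    print("%s: penumbra ~ %.2f mm" % (name, penumbra))
```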

  8. Non-Gaussian statistics and nanosecond dynamics of electrostatic fluctuations affecting optical transitions in proteins.

    PubMed

    Martin, Daniel R; Matyushov, Dmitry V

    2012-08-30

    We show that electrostatic fluctuations of the protein-water interface are globally non-Gaussian. The electrostatic component of the optical transition energy (energy gap) in a hydrated green fluorescent protein is studied here by classical molecular dynamics simulations. The distribution of the energy gap displays a high excess in the breadth of electrostatic fluctuations over the prediction of the Gaussian statistics. The energy gap dynamics include a nanosecond component. When simulations are repeated with frozen protein motions, the statistics shifts to the expectations of linear response and the slow dynamics disappear. We therefore suggest that both the non-Gaussian statistics and the nanosecond dynamics originate largely from global, low-frequency motions of the protein coupled to the interfacial water. The non-Gaussian statistics can be experimentally verified from the temperature dependence of the first two spectral moments measured at constant-volume conditions. Simulations at different temperatures are consistent with other indicators of the non-Gaussian statistics. In particular, the high-temperature part of the energy gap variance (second spectral moment) scales linearly with temperature and extrapolates to zero at a temperature characteristic of the protein glass transition. This result, violating the classical limit of the fluctuation-dissipation theorem, leads to a non-Boltzmann statistics of the energy gap and corresponding non-Arrhenius kinetics of radiationless electronic transitions, empirically described by the Vogel-Fulcher-Tammann law.

  9. A Linear Variable-θ Model for Measuring Individual Differences in Response Precision

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2011-01-01

    Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…

  10. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-02-15

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.

  11. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  12. Flexible link functions in nonparametric binary regression with Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K

    2016-09-01

    In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction of future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing a nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.

  13. Flexible Link Functions in Nonparametric Binary Regression with Gaussian Process Priors

    PubMed Central

    Li, Dan; Lin, Lizhen; Dey, Dipak K.

    2015-01-01

    Summary In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction of future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing a nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. PMID:26686333

  14. Gaussian curvature directs the distribution of spontaneous curvature on bilayer membrane necks.

    PubMed

    Chabanon, Morgan; Rangamani, Padmini

    2018-03-28

    Formation of membrane necks is crucial for fission and fusion in lipid bilayers. In this work, we seek to answer the following fundamental question: what is the relationship between protein-induced spontaneous mean curvature and the Gaussian curvature at a membrane neck? Using an augmented Helfrich model for lipid bilayers to include membrane-protein interaction, we solve the shape equation on catenoids to find the field of spontaneous curvature that satisfies mechanical equilibrium of membrane necks. In this case, the shape equation reduces to a variable coefficient Helmholtz equation for spontaneous curvature, where the source term is proportional to the Gaussian curvature. We show how this latter quantity is responsible for non-uniform distribution of spontaneous curvature in minimal surfaces. We then explore the energetics of catenoids with different spontaneous curvature boundary conditions and geometric asymmetries to show how heterogeneities in spontaneous curvature distribution can couple with Gaussian curvature to result in membrane necks of different geometries.

  15. Rapid automatized naming (RAN) in children with ADHD: An ex-Gaussian analysis.

    PubMed

    Ryan, Matthew; Jacobson, Lisa A; Hague, Cole; Bellows, Alison; Denckla, Martha B; Mahone, E Mark

    2017-07-01

    Children with ADHD demonstrate an increased frequency of "lapses" in performance on tasks in which the stimulus presentation rate is externally controlled, leading to increased variability in response times. It is less clear whether these lapses are also evident during performance on self-paced tasks, e.g., rapid automatized naming (RAN), or whether RAN inter-item pause time variability uniquely predicts reading performance. A total of 80 children aged 9 to 14 years (45 children with attention-deficit/hyperactivity disorder (ADHD) and 35 typically developing (TD) children) completed RAN and reading fluency measures. RAN responses were digitally recorded for analyses. Inter-stimulus pause time distributions (excluding between-row pauses) were analyzed using traditional (mean, standard deviation [SD], coefficient of variation [CV]) and ex-Gaussian (mu, sigma, tau) methods. Children with ADHD were found to be significantly slower than TD children (p < .05) on RAN letter naming mean response time as well as on oral and silent reading fluency. RAN response time distributions were also significantly more variable (SD, tau) in children with ADHD. Hierarchical regression revealed that the exponential component (tau) of the letter-naming response time distribution uniquely predicted reading fluency in children with ADHD (p < .001, ΔR² = .16), even after controlling for IQ, basic reading, ADHD symptom severity and age. The findings suggest that children with ADHD (without word-level reading difficulties) manifest slowed performance on tasks of reading fluency; however, this "slowing" may be due in part to lapses from ongoing performance that can be assessed directly using ex-Gaussian methods that capture excessively long response times.
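
    For readers unfamiliar with the ex-Gaussian decomposition used here, the sketch below fits one to simulated response times with SciPy; exponnorm parameterizes the distribution by K = tau/sigma, loc = mu, and scale = sigma. The simulated parameter values are illustrative, not the study's.

```python
# ex-Gaussian model: response time = Normal(mu, sigma) + Exponential(tau).
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)
mu, sigma, tau = 450.0, 60.0, 120.0          # ms; assumed values for the demo
rt = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

K, loc, scale = exponnorm.fit(rt)            # maximum-likelihood fit
print("mu ~ %.0f ms, sigma ~ %.0f ms, tau ~ %.0f ms" % (loc, scale, K * scale))
# tau captures the heavy right tail (long "lapses"); sigma the variability
# of the fast, approximately normal responses.
```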

  16. A Fourier method for the analysis of exponential decay curves.

    PubMed

    Provencher, S W

    1976-01-01

    A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
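
    Provencher's Fourier method itself is too involved for a short sketch; as a hedged illustration of the analysis problem it solves (recovering a discrete spectrum of decay constants plus a baseline from noisy data), here is a simple grid-based nonnegative least-squares fit, a deliberately different and cruder technique.

```python
# Recover y(t) = baseline + sum_k a_k exp(-t/tau_k) over a fixed tau grid.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
y = 2.0 * np.exp(-t / 0.5) + 1.0 * np.exp(-t / 3.0) + 0.1   # two components + baseline
y += rng.normal(0.0, 0.01, t.size)                          # additive noise

taus = np.geomspace(0.05, 20.0, 60)                         # candidate time constants
A = np.exp(-t[:, None] / taus[None, :])
A = np.hstack([A, np.ones((t.size, 1))])                    # last column = baseline
coef, _ = nnls(A, y)

print("baseline ~ %.3f" % coef[-1])
for tau, a in zip(taus, coef[:-1]):
    if a > 0.05:                                            # report sizable components
        print("component: amplitude %.2f at tau %.2f" % (a, tau))
```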

  17. Thermally Stimulated Currents in Nanocrystalline Titania

    PubMed Central

    Bruzzi, Mara; Mori, Riccardo; Baldi, Andrea; Cavallaro, Alessandro; Scaringella, Monica

    2018-01-01

    A thorough study on the distribution of defect-related active energy levels has been performed on nanocrystalline TiO2. Films have been deposited on thick-alumina printed circuit boards equipped with electrical contacts, heater and temperature sensors, to carry out a detailed thermally stimulated currents analysis on a wide temperature range (5–630 K), in order to evidence contributions from shallow to deep energy levels within the gap. Data have been processed by numerically modelling electrical transport. The model considers both free and hopping contributions to conduction, a density of states characterized by an exponential tail of localized states below the conduction band and the convolution of standard Thermally Stimulated Currents (TSC) emissions with Gaussian distributions to take into account the variability in energy due to local perturbations in the highly disordered network. Results show that in the low temperature range, up to 200 K, hopping within the exponential band tail represents the main contribution to electrical conduction. Above room temperature, electrical conduction is dominated by the free-carrier contribution and by emissions from deep energy levels, with a defect density ranging within 10¹⁴–10¹⁸ cm⁻³, associated with physi- and chemisorbed water vapour, OH groups, and oxygen vacancies. PMID:29303976
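
    A minimal sketch of the broadening step described in the abstract: a first-order-kinetics TSC glow curve I(T; E) is averaged over a Gaussian distribution of trap energies E. All parameter values (trap depth, width, heating rate, attempt frequency) are assumptions for illustration.

```python
# TSC emission convolved with a Gaussian trap-energy distribution.
import numpy as np
from scipy.integrate import cumulative_trapezoid

kB = 8.617e-5                        # Boltzmann constant, eV/K
T = np.linspace(100.0, 400.0, 600)   # temperature scan, K
beta, nu = 0.1, 1e9                  # heating rate (K/s), attempt frequency (1/s)

def tsc_emission(E):
    """First-order kinetics TSC glow curve for a single trap depth E (eV)."""
    rate = nu * np.exp(-E / (kB * T))
    escaped = cumulative_trapezoid(rate, T, initial=0.0) / beta
    return rate * np.exp(-escaped)

# Gaussian distribution of trap energies: mean 0.6 eV, width 0.03 eV (assumed).
E_grid = np.linspace(0.5, 0.7, 81)
weights = np.exp(-0.5 * ((E_grid - 0.6) / 0.03) ** 2)
weights /= weights.sum()
I_broadened = sum(w * tsc_emission(E) for w, E in zip(weights, E_grid))
print("glow-curve peak at %.0f K" % T[np.argmax(I_broadened)])
```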

  18. Optical properties of a nanostructured glass-based film using spectroscopic ellipsometry

    DOE PAGES

    Jellison, G. E.; Aytug, T.; Lupini, A. R.; ...

    2015-12-22

    Nanostructured glass films, which are fabricated using spinodally phase-separated low-alkali glasses, have several interesting and useful characteristics, including being robust, non-wetting and antireflective. Spectroscopic ellipsometry measurements have been performed on one such film and its optical properties were analyzed using a 5-layer structural model of the near-surface region. Since the glass and the film are transparent over the spectral region of the measurement, the Sellmeier model is used to parameterize the dispersion in the refractive index. To simulate the variation of the optical properties of the film over the spot size of the ellipsometer (~3 × 5 mm), the Sellmeier amplitude is convolved with a Gaussian distribution. The transition layers between the ambient and the film and between the film and the substrate are modeled as graded layers, where the refractive index varies as a function of depth. These layers are modeled using a two-component Bruggeman effective medium approximation where the two components are the layer above and the layer below. Lastly, the component fraction is continuous through the transition layer and is modeled using the incomplete beta function.

  19. Thermally Stimulated Currents in Nanocrystalline Titania.

    PubMed

    Bruzzi, Mara; Mori, Riccardo; Baldi, Andrea; Carnevale, Ennio Antonio; Cavallaro, Alessandro; Scaringella, Monica

    2018-01-05

    A thorough study on the distribution of defect-related active energy levels has been performed on nanocrystalline TiO₂. Films have been deposited on thick-alumina printed circuit boards equipped with electrical contacts, heater and temperature sensors, to carry out a detailed thermally stimulated currents analysis on a wide temperature range (5-630 K), in order to evidence contributions from shallow to deep energy levels within the gap. Data have been processed by numerically modelling electrical transport. The model considers both free and hopping contributions to conduction, a density of states characterized by an exponential tail of localized states below the conduction band and the convolution of standard Thermally Stimulated Currents (TSC) emissions with Gaussian distributions to take into account the variability in energy due to local perturbations in the highly disordered network. Results show that in the low temperature range, up to 200 K, hopping within the exponential band tail represents the main contribution to electrical conduction. Above room temperature, electrical conduction is dominated by the free-carrier contribution and by emissions from deep energy levels, with a defect density ranging within 10¹⁴–10¹⁸ cm⁻³, associated with physi- and chemisorbed water vapour, OH groups, and oxygen vacancies.

  20. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography.

    PubMed

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-06-16

    We implemented graphics processing unit (GPU) accelerated compressive sensing (CS) spectral domain optical coherence tomography (SD OCT) for data sampled non-uniformly in k-space. The Kaiser-Bessel (KB) function and the Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has comparable performance to the GPU-accelerated MNUDFT-based CS SD OCT in terms of image quality while providing more than 5 times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows lower background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that by using a conventional desktop computer architecture having three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT-based CS SD OCT with a frame size of 2048 (axial) × 1000 (lateral).
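
    A hedged CPU/NumPy sketch of the gridding-based NUFFT step described above (the paper's implementation is GPU code): nonuniform k-space samples are spread onto an oversampled uniform grid with a Gaussian kernel, transformed with an FFT, and deapodized by dividing out the kernel's Fourier transform. The oversampling ratio and kernel width are assumed values.

```python
import numpy as np

def nufft_gridding(k, data, n, osr=2.0, width=4, sigma=0.6):
    """Gridding NUFFT: k in [0, 1) nonuniform positions, data complex samples."""
    m = int(n * osr)                          # oversampled uniform grid size
    grid = np.zeros(m, dtype=complex)
    for kj, dj in zip(k, data):               # convolve samples onto the grid
        center = kj * m
        lo = int(np.floor(center)) - width // 2
        for i in range(lo, lo + width + 1):
            grid[i % m] += dj * np.exp(-0.5 * ((i - center) / sigma) ** 2)
    spec = np.fft.fftshift(np.fft.fft(grid))
    x = np.arange(-m // 2, m // 2)
    apod = np.exp(-2.0 * (np.pi * sigma * x / m) ** 2)   # FT of the Gaussian kernel
    return (spec / apod)[m // 2 - n // 2 : m // 2 + n // 2]

# Illustrative use: recover a single tone sampled at jittered k positions.
rng = np.random.default_rng(0)
n = 256
k = np.sort((np.arange(n) + 0.4 * rng.random(n)) / n)
data = np.exp(2j * np.pi * 40 * k)
aline = np.abs(nufft_gridding(k, data, n))
print(int(np.argmax(aline)))                  # peak near n//2 + 40 = 168
```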

  1. One-electron densities of freely rotating Wigner molecules

    NASA Astrophysics Data System (ADS)

    Cioslowski, Jerzy

    2017-12-01

    A formalism enabling computation of the one-particle density of a freely rotating assembly of identical particles that vibrate about their equilibrium positions with amplitudes much smaller than their average distances is presented. It produces densities as finite sums of products of angular and radial functions, the length of the expansion being determined by the interplay between the point-group and permutational symmetries of the system in question. Obtained from a convolution of the rotational and bosonic components of the parent wavefunction, the angular functions are state-dependent. On the other hand, the radial functions are Gaussians with maxima located at the equilibrium lengths of the position vectors of individual particles and exponents depending on the scalar products of these vectors and the eigenvectors of the corresponding Hessian as well as the respective eigenvalues. Although the new formalism is particularly useful for studies of the Wigner molecules formed by electrons subject to weak confining potentials, it is readily adaptable to species (such as 'balliums' and Coulomb crystals) composed of identical particles with arbitrary spin statistics and permutational symmetry. Several examples of applications of the present approach to the harmonium atoms within the strong-correlation regime are given.

  2. Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.

    PubMed

    Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian

    2009-04-01

    Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
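
    A minimal sketch of two of the sub-pixel estimators compared in the paper, barycentric (centroid) weighting and quadratic (parabolic) fitting around the brightest pixel, applied here to a synthetic Gaussian-blob marker image (an assumption for the demo).

```python
import numpy as np

def centroid(img):
    """Barycentric (intensity-weighted) estimate of the peak position."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = img.sum()
    return (x * img).sum() / s, (y * img).sum() / s

def quadratic_peak(img):
    """Separable parabolic fit through the brightest pixel and its neighbors."""
    r, c = np.unravel_index(np.argmax(img), img.shape)
    def offset(a, b, c3):                    # vertex of parabola through 3 samples
        return 0.5 * (a - c3) / (a - 2.0 * b + c3)
    dx = offset(img[r, c - 1], img[r, c], img[r, c + 1])
    dy = offset(img[r - 1, c], img[r, c], img[r + 1, c])
    return c + dx, r + dy

yy, xx = np.mgrid[0:15, 0:15]
true = (7.3, 6.8)                            # true sub-pixel marker center (x, y)
img = np.exp(-((xx - true[0]) ** 2 + (yy - true[1]) ** 2) / (2 * 1.4 ** 2))
print("centroid:", centroid(img), "quadratic:", quadratic_peak(img), "true:", true)
```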

  3. Hyperspectral Image Enhancement and Mixture Deep-Learning Classification of Corneal Epithelium Injuries

    PubMed Central

    Md Noor, Siti Salwa; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-01-01

    In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues show similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of data into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images without the application of eye staining were used. Three image feature extraction approaches were applied for image classification: (i) image feature classification from histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image feature classification using deep-learning Convolutional Neural Networks (CNNs) only; and (iii) the combined classification of CNNs and SVM-Linear. The performance results indicate that our chosen image features from the histogram and length-scale parameter were able to classify with up to 100% accuracy, particularly with the CNN and CNN-SVM classifiers, by employing 80% of the data sample for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability. PMID:29144388

  4. Nondestructive, fast, and cost-effective image processing method for roughness measurement of randomly rough metallic surfaces.

    PubMed

    Ghodrati, Sajjad; Kandi, Saeideh Gorji; Mohseni, Mohsen

    2018-06-01

    In recent years, various surface roughness measurement methods have been proposed as alternatives to the commonly used stylus profilometry, which is a low-speed, destructive, expensive but precise method. In this study, a novel method, called "image profilometry," has been introduced for nondestructive, fast, and low-cost surface roughness measurement of randomly rough metallic samples based on image processing and machine vision. The impacts of influential parameters such as image resolution and filtering approach for elimination of the long wavelength surface undulations on the accuracy of the image profilometry results have been comprehensively investigated. Ten surface roughness parameters were measured for the samples using both the stylus and image profilometry. Based on the results, the best image resolution was 800 dpi, and the most practical filtering method was Gaussian convolution+cutoff. In these conditions, the best and worst correlation coefficients (R 2 ) between the stylus and image profilometry results were 0.9892 and 0.9313, respectively. Our results indicated that the image profilometry predicted the stylus profilometry results with high accuracy. Consequently, it could be a viable alternative to the stylus profilometry, particularly in online applications.
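
    A minimal sketch of the "Gaussian convolution + cutoff" filtering step reported as most practical: the waviness is estimated with a Gaussian low-pass whose sigma follows the ISO 16610-21 relation to the cutoff wavelength, and roughness parameters Ra and Rq are computed on the residual. The profile, cutoff, and sampling step are assumed values, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

dx = 0.01                                    # sampling step, mm (assumed)
x = np.arange(0.0, 40.0, dx)
rng = np.random.default_rng(2)
profile = 2.0 * np.sin(2 * np.pi * x / 8.0) + rng.normal(0, 0.2, x.size)  # waviness + roughness

cutoff = 0.8                                 # mm, a standard roughness cutoff
# ISO 16610-21 Gaussian filter: sigma = cutoff * sqrt(ln 2 / 2) / pi
sigma = cutoff * np.sqrt(np.log(2.0) / 2.0) / np.pi / dx   # in samples
waviness = gaussian_filter1d(profile, sigma)
roughness = profile - waviness               # high-pass residual

Ra = np.mean(np.abs(roughness))              # arithmetic mean deviation
Rq = np.sqrt(np.mean(roughness ** 2))        # root-mean-square deviation
print("Ra = %.3f, Rq = %.3f" % (Ra, Rq))
```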

  5. Fusion of infrared and visible images based on saliency scale-space in frequency domain

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2015-12-01

    A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. The focus of human attention is directed towards salient targets, which convey the most important information in the image. For the given registered infrared and visible images, firstly, visual features are extracted to obtain the input hypercomplex matrix. Secondly, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: the amplitude spectrum of the input hypercomplex matrix is convolved with a low-pass Gaussian kernel of an appropriate scale, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum, filtered at a scale selected by minimizing saliency map entropy. Thirdly, the salient regions are fused with adaptive weighting fusion rules, and the nonsalient regions are fused with a rule based on region energy (RE) and region sharpness (RS); the fused image is then obtained. Experimental results show that the presented algorithm preserves the rich spectral information of the visible image and effectively captures thermal target information at different scales of the infrared image.
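
    A hedged, grayscale simplification of the frequency-domain saliency step (the paper applies it to a hypercomplex matrix of several feature channels): smooth the amplitude spectrum with Gaussian kernels at a few scales, reconstruct from the smoothed amplitude and the original phase, and keep the scale whose map has minimum histogram entropy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency(img, scales=(1, 2, 4, 8)):
    F = np.fft.fft2(img)
    amplitude, phase = np.abs(F), np.angle(F)
    best, best_entropy = None, np.inf
    for s in scales:
        A = gaussian_filter(amplitude, s)            # smooth the amplitude spectrum
        sal = np.abs(np.fft.ifft2(A * np.exp(1j * phase))) ** 2
        sal = gaussian_filter(sal, 3)                # post-smooth the saliency map
        p, _ = np.histogram(sal, bins=64)
        p = p[p > 0] / p.sum()
        entropy = -(p * np.log(p)).sum()
        if entropy < best_entropy:                   # minimum-entropy scale selection
            best, best_entropy = sal / sal.max(), entropy
    return best

rng = np.random.default_rng(1)
img = rng.random((128, 128))
img[40:60, 70:90] += 2.0                             # a salient block
s = saliency(img)
print(s[50, 80] > s.mean())                          # block should score above average
```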

  6. Compensation of long-range process effects on photomasks by design data correction

    NASA Astrophysics Data System (ADS)

    Schneider, Jens; Bloecker, Martin; Ballhorn, Gerd; Belic, Nikola; Eisenmann, Hans; Keogan, Danny

    2002-12-01

    CD requirements for advanced photomasks are getting very demanding for the 100 nm node and below; the ITRS roadmap requires CD uniformities below 10 nm for the most critical layers. To reach this goal, statistical as well as systematic CD contributions must be minimized. Here, we focus on the reduction of systematic CD variations across the masks that may be caused by process effects, e.g. dry etch loading. We address this topic by compensating such effects via design data correction analogous to proximity correction. Dry etch loading is modeled by Gaussian convolution of pattern densities. Data correction is done geometrically by edge shifting. As the effect amplitude is of the order of 10 nm, this can only be done on e-beam writers with small address grids to avoid large CD steps in the design data. We present modeling and correction results for special mask patterns with very strong pattern density variations, showing that the compensation method is able to reduce CD nonuniformity by 50-70% depending on pattern details. The data correction itself is done with a new module developed especially to compensate long-range effects and fits nicely into the common data flow environment.
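
    A minimal numerical sketch of the loading model described above, with assumed parameter values: the pattern density map is convolved with a long-range Gaussian, the CD deviation is taken as linear in the filtered density, and each edge would be pre-shifted by half the predicted deviation with opposite sign.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

density = np.zeros((512, 512))
density[:, :256] = 0.8                       # dense half of the mask
density[:, 256:] = 0.1                       # sparse half

sigma_mm = 5.0                               # loading interaction range (assumed)
pixel_mm = 0.05                              # map resolution (assumed)
loading = gaussian_filter(density, sigma_mm / pixel_mm)

k = 12.0                                     # nm CD shift per unit loading (assumed)
cd_error = k * (loading - loading.mean())    # predicted CD deviation map, nm
edge_shift = -cd_error / 2.0                 # per-edge correction applied to the data
print("predicted CD range before correction: %.1f nm"
      % (cd_error.max() - cd_error.min()))
```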

  7. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  8. Response of a rigid aircraft to nonstationary atmospheric turbulence.

    NASA Technical Reports Server (NTRS)

    Verdon, J. M.; Steiner, R.

    1973-01-01

    The plunging response of an aircraft to a type of nonstationary turbulent excitation is considered. The latter consists of stationary Gaussian noise modulated by a well-defined envelope function. The intent of the investigation is to model the excitation experienced by an airplane flying through turbulence of varying intensity and to examine the influence of intensity variations on exceedance frequencies of the gust velocity and the airplane's plunging velocity and acceleration. One analytical advantage of the proposed model is that the Gaussian assumption for the gust excitation is retained. The analysis described herein is developed in terms of an envelope function of arbitrary form; however, numerical calculations are limited to the case of harmonic modulation.
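
    A quick numerical illustration of the excitation model: stationary Gaussian noise multiplied by a deterministic envelope, with exceedance counts of a fixed level compared against stationary noise of equal mean-square intensity. The envelope shape and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
w = rng.normal(0.0, 1.0, n)                  # stationary Gaussian gust noise
t = np.linspace(0.0, 100.0, n)
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * t / 25.0)   # slowly varying intensity
gust = envelope * w                          # modulated (nonstationary) excitation

def upcrossings(x, level):
    """Count upward crossings of the given level."""
    return np.count_nonzero((x[:-1] < level) & (x[1:] >= level))

level = 2.5
rms = np.sqrt(np.mean(envelope ** 2))        # match mean-square intensity
print("exceedances, modulated:  ", upcrossings(gust, level))
print("exceedances, stationary: ", upcrossings(rms * w, level))
```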

  9. Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms

    DTIC Science & Technology

    2007-09-01

    punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit...likely to be isolated and be correctable by the convolutional decoder. [table residue: data rate (Mbps), modulation, coding rate, coded bits per subcarrier]...binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data

  10. Using convolutional decoding to improve time delay and phase estimation in digital communications

    DOEpatents

    Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  11. Filter frequency response of time dependent signal using Laplace transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shestakov, Aleksei I.

    We analyze the effect a filter has on a time dependent signal x(t). If X(s) is the Laplace transform of x and H(s) is the filter transfer function, the response in frequency space is X(s)H(s). Consequently, in real space, the response is the convolution (x*h)(t), where h is the Laplace inverse of H. Effects are analyzed analytically for functions such as (t/t_c)^2 e^(-t/t_c), where t_c = const. We consider lowpass, highpass and bandpass filters.
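
    The statement above can be checked numerically. The sketch below takes a first-order low-pass filter H(s) = 1/(1 + s t_c), whose impulse response is h(t) = e^(-t/t_c)/t_c, convolves it with the abstract's example input, and compares against the closed-form result (x*h)(t) = (t/t_c)^3 e^(-t/t_c)/3.

```python
import numpy as np

tc = 1.0                                     # filter time constant (assumed)
dt = 1e-3
t = np.arange(0.0, 20.0, dt)
x = (t / tc) ** 2 * np.exp(-t / tc)          # input signal from the abstract
h = np.exp(-t / tc) / tc                     # low-pass impulse response

y = np.convolve(x, h)[: t.size] * dt         # numerical response y = (x*h)(t)
y_exact = (t / tc) ** 3 * np.exp(-t / tc) / 3.0
print("max abs error:", np.max(np.abs(y - y_exact)))
```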

  12. Pathology and proposed pathophysiology of diclofenac poisoning in free-living and experimentally exposed oriental white-backed vultures (Gyps bengalensis)

    USGS Publications Warehouse

    Meteyer, C.U.; Rideout, B.A.; Gilbert, M.; Shivaprasad, H.L.; Oaks, J.L.

    2005-01-01

    Oriental white-backed vultures (Gyps bengalensis; OWBVs) died of renal failure when they ingested diclofenac, a nonsteroidal anti-inflammatory drug (NSAID), in tissues of domestic livestock. Acute necrosis of proximal convoluted tubules in these vultures was severe. Glomeruli, distal convoluted tubules, and collecting tubules were relatively spared in the vultures that had early lesions. In most vultures, however, lesions became extensive with large urate aggregates obscuring renal architecture. Inflammation was minimal. Extensive urate precipitation on the surface and within organ parenchyma (visceral gout) was consistently found in vultures with renal failure. Very little is known about the physiologic effect of NSAIDs in birds. Research in mammals has shown that diclofenac inhibits formation of prostaglandins. We propose that the mechanism by which diclofenac induces renal failure in the OWBV is through the inhibition of the modulating effect of prostaglandin on angiotensin II-mediated adrenergic stimulation. Renal portal valves open in response to adrenergic stimulation, redirecting portal blood to the caudal vena cava and bypassing the kidney. If diclofenac removes a modulating effect of prostaglandins on the renal portal valves, indiscriminate activation of these valves would redirect the primary nutrient blood supply away from the renal cortex. Resulting ischemic necrosis of the cortical proximal convoluted tubules would be consistent with our histologic findings in these OWBVs.

  13. Single image super-resolution based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network which inputs the LR image and outputs the HR image. Our network uses 5 convolution layers, whose kernel sizes include 5×5, 3×3 and 1×1. In our proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method performs better than existing methods in reconstruction quality indices and in human visual assessment on benchmark images.
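
    A hedged PyTorch sketch of a network in this spirit: five convolution layers with 5×5, 3×3 and 1×1 kernels and a global residual connection. Details beyond the abstract (channel counts, bicubic pre-upsampling of the LR input, and the omission of the paper's within-layer combination of kernel sizes) are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SmallSRNet(nn.Module):
    """Five-layer SR CNN with mixed kernel sizes and residual learning."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, 1), nn.ReLU(inplace=True),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)              # residual learning: predict HR - LR

net = SmallSRNet()
lr_up = torch.randn(1, 1, 64, 64)            # bicubically upsampled LR patch
print(net(lr_up).shape)                      # torch.Size([1, 1, 64, 64])
```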

  14. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  15. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  16. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  17. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph (atoms, bonds, distances, etc.), which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  18. Capturing the dynamics of response variability in the brain in ADHD.

    PubMed

    van Belle, Janna; van Raalten, Tamar; Bos, Dienke J; Zandbelt, Bram B; Oranje, Bob; Durston, Sarah

    2015-01-01

    ADHD is characterized by increased intra-individual variability in response times during the performance of cognitive tasks. However, little is known about developmental changes in intra-individual variability, and how these changes relate to cognitive performance. Twenty subjects with ADHD aged 7-24 years and 20 age-matched, typically developing controls participated in an fMRI-scan while they performed a go-no-go task. We fit an ex-Gaussian distribution to the response distribution to objectively separate extremely slow responses, related to lapses of attention, from variability on fast responses. We assessed developmental changes in these intra-individual variability measures, and investigated their relation to no-go performance. Results show that the ex-Gaussian measures were better predictors of no-go performance than traditional measures of reaction time. Furthermore, we found between-group differences in the change in ex-Gaussian parameters with age, and their relation to task performance: subjects with ADHD showed age-related decreases in their variability on fast responses (sigma), but not in lapses of attention (tau), whereas control subjects showed a decrease in both measures of variability. For control subjects, but not subjects with ADHD, this age-related reduction in variability was predictive of task performance. This group difference was reflected in neural activation: for typically developing subjects, the age-related decrease in intra-individual variability on fast responses (sigma) predicted activity in the dorsal anterior cingulate gyrus (dACG), whereas for subjects with ADHD, activity in this region was related to improved no-go performance with age, but not to intra-individual variability. These data show that using more sophisticated measures of intra-individual variability allows the capturing of the dynamics of task performance and associated neural changes not permitted by more traditional measures.

  19. Capturing the dynamics of response variability in the brain in ADHD

    PubMed Central

    van Belle, Janna; van Raalten, Tamar; Bos, Dienke J.; Zandbelt, Bram B.; Oranje, Bob; Durston, Sarah

    2014-01-01

    ADHD is characterized by increased intra-individual variability in response times during the performance of cognitive tasks. However, little is known about developmental changes in intra-individual variability, and how these changes relate to cognitive performance. Twenty subjects with ADHD aged 7–24 years and 20 age-matched, typically developing controls participated in an fMRI-scan while they performed a go-no-go task. We fit an ex-Gaussian distribution to the response distribution to objectively separate extremely slow responses, related to lapses of attention, from variability on fast responses. We assessed developmental changes in these intra-individual variability measures, and investigated their relation to no-go performance. Results show that the ex-Gaussian measures were better predictors of no-go performance than traditional measures of reaction time. Furthermore, we found between-group differences in the change in ex-Gaussian parameters with age, and their relation to task performance: subjects with ADHD showed age-related decreases in their variability on fast responses (sigma), but not in lapses of attention (tau), whereas control subjects showed a decrease in both measures of variability. For control subjects, but not subjects with ADHD, this age-related reduction in variability was predictive of task performance. This group difference was reflected in neural activation: for typically developing subjects, the age-related decrease in intra-individual variability on fast responses (sigma) predicted activity in the dorsal anterior cingulate gyrus (dACG), whereas for subjects with ADHD, activity in this region was related to improved no-go performance with age, but not to intra-individual variability. These data show that using more sophisticated measures of intra-individual variability allows the capturing of the dynamics of task performance and associated neural changes not permitted by more traditional measures. PMID:25610775

  20. Gaussian step-pressure loading of rigid viscoplastic plates. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hayduk, R. J.; Durling, B. J.

    1978-01-01

    The response of a thin, rigid viscoplastic plate subjected to a spatially axisymmetric Gaussian step pressure impulse loading was studied analytically. A Gaussian pressure distribution in excess of the collapse load was applied to the plate, held constant for a length of time, and then suddenly removed. The plate deforms with monotonically increasing deflections until the dynamic energy is completely dissipated in plastic work. The simply supported plate of uniform thickness obeys the von Mises yield criterion and a generalized constitutive equation for rigid viscoplastic materials. For the small deflection bending response of the plate, the governing system of equations is essentially nonlinear. Transverse shear stress is neglected in the yield condition and rotary inertia in the equations of dynamic equilibrium. A proportional loading technique, known to give excellent approximations of the exact solution for the uniform load case, was used to linearize the problem and to obtain the analytical solutions in the form of eigenvalue expansions. The effects of load concentration, of an order of magnitude change in the viscosity of the plate material, and of load duration were examined while holding the total impulse constant.

  1. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture

    PubMed Central

    Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network. PMID:29089883

  2. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  3. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture.

    PubMed

    Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network.

  4. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations, but a finer mesh compared to the convolution quadrature method to obtain the same level of accuracy. If further fast methods like the fast multipole method are used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
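
    For reference, a minimal sketch of convolution quadrature in its simplest (backward-Euler) variant: the quadrature weights are the power-series coefficients of F(gamma(zeta)/h), computed here by FFT on a circle of radius rho. The kernel F(s) = 1/s (time integration) and all numerical parameters are assumptions for the demo, not the paper's BEM setting.

```python
import numpy as np

def cq_weights(F, h, N, L=256, rho=None):
    """First N backward-Euler convolution quadrature weights for kernel F(s)."""
    if rho is None:
        rho = 1e-10 ** (1.0 / (2 * L))       # balance aliasing vs round-off
    zeta = rho * np.exp(2j * np.pi * np.arange(L) / L)
    gamma = 1.0 - zeta                       # backward-Euler generating function
    vals = F(gamma / h)
    w = np.fft.fft(vals) / L * rho ** (-np.arange(L))   # power-series coefficients
    return w.real[:N]

h, N = 0.1, 50
w = cq_weights(lambda s: 1.0 / s, h, N)      # F(s) = 1/s: time integration
t = h * np.arange(N)
approx = np.convolve(w, np.sin(t))[:N]       # CQ approximation of integral of sin
print(np.max(np.abs(approx - (1.0 - np.cos(t)))))   # O(h) error, ~0.05 here
```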

  5. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-large kernels. To improve the efficiency of on-chip storage resources and to reduce the required off-chip bandwidth, a data-cache reuse scheme is proposed: multi-block SPRAM cross-caches the image data, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation, leading to a new ASIC data scheduling scheme and overall architecture. Experimental results show that the architecture supports real-time convolution with templates up to 40 × 32 in size, improves the utilization of on-chip memory bandwidth and memory resources, and maximizes data throughput while reducing the need for off-chip memory bandwidth.

  6. Performance Analysis of IEEE 802.11g TCM Waveforms Transmitted over a Channel with Pulse-Noise Interference

    DTIC Science & Technology

    2007-06-01

    Table 2. Best (maximum free distance) rate r=2/3 punctured convolutional code ...Hamming distance between all pairs of non-zero paths. Table 2 lists the best rate r=2/3 punctured convolutional code information weight structure...Table 2. Best (maximum free distance) rate r=2/3 punctured convolutional code information weight structure. (From: [12]). [table header residue: K, d_free, B_free]

  7. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
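
    For reference, the operation the program computes, 2-D cyclic convolution, can be checked numerically with ordinary FFTs via the convolution theorem; this sketch is the verification route, not the polynomial-transform algorithm itself.

    ```python
    # 2-D cyclic convolution via ordinary FFTs (a numerical reference).
    import numpy as np

    def cyclic_conv2d(a, b):
        """2-D cyclic (circular) convolution via the 2-D convolution theorem."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

    a = np.random.rand(8, 8)
    b = np.random.rand(8, 8)
    c = cyclic_conv2d(a, b)

    # Direct check of one sample: c[m,n] = sum_{i,j} a[i,j] * b[(m-i)%8, (n-j)%8]
    m = n = 3
    direct = sum(a[i, j] * b[(m - i) % 8, (n - j) % 8]
                 for i in range(8) for j in range(8))
    assert np.isclose(c[m, n], direct)
    ```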

  8. Effects of Convoluted Divergent Flap Contouring on the Performance of a Fixed-Geometry Nonaxisymmetric Exhaust Nozzle

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.; Hunter, Craig A.

    1999-01-01

    An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced, boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate, separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio by as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below that of the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.

  9. Increased intra-individual reaction time variability in attention-deficit/hyperactivity disorder across response inhibition tasks with different cognitive demands.

    PubMed

    Vaurio, Rebecca G; Simmonds, Daniel J; Mostofsky, Stewart H

    2009-10-01

    One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses, which divide variability into normal and exponential components, and Fast Fourier transform (FFT) analyses, which allow for detailed examination of the frequency of responses in the exponential distribution. Prior studies of ADHD using these methods have produced variable results, potentially related to differences in task demand. The present study sought to examine the profile of RT variability in ADHD using two Go/No-go tasks with differing levels of cognitive demand. A total of 140 children (57 with ADHD and 83 typically developing controls), ages 8-13 years, completed both a "simple" Go/No-go task and a more "complex" Go/No-go task with increased working memory load. Repeated measures ANOVA of ex-Gaussian functions revealed that, for both tasks, children with ADHD demonstrated increased variability in both the normal/Gaussian (significantly elevated sigma) and the exponential (significantly elevated tau) components. In contrast, FFT analysis of the exponential component revealed a significant task x diagnosis interaction, such that infrequent slow responses in ADHD differed depending on task demand (i.e., for the simple task, increased power in the 0.027-0.074 Hz frequency band; for the complex task, decreased power in the 0.074-0.202 Hz band). The ex-Gaussian findings, revealing increased variability in both the normal (sigma) and exponential (tau) components for the ADHD group, suggest that both impaired response preparation and infrequent "lapses in attention" contribute to increased variability in ADHD. FFT analyses reveal that the periodicity of intermittent lapses of attention in ADHD varies with task demand. The findings provide further support for intra-individual variability as a candidate intermediate endophenotype of ADHD.
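
    The ex-Gaussian decomposition described above can be sketched with scipy's exponentially modified Gaussian (exponnorm, shape K = tau/sigma); the parameter values below are made up, not the study's data.

    ```python
    # Illustrative ex-Gaussian fit of simulated reaction times.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mu, sigma, tau = 450.0, 60.0, 150.0            # ms: Gaussian core + exponential tail
    rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

    K, loc, scale = stats.exponnorm.fit(rts)       # loc ~ mu, scale ~ sigma, K = tau/sigma
    print(f"mu={loc:.0f} ms, sigma={scale:.0f} ms, tau={K * scale:.0f} ms")
    # Elevated sigma -> broader Gaussian core; elevated tau -> more infrequent
    # slow responses ("lapses"), the two effects reported for the ADHD group.
    ```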

  10. Application of the kurtosis statistic to the evaluation of the risk of hearing loss in workers exposed to high-level complex noise.

    PubMed

    Zhao, Yi-Ming; Qiu, Wei; Zeng, Lin; Chen, Shan-Song; Cheng, Xiao-Ru; Davis, Robert I; Hamernik, Roger P

    2010-08-01

    To develop dose-response relations for two groups of industrial workers exposed to Gaussian or non-Gaussian (complex) types of continuous noise, and to investigate what role, if any, the kurtosis statistic can play in the evaluation of industrial noise-induced hearing loss (NIHL). Audiometric and noise exposure data were acquired on a population (N = 195) of screened workers from a textile manufacturing plant and a metal fabrication facility located in Henan province of China. Thirty-two of the subjects were exposed to non-Gaussian (non-G) noise and 163 were exposed to Gaussian (G) continuous noise. Each subject was given a general physical and an otologic examination. Hearing threshold levels (0.5-8.0 kHz) were age adjusted (ISO-1999) and the prevalence of NIHL at 3, 4, or 6 kHz was determined. The kurtosis metric, which is sensitive to the peak and temporal characteristics of a noise, was introduced into the calculation of the cumulative noise exposure metric. Using the prevalence of hearing loss and the cumulative noise exposure metric, dose-response relations for the G and non-G noise-exposed groups were constructed. An analysis of the noise environments in the two plants showed that the noise exposures in the textile plant were of a Gaussian type with an Leq(A)8hr that varied from 96 to 105 dB, whereas the exposures in the metal fabrication facility, with an Leq(A)8hr = 95 dB, were of a non-G type containing high levels (up to 125 dB peak SPL) of impact noise. The kurtosis statistic was used to quantify the deviation of the non-G noise environment from the Gaussian. The dose-response relation for the non-G noise-exposed subjects showed a higher prevalence of hearing loss for a comparable cumulative noise exposure than did that for the G noise-exposed subjects. By introducing the kurtosis variable into the temporal component of the cumulative noise exposure calculation, the two dose-response curves could be made to overlap, essentially yielding an equivalent noise-induced effect for the two study groups. For the same exposure level, the prevalence of NIHL is greater in workers exposed to non-G noise environments than in workers exposed to G noise. The kurtosis metric may be a reasonable candidate for use in modifying exposure level calculations that are used to estimate the risk of NIHL from any type of noise exposure environment. However, studies involving a large number of workers with well-documented exposures are needed before a relation between a metric such as the kurtosis and the risk of hearing loss can be refined.
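
    A small synthetic illustration of the statistic at work: Pearson's kurtosis is about 3 for Gaussian noise, and sparse high-level impacts of the kind described above drive it far past 3. The signals are made up.

    ```python
    # Kurtosis of Gaussian vs. impact-contaminated ("complex") noise.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 100_000
    gaussian_noise = rng.normal(0.0, 1.0, n)

    complex_noise = gaussian_noise.copy()
    complex_noise[rng.integers(0, n, 50)] += 25.0   # occasional impact-like peaks

    for name, x in [("Gaussian", gaussian_noise), ("complex", complex_noise)]:
        beta = stats.kurtosis(x, fisher=False)      # Pearson definition, ~3 for Gaussian
        print(f"{name}: kurtosis = {beta:.1f}")
    ```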

  11. Bladder Cancer Segmentation in CT for Treatment Response Assessment: Application of Deep-Learning Convolution Neural Network-A Pilot Study.

    PubMed

    Cha, Kenny H; Hadjiiski, Lubomir M; Samala, Ravi K; Chan, Heang-Ping; Cohan, Richard H; Caoili, Elaine M; Paramagul, Chintana; Alva, Ajjai; Weizer, Alon Z

    2016-12-01

    Assessing the response of bladder cancer to neoadjuvant chemotherapy is crucial for reducing morbidity and increasing quality of life of patients. Change in tumor volume during treatment is generally used to predict treatment outcome. We are developing a method for bladder cancer segmentation in CT using a pilot data set of 62 cases. 65,000 regions of interest were extracted from pre-treatment CT images to train a deep-learning convolution neural network (DL-CNN) for tumor boundary detection using leave-one-case-out cross-validation. The results were compared to our previous AI-CALS method. For all lesions in the data set, the longest diameter and its perpendicular were measured by two radiologists, and 3D manual segmentation was obtained from one radiologist. The World Health Organization (WHO) criteria and the Response Evaluation Criteria In Solid Tumors (RECIST) were calculated, and the prediction accuracy of complete response to chemotherapy was estimated by the area under the receiver operating characteristic curve (AUC). The AUCs were 0.73 ± 0.06, 0.70 ± 0.07, and 0.70 ± 0.06, respectively, for the volume change calculated using DL-CNN segmentation, the AI-CALS method, and the manual contours. The differences did not achieve statistical significance. The AUCs using the WHO criteria were 0.63 ± 0.07 and 0.61 ± 0.06, while the AUCs using RECIST were 0.65 ± 0.07 and 0.63 ± 0.06 for the two radiologists, respectively. Our results indicate that DL-CNN can produce accurate bladder cancer segmentation for calculation of tumor size change in response to treatment. The volume change performed better than the estimations from the WHO criteria and RECIST for the prediction of complete response.

  12. Bladder Cancer Segmentation in CT for Treatment Response Assessment: Application of Deep-Learning Convolution Neural Network—A Pilot Study

    PubMed Central

    Cha, Kenny H.; Hadjiiski, Lubomir M.; Samala, Ravi K.; Chan, Heang-Ping; Cohan, Richard H.; Caoili, Elaine M.; Paramagul, Chintana; Alva, Ajjai; Weizer, Alon Z.

    2017-01-01

    Assessing the response of bladder cancer to neoadjuvant chemotherapy is crucial for reducing morbidity and increasing quality of life of patients. Change in tumor volume during treatment is generally used to predict treatment outcome. We are developing a method for bladder cancer segmentation in CT using a pilot data set of 62 cases. 65,000 regions of interest were extracted from pre-treatment CT images to train a deep-learning convolution neural network (DL-CNN) for tumor boundary detection using leave-one-case-out cross-validation. The results were compared to our previous AI-CALS method. For all lesions in the data set, the longest diameter and its perpendicular were measured by two radiologists, and 3D manual segmentation was obtained from one radiologist. The World Health Organization (WHO) criteria and the Response Evaluation Criteria In Solid Tumors (RECIST) were calculated, and the prediction accuracy of complete response to chemotherapy was estimated by the area under the receiver operating characteristic curve (AUC). The AUCs were 0.73 ± 0.06, 0.70 ± 0.07, and 0.70 ± 0.06, respectively, for the volume change calculated using DL-CNN segmentation, the AI-CALS method, and the manual contours. The differences did not achieve statistical significance. The AUCs using the WHO criteria were 0.63 ± 0.07 and 0.61 ± 0.06, while the AUCs using RECIST were 0.65 ± 0.07 and 0.63 ± 0.06 for the two radiologists, respectively. Our results indicate that DL-CNN can produce accurate bladder cancer segmentation for calculation of tumor size change in response to treatment. The volume change performed better than the estimations from the WHO criteria and RECIST for the prediction of complete response. PMID:28105470

  13. A separable two-dimensional discrete Hartley transform

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Poirson, A.

    1985-01-01

    Bracewell has proposed the Discrete Hartley Transform (DHT) as a substitute for the Discrete Fourier Transform (DFT), particularly as a means of convolution. Here, it is shown that the most natural extension of the DHT to two dimensions fails to be separable in the two dimensions, and is therefore inefficient. An alternative separable form is considered, and the corresponding convolution theorem is derived. It is also argued that the DHT is unlikely to provide faster convolution than the DFT.
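
    A sketch of the 1-D DHT and its relation to the DFT, using the cas(x) = cos(x) + sin(x) kernel; the separability question above concerns how this kernel is extended to two dimensions.

    ```python
    # Discrete Hartley transform and its DFT relation: H[k] = Re F[k] - Im F[k].
    import numpy as np

    def dht(x):
        """Discrete Hartley transform via its cas(x) = cos(x) + sin(x) kernel."""
        n = len(x)
        arg = 2 * np.pi * np.outer(np.arange(n), np.arange(n)) / n
        return (np.cos(arg) + np.sin(arg)) @ x

    x = np.random.rand(16)
    F = np.fft.fft(x)
    assert np.allclose(dht(x), F.real - F.imag)
    ```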

  14. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which makes it possible to precisely localize regions of interest (ROIs), including complex shapes or detailed textures in medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework yields excellent segmentation performance for various medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.

  15. Reconfigurable Gabor Filter For Fingerprint Recognition Using FPGA Verilog

    NASA Astrophysics Data System (ADS)

    Rosshidi, H. T.; Hadi, A. R.

    2009-06-01

    This paper presents an implementation of a Gabor filter for fingerprint recognition using Verilog HDL. The work demonstrates the application of the Gabor filter technique to enhance fingerprint images. The incoming signal, in the form of image pixels, is convolved with the Gabor filter to delineate the ridge and valley regions of the fingerprint. This is done with a real-time convolver based on a Field Programmable Gate Array (FPGA) that performs the convolution operation. The main characteristics of the proposed approach are the use of memory to store the incoming image pixels and the Gabor filter coefficients before the convolution takes place. The result is the signal convolved with the Gabor coefficients.
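
    A software sketch (not the paper's Verilog) of the underlying operation: build a real Gabor kernel, a Gaussian envelope times an oriented cosine carrier, and convolve it with the image. All parameter values are illustrative.

    ```python
    # Gabor enhancement step in software form.
    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(size=11, theta=0.0, freq=0.1, sigma=3.0):
        """Real Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)    # rotate to ridge orientation
        return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

    image = np.random.rand(64, 64)                    # stand-in for a fingerprint image
    enhanced = convolve2d(image, gabor_kernel(theta=np.pi / 4), mode="same")
    ```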

  16. Convolutional neural network for road extraction

    NASA Astrophysics Data System (ADS)

    Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong

    2017-11-01

    In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To reflect the complex road characteristics in the study area, the deep convolutional neural network VGG19 was adopted for road extraction. Based on an analysis of input block size, output block size, and the resulting extraction quality, the votes of several deep convolutional neural networks were combined into the final road prediction. The study image was a GF-2 panchromatic and multi-spectral fusion image of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve the accuracy to some extent. At the same time, this paper gives some advice about the choice of input block size and output block size.

  17. Molecular graph convolutions: moving beyond fingerprints

    PubMed Central

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-01-01

    Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
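
    A minimal neighbor-aggregation sketch of the graph-convolution idea in generic form (not the specific modules of this paper): each atom's new features combine its own features with a sum over bonded neighbors.

    ```python
    # Generic single graph-convolution layer on a toy 5-atom chain molecule.
    import numpy as np

    rng = np.random.default_rng(5)
    n_atoms, f_in, f_out = 5, 8, 16
    A = np.eye(n_atoms, k=1) + np.eye(n_atoms, k=-1)   # adjacency of a 5-atom chain
    X = rng.normal(size=(n_atoms, f_in))               # per-atom feature vectors
    W_self = 0.1 * rng.normal(size=(f_in, f_out))
    W_nbr = 0.1 * rng.normal(size=(f_in, f_out))

    H = np.maximum(X @ W_self + A @ X @ W_nbr, 0.0)    # ReLU(self-term + neighbor sum)
    ```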

  18. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
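
    For background, a generic convolutional encoder sketch, here the classic rate-1/2, constraint-length-3 code with generators (7, 5) in octal; the byte-oriented unit-memory codes proposed above have a different (unit-memory) structure.

    ```python
    # Rate-1/2 convolutional encoder: two output bits per input bit.
    def conv_encode(bits, g1=0b111, g2=0b101, k=3):
        state, out = 0, []
        for b in bits:
            state = ((state << 1) | b) & ((1 << k) - 1)   # shift in the new bit
            out += [bin(state & g1).count("1") % 2,       # parity of g1-tapped bits
                    bin(state & g2).count("1") % 2]       # parity of g2-tapped bits
        return out

    print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
    ```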

  19. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  20. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. In shales, however, substantial hydrogen content is associated with both solids and fluids, and signals from both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and to measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method, and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in the material, medical, and food sciences.
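
    The core idea can be sketched as a two-component least-squares problem; this toy (with made-up amplitudes and time constants) is not the USGS inversion code, which solves for full T2 distributions.

    ```python
    # Toy simultaneous Gaussian + exponential fit of an NMR-like decay.
    import numpy as np
    from scipy.optimize import curve_fit

    def sge(t, a_g, t2g, a_e, t2e):
        """Gaussian decay (solid-like signal) plus exponential decay (fluid signal)."""
        return a_g * np.exp(-(t / t2g) ** 2) + a_e * np.exp(-t / t2e)

    t = np.linspace(0.01, 10.0, 400)                       # ms, illustrative
    rng = np.random.default_rng(2)
    data = sge(t, 0.6, 0.3, 0.4, 2.0) + rng.normal(0, 0.005, t.size)

    popt, _ = curve_fit(sge, t, data, p0=[0.5, 0.5, 0.5, 1.0])
    print(popt)                                            # ~ [0.6, 0.3, 0.4, 2.0]
    ```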

  1. Developing convolutional neural networks for measuring climate change opinions from social media data

    NASA Astrophysics Data System (ADS)

    Mao, H.; Bhaduri, B. L.

    2016-12-01

    Understanding public opinion on climate change is important for policy making. Public opinion, however, is typically measured with national surveys, which are expensive and therefore updated at a low frequency. Twitter has become a major platform for people to express their opinions on social and political issues. Our work attempts to understand whether Twitter data can provide complementary insights into climate change perceptions. Since social media is real-time in nature, this data source can especially help us understand how public opinion changes over time in response to climate events and hazards, which is very difficult to capture with manual surveys. We use the Twitter Streaming API to collect tweets that contain the keywords "climate change" or "#climatechange". Traditional machine-learning based opinion mining algorithms require a significant amount of labeled data, and data labeling is notoriously time consuming. To address this problem, we use hashtags (a prominent feature marking the topics of tweets) to annotate tweets automatically. For example, the hashtags #climatedenial and #climatescam are negative opinion labels, while #actonclimate and #climateaction are positive. Following this method, we can obtain a large amount of training data without human labor. This labeled dataset is used to train a deep convolutional neural network that classifies tweets into positive (i.e., believe in climate change) and negative (i.e., do not believe). Based on the positive/negative tweets obtained, we will further analyze risk perceptions and opinions towards policy support. In addition, we analyze Twitter user profiles to understand the demographics of proponents and opponents of climate change. Deep learning techniques, especially convolutional neural networks, have achieved much success in computer vision. In this work, we propose a convolutional neural network architecture for understanding opinions within text. This method is compared with lexicon-based opinion analysis approaches. Results and the advantages and limitations of this method are discussed.
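
    A sketch of the hashtag-based weak-labeling step described above; the hashtag lists come from the abstract, but the helper function itself is hypothetical.

    ```python
    # Weak labels from hashtags: 1 = believes in climate change, 0 = does not.
    POSITIVE = {"#actonclimate", "#climateaction"}
    NEGATIVE = {"#climatedenial", "#climatescam"}

    def weak_label(tweet: str):
        tags = {tok.lower() for tok in tweet.split() if tok.startswith("#")}
        if tags & POSITIVE and not tags & NEGATIVE:
            return 1
        if tags & NEGATIVE and not tags & POSITIVE:
            return 0
        return None                      # ambiguous or unlabeled, skip for training

    print(weak_label("Wildfires again this year. #ActOnClimate"))   # -> 1
    ```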

  2. Two projects in theoretical neuroscience: A convolution-based metric for neural membrane potentials and a combinatorial connectionist semantic network method

    NASA Astrophysics Data System (ADS)

    Evans, Garrett Nolan

    In this work, I present two projects that both contribute to the aim of discovering how intelligence manifests in the brain. The first project is a method for analyzing recorded neural signals, which takes the form of a convolution-based metric on neural membrane potential recordings. Relying only on integral and algebraic operations, the metric compares the timing and number of spikes within recordings as well as the recordings' subthreshold features: summarizing differences in these with a single "distance" between the recordings. Like van Rossum's (2001) metric for spike trains, the metric is based on a convolution operation that it performs on the input data. The kernel used for the convolution is carefully chosen such that it produces a desirable frequency space response and, unlike van Rossum's kernel, causes the metric to be first order both in differences between nearby spike times and in differences between same-time membrane potential values: an important trait. The second project is a combinatorial syntax method for connectionist semantic network encoding. Combinatorial syntax has been a point on which those who support a symbol-processing view of intelligent processing and those who favor a connectionist view have had difficulty seeing eye-to-eye. Symbol-processing theorists have persuasively argued that combinatorial syntax is necessary for certain intelligent mental operations, such as reasoning by analogy. Connectionists have focused on the versatility and adaptability offered by self-organizing networks of simple processing units. With this project, I show that there is a way to reconcile the two perspectives and to ascribe a combinatorial syntax to a connectionist network. The critical principle is to interpret nodes, or units, in the connectionist network as bound integrations of the interpretations for nodes that they share links with. Nodes need not correspond exactly to neurons and may correspond instead to distributed sets, or assemblies, of neurons.
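
    The flavor of such a convolution-based distance can be sketched as follows, using a plain causal exponential kernel in place of the carefully designed kernel the thesis describes.

    ```python
    # Convolution-based distance between two membrane-potential traces.
    import numpy as np

    def kernel_filter(signal, tau, dt):
        """Causal smoothing: convolution with exp(-t/tau), truncated at 5*tau."""
        t = np.arange(0.0, 5 * tau, dt)
        k = np.exp(-t / tau)
        return np.convolve(signal, k, mode="full")[: len(signal)] * dt

    def distance(v1, v2, tau=0.01, dt=0.001):
        """L2 norm of the filtered difference between the two recordings."""
        d = kernel_filter(np.asarray(v1) - np.asarray(v2), tau, dt)
        return np.sqrt(np.sum(d**2) * dt)

    rng = np.random.default_rng(6)
    v1 = rng.normal(size=1000)
    print(distance(v1, v1 + 0.1 * rng.normal(size=1000)))
    ```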

  3. Laser Stimulated Thermoluminescence

    NASA Astrophysics Data System (ADS)

    Abtahi, Abdollah

    Techniques for localized heating of semi-infinite single-layer and two-layer structures are investigated theoretically and experimentally, motivated by applications in thermoluminescence (TL) dosimetry of ionizing radiation. The heat-conduction equations are solved by the Green's function technique to obtain the transient temperature distribution caused by exposure to laser beams of Gaussian and uniform circular intensity profiles. It is shown that the spatio-temporal temperature response is readily monitored by the TL response that results when the layer configuration contains a thermoluminescent phosphor. The experiments for the verification of the developed theory are performed with two specially constructed TL detection systems, one featuring a laser beam of Gaussian profile and the other a uniform circular laser beam. Measurements of the thermoluminescent emission from a number of different TL systems are performed and compared with computed responses on the basis of simple electron kinetics. We experiment exclusively with the commercial TL phosphor LiF:Mg,Ti (TLD-100, Harshaw), the most widely used material in thermoluminescence dosimetry. We study in detail localized Gaussian-beam heating of this material in the form of 0.9 mm thick slabs, self-supporting films of fine-grain powder in a polyimide (Kapton) matrix, and layers on substrates of LiF single crystals or borosilicate glass. Thermoluminescent layers on glass substrates have been heated with Gaussian and uniform circular intensity profiles in two different modes: the laser beam impinges onto (a) the phosphor layer, and (b) the glass substrate. It is demonstrated that the optical and thermal behavior of the dosimeters can be determined by these methods and that, furthermore, the thermoluminescence response of a given configuration can be simulated as a function of a number of experimental parameters such as laser power, beam size, substrate and TL-layer thicknesses, and configuration of the dosimeters. In addition, we have investigated the dependence of the luminous efficiency (normalized thermoluminescence yield) and peak heights on heating rates in the range from 4 K/s to 5500 K/s. The efficiency values obtained are then included in the comparison of experimental and theoretical TL response curves for various laser powers.

  4. Measurement of Device Parameters Using Image Recovery Techniques in Large-Scale IC Devices

    NASA Technical Reports Server (NTRS)

    Scheick, Leif; Edmonds, Larry

    2004-01-01

    Devices that respond to radiation on a cell level will produce histograms showing the relative frequency of cell damage as a function of damage. The measured distribution is the convolution of distributions from radiation responses, measurement noise, and manufacturing parameters. A method of extracting device characteristics and parameters from measured distributions via mathematical and image subtraction techniques is described.

  5. Raman-Scattering Line Profiles of the Symbiotic Star AG Peg

    NASA Astrophysics Data System (ADS)

    Lee, Seong-Jae; Hyung, Siek

    2017-06-01

    The high-dispersion Hα and Hβ line profiles of the symbiotic star AG Peg consist of a double-Gaussian top component and a broad bottom component. We investigated the formation of the broad wings with a Raman scattering mechanism. Adopting the same physical parameters as the photo-ionization study of Kim and Hyung (2008) for the white dwarf and the ionized gas shell, Monte Carlo simulations were carried out for a rotating accretion disk geometry with non-symmetrical latitude angles from -7° < θ < +7° to -16° < θ < +16°. The smaller latitude angle of the disk corresponds to the approaching side of the disk, responsible for the weak blue Gaussian profile, while the wider latitude angle corresponds to the other side of the disk, responsible for the strong red Gaussian profile. We confirmed that the shell has the high gas density of ~10^9.85 cm^-3 in the ionized zone of AG Peg derived in the previous photo-ionization model study. Simulations with various HI shell column densities (characterized by a thickness ΔD × gas number density nH) show that an HI gas shell with a column density N_HI ≈ 3-5 × 10^19 cm^-2 fits the observed line profiles well. The estimated rotation speed of the accretion disk shell is in the range of 44-55 km s^-1. We conclude that a kinematically incoherent structure involving the outflowing gas from the giant star caused the asymmetry of the disk and the double Gaussian profiles found in AG Peg.

  6. A digital pixel cell for address event representation image convolution processing

    NASA Astrophysics Data System (ADS)

    Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivatives of activities, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, and others. There has also been a proposal for realizing programmable-kernel image convolution chips. Such convolution chips contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have previously been proposed using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital reference implementation against which the mixed-signal implementations can be compared. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable-kernel image convolution processing.
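
    A behavioral software sketch (not the chip's circuits) of the pixel just described: weighted event contributions accumulate until a threshold is crossed, at which point the pixel fires and resets. The reset-by-subtraction policy is an assumption.

    ```python
    # Behavioral model of one AER convolution pixel.
    class ConvolutionPixel:
        def __init__(self, threshold=100):
            self.acc = 0
            self.threshold = threshold

        def receive(self, weight):
            """Add one weighted event contribution; return True if the pixel fires."""
            self.acc += weight
            if self.acc >= self.threshold:
                self.acc -= self.threshold   # reset by subtraction (an assumption)
                return True
            return False

    pixel = ConvolutionPixel()
    fired = [pixel.receive(w) for w in [30, 40, 40, 20]]   # fires on the third event
    print(fired)                                           # [False, False, True, False]
    ```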

  7. Software Communications Architecture (SCA) Compliant Software Defined Radio Design for IEEE 802.16 WirelessMAN-OFDM™ Transceiver

    DTIC Science & Technology

    2006-12-01

    Snippet: Convolutional encoder of rate 1/2 (From [10]). Table 3 shows the puncturing patterns used to derive the different code rates, in which X precedes Y in the order; Table 4 gives the mandatory channel coding per modulation (From [10]). The channel coding is a concatenation of a Reed-Solomon outer code and a rate-adjustable convolutional inner code; at the transmitter, data shall first be encoded with the outer code.

  8. Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal

    DTIC Science & Technology

    2004-03-01

    Snippet: Convolutional encoder parameters (Figure 26) and puncturing parameters (Figure 27). As per Table 3, the required code rate is r = 3/4, which requires puncturing. The higher data rates required by Standard 802.11b were achieved by using packet binary convolutional coding (PBCC). Essentially, higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation; the data is first encoded with a rate one-half code.

  9. Design and System Implications of a Family of Wideband HF Data Waveforms

    DTIC Science & Technology

    2010-09-01

    Snippet: Higher code rates (i.e., 8/9, 9/10) will be used to attain the highest data rates for surface-wave links; very high puncturing of convolutional codes can ... [cites "Communication Links", Edition 1, North Atlantic Treaty Organization, 2009, and Yasuda, Y., Kashiki, K., Hirata, Y., "High-Rate Punctured Convolutional Codes ..."]. The waveforms build on the constraint-length 7 convolutional code that has been used for over two decades in 110A; in addition, repetition coding and puncturing are used.

  10. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN, named bidirectional recurrent convolutional network, for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency well at a finer level, i.e., patch-based rather than frame-based, and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve good performance.

  11. Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1995-01-01

    During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) a Unit-Memory Convolutional Encoder module (UMCEncd); (2) a hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMCs, such as the UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC transformation (UMCTrans). The study of UMCs was driven, in part, by the desire to investigate high-rate convolutional codes, which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMCs were found which are good candidates for inner codes. Besides the further development of the simulation, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.

  12. The effects of kinesio taping on the color intensity of superficial skin hematomas: A pilot study.

    PubMed

    Vercelli, Stefano; Colombo, Claudio; Tolosa, Francesca; Moriondo, Andrea; Bravini, Elisabetta; Ferriero, Giorgio; Francesco, Sartorio

    2017-01-01

    To analyze the effects of kinesio taping (KT), applied with three different strains that did or did not induce the formation of skin creases (called convolutions), on the color intensity of post-surgical superficial hematomas. Single-blind paired study. Rehabilitation clinic. A convenience sample of 13 inpatients with post-surgical superficial hematomas. The tape was applied for 24 consecutive hours. Three tails of KT were randomly applied with different degrees of strain: none (SN), light (SL), and full longitudinal stretch (SF). We expected to obtain correct formation of convolutions with SL, some convolutions with SN, and no convolutions with SF. The outcome was the change in color intensity of the hematomas, measured by means of polar coordinates CIE L*a*b* using a validated and standardized digital imaging system. Applying KT to hematomas did not significantly change the color intensity in the central area under the tape (p > 0.05). There was a significant treatment effect (p < 0.05) under the edges of the tape, independently of the formation of convolutions (p > 0.05). The changes observed along the edges of the tape could be related to the formation of a pressure gradient between the KT and the adjacent area, but were not dependent on the formation of skin convolutions.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganguly, Jayanta; Ghosh, Manas, E-mail: pcmg77@rediffmail.com

    We investigate the profiles of the diagonal components of the frequency-dependent first nonlinear (β_xxx and β_yyy) optical response of repulsive impurity-doped quantum dots. We have assumed a Gaussian function to represent the dopant impurity potential. This study primarily addresses the role of noise on the polarizability components. We have invoked Gaussian white noise of additive and multiplicative character (in the Stratonovich sense). The doped system has been subjected to an oscillating electric field of given intensity, and the frequency-dependent first nonlinear polarizabilities are computed. The noise characteristics are manifested in an interesting way in the nonlinear polarizability components. In the case of additive noise, the noise strength remains practically ineffective in influencing the optical responses. The situation changes completely when additive noise is replaced by its multiplicative analog. The replacement enhances the nonlinear optical response dramatically and also causes it to reach a maximum at some typical value of noise strength that depends on the oscillation frequency.

  14. Random matrix theory for transition strengths: Applications and open questions

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.

    2017-12-01

    Embedded random matrix ensembles are generic models for describing statistical properties of finite isolated interacting quantum many-particle systems. A finite quantum system, induced by a transition operator, makes transitions from its states to the states of the same system or to those of another system. Examples are electromagnetic transitions (then the initial and final systems are same), nuclear beta and double beta decay (then the initial and final systems are different) and so on. Using embedded ensembles (EE), there are efforts to derive a good statistical theory for transition strengths. With m fermions (or bosons) in N mean-field single particle levels and interacting via two-body forces, we have with GOE embedding, the so called EGOE(1+2). Now, the transition strength density (transition strength multiplied by the density of states at the initial and final energies) is a convolution of the density generated by the mean-field one-body part with a bivariate spreading function due to the two-body interaction. Using the embedding U(N) algebra, it is established, for a variety of transition operators, that the spreading function, for sufficiently strong interactions, is close to a bivariate Gaussian. Also, as the interaction strength increases, the spreading function exhibits a transition from bivariate Breit-Wigner to bivariate Gaussian form. In appropriate limits, this EE theory reduces to the polynomial theory of Draayer, French and Wong on one hand and to the theory due to Flambaum and Izrailev for one-body transition operators on the other. Using spin-cutoff factors for projecting angular momentum, the theory is applied to nuclear matrix elements for neutrinoless double beta decay (NDBD). In this paper we will describe: (i) various developments in the EE theory for transition strengths; (ii) results for nuclear matrix elements for 130Te and 136Xe NDBD; (iii) important open questions in the current form of the EE theory.

  15. Comparison of dynamical approximation schemes for non-linear gravitational clustering

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.

    1994-01-01

    We have recently conducted a controlled comparison of a number of approximations for gravitational clustering against the same n-body simulations. These include ordinary linear perturbation theory (Eulerian), the adhesion approximation, the frozen-flow approximation, the Zel'dovich approximation (describable as first-order Lagrangian perturbation theory), and its second-order generalization. In the last two cases we also created new versions of the approximation by truncation, i.e., smoothing the initial conditions with various smoothing window shapes and varying their sizes. The primary tool for comparing simulations to approximation schemes was cross-correlation of the evolved mass density fields, testing the extent to which mass was moved to the right place. The Zel'dovich approximation, with initial convolution with a Gaussian exp(-k^2/k_G^2), where k_G is adjusted to be just into the nonlinear regime of the evolved model (details in text), worked extremely well. Its second-order generalization worked slightly better. All other schemes, including those proposed as generalizations of the Zel'dovich approximation created by adding forces, were in fact generally worse by this measure. By explicitly checking, we verified that the success of our best choice was a result of the best treatment of the phases of nonlinear Fourier components. Of all schemes tested, the adhesion approximation produced the most accurate nonlinear power spectrum and density distribution, but its phase errors suggest mass condensations were moved to slightly the wrong location. Due to its better reproduction of the mass density distribution function and power spectrum, it might be preferred for some uses. We recommend either n-body simulations or our modified versions of the Zel'dovich approximation, depending upon the purpose. The theoretical implication is that pancaking is implicit in all cosmological gravitational clustering, at least from Gaussian initial conditions, even when subcondensations are present.
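
    The truncation step, smoothing the initial conditions with the Gaussian exp(-k²/k_G²) before applying the Zel'dovich displacement, amounts to a Fourier-space multiplication; the grid size and k_G below are illustrative.

    ```python
    # Gaussian truncation of a toy initial density field in Fourier space.
    import numpy as np

    n, kG = 64, 0.25 * np.pi                 # grid points per side, cutoff wavenumber
    delta = np.random.default_rng(4).normal(size=(n, n, n))   # toy Gaussian field

    k = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2

    delta_k = np.fft.fftn(delta) * np.exp(-k2 / kG**2)        # suppress nonlinear modes
    delta_smooth = np.real(np.fft.ifftn(delta_k))
    ```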

  16. TH-C-BRD-02: Analytical Modeling and Dose Calculation Method for Asymmetric Proton Pencil Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelover, E; Wang, D; Hill, P

    2014-06-15

    Purpose: A dynamic collimation system (DCS), consisting of two pairs of orthogonal trimmer blades driven by linear motors, has been proposed to decrease the lateral penumbra in pencil beam scanning proton therapy. The DCS reduces lateral penumbra by intercepting the proton pencil beam near the lateral boundary of the target in the beam's eye view. The resultant trimmed pencil beams are asymmetric and laterally shifted, and therefore existing pencil beam dose calculation algorithms are not capable of trimmed-beam dose calculations. This work develops a method to model and compute dose from trimmed pencil beams when using the DCS. Methods: MCNPX simulations were used to determine the dose distributions expected from various trimmer configurations using the DCS. Using these data, the lateral distribution for individual beamlets was modeled with a 2D asymmetric Gaussian function. The integral depth dose (IDD) of each configuration was also modeled by combining the IDD of an untrimmed pencil beam with a linear correction factor. The convolution of these two terms, along with the Highland approximation to account for lateral growth of the beam along the depth direction, allows a trimmed pencil beam dose distribution to be generated analytically. The algorithm was validated by computing dose for a single-energy-layer 5×5 cm² treatment field, defined by the trimmers, using both the proposed method and MCNPX beamlets. Results: The Gaussian-modeled asymmetric lateral profiles along the principal axes match the MCNPX data very well (R² ≥ 0.95 at the depth of the Bragg peak). For the 5×5 cm² treatment plan created with both the modeled and MCNPX pencil beams, the passing rate of the 3D gamma test was 98% using a standard threshold of 3%/3 mm. Conclusion: An analytical method capable of accurately computing asymmetric pencil beam dose when using the DCS has been developed.
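
    One simple way to write a 2D asymmetric Gaussian lateral profile of this kind is with independent widths on each side of the peak along each axis; this parameterization is my assumption, and the paper's exact functional form may differ.

    ```python
    # 2D asymmetric Gaussian: different sigmas on either side of the peak.
    import numpy as np

    def asym_gauss2d(x, y, x0, y0, sx_lo, sx_hi, sy_lo, sy_hi):
        sx = np.where(x < x0, sx_lo, sx_hi)    # width left vs. right of the peak
        sy = np.where(y < y0, sy_lo, sy_hi)    # width below vs. above the peak
        return np.exp(-((x - x0) ** 2) / (2 * sx**2) - ((y - y0) ** 2) / (2 * sy**2))

    xx, yy = np.meshgrid(np.linspace(-10, 10, 201), np.linspace(-10, 10, 201))
    profile = asym_gauss2d(xx, yy, x0=1.0, y0=0.0,
                           sx_lo=2.0, sx_hi=3.5, sy_lo=2.5, sy_hi=2.5)
    ```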

  17. Towards dropout training for convolutional neural networks.

    PubMed

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking an activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. By elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at the pooling stage.
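
    A sketch of probabilistic weighted pooling at test time as I read the idea: with activations sorted in descending order, the i-th largest is the maximum of the retained units exactly when all larger ones are dropped, which has probability p·q^(i-1) (retain probability p, drop probability q = 1 - p).

    ```python
    # Model-averaged pooling output under max-pooling dropout (my reading).
    import numpy as np

    def prob_weighted_pool(region, p=0.5):
        q = 1.0 - p
        a = np.sort(np.asarray(region, dtype=float))[::-1]   # descending activations
        i = np.arange(len(a))
        return np.sum(p * q**i * a)                          # expected pooled value

    print(prob_weighted_pool([0.2, 1.3, 0.7, 0.9], p=0.5))   # vs. max-pooling's 1.3
    ```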

  18. Frame prediction using recurrent convolutional encoder with residual learning

    NASA Astrophysics Data System (ADS)

    Yue, Boxuan; Liang, Jun

    2018-05-01

    Predicting the next frames of a video is difficult but urgently needed in autonomous driving. Conventional methods can only predict abstract trends in the region of interest. The boom of deep learning makes frame prediction possible. In this paper, we propose a novel recurrent convolutional encoder and deconvolutional decoder structure to predict frames. We introduce residual learning in the convolutional encoder structure to solve gradient issues. Residual learning turns gradient backpropagation into an identity mapping, which preserves the full gradient information and overcomes the gradient issues in Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Besides, compared with the branches in CNNs and the gated structures in RNNs, residual learning can reduce training time significantly. In the experiments, we use the UCF101 dataset to train our networks and compare the predictions with some state-of-the-art methods. The results show that our networks can predict frames quickly and efficiently. Furthermore, our networks are applied to driving video to verify their practicability.

  19. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. Image segmentation is treated as semantic segmentation: the FCN classifies pixels, thereby achieving segmentation at the semantic level. Different from classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.

  20. Characterization of non-Gaussian atmospheric turbulence for prediction of aircraft response statistics

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1977-01-01

    Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low-frequency component plus a variance-modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time-fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities were also shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationarity assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scales of the fluctuations in the variance and of the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationarity assumption were developed.

  1. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
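
    A toy generative sketch of the model just described: each input is a local gaussian variable multiplied by one of a few shared, positive mixer variables, chosen probabilistically. All distributions and assignment probabilities below are illustrative assumptions.

    ```python
    # Toy gaussian scale mixture with probabilistic mixer assignment.
    import numpy as np

    rng = np.random.default_rng(3)
    n_inputs, n_mixers = 20, 3
    mixers = rng.lognormal(0.0, 0.5, n_mixers)              # positive scale variables
    assign = rng.choice(n_mixers, size=n_inputs, p=[0.5, 0.3, 0.2])
    inputs = mixers[assign] * rng.normal(0.0, 1.0, n_inputs)   # scaled gaussians
    ```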

  2. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. The construction of robustly good trellis codes for use with sequential decoding was developed; robustly good trellis codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate-1/n convolutional codes was discovered. Double-memory (DM) codes, codes with two memory units per unit bit position, were studied, and a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.

  3. Efficient airport detection using region-based fully convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao

    2018-04-01

    This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we share the conv layers between the region proposal procedure and the airport detection procedure, and use graphics processing units (GPUs) to speed up training and testing. Due to the lack of labeled data, we transferred the convolutional layers of a ZF net pretrained on ImageNet to initialize the shared convolutional layers, and then retrained the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes almost in real time with high accuracy, which is much better than traditional methods.

  4. Noise effects in nonlinear biochemical signaling

    NASA Astrophysics Data System (ADS)

    Bostani, Neda; Kessler, David A.; Shnerb, Nadav M.; Rappel, Wouter-Jan; Levine, Herbert

    2012-01-01

    It has been generally recognized that stochasticity can play an important role in the information processing accomplished by reaction networks in biological cells. Most treatments of that stochasticity employ Gaussian noise even though it is a priori obvious that this approximation can violate physical constraints, such as the positivity of chemical concentrations. Here, we show that even when such nonphysical fluctuations are rare, an exact solution of the Gaussian model shows that the model can yield unphysical results. This is done in the context of a simple incoherent-feedforward model which exhibits perfect adaptation in the deterministic limit. We show how one can use the natural separation of time scales in this model to yield an approximate model, that is analytically solvable, including its dynamical response to an environmental change. Alternatively, one can employ a cutoff procedure to regularize the Gaussian result.

  5. Collaborative identification method for sea battlefield target based on deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong

    2018-03-01

    Target identification on the sea battlefield is a prerequisite for judging enemy activity in modern naval warfare. In this paper, a collaborative identification method based on convolutional neural networks is proposed to identify the typical targets of sea battlefields. Unlike traditional single-input/single-output identification methods, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that…

  6. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), with which a steerable parametric loudspeaker can be implemented when phased-array techniques are applied. The convolution of the product directivity with Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. The directivity of a PLA computed using the proposed convolution model agrees significantly better with measured directivity, at a negligible computational cost.

  7. Across-Task Priming Revisited: Response and Task Conflicts Disentangled Using Ex-Gaussian Distribution Analysis

    ERIC Educational Resources Information Center

    Moutsopoulou, Karolina; Waszak, Florian

    2012-01-01

    The differential effects of task and response conflict in priming paradigms where associations are strengthened between a stimulus, a task, and a response have been demonstrated in recent years with neuroimaging methods. However, such effects are not easily disentangled with only measurements of behavior, such as reaction times (RTs). Here, we…

  8. Segmental heterogeneity in Bcl-2, Bcl-xL and Bax expression in rat tubular epithelium after ischemia-reperfusion.

    PubMed

    Valdés, Francisco; Pásaro, Eduardo; Díaz, Inmaculada; Centeno, Alberto; López, Eduardo; García-Doval, Sandra; González-Roces, Severino; Alba, Alfonso; Laffon, Blanca

    2008-06-01

    Studies in rats with bilateral clamping of renal arteries showed transient Bcl-2, Bcl-xL and Bax expression in renal tubular epithelium following ischemia-reperfusion. However, current data on the preferential localization of specific mRNAs or proteins are limited because gene expression was not analysed at the segmental level. This study analyses the mRNA expression of Bcl-2, Bcl-xL and Bax in four segments of proximal and distal tubules localized in the renal cortex and outer medulla in rat kidneys with bilateral renal clamping for 30 min and seven reperfusion times, versus control animals without clamp. Proximal convoluted tubule (PCT), distal convoluted tubule (DCT), proximal straight tubule (PST) and medullary thick ascending limb (MTAL) were obtained by manual microdissection, and RT-PCR was used to analyse mRNA expression at the segmental level. PCT and MTAL showed early, persistent and balanced up-regulation of Bcl-2, Bcl-xL and Bax, whereas only Bcl-2 and Bcl-xL were revealed in DCT and only Bax was detected in PST. DCT expressed Bcl-xL initially, and persistent Bcl-2 later. These patterns suggest a heterogeneous apoptosis regulatory response in rat renal tubules after ischemia-reperfusion, independently of cortical or medullary location. This heterogeneity in the expression patterns of Bcl-2 genes could explain the different susceptibilities to apoptosis, thresholds for ischemic damage and adaptive capacities to injury among these tubular segments.

  9. Weighted finite impulse response filter for chromatic dispersion equalization in coherent optical fiber communication systems

    NASA Astrophysics Data System (ADS)

    Zeng, Ziyi; Yang, Aiying; Guo, Peng; Feng, Lihui

    2018-01-01

    Time-domain chromatic dispersion (CD) equalization using a finite impulse response (FIR) filter is now a common approach in coherent optical fiber communication systems. The complex weights of the FIR taps are calculated from a truncated impulse response of the CD transfer function, and the modulus of the complex weights is constant. In our work, we take the limited bandwidth of a single-channel signal into account and propose weighted FIRs to improve the performance of CD equalization. The key to weighted FIR filters is the selection and optimization of the weighting function. To present the performance of different types of weighted FIR filters, a square-root raised cosine FIR (SRRC-FIR) and a Gaussian FIR (GS-FIR) are investigated. The square-root raised cosine FIR and the Gaussian FIR are optimized in terms of the bit error rate (BER) of QPSK and 16QAM coherently detected signals. The results demonstrate that the optimized parameters of the weighted filters are independent of the modulation format, the symbol rate and the length of the transmission fiber. With the optimized weighted FIRs, the BER of the CD-equalized signal is decreased significantly. Although this paper has investigated two types of weighted FIR filters, i.e. the SRRC-FIR filter and the GS-FIR filter, the principle of weighted FIR can also be extended to other symmetric functions, such as the super-Gaussian function and the hyperbolic secant function.
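
    A sketch of the construction described above, under stated assumptions (Savory-style truncated-impulse-response taps for the CD transfer function, with a Gaussian window standing in for the paper's GS-FIR weighting; all numeric values are illustrative):

        import numpy as np

        c, lam = 299792458.0, 1550e-9      # speed of light (m/s), wavelength (m)
        D, z = 17e-6, 100e3                # dispersion (s/m^2, i.e. 17 ps/nm/km), length (m)
        T = 1 / 32e9                       # sampling period at 32 GSa/s

        N = 2 * int(abs(D) * lam**2 * z / (2 * c * T**2)) + 1   # tap count
        k = np.arange(-(N // 2), N // 2 + 1)
        taps = (np.sqrt(1j * c * T**2 / (D * lam**2 * z))
                * np.exp(-1j * np.pi * c * T**2 * k**2 / (D * lam**2 * z)))

        sigma = 0.4 * (N // 2)             # weighting width: a free parameter to optimize
        gs_taps = taps * np.exp(-0.5 * (k / sigma)**2)   # Gaussian-weighted FIR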

  10. Potential fault region detection in TFDS images based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Sun, Junhua; Xiao, Zhongwen

    2016-10-01

    In recent years, more than 300 sets of the Trouble of Running Freight Train Detection System (TFDS) have been installed on railways in China to monitor the safety of running freight trains. However, TFDS is simply responsible for capturing, transmitting, and storing images, and fails to recognize faults automatically owing to difficulties such as the diversity and complexity of faults and low image quality. To improve the performance of automatic fault recognition, it is of great importance to locate the potential fault areas. In this paper, we first introduce a convolutional neural network (CNN) model to TFDS and propose a potential fault region detection system (PFRDS) for simultaneously detecting four typical types of potential fault regions (PFRs). The experimental results show that this system achieves higher image detection performance for PFRs in TFDS. An average detection recall of 98.95% and a precision of 100% are obtained, demonstrating high detection ability and robustness against various poor imaging situations.

  11. A Parallel Product-Convolution approach for representing the depth varying Point Spread Functions in 3D widefield microscopy based on principal component analysis.

    PubMed

    Arigovindan, Muthuvel; Shaevitz, Joshua; McGowan, John; Sedat, John W; Agard, David A

    2010-03-29

    We address the problem of computational representation of image formation in 3D widefield fluorescence microscopy with depth-varying spherical aberrations. We first represent 3D depth-dependent point spread functions (PSFs) as a weighted sum of basis functions that are obtained by principal component analysis (PCA) of experimental data. This representation is then used to derive an approximating structure that compactly expresses the depth-variant response as a sum of a few depth-invariant convolutions pre-multiplied by a set of 1D depth functions, where the convolving functions are the PCA-derived basis functions. The model offers an efficient and convenient trade-off between complexity and accuracy. For a given number of approximating PSFs, the proposed method yields much better accuracy than the strata-based approximation scheme currently used in the literature. In addition to yielding better accuracy, the proposed method automatically eliminates the noise in the measured PSFs.
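
    The core approximation can be sketched in a few lines (an illustrative reading of the abstract, not the authors' implementation; the shapes and toy kernels are assumptions):

        import numpy as np
        from scipy.ndimage import convolve

        def depth_varying_blur(volume, basis_psfs, depth_weights):
            """volume: (Z, Y, X) object; basis_psfs: list of 2D PCA basis kernels;
            depth_weights: (n_basis, Z) 1D depth functions."""
            out = np.zeros_like(volume)
            for b, psf in enumerate(basis_psfs):
                invariant = np.stack([convolve(sl, psf) for sl in volume])
                out += depth_weights[b][:, None, None] * invariant   # 1D premultiplication
            return out

        vol = np.random.rand(16, 64, 64)
        k1 = np.ones((3, 3)) / 9.0                    # toy "basis PSFs"
        k2 = np.zeros((3, 3)); k2[1, 1] = 1.0
        w = np.vstack([np.linspace(0, 1, 16), np.linspace(1, 0, 16)])
        blurred = depth_varying_blur(vol, [k1, k2], w)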

  12. Stochastic Stirling Engine Operating in Contact with Active Baths

    NASA Astrophysics Data System (ADS)

    Zakine, Ruben; Solon, Alexandre; Gingrich, Todd; van Wijland, Frédéric

    2017-04-01

    A Stirling engine made of a colloidal particle in contact with a nonequilibrium bath is considered and analyzed with the tools of stochastic energetics. We model the bath by non-Gaussian persistent noise acting on the colloidal particle. Depending on the chosen definition of an isothermal transformation in this nonequilibrium setting, we find that either the energetics of the engine parallels that of its equilibrium counterpart or, in the simplest case, that it ends up being less efficient. Persistence, more than non-Gaussian effects, is responsible for this result.

  13. Stochastic dynamic analysis of marine risers considering Gaussian system uncertainties

    NASA Astrophysics Data System (ADS)

    Ni, Pinghe; Li, Jun; Hao, Hong; Xia, Yong

    2018-03-01

    This paper performs stochastic dynamic response analysis of marine risers with material uncertainties, i.e. in the mass density and elastic modulus, by using the Stochastic Finite Element Method (SFEM) and a model reduction technique. These uncertainties are assumed to have Gaussian distributions. The random mass density and elastic modulus are represented by using the Karhunen-Loève (KL) expansion. The Polynomial Chaos (PC) expansion is adopted to represent the vibration response because the covariance of the output is unknown. Model reduction based on the Iterated Improved Reduced System (IIRS) technique is applied to eliminate the PC coefficients of the slave degrees of freedom and thus reduce the dimension of the stochastic system. Monte Carlo Simulation (MCS) is conducted to obtain the reference response statistics. Two numerical examples are studied in this paper. The response statistics from the proposed approach are compared with those from MCS. The computational time is significantly reduced while the accuracy is maintained. The results demonstrate the efficiency of the proposed approach for stochastic dynamic response analysis of marine risers.
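
    A truncated Karhunen-Loève expansion of the kind invoked above can be sketched as follows (1D field, exponential covariance; all numbers are illustrative assumptions):

        import numpy as np

        n, corr_len, n_terms = 200, 0.2, 10
        x = np.linspace(0.0, 1.0, n)
        C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # covariance model
        lam, phi = np.linalg.eigh(C)                              # ascending eigenvalues
        lam, phi = lam[::-1][:n_terms], phi[:, ::-1][:, :n_terms] # keep largest modes

        xi = np.random.randn(n_terms)          # independent standard normal coefficients
        mean, cov = 7850.0, 0.05               # e.g. a steel density with 5% c.o.v.
        field = mean * (1.0 + cov * (phi * np.sqrt(lam)) @ xi)    # one KL realization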

  14. Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.

    PubMed

    Dinpajooh, Mohammadhasan; Matyushov, Dmitry V

    2014-07-17

    Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted with the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes-shift and equilibrium correlation functions of the transition frequency, and a time-independent spectral width. Both predictions are often violated, and we ask here whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state and that the spectral width gains time dependence, all in violation of the predictions of the linear coupling model. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line, which is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in force-field water.

  15. A new test statistic for climate models that includes field and spatial dependencies using Gaussian Markov random fields

    DOE PAGES

    Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel

    2016-07-20

    A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
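
    For readers unfamiliar with the machinery, a first-order GMRF on a grid is fully specified by a sparse precision matrix; the sketch below builds one and evaluates a dependency-aware quadratic misfit r^T Q r (an illustration of the idea only, with assumed parameters, not the paper's statistic):

        import numpy as np
        import scipy.sparse as sp

        def gmrf_precision(ny, nx, kappa=1.0, tau=1.0):
            # Each grid point is conditioned on its N/S/E/W neighbours.
            Q = sp.lil_matrix((ny * nx, ny * nx))
            for i in range(ny):
                for j in range(nx):
                    k, deg = i * nx + j, 0
                    for p, q in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if 0 <= p < ny and 0 <= q < nx:
                            Q[k, p * nx + q] = -tau
                            deg += 1
                    Q[k, k] = kappa + tau * deg
            return Q.tocsr()

        Q = gmrf_precision(8, 8)
        r = np.random.randn(64)       # residual field (model minus observations)
        stat = float(r @ (Q @ r))     # misfit that accounts for spatial dependence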

  17. Experimental determination of the lateral dose response functions of detectors to be applied in the measurement of narrow photon-beam dose profiles.

    PubMed

    Poppinga, D; Meyners, J; Delfs, B; Muru, A; Harder, D; Poppe, B; Looe, H K

    2015-12-21

    This study aims at the experimental determination of the detector-specific 1D lateral dose response function K(x), and of its associated rotationally symmetric counterpart K(r), for a set of high-resolution detectors presently used in narrow-beam photon dosimetry. A combination of slit-beam, radiochromic film, and deconvolution techniques served to accomplish this task for four detectors with diameters of their sensitive volumes ranging from 1 to 2.2 mm. A particular aim of the experiment was to examine the existence of significant negative portions of some of these response functions, predicted by a recent Monte Carlo simulation (Looe et al 2015 Phys. Med. Biol. 60 6585-607). In a 6 MV photon slit beam formed by the Siemens Artiste collimation system and a 0.5 mm wide slit between 10 cm thick lead blocks serving as the tertiary collimator, the true cross-beam dose profile D(x) at 3 cm depth in a large water phantom was measured with radiochromic film EBT3, and the detector-affected cross-beam signal profiles M(x) were recorded with a silicon diode, a synthetic diamond detector, a miniaturized scintillation detector, and a small ionization chamber. For each detector, deconvolution of the convolution integral M(x) = K(x) ∗ D(x) served to obtain its specific 1D lateral dose response function K(x), and K(r) was calculated from it. Fourier transformations and back transformations were performed using function approximations by weighted sums of Gaussian functions and their analytical transforms. The 1D lateral dose response functions K(x) of the four types of detectors and their associated rotationally symmetric counterparts K(r) were obtained. Significant negative curve portions of K(x) and K(r) were observed in the case of the silicon diode and the diamond detector, confirming the Monte Carlo prediction (Looe et al 2015 Phys. Med. Biol. 60 6585-607). They are typical for the perturbation of the secondary electron field by a detector whose electron density is enhanced compared with the surrounding water. In the cases of the scintillation detector and the small ionization chamber, the negative curve portions of K(x) practically vanish. It is planned to use the measured functions K(x) and K(r) to deconvolve clinical narrow-beam signal profiles and to correct the output factor values obtained with various high-resolution detectors.
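
    The relation M(x) = K(x) ∗ D(x) can be inverted numerically; the fragment below does so by regularized Fourier division (a rough stand-in for the paper's more noise-robust approach of fitting weighted Gaussian sums and transforming them analytically; profiles and widths are toy assumptions):

        import numpy as np

        def estimate_kernel(M, D, eps=1e-3):
            # Recover K from M = K * D via Wiener-style regularized division.
            Mf, Df = np.fft.rfft(M), np.fft.rfft(D)
            Kf = Mf * np.conj(Df) / (np.abs(Df)**2 + eps * np.abs(Df).max()**2)
            return np.fft.irfft(Kf, n=len(M))

        x = np.arange(-20, 20, 0.1)                       # lateral position, mm
        D = np.exp(-0.5 * (x / 0.3)**2); D /= D.sum()     # "true" narrow dose profile
        K = np.exp(-0.5 * (x / 1.0)**2); K /= K.sum()     # detector response (toy)
        M = np.fft.irfft(np.fft.rfft(D) * np.fft.rfft(K), n=len(x))  # measured profile
        K_est = estimate_kernel(M, D)                     # ~ recovers K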

  18. Linear response to nonstationary random excitation.

    NASA Technical Reports Server (NTRS)

    Hasselman, T.

    1972-01-01

    Development of a method for computing the mean-square response of linear systems to nonstationary random excitation of the form given by y(t) = f(t) x(t), in which x(t) = a stationary process and f(t) is deterministic. The method is suitable for application to multidegree-of-freedom systems when the mean-square response at a point due to excitation applied at another point is desired. Both the stationary process, x(t), and the modulating function, f(t), may be arbitrary. The method utilizes a fundamental component of transient response dependent only on x(t) and the system, and independent of f(t) to synthesize the total response. The role played by this component is analogous to that played by the Green's function or impulse response function in the convolution integral.

  19. Lévy/Anomalous Diffusion as a Mean-Field Theory for 3D Cloud Effects in SW-RT: Empirical Support, New Analytical Formulation, and Impact on Atmospheric Absorption

    NASA Astrophysics Data System (ADS)

    Pfeilsticker, K.; Davis, A.; Marshak, A.; Suszcynsky, D. M.; Buldryrev, S.; Barker, H.

    2001-12-01

    2-stream RT models, as used in all current GCMs, are mathematically equivalent to standard diffusion theory, where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. In other words, after the conventional van de Hulst rescaling by 1/(1-g) in R³ and by (1-g) in t, solar photons follow convoluted fractal trajectories in the atmosphere. For instance, we know that transmitted light is typically scattered about (1-g)τ² times while reflected light is scattered on average about τ times, where τ is the optical depth of the column. The space/time spread of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the RT equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the '1-g' rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the one-parameter family of symmetric Lévy-stable distributions, because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converges to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings is now (1-g)τ^α for transmitted light. The appearance of a non-rational exponent is why this is referred to as anomalous diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α → 2. Lévy transport theory has been used successfully in statistical physics to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time series and, most recently, self-regulating ecological systems. We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from the ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α → 1 case (divergent mean free path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced-absorption problem, as well as its role as the basis of a next-generation GCM RT parameterization.

  20. Gaussian polarizable-ion tight binding.

    PubMed

    Boleininger, Max; Guilbert, Anne Ay; Horsfield, Andrew P

    2016-10-14

    To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).
  2. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
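
    The key step can be made concrete: in the Fourier domain, the per-frequency system is a rank-one update of a scaled identity, so it admits an O(M) Sherman-Morrison solve. The sketch below shows that solve in isolation (an illustrative reading of this approach; array shapes and names are assumptions):

        import numpy as np

        def x_update(Df, Bf, rho):
            """Solve (d d^H + rho I) x = b independently at each frequency,
            where d = Df[:, k] stacks the M dictionary-filter FFTs.
            Df, Bf: (M, N) complex arrays; returns Xf with the same shape."""
            dHb = np.sum(np.conj(Df) * Bf, axis=0)        # d^H b, per frequency
            dHd = np.sum(np.abs(Df)**2, axis=0)           # d^H d, per frequency
            return (Bf - Df * (dHb / (rho + dHd))) / rho  # Sherman-Morrison

        M, N = 16, 256
        Df = np.fft.fft(np.random.randn(M, N), axis=1)
        Bf = np.fft.fft(np.random.randn(M, N), axis=1)
        Xf = x_update(Df, Bf, rho=1.0)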

  3. Multithreaded implicitly dealiased convolutions

    NASA Astrophysics Data System (ADS)

    Roberts, Malcolm; Bowman, John C.

    2018-03-01

    Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.

  4. Detecting atrial fibrillation by deep convolutional neural networks.

    PubMed

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to the STFT output and the SWT output were developed. Our new method did not require detection of P or R peaks, nor feature design for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% were achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  6. Removal of Differential Capacitive Interferences in Fast-Scan Cyclic Voltammetry.

    PubMed

    Johnson, Justin A; Hobbs, Caddy N; Wightman, R Mark

    2017-06-06

    Due to its high spatiotemporal resolution, fast-scan cyclic voltammetry (FSCV) at carbon-fiber microelectrodes enables the localized in vivo monitoring of subsecond fluctuations in electroactive neurotransmitter concentrations. In practice, resolution of the analytical signal relies on digital background subtraction to remove the large current due to charging of the electrical double layer as well as surface faradaic reactions. However, fluctuations in this background current often occur with changes in the electrode state or ionic environment, leading to nonspecific contributions to the FSCV data that confound analysis. Here, we explore the origin of the shifts seen with local changes in cations and develop a model to account for their shape. Further, we describe a convolution-based method for removing the differential capacitive contributions to the FSCV current. The method relies on a small-amplitude pulse applied prior to the FSCV sweep that probes the impedance of the system. To predict the nonfaradaic current response to the voltammetric sweep, the step current response is differentiated to provide an estimate of the system's impulse response function, which is then used to convolve the applied waveform. The generated prediction is subtracted from the current observed during the voltammetric sweep, removing artifacts associated with changes in electrode impedance. The technique is demonstrated to remove select contributions from changes in the capacitive characteristics of the electrode both in vitro (i.e., in flow-injection analysis) and in vivo (i.e., during a spreading depression event in an anesthetized rat).
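
    The prediction step lends itself to a compact sketch (an interpretation of the abstract with illustrative variable names, not the authors' code):

        import numpy as np

        def predict_background(step_current, v_step, waveform, dt):
            # Differentiate the measured step response to estimate the
            # impulse response, then convolve it with the applied waveform
            # to predict the nonfaradaic (capacitive) current.
            h = np.gradient(step_current, dt) / v_step
            return np.convolve(h, waveform, mode="full")[:len(waveform)] * dt

        # usage: i_corrected = i_measured - predict_background(i_step, dV, v_sweep, dt)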

  7. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    PubMed

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.

  8. Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network.

    PubMed

    Urtnasan, Erdenebayar; Park, Jong-Uk; Joo, Eun-Yeon; Lee, Kyoung-Joung

    2018-04-23

    In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (including data from 63 patients with 34,281 events) and testing (including data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99%, a recall of 0.99%, and an F 1 -score of 0.99% were attained with the training dataset; these values were all 0.96% when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.

  9. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only the desirable properties of those models, such as an enlarged capture range, U-shape concavity convergence, subject contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression while simultaneously preserving weak edges. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results on both synthetic and natural images illustrate these advantages of the CONVEF model.
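
    A minimal sketch of a VEF/CONVEF-style external force computed by FFT convolution (the kernel exponent and the plain Euclidean distance below are assumptions; the CONVEF model's modified distance would replace r):

        import numpy as np

        def vector_field_convolution(edge_map, n=2.5, eps=1e-8):
            h, w = edge_map.shape
            y, x = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2].astype(float)
            r = np.sqrt(x**2 + y**2) + eps
            kx, ky = -x / r**n, -y / r**n      # vector kernel pointing toward edges
            F = np.fft.fft2(edge_map)
            u = np.real(np.fft.ifft2(F * np.fft.fft2(np.fft.ifftshift(kx))))
            v = np.real(np.fft.ifft2(F * np.fft.fft2(np.fft.ifftshift(ky))))
            return u, v                        # external force for the active contour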

  10. Improved convolutional coding

    NASA Technical Reports Server (NTRS)

    Doland, G. D.

    1970-01-01

    Convolutional coding, used to upgrade digital data transmission under adverse signal conditions, has been improved by a method that ensures data transitions, permitting bit-synchronizer operation at lower signal levels. The method also increases decoding ability by removing an ambiguous condition.

  11. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code that improves the burst-erasure protection capability by applying the convolution property to the tTN code, and reduces computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code provides better packet-loss protection performance with lower computational complexity than the tTN code.

  12. Performance Evaluation of UHF Fading Satellite Channel by Simulation for Different Modulation Schemes

    DTIC Science & Technology

    1992-12-01

    The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government. The remainder of this record is OCR-garbled MATLAB simulation fragments, of which only the outline is recoverable: a rate-1/2, constraint-length-6 convolutional encoding step, v = cncd(2,1,6,G64,u,zeros(1,12)); binary-to-M-ary conversion, mm = bm(2,v); 50-by-200 interleaving, mm = inter(50,200,mm); and a convolutional encoder function cncd(n,k,m,Gr,u,r) attributed in the source to Paul H. Moose, Naval Postgraduate School.

  13. Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m0 binary memory cells and k0 (> m0) inputs, a state diagram of 2^k0 states was needed for the transfer function bound. A reduced state diagram of (2^m0 + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.

  14. Spin-Hall effect in the scattering of structured light from plasmonic nanowire.

    PubMed

    Sharma, Deepak K; Kumar, Vijay; Vasista, Adarsh B; Chaubey, Shailendra K; Kumar, G V Pavan

    2018-06-01

    Spin-orbit interactions are subwavelength phenomena that can potentially lead to numerous device-related applications in nanophotonics. Here, we report the spin-Hall effect in the forward scattering of Hermite-Gaussian (HG) and Gaussian beams from a plasmonic nanowire. Asymmetric scattered radiation distribution was observed for circularly polarized beams. Asymmetry in the scattered radiation distribution changes the sign when the polarization handedness inverts. We found a significant enhancement in the spin-Hall effect for a HG beam compared to a Gaussian beam for constant input power. The difference between scattered powers perpendicular to the long axis of the plasmonic nanowire was used to quantify the enhancement. In addition, the nodal line of the HG beam acts as the marker for the spin-Hall shift. Numerical calculations corroborate experimental observations and suggest that the spin flow component of the Poynting vector associated with the circular polarization is responsible for the spin-Hall effect and its enhancement.

  15. Simulation of ICD-9 to ICD-10-CM Transition for Family Medicine: Simple or Convoluted?

    PubMed

    Grief, Samuel N; Patel, Jesal; Kochendorfer, Karl M; Green, Lee A; Lussier, Yves A; Li, Jianrong; Burton, Michael; Boyd, Andrew D

    2016-01-01

    The objective of this study was to examine the impact of the transition from the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as 1 ICD-9-CM code mapping to 1 ICD-10-CM code, or 1 ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one in which the transitions between coding systems are nonreciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Of the 1635 diagnosis codes used by family medicine physicians, 70% of the codes were categorized as simple, 27% of codes were convoluted, and 3% had no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect; additional resources need to be invested in these to ensure a successful transition to ICD-10-CM. © Copyright 2016 by the American Board of Family Medicine.

  16. Simulation of ICD-9 to ICD-10-CM transition for family medicine: simple or convoluted?

    PubMed Central

    Grief, Samuel N.; Patel, Jesal; Lussier, Yves A.; Li, Jianrong; Burton, Michael; Boyd, Andrew D.

    2017-01-01

    Objectives The objective of this study was to examine the impact of the transition from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) on family medicine and to identify areas where additional training might be required. Methods Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine if the transition was simple or convoluted. A simple transition is defined as one ICD-9-CM code mapping to one ICD-10-CM code, or one ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one in which the transitions between coding systems are non-reciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Results Of the 1635 diagnosis codes used by the family medicine physicians, 70% of the codes were categorized as simple, 27% of the diagnosis codes were convoluted and 3% were found to have no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payments for submitted claims were similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. Conclusions The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect; additional resources need to be invested in these to ensure a successful transition to ICD-10-CM. PMID:26769875

  17. Secondary isocurvature perturbations from acoustic reheating

    NASA Astrophysics Data System (ADS)

    Ota, Atsuhisa; Yamaguchi, Masahide

    2018-06-01

    The superhorizon (iso)curvature perturbations are conserved if the following conditions are satisfied: (i) (each) non-adiabatic pressure perturbation is zero; (ii) the gradient terms are ignored, that is, we work at the leading order of the gradient expansion; and (iii) (each) total energy-momentum tensor is conserved. We consider the case in which the last two requirements are violated and discuss the generation of secondary isocurvature perturbations during the late-time universe. Second-order gradient terms cannot necessarily be ignored even if we are interested in long-wavelength modes, because the convolutions may pick up products of short-wavelength perturbations. We then introduce second-order quantities that are conserved on superhorizon scales under conditions (i) and (iii), even in the presence of the gradient terms, by employing the full second-order cosmological perturbation theory. We also discuss the violation of condition (iii), that is, the case in which the energy-momentum tensor is conserved for the total system but not for each component fluid. As an example, we explicitly evaluate the second-order heat conduction between baryons and photons due to the weak Compton scattering, which dominates during the period just before recombination. We show that such secondary effects can be recast as isocurvature perturbations on superhorizon scales if local-type primordial non-Gaussianity exists a priori.

  18. Fast tomosynthesis for lung cancer detection using the SBDX geometry

    NASA Astrophysics Data System (ADS)

    Fahrig, Rebecca; Pineda, Angel R.; Solomon, Edward G.; Leung, Ann N.; Pelc, Norbert J.

    2003-06-01

    Radiology-based lung-cancer detection is a high-contrast imaging task, consisting of the detection of a small mass of tissue within much lower density lung parenchyma. This imaging task requires removal of confounding image details, fast image acquisition (< 0.1 s for pericardial region), low dose (comparable to a chest x-ray), high resolution (< 0.25 mm in-plane) and patient positioning flexibility. We present an investigation of tomosynthesis, implemented using the Scanning-Beam Digital X-ray System (SBDX), to achieve these goals. We designed an image-based computer model of tomosynthesis using a high-resolution (0.15-mm isotropic voxels), low-noise CT volume image of a lung phantom, numerically added spherical lesions and convolution-based tomographic blurring. Lesion visibility was examined as a function of half-tomographic angle for 2.5 and 4.0 mm diameter lesions. Gaussian distributed noise was added to the projected images. For lesions 2.5 mm and 4.0 mm in diameter, half-tomographic angles of at least 6° and 9° respectively were necessary before visualization of the lesions improved. The addition of noise for a dose equivalent to 1/10 that used for a standard chest radiograph did not significantly impair lesion detection. The results are promising, indicating that lung-cancer detection using a modified SBDX system is possible.

  19. Multiresolution analysis of characteristic length scales with high-resolution topographic data

    NASA Astrophysics Data System (ADS)

    Sangireddy, Harish; Stark, Colin P.; Passalacqua, Paola

    2017-07-01

    Characteristic length scales (CLS) define landscape structure and delimit geomorphic processes. Here we use multiresolution analysis (MRA) to estimate such scales from high-resolution topographic data. MRA employs progressive terrain defocusing, via convolution of the terrain data with Gaussian kernels of increasing standard deviation, and calculation at each smoothing resolution of (i) the probability distributions of curvature and topographic index (defined as the ratio of slope to area in log scale) and (ii) characteristic spatial patterns of divergent and convergent topography identified by analyzing the curvature of the terrain. The MRA is first explored using synthetic 1-D and 2-D signals whose CLS are known. It is then validated against a set of MARSSIM (a landscape evolution model) steady state landscapes whose CLS were tuned by varying hillslope diffusivity and simulated noise amplitude. The known CLS match the scales at which the distributions of topographic index and curvature show scaling breaks, indicating that the MRA can identify CLS in landscapes based on the scaling behavior of topographic attributes. Finally, the MRA is deployed to measure the CLS of five natural landscapes using meter resolution digital terrain model data. CLS are inferred from the scaling breaks of the topographic index and curvature distributions and equated with (i) small-scale roughness features and (ii) the hillslope length scale.
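
    The MRA loop is simple to sketch (illustrative parameters; curvature here is approximated by the Laplacian of the smoothed surface, one of the attributes named above):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def curvature_scaling(dem, sigmas, dx=1.0):
            # Smooth at progressively coarser scales and track how the
            # curvature distribution changes; breaks in this scaling mark
            # candidate characteristic length scales.
            stats = []
            for s in sigmas:
                z = gaussian_filter(dem, sigma=s / dx)
                zxx = np.gradient(np.gradient(z, dx, axis=1), dx, axis=1)
                zyy = np.gradient(np.gradient(z, dx, axis=0), dx, axis=0)
                stats.append((s, np.percentile(np.abs(zxx + zyy), [50, 90])))
            return stats

        dem = np.random.rand(256, 256)   # stand-in for a 1 m lidar DTM
        for s, (p50, p90) in curvature_scaling(dem, [1, 2, 4, 8, 16]):
            print(f"sigma={s:4.1f}  |curv| p50={p50:.2e}  p90={p90:.2e}")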

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganeshalingam, Mohan; Li Weidong; Filippenko, Alexei V.

    We present BVRI light curves of 165 Type Ia supernovae (SNe Ia) from the Lick Observatory Supernova Search follow-up photometry program from 1998 through 2008. Our light curves are typically well sampled (cadence of 3-4 days) with an average of 21 photometry epochs. We describe our monitoring campaign and the photometry reduction pipeline that we have developed. Comparing our data set to that of Hicken et al., with which we have 69 overlapping supernovae (SNe), we find that as an ensemble the photometry is consistent, with only small overall systematic differences, although individual SNe may differ by as much as 0.1 mag, and occasionally even more. Such disagreement in specific cases can have significant implications for combining future large data sets. We present an analysis of our light curves which includes template fits of light-curve shape parameters useful for calibrating SNe Ia as distance indicators. Assuming the B - V color of SNe Ia at 35 days past maximum light can be represented as the convolution of an intrinsic Gaussian component and a decaying exponential attributed to host-galaxy reddening, we derive an intrinsic scatter of σ = 0.076 ± 0.019 mag, consistent with the Lira-Phillips law. This is the first of two papers, the second of which will present a cosmological analysis of the data presented herein.

  1. Flare Characteristics from X-ray Light Curves

    NASA Astrophysics Data System (ADS)

    Gryciuk, M.; Siarkowski, M.; Sylwester, J.; Gburek, S.; Podgorski, P.; Kepa, A.; Sylwester, B.; Mrozek, T.

    2017-06-01

    A new methodology is given to determine basic parameters of flares from their X-ray light curves. Algorithms are developed from the analysis of small X-ray flares occurring during the deep solar minimum of 2009, between Solar Cycles 23 and 24, observed by the Polish Solar Photometer in X-rays (SphinX) on the Complex Orbital Observations Near-Earth of Activity of the Sun-Photon (CORONAS-Photon) spacecraft. One is a semi-automatic flare detection procedure that gives start, peak, and end times for single ("elementary") flare events under the assumption that the light curve is a simple convolution of a Gaussian and an exponential decay function. More complex flares with multiple peaks can generally be described by a sum of such elementary flares. Flare time profiles in the two energy ranges of SphinX (1.16-1.51 keV, 1.51-15 keV) are used to derive temperature and emission measure as a function of time during each flare. The result is a comprehensive catalogue - the SphinX Flare Catalogue - which contains 1600 flares or flare-like events and is made available for general use. The methods described here can be applied to observations made by the Geostationary Operational Environmental Satellites (GOES), the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) and other broad-band spectrometers.
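
    The elementary-flare template is easy to reproduce (a sketch under the stated assumption of a Gaussian convolved with an exponential decay; parameter values are illustrative):

        import numpy as np

        def elementary_flare(t, t0=100.0, sigma=10.0, tau=50.0, amp=1.0):
            dt = t[1] - t[0]
            g = np.exp(-0.5 * ((t - t0) / sigma)**2)    # impulsive Gaussian component
            e = np.exp(-t / tau)                        # exponential decay component
            f = np.convolve(g, e, mode="full")[:len(t)] * dt
            return amp * f / f.max()

        t = np.linspace(0.0, 600.0, 1200)               # time, s
        lc = elementary_flare(t)                        # one "elementary" event
        # multi-peak flares: a sum of several elementary_flare() components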

  2. Medical reliable network using concatenated channel codes through GSM network.

    PubMed

    Ahmed, Emtithal; Kohno, Ryuji

    2013-01-01

    Although the 4th generation (4G) of the global mobile communication network, i.e. Long Term Evolution (LTE), coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e. the Global System for Mobile communication (GSM), is still playing an important role in many developing countries. Without any other reliable network infrastructure, GSM can be applied for tele-monitoring applications, where high mobility and low cost are necessary. A core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code (MNCC) system working through the GSM network. The MNCC design is based on a simple concatenated channel code, a cascade of an inner code (GSM) and an extra outer code (convolutional code), in order to protect medical data more robustly against channel errors than other data using the existing GSM network. The MNCC system provides a bit error rate (BER) suitable for medical tele-monitoring of physiological signals, namely 10^-5 or less. The performance of the MNCC has been proven and investigated using computer simulations under different channel conditions, such as additive white Gaussian noise (AWGN), Rayleigh noise and burst noise. In general, the MNCC system provides better performance than GSM alone.

  3. Electronic Band Structure of Helical Polyisocyanides.

    PubMed

    Champagne, Benoît; Liégeois, Vincent; Fripiat, Joseph G; Harris, Frank E

    2017-10-19

    Restricted Hartree-Fock computations are reported for a methyl isocyanide polymer (repeating unit -C═N-CH₃), whose most stable conformation is expected to be a helical chain. The computations used a standard contracted Gaussian orbital set at the computational levels STO-3G, 3-21G, 6-31G, and 6-31G**, and studies were made for two line-group configurations motivated by earlier work and by studies of space-filling molecular models: (1) a structure of line-group symmetry L9₅, containing a 9-fold screw axis with atoms displaced in the axial direction by 5/9 times the lattice constant, and (2) a previously proposed structure of symmetry L4₁, containing a 4-fold screw axis with translation by 1/4 of the lattice constant. Full use of the line-group symmetry was employed to cause most of the computational complexity to depend only on the size of the asymmetric repeating unit. Data reported include computed bond properties, atomic charge distribution, longitudinal polarizability, band structure, and the convoluted density of states. Most features of the description were found to be insensitive to the level of computational approximation. The work also illustrates the importance of exploiting line-group symmetry to extend the range of polymer structural problems that can be treated computationally.

  4. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a computer vision camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks, where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-size device equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality-of-service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to those of the conventional advanced encryption standard (AES).

  6. Deep-cascade: Cascading 3D Deep Neural Networks for Fast Anomaly Detection and Localization in Crowded Scenes.

    PubMed

    Sabokrou, Mohammad; Fayyaz, Mohsen; Fathy, Mahmood; Klette, Reinhard

    2017-02-17

    This paper proposes a fast and reliable method for anomaly detection and localization in video data showing crowded scenes. Time-efficient anomaly localization is an ongoing challenge and the subject of this paper. We propose a cubic-patch-based method, characterised by a cascade of classifiers, which makes use of an advanced feature-learning approach. Our cascade of classifiers has two main stages. First, a light but deep 3D auto-encoder is used for early identification of "many" normal cubic patches. This deep network operates on small cubic patches in the first stage, before remaining candidates of interest are carefully resized and evaluated in the second stage by a more complex and deeper 3D convolutional neural network (CNN). We divide the deep auto-encoder and the CNN into multiple sub-stages which operate as cascaded classifiers. Shallow layers of the cascaded deep networks (designed as Gaussian classifiers, acting as weak single-class classifiers) detect "simple" normal patches, such as background patches, while more complex normal patches are detected at deeper layers. It is shown that the proposed novel technique (a cascade of two cascaded classifiers) performs comparably to current top-performing detection and localization methods on standard benchmarks, while generally outperforming them in required computation time.

  7. Resource theory of non-Gaussian operations

    NASA Astrophysics Data System (ADS)

    Zhuang, Quntao; Shor, Peter W.; Shapiro, Jeffrey H.

    2018-05-01

    Non-Gaussian states and operations are crucial for various continuous-variable quantum information processing tasks. To quantitatively understand non-Gaussianity beyond states, we establish a resource theory for non-Gaussian operations. In our framework, we consider Gaussian operations as free operations, and non-Gaussian operations as resources. We define entanglement-assisted non-Gaussianity generating power and show that it is a monotone that is nonincreasing under the set of free superoperations, i.e., concatenation and tensoring with Gaussian channels. For conditional unitary maps, this monotone can be analytically calculated. As examples, we show that the non-Gaussianity of ideal photon-number subtraction and of photon-number addition each equal the non-Gaussianity of the single-photon Fock state. Based on our non-Gaussianity monotone, we divide non-Gaussian operations into two classes: (i) the finite non-Gaussianity class, e.g., photon-number subtraction, photon-number addition, and all Gaussian-dilatable non-Gaussian channels; and (ii) the diverging non-Gaussianity class, e.g., the binary phase-shift channel and the Kerr nonlinearity. This classification also implies that not all non-Gaussian channels are exactly Gaussian dilatable. Our resource theory enables a quantitative characterization and a first classification of non-Gaussian operations, paving the way towards a full understanding of non-Gaussianity.

  8. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1994-01-01

    Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.

  9. A Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.

    2018-04-01

    Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and the joint extraction of this information is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed which effectively extracts the spectral-spatial information of hyperspectral images. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features at different scales through a traditional pooling layer, which has only a single pooling-window size. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.

  10. Detection of prostate cancer on multiparametric MRI

    NASA Astrophysics Data System (ADS)

    Seah, Jarrel C. Y.; Tang, Jennifer S. N.; Kitchen, Andy

    2017-03-01

    In this manuscript, we describe our approach and methods to the ProstateX challenge, which achieved an overall AUC of 0.84 and the runner-up position. We train a deep convolutional neural network to classify lesions marked on multiparametric MRI of the prostate as clinically significant or not. We implement a novel addition to the standard convolutional architecture described as auto-windowing which is clinically inspired and designed to overcome some of the difficulties faced in MRI interpretation, where high dynamic ranges and low contrast edges may cause difficulty for traditional convolutional neural networks trained on high contrast natural imagery. We demonstrate that this system can be trained end to end and outperforms a similar architecture without such additions. Although a relatively small training set was provided, we use extensive data augmentation to prevent overfitting and transfer learning to improve convergence speed, showing that deep convolutional neural networks can be feasibly trained on small datasets.

  11. No-reference image quality assessment based on statistics of convolution feature maps

    NASA Astrophysics Data System (ADS)

    Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo

    2018-04-01

    We propose a convolutional feature map (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that natural scene statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the degree of distortion in an image. In our method, a convolutional neural network (CNN) is trained to obtain kernels for generating CFMs. We design a forward NSS layer which operates on the CFMs to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective for describing the type and degree of distortion an image has suffered. Finally, support vector regression (SVR) is employed in our no-reference image quality assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.

  12. Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography

    NASA Astrophysics Data System (ADS)

    Menke, W. H.

    2017-12-01

    We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for sensitivity with respect to density, Lamé parameter, and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field, pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolving kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.

  13. A Response Function Approach for Rapid Far-Field Tsunami Forecasting

    NASA Astrophysics Data System (ADS)

    Tolkova, Elena; Nicolsky, Dmitry; Wang, Dailin

    2017-08-01

    Predicting tsunami impacts at remote coasts relies largely on tsunami en-route measurements in the open ocean. In this work, these measurements are used to generate instant tsunami predictions in deep water and near the coast. The predictions are generated as a response, or a combination of responses, to one or more tsunameters, with each response obtained as a convolution of real-time tsunameter measurements with a pre-computed pulse response function (PRF). Practical implementation of this method requires tables of PRFs in a 3D parameter space: earthquake location-tsunameter-forecasted site. Examples of hindcasting the 2010 Chilean and the 2011 Tohoku-Oki tsunamis along the US West Coast and beyond demonstrated the high accuracy of the suggested technology in application to trans-Pacific seismically generated tsunamis.
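
    A minimal sketch of the forecasting step described above, assuming uniformly sampled records and NumPy (the function name and discretization are illustrative, not the authors' implementation):

        import numpy as np

        def forecast_from_tsunameter(measured, prf, dt):
            """Forecasted waveform at a site: convolution of the real-time
            tsunameter record with the pre-computed pulse response function
            (PRF) for that earthquake-tsunameter-site combination."""
            return dt * np.convolve(measured, prf)[: len(measured)]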

  14. Encoding Gaussian curvature in glassy and elastomeric liquid crystal solids

    PubMed Central

    Mostajeran, Cyrus; Ware, Taylor H.; White, Timothy J.

    2016-01-01

    We describe shape transitions of thin, solid nematic sheets with smooth, preprogrammed, in-plane director fields patterned across the surface causing spatially inhomogeneous local deformations. A metric description of the local deformations is used to study the intrinsic geometry of the resulting surfaces upon exposure to stimuli such as light and heat. We highlight specific patterns that encode constant Gaussian curvature of prescribed sign and magnitude. We present the first experimental results for such programmed solids, and they qualitatively support theory for both positive and negative Gaussian curvature morphing from flat sheets on stimulation by light or heat. We review logarithmic spiral patterns that generate cone/anti-cone surfaces, and introduce spiral director fields that encode non-localized positive and negative Gaussian curvature on punctured discs, including spherical caps and spherical spindles. Conditions are derived where these cap-like, photomechanically responsive regions can be anchored in inert substrates by designing solutions that ensure compatibility with the geometric constraints imposed by the surrounding media. This integration of such materials is a precondition for their exploitation in new devices. Finally, we consider the radial extension of such director fields to larger sheets using nematic textures defined on annular domains. PMID:27279777

  15. Dimensionality-varied convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Liu, Wanjun; Liang, Xuejian; Qu, Haicheng

    2017-11-01

    Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) is proposed in this paper. DVCNN is a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN is a set of 3D patches selected from the HSI which contain joint spectral-spatial information. In the subsequent feature extraction process, each patch is transformed into several different 1D vectors by 3D convolution kernels, which are able to extract features from spectral-spatial data. The rest of DVCNN is much the same as a general CNN and processes a 2D matrix constituted by all the 1D data. Thus DVCNN can not only extract more accurate and richer features than a CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands is enhanced in the process of spectral-spatial fusion by 3D convolution, and the calculation is simplified by dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results show that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene relative to a spectral-only CNN. Compared with other state-of-the-art HSI classification methods, the maximum accuracy improvement achieved by DVCNN was 13.72%, and the robustness of DVCNN to noise on water-absorption bands was demonstrated.
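
    A schematic of the 3D-convolution front end described above, using PyTorch (the framework choice and all sizes are assumptions for illustration; the paper does not specify them):

        import torch
        import torch.nn as nn

        # Hypothetical 3D HSI patches: batch of 16, one input channel,
        # 20 spectral bands, 7x7 spatial window.
        patches = torch.randn(16, 1, 20, 7, 7)

        # A spectral-spatial kernel: 7 bands deep, 3x3 in space.
        conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=(7, 3, 3))
        features = conv3d(patches)             # -> shape (16, 8, 14, 5, 5)

        # Flatten each kernel's responses into a 1D vector, giving eight
        # 1D vectors per patch, stacked as the 2D matrix for the later stages.
        vectors = features.flatten(start_dim=2)  # -> shape (16, 8, 350)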

  16. Pulse shaping system research of CdZnTe radiation detector for high energy x-ray diagnostic

    NASA Astrophysics Data System (ADS)

    Li, Miao; Zhao, Mingkun; Ding, Keyu; Zhou, Shousen; Zhou, Benjie

    2018-02-01

    As one of the typical wide band-gap semiconductor materials, CdZnTe has high detection efficiency and excellent energy resolution for hard X-rays and gamma rays. The generated signal of a CdZnTe detector needs to be transformed into a pseudo-Gaussian pulse of small pulse width to remove noise and improve the energy resolution achievable by the downstream nuclear spectrometry data acquisition system. In this paper, a multi-stage pseudo-Gaussian shaping filter is investigated based on nuclear electronics principles. Optimized circuit parameters were obtained from an analysis of the characteristics of the pseudo-Gaussian shaping filter in simulations. The simulation results show that the falling time of the output pulse decreased, and a faster response time was obtained, as the shaping time τs-k was reduced; the undershoot was also removed when the ratio of the input resistors was set to 1:2.5. Moreover, a two-stage Sallen-Key Gaussian shaping filter was designed and fabricated using a low-noise voltage-feedback operational amplifier (LMH6628). A detection experiment platform was built using the precision pulse generator CAKE831 to imitate the radiation pulse, as an equivalent of the CdZnTe detector signal. Experimental results show that the output pulse of the two-stage pseudo-Gaussian shaping filter has a minimum 200 ns pulse width (FWHM), and the output pulse of each stage agrees well with the simulation results. Based on this performance, the multi-stage pseudo-Gaussian shaping filter can reduce event loss caused by pile-up in the CdZnTe semiconductor detector and effectively improve the energy resolution.
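
    An idealized model of pseudo-Gaussian shaping, assuming SciPy (this sketches a textbook CR-(RC)^n shaper, not the paper's two-stage Sallen-Key circuit; component values are illustrative):

        import numpy as np
        from scipy import signal

        def semi_gaussian_shaper(tau=100e-9, n=2):
            """CR-(RC)^n pseudo-Gaussian shaping filter:
            H(s) = tau*s / (1 + tau*s)^(n+1), i.e. one differentiator
            followed by n integrator stages with equal time constants."""
            num = [tau, 0.0]
            den = np.poly1d([tau, 1.0]) ** (n + 1)
            return signal.TransferFunction(num, den.coeffs)

        # The step response approximates the shaped output for a fast detector
        # pulse; a smaller tau shortens the falling time, as observed above.
        t = np.linspace(0.0, 2e-6, 2000)
        t_out, pulse = signal.step(semi_gaussian_shaper(), T=t)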

  17. Light-induced thermodiffusion in two-component media

    NASA Astrophysics Data System (ADS)

    Ivanov, V.; Ivanova, G.; Okishev, K.; Khe, V.

    2017-01-01

    We have theoretically studied the optical transmittance response of a thin cell of liquid containing absorbing nanoparticles in a Gaussian beam field. The spatial change in transmittance is caused by the thermal diffusion phenomenon (Soret effect), which produces variations in the concentration of the absorbing nanoparticles. The thickness of the optical cell (including windows) is significantly less than the size of the beam. As a result, an exact analytical expression for the one-dimensional thermal problem is derived, taking into account the Soret feedback that leads to a temperature rise on the axis of the Gaussian beam. We have also studied this phenomenon experimentally in a carbon nanosuspension.

  18. Super-Gaussian laser intensity output formation by means of adaptive optics

    NASA Astrophysics Data System (ADS)

    Cherezova, T. Y.; Chesnokov, S. S.; Kaptsov, L. N.; Kudryashov, A. V.

    1998-10-01

    An optical resonator using an intracavity adaptive mirror with three concentric rings of control electrodes, which produces low-loss, large-beamwidth super-Gaussian output of orders 4, 6, and 8, is analyzed. An inverse propagation method is used to determine the appropriate shape of the adaptive mirror. The mirror reproduces this shape with minimal RMS error through a weighted combination of experimentally measured response functions of the mirror sample. The voltages applied to each mirror electrode are calculated. Practical design parameters such as the construction of the adaptive mirror, Fresnel numbers, and the geometric factor are discussed.
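
    For reference, the super-Gaussian intensity profile of order N referred to above, in a minimal NumPy form (the beam-waist normalization shown is the common convention and an assumption here):

        import numpy as np

        def super_gaussian(r, w0=1.0, order=6):
            """Super-Gaussian profile I(r) = exp(-2 (r/w0)^N); order N = 2
            recovers the ordinary Gaussian beam, larger N flattens the top."""
            return np.exp(-2.0 * (np.abs(r) / w0) ** order)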

  19. Comment on “Stationary self-focusing of Gaussian laser beam in relativistic thermal quantum plasma” [Phys. Plasmas 20, 072703 (2013)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habibi, M., E-mail: habibi.physics@gmail.com; Ghamari, F.

    2014-06-15

    Patil and Takale, in their recent article [Phys. Plasmas 20, 072703 (2013)], modeled the relativistic self-focusing of a Gaussian laser beam in a plasma by evaluating the quantum dielectric response of a thermal quantum plasma. We have found some important shortcomings and fundamental mistakes in that work; we describe them briefly and point readers to an important misconception, appearing in Patil and Takale [Phys. Plasmas 20, 072703 (2013)], about the use of the Fermi temperature in quantum plasmas.

  20. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
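
    A minimal sketch of the inverse-filter construction described above, assuming NumPy (no regularization is shown, which is precisely why the noise must be removed first, e.g. by Morrison's smoothing):

        import numpy as np

        def inverse_filter_deconvolve(data, response, filter_length):
            """Deconvolve by convolving the data with a truncated inverse filter:
            the inverse DFT of the reciprocal of the DFT of the system response."""
            H = np.fft.fft(response, n=filter_length)
            inv_filter = np.real(np.fft.ifft(1.0 / H))
            return np.convolve(data, inv_filter, mode="same")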

  1. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.

  2. A unitary convolution approximation for the impact-parameter dependent electronic energy loss

    NASA Astrophysics Data System (ADS)

    Schiwietz, G.; Grande, P. L.

    1999-06-01

    In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.

  3. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.

    1976-01-01

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.

  4. On the application of a fast polynomial transform and the Chinese remainder theorem to compute a two-dimensional convolution

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Lipes, R.; Reed, I. S.; Wu, C.

    1980-01-01

    A fast algorithm is developed to compute two-dimensional convolutions of an array of d1 × d2 complex points, where d2 = 2^m and d1 = 2^(m-r+1) for some 1 ≤ r ≤ m. This algorithm requires fewer multiplications and about the same number of additions as the conventional fast Fourier transform method for computing the two-dimensional convolution. It also has the advantage that the operation of transposing the matrix of data can be avoided.

  5. Cascaded K-means convolutional feature learner and its application to face recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu

    2017-09-01

    Considerable effort has been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, while conventional feature-learning methods require enormous training data and rich parameter-tuning experience. A lightweight feature learner is presented to solve these problems, with application to face recognition, which shares a similar topology with a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial-pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and Labeled Faces in the Wild datasets among the comparative methods.
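
    A compact sketch of the filter-learning and feature-extraction stages, assuming scikit-learn and SciPy (the patch size, cluster count, and helper names are illustrative):

        import numpy as np
        from sklearn.cluster import KMeans
        from scipy.signal import convolve2d

        def learn_filters(patches, k=16):
            """Learn k convolution filters as K-means centroids of square
            image patches; `patches` has shape (n_patches, side*side)."""
            km = KMeans(n_clusters=k, n_init=10).fit(patches)
            side = int(np.sqrt(patches.shape[1]))
            return km.cluster_centers_.reshape(k, side, side)

        def feature_maps(image, filters):
            # Convolve with each learned filter, then apply the hyperbolic
            # tangent as the nonlinear processing layer.
            return [np.tanh(convolve2d(image, f, mode="valid")) for f in filters]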

  6. A convolutional neural network to filter artifacts in spectroscopic MRI.

    PubMed

    Gurbani, Saumya S; Schreibmann, Eduard; Maudsley, Andrew A; Cordova, James Scott; Soher, Brian J; Poptani, Harish; Verma, Gaurav; Barker, Peter B; Shim, Hyunsuk; Cooper, Lee A D

    2018-03-09

    Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning. © 2018 International Society for Magnetic Resonance in Medicine.

  7. Accelerated Cartesian expansion (ACE) based framework for the rapid evaluation of diffusion, lossy wave, and Klein-Gordon potentials

    DOE PAGES

    Baczewski, Andrew David; Vikram, Melapudi; Shanker, Balasubramaniam; ...

    2010-08-27

    Diffusion, lossy wave, and Klein-Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green's functions is characterized by an infinite tail. This implies that the cost of the spatio-temporal convolutions associated with evaluating the potentials scales as O(N_s^2 N_t^2), where N_s and N_t are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics and uses the fast Fourier transform (FFT) for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(N_s N_t log^2 N_t). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.

  8. Naturalistic stimulation changes the dynamic response of action potential encoding in a mechanoreceptor

    PubMed Central

    Pfeiffer, Keram; French, Andrew S.

    2015-01-01

    Naturalistic signals were created from vibrations made by locusts walking on a Sansevieria plant. Both naturalistic and Gaussian noise signals were used to mechanically stimulate VS-3 slit-sense mechanoreceptor neurons of the spider, Cupiennius salei, with stimulus amplitudes adjusted to give similar firing rates for either stimulus. Intracellular microelectrodes recorded action potentials, receptor potential, and receptor current, using current clamp and voltage clamp. Frequency response analysis showed that naturalistic stimulation contained relatively more power at low frequencies, and caused increased neuronal sensitivity to higher frequencies. In contrast, varying the amplitude of Gaussian stimulation did not change neuronal dynamics. Naturalistic stimulation contained less entropy than Gaussian, but signal entropy was higher than stimulus in the resultant receptor current, indicating addition of uncorrelated noise during transduction. The presence of added noise was supported by measuring linear information capacity in the receptor current. Total entropy and information capacity in action potentials produced by either stimulus were much lower than in earlier stages, and limited to the maximum entropy of binary signals. We conclude that the dynamics of action potential encoding in VS-3 neurons are sensitive to the form of stimulation, but entropy and information capacity of action potentials are limited by firing rate. PMID:26578975

  9. Novel positioning method using Gaussian mixture model for a monolithic scintillator-based detector in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Bae, Seungbin; Lee, Kisung; Seo, Changwoo; Kim, Jungmin; Joo, Sung-Kwan; Joung, Jinhun

    2011-09-01

    We developed a high-precision position decoding method for a positron emission tomography (PET) detector that consists of a thick slab scintillator coupled with a multichannel photomultiplier tube (PMT). The DETECT2000 simulation package was used to validate the light response characteristics for a 48.8 mm × 48.8 mm × 10 mm slab of lutetium oxyorthosilicate coupled to a 64-channel PMT. The data were then combined to produce light collection histograms. We employed a Gaussian mixture model (GMM) to parameterize the composite light response with multiple Gaussian mixtures. In the training step, the light photons acquired by the N PMT channels were used as an N-dimensional feature vector and fed into a GMM training model to generate optimal parameters for M mixtures. In the positioning step, we decoded the spatial locations of incident photons by evaluating a sample feature vector with respect to the trained mixture parameters. The average spatial resolutions after positioning with four mixtures were 1.1 mm full width at half maximum (FWHM) at the corner and 1.0 mm FWHM at the center section. This indicates that the proposed algorithm achieves high performance in both spatial resolution and positioning bias, especially at the corner section of the detector.
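
    A schematic analogue of the GMM train/decode steps using scikit-learn (the data layout, the four mixtures, and the maximum-likelihood decoding rule are illustrative assumptions, not the authors' exact pipeline):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def train_position_models(events_by_position, n_mixtures=4):
            """Fit one GMM per calibration position; each row of `feats`
            holds the photon counts from the 64 PMT channels for one event."""
            return {pos: GaussianMixture(n_components=n_mixtures).fit(feats)
                    for pos, feats in events_by_position.items()}

        def decode_position(models, feature_vector):
            # Choose the position whose trained mixture assigns the highest
            # log-likelihood to the observed 64-channel feature vector.
            scores = {pos: m.score_samples(feature_vector[None, :])[0]
                      for pos, m in models.items()}
            return max(scores, key=scores.get)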

  10. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
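
    A minimal sketch of the central idea, assuming SciPy and a Gaussian model of the chamber response (the paper derives the actual detector response function; the helper names and cost function here are illustrative):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def detector_convolve(profile, chamber_sigma_mm, spacing_mm):
            """Apply the detector response to a TPS-calculated profile, so it
            is subject to the same volume averaging as the chamber measurement."""
            return gaussian_filter1d(profile, sigma=chamber_sigma_mm / spacing_mm)

        def penumbra_cost(params, measured, calc_profile, sigma_mm, dx_mm):
            # Reoptimization loop: drive the *convolved* calculated profile to
            # match the measured profile by adjusting beam-model parameters.
            return np.sum((detector_convolve(calc_profile(params), sigma_mm, dx_mm)
                           - measured) ** 2)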

  11. Enhanced line integral convolution with flow feature detection

    DOT National Transportation Integrated Search

    1995-01-01

    Prepared ca. 1995. The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain [Cabral & Leedom '93]. The method produces a flow texture image...

  12. Automatic estimation of retinal nerve fiber bundle orientation in SD-OCT images using a structure-oriented smoothing filter

    NASA Astrophysics Data System (ADS)

    Ghafaryasl, Babak; Baart, Robert; de Boer, Johannes F.; Vermeer, Koenraad A.; van Vliet, Lucas J.

    2017-02-01

    Optical coherence tomography (OCT) yields high-resolution, three-dimensional images of the retina. A better understanding of retinal nerve fiber bundle (RNFB) trajectories, in combination with visual field data, may be used for future diagnosis and monitoring of glaucoma. However, manual tracing of these bundles is a tedious task. In this work, we present an automatic technique to estimate the orientation of RNFBs from volumetric OCT scans. Our method consists of several steps, starting with automatic segmentation of the RNFL. Then, a stack of en face images around the posterior nerve fiber layer interface was extracted. The image showing the best visibility of RNFB trajectories was selected for further processing. After denoising the selected en face image, a semblance structure-oriented filter was applied to probe the strength of local linear structure in a discrete set of orientations, creating an orientation space. Gaussian filtering along the orientation axis in this space is used to find the dominant orientation. Next, a confidence map was created to supplement the estimated orientation. This confidence map was used as a pixel weight in normalized convolution to regularize the semblance filter response, after which a new orientation estimate can be obtained. Finally, after several iterations, an orientation field corresponding to the strongest local orientation was obtained. The RNFB orientations of six macular scans from three subjects were estimated. For all scans, visual inspection shows good agreement between the estimated orientation fields and the RNFB trajectories in the en face images. Additionally, a good correlation between the orientation fields of two scans of the same subject was observed. Our method was also applied to a larger field of view around the macula. Manual tracing of the RNFB trajectories shows good agreement with the streamlines obtained automatically by fiber tracking.
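
    The normalized-convolution regularization step mentioned above, in a minimal NumPy/SciPy form (a Gaussian applicability function is assumed here for concreteness):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def normalized_convolution(response, confidence, sigma=2.0):
            """Regularize a filter response using per-pixel confidence weights:
            NC = G*(c*r) / G*(c), where G* denotes Gaussian smoothing, so
            low-confidence pixels borrow information from reliable neighbours."""
            num = gaussian_filter(response * confidence, sigma)
            den = gaussian_filter(confidence, sigma)
            return num / np.maximum(den, 1e-12)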

  13. Why noise is useful in functional and neural mechanisms of interval timing?

    PubMed Central

    2013-01-01

    Background: The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results: We present analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. Implementing the oscillator block as simplified phase (cosine) oscillators has the additional advantage of being analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function can be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions: Noise is ubiquitous, in the form of small fluctuations of the intrinsic frequencies of the neural oscillators, errors in recording/retrieving stored information related to the criterion time, fluctuations in neurotransmitter concentrations, etc. Our model suggests that biological noise plays an essential functional role in SBF interval timing. PMID:23924391
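
    A toy numerical illustration of the effect described above, assuming NumPy (cosine phase oscillators and a criterion-time jitter proportional to the interval; the frequency values and jitter level are arbitrary choices, not the paper's parameters):

        import numpy as np

        def sbf_output(t, freqs, criterion_time):
            """Sum of cosine oscillators re-aligned at a memorized criterion
            time; the sum peaks near t = criterion_time."""
            return np.cos(2 * np.pi * np.outer(freqs, t - criterion_time)).sum(axis=0)

        t = np.linspace(0.0, 60.0, 6000)          # seconds
        freqs = np.linspace(5.0, 12.0, 60)        # oscillator frequencies, Hz
        T = 30.0                                  # criterion time, s

        # Jittering the memorized criterion time across trials (std ~ 5% of T)
        # widens the averaged peak in proportion to T: the scalar property.
        trials = [sbf_output(t, freqs, T + 0.05 * T * np.random.randn())
                  for _ in range(200)]
        mean_response = np.mean(trials, axis=0)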

  14. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing

    PubMed Central

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

    Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers, 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations on efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out; thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, their hardware resources need to be available all the time and cannot be time-multiplexed; thus, the hardware should be modular, reconfigurable, and expansible. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGAs have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with discussions of their differences, pros, and cons. PMID:22518097

  16. Low-Impedance Compact Modulators Capable of Generating Intense Ultra-fast Rising Nanosecond Waveforms

    DTIC Science & Technology

    2006-10-31

    spark gap is shown in Fig. 1. The Blumleins were constructed from copper plates separated by laminated layered Kapton (polyimide) dielectrics. Scaling... convolution factor. The diamond/GaAs heterojunction response is limited to a very thin layer across the cross section between amorphic diamond and GaAs... were fastened to electrode mounts and passed through the cast material of the base before it hardened. A thick Kapton laminate 1.2 cm wide separated

  17. The decoding of majority-multiplexed signals by means of dyadic convolution

    NASA Astrophysics Data System (ADS)

    Losev, V. V.

    1980-09-01

    The maximum likelihood method often cannot be used for the decoding of majority-multiplexed signals because of the large number of computations required. This paper describes a fast dyadic convolution transform which can be used to reduce the number of computations.
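
    For concreteness, dyadic (XOR) convolution can be computed quickly via the fast Walsh-Hadamard transform; a minimal NumPy sketch under that standard identity (power-of-two lengths assumed, not the author's specific decoder):

        import numpy as np

        def fwht(a):
            """Fast Walsh-Hadamard transform of a length-2^k sequence."""
            a = np.asarray(a, dtype=float).copy()
            h = 1
            while h < len(a):
                for i in range(0, len(a), 2 * h):
                    for j in range(i, i + h):
                        a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
                h *= 2
            return a

        def dyadic_convolution(x, y):
            """z[k] = sum_i x[i] * y[k XOR i], via WHT(z) = WHT(x) * WHT(y)."""
            n = len(x)
            return fwht(fwht(x) * fwht(y)) / n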

  18. Directional Radiometry and Radiative Transfer: the Convoluted Path From Centuries-old Phenomenology to Physical Optics

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.

    2014-01-01

    This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.

  19. [Application of numerical convolution in in vivo/in vitro correlation research].

    PubMed

    Yue, Peng

    2009-01-01

    This paper introduces the concept and principles of in vivo/in vitro correlation (IVIVC) and of convolution/deconvolution methods, and elucidates in detail a convolution strategy and method for calculating, in Excel, the in vivo absorption performance of pharmaceutics from their pharmacokinetic data, with the results then carried forward to IVIVC research. Firstly, the pharmacokinetic data were fitted with mathematical software to fill in missing points. Secondly, the parameters of the optimal fitted input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, the application of the method is demonstrated in detail, and its simplicity and effectiveness are shown by comparison with the compartment-model method and the deconvolution method. It proves to be a powerful tool for IVIVC research.
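
    A minimal sketch of the numerical convolution at the heart of this workflow, assuming NumPy (the Weibull parameter names and the prediction helper are illustrative, not the paper's Excel implementation):

        import numpy as np

        def weibull_input_rate(t, F=1.0, td=2.0, b=1.2):
            """Absorption (input) rate: derivative of the cumulative Weibull
            input F * (1 - exp(-(t/td)^b)); parameters fitted by trial and error."""
            return F * (b / td) * (t / td) ** (b - 1.0) * np.exp(-((t / td) ** b))

        def predict_plasma(input_rate, unit_impulse_response, dt):
            # Predicted concentration = convolution of the in vivo input rate
            # with the drug's unit impulse response (e.g., from IV bolus data).
            n = len(input_rate)
            return dt * np.convolve(input_rate, unit_impulse_response)[:n]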

  20. DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.

    PubMed

    Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh

    2017-09-01

    Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant; this prevents them from modeling location-dependent patterns (e.g., centre bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves state-of-the-art results.

  1. Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network.

    PubMed

    Yoon, Jaehong; Lee, Jungnyun; Whang, Mincheol

    2018-01-01

    The features of the event-related potential (ERP) are not completely understood, and the illiteracy problem remains unsolved. To this end, the P300 peak has been used as the ERP feature in most brain-computer interface applications, but subjects who do not show such a peak are common. Recent developments in convolutional neural networks provide a way to analyze the spatial and temporal features of the ERP. Here, we train a convolutional neural network with 2 convolutional layers whose feature maps represent the spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital lobe and the parietal lobe, whereas illiterate subjects only show correlation between neural activities from the frontal lobe and the central lobe. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We conclude that the P700 peak may be the key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.

  3. Image inpainting and super-resolution using non-local recursive deep convolutional network with skip connections

    NASA Astrophysics Data System (ADS)

    Liu, Miaofeng

    2017-07-01

    In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most former methods, which require knowing beforehand the local information for the corrupted pixels, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because existing approaches often perform poorly on images with large corruptions, or on inpainting of low-resolution images, we also share parameters in local areas of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and it works excellently when performing super-resolution and image inpainting simultaneously.

  4. Convolutional encoding of self-dual codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1994-01-01

    There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w ≡ 0 mod 4. The codes are of length 8m, with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two length-(4m-1) parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24; 12) Code is lowered here to K = 8.

  5. Response of space shuttle insulation panels to acoustic noise pressure

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.

    1976-01-01

    The response of reusable space shuttle insulation panels to random acoustic pressure fields are studied. The basic analytical approach in formulating the governing equations of motion uses a Rayleigh-Ritz technique. The input pressure field is modeled as a stationary Gaussian random process for which the cross-spectral density function is known empirically from experimental measurements. The response calculations are performed in both frequency and time domain.

  6. Representations of pitch and slow modulation in auditory cortex

    PubMed Central

    Barker, Daphne; Plack, Christopher J.; Hall, Deborah A.

    2013-01-01

    Iterated ripple noise (IRN) is a type of pitch-evoking stimulus that is commonly used in neuroimaging studies of pitch processing. When contrasted with a spectrally matched Gaussian noise, it is known to produce a consistent response in a region of auditory cortex that includes an area antero-lateral to the primary auditory fields (lateral Heschl's gyrus). The IRN-related response has often been attributed to pitch, although recent evidence suggests that it is more likely driven by slowly varying spectro-temporal modulations not related to pitch. The present functional magnetic resonance imaging (fMRI) study showed that both pitch-related temporal regularity and slow modulations elicited a significantly greater response than a baseline Gaussian noise in an area that has been pre-defined as pitch-responsive. The region was sensitive to both pitch salience and slow modulation salience. The responses to pitch and spectro-temporal modulations interacted in a saturating manner, suggesting that there may be an overlap in the populations of neurons coding these features. However, the interaction may have been influenced by the fact that the two pitch stimuli used (IRN and unresolved harmonic complexes) differed in terms of pitch salience. Finally, the results support previous findings suggesting that the cortical response to IRN is driven in part by slow modulations, not by pitch. PMID:24106464

  7. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
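
    For contrast, the zero-fill baseline that the convolution technique is measured against can be written in a few lines of NumPy (a real-valued, uniformly sampled signal is assumed):

        import numpy as np

        def zero_fill_interpolate(x, factor):
            """Interpolate by zero fill: extend the spectrum with zeros (done
            implicitly via the `n` argument of irfft) and inverse-transform,
            yielding sample spacing `factor` times finer than the input."""
            X = np.fft.rfft(x)
            return np.fft.irfft(X, n=factor * len(x)) * factor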

  8. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 GBPS. To guarantee acceptable data quality during periods of signal attenuation it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing the maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.

  9. Recent Applications of Higher-Order Spectral Analysis to Nonlinear Aeroelastic Phenomena

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Hajj, Muhammad R.; Dunn, Shane; Strganac, Thomas W.; Powers, Edward J.; Stearman, Ronald

    2005-01-01

    Recent applications of higher-order spectral (HOS) methods to nonlinear aeroelastic phenomena are presented. Applications include the analysis of data from a simulated nonlinear pitch and plunge apparatus and from F-18 flight flutter tests. A MATLAB model of Texas A&M University's Nonlinear Aeroelastic Testbed Apparatus (NATA) is used to generate aeroelastic transients at various conditions, including limit cycle oscillations (LCO). The Gaussian or non-Gaussian nature of the transients is investigated, related to HOS methods, and used to identify levels of increasing nonlinear aeroelastic response. Royal Australian Air Force (RAAF) F/A-18 flight flutter test data are presented and analyzed. The data include high-quality measurements of forced responses and LCO phenomena. Standard power spectral density (PSD) techniques and HOS methods are applied to the data and presented. The goal of this research is to develop methods that can identify the onset of nonlinear aeroelastic phenomena, such as LCO, during flutter testing.

  10. THE DISTRIBUTION OF COOK’S D STATISTIC

    PubMed Central

    Muller, Keith E.; Mok, Mario Chen

    2013-01-01

    Cook (1977) proposed a diagnostic to quantify the impact of deleting an observation on the estimated regression coefficients of a General Linear Univariate Model (GLUM). Simulations of models with Gaussian response and predictors demonstrate that his suggestion of comparing the diagnostic to the median of the F for overall regression captures an erratically varying proportion of the values. We describe the exact distribution of Cook’s statistic for a GLUM with Gaussian predictors and response. We also present computational forms, simple approximations, and asymptotic results. A simulation supports the accuracy of the results. The methods allow accurate evaluation of a single value or the maximum value from a regression analysis. The approximations work well for a single value, but less well for the maximum. In contrast, the cut-point suggested by Cook provides widely varying tail probabilities. As with all diagnostics, the data analyst must use scientific judgment in deciding how to treat highlighted observations. PMID:24363487

  11. Deconvolution Method on OSL Curves from ZrO2 Irradiated by Beta and UV Radiations

    NASA Astrophysics Data System (ADS)

    Rivera, T.; Kitis, G.; Azorín, J.; Furetta, C.

    This paper reports the optically stimulated luminescence (OSL) response of ZrO2 to beta and ultraviolet radiation in order to investigate the potential use of this material as a radiation dosimeter. The experimentally obtained OSL decay curves were analyzed using the computerized curve deconvolution (CCD) method. It was found that the OSL curve structure, for the short (practical) illumination time used, consists of three first-order components. The individual OSL dose response behavior of each component was found. The values of the time at the OSL peak maximum and the decay constant of each component were also estimated.
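
    A minimal CCD-style sketch, assuming three first-order (single-exponential) components as reported; the synthetic curve, initial guesses, and component parameters are placeholders for measured data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Sum of three first-order OSL components.
    def osl(t, a1, l1, a2, l2, a3, l3):
        return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t) + a3 * np.exp(-l3 * t)

    # Synthetic decay curve standing in for a measured one.
    t = np.linspace(0, 100, 500)
    rng = np.random.default_rng(2)
    y = osl(t, 50, 0.5, 30, 0.08, 10, 0.01) + rng.normal(0, 0.3, t.size)

    # Deconvolve the curve into its components from rough initial guesses.
    popt, _ = curve_fit(osl, t, y, p0=[40, 1.0, 20, 0.1, 5, 0.005])
    intensities, decay_constants = popt[0::2], popt[1::2]
    ```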

  12. Assessment of a Three-Dimensional Line-of-Response Probability Density Function System Matrix for PET

    PubMed Central

    Yao, Rutao; Ramachandra, Ranjith M.; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E.

    2012-01-01

    To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list mode reconstruction algorithm for a small animal PET scanner. The impact of LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variation (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of 3 coincidence signal emitting sources, 1) ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; 2) fluorine-18 (18F) nuclide in water; and 3) oxygen-15 (15O) nuclide in water, were derived, and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and LOR-PDF showed that: 1) without positron range and acolinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4-mm radius or larger, and this advantage extended to smaller objects (e.g. 2-mm radius sphere, 0.6-mm radius hot-rods) at higher iteration numbers; and 2) with positron range and acolinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of positron range effect. We conclude that the 3-D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear. For this application, using an iso-Gaussian function in MOLAR is a simple but effective technique for PET reconstruction. PMID:23032702
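
    For orientation, here is a toy sketch of the iso-Gaussian baseline the LOR-PDF is compared against: each system-matrix row weights pixels by a Gaussian in their perpendicular distance from the line of response. Grid size, sigma, and the LORs are arbitrary; an LOR-PDF implementation would replace the analytic Gaussian with a measured, angle-binned and possibly asymmetric profile.

    ```python
    import numpy as np

    # Toy 2-D image grid; each LOR is (point on the line, unit direction).
    nx = 32
    xs = np.arange(nx) - nx / 2 + 0.5
    px, py = np.meshgrid(xs, xs)                 # pixel centres

    def iso_gaussian_row(p, d, sigma=0.8):
        """System-matrix row for one LOR: Gaussian line spread in the
        perpendicular distance from each pixel centre to the line."""
        dx, dy = px - p[0], py - p[1]
        dist = np.abs(dx * d[1] - dy * d[0])     # perpendicular distance
        w = np.exp(-0.5 * (dist / sigma) ** 2)
        return (w / w.sum()).ravel()

    lors = [((0.0, 0.0), (1.0, 0.0)),
            ((0.0, 3.0), (np.cos(np.pi / 4), np.sin(np.pi / 4)))]
    A = np.stack([iso_gaussian_row(p, d) for p, d in lors])  # compact system matrix
    ```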

  13. Quantum steering of Gaussian states via non-Gaussian measurements

    NASA Astrophysics Data System (ADS)

    Ji, Se-Wan; Lee, Jaehak; Park, Jiyong; Nha, Hyunchul

    2016-07-01

    Quantum steering—a strong correlation to be verified even when one party or its measuring device is fully untrusted—not only provides a profound insight into quantum physics but also offers a crucial basis for practical applications. For continuous-variable (CV) systems, Gaussian states have been studied extensively, though mostly under Gaussian measurements. While fulfilment of the Gaussian criterion is sufficient to detect CV steering, whether it is also necessary for Gaussian states is a question of fundamental importance in many contexts. It critically affects the validity of characterizations established only under Gaussian measurements, such as the quantification of steering and the monogamy relations. Here, we introduce a formalism based on local uncertainty relations of non-Gaussian measurements, which is shown to manifest quantum steering of some Gaussian states that the Gaussian criterion fails to detect. To this aim, we look into Gaussian states of practical relevance, i.e. two-mode squeezed states under a lossy and an amplifying Gaussian channel. Our finding significantly modifies the characteristics of Gaussian-state steering so far established, such as monogamy relations and one-way steering under Gaussian measurements, thus opening a new direction for critical studies beyond the Gaussian regime.
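
    As a numerical companion, the sketch below evaluates the Gaussian-measurement steering criterion for a two-mode squeezed vacuum after a lossy channel, assuming the standard covariance-matrix test (the state is A-to-B steerable by Gaussian measurements iff sigma + i(0_A ⊕ Omega_B) fails to be positive semidefinite, with vacuum variance 1). The squeezing and transmissivity values are arbitrary; the paper's point is precisely that a negative verdict here does not rule out steering under non-Gaussian measurements.

    ```python
    import numpy as np

    def tmsv_lossy(r, eta):
        """Covariance matrix of a two-mode squeezed vacuum after a lossy
        channel of transmissivity eta on mode B (vacuum variance = 1)."""
        c, s = np.cosh(2 * r), np.sinh(2 * r)
        A = c * np.eye(2)
        B = eta * c * np.eye(2) + (1 - eta) * np.eye(2)
        C = np.sqrt(eta) * s * np.diag([1.0, -1.0])
        return np.block([[A, C], [C.T, B]])

    def gaussian_steerable_A_to_B(sigma):
        """Gaussian criterion: steering of B by Gaussian measurements on A
        is certified iff sigma + i(0_A (+) Omega_B) is not PSD."""
        K = np.zeros((4, 4))
        K[2:, 2:] = [[0.0, 1.0], [-1.0, 0.0]]   # symplectic form on mode B
        return np.linalg.eigvalsh(sigma + 1j * K).min() < -1e-9

    print(gaussian_steerable_A_to_B(tmsv_lossy(r=1.0, eta=0.4)))
    # A "False" here is only a Gaussian-measurement verdict; non-Gaussian
    # measurements may still reveal steering, which is the paper's theme.
    ```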

  14. Working covariance model selection for generalized estimating equations.

    PubMed

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
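
    A bare-bones sketch of the Gaussian pseudolikelihood criterion, assuming equal-size clusters and residuals already in hand (in practice they come from a GEE fit); the simulated data and the two candidate working models are illustrative.

    ```python
    import numpy as np

    def gaussian_pseudolik(resids, V):
        """Gaussian pseudolikelihood of clustered residuals under a
        working covariance matrix V."""
        Vinv = np.linalg.inv(V)
        _, logdet = np.linalg.slogdet(V)
        return sum(-0.5 * (logdet + r @ Vinv @ r) for r in resids)

    # Toy longitudinal data: 50 clusters of 4 exchangeably correlated responses.
    rng = np.random.default_rng(3)
    n_clus, m, rho = 50, 4, 0.5
    true_V = rho + (1 - rho) * np.eye(m)          # off-diagonal rho, unit diagonal
    resids = rng.multivariate_normal(np.zeros(m), true_V, size=n_clus)

    # Score candidate working covariance models; larger is better.
    candidates = {"independence": np.eye(m),
                  "exchangeable": 0.5 + 0.5 * np.eye(m)}
    scores = {k: gaussian_pseudolik(resids, V) for k, V in candidates.items()}
    best = max(scores, key=scores.get)            # should pick "exchangeable"
    ```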

  15. Acral melanoma detection using a convolutional neural network for dermoscopy images.

    PubMed

    Yu, Chanki; Yang, Sejung; Kim, Wonoh; Jung, Jinwoong; Chung, Kee-Yang; Lee, Sang Wook; Oh, Byungho

    2018-01-01

    Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), all confirmed by histopathological examination, were analyzed in this study. To perform 2-fold cross validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we compared the diagnostic accuracy with that of a dermatologist and a non-expert. The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, higher than the non-expert's evaluation (67.84%, 62.71%) and close to the expert's (81.08%, 81.64%). Moreover, the convolutional neural network achieved area-under-the-curve values of 0.8 and 0.84 and Youden's index values of 0.6795 and 0.6073, similar to the expert's scores. Although further data analysis is necessary to improve its accuracy, convolutional neural networks could be helpful for detecting acral melanoma in dermoscopy images of the hands and feet.
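
    A minimal sketch of the evaluation pipeline described above: a small binary CNN, a 2-fold split, and the reported ROC-based metrics (AUC and Youden's index). The architecture, input size, and training schedule are assumptions, and random tensors stand in for the dermoscopy images.

    ```python
    import torch
    import torch.nn as nn
    from sklearn.metrics import roc_auc_score, roc_curve

    torch.manual_seed(0)

    # Tiny binary CNN standing in for the study's network (illustrative only).
    class SmallCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1))
            self.head = nn.Linear(32, 1)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # 2-fold cross validation on stand-in tensors (real use: dermoscopy images).
    X = torch.randn(40, 3, 64, 64)
    y = torch.randint(0, 2, (40,)).float()
    for train, test in [(slice(0, 20), slice(20, 40)), (slice(20, 40), slice(0, 20))]:
        model = SmallCNN()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.BCEWithLogitsLoss()
        for _ in range(5):                              # a few epochs, sketch only
            opt.zero_grad()
            loss_fn(model(X[train]).squeeze(1), y[train]).backward()
            opt.step()
        with torch.no_grad():
            p = torch.sigmoid(model(X[test])).squeeze(1).numpy()
        auc = roc_auc_score(y[test].numpy(), p)         # area under the ROC curve
        fpr, tpr, _ = roc_curve(y[test].numpy(), p)
        youden = float((tpr - fpr).max())               # Youden's J = max(TPR - FPR)
    ```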

  16. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

    Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy uses carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation when used as input to a random forest classifier.
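
    The sketch below illustrates the warm-start idea only: a bank of hand-crafted elongated ridge filters (a stand-in for SCIRD-TS, here second derivatives of an anisotropic Gaussian) that would seed the convolutional sparse coding dictionary in place of random initialisation. Filter sizes, widths, and orientations are assumptions.

    ```python
    import numpy as np

    def curvilinear_filter(size, sigma_u, sigma_v, theta):
        """Elongated second-derivative-of-Gaussian ridge filter, a rough
        stand-in for a SCIRD-TS-style hand-crafted filter."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        u = x * np.cos(theta) + y * np.sin(theta)     # along the structure
        v = -x * np.sin(theta) + y * np.cos(theta)    # across the structure
        g = np.exp(-u**2 / (2 * sigma_u**2) - v**2 / (2 * sigma_v**2))
        f = (v**2 / sigma_v**4 - 1 / sigma_v**2) * g  # ridge response across v
        return f / np.linalg.norm(f)

    # Warm start: orientations x widths seed the dictionary instead of noise.
    bank = np.stack([curvilinear_filter(15, 6.0, sv, th)
                     for sv in (1.0, 2.0)
                     for th in np.linspace(0, np.pi, 8, endpoint=False)])
    random_init = np.random.default_rng(4).standard_normal(bank.shape)  # baseline
    ```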

  17. Childhood malnutrition in Egypt using geoadditive Gaussian and latent variable models.

    PubMed

    Khatab, Khaled

    2010-04-01

    Major progress has been made over the last 30 years in reducing the prevalence of malnutrition amongst children less than 5 years of age in developing countries. However, approximately 27% of children under the age of 5 in these countries are still malnourished. This work focuses on childhood malnutrition in Egypt, one of the largest developing countries. This study examined the association between bio-demographic and socioeconomic determinants and malnutrition in children less than 5 years of age using the 2003 Demographic and Health Survey data for Egypt. In the first step, we use separate geoadditive Gaussian models with the continuous response variables stunting (height-for-age), underweight (weight-for-age), and wasting (weight-for-height) as indicators of nutritional status in our case study. In the second step, based on the results of the first step, we apply the geoadditive Gaussian latent variable model for continuous indicators in which the 3 measurements of the malnutrition status of children are assumed to be indicators for the latent variable "nutritional status".

  18. An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning

    PubMed Central

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-01-01

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase. PMID:23966197
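
    A small sketch of the decision rule described above, assuming a kurtosis screen followed by a two-component Gaussian mixture fit; the simulated RSS samples and the 0.05 threshold are illustrative.

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.mixture import GaussianMixture

    # Stand-in RSS samples (dBm) at one reference point with two peaks.
    rng = np.random.default_rng(5)
    rss = np.concatenate([rng.normal(-62, 1.5, 300), rng.normal(-55, 1.5, 200)])

    # Kurtosis test screens for departure from a single Gaussian
    # (well-separated double peaks tend to be platykurtic).
    _, pval = stats.kurtosistest(rss)
    if pval < 0.05:
        gmm = GaussianMixture(n_components=2, random_state=0).fit(rss.reshape(-1, 1))
        peaks = np.sort(gmm.means_.ravel())      # the two RSS peaks
    else:
        peaks = np.array([rss.mean()])           # single Gaussian is adequate
    ```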

  19. An improved algorithm to generate a Wi-Fi fingerprint database for indoor positioning.

    PubMed

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-08-21

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase.

  20. Assessment of DPOAE test-retest difference curves via hierarchical Gaussian processes.

    PubMed

    Bao, Junshu; Hanson, Timothy; McMillan, Garnett P; Knight, Kristin

    2017-03-01

    Distortion product otoacoustic emissions (DPOAE) testing is a promising alternative to behavioral hearing tests and auditory brainstem response testing of pediatric cancer patients. The central goal of this study is to assess whether significant changes in the DPOAE frequency/emissions curve (DP-gram) occur in pediatric patients in a test-retest scenario. This is accomplished through the construction of normal reference charts, or credible regions, that DP-gram differences lie in, as well as contour probabilities that measure how abnormal (or in a certain sense rare) a test-retest difference is. A challenge is that the data were collected over varying frequencies, at different time points from baseline, and on possibly one or both ears. A hierarchical structural equation Gaussian process model is proposed to handle the different sources of correlation in the emissions measurements, wherein both subject-specific random effects and variance components governing the smoothness and variability of each child's Gaussian process are coupled together. © 2016, The International Biometric Society.
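
    A toy sketch of the covariance structure such a model induces for one child's test-retest difference curve, with a smooth GP term over log-frequency, a subject-level random intercept, and noise; all hyperparameters are illustrative. The Monte Carlo check at the end shows why joint credible regions, not pointwise bands, are needed for whole-curve statements.

    ```python
    import numpy as np

    # DPOAE test frequencies (kHz), modelled on a log scale.
    freqs = np.log(np.array([1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0]))

    def rbf(a, b, amp, ls):
        return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

    # Hierarchical covariance of a difference curve: smooth GP term,
    # subject random intercept, and test-retest measurement noise.
    K = (rbf(freqs, freqs, amp=2.0, ls=0.5)
         + 1.0**2 * np.ones((freqs.size,) * 2)
         + 0.5**2 * np.eye(freqs.size))

    # Pointwise 95% "no change" limits versus their joint coverage.
    band = 1.96 * np.sqrt(np.diag(K))
    draws = np.random.default_rng(6).multivariate_normal(
        np.zeros(freqs.size), K, size=2000)
    joint = np.mean(np.all(np.abs(draws) <= band, axis=1))  # noticeably < 0.95
    ```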
